
Wednesday, March 21, 2012

how to improve the performance

Hi There
We have an audit_log table with 82 million records. We have a trigger on each
table in the database to log any change, for update, insert, or delete, across
all fields.
We don't have a primary key on the table, just one index.
Will it affect performance on the server side or the client side while
inserting records? We have thought about truncating the records, but we would
have to change some applications if we truncate.
--
Thanks
Ganesh|||What do you want to improve the performance of: inserts, selects,
etc.? You only mention inserts, so I will focus on that. When a row is
inserted, additional I/O and CPU time is spent updating indexes, so keep
indexes on the inserted table to a minimum.
Actually, you may not need any indexes on this table at all. You could
implement a partitioned table strategy. For example, the trigger could perform
the initial insert into a table called [audit_current]. You could then
schedule a job that, once per hour / day / week etc., will insert rows from
[audit_current] into [audit_history] and then truncate [audit_current].
Therefore, [audit_current] will contain only a small number of rows and can be
inserted into quickly. The table [audit_history] would contain the remaining
history of > 82 million rows, and can be indexed more heavily for select
queries. On those occasions where you need to query the audit data, you can
manually run the InsertAuditHistory job to ensure that it contains all rows up
to that point in time.
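As a rough sketch, the scheduled job step could look like the following. The
column list is hypothetical; substitute your actual audit columns. TRUNCATE
TABLE can run inside a transaction on SQL Server, so the move is atomic:
-- Move accumulated audit rows to the history table, then empty the
-- small staging table (column names are hypothetical)
BEGIN TRANSACTION
INSERT INTO audit_history (table_name, action, changed_at, old_value, new_value)
SELECT table_name, action, changed_at, old_value, new_value
FROM audit_current
TRUNCATE TABLE audit_current
COMMIT TRANSACTION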
To understand what type of I/O and CPU time is required for inserting into or
selecting from the audit table, you can use the following:
SET STATISTICS IO ON
SET STATISTICS TIME ON
SET STATISTICS PROFILE ON
select * from audit where ...
update audit ...
SET STATISTICS IO OFF
SET STATISTICS TIME OFF
SET STATISTICS PROFILE OFF
"Ganesh" <gsganesh@.yahoo.com> wrote in message
news:BC71293F-0BBB-4209-897F-C068FA02CE2E@.microsoft.com...
|||Thanks for your reply.
I want to improve inserts. Currently the application is being used by around
500 users and it is very slow now. I thought truncating this audit_log might
also improve performance, and it takes up space too.
Thanks
Ganesh
"JT" wrote:

|||It sounds like your application makes frequent updates to tables, and you
need to audit all of this.
To benefit from parallel I/O processing, consider placing the audit tables on
a separate disk. Your log, audit_current, and audit_history tables could
perhaps be placed in the same filegroup.
http://msdn2.microsoft.com/en-us/library/ms187087.aspx
http://www.databasejournal.com/feat...cle.php/3114651
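A minimal sketch of putting the audit tables on their own filegroup, with the
data file on a different physical disk (database name, file path, and column
list below are all hypothetical):
ALTER DATABASE MyDb ADD FILEGROUP AuditFG
ALTER DATABASE MyDb ADD FILE
    (NAME = AuditData, FILENAME = 'E:\Data\MyDb_Audit.ndf', SIZE = 1GB)
    TO FILEGROUP AuditFG
-- Create the staging audit table on the new filegroup
CREATE TABLE audit_current
    (table_name sysname, action char(1), changed_at datetime)
ON AuditFG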
It is also possible that your insert performance is not related to I/O but
to blocking. When you have several processes attempting to insert into a
table simultaneously, there is a potential for blocking and even
deadlocking.
http://support.microsoft.com/defaul...kb;EN-US;224453
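If blocking is suspected, on SQL Server 2005 or later a quick check is to
list the sessions that are currently waiting on another session (on earlier
versions, sp_who2 shows the same information in its BlkBy column):
-- Sessions currently blocked, and which session is blocking them
SELECT session_id, blocking_session_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0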
"Ganesh" <gsganesh@.yahoo.com> wrote in message
news:6FB90B35-7449-4FF6-91ED-984F25BCB9BA@.microsoft.com...

Monday, March 19, 2012

how to improve "update" speed

hi

In ASP.NET, I use SQL Server to store records.

A table has 1 million records, but when I update a record it is very slow.

Is "create index" helpful for the "update" operation?

I need help, thanks a lot.

First, ensure that you have a properly created index and primary key on your
table.

Second, ensure that you keep data in the table for only one business period
(in principle, one year).

Shrink your data table.

Rebuild your indexes.

This will help you.
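The last two steps might look like this on SQL Server 2005 or later (the
database, table, and index names are hypothetical; on SQL Server 2000 use
DBCC DBREINDEX instead of ALTER INDEX):
-- Reclaim unused space in the database files
DBCC SHRINKDATABASE (MyDb)
-- Rebuild all indexes on the table to remove fragmentation
ALTER INDEX ALL ON dbo.mytable REBUILD
Note that shrinking can itself cause fragmentation, which is one reason to
rebuild the indexes afterward rather than before.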

|||

Post the update statement you have as well as all the table information. Run "EXEC sp_help TableName" in your query analyzer and post the results here.

|||

Hi ndinakar:

mytable has 100,000 records. I ran "exec sp_help mytable", and its result is:

Name Owner Type Created_datetime

mytable dbo user table 2007-11-30 9:34:21

Can I get further information?

thanks a lot

|||

Hi yugiant,

Based on my understanding, you are now trying to create indexes in your SQL Server database in order to improve your update performance. If I've understood you wrong, please feel free to tell me, thanks.

Yes, I think using an index is a good idea. Indexes save a lot of time when locating a specific record, and since the record has to be located before it can be updated, an index is a great help for improving update performance. exec sp_help will list the properties of your table, including all the indexes you have created on it. You can also get your table's index information from Object Explorer (in the Indexes tree view node).

I strongly suggest reading this: http://msdn2.microsoft.com/en-us/library/ms189051.aspx I think it will be of much help in solving your problem. Thanks.
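For example, if your update statements locate rows by a customer_id column
(a hypothetical table and column name), an index on that column lets SQL
Server seek directly to the matching rows instead of scanning the whole table:
-- Index on the column used in the UPDATE's WHERE clause
CREATE INDEX IX_mytable_customer_id ON dbo.mytable (customer_id)

-- This update can now use the index to find the row
UPDATE dbo.mytable
SET status = 'closed'
WHERE customer_id = 42
Keep in mind that each extra index also has to be maintained on every insert
and update, so index only the columns your queries actually search on.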

Hope my suggestion helps