I want to split the records from table1 and insert the split records into table2. The trigger works fine when I update a single record, but it does not work properly during a batch update: it splits only the last record affected by the query and inserts that into table2, instead of processing all the affected rows.
Please help.|||I suspect you wrote your trigger to work with one record at a time. I say this because of your comment that the trigger works for a one-record update and for the LAST record in a batch. Take a look at your trigger and ask yourself: "What happens if I have three records to deal with in either the inserted or deleted temp tables?"
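To make the failure mode concrete, here is a minimal sketch (variable and column names taken from the trigger posted below) of why scalar assignment from the inserted pseudo-table loses rows:

```sql
-- Sketch only: when an UPDATE affects several rows, a scalar SELECT
-- against the inserted pseudo-table assigns each variable once per row,
-- so after the statement the variables hold values from just ONE row
-- (effectively the last one scanned). The other rows are silently skipped.
DECLARE @Id int, @Res_id nvarchar(8)

SELECT @Id = Id,          -- evaluated for every row in inserted,
       @Res_id = Res_id   -- but only the final assignment survives
FROM inserted
-- Any logic driven by @Id/@Res_id afterwards processes a single row.
```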
Post your trigger and maybe we can offer some suggestions.|||Hi
This is the trigger I used to split the records for various conditions from table1 and insert the split records into another table, table2.
The problem I am facing: if it is a single update (only one row affected) the trigger works fine,
but not for a batch update.
If you are unable to follow this trigger, please give your own batch update example and we will try to implement that.
Please help me.
CREATE trigger udt_Break_Split
ON Hours_tmp
/*
this trigger is used to do the break split in the table hours_tmp
and insert the records into the table selnmuamid
*/
AFTER UPDATE
AS
Declare @Id int
Declare @Res_id nvarchar(8)
Declare @DTime smalldatetime
Declare @StartTime char(5)
Declare @EndTime char(5)
Declare @Pauze char(5)
Declare @Hourworked nvarchar(5)
Declare @Status char(1)
--Declare @PPayroll bit
Declare @Project nvarchar(10)
Declare @Naam char(50)
Declare @Type char(5)
Declare @Hryear nvarchar(4)
Declare @Totmin char(5)
Declare @Endtimebr char(5)
Declare @Stime char(5)
Declare @Start char(5)
Declare @Processed char(1)
Declare @del_DTime smalldatetime
Declare @del_StartTime char(5)
Declare @del_EndTime char(5)
Declare @Del_id int
Declare @Totrows int
--update hours_tmp set hours_tmp.processed_payroll = 1 from hours_tmp, inserted where hours_tmp.id = inserted.id
Select @Id=Id,
@Res_id=Res_id,
@DTime=[Datetime],
@Start=Starttime,
@EndTime=Endtime,
@Pauze=Pauze,
@Hourworked=Hoursworked,
@Status=Status,
@Project=Project,
@Naam=Naam,
@Type=Type,
@Hryear=Hryear,
@Processed=Processed
From inserted -- i, Hours_tmp h where i.[Id] = h.[id]
Select @del_DTime=[DateTime],
@del_Starttime=Starttime,
@del_endTime=Endtime,
@del_id=id
From Deleted
Delete From Selnmuamid
Where Res_id=@Res_id
and [Datetime]=@del_DTime
and Starttime >= @del_starttime
and Endtime <= @del_endtime
-- This is to get total hours (the hourworked column in the table) in minutes
Select @Totmin = Sum(Convert(Integer,(Left(@hourworked,Charindex('.',@hourworked)-1) * 60)) + Convert(Integer,(Right(@hourworked,Len(@hourworked) - Charindex('.',@hourworked))))),
@Endtimebr = Sum(Convert(Integer,(Left(@Endtime,Charindex(':',@Endtime)-1) * 60)) + Convert(Integer,(Right(@Endtime,Len(@Endtime) - Charindex(':',@Endtime))))),
@Stime = Sum(Convert(Integer,(Left(@Start,Charindex(':',@Start)-1) * 60)) + Convert(Integer,(Right(@Start,Len(@Start) - Charindex(':',@Start)))))
-- Values passed as parameters to stored procedure Break_Split
Exec Break_Split @Id,@Res_id,@DTime,@Start,@EndTime,@Pauze,
@Hourworked,@Status,@Project,@Naam,@Type,
@Hryear,@Totmin,@Endtimebr,@STime,@Processed|||Basically I just wrapped your code in a while loop to step through each record in the inserted temp table. Then I moved the delete against Selnmuamid to the end.
This has NOT been tested!
Code:
------------------------------
CREATE trigger udt_Break_Split
ON Hours_tmp
/*
this trigger is used to do the break split in the table hours_tmp
and insert the records into the table selnmuamid
*/
AFTER UPDATE
AS
-- ----------------
-- One Declare is more efficient than multiple declares
-- ----------------
Declare @Id int
, @Res_id nvarchar(8)
, @DTime smalldatetime
, @StartTime char(5)
, @EndTime char(5)
, @Pauze char(5)
, @Hourworked nvarchar(5)
, @Status char(1)
-- , @PPayroll bit
, @Project nvarchar(10)
, @Naam char(50)
, @Type char(5)
, @Hryear nvarchar(4)
, @Totmin char(5)
, @Endtimebr char(5)
, @Stime char(5)
, @Start char(5)
, @Processed char(1)
, @del_DTime smalldatetime
, @del_StartTime char(5)
, @del_EndTime char(5)
, @Del_id int
, @Totrows int
--update hours_tmp set hours_tmp.processed_payroll = 1 from hours_tmp, inserted where hours_tmp.id = inserted.id
select @Id = min(ID) from inserted
while (@Id is not null) begin
Select @Res_id = Res_id
, @DTime = [Datetime]
, @Start = Starttime
, @EndTime = Endtime
, @Pauze = Pauze
, @Hourworked = Hoursworked
, @Status = Status
, @Project = Project
, @Naam = Naam
, @Type = Type
, @Hryear = Hryear
, @Processed = Processed
-- ---------------------------------------------------
-- Hoursworked, Endtime and Starttime need to be converted to minutes
-- ---------------------------------------------------
, @Totmin = Convert(Integer,(Left(Hoursworked,Charindex('.',Hoursworked)-1) * 60)) + Convert(Integer,(Right(Hoursworked,Len(Hoursworked) - Charindex('.',Hoursworked))))
, @Endtimebr = Convert(Integer,(Left(Endtime, Charindex(':',Endtime)-1) * 60)) + Convert(Integer,(Right(Endtime, Len(Endtime) - Charindex(':',Endtime))))
, @Stime = Convert(Integer,(Left(Starttime, Charindex(':',Starttime)-1) * 60)) + Convert(Integer,(Right(Starttime, Len(Starttime) - Charindex(':',Starttime))))
From inserted i
where id = @Id
-- This is to get total hours (the hourworked column in the table) in minutes
/*
Select @Totmin = Sum(Convert(Integer,(Left(@hourworked,Charindex('.',@hourworked)-1) * 60)) +
Convert(Integer,(Right(@hourworked,Len(@hourworked) - Charindex('.',@hourworked)))))
, @Endtimebr = Sum(Convert(Integer,(Left(@Endtime,Charindex(':',@Endtime)-1) * 60)) +
Convert(Integer,(Right(@Endtime,Len(@Endtime) - Charindex(':',@Endtime)))))
, @Stime = Sum(Convert(Integer,(Left(@Start,Charindex(':',@Start)-1) * 60)) +
Convert(Integer,(Right(@Start,Len(@Start) - Charindex(':',@Start)))))
*/
-- Values passed as parameters to stored procedure Break_Split
Exec Break_Split @Id,@Res_id,@DTime,@Start,@EndTime,@Pauze,@Hourworked,@Status,@Project,@Naam,@Type,@Hryear,@Totmin,@Endtimebr,@STime,@Processed
select @Id = min(ID) from inserted where id > @Id
end
-- -------------------
-- The deleted temp table can be joined to the Selnmuamid table
-- to delete the unwanted records. If you need the ID from the
-- inserted table then join the inserted table with the
-- deleted table and then join the Selnmuamid table.
-- This is a good example of how you can ditch the row-at-a-time
-- mentality and start using set theory for processing records.
-- -------------------
delete s
from Selnmuamid s
join deleted d on s.Res_id = d.Res_id
where s.[Datetime] = d.[DateTime]
and s.Starttime >= d.Starttime
and s.Endtime <= d.Endtime
/*
Select @del_DTime = [DateTime]
, @del_Starttime = Starttime
, @del_endTime = Endtime
, @del_id = [id]
From Deleted
where Id = @Id
Delete
From Selnmuamid
Where Res_id = @Res_id
and [Datetime] = @del_DTime
and Starttime >= @del_starttime
and Endtime <= @del_endtime
*/
------------------------------
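As the closing comment in the trigger suggests, the per-row minute conversions inside the loop could also be written as one set-based query over inserted. A rough, untested sketch using the same column names:

```sql
-- Set-based sketch: convert the 'HH:MM'-style Starttime/Endtime strings
-- to total minutes for EVERY affected row at once, instead of one row
-- per pass of the while loop.
SELECT Id,
       CONVERT(int, LEFT(Starttime, CHARINDEX(':', Starttime) - 1)) * 60
     + CONVERT(int, RIGHT(Starttime, LEN(Starttime) - CHARINDEX(':', Starttime))) AS StartMin,
       CONVERT(int, LEFT(Endtime, CHARINDEX(':', Endtime) - 1)) * 60
     + CONVERT(int, RIGHT(Endtime, LEN(Endtime) - CHARINDEX(':', Endtime))) AS EndMin
FROM inserted
```

If Break_Split's logic could be inlined, the whole loop could be replaced by an INSERT...SELECT built on a query like this.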
Wednesday, March 21, 2012
How to improve the performance
Hi There
We have an audit_log table with 82 million records. We have a trigger on each
table in the database to log any changes (update, insert, or delete) for
all fields.
We don't have a primary key on the table, just one index.
Will it affect performance on the server side or the client side while inserting
records? We thought about truncating the records, but we would have to change some
applications if we truncate.
Thanks
Ganesh|||What are you wanting to improve the performance of: inserts, selects, etc.? You
only mention inserts, so I will focus on that. When a row is inserted,
additional I/O and CPU time is spent updating indexes, so keep indexes on
the inserted table to a minimum.
Actually, you may not need any indexes on this table at all. You could implement
a partitioned table strategy. For example, the trigger could perform the
initial insert into a table called [audit_current]. You could then schedule
a job that, once per hour / day / week etc., will insert rows from
[audit_current] into [audit_history] and then truncate [audit_current].
Therefore, [audit_current] will contain only a small number of rows and can
be inserted into quickly. The table [audit_history] would contain the
remaining history of > 82 million rows, and can be indexed more heavily
for select queries. On those occasions where you need to query the audit
data, you can manually run the InsertAuditHistory job to ensure that it
contains all rows up to that point in time.
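A minimal sketch of the rotation job described above (table names are the ones suggested; this assumes the two audit tables have identical schemas, and holds a table lock so trigger inserts cannot slip between the copy and the truncate):

```sql
-- Move accumulated audit rows into history, then empty the hot table.
BEGIN TRAN
    INSERT INTO audit_history
    SELECT * FROM audit_current WITH (TABLOCKX, HOLDLOCK)

    TRUNCATE TABLE audit_current
COMMIT TRAN
```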
To understand what type of I/O and CPU is required for inserting or
selecting the audit table, you can use the following:
SET STATISTICS IO ON
SET STATISTICS TIME ON
SET STATISTICS PROFILE ON
select * from audit where ...
update audit ...
SET STATISTICS IO OFF
SET STATISTICS TIME OFF
SET STATISTICS PROFILE OFF
"Ganesh" <gsganesh@.yahoo.com> wrote in message
news:BC71293F-0BBB-4209-897F-C068FA02CE2E@.microsoft.com...
> Hi There
> We've a audit_log table with 82 million records. We have trigger on each
> table in the database to log any changes for update or insert or delete
> for
> all fields.
> we dont have any primary key on the table, just only one index
> Will it affect any performance on server side or client side while
> inserting
> records. Just we thought about truncate records. But we have to change
> some
> applications if we truncate.
> Thanks
> --
> Thanks
> Ganesh
> --
> Thanks
> Ganesh|||Thanks for you reply
I want to improve Insert, Currently the application is being used by around
500 user it's very slow now. I thought this audit_log also may be improve
performance if we truncate this one. and also takes spaces too.
Thanks
Ganesh
"JT" wrote:
> What are wanting to improve the performance of: inserts, selects, etc.? Yo
u
> only mention inserts, so I will focus on that. When a row is inserted,
> additional I/O and CPU time is spent updating indexes, so keep indexes on
> the inserted table to a minimum.
> Actually, you not need any indexes on this table at all. You could impleme
nt
> a partitioned table strategy. For example, the trigger could perform the
> initial insert into a table called [audit_current]. You could then schedul
e
> a job that once per hour / day / w
etc. will insert rows from
> [audit_current] into [audit_history] and then truncate [audit_current].
> Therefore, [audit_current] will contain only a small number of rows and ca
n
> be inserted quickly. The table table [audit_history] would contain the
> remaining history of > 82 million rows, and and can be indexed more heavil
y
> for select queries. On those occasions where you need to query the audit
> data, you can manually run the InsertAuditHistory job to insure that it
> contains all rows up to that point in time.
> To understand what type of I/O and CPU is required for inserting or
> selecting the audit table, you can use the following:
> SET STATISTICS IO ON
> SET STATISTICS TIME ON
> SET STATISTICS PROFILE ON
> select * from audit where ...
> update audit ...
> SET STATISTICS IO OFF
> SET STATISTICS TIME OFF
> SET STATISTICS PROFILE OFF
>
> "Ganesh" <gsganesh@.yahoo.com> wrote in message
> news:BC71293F-0BBB-4209-897F-C068FA02CE2E@.microsoft.com...
>
>|||It sounds like your application makes frequent updates to tables, and you
are needing to audit all of this.
To benefit from parallel I/O processing, consider placing the audit tables
on a separate disk. Your log, audit_current, and audit_history tables
could perhaps be placed in the same file group.
http://msdn2.microsoft.com/en-us/library/ms187087.aspx
http://www.databasejournal.com/feat...cle.php/3114651
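For example, moving the audit tables onto their own filegroup might look like the following sketch (the database name, file path, and audit column layout are assumptions for illustration):

```sql
-- Create a filegroup backed by a file on a separate disk,
-- then place the hot audit table on it.
ALTER DATABASE MyDb ADD FILEGROUP AuditFG
ALTER DATABASE MyDb
    ADD FILE (NAME = AuditData, FILENAME = 'E:\SQLData\audit.ndf')
    TO FILEGROUP AuditFG

CREATE TABLE audit_current (
    audit_id   int IDENTITY(1,1),
    table_name sysname,
    changed_at datetime DEFAULT GETDATE(),
    detail     nvarchar(4000)
) ON AuditFG
```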
It is also possible that your insert performance is not related to I/O but
to blocking. When you have several processes attempting to insert into a
table simultaneously, there is a potential for blocking and even
deadlocking.
http://support.microsoft.com/defaul...kb;EN-US;224453
"Ganesh" <gsganesh@.yahoo.com> wrote in message
news:6FB90B35-7449-4FF6-91ED-984F25BCB9BA@.microsoft.com...
> Thanks for you reply
> I want to improve Insert, Currently the application is being used by
> around
> 500 user it's very slow now. I thought this audit_log also may be improve
> performance if we truncate this one. and also takes spaces too.
>
> --
> Thanks
> Ganesh
>
> "JT" wrote:
>