
Friday, March 23, 2012

How to include the date as part of the log file name...

Hello to all,
I'm using SQL Server Agent to perform some scheduled jobs.
I have it set up so that it will write to a result output
file (Job Steps -> Advanced -> Output file).
The problem is that I would like to name this output file
something like 'result%date%.log', where %date% is the current
date, so that I can get a separate log file each day.
Does anyone know how to do this? %date% does not work.
Thanks in advance.
Kin:
The way I have it set up is that I do archiving of these output log files every
day as a separate job. In the archiving process I copy the files and add the
date to the filename as follows:
declare @cmd sysname
declare @var sysname
set @var = convert(varchar(11), getdate(), 112)
--add the date to the filename and archive
set @cmd = 'copy e:\Logs\DBMaintenance\*.* e:\Logs\Archive_Logs\*' + @var + '.txt'
exec master..xp_cmdshell @cmd
I'm not sure that there is a direct way to do that in SQL Agent. You could
always build your own backup log command and execute it directly. Also,
you could simply let the job name the file what it wants, and then add a
step that uses xp_cmdshell to rename the file.
--
Brian Moran
Principal Mentor
Solid Quality Learning
SQL Server MVP
http://www.solidqualitylearning.com
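A minimal sketch of the rename approach as an extra job step, assuming a hypothetical fixed output path (e:\Logs\result.log) configured in the Output file box:

```sql
-- Rename the fixed Agent output file, stamping it with today's date;
-- convert style 112 gives YYYYMMDD, e.g. result.log -> result20120323.log.
declare @cmd varchar(255)
set @cmd = 'ren e:\Logs\result.log result'
         + convert(varchar(8), getdate(), 112) + '.log'
exec master..xp_cmdshell @cmd
```

Run this as the final step of the job, after the step that writes the output file.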

Wednesday, March 21, 2012

how to improve the Performance

Hi There
We have an audit_log table with 82 million records. We have a trigger on each
table in the database to log any change (update, insert, or delete) for
all fields.
We don't have a primary key on the table, just one index.
Will it affect performance on the server side or client side while inserting
records? We have thought about truncating records, but we would have to change
some applications if we truncate.
Thanks,
Ganesh

What are you wanting to improve the performance of: inserts, selects, etc.? You
only mention inserts, so I will focus on that. When a row is inserted,
additional I/O and CPU time is spent updating indexes, so keep indexes on
the inserted table to a minimum.
Actually, you may not need any indexes on this table at all. You could implement
a partitioned table strategy. For example, the trigger could perform the
initial insert into a table called [audit_current]. You could then schedule
a job that once per hour / day / week will insert rows from
[audit_current] into [audit_history] and then truncate [audit_current].
Therefore, [audit_current] will contain only a small number of rows and can
be inserted into quickly. The table [audit_history] would contain the
remaining history of > 82 million rows, and can be indexed more heavily
for select queries. On those occasions where you need to query the audit
data, you can manually run the InsertAuditHistory job to ensure that it
contains all rows up to that point in time.
To understand what type of I/O and CPU time is required for inserting into or
selecting from the audit table, you can use the following:
SET STATISTICS IO ON
SET STATISTICS TIME ON
SET STATISTICS PROFILE ON
select * from audit where ...
update audit ...
SET STATISTICS IO OFF
SET STATISTICS TIME OFF
SET STATISTICS PROFILE OFF
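The hourly/daily rotation described above can be sketched as the body of that scheduled job. A minimal sketch, assuming the two tables named there ([audit_current], [audit_history]) have identical column lists:

```sql
-- Move accumulated audit rows into the history table, then empty the
-- hot table. One transaction, so no rows are lost between the two steps.
begin transaction
    insert into audit_history
    select * from audit_current with (tablockx)

    truncate table audit_current
commit transaction
```

The tablockx hint blocks concurrent trigger inserts for the duration of the copy, which keeps the truncate from discarding rows that arrived after the insert ran.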
Thanks for your reply.
I want to improve inserts. Currently the application is being used by around
500 users and it's very slow now. I thought truncating this audit_log might
also improve performance, and it takes up space too.
Thanks
Ganesh
It sounds like your application makes frequent updates to tables, and you
need to audit all of this.
To benefit from parallel I/O processing, consider placing the audit tables
on a separate disk. Your log, audit_current, and audit_history tables
could perhaps be placed in the same filegroup.
http://msdn2.microsoft.com/en-us/library/ms187087.aspx
http://www.databasejournal.com/feat...cle.php/3114651
It is also possible that your insert performance is not related to I/O but
to blocking. When you have several processes attempting to insert into a
table simultaneously, there is a potential for blocking and even
deadlocking.
http://support.microsoft.com/defaul...kb;EN-US;224453

Friday, March 9, 2012

How to import apache log file into a table in SQL 2000

Hi friends,
I need to import a web log file coming from an Apache server into a table. But the problem is that the log file is in some strange format; it's not comma delimited or tab delimited.
I am sending you some records of the file for reference.

62.219.115.5 - - [01/Mar/2007:04:26:51 -0700] "GET http://zdpub.dl.llnw.net/flvplayer3.../c4/window4.jpg HTTP/1.0" 304 177 "http://zdpub.vo.llnwd.net/o2/flvplayer/frontend6.swf" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; NetCaptor 7.5.4; .NET CLR 1.1.4322; FDM; .NET CLR 2.0.50727)"

165.21.154.9 - - [01/Mar/2007:04:53:53 -0700] "GET http://zdpub.dl.llnw.net/crankygeek...geeks.052.i.mp4 HTTP/1.0" 206 100740 "-" "iTunes/7.0.2 (Macintosh; N; PPC)"

217.255.204.70 - - [01/Mar/2007:10:30:01 -0700] "GET http://zdpub.dl.llnw.net/crankygeek...kygeeks.053.mov HTTP/1.0" 200 107184139 "-" "iTunes/7.0.2 (Macintosh; N; Intel)"

194.94.44.4 - - [01/Mar/2007:10:30:43 -0700] "GET http://zdpub.dl.llnw.net/crankygeek...geeks.053.i.mp4 HTTP/1.0" 200 687979 "-" "iTunes/7.0.2 (Macintosh; N; Intel)"

85.124.149.78 - - [01/Mar/2007:10:31:51 -0700] "GET http://zdpub.dl.llnw.net/crankygeek...kygeeks.053.mov HTTP/1.0" 200 138751823 "-" "Democracy/0.9.5.2 (http://www.participatoryculture.org)"

I'm so confused about how to import this file into a table.
And it is something that I need to do urgently.
Any help will be greatly appreciated.
Thanks.

I would change the file extension to .txt.

In Enterprise Manager go to Tools -> Data Transformation Services -> Import Data and follow the wizard. Change the Data Source to a text file.

It should create a table as is.

Hope it helps.

Hi,
Actually I didn't understand what you are saying.
I mean I have a table ready in my database into which I need to import this file.
e.g. the first column in my table is IP address, in which it takes 62.219.115.5.
The second field is browser ID. In that field, instead of taking [01/Mar/2007:04:26:51 -0700], it takes - and so on.
So my problem is that when I try to import the data using DTS, it's not fetching the correct data into the respective columns. I guess this is because of the format of the file.
Thanks.

Do you need this as a one-time deal or on a permanent basis, like on a schedule every day?

You have a hard case, it is true.

What I was trying to tell you: let SQL Server generate a new temporary table on the server and then deal with it on the server side. It will probably create one column with all the text in it for each record. But then you would be able to process it and insert into the production table if possible.

I don't think the server would be able to split this record into the correct columns with no delimiters.

Have you ever seen this?

http://www.microsoft.com/downloads/details.aspx?FamilyID=890cd06b-abf8-4c25-91b2-f8d975cf8c07&displaylang=en

It is a log parser. I used it once with pretty good success.

Good luck.

Actually I'll have to do this once a month.
I'll get log files for the whole month from some other server.
And at the end of the month, I'll have to parse all the files into a table.

Try to use logparser then.
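The one-column staging idea suggested above can be sketched in T-SQL. This is a minimal sketch under assumed names (staging table raw_log, file path e:\logs\access.log); since the NCSA-style line is space delimited up to the bracketed timestamp, string functions can carve out the leading fields:

```sql
-- Load each raw log line into a one-column staging table.
create table raw_log (line varchar(4000))

bulk insert raw_log from 'e:\logs\access.log'
with (rowterminator = '\n')

-- Carve the client IP (text before the first space) and the bracketed
-- timestamp out of each line.
select
    left(line, charindex(' ', line) - 1) as client_ip,
    substring(line,
              charindex('[', line) + 1,
              charindex(']', line) - charindex('[', line) - 1) as log_time
from raw_log
where charindex(' ', line) > 1
  and charindex('[', line) > 0
  and charindex(']', line) > charindex('[', line)
```

The remaining quoted fields (request, referrer, user agent) can be pulled out the same way with charindex on the double quotes, and the parsed rows then inserted into the production table.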

Wednesday, March 7, 2012

how to implement user function/query

Hi,

In my database I have a table which is a kind of history log. The report is designed to group these entries by date. Showing the history data for the entries works fine, but for each history entry I also need to show its previous entry (that is, the row with the maximum date that is smaller than the currently selected one). But what function/query do I have to implement?

Help would be highly appreciated.

Thank you,
gbruse
Use a select statement with BETWEEN and AND.

But how can I use the result of a select statement in a function? What I wanted to do is get all dates and take the latest one that is below the currently selected date (of the group).

gbr

Send me your code and table structure.

+----------+------------------+------+-----+---------+----------------+
| Field    | Type             | Null | Key | Default | Extra          |
+----------+------------------+------+-----+---------+----------------+
| id       | int(10) unsigned |      | PRI | NULL    | auto_increment |
| id_cust  | int(10) unsigned |      |     | 0       |                |
| add_data | varchar(20)      | YES  |     | NULL    |                |
+----------+------------------+------+-----+---------+----------------+

Imagine you have a table with the dates on which a user bought something. What I want to achieve is showing all items grouped by date, together with the date before the selected date.

The report shall look like:

date customer goods
old 06/02/05 Mr. Brown ...
new 06/03/05 Mr. Brown ...
-------------------
old 07/02/05 Mrs. Green ...
new 07/24/05 Mrs. Green ...

Thanks for your reply,

gbr.

It would help if somebody could tell me how to write a query (select xyz from tab where ...) and how to store the return value in a variable. An example to get familiar with the Crystal syntax would be enough for a start.

gbr

Is there anyone who can tell me how to store an entire column of a table in a variable?

When I use this function

local datetimevar array dates := {tab.date};
count(dates);

Only 1 is displayed, though there is more than one row in this table...
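Outside of Crystal syntax, the "previous entry" lookup described in the question is straightforward in SQL with a correlated subquery. A sketch under assumed names (table history with columns id_cust and entry_date):

```sql
-- For each history row, fetch the immediately preceding entry date for
-- the same customer: the maximum date strictly smaller than this row's.
select h.id_cust,
       h.entry_date,
       (select max(p.entry_date)
        from history p
        where p.id_cust = h.id_cust
          and p.entry_date < h.entry_date) as prev_entry_date
from history h
order by h.id_cust, h.entry_date
```

A report can then group on id_cust and show entry_date and prev_entry_date side by side, as in the old/new layout above.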

Friday, February 24, 2012

How to implement a change log for SQL database?

I need a log showing all changes in a few tables in a database, to be able to show who changed what and when (without this, the customer keeps claiming the database magically changes all by itself *sigh*).

Is there a best-practice way to do this? I'm not even sure whether I should change the SQL for every insert/update/delete, or rather install some triggers to track the changes.

Thanks,
Sam

Triggers are the most common method for populating audit tables.

As mentioned, the most common way is by using triggers.

If you're just looking for a quick and easy audit solution, I can recommend this script by Nigel Rivett.

You could also use the log file and something like Log Navigator to see who performed which changes.

To expand on the 'Trigger' suggestions:

Create an 'Audit' table with the exact same schema as the production table -

EXCEPT: remove all CONSTRAINTs, IDENTITY properties, and PRIMARY KEY constraints.

ADD a column for ChangeDate datetime DEFAULT getdate()

ADD a column for ChangeUser varchar(50) DEFAULT system_user

ADD a column for ActionType char(1) (Values will be 'U' or 'D'.)

Create a TRIGGER on the production table for UPDATE, DELETE that copies the affected rows into the Audit table.
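The steps above can be sketched as follows, for a hypothetical production table Customer(CustomerID, Name):

```sql
-- Audit copy: same columns, no constraints, plus the three audit columns.
create table Customer_Audit (
    CustomerID int,
    Name       varchar(50),
    ChangeDate datetime    default getdate(),
    ChangeUser varchar(50) default system_user,
    ActionType char(1)
)
go

create trigger trg_Customer_Audit
on Customer
for update, delete
as
begin
    -- The 'deleted' pseudo-table holds the pre-change image of every
    -- affected row; 'U' when new images exist too, otherwise 'D'.
    insert into Customer_Audit (CustomerID, Name, ActionType)
    select d.CustomerID, d.Name,
           case when exists (select * from inserted) then 'U' else 'D' end
    from deleted d
end
```

ChangeDate and ChangeUser are filled in by their DEFAULTs, so the trigger never has to mention them.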