
Friday, March 30, 2012

How to insert time in MS SQL DB

hi..

I want to store a time value in the database, i.e. the 'time in' and 'time out' values for a particular entry of an employee. I have taken datetime as the datatype in my MS SQL 2000 database and my language is VB.NET. How can I store a time value in my database?

There is no datatype strictly for time. I suggest storing the entire date and time value. What if your employee clocks in at 7:00 PM and clocks out at 3:00 AM?|||You told me to store the time in a datetime data type along with the date. But in what format should I enter the values if I want to store them in 24-hour time format?|||Well, I didn't tell you to store the data in a datetime data type, but I did suggest it. You can see from the datetime and smalldatetime documentation that SQL Server stores the data in two 4-byte integers: the first 4 bytes store the offset from the base date (January 1, 1900), and the other 4 bytes store the number of milliseconds after midnight.

You might use something like this to set a date/time as a parameter value:

myCommand.Parameters.Add("@DateTimeIn", SqlDbType.DateTime).Value = DateTime.Parse("2006-03-04 23:01:00")
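
For the time-in/time-out scenario specifically, here is a minimal T-SQL sketch (the table and column names are invented for illustration, not taken from the thread) that stores full datetime values, which keeps the overnight 7:00 PM to 3:00 AM case unambiguous and makes the elapsed time easy to compute:

-- hypothetical attendance table
CREATE TABLE dbo.Attendance_Demo (
    EmployeeID int NOT NULL,
    TimeIn datetime NOT NULL,
    TimeOut datetime NULL
)

-- store the full date and time; the ISO 8601 form 'yyyy-mm-ddThh:mi:ss' is unambiguous
INSERT INTO dbo.Attendance_Demo (EmployeeID, TimeIn, TimeOut)
VALUES (1, '2006-03-04T19:00:00', '2006-03-05T03:00:00')

-- hours worked, even when the shift crosses midnight
SELECT EmployeeID,
       DATEDIFF(minute, TimeIn, TimeOut) / 60.0 AS HoursWorked
FROM dbo.Attendance_Demo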
|||

Datetime values by their nature do not have a "format". It's when you convert the datetime to a string that the string has a "format". (format is quoted because, yes, of course datetime has a format, as tmorton pointed out, but it's not what most people mean)

Just to point this out further, in tmorton's MyCommand.Parameters.Add statement above, you are giving the .Value a datetime object. DateTime.Parse("some string") takes a string in a format that DateTime can convert to a valid datetime object, based on your current culture's standard date and time formats. The format tmorton chose (YYYY-MM-DD HH:MM:SS) is the one least likely to give any problems because it is universally available in all cultures, unlike say MM/DD/YYYY which is available in en-US, or (guessing) DD/MM/YYYY in en-GB, or DD.MM.YYYY in fr-FR (sorry if these aren't the correct formats for England and/or France).
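
To see the analogous point on the database side (a small T-SQL sketch; the dates are arbitrary), SQL Server's interpretation of separated date strings for a datetime column also depends on the session's DATEFORMAT/language settings, which is why the unambiguous forms are the safer habit:

SET DATEFORMAT ymd
SELECT CAST('2006-03-04 23:01:00' AS datetime)   -- March 4, 2006

SET DATEFORMAT ydm
SELECT CAST('2006-03-04 23:01:00' AS datetime)   -- April 3, 2006: same string, different date

-- the unseparated form is read the same way regardless of DATEFORMAT
SELECT CAST('20060304 23:01:00' AS datetime)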

|||

Hi..

I got my problem's solution, but a small problem still remains.

I am displaying my records in a datagrid with code like this:

<asp:Label ID="timeinlbl" Width="30" Runat="server" Text='<%# DataBinder.Eval(Container, "DataItem.timein", "{0:hh:mm}") %>' />

It shows me the time in '08:30' format. I also want to display am/pm. What should I add to display the format 'am/pm'?

|||

geek.net wrote:

What should I add to display the format 'am/pm'?

Starting from here: BoundColumn.DataFormatString Property, I followed the link to: Formatting Overview. From there I followed the link to: Date and Time Format Strings, and then followed the final link to: Custom DateTime Format Strings. This page should help you discover the exact format string you need.

Wednesday, March 28, 2012

How to Insert date in sql server database

How to insert date to the sql server database.

I am getting input from the HTML form and store it to database using
ASP.

how to store date field, what datatype needed and what conversion
needed.

Thanx & Regards,
SSG

SSG (ssg14j@.gmail.com) writes:
> How to insert date to the sql server database.
> I am getting input from the HTML form and store it to database using
> ASP.
> how to store date field, what datatype needed and what conversion
> needed.

The data type to use is datetime or smalldatetime. These always include
the time portion; for dates only, set the time to midnight.

Conversion should occur in the client, by using parameterised statements.

Here is an example that I have canned of a parameterised statement in ADO
(it's VB6 and not ASP, but I believe that they are not too different):

Set cmd = CreateObject("ADODB.Command")
Set cmd.ActiveConnection = cnn

cmd.CommandType = adCmdText
cmd.CommandText = " SELECT OrderID, OrderDate, CustomerID, ShipName " & _
                  " FROM dbo.Orders WHERE 1 = 1 "

If custid <> "" Then
   cmd.CommandText = cmd.CommandText & " AND CustomerID LIKE ? "
   cmd.Parameters.Append _
       cmd.CreateParameter("@custid", adWChar, adParamInput, 5, custid)
End If

If shipname <> "" Then
   cmd.CommandText = cmd.CommandText & " AND ShipName LIKE ? "
   cmd.Parameters.Append cmd.CreateParameter("@shipname", _
       adVarWChar, adParamInput, 40, shipname)
End If

Set rs = cmd.Execute

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx
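
To show the date part the question actually asks about, here is a minimal T-SQL sketch (the table and column are invented for illustration); on the ASP/ADO side the value would be passed as a parameter, for example of type adDBTimeStamp, rather than concatenated into the SQL string:

-- hypothetical table with a datetime column
CREATE TABLE dbo.WebOrders_Demo (
    OrderID int IDENTITY(1,1) PRIMARY KEY,
    OrderDate datetime NOT NULL
)

-- the unseparated yyyymmdd form (optionally with a time part) is
-- language- and DATEFORMAT-independent, so it is safe as a literal
INSERT INTO dbo.WebOrders_Demo (OrderDate)
VALUES ('20060304 23:01:00')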

How to insert date in database table in sql server

I have one form with fields name, date of birth, age.
I would like to get the input from the user and store it in a sql database
table.
After I stored it, while viewing the table records, the date of birth
field is 01.01.1900. How do I store the exact date in the database?
What datatype do I have to use in the database table?
Thanx & Regards,
SSG

SSG wrote:
> I have one form with fields name,date of birth , age.
> I would like to get the input from user and store it to sql database
> table.
> After I stored , while viewing the table records, the dateof birth
> field is 01.01.1900, how to store the exact date in database.
> what datatype i have to store in database table.
> Thanx & Regards,
> SSG
Use the datetime datatype; you can get plenty of information regarding
it in BOL.
Regards
Amish Shah
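
The 01.01.1900 value is simply the datetime base date, which is what SQL Server stores when an empty string or 0 reaches the column instead of the real value. As a rough sketch (table and column names invented for illustration), make sure the column is datetime and that the user's value is actually passed through:

CREATE TABLE dbo.Person_Demo (
    Name varchar(50) NOT NULL,
    DateOfBirth datetime NOT NULL,
    Age int NULL
)

-- an empty string converts to the base date, which is why 1900-01-01 shows up
SELECT CAST('' AS datetime)   -- 1900-01-01 00:00:00.000

-- pass the real value instead; the unseparated yyyymmdd form is language-neutral
INSERT INTO dbo.Person_Demo (Name, DateOfBirth, Age)
VALUES ('SSG', '19800215', 27)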

How to insert BLOB data Using a Stored Procedure

Hi,

Can we insert BLOB data using a stored procedure with Oracle and ASP.NET?

I have inserted BLOB data using an insert command; now I want to insert that BLOB via a stored procedure...

any links/tips will be helpful...

Hi,

Of course, you can use a stored procedure to insert BLOB data. Use a data type that takes a byte array as a parameter, and you can insert it into the table. You can just use the INSERT command as the content of the stored procedure.
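
For the SQL Server side of that idea (the question mentions Oracle, so take this purely as a sketch of the pattern; table, procedure, and parameter names are invented), the stored procedure takes the BLOB as a binary parameter and the application passes its byte array as the parameter value:

CREATE TABLE dbo.Documents_Demo (
    DocID int IDENTITY(1,1) PRIMARY KEY,
    DocName varchar(100) NOT NULL,
    DocData image NOT NULL   -- varbinary(max) on SQL Server 2005 and later
)
GO
CREATE PROCEDURE dbo.InsertDocument_Demo
    @DocName varchar(100),
    @DocData image
AS
    INSERT INTO dbo.Documents_Demo (DocName, DocData)
    VALUES (@DocName, @DocData)
GO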

Monday, March 26, 2012

how to insert an identity key through a stored procedure?

I want to insert 3 values into a sql database through a stored procedure

-- the stored procedure is like this --

ALTER PROCEDURE dbo.FreeExperience
    @identity int,
    @name nvarchar(20),
    @gender int
AS
begin
    Insert into list (cid, name, gender)
    values (@identity, @name, @gender)
end
RETURN

------------------

the cid is an identity value. It will increment by 1 automatically when the user inserts new data.

if I ignore this column in the stored procedure it will cause an error (because this column does not allow null)

please help. thanks

Try

ALTER PROCEDURE dbo.FreeExperience
    @name nvarchar(20),
    @gender int
AS
    INSERT INTO list (name, gender) VALUES (@name, @gender)
RETURN
GO

or

CREATE PROCEDURE dbo.FreeExperience2
    @identity int,
    @name nvarchar(20),
    @gender int
AS
    SET IDENTITY_INSERT list ON    -- allows an identity value to be supplied
    INSERT INTO list (cid, name, gender) VALUES (@identity, @name, @gender)
    SET IDENTITY_INSERT list OFF
RETURN
|||

If your cid field is set up as an identity column in your table (cid int identity(1,1)),

an insert statement like this should work without any errors:

Insert into list (name, gender)
values (@name, @gender)

and the identity value will be set automatically by SQL Server.

|||

I still get an error message...

Exception information: System.Data.SqlClient.SqlException: Procedure or function 'FreeExperience' expects parameter '@identity', which was not supplied.

P.S. FreeExperience is my stored procedure name.

|||

If you want SQL Server to automatically add the identity value for you, you need to remove the column entirely from the proc and from your INSERT statement. If you want to manually supply the value, then you use the SET IDENTITY_INSERT <Table> ON clause and provide the value you want to be used instead. If the column is a primary key for the table, you also need to make sure the value you are providing does not already exist.

|||

thank you... but I am not sure I really understand what you meant

can you please give me a simple example of this?

this stored procedure is for a program which will insert new records into the table

one column named "cid" in the table, which is set as an identity column, should automatically increment by 1 and be inserted into cid

|||

Use this stored procedure
alter PROCEDURE dbo.FreeExperience
    @name nvarchar(20),
    @gender int
AS
    Insert into list (name, gender) values (@name, @gender)
RETURN

|||

this stored procedure will cause an error

because the cid column is not allowed to be null...

|||

>>this stored procedure will cause an error because the cid column is not allowed to be null...

Identity columns are populated automatically by the database, so the stored procedure will work!

|||

thank you... but it really doesn't work.

it keeps showing the error message:

Cannot insert the value NULL into column 'cid', table 'list'; column does not allow nulls. INSERT fails.

I checked all the columns in table list in the database; all columns allow null except column cid.

the statement will work only if I set the cid column to accept null...

please help..

|||

Modify ur table by setting the "IdentityIncrement=1, IdentitySeed=1" of column cId, then remove cId from ur storedprocedure

HTH

|||

Modify ur table by setting the "IsIdentity=Yes, IdentityIncrement=1, IdentitySeed=1" of column cId, then remove cId from ur storedprocedure

HTH
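
To tie the thread together, here is a minimal sketch (the table definition is assumed, since the real DDL was never posted) of the usual pattern: declare cid as an identity column, keep it out of the procedure entirely, and read the generated value back with SCOPE_IDENTITY() if the application needs it:

-- assumed table definition: cid must actually be declared as an identity column
CREATE TABLE dbo.list_demo (
    cid int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    name nvarchar(20) NOT NULL,
    gender int NOT NULL
)
GO
CREATE PROCEDURE dbo.FreeExperience_Demo
    @name nvarchar(20),
    @gender int
AS
    -- cid is not mentioned anywhere; SQL Server generates it
    INSERT INTO dbo.list_demo (name, gender) VALUES (@name, @gender)

    -- return the generated key to the caller
    SELECT SCOPE_IDENTITY() AS NewCid
GO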

Friday, March 23, 2012

how to index large varchar field

Here's the situation: we have a character based field of up to 2048
characters in length that we want to store in a single column in a
sqlserver table. need to be able to search on this field and guarantee
uniqueness. it would be nice if this field has an index with a unique
constraint. the problem is that sqlserver can't index anything over
900 characters in length.
any suggestions appreciated.
we have a couple thoughts so far:
1 - compress this field in the data access layer before saving in sql -
that should keep the length of the field under 900 chars and let us
apply a unique index in sql. however, we would still have the
uncompressed, unindexed version of the field in another column
searching.
2 - split the field into multiple columns each under 900 characters.
apply an index to each column. however, we'd lose the uniqueness
constraint and this would complicate searches.
3 - combination of 1 and 2.
by the way - we're using SQL Server 2005.
In an effort to maintain uniqueness you could investigate creating three
additional columns (varchar(683)), then populate the data from the field in
question evenly across the new columns. To ensure uniqueness you could
create a concatenated key with the three new columns. Just a thought....
Thomas
"steve.c.thompson@.gmail.com" wrote:

> Here's the situation: we have a character based field of up to 2048
> characters in length that we want to store in a single column in a
> sqlserver table. need to be able to search on this field and guarantee
> uniqueness. it would be nice if this field has an index with a unique
> constraint. the problem is that sqlserver can't index anything over
> 900 characters in length.
> any suggestions appreciated.
> we have a couple thoughts so far:
> 1 - compress this field in the data access layer before saving in sql -
> that should keep the length of the field under 900 chars and let us
> apply a unique index in sql. however, we would still have the
> uncompressed, unindexed version of the field in another column
> searching.
> 2 - split the field into multiple columns each under 900 characters.
> apply an index to each column. however, we'd lose the uniqueness
> constraint and this would complicate searches.
> 3 - combination of 1 and 2.
> by the way - we're using SQL Server 2005.
>
|||I was thinking the same thing Thomas. I created the three additional
columns and then tried to create the composite key, but sql won't create
an index across the three fields since the sum of the field lengths is
greater than 900.
steve.
|||steve.c.thompson@.gmail.com wrote:
> Here's the situation: we have a character based field of up to 2048
> characters in length that we want to store in a single column in a
> sqlserver table. need to be able to search on this field and
> guarantee uniqueness. it would be nice if this field has an index
> with a unique constraint. the problem is that sqlserver can't index
> anything over 900 characters in length.
> any suggestions appreciated.
> we have a couple thoughts so far:
> 1 - compress this field in the data access layer before saving in sql
> - that should keep the length of the field under 900 chars and let us
> apply a unique index in sql. however, we would still have the
> uncompressed, unindexed version of the field in another column
> searching.
Compression means you're dealing with binary data and you're not going
to get a lot of compression on a max of 2K worth of data.

> 2 - split the field into multiple columns each under 900 characters.
> apply an index to each column. however, we'd lose the uniqueness
> constraint and this would complicate searches.
Adds additional management.
I would recommend that you hash the data and generate a 32-bit hash
value. You can do this from the client using .Net, for example, or using
an extended stored procedure. SQL Server has the T-SQL BINARY_CHECKSUM()
function, but it has a limitation: it can generate the same checksum for a
few different values. However, the T-SQL function would be quite easy to
implement using an INSERT/UPDATE trigger on the table.
select BINARY_CHECKSUM('ABC123'), BINARY_CHECKSUM('aBC123')
|||Steve,
I think you must get the data size down to 900 characters or less,
otherwise you will not be able to guarantee uniqueness.
As for the searching: if you have to search in the texts (for example
LIKE '%some text%'), then an index is only useful if the data column is
narrow in comparison to average row size. That is probably not the case
here, which means a table scan (or clustered index scan) is probably
most efficient.
If it is relatively small in comparison to the average row size, then
you could create a separate table with only a key column and the
varchar(2048) column, with a one-to-one relation to the original table.
HTH,
Gert-Jan
steve.c.thompson@.gmail.com wrote:
> Here's the situation: we have a character based field of up to 2048
> characters in length that we want to store in a single column in a
> sqlserver table. need to be able to search on this field and guarantee
> uniqueness. it would be nice if this field has an index with a unique
> constraint. the problem is that sqlserver can't index anything over
> 900 characters in length.
> any suggestions appreciated.
> we have a couple thoughts so far:
> 1 - compress this field in the data access layer before saving in sql -
> that should keep the length of the field under 900 chars and let us
> apply a unique index in sql. however, we would still have the
> uncompressed, unindexed version of the field in another column
> searching.
> 2 - split the field into multiple columns each under 900 characters.
> apply an index to each column. however, we'd lose the uniqueness
> constraint and this would complicate searches.
> 3 - combination of 1 and 2.
> by the way - we're using SQL Server 2005.
|||Thanks for the replies guys.
We were originally using a hash for the key as David suggested, but
thinking back to my days at university when we had to program hash
algorithms, they don't guarantee uniqueness. In that case the insert
would cause a sql exception and we wouldn't be able to store certain
data strings with non-unique hashes. That would be bad.
In the end, I think we're just going to have to reduce the field to 900
characters as Gert-Jan has suggested. I don't think there's any other
way around it.
steve
|||why didn't you use the text datatype... any reason?
|||i need to be able to put a unique index on this field, and text fields do
not allow that.
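
One more option worth sketching (column, table, and index names here are invented, and a hash still cannot strictly guarantee uniqueness, it only makes accidental collisions vanishingly unlikely compared to BINARY_CHECKSUM): on SQL Server 2005 you can keep the full 2048-character column for searching and add a persisted computed column holding a HASHBYTES value, then put the unique index on the hash:

CREATE TABLE dbo.BigText_Demo (
    ID int IDENTITY(1,1) PRIMARY KEY,
    BigText varchar(2048) NOT NULL,
    -- 20-byte SHA-1 hash of the text, small enough to index
    BigTextHash AS CAST(HASHBYTES('SHA1', BigText) AS binary(20)) PERSISTED
)

CREATE UNIQUE INDEX UX_BigText_Demo_Hash ON dbo.BigText_Demo (BigTextHash)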

Wednesday, March 21, 2012

How to include a dataset field value in the header

Since I can not directly use a field value of a dataset in the header and
footer section of a report, I'm trying to store this field value in a
variable in the custom code.
Is there a way to do this?
Thanks...

I would suggest using a parameter with a default value from a query.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
"eralper" <eralper@.discussions.microsoft.com> wrote in message
news:5F028ED8-E8BC-4795-A79E-3481CECC9EA6@.microsoft.com...
> Since I can not directly use a field value of a dataset in the header and
> footer section of a report, I'm trying to store this field value in a
> variable in the custom code.
> Is there a way to do this?
> Thanks...

Monday, March 19, 2012

how to improve "update" speed

hi

in asp.net, i used sql server to store records.

a table has 1 million records, but when i update a record, it is very slow.

is "create index" helpful for the "update" operation?

i need help,thanks a lot.

First, ensure that you have properly created indexes on your table and a primary key.

Second, ensure that the table holds data only for one business period (in principle, one year).

Shrink your data table.

Rebuild (defragment) your indexes.

This will help you.

|||

Post the update statement you have as well as all the table information. Do an "EXEC sp_Help TableName" in your query analyzer and post the results here.

|||

Hi ndinakar:

mytable has 100,000 records. i use "exec sp_help mytable", and its result is:

Name Owner Type Created_datetime

mytable dbo user table 2007-11-30 9:34:21

can i get further information

thanks a lot

|||

Hi yugiant,

Based on my understanding, you are now trying to create indexes in your sql server database in order to improve your update performance. If I've understood you wrong, please feel free to tell me, thanks.

Yes, I think using an index is a good idea. Using indexes in your database will save a lot of time in locating the specific record, and since we need to locate the record before updating it, an index is a great help for improving update performance. exec sp_help will list the properties of your table, including all the indexes you have created on it. You can also get your table's index information from Object Explorer (in the Indexes tree view node).

I strongly suggest reading this: http://msdn2.microsoft.com/en-us/library/ms189051.aspx I think it will be of much help in solving your problem. thanks.

Hope my suggestion helps
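
As a concrete illustration (the table, column, and update statement are assumptions, since the actual update was never posted), an index on the column used in the UPDATE's WHERE clause is what typically turns a full scan of the million rows into a quick seek:

-- hypothetical update: UPDATE dbo.Orders SET Status = 'Shipped' WHERE CustomerID = @custid

-- without an index on CustomerID, SQL Server scans the whole table to find
-- the rows to update; with it, it can seek directly to them
CREATE INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID)

-- after large data changes, keep statistics current so the optimizer uses the index
UPDATE STATISTICS dbo.Orders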

Wednesday, March 7, 2012

How to implement?

Hi all,

I have many groups of data in which column A stores a number and column B stores a location. My syntax is as below:

select count(columnA) as no, columnB from table group by columnB

No columnB

20 A

10 B

5 C

How can I select 5 rows for each columnB? (a total of 15 rows)

Thanks!

check out the docs about using TOP.
|||TOP is part of it, but it can't get you what you're looking for on its own. I've done it with a cursor before: create a temp table or table variable, create a cursor over each unique value of B, loop through the cursor inserting into the temp table the top 5 rows for each B, and when the cursor is done, select from your temp table.
|||I think that a temp table and a cursor are probably overkill in this situation. I'd bet that a correlated subquery would be more efficient.
Something like this:

SELECT t1.columnA, t1.columnB
FROM table t1
WHERE t1.columnA IN (SELECT TOP 5 t2.columnA FROM table t2 WHERE t2.columnB = t1.columnB)
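
A small caveat on that sketch (restated here with invented names, since the real table isn't known and 'table' is a reserved word): TOP without an ORDER BY returns an arbitrary 5 rows per group, so if the choice of rows matters, add an ORDER BY inside the subquery:

SELECT t1.columnA, t1.columnB
FROM mytable t1
WHERE t1.columnA IN (SELECT TOP 5 t2.columnA
                     FROM mytable t2
                     WHERE t2.columnB = t1.columnB
                     ORDER BY t2.columnA)   -- pick the 5 smallest columnA per columnB
ORDER BY t1.columnB, t1.columnA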

Sunday, February 19, 2012

How to I enter a small square or a small circle in a NVARCHAR field?

I want to store a small circle in a text field. Can anyone tell me how I can enter it in ASCII code?

Thanks

Hi,

see the ASCII table from here:

http://www.computerhope.com/jargon/a/ascii.htm

The ASCII characters can be produced with the function CHAR(number).

HTH, Jens K. Suessmeyer.

http://www.sqlserver2005.de
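
Since the target column is NVARCHAR, one hedged addition: plain ASCII has no circle or square glyphs, but the Unicode NCHAR() function (the companion of CHAR()) can produce them directly; the code points below are the standard Unicode values for these shapes:

-- Unicode geometric shapes placed into an nvarchar value
DECLARE @shapes nvarchar(10)
SET @shapes = NCHAR(9675) + NCHAR(9679) + NCHAR(9633) + NCHAR(9632)
-- 9675 = white circle, 9679 = black circle,
-- 9633 = white square, 9632 = black square
SELECT @shapes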