There is a table with a timestamp column in SQL Server 2008 R2. As soon as I add this column to my table I see values like 0x00000000000007D1. I tried to put data into it:
UPDATE test_time SET date3=
CONVERT(TIMESTAMP, CONVERT(datetime,'2002-08-20 14:00:00.000',120))
WHERE ogr_fid=1
But I get this error:
Cannot update timestamp column
What's wrong here?
SQL Server's TIMESTAMP datatype has nothing to do with a date and time!
It's just the binary representation of a consecutive number - it's only good for making sure a row hasn't changed since it was read.
In newer versions of SQL Server, it's called ROWVERSION - since that's really what it is. See the MSDN docs on ROWVERSION:
Is a data type that exposes automatically generated, unique binary numbers within a database. rowversion is generally used as a mechanism for version-stamping table rows. The rowversion data type is just an incrementing number and does not preserve a date or a time. To record a date or time, use a datetime2 data type.
So you cannot convert a string to a TIMESTAMP in SQL Server.
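If the goal is to store an actual date and time, add a separate datetime2 column instead. A minimal sketch against the table from the question (the date_sent column name is an assumption):

-- Add a real date/time column; the existing timestamp/rowversion column stays as-is.
ALTER TABLE test_time ADD date_sent datetime2 NULL;

-- The original UPDATE then works, just against the new column.
UPDATE test_time
SET date_sent = CONVERT(datetime2, '2002-08-20 14:00:00.000', 120)
WHERE ogr_fid = 1;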
Related
I have migrated a SQL Server database from one server to another. I did this via export to BAK and then Restore on the new machine.
There seems to be a different format somewhere, as a simple query that was working previously is now throwing an error, but I cannot see what the cause might be (collation and 'containment' info seem the same).
The old SQL Server version: Microsoft SQL Server 2012 - 11.0.6598.0 Express Edition
The new SQL Server version: Microsoft SQL Server 2019 - 15.0.2080.9 Express Edition
The failing query and the error, below, involve a date format:
SELECT userID FROM tblLogin
WHERE CAST('30/09/2021 00:52:14' AS datetime) < DATEADD(n,600,accessDate)
The conversion of a varchar data type to a datetime data type resulted in an out-of-range value.
(Column accessDate is of type datetime, null)
Always always ALWAYS use the ISO-8601 date formats for literals in Sql Server. For historical reasons, the preferred option for date-only values is different from date-time values, so you want either yyyy-MM-ddTHH:mm:ss for date and time or yyyyMMdd for date-only. But the really important thing is to always go in order from most significant (year) to least significant (second).
Anything else isn't really a date literal at all: it's a string you must convert to a date.
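A quick way to convince yourself (safe to run anywhere): both ISO forms below are interpreted the same way no matter what the session's SET LANGUAGE or SET DATEFORMAT happens to be.

SELECT CAST('20210930' AS datetime)            AS date_only,      -- yyyyMMdd
       CAST('2021-09-30T00:52:14' AS datetime) AS date_and_time;  -- yyyy-MM-ddTHH:mm:ss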
If we follow this correct convention (because anything else really is incorrect), you get it down to this (which doesn't even need the CAST() anymore, because Sql Server can interpret it as datetime from the beginning):
SELECT userID
FROM tblLogin
WHERE accessDate > DATEADD(n, -600, '2021-09-30T00:52:14')
Also note I inverted the check, moving the DATEADD() function to act on the literal, instead of the column. This is always preferred, because it can work with any index you may have on the accessDate column. The original code would have rendered any such index worthless for this query.
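For example, with a (hypothetical) supporting index like the one below, the rewritten predicate can seek on accessDate, whereas wrapping the column in DATEADD() would force a scan:

-- Index name and INCLUDE list are illustrative.
CREATE NONCLUSTERED INDEX IX_tblLogin_accessDate
    ON tblLogin (accessDate)
    INCLUDE (userID);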
SQL Server 2019 - we have a column called Entity which is of type nvarchar(max). The data from this column is inserted from tables on the web as part of an automated process.
In querying for DISTINCT values in this column, we expected only one distinct value, but were actually returned two. Yet the two values looked exactly the same inside SQL Server Management Studio.
So we added a CONVERT(varchar(max)) to the query in a new column, and we were able to see the difference, as follows:
Entity          Converted
Security Law    Security Law
Security Law    Security ?Law
Does anyone know how or why this different value is occurring, and more importantly, how we can instruct SQL Server to treat these as duplicate values, by only analyzing the nvarchar version?
nvarchar takes Unicode characters into account. Since you are copying data from the web, there could be invisible characters in the value.
You can strip out everything except the ASCII characters and convert the result to varchar, so you get the distinct values you expect (see the sketch below).
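SQL Server has no built-in regex, but the same idea can be sketched with PATINDEX: repeatedly find anything outside the printable ASCII range and remove it. The function name and the final query are illustrative, not tested against your schema:

-- Strip every character outside printable ASCII (space through tilde).
CREATE FUNCTION dbo.StripNonAscii (@input nvarchar(max))
RETURNS varchar(max)
AS
BEGIN
    DECLARE @pos int = PATINDEX(N'%[^ -~]%', @input COLLATE Latin1_General_BIN);
    WHILE @pos > 0
    BEGIN
        SET @input = STUFF(@input, @pos, 1, N'');
        SET @pos = PATINDEX(N'%[^ -~]%', @input COLLATE Latin1_General_BIN);
    END;
    RETURN CAST(@input AS varchar(max));
END;
GO

-- Group on the cleaned value so the two look-alike rows collapse into one.
SELECT DISTINCT dbo.StripNonAscii(Entity) AS Entity
FROM dbo.YourEntityTable;   -- table name is a placeholder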
As part of my job duties, I'm responsible for extracting data from our vendor's Oracle 11g database, and loading it into our SQL Server 2016 database. I've been doing this successfully with SSIS and the Attunity Oracle connectors.
Today I was informed that there was a new column added to the existing Invoices table on the Oracle side. There was already a DATE column called Order Date, which contains valid date values with zeroed-out times, like 2017-12-25 00:00:00.
The new column is called Order Date Time and is also a DATE column. When I opened up the SSIS package and pulled up the Oracle source in my DFT, I previewed the data and found the values in Order Date Time to be 2432-82-75 50:08:01. I tried converting the column with CAST and all the TO_* functions, but the conversions either failed outright, or returned a string of zeros.
TO_CHAR("Order Date Time", 'YYYYMMDDHH24MISS')
yields 00000000000000
After a bit of Googling for "Oracle date value invalid", I'm now thinking that these DATE values are actually corrupted. Am I missing anything here? Is there some sort of special Oracle-specific technique for storing time values in a DATE column that I may not be aware of?
(And yes, it does bother me quite a bit that our vendor added another DATE column instead of just using the time portion of the existing Order Date column.)
Unfortunately, the Oracle database engine allows inserting invalid date values, which leads to many problems, especially when importing data into other database engines such as SQL Server.
To handle this issue, you have to implement the logic that fits your needs. For example:
You can exclude these records from your queries by filtering on acceptable date ranges (WHERE date BETWEEN ...)
You can update records with invalid values, replacing them with NULL
You can use a CASE statement in your query to replace invalid values with NULL (see the sketch below)
I faced this issue once while importing data into SQL Server from an Oracle data source: there were unacceptable date values, so I decided to update all records where the date was invalid, replacing them with NULL, before starting the import process.
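A hedged sketch of the CASE approach on the Oracle side (the table name, invoice_id, and the quoted column names are assumptions, as are the range boundaries): anything outside a plausible range is extracted as NULL so the load no longer chokes on it.

SELECT invoice_id,
       "Order Date",
       CASE
           WHEN "Order Date Time" BETWEEN DATE '1900-01-01' AND DATE '2100-01-01'
               THEN "Order Date Time"
           ELSE NULL
       END AS order_date_time
FROM   invoices;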
There are many links related to this issue:
Detecting invalid values in the DB
How to identify invalid (corrupted) values stored in Oracle DATE columns
Corrupt date fields causing query failure in Oracle
Invalid Date in DATE Column SQLPlus VS SQLDeveloper
Ask Tom - date validation in oracle
Dealing with invalid dates
Error: Invalid date format
DB Connect; Oracle DB date field data is corrupt
I have a table where I save emails that have been sent. I then decided to add a TimeStamp field to this table so I can track when the e-mail had been sent. Data is being written to the table without any issues, but when I go to view the table contents using Microsoft SQL Server 2008 Management Studio, the data contained within the Timestamp field is displayed like this: 0x000000000000000000845, even in records that have been written to the database since the Timestamp field was introduced.
I then changed the field type to datetime, and it displayed a date, but something like 1900-01-01 00:00:23. I then changed it back to timestamp, and it returned to its current hexadecimal format.
Am I doing anything wrong?
Cheers
I decided then to add a TimeStamp field to this table so I can track when the e-mail had been sent
Ah yes. Reading the documentation would have shown you that the Timestamp field - which is a legacy from Sybase SQL Server - does NOT store a timestamp. Basically it is something like a global operation counter. It has NO relation to time.
If you want a real timestamp, put in a DateTime type of column and set the system time as a default / through a trigger etc. Timestamp is totally unsuitable for that.
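A minimal sketch of that, assuming the mail table is called dbo.SentEmails (table, column and constraint names are placeholders):

-- A plain datetime column with a default records when each row is inserted;
-- existing rows are back-filled with the default as the column is added.
ALTER TABLE dbo.SentEmails
    ADD SentAt datetime NOT NULL
        CONSTRAINT DF_SentEmails_SentAt DEFAULT (GETDATE());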
Again, not an MS thing - MS SQL Server started as a port of Sybase SQL Server for Windows, and the Timestamp data type is a Sybase legacy.
I have some data with a timestamp in SQL Server. I would like to store that value in sqlce without getting fancy, so that I can compare the two values.
What is the SQL Server timestamp equivalent in sqlce?
Timestamp, from the MS Docs:
timestamp is a data type that exposes automatically generated binary numbers, which are guaranteed to be unique within a database. timestamp is used typically as a mechanism for version-stamping table rows. The storage size is 8 bytes.
This value makes no sense outside the database it was created in. Thus I don't see how it can be converted.
In a non-Sybase/SQL Server database I would use a version number or a last-updated column.
As of SQL Server Compact 3.5, there is support for timestamps, per MSDN:
SQL Server Compact implements the timestamp (rowversion) data type. The rowversion is a data type that exposes automatically generated binary numbers, which are guaranteed to be unique in a database. It is used typically as a mechanism for version-stamping table rows.
OK, the timestamp is a varbinary value that is auto-generated. So to copy a timestamp you need a varbinary field (8 bytes, per the docs quoted above).
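So on the SQL CE side, a hedged sketch (table and column names are made up) is simply an 8-byte binary column that holds the copied value:

-- SQL CE will not auto-generate this; it is just storage for comparison
-- against the value read from the server's timestamp/rowversion column.
CREATE TABLE SyncedRows
(
    id          int       NOT NULL PRIMARY KEY,
    srv_version binary(8) NULL
);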