I'm using (DT_DBTIMESTAMP2,7)GETDATE() in an SSIS Derived Column Transformation, and the table column is datetime2(7).
Even though I set a 7-digit second scale in both places, it seems only 3 digits come through.
For example, I expected something like '2018-05-02 16:45:15.6192346', but it comes out as '2018-05-02 16:45:15.6190000'.
The reason I need the fractional seconds is that I'd like to pick the latest record out of any duplicates using the timestamp, and I realized a 3-digit second scale is not enough for that purpose.
Apart from the Derived Column Transformation and the table columns, is there any other setting required in the SSIS package? Any advice would be appreciated.
GETDATE() returns a datetime; you should use SYSDATETIME() instead. See the documentation.
Edit:
As noted by Larnu, you are probably using the SSIS expression GETDATE, rather than the SQL expression GETDATE as I assumed. The point is more or less the same, though: GETDATE returns a DT_DBTIMESTAMP, where "The fractional seconds have a maximum scale of 3 digits." (Source).
Although this is almost the same as what HoneyBadger has said, I'm expanding a little, as the OP isn't using the GETDATE() expression in SQL Server. The value 2018-05-02 16:45:15.619 could never be returned by GETDATE() (Transact-SQL), as it's only accurate to 1/300th of a second, so the final millisecond digit can only ever be 0, 3, or 7 (technically 0, 3.333~ and 6.666~, which is why the final digit is a 7, as it's rounded up).
In SSIS, the GETDATE() expression returns a data type of DT_DBTIMESTAMP. According to the documentation:
A timestamp structure that consists of year, month, day, hour, minute,
second, and fractional seconds. The fractional seconds have a maximum
scale of 3 digits.
Thus, the last 4 digits are lost. Unfortunately, I don't believe there is a function in SSIS that returns the current date and time to the accuracy you require. If you need this level of precision, you'll likely need to use an expression in SQL Server that does, such as the SYSDATETIME() that HoneyBadger recommended.
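For example, rather than deriving the value in the data flow, the timestamp could be captured in the source query itself, where SYSDATETIME() keeps all 7 fractional digits. A minimal sketch, assuming an OLE DB Source; the table and column names are placeholders:

-- hypothetical source query: SYSDATETIME() returns datetime2(7),
-- which maps to DT_DBTIMESTAMP2 with scale 7 in the data flow
SELECT
    t.SomeKey,
    t.SomeValue,
    SYSDATETIME() AS LoadTimestamp
FROM dbo.SourceTable AS t;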
Related
I've been having problems with a query that returns data between two datetimes. The query I'm trying to fix is this one:
pay.date BETWEEN '01/06/2020 00:28:46 a. m.' AND '01/06/2020 10:38:45 a. m.'
That query does not take the 'a. m.' part into account, so if I have a payment at 10 am and another at 10 pm it will detect both payments, since the a. m./p. m. part is not recognized. I've been searching for a while now with no luck. Thanks in advance :)
Do the filtering by an actual datetime type:
cast(replace(replace(pay.date, ' a. m.', 'am'), ' p. m.', 'pm') as datetime)
It might be better to use convert() so you can specify the proper format. If you can't supply the date literals in a readily convertible format, then do a similar replace and cast on those too.
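Put together, the filter might look something like this (a sketch only; it assumes the stored text parses as a datetime under your session's DATEFORMAT setting):

WHERE CAST(REPLACE(REPLACE(pay.date, ' a. m.', 'am'), ' p. m.', 'pm') AS datetime)
      BETWEEN '20200601 00:28:46' AND '20200601 10:38:45'

Note that wrapping the column in functions prevents any index on pay.date from being used, so this is best treated as a stopgap until the column can be stored as a real datetime.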
Use a literal format that is unambiguous and not dependent on runtime or connection settings. More info in Tibor's discussion.
In this case:
where pay.date between '20200601 00:28:46' and '20200601 10:38:45'
Notice that I assume June, not January - adjust as needed. BETWEEN is inclusive, and be certain that you understand the limitations of the datatype for pay.date: if it is datetime, the values are only accurate to about 3 ms. Verify that your data is consistent with your assumption about accuracy to seconds.
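To see why the '01/06/2020' style is ambiguous, compare how the same literal is read under two language settings (a quick illustration, not part of the fix itself):

SET LANGUAGE us_english;  -- mdy: '01/06/2020' is read as January 6
SELECT CONVERT(datetime, '01/06/2020');
SET LANGUAGE British;     -- dmy: '01/06/2020' is read as June 1
SELECT CONVERT(datetime, '01/06/2020');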
I feel like I've read a ton of related posts, such as
converting Epoch timestamp to sql server(human readable format)
& How do I convert a SQL server timestamp to an epoch timestamp?
But I can't seem to get my particular use case working. I need to convert an epoch timestamp to a normal date/time value. Currently, the column is an nvarchar(max). Here's an example of one of the dates:
1478563200000
I'm trying to get it to look like the following:
2019-01-14 00:00:00.0000000
I've tried the following with no success, all with the same error message:
select DATEADD(SS, CONVERT(BIGINT, baddate), '19700101') as gooddate
from table
"Arithmatic overflow error converting expression to data type int"
I've tried minutes, seconds, and days, all with the same error message, and at this point I'm about to tell the guys to send the data in a different format.
Try
select DATEADD(SS, CONVERT(INT, CONVERT(BIGINT, baddate)/1000), '19700101') as gooddate
from table
DATEADD expects an int, not a bigint. Since your timestamp is in milliseconds, it won't "fit" in an int. If you trade in millisecond resolution for second resolution by dividing by 1000, it will fit in an int and make DATEADD happy. So first we convert the NVARCHAR to BIGINT (why store it as NVARCHAR in the first place?), then divide by 1000, and then convert to INT.
Another option is to divide the value by 1000 at the time of insert (and, again, make the column an int in the first place). That will save a lot of CONVERTs everywhere (you can get rid of them all) and probably speed up your queries quite nicely. Then again, you could even convert the column to datetime (or datetime2, or whatever type is best suited) and leave the entire dateadd/convert mess out of your queries altogether. Always try to store your data as close to the final datatype you'll need later as possible.
Edit: I just realized you can probably leave one convert out:
select DATEADD(SS, CONVERT(BIGINT, baddate)/1000, '19700101') as gooddate
from table
This is the same as the original suggestion, only this time the cast to int is implicit. Converting your data on insert is probably still the better idea, though, so the rest of my post still stands.
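For instance, with the sample value from the question (a quick sanity check; the literal simply stands in for the baddate column):

select DATEADD(SS, CONVERT(BIGINT, N'1478563200000')/1000, '19700101') as gooddate
-- 1,478,563,200 seconds after 1970-01-01 is 2016-11-08 00:00:00.000 (UTC)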
You can get the correct result to the millisecond, for years 0001 through 9999, using the accepted answer
here.
declare @x nvarchar(max) = N'1478563200000'
select dbo.UnixTimeToDateTime2(@x)
Consider the following demonstration queries and the results (note the only difference in the two queries is the comparison operator in the WHERE clause):
The LUpd_DateTime column is a smalldatetime data type. Since the smalldatetime data type doesn't actually contain any seconds (rounding occurs up or down to the nearest minute), the only explanation I have for the two queries below is that SQL Server is converting the date string to a smalldatetime type and rounding up to the nearest minute, thus changing the date string to '9/20/2018 00:00:00 AM'.
Can anyone confirm this?
SQL Server is converting the date string to a smalldatetime type and
rounding up to the nearest minute, thus changing the date string to
'9/20/2018 00:00:00 AM'. Can anyone confirm this?
Yes. To compare two expressions SQL Server always converts both expressions to a common data type. Whichever expression's data type has the lower Data Type Precedence is converted. The "date string" is an expression of type varchar which has a lower precedence than smalldatetime. So the string is converted to smalldatetime for comparison. And you can verify that the conversion rounds to the nearest value:
select cast( '2018-09-19 11:59:59' as smalldatetime)
outputs
2018-09-19 12:00:00
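The rounding goes the other way when the seconds fall below 30, as a complementary check of the same behaviour:
select cast( '2018-09-19 11:59:29' as smalldatetime)
outputs
2018-09-19 11:59:00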
I think you may have mistyped your explanation? You state that the column is a smalldatetime, but then say you think the query is converting the "date string" to a smalldatetime. If what I said is correct, a simple logic check will show your assumption to be true: yes, when converted it will become '09/20/2018 00:00:00 AM'.
DECLARE @dateField AS date
SET @dateField = '2018-09-20 06:23:00'
SELECT CONVERT(smalldatetime, @dateField)  -- 2018-09-20 00:00:00 (the time of day is lost in the date variable)
It is covered clearly in the documentation.
Defines a date that is combined with a time of day. The time is based
on a 24-hour day, with seconds always zero (:00) and without
fractional seconds.
If you look at your results, they all have 00 seconds.
Ok, I can't understand this thing.
A customer of mine has a legacy Windows application (to produce invoices) which stores date values as integers.
The problem is that what is displayed as '01.01.2002' (value type: date) is actually stored in SQL Server 2000 as 731217 (column type: integer).
Is converting date values into integers a known technique (for example - I don't know - to make date-difference calculations easier)?
By the way, I have to migrate this data into a new application, but however much I've googled I can't figure out the algorithm used for the conversion.
Can anybody shed some light?
It looks like the number of days since Jan 1st 0000 (although that year doesn't really exist).
Anyway, take a date as a reference, like Jan 1st 2000, and look at what integer you have for that date (something like 730121).
You then take the difference between the integer you have for a particular date and the one for your reference date, and add that number of days to your reference date with the DATEADD function.
DATEADD(day, *difference (eg 731217 - 730121)*, *reference date in proper SQLServer format*)
You can adjust if you're off by a day or two.
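Using the pair given in the question ('01.01.2002' stored as 731217) as the reference, the conversion could look something like this (a sketch; the table and column names are only placeholders):

-- 731217 is known to correspond to 2002-01-01, so offset every value from there
SELECT DATEADD(day, legacy_date - 731217, '20020101') AS real_date
FROM dbo.LegacyInvoices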
I have a large table with 1 million+ records. Unfortunately, the person who created the table decided to put dates in a varchar(50) field.
I need to do a simple date comparison -
datediff(dd, convert(datetime, lastUpdate, 100), getDate()) < 31
But it fails on the convert():
Conversion failed when converting datetime from character string.
Apparently there is something in that field it doesn't like, and since there are so many records, I can't tell just by looking at it. How can I properly sanitize the entire date field so it does not fail on the convert()? Here is what I have now:
select count(*)
from MyTable
where
isdate(lastUpdate) > 0
and datediff(dd, convert(datetime, lastUpdate, 100), getDate()) < 31
@SQLMenace
I'm not concerned about performance in this case. This is going to be a one time query. Changing the table to a datetime field is not an option.
@Jon Limjap
I've tried adding the third argument, and it makes no difference.
@SQLMenace
The problem is most likely how the data is stored; there are only two safe formats: ISO YYYYMMDD, and ISO 8601 yyyy-mm-ddThh:mm:ss.mmm (no spaces)
Wouldn't the isdate() check take care of this?
I don't have a need for 100% accuracy. I just want to get most of the records that are from the last 30 days.
@SQLMenace
select isdate('20080131') -- returns 1
select isdate('01312008') -- returns 0
@Brian Schkerke
Place the CASE and ISDATE inside the CONVERT() function.
Thanks! That did it.
Place the CASE and ISDATE inside the CONVERT() function.
SELECT COUNT(*) FROM MyTable
WHERE
DATEDIFF(dd, CONVERT(DATETIME, CASE IsDate(lastUpdate)
WHEN 1 THEN lastUpdate
ELSE '12-30-1899'
END), GetDate()) < 31
Replace '12-30-1899' with the default date of your choice.
Not totally set-based, but if only 3 rows out of 1 million are bad it will save you a lot of time:
select * into BadDates
from Yourtable
where isdate(lastUpdate) = 0
select * into GoodDates
from Yourtable
where isdate(lastUpdate) = 1
Then just look at the BadDates table and fix those rows.
The ISDATE() would take care of the rows which were not formatted properly if it were indeed being executed first. However, if you look at the execution plan you'll probably find that the DATEDIFF predicate is being applied first - thus the cause of your pain.
If you're using SQL Server Management Studio hit CTRL+L to view the estimated execution plan for a particular query.
Remember, SQL isn't a procedural language, and short-circuiting logic may work, but only if you're careful in how you apply it.
How about writing a cursor to loop through the contents, attempting the cast for each entry?
When an error occurs, output the primary key or other identifying details for the problem record.
I can't think of a set-based way to do this.
Edit - ah yes, I forgot about ISDATE(). Definitely a better approach than using a cursor. +1 to SQLMenace.
In your convert call, you need to specify a third style parameter, i.e., the format of the datetimes that are stored as varchar, as specified in this document: CAST and CONVERT (T-SQL).
Print out the records. Give the hardcopy to the idiot who decided to use a varchar(50) and ask them to find the problem record.
Next time they might just see the point of choosing an appropriate data type.
The problem is most likely how the data is stored; there are only two safe formats:
ISO: YYYYMMDD
ISO 8601: yyyy-mm-ddThh:mm:ss.mmm (no spaces)
These will work no matter what your language setting is.
You might need to do a SET DATEFORMAT YMD (or whatever format the data is stored as) to make it work.
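To see how much the session setting matters for the ambiguous separated formats (a quick illustration, not part of the fix itself):

SET DATEFORMAT dmy
select isdate('13/12/2008') -- returns 1: valid as 13 December under dmy
SET DATEFORMAT mdy
select isdate('13/12/2008') -- returns 0: there is no 13th month under mdy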
Wouldn't the isdate() check take care of this?
Run this to see what happens
select isdate('20080131') -- returns 1
select isdate('01312008') -- returns 0
I am sure changing the table/column might not be an option due to legacy system requirements, but have you thought about creating a view that has the date conversion logic built in? If you are using a more recent version of SQL Server, you could possibly even use an indexed view.
I would suggest cleaning up the mess and changing the column to a datetime, because doing stuff like this
WHERE datediff(dd, convert(datetime, lastUpdate), getDate()) < 31
cannot use an index, and it will be many times slower than if you had a datetime column and did
where lastUpdate > getDate() -31
You also need to take hours and seconds into account, of course.
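To handle the hours and seconds while keeping the predicate index-friendly, the cutoff can be computed once on the right-hand side. A sketch, assuming the column has been converted to a real datetime as suggested:

-- DATEADD/DATEDIFF against day 0 strips the time portion of GETDATE(),
-- so the filter starts at midnight 31 days ago and can still use an index
where lastUpdate >= DATEADD(dd, DATEDIFF(dd, 0, getDate()) - 31, 0)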