Can the SQL function SYSDATETIME() return duplicate values? - sql-server

I have a web API which registers orders using a stored procedure, assigning the creation date = SYSDATETIME(). Every request registers one unique order.
The problem is that two different requests to my API in the same second, less than 200 milliseconds apart, which generate two different calls to my stored procedure (practically only a few milliseconds apart), are registered with exactly the same date, right down to the fractional seconds.
for example:
order 1: 100001 -> creation date = 2020-12-01 01:01:01.1234567
order 2: 100002 -> creation date = 2020-12-01 01:01:01.1234567
This is the code in my stored procedure
declare @date datetime2
select @date = SYSDATETIME()
Theoretically the SQL function SYSDATETIME() shouldn't repeat values, but in my case they are repeated (several times in my database, on different dates).
Any idea of what's happening?

From the docs:
Note
SQL Server obtains the date and time values by using the GetSystemTimeAsFileTime() Windows API. The accuracy depends on the computer hardware and version of Windows on which the instance of SQL Server is running. The precision of this API is fixed at 100 nanoseconds. The accuracy can be determined by using the GetSystemTimeAdjustment() Windows API.
So there can certainly be doubled-up values. Windows does have a high-precision clock, but SQL Server doesn't use it. If you need it, a CLR function might be an option.
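If you want to see this effect for yourself, a quick repro sketch (assuming you have permission to create a temporary table) is to capture SYSDATETIME() in a tight loop and count the duplicates:
CREATE TABLE #t (d datetime2 NOT NULL);
DECLARE @i int = 0;
WHILE @i < 100000
BEGIN
    -- each iteration records the "current" time
    INSERT INTO #t (d) VALUES (SYSDATETIME());
    SET @i += 1;
END;
-- values that were handed out more than once
SELECT d, COUNT(*) AS cnt
FROM #t
GROUP BY d
HAVING COUNT(*) > 1;
DROP TABLE #t;
On typical hardware the underlying clock only ticks about once per millisecond (sometimes as coarsely as every 15 ms), so this usually returns plenty of duplicated values.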

Related

SQL Server: Design question - storing records as rows vs as BLOB - NVARCHAR(MAX)

I am creating schedules for our engineers to analyze. The schedules are downloaded each day and the analysis is done on the local computers.
So now I am facing the dilemma of storing each schedule in the database as table rows or as nvarchar(max).
Here is the requirement
The schedules are generated each day. Each schedule is accurate to 1 second, so at most it will contain 86,400 records per schedule.
In a day, depending on the settings, the system can generate up to 100 schedules per engineer (we have about 10 engineers).
The schedule contains the following fields: INT | INT | INT | INT | NVARCHAR(1024) | NVARCHAR(64) | BIT | BIT | DATETIME | DATETIME (in summary: 4x INT, 2x NVARCHAR, 2x BIT, and 2x DATETIME).
The schedule is rarely going to be updated, but it can be updated. The updatable fields are: 2x BIT and 1x DATETIME.
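For reference, a normalized row layout would look roughly like this (the column names are placeholders, not final):
CREATE TABLE dbo.ScheduleEntry
(
    ScheduleId  int NOT NULL,
    EngineerId  int NOT NULL,
    TaskId      int NOT NULL,
    SlotNumber  int NOT NULL,
    Description nvarchar(1024) NULL,
    Tag         nvarchar(64) NULL,
    IsAnalyzed  bit NOT NULL DEFAULT 0,   -- updatable after analysis
    IsApproved  bit NOT NULL DEFAULT 0,   -- updatable after analysis
    StartTime   datetime NOT NULL,
    AnalyzedAt  datetime NULL,            -- the updatable DATETIME
    CONSTRAINT PK_ScheduleEntry PRIMARY KEY (ScheduleId, SlotNumber)
);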
Now looking at the common case scenario:
It will generate about 1,296,000 records per day.
This is the calculation of common case scenario:
- 10 seconds accuracy per schedule = 8,640 rows
- 5 engineers run the scheduler each day
- Each engineer generates about 30 schedules
So total is: 8,640 * 5 * 30 = 1,296,000 records
If I store each schedule as a comma-delimited NVARCHAR(MAX), then the number of records is reduced to only 150 per day.
Here is the calculation:
- 10 seconds accuracy per schedule = 8,640 rows --> stored as NVARCHAR (becomes 1 record)
- 5 engineers run the scheduler each day
- Each engineer generates about 30 schedules
So total is: 5 * 30 = 150 records
Now, this is the requirement for those schedules:
The generated schedules can be viewed on the website.
The schedules are downloaded by the application each day for analysis.
The fields (2x BIT) can be updated once the analysis is completed. These fields can be updated by the application (after it finishes analyzing the schedule) or manually by the engineer on the website.
All generated schedule must be stored for at least 3 months for auditing purposes.
What is your recommendation? Store schedules as table rows, or as NVARCHAR(MAX)?
Are there any benefits to storing the data in one column other than the row count? If not, in my opinion you are safe to store the data in a normalized manner.
I have used both techniques for storing data, because of different requirements. And of course, storing data in VARBINARY(MAX) or NVARCHAR(MAX) leads to many difficulties:
not able to index and search by certain fields
in order to perform updates, the data must be normalized, modified, and then built as a string/binary again
in order to perform reporting, the data must be normalized again
So, because of the above, I would advise choosing the table format. Also, if you feel that exporting the data in some serialized form is better, you can always implement a SQL CLR string-concatenation function, or use the built-in STRING_AGG if you are on SQL Server 2017 or later.
Also, it is better to use separators like CHAR(31) and CHAR(30) for columns and rows. This is cleaner than using tabs/new lines/commas/semicolons, as the input data is unlikely to contain such characters and break your data.
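As a rough sketch (table and column names here are made up for illustration), serializing one schedule on the fly with STRING_AGG and those separators could look like this:
DECLARE @unit varchar(1) = CHAR(31), @rec varchar(1) = CHAR(30);
DECLARE @ScheduleId int = 1;   -- example value
SELECT STRING_AGG(
           -- the nvarchar(max) cast keeps long schedules from overflowing the 8000-byte limit
           CONVERT(nvarchar(max),
                   CONCAT(TaskId, @unit, EngineerId, @unit, IsAnalyzed, @unit,
                          CONVERT(varchar(30), StartTime, 126))),
           @rec) AS SerializedSchedule
FROM dbo.ScheduleEntry
WHERE ScheduleId = @ScheduleId;
Note that STRING_AGG requires the separator to be a literal or a variable, which is why the CHAR(31)/CHAR(30) values are assigned to variables first.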

"AT TIME ZONE" Functionality for ms sql server 2014 [duplicate]

I have a column of UNIX timestamps in my database table, which come from a system that is in the Kuwait time zone.
My database server's time zone is Eastern Time (US & Canada). Now I need to convert the UNIX timestamp into a Kuwait time zone date value using an SQL query.
Can anyone tell me how I can convert this UNIX timestamp into a Kuwait time zone date value?
Unix timestamps are an integer number of seconds since Jan 1st 1970 UTC.
Assuming you mean you have an integer column in your database with this number, then the time zone of your database server is irrelevant.
First convert the timestamp to a datetime type:
SELECT DATEADD(second, yourTimeStamp, '1970-01-01')
This will be the UTC datetime that corresponds to your timestamp.
Then you need to know how to adjust this value to your target time zone. In much of the world, a single zone can have multiple offsets, due to Daylight Saving Time.
Unfortunately, SQL Server has no ability to work with time zones directly. So if you were, for example, using US Pacific time, you would have no way of knowing if you should subtract 7 hours or 8 hours. Other databases (Oracle, Postgres, MySql, etc.) have built-in ways to handle this, but alas, SQL Server does not. So if you are looking for a general purpose solution, you will need to do one of the following:
Import time zone data into a table, and maintain that table as time zone rules change. Use that table with a bunch of custom logic to resolve the offset for a particular date.
Use xp_regread to get at the Windows registry keys that contain time zone data, and again use a bunch of custom logic to resolve the offset for a particular date. Of course, xp_regread is a bad thing to do, requires certain permissions to be granted, and is not supported or documented.
Write a SQLCLR function that uses the TimeZoneInfo class in .Net. Unfortunately, this requires an "unsafe" SQLCLR assembly, and might cause bad things to happen.
IMHO, none of these approaches are very good, and there is no good solution to doing this directly in SQL. The best solution would be to return the UTC value (either the original integer, or the datetime at UTC) to your calling application code, and do the timezone conversion there instead (with, for example, TimeZoneInfo in .Net or similar mechanisms in other platforms).
HOWEVER - you have lucked out in that Kuwait is (and always has been) in a zone that does not change for Daylight Saving Time. It has always been UTC+03:00. So you can simply add three hours and return the result:
SELECT DATEADD(hour, 3, DATEADD(second, yourTimeStamp, '1970-01-01'))
But do recognize that this is not a general purpose solution that will work in any time zone.
If you wanted, you could return one of the other SQL data types, such as datetimeoffset, but this will only help reflect that the value is offset by three hours to whoever might look at it. It won't make the conversion process any different or better.
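For example, if you did want to hand back a datetimeoffset carrying the fixed Kuwait offset, a small sketch (yourTimeStamp is the same placeholder as above, shown here as a variable with a sample value) would be:
DECLARE @yourTimeStamp int = 1435708800;   -- sample value only
SELECT TODATETIMEOFFSET(
           DATEADD(hour, 3, DATEADD(second, @yourTimeStamp, '1970-01-01')),
           '+03:00') AS KuwaitTime;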
Updated Answer
I've created a project for supporting time zones in SQL Server. You can install it from here. Then you can simply convert like so:
SELECT Tzdb.UtcToLocal('2015-07-01 00:00:00', 'Asia/Kuwait')
You can use any time zone from the IANA tz database, including those that use daylight saving time.
You can still use the method I showed above to convert from a unix timestamp. Putting them both together:
SELECT Tzdb.UtcToLocal(DATEADD(second, yourTimeStamp, '1970-01-01'), 'Asia/Kuwait')
Updated Again
With SQL Server 2016, there is now built-in support for time zones with the AT TIME ZONE syntax. This is also available in Azure SQL Database (v12).
SELECT DATEADD(second, yourTimeStamp, '1970-01-01') AT TIME ZONE 'UTC' AT TIME ZONE 'Arab Standard Time'
More examples in this announcement.

Different performance for simple update query

I have a database restored on two different machines (a developer machine and a tester machine), and whilst not identical they have similar performance.
Using the following query (obfuscated):
CREATE TABLE #TmpTable (MapID int, Primary key(MapID))
UPDATE MapTable
SET Tstamp = GetDate()
FROM MapTable
JOIN Territory as Territory ON Territory.ID = MapTable.TerritoryID
LEFT JOIN #TmpTable ON #TmpTable.MapID = MapTable.MapID
WHERE MapTable.TStamp > DateAdd(year, -100, GETDATE())
AND Territory.Name IS NOT null
AND Territory.Name NOT LIKE '!%'
AND #TmpTable.MapID IS NULL
For 400k records, the developer machine updates in about 4 seconds, but on the tester machine it updates in about 25 seconds; the same DB was restored on both machines.
The problem is that when running this through a tool we use, the timeout for queries is set to 30 seconds, and it times out 90+% of the time on the tester machine.
But the execution plan on both is the same...
Can anyone suggest why this is, and possible optimisation(s)?
One thing that I can see that might have an impact on performance is the use of the GETDATE() function, which might be called twice for each record, but definitely once, so that's 400K calls to the function!
I would put the result of GETDATE() into a variable and use that; I always do that unless there is a very good reason not to, for example when the date needs to change throughout the query, as in batch processing within a CURSOR.
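For example, a minimal rewrite of the query above along those lines (same tables as in the question; the inline DECLARE initialization assumes SQL Server 2008 or later) would be:
DECLARE @now datetime = GETDATE();
UPDATE MapTable
SET Tstamp = @now
FROM MapTable
JOIN Territory as Territory ON Territory.ID = MapTable.TerritoryID
LEFT JOIN #TmpTable ON #TmpTable.MapID = MapTable.MapID
WHERE MapTable.TStamp > DateAdd(year, -100, @now)
AND Territory.Name IS NOT null
AND Territory.Name NOT LIKE '!%'
AND #TmpTable.MapID IS NULL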
However, I doubt this will be the main issue with performance. With such a large difference in execution time between machines where the execution plan is the same, I would be looking at factors external to SQL Server, such as CPU usage, HDD usage, speed and fragmentation, etc.

Converting an MS SQL float column, rounding to 2 decimals: may we use MS Access for rounding?

We have a legacy financial application running which is no longer supported but important to us.
We have manually overwritten approximately 250 rows in 2 columns, "price" and "selling_price", and now the application crashes at some point where it calculates some reports.
I believe our mistake was not rounding to 2 decimals before writing the values.
My plan is to use MS Access and update the values in those rows.
Old values were like:
24.48
6.98
100.10
But we also wrote values like
20.19802
99.42882
108.1302
and I believe this causes it to crash when it sums over them.
Could it be a good idea just to make an MS Access query and overwrite the values with rounded ones? Or is it more accurate to use a SQL query at the SQL Server table level that modifies the values using T-SQL functions? My idea would be to overwrite with ROUND(108.1302, 2).
Someone with a lot of MS SQL experience?
This column is of type float, length 53, scale 0, allow NULL = no.
I am not quite sure what happens internally and what the application is expecting, because float is stored in binary, so what we see cannot be the same as what is internally stored.
In MS Access we connect to the tables using ODBC; it would be almost the same function, ROUND(108.1302, 2), but does this also lead to the same result?
How should this be overwritten, what seems safest from your experience?
If you're using Access as a front end for a SQL Server back end, then the two approaches should effectively be the same.
You could check this before updating by running the conversion as a select statement in Access and then running it as a SQL pass-through query:
SELECT
ColumnA, ROUND(ColumnA, 2) AS RoundedA
FROM TableName
WHERE ColumnA <> ROUND(ColumnA, 2)
GROUP BY ColumnA
ORDER BY ColumnA
And checking that they match.
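If that check only returns the rows you expect, the actual fix could then be run as a pass-through T-SQL statement along these lines (TableName is the same placeholder as above; price and selling_price are the columns mentioned in the question):
UPDATE TableName
SET price = ROUND(price, 2),
    selling_price = ROUND(selling_price, 2)
WHERE price <> ROUND(price, 2)
   OR selling_price <> ROUND(selling_price, 2)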

How does DateTime.Now affect query plan caching in SQL Server?

Question:
Does passing DateTime.Now as a parameter to a proc prevent SQL Server from caching the query plan? If so, then is the web app missing out on huge performance gains?
Possible Solution:
I thought DateTime.Today.AddDays(1) would be a possible solution. It would pass the same end date to the SQL proc (per day), and the user would still get the latest data. Please speak to this as well.
Given Example:
Let's say we have a stored procedure. It reports data back to a user on a webpage. The user can set a date range. If the user sets today's date as the "end date," which includes today's data, the web app passes DateTime.Now to the sql proc.
Let's say that one user runs a report--5/1/2010 to now--over and over several times. On the webpage, the user sees 5/1/2010 to 5/4/2010. But the web app passes DateTime.Now to the sql proc as the end date. So, the end date in the proc will always be different, although the user is querying a similar date range.
Assume the number of records in the table and number of users are large. So any performance gains matter. Hence the importance of the question.
Example proc and execution (if that helps to understand):
CREATE PROCEDURE GetFooData
@StartDate datetime,
@EndDate datetime
AS
SELECT *
FROM Foo
WHERE LogDate >= @StartDate
AND LogDate < @EndDate
Here's a sample execution using DateTime.Now:
EXEC GetFooData '2010-05-01', '2010-05-04 15:41:27' -- passed in DateTime.Now
Here's a sample execution using DateTime.Today.AddDays(1)
EXEC GetFooData '2010-05-01', '2010-05-05' -- passed in DateTime.Today.AddDays(1)
The same data is returned by both executions, since the current time is 2010-05-04 15:41:27.
The query plan will be cached regardless of parameter values. Parameters basically guarantee that a consistent, reusable query exists, since they are type-safe as far as SQL Server is concerned.
What you want is not query plan, but result caching. And this will be affected by the behavior you describe.
Since you seem to handle whole days only, you can try passing in dates, not datetimes, to minimize different parameter values. Also try caching query results in the application instead of doing a database roundtrip every time.
Because you invoke a stored procedure, not a query directly, the only query that changes is the actual batch you send to SQL Server: EXEC GetFooData '2010-05-01', '2010-05-05' vs. EXEC GetFooData '2010-05-01', '2010-05-04 15:41:27'. This is a trivial batch that will generate a trivial plan. While it is true that, from a strict technical point of view, you are losing some performance, it will be all but unmeasurable. The details of why this happens are explained in this response: Dynamically created SQL vs Parameters in SQL Server
The good news is that by a minor change in your SqlClient invocation code, you'll benefit from even that minor performance improvement mentioned there. Change your SqlCommand code to be an explicit stored procedure invocation:
SqlCommand cmd = new SqlCommand("GetFooData", connection);
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.AddWithValue("#StartDate", dateFrom);
cmd.Parameters.AddWithValue("#EndDate", DateTime.Now);
As a side note, storing localized times in the database is not a very good idea, due to clients being in different time zones than the server and due to the complications of the daylight saving time changeover. A much better solution is to always store UTC time and simply format it to the user's local time in the application.
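As a small illustration of that side note (the display-side conversion shown here needs SQL Server 2016+ for AT TIME ZONE; on older versions that conversion would live in the application instead, and Foo's other columns are assumed to have defaults):
-- store the instant in UTC
INSERT INTO Foo (LogDate) VALUES (SYSUTCDATETIME());
-- convert to a user's local time only when displaying
SELECT LogDate AT TIME ZONE 'UTC' AT TIME ZONE 'Eastern Standard Time' AS LocalLogDate
FROM Foo;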
In your case, you are probably fine if the second parameter is just drifting upward in real time.
However, it is possible to become a victim of parameter sniffing, where the first execution (which produces the cached execution plan) is called with parameters that produce a plan which is not good for the other parameter values normally used (or the data profile changes drastically). The later invocations might then use a plan which is sometimes so poor that it won't even complete properly.
If your data profile changes drastically with different choices of parameters, and the execution plan becomes poor for certain parameter choices, you can mask the parameters behind local variables - this will effectively prevent parameter sniffing in SQL Server 2005. There is also WITH RECOMPILE (either in the SP or in the EXEC), but for heavily called SPs this is not a viable option. In SQL Server 2008, I would almost always use the OPTIMIZE FOR UNKNOWN hint, which avoids producing a plan based on parameter sniffing.
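Here are rough sketches of both mitigations applied to the GetFooData example (the Unknown/Masked procedure names are mine, purely for illustration):
-- SQL Server 2008+: ask for a plan built for "average" parameter values
CREATE PROCEDURE GetFooDataUnknown
@StartDate datetime,
@EndDate datetime
AS
SELECT *
FROM Foo
WHERE LogDate >= @StartDate
AND LogDate < @EndDate
OPTION (OPTIMIZE FOR UNKNOWN)
GO
-- SQL Server 2005: mask the parameters behind local variables so the
-- optimizer cannot sniff the caller's values at compile time
CREATE PROCEDURE GetFooDataMasked
@StartDate datetime,
@EndDate datetime
AS
DECLARE @s datetime, @e datetime
SET @s = @StartDate
SET @e = @EndDate
SELECT *
FROM Foo
WHERE LogDate >= @s
AND LogDate < @e
GO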
