Can some values of DATETIME2(2) compare less than themselves? - sql-server

I have an SQL script of several steps that imports some data into a table and then summarizes what happened. It executes inside a single transaction. The table of interest has a column declared [TimeLastSeen] DATETIME2(2). At the start of the script I declare
DECLARE @now DATETIME2(2) = GetDate();
and use that consistently through the script in place of GetDate(). @now is assigned once and never changed.
When I import the data, I have (inside a large MERGE statement)
UPDATE ... SET [TimeLastSeen] = @now
and then a subsequent statement checks for records not seen in this import:
INSERT INTO #MergeResult ([Action], [TargetId], [TargetBusinessName], [TargetLicenseNumber])
SELECT 'MISSED', [Id], [BusinessName], [LicenseNumber]
FROM [Providers]
WHERE [TimeLastSeen] < @now;
This ran fine several times, but then one run updated all the records yet reported that every imported row was missed. So first it SET [TimeLastSeen] = @now on every record in the table, and then WHERE [TimeLastSeen] < @now was true for every record, for the same value of @now.
I re-ran the script on the same import file, and it worked properly -- nothing matched WHERE [TimeLastSeen] < @now.
So can some values of DATETIME2(2) evaluate as less than themselves? Is the fix to declare greater precision, like DATETIME2(6), or should I add
DECLARE @justBeforeNow DATETIME2(6) = DATEADD(millisecond, -1, @now);
and use that value in the comparison?
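As background for the precision question, here is a minimal sketch (not from the original script) of how assignment to DATETIME2(2) rounds fractional seconds; the literal is chosen to force a round-up:
-- GETDATE() returns DATETIME (about 3.33 ms resolution); DATETIME2(2) keeps two
-- fractional digits and rounds, so .997 seconds becomes .00 of the next second.
DECLARE @raw DATETIME = '2024-01-15 10:30:00.997';
DECLARE @rounded DATETIME2(2) = @raw;
SELECT @raw AS RawValue,
       @rounded AS RoundedValue,
       CASE WHEN @rounded > @raw THEN 'rounded up' ELSE 'unchanged' END AS Effect;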

Related

How do I stop/change an sql trigger after inserting a certain amount of records?

I currently have a lot of triggers running on most of my tables, using INSERT, UPDATE and DELETE triggers on all of them. They log into a separate table. However, the processing time of the software has increased because of this. It is barely noticeable for smaller changes, but for big changes it can go from 10-15 min to 1 hr.
I would like to change my triggers to stop inserting new log records after, say, 250 log records in 1 minute (a bulk action), delete the newly created logs, and create 1 record mentioning the bulk action and the query used. Problem is, I can't seem to get the trigger to stop when activated.
I have already created the conditions needed for this:
CREATE TRIGGER AUDIT_LOGGING_INSERT_ACTUALISERINGSCOEFFICIENT ON ACTUALISERINGSCOEFFICIENT FOR INSERT AS
BEGIN
    SET NOCOUNT ON
    DECLARE @Group_ID INT = (SELECT COALESCE(MAX(Audit_Trail_Group_ID), 0) FROM NST.dbo.Audit_Trail) + 1
    DECLARE @BulkCount INT = (SELECT COUNT(*) FROM NST.dbo.Audit_Trail WHERE Audit_Trail_User = CONCAT('7090-LOCAL-', UPPER(SUSER_SNAME())) AND GETDATE() >= DATEADD(MINUTE, -1, GETDATE()))
    IF @BulkCount < 250
    BEGIN
        INSERT ...
    END
    ELSE
    BEGIN
        DECLARE @BulkRecordCount INT = (SELECT COUNT(*) FROM NST.dbo.Audit_Trail WHERE Audit_Trail_User = CONCAT('7090-LOCAL-', UPPER(SUSER_SNAME())) AND GETDATE() >= DATEADD(MINUTE, -60, GETDATE()) AND Audit_Trail_Action LIKE '%BULK%')
        IF @BulkRecordCount = 0
        BEGIN
            INSERT ...
        END
    END
END
However, when I execute a query that changes 10,000-plus records, the trigger still inserts all 10,000 log rows. When I execute it again right after, it inserts 10,000 BULK records. Probably because it executes the trigger (goes through the function) 10,000 times the first time?
Also, as you can see, this would only work if 1 bulk operation is used in the last 60 minutes.
Any ideas for handling bulk changes are welcome.
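One note that may explain part of this: SQL Server triggers fire once per statement, not once per row, so a 10,000-row statement produces a single invocation whose inserted pseudo-table holds all 10,000 rows (and, separately, GETDATE() >= DATEADD(MINUTE, -1, GETDATE()) above is always true; a timestamp column was presumably intended there). A minimal sketch of detecting a bulk change straight from the row count, with the Audit_Trail column list abbreviated as an assumption:
CREATE TRIGGER AUDIT_LOGGING_INSERT_ACTUALISERINGSCOEFFICIENT ON ACTUALISERINGSCOEFFICIENT FOR INSERT AS
BEGIN
    SET NOCOUNT ON
    -- "inserted" holds every row affected by the triggering statement.
    IF (SELECT COUNT(*) FROM inserted) >= 250
    BEGIN
        -- Bulk change: write a single marker row instead of one row per record.
        INSERT INTO NST.dbo.Audit_Trail (Audit_Trail_User, Audit_Trail_Action)
        SELECT CONCAT('7090-LOCAL-', UPPER(SUSER_SNAME())), 'BULK INSERT'
    END
    ELSE
    BEGIN
        -- Small change: keep the per-row logging.
        INSERT INTO NST.dbo.Audit_Trail (Audit_Trail_User, Audit_Trail_Action)
        SELECT CONCAT('7090-LOCAL-', UPPER(SUSER_SNAME())), 'INSERT'
        FROM inserted
    END
END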
Didn't get it to work by logging the first 250 records.
Instead I did the following:
Created a new table with 'Action' and 'User' columns
I add a record every time a bulk action starts and delete it when it ends
Changed the trigger so that if a record is found for the user in the new table, it only writes 1 bulk record in the log table
Problems associated:
Problem with this is that I also had to manually go through the biggest bulk functions and implement the add and delete.
An extra point of failure if the record gets added but an exception occurs that doesn't delete it again. -> Implemented a TRY/CATCH where needed.
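A rough sketch of that arrangement (the flag table name and columns are assumptions, not the actual schema):
-- Hypothetical flag table, maintained by the application around big operations.
CREATE TABLE NST.dbo.Bulk_Actions ([Action] VARCHAR(255), [User] VARCHAR(255))

-- Inside each trigger: if a flag row exists for the current user, write at most
-- one BULK record instead of per-row logs. (A real version would also bound the
-- NOT EXISTS check by time or by the flag row's start.)
IF EXISTS (SELECT 1 FROM NST.dbo.Bulk_Actions
           WHERE [User] = CONCAT('7090-LOCAL-', UPPER(SUSER_SNAME())))
BEGIN
    IF NOT EXISTS (SELECT 1 FROM NST.dbo.Audit_Trail
                   WHERE Audit_Trail_User = CONCAT('7090-LOCAL-', UPPER(SUSER_SNAME()))
                     AND Audit_Trail_Action LIKE '%BULK%')
        INSERT INTO NST.dbo.Audit_Trail (Audit_Trail_User, Audit_Trail_Action)
        SELECT CONCAT('7090-LOCAL-', UPPER(SUSER_SNAME())), 'BULK'
END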

Why is my UPDATE stored procedure executed multiple times?

I use stored procedures to manage a warehouse. PDA scanners scan the added stock and send it in bulk (when plugged back) to a SQL database (SQL Server 2016).
The SQL database is fairly remote (in another country), so there is sometimes delay in some queries, but this particular one is problematic: even though the stock table is fine, I had some problems updating the occupancy of the warehouse spots. The PDA tracks the items added to every spot as a SMALLINT, then sends this value back to the stored procedure below.
PDA "send_spots" query:
SELECT spot, added_items FROM spots WHERE change=1
Stored procedure:
CREATE PROCEDURE [dbo].[update_spots]
    @spot VARCHAR(10),
    @added_items SMALLINT
AS
BEGIN
    BEGIN TRAN
    UPDATE storage_spots
    SET remaining_capacity = remaining_capacity - @added_items
    WHERE storage_spot = @spot
    IF @@ROWCOUNT <> 1
    BEGIN
        ROLLBACK TRAN
        RETURN -1
    END
    ELSE
    BEGIN
        COMMIT TRAN
        RETURN 0
    END
END
GO
If the remaining_capacity value is 0, the PDAs can't add more items to that spot on the next round. But with this process, I got negative values because the query apparently ran twice (so it subtracted @added_items two times).
Is there a way for that to be possible? How could I fix it? From what I understood, the transaction should be cancelled (ROLLBACK) if the number of affected rows is != 1, but maybe that's something else.
EDIT: current solution, with the help of @Zero:
CREATE PROCEDURE [dbo].[update_spots]
    @spot VARCHAR(10),
    @added_racks SMALLINT
AS
BEGIN
    -- Recover current capacity of the spot
    DECLARE @orig_capacity SMALLINT
    SELECT TOP 1 @orig_capacity = remaining_capacity
    FROM storage_spots
    WHERE storage_spot = @spot
    -- Test if a double is present in the logs by comparing dates (last 10 seconds)
    DECLARE @is_double BIT = 0
    SELECT @is_double = CASE WHEN EXISTS (SELECT *
                                          FROM spot_logs
                                          WHERE log_timestamp >= DATEADD(second, -10, GETDATE()) AND storage_spot = @spot AND delta = @added_racks)
                        THEN 1 ELSE 0 END
    BEGIN
        BEGIN TRAN
        UPDATE storage_spots
        SET remaining_capacity = @orig_capacity - @added_racks
        WHERE storage_spot = @spot
        IF @@ROWCOUNT <> 1 OR @is_double <> 0
            -- If double, roll back the UPDATE
            ROLLBACK TRAN
        ELSE
            -- If no double, commit the UPDATE
            COMMIT TRAN
        -- Write log
        INSERT INTO spot_logs
            (storage_spot, former_capacity, new_capacity, delta, log_timestamp, double_detected)
        VALUES
            (@spot, @orig_capacity, @orig_capacity - @added_racks, @added_racks, GETDATE(), @is_double)
    END
END
GO
I was thinking about possible causes (and a way to trace them) and then it hit me - you have no value validation!
Here's a simple example to illustrate the problem:
Spot | capacity
---------------
x1 | 1
Update spots set capacity = capacity - 2 where spot = 'X1'
Your scanner most likely gave you a higher quantity than you had capacity to take in.
I am not sure how your business logic goes, but you need to perform something along the lines of:
UPDATE spots SET capacity = capacity - @added_items WHERE spot = 'X1' AND capacity >= @added_items
IF @@ROWCOUNT <> 1
    ROLLBACK;
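Folded back into the procedure from the question, the guarded version might look roughly like this (a sketch combining the question's code with the suggested predicate, not the poster's final version):
CREATE PROCEDURE [dbo].[update_spots]
    @spot VARCHAR(10),
    @added_items SMALLINT
AS
BEGIN
    BEGIN TRAN
    -- Guard: a quantity larger than the remaining capacity updates no row,
    -- so the transaction rolls back instead of driving the value negative.
    UPDATE storage_spots
    SET remaining_capacity = remaining_capacity - @added_items
    WHERE storage_spot = @spot
      AND remaining_capacity >= @added_items
    IF @@ROWCOUNT <> 1
    BEGIN
        ROLLBACK TRAN
        RETURN -1
    END
    COMMIT TRAN
    RETURN 0
END
GO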
EDIT: a few methods to trace your problem without implementing validation:
Create a logging table with timestamp, user id (the user connected to the DB), session id, old value, new value, and delta value (added items).
Option 1:
Log all updates that change value from positive to negative (at least until you figure out the problem).
The drawback of this option is that it will not register double calls that do not result in a negative capacity.
Option 2 (logging identical updates):
Create a script that creates a global temporary table and, once every minute or so, deletes records from that table whose timestamps are older than... let's say 10 minutes (play around with the numbers).
This temporary table should hold the data passed to your update statement, so 'spot' and 'added_items', plus a 'timestamp' for tracking.
Now the crucial part: when you call your update statement, check whether a similar record exists in the temporary table (same spot and added_items, and where the current timestamp is between timestamp and [timestamp + 1 second] - again, play around with the numbers). If a record like that exists, log that update; if not, add it to the temporary table.
This will register identical updates that arrive within 1 second of each other (or whatever time-frame you choose).
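A bare-bones sketch of Option 2 (every name here is hypothetical, and the job that purges old trace rows is omitted):
-- Trace table; note a global temp table (##) survives only while the
-- creating session stays open, so a permanent table may be safer in practice.
CREATE TABLE ##update_trace (spot VARCHAR(10), added_items SMALLINT, called_at DATETIME)

-- Inside the procedure, before the UPDATE:
IF EXISTS (SELECT 1 FROM ##update_trace
           WHERE spot = @spot AND added_items = @added_items
             AND called_at >= DATEADD(second, -1, GETDATE()))
    -- A near-identical call arrived within 1 second: record a suspected double.
    INSERT INTO dbo.suspected_doubles (spot, added_items, logged_at)  -- hypothetical log table
    VALUES (@spot, @added_items, GETDATE())
ELSE
    INSERT INTO ##update_trace (spot, added_items, called_at)
    VALUES (@spot, @added_items, GETDATE())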
I found an alternative SQL query that does the update the way I need, but with a temporary value set via DECLARE. Would it work better in my case, or is my initial query correct?
Initial query:
UPDATE storage_spots
SET remaining_capacity = remaining_capacity - @added_items
WHERE storage_spot = @spot
Alternative query:
DECLARE @orig_capacity SMALLINT
SELECT TOP 1 @orig_capacity = remaining_capacity
FROM storage_spots
WHERE storage_spot = @spot
UPDATE storage_spots
SET remaining_capacity = @orig_capacity - @added_items
WHERE storage_spot = @spot
Also, should I get rid of the ROLLBACK/COMMIT instructions?

Force recalculation of GetDate() for each row

When running this command in SQL Server Management Studio
UPDATE LogChange
SET Created = GETDATE()
WHERE id > 6000
something strange happens... :)
All rows get the same value in the Created column.
How do I "force" SQL Server to recalculate the GetDate for every row?
A cursor solved it
declare @id int;
Declare toLoop cursor for
    select Id from LogChange where id > 6000
OPEN toLoop
FETCH NEXT FROM toLoop INTO @id
WHILE @@FETCH_STATUS = 0
BEGIN
    Update LogChange set Created = GETDATE() where id = @id
    FETCH NEXT FROM toLoop INTO @id
END
close toLoop
DEALLOCATE toLoop
I have found the following GETDATE() pattern extremely useful over the years, especially when updating multiple tables in the same transaction. Having the time values exactly match had more benefit to me than being accurate to the millisecond on individual rows.
declare @dtNow datetime
SET @dtNow = GETDATE()
UPDATE LogChange
SET Created = @dtNow
WHERE id > 6000
I don't think there is a way to do them in a single query because the operations in the same clause of a query are evaluated all-at-once.
If you want to guarantee that separate values are applied to each row, write a query that definitely does that:
declare @dt datetime
set @dt = GETDATE()
;With Diffs as (
    select
        *,
        ROW_NUMBER() OVER (ORDER BY id) * 10 as Offset
    from LogChange
    WHERE id > 6000
)
UPDATE Diffs
SET Created = DATEADD(millisecond, Offset, @dt)
We know that the above will generate a separate offset for each row, and that we're definitely adding those offsets to a fixed value. It has often been observed that GETDATE() returns the same value for all rows, but I cannot find any specific documentation that guarantees this, which is why I've also introduced a @dt variable so that I know it's only assigned once.
You can create a scalar function like this one (the otherwise-unused parameter ties each call to a row value, so SQL Server evaluates the function per row instead of folding it into a constant):
CREATE FUNCTION dbo.sf_GetDate(@num int)
RETURNS DateTime
AS
BEGIN
    RETURN GETDATE()
END
GO
Then use it like this:
UPDATE LogChange
SET Created = dbo.sf_GetDate(Id)
WHERE id > 6000
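To check whether any of these approaches actually produced row-distinct timestamps, a quick verification query (assuming the LogChange schema from the question):
SELECT COUNT(*) AS RowsUpdated,
       COUNT(DISTINCT Created) AS DistinctTimestamps
FROM LogChange
WHERE id > 6000;
If DistinctTimestamps comes back as 1, the update stamped every row with the same value.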

How to cache stored procedure results to retrieve the results faster?

I have a stored procedure which takes a long time (around 5 minutes) to execute. This stored procedure fills up a table. Now I am retrieving the data from that table. I have created a job to execute this stored procedure every 15 minutes. But while the stored procedure is executing, my table is empty and the front end shows no results during that time. My requirement is to show data at the front end at all times.
Is there a way to cache the stored procedure results and use that result while the stored procedure executes?
Here is my stored procedure,
BEGIN
    declare @day datetime
    declare @maxdate datetime
    set @maxdate = getdate()
    set @day = Convert(Varchar(10), DATEADD(week, DATEDIFF(day, 0, getdate())/7, 0), 110)
    truncate table tblOpenTicketsPerDay
    while @day <= @maxdate
    begin
        insert into tblOpenTicketsPerDay
        select convert(varchar(20), datename(dw, @day)) day_name, count(*) Open_Tickets from
            (select [status], createdate, closedate
             FROM OPENROWSET('MSDASQL','DSN=SQLite','Select * from tickets') AS a
             where createdate <= @day
             except
             select [status], createdate, closedate
             FROM OPENROWSET('MSDASQL','DSN=SQLite','Select * from tickets') AS a
             where closedate <= @day and [status] = 'closed') x
        set @day = @day + 1
    end
END
Any ideas will be helpful.
Thanks.
If I have understood correctly, then your main concern is: your stored procedure empties the table and then fills it up, and since that takes time, your application has no data to show.
In that case, you can have a secondary/auxiliary clone table, say tblOpenTicketsPerDay_clone, and have your stored procedure fill that table instead, like:
insert into tblOpenTicketsPerDay_clone
select convert(varchar(20),datename(dw,@day)) day_name,
count(*) Open_Tickets from
That way your application will always have data to display, since the main table keeps its data. Once the clone table is done filling up, transfer the same data to the main table:
delete from tblOpenTicketsPerDay;
insert into tblOpenTicketsPerDay
select * from tblOpenTicketsPerDay_clone;
No, but the problem is not caching; it is a totally bad approach to generating the data.
Generate the new data into a temporary table, then MERGE the results (using the MERGE keyword) into the original table.
There is no sense in deleting the data first. That is a terrible design approach.
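A rough sketch of that staging-plus-MERGE pattern, using the table and column names from the question (the expensive computation itself is left out):
-- Build the fresh numbers in a staging table; the live table stays readable meanwhile.
CREATE TABLE #staging (day_name VARCHAR(20), Open_Tickets INT)
-- (the WHILE loop from the stored procedure would INSERT INTO #staging here,
--  instead of truncating and refilling tblOpenTicketsPerDay)

-- Reconcile the live table in one statement instead of truncate-and-refill.
MERGE tblOpenTicketsPerDay AS target
USING #staging AS source
    ON target.day_name = source.day_name
WHEN MATCHED THEN
    UPDATE SET target.Open_Tickets = source.Open_Tickets
WHEN NOT MATCHED BY TARGET THEN
    INSERT (day_name, Open_Tickets) VALUES (source.day_name, source.Open_Tickets)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;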

Table Variable inside cursor, strange behaviour - SQL Server

I observed a strange thing inside a stored procedure with a SELECT on a table variable. It always returns the value (on subsequent iterations) that was fetched in the first iteration of the cursor. Here is some sample code that demonstrates this.
DECLARE @id AS INT;
DECLARE @outid AS INT;
DECLARE sub_cursor CURSOR FAST_FORWARD
    FOR SELECT [TestColumn]
        FROM testtable1;
OPEN sub_cursor;
FETCH NEXT FROM sub_cursor INTO @id;
WHILE @@FETCH_STATUS = 0
BEGIN
    DECLARE @Log TABLE (LogId BIGINT NOT NULL);
    PRINT 'id: ' + CONVERT (VARCHAR (10), @id);
    INSERT INTO Testtable2 (TestColumn)
    OUTPUT inserted.[TestColumn] INTO @Log
    VALUES (@id);
    IF @@ERROR = 0
    BEGIN
        SELECT TOP 1 @outid = LogId
        FROM @Log;
        PRINT 'Outid: ' + CONVERT (VARCHAR (10), @outid);
        INSERT INTO [dbo].[TestTable3] ([TestColumn])
        VALUES (@outid);
    END
    FETCH NEXT FROM sub_cursor INTO @id;
END
CLOSE sub_cursor;
DEALLOCATE sub_cursor;
However, while I was posting the code on SO and trying various combinations, I observed that removing TOP from the line below gives me the right values out of the table variable inside the cursor:
SELECT TOP 1 @outid = LogId FROM @Log;
which would make it like this:
SELECT @outid = LogId FROM @Log;
I am not sure what is happening here. I thought TOP 1 on a table variable should work, thinking that a new table is created on every iteration of the loop. Can someone shed light on table variable scoping and lifetime?
Update: I have a solution to circumvent the strange behavior: I declared the table once at the top, before the loop, and delete all its rows at the beginning of each iteration.
There are numerous things a bit off with this code.
First off, you roll back your embedded transaction on error, but I never see you commit it on success. As written, this will leak a transaction, which could cause major issues for you in the following code.
What might be confusing you about the @Log table situation is that SQL Server doesn't use the same variable scoping and lifetime rules as C++ or other standard programming languages. Even when declaring your table variable in the cursor block, you only get a single @Log table, which then lives for the remainder of the batch and gets multiple rows inserted into it.
As a result, your use of TOP 1 is not really meaningful, since there's no ORDER BY clause to impose any sort of deterministic ordering on the table. Without that, you get whatever order SQL Server sees fit to give you, which in this case appears to be the insertion order, giving you the first inserted element of that log table every time you run the SELECT.
If you truly want only the last ID value, you will need to provide a real ordering criterion for your @Log table -- some form of autonumber or date field alongside the data column that can be used to impose the ordering you want.
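For instance, a sketch of that ordering fix (assuming an IDENTITY column is acceptable here):
-- Declared once per batch; the IDENTITY column records insertion order.
DECLARE @Log TABLE (Seq BIGINT IDENTITY(1,1), LogId BIGINT NOT NULL);

INSERT INTO Testtable2 (TestColumn)
OUTPUT inserted.[TestColumn] INTO @Log (LogId)  -- name the target column; Seq fills itself
VALUES (@id);

SELECT TOP 1 @outid = LogId
FROM @Log
ORDER BY Seq DESC;  -- deterministic: the highest Seq is the most recent insert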
