I'm looking to execute a DELETE statement that deletes database rows from a table in batches of 1000. The command is called from a PowerShell script. The SQL command is below ...
USE AIS
GO
DECLARE
@rows INT = 1,
@batch_size INT = 1000,
@duration_in_days INT = -4
WHILE @rows > 0
BEGIN
--First delete all child rows to avoid FK issues
DELETE TOP (@batch_size)
FROM dbo.ais_audit
WHERE create_time < dateadd(day, @duration_in_days, getdate())
AND parent_audit_id IS NOT NULL;
--Now delete any parent rows
DELETE TOP (@batch_size)
FROM dbo.ais_audit
WHERE create_time < dateadd(day, @duration_in_days, getdate())
AND parent_audit_id IS NULL;
SELECT @rows = @@ROWCOUNT;
SELECT @rows AS 'DELETED ROW COUNT';
WAITFOR DELAY '00:00:01';
END
"@
The command I'm using to execute this in powershell is
Invoke-Sqlcmd -ServerInstance $Server -Database $Database -AccessToken $tok.Token -Query $command
Upon running the above, the first 1000 records get deleted, but then it fails with the following error
" Invoke-Sqlcmd : The DELETE statement conflicted with the SAME TABLE REFERENCE constraint
"FK_AIS_AUDIT". The conflict occurred in database "AIS", table "dbo.ais_audit", column
'parent_audit_id'.
The statement has been terminated. "
Running the SQL statement from SSMS works fine, so this is only an issue in Powershell it seems.
Based on your T-SQL, you have no guarantee that the batch of parent rows that you are trying to delete are in any way related to the batch of child rows that you have just deleted. To ensure that you don't have a foreign key conflict, you will need to either:
a) delete all batches of child rows before moving onto deleting the parent rows, or
b) record the list of parent_audit_ids for the batch of child rows that you are deleting & then use this list to delete the associated parent rows.
Option (a) is the easier one, but option (b) will allow you to delete child and parent rows in associated batches. Which you choose depends on your requirements.
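For option (a), one possible sketch is to run two loops: exhaust all child batches first, then start on the parents. This reuses the variables from your script and is untested against your schema:

```sql
-- Sketch of option (a): delete every batch of child rows before any parents.
DECLARE @rows INT = 1, @batch_size INT = 1000, @duration_in_days INT = -4;

WHILE @rows > 0  -- first loop: child rows only
BEGIN
    DELETE TOP (@batch_size) FROM dbo.ais_audit
    WHERE create_time < dateadd(day, @duration_in_days, getdate())
      AND parent_audit_id IS NOT NULL;
    SELECT @rows = @@ROWCOUNT;
    WAITFOR DELAY '00:00:01';
END

SET @rows = 1;
WHILE @rows > 0  -- second loop: parents, now free of child references
BEGIN
    DELETE TOP (@batch_size) FROM dbo.ais_audit
    WHERE create_time < dateadd(day, @duration_in_days, getdate())
      AND parent_audit_id IS NULL;
    SELECT @rows = @@ROWCOUNT;
    WAITFOR DELAY '00:00:01';
END
```

Note that with a self-referencing FK, a "child" row can itself be the parent of another row, so the first loop may need multiple passes before it comes up empty; the WHILE condition handles that.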
I am exploring PowerShell to run the following SQL script, stored on my machine with the name "TestSQLScript.sql":
USE TestDB
GO
SELECT TOP (1000) [DocId]
,[DocumentInfo]
FROM [TestDB].[dbo].[Docs]
INSERT INTO WorkDocs
( [DocId]
,[DocumentInfo])
SELECT [DocId]
,[DocumentInfo]
FROM [TestDB].[dbo].[Docs]
SELECT TOP (1000) [DocId]
,[DocumentInfo]
FROM [TestDB].[dbo].[WorkDocs]
When I use the following command, it executes fine:
invoke-sqlcmd -inputfile "D:\TestSQLScript.sql" -serverinstance "MYPC\MSSQLSERVER2019" -Username "sa" -Password "***" -database "master"
When I run the following, it throws an error while accessing the table WorkDocs:
invoke-DbaQuery -SQLInstance "MYPC\MSSQLSERVER2019" -sqlcredential $Cred -File "D:\TestSQLScript.sql" -database "master"
My initial finding is that tables referenced as [TestDB].[dbo].[WorkDocs] are accessible, but the table referenced without the [TestDB].[dbo] prefix in the INSERT statement is not, even though I have already put the database reference at the top of the SQL file "TestSQLScript.sql":
USE TestDB
GO
I cannot reference all the tables with [TestDB].[dbo], as I have hundreds of scripts and there are multiple database references in each one. Can you suggest how I can resolve this?
In addition, how can I get the result count for each SQL query in the file? I have multiple SELECT, INSERT, UPDATE and TRUNCATE commands in one SQL file and I want to log each execution result. Please refer to the examples below.
For Select Statements - 1000 Records fetched
For Insert Statements - 100 rows inserted into 'Table Name'
For Update Statements - 50 rows affected in 'Table Name'
For Truncate Statements - 'Table Name' has been truncated.
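Assuming the scripts can be modified, one way to get per-statement counts is to read @@ROWCOUNT immediately after each statement and PRINT the message yourself, for example:

```sql
-- Sketch: capture the row count right after each statement.
-- @@ROWCOUNT must be read immediately, before anything else resets it.
DECLARE @n INT;

INSERT INTO WorkDocs ([DocId], [DocumentInfo])
SELECT [DocId], [DocumentInfo]
FROM [TestDB].[dbo].[Docs];
SET @n = @@ROWCOUNT;
PRINT CAST(@n AS varchar(10)) + ' rows inserted into WorkDocs';
```

With Invoke-Sqlcmd, PRINT output appears when you pass the -Verbose switch; Invoke-DbaQuery can surface it via its -MessagesToOutput switch in recent dbatools versions.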
I currently have a lot of triggers running on most of my tables: INSERT, UPDATE and DELETE triggers on all of them, logging into a separate table. However, the processing time of the software has increased because of this. It is barely noticeable for smaller changes, but for big changes it can go from 10-15 min to 1 hr.
I would like to change my triggers to stop inserting new log records after, say, 250 log records in 1 minute (a bulk action), delete the newly created logs, and create 1 record mentioning the bulk and the query used. The problem is I can't seem to get the trigger to stop once activated.
I have already created the conditions needed for this:
CREATE TRIGGER AUDIT_LOGGING_INSERT_ACTUALISERINGSCOEFFICIENT ON ACTUALISERINGSCOEFFICIENT FOR INSERT AS
BEGIN
SET NOCOUNT ON
DECLARE @Group_ID INT = (SELECT COALESCE(MAX(Audit_Trail_Group_ID), 0) FROM NST.dbo.Audit_Trail) + 1
DECLARE @BulkCount INT = (SELECT COUNT(*) FROM NST.dbo.Audit_Trail WHERE Audit_Trail_User = CONCAT('7090-LOCAL-', UPPER(SUSER_SNAME())) AND GETDATE() >= DATEADD(MINUTE, -1, GETDATE()))
IF @BulkCount < 250
BEGIN
INSERT ...
END
ELSE
BEGIN
DECLARE @BulkRecordCount INT = (SELECT COUNT(*) FROM NST.dbo.Audit_Trail WHERE Audit_Trail_User = CONCAT('7090-LOCAL-', UPPER(SUSER_SNAME())) AND GETDATE() >= DATEADD(MINUTE, -60, GETDATE()) AND Audit_Trail_Action LIKE '%BULK%')
IF @BulkRecordCount = 0
BEGIN
INSERT ...
END
END
END
However, when I execute a query that changes 10,000+ records, the trigger still inserts all 10,000 log rows. When I execute it again right after, it inserts 10,000 BULK records. Probably because, the first time, it goes through the check before the threshold is reached?
Also as you can see, this would work only if 1 bulk operation is used in the last 60 min.
Any ideas for handling bulk changes are welcome.
I didn't get it to work by logging the first 250 records.
Instead I did the following:
Created a new table with 'Action' and 'User' columns
I add a record every time a bulk action starts and delete it when it ends
Changed the trigger so that, if a record is found for the user in the new table, it only writes 1 bulk record in the log table
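The reworked trigger could look roughly like this. Bulk_Action_Flags and its [User] column are made-up names for the new table, and the INSERT bodies are elided as in the original:

```sql
-- Sketch only: Bulk_Action_Flags / [User] are assumed names for the new table.
CREATE TRIGGER AUDIT_LOGGING_INSERT_ACTUALISERINGSCOEFFICIENT
ON ACTUALISERINGSCOEFFICIENT FOR INSERT AS
BEGIN
    SET NOCOUNT ON
    IF EXISTS (SELECT 1 FROM NST.dbo.Bulk_Action_Flags
               WHERE [User] = CONCAT('7090-LOCAL-', UPPER(SUSER_SNAME())))
    BEGIN
        -- A bulk action is in progress: write a single summary record, once
        IF NOT EXISTS (SELECT 1 FROM NST.dbo.Audit_Trail
                       WHERE Audit_Trail_User = CONCAT('7090-LOCAL-', UPPER(SUSER_SNAME()))
                         AND Audit_Trail_Action LIKE '%BULK%')
            INSERT ... -- one BULK record
    END
    ELSE
        INSERT ... -- normal per-statement logging from the inserted table
END
```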
Problems associated:
The problem with this is that I also had to manually go through the biggest bulk functions and implement the add and delete.
It is an extra point of failure if the record gets added but an exception occurs so it never gets deleted again. -> Implemented a try/catch where needed.
I need to create a trigger in SQL Server which is conditionally fired when a user attempts to delete a row in the table.
I have 2 tables in the database Airlines:
Passenger - it has information on passengers. I need to create trigger on this table, obviously
Record - it records the flight(s) a passenger has been on
My trigger should work as follows:
If a passenger has never been on a flight, the row should be deleted when a delete is attempted.
But if he/she HAS been on a flight, it should restrict the action and print the number of times he/she has been on any flight.
The only thing (I hope) I am struggling with is:
How would I specify any WHERE clause inside a query in the trigger if I do not know which particular passenger I need to look for until a user attempts to delete it?
So, long story short: is there any way to obtain a value passed in a query's WHERE clause to be used in a trigger?
Thank you very much for your time!
Here is my code:
ALTER TRIGGER Restrict_Delete
ON Records
INSTEAD OF DELETE
AS
BEGIN
DECLARE @r_count INT
SET @r_count = (SELECT DISTINCT COUNT(*)
FROM Passenger P, Records R
WHERE P.passenger_id = R.passenger_id
AND P.passenger_id = ???)
IF @r_count > 0
BEGIN
ROLLBACK TRAN
PRINT ('Permission denied. ' + CAST (@r_count AS Varchar(3)) + ' record(s) exist.')
END
END
ELSE
PRINT 'No records exist. Record deleted!'
END
How do I determine the passenger_id in my query?
You can use the deleted and inserted tables!
These are special tables that exist in triggers and contain records copied from the actual tables. When you change a row in a table, a copy of the old row goes into the deleted table, and a copy of the new row goes into the inserted table. Since you're just deleting, you only need to use the deleted table.
Here's how the SQL could look inside your trigger:
DECLARE @r_count INT

SELECT @r_count = COUNT(*) -- SELECT, not SET: SET cannot take a FROM clause
FROM Records R -- You don't actually need the Passenger table for this.
WHERE R.Passenger_id IN (
    SELECT d.Passenger_id
    FROM deleted d
)

IF @r_count > 0
BEGIN
    ROLLBACK TRAN
    PRINT ('Permission denied. ' + CAST(@r_count AS varchar(3)) + ' record(s) exist.')
END
ELSE
BEGIN
    PRINT 'No records exist. Record deleted!'
    -- Remember: in an INSTEAD OF DELETE trigger, nothing is actually
    -- deleted unless you re-issue the DELETE here yourself.
END
Something you need to be aware of, though: a trigger is called once per statement, not once per record. So if you delete two passengers with one DELETE statement, you'll get only one trigger call. The logic you had (and I adapted) will check for any record that was deleted by that DELETE statement. You could get quite a large number for #r_count if you're doing a bulk delete!
If you need to code around that, avoid using a cursor inside your trigger: it will make deletes very slow.
Also, be aware that the PRINT statement will appear in SSMS and can be retrieved in ADO.NET with a bit of fiddling around, but doesn't appear in traces or get returned as part of a recordset. If you need to log this failure, you're going to need to write to a database table.
I have the below trigger:
CREATE Trigger enroll_limit on Enrollments
Instead of Insert
As
Declare @Count int
Declare @Capacity int
Select @Count = COUNT(*) From Enrollments
Select @Capacity = Capacity From CourseSections
If @Count < @Capacity
Begin
Insert Into Enrollments Select * From Inserted
End
GO
I'm getting an error msg saying:
'CREATE TRIGGER' must be the first statement in a query batch.
The error message "'CREATE TRIGGER' must be the first statement in a query batch." usually occurs when the preceding group (batch) of statements is not terminated with a GO.
So, I would suggest adding a GO at the end of the preceding batch's statements.
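For example (the USE line stands in for whatever statements precede your trigger):

```sql
USE TestDB;
GO  -- ends the preceding batch, so CREATE TRIGGER starts a fresh one

CREATE TRIGGER enroll_limit ON Enrollments
INSTEAD OF INSERT
AS
    ...
GO
```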
If you are trying this from SQL Server Management Studio, here is another option which worked for me:
In the left pane, right-click on the database and select "New Query".
This connects you to the specific database. Now you can enter your create trigger statement as the first statement in the query window which opens. There is no need for a "use" command.
We have a script that must allow for being re-run several times.
We have an MS-SQL script that updates a table if a (now obsolete) column exists, then deletes the column. To ensure that the script can be run several times, it first checks for the existence of a column before performing the updates.
The script works as expected on our dev database, updating the data on the first run, then displaying the message 'Not updating' on subsequent runs.
On our test database the script runs fine on the first run, but errors with "Invalid column name 'OldColumn'" on subsequent runs; if I comment out the UPDATE and ALTER statements it runs as expected.
Is there a way to force the script to run even if there's a potential error, or is it something to do with how the database was set up? (Fingers crossed I'm not looking like a complete noob!)
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'MyTable' AND COLUMN_NAME = 'OldColumn')
BEGIN
PRINT 'Updating and removing old column...'
UPDATE MyTable SET NewColumn='X' WHERE OldColumn=1;
ALTER TABLE MyTable DROP COLUMN OldColumn;
END
ELSE
PRINT 'Not updating'
GO
As a workaround you could wrap the UPDATE in dynamic SQL. The plain batch fails at compile time once OldColumn has been dropped (deferred name resolution only covers missing tables, not missing columns), whereas EXEC postpones compilation of that one statement until it actually runs:
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'MyTable' AND COLUMN_NAME = 'OldColumn')
BEGIN
PRINT 'Updating and removing old column...'
EXEC ('UPDATE MyTable SET NewColumn=''X'' WHERE OldColumn=1;');
ALTER TABLE MyTable DROP COLUMN OldColumn;
END
ELSE
PRINT 'Not updating'