Delete From Statement in SQL 2014 Failing - sql-server

I have a stored procedure that does some cleanup on a database each morning. This is a ConnectWise database (company CRM) to which we have reporting access. It currently lives on a Server 2012 VM in Microsoft Azure. In a nutshell, the stored procedure combines data from multiple records into one record and deletes the records it combined.
Background on the issue: we delete a record from one (parent) table; several other (child) tables have foreign keys to it, and they were not set up with the cascade delete option, so our stored procedure goes through each child table and deletes the rows that reference the parent row. It turns out this script started failing in February and we are only now noticing (smh).
Error message received:
Msg 512, Level 16, State 1, Procedure Alter_SR_Service_User_Defined_Field_Value, Line 53 [Batch Start Line 93]
Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
delete from SR_Service_User_Defined_Field_Value
where SR_Service_RecID in (
        select s.SR_Service_RecID
        from SR_Service s
        join SR_Board sb on sb.SR_Board_RecID = s.SR_Board_RecID
        where sb.Board_Name like 'Collections'
          and s.SR_Status_RecID != 511
          and cast(trim(left(right(s.Summary, len(s.Summary) - charindex('#', s.Summary)), 5)) as int) = @invoiceNumber
    )
  and SR_Service_RecID != (
        select min(s.SR_Service_RecID)
        from SR_Service s
        join SR_Board sb on sb.SR_Board_RecID = s.SR_Board_RecID
        where sb.Board_Name like 'Collections'
          and s.SR_Status_RecID != 511
          and cast(trim(left(right(s.Summary, len(s.Summary) - charindex('#', s.Summary)), 5)) as int) = @invoiceNumber
    )
SR_Service_User_Defined_Field_Value table (the first column is the primary key; the second and third are foreign keys):

SR_Service_User_Defined_Field_Value_RecID | SR_Service_RecID | User_Defined_Field_RecID | User_Defined_Field_Value | Last_Update_UTC | Updated_By
5791 | 8009 | 30 | ENGR | 2022-04-18 | jgriffin
5792 | 8009 | 51 | NO   | 2022-04-18 | jgriffin
5789 | 8240 | 30 | ENGR | 2022-04-18 | jgriffin
5790 | 8240 | 51 | NO   | 2022-04-18 | jgriffin
5787 | 8420 | 30 | ENGR | 2022-04-18 | jgriffin
5788 | 8420 | 51 | NO   | 2022-04-18 | jgriffin
Troubleshooting to date:
I simplified the delete statement to delete based on SR_Service_RecID alone, without the additional select subqueries, with no luck.
I tried to delete multiple records by SR_Service_User_Defined_Field_Value_RecID using an in list, with no luck: where SR_Service_User_Defined_Field_Value_RecID in (5789, 5787).
I could delete using the in statement with just one RecID in the list: where SR_Service_User_Defined_Field_Value_RecID in (5789).
I updated the SR_Service_RecID foreign key to use cascade delete. I then tried to delete the one parent record and received the same error.
Trigger on this table:
-- Code for delete
if exists(select * from deleted) and not exists(Select * from inserted)
begin
INSERT INTO [TruCWHistorian].[dbo].[CWHistorian]( [Table], [Object_RecID], [FieldName], [Old_Value], [New_Value], [Date_Updated], [Updated_By])
SELECT
'SR_Service_User_Defined_Field_Value',
d.SR_Service_RecID,
u.Caption,
(select User_Defined_Field_Value FROM deleted),
'', -- no new value for deleted record case
getDate(),
'' -- no record of who made change in this case
FROM
deleted d
join User_Defined_Field u on u.User_Defined_Field_RecID=d.User_Defined_Field_RecID
end

In your trigger, this subquery is causing the problem (and there's no reason for it anyway, since with no correlation it returns all the rows from deleted):
(select User_Defined_Field_Value FROM deleted)
Why isn't it just:
INSERT INTO [TruCWHistorian].[dbo].[CWHistorian](...)
SELECT
'SR_Service_User_Defined_Field_Value',
d.SR_Service_RecID,
u.Caption,
d.User_Defined_Field_Value,
'', -- no new value for deleted record case
getDate(),
'' -- no record of who made change in this case
FROM
deleted d
...
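For completeness, here is a sketch of the corrected delete branch in full, with the uncorrelated scalar subquery replaced by the per-row column (the column list and history-table name are taken verbatim from the trigger shown above):

```sql
-- Corrected delete branch of the trigger: read the old value from the
-- joined deleted row instead of an uncorrelated subquery, so it also
-- works when a single statement deletes multiple rows.
IF EXISTS (SELECT * FROM deleted) AND NOT EXISTS (SELECT * FROM inserted)
BEGIN
    INSERT INTO [TruCWHistorian].[dbo].[CWHistorian]
        ([Table], [Object_RecID], [FieldName], [Old_Value], [New_Value], [Date_Updated], [Updated_By])
    SELECT
        'SR_Service_User_Defined_Field_Value',
        d.SR_Service_RecID,
        u.Caption,
        d.User_Defined_Field_Value,  -- per-row value, not a subquery
        '',                          -- no new value for a deleted record
        GETDATE(),
        ''                           -- no record of who made the change
    FROM deleted d
    JOIN User_Defined_Field u
        ON u.User_Defined_Field_RecID = d.User_Defined_Field_RecID;
END
```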

Related

SQL Server trigger: limit record insert

I'm building a database for a university assignment. At a certain point we have to create a trigger/function that limits the insertion of records, taking different foreign keys into account (SQL Server, via Management Studio 17). For example, we can have 100 records overall, but only 50 records for the same foreign key value.
We have this constraint:
ALTER TABLE [dbo].[DiaFerias] WITH CHECK
ADD CONSTRAINT [Verificar22DiasFerias]
CHECK (([dbo].[verificarDiasFerias]((22)) = 'True'))
With the help of this function:
ALTER FUNCTION [dbo].[verificarDiasFerias] (@contagem INT)
RETURNS VARCHAR(5)
AS
BEGIN
IF EXISTS (SELECT DISTINCT idFuncionario, idDiaFerias
FROM DiaFerias
GROUP BY idFuncionario, idDiaFerias
HAVING COUNT(*) <= @contagem)
RETURN 'True'
RETURN 'False'
END
Would I use a trigger here? Reluctantly, yes (See Enforce maximum number of child rows for how to do it without triggers, in a way I'd never recommend). Would I use that function? No.
I'd create an indexed view:
CREATE VIEW dbo.DiaFerias_Counts
WITH SCHEMABINDING
AS
SELECT idFuncionario, idDiaFerias, COUNT_BIG(*) as Cnt
FROM dbo.DiaFerias
GROUP BY idFuncionario, idDiaFerias
GO
CREATE UNIQUE CLUSTERED INDEX PK_DiaFerias_Counts on
dbo.DiaFerias_Counts (idFuncionario, idDiaFerias)
Why do this? So that SQL Server maintains these counts for us automatically, so we don't have to write a wide-ranging query in the triggers. We can now write the trigger, something like:
CREATE TRIGGER T_DiaFerias
ON dbo.DiaFerias
AFTER INSERT, UPDATE
AS
SET NOCOUNT ON;
IF EXISTS (
SELECT
*
FROM dbo.DiaFerias_Counts dfc
WHERE
dfc.Cnt > 22
AND
(EXISTS (select * from inserted i
where i.idFuncionario = dfc.idFuncionario AND i.idDiaFerias = dfc.idDiaFerias)
OR EXISTS (select * from deleted d
where d.idFuncionario = dfc.idFuncionario AND d.idDiaFerias = dfc.idDiaFerias)
)
)
BEGIN
RAISERROR('Constraint violation',16,1);
ROLLBACK TRANSACTION;
END
Hopefully you can see how it's meant to work - we only want to query counts for items that may have been affected by whatever caused us to trigger - so we use inserted and deleted to limit our search.
And we reject the change if any count is greater than 22, unlike your function which only starts rejecting rows if every count is greater than 22.
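As a quick sanity check (a sketch; assumes DiaFerias has just these two key columns for illustration, and that the trigger rolls the statement back), a single multi-row insert that pushes one (idFuncionario, idDiaFerias) pair past 22 should fail as a whole:

```sql
-- Hypothetical smoke test: 23 rows for the same key pair in one
-- statement. The trigger sees the new count via the indexed view
-- and raises 'Constraint violation' for the whole statement.
INSERT INTO dbo.DiaFerias (idFuncionario, idDiaFerias)
SELECT 1, 1
FROM (VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),(12),
             (13),(14),(15),(16),(17),(18),(19),(20),(21),(22),(23)) v(n);
```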

Archiving data from Table1 to Table2 upon condition | Sql Server 2017 Express

I have a database with two tables:
Table Student with the following
columns:
StudentID int identity,
StudentFN,
StudentLN,
Active bit,
EnrollmentDate
Table ArchivedStudent with the following columns:
ArchivedStudentID int identity,
StudentID int,
StudentFN,
StudentLN,
WithdrawalDate getdate(),
ReasonDropped
In the long run, I'd like to schedule automatic updates for the ArchivedStudent table and move the data in columns StudentID, StudentFN and StudentLN from table Student to table ArchivedStudent when column Active changes from 1 (true) to 0 (false).
Here's my start up script that is not working:
update [as]
set [as].StudentID = s.StudentID,
[as].StudentFN = s.StudentFN,
[as].StudentLN = s.StudentLN
from ArchivedStudent [as]
inner join Student s
on [as].StudentID = s.StudentID
where s.Active = 0
go
The issue is that it does not return any results.
Once I'm able to update table ArchivedStudent, I'd like to delete the data of students whose Active status changed to 0 from the Student table.
Your question still isn't very clear on the process. For example, do you want to allow the student to be deactivated for a certain period of time before they are moved to the archive table or do you want the student to be immediately moved to the archived table once the student is deactivated?
If the latter, this is much easier:
INSERT INTO ArchivedStudent (StudentId, StudentFn, StudentLn, WithdrawalDate)
SELECT S.StudentId, S.StudentFn, S.StudentLn, GETDATE()
FROM Student S
WHERE StudentId = ?
DELETE FROM Student WHERE StudentId = ?
If the former, then that is more challenging and we will require more detail.
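If the archive-then-delete pair should behave atomically (so a failure between the two statements can't lose a row), the pair can be wrapped in one transaction. A sketch, with a hypothetical @StudentId variable standing in for the `?` placeholders above:

```sql
-- Sketch: atomic archive + delete (@StudentId is illustrative)
DECLARE @StudentId int = 42;  -- illustrative value

BEGIN TRANSACTION;
BEGIN TRY
    INSERT INTO ArchivedStudent (StudentId, StudentFn, StudentLn, WithdrawalDate)
    SELECT S.StudentId, S.StudentFn, S.StudentLn, GETDATE()
    FROM Student S
    WHERE S.StudentId = @StudentId;

    DELETE FROM Student WHERE StudentId = @StudentId;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION;
    THROW;  -- rethrow so the caller sees the original error
END CATCH;
```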
Update 1:
To set the Withdrawal date based off a calculated value, use the following:
INSERT INTO ArchivedStudent (StudentId, StudentFn, StudentLn, WithdrawalDate)
SELECT S.StudentId, S.StudentFn, S.StudentLn, CAST(DATEADD(D,14,GETDATE()) AS DATE)
FROM Student S
WHERE StudentId = ?
Note 1: In DATEADD(), use a positive value for future dates and use a negative value for past dates. You can remove the DATE CAST if you need the actual time in addition to the date.
Note 2: The DELETE script posted in the original answer still stands.
You need a trigger to do it:
CREATE TRIGGER ArchiveStudent ON Student
FOR UPDATE
AS
BEGIN
SET NOCOUNT ON;
INSERT INTO ArchivedStudent (StudentID, StudentFN, StudentLN)
SELECT
StudentID
, StudentFN
, StudentLN
FROM
Student
WHERE
Active = 0
DELETE FROM Student
WHERE
Active = 0
END
However, your approach is simple and risky at the same time. For instance, if someone makes a student inactive by mistake, the trigger will immediately insert that student into the archive table and delete them from the live table. Sure, you can recover the row in several ways (the deleted/inserted tables, or even the max id of the archive table), but why put yourself in that situation in the first place? This is one of several general issues with the current approach. A better approach is to add versioning or historian columns to the tables and run the archiving from a SQL Agent job or a stored procedure on fixed dates rather than from triggers; that gives you scheduled, controlled data archiving.
You could even add historian columns that store the value of the Active column and the date of the change, then use a trigger or stored procedure (or even a computed column with a generic scalar function reused across tables) to maintain them. For instance, if a student has been inactive for 5 business days, archive them and delete them from the table.
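That historian-column idea might look like the sketch below. The column name, procedure name, and the plain 5-calendar-day cutoff (not business days) are illustrative assumptions, not part of the original schema:

```sql
-- Hypothetical historian column: remember when Active last changed
ALTER TABLE Student ADD ActiveChangedDate date NULL;
GO

-- Scheduled procedure (e.g. run nightly from a SQL Agent job):
-- archive students inactive for at least 5 days, then delete them.
CREATE PROCEDURE dbo.ArchiveInactiveStudents
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    INSERT INTO ArchivedStudent (StudentID, StudentFN, StudentLN, WithdrawalDate)
    SELECT s.StudentID, s.StudentFN, s.StudentLN, GETDATE()
    FROM Student s
    WHERE s.Active = 0
      AND s.ActiveChangedDate <= DATEADD(day, -5, GETDATE())
      AND NOT EXISTS (SELECT * FROM ArchivedStudent a
                      WHERE a.StudentID = s.StudentID);

    DELETE s
    FROM Student s
    WHERE s.Active = 0
      AND s.ActiveChangedDate <= DATEADD(day, -5, GETDATE());

    COMMIT TRANSACTION;
END
```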
You could use a TRIGGER AFTER UPDATE on the Student table.
This trigger would:
- react only on UPDATE,
- transfer Student to ArchiveStudent, when Active is set to 0
- and set WithdrawalDate to 2 weeks from today.
Trigger:
CREATE TRIGGER [Student_Changed]
ON Student
AFTER UPDATE
AS
BEGIN
-- Archive
INSERT INTO ArchiveStudent
(StudentID, StudentFN, StudentLN, WithdrawalDate)
SELECT
DELETED.StudentID
,DELETED.[StudentFN]
,DELETED.[StudentLN]
,DATEADD(day, 14, GETDATE()) -- 2 Weeks from today
FROM DELETED
WHERE DELETED.Active = 1
-- Delete archived
DELETE FROM Student
WHERE StudentID = (SELECT DELETED.[StudentID] FROM DELETED)
AND Active = 0
END;
DEMO:
You can take a look at the SQL Fiddle solution here.
There are a number of solutions here that all appear partially correct, but with some issues. Your initial update of your archive table will only update existing rows, never insert new ones. And since you are joining between the live table and the archive table on StudentID, you will get no results — well, no updates, since an update statement doesn't produce "results" as such anyway.
So, as others have said, you would use two statements: an insert and a delete. I would tend to be on the careful side and make sure that I (a) don't get duplicates in my archive table and (b) don't delete from the live table before I'm sure the row made it into the archive. So the two statements would be:
insert archivedstudent(...fieldlist...)
select * from student
where active=0
and not exists(select * from archivedstudent where archivedstudent.studentid=student.studentid)
delete student
where active=0
and exists(select * from archivedstudent where archivedstudent.studentid=student.studentid)
You can then run this code whenever you wish, schedule it as a job to run each night, whatever makes sense in your app.
If, on the other hand, you want it to run immediately, then a trigger is the way to go. Be aware, though, that triggers are set-based operations, meaning that the trigger runs once for all rows affected by an update. This means the solution proposed by @Milan will fail if the triggering update affects more than one row, because the clause WHERE StudentID = (SELECT DELETED.[StudentID] FROM DELETED) will return more than one value. An example might be update student set active=0 where enrolmentdate<'2017-01-01'.
You should always join to the internal tables exposed inside a trigger, in this case the DELETED table:
delete student
from deleted
join student on student.studentid=deleted.studentid
where student.active=0
I'd still be tempted to add the where exists/not exists clauses inside the trigger as well just to make it more error-proof.
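Putting those points together, a multi-row-safe version of the trigger might look like this sketch (it joins to DELETED and keeps the exists/not-exists guards; the trigger name is illustrative):

```sql
CREATE TRIGGER Student_Archive ON Student
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Archive every updated row that is now inactive and not yet archived
    INSERT INTO ArchivedStudent (StudentID, StudentFN, StudentLN)
    SELECT s.StudentID, s.StudentFN, s.StudentLN
    FROM deleted d
    JOIN Student s ON s.StudentID = d.StudentID
    WHERE s.Active = 0
      AND NOT EXISTS (SELECT * FROM ArchivedStudent a
                      WHERE a.StudentID = s.StudentID);

    -- Delete only rows that provably made it into the archive
    DELETE s
    FROM deleted d
    JOIN Student s ON s.StudentID = d.StudentID
    WHERE s.Active = 0
      AND EXISTS (SELECT * FROM ArchivedStudent a
                  WHERE a.StudentID = s.StudentID);
END
```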
You need two queries:
Insert
Insert into archivedstudent (studentid, studentfn, studentln) select studentid, studentfn, studentln from student where active=0 and studentid not in (select studentid from archivedstudent);
And the delete
Delete from student where studentid in (select studentid from archivedstudent);
What you need is a trigger; see SQL Server Trigger After Update for a Specific Value. However, you should be careful with triggers on large amounts of data: they can hurt performance.

Removing Duplicates with SQL Express 2017

I have a table of 120 million rows. About 8 million of those rows are duplicates depending on what value/column I use to determine duplicates. For argument sake, I'm testing out the email column vs multiple columns to see what happens with my data.
The file is about 10 GB, so I cannot simply add another table to the database because of SQL Express's size limits. Instead, I thought I'd try to extract, truncate, and re-insert using a temp table, since I've been meaning to try that method out.
I know I can use a CTE to remove the duplicates, but every single time I try that it takes forever and my system locks up. My solution is to do the following:
1. Extract all rows to tempdb
2. Sort by Min(id)
3. Truncate the original table
4. Transfer the new unique data from tempdb back to the main table
5. Take the extra duplicates and trim them to uniques using Delimit
6. Import the leftover rows back into the database
My table looks like the following.
Name   Gender  Age  Email            ID
Jolly  Female  28   jolly@jolly.com  1
Jolly  Female  28   jolly@jolly.com  2
Jolly  Female  28   jolly@jolly.com  3
Kate   Female  36   kate@kate.com    4
Kate   Female  36   kate@kate.com    5
Kate   Female  36   kate@kate.com    6
Jack   Male    46   jack@jack.com    7
Jack   Male    46   jack@jack.com    8
Jack   Male    46   jack@jack.com    9
My code
SET IDENTITY_INSERT test.dbo.contacts ON
GO
select name, gender, age, email, id into ##contacts
from test.dbo.contacts
WHERE id IN
(SELECT MIN(id) FROM test.dbo.contacts GROUP BY name)
TRUNCATE TABLE test.dbo.contacts
INSERT INTO test.dbo.contacts
SELECT name, gender, age, total_score, id
from ##students
SET IDENTITY_INSERT test.dbo.contactsOFF
GO
This code is almost working, except for the following error that I see.
"An explicit value for the identity column in table 'test.dbo.contacts' can only be specified when a column list is used and IDENTITY_INSERT is ON.
I have absolutely no idea why I keep seeing that message since I turned identity_insert on and off.
Can somebody please tell me what I'm missing in the code? And if anybody has another solution to keep unique rows I'd love to hear about it.
You said that your original problem was that " it takes forever and my system locks up".
The problem is the amount of time necessary for the operation and the lock escalation to table lock.
My suggestion is to break down the operation so that you delete less than 5000 rows at time.
I assume you have less than 5000 duplicates for each name.
You can read more about lock escalation here:
https://www.sqlpassion.at/archive/2014/02/25/lock-escalations/
About your actual problem (the identity insert): your script contains at least two errors, so I guess it's not the original one, and it's hard to say why the original fails.
use test;
if object_ID('dbo.contacts') is not null drop table dbo.contacts;
CREATE TABLE dbo.contacts
(
id int identity(1,1) primary key clustered,
name nvarchar(50),
gender varchar(15),
age tinyint,
email nvarchar(50),
TS Timestamp
)
INSERT INTO [dbo].[contacts]([name],[gender],[age],[email])
VALUES
('Jolly','Female',28,'jolly#jolly.com'),
('Jolly','Female',28,'jolly#jolly.com'),
('Jolly','Female',28,'jolly#jolly.com'),
('Kate','Female',36,'kate#kate.com'),
('Kate','Female',36,'kate#kate.com'),
('Kate','Female',36,'kate#kate.com'),
('Jack','Male',46,'jack#jack.com'),
('Jack','Male',46,'jack#jack.com'),
('Jack','Male',46,'jack#jack.com');
--For the purpose of lock escalation, I assume you have fewer than 5,000 duplicates for each single name.
if object_ID('tempdb..#KillList') is not null drop table #KillList;
SELECT KL.*, C.TS
into #KillList
from
(
SELECT [name], min(ID) GoodID
from dbo.contacts
group by name
having count(*) > 1
) KL inner join
dbo.contacts C
ON KL.GoodID = C.id
--This has the purpose of testing concurrent updates on relevant rows
--UPDATE [dbo].[contacts] SET Age = 47 where ID=7;
--DELETE [dbo].[contacts] where ID=7;
while EXISTS (SELECT top 1 1 from #KillList)
BEGIN
DECLARE @id int;
DECLARE @name nvarchar(50);
DECLARE @TS binary(8);
SELECT top 1 @id=GoodID, @name=Name, @TS=TS from #KillList;
BEGIN TRAN
if exists (SELECT * from [dbo].[contacts] where id=@id and TS=@TS)
BEGIN
DELETE FROM C
from [dbo].[contacts] C
where id <> @id and Name = @name;
DELETE FROM #KillList where Name = @name;
END
ELSE
BEGIN
ROLLBACK TRAN;
RAISERROR('Concurrency error while deleting %s', 16, 1, @name);
RETURN;
END
commit TRAN;
END
SELECT * from [dbo].[contacts];
I wrote it this way so that you can see the sub-results of each query.
The inner SQL should not select *; instead select only id.
delete from [contacts] where id in
(
select id from
(
select *, ROW_NUMBER() over (partition by name, gender, age, email order by id) as rowid from [contacts]
) rowstobedeleted where rowid>1
)
If this takes too long or creates too much load, you can use SET ROWCOUNT to work in smaller chunks, but then you need to run it repeatedly until nothing is deleted anymore.
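A chunked variant of the same idea can be sketched with DELETE TOP instead of the deprecated SET ROWCOUNT, looping until no more duplicates qualify:

```sql
-- Delete duplicates in batches of 4,999 rows to stay below the
-- threshold (roughly 5,000 locks) at which SQL Server may escalate
-- to a table lock.
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (4999) c
    FROM [contacts] c
    WHERE c.id IN (
        SELECT id FROM (
            SELECT id,
                   ROW_NUMBER() OVER (PARTITION BY name, gender, age, email
                                      ORDER BY id) AS rowid
            FROM [contacts]
        ) d
        WHERE d.rowid > 1
    );
    SET @rows = @@ROWCOUNT;  -- stop once a pass deletes nothing
END
```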
I think that you need something like this; the error occurs because an explicit identity value requires a column list:
INSERT INTO test.dbo.contacts (idcol1,col2)
VALUES (value1,value2)

SQL Server triggers to check credit card balance

I've been trying to find a solution to a very simple problem, but I just can't find out how to do it. I have two tables, Transactions and Credit_Card.
Transactions
transid (PK), ccid (FK: to credit_card > ccid), amount, timestamp
Credit_Card
ccid (PK), Balance, creditlimit
I want to create a trigger so that before someone inserts a transaction, it checks that the transaction amount plus the balance of the credit card does not go over the credit limit, and if it does, it rejects the insert.
EDIT: The following code fixed my issue; big thanks to Dan Guzman for his contribution!
CREATE TRIGGER TR_transactions
ON transactions FOR INSERT, UPDATE
AS
IF EXISTS(
SELECT 1
FROM (
SELECT t.ccid, SUM(t.amount) AS amount
FROM inserted AS t
GROUP BY t.ccid) AS t
JOIN Credit_Card AS cc ON
cc.ccid = t.ccid
WHERE cc.creditlimit <= (t.amount + cc.balance)
)
BEGIN
RAISERROR('Credit limit exceeded', 16, 1);
ROLLBACK;
END;
If I understand correctly, you just need to check the credit limit against newly inserted/updated transactions. Keep in mind that a SQL Server trigger fires once per statement, and a statement may affect multiple rows. The virtual inserted table will have images of the affected rows. You can use this to limit the credit check to only the credit cards affected by the related transactions.
CREATE TRIGGER TR_transactions
ON transactions FOR INSERT, UPDATE
AS
IF EXISTS(
SELECT 1
FROM (
SELECT inserted.ccid, SUM(inserted.amount) AS amount
FROM inserted
GROUP BY inserted.ccid) AS t
JOIN Credit_Card AS cc ON
cc.ccid = t.ccid
WHERE cc.creditlimit <= (t.amount + cc.balance)
)
BEGIN
RAISERROR('Credit limit exceeded', 16, 1);
ROLLBACK;
END;
EDIT
I removed the t alias from the inserted table and qualified the columns with inserted instead to better indicate the source of the data. It's generally a good practice to qualify column names with the table name or alias in multi-table queries to avoid ambiguity.
The integers 16 and 1 in the RAISERROR statement specify the severity and state of the raised error. See the SQL Server Books Online reference for details. Severity 11 and greater raise an error, with severities in the 11 through 16 range indicating a user-correctable error.
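A quick way to exercise the trigger (a sketch; assumes a card row with ccid = 1, Balance = 900 and creditlimit = 1000 already exists, and that nothing else updates Balance in between):

```sql
-- Should succeed: 1000 <= 50 + 900 is false, so no error is raised
INSERT INTO Transactions (transid, ccid, amount, timestamp)
VALUES (1, 1, 50, GETDATE());

-- Should fail and roll back: 1000 <= 200 + 900 is true, so the
-- trigger raises 'Credit limit exceeded'
INSERT INTO Transactions (transid, ccid, amount, timestamp)
VALUES (2, 1, 200, GETDATE());
```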
You can try this:
ALTER TRIGGER [dbo].[TrigerOnInsertPonches]
ON [dbo].[CHECKINOUT]
AFTER INSERT
AS
BEGIN
DECLARE @ccid int
,@amount money
You have to tell SQL that the trigger is after insert; then you can declare the variables you will use:
select @ccid = o.ccid from inserted o;
I think this is the correct way to catch the id. Then you can make your select, filtering on this value.
I hope this can be useful.

SQL Server: A severe error occurred on the current command. The results, if any, should be discarded

I have the following SQL Server query in a stored procedure, and I am running this from a Windows application. I populate the table variable with 30 million records and then compare them with the previous day's records in tbl_ref_test_main to add and delete the differing records. There is a trigger on tbl_ref_test_main on insert and delete, which writes the same record to another table. Because of the comparison of 30 million records, it takes ages to produce a result and throws an error saying: A severe error occurred on the current command. The results, if any, should be discarded.
Any suggestions please.
Thanks in advance.
-- Declare table variable to store the records from CRM database
DECLARE @recordsToUpload TABLE(ClassId NVARCHAR(100), Test_OrdID NVARCHAR(100),Test_RefId NVARCHAR(100),RefCode NVARCHAR(100));
-- Populate the temp table
INSERT INTO @recordsToUpload
SELECT
class.classid AS ClassId,
class.Test_OrdID AS Test_OrdID ,
CAST(ref.test_RefId AS VARCHAR(100)) AS Test_RefId,
ref.ecr_RefCode AS RefCode
FROM Dev_MSCRM.dbo.Class AS class
LEFT JOIN Dev_MSCRM.dbo.test_ref_class refClass ON refClass.classid = class.classid
LEFT JOIN Dev_MSCRM.dbo.test_ref ref ON refClass.test_RefId = ref.test_RefId
WHERE class.StateCode = 0
AND (ref.ecr_RefCode IS NULL OR (ref.statecode = 0 AND LEN(ref.ecr_RefCode )<= 18 ))
AND LEN(class.Test_OrdID )= 12
AND ((ref.ecr_RefCode IS NULL AND ref.test_RefId IS NULL)
OR (ref.ecr_RefCode IS NOT NULL AND ref.test_RefId IS NOT NULL));
-- Insert new records to Main table
INSERT INTO dbo.tbl_ref_test_main
Select * from @recordsToUpload
EXCEPT
SELECT * FROM dbo.tbl_ref_test_main;
-- Delete records from main table where similar records does not exist in temp table
DELETE P FROM dbo.tbl_ref_test_main AS P
WHERE EXISTS
(SELECT P.*
EXCEPT
SELECT * FROM @recordsToUpload);
-- Select and return the records to upload
SELECT Test_OrdID,
CASE
WHEN RefCode IS NULL THEN 'NA'
ELSE RefCode
END,
Operation AS 'Operation'
FROM tbl_daily_upload_records
ORDER BY Test_OrdID, Operation, RefCode;
My suggestion would be that 30 million rows is too large for a table variable; try creating a temporary table, populating it with the data, and performing the analysis there.
If that isn't possible or suitable, then perhaps create a permanent table and truncate it between uses.
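A sketch of that suggestion, with names carried over from the original script: unlike a table variable, a temporary table gets real statistics and can be indexed, which matters for the EXCEPT comparisons at 30 million rows.

```sql
-- Use a #temp table instead of the @recordsToUpload table variable
CREATE TABLE #recordsToUpload
(
    ClassId    NVARCHAR(100),
    Test_OrdID NVARCHAR(100),
    Test_RefId NVARCHAR(100),
    RefCode    NVARCHAR(100)
);

INSERT INTO #recordsToUpload (ClassId, Test_OrdID, Test_RefId, RefCode)
SELECT ...  -- same SELECT as in the original procedure

-- Index the comparison columns before the EXCEPT-based insert/delete
CREATE CLUSTERED INDEX IX_recordsToUpload
    ON #recordsToUpload (Test_OrdID, Test_RefId);
```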