I have a problem where I insert a User and an Address in a transaction with a 10 second delay. If I run my select statement while the transaction is still executing, it waits for the transaction to finish, but I get a null on the join. Why doesn't my select wait for both the User and Address data to be committed?
If I run the select statement after the transaction has finished, I get the correct result. Why do I get this result, and what is the generic solution to make this work?
BEGIN TRANSACTION
insert into [user](dummy) values('text')
WAITFOR DELAY '00:00:10';
insert into address(ID_FK) values((SELECT SCOPE_IDENTITY()))
COMMIT TRANSACTION
Running the select during the transaction results in a null in the join:
select * from [user] u left join address a on u.id = a.ID_FK order by id desc
| ID  | dummy  | ID_FK |
|-----|--------|-------|
| 101 | 'text' | null  |
Running the select after the transaction finishes gives the correct result:
select * from [user] u left join address a on u.id = a.ID_FK order by id desc
| ID  | dummy  | ID_FK |
|-----|--------|-------|
| 101 | 'text' | 101   |
This type of thing is entirely possible at the default read committed isolation level for on-premises SQL Server, as that uses read committed locking. What happens then depends on the execution plan.
An example is below
CREATE TABLE [user]
(
id INT IDENTITY PRIMARY KEY,
dummy VARCHAR(10)
);
CREATE TABLE [address]
(
ID_FK INT REFERENCES [user](id),
addr VARCHAR(30)
);
Connection One
BEGIN TRANSACTION
INSERT INTO [user]
(dummy)
VALUES ('text')
WAITFOR DELAY '00:00:20';
INSERT INTO address
(ID_FK,
addr)
VALUES (SCOPE_IDENTITY(),
'Address Line 1')
COMMIT TRANSACTION
Connection Two (run this whilst connection one is waiting the 20 seconds)
SELECT *
FROM [user] u
LEFT JOIN [address] a
ON u.id = a.ID_FK
ORDER BY id DESC
OPTION (MERGE JOIN)
Returns

| id | dummy | ID_FK | addr |
|----|-------|-------|------|
| 1  | text  | NULL  | NULL |
The execution plan is a merge join (as forced by the hint), with a sort on the address input:
The scan on [user] is blocked by the open transaction in Connection One, which has inserted a row there. The scan has to wait until that transaction commits and then eventually reads the newly inserted row.
Meanwhile, the Sort operator has already requested the rows from address by this point, because it consumes all of its rows in its Open method (i.e. during operator initialisation). That read is not blocked, as no row has been inserted into address yet, so it reads 0 rows from address, which explains the final result.
If you switch to read committed snapshot isolation rather than read committed locking, you won't get this issue: the statement only reads the state that was committed as of the start of the statement, so this kind of anomaly isn't possible.
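A minimal sketch of making that switch, assuming a database named YourDatabase (a placeholder):

-- Needs to be the only active connection in the database while it runs
ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON;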
I have a VIEW (in SQL Server) containing the following columns:
itemID [varchar(50)] | itemStatus [varchar(20)] | itemCode [varchar(20)] | itemTime [varchar(5)]
The itemID column contains id values that do not change. The remaining 3 columns, however, get updated periodically. I understand it is more difficult to create a trigger on a VIEW.
An example of the table containing data would be:
| itemID | itemStatus | itemCode | itemTime |
|--------|------------|----------|----------|
| 1      | OK         | 30       | 00:10    |
| 2      | OK         | 40       | 02:30    |
| 3      | STOPPED    | 30       | 00:01    |
When itemStatus = STOPPED and itemCode = 30, I would like to execute a stored procedure (sp_Alert), passing the itemID as a parameter.
Any help would be greatly appreciated.
Since a trigger is at least "not easy" here, I'd like to propose an ugly but functional way out. You can create a stored procedure that checks itemCode and itemStatus; if they match your criteria, it executes sp_Alert.
create procedure check_status as
if exists (select 1
           from vw_itemstatus
           where itemStatus = 'STOPPED'
             and itemCode = '30')   -- itemCode is varchar in the view, so compare to a string
begin
    declare @item_id int
    set @item_id = (select itemID
                    from vw_itemstatus
                    where itemStatus = 'STOPPED'
                      and itemCode = '30')
    exec sp_Alert @item_id
end
Depending on how critical this functionality is and how many resources you can spend on it, you can schedule this procedure via SQL Server Agent. If you run it at a short interval, it will work similarly to what you had in mind.
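If you go the Agent route, a rough sketch of scheduling it every minute looks like this (job, schedule, and database names are placeholders, not from your setup):

USE msdb;
GO
-- Create the job and a single T-SQL step that runs the polling procedure
EXEC dbo.sp_add_job @job_name = N'CheckItemStatus';
EXEC dbo.sp_add_jobstep
     @job_name = N'CheckItemStatus',
     @step_name = N'Run check_status',
     @subsystem = N'TSQL',
     @command = N'EXEC dbo.check_status;',
     @database_name = N'YourDatabase';   -- placeholder database name
-- Run every minute, all day
EXEC dbo.sp_add_schedule
     @schedule_name = N'EveryMinute',
     @freq_type = 4,              -- daily
     @freq_interval = 1,
     @freq_subday_type = 4,       -- unit = minutes
     @freq_subday_interval = 1;
EXEC dbo.sp_attach_schedule
     @job_name = N'CheckItemStatus',
     @schedule_name = N'EveryMinute';
-- Register the job on the local server so the Agent actually runs it
EXEC dbo.sp_add_jobserver @job_name = N'CheckItemStatus';
GO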
I need to create AFTER INSERT and AFTER UPDATE triggers. If a new row is inserted into the table, the AFTER INSERT trigger should run and put a timestamp in another table. But if that row is edited again, it should update the other table as well.
The status can be changed to Active, Pending, etc. So every time the status changes, I need to record a timestamp, and for every new record I need to insert a new row.
Here is the table structure:
| ID | Name | Status |
|----|------|--------|
| 1  | xyz  | Active |
Let's say this is a new row inserted into the table, so it should be inserted into the other table as well. But when I change its status, it should update the other table against this ID.
| ID | Name | Active Status | Other Status |
|----|------|---------------|--------------|
| 1  | xyz  | TimeStamp     | TimeStamp    |
USE [DemoDB]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER TRIGGER [dbo].[test_INSERT]
ON [dbo].[Demo]
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @ID BIGINT
    DECLARE @status VARCHAR(50)

    SELECT @ID = Inserted.ID
    FROM Inserted

    INSERT INTO [dbo].[LogTble]
    VALUES (@ID, 'timestamp')
END
Well - your trigger is halfway there.
You're assuming that there's only one row in Inserted - this is NOT generally the case! If your INSERT statement inserts multiple rows, there will be multiple entries in Inserted, and code like this:
SELECT @ID = Inserted.ID FROM Inserted
will fail miserably: you'll get one arbitrary row selected, and all the others are ignored.
You need to be aware that Inserted can contain multiple rows, and you need to write your trigger accordingly - in a set-based fashion.
Try this:
ALTER TRIGGER [dbo].[test_INSERT]
ON [dbo].[Demo]
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON;
INSERT INTO [dbo].[LogTble] (Id, ActiveStatus)
SELECT i.ID, SYSDATETIME()
FROM Inserted i
END
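For the update side of your question, a companion AFTER UPDATE trigger can stamp the existing log row in the same set-based way. This is only a sketch: it assumes Demo has a Status column and LogTble has an OtherStatus column, as implied by your tables, so adjust the names to your actual schema.

CREATE TRIGGER [dbo].[test_UPDATE]
ON [dbo].[Demo]
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Stamp every logged row whose Status value actually changed.
    -- Set-based, so multi-row UPDATE statements are handled correctly.
    UPDATE l
    SET l.OtherStatus = SYSDATETIME()
    FROM [dbo].[LogTble] l
    INNER JOIN Inserted i ON i.ID = l.Id
    INNER JOIN Deleted d ON d.ID = i.ID
    WHERE ISNULL(i.Status, '') <> ISNULL(d.Status, '');
END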
I am working on creating a Web API which takes an account as an input parameter and has to create or update records in a table in SQL Server. The web service therefore needs to call a stored procedure which accepts the account. I created a sample table in the database with just two columns, Account and CounterSeq, and I am trying to write a stored procedure to create or update the records in that table.
Each account should have a CounterSeq associated with it. If the Account doesn't exist in the table, create the Account and associate CounterSeq = 001 with it. If the Account already exists, just increment CounterSeq to CounterSeq + 1.
+---------+----------------+
| Account | CounterSeq |
+---------+----------------+
| ABC | 001 |
| DEF | 002 |
+---------+----------------+
For this I created a table type like this:
USE [Demo]
GO
-- Create the data type
CREATE TYPE projectName_TT AS TABLE
(
Account nvarchar(50),
CounterSeq int
)
GO
And the stored procedure is below, but I am missing how to insert the new record - for a new Account, how do I set the CounterSeq to 001?
USE [Demo]
GO
ALTER PROCEDURE [dbo].[uspInserorUpdateProjectName]
    @projectName_TT AS projectName_TT READONLY
AS
BEGIN
    SET NOCOUNT ON;

    BEGIN TRANSACTION;

    UPDATE prj
    SET prj.Account = tt.Account,
        prj.CounterSeq = tt.CounterSeq + 1
    FROM dbo.[ProjectName] prj
    INNER JOIN @projectName_TT tt ON prj.Account = tt.Account

    INSERT INTO [dbo].[ProjectName] (Account, CounterSeq)
    SELECT tt.Account, tt.CounterSeq
    FROM @projectName_TT tt
    WHERE NOT EXISTS (SELECT 1
                      FROM [dbo].[ProjectName]
                      WHERE Account = tt.Account)

    COMMIT TRANSACTION;
END;
First, I believe that the table type you pass in should only have one field, based on your description (if the Account doesn't exist in the table, create it with CounterSeq 001; if it already exists, just increment CounterSeq by 1).
You can use this query (put it in your stored procedure between BEGIN TRANSACTION and COMMIT TRANSACTION):
MERGE dbo.[ProjectName] prj
USING @projectName_TT tt
    ON prj.Account = tt.Account
WHEN MATCHED THEN
    UPDATE SET prj.CounterSeq = prj.CounterSeq + 1
WHEN NOT MATCHED THEN
    INSERT (Account, CounterSeq)
    VALUES (tt.Account, 1);
I have T-SQL code like this:
DECLARE @xml XML = (SELECT CONVERT(xml, BulkColumn, 2) FROM OPENROWSET(Bulk 'C:\test.xml', SINGLE_BLOB) [blah])

-- Data for Table 1
SELECT
    ES.value('id-number[1]', 'VARCHAR(8)') IDNumber,
    ES.value('name[1]', 'VARCHAR(8)') Name,
    ES.value('date[1]', 'VARCHAR(8)') Date,
    ES.value('test[1]', 'VARCHAR(3)') Test,
    ES.value('testing[1]', 'VARCHAR(511)') Testing,
    ES.value('testingest[1]', 'VARCHAR(5)') Testingest
FROM @xml.nodes('xmlnodes/path') AS EfficiencyStatement(ES)

-- Data for Table 2
SELECT
    U.value('fork[1]', 'VARCHAR(8)') Fork,
    U.value('spoon[1]', 'VARCHAR(3)') Spoon,
    U.value('spork[1]', 'VARCHAR(3)') Spork
FROM @xml.nodes('xmlnodes/path/nextpath') AS Utensils(U)
Now, I've tried what I normally use, and other variants, such as:
AS XML ON xml.[id-number] = [table1].[id-number]
For the record, id-number is unique across the entire document; it never occurs more than once.
This is good for grabbing the data from my XML file, but there's zero referential integrity. How do I make sure that Table 2 (and onwards) maintains referential integrity when inserting?
This should be a much better explanation:
I want to load XML values from a file. For the INSERT, I have no trouble using OPENXML and binding it on the id-number, using AS XML ON xml.[id-number] = [table1].[id-number] at the end.
I then want to update the database records (with all linked tables and their columns) using UPDATE, MERGE, or something - anything! To do this, I believe I need a way to maintain referential integrity based on the Foreign_ID value present in each table. There are dozens of tables which are all linked via Foreign_ID, so how do I update all of them?
Table Example
Table #1
+-------------+-----------+-----------+------------+---------+-----------+------------+
| Primary_Key | ID_Number | Name | Date | Test | Testing | Testingest |
+-------------+-----------+-----------+------------+---------+-----------+------------|
| 70001 | 12345 | Tom | 01/21/14 | Hi | Yep | Of course! |
| 70002 | 12346 | Dick | 02/22/14 | Bye | No | Never! |
| 70003 | 12347 | Harry | 03/23/14 | Sup | Dunno | Same. |
+----^--------+-----------+-----------+------------+---------+-----------+------------+
|
|-----------------|
|
Table #2 | Linked to primary key in the first table.
+-------------+--------v--------+-------------+-------------+------------+
| Primary_Key | Foreign_ID | Fork | Spoon | Spork |
+-------------+-----------------+-------------+-------------+------------+
| 0001 | 70001 | Yes | No | No |
| 0002 | 70002 | No | Yes | No |
| 0003 | 70003 | No | No | Yes |
+-------------+-----------------+-------------+-------------+------------+
After that is inserted, I need to be able to UPDATE the tables and columns from the XML files. After much research, I can't figure out how to update the values of every table linked by Foreign_ID while maintaining referential integrity, which means I end up inserting the wrong data into the other tables.
I want the correct data updated. To update it correctly, I need to ensure that the XQuery is matching the right data. Some tables have multiple fields for one particular Foreign_ID.
Here's the code I'm using:
DECLARE @xml XML = (SELECT CONVERT(xml, BulkColumn, 2) FROM OPENROWSET(Bulk 'C:\test.xml', SINGLE_BLOB) [blah])

-- Data for Table 1
SELECT
    ES.value('id-number[1]', 'VARCHAR(8)') IDNumber,
    ES.value('name[1]', 'VARCHAR(8)') Name,
    ES.value('date[1]', 'VARCHAR(8)') Date,
    ES.value('test[1]', 'VARCHAR(3)') Test,
    ES.value('testing[1]', 'VARCHAR(511)') Testing,
    ES.value('testingest[1]', 'VARCHAR(5)') Testingest
INTO #TempTable
FROM @xml.nodes('xmlnodes/path') AS EfficiencyStatement(ES)

-- Error on this SET: Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
SET @IDNumber = (SELECT IDNumber FROM #TempTable)
SET @Foreign_ID = (SELECT [Foreign_ID] FROM [table] WHERE [id-number] = @IDNumber)
MERGE dbo.[table1] AS CF
USING (SELECT IDNumber, Name, Date, Test, Testing, Testingest FROM #TempTable) AS src
ON CF.[id-number] = src.IDNumber
-- ID-Number is unique, and is used to setup the initial referential integrity. Foreign_ID does not exist in the XML files, so we are not matching on that.
WHEN MATCHED THEN UPDATE
SET
CF.[id-number] = src.IDNumber
-- and so on...
WHEN NOT MATCHED THEN
-- Insert statements here
GO
This works for the first table, but it does not maintain integrity when updating the other tables via Foreign_ID. Note that the SET @IDNumber line has an error, but when I set it to anything else, the update works properly.
I am not fully sure what you are asking here, but if you cannot use the suggested article to enforce references in your XML, there is not really an after-the-fact way for you to do it purely in XML.
For Table 2 and up, you can do EXISTS checks against Table 1 and process accordingly (see Referential integrity issue with Untyped XML in TSQL for an example); a rough sketch of that idea is below.
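This is only an illustration using the placeholder names from the question (dbo.[table1], dbo.[table2], Primary_Key, Foreign_ID, and the @xml variable), and it assumes table 1 has already been loaded. The child nodes are shredded relative to their parent and joined back to table 1, so each row picks up the right Foreign_ID and orphan rows are simply skipped:

INSERT INTO dbo.[table2] (Foreign_ID, Fork, Spoon, Spork)
SELECT t1.Primary_Key,                              -- becomes Foreign_ID in table 2
       U.value('fork[1]', 'VARCHAR(8)'),
       U.value('spoon[1]', 'VARCHAR(3)'),
       U.value('spork[1]', 'VARCHAR(3)')
FROM @xml.nodes('xmlnodes/path') AS EfficiencyStatement(ES)
CROSS APPLY ES.nodes('nextpath') AS Utensils(U)     -- child rows stay tied to their parent node
INNER JOIN dbo.[table1] t1                          -- only children whose parent row exists survive
        ON t1.[id-number] = ES.value('id-number[1]', 'VARCHAR(8)');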
The only other way that I can think of is to create "real" tables that represent your schema for table 1, table 2...tableN that have the relevant FKs and insert into them.