I am building an application to transfer data from a SQL Server database to an offsite location via FTP as XML files.
I am building the XML data for each file via a query with FOR XML PATH('path'), TYPE.
I'm going to use a GUID to generate the filename as well as to serve as an identifier within the file. Currently my SQL to get the table is as follows (simplified):
SELECT LVL1.inv_account_no
, LVL1.cus_postcode
, CONVERT(varchar(255),NEWID()) + '.xml' as FileName
, (SELECT (SELECT CONVERT(varchar(255),NEWID()) FOR XML PATH('ident'), TYPE), (
SELECT.... [rest of very long nested select code for generating XML]
SQL Fiddle Example
This is giving me:
Account Postcode FileName xCol
AD0001 B30 3HX 2DF21466-2DA3-4D62-8B9B-FC3DF7BD1A00 <ident>656700EA-8FD5-4936-8172-0135DC49D200</ident>
AS0010 NN12 8TN 58339997-8271-4D8C-9C55-403DE98F06BE <ident>78F8078B-629E-4906-9C6B-2AE21782DC1D</ident>
Basically, different GUIDs for each row/use of NEWID().
Is there a way I can insert the same GUID into both columns without incrementing a cursor or doing two updates?
Try something like this:
SELECT GeneratedGuid, GeneratedGuid
FROM YourTable
LEFT JOIN (SELECT NEWID() AS GeneratedGuid) AS gg ON 1 = 1
"GeneratedGuid" has a different GUID for every row.
You could use a Common Table Expression to generate the NEWID for each resulting row.
Here is the SQL Fiddle : http://www.sqlfiddle.com/#!3/74c0c/1
CREATE TABLE TBL (
ID INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
GCOL VARCHAR(255),
XCOL XML)
create table tbl2 (
id int identity(1,1) not null primary key,
foo int not null )
insert into tbl2 (foo) values
(10),(20),(30)
; WITH cte_a as ( select NEWID() as ID )
INSERT INTO TBL
SELECT CONVERT(varchar(255),cte_a.ID )
, (SELECT CONVERT(varchar(255),cte_a.ID) FOR XML PATH('Output'), TYPE)
from tbl2, cte_a
Declare a variable and assign a NEWID() to it. You can then use this variable as many times as needed in your SELECT or INSERT queries; the value of the variable remains the same throughout its scope. In the following example @gid is a variable assigned a GUID value as VARCHAR(36).
DECLARE @gid varchar(36)
SET @gid = CAST(NEWID() AS VARCHAR(36))
INSERT INTO [tableName] (col1, col2) VALUES(@gid, @gid)
After executing the above query, col1 and col2 in [tableName] will have the same GUID value.
I have a table structured something like below.
The number of columns is more than 150.
Basically, I would want to query the table for a particular record, say ID = 1, and it should return all column names which are not empty.
I am expecting the output below, wherein I should be able to fetch/show only those column names whose values are not null for a particular record.
Kindly note that I am trying to achieve this in Azure SQL DW, which doesn't support the FOR XML PATH clause or cursors and has very limited dynamic SQL functionality.
UNPIVOT works, and you don't have to filter because "null values in the input of UNPIVOT disappear in the output", e.g.:
create table #t(id int, col_1 char(1), col_2 char(1), col_3 char(1))
insert into #t values (1,'A',null,'G')
insert into #t values (2,'A','c', null)
select id, col, val
from
( select * from #t ) p
unpivot
( Val for Col in (col_1, col_2, col_3 ) ) as unpvt
order by id, col;
In Synapse SQL on-demand you can also do this by using FOR JSON / OPENJSON to pivot.
Like this
create table #t(id int, col_1 char(1), col_2 char(1), col_3 char(1))
insert into #t values (1,'A',null,'G')
select [key] columnName
from openjson(
(
select * from #t
where id = 1
for json path, WITHOUT_ARRAY_WRAPPER
)) d
where value is not null
and [key] <> 'id'
outputs
So Synapse SQL Pools (aka Azure SQL DW) should eventually get this too.
We have a table that holds some information for orders of new products.
From time to time we receive new orders from a 3rd party system and insert them into our DB.
Sometimes, however, for a specific order there is already an entry in our table.
So instead of checking whether there already IS an order, the colleagues just insert new data sets into our table.
Now that the process of inserting is streamlined, I am supposed to consolidate the existing duplicates in the table.
The table looks like this:
I have 138 of these pairs where the PreOrderNumber occurs twice. I'd like to insert the FK_VehicleFile number and the CommissionNumber into the row where the FK_Checklist is set and delete the duplicate with the missing FK_Checklist after that.
My idea is to write a transact script that looks like this:
First, I store all the PreOrderNumbers that have duplicates in their own table:
DECLARE @ResultSet TABLE ( PK_OrderNumber int,
FK_Checklist int,
FK_VehicleFile int,
PreOrderNumbers varchar(20))
INSERT INTO @ResultSet
SELECT PK_OrderNumber, PreOrderNumber
FROM [LUX_WEB_SAM].[dbo].[OrderNumbers]
GROUP BY PreOrderNumber
HAVING (COUNT(PreOrderNumber) > 1)
And that's it so far.
I'm very new to these kinds of SQL scripts.
I think I need to use some kind of loop over all entries in the @ResultSet table to grab the FK_VehicleFile and CommissionNumber from the first data set and store them in the second data set.
Or do you have any suggestions on how to solve this problem in an easier way?
This response uses a CTE:
WITH [MergedOrders] AS
(
Select
ROW_NUMBER() OVER(PARTITION BY row1.PreOrderNumber ORDER BY row1.PK_OrderNumber) AS Instance
,row1.PK_OrderNumber AS PK_OrderNumber
,ISNULL(row1.FK_Checklist,row2.FK_Checklist) AS FK_Checklist
,ISNULL(row1.FK_VehicleFile,row2.FK_VehicleFile) AS FK_VehicleFile
,ISNULL(row1.PreOrderNumber,row2.PreOrderNumber) AS PreOrderNumber
,ISNULL(row1.CommissionNumber,row2.CommissionNumber) AS CommissionNumber
FROM [LUX_WEB_SAM].[dbo].[OrderNumbers] AS row1
INNER JOIN [LUX_WEB_SAM].[dbo].[OrderNumbers] AS row2
ON row1.PreOrderNumber = row2.PreOrderNumber
AND row1.PK_OrderNumber <> row2.PK_OrderNumber
)
SELECT
[PK_OrderNumber]
,[FK_Checklist]
,[FK_VehicleFile]
,[PreOrderNumber]
,[CommissionNumber]
FROM [MergedOrders]
WHERE Instance = 1 /* If we were to maintain Order Number of second instance, use 2 */
Here's the explanation:
A Common Table Expression (CTE) acts as an in-memory table, which we use to extract all rows that are repeated (NB: The INNER JOIN statement ensures that only rows that occur twice are selected). We use ISNULL to switch out values where one or the other is NULL, then select the output for our destination table.
You can take help from the following scripts to perform your UPDATE and DELETE actions.
Please keep in mind that both UPDATE and DELETE are risky operations, so do your testing with test data first.
CREATE TABLE #T(
Col1 VARCHAR(100),
Col2 VARCHAR(100),
Col3 VARCHAR(100),
Col4 VARCHAR(100),
Col5 VARCHAR(100)
)
INSERT INTO #T(Col1,Col2,Col3,Col4,Col5)
VALUES(30,NULL,222,00000002222,096),
(25,163,NULL,00000002222,NULL),
(30,163,NULL,00000002230,NULL)
SELECT * FROM #T
UPDATE A
SET A.Col3 = B.Col3, A.Col5 = B.Col5
FROM #T A
INNER JOIN #T B ON A.Col4 = B.Col4
WHERE A.Col2 IS NOT NULL AND B.Col2 IS NULL
DELETE FROM #T
WHERE Col4 IN (
SELECT Col4 FROM #T
GROUP BY Col4
HAVING COUNT(*) = 2
)
AND Col2 IS NULL
SELECT * FROM #T
I am trying to do the following but getting an "Invalid Column Name {column}" error. Can someone please help me see the error of my ways? We recently split a transaction table into 2 tables, one containing the often-updated report column names and the other containing the unchanging transactions. This leaves me trying to change what was a simple insert into 1 table into a complex insert into 2 tables with unique columns. I attempted to do that like so:
INSERT INTO dbo.ReportColumns
(
FullName
,Type
,Classification
)
OUTPUT INSERTED.Date, INSERTED.Amount, INSERTED.Id INTO dbo.Transactions
SELECT
[Date]
,Amount
,FullName
,Type
,Classification
FROM {multiple tables}
The "INSERTED.Date, INSERTED.Amount" are the source of the errors, with or without the "INSERTED." in front.
-----------------UPDATE------------------
Aaron was correct: it was impossible to manage with an insert, but I was able to vastly improve the functionality of the insert and add some other business rules with the MERGE functionality. My final solution resembles the following:
DECLARE @TransactionsTemp TABLE
(
[Date] DATE NOT NULL,
Amount MONEY NOT NULL,
ReportColumnsId INT NOT NULL
)
MERGE INTO dbo.ReportColumns AS Trgt
USING ( SELECT
{FK}
,[Date]
,Amount
,FullName
,Type
,Classification
FROM {multiple tables}) AS Src
ON Src.{FK} = Trgt.{FK}
WHEN MATCHED THEN
UPDATE SET
Trgt.FullName = Src.FullName,
Trgt.Type= Src.Type,
Trgt.Classification = Src.Classification
WHEN NOT MATCHED BY TARGET THEN
INSERT
(
FullName,
Type,
Classification
)
VALUES
(
Src.FullName,
Src.Type,
Src.Classification
)
OUTPUT Src.[Date], Src.Amount, INSERTED.Id INTO @TransactionsTemp;
MERGE INTO dbo.FinancialReport AS Trgt
USING (SELECT
[Date] ,
Amount ,
ReportColumnsId
FROM @TransactionsTemp) AS Src
ON Src.[Date] = Trgt.[Date] AND Src.ReportColumnsId = Trgt.ReportColumnsId
WHEN NOT MATCHED BY TARGET And Src.Amount <> 0 THEN
INSERT
(
[Date],
Amount,
ReportColumnsId
)
VALUES
(
Src.[Date],
Src.Amount,
Src.ReportColumnsId
)
WHEN MATCHED And Src.Amount <> 0 THEN
UPDATE SET Trgt.Amount = Src.Amount
WHEN MATCHED And Src.Amount = 0 THEN
DELETE;
Hope that helps someone else in the future. :)
The OUTPUT clause will return values you are inserting into a table. You need multiple inserts; you can try something like the following:
declare @staging table (datecolumn date, amount decimal(18,2),
fullname varchar(50), type varchar(10),
Classification varchar(255));
INSERT INTO @staging
SELECT
[Date]
,Amount
,FullName
,Type
,Classification
FROM {multiple tables}
Declare @temp table (id int, fullname varchar(50), type varchar(10));
INSERT INTO dbo.ReportColumns
(
FullName
,Type
,Classification
)
OUTPUT INSERTED.id, INSERTED.fullname, INSERTED.type INTO @temp
SELECT
FullName
,Type
,Classification
FROM @staging
INSERT into dbo.Transactions (id, date, amount)
select t.id, s.datecolumn, s.amount from #temp t
inner join @staging s on t.fullname = s.fullname and t.type = s.type
I am fairly certain you will need to have two inserts (or create a view and use an INSTEAD OF INSERT trigger; a rough sketch of that is below). You can only use the OUTPUT clause to send variables or actual inserted values to another table. You can't use it to split up a select into two destination tables during an insert.
If you provide more information (like how the table has been split up and how the rows are related) we can probably provide a more specific answer.
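For what it's worth, here is a rough sketch of the view + INSTEAD OF INSERT trigger idea. The object and column names below (the view name, an Id key on ReportColumns, and FullName/Type as the join-back key) are assumptions for illustration only, since the actual split isn't shown:
CREATE VIEW dbo.ReportColumnsWithTransactions
AS
SELECT rc.FullName, rc.Type, rc.Classification, t.[Date], t.Amount
FROM dbo.ReportColumns rc
JOIN dbo.Transactions t ON t.Id = rc.Id;
GO
CREATE TRIGGER dbo.trg_ReportColumnsWithTransactions_Insert
ON dbo.ReportColumnsWithTransactions
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- capture the new ReportColumns keys so the second insert can map rows back
    DECLARE @new TABLE (Id int, FullName varchar(255), Type varchar(50));

    INSERT INTO dbo.ReportColumns (FullName, Type, Classification)
    OUTPUT INSERTED.Id, INSERTED.FullName, INSERTED.Type INTO @new
    SELECT FullName, Type, Classification FROM inserted;

    INSERT INTO dbo.Transactions (Id, [Date], Amount)
    SELECT n.Id, i.[Date], i.Amount
    FROM inserted i
    JOIN @new n ON n.FullName = i.FullName AND n.Type = i.Type;
END
Callers then keep issuing a single INSERT against the view, and the trigger distributes the columns across the two tables.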
Very simplified, I have two tables Source and Target.
declare @Source table (SourceID int identity(1,2), SourceName varchar(50))
declare @Target table (TargetID int identity(2,2), TargetName varchar(50))
insert into @Source values ('Row 1'), ('Row 2')
I would like to move all rows from @Source to @Target and know the TargetID for each SourceID, because there are also the tables SourceChild and TargetChild that need to be copied as well, and I need to add the new TargetID into the TargetChild.TargetID FK column.
There are a couple of solutions to this.
Use a while loop or cursors to insert one row (RBAR) to Target at a time and use scope_identity() to fill the FK of TargetChild.
Add a temp column to @Target and insert SourceID. You can then join that column to fetch the TargetID for the FK in TargetChild (see the sketch below).
SET IDENTITY_INSERT ON for @Target and handle assigning new values yourself. You get a range that you then use in TargetChild.TargetID.
I'm not all that fond of any of them. The one I used so far is cursors.
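For illustration, option 2 could look roughly like the sketch below. It uses hypothetical permanent tables (dbo.Source, dbo.Target, dbo.SourceChild, dbo.TargetChild and a ChildName column), since a column can't be added to a table variable after it is declared:
-- add a throwaway mapping column to the target
ALTER TABLE dbo.Target ADD SourceID int NULL;
GO
-- copy the rows, carrying the source key along
INSERT INTO dbo.Target (TargetName, SourceID)
SELECT SourceName, SourceID FROM dbo.Source;

-- use the mapping to copy the children with the new FK
INSERT INTO dbo.TargetChild (TargetID, ChildName)
SELECT t.TargetID, sc.ChildName
FROM dbo.SourceChild sc
JOIN dbo.Target t ON t.SourceID = sc.SourceID;

-- drop the mapping column again
ALTER TABLE dbo.Target DROP COLUMN SourceID;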
What I would really like to do is to use the output clause of the insert statement.
insert into @Target(TargetName)
output inserted.TargetID, S.SourceID
select SourceName
from @Source as S
But it is not possible
The multi-part identifier "S.SourceID" could not be bound.
But it is possible with a merge.
merge @Target as T
using @Source as S
on 0=1
when not matched then
insert (TargetName) values (SourceName)
output inserted.TargetID, S.SourceID;
Result
TargetID SourceID
----------- -----------
2 1
4 3
I want to know if you have used this, and whether you have any thoughts about the solution or see any problems with it. It works fine in simple scenarios, but perhaps something ugly could happen when the query plan gets really complicated due to a complicated source query. The worst scenario would be that the TargetID/SourceID pairs don't actually match.
MSDN has this to say about the from_table_name of the output clause.
Is a column prefix that specifies a table included in the FROM clause of a DELETE, UPDATE, or MERGE statement that is used to specify the rows to update or delete.
For some reason they don't say "rows to insert, update or delete" only "rows to update or delete".
Any thoughts are welcome, and totally different solutions to the original problem are much appreciated.
In my opinion this is a great use of MERGE and OUTPUT. I've used it in several scenarios and haven't experienced any oddities to date.
For example, here is a test setup that clones a Folder and all Files (identity) within it into a newly created Folder (guid).
DECLARE @FolderIndex TABLE (FolderId UNIQUEIDENTIFIER PRIMARY KEY, FolderName varchar(25));
INSERT INTO @FolderIndex
(FolderId, FolderName)
VALUES(newid(), 'OriginalFolder');
DECLARE @FileIndex TABLE (FileId int identity(1,1) PRIMARY KEY, FileName varchar(10));
INSERT INTO @FileIndex
(FileName)
VALUES('test.txt');
DECLARE @FileFolder TABLE (FolderId UNIQUEIDENTIFIER, FileId int, PRIMARY KEY(FolderId, FileId));
INSERT INTO @FileFolder
(FolderId, FileId)
SELECT FolderId,
FileId
FROM @FolderIndex
CROSS JOIN @FileIndex; -- just to illustrate
DECLARE @sFolder TABLE (FromFolderId UNIQUEIDENTIFIER, ToFolderId UNIQUEIDENTIFIER);
DECLARE @sFile TABLE (FromFileId int, ToFileId int);
-- copy Folder Structure
MERGE @FolderIndex fi
USING ( SELECT 1 [Dummy],
FolderId,
FolderName
FROM @FolderIndex [fi]
WHERE FolderName = 'OriginalFolder'
) d ON d.Dummy = 0
WHEN NOT MATCHED
THEN INSERT
(FolderId, FolderName)
VALUES (newid(), 'copy_'+FolderName)
OUTPUT d.FolderId,
INSERTED.FolderId
INTO @sFolder (FromFolderId, toFolderId);
-- copy File structure
MERGE @FileIndex fi
USING ( SELECT 1 [Dummy],
fi.FileId,
fi.[FileName]
FROM @FileIndex fi
INNER
JOIN @FileFolder fm ON
fi.FileId = fm.FileId
INNER
JOIN @FolderIndex fo ON
fm.FolderId = fo.FolderId
WHERE fo.FolderName = 'OriginalFolder'
) d ON d.Dummy = 0
WHEN NOT MATCHED
THEN INSERT ([FileName])
VALUES ([FileName])
OUTPUT d.FileId,
INSERTED.FileId
INTO @sFile (FromFileId, toFileId);
-- link new files to Folders
INSERT INTO @FileFolder (FileId, FolderId)
SELECT sfi.toFileId, sfo.toFolderId
FROM @FileFolder fm
INNER
JOIN @sFile sfi ON
fm.FileId = sfi.FromFileId
INNER
JOIN @sFolder sfo ON
fm.FolderId = sfo.FromFolderId
-- return
SELECT *
FROM @FileIndex fi
JOIN @FileFolder ff ON
fi.FileId = ff.FileId
JOIN @FolderIndex fo ON
ff.FolderId = fo.FolderId
I would like to add another example to @Nathan's example, as I found it somewhat confusing.
Mine uses real tables for the most part, and not temp tables.
I also got my inspiration from here: another example
-- Copy the FormSectionInstance
DECLARE @FormSectionInstanceTable TABLE(OldFormSectionInstanceId INT, NewFormSectionInstanceId INT)
;MERGE INTO [dbo].[FormSectionInstance]
USING
(
SELECT
fsi.FormSectionInstanceId [OldFormSectionInstanceId]
, @NewFormHeaderId [NewFormHeaderId]
, fsi.FormSectionId
, fsi.IsClone
, @UserId [NewCreatedByUserId]
, GETDATE() NewCreatedDate
, @UserId [NewUpdatedByUserId]
, GETDATE() NewUpdatedDate
FROM [dbo].[FormSectionInstance] fsi
WHERE fsi.[FormHeaderId] = @FormHeaderId
) tblSource ON 1=0 -- use always false condition
WHEN NOT MATCHED
THEN INSERT
( [FormHeaderId], FormSectionId, IsClone, CreatedByUserId, CreatedDate, UpdatedByUserId, UpdatedDate)
VALUES( [NewFormHeaderId], FormSectionId, IsClone, NewCreatedByUserId, NewCreatedDate, NewUpdatedByUserId, NewUpdatedDate)
OUTPUT tblSource.[OldFormSectionInstanceId], INSERTED.FormSectionInstanceId
INTO @FormSectionInstanceTable(OldFormSectionInstanceId, NewFormSectionInstanceId);
-- Copy the FormDetail
INSERT INTO [dbo].[FormDetail]
(FormHeaderId, FormFieldId, FormSectionInstanceId, IsOther, Value, CreatedByUserId, CreatedDate, UpdatedByUserId, UpdatedDate)
SELECT
@NewFormHeaderId, FormFieldId, fsit.NewFormSectionInstanceId, IsOther, Value, @UserId, CreatedDate, @UserId, UpdatedDate
FROM [dbo].[FormDetail] fd
INNER JOIN @FormSectionInstanceTable fsit ON fsit.OldFormSectionInstanceId = fd.FormSectionInstanceId
WHERE [FormHeaderId] = @FormHeaderId
Here's a solution that doesn't use MERGE (which I've had problems with so many times that I try to avoid it if possible). It relies on two memory tables (you could use temp tables if you want) with IDENTITY columns that get matched, and, importantly, on using ORDER BY when doing the INSERT and WHERE conditions that match between the two INSERTs... the first table holds the source IDs and the second one holds the target IDs.
-- Setup... We have a table that we need to know the old IDs and new IDs after copying.
-- We want to copy all of DocID=1
DECLARE @newDocID int = 99;
DECLARE @tbl table (RuleID int PRIMARY KEY NOT NULL IDENTITY(1, 1), DocID int, Val varchar(100));
INSERT INTO @tbl (DocID, Val) VALUES (1, 'RuleA-2'), (1, 'RuleA-1'), (2, 'RuleB-1'), (2, 'RuleB-2'), (3, 'RuleC-1'), (1, 'RuleA-3')
-- Create a break in IDENTITY values.. just to simulate more realistic data
INSERT INTO @tbl (Val) VALUES ('DeleteMe'), ('DeleteMe');
DELETE FROM @tbl WHERE Val = 'DeleteMe';
INSERT INTO @tbl (DocID, Val) VALUES (6, 'RuleE'), (7, 'RuleF');
SELECT * FROM @tbl t;
-- Declare TWO temp tables each with an IDENTITY - one will hold the RuleID of the items we are copying, other will hold the RuleID that we create
DECLARE @input table (RID int IDENTITY(1, 1), SourceRuleID int NOT NULL, Val varchar(100));
DECLARE @output table (RID int IDENTITY(1,1), TargetRuleID int NOT NULL, Val varchar(100));
-- Capture the IDs of the rows we will be copying by inserting them into the @input table
-- Important - we must specify the sort order - best thing is to use the IDENTITY of the source table (t.RuleID) that we are copying
INSERT INTO @input (SourceRuleID, Val) SELECT t.RuleID, t.Val FROM @tbl t WHERE t.DocID = 1 ORDER BY t.RuleID;
-- Copy the rows, and use the OUTPUT clause to capture the IDs of the inserted rows.
-- Important - we must use the same WHERE and ORDER BY clauses as above
INSERT INTO @tbl (DocID, Val)
OUTPUT Inserted.RuleID, Inserted.Val INTO @output(TargetRuleID, Val)
SELECT @newDocID, t.Val FROM @tbl t
WHERE t.DocID = 1
ORDER BY t.RuleID;
-- Now @input and @output should have the same # of rows, and the order of both inserts was the same, so the IDENTITY columns (RID) can be matched
-- Use this as the map from old-to-new when you are copying sub-table rows
-- Technically, @input and @output don't even need the 'Val' columns, just RID and RuleID - they were included here to prove that the rules matched
SELECT i.*, o.* FROM @output o
INNER JOIN @input i ON i.RID = o.RID
-- Confirm the matching worked
SELECT * FROM @tbl t
I'm trying to concatenate several columns from a persistent table into one column of a table variable, so that I can run a CONTAINS("foo" AND "bar") and get a result even if foo is not in the same column as bar.
However, it isn't possible to create a unique index on a table variable, hence no full-text index to run a CONTAINS.
Is there a way to dynamically concatenate several columns and run a CONTAINS on them? Here's an example:
declare @t0 table
(
id uniqueidentifier not null,
search_text varchar(max)
)
declare @t1 table ( id uniqueidentifier )
insert into
@t0 (id, search_text)
select
id,
foo + bar
from
description_table
insert into
@t1
select
id
from
@t0
where
contains( search_text, '"c++*" AND "programming*"' )
You cannot use CONTAINS on a table that has not been configured to use Full Text Indexing, and that cannot be applied to table variables.
If you want to use CONTAINS (as opposed to the less flexible PATINDEX) you will need to base the whole query on a table with a FT index.
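A rough sketch of that approach follows; the staging table, the catalog name, and the foo + bar concatenation are placeholders, and full-text population is asynchronous, so a one-off script may need to wait for the index to populate before querying it:
-- stage the concatenated text in a real table that can carry a full-text index
CREATE TABLE dbo.search_staging
(
    id uniqueidentifier NOT NULL CONSTRAINT PK_search_staging PRIMARY KEY,
    search_text varchar(max)
);

CREATE FULLTEXT CATALOG ft_staging_catalog; -- skip if a suitable catalog already exists
CREATE FULLTEXT INDEX ON dbo.search_staging (search_text)
    KEY INDEX PK_search_staging ON ft_staging_catalog;

INSERT INTO dbo.search_staging (id, search_text)
SELECT id, foo + ' ' + bar
FROM description_table;

-- once the index has populated, CONTAINS works against the staging table
SELECT id
FROM dbo.search_staging
WHERE CONTAINS(search_text, '"c++*" AND "programming*"');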
You can't use full text indexing on a table variable but you can apply the full text parser. Would something like this do what you need?
declare @d table
(
id int identity(1,1),
testing varchar(1000)
)
INSERT INTO @d VALUES ('c++ programming')
INSERT INTO @d VALUES ('c# programming')
INSERT INTO @d VALUES ('c++ books')
SELECT id
FROM @d
CROSS APPLY sys.dm_fts_parser('"' + REPLACE(testing,'"','""') + '"', 1033, 0,0)
where display_term in ('c++','programming')
GROUP BY id
HAVING COUNT(DISTINCT display_term)=2
NB: There might well be a better way of using the parser but I couldn't quite figure it out. Details of it are at this link
declare @table table
(
id int,
fname varchar(50)
)
insert into @table select 1, 'Adam James Will'
insert into @table select 1, 'Jain William'
insert into @table select 1, 'Bob Rob James'
select * from @table where fname like '%ja%' and fname like '%wi%'
Is it something like this?