Comma Separated String from Junction Table Inner Query (SQL Server)

I've searched many threads, sites, and blogs, as well as the API, and can't seem to figure this one out. I want to create a CSV string from rows in a junction table, but here's the catch: it needs to be for an entire set of data. I know it's confusing, so here's the breakdown.
Let's use the generic example with the following tables:
Students (ID, Name, Gpa)
Classes (ID, Dept, Level)
StudentsInClasses (StudentID, ClassID)
Say I want to query for all classes, but within that query (for each record) I want to also get a CSV string of the Student IDs in each of the classes. The results should look something like this:
Class StudentsID_CSVString
----- --------------------
BA302 1,2,3,4,5,6,7,8,9
BA479 4,7,9,12,15
BA483 7,9,12,18
I tried to use a coalesce statement like the following, but I can't get it to work because of the goofy syntax for coalesce. Is there a better way to accomplish this goal or am I just making a dumb syntactical error?
declare @StudentsID_CSVString varchar(128)
select Dept + Level as 'Class',
coalesce(@StudentsID_CSVString + ',', '')
+ CAST(StudentID as varchar(8)) as 'StudentsID_CSVString'
from Classes
left outer join StudentsInClasses ON Classes.ID = StudentsInClasses.ClassID
I used the following code to test:
declare @Students table (ID int, Name varchar(50), Gpa decimal(3,2))
declare @Classes table (ID int, Dept varchar(4), [Level] varchar(3))
declare @StudentsInClasses table (StudentID int, ClassID int)
declare @StudentsID_CSVString varchar(128)
insert into @Students (ID) values (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),(12),(13),(14),(15),(16),(17),(18)
insert into @Classes (ID, Dept, [Level]) values (1, 'BA', '302'), (2, 'BA', '379'), (3, 'BA', '483')
insert into @StudentsInClasses (StudentID, ClassID) values
(1,1),(2,1),(3,1),(4,1),(5,1),(6,1),(7,1),(8,1),(9,1),
(4,2),(7,2),(9,2),(12,2),(15,2),
(7,3),(9,3),(12,3),(18,3)

With the technique of filling a variable @StudentsID_CSVString with a comma-separated string, you will only be able to get the string for one class at a time. To get more than one class in the result set you can use FOR XML PATH.
select C.Dept+C.Level as Class,
stuff((select ','+cast(S.StudentID as varchar(10))
from StudentsInClasses as S
where S.ClassID = C.ID
for xml path('')), 1, 1, '') as StudentsID_CSVString
from Classes as C
Test the query here: https://data.stackexchange.com/stackoverflow/q/115573/
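As a side note: on SQL Server 2017 and later, STRING_AGG can express the same thing more directly. This is only a sketch against the same tables, not part of the original answer; the test link above still uses the FOR XML PATH version.
select C.Dept + C.[Level] as Class,
       (select string_agg(cast(S.StudentID as varchar(10)), ',')
                  within group (order by S.StudentID)
        from StudentsInClasses as S
        where S.ClassID = C.ID) as StudentsID_CSVString
from Classes as C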

Related

Best way to insert the procedure results into a log table, done within the proc itself?

Ok, I have been requested to add a logging process inside the procedure (I didn't come up with this; I don't think it's the best thing to do, but I am expected to do it at work). I will try to explain with a very simplified example:
CREATE TABLE testLog (
results varchar(200)
);
CREATE TABLE Person (
PersonID int,
name varchar(50),
pAddress varchar(50),
pPhone varchar(10)
);
INSERT INTO Person
VALUES (1, 'Anne', '123 St', '1111111111'),
(2, 'Peter', 'XYZ St', '222222222'),
(3, 'Jason', '890 St', '3333333333');
-------------------------------------------------
CREATE PROCEDURE SpcDetailsList @inPersonID int
AS
BEGIN
SELECT TOP 1
PersonID,
name,
pAddress,
pPhone
FROM Person p
WHERE p.PersonID = @inPersonID;
END;
Ok, this is what I have: you pass in the parameter value and get something back (the actual procedure is confirmed to always return a single row or nothing). But now, additionally, I have to make the procedure log itself. And this is how I am planning to do it:
ALTER PROCEDURE SpcDetailsList @inPersonID int
AS
BEGIN
DECLARE @PersonID int,
@name varchar(50),
@pAddress varchar(50),
@pPhone varchar(10);
SELECT
@PersonID = PersonID,
@name = name,
@pAddress = pAddress,
@pPhone = pPhone
FROM Person p
WHERE p.PersonID = @inPersonID;
INSERT INTO testLog (results)
VALUES (CAST(@PersonID AS varchar(10)) + ' ' + @name + ' ' + @pAddress + ' ' + @pPhone);
SELECT
@PersonID AS PersonID,
@name AS name,
@pAddress AS pAddress,
@pPhone AS pPhone
END
The actual procedure can return about 10 to 15 fields. I have to cast them and replace NULL for each of them (which I am not doing in this example), then concatenate them (the result needs to be inserted into a single field as one string) to insert into the log. This doesn't look nice to me. Are there better options to do this exact same thing with good performance?
Edit: I didn't mention here that I have about 10 procedures with different numbers of outputs, which will all have to be inserted into the same log table. The actual log table also has more columns, including the SP name, the parameter values passed in, etc.
If you have some latitude on what your log looks like, I'd recommend outputting it to XML. It does all the formatting, casting, etc. for you. It's also parse-able. You can also use a temporary table to eliminate your need to declare variables for all the fields. Something like this:
SELECT TOP 1
PersonID,
name,
pAddress,
pPhone
into #temp
FROM Person p
WHERE p.PersonID = @inPersonID
INSERT INTO testLog VALUES((
SELECT *
FROM #temp
FOR XML AUTO
))
select *
from #temp
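Putting those pieces together, the altered procedure might look roughly like this. This is only a sketch reusing the names from the question; without the TYPE directive the FOR XML subquery returns nvarchar, so it is implicitly converted on insert (and may be truncated by the varchar(200) results column):
ALTER PROCEDURE SpcDetailsList @inPersonID int
AS
BEGIN
    SELECT TOP 1 PersonID, name, pAddress, pPhone
    INTO #temp
    FROM Person p
    WHERE p.PersonID = @inPersonID;

    -- log the whole row as a single XML string
    INSERT INTO testLog (results)
    SELECT (SELECT * FROM #temp AS Person FOR XML AUTO);

    -- return the result set to the caller as before
    SELECT * FROM #temp;
END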

Insert from single table into multiple tables, invalid column name error

I am trying to do the following but getting an "Invalid Column Name {column}" error. Can someone please help me see the error of my ways? We recently split a transaction table into two tables, one containing the often-updated report column names and the other containing the unchanging transactions. This leaves me trying to change what was a simple insert into one table into a complex insert into two tables with unique columns. I attempted to do that like so:
INSERT INTO dbo.ReportColumns
(
FullName
,Type
,Classification
)
OUTPUT INSERTED.Date, INSERTED.Amount, INSERTED.Id INTO dbo.Transactions
SELECT
[Date]
,Amount
,FullName
,Type
,Classification
FROM {multiple tables}
The "INSERTED.Date, INSERTED.Amount" are the source of the errors, with or without the "INSERTED." in front.
-----------------UPDATE------------------
Aaron was correct: it was impossible to manage with a single insert, but I was able to vastly improve the functionality of the insert and add some other business rules with MERGE. My final solution resembles the following:
DECLARE @TransactionsTemp TABLE
(
[Date] DATE NOT NULL,
Amount MONEY NOT NULL,
ReportColumnsId INT NOT NULL
)
MERGE INTO dbo.ReportColumns AS Trgt
USING ( SELECT
{FK}
,[Date]
,Amount
,FullName
,Type
,Classification
FROM {multiple tables}) AS Src
ON Src.{FK} = Trgt.{FK}
WHEN MATCHED THEN
UPDATE SET
Trgt.FullName = Src.FullName,
Trgt.Type= Src.Type,
Trgt.Classification = Src.Classification
WHEN NOT MATCHED BY TARGET THEN
INSERT
(
FullName,
Type,
Classification
)
VALUES
(
Src.FullName,
Src.Type,
Src.Classification
)
OUTPUT Src.[Date], Src.Amount, INSERTED.Id INTO @TransactionsTemp;
MERGE INTO dbo.FinancialReport AS Trgt
USING (SELECT
[Date] ,
Amount ,
ReportColumnsId
FROM @TransactionsTemp) AS Src
ON Src.[Date] = Trgt.[Date] AND Src.ReportColumnsId = Trgt.ReportColumnsId
WHEN NOT MATCHED BY TARGET And Src.Amount <> 0 THEN
INSERT
(
[Date],
Amount,
ReportColumnsId
)
VALUES
(
Src.[Date],
Src.Amount,
Src.ReportColumnsId
)
WHEN MATCHED And Src.Amount <> 0 THEN
UPDATE SET Trgt.Amount = Src.Amount
WHEN MATCHED And Src.Amount = 0 THEN
DELETE;
Hope that helps someone else in the future. :)
The OUTPUT clause will only return values you are inserting into that table, so you need multiple inserts. You can try something like the following:
declare @staging table (datecolumn date, amount decimal(18,2),
fullname varchar(50), type varchar(10),
Classification varchar(255));
INSERT INTO @staging
SELECT
[Date]
,Amount
,FullName
,Type
,Classification
FROM {multiple tables}
Declare @temp table (id int, fullname varchar(50), type varchar(10));
INSERT INTO dbo.ReportColumns
(
FullName
,Type
,Classification
)
OUTPUT INSERTED.id, INSERTED.fullname, INSERTED.type INTO @temp
SELECT
FullName
,Type
,Classification
FROM @staging
INSERT into dbo.Transactions (id, date, amount)
select t.id, s.datecolumn, s.amount from @temp t
inner join @staging s on t.fullname = s.fullname and t.type = s.type
I am fairly certain you will need to have two inserts (or create a view and use an INSTEAD OF INSERT trigger). You can only use the OUTPUT clause to send variables or actual inserted values to another table. You can't use it to split up a select into two destination tables during an insert.
If you provide more information (like how the table has been split up and how the rows are related) we can probably provide a more specific answer.
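For completeness, here is a rough sketch of the "view plus INSTEAD OF INSERT trigger" alternative mentioned above. The view name, the column widths, and the assumption that Transactions.Id references ReportColumns.Id are guesses based on the question, and the trigger relies on FullName + Type uniquely identifying a row within each insert batch:
CREATE VIEW dbo.ReportTransactions
AS
SELECT rc.FullName, rc.Type, rc.Classification, t.[Date], t.Amount
FROM dbo.ReportColumns rc
JOIN dbo.Transactions t ON t.Id = rc.Id;
GO
CREATE TRIGGER dbo.trg_ReportTransactions_Insert
ON dbo.ReportTransactions
INSTEAD OF INSERT
AS
BEGIN
    DECLARE @map TABLE (Id int, FullName varchar(100), Type varchar(50));

    -- insert the report-column rows first and capture the new identity values
    INSERT INTO dbo.ReportColumns (FullName, Type, Classification)
    OUTPUT INSERTED.Id, INSERTED.FullName, INSERTED.Type INTO @map
    SELECT FullName, Type, Classification FROM inserted;

    -- then insert the transaction rows, joined back on the natural key
    INSERT INTO dbo.Transactions (Id, [Date], Amount)
    SELECT m.Id, i.[Date], i.Amount
    FROM inserted i
    JOIN @map m ON m.FullName = i.FullName AND m.Type = i.Type;
END;
GO
The original INSERT ... SELECT FROM {multiple tables} could then target dbo.ReportTransactions directly and the trigger would split it.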

Stored procedure: Return column values as comma separated values [duplicate]

This question already has answers here:
How to concatenate text from multiple rows into a single text string in SQL Server
(47 answers)
Closed 7 years ago.
If I issue SELECT username FROM Users I get this result:
username
--------
Paul
John
Mary
but what I really need is one row with all the values separated by comma, like this:
Paul, John, Mary
How do I do this?
select
distinct
stuff((
select ',' + u.username
from users u
where u.username = username
order by u.username
for xml path('')
),1,1,'') as userlist
from users
group by username
had a typo before, the above works
This should work for you. Tested all the way back to SQL 2000.
create table #user (username varchar(25))
insert into #user (username) values ('Paul')
insert into #user (username) values ('John')
insert into #user (username) values ('Mary')
declare @tmp varchar(250)
SET @tmp = ''
select @tmp = @tmp + username + ', ' from #user
select SUBSTRING(@tmp, 0, LEN(@tmp))
good review of several approaches:
http://blogs.msmvps.com/robfarley/2007/04/07/coalesce-is-not-the-answer-to-string-concatentation-in-t-sql/
Article copy -
Coalesce is not the answer to string concatenation in T-SQL
I've seen many posts over the years about using the COALESCE function to get string concatenation working in T-SQL. This is one of the examples here (borrowed from Readifarian Marc Ridey).
DECLARE @categories varchar(200)
SET @categories = NULL
SELECT @categories = COALESCE(@categories + ',','') + Name
FROM Production.ProductCategory
SELECT @categories
This query can be quite effective, but care needs to be taken, and the use of COALESCE should be properly understood. COALESCE is the version of ISNULL which can take more than two parameters. It returns the first thing in the list of parameters which is not null. So really it has nothing to do with concatenation, and the following piece of code is exactly the same - without using COALESCE:
DECLARE @categories varchar(200)
SET @categories = ''
SELECT @categories = @categories + ',' + Name
FROM Production.ProductCategory
SELECT @categories
But the unordered nature of databases makes this unreliable. The whole reason why T-SQL doesn't (yet) have a concatenate function is that this is an aggregate for which the order of elements is important. Using this variable-assignment method of string concatenation, you may actually find that the answer that gets returned doesn't have all the values in it, particularly if you want the substrings put in a particular order. Consider the following, which on my machine only returns ',Accessories', when I wanted it to return ',Bikes,Clothing,Components,Accessories':
DECLARE @categories varchar(200)
SET @categories = NULL
SELECT @categories = COALESCE(@categories + ',','') + Name
FROM Production.ProductCategory
ORDER BY LEN(Name)
SELECT @categories
Far better is to use a method which does take order into consideration, and which has been included in SQL2005 specifically for the purpose of string concatenation - FOR XML PATH('')
SELECT ',' + Name
FROM Production.ProductCategory
ORDER BY LEN(Name)
FOR XML PATH('')
In the post I made recently comparing GROUP BY and DISTINCT when using subqueries, I demonstrated the use of FOR XML PATH(''). Have a look at this and you'll see how it works in a subquery. The 'STUFF' function is only there to remove the leading comma.
USE tempdb;
GO
CREATE TABLE t1 (id INT, NAME VARCHAR(MAX));
INSERT t1 values (1,'Jamie');
INSERT t1 values (1,'Joe');
INSERT t1 values (1,'John');
INSERT t1 values (2,'Sai');
INSERT t1 values (2,'Sam');
GO
select
id,
stuff((
select ',' + t.[name]
from t1 t
where t.id = t1.id
order by t.[name]
for xml path('')
),1,1,'') as name_csv
from t1
group by id
;
FOR XML PATH is one of the only situations in which you can use ORDER BY in a subquery. The other is TOP. And when you use an unnamed column and FOR XML PATH(''), you will get a straight concatenation, with no XML tags. This does mean that the strings will be HTML Encoded, so if you're concatenating strings which may have the < character (etc), then you should maybe fix that up afterwards, but either way, this is still the best way of concatenating strings in SQL Server 2005.
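If the encoding is a problem, one common fix (shown here only as a sketch against the same table as the article) is to add the TYPE directive and extract the value back out of the XML, which undoes the entitization:
SELECT STUFF((
    SELECT ',' + Name
    FROM Production.ProductCategory
    ORDER BY LEN(Name)
    FOR XML PATH(''), TYPE
    ).value('.', 'varchar(max)'), 1, 1, '')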
Building on mwigdahl's answer, if you also need to do grouping, here is how to get it to look like:
group, csv
'group1', 'paul, john'
'group2', 'mary'
--drop table #user
create table #user (groupName varchar(25), username varchar(25))
insert into #user (groupname, username) values ('apostles', 'Paul')
insert into #user (groupname, username) values ('apostles', 'John')
insert into #user (groupname, username) values ('family','Mary')
select
g1.groupname
, stuff((
select ', ' + g.username
from #user g
where g.groupName = g1.groupname
order by g.username
for xml path('')
),1,2,'') as name_csv
from #user g1
group by g1.groupname
You can use this query to do the above task:
DECLARE @test NVARCHAR(max)
SELECT @test = COALESCE(@test + ',', '') + field2 FROM #test
SELECT field2 = @test
For detail and step by step explanation visit the following link
http://oops-solution.blogspot.com/2011/11/sql-server-convert-table-column-data.html
DECLARE @EmployeeList varchar(100)
SELECT @EmployeeList = COALESCE(@EmployeeList + ', ', '') +
CAST(Emp_UniqueID AS varchar(5))
FROM SalesCallsEmployees
WHERE SalCal_UniqueID = 1
SELECT @EmployeeList
source:
http://www.sqlteam.com/article/using-coalesce-to-build-comma-delimited-string
In SQLite this is simpler. I think there are similar implementations for MySQL, MSSQL and Oracle.
CREATE TABLE Beatles (id integer, name string );
INSERT INTO Beatles VALUES (1, "Paul");
INSERT INTO Beatles VALUES (2, "John");
INSERT INTO Beatles VALUES (3, "Ringo");
INSERT INTO Beatles VALUES (4, "George");
SELECT GROUP_CONCAT(name, ',') FROM Beatles;
You can use STUFF() to convert rows to comma-separated values:
select
EmployeeID,
stuff((
SELECT ',' + FPProjectMaster.GroupName
FROM FPProjectInfo AS t INNER JOIN
FPProjectMaster ON t.ProjectID = FPProjectMaster.ProjectID
WHERE (t.EmployeeID = FPProjectInfo.EmployeeID)
And t.STatusID = 1
ORDER BY t.ProjectID
for xml path('')
),1,1,'') as name_csv
from FPProjectInfo
group by EmployeeID;
Thanks @AlexKuznetsov for the reference to get this answer.
A clean and flexible solution in MS SQL Server 2005/2008 is to create a CLR aggregate function.
You'll find quite a few articles (with code) on google.
It looks like this article walks you through the whole process using C#.
If you're executing this through PHP, what about this?
$hQuery = mysql_query("SELECT * FROM users");
$hOut = "";
while($hRow = mysql_fetch_array($hQuery)) {
$hOut .= $hRow['username'] . ", ";
}
$hOut = substr($hOut, 0, strlen($hOut) - 2);
echo $hOut;

Using merge..output to get mapping between source.id and target.id

Very simplified, I have two tables Source and Target.
declare @Source table (SourceID int identity(1,2), SourceName varchar(50))
declare @Target table (TargetID int identity(2,2), TargetName varchar(50))
insert into @Source values ('Row 1'), ('Row 2')
I would like to move all rows from @Source to @Target and know the TargetID for each SourceID, because there are also the tables SourceChild and TargetChild that need to be copied as well, and I need to add the new TargetID into the TargetChild.TargetID FK column.
There are a couple of solutions to this.
Use a while loop or cursors to insert one row (RBAR) to Target at a time and use scope_identity() to fill the FK of TargetChild.
Add a temp column to @Target and insert SourceID. You can then join on that column to fetch the TargetID for the FK in TargetChild.
SET IDENTITY_INSERT ON for @Target and handle assigning the new values yourself. You get a range that you then use in TargetChild.TargetID.
I'm not all that fond of any of them. The one I have used so far is cursors.
What I would really like to do is to use the output clause of the insert statement.
insert into @Target(TargetName)
output inserted.TargetID, S.SourceID
select SourceName
from @Source as S
But it is not possible
The multi-part identifier "S.SourceID" could not be bound.
But it is possible with a merge.
merge @Target as T
using @Source as S
on 0=1
when not matched then
insert (TargetName) values (SourceName)
output inserted.TargetID, S.SourceID;
Result
TargetID SourceID
----------- -----------
2 1
4 3
I want to know if you have used this, whether you have any thoughts about the solution, and whether you see any problems with it. It works fine in simple scenarios, but perhaps something ugly could happen when the query plan gets really complicated due to a complicated source query. The worst scenario would be that the TargetID/SourceID pairs actually aren't a match.
MSDN has this to say about the from_table_name of the output clause.
Is a column prefix that specifies a table included in the FROM clause of a DELETE, UPDATE, or MERGE statement that is used to specify the rows to update or delete.
For some reason they don't say "rows to insert, update or delete" only "rows to update or delete".
Any thoughts are welcome, and totally different solutions to the original problem are much appreciated.
In my opinion this is a great use of MERGE and OUTPUT. I've used it in several scenarios and haven't experienced any oddities to date.
For example, here is a test setup that clones a Folder and all Files (identity) within it into a newly created Folder (guid).
DECLARE @FolderIndex TABLE (FolderId UNIQUEIDENTIFIER PRIMARY KEY, FolderName varchar(25));
INSERT INTO @FolderIndex
(FolderId, FolderName)
VALUES(newid(), 'OriginalFolder');
DECLARE @FileIndex TABLE (FileId int identity(1,1) PRIMARY KEY, FileName varchar(10));
INSERT INTO @FileIndex
(FileName)
VALUES('test.txt');
DECLARE @FileFolder TABLE (FolderId UNIQUEIDENTIFIER, FileId int, PRIMARY KEY(FolderId, FileId));
INSERT INTO @FileFolder
(FolderId, FileId)
SELECT FolderId,
FileId
FROM @FolderIndex
CROSS JOIN @FileIndex; -- just to illustrate
DECLARE @sFolder TABLE (FromFolderId UNIQUEIDENTIFIER, ToFolderId UNIQUEIDENTIFIER);
DECLARE @sFile TABLE (FromFileId int, ToFileId int);
-- copy Folder Structure
MERGE @FolderIndex fi
USING ( SELECT 1 [Dummy],
FolderId,
FolderName
FROM @FolderIndex [fi]
WHERE FolderName = 'OriginalFolder'
) d ON d.Dummy = 0
WHEN NOT MATCHED
THEN INSERT
(FolderId, FolderName)
VALUES (newid(), 'copy_'+FolderName)
OUTPUT d.FolderId,
INSERTED.FolderId
INTO @sFolder (FromFolderId, toFolderId);
-- copy File structure
MERGE @FileIndex fi
USING ( SELECT 1 [Dummy],
fi.FileId,
fi.[FileName]
FROM @FileIndex fi
INNER
JOIN @FileFolder fm ON
fi.FileId = fm.FileId
INNER
JOIN @FolderIndex fo ON
fm.FolderId = fo.FolderId
WHERE fo.FolderName = 'OriginalFolder'
) d ON d.Dummy = 0
WHEN NOT MATCHED
THEN INSERT ([FileName])
VALUES ([FileName])
OUTPUT d.FileId,
INSERTED.FileId
INTO @sFile (FromFileId, toFileId);
-- link new files to Folders
INSERT INTO @FileFolder (FileId, FolderId)
SELECT sfi.toFileId, sfo.toFolderId
FROM @FileFolder fm
INNER
JOIN @sFile sfi ON
fm.FileId = sfi.FromFileId
INNER
JOIN @sFolder sfo ON
fm.FolderId = sfo.FromFolderId
-- return
SELECT *
FROM @FileIndex fi
JOIN @FileFolder ff ON
fi.FileId = ff.FileId
JOIN @FolderIndex fo ON
ff.FolderId = fo.FolderId
I would like to add another example to @Nathan's, as I found it somewhat confusing.
Mine uses real tables for the most part, and not temp tables.
I also got my inspiration from here: another example
-- Copy the FormSectionInstance
DECLARE @FormSectionInstanceTable TABLE(OldFormSectionInstanceId INT, NewFormSectionInstanceId INT)
;MERGE INTO [dbo].[FormSectionInstance]
USING
(
SELECT
fsi.FormSectionInstanceId [OldFormSectionInstanceId]
, @NewFormHeaderId [NewFormHeaderId]
, fsi.FormSectionId
, fsi.IsClone
, @UserId [NewCreatedByUserId]
, GETDATE() NewCreatedDate
, @UserId [NewUpdatedByUserId]
, GETDATE() NewUpdatedDate
FROM [dbo].[FormSectionInstance] fsi
WHERE fsi.[FormHeaderId] = @FormHeaderId
) tblSource ON 1=0 -- use an always-false condition
WHEN NOT MATCHED
THEN INSERT
( [FormHeaderId], FormSectionId, IsClone, CreatedByUserId, CreatedDate, UpdatedByUserId, UpdatedDate)
VALUES( [NewFormHeaderId], FormSectionId, IsClone, NewCreatedByUserId, NewCreatedDate, NewUpdatedByUserId, NewUpdatedDate)
OUTPUT tblSource.[OldFormSectionInstanceId], INSERTED.FormSectionInstanceId
INTO @FormSectionInstanceTable(OldFormSectionInstanceId, NewFormSectionInstanceId);
-- Copy the FormDetail
INSERT INTO [dbo].[FormDetail]
(FormHeaderId, FormFieldId, FormSectionInstanceId, IsOther, Value, CreatedByUserId, CreatedDate, UpdatedByUserId, UpdatedDate)
SELECT
@NewFormHeaderId, FormFieldId, fsit.NewFormSectionInstanceId, IsOther, Value, @UserId, CreatedDate, @UserId, UpdatedDate
FROM [dbo].[FormDetail] fd
INNER JOIN @FormSectionInstanceTable fsit ON fsit.OldFormSectionInstanceId = fd.FormSectionInstanceId
WHERE [FormHeaderId] = @FormHeaderId
Here's a solution that doesn't use MERGE (which I've had problems with many times, so I try to avoid it if possible). It relies on two memory tables (you could use temp tables if you want) with IDENTITY columns that get matched, and, importantly, on using ORDER BY when doing the INSERT and WHERE conditions that match between the two INSERTs... the first one holds the source IDs and the second one holds the target IDs.
-- Setup... We have a table that we need to know the old IDs and new IDs after copying.
-- We want to copy all of DocID=1
DECLARE @newDocID int = 99;
DECLARE @tbl table (RuleID int PRIMARY KEY NOT NULL IDENTITY(1, 1), DocID int, Val varchar(100));
INSERT INTO @tbl (DocID, Val) VALUES (1, 'RuleA-2'), (1, 'RuleA-1'), (2, 'RuleB-1'), (2, 'RuleB-2'), (3, 'RuleC-1'), (1, 'RuleA-3')
-- Create a break in IDENTITY values.. just to simulate more realistic data
INSERT INTO @tbl (Val) VALUES ('DeleteMe'), ('DeleteMe');
DELETE FROM @tbl WHERE Val = 'DeleteMe';
INSERT INTO @tbl (DocID, Val) VALUES (6, 'RuleE'), (7, 'RuleF');
SELECT * FROM @tbl t;
-- Declare TWO temp tables each with an IDENTITY - one will hold the RuleID of the items we are copying, other will hold the RuleID that we create
DECLARE @input table (RID int IDENTITY(1, 1), SourceRuleID int NOT NULL, Val varchar(100));
DECLARE @output table (RID int IDENTITY(1,1), TargetRuleID int NOT NULL, Val varchar(100));
-- Capture the IDs of the rows we will be copying by inserting them into the @input table
-- Important - we must specify the sort order - best thing is to use the IDENTITY of the source table (t.RuleID) that we are copying
INSERT INTO @input (SourceRuleID, Val) SELECT t.RuleID, t.Val FROM @tbl t WHERE t.DocID = 1 ORDER BY t.RuleID;
-- Copy the rows, and use the OUTPUT clause to capture the IDs of the inserted rows.
-- Important - we must use the same WHERE and ORDER BY clauses as above
INSERT INTO @tbl (DocID, Val)
OUTPUT Inserted.RuleID, Inserted.Val INTO @output(TargetRuleID, Val)
SELECT @newDocID, t.Val FROM @tbl t
WHERE t.DocID = 1
ORDER BY t.RuleID;
-- Now @input and @output should have the same # of rows, and the order of both inserts was the same, so the IDENTITY columns (RID) can be matched
-- Use this as the map from old-to-new when you are copying sub-table rows
-- Technically, @input and @output don't even need the 'Val' columns, just RID and RuleID - they were included here to prove that the rules matched
SELECT i.*, o.* FROM @output o
INNER JOIN @input i ON i.RID = o.RID
-- Confirm the matching worked
SELECT * FROM @tbl t

CONTAINS search over a table variable or a temp table

I'm trying to concatenate several columns from a persistent table into one column of a table variable, so that I can run a CONTAINS("foo" AND "bar") and get a result even if foo is not in the same column as bar.
However, it isn't possible to create a unique index on a table variable, hence no full-text index to run a CONTAINS against.
Is there a way to dynamically concatenate several columns and run a CONTAINS on them? Here's an example:
declare @t0 table
(
id uniqueidentifier not null,
search_text varchar(max)
)
declare @t1 table ( id uniqueidentifier )
insert into
@t0 (id, search_text)
select
id,
foo + bar
from
description_table
insert into
@t1
select
id
from
@t0
where
contains( search_text, '"c++*" AND "programming*"' )
You cannot use CONTAINS on a table that has not been configured to use Full Text Indexing, and that cannot be applied to table variables.
If you want to use CONTAINS (as opposed to the less flexible PATINDEX) you will need to base the whole query on a table with a FT index.
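To illustrate that suggestion, here is a rough sketch using a real staging table instead of the table variable. The names are made up, it assumes a default full-text catalog already exists, and note that full-text population is asynchronous, so the index may need a moment to populate before CONTAINS returns rows:
CREATE TABLE dbo.SearchStaging
(
    id uniqueidentifier NOT NULL CONSTRAINT PK_SearchStaging PRIMARY KEY,
    search_text varchar(max)
);

CREATE FULLTEXT INDEX ON dbo.SearchStaging (search_text)
    KEY INDEX PK_SearchStaging
    WITH CHANGE_TRACKING AUTO;

INSERT INTO dbo.SearchStaging (id, search_text)
SELECT id, foo + ' ' + bar
FROM description_table;

SELECT id
FROM dbo.SearchStaging
WHERE CONTAINS(search_text, '"c++*" AND "programming*"');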
You can't use full text indexing on a table variable but you can apply the full text parser. Would something like this do what you need?
declare @d table
(
id int identity(1,1),
testing varchar(1000)
)
INSERT INTO @d VALUES ('c++ programming')
INSERT INTO @d VALUES ('c# programming')
INSERT INTO @d VALUES ('c++ books')
SELECT id
FROM @d
CROSS APPLY sys.dm_fts_parser('"' + REPLACE(testing,'"','""') + '"', 1033, 0,0)
where display_term in ('c++','programming')
GROUP BY id
HAVING COUNT(DISTINCT display_term)=2
NB: There might well be a better way of using the parser but I couldn't quite figure it out. Details of it are at this link
declare @table table
(
id int,
fname varchar(50)
)
insert into @table select 1, 'Adam James Will'
insert into @table select 1, 'Jain William'
insert into @table select 1, 'Bob Rob James'
select * from @table where fname like '%ja%' and fname like '%wi%'
Is it something like this?

Resources