How can I insert-select a sequence starting from a variable? - sql-server

There was a question here on SO that has since been removed. While I was researching ways to solve it, I wrote a script that avoids the use of an identity column and uses a sequence instead:
create table table1(Id int primary key, group_id int, Name varchar(64))
insert into table1(Id, group_id, Name) values (1, 1, 'a'), (2, 1, 'b'), (4, 1, 'c'), (8, 1, 'd')
declare @MaxId as int
select @MaxId = max(Id) + 1 from table1
declare @sql varchar(max)
set @sql = N'CREATE SEQUENCE MySequence AS INTEGER START WITH ' + cast(@MaxId as varchar(10))
exec(@sql)
insert into table1(id, group_id, Name)
select next value for MySequence, 2, Name
from table1
where group_id = 1;
This actually works, that is, it successfully inserts four records with dynamically generated ids.
However, the part
declare @sql varchar(max)
set @sql = N'CREATE SEQUENCE MySequence AS INTEGER START WITH ' + cast(@MaxId as varchar(10))
exec(@sql)
is very much counter-intuitive and hacky in my opinion.
Question: Is there a way to define a sequence that starts from a variable's value without generating a text and execute it?

The CREATE SEQUENCE syntax documentation shows that a constant is required, so you cannot specify a variable in the DDL statement.
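As a minimal sketch against the table1 sample above: the variable form is rejected at parse time, so dynamic SQL remains the only route, though sp_executesql reads a little cleaner than exec():
-- Fails with a syntax error: START WITH only accepts a constant
-- CREATE SEQUENCE MySequence AS INTEGER START WITH @MaxId;

-- The dynamic SQL route, using sp_executesql instead of exec()
DECLARE @MaxId int, @sql nvarchar(max);
SELECT @MaxId = MAX(Id) + 1 FROM table1;
SET @sql = N'CREATE SEQUENCE MySequence AS INTEGER START WITH ' + CAST(@MaxId AS nvarchar(10));
EXEC sp_executesql @sql;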

Yes, creating a single-use dynamic sequence is a hack.
Instead use ROW_NUMBER(), something like
use tempdb
drop table if exists table1
go
create table table1(id int, group_id int, name varchar(200))
insert into table1(id,group_id,name) values (1,1,'a')
insert into table1(id,group_id,name) values (2,1,'a')
declare @maxValue int = (select max(id) from table1)
insert into table1(id, group_id, Name)
select @maxValue + row_number() over (order by Id), 2, Name
from table1
where group_id = 1;
select * from table1

Related

How to transform first row as column name?

I hope you are all well.
I would like your help on a data transformation task that I have.
I would like to convert the first row of a table into the column names.
I am working on Azure SQL Server and I get daily data from another service.
This service loads a table that is always of the same form, and I would like to transform the data in the same manner.
Do you have any idea how to do it?
The way to solve this is by using a little dynamic SQL magic:
First, create and populate the sample table (please save us this step in your future questions):
DECLARE @T AS TABLE
(
Row_num int,
Line nvarchar(4000)
);
INSERT INTO @T (Row_Num, Line) VALUES
(1, 'Col1;Col2;Col3'),
(2, 'Val1;Val2;Val3'),
(3, 'Value1;Value2;Value1'),
(4, 'Val A; val B;Val A'),
(5, 'Value A; Value B;Value C');
Then, build a union all query that selects the values from every row but the first, replacing the semicolon (;) separator with a comma (,) surrounded by apostrophes ('). Add an apostrophe before and after the string (which means we are treating all the data as strings):
DECLARE @Sql nvarchar(max) = '';
SELECT @Sql += 'UNION ALL SELECT '''+ REPLACE(Line, ';', ''',''') + ''' '
FROM @T
WHERE Row_Num > 1;
Next, use stuff to replace the first UNION ALL with a common table expression declaration, specifying the column names in the declaration itself. Note that here we don't need the apostrophes anymore; we just replace the semicolon with a comma:
SELECT @Sql = STUFF(@Sql, 1, 10, 'WITH CTE('+ REPLACE(Line, ';', ',') +') AS (') + ') SELECT * FROM CTE'
FROM @T
WHERE Row_Num = 1;
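For clarity, with the sample data above, the generated statement in @Sql now looks roughly like this:
WITH CTE(Col1,Col2,Col3) AS (
SELECT 'Val1','Val2','Val3'
UNION ALL SELECT 'Value1','Value2','Value1'
UNION ALL SELECT 'Val A',' val B','Val A'
UNION ALL SELECT 'Value A',' Value B','Value C'
) SELECT * FROM CTE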
Finally, execute the sql:
EXEC(@Sql)
Results:
Col1 Col2 Col3
Val1 Val2 Val3
Value1 Value2 Value1
Val A val B Val A
Value A Value B Value C
You can see a live demo on rextester.
Another possible approach is to transform your text data into valid JSON arrays and then use OPENJSON() with an explicit schema and dynamic statement.
Working example:
Input:
CREATE TABLE #Data (
RowNum int,
Line nvarchar(max)
)
INSERT INTO #Data
(RowNum, Line)
VALUES
(1, 'ColumnA;ColumnB;ColumnC'),
(2, 'ValueA1;ValueB1;ValueC1'),
(3, 'ValueA2;ValueB2;ValueC2'),
(4, 'ValueA3;ValueB3;ValueC3'),
(5, 'ValueA4;ValueB4;ValueC4'),
(6, 'ValueA5;ValueB5;ValueC5')
T-SQL:
-- Explicit schema generation
DECLARE @schema nvarchar(max)
SELECT @schema = STUFF((
SELECT CONCAT(N',', j.[value], N' nvarchar(max) ''$[', j.[key], N']''')
FROM #Data d
CROSS APPLY OPENJSON(CONCAT(N'["', REPLACE(d.Line, ';', '","'), N'"]')) j
WHERE d.RowNum = 1
FOR XML PATH('')
), 1, 1, N'')
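-- At this point @schema contains (roughly):
-- ColumnA nvarchar(max) '$[0]',ColumnB nvarchar(max) '$[1]',ColumnC nvarchar(max) '$[2]'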
-- Dynamic statement
DECLARE @stm nvarchar(max)
SET @stm = CONCAT(
N'SELECT j.* FROM #Data d ',
N'CROSS APPLY OPENJSON(CONCAT(N''[["'', REPLACE(d.Line, '';'', ''","''), N''"]]'')) ',
N'WITH (',
@schema,
N') j WHERE d.RowNum > 1'
)
-- Execution
EXEC sp_executesql @stm
Output:
-----------------------
ColumnA ColumnB ColumnC
-----------------------
ValueA1 ValueB1 ValueC1
ValueA2 ValueB2 ValueC2
ValueA3 ValueB3 ValueC3
ValueA4 ValueB4 ValueC4
ValueA5 ValueB5 ValueC5
Explanations:
The main part is to transform each row's data into valid JSON arrays. The count of the columns can be different.
Data from the first row will be used for explicit schema generation and values ColumnA;ColumnB;ColumnC are transformed into ["ColumnA","ColumnB","ColumnC"]. Values from subsequent rows ValueA1;ValueB1;ValueC1 are transformed into [["ValueA1","ValueB1","ValueC1"]].
The next simple examples demonstrate how OPENJSON() returns data with the default schema and with an explicit schema:
With default schema:
DECLARE @json nvarchar(max)
SET @json = '["ValueA1", "ValueB1", "ValueC1"]'
SELECT *
FROM OPENJSON(@json)
Output for default schema:
----------------
key value type
----------------
0 ValueA1 1
1 ValueB1 1
2 ValueC1 1
With explicit schema:
SET @json = '[["ValueA1", "ValueB1", "ValueC1"]]'
SELECT *
FROM OPENJSON(@json)
WITH (
ColumnA nvarchar(max) '$[0]',
ColumnB nvarchar(max) '$[1]',
ColumnC nvarchar(max) '$[2]'
)
Output for explicit schema:
-----------------------
ColumnA ColumnB ColumnC
-----------------------
ValueA1 ValueB1 ValueC1

Concatenating with Cursor

I really want to learn and understand how to concatenate strings with the cursor approach.
Here is my table:
declare @t table (id int, city varchar(15))
insert into @t values
(1, 'Rome')
,(1, 'Dallas')
,(2, 'Berlin')
,(2, 'Rome')
,(2, 'Tokyo')
,(3, 'Miami')
,(3, 'Bergen')
I am trying to create a table that has all cities for each ID within one line sorted alphabetically.
ID City
1 Dallas, Rome
2 Berlin, Rome, Tokyo
3 Bergen, Miami
This is my code so far, but it is not working. If somebody could walk me through each step I would be very happy and eager to learn!
set nocount on
declare @tid int
declare @tcity varchar(15)
declare CityCursor CURSOR FOR
select * from @t
order by id, city
open CityCursor
fetch next from CityCursor into @tid, @tcity
while ( @@FETCH_STATUS = 0)
begin
if @tid = @tid -- my idea add all cities in one line within each id
print cast(@tid as varchar(2)) + ', '+ @tcity
else if @tid <> @tid --when it reaches a new id and we went through all cities it starts over for the next line
fetch next from CityCursor into @tid, @tcity
end
close CityCursor
deallocate CityCursor
select * from CityCursor
First, for future readers: A cursor, as Sean Lange wrote in his comment, is the wrong tool for this job. The correct way to do it is using a subquery with for xml.
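For reference, a minimal sketch of that approach against the @t table variable above (one row per id, cities sorted alphabetically):
select t.id,
       stuff((select ', ' + t2.city
              from @t as t2
              where t2.id = t.id
              order by t2.city
              for xml path('')), 1, 2, '') as city
from @t as t
group by t.id;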
However, since you wanted to know how to do it with a cursor, you were actually pretty close. Here is a working example:
set nocount on
declare @prevId int,
@tid int,
@tcity varchar(15)
declare @cursorResult table (id int, city varchar(32))
-- if you are expecting more than two cities for the same id,
-- the city column should be longer
declare CityCursor CURSOR FOR
select * from @t
order by id, city
open CityCursor
fetch next from CityCursor into @tid, @tcity
while ( @@FETCH_STATUS = 0)
begin
if @prevId is null or @prevId != @tid
insert into @cursorResult(id, city) values (@tid, @tcity)
else
update @cursorResult
set city = city +', '+ @tcity
where id = @tid
set @prevId = @tid
fetch next from CityCursor into @tid, @tcity
end
close CityCursor
deallocate CityCursor
select * from @cursorResult
results:
id city
1 Dallas, Rome
2 Berlin, Rome, Tokyo
3 Bergen, Miami
I've used another variable to keep the previous id value, and also inserted the results of the cursor into a table variable.
I have written a nested cursor that iterates over the distinct ids. Although it has performance issues, you can try the following procedure:
CREATE PROCEDURE USP_CITY
AS
BEGIN
set nocount on
declare @mastertid int
declare @detailstid int
declare @tcity varchar(MAX)
declare @finalCity varchar(MAX)
SET @finalCity = ''
declare @t table (id int, city varchar(max))
insert into @t values
(1, 'Rome')
,(1, 'Dallas')
,(2, 'Berlin')
,(2, 'Rome')
,(2, 'Tokyo')
,(3, 'Miami')
,(3, 'Bergen')
declare @finaltable table (id int, city varchar(max))
declare MasterCityCursor CURSOR FOR
select distinct id from @t
order by id
open MasterCityCursor
fetch next from MasterCityCursor into @mastertid
while ( @@FETCH_STATUS = 0)
begin
declare DetailsCityCursor CURSOR FOR
SELECT id,city from @t order by id, city -- order by city so each id's list comes out alphabetically
open DetailsCityCursor
fetch next from DetailsCityCursor into @detailstid,@tcity
while ( @@FETCH_STATUS = 0)
begin
if @mastertid = @detailstid
begin
SET @finalCity = @finalCity + CASE @finalCity WHEN '' THEN +'' ELSE ', ' END + @tcity
end
fetch next from DetailsCityCursor into @detailstid, @tcity
end
insert into @finaltable values(@mastertid,@finalCity)
SET @finalCity = ''
close DetailsCityCursor
deallocate DetailsCityCursor
fetch next from MasterCityCursor into @mastertid
end
close MasterCityCursor
deallocate MasterCityCursor
SELECT * FROM @finaltable
END
If you face any problems, feel free to write in the comments section. Thanks.
Using a cursor for this is probably the slowest possible solution. If performance is important then there are three valid approaches. The first approach is FOR XML without special XML character protection.
declare @t table (id int, city varchar(15))
insert into @t values (1, 'Rome'),(1, 'Dallas'),(2, 'Berlin'),(2, 'Rome'),(2, 'Tokyo'),
(3, 'Miami'),(3, 'Bergen');
SELECT
t.id,
city = STUFF((
SELECT ',' + t2.city
FROM @t t2
WHERE t.id = t2.id
FOR XML PATH('')),1,1,'')
FROM @t as t
GROUP BY t.id;
The drawback to this approach is that when you add a reserved XML character such as &, <, or >, you will get an XML entity back (e.g. "&amp;" for "&"). To handle that you have to modify your query to look like this:
Sample data
IF OBJECT_ID('tempdb..#t') IS NOT NULL DROP TABLE #t;
CREATE TABLE #t (id int, words varchar(20))
INSERT #t VALUES (1, 'blah blah'),(1, 'yada yada'),(2, 'PB&J'),(2,' is good');
SELECT
t.id,
city = STUFF((
SELECT ',' + t2.words
FROM #t t2
WHERE t.id = t2.id
FOR XML PATH(''), TYPE).value('.','varchar(1000)'),1,1,'')
FROM #t as t
GROUP BY t.id;
The downside to this approach is that it will be slower. The good news (and another reason this approach is 100 times better than a cursor) is that both of these queries benefit greatly when the optimizer chooses a parallel execution plan.
The best approach is a new fabulous function available in SQL Server 2017, STRING_AGG. STRING_AGG does not have the problem with special XML characters and is, by far the cleanest approach:
SELECT t.id, STRING_AGG(t.words,',') WITHIN GROUP (ORDER BY t.id)
FROM #t as t
GROUP BY t.id;
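Applied to the original city data (assuming the @t table variable from the question), ordering each id's list alphabetically:
SELECT t.id, STRING_AGG(t.city, ', ') WITHIN GROUP (ORDER BY t.city) AS city
FROM @t AS t
GROUP BY t.id;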

Select (Select field from FieldTable) from Table

I'm using MSSQL 2005. I have 2 tables, A and B.
Table A
- ID DOVKOD
- 1 KURSATIS
Table B
- ID KURALIS KURSATIS
- 1 2,2522 2,2685
- 2 2,4758 2,4874
Table A has only 1 record
When I execute Select (Select DOVKOD from Table A) from Table B I want to get the same result as Select KURSATIS from Table B.
I am going to use it in a view. How can I do that? Thanks.
You can simply use a CASE expression:
SELECT CASE WHEN (SELECT DOVKOD FROM A) = 'KURSATIS' THEN KURSATIS
ELSE KURALIS
END
FROM B
SQL Fiddle Demo here
You must use dynamic T-SQL:
DECLARE @column sysname
SELECT @column = DOVKOD FROM A
EXEC ('SELECT ' + @column + ' FROM B')
If I understood you right, in table A you have the name of the column that you want to return. In that case your design is not ideal. I'd rather do something like this:
CREATE TABLE #TableA
(
ID INT, DOVKOD VARCHAR(100)
);
INSERT INTO #TableA VALUES (1, 'KURSATIS');
CREATE TABLE #TableB
(
ID INT, Value DECIMAL (18,2),Name VARCHAR(100)
);
INSERT INTO #TableB VALUES (1, 2.2522 , 'KURALIS');
INSERT INTO #TableB VALUES (2, 2.4758 , 'KURSATIS');
SELECT #TableB.* FROM #TableB JOIN #TableA ON #TableA.DOVKOD = #TableB.Name
The only way to do this in MySQL is using prepared statements. Dynamic pivot tables (transform rows to columns) is a good article about this.
SET @sql = NULL;
SELECT DOVKOD INTO @sql
FROM A;
SET @sql = CONCAT('SELECT ', @sql, ' FROM B');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

Using merge..output to get mapping between source.id and target.id

Very simplified, I have two tables Source and Target.
declare @Source table (SourceID int identity(1,2), SourceName varchar(50))
declare @Target table (TargetID int identity(2,2), TargetName varchar(50))
insert into @Source values ('Row 1'), ('Row 2')
I would like to move all rows from @Source to @Target and know the TargetID for each SourceID, because there are also the tables SourceChild and TargetChild that need to be copied as well, and I need to add the new TargetID into the TargetChild.TargetID FK column.
There are a couple of solutions to this.
Use a while loop or cursors to insert one row (RBAR) to Target at a time and use scope_identity() to fill the FK of TargetChild.
Add a temp column to @Target and insert SourceID. You can then join that column to fetch the TargetID for the FK in TargetChild (a minimal sketch of this follows the list).
SET IDENTITY_INSERT ON for @Target and handle assigning new values yourself. You get a range that you then use in TargetChild.TargetID.
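A minimal sketch of the second option, using the simplified tables above (a table variable cannot be altered, so the mapping column has to be part of the declaration):
declare @Source table (SourceID int identity(1,2), SourceName varchar(50))
declare @Target table (TargetID int identity(2,2), TargetName varchar(50), SourceID int null) -- extra mapping column
insert into @Source values ('Row 1'), ('Row 2')
insert into @Target (TargetName, SourceID)
select SourceName, SourceID from @Source
-- SourceID to TargetID mapping, usable when copying SourceChild to TargetChild
select SourceID, TargetID from @Target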
I'm not all that fond of any of them. The one I used so far is cursors.
What I would really like to do is to use the output clause of the insert statement.
insert into @Target(TargetName)
output inserted.TargetID, S.SourceID
select SourceName
from @Source as S
But it is not possible
The multi-part identifier "S.SourceID" could not be bound.
But it is possible with a merge.
merge @Target as T
using @Source as S
on 0=1
when not matched then
insert (TargetName) values (SourceName)
output inserted.TargetID, S.SourceID;
Result
TargetID SourceID
----------- -----------
2 1
4 3
I want to know if you have used this, and whether you have any thoughts about the solution or see any problems with it. It works fine in simple scenarios, but perhaps something ugly could happen when the query plan gets really complicated due to a complicated source query. The worst scenario would be that the TargetID/SourceID pairs actually aren't a match.
MSDN has this to say about the from_table_name of the output clause.
Is a column prefix that specifies a table included in the FROM clause of a DELETE, UPDATE, or MERGE statement that is used to specify the rows to update or delete.
For some reason they don't say "rows to insert, update or delete" only "rows to update or delete".
Any thoughts are welcome and totally different solutions to the original problem is much appreciated.
In my opinion this is a great use of MERGE and output. I've used in several scenarios and haven't experienced any oddities to date.
For example, here is test setup that clones a Folder and all Files (identity) within it into a newly created Folder (guid).
DECLARE @FolderIndex TABLE (FolderId UNIQUEIDENTIFIER PRIMARY KEY, FolderName varchar(25));
INSERT INTO @FolderIndex
(FolderId, FolderName)
VALUES(newid(), 'OriginalFolder');
DECLARE @FileIndex TABLE (FileId int identity(1,1) PRIMARY KEY, FileName varchar(10));
INSERT INTO @FileIndex
(FileName)
VALUES('test.txt');
DECLARE @FileFolder TABLE (FolderId UNIQUEIDENTIFIER, FileId int, PRIMARY KEY(FolderId, FileId));
INSERT INTO @FileFolder
(FolderId, FileId)
SELECT FolderId,
FileId
FROM @FolderIndex
CROSS JOIN @FileIndex; -- just to illustrate
DECLARE @sFolder TABLE (FromFolderId UNIQUEIDENTIFIER, ToFolderId UNIQUEIDENTIFIER);
DECLARE @sFile TABLE (FromFileId int, ToFileId int);
-- copy Folder Structure
MERGE @FolderIndex fi
USING ( SELECT 1 [Dummy],
FolderId,
FolderName
FROM @FolderIndex [fi]
WHERE FolderName = 'OriginalFolder'
) d ON d.Dummy = 0
WHEN NOT MATCHED
THEN INSERT
(FolderId, FolderName)
VALUES (newid(), 'copy_'+FolderName)
OUTPUT d.FolderId,
INSERTED.FolderId
INTO @sFolder (FromFolderId, toFolderId);
-- copy File structure
MERGE @FileIndex fi
USING ( SELECT 1 [Dummy],
fi.FileId,
fi.[FileName]
FROM @FileIndex fi
INNER
JOIN @FileFolder fm ON
fi.FileId = fm.FileId
INNER
JOIN @FolderIndex fo ON
fm.FolderId = fo.FolderId
WHERE fo.FolderName = 'OriginalFolder'
) d ON d.Dummy = 0
WHEN NOT MATCHED
THEN INSERT ([FileName])
VALUES ([FileName])
OUTPUT d.FileId,
INSERTED.FileId
INTO @sFile (FromFileId, toFileId);
-- link new files to Folders
INSERT INTO @FileFolder (FileId, FolderId)
SELECT sfi.toFileId, sfo.toFolderId
FROM @FileFolder fm
INNER
JOIN @sFile sfi ON
fm.FileId = sfi.FromFileId
INNER
JOIN @sFolder sfo ON
fm.FolderId = sfo.FromFolderId
-- return
SELECT *
FROM @FileIndex fi
JOIN @FileFolder ff ON
fi.FileId = ff.FileId
JOIN @FolderIndex fo ON
ff.FolderId = fo.FolderId
I would like to add another example to @Nathan's example, as I found it somewhat confusing.
Mine uses real tables for the most part, and not temp tables.
I also got my inspiration from here: another example
-- Copy the FormSectionInstance
DECLARE @FormSectionInstanceTable TABLE(OldFormSectionInstanceId INT, NewFormSectionInstanceId INT)
;MERGE INTO [dbo].[FormSectionInstance]
USING
(
SELECT
fsi.FormSectionInstanceId [OldFormSectionInstanceId]
, @NewFormHeaderId [NewFormHeaderId]
, fsi.FormSectionId
, fsi.IsClone
, @UserId [NewCreatedByUserId]
, GETDATE() NewCreatedDate
, @UserId [NewUpdatedByUserId]
, GETDATE() NewUpdatedDate
FROM [dbo].[FormSectionInstance] fsi
WHERE fsi.[FormHeaderId] = @FormHeaderId
) tblSource ON 1=0 -- use always false condition
WHEN NOT MATCHED
THEN INSERT
( [FormHeaderId], FormSectionId, IsClone, CreatedByUserId, CreatedDate, UpdatedByUserId, UpdatedDate)
VALUES( [NewFormHeaderId], FormSectionId, IsClone, NewCreatedByUserId, NewCreatedDate, NewUpdatedByUserId, NewUpdatedDate)
OUTPUT tblSource.[OldFormSectionInstanceId], INSERTED.FormSectionInstanceId
INTO @FormSectionInstanceTable(OldFormSectionInstanceId, NewFormSectionInstanceId);
-- Copy the FormDetail
INSERT INTO [dbo].[FormDetail]
(FormHeaderId, FormFieldId, FormSectionInstanceId, IsOther, Value, CreatedByUserId, CreatedDate, UpdatedByUserId, UpdatedDate)
SELECT
@NewFormHeaderId, FormFieldId, fsit.NewFormSectionInstanceId, IsOther, Value, @UserId, CreatedDate, @UserId, UpdatedDate
FROM [dbo].[FormDetail] fd
INNER JOIN @FormSectionInstanceTable fsit ON fsit.OldFormSectionInstanceId = fd.FormSectionInstanceId
WHERE [FormHeaderId] = @FormHeaderId
Here's a solution that doesn't use MERGE (which I've had problems with, so I try to avoid it when possible). It relies on two memory tables (you could use temp tables if you want) with IDENTITY columns that get matched, and importantly, on using ORDER BY when doing the INSERT and WHERE conditions that match between the two INSERTs... the first one holds the source IDs and the second one holds the target IDs.
-- Setup... We have a table that we need to know the old IDs and new IDs after copying.
-- We want to copy all of DocID=1
DECLARE @newDocID int = 99;
DECLARE @tbl table (RuleID int PRIMARY KEY NOT NULL IDENTITY(1, 1), DocID int, Val varchar(100));
INSERT INTO @tbl (DocID, Val) VALUES (1, 'RuleA-2'), (1, 'RuleA-1'), (2, 'RuleB-1'), (2, 'RuleB-2'), (3, 'RuleC-1'), (1, 'RuleA-3')
-- Create a break in IDENTITY values.. just to simulate more realistic data
INSERT INTO @tbl (Val) VALUES ('DeleteMe'), ('DeleteMe');
DELETE FROM @tbl WHERE Val = 'DeleteMe';
INSERT INTO @tbl (DocID, Val) VALUES (6, 'RuleE'), (7, 'RuleF');
SELECT * FROM @tbl t;
-- Declare TWO temp tables each with an IDENTITY - one will hold the RuleID of the items we are copying, other will hold the RuleID that we create
DECLARE @input table (RID int IDENTITY(1, 1), SourceRuleID int NOT NULL, Val varchar(100));
DECLARE @output table (RID int IDENTITY(1,1), TargetRuleID int NOT NULL, Val varchar(100));
-- Capture the IDs of the rows we will be copying by inserting them into the @input table
-- Important - we must specify the sort order - best thing is to use the IDENTITY of the source table (t.RuleID) that we are copying
INSERT INTO @input (SourceRuleID, Val) SELECT t.RuleID, t.Val FROM @tbl t WHERE t.DocID = 1 ORDER BY t.RuleID;
-- Copy the rows, and use the OUTPUT clause to capture the IDs of the inserted rows.
-- Important - we must use the same WHERE and ORDER BY clauses as above
INSERT INTO @tbl (DocID, Val)
OUTPUT Inserted.RuleID, Inserted.Val INTO @output(TargetRuleID, Val)
SELECT @newDocID, t.Val FROM @tbl t
WHERE t.DocID = 1
ORDER BY t.RuleID;
-- Now @input and @output should have the same # of rows, and the order of both inserts was the same, so the IDENTITY columns (RID) can be matched
-- Use this as the map from old-to-new when you are copying sub-table rows
-- Technically, @input and @output don't even need the 'Val' columns, just RID and RuleID - they were included here to prove that the rules matched
SELECT i.*, o.* FROM @output o
INNER JOIN @input i ON i.RID = o.RID
-- Confirm the matching worked
SELECT * FROM @tbl t

contains search over a table variable or a temp table

I'm trying to concatenate several columns from a persistent table into one column of a table variable, so that I can run a CONTAINS("foo" AND "bar") and get a result even if foo is not in the same column as bar.
However, it isn't possible to create a unique index on a table variable, hence no full-text index to run a CONTAINS.
Is there a way to dynamically concatenate several columns and run a CONTAINS on them? Here's an example:
declare @t0 table
(
id uniqueidentifier not null,
search_text varchar(max)
)
declare @t1 table ( id uniqueidentifier )
insert into
@t0 (id, search_text)
select
id,
foo + bar
from
description_table
insert into
@t1
select
id
from
@t0
where
contains( search_text, '"c++*" AND "programming*"' )
You cannot use CONTAINS on a table that has not been configured to use Full Text Indexing, and that cannot be applied to table variables.
If you want to use CONTAINS (as opposed to the less flexible PATINDEX) you will need to base the whole query on a table with a FT index.
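For illustration, a minimal sketch of the PATINDEX fallback, run directly against the @t0 table variable from the question (no full-text index needed, but also no word-boundary or prefix semantics):
select id
from @t0
where patindex('%c++%', search_text) > 0
  and patindex('%programming%', search_text) > 0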
You can't use full text indexing on a table variable but you can apply the full text parser. Would something like this do what you need?
declare @d table
(
id int identity(1,1),
testing varchar(1000)
)
INSERT INTO @d VALUES ('c++ programming')
INSERT INTO @d VALUES ('c# programming')
INSERT INTO @d VALUES ('c++ books')
SELECT id
FROM @d
CROSS APPLY sys.dm_fts_parser('"' + REPLACE(testing,'"','""') + '"', 1033, 0,0)
where display_term in ('c++','programming')
GROUP BY id
HAVING COUNT(DISTINCT display_term)=2
NB: There might well be a better way of using the parser but I couldn't quite figure it out. Details of it are at this link
declare @table table
(
id int,
fname varchar(50)
)
insert into @table select 1, 'Adam James Will'
insert into @table select 1, 'Jain William'
insert into @table select 1, 'Bob Rob James'
select * from @table where fname like '%ja%' and fname like '%wi%'
Is it something like this?
