Just wondering how I might go about adding the output results as a new column to an existing table.
What I'm trying to do is extract the date from a string which is in another column. I have the below code to do this:
Code
CREATE FUNCTION dbo.udf_GetNumeric
(
  @strAlphaNumeric VARCHAR(256)
)
RETURNS VARCHAR(256)
AS
BEGIN
  DECLARE @intAlpha INT
  SET @intAlpha = PATINDEX('%[^0-9]%', @strAlphaNumeric)
  BEGIN
    WHILE @intAlpha > 0
    BEGIN
      SET @strAlphaNumeric = STUFF(@strAlphaNumeric, @intAlpha, 1, '')
      SET @intAlpha = PATINDEX('%[^0-9]%', @strAlphaNumeric)
    END
  END
  RETURN ISNULL(@strAlphaNumeric, 0)
END
GO
Now use the function as
SELECT dbo.udf_GetNumeric(column_name)
from table_name
The issue is that I want the result to be placed in a new column in an existing table. I have tried the below code but no luck.
ALTER TABLE [Data_Cube_Data].[dbo].[DB_Test]
ADD reportDated nvarchar NULL;
insert into [DB].[dbo].[DB_Test](reportDate)
SELECT
(SELECT dbo.udf_GetNumeric(FileNamewithDate) from [DB].[dbo].[DB_Test])
The syntax should be an UPDATE, not an INSERT, because you want to update existing rows, not insert new ones:
UPDATE Data_Cube_Data.dbo.DB_Test -- you don't need square bracket noise
SET reportDate = dbo.udf_GetNumeric(FileNamewithDate);
But yeah, I agree with the others, the function looks like the result of a "how can I make this object the least efficient thing in my entire database?" contest. Here's a better alternative:
-- better, set-based TVF with no while loop
CREATE FUNCTION dbo.tvf_GetNumeric
(@strAlphaNumeric varchar(256))
RETURNS TABLE
AS
RETURN
(
  WITH cte(n) AS
  (
    SELECT TOP (256) n = ROW_NUMBER() OVER (ORDER BY @@SPID)
    FROM sys.all_objects
  )
  SELECT output = COALESCE(STRING_AGG(
           SUBSTRING(@strAlphaNumeric, n, 1), '')
           WITHIN GROUP (ORDER BY n), '')
  FROM cte
  WHERE SUBSTRING(@strAlphaNumeric, n, 1) LIKE '%[0-9]%'
);
Then the query is:
UPDATE t
SET t.reportDate = tvf.output
FROM dbo.DB_Test AS t
CROSS APPLY dbo.tvf_GetNumeric(t.FileNamewithDate) AS tvf;
Example db<>fiddle that shows this has the same behavior as your existing function.
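For a quick sanity check on its own, a standalone call against a made-up sample value looks like this:
SELECT output
FROM dbo.tvf_GetNumeric('Report_2021-03-15.xlsx');  -- returns '20210315'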
The function
As I mentioned in the comments, I would strongly suggest rewriting the function; it'll perform terribly. Multi-statement functions can perform poorly, and you also have a WHILE loop, which will perform awfully. SQL is a set-based language, and so you should be using set-based methods.
There are a couple of alternatives though:
Inlinable Scalar Function
SQL Server 2019 can inline scalar functions, so you could inline the above. I do, however, assume that your value can only contain the characters A-Z, a-z and 0-9. If it can contain other characters, such as periods (.), commas (,), quotes (") or even white space ( ), or you're not on 2019, then don't use this:
CREATE OR ALTER FUNCTION dbo.udf_GetNumeric (@strAlphaNumeric varchar(256))
RETURNS varchar(256) AS
BEGIN
  RETURN TRY_CONVERT(int, REPLACE(TRANSLATE(LOWER(@strAlphaNumeric), 'abcdefghijklmnopqrstuvwxyz', REPLICATE('|', 26)), '|', ''));
END;
GO
SELECT dbo.udf_GetNumeric('abs132hjsdf');
The LOWER is there in case you are using a case sensitive collation.
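To illustrate the caveat about other characters: anything besides letters and digits survives the TRANSLATE/REPLACE step and makes the final TRY_CONVERT fail, so you get NULL back (sample value made up):
SELECT dbo.udf_GetNumeric('report 2021.01.15.txt');  -- returns NULL because of the spaces and periods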
Inline Table Value Function
This is the better solution in my mind, and doesn't have the caveats of the above.
It uses a Tally to split the data into individual characters, and then re-aggregates only the characters that are digits. Note that I assume you are using SQL Server 2017+ here:
DROP FUNCTION udf_GetNumeric; --Need to drop as it's a scalar function at the moment
GO
CREATE OR ALTER FUNCTION dbo.udf_GetNumeric (@strAlphaNumeric varchar(256))
RETURNS table AS
RETURN
  WITH N AS (
    SELECT N
    FROM (VALUES(NULL),(NULL),(NULL),(NULL)) N(N)),
  Tally AS(
    SELECT TOP (LEN(@strAlphaNumeric))
      ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS I
    FROM N N1, N N2, N N3, N N4)
  SELECT STRING_AGG(CASE WHEN V.C LIKE '[0-9]' THEN V.C END,'') WITHIN GROUP (ORDER BY T.I) AS strNumeric
  FROM Tally T
  CROSS APPLY (VALUES(SUBSTRING(@strAlphaNumeric,T.I,1)))V(C);
GO
SELECT *
FROM dbo.udf_GetNumeric('abs132hjsdf');
Your table
You define reportDated as nvarchar; this means nvarchar(1). Your function, however, returns a varchar(256); this will rarely fit in an nvarchar(1).
Define the column properly:
ALTER TABLE [dbo].[DB_Test] ADD reportDated varchar(256) NULL;
If you've already created the column then do the following:
ALTER TABLE [dbo].[DB_Test] ALTER COLUMN reportDated varchar(256) NULL;
I note, however, that the column is called "dated", which implies a date value, but it's a (n)varchar; that sounds like a flaw.
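If the digits embedded in FileNamewithDate are actually a date in yyyyMMdd form (an assumption on my part), you could define reportDated as date instead of a string type and convert while updating, e.g. with the scalar variant of the function:
ALTER TABLE [dbo].[DB_Test] ALTER COLUMN reportDated date NULL;  -- only works while the column is empty or already holds yyyyMMdd strings

UPDATE [dbo].[DB_Test]
SET reportDated = TRY_CONVERT(date, dbo.udf_GetNumeric(FileNamewithDate));  -- NULL when the digits aren't a valid yyyyMMdd value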
Updating the column
Use an UPDATE statement. Depending on the solution, this would be one of the following:
--Scalar function
UPDATE [dbo].[DB_Test]
SET reportDated = dbo.udf_GetNumeric(FileNamewithDate);
--Table Value Function
UPDATE DBT
SET reportDated = GN.strNumeric
FROM [dbo].[DB_Test] DBT
CROSS APPLY dbo.udf_GetNumeric(DBT.FileNamewithDate) GN;
Related
In t-sql my dilemma is that I have to parse a potentially long string (up to 500 characters) for any of over 230 possible values and remove them from the string for reporting purposes. These values are a column in another table and they're all upper case and 4 characters long with the exception of two that are 5 characters long.
Examples of these values are:
USFRI
PROME
AZCH
TXJS
NYDS
XVIV, etc.
Example of string before:
"Offered to XVIV and USFRI as back ups. No response as of yet."
Example of string after:
"Offered to and as back ups. No response as of yet."
Pretty sure it will have to be a UDF but I'm unable to come up with anything other than stripping ALL the upper case characters out of the string with PATINDEX which is not the objective.
This is unavoidably kludgy, but one way is to split your string into rows; once you have a set of words the rest is easy: simply re-aggregate while ignoring the matching values*:
with t as (
select 'Offered to XVIV and USFRI as back ups. No response as of yet.' s
union select 'Another row AZCH and TXJS words.'
), v as (
select * from (values('USFRI'),('PROME'),('AZCH'),('TXJS'),('NYDS'),('XVIV'))v(v)
)
select t.s OriginalString, s.Removed
from t
cross apply (
select String_Agg(j.[value], ' ') within group(order by Convert(tinyint,j.[key])) Removed
from OpenJson(Concat('["',replace(s, ' ', '","'),'"]')) j
where not exists (select * from v where v.v = j.[value])
)s;
* Requires a fully-supported version of SQL Server.
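If you happen to be on SQL Server 2022 or Azure SQL Database (an assumption about your environment), STRING_SPLIT accepts an ordinal argument and can stand in for the OPENJSON trick; a minimal sketch reusing the same sample t and v data:
with t as (
  select 'Offered to XVIV and USFRI as back ups. No response as of yet.' s
  union select 'Another row AZCH and TXJS words.'
), v as (
  select * from (values('USFRI'),('PROME'),('AZCH'),('TXJS'),('NYDS'),('XVIV'))v(v)
)
select t.s OriginalString, s.Removed
from t
cross apply (
  select String_Agg(ss.value, ' ') within group(order by ss.ordinal) Removed
  from String_Split(t.s, ' ', 1) ss
  where not exists (select * from v where v.v = ss.value)
)s;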
Build a function to do the cleaning of one sentence, then call that function from your query, something like this: SELECT Col1, dbo.fn_ReplaceValue(Col1) AS cleanValue, * FROM MySentencesTable. Your fn_ReplaceValue will be something like the code below. You could also create the table variable outside the function and pass it as a parameter to speed up the process, but this way it is all self-contained.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE FUNCTION fn_ReplaceValue(@sentence VARCHAR(500))
RETURNS VARCHAR(500)
AS
BEGIN
  DECLARE @ResultVar VARCHAR(500)
  DECLARE @allValues TABLE (rowID int, sValues VARCHAR(15))
  DECLARE @id INT = 0
  DECLARE @ReplaceVal VARCHAR(10)
  DECLARE @numberOfValues INT = (SELECT COUNT(*) FROM MyValuesTable)

  --Populate table variable with all values
  INSERT @allValues
  SELECT ROW_NUMBER() OVER(ORDER BY MyValuesCol) AS rowID, MyValuesCol
  FROM MyValuesTable

  SET @ResultVar = @sentence

  WHILE (@id <= @numberOfValues)
  BEGIN
    SET @id = @id + 1
    SET @ReplaceVal = (SELECT sValues FROM @allValues WHERE rowID = @id)
    SET @ResultVar = REPLACE(@ResultVar, @ReplaceVal, SPACE(0))
  END

  RETURN @ResultVar
END
GO
I suggest creating a table (either temporary or permanent), and loading these 230 string values into this table. Then use it in the following delete:
DELETE
FROM yourTable
WHERE col IN (SELECT col FROM tempTable);
If you just want to view your data sans these values, then use:
SELECT *
FROM yourTable
WHERE col NOT IN (SELECT col FROM tempTable);
I created a user-defined function in SQL Server 2012 that returns XML. I would like to call the function in a SELECT statement. Is this possible?
When I try doing it, I get the error:
The FOR XML clause is not allowed in a ASSIGNMENT statement.
I want the SELECT statement to return a set of these named methods that have dependencies of other named methods within their logic.
In the main CTE, I get the latest versions of methods that have dependencies. The UDF goes thru the logic of each method and returns any methods called within it. So, I want to call the UDF in the SELECT statement and return XML of the dependent method names.
The function works and returns XML data. This is the function:
ALTER FUNCTION [dbo].[GetCalledMLMs]
(
  -- Add the parameters for the function here
  @MLM_Txt nvarchar(MAX)
)
RETURNS XML
AS
BEGIN
  -- Declare the return variable here
  DECLARE @CalledMLMs XML
  DECLARE @MLMTbl table (pos int, endpos int, CalledMLM nvarchar(200))

  --Logic to get the data...

  SELECT @CalledMLMs = CalledMLM FROM @MLMTbl FOR XML PATH

  -- Return the result of the function
  RETURN @CalledMLMs
END
This is the CTE that calls the UDF.
;with cte as
(
select distinct Name, max(ID) as LatestVersion
from MLM_T
where Logic like '%:= MLM %' and Logic not like '%standard_libs := mlm%'
group by Name
)
select MLM2.Name, LatestVersion,
dbo.GetCalledMLMs(MLM2.Logic) as CalledMLMs
from cte join MLM_T MLM2 on cte.Name = MLM2.Name
and cte.LatestVersion = MLM2.ID
and MLM2.Active = 1 and MLM2.Status in (3, 4)
When running this query I get the error that XML is not allowed to be used in an assignment statement.
Is there any way to call a function in the SELECT statement that returns an XML data type?
If you want to set a variable to a value you have to use SET and a scalar value on the right side.
The syntax SELECT @SomeVariable = SomeColumn FROM SomeTable is not possible with FOR XML (and is rather dangerous anyway...), because the XML is not a column of the SELECT but something produced after the process of selecting.
Your problem is situated here:
Select @CalledMLMs = CalledMLM from @MLMTbl FOR XML PATH
Try to change this to
SET @CalledMLMs = (SELECT CalledMLM FROM @MLMTbl FOR XML PATH);
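Optionally, adding the TYPE directive makes the subquery produce a native xml value rather than an nvarchar that is implicitly converted on assignment; a small variation:
SET @CalledMLMs = (SELECT CalledMLM FROM @MLMTbl FOR XML PATH, TYPE);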
I solved the problem by changing the function to return a table, not XML.
So it looks like this:
CREATE FUNCTION [dbo].[GetCalledMLMsTbl]
(
  -- Add the parameters for the function here
  @MLM_Txt nvarchar(MAX)
)
--RETURNS XML
RETURNS @MLMTbl TABLE
(
  pos int,
  endpos int,
  CalledMLM nvarchar(200)
)
AS
BEGIN
  --logic here
  insert into @MLMTbl (pos, endpos, CalledMLM) Values (@startpos, @endpos, @MLM_name)
RETURN
END
Then I called the function in the 'from' clause in the select
;with cte as
(
select distinct Name, max(ID) as LatestVersion
from CV3MLM
where Logic like '%:= MLM %' and Logic not like '%standard_libs := mlm%'
--and Name not like '%V61_CCC'
group by Name
)
select MLM2.Name, LatestVersion, C.CalledMLM
from cte join MLM_tbl MLM2 on cte.Name = MLM2.Name and cte.LatestVersion = MLM2.ID
and MLM2.Active = 1 and MLM2.Status in (3, 4)
cross apply dbo.GetCalledMLMsTbl(MLM2.Logic) C
order by MLM2.Name, LatestVersion
I have a table like:
TemplateBody
---------------------------------------------------------------------
This is to inform #FirstName# about the issues regarding #Location#
Here the key strings are #FirstName# and #Location# which are distinguished by hash tags.
I have another table with the replacement values:
Variables | TemplateValues
-----------------------------
#FirstName#   | Joseph William
#Location#    | Alaska
I need to replace these two key strings with their values in the first table.
There are several ways this can be done. I'll list two ways. Each one has advantages and disadvantages. I would personally use the first one (Dynamic SQL).
1. Dynamic SQL
Advantages: Fast, doesn't require recursion
Disadvantages: Can't be used to update table variables
2. Recursive CTE
Advantages: Allows updates of table variables
Disadvantages: Requires recursion and is memory intensive, recursive CTE's are slow
1.A. Dynamic SQL: Regular tables and Temporary tables.
This example uses a temporary table as the text source:
CREATE TABLE #tt_text(templatebody VARCHAR(MAX));
INSERT INTO #tt_text(templatebody)VALUES
('This is to inform #first_name# about the issues regarding #location#');
CREATE TABLE #tt_repl(variable VARCHAR(256),template_value VARCHAR(8000));
INSERT INTO #tt_repl(variable,template_value)VALUES
('#first_name#','Joseph William'),
('#location#','Alaska');
DECLARE @rep_call NVARCHAR(MAX)='templatebody';
SELECT
  @rep_call='REPLACE('+@rep_call+','''+REPLACE(variable,'''','''''')+''','''+REPLACE(template_value,'''','''''')+''')'
FROM
  #tt_repl;
DECLARE @stmt NVARCHAR(MAX)='SELECT '+@rep_call+' FROM #tt_text';
EXEC sp_executesql @stmt;
/* Use these statements if you want to UPDATE the source rather than SELECT from it
DECLARE @stmt NVARCHAR(MAX)='UPDATE #tt_text SET templatebody='+@rep_call;
EXEC sp_executesql @stmt;
SELECT * FROM #tt_text;*/
DROP TABLE #tt_repl;
DROP TABLE #tt_text;
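For reference, with the sample rows above both the SELECT and the commented-out UPDATE variant should yield 'This is to inform Joseph William about the issues regarding Alaska'.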
1.B. Dynamic SQL: Table variables.
Requires to have the table defined as a specific table type. Example type definition:
CREATE TYPE dbo.TEXT_TABLE AS TABLE(
id INT IDENTITY(1,1) PRIMARY KEY,
templatebody VARCHAR(MAX)
);
GO
Define a table variable of this type, and use it in a Dynamic SQL statement as follows. Note that updating the table variable this way is not possible, because table-valued parameters are passed READONLY.
DECLARE @tt_text dbo.TEXT_TABLE;
INSERT INTO @tt_text(templatebody)VALUES
('This is to inform #first_name# about the issues regarding #location#');
DECLARE @tt_repl TABLE(id INT IDENTITY(1,1),variable VARCHAR(256),template_value VARCHAR(8000));
INSERT INTO @tt_repl(variable,template_value)VALUES
('#first_name#','Joseph William'),
('#location#','Alaska');
DECLARE @rep_call NVARCHAR(MAX)='templatebody';
SELECT
  @rep_call='REPLACE('+@rep_call+','''+REPLACE(variable,'''','''''')+''','''+REPLACE(template_value,'''','''''')+''')'
FROM
  @tt_repl;
DECLARE @stmt NVARCHAR(MAX)='SELECT '+@rep_call+' FROM @tt_text';
EXEC sp_executesql @stmt,N'@tt_text TEXT_TABLE READONLY',@tt_text;
2. Recursive CTE:
The only reasons why you would write this using a recursive CTE are that you intend to update a table variable, or that you are not allowed to use Dynamic SQL for some reason (e.g. company policy).
Note that the default maximum recursion level is 100. If you have more than a 100 replacement variables you should increase this level by adding OPTION(MAXRECURSION 32767) at the end of the query (see Query Hints - MAXRECURSION).
DECLARE @tt_text TABLE(id INT IDENTITY(1,1),templatebody VARCHAR(MAX));
INSERT INTO @tt_text(templatebody)VALUES
('This is to inform #first_name# about the issues regarding #location#');
DECLARE @tt_repl TABLE(id INT IDENTITY(1,1),variable VARCHAR(256),template_value VARCHAR(8000));
INSERT INTO @tt_repl(variable,template_value)VALUES
('#first_name#','Joseph William'),
('#location#','Alaska');
;WITH cte AS (
SELECT
t.id,
l=1,
templatebody=REPLACE(t.templatebody,r.variable,r.template_value)
FROM
@tt_text AS t
INNER JOIN @tt_repl AS r ON r.id=1
UNION ALL
SELECT
t.id,
l=l+1,
templatebody=REPLACE(t.templatebody,r.variable,r.template_value)
FROM
cte AS t
INNER JOIN @tt_repl AS r ON r.id=t.l+1
)
UPDATE
  @tt_text
SET
  templatebody=cte.templatebody
FROM
  @tt_text AS t
INNER JOIN cte ON
  cte.id=t.id
WHERE
  cte.l=(SELECT MAX(id) FROM @tt_repl);
/* -- if instead you wanted to select the replaced strings, comment out
-- the above UPDATE statement, and uncomment this SELECT statement:
SELECT
templatebody
FROM
cte
WHERE
l=(SELECT MAX(id) FROM @tt_repl);*/
SELECT * FROM @tt_text;
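If you do hit the default recursion limit mentioned above, the MAXRECURSION hint goes at the very end of the statement that consumes the CTE; for the UPDATE above the tail would look like this (sketch):
WHERE
  cte.l=(SELECT MAX(id) FROM @tt_repl)
OPTION (MAXRECURSION 32767);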
As long as the values for the Variables are unique ('#FirstName#' etc.) you can join each variable to the table containing TemplateBody:
select replace(replace(t.TemplateBody,'#FirstName#',variable.theVariable),'#Location#',variable2.theVariable)
from
[TemplateBodyTable] t
left join
(
select TemplateValues theVariable,Variables
from [VariablesTable] v
) variable on variable.Variables='#FirstName#'
left join
(
select TemplateValues theVariable,Variables
from [VariablesTable] v
) variable2 on variable2.Variables='#Location#'
A common table expression would allow you to loop through your templates and replace all variables in that template using a variables table. If you have a lot of variables, the level of recursion might go beyond the default limit of 100 recursions. You can play with the MAXRECURSION option according to your need.
DECLARE @Templates TABLE(Body nvarchar(max));
INSERT INTO @Templates VALUES ('This is to inform #FirstName# about the issues regarding #Location#');
DECLARE @Variables TABLE(Name nvarchar(50), Value nvarchar(max));
INSERT INTO @Variables VALUES ('#FirstName#', 'Joseph William'),
('#Location#', 'Alaska');
WITH replacing(Body, Level) AS
(
SELECT t.Body, 1 FROM #Templates t
UNION ALL
SELECT REPLACE(t.Body, v.Name, v.Value), t.Level + 1
FROM replacing t INNER JOIN @Variables v ON PATINDEX('%' + v.Name + '%', t.Body) > 0
)
SELECT TOP 1 r.Body
FROM replacing r
WHERE r.Level = (SELECT MAX(Level) FROM replacing)
OPTION (MAXRECURSION 0);
I have a comma-separated list column available which has values like
Product1, Product2, Product3
I need to search whether the given product name exists in this column.
I used this SQL and it is working fine.
Select *
from ProductsList
where productname like '%Product1%'
This query is working very slowly. Is there a more efficient way I can search for a product name in the comma-separated list to improve the performance of the query?
Please note I have to search comma separated list before performing any other select statements.
A user-defined function to split the comma-separated string into rows:
Create FUNCTION [dbo].[BreakStringIntoRows] (@CommadelimitedString varchar(max))
RETURNS @Result TABLE (Column1 VARCHAR(max))
AS
BEGIN
  DECLARE @IntLocation INT
  WHILE (CHARINDEX(',', @CommadelimitedString, 0) > 0)
  BEGIN
    SET @IntLocation = CHARINDEX(',', @CommadelimitedString, 0)
    INSERT INTO @Result (Column1)
    --LTRIM and RTRIM to ensure blank spaces are removed
    SELECT RTRIM(LTRIM(SUBSTRING(@CommadelimitedString, 0, @IntLocation)))
    SET @CommadelimitedString = STUFF(@CommadelimitedString, 1, @IntLocation, '')
  END
  INSERT INTO @Result (Column1)
  SELECT RTRIM(LTRIM(@CommadelimitedString)) --LTRIM and RTRIM to ensure blank spaces are removed
  RETURN
END
Declare @productname Nvarchar(max)
set @productname='Product1,Product2,Product3'
select * from product where [productname] in (select * from [dbo].[BreakStringIntoRows](@productname))
Felix is right and the 'right answer' is to normalize your table. Although, maybe you have 500k lines of code that expect this column to exist as it is. So your next best (non-destructive) answer is:
Create a table to hold normalize data:
CREATE TABLE ProductsList2 (ProductId INT, ProductName VARCHAR)
Create a TRIGGER that on UPDATE/INSERT/DELETE maintains ProductsList2 by splitting the string 'Product1,Product2,Product3' into three records (a sketch of such a trigger follows after the query below).
Index your new table.
Query against your new table:
SELECT *
FROM ProductsList
WHERE ProductId IN (SELECT x.ProductId
FROM ProductsList2 x
WHERE x.ProductName = 'Product1')
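A minimal sketch of the trigger from step 2, assuming ProductsList has a ProductId key column, the comma-separated values live in productname, and you're on SQL Server 2016+ so STRING_SPLIT is available (the trigger name is made up):
CREATE TRIGGER trg_ProductsList_SyncProducts
ON ProductsList
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
  SET NOCOUNT ON;

  -- Throw away the split rows for anything that changed or was deleted
  DELETE p2
  FROM ProductsList2 AS p2
  WHERE p2.ProductId IN (SELECT ProductId FROM deleted);

  -- Re-split the current comma-separated value for inserted/updated rows
  INSERT INTO ProductsList2 (ProductId, ProductName)
  SELECT i.ProductId, LTRIM(RTRIM(s.value))
  FROM inserted AS i
  CROSS APPLY STRING_SPLIT(i.productname, ',') AS s;
END;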
I have a SourceTable and a table variable @TQueries containing various T-SQL predicates that target SourceTable.
The expected result is to dynamically generate SELECT statements that return a list of Id's as specified by the predicates in #TQueries. Each dynamically generated SELECT statement also needs to execute in a particular order, and the final set of values needs to be unique and the ordering must be preserved.
Fortunately, there's a limit to how many values need to be retrieved and how many dynamic queries need to be generated. The Id list should contain at most 10 Ids, and we don't expect more than 7 queries.
The following is a sample of this setup, not the actual data/database:
-- Set up some test data, this is quick and dirty just to provide some data to test against
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[SourceTable]') AND type in (N'U'))
BEGIN
-- Create a numbers table, sorta
SELECT TOP 20
IDENTITY(INT,1,1) AS Id,
ABS(CHECKSUM(NewId())) % 100 AS [SomeValue]
INTO [SourceTable]
FROM sysobjects a
END
DECLARE @TQueries TABLE (
  [Ordinal] INT,
  [WherePredicate] NVARCHAR(MAX),
  [OrderByPredicate] NVARCHAR(MAX)
);
-- Simulate SELECTs with different order by that get different data due to varying WHERE clauses and ORDER conditions
INSERT INTO @TQueries VALUES ( 1, N'[Id] IN (6,11,13,7,10,3,15)', '[SomeValue] ASC' ) -- Sort Asc
INSERT INTO @TQueries VALUES ( 2, N'[Id] IN (9,15,14,20,17)', '[SomeValue] DESC' ) -- Sort Desc
INSERT INTO @TQueries VALUES ( 3, N'[Id] IN (20,10,1,16,11,19,9,15,17,6,2,3,13)', 'NEWID()' ) -- Sort Random
My main issue has been avoiding the use of a CURSOR or iterating through the rows one by one. The closest I've come to a set-based operation that meets these criteria is using a table variable to store the results of each query, or a massive CTE.
Suggestions and comments are welcome.
Here's a solution that builds a single statement both to run all the queries and to return the results.
It uses a similar approach as in your answer when iterating over the @TQueries table, i.e. it also uses {...} tokens where column values from @TQueries should go, and it puts the values there with nested REPLACE() calls.
Other than that, it heavily depends on ranking functions, and I'm not sure it doesn't rather abuse them. You'd need to test this method before deciding if it's better or worse than the one you've got so far.
DECLARE @QueryTemplate nvarchar(max), @FinalSQL nvarchar(max);
SET @QueryTemplate =
N'SELECT
[Id],
QueryRank = {Ordinal},
RowRank = ROW_NUMBER() OVER (ORDER BY {OrderByPredicate})
FROM [dbo].[SourceTable]
WHERE {WherePredicate}
';
SET @FinalSQL =
N'WITH AllData AS (
' +
SUBSTRING(
(
SELECT
'UNION ALL ' +
REPLACE(REPLACE(REPLACE(@QueryTemplate,
'{Ordinal}' , [Ordinal] ),
'{OrderByPredicate}', [OrderByPredicate]),
'{WherePredicate}' , [WherePredicate] )
FROM @TQueries
ORDER BY [Ordinal]
FOR XML PATH (''), TYPE
).value('.', 'nvarchar(max)'),
11, -- starting just after the first 'UNION ALL '
CAST(0x7FFFFFFF AS int) -- max int; no need to specify the exact length
) +
'),
RankedData AS (
SELECT
[Id],
QueryRank,
RowRank,
ValueRank = ROW_NUMBER() OVER (PARTITION BY [Id] ORDER BY QueryRank)
FROM AllData
)SELECT TOP (@top)
[Id]
FROM RankedData
WHERE ValueRank = 1
ORDER BY
QueryRank,
RowRank
';
PRINT @FinalSQL;
EXECUTE sp_executesql @FinalSQL, N'@top int', 10;
Basically, every subquery gets these auxiliary columns:
QueryRank – a constant value (within the subquery's result set) derived from [Ordinal];
RowRank – a ranking assigned to a row based on the [OrderByPredicate].
The result sets are UNIONed and then every entry of every unique value is again ranked (ValueRank) based on the query ranking.
When pulling the final result set, duplicates are suppressed (by the condition ValueRank = 1), and QueryRank and RowRank are used in the ORDER BY clause to preserve the original row order.
I used EXECUTE sp_executesql @query instead of EXECUTE (@query), because the former allows you to add parameters to the query. In particular, I parametrised the number of results to return (the argument of TOP). But you could certainly concatenate that value into the dynamic script directly, just like other things, if you prefer EXECUTE () over EXECUTE sp_executesql.
If you like, you can try this query at SQL Fiddle. (Note: the SQL Fiddle version replaces the @TQueries table variable with the TQueries table.)
This is what I've managed to piece together, cobbled from my original response and improved upon by comments from @AndriyM.
DECLARE @sql_prefix NVARCHAR(MAX);
SET @sql_prefix =
N'DECLARE @TResults TABLE (
  [Ordinal] INT IDENTITY(1,1),
  [ContentItemId] INT
);
DECLARE @max INT, @top INT;
SELECT @max = 10;';
DECLARE @sql_insert_template NVARCHAR(MAX), @sql_body NVARCHAR(MAX);
SET @sql_insert_template =
N'SELECT @top = @max - COUNT(*) FROM @TResults;
INSERT INTO @TResults
SELECT TOP (@top) [Id]
FROM [dbo].[SourceTable]
WHERE
  {WherePredicate}
  AND NOT EXISTS (
    SELECT 1
    FROM @TResults AS [tr]
    WHERE [tr].[ContentItemId] = [SourceTable].[Id]
  )
ORDER BY {OrderByPredicate};';
WITH Query ([Ordinal],[SqlCommand]) AS (
SELECT
[Ordinal],
REPLACE(REPLACE(@sql_insert_template, '{WherePredicate}', [WherePredicate]), '{OrderByPredicate}', [OrderByPredicate])
FROM @TQueries
)
SELECT
@sql_body = @sql_prefix + (
SELECT [SqlCommand]
FROM Query
ORDER BY [Ordinal] ASC
FOR XML PATH(''),TYPE).value('.', 'varchar(max)') + CHAR(13)+CHAR(10)
+N' SELECT * FROM @TResults ORDER BY [Ordinal]';
EXEC(@sql_body);
The basic idea is to use a table variable to hold the results of each query. I create a template for the SQL and replace the values in the template based on what is stored in @TQueries.
Once the entire script is completed, I run it with EXEC.
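If you want to inspect the generated batch before (or instead of) running it, printing it first is an easy optional debugging step:
PRINT @sql_body;    -- review the generated script
--EXEC(@sql_body);  -- run it once you're happy with it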