This is a simplified version of my problem.
Using the following SELECT statement I output my data as XML.
SET @result = (
    SELECT * FROM MyTable
    FOR XML PATH('receipts')
)
SELECT @result AS xmloutput
I then write the output to an XML file whose name includes a timestamp:
SET @sqlCommand =
    'bcp "EXEC ' + @db + '.dbo.MyStoreProcedure" queryout "' + @filePath + @fileName + '" -T -C 1252 -w'
EXEC master..xp_cmdshell @sqlCommand
In the snippet above:
MyStoreProcedure is essentially the code from the first snippet, i.e. it returns the SELECT results.
@fileName has a structure like YYYYMMDDHHMMSS_customers.xml.
Now the problem is that the program which reads this XML file has a limit of 20000 records, so I need to split the results into separate XML files. If the original SELECT query returns 25000 records, they should be split into two non-overlapping files: the first containing 20000 records and the second the remaining 5000.
To avoid overwriting the same file (in case the process runs very fast), we can wait one second between batches so that the next file gets a new timestamped name.
How can I implement this split?
Thanks in advance.
The function NTILE (available since SQL Server 2005) allows you to group your data into a given number of chunks. On an even older version you can easily simulate it with a combination of ROW_NUMBER and integer division.
The following writes your table rows together with a chunk number into a temp table. The WHILE loop then takes chunk after chunk, and you can use the chunk number as part of your file name.
WAITFOR DELAY lets you pause automatically between chunks.
DECLARE @ApproxRowsPerChunk INT = 10; --will float a little...

SELECT NTILE((SELECT COUNT(*) FROM sys.objects) / @ApproxRowsPerChunk) OVER(ORDER BY o.object_id) AS ChunkNumber
      ,*
INTO #StagingTable
FROM sys.objects AS o;

DECLARE @nr INT = 1;
DECLARE @maxNr INT = (SELECT MAX(ChunkNumber) FROM #StagingTable);

WHILE @nr <= @maxNr
BEGIN
    SELECT * FROM #StagingTable WHERE ChunkNumber = @nr FOR XML PATH('test');
    SET @nr = @nr + 1;
    WAITFOR DELAY '00:00:01';
END
Hint
This would also allow you to embed something like 1 of 17 into the XML itself (and into the file name if needed).
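If you want to sanity-check what NTILE does before wiring it into the file export, the grouping rule is easy to reproduce outside SQL Server. A small Python sketch (illustrative only, not part of the T-SQL solution):

```python
def ntile(rows, n):
    """Mimic SQL Server's NTILE(n) over an ordered list: rows are split
    into n groups as evenly as possible, and the first (len(rows) % n)
    groups each receive one extra row."""
    base, extra = divmod(len(rows), n)
    numbered = []
    group, taken = 1, 0
    size = base + (1 if group <= extra else 0)
    for r in rows:
        numbered.append((group, r))
        taken += 1
        if taken == size and group < n:
            group += 1
            taken = 0
            size = base + (1 if group <= extra else 0)
    return numbered

chunks = ntile(list(range(1, 11)), 3)  # 10 rows, 3 chunks -> sizes 4, 3, 3
```

Each chunk number then maps to one output file, exactly as the WHILE loop above consumes one ChunkNumber per iteration.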
Please see if the below helps. I basically take your finished MyTable results, assign ROW_NUMBER() and divide the rows into partitions, then loop through the partitions according to your batch size, executing your bulk copy inside the loop. You may have some more work to do on dynamically setting the file names (you could use the @current_partition variable in your file-name builder).
I have commented my code:
-- Dummy data, represents your results set
IF OBJECT_ID('tempdb..#t1') IS NOT NULL
DROP TABLE #t1
CREATE TABLE #t1 (
initials VARCHAR(10)
,no_cars VARCHAR(10)
)
INSERT INTO #t1
VALUES
('AA',1)
,('BB',1)
,('CC',1)
,('DD',1)
,('EE',1)
,('FF',1)
,('GG',1)
,('HH',1)
,('II',1)
,('JJ',1)
,('KK',1)
---- end of test data creation
-- Assign query partition size, in your case 20,000. Must be FLOAT (or another decimal type).
DECLARE @partition_size FLOAT = 3;
IF OBJECT_ID('tempdb..#t2') IS NOT NULL
DROP TABLE #t2;
-- Assign your results set a row number
WITH [cte1] AS (
SELECT
initials
,no_cars
,ROW_NUMBER() OVER (ORDER BY no_cars ASC) AS row_no
FROM #t1
)
-- Assign the query partition by running a ceiling command on the row number, store the results in a temp table
SELECT
initials
,no_cars
,CEILING(row_no/@partition_size) AS query_partition
INTO #t2
FROM cte1
--- Now, create a loop to go through each partition
-- Your business variables
DECLARE @result XML
DECLARE @sqlCommand NVARCHAR(4000)
DECLARE @db VARCHAR(50) = 'db'
DECLARE @filePath VARCHAR(50) = 'C:\temp\'
DECLARE @fileName VARCHAR(50) = 'dynamic2017010.xml'
-- Find highest partition
DECLARE @current_partition INT = 1
DECLARE @max_partition INT = (SELECT MAX(query_partition) FROM #t2)
WHILE @current_partition <= @max_partition
BEGIN
    SET @result = (
        SELECT initials
              ,no_cars
        FROM #t2
        WHERE query_partition = @current_partition
        FOR XML PATH('receipts')
    )
    SELECT @result AS xmloutput
    -- other code..?
    SET @sqlCommand =
        'bcp "EXEC ' + @db + '.dbo.MyStoreProcedure" queryout "' + @filePath + @fileName + '" -T -C 1252 -w'
    EXEC master..xp_cmdshell @sqlCommand
    SET @current_partition += 1
END
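The CEILING(row_no / @partition_size) arithmetic is worth verifying against the numbers from the question (25000 rows, batches of 20000). A quick Python check of the same formula, for illustration only:

```python
import math

def partition_of(row_no, partition_size):
    # Mirrors CEILING(row_no / @partition_size); the partition size must
    # be a float, otherwise integer division would collapse everything.
    return math.ceil(row_no / partition_size)

# Assign each of 25000 row numbers to a partition of size 20000.
parts = [partition_of(r, 20000.0) for r in range(1, 25001)]
```

So a 25000-row result set yields exactly two partitions: 20000 rows in the first file and 5000 in the second, with no overlap.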
I am looking for a query that loops over multiple databases and inserts the results into an existing table in one central database, to collect all the data.
There are 28 databases at the moment, but when I run the query below it fails on the second database, saying the table already exists.
Once this works I want to loop a much larger query than this one.
I also tried executing a UNION ALL, but if a new database is added it must be picked up automatically.
See the example I've tried below:
--drop table if exists [hulptabellen].dbo.HIdatabases
declare @dbList table (dbName varchar(128), indx int)

insert into @dbList
select dbName = dbname, row_number() over (order by dbname)
from [hulptabellen].dbo.HIdatabases

--declare variables for use in the while loop
declare @index int = 1
declare @totalDBs int = (select count(*) from @dbList)
declare @currentDB varchar(128)
declare @cmd varchar(300)

--define the command which will be used on each database.
declare @cmdTemplate varchar(300) =
'
use {dbName};
select * into [hulptabellen].dbo.cladrloc from {dbName}.dbo.cladrloc
'
--loop through each database and execute the command
while @index <= @totalDBs
begin
    set @currentDB = (select dbName from @dbList where indx = @index)
    set @cmd = replace(@cmdTemplate, '{dbName}', @currentDB)
    execute(@cmd)
    set @index += 1
end
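The {dbName} substitution is ordinary string templating, so you can inspect the commands it will generate before running them against real databases. A Python sketch with made-up database names (the INSERT INTO form matches the fix suggested in the answer below):

```python
cmd_template = (
    "insert into [hulptabellen].dbo.cladrloc "
    "select * from {dbName}.dbo.cladrloc"
)

db_list = ["CustomerDB01", "CustomerDB02"]  # hypothetical database names
commands = [cmd_template.replace("{dbName}", db) for db in db_list]
```

Printing `commands` before executing them is a cheap way to catch a bad template once, instead of 28 times.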
Create the table outside your loop and insert into the table this way:
INSERT INTO [hulptabellen].dbo.cladrloc (col1,col2)
SELECT col1,col2
FROM {dbname}.dbo.cladrloc
FYI: When you use the following syntax, a new table is created, so it can be executed only once.
SELECT *
INTO [hulptabellen].dbo.cladrloc
FROM {dbname}.dbo.cladrloc
I want to truncate multiple tables. I know it isn't possible to do this the way DELETE can delete the rows of multiple tables.
In this question, truncate multi tables, IndoKnight provides the OP-designated best answer, and I want to try that. However, I get a syntax error at:
TRUNCATE TABLE @tableName
To troubleshoot, I tried printing the variables, because when I first used TRUNCATE TABLE I needed to include the database name and schema (e.g. NuggetDemoDB.dbo.tablename) to get it to work. I CAN print the variable @tableList, but I CANNOT print @tableName.
DECLARE @delimiter CHAR(1),
        @tableList VARCHAR(MAX),
        @tableName VARCHAR(20),
        @currLen INT

SET @delimiter = ','
SET @tableList = 'Employees,Products,Sales'
--PRINT @tableList

WHILE LEN(@tableList) > 0
BEGIN
    SELECT @currLen =
    (
        CASE CHARINDEX( @delimiter, @tableList )
            WHEN 0 THEN LEN( @tableList )
            ELSE ( CHARINDEX( @delimiter, @tableList ) - 1 )
        END
    )
    SET @tableName = SUBSTRING ( @tableList, 1, @currLen )
    --PRINT @tableName
    TRUNCATE TABLE @tableName
    SELECT @tableList =
    (
        CASE ( LEN( @tableList ) - @currLen )
            WHEN 0 THEN ''
            ELSE RIGHT( @tableList, LEN( @tableList ) - @currLen - 1 )
        END
    )
END
Edit: fixed the table list to remove the extra "Sales" from the list of tables and add "Employees".
Even though Sales is listed twice... no harm:
DECLARE @tableList VARCHAR(MAX)
SET @tableList = 'Sales,Products,Sales'
SET @tableList = 'Truncate Table ' + REPLACE(@tableList, ',', ';Truncate Table ') + ';'
PRINT @tableList
--EXEC(@tableList) --<< If you are TRULY comfortable with the results
Returns
Truncate Table Sales;Truncate Table Products;Truncate Table Sales;
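Since the whole trick is string surgery, it is easy to verify outside SQL Server. The same REPLACE logic in a short Python sketch (illustrative only):

```python
table_list = "Sales,Products,Sales"

# 'Truncate Table ' + REPLACE(@tableList, ',', ';Truncate Table ') + ';'
batch = "Truncate Table " + table_list.replace(",", ";Truncate Table ") + ";"
```

Every comma becomes a statement boundary, so duplicates in the list simply produce a second (harmless) TRUNCATE of the same table.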
First and foremost, you may want to consider spending a little energy on a SQL implementation for splitting a string into rows (e.g. Split, List, etc.). That will prove helpful not only for this exercise but for many others. This post is therefore not about how to turn a comma-separated list into rows; we can concentrate on the dynamic SQL needed to do what is needed.
Example
The below example assumes that you have a function named List to take care of transposing the comma separated list into rows.
declare
    @TableList varchar(max) = 'Sales, Products, Sales';

declare
    @Sql varchar(max) = (
        select distinct 'truncate table ' + name + ';'
        from List(@TableList)
        for xml path(''));

exec (@Sql);
One last thing about TRUNCATE vs. DELETE
Truncate will not work if you are truncating data where there is a foreign key relationship to another table.
You will get something like the below error.
Msg 4712, Level 16, State 1, Line 19
Cannot truncate table 'Something' because it is being referenced by a FOREIGN KEY constraint.
Below is an example that uses a table variable instead of a delimited list. If the source of your table list is already in a table, you could tweak this script to use that as the source instead. Note that the extra Sales table is redundant (gleaned from the script in your question) and can be removed. The table names can be database- and/or schema-qualified if desired.
DECLARE @tableList TABLE(TableName nvarchar(393));
DECLARE @TruncateTableBatch nvarchar(MAX);

INSERT INTO @tableList VALUES
      (N'Sales')
    , (N'Products')
    , (N'Sales');

SET @TruncateTableBatch = (SELECT N'TRUNCATE TABLE ' + TableName + N'
'
    FROM @tableList
    FOR XML PATH(''), TYPE).value('.', 'nvarchar(MAX)');

--PRINT @TruncateTableBatch;
EXECUTE(@TruncateTableBatch);
What about something like:
exec sp_msforeachtable
     @command1 = 'truncate table ?'
    ,@whereand = ' and object_id in (select object_id from sys.objects where name in (''sales'', ''products''))'
I have not tested it yet, but it might give a useful hint.
I currently store my CSV-formatted files on disk and then query them like this:
SELECT *
FROM OPENROWSET(BULK 'C:\myfile.csv',
   FORMATFILE = 'C:\format.fmt',
   FIRSTROW = 2) AS rs
Where format.fmt defines the format of the columns in the CSV file. This works very well.
But I'm interested in storing the files in a SQL Server table instead of on disk.
So given a VARBINARY(MAX) column, how do I query its contents?
If I have a table like:
CREATE TABLE FileTable
(
[FileName] NVARCHAR(256)
,[File] VARBINARY(MAX)
)
With one row 'myfile.csv', '0x427574696B3B44616....'
How to read that file content into a temporary table for example?
If you really need to work with varbinary data, you can just cast it back to varchar:
DECLARE @bin VARBINARY(MAX)
SET @bin = 0x5468697320697320612074657374
SELECT CAST(@bin as VARCHAR(MAX))
-- gives: This is a test
Once you've got it into that format, you can use a split function to turn it into a table. Don't ask me why there wasn't a built-in split function in SQL Server before STRING_SPLIT arrived in 2016, given that it's such a screamingly obvious oversight, but there wasn't. So on older versions, create your own with the code below:
CREATE FUNCTION [dbo].[fn_splitDelimitedToTable] ( @delimiter varchar(3), @StringInput VARCHAR(8000) )
RETURNS @OutputTable TABLE ( [String] VARCHAR(100), [Hierarchy] int )
AS
BEGIN
    DECLARE @String VARCHAR(100)
    DECLARE @row int = 0

    WHILE LEN(@StringInput) > 0
    BEGIN
        SET @row = @row + 1
        SET @String = LEFT(@StringInput,
                        ISNULL(NULLIF(CHARINDEX(@delimiter, @StringInput) - 1, -1),
                        LEN(@StringInput)))
        SET @StringInput = SUBSTRING(@StringInput,
                        ISNULL(NULLIF(CHARINDEX(@delimiter, @StringInput), 0),
                        LEN(@StringInput)) + 1, LEN(@StringInput))
        INSERT INTO @OutputTable ( [String], [Hierarchy] )
        VALUES ( @String, @row )
    END
    RETURN
END
Put it all together:
select CAST('one,two,three' as VARBINARY)
-- gives: 0x6F6E652C74776F2C7468726565

DECLARE @bin VARBINARY(MAX)
SET @bin = 0x6F6E652C74776F2C7468726565

select * from dbo.fn_splitDelimitedToTable(',', CAST(@bin as VARCHAR(MAX)))
gives this result:
string hierarchy
================
one 1
two 2
three 3
And of course, you can get the result into a temp table to work with if you so wish:
select * into #myTempTable
from dbo.fn_splitDelimitedToTable(',', CAST(@bin as VARCHAR(MAX)))
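The cast-then-split round trip is just hex-encoded text plus a delimiter scan, so it can be sanity-checked outside SQL Server. A Python sketch of the same round trip (illustrative only):

```python
# The varbinary literal from the example above, without the 0x prefix.
hex_value = "6F6E652C74776F2C7468726565"

# CAST(@bin AS VARCHAR(MAX)) reinterprets the bytes as characters:
text = bytes.fromhex(hex_value).decode("ascii")

# fn_splitDelimitedToTable: one (String, Hierarchy) row per delimited value.
rows = [(s, i + 1) for i, s in enumerate(text.split(","))]
```

The hierarchy column is simply the 1-based position of each value in the original list.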
If you've got CSV data, why not just import it into the database?
You can also use BULK INSERT to do this as in this question.
Assuming you've created a table with the correct format to import the data into (e.g. 'MyImportTable') something like the following could be used:
BULK INSERT MyImportTable
FROM 'C:\myfile.csv'
WITH (DATAFILETYPE='char',
FIRSTROW = 2,
FORMATFILE = 'C:\format.fmt');
EDIT 1:
With the data imported into the database, you can now query the table directly, and avoid having the CSV altogether like so:
SELECT *
FROM MyImportTable
With the reference to the original CSV no longer required you can delete/archive the original CSV.
EDIT 2:
If you've enabled xp_cmdshell, and you have the appropriate permissions, you can delete the file from SQL with the following:
EXEC xp_cmdshell 'del C:\myfile.csv'
Lastly, if you want to enable xp_cmdshell, use the following (xp_cmdshell is an advanced option, so 'show advanced options' must be enabled first):
exec sp_configure 'show advanced options', 1
go
reconfigure
go
exec sp_configure 'xp_cmdshell', 1
go
reconfigure
go
I wrote this SQL in a stored procedure but it is not working:
declare @tableName varchar(max) = 'TblTest'
declare @col1Name varchar(max) = 'VALUE1'
declare @col2Name varchar(max) = 'VALUE2'
declare @value1 varchar(max)
declare @value2 varchar(200)

execute('Select TOP 1 @value1=' + @col1Name + ', @value2=' + @col2Name + ' From ' + @tableName + ' Where ID = 61')

select @value1

execute('Select TOP 1 @value1=VALUE1, @value2=VALUE2 From TblTest Where ID = 61')
This SQL throws this error:
Must declare the scalar variable "@value1".
I am generating the SQL dynamically and I want to get the value into a variable. What should I do?
The reason you are getting the DECLARE error from your dynamic statement is that dynamic statements are handled in separate batches, which boils down to a matter of scope. While there may be a more formal definition of the scopes available in SQL Server, I've found it sufficient to keep the following three in mind, ordered from highest availability to lowest availability:
Global:
Objects that are available server-wide, such as temporary tables created with a double hash/pound sign ( ##GLOBALTABLE, or however you like to call # ). Be very wary of global objects, just as you would with any application, SQL Server or otherwise; these types of things are generally best avoided altogether. What I'm essentially saying is to keep this scope in mind specifically as a reminder to stay out of it.
IF ( OBJECT_ID( 'tempdb.dbo.##GlobalTable' ) IS NULL )
BEGIN
CREATE TABLE ##GlobalTable
(
Val BIT
);
INSERT INTO ##GlobalTable ( Val )
VALUES ( 1 );
END;
GO
-- This table may now be accessed by any connection in any database,
-- assuming the caller has sufficient privileges to do so, of course.
Session:
Objects which are locked to a specific spid. Off the top of my head, the only type of session object I can think of is a normal temporary table, defined like #Table. Being in session scope essentially means that after the batch ( terminated by GO ) completes, references to this object will continue to resolve successfully. These are technically accessible by other sessions, but it would be somewhat of a feat to do so programmatically, as they get sort of randomized names in tempdb and accessing them is a bit of a pain in the ass anyway.
-- Start of session;
-- Start of batch;
IF ( OBJECT_ID( 'tempdb.dbo.#t_Test' ) IS NULL )
BEGIN
CREATE TABLE #t_Test
(
Val BIT
);
INSERT INTO #t_Test ( Val )
VALUES ( 1 );
END;
GO
-- End of batch;
-- Start of batch;
SELECT *
FROM #t_Test;
GO
-- End of batch;
Opening a new session ( a connection with a separate spid ), the second batch above would fail, as that session would be unable to resolve the #t_Test object name.
Batch:
Normal variables, such as your @value1 and @value2, are scoped only to the batch in which they are declared. Unlike #Temp tables, as soon as your query block hits a GO, those variables stop being available to the session. This is the scope level which is generating your error.
-- Start of session;
-- Start of batch;
DECLARE @test BIT = 1;
PRINT @test;
GO
-- End of batch;

-- Start of batch;
PRINT @Test; -- Msg 137, Level 15, State 2, Line 2
             -- Must declare the scalar variable "@Test".
GO
-- End of batch;
Okay, so what?
What is happening with your dynamic statement is that the EXECUTE() command effectively evaluates as a separate batch, without breaking the batch you executed it from. EXECUTE() is good and all, but since the introduction of sp_executesql() I use the former only in the simplest of cases ( explicitly, when there is very little "dynamic" element to my statements at all, primarily to "trick" otherwise unaccommodating DDL CREATE statements into running in the middle of other batches ). @AaronBertrand's answer is similar and will perform similarly to the following, leveraging the optimizer's handling of parameterized dynamic statements, but I thought it might be worthwhile to expand on the @params, well, parameter.
IF NOT EXISTS ( SELECT 1
                FROM sys.objects
                WHERE name = 'TblTest'
                AND type = 'U' )
BEGIN
    --DROP TABLE dbo.TblTest;
    CREATE TABLE dbo.TblTest
    (
        ID INTEGER,
        VALUE1 VARCHAR( 1 ),
        VALUE2 VARCHAR( 1 )
    );
    INSERT INTO dbo.TblTest ( ID, VALUE1, VALUE2 )
    VALUES ( 61, 'A', 'B' );
END;

SET NOCOUNT ON;

DECLARE @SQL NVARCHAR( MAX ),
        @PRM NVARCHAR( MAX ),
        @value1 VARCHAR( MAX ),
        @value2 VARCHAR( 200 ),
        @Table VARCHAR( 32 ),
        @ID INTEGER;

SET @Table = 'TblTest';
SET @ID = 61;

SET @PRM = '
    @_ID INTEGER,
    @_value1 VARCHAR( MAX ) OUT,
    @_value2 VARCHAR( 200 ) OUT';

SET @SQL = '
    SELECT @_value1 = VALUE1,
           @_value2 = VALUE2
    FROM dbo.[' + REPLACE( @Table, '''', '' ) + ']
    WHERE ID = @_ID;';

EXECUTE sys.sp_executesql @stmt = @SQL, @params = @PRM,
    @_ID = @ID, @_value1 = @value1 OUT, @_value2 = @value2 OUT;

PRINT @value1 + ' ' + @value2;

SET NOCOUNT OFF;
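The pattern above is not specific to T-SQL: any parameterized-query API keeps the statement text fixed and binds the values separately, which is exactly what sp_executesql adds over EXECUTE(). As a cross-language illustration only (Python's sqlite3, not SQL Server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TblTest (ID INTEGER, VALUE1 TEXT, VALUE2 TEXT)")
conn.execute("INSERT INTO TblTest VALUES (61, 'A', 'B')")

# The statement text is constant; 61 travels as a bound parameter,
# mirroring @_ID in the sp_executesql call above.
value1, value2 = conn.execute(
    "SELECT VALUE1, VALUE2 FROM TblTest WHERE ID = ?", (61,)
).fetchone()
```

The query results come back into ordinary host variables, which is the same role the OUT parameters play in the T-SQL version.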
DECLARE @v1 varchar(max), @v2 varchar(200);
DECLARE @sql nvarchar(max);

SET @sql = N'SELECT @v1 = value1, @v2 = value2
    FROM dbo.TblTest -- always use schema
    WHERE ID = 61;';

EXEC sp_executesql @sql,
    N'@v1 varchar(max) output, @v2 varchar(200) output',
    @v1 output, @v2 output;
You should also pass your input, like wherever 61 comes from, as proper parameters (but you won't be able to pass table and column names that way).
Here is a simple example:
CREATE OR ALTER PROCEDURE getPersonCountByLastName (
    @lastName varchar(20),
    @count int OUTPUT
)
AS
BEGIN
    select @count = count(personSid) from Person where lastName like @lastName
END;
Execute the statements below in one batch (by selecting them all):
1. Declare @count int
2. Exec getPersonCountByLastName 'kumar', @count output
3. Select @count
When I tried to execute statements 1, 2 and 3 individually, I had the same error.
But when I executed them all at once, it worked fine.
The reason is that SQL Server runs the DECLARE and EXEC statements in separate batches when they are executed individually.
Open to further corrections.
This will occur in SQL Server as well if you don't run all of the statements at once. If you are highlighting a set of statements and executing the following:
DECLARE @LoopVar INT
SET @LoopVar = (SELECT COUNT(*) FROM SomeTable)
And then try to highlight another set of statements such as:
PRINT 'LoopVar is: ' + CONVERT(NVARCHAR(255), @LoopVar)
You will receive this error.
-- CREATE OR ALTER PROCEDURE
ALTER PROCEDURE out (
    @age INT,
    @salary INT OUTPUT)
AS
BEGIN
    SELECT @salary = (SELECT SALARY FROM new_testing WHERE AGE = @age ORDER BY AGE OFFSET 0 ROWS FETCH NEXT 1 ROWS ONLY);
END

-----------------DECLARE THE OUTPUT VARIABLE---------------------------------
DECLARE @test INT

---------------------THEN EXECUTE THE QUERY---------------------------------
EXECUTE out 25, @salary = @test OUTPUT
PRINT @test

-------------------same output obtained without a procedure-------------------------------------------
SELECT * FROM new_testing WHERE AGE = 25 ORDER BY AGE OFFSET 0 ROWS FETCH NEXT 1 ROWS ONLY
I'm trying to debug someone else's SQL reports and have placed the underlying report query into a query window of SQL Server 2012.
One of the parameters the report asks for is a list of integers. This is achieved in the report through a multi-select drop-down box. The report's underlying query uses this integer list in the WHERE clause, e.g.
select *
from TabA
where TabA.ID in (@listOfIDs)
I don't want to modify the query I'm debugging but I can't figure out how to create a variable on the SQL Server that can hold this type of data to test it.
e.g.
declare @listOfIDs int
set @listOfIDs = 1,2,3,4
There is no datatype that can hold a list of integers, so how can I run the report query on my SQL Server with the same values as the report?
Table variable:
declare @listOfIDs table (id int);
insert @listOfIDs(id) values(1),(2),(3);

select *
from TabA
where TabA.ID in (select id from @listOfIDs)
or
declare @listOfIDs varchar(1000);
SET @listOfIDs = ',1,2,3,'; -- this solution needs a comma at the beginning and end
select *
from TabA
where charindex(',' + CAST(TabA.ID as nvarchar(20)) + ',', @listOfIDs) > 0
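The CHARINDEX variant relies on wrapping both the list and the probe value in commas; without that, ID 1 would match the "1" inside "12". The membership test itself is simple enough to check in a few lines of Python (illustrative only):

```python
list_of_ids = ",1,2,3,"  # leading and trailing commas, as in the SQL above

def in_list(id_value, id_list):
    # Mirrors: charindex(',' + CAST(ID as nvarchar(20)) + ',', @listOfIDs) > 0
    return ("," + str(id_value) + ",") in id_list
```

The wrapping commas turn a substring search into an exact-token match.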
Assuming the type is something akin to:
CREATE TYPE [dbo].[IntList] AS TABLE(
    [Value] [int] NOT NULL
)
And the stored procedure is using it in this form:
ALTER PROCEDURE [dbo].[GetFooByIds]
    @Ids [IntList] READONLY
AS
You can create the IntList and call the procedure like so:
DECLARE @IDs IntList;
INSERT INTO @IDs SELECT Id FROM dbo.{TableThatHasIds}
WHERE Id IN (111, 222, 333, 444)
EXEC [dbo].[GetFooByIds] @IDs
Or if you are providing the IntList yourself:
DECLARE @listOfIDs dbo.IntList
INSERT INTO @listOfIDs VALUES (1),(35),(118);
You are right, there is no datatype in SQL Server that can hold a list of integers. But what you can do is store a list of integers as a string.
DECLARE @listOfIDs varchar(8000);
SET @listOfIDs = '1,2,3,4';
You can then split the string into separate integer values and put them into a table. Your procedure might already do this.
You can also use a dynamic query to achieve the same outcome:
DECLARE @SQL nvarchar(8000);
SET @SQL = 'SELECT * FROM TabA WHERE TabA.ID IN (' + @listOfIDs + ')';
EXECUTE (@SQL);
Note: I haven't done any sanitation on this query, please be aware that it's vulnerable to SQL injection. Clean as required.
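To see why that sanitation warning matters: the dynamic variant is plain string concatenation, so whatever is in the list becomes part of the statement. A Python sketch of the concatenation (the malicious value in the comment is only an example):

```python
list_of_ids = "1,2,3,4"
sql = "SELECT * FROM TabA WHERE TabA.ID IN (" + list_of_ids + ")"

# Anything can ride in through the list, e.g.
#   "0); DROP TABLE TabA; --"
# which is why the input must be validated before concatenating.
```

Validating that the list contains only integers before building the string closes that hole.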
For SQL Server 2016+ and Azure SQL Database, the STRING_SPLIT function was added, which is a perfect fit for this problem. Here is the documentation:
https://learn.microsoft.com/en-us/sql/t-sql/functions/string-split-transact-sql
Here is an example:
/* List of ids in a comma-delimited string.
   Note: the ') WAITFOR DELAY ''00:00:02''' is a way to verify that your script
   doesn't allow SQL injection. */
DECLARE @listOfIds VARCHAR(MAX) = '1,3,a,10.1,) WAITFOR DELAY ''00:00:02''';

--Make sure the temp table was dropped before trying to create it
IF OBJECT_ID('tempdb..#MyTable') IS NOT NULL DROP TABLE #MyTable;

--Create example reference table
CREATE TABLE #MyTable
    ([Id] INT NOT NULL);

--Populate the reference table
DECLARE @i INT = 1;
WHILE(@i <= 10)
BEGIN
    INSERT INTO #MyTable
    SELECT @i;
    SET @i = @i + 1;
END

/* Find all the values.
   Note: values that are not integers are silently ignored. */
SELECT t.[Id]
FROM #MyTable as t
INNER JOIN
    (SELECT value as [Id]
     FROM STRING_SPLIT(@listOfIds, ',')
     WHERE ISNUMERIC(value) = 1 /*Make sure it is numeric*/
     AND ROUND(value, 0) = value /*Make sure it is an integer*/) as ids
ON t.[Id] = ids.[Id];

--Clean-up
DROP TABLE #MyTable;
The result of the query is 1,3
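The ISNUMERIC + ROUND filter is what turns the hostile input into a harmless pair of integers. The same keep-only-whole-numbers rule, sketched in Python for illustration:

```python
list_of_ids = "1,3,a,10.1,) WAITFOR DELAY '00:00:02'"

def valid_ints(csv):
    # Keep only values that parse as whole numbers, dropping 'a',
    # '10.1', and the injection attempt entirely.
    return [int(v) for v in csv.split(",") if v.strip().lstrip("-").isdigit()]

table_ids = set(range(1, 11))  # the 1..10 reference table from the SQL above
matched = [i for i in valid_ints(list_of_ids) if i in table_ids]
```

Because the values are parsed rather than concatenated into the statement, the WAITFOR fragment never reaches the query at all.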
In the end I came to the conclusion that, without modifying how the query works, I could not store the values in variables. I used SQL Profiler to catch the values and then hard-coded them into the query to see how it worked. There were 18 of these integer arrays, and some had over 30 elements.
I think there is a need for MS SQL to introduce some additional datatypes into the language. Arrays are quite common, and I don't see why you couldn't use them in a stored procedure.
There is a function in SQL Server called STRING_SPLIT, if you are working with a list of strings.
Ref link: STRING_SPLIT (Transact-SQL)
DECLARE @tags NVARCHAR(400) = 'clothing,road,,touring,bike'

SELECT value
FROM STRING_SPLIT(@tags, ',')
WHERE RTRIM(value) <> '';
You can use this query with IN as follows:
SELECT *
FROM [dbo].[yourTable]
WHERE (strval IN (SELECT value FROM STRING_SPLIT(@tags, ',') WHERE RTRIM(value) <> ''))
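The RTRIM(value) <> '' predicate exists because consecutive delimiters produce empty rows ('road,,touring'). The same filtering, checked in Python for illustration:

```python
tags = "clothing,road,,touring,bike"

# STRING_SPLIT yields one value per delimiter gap, including the empty
# one between 'road' and 'touring'; the filter drops it.
values = [v for v in tags.split(",") if v.rstrip() != ""]
```

Without the filter, the empty string would be a fifth "tag" and could match unintended rows.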
I use this:
1 - Declare a table variable in the script you are building:
DECLARE @ShiftPeriodList TABLE(id INT NOT NULL);
2 - Populate the table variable:
IF (SOME CONDITION)
BEGIN
    INSERT INTO @ShiftPeriodList SELECT ShiftId FROM [hr].[tbl_WorkShift]
END
IF (SOME CONDITION2)
BEGIN
    INSERT INTO @ShiftPeriodList
    SELECT ws.ShiftId
    FROM [hr].[tbl_WorkShift] ws
    WHERE ws.WorkShift = 'Weekend(VSD)' OR ws.WorkShift = 'Weekend(SDL)'
END
3 - Reference the table wherever you need it in a WHERE clause:
SELECT * FROM SomeTable WHERE ShiftPeriod IN (SELECT id FROM @ShiftPeriodList)
You can't do it like this, but you can execute the entire query storing it in a variable.
For example:
DECLARE @listOfIDs NVARCHAR(MAX) =
    '1,2,3'

DECLARE @query NVARCHAR(MAX) =
    'Select *
     From TabA
     Where TabA.ID in (' + @listOfIDs + ')'

EXEC (@query)