I have a simple SELECT statement in SQL Server that returns multiple values, as follows:
1 Dog
2 Cat
3 Mouse
I need to know if there's a way to loop through them to store each value in a variable, for example:
declare @animal1 nvarchar(50)
declare @animal2 nvarchar(50)
declare @animal3 nvarchar(50)
set @animal1 = 'dog'
set @animal2 = 'cat'
set @animal3 = 'mouse'
Eventually I want all the words to form a string, and to insert that string into one column of a second table.
DECLARE @animals TABLE (
id int identity(1,1), --identity is optional if your select returns an id
name varchar(50)
);
insert into @animals
--Your Select statement here
Declare @string varchar(1000);
select @string = ISNULL(@string + ', ', '') + animals.animal_name
from
(
Select id, name from @animals
) animals (id, animal_name)
declare @my_string varchar(1000)
select @my_string = ISNULL(@my_string + ', ', '') + animals.animal_name
from
(
values
(1, 'dog'),
(2, 'cat'),
(3, 'mouse')
) animals (id, animal_name)
select @my_string
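For readers less used to T-SQL, the accumulation trick above can be sketched in Python (illustrative only; the SQL version assigns the variable once per row in exactly the same way, prefixing the separator on every row except the first):

```python
def aggregate_names(rows):
    # Mimics: SELECT @s = ISNULL(@s + ', ', '') + name
    # The separator is prefixed on every row except the first,
    # because @s starts out NULL.
    s = None
    for name in rows:
        s = ("" if s is None else s + ", ") + name
    return s

print(aggregate_names(["dog", "cat", "mouse"]))  # dog, cat, mouse
```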
Not sure exactly what you mean, but can you use a table variable?
DECLARE @animals TABLE (
animal varchar(50)
);
and insert the values?
I hope you are all well.
I would like your help on a data transformation task that I have.
I would like to convert the first row of a table into column names.
I am working on SQL Server Azure and I get daily data from another service.
This service loads a table that is of the same form.
and I would like to transform the data in the same manner
Do you have any idea how to do it?
The way to solve this is by using a little dynamic SQL magic:
First, create and populate a sample table (please save us this step in your future questions):
DECLARE @T As Table
(
Row_num int,
Line nvarchar(4000)
);
INSERT INTO @T (Row_Num, Line) VALUES
(1, 'Col1;Col2;Col3'),
(2, 'Val1;Val2;Val3'),
(3, 'Value1;Value2;Value1'),
(4, 'Val A; val B;Val A'),
(5, 'Value A; Value B;Value C');
Then, build a union all query that selects the values from every row but the first, replacing the semicolon (;) separator with a comma (,) surrounded by apostrophes ('). Add an apostrophe before and after the string (which means we are treating all the data as strings):
DECLARE @Sql nvarchar(max) = '';
SELECT @Sql += 'UNION ALL SELECT '''+ REPLACE(Line, ';', ''',''') + ''' '
FROM @T
WHERE Row_Num > 1;
Next, use STUFF to replace the first UNION ALL with a common table expression declaration, specifying the column names in the declaration itself. Note that here we don't need the apostrophes anymore; we just replace the semicolon with a comma:
SELECT @Sql = STUFF(@Sql, 1, 10, 'WITH CTE('+ REPLACE(Line, ';', ',') +') AS (') + ') SELECT * FROM CTE'
FROM @T
WHERE Row_Num = 1;
Finally, execute the SQL:
EXEC(@Sql)
Results:
Col1 Col2 Col3
Val1 Val2 Val3
Value1 Value2 Value1
Val A val B Val A
Value A Value B Value C
You can see a live demo on rextester.
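If it helps to see the string the two assignments build, here is a rough Python sketch of the same construction using the sample rows above (illustrative only; the `10` in the slice matches the length of the leading 'UNION ALL ' that STUFF overwrites):

```python
rows = [
    (1, "Col1;Col2;Col3"),
    (2, "Val1;Val2;Val3"),
    (3, "Value1;Value2;Value1"),
]

# Step 1: concatenate "UNION ALL SELECT 'a','b','c' " for every row but the first
sql = ""
for _, line in rows[1:]:
    sql += "UNION ALL SELECT '" + line.replace(";", "','") + "' "

# Step 2: STUFF(@Sql, 1, 10, ...) overwrites the first 10 characters
# ("UNION ALL ") with the CTE declaration built from row 1
header = rows[0][1].replace(";", ",")
sql = "WITH CTE(" + header + ") AS (" + sql[10:] + ") SELECT * FROM CTE"
print(sql)
```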
Another possible approach is to transform your text data into valid JSON arrays and then use OPENJSON() with an explicit schema and dynamic statement.
Working example:
Input:
CREATE TABLE #Data (
RowNum int,
Line nvarchar(max)
)
INSERT INTO #Data
(RowNum, Line)
VALUES
(1, 'ColumnA;ColumnB;ColumnC'),
(2, 'ValueA1;ValueB1;ValueC1'),
(3, 'ValueA2;ValueB2;ValueC2'),
(4, 'ValueA3;ValueB3;ValueC3'),
(5, 'ValueA4;ValueB4;ValueC4'),
(6, 'ValueA5;ValueB5;ValueC5')
T-SQL:
-- Explicit schema generation
DECLARE @schema nvarchar(max)
SELECT @schema = STUFF((
SELECT CONCAT(N',', j.[value], N' nvarchar(max) ''$[', j.[key], N']''')
FROM #Data d
CROSS APPLY OPENJSON(CONCAT(N'["', REPLACE(d.Line, ';', '","'), N'"]')) j
WHERE d.RowNum = 1
FOR XML PATH('')
), 1, 1, N'')
-- Dynamic statement
DECLARE @stm nvarchar(max)
SET @stm = CONCAT(
N'SELECT j.* FROM #Data d ',
N'CROSS APPLY OPENJSON(CONCAT(N''[["'', REPLACE(d.Line, '';'', ''","''), N''"]]'')) ',
N'WITH (',
@schema,
N') j WHERE d.RowNum > 1'
)
-- Execution
EXEC sp_executesql @stm
Output:
-----------------------
ColumnA ColumnB ColumnC
-----------------------
ValueA1 ValueB1 ValueC1
ValueA2 ValueB2 ValueC2
ValueA3 ValueB3 ValueC3
ValueA4 ValueB4 ValueC4
ValueA5 ValueB5 ValueC5
Explanations:
The main part is to transform each row's data into valid JSON arrays. The count of the columns can be different.
Data from the first row will be used for explicit schema generation and values ColumnA;ColumnB;ColumnC are transformed into ["ColumnA","ColumnB","ColumnC"]. Values from subsequent rows ValueA1;ValueB1;ValueC1 are transformed into [["ValueA1","ValueB1","ValueC1"]].
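The transformation itself is plain string surgery; a small Python sketch of it (names illustrative, semicolon-separated input as in the sample data):

```python
def to_json_array(line, nested=False):
    # 'A;B;C' -> ["A","B","C"]; nested wraps once more, which is the shape
    # OPENJSON ... WITH with '$[n]' paths expects for the value rows
    arr = '["' + line.replace(";", '","') + '"]'
    return "[" + arr + "]" if nested else arr

print(to_json_array("ColumnA;ColumnB;ColumnC"))
print(to_json_array("ValueA1;ValueB1;ValueC1", nested=True))
```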
The next simple examples demonstrate how OPENJSON() returns data with the default and with an explicit schema.
With default schema:
DECLARE @json nvarchar(max)
SET @json = '["ValueA1", "ValueB1", "ValueC1"]'
SELECT *
FROM OPENJSON(@json)
Output for default schema:
----------------
key value type
----------------
0 ValueA1 1
1 ValueB1 1
2 ValueC1 1
With explicit schema:
SET @json = '[["ValueA1", "ValueB1", "ValueC1"]]'
SELECT *
FROM OPENJSON(@json)
WITH (
ColumnA nvarchar(max) '$[0]',
ColumnB nvarchar(max) '$[1]',
ColumnC nvarchar(max) '$[2]'
)
Output for explicit schema:
-----------------------
ColumnA ColumnB ColumnC
-----------------------
ValueA1 ValueB1 ValueC1
I have a table variable called @Tbl1. Each group is one row, and I have to extract the rows of each group into @Tbl_Insert.
Declare @Tbl1 Table (TableName NVARCHAR(250), ColumnName NVARCHAR(250), DataType NVARCHAR(250), DataValue NVARCHAR(250), InGroup NVARCHAR(250))
Declare @Tbl_Insert Table (ID INT, Name NVARCHAR(250), Age INT)
-- Sample Data
Insert Into @Tbl1 values ('@Tbl_Insert','ID','INT','1','Group1'),('@Tbl_Insert','Name','NVARCHAR(250)','John.Adam','Group1'),('@Tbl_Insert','Age','INT','10','Group1')
Insert Into @Tbl1 values ('@Tbl_Insert','ID','INT','2','Group2'),('@Tbl_Insert','Name','NVARCHAR(250)','Andy.Law','Group2'),('@Tbl_Insert','Age','INT','18','Group2')
I can convert @Tbl1 row by row into @Table_TEMP:
Declare @Table_TEMP Table (Data nvarchar(max))
Insert Into @Table_TEMP
SELECT LEFT([DataValues], LEN([DataValues]) - 1)
FROM @Tbl1 AS extern
CROSS APPLY
(
SELECT Concat('''', Replace(ISNULL([DataValue], ''), '''', '') + ''',')
FROM @Tbl1 AS intern
WHERE extern.InGroup = intern.InGroup
Order By InGroup, ColumnName
FOR XML PATH('')
) pre_trimmed ([DataValues])
GROUP BY InGroup, [DataValues]
I have to extract the rows in @Tbl1 (or @Table_TEMP) into @Tbl_Insert.
I don't want to use a cursor to loop and insert row by row from @Table_TEMP, because with big data (for example, more than 10,000 rows) it runs too slowly.
Please help.
I found a sample on Stack Overflow:
Declare @tbl_Temp Table (Data NVARCHAR(MAX))
Declare @tbl2 Table (A NVARCHAR(MAX), B NVARCHAR(MAX), C NVARCHAR(MAX))
Insert Into @tbl_Temp values ('a1*b1*c1')
INSERT INTO @tbl2 (A,B,C)
SELECT PARSENAME(REPLACE(Data,'*','.'),3)
,PARSENAME(REPLACE(Data,'*','.'),2)
,PARSENAME(REPLACE(Data,'*','.'),1)
FROM @tbl_Temp
select * from @tbl2
It's nearly the same, but:
My data contains dots, so I cannot use PARSENAME.
Would I have to know the number of dots to build dynamic SQL?
PARSENAME only supports 3 dots; it returns NULL when there are more.
EXAMPLE:
Declare @ObjectName nVarChar(1000)
Set @ObjectName = 'HeadOfficeSQL1.Northwind.dbo.Authors'
SELECT
PARSENAME(@ObjectName, 5) as Server4,
PARSENAME(@ObjectName, 4) as Server,
PARSENAME(@ObjectName, 3) as DB,
PARSENAME(@ObjectName, 2) as Owner,
PARSENAME(@ObjectName, 1) as Object
If I understand correctly, you will need to use APPLY in order to fetch the records and insert the data into the other table:
insert into @Tbl_Insert (ID, Name, Age)
select max(a.id) [id], max(a.Name) [Name], max(a.Age) [Age] from @Tbl1 t
cross apply
(values
(case when t.ColumnName = 'ID' then t.DataValue end,
case when t.ColumnName = 'Name' then t.DataValue end,
case when t.ColumnName = 'Age' then t.DataValue end, t.InGroup)
) as a(id, Name, Age, [Group])
group by a.[Group]
select * from @Tbl_Insert
I did both the @Tbl_Insert approach and created a function that works like PARSENAME. It improved performance.
create function dbo.fnGetCsvPart(@csv varchar(8000), @index tinyint, @last bit = 0)
returns varchar(4000)
as
/* function to retrieve 0-based "column" from csv string */
begin
declare @i int; set @i = 0
while 1 = 1
begin
if @index = 0
begin
if @last = 1 or charindex(',', @csv, @i + 1) = 0
return substring(@csv, @i + 1, len(@csv) - @i + 1)
else
return substring(@csv, @i + 1, charindex(',', @csv, @i + 1) - @i - 1)
end
select @index = @index - 1, @i = charindex(',', @csv, @i + 1)
if @i = 0 break
end
return null
end
GO
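For reference, the effect of fnGetCsvPart is just "take the Nth comma-separated piece"; a minimal Python sketch of that effect (not a line-by-line translation of the loop):

```python
def get_csv_part(csv, index):
    # 0-based "column" from a csv string; None when the index is out of
    # range, mirroring the NULL the T-SQL function returns
    parts = csv.split(",")
    return parts[index] if 0 <= index < len(parts) else None

print(get_csv_part("a,b,c", 1))  # b
```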
Problem statement:
When @a has a single word (e.g. 'name1') or a comma-separated string (e.g. 'name1,name2,name3'), the query should return the manager names of the employees name1, name2 and name3.
When @a has an empty string, return the manager names of all the employees in the emp_master table.
I have defined a stored procedure where I pass a variable.
This variable can be a comma separated string, a single word or an empty string.
If the string is comma-separated, then I split it and get values based on the table returned by the split function;
otherwise,
I get the related value of the non-comma-separated data using a normal subquery.
I have tried to achieve this in the following way:
Declare @a varchar(50) = ''
select emp.Name from
emp_master emp
where
(LEN(@a) = 0 AND emp.Name in
(
SELECT DISTINCT [Name] FROM
[dbo].[Emp_Master] WHERE [EmpId] IN
(
SELECT
DISTINCT [MGR_ID]
FROM [dbo].[Emp_Master]
)
)
)
OR
emp.Name in (Select * from [dbo].[SplitString](@a, ','))
Details for the above sample:
[dbo].[SplitString] - custom-written function: returns a table of split values. So
Select * from [dbo].[SplitString]('name1,name2,name3', ',')
will return
SplitTable
----------
name1
name2
name3
and
Select * from [dbo].[SplitString]('name1', ',')
will return
SplitTable
----------
name1
[dbo].[Emp_Master] contains data for all the employees
[MGR_ID] is the column which has the employeeID of the employee manager
@a is the input variable
The Database is MS SQL 2008
My current solution (the above insane query) solves my purpose, but it is very slow; it would be helpful to get an optimized, faster solution to the problem.
The Emp_Master table has 400,000 rows and 30 columns.
There are 18,000 managers in that table.
CREATE NONCLUSTERED INDEX ix ON dbo.Emp_Master ([MGR_ID])
GO
DECLARE @a VARCHAR(50) = ''
DECLARE @t TABLE (val VARCHAR(50) PRIMARY KEY WITH(IGNORE_DUP_KEY=ON))
INSERT INTO @t
SELECT item = t.c.value('.', 'VARCHAR(50)')
FROM (
SELECT txml = CAST('<r>' + REPLACE(@a, ',', '</r><r>') + '</r>' AS XML)
) r
CROSS APPLY txml.nodes('/r') t(c)
SELECT /*DISTINCT*/ [Name]
FROM dbo.Emp_Master e1
WHERE (
@a = ''
AND
e1.[EmpId] IN (SELECT DISTINCT MGR_ID FROM dbo.Emp_Master)
)
OR (
@a != ''
AND
e1.Name IN (SELECT * FROM @t)
)
OPTION(RECOMPILE)
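The XML trick above is only string replacement plus node shredding; roughly, in Python (a sketch with illustrative names; in the real query the de-duplication is done by the primary key with IGNORE_DUP_KEY=ON):

```python
def split_via_xml(a):
    # REPLACE(@a, ',', '</r><r>') wrapped in <r>...</r>; .nodes('/r') then
    # yields one row per <r> element, and the PK drops duplicates
    xml = "<r>" + a.replace(",", "</r><r>") + "</r>"
    inner = xml[3:-4]               # strip the outer <r> ... </r>
    values = inner.split("</r><r>")
    seen, out = set(), []
    for v in values:                # de-duplicate, keeping first occurrence
        if v not in seen:
            seen.add(v)
            out.append(v)
    return out

print(split_via_xml("name1,name2,name1"))  # ['name1', 'name2']
```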
Try this:
CREATE NONCLUSTERED INDEX IX_MGR_ID_Emp_Master ON dbo.Emp_Master ([MGR_ID])
GO
Create Procedure searchname (@a varchar(255))
as
IF (@a = '')
BEGIN
EXEC Searchname1 @a
END
ELSE
BEGIN
EXEC Searchname2 @a
END
GO
Create Procedure Searchname1 (@a varchar(255))
AS
SELECT DISTINCT [Name] FROM
[dbo].[Emp_Master] m1 WHERE
exists
(
SELECT
*
FROM [dbo].[Emp_Master] m2
WHERE
m1.[EmpId]= m2.[MGR_ID]
)
GO
Create Procedure Searchname2 (@a varchar(max))
AS
Select @a = ' SELECT '''+replace( @a,',',''' Union ALL SELECT ''')+' '''
Create table #names (name varchar(255))
insert into #names
EXEC ( @a )
select emp.Name from
emp_master emp
WHERE
emp.Name in( Select name From #names)
option (recompile)
If you are already dealing with SQL injection at the application level, then:
ALTER procedure [dbo].[Searchname2] (@a varchar(max))
AS
select @a = ''''+replace ( @a,',',''',''')+''''
DECLARE @sql NVARCHAR(MAX) = N'
select distinct emp.Name from
emp_master emp
WHERE
emp.Name in( '+@a+')'
EXEC (@sql)
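The quoting step is the core of this version; a sketch of what it produces (Python, illustrative names, and it assumes, as stated, that the input is already injection-safe):

```python
def to_in_list(a):
    # 'name1,name2' -> "'name1','name2'", ready to splice into IN (...)
    return "'" + a.replace(",", "','") + "'"

sql = ("select distinct emp.Name from emp_master emp "
       "WHERE emp.Name in( " + to_in_list("name1,name2") + ")")
print(sql)
```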
I need to move the last 2 characters of a string so they become the first 2; for example, "ABC PT" becomes "PT ABC".
Thanks for the help.
DECLARE @String VARCHAR(100) = 'ABC PT'
SELECT RIGHT(@String, 2) + ' ' + LEFT(@String, LEN(@String) - 2)
RESULT: PT ABC
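The same move sketched in Python (illustrative only; rstrip() stands in for the RTRIM another answer uses, because slicing keeps the trailing space that separated the two parts):

```python
def move_last_two_to_front(s):
    # RIGHT(s,2) + ' ' + LEFT(s, LEN(s)-2), then trim the trailing space
    return (s[-2:] + " " + s[:-2]).rstrip()

print(move_last_two_to_front("ABC PT"))  # PT ABC
```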
You can use the SUBSTRING function.
For example:
select substring('ABC PT',len('ABC PT')-1,2)+' '+stuff('ABC PT',len('ABC PT')-1,2,'')
Query:
DECLARE @Str as nvarchar(10);
SET @Str = 'ABC PT';
SELECT RTRIM(RIGHT(@Str,2)+' '+SUBSTRING(@Str, 1, LEN(@Str)-2))
Result:
PT ABC
Below is the select statement; use it in an update if you want to.
c1 is the column name in the table test15.
If you have a variable, then replace c1 with the variable name and remove the FROM clause.
select RIGHT(c1,2)+SUBSTRING(c1,1,len(c1)-2) from test15
CREATE TABLE #TEMP
(
ID INT IDENTITY(1,1) ,
NAME VARCHAR(50)
)
INSERT INTO #TEMP VALUES('PC1AB')
INSERT INTO #TEMP VALUES('PC2XY')
INSERT INTO #TEMP VALUES('PC3NA')
INSERT INTO #TEMP VALUES('PC3NAXBBNTEYE12')
SELECT SUBSTRING(NAME,LEN(NAME)-1,2)+LTRIM(LEFT(NAME,LEN(NAME)-2)) FROM #TEMP
I have a list of IDs in a text file like this:
24641985 ,
24641980 ,
24641979 ,
24641978 ,
24641976 ,
24641974 ,
...
...
24641972 ,
24641971 ,
24641970 ,
24641968 ,
24641965)
There are tens of thousands of them.
Now I need to know which ids are in this list, that do not correspond to an ID in my table.
I guess I should put them into a temporary table, then say something like:
select theId
from #tempIdCollection
where theId not in (select customerId from customers)
Problem is I don't know how to get them into the temp table!
Can anyone help? This doesn't have to be efficient. I've just got to run it once. Any solution suggestions welcome!
Thanks in advance!
-Ev
I'd use a table variable. You declare it just like a regular variable.
declare @tempThings table (items int)
insert @tempThings values (1)
Have a "permanent temp" table, also known as an "inbox" table. Just a simple table named something like "temp_bunchOfKeys".
Your basic sequence is:
1) Truncate temp_bunchOfKeys
2) BCP the text file into temp_bunchOfKeys
3) Your SQL is then:
select theId
from Temp_BunchOfKeys
where theId not in (select customerId from customers)
I had the same problem but with strings instead of integers, and solved it by using a split function (see code below) that returns a table variable with the list content. Modify the function to suit your purpose.
Example of how to call the function
create table #t (Id int, Value varchar(64))
insert into #t (Id, Value)
select Id, Item
from dbo.fnSplit('24641978, 24641976, ... 24641972, 24641971', ',')
/*Do your own stuff*/
drop table #t
Function
if object_id(N'dbo.fnSplit', N'TF') is not null
drop function dbo.fnSplit
GO
create function dbo.fnSplit(@string varchar(max), @delimiter char(1))
returns @temptable table (Id int, Item varchar(8000))
as
begin
-- NB! len() does a rtrim() (ex. len('2 ') = 1)
if ( len( @string ) < 1 or @string is null ) return
declare @idx int
declare @slice varchar(8000)
declare @stringLength int
declare @counter int ; set @counter = 1
set @idx = charindex( @delimiter, @string )
while @idx != 0
begin
set @slice = ltrim( rtrim( left(@string, @idx - 1)))
set @slice = replace( replace(@slice, char(10), ''), char(13), '')
insert into @temptable(Id, Item) values(@counter, @slice)
-- To handle trailing blanks use datalength()
set @stringLength = datalength(@string)
set @string = right( @string, (@stringLength - @idx) )
set @idx = charindex( @delimiter, @string )
set @counter = @counter + 1
end
-- What's left after the last delimiter
set @slice = ltrim(rtrim(@string))
set @slice = replace( replace(@slice, char(10), ''), char(13), '')
insert into @temptable(Id, Item) values(@counter, @slice)
return
end
GO
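As a sanity check, the function's effect can be sketched in Python (trimming and CR/LF removal as in the T-SQL, with a 1-based Id; not a line-by-line translation of the WHILE loop):

```python
def fn_split(string, delimiter):
    # Returns (Id, Item) pairs: 1-based counter, items trimmed,
    # CR/LF characters removed, like the fnSplit table function
    if not string:
        return []
    out = []
    for i, raw in enumerate(string.split(delimiter), start=1):
        item = raw.strip().replace("\n", "").replace("\r", "")
        out.append((i, item))
    return out

print(fn_split("24641985 , 24641980 , 24641979", ","))
```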
You can copy and paste all those IDs from the text file into an Excel file, then use the import-from-Excel feature in SQL Server to create a table out of that Excel file. Quite simple, really. Let me know if you need more specific instructions.