SSIS Import CSV which is part structured part unstructured - sql-server

I have a CSV file that is in the following format:
Firstname, Andrew
Lastname, Smith
Address,1 new street
OrderNumber,OrderDate,OrderAmount
4,2020-04-04,100
3,2020-04-01,200
2,2020-03-25,100
1,2020-03-02,50
I need to import this using SSIS into SQL Server 2016.
I know how to get the second part of the data in (just skip n number of rows; the files are all consistent).
But I need some of the data in the first part of the file. There are two things I'm not sure how to do:
obtain the data when it's in the format column1=label, column2=data
how to parse through the file so that I can obtain the customer data and the order data in one go. There are some 50k files to go through, so I would prefer to avoid running through them twice.
Do I have to bite the bullet and iterate through the files twice? And if so, how would you parse the data so that I get the column names and values ready for import into a SQL table?
I thought perhaps the best way would be a script task that creates a number of output columns, but I'm not sure how to assign each value to each new output column I created.

This will get all the data onto one row. You may have to make modifications to data types, the number of columns, etc. This is a script component source. Don't forget to add your output columns with the proper data types.
string[] lines = System.IO.File.ReadAllLines(@"d:\Imports\Sample.txt");

//Declare customer info
string fname = null;
string lname = null;
string address = null;

int ctr = 0;
foreach (string line in lines)
{
    ctr++;
    switch (ctr)
    {
        case 1:
            fname = line.Split(',')[1].Trim();
            break;
        case 2:
            lname = line.Split(',')[1].Trim();
            break;
        case 3:
            address = line.Split(',')[1].Trim();
            break;
        case 4: //skipped (not a data row)
            break;
        case 5: //skipped (not a data row)
            break;
        default: //data rows
            string[] cols = line.Split(',');

            //Output data
            Output0Buffer.AddRow();
            Output0Buffer.fname = fname;
            Output0Buffer.lname = lname;
            Output0Buffer.Address = address;
            Output0Buffer.OrderNum = Int32.Parse(cols[0]);
            Output0Buffer.OrderDate = DateTime.Parse(cols[1]);
            Output0Buffer.OrderAmount = Decimal.Parse(cols[2]);
            break;
    }
}
Here is your sample output:

@KeerKolloft,
As promised, here's a T-SQL-only solution. The overall goal for me was to store the first section of data in one table and the second section in another in a "Normalized" form with a "CustomerID" being the common value between the two tables.
I also wanted to do a "full monty" demo complete with test files (I generate 10 of them in the code below).
This following bit of code creates the 10 test/demo files in a given path, which you'll probably need to change. This is NOT a part of the solution... we're just generating test files here. Please read the comments for more information.
/**********************************************************************************************************************
Purpose:
Create 10 files to demonstrate this problem with. Each file will contain random but constrained test data similar to
the following format specified by the OP.
Firstname, Andrew
Lastname, Smith
Address,1 new street
OrderNumber,OrderDate,OrderAmount
4,2020-04-04,100
3,2020-04-01,200
2,2020-03-25,100
1,2020-03-02,50
Each file name follows the pattern of "CustomerNNNN" where "NNNN" is the Left Zero Padded CustomerID. If that's not
right for your file names, you'll have to make a change in the code below where the file names get created.
The files for my test are stored in a folder called "D:\Temp\". Again, you will need to change that to suit yourself.
Each file will have the identical format where the first section will always have the same number of lines. The OP
specified that there will be 24 lines in the first section but I'm only using 3 for this demo.
The second section of each file will always have exactly the same format (including the column names) but the number
of lines containing the "CSV" data can vary (quite randomly) anywhere from just 1 line to as many as 200 lines.
***** PLEASE NOTE THAT THIS IS NOT A PART OF THE SOLUTION TO THE PROBLEM. WE'RE JUST CREATING TEST FILES HERE! *****
Revision History
Rev 00 - 08 May 2020 - Jeff Moden
- Initial Creation and Unit Test.
- Ref: https://stackoverflow.com/questions/61580198/ssis-import-csv-which-is-part-structured-part-unstructured
**********************************************************************************************************************/
--=====================================================================================================================
-- Create a table of names and addresses to be used to create section 1 of each file.
--=====================================================================================================================
--===== If the table already exists, drop it to make reruns in SSMS easier.
DROP TABLE IF EXISTS #Section1
;
--===== Create and populate the table on-the-fly.
SELECT names.FileNum
,unpvt.*
INTO #Section1
FROM (--===== I used the form just to make things easier to read/edit for testing.
VALUES
( 1 ,'Arlen' ,'Aki' ,'8990 Damarkus Street')
,( 2 ,'Landynn' ,'Sailer' ,'7053 Parish Street')
,( 3 ,'Kelso' ,'Aasha' ,'7374 Amra Street')
,( 4 ,'Drithi' ,'Layne' ,'36 Samer Street')
,( 5 ,'Lateef' ,'Kristel' ,'5888 Aarna Street')
,( 6 ,'Elisha' ,'Ximenna' ,'311 Jakel Street')
,( 7 ,'Aidy' ,'Phoenyx' ,'4607 Caralina Street')
,( 8 ,'Surie' ,'Bee' ,'5629 Legendary Street')
,( 9 ,'Braidyn' ,'Naava' ,'4553 Ellia Street')
,(10 ,'Korbin' ,'Kort' ,'1926 Julyana Street')
)names(FileNum,FirstName,LastName,Address)
CROSS APPLY
(--===== This creates 5 lines for each name to be used as the section 1 data for each file.
VALUES
( 1 ,'FirstName, ' + FirstName)
,( 2 ,'LastName, ' + LastName)
,( 3 ,'Address, ' + Address)
,( 4 ,'') -- Blank Line
,( 5 ,'OrderNumber,OrderDate,OrderAmount') --Next Section Line
)unpvt(SortOrder,SectionLine)
ORDER BY names.FileNum,unpvt.SortOrder
;
-- SELECT * FROM #Section1
;
--=====================================================================================================================
-- Build 1 file for each of the name/address combinations above.
-- Each file name is in the form of "FILEnnnn" where "nnnn" is the left zero padded file counter.
--=====================================================================================================================
--===== Preset the loop counter (gotta use a loop for this one because we can only create 1 file at a time here).
DECLARE @FileCounter INT = 1;
WHILE @FileCounter <= 10
BEGIN
--===== Start over with the table for section 2.
DROP TABLE IF EXISTS ##FileOutput
;
--===== Grab the section 1 data for this file and start the file output table with it.
SELECT SectionLine
INTO ##FileOutput
FROM #Section1
WHERE FileNum = @FileCounter
ORDER BY SortOrder
;
--===== Build section 2 data (OrderNumber in same order as OrderDate and then DESC by OrderNumber like the OP had it)
WITH cteSection2 AS
(--==== This will build anywhere from 1 to 200 random but constrained rows of data
SELECT TOP (ABS(CHECKSUM(NEWID())%200)+1)
OrderDate = CONVERT(CHAR(10), DATEADD(dd, ABS(CHECKSUM(NEWID())%DATEDIFF(dd,'2019','2020')) ,'2019') ,23)
,OrderAmount = ABS(CHECKSUM(NEWID())%999)+1
FROM sys.all_columns
)
INSERT INTO ##FileOutput
(SectionLine)
SELECT TOP 2000000000 --The TOP is necessary to get the SORT to work correctly here
SectionLine = CONCAT(ROW_NUMBER() OVER (ORDER BY OrderDate),',',OrderDate,',',OrderAmount)
FROM cteSection2
ORDER BY OrderDate DESC
;
--===== Create a file from the data we created in the ##FileOutput table.
-- Note that this overwrites any files with the same name that already exist.
DECLARE @BCPCmd VARCHAR(256);
SELECT @BCPCmd = CONCAT('BCP "SELECT SectionLine FROM ##FileOutput" queryout "D:\Temp\Customer',RIGHT(@FileCounter+10000,4),'.txt" -c -T');
EXEC xp_CmdShell @BCPCmd
;
--===== Bump the counter for the next file
SELECT @FileCounter += 1
;
END
;
GO
Now, we could do what I used to do in the old days... we could use SQL Server to isolate the first and second sections and use xp_CmdShell to BCP them out to work files and simply re-import them. In fact, I'd likely still do that because it's a lot simpler and I've found a way to use xp_CmdShell in a very safe manner. Still, a lot of people get all puffed up about using it, so we won't do it that way.
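For completeness, here's a rough sketch of that older BCP-based approach (this is NOT the method used below, and it's full of assumptions: it presumes xp_CmdShell is enabled, that the raw file lines have already been staged into a line-numbered table like the dbo.FileContent table created further down, and that "YourDB" stands in for your actual database name):
--===== Hypothetical sketch only: export just the section 2 lines to a clean work file, then
--      re-import that work file with a plain 3-column BULK INSERT (no splitter required).
DECLARE @BCPCmd VARCHAR(512) =
        'BCP "SELECT LineContent FROM YourDB.dbo.FileContent WHERE LineNumber > 5 ORDER BY LineNumber" '
      + 'queryout "D:\Temp\Section2Work.csv" -c -T';
EXEC xp_CmdShell @BCPCmd;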
First, we'll need a string splitter. We can't use the bloody STRING_SPLIT() function that MS built in as of 2016 because it doesn't return the ordinal positions of the elements it splits out. The following string splitter (up to 8kB) is the fastest non-CLR T-SQL-only splitter you'll be able to find. Of course, it's also fully documented and contains two tests in the flower box to verify its operation.
CREATE FUNCTION [dbo].[DelimitedSplit8K]
/**********************************************************************************************************************
Purpose:
Split a given string at a given delimiter and return a list of the split elements (items).
Notes:
1. Leading and trailing delimiters are treated as if an empty string element were present.
2. Consecutive delimiters are treated as if an empty string element were present between them.
3. Except when spaces are used as a delimiter, all spaces present in each element are preserved.
Returns:
iTVF containing the following:
ItemNumber = Element position of Item as a BIGINT (not converted to INT to eliminate a CAST)
Item = Element value as a VARCHAR(8000)
Note that this function uses a binary collation and is, therefore, case sensitive.
The original article for the concept of this splitter may be found at the following URL. You can also find
performance tests at this link although they are now a bit out of date. This function is much faster as of Rev 09,
which was built specifically for use in SQL Server 2012 and above and is about twice as fast as the version
documented in the article.
http://www.sqlservercentral.com/Forums/Topic1101315-203-4.aspx
-----------------------------------------------------------------------------------------------------------------------
CROSS APPLY Usage Examples and Tests:
--=====================================================================================================================
-- TEST 1:
-- This tests for various possible conditions in a string using a comma as the delimiter. The expected results are
-- laid out in the comments
--=====================================================================================================================
--===== Conditionally drop the test tables to make reruns easier for testing.
-- (this is NOT a part of the solution)
IF OBJECT_ID('tempdb..#JBMTest') IS NOT NULL DROP TABLE #JBMTest
;
--===== Create and populate a test table on the fly (this is NOT a part of the solution).
-- In the following comments, "b" is a blank and "E" is an element in the left to right order.
-- Double Quotes are used to encapsulate the output of "Item" so that you can see that all blanks
-- are preserved no matter where they may appear.
SELECT *
INTO #JBMTest
FROM ( --# of returns & type of Return Row(s)
SELECT 0, NULL UNION ALL --1 NULL
SELECT 1, SPACE(0) UNION ALL --1 b (Empty String)
SELECT 2, SPACE(1) UNION ALL --1 b (1 space)
SELECT 3, SPACE(5) UNION ALL --1 b (5 spaces)
SELECT 4, ',' UNION ALL --2 b b (both are empty strings)
SELECT 5, '55555' UNION ALL --1 E
SELECT 6, ',55555' UNION ALL --2 b E
SELECT 7, ',55555,' UNION ALL --3 b E b
SELECT 8, '55555,' UNION ALL --2 b B
SELECT 9, '55555,1' UNION ALL --2 E E
SELECT 10, '1,55555' UNION ALL --2 E E
SELECT 11, '55555,4444,333,22,1' UNION ALL --5 E E E E E
SELECT 12, '55555,4444,,333,22,1' UNION ALL --6 E E b E E E
SELECT 13, ',55555,4444,,333,22,1,' UNION ALL --8 b E E b E E E b
SELECT 14, ',55555,4444,,,333,22,1,' UNION ALL --9 b E E b b E E E b
SELECT 15, ' 4444,55555 ' UNION ALL --2 E (w/Leading Space) E (w/Trailing Space)
SELECT 16, 'This,is,a,test.' UNION ALL --4 E E E E
SELECT 17, ',,,,,,' --7 (All Empty Strings)
) d (SomeID, SomeValue)
;
--===== Split the CSV column for the whole table using CROSS APPLY (this is the solution)
SELECT test.SomeID, test.SomeValue, split.ItemNumber, Item = QUOTENAME(split.Item,'"')
FROM #JBMTest test
CROSS APPLY dbo.DelimitedSplit8K(test.SomeValue,',') split
;
--=====================================================================================================================
-- TEST 2:
-- This tests for various "alpha" splits and COLLATION using all ASCII characters from 0 to 255 as a delimiter against
-- a given string. Note that not all of the delimiters will be visible and some will show up as tiny squares because
-- they are "control" characters. More specifically, this test will show you what happens to various non-accented
-- letters for your given collation depending on the delimiter you chose.
--=====================================================================================================================
WITH
cteBuildAllCharacters (String,Delimiter) AS
(
SELECT TOP 256
'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789',
CHAR(ROW_NUMBER() OVER (ORDER BY (SELECT NULL))-1)
FROM master.sys.all_columns
)
SELECT ASCII_Value = ASCII(c.Delimiter), c.Delimiter, split.ItemNumber, Item = QUOTENAME(split.Item,'"')
FROM cteBuildAllCharacters c
CROSS APPLY dbo.DelimitedSplit8K(c.String,c.Delimiter) split
ORDER BY ASCII_Value, split.ItemNumber
;
-----------------------------------------------------------------------------------------------------------------------
Other Notes:
1. Optimized for VARCHAR(8000) or less. No testing or error reporting for truncation at 8000 characters is done.
2. Optimized for single character delimiter. Multi-character delimiters should be resolved externally from this
function.
3. Optimized for use with CROSS APPLY.
4. Does not "trim" elements just in case leading or trailing blanks are intended.
5. If you don't know how a Tally table can be used to replace loops, please see the following...
http://www.sqlservercentral.com/articles/T-SQL/62867/
6. Changing this function to use a MAX datatype will cause it to run twice as slow. It's just the nature of
MAX datatypes whether it fits in-row or not.
-----------------------------------------------------------------------------------------------------------------------
Credits:
This code is the product of many people's efforts including but not limited to the folks listed in the Revision
History below:
I also thank whoever wrote the first article I ever saw on "numbers tables" which is located at the following URL
and to Adam Machanic for leading me to it many years ago. The link below no longer works but has been preserved here
for posterity's sake.
http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-numbers-table.html
The original article can be seen at the following archive site, at least as of 29 Sep 2019.
http://web.archive.org/web/20150411042510/http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-numbers-table.html#
-----------------------------------------------------------------------------------------------------------------------
Revision History:
Rev 00 - 20 Jan 2010 - Concept for inline cteTally: Itzik-Ben Gan, Lynn Pettis and others.
Redaction/Implementation: Jeff Moden
- Base 10 redaction and reduction for CTE. (Total rewrite)
Rev 01 - 13 Mar 2010 - Jeff Moden
- Removed one additional concatenation and one subtraction from the SUBSTRING in the SELECT List for that tiny
bit of extra speed.
Rev 02 - 14 Apr 2010 - Jeff Moden
- No code changes. Added CROSS APPLY usage example to the header, some additional credits, and extra
documentation.
Rev 03 - 18 Apr 2010 - Jeff Moden
- No code changes. Added notes 7, 8, and 9 about certain "optimizations" that don't actually work for this
type of function.
Rev 04 - 29 Jun 2010 - Jeff Moden
- Added WITH SCHEMABINDING thanks to a note by Paul White. This prevents an unnecessary "Table Spool" when the
function is used in an UPDATE statement even though the function makes no external references.
Rev 05 - 02 Apr 2011 - Jeff Moden
- Rewritten for extreme performance improvement especially for larger strings approaching the 8K boundary and
for strings that have wider elements. The redaction of this code involved removing ALL concatenation of
delimiters, optimization of the maximum "N" value by using TOP instead of including it in the WHERE clause,
and the reduction of all previous calculations (thanks to the switch to a "zero based" cteTally) to just one
instance of one add and one instance of a subtract. The length calculation for the final element (not
followed by a delimiter) in the string to be split has been greatly simplified by using the ISNULL/NULLIF
combination to determine when the CHARINDEX returned a 0 which indicates there are no more delimiters to be
had or to start with. Depending on the width of the elements, this code is between 4 and 8 times faster on a
single CPU box than the original code especially near the 8K boundary.
- Modified comments to include more sanity checks on the usage example, etc.
- Removed "other" notes 8 and 9 as they were no longer applicable.
Rev 06 - 12 Apr 2011 - Jeff Moden
- Based on a suggestion by Ron "Bitbucket" McCullough, additional test rows were added to the sample code and
the code was changed to encapsulate the output in pipes so that spaces and empty strings could be perceived
in the output. The first "Notes" section was added. Finally, an extra test was added to the comments above.
Rev 07 - 06 May 2011 - Peter de Heer
- A further 15-20% performance enhancement has been discovered and incorporated into this code which also
eliminated the need for a "zero" position in the cteTally table.
Rev 08 - 24 Mar 2014 - Eirikur Eiriksson
- Further performance modification (twice as fast) For SQL Server 2012 and greater by using LEAD to find the
next delimiter for the current element, which eliminates the need for CHARINDEX, which eliminates the need
for a second scan of the string being split.
REF: https://www.sqlservercentral.com/articles/reaping-the-benefits-of-the-window-functions-in-t-sql-2
Rev 09 - 29 Sep 2019 - Jeff Moden
- Combine the improvements by Peter de Heer and Eirikur Eiriksson for use on SQL Server 2012 and above.
- Add Test 17 to the test code above.
- Modernize the generation of the embedded "Tally" generation available as of 2012. There's no significant
performance increase but it makes the code much shorter and easier to understand.
- Check/change all URLs in the notes above to ensure that they're still viable.
- Add a binary collation for a bit more of an edge on performance.
- Removed "Other Note" #7 above as UNPIVOT is no longern applicable (never was for performance).
**********************************************************************************************************************/
--=========== Define I/O parameters
(@pString VARCHAR(8000), @pDelimiter CHAR(1))
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
--=========== "Inline" CTE Driven "Tally Table” produces values from 0 up to 10,000, enough to cover VARCHAR(8000).
WITH E1(N) AS (SELECT N FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))E0(N))
,E4(N) AS (SELECT 1 FROM E1 a, E1 b, E1 c, E1 d)
,cteTally(N) AS (--==== This provides the "base" CTE and limits the number of rows right up front
-- for both a performance gain and prevention of accidental "overruns"
SELECT TOP (ISNULL(DATALENGTH(@pString),0)) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
)
,cteStart(N1) AS (--==== This returns N+1 (starting position of each "element" just once for each delimiter)
SELECT 1 UNION ALL
SELECT t.N+1 FROM cteTally t WHERE SUBSTRING(@pString COLLATE Latin1_General_BIN,t.N,1)
= @pDelimiter COLLATE Latin1_General_BIN
)
--=========== Do the actual split.
-- The ISNULL/NULLIF combo handles the length for the final element when no delimiter is found.
SELECT ItemNumber = ROW_NUMBER() OVER (ORDER BY s.N1)
, Item = SUBSTRING(@pString,s.N1,ISNULL(NULLIF((LEAD(s.N1,1,1) OVER (ORDER BY s.N1)-1),0)-s.N1,8000))
FROM cteStart s
;
Once you've set up the splitter and built the test files, the following code demonstrates a nasty fast (still not as fast as if the file could simply be imported, though) method of loading each file, parsing each section of the file, and loading each of the two sections into their respective normalized tables. The details are in the comments in the code.
Unfortunately, this forum won't allow for more than 30,000 characters and so I need to continue this in the next post down.

To continue with the rest of the code...
Look for the word "TODO" in this code to see where you'll need to make changes to handle your actual files. Like I said, the details are in the comments of the code.
As a bit of a sidebar, one of the advantages of doing these types of things in stored procedures is that it's a heck of a lot easier to copy stored procedures than it is to copy "SSIS packages" when the time comes (and it WILL come) to migrate to a new system.
/**********************************************************************************************************************
Purpose:
Import the files we created above to demonstrate one possible solution.
As a reminder, the files look like the following:
Firstname, Andrew
Lastname, Smith
Address,1 new street
OrderNumber,OrderDate,OrderAmount
4,2020-04-04,100
3,2020-04-01,200
2,2020-03-25,100
1,2020-03-02,50
Each file will have the identical format where the first section will always have the same number of lines. The OP
specified that there will be 24 lines in the first section but I'm only using 3 for this demo.
The second section of each file will always have exactly the same format (including the column names) but the number
of lines containing the "CSV" data can vary (quite randomly) anywhere from just 1 line to as many as 200 lines.
Note that the files this code looks for are in the file path of "D:\Temp\" and the file name pattern is "CustomerNNNN"
where the "NNNN" is the Left Zero Padded CustomerID. You need to change those if your stuff is different.
***** NOTE: UNLIKE THE PREVIOUS SCRIPT, THIS ONE IS THE ACTUAL SOLUTION. IT IMPORTS THE TEST FILES CREATED ABOVE. *****
Revision History
Rev 00 - 08 May 2020 - Jeff Moden
- Initial Creation and Unit Test.
- Ref: https://stackoverflow.com/questions/61580198/ssis-import-csv-which-is-part-structured-part-unstructured
**********************************************************************************************************************/
--=====================================================================================================================
-- CREATE THE NECESSARY TABLES
-- I'm using Temp Tables as both the working tables and the final target tables because I didn't want to take
-- a chance with accidentally dropping one of your tables.
--=====================================================================================================================
--===== This is where the customer information from the first section of all files will be stored.
-- It should probably be a permanent table.
DROP TABLE IF EXISTS #Customer;
CREATE TABLE #Customer
(
CustomerID INT NOT NULL
,FirstName VARCHAR(50) NOT NULL
,LastName VARCHAR(50) NOT NULL
,Address VARCHAR(50) NOT NULL
,CONSTRAINT PK_#Customer PRIMARY KEY CLUSTERED (CustomerID)
)
;
--===== This is where the order information from the second section of all files will be stored.
-- It should probably be a permanent table.
DROP TABLE IF EXISTS #CustomerOrder;
CREATE TABLE #CustomerOrder
(
CustomerID INT NOT NULL
,OrderNumber INT NOT NULL
,OrderDate DATE NOT NULL
,OrderAmount INT NOT NULL
,CONSTRAINT PK_#CustomerOrder PRIMARY KEY CLUSTERED (CustomerID,OrderNumber)
)
;
--===== We'll store all file names in this table.
-- It should probably continue to be a Temp Table.
DROP TABLE IF EXISTS #DirTree;
CREATE TABLE #DirTree
(
FileName VARCHAR(500) PRIMARY KEY CLUSTERED
,Depth INT
,IsFile BIT
)
;
--===== This is where the filtered list of files we want to work with will be stored.
-- It should probably continue to be a Temp Table.
DROP TABLE IF EXISTS #FileControl;
CREATE TABLE #FileControl
(
FileControlID INT IDENTITY(1,1) PRIMARY KEY CLUSTERED
,FileName VARCHAR(500) NOT NULL
,CustomerID AS CONVERT(INT,LEFT(RIGHT(FileName,8),4))
)
;
--===== This is where we'll temporarily import files to be worked on one at a time.
-- Ironically, this needs to be a non-temporary table because we need to create
-- a view on it to avoid needing a BCP Format File to skip the LineNumber column
-- during the "flat" import.
DROP TABLE IF EXISTS dbo.FileContent;
CREATE TABLE dbo.FileContent
(
LineNumber INT IDENTITY(1,1)
,LineContent VARCHAR(100)
)
;
--===== This is the view that we'll actually import to and it will target the table above.
-- It replaces a BCP Format File to skip the LineNumber column in the target table.
-- It's being created using Dynamic SQL to avoid the use of "GO".
DROP VIEW IF EXISTS dbo.vFileContent;
EXEC ('CREATE VIEW dbo.vFileContent AS SELECT LineContent FROM dbo.FileContent')
;
--=====================================================================================================================
-- Find the files we want to load.
-- The xp_DirTree command does not allow for wild cards and so we have to load all file and directory names that
-- are in #FilePath and then filter and copy just the ones we want to a file control table.
--=====================================================================================================================
--===== Local variables populated in this section
DECLARE @FilePath VARCHAR(500) = 'D:\Temp\' --TODO Change this if you need to.
,@FileCount INT
;
--===== Load all names in the #FilePath whether they are file names or directory names.
INSERT INTO #DirTree WITH (TABLOCK)
(FileName, Depth, IsFile)
EXEC xp_DirTree @FilePath,1,1
;
--===== Filter the names of files that we want and load them into a numbered control table to step through the files later.
INSERT INTO #FileControl
(FileName)
SELECT FileName
FROM #DirTree
WHERE FileName LIKE 'Customer[0-9][0-9][0-9][0-9].txt' --TODO you will likely need to change this pattern for file names.
AND IsFile = 1
ORDER BY FileName --Just to help keep track.
;
--===== Remember the number of file names we loaded for the upcoming control loop.
SELECT @FileCount = @@ROWCOUNT
;
--SELECT * FROM #FileControl;
--=====================================================================================================================
-- This loop is the "control" loop that loads each file one at a time and parses the information out of section 1
-- and section 2 of the file and stores the data in the respective tables.
--=====================================================================================================================
--===== Define the local variables populated in this section.
DECLARE @Counter INT = 1
,@Section1LastLine INT = 3 --TODO you'll need to change this to 24 according to your specs on the real files.
,@Section2FirstLine INT = 5 --TODO you'll also need to change this but I don't know what it will be for you.
;
--===== Setup the loop counter
WHILE @Counter <= @FileCount
BEGIN
--===== These are variables that are used within this loop.
-- No... this doesn't create an error and they're really handy when trying to troubleshoot.
DECLARE @FileName VARCHAR(500)
,@CustomerID INT
,@SQL VARCHAR(8000)
;
--===== This gets the next file from the file control table according to @Counter.
-- TODO... you might have to change where you get the CustomerID from.
-- I'm getting it from the "patterned" file names in this case because I had nothing else to go on
-- in your description of the problem.
SELECT @FileName = CONCAT(@FilePath,FileName)
,@CustomerID = CustomerID
FROM #FileControl
WHERE FileControlID = @Counter -- select * from #FileControl
;
--===== Clear the guns to get ready to load and work on a new file.
TRUNCATE TABLE dbo.FileContent
;
--===== Calculate the BULK INSERT command we need to load the given file.
SELECT @SQL = '
BULK INSERT dbo.vFileContent
FROM '+QUOTENAME(@FileName,'''')+'
WITH (
BATCHSIZE = 2000000000 --Import everything in one shot for performance/potential minimal logging.
,CODEPAGE = ''RAW'' --Ignore any code pages.
,DATAFILETYPE = ''char'' --This is NOT a unicode file. It''s ANSI text.
,FIELDTERMINATOR = '','' --The delimiter between the fields in the file.
,ROWTERMINATOR = ''\n'' --The rows were not generated on a Windows box so only "LineFeed" is used.
,KEEPNULLS --Adjacent delimiters will create NULLs rather than blanks.
,TABLOCK --Allows for "minimal logging" when possible (and it is for this import)
)
;'
--PRINT @SQL
EXEC (@SQL)
;
--===== Read Section 1 (customer information)
-- This builds the dynamic SQL to parse and store the customer information in section 1.
SELECT @SQL = CONCAT('INSERT INTO #Customer',CHAR(10),'(CustomerID');
SELECT @SQL += CONCAT(',',SUBSTRING(LineContent,1,CHARINDEX(',',LineContent)-1))
FROM dbo.FileContent
WHERE LineNumber <= @Section1LastLine;
SELECT @SQL += CONCAT(')',CHAR(10),'SELECT',CHAR(10));
SELECT @SQL += CONCAT(' CustomerID=',@CustomerID,CHAR(10));
SELECT @SQL += CONCAT(',',SUBSTRING(LineContent,1,CHARINDEX(',',LineContent)-1),'='
,QUOTENAME(LTRIM(RTRIM(SUBSTRING(LineContent,CHARINDEX(',',LineContent)+1,50))),'''')
,CHAR(10)
)
FROM dbo.FileContent
WHERE LineNumber <= @Section1LastLine
;
EXEC (@SQL)
;
--===== This parses and stores the information from section 2.
-- Since you said the order of the columns never changes, I hard-coded the results for performance
-- using an ancient "Black Arts" form of code known as a "CROSSTAB", which pivots the data result
-- from the splitter faster than PIVOT usually does and also allows exquisite control in the code.
INSERT INTO #CustomerOrder
(OrderNumber,CustomerID,OrderDate,OrderAmount)
SELECT OrderNumber = MAX(CASE WHEN split.ItemNumber = 1 THEN Item ELSE -1 END)
,CustomerID = @CustomerID
,OrderDate = MAX(CASE WHEN split.ItemNumber = 2 THEN Item ELSE '1753' END)
,OrderAmount = MAX(CASE WHEN split.ItemNumber = 3 THEN Item ELSE -1 END)
FROM dbo.FileContent fc
CROSS APPLY dbo.DelimitedSplit8K(fc.LineContent,',') split
WHERE LineNumber > @Section2FirstLine
GROUP BY LineNumber
;
--===== Bump the counter
SELECT @Counter += 1
;
END
;
--===== All done. Display the results of the two tables we populated from all 10 files.
SELECT * FROM #Customer;
SELECT * FROM #CustomerOrder;

SQL Server trigger (I need to move through a hierarchical tree structure from any given node)

Good day
I have a legacy database that was designed for a specific front-end application. I am doing multiple cases of additional app development using this data; however, the legacy database has proven inadequate to work with going into the future. Unfortunately, the legacy database has to stay in place because I still need the front-end application running.
I have created a new database of similar structure that will be used, every time a vehicle (the example we'll use) is added to the legacy database through the front end application I have set up a trigger to push the specified data into the new database on insert (this is all working perfectly).
Now to get to my problem. Each vehicle is allocated a location key which describes which location it belongs to on the hierarchical tree structure of locations. I need to take this location, which could be from any tree level, and find all the nodes below and above it in the legacy database using the locations table, then add all the location keys of the nodes to the vehicle table in the new database, which will comprise the 8 level columns (Level0 through Level7). I only need to get Location 0,1,2,3,4,5,6,7.
For example I will have seven columns of which any may be the vehicles registered location.
(Level0Key, Level1Key, Level2key,...,...,..., Level6Key, Level7Key)
As I understand it, you'll need to see the legacy database's vehicles table, logical level table, and locations table (where all locations are listed with their parent keys) in order to help me.
I will attach these tables and the simple trigger I have. I cannot explain how much I'd appreciate any help, whether it's a statement of logic or the coded trigger that might work (bonus). A huge thanks in advance.
I am just battling with exporting all the LocKeys to the variables @Level1Key, etc.
Locations Table
Logical levels table
Vehicles table
Code:
SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
GO
CREATE TRIGGER dbo.transferVehicle
ON dbo.Vehicles
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON;
DECLARE @Level0Key INT, @Level1Key INT, @Level2Key INT, @Level3Key INT, @Level4Key INT, @Level5Key INT, @Level6Key INT, @Level7Key INT, @LocKey INT;
SELECT @LocKey = [LocKey] FROM Inserted ;
with tbParent as
(
select * from Canepro.dbo.locations where LocKey= @LocKey
union all
select locations.* from Canepro.dbo.locations join tbParent on locations.LocKey = tbParent.ParentKey
),
tbsons as
(
select * from Canepro.dbo.locations where LocKey= @LocKey
union all
select locations.* from Canepro.dbo.locations join tbsons on locations.ParentKey= tbsons.LocKey
),
tball as
(
select * from tbParent as p
union
select * from tbsons as s
),
final as
(
select number = ROW_NUMBER() OVER (ORDER BY t.LocKey), t.LocKey,t.LocName , t.ParentKey
from tball as t
)
--I now need to export all rows (LocKeys) from final into the variables
-- if I use two select statements (see below) I get an error on the second
select @LocKey1 = LocKey from final where number = 1
select @LocKey2 = LocKey from final where number = 2
INSERT INTO [NewDatabase].dbo.Vehicles (VehCode, VehicleNumber, RegistrationNumber, Description, FuelKey, CatKey, Active, ExpectedConsumption, IsPetrol, LicenseExpiryDate, FuelTankCapacity, OdometerReading, Level0LocKey, Level1LocKey, Level2LocKey,Level3LocKey, Level4LocKey, Level5LocKey, Level6LocKey, Level7Key)
SELECT
VehCode, VehicleNumber, RegistrationNumber, Description, FuelType, CatKey, Active, ExpectedConsumption, IsPetrol, LicenseExpiryDate, FuelTankCapacity, OdometerReading, LocKey, @Level0Key, @Level1Key, @Level2Key, @Level3Key, @Level4Key, @Level5Key, @Level6Key, @Level7Key -- then all the other nodes that relate to the lockey, above and below is level from level0 (The top of the tree) to level 6 of the tree
FROM
inserted;
END
GO
Expected input from insert:
Vkey : 185
Lockey : 60000690
VehCode : 52
VehicleNumber : 80/11A52
RegistrationNumber :NUF 37746
Description : Ford 6610 4x4 (52)
FuelType : 174
CatKey : 7
Active : 1
Expected consumption : Null
IsPetrol : 0
LicenseExpiryDate : 2011-04-30 00:00:00
FuelTankCapacity : 150
OdomenterReading : Hours
Expected output into new database :
Vkey : 185
Lockey : 60000690
VehCode : 52
VehicleNumber : 80/11A52
RegistrationNumber :NUF 37746
Description : Ford 6610 4x4 (52)
FuelType : 174
CatKey : 7
Active : 1
Expected consumption : Null
IsPetrol : 0
LicenseExpiryDate : 2011-04-30 00:00:00
FuelTankCapacity : 150
OdomenterReading : Hours
Level0Key : 60000291 (Top Tree node)
Level1Key : 60002764 (Second Level of tree)
Level2Key : 60000841 (third level of tree)
Level3Key : 60000177 (Fourth level of tree)
Level4Key : 60000179 (Fifth level of tree)
Level5Key : 60000181 (sixth level of tree)
Level6Key : 60000205 (seventh level of tree)
Level7Key : 60000690 (Eighth level of tree)
( We can see this one is the same as the Lockey)
Would really really appreciate some help
Problem 1
if I use two select statements (see below) I get an error on the second
This doesn't work because your CTEs disappear after the first statement. So you need to save the data into a work table.
Example:
-- Set up a table variable to save results into
DECLARE @WorkTable TABLE (LevelNumber INT,LocKey INT,ParentKey INT)
DECLARE @LocKey INT = 11;
with tbParent as
(
select * from [Location] where LocKey= @LocKey
union all
select [Location].* from [Location] join tbParent on [Location].LocKey = tbParent.ParentKey
),
tbsons as
(
select * from [Location] where LocKey= @LocKey
union all
select [Location].* from [Location] join tbsons on [Location].ParentKey= tbsons.LocKey
),
tball as
(
select * from tbParent as p
union
select * from tbsons as s
),
final as
(
select LevelNumber = ROW_NUMBER() OVER (ORDER BY t.LocKey), t.LocKey, t.ParentKey
from tball as t
)
-- Save the results into the table variable
INSERT INTO @WorkTable (LevelNumber,LocKey,ParentKey)
SELECT LevelNumber,LocKey,ParentKey from final
-- now we can do what we like with the table variable
select @LocKey1 = LocKey from @WorkTable where LevelNumber = 1
select @LocKey2 = LocKey from @WorkTable where LevelNumber = 2
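If you'd rather fill all the level variables in one pass over the work table instead of one SELECT per level, conditional aggregation is an option (a sketch only: it assumes the question's @Level0Key ... @Level7Key variables are in scope and that LevelNumber really does correspond to tree depth, which you should verify):
SELECT @Level0Key = MAX(CASE WHEN LevelNumber = 1 THEN LocKey END)
      ,@Level1Key = MAX(CASE WHEN LevelNumber = 2 THEN LocKey END)
      ,@Level2Key = MAX(CASE WHEN LevelNumber = 3 THEN LocKey END)
      ,@Level3Key = MAX(CASE WHEN LevelNumber = 4 THEN LocKey END)
      ,@Level4Key = MAX(CASE WHEN LevelNumber = 5 THEN LocKey END)
      ,@Level5Key = MAX(CASE WHEN LevelNumber = 6 THEN LocKey END)
      ,@Level6Key = MAX(CASE WHEN LevelNumber = 7 THEN LocKey END)
      ,@Level7Key = MAX(CASE WHEN LevelNumber = 8 THEN LocKey END)
FROM @WorkTable;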
But again, I must caution you against forcing a self-referencing tree into fixed levels unless you are certain the data always comes out this way.
Problem 2
SELECT @LocKey = [LocKey] FROM Inserted ;
INSERTED can contain many rows. This assignment just picks up one of them. If there is any operation that inserts or updates many rows, your trigger won't work properly. You need to loop over (or, better, join to) inserted and work on every row in it.
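As a minimal illustration of the set-based (join) style rather than the single-variable assignment - a sketch only, with the column list abbreviated and the level lookup itself omitted:
-- Process every row in the "inserted" pseudo-table at once instead of pulling one LocKey into a variable.
INSERT INTO [NewDatabase].dbo.Vehicles (VehCode, VehicleNumber, RegistrationNumber /* ...remaining columns... */)
SELECT i.VehCode, i.VehicleNumber, i.RegistrationNumber /* ...remaining columns... */
FROM inserted AS i;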
Example of DDL and Inserts
Below is an example of table DDL and sample data. This allows us to set up your data and work with it locally.
CREATE TABLE [LOCATION] (LocKey INT , ParentKey INT , TreeLevel INT)
INSERT INTO [LOCATION]
SELECT LocKey,ParentKey,TreeLevel
FROM
(
VALUES
(1,60000291,1),
(2,50000199,6),
(6,60000706,8),
(7,60000707,8),
(8,6,9),
(9,6,9),
(10,6,9),
(11,6,9),
(12,6,9),
(13,6,9),
(14,6,9),
(15,6,9),
(16,6,9),
(17,6,9)
) As T(LocKey,ParentKey,TreeLevel)

Grouping data into fuzzy gaps and islands

This is essentially a gaps and islands problem, although it's atypical. I cut the example down to the bare minimum. I need to identify gaps that exceed a certain threshold, and duplicates can't be a problem, although this example removes them.
In any case, the common solution of using ROW_NUMBER() doesn't help, since gaps of even 1 can't be handled and the gap value is a parameter in 'real life'.
The code below actually works correctly. And it's super fast! But if you look at it you'll see why people are rather gun-shy about relying upon it. The method was first published 9 years ago here http://www.sqlservercentral.com/articles/T-SQL/68467/ and I've read all 32 pages of comments. Nobody has successfully poked holes in it other than to say "it's not documented behavior". I've tried it on every version from 2005 to 2019 and it works.
The question is, beyond using a cursor or while loop to look at many millions of rows one by one - which takes I don't know how long, because I cancel after 30 minutes - is there a 'supported' way to get the same results in a reasonable amount of time? Even 100x slower would complete 4M rows in 10 minutes and I can't find a way to come close to that!
CREATE TABLE #t (CreateDate date not null
,TufpID int not null
,Cnt int not null
,FuzzyGroup int null);
ALTER TABLE #t ADD CONSTRAINT PK_temp PRIMARY KEY CLUSTERED (CreateDate,TufpID);
-- Takes 40 seconds to write 4.4M rows from a source of 70M rows.
INSERT INTO #T
SELECT X.CreateDate
,X.TufpID
,Cnt = COUNT(*)
,FuzzyGroup = null
FROM SessionState SS
CROSS APPLY(VALUES (CAST(SS.CreateDate as date),SS.TestUser_Form_Part_id)) X(CreateDate,TufpID)
GROUP BY X.CreateDate
,X.TufpID
ORDER BY x.CreateDate,x.TufpID;
-- Takes 6 seconds to update 4.4M rows. They WILL update in clustered index order!
-- (Provided all the rules are followed - see the link above)
DECLARE @FuzzFactor int = 38
DECLARE @Prior int = -@FuzzFactor; -- Ensure the 1st row has its own group
DECLARE @Group int;
DECLARE @CDate date;
UPDATE #T
SET @Group = FuzzyGroup = CASE WHEN t.TufpID - @Prior < @FuzzFactor AND t.CreateDate = @CDate
THEN @Group ELSE t.TufpID END
,@CDate = CASE WHEN @CDate = t.CreateDate THEN @CDate ELSE t.CreateDate END
,@Prior = CASE WHEN @Prior = t.TufpID-1 THEN @Prior + 1 ELSE t.TufpID END
FROM #t t WITH (TABLOCKX) OPTION(MAXDOP 1);
After the above executes, the FuzzyGroup column contains the lowest value of TufpID in the group. IOW, the first row (in clustered index order) contains the value of its own TufpID column. Thereafter every row gets the same value until the date changes or a gap size (in this case 38) is exceeded. In those cases the current TufpID becomes the value put into FuzzyGroup until another change is detected. So after 6 seconds I can run queries that group by FuzzyGroup and analyze the islands.
In practice I do some running counts and totals as well in the same pass, so it takes 8 seconds rather than 6, but I could do those things with window functions pretty easily if I needed to, so I left them off.
This is the smallest table and I'll eventually need to handle 100M rows. Thus 10 minutes for 4.4M is probably not good enough but it's a place to start.
This should be reasonably efficient and avoids relying on undocumented behaviour.
WITH T1
AS (SELECT *,
PrevTufpID = LAG(TufpID)
OVER (PARTITION BY CreateDate
ORDER BY TufpID)
FROM #T),
T2
AS (SELECT *,
_FuzzyGroup = MAX(CASE
WHEN PrevTufpID IS NULL
OR TufpID - PrevTufpID >= @FuzzFactor
THEN TufpID
END)
OVER (PARTITION BY CreateDate
ORDER BY TufpID ROWS UNBOUNDED PRECEDING)
FROM T1)
UPDATE T2
SET FuzzyGroup = _FuzzyGroup
The execution plan has a single ordered scan through the clustered index, with the row values then flowing through some window function operators and into the update.

Expression to find multiple spaces in string

We handle a lot of sensitive data and I would like to mask passenger names using only the first and last letter of each name part, joining these by three asterisks (***).
For example: the name 'John Doe' will become 'J***n D***e'
For a name that consists of two parts this is doable by finding the space using the expression:
LEFT(CardHolderNameFromPurchase, 1) +
'***' +
CASE WHEN CHARINDEX(' ', PassengerName) = 0
THEN RIGHT(PassengerName, 1)
ELSE SUBSTRING(PassengerName, CHARINDEX(' ', PassengerName) -1, 1) +
' ' +
SUBSTRING(PassengerName, CHARINDEX(' ', PassengerName) +1, 1) +
'***' +
RIGHT(PassengerName, 1)
END
However, the passenger name can have more than two parts; there is no real limit to it. How can I find the indices of all spaces within an expression? Or should I maybe tackle this problem in a different way?
Any help or pointer is much appreciated!
This solution does what you want it to, but is really the wrong approach to use when trying to hide personally identifiable data, as per Gordon's explanation in his answer.
SQL:
declare @t table(n nvarchar(20));
insert into @t values('John Doe')
,('JohnDoe')
,('John Doe Two')
,('John Doe Two Three')
,('John O''Neill');
select n
,stuff((select ' ' + left(s.item,1) + '***' + right(s.item,1)
from dbo.fn_StringSplit4k(t.n,' ',null) as s
for xml path('')
),1,1,''
) as mask
from @t as t;
Output:
+--------------------+-------------------------+
| n | mask |
+--------------------+-------------------------+
| John Doe | J***n D***e |
| JohnDoe | J***e |
| John Doe Two | J***n D***e T***o |
| John Doe Two Three | J***n D***e T***o T***e |
| John O'Neill | J***n O***l |
+--------------------+-------------------------+
String splitting function based on Jeff Moden's Tally Table approach:
create function [dbo].[fn_StringSplit4k]
(
@str nvarchar(4000) = ' ' -- String to split.
,@delimiter as nvarchar(1) = ',' -- Delimiting value to split on.
,@num as int = null -- Which value to return, null returns all.
)
returns table
as
return
-- Start tally table with 10 rows.
with n(n) as (select 1 union all select 1 union all select 1 union all select 1 union all select 1 union all select 1 union all select 1 union all select 1 union all select 1 union all select 1)
-- Select the same number of rows as characters in @str as incremental row numbers.
-- Cross joins increase exponentially to a max possible 10,000 rows to cover largest @str length.
,t(t) as (select top (select len(isnull(@str,'')) a) row_number() over (order by (select null)) from n n1,n n2,n n3,n n4)
-- Return the position of every value that follows the specified delimiter.
,s(s) as (select 1 union all select t+1 from t where substring(isnull(@str,''),t,1) = @delimiter)
-- Return the start and length of every value, to use in the SUBSTRING function.
-- ISNULL/NULLIF combo handles the last value where there is no delimiter at the end of the string.
,l(s,l) as (select s,isnull(nullif(charindex(@delimiter,isnull(@str,''),s),0)-s,4000) from s)
select rn
,item
from(select row_number() over(order by s) as rn
,substring(@str,s,l) as item
from l
) a
where rn = @num
or @num is null;
GO
If you consider PassengerName as sensitive information, then you should not be storing it in clear text in generally accessible tables. Period.
There are several different options.
One is to have reference tables for sensitive information. Any table that references this would have an id rather than the name. Voila. No sensitive information is available without access to the reference table, and that would be severely restricted.
A second method is a reversible compression algorithm. This would allow the value to be gibberish, but with the right knowledge, it could be transformed back into a meaningful value. Typical methods for this are the public-key encryption algorithms devised by Rivest, Shamir, and Adleman (RSA encryption).
If you want to do first and last letters of names, I would be really careful about Asian names. Many of them consist of two or three letters when written in Latin script. That isn't much hiding. SQL Server does not have simple mechanisms to do this. You can write a user-defined function with a loop to manage the process. However, I view this as the least secure and least desirable approach.
This uses Jeff Moden's DelimitedSplit8K, as well as the new functionality in SQL Server 2017 STRING_AGG. As I don't know what version you're using, I've just gone "whole hog" and assumed you're using the latest version.
Jeff's function is invaluable here, as it returns the ordinal position, something which Microsoft have foolishly omitted from their own function, STRING_SPLIT (and didn't add in 2017 either). Ordinal position is key here, so we can't make use of the built in function.
WITH VTE AS(
SELECT *
FROM (VALUES ('John Doe'),('Jane Bloggs'),('Edgar Allan Poe'),('Mr George W. Bush'),('Homer J Simpson')) V(FullName)),
Masking AS (
SELECT *,
ISNULL(STUFF(Item, 2, LEN(item) -2,'***'), Item) AS MaskedPart
FROM VTE V
CROSS APPLY dbo.delimitedSplit8K(V.Fullname, ' '))
SELECT STRING_AGG(MaskedPart,' ') AS MaskedFullName
FROM Masking
GROUP BY Fullname;
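One caveat on the query above: STRING_AGG without an ordering clause doesn't guarantee the name parts come back in their original order. Because DelimitedSplit8K returns ItemNumber, the final SELECT can pin the order with WITHIN GROUP (also available in SQL Server 2017), for example:
SELECT STRING_AGG(MaskedPart,' ') WITHIN GROUP (ORDER BY ItemNumber) AS MaskedFullName
FROM Masking
GROUP BY Fullname;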
Edit: Never mind, the OP has commented that they are using 2008, so STRING_AGG is out of the question. @iamdave, however, has posted an answer which is very similar to my own; it just does it the "old-fashioned XML way".
Depending on your version of SQL Server, you may be able to use the built-in string split to rows on spaces in the name, do your string formatting, and then roll back up to name level using an XML path.
create table dataset (id int identity(1,1), name varchar(50));
insert into dataset (name) values
('John Smith'),
('Edgar Allen Poe'),
('One Two Three Four');
with split as (
select id, cs.Value as Name
from dataset
cross apply STRING_SPLIT (name, ' ') cs
),
formatted as (
select
id,
name,
left(name, 1) + '***' + right(name, 1) as out
from split
)
SELECT
id,
(SELECT ' ' + out
FROM formatted b
WHERE a.id = b.id
FOR XML PATH('')) [out_name]
FROM formatted a
GROUP BY id
Result:
id out_name
1 J***n S***h
2 E***r A***n P***e
3 O***e T***o T***e F***r
You can do that using this function.
create function [dbo].[fnMaskName] (@var_name varchar(100))
RETURNS varchar(100)
WITH EXECUTE AS CALLER
AS
BEGIN
declare @var_part varchar(100)
declare @var_return varchar(100)
declare @n_position smallint
set @var_return = ''
set @n_position = 1
WHILE @n_position<>0
BEGIN
SET @n_position = CHARINDEX(' ', @var_name)
IF @n_position = 0
SET @n_position = LEN(@var_name)
SET @var_part = SUBSTRING(@var_name, 1, @n_position)
SET @var_name = SUBSTRING(@var_name, @n_position+1, LEN(@var_name))
if @var_part<>''
SET @var_return = @var_return + stuff(@var_part, 2, len(@var_part)-2, replicate('*',len(@var_part)-2)) + ' '
END
RETURN(@var_return)
END

how to select data row from a comma separated value field

My question is similar, but not identical, to this question:
How to SELECT parts from a comma-separated field with a LIKE statement
but I have not seen any answer there, so I am posting my question again.
I have the following table:
╔════════════╦═════════════╗
║ VacancyId ║ Media ║
╠════════════╬═════════════╣
║ 1 ║ 32,26,30 ║
║ 2 ║ 31, 25,20 ║
║ 3 ║ 21,32,23 ║
╚════════════╩═════════════╝
I want to select the rows that have media id = 30 or media id = 21 or media id = 40.
So in this case the output will return the first and the third rows.
How can I do that?
I have tried media like '30' but that does not return any value. Plus, I don't just need to search for one string in that field.
My database is SQL Server
Thank you
It's never good to store comma-separated values in the database. If it is feasible, try to make a separate table to store them, as this is most probably a 1:n relationship.
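As a rough illustration of that separate table (the table and column names here are assumptions, not from the question):
CREATE TABLE VacancyMedia
(
VacancyId INT NOT NULL
,MediaId INT NOT NULL
,CONSTRAINT PK_VacancyMedia PRIMARY KEY CLUSTERED (VacancyId, MediaId)
);
-- The search then becomes a simple, sargable lookup:
SELECT DISTINCT VacancyId
FROM VacancyMedia
WHERE MediaId IN (30, 21, 40);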
If this is not feasible, then there are the following possible ways you can do this.
If the number of values to match is going to stay the same, then you might want to use a series of LIKE statements along with OR/AND, depending on your requirement.
Ex.-
WHERE
Media LIKE '%21%'
OR Media LIKE '%30%'
OR Media LIKE '%40%'
However, the above query will catch all the values which contain 21, so rows with values like 1210 or 210 will also be returned. To overcome this you can use the following trick, which does hamper performance because it uses functions in the WHERE clause and works against writing sargable queries.
But here it goes,
--Declare the @valueSearch variable first with the value to match; you can do this for multiple values using multiple variables.
Declare @valueSearch varchar(10) = '21'
-- Then do the matching in where clause
WHERE
(',' + RTRIM(Media) + ',') LIKE '%,' + @valueSearch + ',%'
If the number of values to match is going to change, then you might want to look into a Full-Text Index.
And if you decide to go with a Full-Text Index, you can do as below to get what you want,
Ex.-
WHERE
CONTAINS(Media, '"21" OR "30" OR "40"')
The best possible way I can suggest is to first convert the comma-separated value into a table using this link, and you will end up with a query that looks like the one below.
SELECT * FROM Table
WHERE Media in('30','28')
It will surely work.
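If you are on SQL Server 2016 or later and don't want to install a splitter function, here is a minimal sketch of the same idea using the built-in STRING_SPLIT (the table name Vacancy is an assumption):
SELECT DISTINCT v.VacancyId, v.Media
FROM Vacancy v
CROSS APPLY STRING_SPLIT(v.Media, ',') s
WHERE LTRIM(RTRIM(s.value)) IN ('30', '21', '40');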
You can use this, but the performance is inevitably poor. You should, as others have said, normalise this structure.
WHERE
',' + media + ',' LIKE '%,21,%'
OR ',' + media + ',' LIKE '%,30,%'
Etc, etc...
If you are certain that any Media value containing the string 30 will be one you wish to return, you just need to include wildcards in your LIKE statement:
SELECT *
FROM Table
WHERE Media LIKE '%30%'
Bear in mind though that this would also return a record with a Media value of 298,300,302 for example, so if this is problematic for you, you'll need to consider a more sophisticated method, like:
SELECT *
FROM Table
WHERE Media LIKE '%,30,%'
OR Media LIKE '30,%'
OR Media LIKE '%,30'
OR Media = '30'
If there might be spaces in the strings (as in your question), you'll also want to strip these out:
SELECT *
FROM Table
WHERE REPLACE(Media,' ','') LIKE '%,30,%'
OR REPLACE(Media,' ','') LIKE '30,%'
OR REPLACE(Media,' ','') LIKE '%,30'
OR REPLACE(Media,' ','') = '30'
Edit: I actually prefer Coder of Code's solution to this:
SELECT *
FROM Table
WHERE ',' + LTRIM(RTRIM(REPLACE(Media,' ',''))) + ',' LIKE '%,30,%'
You mention that you wish to search for multiple strings in this field, which is also possible:
SELECT *
FROM Table
WHERE Media LIKE '%30%'
OR Media LIKE '%28%'
SELECT *
FROM Table
WHERE Media LIKE '%30%'
AND Media LIKE '%28%'
I agree it's not a good idea to store comma-separated values like that. But if you have to:
I think using an inline function will give better performance;
Select VacancyId, Media from (
Select 1 as VacancyId, '32,26,30' as Media
union all
Select 2, '31,25,20'
union all
Select 3, '21,32,23'
) asa
CROSS APPLY dbo.udf_StrToTable(Media, ',') tbl
where CAST(tbl.Result as int) in (30,21,40)
Group by VacancyId, Media
Output is;
VacancyId Media
----------- ---------
1 32,26,30
3 21,32,23
and our inline function script is;
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[udf_StrToTable]') and xtype in (N'FN', N'IF', N'TF'))
drop function [dbo].udf_StrToTable
GO
CREATE FUNCTION udf_StrToTable (@List NVARCHAR(MAX), @Delimiter NVARCHAR(1))
RETURNS TABLE
With Encryption
AS
RETURN
( WITH Split(stpos,endpos)
AS(
SELECT 0 AS stpos, CHARINDEX(@Delimiter,@List) AS endpos
UNION ALL
SELECT CAST(endpos+1 as int), CHARINDEX(@Delimiter,@List,endpos+1)
FROM Split
WHERE endpos > 0
)
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 1)) as inx,
SUBSTRING(@List,stpos,COALESCE(NULLIF(endpos,0),LEN(@List)+1)-stpos) Result
FROM Split
)
GO
This solution uses a RECURSIVE CTE to identify the position of each comma within the string then uses SUBSTRING to return all strings between the commas.
I've left some unnecessary code in place to help you get your head around what it's doing. You can strip it down to provide exactly what you need.
DROP TABLE #TMP
CREATE TABLE #TMP(ID INT, Vals CHAR(100))
INSERT INTO #TMP(ID,VALS)
VALUES
(1,'32,26,30')
,(2,'31, 25,20')
,(3,'21,32,23')
;WITH cte
AS
(
SELECT
ID
,VALS
,0 POS
,CHARINDEX(',',VALS,0) REM
FROM
#TMP
UNION ALL
SELECT ID,VALS,REM,CHARINDEX(',',VALS,REM+1)
FROM
cte c
WHERE CHARINDEX(',',VALS,REM+1) > 0
UNION ALL
SELECT ID,VALS,REM,LEN(VALS)
FROM
cte c
WHERE POS+1 < LEN(VALS) AND CHARINDEX(',',VALS,REM+1) = 0
)
,cte_Clean
AS
(
SELECT ID,CAST(REPLACE(LTRIM(RTRIM(SUBSTRING(VALS,POS+1,REM-POS))),',','') AS INT) AS VAL FROM cte
WHERE POS <> REM
)
SELECT
ID
FROM
cte_Clean
WHERE
VAL = 32
ORDER BY ID

T-SQL trying to determine the largest string from a set of concatenated strings in a database

I have two tables. One has an Order number, and details about the order:
CREATE TABLE #Order ( OrderID int )
and the second contains comments about the order:
CREATE TABLE #OrderComments ( OrderID int,
Comment VarChar(500) )
Order ID Comments
~~~~~~~~ ~~~~~~~~
1 Loved this item!
1 Could use some work
1 I've had better
2 Try the veal
I'm tasked with determining the maximum length of the output, then returning output like the following:
Order ID Comments Length
~~~~~~~~ ~~~~~~~~ ~~~~~~
1 Loved this item! | Could use some work | I've had better 56
2 Try the veal 12
So, in this example, if this is all of the data, I'm looking for "56".
The main purpose is to determine the maximum length of all comments when appended together, including the | delimiter. This will be used when constructing the table this output will be put into, to determine if we can get the data within the 8,060 size limit for a row or if we need to use varchar(max) or text to hold the data.
I have tried a couple of subqueries that can generate this output to variables, but I haven't found one yet that could generate the above output. If I could get that, then I could just do a SELECT TOP 1 ... ORDER BY 3 DESC to get the number I'm looking for.
To find out what the length of the longest string will be if you trim and concatenate all the (not null) comments belonging to an OrderId with a delimiter of length three you can use
SELECT TOP(1) SUM(LEN(Comment)) + 3* (COUNT(Comment) - 1) AS Length
FROM OrderComments
GROUP BY OrderId
ORDER BY Length DESC
To actually do the concatenation you can use XML PATH as demonstrated in many other answers on this site.
WITH O AS
(
SELECT DISTINCT OrderID
FROM #Order
)
SELECT O.OrderID,
LEFT(y.Comments, LEN(y.Comments) - 1) AS Comments
FROM O
CROSS APPLY (SELECT ltrim(rtrim(Comment)) + ' | '
FROM #OrderComments oc
WHERE oc.OrderID = O.OrderID
AND Comment IS NOT NULL
FOR XML PATH(''), TYPE) x (Comments)
CROSS APPLY (SELECT x.Comments.value('.', 'VARCHAR(MAX)')) y(Comments)
All you need is the STUFF function and FOR XML PATH.
Check out this SQL Fiddle:
http://www.sqlfiddle.com/#!3/65cc6/5
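In case the fiddle link goes stale, here is a minimal sketch of that STUFF / FOR XML PATH approach against the question's temp tables (the leading ' | ' delimiter is 3 characters long, which is what the 1, 3 arguments to STUFF remove):
SELECT o.OrderID
,STUFF(x.Comments, 1, 3, '') AS Comments
,LEN(STUFF(x.Comments, 1, 3, '')) AS Length
FROM #Order o
CROSS APPLY (SELECT (SELECT ' | ' + LTRIM(RTRIM(oc.Comment))
FROM #OrderComments oc
WHERE oc.OrderID = o.OrderID
FOR XML PATH(''), TYPE).value('.', 'VARCHAR(MAX)')
) x (Comments);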
