Get percentage of matching strings - sql-server

I have two strings to match, and I want to get the percentage by which they match.
Given:
String 1: John Smith Makde
String 2: Makde John Smith
I used the following user-defined scalar function.
CREATE FUNCTION [dbo].[udf_GetPercentageOfTwoStringMatching]
(
    @string1 NVARCHAR(1000)
   ,@string2 NVARCHAR(1000)
)
RETURNS INT
--WITH ENCRYPTION
AS
BEGIN
    DECLARE @levenShteinNumber INT
    DECLARE @string1Length INT = LEN(@string1), @string2Length INT = LEN(@string2)
    DECLARE @maxLengthNumber INT = CASE WHEN @string1Length > @string2Length THEN @string1Length ELSE @string2Length END
    SELECT @levenShteinNumber = [dbo].[f_ALGORITHM_LEVENSHTEIN] (@string1, @string2)
    DECLARE @percentageOfBadCharacters INT = @levenShteinNumber * 100 / @maxLengthNumber
    DECLARE @percentageOfGoodCharacters INT = 100 - @percentageOfBadCharacters
    -- Return the result of the function
    RETURN @percentageOfGoodCharacters
END
Calling the function:
SELECT dbo.udf_GetPercentageOfTwoStringMatching('John Smith Makde','Makde John Smith')
Output:
7
But when I pass the same string with the words in the same positions:
SELECT dbo.udf_GetPercentageOfTwoStringMatching('John Smith Makde','John Smith Makde')
Output:
100
Expected result: since both strings contain the same words, just in a different order, I want a 100% match:
100

+1 for the question. It appears you are trying to determine how similar two names are, but it's hard to tell exactly how you want to measure that. I'm very familiar with the Levenshtein Distance, for example, but I don't understand how you are trying to use it. To get you started I put together two ways you might approach this. This won't be a complete answer but rather the tools you will need for whatever you're trying to do.
To compare the number of matching "name parts" you could use DelimitedSplit8K like this:
DECLARE
    @String1 VARCHAR(100) = 'John Smith Makde Sr.',
    @String2 VARCHAR(100) = 'Makde John Smith Jr.';
SELECT COUNT(*)/(1.*LEN(@String1)-LEN(REPLACE(@String1,' ',''))+1)
FROM
(
SELECT s1.item
FROM dbo.delimitedSplit8K(#String1,' ') AS s1
INTERSECT
SELECT s2.item
FROM dbo.delimitedSplit8K(#String2,' ') AS s2
) AS a
Here I'm splitting the names into atomic values and counting how many of them match. Then we divide that number by the number of values: 3/4 = .75, or 75%; three of the four name parts match.
Another method would be to use NGrams8K like so:
DECLARE
    @String1 VARCHAR(100) = 'John Smith Makde Sr.',
    @String2 VARCHAR(100) = 'Makde John Smith Jr.';
SELECT (1.*f.L-f.MM)/f.L
FROM
(
SELECT
MM = SUM(ABS(s1.C-s2.C)),
L = CASE WHEN LEN(@String1)>LEN(@String2) THEN LEN(@String1) ELSE LEN(@String2) END
FROM
(
SELECT s1.token, COUNT(*)
FROM samd.NGrams8k(@String1,1) AS s1
GROUP BY s1.token
) AS s1(T,C)
JOIN
(
SELECT s1.token, COUNT(*)
FROM samd.NGrams8k(@String2,1) AS s1
GROUP BY s1.token
) AS s2(T,C)
ON s1.T=s2.T -- Letters that are equal
AND s1.C<>s2.C -- ... but the QTY is different
) AS f;
Here we're counting the characters and subtracting the mismatches. There are two (one extra J and one extra S). The longer of the two strings is 20 characters, and there are 18 characters where the letter and quantity are equal. 18/20 = .9, or 90%.
Again, what you are doing is not complicated; I would just need more detail for a better answer.
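If what you're after is exactly the word-order-insensitive percentage from your question, one option is to wrap the name-part INTERSECT idea above in an inline table-valued function. This is only a sketch: the function name is made up, and it assumes dbo.DelimitedSplit8K is installed.
--Sketch only: word-order-insensitive match percentage based on the name-part
--INTERSECT approach above. Assumes dbo.DelimitedSplit8K exists; the function
--name itvf_NamePartMatchPercent is hypothetical.
CREATE FUNCTION dbo.itvf_NamePartMatchPercent
(
     @String1 VARCHAR(8000)
    ,@String2 VARCHAR(8000)
)
RETURNS TABLE AS
RETURN
SELECT MatchPercent = 100.0 * COUNT(m.Item)
                    / NULLIF(LEN(@String1) - LEN(REPLACE(@String1,' ','')) + 1, 0)
FROM (
        SELECT s1.Item FROM dbo.DelimitedSplit8K(@String1,' ') AS s1
        INTERSECT
        SELECT s2.Item FROM dbo.DelimitedSplit8K(@String2,' ') AS s2
     ) AS m;
With that in place, SELECT * FROM dbo.itvf_NamePartMatchPercent('John Smith Makde','Makde John Smith') should return 100.0, because all three name parts match regardless of their order.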

Doing this for millions of rows again and again will be a nightmare... I'd add another column (or a 1:1 related side table) to permanently store a normalized string. Try this:
--Create a mockup table and fill it with some dummy data
CREATE TABLE #MockUpYourTable(ID INT IDENTITY, SomeName VARCHAR(1000));
INSERT INTO #MockUpYourTable VALUES('Makde John Smith')
,('Smith John Makde')
,('Some other string')
,('string with with duplicates with');
GO
--Add a column to store the normalized strings
ALTER TABLE #MockUpYourTable ADD NormalizedName VARCHAR(1000);
GO
--Use this script to split your string in fragments and re-concatenate them as canonically ordered, duplicate-free string.
UPDATE #MockUpYourTable SET NormalizedName=CAST('<x>' + REPLACE((SELECT LOWER(SomeName) AS [*] FOR XML PATH('')),' ','</x><x>') + '</x>' AS XML)
.query(N'
for $fragment in distinct-values(/x/text())
order by $fragment
return $fragment
').value('.','nvarchar(1000)');
GO
--Check the result
SELECT * FROM #MockUpYourTable
ID  SomeName                          NormalizedName
--  --------------------------------  ----------------------
1   Makde John Smith                  john makde smith
2   Smith John Makde                  john makde smith
3   Some other string                 other some string
4   string with with duplicates with  duplicates string with
--Clean-Up
GO
DROP TABLE #MockUpYourTable
Hint: use a trigger ON INSERT, UPDATE to keep these values synced.
Now you can apply the same transformation to the strings you want to compare and reuse your former approach. Thanks to the re-sorting, strings made up of identical fragments will return 100% similarity.
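For what it's worth, such a trigger could look roughly like this. This is only a sketch; it assumes the data lives in a permanent table dbo.YourTable(ID, SomeName, NormalizedName) rather than the mockup temp table above, so adjust the names to your real schema.
--A sketch only: keeps NormalizedName in sync whenever SomeName changes.
--dbo.YourTable and its columns are assumptions.
CREATE TRIGGER trg_YourTable_NormalizeName ON dbo.YourTable
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE t
       SET NormalizedName = CAST('<x>' + REPLACE((SELECT LOWER(t.SomeName) AS [*] FOR XML PATH('')),' ','</x><x>') + '</x>' AS XML)
                            .query(N'
                                    for $fragment in distinct-values(/x/text())
                                    order by $fragment
                                    return $fragment
                                    ').value('.','nvarchar(1000)')
      FROM dbo.YourTable AS t
     INNER JOIN inserted AS i ON i.ID = t.ID;
END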

Related

SSIS Import CSV which is part structured part unstructured

I have a CSV file that is in the following format:
Firstname, Andrew
Lastname, Smith
Address,1 new street
OrderNumber,OrderDate,OrderAmount
4,2020-04-04,100
3,2020-04-01,200
2,2020-03-25,100
1,2020-03-02,50
I need to import this using SSIS into SQL Server 2016.
I know how to get the second part of the data in (just skip n number of rows; the files are all consistent).
But I need some of the data in the first part of the file. There are two things I'm not sure how to do:
obtain the data when it's in the format column1=label, column2=data
parse through the file so that I can obtain the customer data and the order data in one go. There are some 50k files to go through, so I would prefer to avoid running through them twice.
Do I have to bite the bullet and iterate through the files twice? And if so, how would you parse the data so that I get the column names and values ready for import to a SQL table?
I thought perhaps the best way would be a script task, creating a number of output columns. But I'm not sure how to assign each value to each new output column I created.
This will get all the data onto one row. You may have to make modifications for data types, number of columns, etc. This is a script component source. Don't forget to add your output columns with the proper data types.
string[] lines = System.IO.File.ReadAllLines(@"d:\Imports\Sample.txt");
//Declare cust info
string fname = null;
string lname = null;
string address = null;
int ctr = 0;
foreach (string line in lines)
{
ctr++;
switch (ctr)
{
case 1:
fname = line.Split(',')[1].Trim();
break;
case 2:
lname = line.Split(',')[1].Trim();
break;
case 3:
address = line.Split(',')[1].Trim();
break;
case 4: //skipped - adjust these cases to match where the order rows start in your files
break;
case 5: //skipped
break;
default: //data rows
string[] cols = line.Split(',');
//Output data
Output0Buffer.AddRow();
Output0Buffer.fname = fname;
Output0Buffer.lname = lname;
Output0Buffer.Address = address;
Output0Buffer.OrderNum = Int32.Parse(cols[0].ToString());
Output0Buffer.OrderDate = DateTime.Parse(cols[1].ToString());
Output0Buffer.OrderAmount = Decimal.Parse(cols[2].ToString());
break;
}
}
Here is your sample output:
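As a side note, a destination table for these flattened rows might look something like the following. This is only a sketch; the table name, column names, and types are assumptions based on the script above, so adjust them to your real target.
--Hypothetical destination table for the flattened rows produced by the
--script component above; names and types are assumptions.
CREATE TABLE dbo.CustomerOrders
(
     fname       VARCHAR(50)    NOT NULL
    ,lname       VARCHAR(50)    NOT NULL
    ,Address     VARCHAR(100)   NOT NULL
    ,OrderNum    INT            NOT NULL
    ,OrderDate   DATE           NOT NULL
    ,OrderAmount DECIMAL(18,2)  NOT NULL
);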
@KeerKolloft,
As promised, here's a T-SQL-only solution. The overall goal for me was to store the first section of data in one table and the second section in another in a "Normalized" form with a "CustomerID" being the common value between the two tables.
I also wanted to do a "full monty" demo complete with test files (I generate 10 of them in the code below).
The following bit of code creates the 10 test/demo files in a given path, which you'll probably need to change. This is NOT a part of the solution... we're just generating test files here. Please read the comments for more information.
/**********************************************************************************************************************
Purpose:
Create 10 files to demonstrate this problem with. Each file will contain random but constrained test data similar to
the following format specified by the OP.
Firstname, Andrew
Lastname, Smith
Address,1 new street
OrderNumber,OrderDate,OrderAmount
4,2020-04-04,100
3,2020-04-01,200
2,2020-03-25,100
1,2020-03-02,50
Each file name follows the pattern of "CustomerNNNN" where "NNNN" is the Left Zero Padded CustomerID. If that's not
right for your file names, you'll have to make a change in the code below where the file names get created.
The files for my test are stored in a folder called "D:\Temp\". Again, you will need to change that to suit yourself.
Each file will have the identical format where the first section will always have the same number of lines. The OP
specified that there will be 24 lines in the first section but I'm only using 3 for this demo.
The second section of each file will always have exactly the same format (including the column names) but the number
of lines containing the "CSV" data can vary (quite randomly) anywhere from just 1 line to as many as 200 lines.
***** PLEASE NOTE THAT THIS IS NOT A PART OF THE SOLUTION TO THE PROBLEM. WE'RE JUST CREATING TEST FILES HERE! *****
Revision History
Rev 00 - 08 May 2020 - Jeff Moden
- Initial Creation and Unit Test.
- Ref: https://stackoverflow.com/questions/61580198/ssis-import-csv-which-is-part-structured-part-unstructured
**********************************************************************************************************************/
--=====================================================================================================================
-- Create a table of names and addresses to be used to create section 1 of each file.
--=====================================================================================================================
--===== If the table already exists, drop it to make reruns in SSMS easier.
DROP TABLE IF EXISTS #Section1
;
--===== Create and populate the table on-the-fly.
SELECT names.FileNum
,unpvt.*
INTO #Section1
FROM (--===== I used the form just to make things easier to read/edit for testing.
VALUES
( 1 ,'Arlen' ,'Aki' ,'8990 Damarkus Street')
,( 2 ,'Landynn' ,'Sailer' ,'7053 Parish Street')
,( 3 ,'Kelso' ,'Aasha' ,'7374 Amra Street')
,( 4 ,'Drithi' ,'Layne' ,'36 Samer Street')
,( 5 ,'Lateef' ,'Kristel' ,'5888 Aarna Street')
,( 6 ,'Elisha' ,'Ximenna' ,'311 Jakel Street')
,( 7 ,'Aidy' ,'Phoenyx' ,'4607 Caralina Street')
,( 8 ,'Surie' ,'Bee' ,'5629 Legendary Street')
,( 9 ,'Braidyn' ,'Naava' ,'4553 Ellia Street')
,(10 ,'Korbin' ,'Kort' ,'1926 Julyana Street')
)names(FileNum,FirstName,LastName,Address)
CROSS APPLY
(--===== This creates 5 lines for each name to be used as the section 1 data for each file.
VALUES
( 1 ,'FirstName, ' + FirstName)
,( 2 ,'LastName, ' + LastName)
,( 3 ,'Address, ' + Address)
,( 4 ,'') -- Blank Line
,( 5 ,'OrderNumber,OrderDate,OrderAmount') --Next Section Line
)unpvt(SortOrder,SectionLine)
ORDER BY names.FileNum,unpvt.SortOrder
;
-- SELECT * FROM #Section1
;
--=====================================================================================================================
-- Build 1 file for each of the name/address combinations above.
-- Each file name is in the form of "FILEnnnn" where "nnnn" is the left zero padded file counter.
--=====================================================================================================================
--===== Preset the loop counter (gotta use a loop for this one because we can only create 1 file at a time here).
DECLARE @FileCounter INT = 1;
WHILE @FileCounter <= 10
BEGIN
--===== Start over with the table for section 2.
DROP TABLE IF EXISTS ##FileOutput
;
--===== Grab the section 1 data for this file and start the file output table with it.
SELECT SectionLine
INTO ##FileOutput
FROM #Section1
WHERE FileNum = @FileCounter
ORDER BY SortOrder
;
--===== Build section 2 data (OrderNumber in same order as OrderDate and then DESC by OrderNumber like the OP had it)
WITH cteSection2 AS
(--==== This will build anywhere from 1 to 200 random but constrained rows of data
SELECT TOP (ABS(CHECKSUM(NEWID())%200)+1)
OrderDate = CONVERT(CHAR(10), DATEADD(dd, ABS(CHECKSUM(NEWID())%DATEDIFF(dd,'2019','2020')) ,'2019') ,23)
,OrderAmount = ABS(CHECKSUM(NEWID())%999)+1
FROM sys.all_columns
)
INSERT INTO ##FileOutput
(SectionLine)
SELECT TOP 2000000000 --The TOP is necessary to get the SORT to work correctly here
SectionLine = CONCAT(ROW_NUMBER() OVER (ORDER BY OrderDate),',',OrderDate,',',OrderAmount)
FROM cteSection2
ORDER BY OrderDate DESC
;
--===== Create a file from the data we created in the ##FileOutput table.
-- Note that this over writes any files with the same name that already exist.
DECLARE @BCPCmd VARCHAR(256);
SELECT @BCPCmd = CONCAT('BCP "SELECT SectionLine FROM ##FileOutput" queryout "D:\Temp\Customer',RIGHT(@FileCounter+10000,4),'.txt" -c -T');
EXEC xp_CmdShell @BCPCmd
;
--===== Bump the counter for the next file
SELECT @FileCounter += 1
;
END
;
GO
Now, we could do what I used to do in the old days... we could use SQL Server to isolate the first and second sections and use xp_CmdShell to BCP them out to work files and simply re-import them. In fact, I'd likely still do that because it's a lot simpler and I've found a way to use xp_CmdShell in a very safe manner. Still, a lot of people get all puffed up about using it, so we won't do it that way.
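For completeness, here's roughly what that older approach would look like. This is only a sketch, not the solution used below; it assumes xp_CmdShell is enabled, a working folder of D:\Temp\, and that the order data starts after line 5 as in the test files.
--Rough sketch of the "old days" approach mentioned above (NOT the solution below).
--Step 1: load the file line-by-line into a numbered staging table, exactly as the
--        BULK INSERT / single-column view technique further down does (dbo.FileContent).
--Step 2: write just the order rows back out as a clean, all-CSV work file.
DECLARE @Cmd VARCHAR(512) =
    'BCP "SELECT LineContent FROM dbo.FileContent WHERE LineNumber > 5" queryout "D:\Temp\Section2.csv" -c -T';
EXEC xp_CmdShell @Cmd;
--Step 3: re-import D:\Temp\Section2.csv with an ordinary comma-delimited BULK INSERT
--        into a properly typed order table. Section 1 can be handled the same way.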
First, we'll need a string splitter. We can't use the bloody STRING_SPLIT() function that MS built in as of 2016 because it doesn't return the ordinal positions of the elements it splits out. The following string splitter (up to 8kB) is the fastest non-CLR T-SQL-only splitter you'll be able to find. Of course, it's also fully documented and contains two tests in the flower box to verify its operation.
CREATE FUNCTION [dbo].[DelimitedSplit8K]
/**********************************************************************************************************************
Purpose:
Split a given string at a given delimiter and return a list of the split elements (items).
Notes:
1. Leading and trailing delimiters are treated as if an empty string element were present.
2. Consecutive delimiters are treated as if an empty string element were present between them.
3. Except when spaces are used as a delimiter, all spaces present in each element are preserved.
Returns:
iTVF containing the following:
ItemNumber = Element position of Item as a BIGINT (not converted to INT to eliminate a CAST)
Item = Element value as a VARCHAR(8000)
Note that this function uses a binary collation and is, therefore, case sensitive.
The original article for the concept of this splitter may be found at the following URL. You can also find
performance tests at this link although they are now a bit out of date. This function is much faster as of Rev 09,
which was built specifically for use in SQL Server 2012 and above and is about twice as fast as the version
documented in the article.
http://www.sqlservercentral.com/Forums/Topic1101315-203-4.aspx
-----------------------------------------------------------------------------------------------------------------------
CROSS APPLY Usage Examples and Tests:
--=====================================================================================================================
-- TEST 1:
-- This tests for various possible conditions in a string using a comma as the delimiter. The expected results are
-- laid out in the comments
--=====================================================================================================================
--===== Conditionally drop the test tables to make reruns easier for testing.
-- (this is NOT a part of the solution)
IF OBJECT_ID('tempdb..#JBMTest') IS NOT NULL DROP TABLE #JBMTest
;
--===== Create and populate a test table on the fly (this is NOT a part of the solution).
-- In the following comments, "b" is a blank and "E" is an element in the left to right order.
-- Double Quotes are used to encapsulate the output of "Item" so that you can see that all blanks
-- are preserved no matter where they may appear.
SELECT *
INTO #JBMTest
FROM ( --# of returns & type of Return Row(s)
SELECT 0, NULL UNION ALL --1 NULL
SELECT 1, SPACE(0) UNION ALL --1 b (Empty String)
SELECT 2, SPACE(1) UNION ALL --1 b (1 space)
SELECT 3, SPACE(5) UNION ALL --1 b (5 spaces)
SELECT 4, ',' UNION ALL --2 b b (both are empty strings)
SELECT 5, '55555' UNION ALL --1 E
SELECT 6, ',55555' UNION ALL --2 b E
SELECT 7, ',55555,' UNION ALL --3 b E b
SELECT 8, '55555,' UNION ALL --2 b B
SELECT 9, '55555,1' UNION ALL --2 E E
SELECT 10, '1,55555' UNION ALL --2 E E
SELECT 11, '55555,4444,333,22,1' UNION ALL --5 E E E E E
SELECT 12, '55555,4444,,333,22,1' UNION ALL --6 E E b E E E
SELECT 13, ',55555,4444,,333,22,1,' UNION ALL --8 b E E b E E E b
SELECT 14, ',55555,4444,,,333,22,1,' UNION ALL --9 b E E b b E E E b
SELECT 15, ' 4444,55555 ' UNION ALL --2 E (w/Leading Space) E (w/Trailing Space)
SELECT 16, 'This,is,a,test.' UNION ALL --4 E E E E
SELECT 17, ',,,,,,' --7 (All Empty Strings)
) d (SomeID, SomeValue)
;
--===== Split the CSV column for the whole table using CROSS APPLY (this is the solution)
SELECT test.SomeID, test.SomeValue, split.ItemNumber, Item = QUOTENAME(split.Item,'"')
FROM #JBMTest test
CROSS APPLY dbo.DelimitedSplit8K(test.SomeValue,',') split
;
--=====================================================================================================================
-- TEST 2:
-- This tests for various "alpha" splits and COLLATION using all ASCII characters from 0 to 255 as a delimiter against
-- a given string. Note that not all of the delimiters will be visible and some will show up as tiny squares because
-- they are "control" characters. More specifically, this test will show you what happens to various non-accented
-- letters for your given collation depending on the delimiter you chose.
--=====================================================================================================================
WITH
cteBuildAllCharacters (String,Delimiter) AS
(
SELECT TOP 256
'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789',
CHAR(ROW_NUMBER() OVER (ORDER BY (SELECT NULL))-1)
FROM master.sys.all_columns
)
SELECT ASCII_Value = ASCII(c.Delimiter), c.Delimiter, split.ItemNumber, Item = QUOTENAME(split.Item,'"')
FROM cteBuildAllCharacters c
CROSS APPLY dbo.DelimitedSplit8K(c.String,c.Delimiter) split
ORDER BY ASCII_Value, split.ItemNumber
;
-----------------------------------------------------------------------------------------------------------------------
Other Notes:
1. Optimized for VARCHAR(8000) or less. No testing or error reporting for truncation at 8000 characters is done.
2. Optimized for single character delimiter. Multi-character delimiters should be resolved externally from this
function.
3. Optimized for use with CROSS APPLY.
4. Does not "trim" elements just in case leading or trailing blanks are intended.
5. If you don't know how a Tally table can be used to replace loops, please see the following...
http://www.sqlservercentral.com/articles/T-SQL/62867/
6. Changing this function to use a MAX datatype will cause it to run twice as slow. It's just the nature of
MAX datatypes whether it fits in-row or not.
-----------------------------------------------------------------------------------------------------------------------
Credits:
This code is the product of many people's efforts including but not limited to the folks listed in the Revision
History below:
I also thank whoever wrote the first article I ever saw on "numbers tables" which is located at the following URL
and to Adam Machanic for leading me to it many years ago. The link below no longer works but has been preserved here
for posterity's sake.
http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-numbers-table.html
The original article can be seen at the following special site, at least as of 29 Sep 2019.
http://web.archive.org/web/20150411042510/http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-numbers-table.html#
-----------------------------------------------------------------------------------------------------------------------
Revision History:
Rev 00 - 20 Jan 2010 - Concept for inline cteTally: Itzik-Ben Gan, Lynn Pettis and others.
Redaction/Implementation: Jeff Moden
- Base 10 redaction and reduction for CTE. (Total rewrite)
Rev 01 - 13 Mar 2010 - Jeff Moden
- Removed one additional concatenation and one subtraction from the SUBSTRING in the SELECT List for that tiny
bit of extra speed.
Rev 02 - 14 Apr 2010 - Jeff Moden
- No code changes. Added CROSS APPLY usage example to the header, some additional credits, and extra
documentation.
Rev 03 - 18 Apr 2010 - Jeff Moden
- No code changes. Added notes 7, 8, and 9 about certain "optimizations" that don't actually work for this
type of function.
Rev 04 - 29 Jun 2010 - Jeff Moden
- Added WITH SCHEMABINDING thanks to a note by Paul White. This prevents an unnecessary "Table Spool" when the
function is used in an UPDATE statement even though the function makes no external references.
Rev 05 - 02 Apr 2011 - Jeff Moden
- Rewritten for extreme performance improvement especially for larger strings approaching the 8K boundary and
for strings that have wider elements. The redaction of this code involved removing ALL concatenation of
delimiters, optimization of the maximum "N" value by using TOP instead of including it in the WHERE clause,
and the reduction of all previous calculations (thanks to the switch to a "zero based" cteTally) to just one
instance of one add and one instance of a subtract. The length calculation for the final element (not
followed by a delimiter) in the string to be split has been greatly simplified by using the ISNULL/NULLIF
combination to determine when the CHARINDEX returned a 0 which indicates there are no more delimiters to be
had or to start with. Depending on the width of the elements, this code is between 4 and 8 times faster on a
single CPU box than the original code especially near the 8K boundary.
- Modified comments to include more sanity checks on the usage example, etc.
- Removed "other" notes 8 and 9 as they were no longer applicable.
Rev 06 - 12 Apr 2011 - Jeff Moden
- Based on a suggestion by Ron "Bitbucket" McCullough, additional test rows were added to the sample code and
the code was changed to encapsulate the output in pipes so that spaces and empty strings could be perceived
in the output. The first "Notes" section was added. Finally, an extra test was added to the comments above.
Rev 07 - 06 May 2011 - Peter de Heer
- A further 15-20% performance enhancement has been discovered and incorporated into this code which also
eliminated the need for a "zero" position in the cteTally table.
Rev 08 - 24 Mar 2014 - Eirikur Eiriksson
- Further performance modification (twice as fast) For SQL Server 2012 and greater by using LEAD to find the
next delimiter for the current element, which eliminates the need for CHARINDEX, which eliminates the need
for a second scan of the string being split.
REF: https://www.sqlservercentral.com/articles/reaping-the-benefits-of-the-window-functions-in-t-sql-2
Rev 09 - 29 Sep 2019 - Jeff Moden
- Combine the improvements by Peter de Heer and Eirikur Eiriksson for use on SQL Server 2012 and above.
- Add Test 17 to the test code above.
- Modernize the generation of the embedded "Tally" table using syntax available as of 2012. There's no significant
performance increase but it makes the code much shorter and easier to understand.
- Check/change all URLs in the notes above to ensure that they're still viable.
- Add a binary collation for a bit more of an edge on performance.
- Removed "Other Note" #7 above as UNPIVOT is no longer applicable (never was for performance).
**********************************************************************************************************************/
--=========== Define I/O parameters
(@pString VARCHAR(8000), @pDelimiter CHAR(1))
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
--=========== "Inline" CTE Driven "Tally Table" produces values from 0 up to 10,000, enough to cover VARCHAR(8000).
WITH E1(N) AS (SELECT N FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))E0(N))
,E4(N) AS (SELECT 1 FROM E1 a, E1 b, E1 c, E1 d)
,cteTally(N) AS (--==== This provides the "base" CTE and limits the number of rows right up front
-- for both a performance gain and prevention of accidental "overruns"
SELECT TOP (ISNULL(DATALENGTH(@pString),0)) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
)
,cteStart(N1) AS (--==== This returns N+1 (starting position of each "element" just once for each delimiter)
SELECT 1 UNION ALL
SELECT t.N+1 FROM cteTally t WHERE SUBSTRING(@pString COLLATE Latin1_General_BIN,t.N,1)
= @pDelimiter COLLATE Latin1_General_BIN
)
--=========== Do the actual split.
-- The ISNULL/NULLIF combo handles the length for the final element when no delimiter is found.
SELECT ItemNumber = ROW_NUMBER() OVER (ORDER BY s.N1)
, Item = SUBSTRING(@pString,s.N1,ISNULL(NULLIF((LEAD(s.N1,1,1) OVER (ORDER BY s.N1)-1),0)-s.N1,8000))
FROM cteStart s
;
Once you've set up the splitter and built the test files, the following code demonstrates a nasty-fast method (still not as fast as if the file could simply be imported directly, though) of loading each file, parsing each section of the file, and loading each of the two sections into their respective normalized tables. The details are in the comments in the code.
Unfortunately, this forum won't allow for more than 30,000 characters and so I need to continue this in the next post down.
To continue with the rest of the code...
Look for the word "TODO" in this code to see where you'll need to make changes to handle your actual files. Like I said, the details are in the comments of the code.
As a bit of a sidebar, one of the advantages of doing these types of things in stored procedures is that it's a heck of a lot easier to copy stored procedures than it is to copy "SSIS packages" when the time comes (and it WILL come) to migrate to a new system.
/**********************************************************************************************************************
Purpose:
Import the files we created above to demonstrate one possible solution.
As a reminder, the files look like the following:
Firstname, Andrew
Lastname, Smith
Address,1 new street
OrderNumber,OrderDate,OrderAmount
4,2020-04-04,100
3,2020-04-01,200
2,2020-03-25,100
1,2020-03-02,50
Each file will have the identical format where the first section will always have the same number of lines. The OP
specified that there will be 24 lines in the first section but I'm only using 3 for this demo.
The second section of each file will always have exactly the same format (including the column names) but the number
of lines containing the "CSV" data can vary (quite randomly) anywhere from just 1 line to as many as 200 lines.
Note that the files this code looks for are in the file path of "D:\Temp\" and the file name pattern is "CustomerNNNN"
where the "NNNN" is the Left Zero Padded CustomerID. You need to change those if your stuff is different.
***** PLEASE NOTE THAT, UNLIKE THE PREVIOUS SCRIPT, THIS ONE IS A PART OF THE SOLUTION. THE TEST FILES WERE CREATED BY THE SCRIPT ABOVE. *****
Revision History
Rev 00 - 08 May 2020 - Jeff Moden
- Initial Creation and Unit Test.
- Ref: https://stackoverflow.com/questions/61580198/ssis-import-csv-which-is-part-structured-part-unstructured
**********************************************************************************************************************/
--=====================================================================================================================
-- CREATE THE NECESSARY TABLES
-- I'm using Temp Tables as both the working tables and the final target tables because I didn't want to take
-- a chance with accidentally dropping one of your tables.
--=====================================================================================================================
--===== This is where the customer information from the first section of all files will be stored.
-- It should probably be a permanent table.
DROP TABLE IF EXISTS #Customer;
CREATE TABLE #Customer
(
CustomerID INT NOT NULL
,FirstName VARCHAR(50) NOT NULL
,LastName VARCHAR(50) NOT NULL
,Address VARCHAR(50) NOT NULL
,CONSTRAINT PK_#Customer PRIMARY KEY CLUSTERED (CustomerID)
)
;
--===== This is where the order information from the second section of all files will be stored.
-- It should probably be a permanent table.
DROP TABLE IF EXISTS #CustomerOrder;
CREATE TABLE #CustomerOrder
(
CustomerID INT NOT NULL
,OrderNumber INT NOT NULL
,OrderDate DATE NOT NULL
,OrderAmount INT NOT NULL
,CONSTRAINT PK_#CustomerOrder PRIMARY KEY CLUSTERED (CustomerID,OrderNumber)
)
;
--===== We'll store all file names in this table.
-- It should probably continue to be a Temp Table.
DROP TABLE IF EXISTS #DirTree;
CREATE TABLE #DirTree
(
FileName VARCHAR(500) PRIMARY KEY CLUSTERED
,Depth INT
,IsFile BIT
)
;
--===== This is where the filtered list of files we want to work with will be stored.
-- It should probably continue to be a Temp Table.
DROP TABLE IF EXISTS #FileControl;
CREATE TABLE #FileControl
(
FileControlID INT IDENTITY(1,1) PRIMARY KEY CLUSTERED
,FileName VARCHAR(500) NOT NULL
,CustomerID AS CONVERT(INT,LEFT(RIGHT(FileName,8),4))
)
;
--===== This is where we'll temporarily import files to be worked on one at a time.
-- Ironically, this needs to be a non-temporary table because we need to create
-- a view on it to avoid needing a BCP Format File to skip the LineNumber column
-- during the "flat" import.
DROP TABLE IF EXISTS dbo.FileContent;
CREATE TABLE dbo.FileContent
(
LineNumber INT IDENTITY(1,1)
,LineContent VARCHAR(100)
)
;
--===== This is the view that we'll actually import to and it will target the table above.
-- It replaces a BCP Format File to skip the LineNumber column in the target table.
-- It's being created using Dynamic SQL to avoid the use of "GO".
DROP VIEW IF EXISTS dbo.vFileContent;
EXEC ('CREATE VIEW dbo.vFileContent AS SELECT LineContent FROM dbo.FileContent')
;
--=====================================================================================================================
-- Find the files we want to load.
-- The xp_DirTree command does not allow for wild cards and so we have to load all file and directory names that
are in @FilePath and then filter and copy just the ones we want to a file control table.
--=====================================================================================================================
--===== Local variables populated in this section
DECLARE @FilePath VARCHAR(500) = 'D:\Temp\' --TODO Change this if you need to.
,@FileCount INT
;
--===== Load all names in the @FilePath whether they are file names or directory names.
INSERT INTO #DirTree WITH (TABLOCK)
(FileName, Depth, IsFile)
EXEC xp_DirTree @FilePath,1,1
;
--===== Filter the names of files that we want and load them into a numbered control table to step through the files later.
INSERT INTO #FileControl
(FileName)
SELECT FileName
FROM #DirTree
WHERE FileName LIKE 'Customer[0-9][0-9][0-9][0-9].txt' --TODO you will likely need to change this pattern for file names.
AND IsFile = 1
ORDER BY FileName --Just to help keep track.
;
--===== Remember the number of file names we loaded for the upcoming control loop.
SELECT @FileCount = @@ROWCOUNT
;
--SELECT * FROM #FileControl;
--=====================================================================================================================
-- This loop is the "control" loop that loads each file one at a time and parses the information out of section 1
-- and section 2 of the file and stores the data in the respective tables.
--=====================================================================================================================
--===== Define the local variables populated in this section.
DECLARE @Counter INT = 1
,@Section1LastLine INT = 3 --TODO you'll need to change this to 24 according to your specs on the real files.
,@Section2FirstLine INT = 5 --TODO you'll also need to change this but I don't know what it will be for you.
;
--===== Setup the loop counter
WHILE @Counter <= @FileCount
BEGIN
--===== These are variables that are used within this loop.
-- No... this doesn't create an error and they're really handy when trying to troubleshoot.
DECLARE @FileName VARCHAR(500)
,@CustomerID INT
,@SQL VARCHAR(8000)
;
--===== This gets the next file from the file control table according to @Counter.
-- TODO... you might have to change where you get the CustomerID from.
-- I'm getting it from the "patterned" file names in this case because I had nothing else to go on
-- in your description of the problem.
SELECT @FileName = CONCAT(@FilePath,FileName)
,@CustomerID = CustomerID
FROM #FileControl
WHERE FileControlID = @Counter -- SELECT * FROM #FileControl
;
--===== Clear the guns to get ready to load and work on a new file.
TRUNCATE TABLE dbo.FileContent
;
--===== Calculate the BULK INSERT command we need to load the given file.
SELECT @SQL = '
BULK INSERT dbo.vFileContent
FROM '+QUOTENAME(@FileName,'''')+'
WITH (
BATCHSIZE = 2000000000 --Import everything in one shot for performance/potential minimal logging.
,CODEPAGE = ''RAW'' --Ignore any code pages.
,DATAFILETYPE = ''char'' --This is NOT a unicode file. It''s ANSI text.
,FIELDTERMINATOR = '','' --The delimiter between the fields in the file.
,ROWTERMINATOR = ''\n'' --The rows were not generated on a Windows box so only "LineFeed" is used.
,KEEPNULLS --Adjacent delimiters will create NULLs rather than blanks.
,TABLOCK --Allows for "minimal logging" when possible (and it is for this import)
)
;'
--PRINT @SQL
EXEC (@SQL)
;
--===== Read Section 1 (customer information)
-- This builds the dynamic SQL to parse and store the customer information in section 1.
SELECT @SQL = CONCAT('INSERT INTO #Customer',CHAR(10),'(CustomerID');
SELECT @SQL += CONCAT(',',SUBSTRING(LineContent,1,CHARINDEX(',',LineContent)-1))
FROM dbo.FileContent
WHERE LineNumber <= @Section1LastLine;
SELECT @SQL += CONCAT(')',CHAR(10),'SELECT',CHAR(10));
SELECT @SQL += CONCAT(' CustomerID=',@CustomerID,CHAR(10));
SELECT @SQL += CONCAT(',',SUBSTRING(LineContent,1,CHARINDEX(',',LineContent)-1),'='
,QUOTENAME(LTRIM(RTRIM(SUBSTRING(LineContent,CHARINDEX(',',LineContent)+1,50))),'''')
,CHAR(10)
)
FROM dbo.FileContent
WHERE LineNumber <= @Section1LastLine
;
EXEC (@SQL)
;
--===== This parses and stores the information from section 2.
-- Since you said the order of the columns never changes, I hard-coded the results for performance
-- using an ancient "Black Arts" form of code known as a "CROSSTAB", which pivots the data result
-- from the splitter faster than PIVOT usually does and also allows exquisite control in the code.
INSERT INTO #CustomerOrder
(OrderNumber,CustomerID,OrderDate,OrderAmount)
SELECT OrderNumber = MAX(CASE WHEN split.ItemNumber = 1 THEN Item ELSE -1 END)
,CustomerID = @CustomerID
,OrderDate = MAX(CASE WHEN split.ItemNumber = 2 THEN Item ELSE '1753' END)
,OrderAmount = MAX(CASE WHEN split.ItemNumber = 3 THEN Item ELSE -1 END)
FROM dbo.FileContent fc
CROSS APPLY dbo.DelimitedSplit8K(fc.LineContent,',') split
WHERE LineNumber > @Section2FirstLine
GROUP BY LineNumber
;
--===== Bump the counter
SELECT @Counter += 1
;
END
;
--===== All done. Display the results of the two tables we populated from all 10 files.
SELECT * FROM #Customer;
SELECT * FROM #CustomerOrder;

Expression to find multiple spaces in string

We handle a lot of sensitive data, and I would like to mask passenger names using only the first and last letter of each name part, joining these with three asterisks (***).
For example: the name 'John Doe' will become 'J***n D***e'
For a name that consists of two parts this is doable by finding the space using the expression:
LEFT(PassengerName, 1) +
'***' +
CASE WHEN CHARINDEX(' ', PassengerName) = 0
THEN RIGHT(PassengerName, 1)
ELSE SUBSTRING(PassengerName, CHARINDEX(' ', PassengerName) -1, 1) +
' ' +
SUBSTRING(PassengerName, CHARINDEX(' ', PassengerName) +1, 1) +
'***' +
RIGHT(PassengerName, 1)
END
However, the passenger name can have more than two parts; there is no real limit to it. How can I find the indices of all spaces within an expression? Or should I maybe tackle this problem in a different way?
Any help or pointer is much appreciated!
This solution does what you want it to, but is really the wrong approach to use when trying to hide personally identifiable data, as per Gordon's explanation in his answer.
SQL:
declare @t table(n nvarchar(20));
insert into @t values('John Doe')
,('JohnDoe')
,('John Doe Two')
,('John Doe Two Three')
,('John O''Neill');
select n
,stuff((select ' ' + left(s.item,1) + '***' + right(s.item,1)
from dbo.fn_StringSplit4k(t.n,' ',null) as s
for xml path('')
),1,1,''
) as mask
from @t as t;
Output:
+--------------------+-------------------------+
| n | mask |
+--------------------+-------------------------+
| John Doe | J***n D***e |
| JohnDoe | J***e |
| John Doe Two | J***n D***e T***o |
| John Doe Two Three | J***n D***e T***o T***e |
| John O'Neill | J***n O***l |
+--------------------+-------------------------+
String splitting function based on Jeff Moden's Tally Table approach:
create function [dbo].[fn_StringSplit4k]
(
@str nvarchar(4000) = ' ' -- String to split.
,@delimiter as nvarchar(1) = ',' -- Delimiting value to split on.
,@num as int = null -- Which value to return, null returns all.
)
returns table
as
return
-- Start tally table with 10 rows.
with n(n) as (select 1 union all select 1 union all select 1 union all select 1 union all select 1 union all select 1 union all select 1 union all select 1 union all select 1 union all select 1)
-- Select the same number of rows as characters in @str as incremental row numbers.
-- Cross joins increase exponentially to a max possible 10,000 rows to cover largest @str length.
,t(t) as (select top (select len(isnull(@str,'')) a) row_number() over (order by (select null)) from n n1,n n2,n n3,n n4)
-- Return the position of every value that follows the specified delimiter.
,s(s) as (select 1 union all select t+1 from t where substring(isnull(@str,''),t,1) = @delimiter)
-- Return the start and length of every value, to use in the SUBSTRING function.
-- ISNULL/NULLIF combo handles the last value where there is no delimiter at the end of the string.
,l(s,l) as (select s,isnull(nullif(charindex(@delimiter,isnull(@str,''),s),0)-s,4000) from s)
select rn
,item
from(select row_number() over(order by s) as rn
,substring(@str,s,l) as item
from l
) a
where rn = @num
or @num is null;
GO
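For reference, a quick standalone call to the splitter looks like this (a sketch; it assumes the function above has been created):
--Splits on spaces and returns every part with its ordinal position.
SELECT rn, item
FROM dbo.fn_StringSplit4k('John Doe Two', ' ', NULL);
--rn  item
--1   John
--2   Doe
--3   Two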
If you consider PassengerName as sensitive information, then you should not be storing it in clear text in generally accessible tables. Period.
There are several different options.
One is to have reference tables for sensitive information. Any table that references this would have an id rather than the name. Voila. No sensitive information is available without access to the reference table, and that would be severely restricted.
A second method is a reversible encryption algorithm. This would allow the stored value to be gibberish, but with the right knowledge, it could be transformed back into a meaningful value. Typical methods for this are the public key encryption algorithms devised by Rivest, Shamir, and Adleman (RSA encoding).
If you want to do first and last letters of names, I would be really careful about Asian names. Many of them consist of two or three letters when written in Latin script. That isn't much hiding. SQL Server does not have simple mechanisms to do this. You could write a user-defined function with a loop to manage the process. However, I view this as the least secure and least desirable approach.
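To illustrate the first option, a minimal sketch might look like the following. Every table, column, and role name here is hypothetical:
--Minimal sketch of the reference-table idea; all names below are hypothetical.
CREATE TABLE dbo.Passenger
(
     PassengerID   INT IDENTITY(1,1) PRIMARY KEY
    ,PassengerName NVARCHAR(200) NOT NULL
);
--Transactional tables carry only the surrogate key, never the name...
CREATE TABLE dbo.Purchase
(
     PurchaseID  INT IDENTITY(1,1) PRIMARY KEY
    ,PassengerID INT NOT NULL REFERENCES dbo.Passenger (PassengerID)
    ,Amount      DECIMAL(18,2) NOT NULL
);
--...so only principals explicitly granted SELECT on dbo.Passenger ever see names.
DENY SELECT ON dbo.Passenger TO ReportingRole; --ReportingRole is a hypothetical role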
This uses Jeff Moden's DelimitedSplit8K, as well as the new functionality in SQL Server 2017 STRING_AGG. As I don't know what version you're using, I've just gone "whole hog" and assumed you're using the latest version.
Jeff's function is invaluable here, as it returns the ordinal position, something which Microsoft have foolishly omitted from their own function, STRING_SPLIT (and didn't add in 2017 either). Ordinal position is key here, so we can't make use of the built in function.
WITH VTE AS(
SELECT *
FROM (VALUES ('John Doe'),('Jane Bloggs'),('Edgar Allan Poe'),('Mr George W. Bush'),('Homer J Simpson')) V(FullName)),
Masking AS (
SELECT *,
ISNULL(STUFF(Item, 2, LEN(item) -2,'***'), Item) AS MaskedPart
FROM VTE V
CROSS APPLY dbo.delimitedSplit8K(V.Fullname, ' '))
SELECT STRING_AGG(MaskedPart,' ') AS MaskedFullName
FROM Masking
GROUP BY Fullname;
Edit: Never mind, the OP has commented that they are using 2008, so STRING_AGG is out of the question. @iamdave, however, has posted an answer which is very similar to my own; it just does it the "old fashioned XML way".
Depending on your version of SQL Server, you may be able to use the built-in STRING_SPLIT function (SQL Server 2016 and later) to split the name to rows on spaces, do your string formatting, and then roll back up to name level using an XML path.
create table dataset (id int identity(1,1), name varchar(50));
insert into dataset (name) values
('John Smith'),
('Edgar Allen Poe'),
('One Two Three Four');
with split as (
select id, cs.Value as Name
from dataset
cross apply STRING_SPLIT (name, ' ') cs
),
formatted as (
select
id,
name,
left(name, 1) + '***' + right(name, 1) as out
from split
)
SELECT
id,
(SELECT ' ' + out
FROM formatted b
WHERE a.id = b.id
FOR XML PATH('')) [out_name]
FROM formatted a
GROUP BY id
Result:
id out_name
1 J***n S***h
2 E***r A***n P***e
3 O***e T***o T***e F***r
You can do that using this function.
create function [dbo].[fnMaskName] (@var_name varchar(100))
RETURNS varchar(100)
WITH EXECUTE AS CALLER
AS
BEGIN
    declare @var_part varchar(100)
    declare @var_return varchar(100)
    declare @n_position smallint
    set @var_return = ''
    set @n_position = 1
    WHILE @n_position <> 0
    BEGIN
        SET @n_position = CHARINDEX(' ', @var_name)
        IF @n_position = 0
            SET @n_position = LEN(@var_name)
        SET @var_part = SUBSTRING(@var_name, 1, @n_position)
        SET @var_name = SUBSTRING(@var_name, @n_position+1, LEN(@var_name))
        if @var_part <> ''
            SET @var_return = @var_return + stuff(@var_part, 2, len(@var_part)-2, replicate('*',len(@var_part)-2)) + ' '
    END
    RETURN(@var_return)
END
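A quick call to try it out (a sketch; note that this version replaces the hidden characters with one asterisk each rather than a fixed '***'):
--Sample calls; the number of asterisks follows the length of each name part.
SELECT dbo.fnMaskName('John Doe')        AS Masked1
      ,dbo.fnMaskName('Edgar Allan Poe') AS Masked2;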

Split Single Column into multiple and Load it to a Table or a View

I'm using SQL Server 2008. I have a source table with a few columns (A, B) containing string data that needs to be split into multiple columns. I already have a function that does the split.
The data from the Source table (whose format cannot be modified) is used in a View being created. But I need my View to contain the already-split data for columns A and B from the Source table. So, my view will have extra columns that are not in the Source table.
Then the View populated from the Source table is used to merge with the Other Table.
There are two questions here:
Can I split column A and B from the Source table when creating a View, but do not change the Source Table?
How to use my existing User Defined Function in the View "Select" statement to accomplish this task?
Idea in short:
The string to split is also shown in the example in the commented-out section. Pretty much, I have a Destination table, a vStandardizedData View, and an SP that uses the View data to merge into the tblStandardizedData table. So, in my Source column I have columns A and B that I need to split before loading to the tblStandardizedData table.
There are five objects that I'm working on:
Source File
Destination Table
vStandardizedData View
tblStandardizedData table
Stored procedure that does merge
(Update and Insert) form the vStandardizedData View.
Note: all 5 objects are listed in the order they are supposed to be created and loaded.
Separately from this, there is an existing user-defined function that can split the string, which I was told to use.
Example of the string in column A (column B has the same format data) to be split:
6667 Mission Street, 4567 7rd Street, 65 Sully Pond Park
Desired result:
User-defined function returns a table variable:
CREATE FUNCTION [Schema].[udfStringDelimeterfromTable]
(
@sInputList VARCHAR(MAX) -- List of delimited items
, @Delimiter CHAR(1) = ',' -- delimiter that separates items
)
RETURNS @List TABLE (Item VARCHAR(MAX)) WITH SCHEMABINDING
/*
* Returns a table of strings that have been split by a delimiter.
* Similar to the Visual Basic (or VBA) SPLIT function. The
* strings are trimmed before being returned. Null items are not
* returned so if there are multiple separators between items,
* only the non-null items are returned.
* Space is not a valid delimiter.
*
* Example:
SELECT * FROM [Schema].[udfStringDelimeterfromTable]('abcd,123, 456, efh,,hi', ',')
*
* Test:
DECLARE @Count INT, @Delim CHAR(10), @Input VARCHAR(128)
SELECT @Count = Count(*)
FROM [Schema].[udfStringDelimeterfromTable]('abcd,123, 456', ',')
PRINT 'TEST 1 3 lines:' + CASE WHEN @Count=3
THEN 'Worked' ELSE 'ERROR' END
SELECT @Delim=CHAR(10)
, @Input = 'Line 1' + @Delim + 'line 2' + @Delim
SELECT @Count = Count(*)
FROM [Schema].[udfStringDelimeterfromTable](@Input, @Delim)
PRINT 'TEST 2 LF :' + CASE WHEN @Count=2
THEN 'Worked' ELSE 'ERROR' END
What I'd ask you is to read this: How to create a Minimal, Complete, and Verifiable example.
In general: if you use your UDF, you'll get table-wise data. It would be best if your UDF returned each item together with a running number. Otherwise you'll first need to use ROW_NUMBER() OVER(...) to create a part number in order to create your target column names via string concatenation. Then use PIVOT to get the columns side-by-side.
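To make that concrete, a rough sketch of the ROW_NUMBER plus PIVOT route for column A could look like this. The source table dbo.SourceTable(ID, A, B) and the view name are assumptions, and because your UDF does not return an ordinal, the order of the parts is not guaranteed:
--A sketch only: number the split items, then PIVOT them side-by-side.
--dbo.SourceTable(ID, A, B) is an assumed table; adjust to your schema.
CREATE VIEW dbo.vStandardizedData
AS
WITH SplitA AS
(
    SELECT s.ID
          ,Item    = LTRIM(RTRIM(f.Item))
          ,PartNum = ROW_NUMBER() OVER (PARTITION BY s.ID ORDER BY (SELECT NULL)) --order not guaranteed!
    FROM dbo.SourceTable AS s
    CROSS APPLY [Schema].[udfStringDelimeterfromTable](s.A, ',') AS f
)
SELECT ID
      ,[1] AS Address1, [2] AS Address2, [3] AS Address3, [4] AS Address4, [5] AS Address5
FROM SplitA
PIVOT (MAX(Item) FOR PartNum IN ([1],[2],[3],[4],[5])) AS p;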
An easier approach could be a string split via XML like in this answer
A quick proof of concept to show the principles:
DECLARE @tbl TABLE(ID INT,YourValues VARCHAR(100));
INSERT INTO @tbl VALUES
(1,'6667 Mission Street, 4567 7rd Street, 65 Sully Pond Park')
,(2,'Other addr1, one more addr, and another one, and even one more');
WITH Casted AS
(
SELECT *
,CAST('<x>' + REPLACE(YourValues,',','</x><x>') + '</x>' AS XML) AS AsXml
FROM @tbl
)
SELECT *
,LTRIM(RTRIM(AsXml.value('/x[1]','nvarchar(max)'))) AS Address1
,LTRIM(RTRIM(AsXml.value('/x[2]','nvarchar(max)'))) AS Address2
,LTRIM(RTRIM(AsXml.value('/x[3]','nvarchar(max)'))) AS Address3
,LTRIM(RTRIM(AsXml.value('/x[4]','nvarchar(max)'))) AS Address4
,LTRIM(RTRIM(AsXml.value('/x[5]','nvarchar(max)'))) AS Address5
FROM Casted
If your values might include forbidden characters (especially <,> and &) you can find an approach to deal with this in the linked answer.
The result
+----+---------------------+-----------------+--------------------+-------------------+----------+
| ID | Address1 | Address2 | Address3 | Address4 | Address5 |
+----+---------------------+-----------------+--------------------+-------------------+----------+
| 1 | 6667 Mission Street | 4567 7rd Street | 65 Sully Pond Park | NULL | NULL |
+----+---------------------+-----------------+--------------------+-------------------+----------+
| 2 | Other addr1 | one more addr | and another one | and even one more | NULL |
+----+---------------------+-----------------+--------------------+-------------------+----------+
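If your values can contain those forbidden characters, one variation (a sketch, re-using the @tbl table variable above and borrowing the FOR XML PATH trick used in the normalization answer earlier on this page) is to let the engine entitize each value before wrapping it in <x> tags:
--Variation of the Casted CTE above that survives <, > and & in the data.
--The inner SELECT ... FOR XML PATH('') escapes the value; .value() un-escapes it again.
WITH Casted AS
(
    SELECT *
          ,CAST('<x>' + REPLACE((SELECT YourValues AS [*] FOR XML PATH('')),',','</x><x>') + '</x>' AS XML) AS AsXml
    FROM @tbl
)
SELECT *
      ,LTRIM(RTRIM(AsXml.value('/x[1]','nvarchar(max)'))) AS Address1
      ,LTRIM(RTRIM(AsXml.value('/x[2]','nvarchar(max)'))) AS Address2
FROM Casted;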

Any reason why I shouldn't use "between X and Y" on a varchar field in SQL to return a number?

I've got an indexed (but not unique) varchar field of Employee IDs in a table, and in a query I need to return rows that are exactly 4 numerical characters but also over 1000.
I've found various questions on here about using validation methods to check that the field contains 0-9 characters, or doesn't contain a-z characters etc, but these are unrelated to this question.
Background:
I've got a table with various values, sample set as follows:
EmployeeID
----------
6745
EMP1
EMP2
1874
LTST
5694
0014
What I would like to do is return all values except EMP1, EMP2, LTST and 0014.
My question is: are there any reasons why I shouldn't use a WHERE clause like where EmployeeID between '1000' and '9999'? I ask because EmployeeID is a varchar column.
If I can do this, should I also Order By employee ID, or does this not matter?
I believe "0014" would be left out of the where clause between '1000' and '9999', so that's a reason. Perhaps between '0000' and '9999' would suit your purposes better. Just remember that you're still sorting based on text. If you have any entries like "1_99", this would also show up in your query results with your given between clause.
If you're looking to only return 4-character numbers excluding leading zeroes, then the following addition should suffice:
WHERE EmployeeID BETWEEN '1000' AND '9999' AND TRY_CAST(EmployeeID As int) IS NOT NULL
...or, more intuitively:
WHERE TRY_CAST(EmployeeID As int) BETWEEN 1000 AND 9999
Run the following code as an example and you'll see that SQL Server doesn't treat INT the same as integers stored as VARCHAR:
WITH IntsAsVars
AS (
SELECT var = '1000',
int = 1000
UNION ALL
SELECT var = '100',
int = 100
UNION ALL
SELECT var = '9999',
int = 9999
UNION ALL
SELECT var = '99',
int = 99
UNION ALL
SELECT var = '750',
int = 750
UNION ALL
SELECT var = '10',
int = 10
UNION ALL
SELECT var = '2',
int = 2
)
SELECT *
FROM IntsAsVars
--WHERE var BETWEEN '2' AND '750'
/* should return 2, 10, 99, 100 & 750 if it works like INT
but does it? */
ORDER BY
--var ASC,
int ASC;
Running it without the WHERE clause returns every row in the order of the int column. Add the WHERE clause back in and you'll see that SQL Server doesn't consider the other records to be between '2' and '750' when they are stored as varchar.
If your real data is exactly like the sample data in regard to the non-numeric values beginning with a letter, you could use your query to achieve the desired result.
However, be aware of the sort order of the data. If you have an EmployeeId of 1ABC, it will be included in the data returned by WHERE EmployeeID BETWEEN '1000' AND '9999'!
Your approach is not suitable to filter out non-numeric values!
An additional ORDER BY affects the order of the results only; it has no effect on the evaluation of the WHERE condition.
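A tiny check makes the point (a sketch you can run as-is):
--A value such as '1ABC' sorts inside the string range '1000'..'9999',
--so BETWEEN alone will not filter out non-numeric values.
SELECT CASE WHEN '1ABC' BETWEEN '1000' AND '9999'
            THEN 'included' ELSE 'excluded' END AS BetweenCheck;
--Returns 'included'.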
I'd say the simplest way is to use like:
select * from yourtable
where EmployeeID like '[1-9][0-9][0-9][0-9]'
Let's say you have this input:
IF OBJECT_ID('tempdb..#test') IS NOT NULL
DROP TABLE #test
CREATE TABLE #test
(
EmployeeID VARCHAR(255)
)
CREATE CLUSTERED INDEX CIX_test_EmployeeID ON #test(EmployeeID)
INSERT INTO #test
VALUES
('6745'),
('EMP1'),
('EMP2'),
('1874'),
('LTST'),
('5694'),
('1000'),
('9999'),
('10L'),
('187'),
('9X9'),
('7est'),
('1ok'),
('0_o'),
('0014');
Your statement would also return '1ok','187', '10L' and so on.
Since you mentioned that your employeeID has a fixed length, you could use something like this:
SELECT *
FROM #test
WHERE EmployeeID LIKE '[1-9][0-9][0-9][0-9]'

Return Multi Row DataSet as Single Row CSV without Temp Table

I'm doing some reporting against a silly database and I have to do
SELECT [DESC] as 'Description'
FROM dbo.tbl_custom_code_10 a
INNER JOIN dbo.Respondent b ON CHARINDEX(',' + a.code + ',', ',' + b.CC10) > 0
WHERE recordid = 116
Which Returns Multiple Rows
Palm
Compaq
Blackberry
Edit:
The schema is as follows.
Respondent table (at a glance):
recordid  lname  fname  address  CC10    CC11    CC12    CC13
116       Smith  John   Street   1,4,5,  1,3,4,  1,2,3,  NULL
Tbl_Custom_Code10:
code  desc
0     None
1     Palm
10    Samsung
11    Treo
12    HTC
13    Nokia
14    LG
15    HP
16    Dash
Result set will always be 1 row, so John Smith: | 646-465-4566 | Has a Blackberry, Palm, Compaq | Likes: Walks on the beach, Rainbows, Saxophone
However I need to be able to use this within another query ... like
Select b.Name, c.Number, d.MulitLineCrap FROM Tables
How can I go about this? Thanks in advance.
BTW, I could also do it in LINQ if anybody has any ideas.
Here is one way to make a comma-separated list based on a query (just replace the query inside the first WITH block). Now, how that joins up with your query against b and c, I have no idea. You'll need to supply a more complete question - including specifics on how many rows come back from the second query and whether "MultilineCrap" is the same for each of those rows or if it depends on data in b/c.
;WITH x([DESC]) AS
(
SELECT d FROM (VALUES('Palm'),('Compaq'),('Blackberry')) AS x(d)
)
SELECT STUFF((SELECT ',' + [DESC]
FROM x
FOR XML PATH(''), TYPE).value(N'./text()[1]', N'varchar(max)'),1,1,'');
EDIT
Given the new requirements, perhaps this is the best way:
CREATE FUNCTION dbo.GetMultiLineCrap
(
@s VARCHAR(MAX)
)
RETURNS VARCHAR(MAX)
AS
BEGIN
DECLARE @x VARCHAR(MAX) = '';
SELECT @x += ',' + [desc]
FROM dbo.tbl_custom_code_10
WHERE ',' + @s LIKE '%,' + RTRIM(code) + ',%';
RETURN (SELECT STUFF(@x, 1, 1, ''));
END
GO
SELECT r.LName, r.FName, MultilineCrap = dbo.GetMultiLineCrap(r.CC10)
FROM dbo.Respondent AS r
WHERE recordid = 116;
Please use aliases that make a little bit of sense, instead of just serially applying a, b, c, etc. Your queries will be easier to read, I promise.
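For example, the query from the top of the question with more meaningful aliases might read like this (a sketch; nothing changes except the names):
--Same query as in the question, just with readable aliases.
SELECT cc.[DESC] AS Description
FROM dbo.tbl_custom_code_10 AS cc
INNER JOIN dbo.Respondent AS resp
        ON CHARINDEX(',' + cc.code + ',', ',' + resp.CC10) > 0
WHERE resp.recordid = 116;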
