I have a CSV file that is in the following format:
Firstname, Andrew
Lastname, Smith
Address,1 new street
OrderNumber,OrderDate,OrderAmount
4,2020-04-04,100
3,2020-04-01,200
2,2020-03-25,100
1,2020-03-02,50
I need to import this using SSIS into SQL Server 2016.
I know how to get the second part of the data in (just skip n number of rows; the files are all consistent).
But I also need some of the data from the first part of the file. There are two things I'm not sure how to do:
how to obtain the data when it's in the format column1=label, column2=data
how to parse through the file so that I can obtain both the customer data and the order data in one go. There are some 50k files to go through, so I'd prefer to avoid running through them twice.
Do I have to bite the bullet and iterate through the files twice? And if so, how would you parse the data so that I get the column names and values ready for import into a SQL table?
I thought perhaps the best way would be a script task, creating a number of output columns. But I'm not sure how to assign each value to each new output column I created.
This will get all of the data onto one row per order. You may have to make modifications to data types, the number of columns, etc. This is a script component source; don't forget to add your output columns with the proper data types.
public override void CreateNewOutputRows()
{
    // Read the whole file; these files are small, so ReadAllLines is fine.
    string[] lines = System.IO.File.ReadAllLines(@"d:\Imports\Sample.txt");

    // Declare cust info captured from the header section.
    string fname = null;
    string lname = null;
    string address = null;

    int ctr = 0;
    foreach (string line in lines)
    {
        ctr++;
        switch (ctr)
        {
            case 1: // Firstname, <value>
                fname = line.Split(',')[1].Trim();
                break;
            case 2: // Lastname, <value>
                lname = line.Split(',')[1].Trim();
                break;
            case 3: // Address, <value>
                address = line.Split(',')[1].Trim();
                break;
            case 4: // header line we don't need - skip
                break;
            case 5: // header line we don't need - skip
                break;
            default: // data rows
                string[] cols = line.Split(',');

                // Output data: one row per order, with the customer values repeated.
                Output0Buffer.AddRow();
                Output0Buffer.fname = fname;
                Output0Buffer.lname = lname;
                Output0Buffer.Address = address;
                Output0Buffer.OrderNum = Int32.Parse(cols[0]);
                Output0Buffer.OrderDate = DateTime.Parse(cols[1]);
                Output0Buffer.OrderAmount = Decimal.Parse(cols[2]);
                break;
        }
    }
}
Here is your sample output (the customer columns repeated on every order row):
@KeerKolloft,
As promised, here's a T-SQL-only solution. The overall goal for me was to store the first section of data in one table and the second section in another in a "Normalized" form with a "CustomerID" being the common value between the two tables.
I also wanted to do a "full monty" demo complete with test files (I generate 10 of them in the code below).
The following bit of code creates the 10 test/demo files in a given path, which you'll probably need to change. This is NOT a part of the solution... we're just generating test files here. Please read the comments for more information.
/**********************************************************************************************************************
Purpose:
Create 10 files to demonstrate this problem with. Each file will contain random but constrained test data similar to
the following format specified by the OP.
Firstname, Andrew
Lastname, Smith
Address,1 new street
OrderNumber,OrderDate,OrderAmount
4,2020-04-04,100
3,2020-04-01,200
2,2020-03-25,100
1,2020-03-02,50
Each file name follows the pattern of "CustomerNNNN" where "NNNN" is the Left Zero Padded CustomerID. If that's not
right for your file names, you'll have to make a change in the code below where the file names get created.
The files for my test are stored in a folder called "D:\Temp\". Again, you will need to change that to suit yourself.
Each file will have the identical format where the first section will always have the same number of lines. The OP
specified that there will be 24 lines in the first section but I'm only using 3 for this demo.
The second section of each file will always have exactly the same format (including the column names) but the number
of lines containing the "CSV" data can vary (quite randomly) anywhere from just 1 line to as many as 200 lines.
***** PLEASE NOTE THAT THIS IS NOT A PART OF THE SOLUTION TO THE PROBLEM. WE'RE JUST CREATING TEST FILES HERE! *****
Revision History
Rev 00 - 08 May 2020 - Jeff Moden
- Initial Creation and Unit Test.
- Ref: https://stackoverflow.com/questions/61580198/ssis-import-csv-which-is-part-structured-part-unstructured
**********************************************************************************************************************/
--=====================================================================================================================
-- Create a table of names and addresses to be used to create section 1 of each file.
--=====================================================================================================================
--===== If the table already exists, drop it to make reruns in SSMS easier.
DROP TABLE IF EXISTS #Section1
;
--===== Create and populate the table on-the-fly.
SELECT names.FileNum
,unpvt.*
INTO #Section1
FROM (--===== I used the form just to make things easier to read/edit for testing.
VALUES
( 1 ,'Arlen' ,'Aki' ,'8990 Damarkus Street')
,( 2 ,'Landynn' ,'Sailer' ,'7053 Parish Street')
,( 3 ,'Kelso' ,'Aasha' ,'7374 Amra Street')
,( 4 ,'Drithi' ,'Layne' ,'36 Samer Street')
,( 5 ,'Lateef' ,'Kristel' ,'5888 Aarna Street')
,( 6 ,'Elisha' ,'Ximenna' ,'311 Jakel Street')
,( 7 ,'Aidy' ,'Phoenyx' ,'4607 Caralina Street')
,( 8 ,'Surie' ,'Bee' ,'5629 Legendary Street')
,( 9 ,'Braidyn' ,'Naava' ,'4553 Ellia Street')
,(10 ,'Korbin' ,'Kort' ,'1926 Julyana Street')
)names(FileNum,FirstName,LastName,Address)
CROSS APPLY
(--===== This creates 5 lines for each name to be used as the section 1 data for each file.
VALUES
( 1 ,'FirstName, ' + FirstName)
,( 2 ,'LastName, ' + LastName)
,( 3 ,'Address, ' + Address)
,( 4 ,'') -- Blank Line
,( 5 ,'OrderNumber,OrderDate,OrderAmount') --Next Section Line
)unpvt(SortOrder,SectionLine)
ORDER BY names.FileNum,unpvt.SortOrder
;
-- SELECT * FROM #Section1
;
--=====================================================================================================================
-- Build 1 file for each of the name/address combinations above.
-- Each file name is in the form of "CustomerNNNN" where "NNNN" is the left zero padded file counter.
--=====================================================================================================================
--===== Preset the loop counter (gotta use a loop for this one because we can only create 1 file at a time here).
DECLARE @FileCounter INT = 1;
WHILE @FileCounter <= 10
BEGIN
--===== Start over with the table for section 2.
DROP TABLE IF EXISTS ##FileOutput
;
--===== Grab the section 1 data for this file and start the file output table with it.
SELECT SectionLine
INTO ##FileOutput
FROM #Section1
WHERE FileNum = @FileCounter
ORDER BY SortOrder
;
--===== Build section 2 data (OrderNumber in same order as OrderDate and then DESC by OrderNumber like the OP had it)
WITH cteSection2 AS
(--==== This will build anywhere from 1 to 200 random but constrained rows of data
SELECT TOP (ABS(CHECKSUM(NEWID())%200)+1)
OrderDate = CONVERT(CHAR(10), DATEADD(dd, ABS(CHECKSUM(NEWID())%DATEDIFF(dd,'2019','2020')) ,'2019') ,23)
,OrderAmount = ABS(CHECKSUM(NEWID())%999)+1
FROM sys.all_columns
)
INSERT INTO ##FileOutput
(SectionLine)
SELECT TOP 2000000000 --The TOP is necessary to get the SORT to work correctly here
SectionLine = CONCAT(ROW_NUMBER() OVER (ORDER BY OrderDate),',',OrderDate,',',OrderAmount)
FROM cteSection2
ORDER BY OrderDate DESC
;
--===== Create a file from the data we created in the ##FileOutput table.
-- Note that this overwrites any files with the same name that already exist.
DECLARE @BCPCmd VARCHAR(256);
SELECT @BCPCmd = CONCAT('BCP "SELECT SectionLine FROM ##FileOutput" queryout "D:\Temp\Customer',RIGHT(@FileCounter+10000,4),'.txt" -c -T');
EXEC xp_CmdShell @BCPCmd
;
--===== Bump the counter for the next file
SELECT @FileCounter += 1
;
END
;
GO
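If you run that, each of the 10 files comes out shaped like this (the three order rows here are made up purely to show the shape, since the generator randomizes them, and the fourth line really is blank):
FirstName, Arlen
LastName, Aki
Address, 8990 Damarkus Street

OrderNumber,OrderDate,OrderAmount
3,2019-11-23,417
2,2019-06-07,88
1,2019-02-14,305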
Now, we could do what I used to do in the old days... we could use SQL Server to isolate the first and second sections and use xp_CmdShell to BCP them out to work files and simply re-import them. In fact, I'd likely still do that because it's a lot simpler and I've found a way to use xp_CmdShell in a very safe manner. Still, a lot of people get all puffed up about using it, so we won't do it that way.
First, we'll need a string splitter. We can't use the bloody STRING_SPLIT() function that MS built in as of 2016 because it doesn't return the ordinal positions of the elements it splits out. The following string splitter (up to 8kB) is the fastest non-CLR T-SQL-only splitter you'll be able to find. Of course, it's also fully documented and contains two tests in the flower box to verify its operation.
CREATE FUNCTION [dbo].[DelimitedSplit8K]
/**********************************************************************************************************************
Purpose:
Split a given string at a given delimiter and return a list of the split elements (items).
Notes:
1. Leading and trailing delimiters are treated as if an empty string element were present.
2. Consecutive delimiters are treated as if an empty string element were present between them.
3. Except when spaces are used as a delimiter, all spaces present in each element are preserved.
Returns:
iTVF containing the following:
ItemNumber = Element position of Item as a BIGINT (not converted to INT to eliminate a CAST)
Item = Element value as a VARCHAR(8000)
Note that this function uses a binary collation and is, therefore, case sensitive.
The original article for the concept of this splitter may be found at the following URL. You can also find
performance tests at this link although they are now a bit out of date. This function is much faster as of Rev 09,
which was built specifically for use in SQL Server 2012 and above and is about twice as fast as the version
documented in the article.
http://www.sqlservercentral.com/Forums/Topic1101315-203-4.aspx
-----------------------------------------------------------------------------------------------------------------------
CROSS APPLY Usage Examples and Tests:
--=====================================================================================================================
-- TEST 1:
-- This tests for various possible conditions in a string using a comma as the delimiter. The expected results are
-- laid out in the comments
--=====================================================================================================================
--===== Conditionally drop the test tables to make reruns easier for testing.
-- (this is NOT a part of the solution)
IF OBJECT_ID('tempdb..#JBMTest') IS NOT NULL DROP TABLE #JBMTest
;
--===== Create and populate a test table on the fly (this is NOT a part of the solution).
-- In the following comments, "b" is a blank and "E" is an element in the left to right order.
-- Double Quotes are used to encapsulate the output of "Item" so that you can see that all blanks
-- are preserved no matter where they may appear.
SELECT *
INTO #JBMTest
FROM ( --# of returns & type of Return Row(s)
SELECT 0, NULL UNION ALL --1 NULL
SELECT 1, SPACE(0) UNION ALL --1 b (Empty String)
SELECT 2, SPACE(1) UNION ALL --1 b (1 space)
SELECT 3, SPACE(5) UNION ALL --1 b (5 spaces)
SELECT 4, ',' UNION ALL --2 b b (both are empty strings)
SELECT 5, '55555' UNION ALL --1 E
SELECT 6, ',55555' UNION ALL --2 b E
SELECT 7, ',55555,' UNION ALL --3 b E b
SELECT 8, '55555,' UNION ALL --2 E b
SELECT 9, '55555,1' UNION ALL --2 E E
SELECT 10, '1,55555' UNION ALL --2 E E
SELECT 11, '55555,4444,333,22,1' UNION ALL --5 E E E E E
SELECT 12, '55555,4444,,333,22,1' UNION ALL --6 E E b E E E
SELECT 13, ',55555,4444,,333,22,1,' UNION ALL --8 b E E b E E E b
SELECT 14, ',55555,4444,,,333,22,1,' UNION ALL --9 b E E b b E E E b
SELECT 15, ' 4444,55555 ' UNION ALL --2 E (w/Leading Space) E (w/Trailing Space)
SELECT 16, 'This,is,a,test.' UNION ALL --4 E E E E
SELECT 17, ',,,,,,' --7 (All Empty Strings)
) d (SomeID, SomeValue)
;
--===== Split the CSV column for the whole table using CROSS APPLY (this is the solution)
SELECT test.SomeID, test.SomeValue, split.ItemNumber, Item = QUOTENAME(split.Item,'"')
FROM #JBMTest test
CROSS APPLY dbo.DelimitedSplit8K(test.SomeValue,',') split
;
--=====================================================================================================================
-- TEST 2:
-- This tests for various "alpha" splits and COLLATION using all ASCII characters from 0 to 255 as a delimiter against
-- a given string. Note that not all of the delimiters will be visible and some will show up as tiny squares because
-- they are "control" characters. More specifically, this test will show you what happens to various non-accented
-- letters for your given collation depending on the delimiter you chose.
--=====================================================================================================================
WITH
cteBuildAllCharacters (String,Delimiter) AS
(
SELECT TOP 256
'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789',
CHAR(ROW_NUMBER() OVER (ORDER BY (SELECT NULL))-1)
FROM master.sys.all_columns
)
SELECT ASCII_Value = ASCII(c.Delimiter), c.Delimiter, split.ItemNumber, Item = QUOTENAME(split.Item,'"')
FROM cteBuildAllCharacters c
CROSS APPLY dbo.DelimitedSplit8K(c.String,c.Delimiter) split
ORDER BY ASCII_Value, split.ItemNumber
;
-----------------------------------------------------------------------------------------------------------------------
Other Notes:
1. Optimized for VARCHAR(8000) or less. No testing or error reporting for truncation at 8000 characters is done.
2. Optimized for single character delimiter. Multi-character delimiters should be resolved externally from this
function.
3. Optimized for use with CROSS APPLY.
4. Does not "trim" elements just in case leading or trailing blanks are intended.
5. If you don't know how a Tally table can be used to replace loops, please see the following...
http://www.sqlservercentral.com/articles/T-SQL/62867/
6. Changing this function to use a MAX datatype will cause it to run twice as slow. It's just the nature of
MAX datatypes whether it fits in-row or not.
-----------------------------------------------------------------------------------------------------------------------
Credits:
This code is the product of many people's efforts including but not limited to the folks listed in the Revision
History below:
I also thank whoever wrote the first article I ever saw on "numbers tables", which is located at the following URL,
and Adam Machanic for leading me to it many years ago. The link below no longer works but has been preserved here
for posterity's sake.
http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-numbers-table.html
The original article can be seen at the following archival site, at least as of 29 Sep 2019.
http://web.archive.org/web/20150411042510/http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-numbers-table.html#
-----------------------------------------------------------------------------------------------------------------------
Revision History:
Rev 00 - 20 Jan 2010 - Concept for inline cteTally: Itzik Ben-Gan, Lynn Pettis and others.
Redaction/Implementation: Jeff Moden
- Base 10 redaction and reduction for CTE. (Total rewrite)
Rev 01 - 13 Mar 2010 - Jeff Moden
- Removed one additional concatenation and one subtraction from the SUBSTRING in the SELECT List for that tiny
bit of extra speed.
Rev 02 - 14 Apr 2010 - Jeff Moden
- No code changes. Added CROSS APPLY usage example to the header, some additional credits, and extra
documentation.
Rev 03 - 18 Apr 2010 - Jeff Moden
- No code changes. Added notes 7, 8, and 9 about certain "optimizations" that don't actually work for this
type of function.
Rev 04 - 29 Jun 2010 - Jeff Moden
- Added WITH SCHEMABINDING thanks to a note by Paul White. This prevents an unnecessary "Table Spool" when the
function is used in an UPDATE statement even though the function makes no external references.
Rev 05 - 02 Apr 2011 - Jeff Moden
- Rewritten for extreme performance improvement especially for larger strings approaching the 8K boundary and
for strings that have wider elements. The redaction of this code involved removing ALL concatenation of
delimiters, optimization of the maximum "N" value by using TOP instead of including it in the WHERE clause,
and the reduction of all previous calculations (thanks to the switch to a "zero based" cteTally) to just one
instance of one add and one instance of a subtract. The length calculation for the final element (not
followed by a delimiter) in the string to be split has been greatly simplified by using the ISNULL/NULLIF
combination to determine when the CHARINDEX returned a 0 which indicates there are no more delimiters to be
had or to start with. Depending on the width of the elements, this code is between 4 and 8 times faster on a
single CPU box than the original code especially near the 8K boundary.
- Modified comments to include more sanity checks on the usage example, etc.
- Removed "other" notes 8 and 9 as they were no longer applicable.
Rev 06 - 12 Apr 2011 - Jeff Moden
- Based on a suggestion by Ron "Bitbucket" McCullough, additional test rows were added to the sample code and
the code was changed to encapsulate the output in pipes so that spaces and empty strings could be perceived
in the output. The first "Notes" section was added. Finally, an extra test was added to the comments above.
Rev 07 - 06 May 2011 - Peter de Heer
- A further 15-20% performance enhancement has been discovered and incorporated into this code which also
eliminated the need for a "zero" position in the cteTally table.
Rev 08 - 24 Mar 2014 - Eirikur Eiriksson
- Further performance modification (twice as fast) For SQL Server 2012 and greater by using LEAD to find the
next delimiter for the current element, which eliminates the need for CHARINDEX, which eliminates the need
for a second scan of the string being split.
REF: https://www.sqlservercentral.com/articles/reaping-the-benefits-of-the-window-functions-in-t-sql-2
Rev 09 - 29 Sep 2019 - Jeff Moden
- Combine the improvements by Peter de Heer and Eirikur Eiriksson for use on SQL Server 2012 and above.
- Add Test 17 to the test code above.
- Modernize the generation of the embedded "Tally" generation available as of 2012. There's no significant
performance increase but it makes the code much shorter and easier to understand.
- Check/change all URLs in the notes above to ensure that they're still viable.
- Add a binary collation for a bit more of an edge on performance.
- Removed "Other Note" #7 above as UNPIVOT is no longer applicable (never was for performance).
**********************************************************************************************************************/
--=========== Define I/O parameters
(@pString VARCHAR(8000), @pDelimiter CHAR(1))
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
--=========== "Inline" CTE Driven "Tally Tableā produces values from 0 up to 10,000, enough to cover VARCHAR(8000).
WITH E1(N) AS (SELECT N FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))E0(N))
,E4(N) AS (SELECT 1 FROM E1 a, E1 b, E1 c, E1 d)
,cteTally(N) AS (--==== This provides the "base" CTE and limits the number of rows right up front
-- for both a performance gain and prevention of accidental "overruns"
SELECT TOP (ISNULL(DATALENGTH(@pString),0)) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
)
,cteStart(N1) AS (--==== This returns N+1 (starting position of each "element" just once for each delimiter)
SELECT 1 UNION ALL
SELECT t.N+1 FROM cteTally t WHERE SUBSTRING(@pString COLLATE Latin1_General_BIN,t.N,1)
= @pDelimiter COLLATE Latin1_General_BIN
)
--=========== Do the actual split.
-- The ISNULL/NULLIF combo handles the length for the final element when no delimiter is found.
SELECT ItemNumber = ROW_NUMBER() OVER (ORDER BY s.N1)
, Item = SUBSTRING(@pString,s.N1,ISNULL(NULLIF((LEAD(s.N1,1,1) OVER (ORDER BY s.N1)-1),0)-s.N1,8000))
FROM cteStart s
;
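Once the function exists, here's a quick throwaway test against one of the order lines from the sample file. The ItemNumber column is the ordinal position that 2016's STRING_SPLIT() won't give you:
SELECT split.ItemNumber, split.Item
  FROM dbo.DelimitedSplit8K('4,2020-04-04,100',',') split
;
--===== Expected results:
-- ItemNumber Item
--          1 4
--          2 2020-04-04
--          3 100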
Once you've set up the splitter and built the test files, the following code demonstrates a nasty-fast method (still not as fast as if the files could just be imported directly, though) of loading each file, parsing each section of the file, and loading each of the two sections into their respective normalized tables. The details are in the comments in the code.
Unfortunately, this forum won't allow for more than 30,000 characters and so I need to continue this in the next post down.
To continue with the rest of the code...
Look for the word "TODO" in this code to see where you'll need to make changes to handle your actual files. Like I said, the details are in the comments of the code.
As a bit of a sidebar, one of the advantages of doing this type of thing in stored procedures is that it's a heck of a lot easier to copy stored procedures than it is to copy "SSIS packages" when the time comes (and it WILL come) to migrate to a new system.
/**********************************************************************************************************************
Purpose:
Import the files we created above to demonstrate one possible solution.
As a reminder, the files look like the following:
Firstname, Andrew
Lastname, Smith
Address,1 new street
OrderNumber,OrderDate,OrderAmount
4,2020-04-04,100
3,2020-04-01,200
2,2020-03-25,100
1,2020-03-02,50
Each file will have the identical format where the first section will always have the same number of lines. The OP
specified that there will be 24 lines in the first section but I'm only using 3 for this demo.
The second section of each file will always have exactly the same format (including the column names) but the number
of lines containing the "CSV" data can vary (quite randomly) anywhere from just 1 line to as many as 200 lines.
Note that the files this code looks for are in the file path of "D:\Temp\" and the file name pattern is "CustomerNNNN"
where the "NNNN" is the Left Zero Padded CustomerID. You need to change those if your stuff is different.
***** PLEASE NOTE THAT, UNLIKE THE TEST-FILE GENERATOR ABOVE, THIS SCRIPT IS THE ACTUAL SOLUTION TO THE PROBLEM. *****
Revision History
Rev 00 - 08 May 2020 - Jeff Moden
- Initial Creation and Unit Test.
- Ref: https://stackoverflow.com/questions/61580198/ssis-import-csv-which-is-part-structured-part-unstructured
**********************************************************************************************************************/
--=====================================================================================================================
-- CREATE THE NECESSARY TABLES
-- I'm using TempTables as both the working tables and the final target tables because I didn't want to take
-- a chance with accidentally dropping one of your tables.
--=====================================================================================================================
--===== This is where the customer information from the first section of all files will be stored.
-- It should probably be a permanent table.
DROP TABLE IF EXISTS #Customer;
CREATE TABLE #Customer
(
CustomerID INT NOT NULL
,FirstName VARCHAR(50) NOT NULL
,LastName VARCHAR(50) NOT NULL
,Address VARCHAR(50) NOT NULL
,CONSTRAINT PK_#Customer PRIMARY KEY CLUSTERED (CustomerID)
)
;
--===== This is where the order information from the second section of all files will be stored.
-- It should probably be a permanent table.
DROP TABLE IF EXISTS #CustomerOrder;
CREATE TABLE #CustomerOrder
(
CustomerID INT NOT NULL
,OrderNumber INT NOT NULL
,OrderDate DATE NOT NULL
,OrderAmount INT NOT NULL
,CONSTRAINT PK_#CustomerOrder PRIMARY KEY CLUSTERED (CustomerID,OrderNumber)
)
;
--===== We'll store all file names in this table.
-- It should probably continue to be a Temp Table.
DROP TABLE IF EXISTS #DirTree;
CREATE TABLE #DirTree
(
FileName VARCHAR(500) PRIMARY KEY CLUSTERED
,Depth INT
,IsFile BIT
)
;
--===== This is where the filtered list of files we want to work with will be stored.
-- It should probably continue to be a Temp Table.
DROP TABLE IF EXISTS #FileControl;
CREATE TABLE #FileControl
(
FileControlID INT IDENTITY(1,1) PRIMARY KEY CLUSTERED
,FileName VARCHAR(500) NOT NULL
,CustomerID AS CONVERT(INT,LEFT(RIGHT(FileName,8),4))
)
;
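--===== Just to illustrate what that computed CustomerID column does (this block is only an illustration and
     -- is safe to delete), here's the expression worked out against one name that follows the "CustomerNNNN.txt"
     -- pattern used by the demo files.
SELECT  FileName      = v.FileName                                 -- 'Customer0007.txt'
       ,Last8         = RIGHT(v.FileName,8)                        -- '0007.txt'
       ,First4OfLast8 = LEFT(RIGHT(v.FileName,8),4)                -- '0007'
       ,CustomerID    = CONVERT(INT,LEFT(RIGHT(v.FileName,8),4))   -- 7
  FROM (VALUES ('Customer0007.txt')) v (FileName)
;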
--===== This is where we'll temporarily import files to be worked on one at a time.
-- Ironically, this needs to be a non-temporary table because we need to create
-- a view on it to avoid needing a BCP Format File to skip the LineNumber column
-- during the "flat" import.
DROP TABLE IF EXISTS dbo.FileContent;
CREATE TABLE dbo.FileContent
(
LineNumber INT IDENTITY(1,1)
,LineContent VARCHAR(100)
)
;
--===== This is the view that we'll actually import to and it will target the table above.
-- It replaces a BCP Format File to skip the LineNumber column in the target table.
-- It's being created using Dynamic SQL to avoid the use of "GO".
DROP VIEW IF EXISTS dbo.vFileContent;
EXEC ('CREATE VIEW dbo.vFileContent AS SELECT LineContent FROM dbo.FileContent')
;
--=====================================================================================================================
-- Find the files we want to load.
-- The xp_DirTree command does not allow for wild cards and so we have to load all file and directory names that
are in @FilePath and then filter and copy just the ones we want to a file control table.
--=====================================================================================================================
--===== Local variables populated in this section
DECLARE @FilePath VARCHAR(500) = 'D:\Temp\' --TODO Change this if you need to.
,@FileCount INT
;
--===== Load all names in the #FilePath whether they are file names or directory names.
INSERT INTO #DirTree WITH (TABLOCK)
(FileName, Depth, IsFile)
EXEC xp_DirTree @FilePath,1,1
;
--===== Filter the names of files that we want and load them into a numbered control table to step through the files later.
INSERT INTO #FileControl
(FileName)
SELECT FileName
FROM #DirTree
WHERE FileName LIKE 'Customer[0-9][0-9][0-9][0-9].txt' --TODO you will likely need to change this pattern for file names.
AND IsFile = 1
ORDER BY FileName --Just to help keep track.
;
--===== Remember the number of file names we loaded for the upcoming control loop.
SELECT @FileCount = @@ROWCOUNT
;
--SELECT * FROM #FileControl;
--=====================================================================================================================
-- This loop is the "control" loop that loads each file one at a time and parses the information out of section 1
-- and section 2 of the file and stores the data in the respective tables.
--=====================================================================================================================
--===== Define the local variables populated in this section.
DECLARE @Counter INT = 1
,@Section1LastLine INT = 3 --TODO you'll need to change this to 24 according to your specs for the real files.
,@Section2FirstLine INT = 5 --TODO you'll also need to change this but I don't know what it will be for you.
;
--===== Setup the loop counter
WHILE @Counter <= @FileCount
BEGIN
--===== These are variables that are used within this loop.
-- No... this doesn't create an error and they're really handy when trying to troubleshoot.
DECLARE @FileName VARCHAR(500)
,@CustomerID INT
,@SQL VARCHAR(8000)
;
--===== This gets the next file from the file control table according to @Counter.
-- TODO... you might have to change where you get the CustomerID from.
-- I'm getting it from the "patterned" file names in this case because I had nothing else to go on
-- in your description of the problem.
SELECT @FileName = CONCAT(@FilePath,FileName)
,@CustomerID = CustomerID
FROM #FileControl
WHERE FileControlID = @Counter -- SELECT * FROM #FileControl
;
--===== Clear the guns to get ready to load and work on a new file.
TRUNCATE TABLE dbo.FileContent
;
--===== Calculate the BULK INSERT command we need to load the given file.
SELECT @SQL = '
BULK INSERT dbo.vFileContent
FROM '+QUOTENAME(@FileName,'''')+'
WITH (
BATCHSIZE = 2000000000 --Import everything in one shot for performance/potential minimal logging.
,CODEPAGE = ''RAW'' --Ignore any code pages.
,DATAFILETYPE = ''char'' --This is NOT a unicode file. It''s ANSI text.
,FIELDTERMINATOR = '','' --The delimiter between the fields in the file.
,ROWTERMINATOR = ''\n'' --The rows were not generated on a Windows box so only "LineFeed" is used.
,KEEPNULLS --Adjacent delimiters will create NULLs rather than blanks.
,TABLOCK --Allows for "minimal logging" when possible (and it is for this import)
)
;'
--PRINT @SQL
EXEC (@SQL)
;
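--===== At this point, dbo.FileContent holds the raw file with one whole line per row and LineNumber
     -- supplying the ordinal that the slicing below depends on. For demo file 1 it looks roughly like this
     -- (the order rows are random, so yours will differ):
     --   LineNumber  LineContent
     --            1  FirstName, Arlen
     --            2  LastName, Aki
     --            3  Address, 8990 Damarkus Street
     --            4  (blank line)
     --            5  OrderNumber,OrderDate,OrderAmount
     --            6  3,2019-11-23,417
     --            7  2,2019-06-07,88
     --            8  1,2019-02-14,305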
--===== Read Section 1 (customer information)
-- This builds the dynamic SQL to parse and store the customer information in section 1.
SELECT @SQL = CONCAT('INSERT INTO #Customer',CHAR(10),'(CustomerID');
SELECT @SQL += CONCAT(',',SUBSTRING(LineContent,1,CHARINDEX(',',LineContent)-1))
FROM dbo.FileContent
WHERE LineNumber <= @Section1LastLine;
SELECT @SQL += CONCAT(')',CHAR(10),'SELECT',CHAR(10));
SELECT @SQL += CONCAT(' CustomerID=',@CustomerID,CHAR(10));
SELECT @SQL += CONCAT(',',SUBSTRING(LineContent,1,CHARINDEX(',',LineContent)-1),'='
,QUOTENAME(LTRIM(RTRIM(SUBSTRING(LineContent,CHARINDEX(',',LineContent)+1,50))),'''')
,CHAR(10)
)
FROM dbo.FileContent
WHERE LineNumber <= @Section1LastLine
;
EXEC (@SQL)
;
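--===== For demo file 1 (CustomerID 1 taken from the file name "Customer0001.txt"), the dynamic SQL built
     -- and executed above comes out looking like this:
     --   INSERT INTO #Customer
     --   (CustomerID,FirstName,LastName,Address)
     --   SELECT
     --    CustomerID=1
     --   ,FirstName='Arlen'
     --   ,LastName='Aki'
     --   ,Address='8990 Damarkus Street'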
--===== This parses and stores the information from section 2.
-- Since you said the order of the columns never changes, I hard-coded the results for performance
-- using an ancient "Black Arts" form of code known as a "CROSSTAB", which pivots the data result
-- from the splitter faster than PIVOT usually does and also allows exquisite control in the code.
INSERT INTO #CustomerOrder
(OrderNumber,CustomerID,OrderDate,OrderAmount)
SELECT OrderNumber = MAX(CASE WHEN split.ItemNumber = 1 THEN Item ELSE -1 END)
,CustomerID = @CustomerID
,OrderDate = MAX(CASE WHEN split.ItemNumber = 2 THEN Item ELSE '1753' END)
,OrderAmount = MAX(CASE WHEN split.ItemNumber = 3 THEN Item ELSE -1 END)
FROM dbo.FileContent fc
CROSS APPLY dbo.DelimitedSplit8K(fc.LineContent,',') split
WHERE LineNumber > @Section2FirstLine
GROUP BY LineNumber
;
--===== Bump the counter
SELECT @Counter += 1
;
END
;
--===== All done. Display the results of the two tables we populated from all 10 files.
SELECT * FROM #Customer;
SELECT * FROM #CustomerOrder;