SQL Server: convert string to int when importing from CSV

In my CSV file there is an AuthorID column, but the values are in string format.
Is it possible, in a SQL Server query, to convert a string to an integer while importing a CSV?
BULK
INSERT Author
FROM 'C:\author.csv'
WITH
(
FIRSTROW = 2,
<Convert Column 1 to Integer>
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO

For data manipulation while transferring data, use SSIS packages; I don't think you can do anything to the data itself with a BULK INSERT statement while it is being inserted into a table. The destination column just needs to have the right DATATYPE and it should work.
In your case, if you set the datatype of your destination column to INT and you do not have any non-numeric strings in your data, it should work just fine. If you still get errors, check the data you are inserting; if any of it needs changing before it can go into an INT column, you might have to consider using an SSIS package.
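For illustration, a minimal sketch of what this answer describes (the Author table definition below is an assumption; the question only names the AuthorID column):
CREATE TABLE dbo.Author
(
    AuthorID   INT          NOT NULL, -- character data such as 123 is implicitly converted to INT during the load
    AuthorName VARCHAR(100) NULL      -- placeholder for the remaining CSV columns
);

BULK INSERT dbo.Author
FROM 'C:\author.csv'
WITH
(
    FIRSTROW = 2,
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
);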

Related

BULK INSERT type mismatch when table created from same .CSV

I receive update information for items on a daily basis via a CSV file that includes date/time information in the format YYYY-MM-DDThh:mm:ss.
I used the Management Studio task "Import Flat File..." to create a table dbo.fullItemList and import the contents of the initial file. It identified the date/time columns as type datetime2(7) and imported the data correctly. I then copied this table to create a blank table dbo.dailyItemUpdate.
I want to create a script that imports the CSV file to dbo.dailyItemUpdate, uses a MERGE function to update dbo.fullItemList, then wipes dbo.dailyItemUpdate ready for the next day.
The bit I can't get to work is the import. As the table already exists, I'm using the following:
BULK INSERT dbo.dailyItemUpdate
FROM 'pathToFile\ReceivedFile.csv'
WITH
(
DATAFILETYPE = 'char',
FIELDQUOTE = '"',
FIRSTROW = 2,
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n',
TABLOCK
)
But I get a "type mismatch..." error on the date/time columns. Why does the BULK INSERT fail, even though the data type was picked up correctly by the "Import Flat File" task?
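For reference, the daily script described above would look roughly like the sketch below; the key and update columns (ItemID, LastUpdated) are placeholders, since the question does not list the table's columns:
BULK INSERT dbo.dailyItemUpdate
FROM 'pathToFile\ReceivedFile.csv'
WITH (DATAFILETYPE = 'char', FIELDQUOTE = '"', FIRSTROW = 2,
      FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);

MERGE dbo.fullItemList AS tgt
USING dbo.dailyItemUpdate AS src
    ON tgt.ItemID = src.ItemID                    -- placeholder key column
WHEN MATCHED THEN
    UPDATE SET tgt.LastUpdated = src.LastUpdated  -- placeholder update column
WHEN NOT MATCHED THEN
    INSERT (ItemID, LastUpdated)
    VALUES (src.ItemID, src.LastUpdated);

TRUNCATE TABLE dbo.dailyItemUpdate;               -- wipe ready for the next day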

Migrating from SQL Server to Hive Table using flat file

I am migrating my data from SQL Server to Hive, but there is a data issue with the resulting table. I tried various options, including checking the datatypes and using CSVSerde, but I am not able to get the data aligned properly in the respective columns. I followed these steps:
Export the SQL Server data to a flat file with fields separated by commas.
Create an external table in Hive as given below and load the data.
CREATE EXTERNAL TABLE IF NOT EXISTS myschema.mytable (
r_date timestamp
, v_nbr varchar(12)
, d_account int
, d_amount decimal(19,4)
, a_account varchar(14)
)
row format delimited
fields terminated by ','
stored as textfile;
LOAD DATA INPATH 'gs://mybucket/myschema.db/mytable/mytable.txt' OVERWRITE INTO TABLE myschema.mytable;
There is an issue with the data with every combination I could try.
I also tried OpenCSVSerde, but the result was worse than with the plain text file. I also tried changing the delimiter to a semicolon, but no luck.
row format serde 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
with serdeproperties ( "separatorChar" = ",") stored as textfile
location 'gs://mybucket/myschema.db/mytable/';
Can you please suggest a robust approach so that I don't have to deal with data issues?
Note: currently I don't have the option of connecting my SQL Server table with Sqoop.

Bulk insert CSV file from Azure blob storage to SQL managed instance

I have a CSV file on Azure blob storage. It has 4 columns, no headers, and one blank row at the start. I am inserting the CSV file into a SQL managed instance with BULK INSERT, and I have 5 columns in the database table. I don't have the 5th column in the CSV file.
Therefore it is throwing this error:
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 5 (uId2)
I want to insert those 4 columns from the CSV file into the database table, with the 5th column left as NULL.
I am using this code:
BULK INSERT testing
FROM 'test.csv'
WITH (DATA_SOURCE = 'BULKTEST',
FIELDTERMINATOR = ',',
FIRSTROW = 0,
CODEPAGE = '65001',
ROWTERMINATOR = '0x0a'
);
I want that 5th column to be NULL in the database table when there are only 4 columns in the CSV file.
Sorry, we can't achieve that with BULK INSERT, and I don't know of any other way to do it, in my experience.
Azure SQL Managed Instance is also not supported as a dataset in Data Factory Data Flow; otherwise we could use a Data Flow derived column to create a new column mapped to the Azure SQL database.
The best way is to edit your CSV file: just add the new column to your CSV files.
Hope this helps.
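To illustrate that suggestion (the sample values below are made up), each edited data row simply carries an empty fifth field so it lines up with the five-column table:
Original 4-field row (sample values): 1,abc,2020-01-01,10.5
Edited row with an empty fifth field: 1,abc,2020-01-01,10.5,
Assuming uId2 is nullable and has no default, the empty field should then load as NULL.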

SQL Server Bulk Insert CSV Issue

I'm having an issue that I have not encountered before when bulk inserting from a CSV file. For whatever reason, the last column isn't being separated on insert. I kept getting type conversion errors that I knew couldn't be right, so I changed the datatype to varchar to see what was being inserted. When I looked at the result set, I saw that instead of the two values being separated into two columns (e.g. 35.44 and 56.82) as in the .csv, they ended up in one column (e.g. 35.44,56.82). This of course is why SQL Server was throwing the error, but how can I resolve it? Am I missing something simple?
To sum it up, the BULK INSERT is ignoring the last field terminator and combining the last two columns into one.
My Bulk Insert:
BULK
INSERT [YourTableName]
FROM 'YourFilePathHere'
WITH
(
FIELDTERMINATOR=',',
ROWTERMINATOR = '\n'
)
A row:
YSQ3863,Bag 38x63 YELLOW 50/RL,CS,BAG,17.96,LB,1,50,50,YELLOW,,,,,,63,17.96,,,,38,,2394,,8.15,11.58,19.2,222.41

SQL - Bulk Insert and Data Types

Today I have a bulk insert from a fixed-width file, like this:
BULK INSERT #TBF8DPR501
FROM 'C:\File.txt' WITH (
FORMATFILE = 'C:\File.txt.xml'
,ROWTERMINATOR = '\n'
)
The format file is just to set the width of each field, and after the bulk insert into the temp table, I created an INSERT INTO X SELECT FROM temp to convert some columns that the bulk insert cannot convert.
My question is: is it possible to make the bulk insert convert values such as:
Dates in the format dd.MM.yyyy or ddMMyyyy
Decimal values like 0000000000010022 (which represents 100.22)
without needing to bulk insert into a temp table first to convert the values?
No, it isn't: BULK INSERT simply copies data in as fast as possible; it doesn't transform the data in any way. Your current solution with a temp table is a very common one used in data warehousing and reporting scenarios, so if it works the way you want, I would just keep using it.
If you do want to do the transformation during the load, then you could use an ETL tool such as SSIS. But there is nothing wrong with your current approach, and SSIS would be a very 'heavy' alternative.
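As a minimal sketch of that staging-table pattern (the target table and raw column names below are assumptions; only the temp table #TBF8DPR501 is named in the question):
INSERT INTO dbo.TargetTable (doc_date, amount)
SELECT
    TRY_CONVERT(date, raw_date, 104),        -- style 104 parses dd.MM.yyyy
    TRY_CONVERT(bigint, raw_amount) / 100.0  -- '0000000000010022' -> 100.22 (implied two decimals)
FROM #TBF8DPR501;
The ddMMyyyy variant has no built-in conversion style, so it would need to be rearranged with string functions (e.g. SUBSTRING) before converting.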
