I'm trying to bulk insert a CSV file that has date columns in DD/MM/YYYY format, while the SQL DATE format is YYYY-MM-DD. Some values are even D/M/YYYY (1/1/2020 rather than 01/01/2020), so I don't think I can simply slice the string.
BULK INSERT orders
FROM 'C:\Orders.csv'
WITH
(
FIRSTROW = 2,
FIELDTERMINATOR = ',', --CSV field delimiter
ROWTERMINATOR = '\n', --Use to shift the control to next row
ERRORFILE = 'C:\OrdersErrorRows.csv',
TABLOCK
);
You can convert to date and specify the format. Try the syntax referenced:
https://www.mssqltips.com/sqlservertip/1145/date-and-time-conversions-using-sql-server/
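For example, a minimal sketch of that approach (the staging table and its columns here are assumptions, not from the question): CONVERT style 103 parses dd/mm/yyyy and also accepts single-digit day/month values such as 1/1/2020.

```sql
-- Stage the raw date text, then convert with style 103 (dd/mm/yyyy).
CREATE TABLE #staging (OrderDate varchar(10) /*, other columns... */);

BULK INSERT #staging
FROM 'C:\Orders.csv'
WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

INSERT INTO orders (OrderDate)
SELECT CONVERT(date, OrderDate, 103)  -- 103 = dd/mm/yyyy (British/French style)
FROM #staging;
```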
Related
I have a dataset with more than 1,000 rows to import into SSMS, so I used the following T-SQL to create the table and import the data:
CREATE TABLE sales(
SPID text,
GeoID text,
PID text,
SaleDate smalldatetime,
Amount int,
Customers int,
Boxes int);
BULK INSERT sales
FROM 'C:\Users\alvin_zoj6s4b\Downloads\sales.txt'
WITH (FIELDQUOTE = '\'
, FIELDTERMINATOR = ','
, ROWTERMINATOR = '\n');
I got the following error:
The problem is in the SaleDate column, so I changed smalldatetime to text in the CREATE TABLE query:
CREATE TABLE sales(
SPID text,
GeoID text,
PID text,
SaleDate TEXT,
Amount int,
Customers int,
Boxes int);
The resulting table shows ' ' quotes around the values in the text columns (SPID, GeoID, PID & SaleDate):
Here is the structure of my text file stored in my laptop:
The two problems I would like to clarify:
The ' ' quoting belongs to the raw data, so why does my output still show ' ' after importing into SSMS?
SaleDate only works when I change it to text; have I done something wrong?
As I mentioned in the comments, you need to tell SQL Server that the file is a CSV file with FORMAT, and tell it the quote identifier is a single quote (') with FIELDQUOTE, as the default is a double quote ("). You had, for some reason, defined the FIELDQUOTE as '\', yet there isn't a single \ in your images.
You also need to fix those data types; text has been deprecated for 17 years, so you shouldn't be using it. I have used lengths based on your image of data (please do not upload images of code/data/errors when asking a question). I also use a date for your column SaleDate, as all the times are at midnight; if they do have a time portion, then you might want a datetime2 (as there are no milliseconds in the data, a datetime2(0) would make sense).
This gives the following:
CREATE TABLE dbo.sales (SPID varchar(4), --guessed length; I doubt you need MAX, which is for values exceeding 8,000 characters
GeoID varchar(2), --guessed length
PID varchar(3), --guessed length
SaleDate date, --Considering all your values are at midnight, this seems a better choice. Otherwise use datetime2(0)
Amount int,
Customers int,
Boxes int);
GO
BULK INSERT dbo.sales
FROM 'C:\Users\alvin_zoj6s4b\Downloads\sales.txt'
WITH (FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n',
FORMAT = 'CSV',
FIELDQUOTE = '''');
I have a CSV file in UTF8 encoding and I would like to import data into SQL Server DB table.
In some cells I have stored values like:
±40%;
16.5±10%;
All columns load perfectly, but columns containing the ± character show this in the DB:
For every column where I want to store this character I use nvarchar(50) with collation Latin1_General_100_CS_AS_WS_SC_UTF8.
Is there any way to store this character in the DB correctly?
Thank you
EDIT
I use for load CSV file this:
BULK INSERT [dbo].[x]
FROM 'c:\Users\x\Downloads\x.csv'
WITH
(
FIRSTROW = 2,
FIELDTERMINATOR = ';', --CSV field delimiter
ROWTERMINATOR = '\n', --Use to shift the control to next row
ERRORFILE = 'c:\Users\x\Downloads\xx.csv',
TABLOCK
);
I also tried changing the SSMS options:
'Tools' -> 'Options' -> 'Environment' -> 'Fonts and Colors' -> select 'Grid Results'
and set the font to Arial, but without positive results.
I have over 20 million records in many files which I want to import.
Have you tried adding CODEPAGE = 65001 (UTF-8) to the WITH clause of the BULK INSERT statement?
BULK INSERT [dbo].[x]
FROM 'c:\Users\x\Downloads\x.csv'
WITH
(
FIRSTROW = 2,
FIELDTERMINATOR = ';', --CSV field delimiter
ROWTERMINATOR = '\n', --Use to shift the control to next row
ERRORFILE = 'c:\Users\x\Downloads\xx.csv',
CODEPAGE = 65001,
TABLOCK
);
I want to bulk import from a CSV file in SQL, but \n for a new line is not working as the row terminator: no records are read from the CSV file when I use \n, and when I use
ROWTERMINATOR = '0x0A'
it mixes up all the records.
This is the code I am using in my stored procedure:
Create Table #temp
(
Field1 nvarchar(max) null,
Field2 nvarchar(max) null,
Field3 nvarchar(max) null
)
BULK INSERT #temp
FROM 'c:\file.csv'
WITH
(
FIRSTROW = 2,
FIELDTERMINATOR = ',', --CSV field delimiter
ROWTERMINATOR = '\n', --not working
--ROWTERMINATOR = '\r', --not working
--ROWTERMINATOR = char(10), ---not working
--ROWTERMINATOR = char(13), ---not working
TABLOCK
)
INSERT INTO table_name
(
tbl_field1,tbl_field2,tbl_field3
)
SELECT
field1,
field2,
field3
FROM #temp
Thanks in Advance
I did it with the help of @DVO, thank you for the answer. It is working fine as per your instructions: I used Notepad++ to see the hidden characters and handled them accordingly.
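For instance, if Notepad++ shows CR LF at the end of each line (an assumption; the answer doesn't say which hidden characters were found), the row terminator can be given as that exact byte sequence:

```sql
BULK INSERT #temp
FROM 'c:\file.csv'
WITH
(
    FIRSTROW = 2,
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '0x0d0a', -- CR LF; use '0x0a' for LF-only files
    TABLOCK
)
```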
I'm trying to load a CSV file into a table in order to sort the data. However, the smallmoney column BillingRate (e.g. $203.75) will not convert, with SQL Server producing the following message:
Msg 4864, Level 16, State 1, Line 10
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 2, column 4 (BillingRate).
Here is the code I am using in order to do this:
--CREATE TABLE SubData2
--(
--RecordID int,
--SubscriberID int,
--BillingMonth int,
--BillingRate smallmoney,
--Region varchar(255)
--);
BULK INSERT Subdata2
FROM 'C:\Folder\1-caseint.csv'
WITH
(FIRSTROW = 2,
FIELDTERMINATOR = '|', --CSV field delimiter
ROWTERMINATOR = '\n', --Use to shift the control to next row
ERRORFILE = 'C:\Folder\CaseErrorRows4.csv',
TABLOCK);
A typical line in the CSV file looks like:
1|0000000001|1|$233.94|"West"
Apologies if there are any glaring errors here - I'm new to SQL Server :)
Many thanks in advance,
Tom.
This is odd. On a direct insert, only the last statement fails.
declare @t table (sm smallmoney);
insert into @t
values ('$256.6');
insert into @t
values (244.8);
insert into @t
values ('12.8');
insert into @t
values ('$256.5'), ('244.7');
insert into @t
values ($256.5);
insert into @t
values ('$256.5'), (244.7), ('12.12');
select * from @t;
Try removing the $ from the data.
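If editing the file isn't practical, another option along the same lines (a sketch; the staging table name is an assumption) is to bulk insert BillingRate as text and strip the $ during the final insert:

```sql
CREATE TABLE #SubStaging
(
    RecordID int,
    SubscriberID int,
    BillingMonth int,
    BillingRate varchar(20), -- keep the raw '$233.94' text for now
    Region varchar(255)
);

BULK INSERT #SubStaging
FROM 'C:\Folder\1-caseint.csv'
WITH (FIRSTROW = 2, FIELDTERMINATOR = '|', ROWTERMINATOR = '\n');

INSERT INTO SubData2 (RecordID, SubscriberID, BillingMonth, BillingRate, Region)
SELECT RecordID, SubscriberID, BillingMonth,
       CAST(REPLACE(BillingRate, '$', '') AS smallmoney),
       REPLACE(Region, '"', '') -- the file quotes Region with double quotes
FROM #SubStaging;
```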
I'm trying to import a .txt file into Advanced Query Tool (the SQL client I use). So far, I have:
CREATE TABLE #tb_test
(
id INTEGER,
name varchar(10),
dob date,
city char(20),
state char(20),
zip integer
);
insert into #tb_test
values
(1,'TEST','2015-01-01','TEST','TEST',11111)
;
bulk insert #tb_test
from 'h:\tbdata.txt'
with
(
fieldterminator = '\t',
rowterminator = '\n'
);
I receive an error message saying there's a syntax error on line 1. Am I missing a database from which #tb_test comes (like db.#tb_test)?
Here's a line from the tbdata.txt file:
2,'TEST2','2012-01-01','TEST','TEST',21111
I was curious about this question and found the following solution:
Your data is comma-separated, but you are trying to split by TAB. Two options: change the file data to be TAB-separated, or change fieldterminator = '\t' to fieldterminator = ','.
The DATE format has issues when loading directly from a file. My best solution is to change the temp field dob to type VARCHAR(20) and then convert it to DATE when passing it to the final display/data storage.
Here is the corrected code:
CREATE TABLE #tb_test
(
id INTEGER,
name varchar(10),
dob varchar(20),
city char(20),
state char(20),
zip integer
);
insert into #tb_test
values
(1,'TEST','2015-01-01','TEST','TEST',11111)
;
bulk insert #tb_test
from 'h:\tbdata.txt'
with
(
fieldterminator = ',',
rowterminator = '\n'
);
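The final conversion back to DATE could then look like this (a sketch; since the values in tbdata.txt are wrapped in single quotes, they are stripped before casting):

```sql
SELECT id,
       name,
       CAST(REPLACE(dob, '''', '') AS date) AS dob, -- strip the '' quoting, then cast
       city,
       state,
       zip
FROM #tb_test;
```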