This question already has answers here:
BULK INSERT with identity (auto-increment) column
I defined a table with a unique, automatically generated Id (identity column):
CREATE TABLE Table1(
Id bigint IDENTITY PRIMARY KEY
,Version VARCHAR(10) NOT NULL
,Date DATE NOT NULL
,Code VARCHAR(10) NOT NULL);
INSERT INTO Table1(Version,Date,Code) VALUES ('1.0','2018-04-16','8615');
INSERT INTO Table1(Version,Date,Code) VALUES ('1.0','2018-04-16','2285');
INSERT INTO Table1(Version,Date,Code) VALUES ('1.0','2018-04-16','11625');
Now I have a .csv file with more information to insert. I suppose I should use BULK INSERT, like:
BULK INSERT Table1
FROM 'C:\test.csv'
WITH (
FIELDTERMINATOR = ','
,ROWTERMINATOR = '\n'
)
The input file contains:
1.0,2018-04-16,240061
1.0,2018-04-17,3435
1.0,2018-04-18,2143
1.0,2018-04-19,44
1.0,2018-04-20,2453
1.0,2018-04-01,2012
1.0,2018-04-22,123
1.0,2018-04-23,9887
1.0,2018-04-30,57
1.0,2018-05-1,576
1.0,2018-05-8,35
1.0,2018-05-9,867
1.0,2018-05-10,555
....
Running the BULK INSERT statement results in errors:
Msg 4864, Level 16, State 1, Line 1
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 1 (Id).
Msg 4864, Level 16, State 1, Line 1
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 2, column 1 (Id).
What is the best way to insert a lot of data (more than 10,000 rows) from the csv into the table?
Try specifying the column names, like:
BULK INSERT Table1 (Version,Date,Code)
FROM 'C:\test.csv'
WITH (
FIELDTERMINATOR = ','
,ROWTERMINATOR = '\n'
)
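If BULK INSERT will not accept a column list, an alternative with the same effect (just a sketch, using an assumed helper view name vTable1Load) is to load through a view that leaves out the Id column, so the identity value is generated for each row. BULK INSERT can target a view as long as all its columns come from a single base table, which is the case here.
CREATE VIEW vTable1Load AS   -- hypothetical view; any name works
SELECT Version, Date, Code
FROM Table1;
GO
BULK INSERT vTable1Load
FROM 'C:\test.csv'
WITH (
FIELDTERMINATOR = ','
,ROWTERMINATOR = '\n'
);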
Related
I'm trying to load a CSV file into a table in order to sort the data. However, the smallmoney column BillingRate (e.g. $203.75) will not convert, with SQL Server producing the following message:
Msg 4864, Level 16, State 1, Line 10
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 2, column 4 (BillingRate).
Here is the code I am using in order to do this:
--CREATE TABLE SubData2
--(
--RecordID int,
--SubscriberID int,
--BillingMonth int,
--BillingRate smallmoney,
--Region varchar(255)
--);
BULK INSERT Subdata2
FROM 'C:\Folder\1-caseint.csv'
WITH
(FIRSTROW = 2,
FIELDTERMINATOR = '|', --CSV field delimiter
ROWTERMINATOR = '\n', --Use to shift the control to next row
ERRORFILE = 'C:\Folder\CaseErrorRows4.csv',
TABLOCK);
A typical line in the CSV file looks like:
1|0000000001|1|$233.94|"West"
Apologies if there are any glaring errors here - I'm new to SQL Server :)
Many thanks in advance,
Tom.
This is odd. On a direct insert only the last fails.
declare @t table (sm smallmoney);
insert into @t
values ('$256.6')
insert into @t
values (244.8);
insert into @t
values ('12.8');
insert into @t
values ('$256.5'), ('244.7');
insert into @t
values ($256.5);
insert into @t
values ('$256.5'), (244.7), ('12.12');
select * from @t;
Try removing the $ from the data.
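If editing the file is not practical, another option (just a sketch, using a hypothetical staging table named SubData2_Stage) is to bulk load BillingRate as plain text and strip the $ while copying the rows into SubData2:
CREATE TABLE SubData2_Stage
(
RecordID int,
SubscriberID int,
BillingMonth int,
BillingRate varchar(20),   -- text, so values like '$233.94' load without conversion
Region varchar(255)
);
BULK INSERT SubData2_Stage
FROM 'C:\Folder\1-caseint.csv'
WITH
(FIRSTROW = 2,
FIELDTERMINATOR = '|',
ROWTERMINATOR = '\n');
INSERT INTO SubData2
SELECT RecordID, SubscriberID, BillingMonth,
CAST(REPLACE(BillingRate, '$', '') AS smallmoney),   -- drop the currency symbol, then convert
Region
FROM SubData2_Stage;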
When I try this on SQL Server 2014:
INSERT INTO B_G049
SELECT B049_Z001, B049_Z002, ObjectId1, ObjectId7, 43099 AS value
FROM B_G049
WHERE value = 42734;
with decimal columns, I get an error:
Msg 8115, Level 16, State 8, Line 56
Arithmetic overflow error converting int to data type numeric.
I have never noticed such a problem when copying rows between tables with the same structure (in this case the same table is both input and output) using
INSERT INTO table1
SELECT (list of columns)
FROM table1
WHERE (condition)
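The message means an integer value being inserted does not fit the precision of the target numeric column, which can easily happen when INSERT ... SELECT has no column list and the selected values line up against the wrong columns. A minimal repro, assuming a decimal(4,1) target:
DECLARE @t TABLE (v decimal(4,1));   -- too little precision for a five-digit integer
INSERT INTO @t VALUES (43099);       -- Msg 8115: Arithmetic overflow error converting int to data type numeric.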
I'm trying to execute this code:
DECLARE @temp TABLE (Start datetime, Name nvarchar)
INSERT INTO @temp (Start, Name)
VALUES (GETDATE(), 'Callum')
SELECT * FROM @temp
and I get this error:
Msg 8152, Level 16, State 4, Line 3
String or binary data would be truncated. The statement has been terminated.
(0 row(s) affected)
I found out that I'm trying to fit too much data into the column, but I'm not sure how.
You need to specify the size of the nvarchar column, e.g. nvarchar(50). Declared without a length, nvarchar defaults to a single character, so 'Callum' no longer fits.
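Applied to the snippet above (a sketch; 50 is an arbitrary length, use whatever your data actually needs):
DECLARE @temp TABLE (Start datetime, Name nvarchar(50))   -- length stated explicitly
INSERT INTO @temp (Start, Name)
VALUES (GETDATE(), 'Callum')
SELECT * FROM @temp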
I have an excel file that I want to bulk insert into a temp table:
create table #tmptable
(
Date varchar(10),
Receipt varchar(50),
Description varchar(100),
[Card Member] varchar(50),
[Account #] varchar(17),
Amount varchar(20)
)
bulk insert #tmptable
from 'C:\Transactions\example.xls'
with (FieldTerminator='\t', RowTerminator = '\n')
go
This is my excel file (screenshot of the spreadsheet, not included here):
When executing the bulk statement, I get the following error:
Msg 4863, Level 16, State 1, Line 1
Bulk load data conversion error (truncation) for row 1, column 1 (Date).
Msg 4864, Level 16, State 1, Line 1
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 2, column 1 (Date).
I do not know why this happens.
Well, you are actually reading your headers: the first few rows of your xls contain headers/images rather than data, which is why you are getting a type mismatch error.
Find the row number of the first row where the data actually starts, then use this:
create table #tmptable
(
Date date,
Receipt varchar(50),
Description varchar(100),
[Card Member] varchar(50),
[Account #] varchar(17),
Amount varchar(20)
)
bulk insert #tmptable
from 'C:\Transactions\example.xls'
with (FieldTerminator='\t', RowTerminator = '\n', FirstRow = X)
go
where X is the row number at which the data actually starts, not the headers.
I need to insert a csv file into a SQL Server db. The problem is that I keep getting a
Bulk load data conversion error (type mismatch or invalid character for the specified codepage)
error. This is repeated 10 times, followed by:
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error.
The provider did not give any information about the error.
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
The specific column that errors is a bigint, primary key, not null field. I have made sure all 5000 records have a 4-digit number and there are no nulls. I am tired of fighting this and simply want to insert the date and datavalue fields into this table (CsResult). The problem is that SQL Server simply errors out, and I am thinking it wants values for ALL the fields.
Is there a way to insert data into only the columns I want?
Any ideas?
BULK INSERT [DB].[CsSchema].[CsResult]
FROM 'c:\june132012.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
This is what the data looks like:
3857,2011-05-19 04:00:00.000,Y,,82,N,N,,4,1,1,10,,1,,535/31706815
3858,2011-05-19 02:23:00.000,Y,,128,N,N,,4,1,1,10,,1,,535/31706815
Here is the DDL:
ResultID bigint notnull
ResultDate datetime,notnull
HasValue CHAR 1, notnull
Duration int, null
DataValue decimal 19,9, notnull
IsEdited CHAR 1, notnull
IsAnnotated CHAR 1, notnull
AddOnValue decimal 19,9, null
DataTypeID FK, int, notnull
ResultTypeID FK smallint, notnull
PatientID FK, int, notnull
DataSourceTypeID FK, smallint, notnull
MedicationID FK, int, null
DataDownloadID FK, int, null
SlotTypeID FK,smallint, null
DataSourceIdentity nvarchar 20, null
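One way to see exactly which rows and values are failing (a sketch only; the error-file path is arbitrary, and FIRSTROW = 2 applies only if the file begins with a header line) is to add ERRORFILE to the statement:
BULK INSERT [DB].[CsSchema].[CsResult]
FROM 'c:\june132012.csv'
WITH
(
FIELDTERMINATOR = ','
,ROWTERMINATOR = '\n'
,ERRORFILE = 'c:\june2012_load_errors.txt'   -- hypothetical path; rejected rows are written here
--,FIRSTROW = 2                               -- uncomment if the first line holds column names
);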