How to bulk insert into SQL Server from Excel by query

BULK INSERT dbo.bulkins
FROM 'C:\BulkDataFile.csv'
WITH
(FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n')
Error:
Msg 4860, Level 16, State 1, Line 2
Cannot bulk load. The file "C:\BulkDataFile.csv" does not exist.
How to fix it?

The error message seems crystal clear: "the file ... does not exist" ...
So it seems that this file you're trying to use to do your BULK INSERT just isn't there.
How to fix it? Simple: just put the file where you expect it to be, and run your code again.
And if this is a remote SQL Server, the file must be on the remote machine's C:\ drive - not your local PC's C:\ drive ...
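If you're not sure which machine the path is being resolved on, you can ask SQL Server itself. A minimal check, using the undocumented but widely used xp_fileexist extended procedure (treat it as a diagnostic, not a supported API):
-- Runs on the server: checks the path as the SQL Server process sees it,
-- not as your local machine sees it.
DECLARE @result INT;
EXEC master.dbo.xp_fileexist 'C:\BulkDataFile.csv', @result OUTPUT;
SELECT CASE @result WHEN 1 THEN 'File visible to SQL Server'
                    ELSE 'File NOT visible to SQL Server' END AS FileCheck;
If this reports the file as not visible while it opens fine on your own PC, you're looking at the remote-vs-local drive problem described above.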

Related

Bulk Insert SQL Server FROM network file

Can't seem to get this bulk insert to work. I know the SQL Server Windows account has appropriate permissions to this network folder - that has been verified.
The error I get is: Msg 12704, Level 16, State 1, Line 1 Bad or inaccessible location specified in external data source "(null)".
I can copy and paste the path into Windows Explorer and it opens the file, but maybe I am using the wrong syntax in the SQL statement?
Here is the SQL statement:
BULK INSERT Employee_be.dbo.Table_Cur_Test
FROM '\\SERVERPATH\Data\Test\CurTest.txt'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = ';\n'
)
GO
Any suggestions?
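One thing worth checking (a diagnostic sketch, not a confirmed fix): BULK INSERT opens the file under the SQL Server service account, or under your own Windows credentials only when Kerberos delegation is set up. So "the SQL Server windows account has permissions" has to mean the account the service actually runs as, which you can confirm from the standard sys.dm_server_services view:
-- It is this account (not your personal login, and not the account you used
-- to browse the share) that needs read access to \\SERVERPATH\Data\Test.
SELECT servicename, service_account
FROM sys.dm_server_services;
If the service runs as a built-in account such as NT Service\MSSQLSERVER, it reaches network shares as the machine account, which often lacks permission on the share and produces exactly this "bad or inaccessible location" error.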

SQL Server bulk insert fails, but file imports easily with import wizard

I have an R script that combines years of FFIEC Bank Call Report schedules into flat files--one for each schedule--then writes each schedule to a tab-delimited, non-quoted flat file suitable for bulk inserting into SQL Server. Then I run this bulk insert command:
bulk insert CI from 'e:\CI.txt' with (firstrow = 2, rowterminator = '0x0a', fieldterminator = '\t')
The bulk insert will run for a while then quit, with this error message:
Msg 7301, Level 16, State 2, Line 4
Cannot obtain the required interface ("IID_IColumnsInfo") from OLE DB provider "BULK" for linked server "(null)".
I've searched here for answers and the most common problem seems to be the rowterminator argument. I know that the files I've created have a line feed without a carriage return, so '0x0a' is the correct argument (but I tried '\n' and it didn't work).
Interestingly, I tried setting the fieldterminator to gibberish just to see what happened and I got the expected error message:
The bulk load failed. The column is too long in the data file for row 1, column 1.
So that tells me that SQL Server has access to the file and is indeed starting to insert it.
Also, I did a manual import (right-click on database, Tasks -> Import Data) and SQL Server swallowed up the file without a hitch. That tells me the layout of the table is fine, and so is the file?
Is it possible there's something at the end of the file that's confusing the bulk insert? I looked in a hex editor and it ends with data followed by 0A (the hex code for a line feed).
I'm stumped and open to any possibilities!
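One way to narrow this down (a sketch, not a confirmed fix) is to bisect the file with the standard FIRSTROW and LASTROW options until you find the first row the provider chokes on; the 50000 boundary below is just an illustration:
-- If the first half loads cleanly, the problem row is in the second half;
-- keep halving the FIRSTROW/LASTROW window to locate the offending row.
BULK INSERT CI
FROM 'e:\CI.txt'
WITH (
    FIRSTROW = 2,
    LASTROW = 50000,
    ROWTERMINATOR = '0x0a',
    FIELDTERMINATOR = '\t'
);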

How to fix pyodbc error with stored procedure execution

I'm setting up a new VM on a server to offload SQL Server database loading from my laptop. In doing so, I'd like to be able to execute stored procedures (no params, just 'exec storedprocedure') in my database via Python, but it's not working.
The stored procedure call worked when using sqlcmd via a batch file and in SSMS, but I'd like to make it all Python-based.
The stored procedure appends fact tables and follows the general format below:
--staging tbl drop and creation
if object_id('stagingtbl') is not null drop table stagingtbl
create table stagingtbl
(fields datatypes nullable
)
--staging tbl load
bulk insert stagingtbl
from 'c:\\filepath\\filename.csv'
with (
firstrow = 2
, rowterminator = '\n'
,fieldterminator = ','
, tablock /*don't know what tablock does but it works...*/
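-- (For reference: TABLOCK takes a table-level lock for the duration of the
-- load, which lets SQL Server use bulk-load optimizations such as minimal logging.)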
)
--staging table transformation
; with cte as (
/*ETL process to transform csv file into my tbl structure*/
)
--final table load
insert final_tbl
select * from cte
/*
T-SQL updates the final table's effect-to date, based on the subsequent row's effect-from date.
eg:
id, effectfromdate, effecttodate
1,1/1/19, 1/1/3000
1,1/10/19, 1/1/3000
becomes
id, effectfromdate, effecttodate
1,1/1/19, 1/10/19
1,1/10/19, 1/1/3000
*/
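For reference, the effect-to-date adjustment described in that comment can be sketched with the LEAD window function (assuming final_tbl really has those three columns; the names here are illustrative):
-- LEAD picks up the next row's effectfromdate per id; rows with no
-- successor keep their open-ended 1/1/3000 effecttodate.
;WITH ordered AS (
    SELECT id, effectfromdate, effecttodate,
           LEAD(effectfromdate) OVER (PARTITION BY id
                                      ORDER BY effectfromdate) AS next_from
    FROM final_tbl
)
UPDATE ordered
SET effecttodate = next_from
WHERE next_from IS NOT NULL;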
The stored procedure works fine with sqlcmd and in SSMS, but when I execute the query 'exec storedprocedure' from Python (pyodbc), I get the error message:
pyodbc.ProgrammingError: ('42000', '[42000] [Microsoft][SQL Server Native Client 11.0][SQL Server]
Cannot bulk load because the file "c:\filepath\filename.csv" could not be opened.
Operating system error code 3(The system cannot find the path specified.). (4861) (SQLExecDirectW)')
Yet the csv file is there, there are no misspellings in the path or filename, I can open the csv by double-clicking it, and no one has it open.
With continued experimentation I've established that the problem is not with Python or pyodbc. In SSMS on my laptop (the host machine of the db) the stored procedures work just fine, but in SSMS on the VM the stored procedures cause the same error. This told me that my question wasn't aimed at the root problem and that I had more digging to do. The error (in SSMS) is below.
Msg 4861, Level 16, State 1, Procedure Append_People, Line 71 [Batch Start Line 0]
Cannot bulk load because the file "N:\path\filename.csv" could not be opened. Operating system error code 3(The system cannot find the path specified.).
Once I established that the problem shows up in SSMS itself, I broadened my search and discovered that the path in the BULK INSERT command has to be relative to the machine hosting the database. So on the VM (the client machine until I migrate the db), when I use the path c:\ thinking it's the VM's c:\ drive, the stored procedure is actually looking at the c:\ of my laptop, since that is the host machine. With that I also learned that on a shared drive (N:\) access is delegated, which is its own issue (https://dba.stackexchange.com/questions/44524/bulk-insert-through-network).
So I'm going to focus on migrating the database first, and that will solve my problem. Thanks to all who tried to help.
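For anyone hitting the same wall before they can migrate: a common workaround is to point the BULK INSERT at a UNC path that the host machine can resolve. A sketch, assuming a hypothetical share \\VMNAME\loads that the SQL Server service account on the host can read:
-- \\VMNAME\loads is an illustrative share name; UNC paths are resolved by
-- the server hosting the database, not by the client running the query.
BULK INSERT stagingtbl
FROM '\\VMNAME\loads\filename.csv'
WITH (
    FIRSTROW = 2,
    ROWTERMINATOR = '\n',
    FIELDTERMINATOR = ','
);
The delegation caveat from the linked dba.stackexchange question still applies when Windows authentication is in the mix.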

SQL Server error "5 (Access is denied.)" while trying to read a trace file

I would like to read the content of a trace file and write it into a table in SQL Server. As I have read, fn_trace_gettable does this job. I have this code:
select
IDENTITY(int, 1, 1) AS RowNumber, *
into
mytracetest
from
fn_trace_gettable('C:\Users\Babak\Desktop\ITSM_Trace\trace.trc', default)
But I am getting this error:
Msg 19049, Level 16, State 1, Line 1
File 'C:\Users\Babak\Desktop\ITSM_Trace\trace.trc' either does not exist or there was an error opening the file. Error = '5(Access is denied.)'.
What should I do to solve this problem?
Is this for a remote SQL Server instance?
Is that trace.trc on that server's file system in c:\users\babak\desktop\....?
SQL Server can only read from its own drives - not from the local disks on your own computer ...
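To see what a server-local path looks like from the instance's point of view, the standard sys.traces catalog view lists the trace files the server itself knows about:
-- Paths returned here are on the server's own file system; the default
-- trace, for instance, lives under the instance's LOG directory.
SELECT id, path, is_default
FROM sys.traces;
-- A path taken from sys.traces is server-local by definition, so:
-- SELECT * FROM fn_trace_gettable('<path from sys.traces>', default);
Copying trace.trc to a folder the SQL Server service account can read (or granting that account read access to the desktop folder) should clear the "Access is denied" error.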

Bulk Upload: "unexpected end of file" on new server

I am trying to do a bulk upload into one table in our SQL database. This query was running fine before, when we had the database on a different server, but now on the new server I am getting an error.
Here is all I have:
SQL bulk import query:
BULK INSERT NewProducts
FROM 'c:\newproducts.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
And the errors I am getting are:
Msg 4832, Level 16, State 1, Line 1
Bulk load: An unexpected end of file was encountered in the data file.
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
Thanks for any help in advance.
For anybody else who comes across this question looking for an answer: this error also happens when the number of columns in your CSV file doesn't match the columns of the table you're doing the bulk insert into.
I've encountered this before and there are a few things to look for:
Make sure that your csv file doesn't have any blank rows at the top.
Make sure that there are no additional blank rows at the end of the file.
Make sure that the ROWTERMINATOR is actually \n and not \r\n
If you do all three of these and are still getting the error let me know.
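If the row terminator turns out to be the issue, a quick test (a sketch; adjust to the actual file) is to spell out the Windows-style ending explicitly, either as '\r\n' or as its hex form:
-- '0x0d0a' is the hex equivalent of '\r\n'; hex terminators are sometimes
-- more reliable than escape sequences for files produced on other systems.
BULK INSERT NewProducts
FROM 'c:\newproducts.csv'
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '0x0d0a'
)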
In my case the file I was trying to access was in a directory that the SQL Server process did not have access to. I moved my flat files to a directory SQL Server had access to and this error was resolved.
