SQL Server SSMS Bulk Insert Access denied

I'm trying to use the following code to do a bulk insert:
BULK INSERT [xxx].[xxx].[xxx]
FROM 'E:\xxx\xxx.csv'
WITH (
FIRSTROW = 2,
FIELDTERMINATOR = ',', --CSV field delimiter
ROWTERMINATOR = '\n', --Use to shift the control to next row
TABLOCK
)
But it reports an error:
Cannot bulk load because the file "E:\xxx\xxx.csv" could not be opened. Operating system error code (null).
Some posts suggest it's because my SSMS 2017 uses some sort of dummy account which doesn't have permission to access the file on the shared drive.
I've tried running SSMS 2017 as admin, but it didn't work.
My question is: how do I create a domain account for SSMS 2017 so it can bulk insert? (A step-by-step guide is preferred.)
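Worth knowing before creating any accounts: BULK INSERT opens the file under the SQL Server service account (or, with Windows authentication, the delegated login), not under the account running SSMS, which is why running SSMS as admin doesn't help. A minimal diagnostic sketch, assuming you have VIEW SERVER STATE permission, to find which account needs read access to the share:

-- Shows the Windows account each SQL Server service runs as;
-- grant that account read permission on the folder or share.
SELECT servicename, service_account
FROM sys.dm_server_services;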

Related

Bulk Insert SQL Server FROM network file

Can't seem to get this bulk insert to work. I know the SQL Server Windows account has appropriate permissions to this network folder; that has been verified.
The error I get is: Msg 12704, Level 16, State 1, Line 1 Bad or inaccessible location specified in external data source "(null)".
I can copy and paste the path into Windows Explorer and it opens the file, but maybe I am using the wrong syntax in the SQL statement?
Here is the SQL statement:
BULK INSERT Employee_be.dbo.Table_Cur_Test
FROM '\\SERVERPATH\Data\Test\CurTest.txt'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = ';\n'
)
GO
Any suggestions?
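One way to separate a syntax problem from a permissions problem is to ask SQL Server itself whether it can see the file. A quick sketch using xp_fileexist (undocumented but long-standing); note it checks the path as the SQL Server service account, not as your own login:

-- First column of the result ("File Exists") is 1 if the
-- service account can reach the file at that UNC path.
EXEC master.dbo.xp_fileexist '\\SERVERPATH\Data\Test\CurTest.txt';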

SQL Server bulk insert fails, but file imports easily with import wizard

I have an R script that combines years of FFIEC Bank Call Report schedules into flat files (one for each schedule), then writes each schedule to a tab-delimited, non-quoted flat file suitable for bulk inserting into SQL Server. Then I run this bulk insert command:
bulk insert CI from 'e:\CI.txt' with (firstrow = 2, rowterminator = '0x0a', fieldterminator = '\t')
The bulk insert will run for a while then quit, with this error message:
Msg 7301, Level 16, State 2, Line 4
Cannot obtain the required interface ("IID_IColumnsInfo") from OLE DB provider "BULK" for linked server "(null)".
I've searched here for answers and the most common problem seems to be the rowterminator argument. I know that the files I've created have a line feed without a carriage return, so '0x0a' is the correct argument (but I tried '\n' and it didn't work).
Interestingly, I tried setting the fieldterminator to gibberish just to see what happened and I got the expected error message:
The bulk load failed. The column is too long in the data file for row 1, column 1.
So that tells me that SQL Server has access to the file and is indeed starting to insert it.
Also, I did a manual import (right-click on the database, Tasks -> Import Data) and SQL Server swallowed up the file without a hitch. That tells me the layout of the table is fine, and so is the file?
Is it possible there's something at the end of the file that's confusing the bulk insert? I looked in a hex editor and it ends with data followed by 0A (the hex code for a line feed).
I'm stumped and open to any possibilities!
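If the load genuinely starts and then dies partway through, one way to find the offending rows is to let BULK INSERT keep going and dump the rejects to a side file. A hedged sketch of the same command with two extra options; the error-file path is illustrative and must not already exist:

bulk insert CI
from 'e:\CI.txt'
with (
    firstrow = 2,
    rowterminator = '0x0a',
    fieldterminator = '\t',
    errorfile = 'e:\CI_errors.txt',  -- rejected rows land here (hypothetical path)
    maxerrors = 10                   -- tolerate a few bad rows instead of aborting
);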

How to Export Data From SQL Server to a CSV File Using a SQL Statement

What I am trying to do is use a query to insert data into a CSV file, with headers. I know this is not the right way; I also tried using "bcp", but it was skipping the headers as well as putting everything in one column:
bulk insert "C:\New folder\s.csv"
from [D].[user]
with (fieldterminator = ',', rowterminator = '\n')
go
The source should be a SQL query.
The output should be a .CSV file with headers.
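For reference, BULK INSERT only goes in one direction (file into table), so the statement above can't work as written. A common workaround is bcp's queryout mode, sketched here via xp_cmdshell; this assumes xp_cmdshell is enabled, and the column names [name] and [email] are hypothetical stand-ins for the real ones. The UNION ALL row supplies the header line (non-character columns would need a CAST to varchar, and strictly speaking header-first ordering should be forced with an ORDER BY trick):

-- -c = character mode, -t, = comma field terminator,
-- -T = trusted connection, -S = server name
DECLARE @cmd VARCHAR(4000) =
    'bcp "SELECT ''name'', ''email'' UNION ALL '
  + 'SELECT [name], [email] FROM [D].[dbo].[user]" '
  + 'queryout "C:\New folder\s.csv" -c -t, -T -S localhost';
EXEC master.dbo.xp_cmdshell @cmd;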

How to fix pyodbc error with stored procedure execution

I'm setting up a new VM on a server to offload SQL Server database loading from my laptop. In doing so, I'd like to be able to execute stored procedures (no params, just 'exec storedprocedure') in my database via Python, but it's not working.
The stored procedure call worked when using sqlcmd via a batch file and in SSMS, but I'd like to make it all Python-based.
The stored procedure appends fact tables and follows the general format below:
--staging tbl drop and creation
if object_id('stagingtbl') is not null drop table stagingtbl
create table stagingtbl
(fields datatypes nullable
)
--staging tbl load
bulk insert stagingtbl
from 'c:\\filepath\\filename.csv'
with (
firstrow = 2
, rowterminator = '\n'
, fieldterminator = ','
, tablock /*takes a table-level lock, which lets the bulk load be minimally logged*/
)
--staging table transformation
; with cte as (
/*ETL process to transform csv file into my tbl structure*/
)
--final table load
insert final_tbl
select * from cte
/*
T-SQL updates the final table's effecttodate, based on the subsequent row's effectfromdate.
eg:
id, effectfromdate, effecttodate
1,1/1/19, 1/1/3000
1,1/10/19, 1/1/3000
becomes
id, effectfromdate, effecttodate
1,1/1/19, 1/10/19
1,1/10/19, 1/1/3000
*/
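A hedged sketch of that effect-date update using LEAD() (SQL Server 2012+), taking the table and column names from the comment above (final_tbl is the question's placeholder): each row's effecttodate becomes the next row's effectfromdate for the same id.

-- Rows with no later row keep their open-ended 1/1/3000 date.
;WITH ordered AS (
    SELECT id, effectfromdate, effecttodate,
           LEAD(effectfromdate) OVER (PARTITION BY id
                                      ORDER BY effectfromdate) AS next_from
    FROM final_tbl
)
UPDATE ordered
SET effecttodate = next_from
WHERE next_from IS NOT NULL;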
The stored procedure works fine with sqlcmd and in SSMS, but in Python (pyodbc), executing the query 'exec storedprocedure' gives the error message:
pyodbc.ProgrammingError: ('42000', '[42000] [Microsoft][SQL Server Native Client 11.0][SQL Server]
Cannot bulk load because the file "c:\filepath\filename.csv" could not be opened.
Operating system error code 3(The system cannot find the path specified.). (4861) (SQLExecDirectW)')
Yet the csv file is there, there are no misspellings in the path or filename, I can open the csv by double-clicking on it, and no one has the csv open.
With continued experimentation I've established that the problem is not with Python or pyodbc. In SSMS on my laptop (the host machine of the db) the stored procedures work just fine, but in SSMS on the VM the stored procedures cause the same error. This tells me that my question isn't the root problem and I have more digging to do. The error (in SSMS) is below.
Msg 4861, Level 16, State 1, Procedure Append_People, Line 71 [Batch Start Line 0]
Cannot bulk load because the file "N:\path\filename.csv" could not be opened. Operating system error code 3(The system cannot find the path specified.).
Once I established that the problem is in SSMS, I broadened my search and discovered that the path in the BULK INSERT command has to be relative to the machine hosting the database. So on the VM (the client machine, until I migrate the db), when I use the path c:\ thinking it's the VM's c:\ drive, the stored procedure is actually looking at the c:\ of my laptop, since that's the host machine. With that I also learned that on a shared drive (N:\) access is delegated, and that is its own issue (https://dba.stackexchange.com/questions/44524/bulk-insert-through-network).
So I'm going to focus on migrating the database first; that'll solve my problem. Thanks to all who tried to help.
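For anyone who can't migrate right away, a sketch of the workaround implied above: since the path is resolved on the machine hosting SQL Server, point BULK INSERT at a UNC path the service account can reach instead of a client-local drive letter. The share name here is hypothetical, and the delegation caveats from the linked dba.stackexchange question still apply to remote shares:

bulk insert stagingtbl
from '\\laptop-hostname\share\filename.csv'  -- resolved by the host, not the client
with (
    firstrow = 2
    , rowterminator = '\n'
    , fieldterminator = ','
    , tablock
);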

Export an entire sheet from Excel to SQL server

I'm trying to create a query to export a sheet from Excel to SQL Server. I came up with this query, yet I'm getting the error: Invalid object name 'Sheet1$'.
How can I select from the sheet: "Sheet1"?
s = "INSERT INTO TestTable SELECT * FROM [Sheet1$] "
cn.Execute s
In your case, I guess SQL Server doesn't have access to the Sheet1 file.
Check how to make the file accessible, or what could be preventing SQL Server from locating your file.
There are two ways that I know of to achieve this.
1.
BULK INSERT TestTable
FROM 'C:\CSVData\sheet1.xls'
WITH
(
FIRSTROW = 2,
FIELDTERMINATOR = ',', --CSV field delimiter
ROWTERMINATOR = '\n', --Use to shift the control to next row
ERRORFILE = 'C:\CSVDATA\SchoolsErrorRows.txt',
TABLOCK
)
But make sure SQL Server has access to the folder containing the Excel file, and that you have bulk import rights.
2. You could also use the SQL Server Import Wizard (right-click the database, then Tasks -> Import Data).
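One more hedged alternative for option 1: BULK INSERT treats the file as flat text, so a real .xls/.xlsx workbook is usually read with OPENROWSET and the ACE OLE DB provider instead (or save the sheet as .csv first if you want to keep the BULK INSERT above). This assumes the provider is installed on the server and 'Ad Hoc Distributed Queries' is enabled; the path is illustrative.

-- HDR=YES treats the first spreadsheet row as column headers.
INSERT INTO TestTable
SELECT *
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'Excel 12.0;Database=C:\CSVData\sheet1.xlsx;HDR=YES',
                'SELECT * FROM [Sheet1$]');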
