We have a post-deployment script in our SQL Server project that essentially performs a bulk insert to populate tables after they're created. This is done by reading several .csv files:
BULK INSERT [dbo].[Table1]
FROM '.\SubFolder\TestData\Data1.csv'
WITH
(
    ROWTERMINATOR = '0x0a',
    FIELDTERMINATOR = ','
)

BULK INSERT [dbo].[Table2]
FROM '.\SubFolder\TestData\Data2.csv'
WITH
(
    ROWTERMINATOR = '0x0a',
    FIELDTERMINATOR = ','
)
The problem is Visual Studio is having a hard time finding the files:
Cannot bulk load because the file ".\SubFolder\TestData\Data1.csv" could not be opened.
Operating system error code 3(The system cannot find the path specified.).
The .csv files are checked into source control, and I do see them when I go to the folder they're mapped to on my machine. I assume the problem is that . doesn't resolve to the path of the sql file being executed. Is there a way to get the relative path? Is there a macro (or maybe a SQLCMD variable) that would give me the current path of the file?
The problem you have is that the .csv files are in your VS project, but the script will be executed on the SQL Server, so the files need to be in a location the server can access. You could add a Pre-Build event that copies the .csv files to a shared drive the server can reach, and then use a static path in your script that takes the files from that shared location.
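For instance (the server and share names here are illustrative), the post-deployment script would then reference a fixed UNC path that the server can resolve:

BULK INSERT [dbo].[Table1]
FROM '\\SqlServerHost\DeployData\TestData\Data1.csv'
WITH
(
    ROWTERMINATOR = '0x0a',
    FIELDTERMINATOR = ','
)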
I know this question is very old, but it's still relevant. I found a working solution to this problem under the following conditions (which are optional, because there are ways to work around them):
You will use the "Publish" option within the Visual Studio IDE to deploy your app.
The .csv file is part of your project and is configured to be copied to the output folder.
Here are the steps:
Open the project properties and go to the SQLCMD Variables section.
Add a new variable (for example, $(CurrentPath)).
In Default Value, put: $(ProjectDir)$(OutputPath)
Change your BULK INSERT code to:
BULK INSERT [dbo].[Table1]
FROM '$(CurrentPath)\PathToFolderInsideOutputDirectory\Data1.csv'
WITH
(
    ROWTERMINATOR = '0x0a',
    FIELDTERMINATOR = ','
)
Save all and compile.
Test your deployment using Publish: ensure the $(CurrentPath) variable shows the right path (or press the "Load Values" button), then press the Publish button. Everything should work.
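If you want to test the same script outside the publish dialog (for example in SSMS with SQLCMD mode enabled), you can supply the variable yourself with :setvar; the value below is illustrative:

:setvar CurrentPath "C:\Projects\MyDb\bin\Debug"

BULK INSERT [dbo].[Table1]
FROM '$(CurrentPath)\PathToFolderInsideOutputDirectory\Data1.csv'
WITH
(
    ROWTERMINATOR = '0x0a',
    FIELDTERMINATOR = ','
)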
You can create an SSIS package and use a Foreach Loop Container to loop through all the .csv files in a given path: set the container's enumerator to the Foreach File Enumerator, point it at the folder with a *.csv file mask, and map each file name to a variable that the data flow uses.
I'm setting up a new VM on a server to offload SQL Server database loading from my laptop. In doing so, I'd like to be able to execute stored procedures (no params, just 'exec storedprocedure') in my database via Python, but it's not working.
The stored procedure call worked when using sqlcmd via a batch file and in SSMS, but I'd like to make it all Python based.
The stored procedure, which appends fact tables, follows the general format below:
--staging tbl drop and creation
if object_id('stagingtbl') is not null drop table stagingtbl
create table stagingtbl
(fields datatypes nullable
)

--staging tbl load
bulk insert stagingtbl
from 'c:\filepath\filename.csv'
with (
    firstrow = 2
    , rowterminator = '\n'
    , fieldterminator = ','
    , tablock /*don't know what tablock does but it works...*/
)
--staging table transformation
; with cte as (
/*ETL process to transform csv file into my tbl structure*/
)
--final table load
insert final_tbl
select * from cte
/*
T-SQL then updates the final table's effect-to date, based on the subsequent row's effect-from date.
eg:
id, effectfromdate, effecttodate
1,1/1/19, 1/1/3000
1,1/10/19, 1/1/3000
becomes
id, effectfromdate, effecttodate
1,1/1/19, 1/10/19
1,1/10/19, 1/1/3000
*/
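As an aside, one way to write that effect-to-date update is with the LEAD window function. This is only a sketch: it assumes SQL Server 2012 or later, and the table and column names follow the comment above.

;with ordered as (
    select id, effectfromdate, effecttodate,
        lead(effectfromdate) over (partition by id order by effectfromdate) as next_from
    from final_tbl
)
update ordered
set effecttodate = next_from
where next_from is not null;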
The stored procedure works fine with sqlcmd and in SSMS, but in Python (pyodbc), executing the query 'exec storedprocedure' gives me the error message:
pyodbc.ProgrammingError: ('42000', '[42000] [Microsoft][SQL Server Native Client 11.0][SQL Server]
Cannot bulk load because the file "c:\filepath\filename.csv" could not be opened.
Operating system error code 3(The system cannot find the path specified.). (4861) (SQLExecDirectW)')
The csv file is there, there are no misspellings in the path or filename, I can open the csv by double-clicking it, and no one has it open.
With continued experimentation I've established that the problem is not with Python or pyodbc. In SSMS on my laptop (the host machine of the db) the stored procedures work just fine, but in SSMS on the VM they cause the same error. This tells me that my question wasn't the root problem and I had more digging to do. The error (in SSMS) is below.
Msg 4861, Level 16, State 1, Procedure Append_People, Line 71 [Batch Start Line 0]
Cannot bulk load because the file "N:\path\filename.csv" could not be opened. Operating system error code 3(The system cannot find the path specified.).
Once I established that the problem was in SSMS, I broadened my search and discovered that the path in a BULK INSERT command is resolved on the machine hosting the database. So on the VM (the client machine, until I migrate the db), when I use the path c:\ thinking it's the VM's c:\ drive, the stored procedure is actually looking at the c:\ of my laptop, since that's the host machine. With that I also learned that on a shared drive (N:\) access is delegated, and that is its own issue (https://dba.stackexchange.com/questions/44524/bulk-insert-through-network).
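In other words, a BULK INSERT path has to make sense on the database host. A sketch (the server and share names are illustrative) of the same load pointed at a share the host can reach:

bulk insert stagingtbl
from '\\laptop-host\shareddata\filename.csv'
with (
    firstrow = 2
    , rowterminator = '\n'
    , fieldterminator = ','
    , tablock
)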
So I'm going to focus on migrating the database first; that will solve my problem. Thanks to all who tried to help.
I know it's possible to do a bulk insert from a file like this:
strSQL = "BULK INSERT Northwind.dbo.[Order Details] " & _
         "FROM 'e:\My Documents\TextFiles\OrderDetails.txt' " & _
         "WITH ( FIELDTERMINATOR = ',', ROWTERMINATOR = '\n' )"
But I can't seem to find a way to insert an object that's in memory instead. Is this possible?
The file must be visible from the SQL Server itself, and the server's service account must have access rights to it:
From http://technet.microsoft.com/en-us/library/ms188365.aspx
BULK INSERT can import data from a disk (including network, floppy disk, hard disk, and so on). 'data_file' must specify a valid path from the server on which SQL Server is running. If data_file is a remote file, specify the Universal Naming Convention (UNC) name. A UNC name has the form \\Systemname\ShareName\Path\FileName. For example, \\SystemX\DiskZ\Sales\update.txt.
You can populate a table variable (or a temporary table) from memory, then use a MERGE statement to process it into the target table.
T-SQL Merge statement docs here http://msdn.microsoft.com/en-us/library/bb510625.aspx
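A minimal sketch of that idea against the [Order Details] table from the question; the table variable's columns are illustrative, and populating it from the in-memory object is left as the application's job:

DECLARE @incoming TABLE (OrderID int, ProductID int, Quantity smallint);

-- ...populate @incoming from the in-memory data here...

MERGE Northwind.dbo.[Order Details] AS target
USING @incoming AS source
    ON target.OrderID = source.OrderID
   AND target.ProductID = source.ProductID
WHEN MATCHED THEN
    UPDATE SET Quantity = source.Quantity
WHEN NOT MATCHED THEN
    INSERT (OrderID, ProductID, Quantity)
    VALUES (source.OrderID, source.ProductID, source.Quantity);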
The solution ended up being to build a COM object in C# that does the bulk insert and then leveraging that COM object in the VB6 project.
I think my question is simple:
How can I find out where my query is running from (i.e., where is the location of the script file itself)?
Edit:
Thank you for your answer.
I need to import an XML file using my T-SQL script file, and I want to keep the two together, so wherever someone tries to run the T-SQL script file, it must know its own current directory in order to find the XML file and import it. Thanks again!
You need a well-known location where you can place XML files for the server to load. This could be a share on the SQL Server machine, or on a file server that the SQL Server service account has permission to read from.
You then need a comment like this at the top of your script:
--Make sure you've placed the accompanying XML file on \\RemoteMachine\UploadShare
--Otherwise, expect this script to produce errors
Change \\RemoteMachine\UploadShare to match the well-known location you've selected. Optionally, have the comment followed by 30-40 blank lines (or more comments), so that it's obvious to anyone running the script that they might need to read what's there.
Then, write the rest of your script based on that presumption.
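The script itself can then load the file from that location. A sketch (the XML file and table names are illustrative) using OPENROWSET, which reads the file on the server side:

INSERT INTO MyXmlTable (XmlData)
SELECT CAST(BulkColumn AS xml)
FROM OPENROWSET(BULK '\\RemoteMachine\UploadShare\data.xml', SINGLE_BLOB) AS x;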
I found a simpler solution to my problem!
I just import my XML file into a temp table once.
Then I write a SELECT query against that temp table that generates an INSERT command from my imported data, like this:
SELECT 'INSERT INTO MyTable VALUES (' + Col1 + ', ' + Col2 + ')' FROM MyImportedTable
Now I have an INSERT command for each of my imported records.
I save all of the INSERT commands in my script, so I only need my script file wherever I go.
I have a sql command as follows:
INSERT [dbo].[Currency] ([CurrencyID], [Description], [Symbol])
VALUES (N'7418fe34-1abc-4189-b5f1-e638a34af1a1', N'GBP', N'£')
When I run this against the database, it inserts the last column as 'Â£' rather than '£'. I have come across this before but can't for the life of me remember how to fix it!
Any ideas?
Thanks.
UPDATE
Funnily enough, if I copy and paste that line from my sql file into SQL Server Management Studio, it inserts fine. So I think there is something wrong with my sql file, possibly a character in it that I can't see?
UPDATE
The sql script has the following to insert the euro symbol:
INSERT [dbo].[Currency] ([CurrencyID], [Description], [Symbol])
VALUES (N'c60b1e0c-289a-4a0a-8c7d-30a490cbb7a8', N'EUR', N'€')
And it outputs "â‚¬" in the database for the last column
UPDATE
OK, I have now copied and pasted my full sql file into SQL Server Management Studio and run it, and it inserts everything fine. So why does this issue arise only when I run my ".sql" file?
UPDATE
Another update! If I view the ".sql" file in Visual Studio it looks fine; however, if I open it in Notepad, the bogus characters appear!
(From the comments)
The file is saved as UTF-8, but sqlcmd is reading it using the wrong code page. Adding -f 65001 to the options tells sqlcmd to read it as a UTF-8 file.
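For example, if the script is run with sqlcmd (the server, database, and file names here are illustrative), the extra flag is all that changes:

sqlcmd -S MyServer -d MyDatabase -i currencies.sql -f 65001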
Is there a way to get a file from the Windows XP command prompt? I tried to run xp_cmdshell 'type [path to file]', but when I insert that data into another file and rename it to file.exe (which is executable), it does not work. Any suggestions on how to get the file contents in a way that I can use?
You could use BULK INSERT on the file and treat it as a table with one row and one column. This should allow you to read the file directly into a VARBINARY field, like this:
CREATE TABLE FileRead
(
    content VARBINARY(MAX)
)

BULK INSERT FileRead FROM '[FilePath]'  -- replace [FilePath] with the actual quoted path
This requires SQL Server to have access to the file you are trying to read. It sounds like you are trying to "acquire" executables from a server you do not have access to? :-)
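A variation on the same idea is OPENROWSET with the SINGLE_BLOB option, which reads the entire file as a single VARBINARY(MAX) value and so avoids any row or field terminator handling (the path is illustrative):

SELECT BulkColumn AS content
FROM OPENROWSET(BULK 'C:\files\file.exe', SINGLE_BLOB) AS f;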