Is there a way to bulk load data from Azure Blob to SQL Server? I'm accessing the files through Azure Storage Explorer. I'm pretty sure it can be done, but I'm getting an error that says the 'file could not be opened'. Here is my code.
DECLARE @cmd varchar(1000)
SET @cmd = 'BULK INSERT [dbo].[dest_table]
FROM ''alldata/2019/06/29/BB/dds_id.out.20190629.gz''
WITH ( FIELDTERMINATOR = ''\n'',
       FIRSTROW = 46,
       ROWTERMINATOR = ''' + CHAR(10) + ''')';
PRINT @cmd
EXEC(@cmd)
The file ends in .gz, so it's compressed. I think that's the problem here. Can someone please confirm? More importantly, is there a workaround for this? All I have is SQL Server; no access to SSIS.
The problem is that the compressed file is not a data file.
The documentation for BULK INSERT (Transact-SQL) says:
Imports a data file into a database table or view in a user-specified format in SQL Server.
And of the data_file argument:
Is the full path of the data file that contains data to import into the specified table or view. BULK INSERT can import data from a disk (including network, floppy disk, hard disk, and so on).
One way is to uncompress the .gz file first, and then use BULK INSERT to load the uncompressed data file into SQL Server.
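If you put the uncompressed copy back into Blob Storage, SQL Server 2017+ and Azure SQL can bulk load it from there directly through an external data source. A minimal sketch, assuming a SAS token and placeholder names (BlobCredential, MyBlobStorage):
-- Sketch only: BlobCredential and MyBlobStorage are placeholder names,
-- and the file referenced must be the *decompressed* copy.
CREATE DATABASE SCOPED CREDENTIAL BlobCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS token, without the leading ?>';

CREATE EXTERNAL DATA SOURCE MyBlobStorage
WITH ( TYPE = BLOB_STORAGE,
       LOCATION = 'https://<storageaccount>.blob.core.windows.net/alldata',
       CREDENTIAL = BlobCredential );

BULK INSERT [dbo].[dest_table]
FROM '2019/06/29/BB/dds_id.out.20190629'  -- path relative to the container
WITH ( DATA_SOURCE = 'MyBlobStorage',
       FIRSTROW = 46,
       ROWTERMINATOR = '0x0a' );
Note that a database master key must already exist before the scoped credential can be created.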
Another way is to use Azure Data Factory, which can uncompress the .gz file for you in the dataset settings.
See this tutorial: Copy data from Azure Blob storage to a SQL database by using Azure Data Factory.
Hope this helps.
Related
I'm trying to import a very big .CSV file (about 2 GB) as a table into a database in SQL Server (through SQL Server Management Studio).
I tried the "Import Flat File..." wizard as well as BULK INSERT in the query console; both take an extremely long time, and I either stopped the execution or it failed by itself at the end.
BULK INSERT [table name]
FROM 'Path of the CSV'
WITH (FIRSTROW = 1,
FIELDTERMINATOR = ',',
ROWTERMINATOR = '0x0A')
Maybe I'm doing something wrong?
I couldn't find any solution except splitting the data into smaller files but I wanted to avoid doing that.
Thank you.
I am trying to import data from a local CSV file into an Azure database. The idea is to allow the customer to do bulk inserts into the system using a preformatted CSV file.
The code I am using is:
BULK INSERT tmp_Import_Truck
FROM 'C:\ImportFrom\ImportData.csv'
WITH
(
FIELDTERMINATOR =',',
ROWTERMINATOR = '\n'
)
The problem is that I am getting an error saying it cannot open the file:
Msg 4861, Level 16, State 1, Line 1 Cannot bulk load because the file
"C:\ImportFrom\ImportData.csv" could not be opened. Operating system
error code (null).
How do I resolve this issue?
You can't use BULK INSERT in Azure SQL Database with a local file name, because the file has to be readable from the machine where SQL Server itself runs.
You can, however, use the BCP utility, executed on your own computer, to bulk copy the file to Azure SQL (-c -t, tells bcp to treat the file as comma-separated character data):
bcp database.dbo.table in C:\ImportFrom\ImportData.csv -c -t,
-S yourserver.database.windows.net
-U someusername -P strongpassword
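Alternatively, if you can upload the CSV to Azure Blob Storage first, Azure SQL Database can bulk load it from there directly. A hedged sketch, assuming an external data source of TYPE = BLOB_STORAGE named MyBlobStorage has already been created (see the CREATE EXTERNAL DATA SOURCE example earlier on this page):
-- Sketch only: 'MyBlobStorage' is a placeholder external data source.
BULK INSERT tmp_Import_Truck
FROM 'ImportData.csv'  -- path relative to the data source's container
WITH ( DATA_SOURCE = 'MyBlobStorage',
       FIELDTERMINATOR = ',',
       ROWTERMINATOR = '\n' );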
I was shocked when I learned that importing Excel data into a SQL database using OPENROWSET has a downside: it truncates cell values to 255 characters before passing them to the database. I'm now thinking of using xp_cmdshell to read the Excel file's data and transfer it to the database, but I'm clueless about how to do that. Could somebody help me achieve this?
Yes, BCP can be used to import data from Excel (.xlsx) files into SQL Server tables. The only thing to remember here, from the MS documentation:
Prerequisite - Save Excel data as text. To use the rest of the methods described on this page - the BULK INSERT statement, the BCP tool, or Azure Data Factory - first you have to export your Excel data to a text file.
In Excel, select File | Save As and then select Text (Tab delimited) (.txt) or CSV (Comma delimited) (.csv) as the destination file type.
A sample BCP command to import data from an Excel file (converted to tab-delimited text) into a SQL Server table:
bcp.exe "MyDB_Copy.dbo.Product" in "C:\Users\abhishek\Documents\BCPSample.txt" -c -t"\t" -r"\n" -S ABHISHEK-HP -T -h TABLOCK
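Since the question asks specifically about xp_cmdshell: the same bcp command can also be run from T-SQL. A rough sketch, assuming you are allowed to enable xp_cmdshell on the server (it has security implications):
-- Enable xp_cmdshell (requires the 'show advanced options' step first).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;

-- Run the same bcp import from T-SQL.
EXEC xp_cmdshell 'bcp.exe "MyDB_Copy.dbo.Product" in "C:\Users\abhishek\Documents\BCPSample.txt" -c -t"\t" -r"\n" -S ABHISHEK-HP -T -h TABLOCK';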
Read more about BCP and import here.
Basically what the title is saying.
Today, when I need to bulk insert data from a local drive with T-SQL on Microsoft SQL Server 2008, I use the following query:
BULK INSERT tmp_table
FROM 'c:\data\x.csv'
WITH ( FIELDTERMINATOR = ',', ROWTERMINATOR = '0x0a' );
That works OK. But now I want to be able to read .csv data files from a folder on an FTP server.
As with every other FTP server, I need a username/password to log in first, and then I somehow need to fetch the files. Is this possible with T-SQL?
I do know that with C# this would be a piece of cake for me, but I want to learn how to do it with T-SQL.
I will also need to know how I can dynamically get the names of files from a given folder on the FTP server, but since I'm taking this one step at a time, you don't need to answer this right away.
When the files are on a local drive, I am able to use xp_dirtree to get the names of all the files in a folder.
An excellent guide can be found here http://www.patrickkeisler.com/2012/11/how-to-use-xpdirtree-to-list-all-files.html
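For the download step itself, one common workaround is to drive the Windows ftp.exe client from xp_cmdshell using a script file, then bulk insert the local copy. A rough sketch, with the server, credentials, and paths as placeholders (and xp_cmdshell already enabled):
-- Build an ftp.exe script file, one command per line.
EXEC xp_cmdshell 'echo open ftp.example.com> c:\data\ftp.txt';
EXEC xp_cmdshell 'echo someuser>> c:\data\ftp.txt';
EXEC xp_cmdshell 'echo strongpassword>> c:\data\ftp.txt';
EXEC xp_cmdshell 'echo get /folder/x.csv c:\data\x.csv>> c:\data\ftp.txt';
EXEC xp_cmdshell 'echo bye>> c:\data\ftp.txt';

-- Run the script, then load the downloaded file as before.
EXEC xp_cmdshell 'ftp -s:c:\data\ftp.txt';

BULK INSERT tmp_table
FROM 'c:\data\x.csv'
WITH ( FIELDTERMINATOR = ',', ROWTERMINATOR = '0x0a' );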
I am trying to bulk insert a CSV file located on a remote web server, but I am getting the following error.
Cannot bulk load because the file "http://34.34.32.34/test.csv" could
not be opened. Operating system error code 123(The filename, directory
name, or volume label syntax is incorrect.).
Is there any way to accomplish this?
The documentation for BULK INSERT says nothing about SQL Server being able to connect to web servers.
http://msdn.microsoft.com/en-us/library/ms188365.aspx
' data_file ' Is the full path of the data file that contains data to import into the specified table or view. BULK INSERT can import data from a disk (including network, floppy disk, hard disk, and so on). data_file must specify a valid path from the server on which SQL Server is running. If data_file is a remote file, specify the Universal Naming Convention (UNC) name. A UNC name has the form \\Systemname\ShareName\Path\FileName. For example, \\SystemX\DiskZ\Sales\update.txt.
If you must import a file from HTTP, consider writing a CLR stored procedure or using SSIS' external connectivity capabilities.
http://34.34.32.34/test.csv is, exactly as the error message says, an incorrect file name. Correct file names look like c:\somefolder\test.csv. Something that starts with http: is a URL, not a file.
BULK INSERT does not support URLs as a source. You should first download the file locally (using wget, curl, or any other program that can download HTTP content), then bulk insert the downloaded file.
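For example, if curl is available on the server (it ships with recent Windows versions) and xp_cmdshell is enabled, a rough sketch of the download-then-insert approach looks like this; the local path and target table are placeholders:
-- Download the file to a local path SQL Server can read.
EXEC xp_cmdshell 'curl -o c:\data\test.csv http://34.34.32.34/test.csv';

-- Then bulk insert the local copy.
BULK INSERT [dbo].[target_table]
FROM 'c:\data\test.csv'
WITH ( FIELDTERMINATOR = ',', ROWTERMINATOR = '0x0a' );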