I'm trying to create a query to export a sheet from Excel to SQL Server. I came up with the query below, yet I'm getting the error "Invalid object name 'Sheet1$'".
How can I select from the sheet "Sheet1"?
s = "INSERT INTO TestTable SELECT * FROM [Sheet1$] "
cn.Execute s
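A plain SELECT executed against the SQL Server connection cannot see the worksheet, which is what the "Invalid object name" error indicates. One common way to query the sheet from the server side is a distributed query through OPENROWSET. This is only a sketch: it assumes the Microsoft.ACE.OLEDB.12.0 provider is installed on the server, that 'Ad Hoc Distributed Queries' is enabled, and the workbook path shown is hypothetical.

```sql
-- Sketch: read the worksheet via a distributed query from SQL Server.
-- Assumes the ACE OLE DB provider is installed and ad hoc distributed
-- queries are enabled; the workbook path is a placeholder.
INSERT INTO TestTable
SELECT *
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'Excel 12.0;Database=C:\Data\Book1.xlsx;HDR=YES',
                'SELECT * FROM [Sheet1$]');
```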
In your case, I guess SQL Server doesn't have access to the Sheet1 file.
Check here for how to make the file accessible, or for what could be preventing SQL Server from locating your file.
There are two ways that I know of to achieve this.
1>>
BULK INSERT TestTable
FROM 'C:\CSVData\sheet1.xls'
WITH
(
FIRSTROW = 2,
FIELDTERMINATOR = ',', --CSV field delimiter
ROWTERMINATOR = '\n', --Use to shift the control to next row
ERRORFILE = 'C:\CSVDATA\SchoolsErrorRows.txt',
TABLOCK
)
But make sure that SQL Server has access to the folder you want to take the Excel file from, and that you have bulk import rights.
Check out more info here.
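The bulk import rights mentioned above can be granted like this; the login name is a placeholder, and the statements must be run by a sysadmin:

```sql
-- Add the importing login to the bulkadmin server role (name is a placeholder).
ALTER SERVER ROLE bulkadmin ADD MEMBER [DOMAIN\ImportUser];
-- Equivalently, grant the permission directly:
GRANT ADMINISTER BULK OPERATIONS TO [DOMAIN\ImportUser];
```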
2>> You could also use the SQL Server Import Wizard, like this.
I receive update information for items on a daily basis via a CSV file that includes date/time information in the format YYYY-MM-DDThh:mm:ss.
I used the Management Studio task "Import Flat File..." to create a table dbo.fullItemList and import the contents of the initial file. It identified the date/time columns as type datetime2(7) and imported the data correctly. I then copied this table to create a blank table dbo.dailyItemUpdate.
I want to create a script that imports the CSV file to dbo.dailyItemUpdate, uses a MERGE function to update dbo.fullItemList, then wipes dbo.dailyItemUpdate ready for the next day.
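The daily routine described above can be sketched roughly as follows. The key column and field names (ItemID, LastUpdated) are assumptions for illustration; the file path is the one from the question.

```sql
-- Rough sketch of the daily routine; ItemID and LastUpdated are assumed names.
BULK INSERT dbo.dailyItemUpdate
FROM 'pathToFile\ReceivedFile.csv'
WITH (DATAFILETYPE = 'char', FIELDQUOTE = '"', FIRSTROW = 2,
      FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);

-- Fold the daily rows into the full list.
MERGE dbo.fullItemList AS tgt
USING dbo.dailyItemUpdate AS src
    ON tgt.ItemID = src.ItemID
WHEN MATCHED THEN
    UPDATE SET tgt.LastUpdated = src.LastUpdated
WHEN NOT MATCHED THEN
    INSERT (ItemID, LastUpdated) VALUES (src.ItemID, src.LastUpdated);

-- Wipe the staging table ready for the next day.
TRUNCATE TABLE dbo.dailyItemUpdate;
```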
The bit I can't get to work is the import. As the table already exists, I'm using the following:
BULK INSERT dbo.dailyItemUpdate
FROM 'pathToFile\ReceivedFile.csv'
WITH
(
DATAFILETYPE = 'char',
FIELDQUOTE = '"',
FIRSTROW = 2,
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n',
TABLOCK
)
But I get a "type mismatch..." error on the date/time columns. How come the BULK INSERT fails, even though the data type was picked up by the "Import Flat File" function?
I am trying to import a .CSV file into my SQL Server database. This is the script I am using:
BULK INSERT <table_name>
FROM '/data.txt'
WITH
(
FIRSTROW = 2,
FORMAT='CSV',
ERRORFILE = '/RowErrors.txt',
MAXERRORS = 100
)
The trouble is my CSV has rows like the following in it.
"1-XY-YYYY","","","",""GXXXXX","SuXXXXXXXX","1-XY-YYYY"
Note the ""GXXXXX" in column 5.
The import stops with the following error when it gets to that row
Bulk load failed due to invalid column value in CSV data file
Is there some way to get the importer to ignore data formatting errors like this, the way we can skip rows with the MAXERRORS property?
We are migrating to Access 2013 from 2010 and from SQL Server 2008 to 2016. To import a CSV file, I used the Docmd.TransferText command and that did the import easily as it was to a local Access table.
Now I have issues when trying to import the data from CSV to a remote SQL Server table. I copied the file to the SQL Server box where the 2016 database is and used the below for the transfer.
str1 = "BULK INSERT Temp3 " & _
"FROM 'C:\Bulk\FileExchange_Response_49636101_49.csv'" & _
"WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', " & _
"ROWTERMINATOR = '\\n', TABLOCK)".
This does not throw any error; however, it does not import the data.
Could anyone please shed some light on how to import the data from the CSV?
Thanks
The data does not get imported even if there are no errors.
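For what it's worth, the concatenation above produces two likely problems: there is no space between the file name and WITH (the server receives ...49.csv'WITH), and the doubled backslash sends a literal \\n as the row terminator, so no row boundary is ever matched. The statement the server should receive looks like this:

```sql
-- The statement the VBA string should produce after concatenation.
BULK INSERT Temp3
FROM 'C:\Bulk\FileExchange_Response_49636101_49.csv'
WITH (FIRSTROW = 2, FIELDTERMINATOR = ',',
      ROWTERMINATOR = '\n', TABLOCK);
```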
Looks like you got your data into Access. Try this:
Create a "Linked Table" to your SQL Server table. External Data > New Data Source > From Database > SQL Server > Link to the data source by creating a linked table > Create a DSN to your database, choose your table. You'll see a green icon in the tables with your SQL Server table there.
Create an Append Query. Create > Query Design > Close Show Table pop-up > Select SQL view at top left > use something like Insert Into MyNewLinkedTable (Field1, Field2) Select Field1, Field2 From MyAccessTable
Click Run on the ribbon above, then follow the prompts.
No VBA required!
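The append query from step 2, written out in full (the table and field names come from the example above and are placeholders):

```sql
-- Append rows from the local Access table into the linked SQL Server table.
INSERT INTO MyNewLinkedTable (Field1, Field2)
SELECT Field1, Field2
FROM MyAccessTable;
```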
I'm trying to use the following code to do a bulk insert:
BULK INSERT [xxx].[xxx].[xxx]
FROM 'E:\xxx\xxx.csv'
WITH (
FIRSTROW = 2,
FIELDTERMINATOR = ',', --CSV field delimiter
ROWTERMINATOR = '\n', --Use to shift the control to next row
TABLOCK
)
But it reports an error:
Cannot bulk load because the file "E:\xxx\xxx.csv" could not be opened. Operating system error code (null).
Some posts show it's because my SSMS 2017 uses some sort of dummy account which doesn't have the permission to access the file on the shared drive.
I've tried running SSMS 2017 as admin, but it didn't work.
My question is how to create a domain account for SSMS 2017, so it can bulk insert? (a step-by-step guide is preferred).
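As background for the question above: BULK INSERT opens the file as the SQL Server service account, not as the account running SSMS, which is why running SSMS as admin doesn't help. You can check which account the service runs under with this query:

```sql
-- Show the account each SQL Server service runs under.
SELECT servicename, service_account
FROM sys.dm_server_services;
```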
I am using Microsoft SQL Server Management Studio and am currently importing some CSV files into a database. I am importing the CSV files into already existing tables using the BULK INSERT command, with the following query.
BULK INSERT myTable
FROM 'D:\myfolder\file.csv'
WITH
(FIRSTROW = 2,
FIELDTERMINATOR = ';', --CSV Field Delimiter
ROWTERMINATOR = '\n', -- Used to shift to the next row
ERRORFILE = 'D:\myfolder\Error Files\myErrrorFile.csv',
TABLOCK
)
This works fine for me so far, but I would like to automate the process of naming columns in tables. More specifically, I would like to create a table and use the contents of the first row of the CSV file as the column names. Is that possible?
The easiest way I can think of is:
right-click on the database, select: Tasks -> Import Data...
After that, the SQL Server Import and Export Wizard will open. There you can specify and customize all the settings for importing data from any source (such as taking the column names from the first row of a file).
In your case, your data source will be Flat File Source.