I'm running Microsoft SQL Server Management Studio 17.7 and I'm trying to write a script that can read text from an Excel file and insert the data (Matter_ID numbers) as a string into the DECLARE statement below. I want to avoid having to copy and paste each xxxxx-xxxxx number manually into my DECLARE statement. All I can find are ways to import data from Excel into a SQL table, but nothing on how to read the text and insert that data into my script. Any suggestions or steps that I could follow to get through this would be greatly appreciated.
Declare statement:
DECLARE @MatterList NVARCHAR(100) = '(XXXXX-XXXXX, XXXXX-XXXXX, XXXXX-XXXXX, XXXXX-XXXXX)'
Excel file structure:
ID Client Matter_Name Matter_ID
------------------------------------------
1 Client1 Matter_Name1 xxxxx-xxxxx
2 Client1 Matter_Name1 xxxxx-xxxxx
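One possible approach, sketched below, is to skip the copy/paste entirely and build the string in T-SQL by reading the sheet through OPENROWSET. This assumes the Microsoft ACE OLE DB provider is installed and ad hoc distributed queries are enabled; the file path and sheet name here are examples, so substitute your own:

```sql
DECLARE @MatterList NVARCHAR(MAX);

-- Read the Matter_ID column from the workbook and build a
-- comma-separated list wrapped in parentheses.
SELECT @MatterList = '(' + STUFF((
        SELECT ', ' + Matter_ID
        FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                        'Excel 12.0;Database=C:\Data\MatterList.xlsx;HDR=YES',
                        'SELECT Matter_ID FROM [Sheet1$]') AS src
        FOR XML PATH('')
    ), 1, 2, '') + ')';

PRINT @MatterList;  -- e.g. (xxxxx-xxxxx, xxxxx-xxxxx)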
I have a file which is full of millions of records and it looks like below:
20 Apr 2016 21:50:01,022 - username,,ResetGUI,Account,finished,8182819710127A590BAF3E5DFD9AE8B0.tomcat32,7
20 Apr 2016 21:50:01,516 - username,12345678,loadaccount,AccountAdmin,Starts,40A5D891F145C785CD03596AAD07209F.tomcat32,
I want to automate importing the data into a table.
I am not sure how to go about it.
Please advise!
If it is a one-time data load, you can use the SQL Server Import/Export Wizard.
Right Click your DB in Management Studio -> Tasks -> Import Data
The wizard will guide you through selecting a database to import to, and a data source (other DB or flat file).
Make sure you create the CSVTest table first. Make all columns VARCHAR(200) initially to test the import.
If you can give me the column names I'll construct a CREATE TABLE script for you. I just need to know your table name, column names, and data types (from the source).
If you plan to regularly import this file, the process can be:
If the table exists, truncate it.
If the table does not exist, create it.
Bulk load the CSV into the table.
Anyway, here's how to import into an existing table from a CSV:
BULK INSERT CSVTest
FROM 'c:\csvtest.txt'
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)
GO
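The recurring-import process described above (truncate if the table exists, create it otherwise, then bulk load) might be sketched like this, with placeholder column definitions you would replace with your real ones:

```sql
IF OBJECT_ID('dbo.CSVTest', 'U') IS NOT NULL
    TRUNCATE TABLE dbo.CSVTest;
ELSE
    CREATE TABLE dbo.CSVTest
    (
        Col1 VARCHAR(200),   -- placeholder columns; use your source's names/types
        Col2 VARCHAR(200),
        Col3 VARCHAR(200)
    );

BULK INSERT dbo.CSVTest
FROM 'c:\csvtest.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');
```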
I am using SSIS to import a file into SQL Server 2008. This file is supplied in .csv format to me via someone else. I have no control over the export of the file.
When I import the file, one field shows e.g. 01.10111144000000000000000e+009 instead of 1101111440.
I then proceeded to open it up in Notepad and Excel and that is how it shows up as well. When I right-click on that column in Excel and select 'Format Cells' and set it to General it reflects correctly. Problem is I can't do this manually.
What can I do prior to doing a bulk insert from the file to make sure that the column will import correctly?
You can use the code below for importing data from a CSV into a SQL Server database.
DECLARE @filePath VARCHAR(200)
       ,@basePath VARCHAR(200) = 'C:\Dumps\'
       ,@Bulk NVARCHAR(MAX);
-- Get the full file path
SET @filePath = @basePath + 'reportsdump.csv';
-- Populate table [#ReportsDumpData] with data from the csv.
SET @Bulk = 'BULK INSERT #ReportsDumpData FROM ''' + @filePath + ''' WITH (';
SET @Bulk += 'FIRSTROW = 2, FIELDTERMINATOR = ''";"'', ROWTERMINATOR = ''\n'')';
EXEC (@Bulk);
The joys of having to work with other people's extracts. I loaded it into a temp (staging) table. From there I first convert the value to FLOAT, which I convert to DECIMAL(10, 0), and convert that to VARCHAR(10). This works.
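For reference, the conversion chain just described (FLOAT, then DECIMAL(10, 0), then VARCHAR(10)) can be demonstrated directly on the sample value from the question:

```sql
SELECT CONVERT(VARCHAR(10),
           CONVERT(DECIMAL(10, 0),
               CONVERT(FLOAT, '01.10111144000000000000000e+009')));
-- returns 1101111440
```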
I am working on an import script, the idea being to import multiple workbooks into one table. I have made progress, and am able to import one workbook successfully into my table. What I want to do is create a query that will loop through a folder, read the file names, and import the data into my database in Microsoft SQL Server Management Studio.
--Creating the TABLE--
CREATE TABLE BrewinDolphinHoldings
(
    recordID INT IDENTITY(1,1),
    FUNDNAME VARCHAR(25),
    SEDOL VARCHAR(7),
    ISIN VARCHAR(11),
    NAME VARCHAR(20),
    WEIGHT INT,
    CONSTRAINT [pk_recordID] PRIMARY KEY ([recordID] ASC)
)

INSERT INTO BrewinDolphinHoldings (FUNDNAME, ISIN, NAME, WEIGHT)
VALUES
('HoldingsData', 'GB125451241', 'DavidsHoldings', 22)
--SELECTING THE SHEET--
SELECT/UPDATE? *
FROM OPENROWSET('Microsoft.JET.OLEDB.4.0',
'Excel 8.0;Database=X:\CC\sql\DEMO\SpreadsheetName.xlsx',
'SELECT * FROM [Sheet1$]') AS HoldingsData
So essentially my question is: I want to create a loop that will read the file names in a directory, and on each pass the import will read the current file name and import that spreadsheet. So, for example:
DECLARE SpreadsheetName as STRING
DECLARE FileExtension as '.xlsx'
FOR EACH ITEM IN DIRECTORY
X=1
Y=MAX
FILENAME THAT LOOP READS = SpreadsheetName
SELECT * FROM
OPENROWSET('Microsoft.JET.OLEDB.12.0',
'Excel 8.0;Database=X:\CC\sql\DEMO\SpreadsheetName + fileExtension.xls
END LOOP
So, I'm thinking maybe something like this? Although I don't know if the loop will overwrite my database? Maybe instead of UPDATE I should use INSERT?
I don't want to use SSIS; I'd prefer a query, although if anyone can recommend anything else I could look into, or help me with this loop, it would greatly help.
I'm open to new ideas from you guys, so if anyone can try and fix my code, or give me a few examples of imports for multiple Excel sheets, it would be greatly appreciated!
I'm new to SQL Server, I do have some previous programming experience!
Thanks!
You can use bcp to do what you are talking about and import any type of delimited text file, such as CSV or tab-delimited text. If it is possible, generate/save the spreadsheets as CSV and use this method. See these links:
Import Multiple CSV Files to SQL Server from a Folder
http://www.databasejournal.com/features/mssql/article.php/3325701/Import-multiple-Files-to-SQL-Server-using-T-SQL.htm
If it has to be Excel, then you can't use bcp, but these should still help you with the logic for looping over the file folders. I have never used the Excel OPENROWSET before, but since you have it working as you said, it should be able to insert in just the same way. You can still use xp_cmdshell/xp_dirtree to list the files and generate the paths, even though you can't import them with bcp.
How to list files inside a folder with SQL Server
I would then say it would be easiest to do a insert from a select statement from the openrowset to put it into the table.
http://www.w3schools.com/sql/sql_insert_into_select.asp
Make sure xp_cmdshell is enabled on your SQL Server instance as well.
https://msdn.microsoft.com/en-us/library/ms190693(v=sql.110).aspx
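Putting those pieces together, a rough T-SQL sketch of the folder loop might look like this. The folder path and table come from the question, but the provider string and column mapping are assumptions you would adjust to your sheets:

```sql
-- List files in the folder (depth 1, include files) via xp_dirtree
DECLARE @files TABLE (subdirectory NVARCHAR(260), depth INT, [file] INT);
INSERT INTO @files
EXEC master.sys.xp_dirtree 'X:\CC\sql\DEMO\', 1, 1;

DECLARE @name NVARCHAR(260), @sql NVARCHAR(MAX);
DECLARE c CURSOR FOR
    SELECT subdirectory FROM @files
    WHERE [file] = 1 AND subdirectory LIKE '%.xlsx';
OPEN c;
FETCH NEXT FROM c INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Build and run one INSERT ... SELECT per workbook
    SET @sql = N'INSERT INTO BrewinDolphinHoldings (FUNDNAME, SEDOL, ISIN, NAME, WEIGHT) '
             + N'SELECT * FROM OPENROWSET(''Microsoft.ACE.OLEDB.12.0'', '
             + N'''Excel 12.0;Database=X:\CC\sql\DEMO\' + @name + N''', '
             + N'''SELECT * FROM [Sheet1$]'')';
    EXEC sp_executesql @sql;
    FETCH NEXT FROM c INTO @name;
END
CLOSE c;
DEALLOCATE c;
```

Using INSERT ... SELECT here (rather than UPDATE) appends each workbook's rows without overwriting what is already in the table.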
I have these requirements:
Export table data into a .csv file using a stored procedure
First row in the .csv file will be a custom header
Note: data in this row will not come from table. It will be a fixed header for all the .csv being generated.
Similar to something like this:
Latest price data:
product1;150;20150727
product2;180;20150727
product3;180;20150727
Assuming that date is a proper datetime column, the following procedure will at least do the job for this table, named prodtbl:
CREATE PROC csvexp (@header AS NVARCHAR(256)) AS
BEGIN
    SELECT csv FROM (
        SELECT prod, date, prod + ';' + CAST(price AS VARCHAR(8)) + ';'
               + REPLACE(CONVERT(VARCHAR, getdate(), 102), '.', '') csv
        FROM prodtbl
        UNION ALL
        SELECT NULL, '1/1/1900', @header
    ) csvexp ORDER BY prod, date
END
The command
EXEC csvexp 'my very own csv export'
will then generate the following output:
my very own csv export
product1;150;20150727
product2;180;20150727
product3;180;20150727
The part of actually getting this output into a .csv file still remains to be done ...
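One possible way to finish that last step, sketched below, is to shell out to the bcp utility in queryout mode. This assumes xp_cmdshell is enabled, trusted authentication, and example database/path names:

```sql
-- Write the procedure's result set to a .csv file with bcp
-- (mydb, the output path, and the server name are placeholders)
DECLARE @cmd VARCHAR(500) =
    'bcp "EXEC mydb.dbo.csvexp ''Latest price data:''" queryout C:\Dumps\prices.csv -c -T -S localhost';
EXEC master..xp_cmdshell @cmd;
```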
When importing file.csv to SQL Server 2008 I am running into a problem.
In my file.csv the decimal is written this way (e.g. 1234,34112), and it seems that SQL Server does not understand the ',' as a decimal separator.
My solution has been to import it using BULK INSERT as VARCHAR and after that convert it to decimal. It works but I guess it may be a better solution which I am not able to get.
Could you help me with that?
Thanks in advance
There are only two ways of doing it. One you have already mentioned: importing into SQL Server and then doing something like this...
CREATE TABLE ImportTable (Value NVARCHAR(1000)) --<-- Import the data as it is

INSERT INTO ImportTable VALUES
('1234,34112'),('12651635,68466'),('1234574,5874')

-- Add a column of NUMERIC type, then update it:
ALTER TABLE ImportTable ADD NewColumn NUMERIC(28,8)

UPDATE ImportTable
SET NewColumn = CAST(REPLACE(Value, ',', '.') AS NUMERIC(28,8))
Or you can change it in your Excel sheet before you import it.
Unless you are using SSIS to import the data, it is always best to get your data into SQL Server first using loose datatypes, and then do any data manipulation needed.
SQL Server Management Studio 17 provides a new direct option to import flat files that handles the import of decimal CSV columns for you. Right-click your database, then click Tasks > Import Flat File...