I am a beginner in SQL Server and my scripting is not very polished yet, so I need suggestions on the issue below.
I receive files from a remote server to my machine (around 700/day), named as follows:
ABCD.100.1601310200
ABCD.101.1601310210
ABCD.102.1601310215
Naming Convention:
The first part 'ABCD' remains the same, the middle part is a sequence id that increments for every file, and the last part is a timestamp.
File structure
The files do not have a specific extension but can be opened with Notepad/Excel, so they can be treated as flat files. Each file consists of 95 columns and a fixed 20,000 rows, with some garbage values in the top 4 and bottom 4 rows of column 1.
Now, I need to create a database in SQL Server into which I can import the data from these flat files using a scheduler. Suggestions are needed.
There are probably other ways of doing this, but this is one way:
Create a format file for your tables. You only need to create it once. Use this file in the import script in step 2.
Create an import script based on OPENROWSET(BULK '<file_name>', FORMATFILE='<format_file>').
Schedule the script from step 2 in SQL Server Agent to run against the database you want the data imported into (a sketch is shown at the end of this answer).
Create the format file
This step creates a format file to be used in the next step. The following script creates the format file in C:\Temp\imp.fmt based on an existing table (replace TEST_TT with the database you are importing to), using a comma as field separator. If the files are tab-separated, remove the -t, switch.
DECLARE @cmd VARCHAR(8000);
SET @cmd='BCP TEST_TT.dbo.[ABCD.100.1601310200] format nul -f "C:\Temp\imp.fmt" -c -t, -T -S ' + (SELECT @@SERVERNAME);
EXEC master..xp_cmdshell @cmd;
Before executing this you will need to reconfigure SQL Server to allow the xp_cmdshell stored procedure. You only need to do this once.
EXEC sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
EXEC sp_configure 'xp_cmdshell', 1
GO
RECONFIGURE
GO
Import script
This script assumes:
The files need to be imported to separate tables
The files are located in C:\Temp
The format file is C:\Temp\imp.fmt (generated in the previous step)
SET NOCOUNT ON;
DECLARE @store_path VARCHAR(256)='C:\Temp';
DECLARE @files TABLE(fn NVARCHAR(256));
DECLARE @list_cmd VARCHAR(256)='DIR ' + @store_path + '\ABCD.* /B';
INSERT INTO @files EXEC master..xp_cmdshell @list_cmd;
DECLARE @fullcmd NVARCHAR(MAX);
SET @fullcmd=(
SELECT
'IF OBJECT_ID('''+QUOTENAME(fn)+''',''U'') IS NOT NULL DROP TABLE '+QUOTENAME(fn)+';'+
'SELECT * INTO '+QUOTENAME(fn)+' '+
'FROM OPENROWSET(BULK '''+@store_path+'\'+fn+''',FORMATFILE=''C:\Temp\imp.fmt'') AS tt;'
FROM
@files
WHERE
fn IS NOT NULL
FOR XML PATH('')
);
EXEC sp_executesql @fullcmd;
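Schedule the script
One way to schedule the script from step 2 is a daily SQL Server Agent job. The following is a minimal sketch, assuming the import script above has been wrapped in a hypothetical stored procedure dbo.usp_ImportAbcdFiles in the TEST_TT database; the job, step and schedule names are placeholders:
USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'Daily flat file import';
EXEC dbo.sp_add_jobstep
    @job_name = N'Daily flat file import',
    @step_name = N'Import ABCD files',
    @subsystem = N'TSQL',
    @database_name = N'TEST_TT',
    @command = N'EXEC dbo.usp_ImportAbcdFiles;';  -- hypothetical wrapper around the import script above
EXEC dbo.sp_add_jobschedule
    @job_name = N'Daily flat file import',
    @name = N'Daily at 02:00',
    @freq_type = 4,           -- daily
    @freq_interval = 1,       -- every 1 day
    @active_start_time = 020000;
EXEC dbo.sp_add_jobserver @job_name = N'Daily flat file import';
GO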
Related
I want to automate uploading certain files into my SQL Server every day, but each file has a different name.
I was researching and found that a simple way is to schedule a BULK INSERT statement to run every day, but I don't know how to implement the file name change in the query. I'm not too familiar with the Windows command prompt, but I'm open to using it as a solution.
The file name is something like mmddyyyyfile, with the mmddyyyy part changing to correspond with the day's date.
We use this technique in our system bulk loads when we have regular file extracts to import, similar to what you describe in your situation. If you have access to and are willing to use xp_cmdshell (which it sounds like you are), then doing something like this allows for dynamic file names, and you don't have to worry about what your date pattern is:
SET NOCOUNT ON;
DECLARE @cmdstr VARCHAR(1024) = 'dir c:\upload /B'; --set your own folder path here
DECLARE @FileName VARCHAR(1024);
DROP TABLE IF EXISTS #CmdOutput;
CREATE TABLE #CmdOutput (CmdOutput varchar(1024));
INSERT #CmdOutput EXEC master..xp_cmdshell @cmdstr;
DECLARE FILES CURSOR FAST_FORWARD FOR
SELECT CmdOutput
FROM #CmdOutput
WHERE CmdOutput IS NOT NULL;
OPEN FILES;
FETCH NEXT FROM FILES INTO @FileName;
WHILE @@FETCH_STATUS = 0
BEGIN
/*
Use dynamic SQL to do your bulk load here based on the value of @FileName
*/
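    -- A minimal sketch of that dynamic bulk load; the target table dbo.DailyImport
    -- and the comma/newline terminators are assumptions, so adjust the path,
    -- terminators and table to your actual file layout:
    DECLARE @sql NVARCHAR(MAX) =
        N'BULK INSERT dbo.DailyImport FROM ''c:\upload\' + @FileName + N''' ' +
        N'WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'');';
    EXEC sys.sp_executesql @sql;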
FETCH NEXT FROM FILES INTO @FileName;
END;
CLOSE FILES;
DEALLOCATE FILES;
DROP TABLE #CmdOutput;
This will blindly take any file in the folder and include it in the cursor iterations. If the folder containing your .csv files will have anything else you don't want to load, you can easily add filtering to the WHERE clause that defines the cursor to limit the files.
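For example, a minimal tweak to the cursor definition that only picks up .csv files (the extension filter is an assumption based on your description):
DECLARE FILES CURSOR FAST_FORWARD FOR
    SELECT CmdOutput
    FROM #CmdOutput
    WHERE CmdOutput IS NOT NULL
      AND CmdOutput LIKE '%.csv';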
Finally, the obligatory warning about enabling and using xp_cmdshell on your SQL Server instance: I won't go into the details here (there is ample information that can be searched out), but suffice it to say it is a security concern and needs to be used with an understanding of the risks involved.
Is it possible to export records to CSV format using a SQL script in a stored procedure?
I am trying to make a job schedule that will export records into a .csv file. I am using SQL Server 2012.
I don't want to build a small application just for exporting; that's why I am trying to make a script and add it to a schedule. For example, I have records like
EmpID EmployeeName TotalClasses
---------------------------------
01 Zaraath 55
02 John Wick 97
File destination location is D:/ExportRecords/file.csv
In SSMS you can save the query result in CSV format.
Try the query below:
-- To allow advanced options to be changed.
EXECUTE sp_configure 'show advanced options', 1;
GO
-- To update the currently configured value for advanced options.
RECONFIGURE;
GO
-- To enable the feature.
EXECUTE sp_configure 'xp_cmdshell', 1;
GO
-- To update the currently configured value for this feature.
RECONFIGURE;
GO
declare @sql varchar(8000)
select @sql = 'bcp "select * from DatabaseName..TableName" queryout d:\FileName.csv -c -t, -T -S' + @@servername
exec master..xp_cmdshell @sql
You have to create an empty FileName.csv in D:\ first.
You could use BCP (Bulk Copy Program), the built-in command-line utility for exporting data from SQL Server. It has low overhead, executes fast, and has lots of switches, like -t to set the field terminator (a comma for CSV), which make it good for scripting.
Something like this; the docs are useful as well:
bcp YourDatabase.dbo.YourTable out "D:/ExportRecords/file.csv" -c -t, -T -S YourServerName
I need to move files from one blob container to another, not copy them. I need to move, meaning that when I move from A to B the files end up in B and nothing is left in A. This is so basic, yet it does not seem to be possible in Azure Blob Storage. Please let me know if it is possible. I am doing this from SQL Server using AzCopy version 10.3.2.
Because of this, I need to copy the files from A to B and then remove them from A. There are 2 problems.
1) I only want certain files to go from A to B.
DECLARE @Program varchar(200) = 'C:\azcopy.exe'
DECLARE @Source varchar(max) = '"https://myblob.blob.core.windows.net/test/myfolder/*?SAS"'
DECLARE @Destination varchar(max) = '"https://myblob.blob.core.windows.net/test/archive?SAS"'
DECLARE @Cmd varchar(5000)
SELECT @Cmd = @Program +' cp '+ @Source +' '+ @Destination + ' --recursive'
PRINT @Cmd
EXECUTE master..xp_cmdshell @Cmd
So when I use myfolder/*, it takes all the files. When I try myfolder/*.pdf, it says:
failed to parse user input due to error: cannot use wildcards in the path section of the URL except in trailing "/*". If you wish to use * in your URL, manually encode it to %2A
When I try myfolder/%2A.pdf or myfolder/%2Apdf, it still gives the error.
2) When the copy runs, the output says:
INFO: Failed to create one or more destination container(s). Your transfers may still succeed if the container already exists.
But the destination folder is already there. And in the log file it says:
RESPONSE Status: 403 This request is not authorized to perform this operation.
For azcopy version 10.3.2:
1. To copy specific files, e.g. only .pdf files, you should add --include-pattern "*.pdf" to your command. Also remember, for the @Source variable, to remove the wildcard *, so your @Source should be '"https://myblob.blob.core.windows.net/test/myfolder?SAS"'.
The completed command looks like this (please adapt it to fit your SQL cmd):
azcopy cp "https://xx.blob.core.windows.net/test1/folder1?sas" "https://xx.blob.core.windows.net/test1/archive1?sas" --include-pattern "*.pdf" --recursive=true
2. To delete specific blobs, e.g. only .pdf files, you should also add --include-pattern "*.pdf" to your azcopy rm command.
Also, there is no move command in azcopy; you have to copy first and then delete. You can achieve this with the above two commands.
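If it helps, here is a minimal sketch of that delete step in the same xp_cmdshell pattern as your script above, reusing your @Program variable and a @Source without the trailing /* (the SAS token and paths are placeholders):
DECLARE @RmCmd varchar(5000)
SELECT @RmCmd = @Program + ' rm ' + @Source + ' --include-pattern "*.pdf" --recursive=true'
PRINT @RmCmd
EXECUTE master..xp_cmdshell @RmCmd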
I have a Spring Batch job which invokes a SQL procedure.
The SQL procedure extracts a file (using a BCP command).
Internally, the proc fetches data from a few tables, creates a string which it inserts into a temp table, and then a BCP command extracts the data from that temp table.
The proc runs perfectly fine when we execute it in SQL Server Management Studio, i.e. a file is extracted with the data, but when the proc is initiated from the Java Spring Batch job, the system generates an empty file.
While debugging I found that the system does not find data in the temp table, hence the empty file. If I pre-populate that temp table with some data, that data does get extracted to the file.
I feel the issue is that when the transaction is initiated from Java, the INSERT written in the proc (to load the temp table) is not committed, so when the BCP command runs it finds an empty temp table.
Requirement -
I tried writing BEGIN TRANSACTION before the INSERT and COMMIT after it, but the empty file is still generated.
Is there a way to force the commit after the INSERT, so that when the BCP command is executed it finds the data in the temp table?
Or is there any other solution you can suggest that I should try?
Appreciate your help!
EDIT -------------
Sample code for the data query from the proc:
SET @DATAQuery ='SELECT 1'
BEGIN TRANSACTION;
SET @DATAQuery = 'INSERT INTO EXTRACTRESULTS ' + @DATAQuery
PRINT 'Data Query:'+@DATAQuery
EXEC (@DATAQuery)
COMMIT TRANSACTION
SET @FINALQuery = 'SELECT RESULTS FROM '+@DBNAME+'.dbo.'+'EXTRACTRESULTS'
SET @ExtractBatchStringData = 'bcp "'+@FINALQuery+'" queryout "'+@FOLDERPATH+@filename +'" -c -T -t -S""';
PRINT 'ExtractBatchStringData: '+@ExtractBatchStringData
EXEC @STATUS = xp_cmdshell @ExtractBatchStringData
I would like to know how I can switch from one database to another within the same script. I have a script that reads the header information from a SQL Server .BAK file and loads the information into a test database. Once the information is in the temp table (in the test database), I run the following script to get the database name.
This part works fine.
INSERT INTO #HeaderInfo EXEC('RESTORE HEADERONLY
FROM DISK = N''I:\TEST\database.bak''
WITH NOUNLOAD')
DECLARE @databasename varchar(128);
SET @databasename = (SELECT DatabaseName FROM #HeaderInfo);
The problem is that when I try to run the following statement nothing happens: the new database is never selected and the script stays on the test database.
EXEC ('USE '+ @databasename)
The goal is to switch to the new database (USE NewDatabase) so that the other part of my script (DBCC CHECKDB) can run. That script checks the integrity of the database and saves the results to a temp table.
What am I doing wrong?
You can't expect a USE statement to work in this fashion with dynamic SQL. Dynamic SQL runs in its own context, so as soon as it has executed, you're back in your original context. This means you'd have to include your SQL statements in the same dynamic SQL execution, such as:
declare @db sysname = 'tempdb';
exec ('use ' + @db + '; dbcc checkdb;')
You can alternatively use fully qualified names for your DB objects and specify the database name in your dbcc command, even with a variable, as in:
declare @db sysname = 'tempdb';
dbcc checkdb (@db);
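For the fully qualified name approach, you simply reference objects through their database and schema instead of switching context; a trivial sketch (the database and table names are placeholders):
select * from OtherDatabase.dbo.SomeTable;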
You can't do this because the scope of EXEC is limited to the dynamic query: when EXEC ends, the context is returned to its original state; the context only changes inside the EXEC itself. So you should do your work in one big dynamic statement, like:
DECLARE @str NVARCHAR(MAX)
SET @str = 'select * from table1;
USE DatabaseName;
select * from table2;'
EXEC (@str)