Automate truncate/copy of table data - sql-server

Every week, I have to run a script that truncates a bunch of tables. Then I use the export data task to move the data to another server (same database name).
The servers aren't linked, I can't save the export job, and my permissions/settings are limited by the DBA (I am an admin on the databases). I only have Windows authentication on both servers. The servers are different versions (2005/2008).
My question: is there a way to automate this with my limited ability to modify the servers? Perhaps using PowerShell?
Selecting all these tables and stuff in the export wizard week after week is a pain.

If you have access to the SQL Server command-line tools (bcp and sqlcmd), try something like this from a different system. Since you only have Windows authentication, use -T (trusted connection) rather than -U/-P (the switches conflict), and keep the source and destination instances in separate variables:
C:\> bcp prod.dbo.[Table] out ExportImportFile.inp -b 10000 -S %SRCSERVER% -T -c > C:\Temp\ExportImport.log
C:\> sqlcmd -S %DESTSERVER% -Q "USE prod; TRUNCATE TABLE [Table];" >> C:\Temp\ExportImport.log
C:\> bcp prod.dbo.[Table] in ExportImportFile.inp -b 10000 -S %DESTSERVER% -T -c >> C:\Temp\ExportImport.log
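To run the whole cycle for several tables each week without touching the wizard, you could wrap those three steps in a small PowerShell loop. A minimal sketch, assuming placeholder instance names, the prod database as above, and bcp/sqlcmd on the PATH:
# Placeholder instance names and table list; adjust to your environment.
$src = 'SourceServer'
$dst = 'DestServer'
$tables = 'dbo.Table1', 'dbo.Table2'
foreach ($t in $tables) {
    $file = Join-Path $env:TEMP ("{0}.dat" -f ($t -replace '\W', '_'))
    & bcp "prod.$t" out $file -b 10000 -S $src -T -c      # export from source (Windows auth)
    & sqlcmd -S $dst -Q "USE prod; TRUNCATE TABLE $t;"    # clear the destination table
    & bcp "prod.$t" in $file -b 10000 -S $dst -T -c       # reload the destination table
}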

You can use dbatools:
$splat = @{
    SqlInstance         = '{source instance}'
    Database            = 'tempdb'
    Destination         = '{dest instance}'
    DestinationDatabase = 'tempdb'
    Table               = 'table1' # you can provide a list of tables
    AutoCreateTable     = $true
    Truncate            = $true
}
Copy-DbaDbTableData @splat
If you don't have dbatools: https://dbatools.io/getting-started/
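Since Table accepts a list, several tables can be copied in a single call without the splat; a sketch with placeholder instance, database, and table names:
# -Truncate clears each destination table before the copy
Copy-DbaDbTableData -SqlInstance 'SourceServer' -Destination 'DestServer' -Database 'prod' -Table 'dbo.Table1', 'dbo.Table2' -AutoCreateTable -Truncate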

Related

SQL Server table swap

I have a table that is truncated and loaded with data every day. The problem is that truncating the table takes a while, and users notice. What I am wondering is: is there a way to keep two copies of the same table, truncate and load one with the new data, then have users read from the freshly loaded table, switching between the two tables each load?
If you're clearing out the old table as well as populating the new one, you could use the OUTPUT clause. Be mindful of the potential for log growth; consider a loop/batch approach if that may be a problem (see the sketch after the example below).
DELETE OldDatabase.dbo.MyTable
OUTPUT
    DELETED.col1
    , DELETED.col2
    , DELETED.col3
INTO NewDatabase.dbo.MyTable;
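If a single big DELETE bloats the log, the loop/batch approach mentioned above caps the rows per transaction. A minimal sketch, assuming the SqlServer PowerShell module and the same placeholder server and table names:
# Batched DELETE ... OUTPUT; each iteration moves at most @batch rows.
# Requires the SqlServer module; server and object names are placeholders.
Import-Module SqlServer
$sql = @"
DECLARE @batch int = 5000;
WHILE 1 = 1
BEGIN
    DELETE TOP (@batch) OldDatabase.dbo.MyTable
    OUTPUT DELETED.col1, DELETED.col2, DELETED.col3
    INTO NewDatabase.dbo.MyTable;
    IF @@ROWCOUNT < @batch BREAK;
END
"@
Invoke-Sqlcmd -ServerInstance 'MyServer' -Query $sql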
Or you can use BCP, which is a handy alternative to be aware of. Note this uses SQLCMD syntax (:setvar variables, and !! to shell out to the OS), so run it with sqlcmd.exe or in SSMS with SQLCMD Mode enabled:
:setvar SourceServer OldServer
:setvar SourceDatabase OldDatabase
:setvar DestinationServer NewServer
:setvar DestinationDatabase NewDatabase
:setvar BCPFilePath "C:\"
!!bcp "$(SourceDatabase).dbo.MyTable" FORMAT nul -S "$(SourceServer)" -T -n -q -f "$(BCPFilePath)MyTable.fmt"
!!bcp "SELECT * FROM $(SourceDatabase).dbo.MyTable WHERE col1=x AND col2=y" queryout "$(BCPFilePath)MyTable.dat" -S "$(SourceServer)" -T -q -f "$(BCPFilePath)MyTable.fmt" -> "$(BCPFilePath)MyTable.txt"
!!bcp "$(DestinationDatabase).dbo.MyTable" in $(BCPFilePath)MyTable.dat -S $(DestinationServer) -T -E -q -b 2500 -h "TABLOCK" -f $(BCPFilePath)MyTable.fmt

How to create a new local table from a select query on remote db in PostgreSQL?

I can use the following command to do so, as long as I create the table and the appropriate columns first. I would like the command to create the table for me based on the results of my query.
psql -h remote.host -U myuser -p 5432 -d remotedb -c "copy (SELECT view.column FROM schema.view LIMIT 10) to stdout" | psql -h localhost -U localuser -d localdb -c "copy localtable from stdin"
Again, it populates the data properly if I create the table and columns ahead of time, but it would be much easier if I could automate that with a command that creates the table according to the results of my query.

Add a date to output filename using SQLCMD from within SQL Agent CmdExec?

I want to run a weekly extract from a SQL Server database using SQLCMD under SQL Agent. Because I need to save multiple extracts in the same share, I want to use the current date as part of the extract's file name. When doing this from the command line, I use:
sqlcmd -S POC -i "\\org-data\data\dept\share\registry\SQLCMD\extractdata.sql" -s "|" -W -h-1 -o "\\org-data\data\dept\share\registry\Extracts\extractdata.%date:~-4,4%%date:~-10,2%%date:~-7,2%.txt"
and it works perfectly.
When I place the same statement into a CmdExec under SQL Agent, my date becomes a syntax error -- ("The filename, directory name, or volume label syntax is incorrect")
How do others handle this? Thanks.
Try using the SQL Server Agent tokens. They are described in the MSDN article "Use Tokens in Job Steps". The DATE token provides the current date in YYYYMMDD format. For your example, use:
"...\Extracts\extractdata.$(ESCAPE_DQUOTE(DATE)).txt"
This isn't working for me:
echo off
sqlcmd -m 1 -S 10.108.96.210\QA832 -U Exception -P Password1 -i E:\KCM_UAT\Exception.sql -o C:\Test_$(ESCAPE_DQUOTE(DATE)).txt -W -h-1 -s " "
set /p delExit=Press the ENTER key to exit...:
The file is written out with the token untouched:
Test_$(ESCAPE_DQUOTE(DATE)).txt
That is expected: SQL Agent substitutes tokens only in the job step's own command text. A token inside a batch file that the step calls, or a script run by hand from cmd.exe as above, is never replaced.
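If you can make the step a PowerShell one (or call powershell.exe from CmdExec), another workaround is to build the date-stamped name yourself. A minimal sketch using the paths from the question:
# Build a yyyyMMdd stamp in PowerShell instead of relying on Agent token replacement
$stamp = Get-Date -Format 'yyyyMMdd'
$out = "\\org-data\data\dept\share\registry\Extracts\extractdata.$stamp.txt"
& sqlcmd -S POC -i "\\org-data\data\dept\share\registry\SQLCMD\extractdata.sql" -s "|" -W -h -1 -o $out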

Setting up a database, schemas, tables and stored procedures all in one click

I have all the scripts to do:
Set up a database.
Create schema/s.
Create tables.
Create stored procedures.
I would like to write a batch file that will have SQL Server run those scripts so my database is created more easily and quickly. For the sake of this example, let's assume I have a folder at C:\folder containing the files SetDatabase.sql, SetSchema.sql, SetTable.sql, and SetSP.sql. How would I set all that up on localhost\TSQL2012?
You can do this in PowerShell using sqlcmd:
sqlcmd -S serverName\instanceName -i scripts.sql
The statement above executes one script. You can then use the :r command inside that file (scripts.sql) to include all your other scripts:
:r C:\..\script1.sql
:r C:\..\script2.sql
....
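Alternatively, instead of maintaining scripts.sql by hand, a minimal PowerShell sketch that runs the four files from the question in order (-b makes sqlcmd exit with a non-zero code on error; -E uses Windows authentication):
# Run the setup scripts in dependency order against localhost\TSQL2012
$scripts = 'SetDatabase.sql', 'SetSchema.sql', 'SetTable.sql', 'SetSP.sql'
foreach ($s in $scripts) {
    & sqlcmd -S 'localhost\TSQL2012' -E -b -i (Join-Path 'C:\folder' $s)
    if ($LASTEXITCODE -ne 0) { throw "Script $s failed." }
}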
set _connectionCredentialsMaster=-S MyServer\MyInstance -d Master -U sa -P mypassword
set _connectionCredentialsMyDatabase=-S MyServer\MyInstance -d MyDatabase -U sa -P mypassword
set _sqlcmd="%ProgramFiles%\Microsoft SQL Server\110\Tools\Binn\SQLCMD.EXE"
%_sqlcmd% -i MyFileCreateDatabase001.sql -b -o MyFileCreateDatabase001.Sql.log %_connectionCredentialsMaster%
%_sqlcmd% -i MyFile001.sql -b -o MyFile001.Sql.log %_connectionCredentialsMyDatabase%
%_sqlcmd% -i MyFile002.sql -b -o MyFile002.Sql.log %_connectionCredentialsMyDatabase%
set _connectionCredentialsMaster=
set _connectionCredentialsMyDatabase=
set _sqlcmd=
Just remember: when you run the CREATE DATABASE statement you are actually using the master database; only after MyDatabase exists can you connect to it. That is why the first line in the example above connects to master.
This approach lets you set the credentials once, at the top, and keep one line per script file.
Alternatively, use SQL Server Data Tools; it's worth reading up on before you start:
http://msdn.microsoft.com/en-in/data/tools.aspx

Using variables in SQLCMD for Linux

I'm running the Microsoft SQLCMD tool for Linux (CTP 11.0.1720.0) on a Linux box (Red Hat Enterprise Server 5.3 Tikanga) with the Korn shell. The tool is properly configured and works in all cases except when using scripting variables.
I have a SQL script that looks like this:
SELECT COLUMN1 FROM TABLE WHERE COLUMN2 = '$(param1)';
And I'm running the sqlcmd command like this.
sqlcmd -S server -d database -U user -P pass -i input.sql -v param1="DUMMYVALUE"
When I execute the above command, I get the following error.
Sqlcmd: 'param1=DUMMYVALUE': Invalid argument. Enter '-?' for help.
The help lists the syntax as:
[-v var = "value"...]
Am I missing something here?
You don't need to pass variables to sqlcmd; it picks them up automatically from your shell's exported environment variables, e.g.:
export param1=DUMMYVALUE
sqlcmd -S $host -U $user -P $pwd -d $db -i input.sql
In the RTP version (11.0.1790.0), the -v switch does not appear in the list of parameters when executing sqlcmd -?. Apparently this option isn't supported under the Linux version of the tool.
As far as I can tell, importing parameter values from environment variables doesn't work either.
If you need a workaround, one way would be to concatenate one or more :setvar statements with the text file containing the commands you want to run into a new file, then execute the new file. Based on your example:
echo :setvar param1 DUMMYVALUE > param_input.sql
cat input.sql >> param_input.sql
sqlcmd -S server -d database -U user -P pass -i param_input.sql
You can export the variable in Linux; after that you won't need to pass the variable on the sqlcmd command line. Note, however, that you will need to edit your SQL script and remove the :setvar command if it doesn't give the variable a default value:
export dbName=xyz
sqlcmd -Uusername -Sservername -Ppassword -i script.sql
:setvar dbName --remove this line
USE [$(dbName)]
GO
I think you're just not quoting the input variables correctly. I created this bash script...
#!/bin/bash
# Create a sql file with a parameterized test script
echo "
set nocount on
select k = '-db', v = '\$(db)' union all
select k = '-schema', v = '\$(schema)' union all
select '-', 'static'
go" > ./test.sql
# capture input variables
DB=$1
SCHEMA="${2:-dbo}"
# Exec sqlcmd
sqlcmd -S 'localhost\lemur' -E -i ./test.sql -v "db=${DB}" -v "schema=${SCHEMA}"
... and tested it like so:
$ ./test.sh master
k v
------- ------
-db master
-schema dbo
- static
