Trying to Export Tables to CSVs from SQL Server - sql-server

I ran the following script to try to get all the tables in my DB exported (trying to back up the data as CSVs).
SELECT 'sqlcmd -S . -d '+DB_NAME()+' -E -s, -W -Q "SET NOCOUNT ON; SELECT * FROM '+table_schema+'.'+TABLE_name+'" > "C:\Temp\'+Table_Name+'.csv"'
FROM [INFORMATION_SCHEMA].[TABLES]
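For reference, each row returned by that SELECT is one complete sqlcmd command. With a hypothetical database MyDb and table dbo.Customers, a generated line looks like this:
sqlcmd -S . -d MyDb -E -s, -W -Q "SET NOCOUNT ON; SELECT * FROM dbo.Customers" > "C:\Temp\Customers.csv"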
I saved the results as a batch file and ran the batch file as Administrator.
That runs without an error, but I get no data exported. All it does is create blank CSV files.
I ran this as well: EXEC sp_configure 'remote access', 1; RECONFIGURE;
Still nothing is exported; the CSVs are created, but they remain empty...
Any thoughts?

I ended up using R to do the task...
library("RODBC")
conn <- odbcDriverConnect('driver={SQL Server};server=Server_Name;database=DB_Name;trusted_connection=true')
data <- sqlQuery(conn, "SELECT * FROM DB.dbo.TBL#1")
write.csv(data, file="C:/Users/TBL#1.csv", row.names=FALSE)
data <- sqlQuery(conn, "SELECT * FROM DB.dbo.TBL#2")
write.csv(data, file="C:/Users/TBL#2.csv", row.names=FALSE)
Gotta love the IT teams in corporate America...especially when they lock down your system so tight, you need to come up with all kinds of weird hacks just so you can do the job that you were hired to do...
Is there a word for negative synergy?

Related

SNOWFLAKE SQL: How to run several select statements in one command

I am a newbie in the SQL world and am hoping SMEs in the community can help me.
What I am trying to solve for: several 'select' statements that run on a weekly basis with one call, with the results downloaded to my computer (bonus if they can be downloaded to a specific folder).
How I am doing it right now: I use Snowflake (that's the approved software we are using), run each 'select' statement one at a time, and once each result is displayed I manually download the CSV file to my computer.
I know there's an efficient way of doing it, so would appreciate the help from this community. Thank you in advance.
You can run multiple queries at once using the SnowSQL client. Here's how.
Preparation:
Set up the SnowSQL connection configuration.
open file: .snowsql/config
add your connection details:
[connections.my_sample_connection]
accountname = mysnowflakeaccount
username = myusername
password = mypassword
warehousename = mywarehouse
Create your query file.
e.g. my_query.sql
-- Query 1
select 1 col1, 2 col2;
-- Query 2
select 'a' colA, 'b' colB;
Execution:
snowsql -c my_sample_connection -f my_query.sql -o output_file=/path/query_result.csv -o timing=false -o friendly=false -o output_format=csv
Result:
/path/query_result.csv, containing the results of the queries in my_query.sql
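Since the goal is a weekly run, the same snowsql invocation can simply be scheduled. A minimal sketch, assuming a Unix-like machine with cron and the connection name and file paths above (the /path/reports folder is illustrative; on Windows, Task Scheduler can run the equivalent command):
# crontab entry: every Monday at 07:00, write the CSV into a specific folder
0 7 * * 1 snowsql -c my_sample_connection -f /path/my_query.sql -o output_file=/path/reports/query_result.csv -o timing=false -o friendly=false -o output_format=csv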

SQL Server R Services - outputting data to database table, performance

I noticed that the rx* functions (e.g. rxKmeans, rxDataStep) insert data into a SQL Server table in a row-by-row fashion when the outFile parameter is set to a table. This is obviously very slow, and something like a bulk insert would be desirable instead. Can this be achieved, and if so, how?
Currently I am trying to insert about 14 million rows into a table by invoking the rxKmeans function with the outFile parameter specified, and it takes about 20 minutes.
Example of my code:
clustersLogInitialPD <- rxKmeans(formula = ~LogInitialPD
,data = inDataSource
,algorithm = "Lloyd"
,centers = start_c
,maxIterations = 1
,outFile = sqlLogPDClustersDS
,outColName = "ClusterNo"
,overwrite = TRUE
,writeModelVars = TRUE
,extraVarsToWrite = c("LoadsetId", "ExposureId")
,reportProgress = 0
)
sqlLogPDClustersDS points to a table in my database.
I am working on SQL Server 2016 SP1 with R Services installed and configured (both in-database and standalone). Generally everything works fine except for the terrible performance of writing rows to database tables from an R script.
Any comments will be greatly appreciated.
I brought this up on this Microsoft R MSDN forum thread recently as well.
I ran into this problem and I'm aware of 2 reasonable solutions.
Option 1: Use the sp_execute_external_script output data frame option
/* Time writing data back to SQL from R */
SET STATISTICS TIME ON
IF object_id('tempdb..#tmp') IS NOT NULL
DROP TABLE #tmp
CREATE TABLE #tmp (a FLOAT NOT NULL, b INT NOT NULL );
DECLARE @numRows INT = 1000000
INSERT INTO #tmp (a, b)
EXECUTE sys.sp_execute_external_script
@language = N'R'
,@script = N'OutputDataSet <- data.frame(a=rnorm(numRows), b=1)'
,@input_data_1 = N''
,@output_data_1_name = N'OutputDataSet'
,@params = N'@numRows INT'
,@numRows = @numRows
GO
-- ~7-8 seconds for 1 million row insert (2 columns) on my server
-- rxDataStep for 100K rows takes ~45 seconds on my server
Option 2: Use SQL Server bcp.exe or BULK INSERT (only if running on the SQL box itself) after first writing the data frame to a flat file
I've written some code that does this, but it's not very polished, and I've had to leave sections with <<<VARIABLE>>> placeholders that assume connection string information (server, database, schema, login, password). If you find this useful or find any bugs, please let me know. I'd also love to see Microsoft incorporate the ability to save data from R back to SQL Server using the BCP APIs. Option (1) above only works via sp_execute_external_script. Basic testing also leads me to believe that bcp.exe can be roughly twice as fast as option (1) for a million rows. BCP results in a minimally-logged SQL operation, so I'd expect it to be faster.
# Creates a bcp file format function needed to insert data into a table.
# This should be run one-off during code development to generate the format needed for a given task and saved in the .R file that uses it
createBcpFormatFile <- function(formatFileName, tableName) {
# Command to generate BCP file format for importing data into SQL Server
# https://msdn.microsoft.com/en-us/library/ms162802.aspx
# format creates a format file based on the option specified (-n, -c, -w, or -N) and the table or view delimiters. When bulk copying data, the bcp command can refer to a format file, which saves you from re-entering format information interactively. The format option requires the -f option; creating an XML format file, also requires the -x option. For more information, see Create a Format File (SQL Server). You must specify nul as the value (format nul).
# -c Performs the operation using a character data type. This option does not prompt for each field; it uses char as the storage type, without prefixes and with \t (tab character) as the field separator and \r\n (newline character) as the row terminator. -c is not compatible with -w.
# -x Used with the format and -f format_file options, generates an XML-based format file instead of the default non-XML format file. The -x does not work when importing or exporting data. It generates an error if used without both format and -f format_file.
## Bob: -x not used because we currently target bcp version 8 (default odbc driver compatibility that is installed everywhere)
# -f If -f is used with the format option, the specified format_file is created for the specified table or view. To create an XML format file, also specify the -x option. For more information, see Create a Format File (SQL Server).
# -t field_term Specifies the field terminator. The default is \t (tab character). Use this parameter to override the default field terminator. For more information, see Specify Field and Row Terminators (SQL Server).
# -S server_name [\instance_name] Specifies the instance of SQL Server to which to connect. If no server is specified, the bcp utility connects to the default instance of SQL Server on the local computer. This option is required when a bcp command is run from a remote computer on the network or a local named instance. To connect to the default instance of SQL Server on a server, specify only server_name. To connect to a named instance of SQL Server, specify server_name\instance_name.
# -U login_id Specifies the login ID used to connect to SQL Server.
# -P -P password Specifies the password for the login ID. If this option is not used, the bcp command prompts for a password. If this option is used at the end of the command prompt without a password, bcp uses the default password (NULL).
bcpPath <- .pathToBcpExe()
parsedTableName <- parseName(tableName)
# We can't use the -d option for BCP and instead need to fully qualify a table (database.schema.table)
# -d database_name Specifies the database to connect to. By default, bcp.exe connects to the user’s default database. If -d database_name and a three part name (database_name.schema.table, passed as the first parameter to bcp.exe) is specified, an error will occur because you cannot specify the database name twice.If database_name begins with a hyphen (-) or a forward slash (/), do not add a space between -d and the database name.
fullyQualifiedTableName <- paste0(parsedTableName["dbName"], ".", parsedTableName["schemaName"], ".", parsedTableName["tableName"])
bcpOptions <- paste0("format nul -c -f ", formatFileName, " -t, ", .bcpConnectionOptions())
commandToRun <- paste0(bcpPath, " ", fullyQualifiedTableName, " ", bcpOptions)
result <- .bcpRunShellThrowErrors(commandToRun)
}
# Save a data frame (data) using file format (formatFilePath) to a table on the database (tableName)
bcpDataToTable <- function(data, formatFilePath, tableName) {
numRows <- nrow(data)
# write file to disk
ptm <- proc.time()
tmpFileName <- tempfile("bcp", tmpdir=getwd(), fileext=".csv")
write.table(data, file=tmpFileName, quote=FALSE, row.names=FALSE, col.names=FALSE, sep=",")
# Bob: note that one can make this significantly faster by switching over to use the readr package (readr::write_csv)
#readr::write_csv(data, tmpFileName, col_names=FALSE)
# bcp file to server time start
mid <- proc.time()
bcpPath <- .pathToBcpExe()
parsedTableName <- parseName(tableName)
# We can't use the -d option for BCP and instead need to fully qualify a table (database.schema.table)
# -d database_name Specifies the database to connect to. By default, bcp.exe connects to the user’s default database. If -d database_name and a three part name (database_name.schema.table, passed as the first parameter to bcp.exe) is specified, an error will occur because you cannot specify the database name twice.If database_name begins with a hyphen (-) or a forward slash (/), do not add a space between -d and the database name.
fullyQualifiedTableName <- paste0(parsedTableName["dbName"], ".", parsedTableName["schemaName"], ".", parsedTableName["tableName"])
bcpOptions <- paste0(" in ", tmpFileName, " ", .bcpConnectionOptions(), " -f ", formatFilePath, " -h TABLOCK")
commandToRun <- paste0(bcpPath, " ", fullyQualifiedTableName, " ", bcpOptions)
result <- .bcpRunShellThrowErrors(commandToRun)
cat(paste0("time to save dataset to disk (", numRows, " rows):\n"))
print(mid - ptm)
cat(paste0("overall time (", numRows, " rows):\n"))
proc.time() - ptm
unlink(tmpFileName)
}
# Examples:
# createBcpFormatFile("test2.fmt", "temp_bob")
# data <- data.frame(x=sample(1:40, 1000, replace=TRUE))
# bcpDataToTable(data, "test2.fmt", "test_bcp_1")
#####################
# #
# Private functions #
# #
#####################
# Path to bcp.exe. bcp.exe is currently from version 8 (SQL 2000); newer versions depend on newer SQL Server ODBC drivers and are harder to copy/paste distribute
.pathToBcpExe <- function() {
paste0(<<<bcpFolder>>>, "/bcp.exe")
}
# Function to convert warnings from shell into errors always
.bcpRunShellThrowErrors <- function(commandToRun) {
tryCatch({
shell(commandToRun)
}, warning=function(w) {
conditionMessageWithoutPassword <- gsub(<<<connectionStringSqlPassword>>>, "*****", conditionMessage(w), fixed=TRUE) # Do not print SQL passwords in errors
stop("Converted from warning: ", conditionMessageWithoutPassword)
})
}
# The connection options needed to establish a connection to the client database
.bcpConnectionOptions <- function() {
if (<<<useTrustedConnection>>>) {
return(paste0(" -S ", <<<databaseServer>>>, " -T"))
} else {
return(paste0(" -S ", <<<databaseServer>>>, " -U ", <<<connectionStringLogin>>>," -P ", <<<connectionStringSqlPassword>>>))
}
}
###################
# Other functions #
###################
# Mirrors SQL Server parseName function
parseName <- function(databaseObject) {
splitName <- strsplit(databaseObject, '.', fixed=TRUE)[[1]]
if (length(splitName)==3){
dbName <- splitName[1]
schemaName <- splitName[2]
tableName <- splitName[3]
} else if (length(splitName)==2){
dbName <- <<<databaseName>>>
schemaName <- splitName[1]
tableName <- splitName[2]
} else if (length(splitName)==1){
dbName <- <<<databaseName>>>
schemaName <- ""
tableName <- splitName[1]
}
return(c(tableName=tableName, schemaName=schemaName, dbName=dbName))
}

Extract one by one data from Database through Shell Script

I have to code this in Korn shell. I have to take data from one database, create "insert into" statements in a .sql file, and then run that .sql file against another database.
There are 24 columns in the table, and I'm not able to extract the data from that table one by one in order to create the insert into statements.
Can anyone help me with this?
I have written the following code so far (just a sample, with data from two columns):
$ cat analysis.sh
#!/bin/ksh
function sqlQuery {
ied sqlplus -s / << 'EOF'
DEFINE DELIMITER='${TAB_SPACE}'
set heading OFF termout ON trimout ON feedback OFF
set pagesize 0
SELECT ID, H00
FROM SW_ABC
WHERE ID=361140;
EOF
}
eval x=(`sqlQuery`)
ID=${x[0]}
HOUR=${x[1]}
echo ID is $ID
echo HOUR is $HOUR
But here eval is not working.
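For what it's worth, here is a minimal ksh sketch of one way to capture the columns without eval, reusing the sqlQuery function above and assuming it prints a single row of whitespace-separated values. In ksh (unlike bash) the last stage of a pipeline runs in the current shell, so the variables set by read survive:
sqlQuery | read ID HOUR      # read runs in the current shell in ksh
echo "ID is $ID"
echo "HOUR is $HOUR"
# build an insert statement for the other database (column list is illustrative)
print "INSERT INTO SW_ABC (ID, H00) VALUES ($ID, $HOUR);" >> inserts.sql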

mysqldump table per *.sql file batch script

I have done some digging around and I cannot find a way to make mysqldump create a file per table. I have about 100 tables (and growing) that I would like to be dumped into separate files without having to write a new mysqldump line for each table I have.
E.g. instead of my_huge_database_file.sql which contains all the tables for my DB.
I'd like mytable1.sql, mytable2.sql etc etc
Does mysqldump have a parameter for this or can it be done with a batch file? If so how.
It is for backup purposes.
I think I may have found a workaround, and that is to make a small PHP script that fetches the names of my tables and runs mysqldump on each one using exec().
$result = $dbh->query("SHOW TABLES FROM mydb");
while ($row = $result->fetch()) {
    exec('c:\Xit\xampp\mysql\bin\mysqldump.exe -uroot -ppw mydb '.$row[0].' > c:\dump\\'.$row[0].'.sql');
}
In my batch file I then simply do:
php mybackupscript.php
Instead of the SHOW TABLES command, you could query the INFORMATION_SCHEMA database. This way you can easily dump every table for every database and also know how many tables there are in a given database (e.g. for logging purposes). In my backup, I use the following query:
SELECT DISTINCT CONVERT(`TABLE_SCHEMA` USING UTF8) AS 'dbName'
, CONVERT(`TABLE_NAME` USING UTF8) AS 'tblName'
, (SELECT COUNT(`TABLE_NAME`)
FROM `INFORMATION_SCHEMA`.`TABLES`
WHERE `TABLE_SCHEMA` = dbName
GROUP BY `TABLE_SCHEMA`) AS 'tblCount'
FROM `INFORMATION_SCHEMA`.`TABLES`
WHERE `TABLE_SCHEMA` NOT IN ('INFORMATION_SCHEMA', 'PERFORMANCE_SCHEMA', 'mysql')
ORDER BY dbName ASC
, tblName ASC;
You could also add a condition to the WHERE clause, such as TABLE_TYPE != 'VIEW', to make sure that views do not get dumped.
I can't test this, because I don't have a Windows MySQL installation, but this should point you in the right direction:
@echo off
mysql -u user -pyourpassword database -e "show tables;" > tables_file
for /f "skip=3 delims=|" %%TABLE in (tables_file) do (mysqldump -u user -pyourpassword database %%TABLE > %%TABLE.sql)

Percona's pt-table-sync: how to run on more than one table?

In the command line, this will successfully update table1:
pt-table-sync --execute h=host1,D=db1,t=table1 h=host2,D=db2
However if I want to update more than one table, I'm not sure how to write it. This only updates table1 as well and ignores the other tables:
pt-table-sync --execute h=host1,D=db1,t=table1,table2,table3 h=host2,D=db2
And this gives me an error:
pt-table-sync --execute h=host1,D=db1 --tables table1,table2,table3 h=host2,D=db2
Does anyone have an example of how to list the '--tables'... so that it successfully updates all the tables in the list?
The --tables option seems to be incompatible with the DSN notation; you get this error:
You specified a database but not a table in h=localhost,D=test.
Are you trying to sync only tables in the 'test' database?
If so, use '--databases test' instead.
As suggested in that error message, you can use --databases and then you can use --tables successfully.
For example, I created tables test.foo and test.bar, filled each with three rows, then deleted the rows from test.bar on the second server dewey.
I ran this:
$ pt-table-sync h=huey h=dewey --databases test --tables foo,bar --execute --verbose
# Syncing h=dewey
# DELETE REPLACE INSERT UPDATE ALGORITHM START END EXIT DATABASE.TABLE
# 0 0 3 0 Chunk 15:26:15 15:26:15 2 test.bar
# 0 0 0 0 Chunk 15:26:15 15:26:15 0 test.foo
It successfully re-inserted the 3 missing rows in test.bar.
Other tables in my test database were ignored.
This is an old question, but I searched everywhere for an answer. pt-table-sync only does one table. There is no tool that does the same thing to a list of tables or a full database schema. Specifically I want to run a Live server and be able to sync back to a Staging server, then edit code and files in the Staging server without fear of messing up Live or being overwritten by Live... and I want it to be free :)
I ended up writing a shell script called mysql_sync_live_to_stage.sh as follows:
#!/bin/bash
# sync db live to staging
error_log_file='./mysql_sync_errors.log'
echo $(date +"%Y %m %d %H:%M") > $error_log_file
function sync_table()
{
  pt-table-sync --no-foreign-key-checks --execute \
    h=DB_1_HOST,u=DB_1_USER,p=DB_1_PASSWORD,D=$1,t=$3 \
    h=DB_2_HOST,u=DB_2_USER,p=DB_2_PASSWORD,D=$2,t=$3 >> $error_log_file
}
# SYNC ALL TABLES IN name_of_live_database
mysql -h "DB_1_HOST" -u "DB_1_USER" -pDB_1_PASSWORD -D "DB_1_DBNAME" -e "SHOW TABLES" |
egrep -i '[0-9a-z\-\_]+' | egrep -i -v 'Tables_in' | while read -r table ; do
echo "Processing $table"
sync_table "name_of_live_database" "name_of_staging_database" $table
done
# FIX Config Settings For Staging
echo "Cleanup Queries..."
mysql -h "DB_2_HOST" -u "DB_2_USER" -pDB_2_PASSWORD -D "DB_2_DBNAME"
-e "UPDATE name_of_staging_database.nameofmyconfigtable SET value='bar'
WHERE config_id='foo'"
mysql -h "DB_2_HOST" -u "DB_2_USER" -pDB_2_PASSWORD -D "DB_2_DBNAME"
-e "UPDATE name_of_staging_database.nameofmyconfigtable SET value='bar2'
WHERE config_id='foo2'"
echo "Done"
This reads a list of table names from the live site then executes a sync on each one via the do loop. It goes through the list alphabetically, so I recommend keeping the --no-foreign-key-checks flag.
It's not perfect... it won't sync tables that don't exist in both databases, but when combined with a "git pull -f origin master" I get a complete sync in a couple of minutes.
