How to extract a table from a *.dump file into a CSV

I have a *.dump file (postgresql dump) and I would like to output my_table to my_table.csv. Is there a better way to do this than pg_restore -t my_table db.dump > my_table.txt and then writing a script to create the CSV from the output?

The output from pg_restore --data-only -t my_table db.dump is essentially tab-separated, headerless text, plus some comments and a few extra commands. A script to mangle it into CSV with a tool like perl or awk would be pretty simple.
That said, personally I would:
Restore the table to a temporary database created for the purpose. If the table depends on custom types, functions, sequences, etc., you will need to restore those too.
In psql, \copy the_table TO 'some_file.csv' WITH (FORMAT CSV, HEADER ON)
This way you can control the representation of nulls and lots more.
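Put together, the whole round-trip might look something like this (the scratch database name is illustrative):
createdb scratch_restore          # temporary database just for the extraction
pg_restore -d scratch_restore -t my_table db.dump
psql scratch_restore -c "\copy my_table TO 'my_table.csv' WITH (FORMAT CSV, HEADER ON)"
dropdb scratch_restore            # clean up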

Related

BCP from Linux to SQL Server

I have an Azure SQL Server database and a linux box. I have a csv file on the linux machine that I want to import into SQL Server. I have a table already created where I am going to import this file. I have the following questions -
1) Why does this command return an "Unknown argument: -S" error?
bcp table in ~/test.csv -S databaseServerName -d dbName -U myUsername -q -c -t
2) How do I import only part of the csv file? It has 20 columns, but I only want to import 2.
3) My table has these two columns: State, Province. My csv file has these two columns that I want to import: State, Region. How do I get Province to map to Region?
For #2 and #3, you need to use a BCP format file. This gives you column-level control over which fields from the file go to which columns in the destination and which are left behind (not given a destination).
Use the -f option of BCP and specify the location and name of the format file you want to use. Sorry, no help yet with #1; I have a few questions/suggestions, but I'm not that familiar with Linux environments.
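For illustration only, a non-XML format file that loads fields 1 and 3 of a CSV and skips the rest might look like this (sketched with 4 fields rather than your 20 to keep it short; the version number, widths, and collation are assumptions, and a server column of 0 means "skip this field"):
14.0
4
1   SQLCHAR   0   100   ","      1   State      SQL_Latin1_General_CP1_CI_AS
2   SQLCHAR   0   100   ","      0   skipped1   ""
3   SQLCHAR   0   100   ","      2   Province   SQL_Latin1_General_CP1_CI_AS
4   SQLCHAR   0   100   "\r\n"   0   skipped2   ""
You would then point bcp at it with something like bcp dbName.dbo.tableName in ~/test.csv -f test.fmt -S databaseServerName -U myUsername. Note this mapping also handles the rename: file field 3 lands in the Province column.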
For part 2 of your question, you can use the Linux cut command to extract just the columns you want. A short awk script can do the same thing (see this SO answer). For both of these, you'll have to identify the "State" and "Region" columns by number. A non-native solution is querycsv.py, which can also rename the "Region" column (disclaimer: I wrote querycsv.py).
For part 3 of your question, you can use the Linux sed command to change the column name on the first line of the CSV file, e.g., sed -e "1s/Region/Province/" file.csv >file2.csv.
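Combining the two, and assuming purely for illustration that State is column 1 and Region is column 5 of the 20 (note that cut does not understand quoted commas, so this only suits simple CSV data):
cut -d, -f1,5 file.csv | sed -e "1s/Region/Province/" > file2.csv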

SQL Server OPENROWSET error reading bcp file

I'm trying to transfer table data from one SQL Server to another, and I want to use the bcp utility for it. This is purely to transfer data between two identical schemas, but I'm not able to use something like SSDT; I need something scriptable and portable so it can be run by others with just SQL Server and SSMS access.
I am generating a native output file and format file like so:
$> bcp database.TableName OUT c:\data\bcp\TableName.bcp -T -N -S SQLINSTANCE
$> bcp database.TableName format nul -f c:\data\bcp\TableName.fmt -T -N
Then in Management Studio I am trying to in turn read the files like this:
SELECT *
FROM OPENROWSET (BULK 'c:\data\bcp\TableName.bcp',
                 FORMATFILE = 'c:\data\bcp\TableName.fmt') AS t1
But am getting this error:
The bulk load failed. The column is too long in the data file for row 6, column 19. Verify that the field terminator and row terminator are specified correctly.
I have followed this process before successfully, and it works for other tables. But I'm running into issue with this table. The column mentioned is of datatype nvarchar(max). I can inspect what I think is the "problem" record in the source data and it's just a very long string but I don't see anything else special about it.
Is there something else I should be doing when generating the format file or what else am I missing?
If you are only exporting for the purpose of importing into another SQL Server, native format is the way to go, and in this case you don't need format files. Just do a native export and import.
Note that you are specifying a capital -N, which is Unicode native format; plain native is lowercase -n.
You should export using something like:
bcp database.Schema.TableName OUT c:\data\bcp\TableName.bcp -T -n -S SQLINSTANCE
Then on the importing side I suggest using BULK INSERT, which doesn't need a format file for native data at all:
BULK INSERT TargetDB.dbo.TargetTable
FROM 'c:\data\bcp\TableName.bcp'
WITH (DATAFILETYPE = 'native');
If you can't use BULK INSERT and absolutely must go for OPENROWSET, you need a format file. bcp can generate that for you, but again, lowercase -n:
bcp database.Schema.TableName format nul -f c:\data\bcp\TableName.fmt -T -n -S SQLINSTANCE
Now your OPENROWSET should work.

Output data to file using Isql sybase

I am using below command to write data to csv file in isql
$ISQL -S DSA1_PROD -U emer_r_gh5432 -X
Query -
Select * from SecDb..LoginOwnerTb where SvrId= 45566 and OwnerRitsId = '1001167635';
OUTPUT TO '/tmp/sometable.csv' FORMAT ASCII DELIMITED BY ';' QUOTE '';
go
it says
Server 'ABC', Line 1:
Incorrect syntax near ';'.
Please help
NOTE: I'm assuming you're working with Sybase ASE and the isql command line tool. There may be other ways to accomplish what you're trying to do when going against the SQLAnywhere, IQ and/or Advantage database products ... *shrug* ...
The OUTPUT TO clause is used with the dbisql GUI tool.
To perform a somewhat-similar OUTPUT operation with the isql command line tool:
-- once logged in via isql ...
-- to write to new file; to overwrite existing file:
select ....
go > /path/to/local/file/accessible/by/user/running/isql
-- to append to existing file:
select ...
go >> /path/to/local/file/accessible/by/user/running/isql
To set the column delimiter you can use the -s flag when invoking isql from the command line, eg:
# set the column delimiter to a semi-colon:
$ isql ... -s ';' ...
# set the column delimiter to a pipe:
$ isql ... -s '|' ...
Keep in mind that the output will still be generated using fixed-width columns, with each column's width determined by either a) the column's datatype 'width' or b) the column's title/label width, whichever is wider.
I'm not aware of any way to perform the following with the isql command line tool:
designate a column delimiter on-the-fly while inside a isql session
designate a quote character
remove extra spaces (ie, output data in true delimited format as opposed to fixed-width format)
To generate true delimited files you have a few options:
see if the dbisql GUI tool serves your purpose [I don't use dbisql so I'm *assuming* the OUTPUT TO clause works as expected]
use the bcp (command line) utility to place the data into a delimited file; a minimal sketch follows this list [bcp options, and how to handle subsets of tables, is a much larger discussion, ie, too much to address in this response]
see if you can find another (3rd party) tool that can extract the desired data set to a delimited file
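As a rough sketch of the bcp route, reusing the server and login from the question: bcp copies out an entire table, so you would first have to materialize your query's rows into a scratch table, then export that in character mode with your chosen delimiter:
# character-mode export, semi-colon delimited (flags assume Sybase ASE bcp)
bcp SecDb..LoginOwnerTb out /tmp/sometable.csv -c -t ';' -S DSA1_PROD -U emer_r_gh5432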

SQL - Automatic results to CSV or Text File

I was wondering if anyone can help.
I have a number of queries in SQL (all in separate *.sql files). I wanted to know if there is a way to run these queries automatically, or run them in bulk, and save the results to either a csv or txt file?
Also, I have some variables within these queries which will need to be amended on a weekly basis before the queries are run.
Thanks.
KJ
Could you please provide some additional help in relation to the variables? Previously I would declare and set variables as:
DECLARE @TW_FROM DATETIME
DECLARE @TW_TO DATETIME
SET @TW_FROM = '2015-11-16 00:00:00';
SET @TW_TO = '2015-11-22 23:00:00';
How do I do this using sqlcmd?
Yes, you can use sqlcmd to do this.
First of all, variables. You can refer to your variables in the .sql files using $(variablename) wherever you want to substitute the variable. For example,
use $(dbname);
select $(columnname) from table1 where column = '$(var1)'
You then call sqlcmd with the following command (note the -v argument that introduces the variables):
sqlcmd -S servername -d database -i "yoursqlfile.sql" -v dbname="database" columnname="column" var1="Fred"
In order to output this to a file, you append > filename.txt to the end:
sqlcmd -S servername -d database -i "yoursqlfile.sql" -v dbname="database" columnname="column" var1="Fred" > filename.txt
If you want to output to a csv, you can also specify the delimiter using the -s argument (note the difference from the capital -S used for the server). So now we have:
sqlcmd -S servername -d database -s "," -i "yoursqlfile.sql" -v dbname="database" columnname="column" var1="Fred" > filename.csv
If you want to output several commands to the same csv or txt file, use >> instead of >, as it appends to the bottom of the file rather than replacing it.
sqlcmd -S servername -d database -s "," -i "yoursqlfile.sql" -v dbname="database" columnname="column" var1="Fred" >> filename.csv
To run this for several scripts, you can put the statements in a batch file, and then change the variables every week.
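Applied to the variables in the question: drop the DECLARE/SET lines, reference $(TW_FROM) and $(TW_TO) inside the .sql file instead, and pass the dates each week on the command line. The file name and date column below are hypothetical:
-- inside weekly_report.sql (hypothetical file)
select * from table1 where SomeDateColumn between '$(TW_FROM)' and '$(TW_TO)'
and then run:
sqlcmd -S servername -d database -s "," -i "weekly_report.sql" -v TW_FROM="2015-11-16 00:00:00" TW_TO="2015-11-22 23:00:00" > weekly_report.csv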
You could write a batch file that uses sqlcmd:
MSDN sqlcmd
That will allow you to call script files in a loop and output the results to a file.
Convert your current scripts to a stored procedure.
You can then pass your variables to that and run the query (a minimal sketch follows below).
If you have SQL Server agent available (SQL standard or better) you can use this to automate the running of the stored procedures.
Otherwise the same can be achieved with Task Scheduler in Windows.
As for exporting to CSV this will be useful.
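A minimal sketch of the stored-procedure approach (table and column names are hypothetical):
CREATE PROCEDURE dbo.WeeklyReport
    @TW_FROM DATETIME,
    @TW_TO   DATETIME
AS
BEGIN
    -- the body holds your existing query, filtered by the passed-in window
    SELECT *
    FROM dbo.SomeTable            -- hypothetical table name
    WHERE EventTime >= @TW_FROM   -- hypothetical column name
      AND EventTime < @TW_TO;
END
SQL Server Agent (or Task Scheduler calling sqlcmd) would then run EXEC dbo.WeeklyReport '2015-11-16 00:00:00', '2015-11-22 23:00:00'; each week.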
It depends on where your SQL Server is actually running. It might be quite tricky to write anything to the location you want.
You could read about BCP.
My suggestion is:
Create a UDF (ideally an inline UDF!) from each of your queries within your database. Then call them from Excel or any other fitting product. You might set up an Excel workbook where each query automatically fills its own sheet.

How do I restore one database from a mysqldump containing multiple databases?

I have a mysql dump with 5 databases and would like to know if there is a way to import just one of those (using mysqldump or other).
Suggestions appreciated.
You can use the mysql command line --one-database option (run from the shell, not the mysql prompt):
$ mysql -u root -p --one-database YOURDBNAME < YOURFILE.SQL
Of course be careful when you do this.
You can also use a mysql dumpsplitter.
You can pipe the dumped SQL through sed and have it extract the database for you. Something like:
cat mysqldumped.sql | \
sed -n -e '/^CREATE DATABASE.*`the_database_you_want`/,/^CREATE DATABASE/ p' | \
sed -e '$d' | \
mysql
The two sed commands:
Only print the lines matching between the CREATE DATABASE lines (including both CREATE DATABASE lines), and
Delete the last CREATE DATABASE line from the output since we don't want mysqld to create a second database.
If your dump does not contain the CREATE DATABASE lines, you can also match against the USE lines.
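A minimal sketch of that variant, assuming the dump writes backquoted database names in its USE statements:
sed -n -e '/^USE `the_database_you_want`/,/^USE `/ p' mysqldumped.sql | sed -e '$d' > one_database.sql
mysql -u root -p the_database_you_want < one_database.sql
If the database you want is the last one in the dump there is no trailing USE line, so skip the sed -e '$d' step.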
