FreeBCP to SQL Azure complains table doesn't exist

$ freebcp DWSTAGE.BCPTEST in bcptest.txt -f cdr.fmt -S serverfromfreetds -U user@azureserver -P password
Msg 208, Level 16, State 1
Server 'azureserver', Line 1
Invalid object name 'DWSTAGE.BCPTEST'.
Msg 208, Level 16
General SQL Server error: Check messages from the SQL Server
Msg 20064, Level 2
Attempt to use Bulk Copy with a non-existent Server table
$ freebcp DATABASENAME.DWSTAGE.BCPTEST in bcptest.txt -f cdr.fmt -S serverfromfreetds -U user@azureserver -P password
Msg 40515, Level 15, State 1
Server 'azureserver', Line 16
Reference to database and/or server name in 'DATABASENAME.DWSTAGE.BCPTEST' is not supported in this version of SQL Server.
Msg 40515, Level 15
General SQL Server error: Check messages from the SQL Server
Msg 20064, Level 2
Attempt to use Bulk Copy with a non-existent Server table
I've also tried adding the database to the command line with the -D option. The default database for that connection is set up as this one and only Azure database in the freetds.conf.
The connection to SQL Azure seems fine otherwise - I just can't get FreeBCP to work:
$ isql serverfromfreetds user@azuredatabasename password
+---------------------------------------+
| Connected! |
| |
| sql-statement |
| help [tablename] |
| quit |
| |
+---------------------------------------+
SQL> SELECT COUNT(*) FROM DWSTAGE.BCPTEST;
+------------+
| |
+------------+
| 0 |
+------------+
SQLRowCount returns 1
1 rows fetched
SQL> SELECT COUNT(*) FROM DWSTAGE.BCPTESTX;
[ISQL]ERROR: Could not SQLExecute
SQL>
This seems like some database/schema confusion, but I can't find a combination of settings which works.

From the FreeTDS mailing list:
The message 208 comes from the server. A quick look at freebcp.c shows
argv[1] isn't parsed. It's copied to a struct and used verbatim e.g.
if (dbfcmd(dbproc, "SET FMTONLY ON select * from %s SET FMTONLY OFF",
           pdata->dbobject) == FAIL)
My guess is that the account you're logging in with has a default
database, and that database is not the one containing DWSTAGE.BCPTEST.
The Azure server rejects dbname.schema.object syntax, and freebcp has
no -D option because until Azure every TDS server did accept that
syntax.
You could verify that using
$ freebcp 'select db_name()' queryout /dev/stdout ...
As a temporary workaround, I think this would work:
freebcp DWSTAGE.BCPTEST in bcptest.txt \
-O 'USE dbname' \
-f cdr.fmt -S serverfromfreetds -U user@azureserver -P password
A permanent fix would support -D.
Yes, that user's default database is probably master. There IS a
default database set up in the odbc config, but I was mistaken, there
is no such option in freetds.conf.
We've moved away from trying to get Linux to work for this process for
now, but I'll revisit this.
I expect that workaround will not work because USE isn't supported -
you do in fact have to connect directly to the database, because of
the nature of the SQL Azure architecture.
Yes, you're right. I forgot about that.
About a year ago we added the DBSETLDBNAME macro as a way to set the
dbname in the db-lib LOGINREC. That sets the dbname in the login
packet, obviating the need for "USE dbname". freebcp could be modified
to support that feature with a -D option.
See change
http://gitorious.org/freetds/freetds/commit/4a21ded022405693607e71938d0c6173816f5ff9/diffs/c34afafd2fec4cbba9b245e4f13a5471c6fb8041
(add support for -D in freebcp)
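For illustration, here is a minimal db-lib sketch of what that change enables, assuming a FreeTDS build where DBSETLDBNAME is available (the function name and error handling here are mine, not from the commit):
#include <sybfront.h>
#include <sybdb.h>

/* Connect with the database name carried in the login packet,
   so no "USE dbname" round trip is needed (Azure rejects USE). */
DBPROCESS *connect_with_dbname(const char *server, const char *user,
                               const char *pass, const char *dbname)
{
    LOGINREC *login;

    if (dbinit() == FAIL)
        return NULL;

    login = dblogin();
    DBSETLUSER(login, user);
    DBSETLPWD(login, pass);
    DBSETLDBNAME(login, dbname); /* dbname goes into the login packet */

    return dbopen(login, server);
}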

I have gotten freebcp working with Ubuntu 12.04 and Azure SQL. This follows on from the answer above, where support for -D is mentioned.
First I started by installing and configuring freetds as described in this thread: https://askubuntu.com/questions/167491/connecting-ms-sql-using-freetds-and-unixodbc-isql-no-default-driver-specified
Basically, that consisted of:
sudo apt-get install tdsodbc unixodbc freetds-bin
That brought me to the point where this thread starts: freebcp could connect, but I was getting the "Reference to database...not supported in this version of SQL Server" error.
The next step is to note that the version of freebcp installed by apt-get is quite old. Instead, download a more recent version from ftp://ftp.freetds.org/pub/freetds/stable/. I used freetds-1.00.9.tar.gz.
Build it with ./configure and make, then use the new freebcp just the same, but with -D.
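For reference, the build from source goes roughly like this (a sketch; the tarball name matches the version mentioned above, and installing is optional since you can run freebcp straight from the build tree):
tar xzf freetds-1.00.9.tar.gz
cd freetds-1.00.9
./configure
make
# optional: sudo make install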
Here is my command string: freetds-1.00/src/apps/freebcp bcptest in bcptest.big.dat -U myUserName@myServer -S MyServerConfig -P 'mypa$$word' -D myDatabase -c -e upload.err

Related

How to make sqlcmd get file access rights on Linux?

I installed SQL Server on Ubuntu 16.04 two days ago. Using sqlcmd for a bulk insert, I got:
Msg 4860, Level 16, State 1, Line 6 Cannot bulk load. The file
"~/test_data.txt" does not exist or you don't have file access rights.
Yes, the file did exist; I made sure of it using the cat command.
Then I tried the bcp tool, but I got:
SQLState = S1000, NativeError = 0 Error = [Microsoft][ODBC Driver 13
for SQL Server]Unable to open BCP host data-file
I also tried installing Visual Studio Code and adding the mssql extension, but I got the same "file access rights" warning. I had already used chmod 777 trying to fix it; that didn't work.
Bulk insert command in sqlcmd:
BULK INSERT TestEmployees FROM '~/test_data.txt'
WITH(
rowterminator = ','
);
Command with the bcp tool:
bcp auth in path/auth2.tsv -S localhost -U sa -P <my password> -d Trabalho1BD -c
I think your problem is the '~/test_data.txt' part of your bulk insert command. Specifically, that says "find a file called test_data.txt in the home directory". But whose home directory? Not yours! It's looking for the file in the home directory of the account that's running your SQL Server. Try changing that to a full path (e.g. '/home/«username»/test_data.txt') and that should do it.
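For example, a corrected version of the BULK INSERT above might look like this («username» is a placeholder for the account that owns the file; the SQL Server service account also needs read permission on that path):
BULK INSERT TestEmployees FROM '/home/«username»/test_data.txt'
WITH(
rowterminator = ','
);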

BCP neither gives results nor outputs anything when using valid statements but it does throw errors when passing invalid parameters

I have to use bcp command-line tool to export data from an SQL Server database to a file in a Red Hat server.
I am (apparently) using valid statements but bcp is not producing any kind of output/results.
However, when I execute statements with missing or invalid parameters it displays the respective error.
I am looking for the reason of this issue (e.g. defective installation, bad usage of bcp, lack of permissions or any other known conflict) and how to fix it.
bcp statement:
bcp fully_qualified_table_name out ./data.txt -c -S server -U user -P password
bcp usage:
usage: /opt/microsoft/bin/bcp {dbtable | query} {in | out | queryout | format} datafile
[-m maxerrors] [-f formatfile] [-e errfile]
[-F firstrow] [-L lastrow] [-b batchsize]
[-n native type] [-c character type] [-w wide character type]
[-N keep non-text native] [-q quoted identifier]
[-t field terminator] [-r row terminator]
[-a packetsize] [-K application intent]
[-S server name or DSN if -D provided] [-D treat -S as DSN]
[-U username] [-P password]
[-T trusted connection] [-v version] [-R regional enable]
[-k keep null values] [-E keep identity values]
[-h "load hints"] [-d database name]
bcp version:
BCP - Bulk Copy Program for Microsoft SQL Server.
Copyright (C) Microsoft Corporation. All Rights Reserved.
Version: 11.0.2270.0
SQL Server version (SELECT @@VERSION):
Microsoft SQL Server 2012 - 11.0.5058.0 (X64)
May 14 2014 18:34:29
Copyright (c) Microsoft Corporation
Enterprise Edition: Core-based Licensing (64-bit) on Windows NT 6.3 <X64> (Build 9600: ) (Hypervisor)
Distribution:
Red Hat Enterprise Linux 6.7 (KornShell).
Invalid statements with respective error message (examples).
bcp THAT_TUB_ACE.oh_nerd.table_name out ./data.txt -c -S sr._bear -U you_sr. -P pass_sword
SQLState = S1T00, NativeError = 0
Error = [unixODBC][Microsoft][ODBC Driver 11 for SQL Server]Login timeout expired
SQLState = 08001, NativeError = 11001
Error = [unixODBC][Microsoft][ODBC Driver 11 for SQL Server]A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.
SQLState = 08001, NativeError = 11001
Error = [unixODBC][Microsoft][ODBC Driver 11 for SQL Server]TCP Provider: Error code 0x2AF9
...
bcp fully_qualified_table_name out ./data.txt -c -S valid_server -U valid_user -P bad_word
SQLState = 28000, NativeError = 18456
Error = [unixODBC][Microsoft][ODBC Driver 11 for SQL Server][SQL Server]Login failed for user 'valid_user'.
SUMMARY.
The objective is to generate a datafile using the following syntax (or similar):
bcp fully_qualified_table_name out ./data.txt -c -S server -U user -P password
The facts are:
When running a valid bcp statement, there is nothing in the terminal at all (no output) and no datafile is created.
I cannot use option -T (trusted connection using integrated security) for bcp so I have to specify the server, user and password.
I already tried the queryout option on a very small, simple table, but still no luck.
Credentials are valid, I successfully tested them using sqlcmd like the following: sqlcmd -S server -U user -P password -Q 'SELECT * FROM really_small_table'.
The bcp statements under the "Invalid statements with respective error message (examples)" section of this question are just examples of invalid statements, to show that bcp actually does something, just not with the expected results.
Hopefully you've already solved your problem, but I had a similar problem configuring PyODBC for Python 3.4 on RedHat 7.1. I was trying to connect to Microsoft SQL Server 2016. Hopefully this will help someone else.
The real problem for me was ODBC/FreeTDS configuration. Here are the installation steps I took and the eventual solution.
1. Install Microsoft's ODBC driver for Red Hat. This likely isn't required, but I list it here since I never removed it from my Red Hat machine for fear of breaking ODBC.
2. Install ODBC and the ODBC development files: yum install unixODBC-devel.x86_64
3. Install FreeTDS: yum install freetds.x86_64
4. Update the config files. Steps 1-3 took about 10 minutes. I spent a very frustrating day trying various configurations; I was usually able to connect via tsql (FreeTDS's debugging tool, more on that below) but never via Python. I eventually stumbled upon a blog that pointed me to the correct driver via ldconfig -p | grep libtdsodbc (I believe this points to the FreeTDS driver, which is why Microsoft's driver probably isn't needed).
The eventual configuration I ended up with is as follows:
/etc/odbc.ini (note: I had to create this file as the ODBC/FreeTDS installation process didn't create it for me.)
[SQLServer]
Description = TDS driver (Sybase/MS SQL)
Driver = SQLServer
Servername = your_sql_hostname
TDS Version = 0.95
Database = your_database_name (not to be confused with instance name)
Port = 1433
/etc/odbcinst.ini
[ODBC]
# Enables ODBC debugging output. Helpful to see where things stop working.
Trace = yes
TraceFile = /etc/odbcinst.trace
[SQLServer]
Description=TDS driver (Sybase/MS SQL)
Driver=/lib64/libtdsodbc.so.0
Driver64=/lib64/libtdsodbc.so.0
UsageCount=1
Troubleshooting tips
tsql is a debugging/testing tool that comes with FreeTDS. It is your friend. I was able to use it early on to verify that the networking side of things was correct (i.e. no firewalls blocking access, hostnames resolve, etc.).
Use tsql -H your_sql_hostname -L to get the instance names and ports from your SQL server. Note that you'll use port 1433 (which is the MS SQL instance discovery port) in your ODBC configuration above, not these port numbers.
ServerName your_sql_hostname
InstanceName instance_1_name
IsClustered No
Version 13.0.x.x
tcp 5555
ServerName your_sql_hostname
InstanceName instance_2_name
IsClustered No
Version 13.0.x.x
tcp 6666
Use the port from the previous tsql command like so:
tsql -H your_sql_hostname -p your_sql_instance_port -D your_database_name -U your_username -P your_password
You should get a prompt. Try querying some data from your database (note that you have to type GO on a separate line to actually run your query). If this works but you still can't connect via Python/other-app, it usually means ODBC/FreeTDS are installed correctly and there are no networking problems, but your ODBC configuration isn't correct.
1> SELECT TOP 1 * FROM your_table
2> GO
{a row will be returned here}
1> # prompt returns, use `exit` to get out

Import SQL dump into PostgreSQL database

We are switching hosts and the old one provided a SQL dump of the PostgreSQL database of our site.
Now, I'm trying to set this up on a local WAMP server to test this.
The only problem is that I have no idea how to import this database into the PostgreSQL 9 that I have set up.
I tried pgAdmin III, but I can't seem to find an 'import' function. So I just opened the SQL editor, pasted the contents of the dump there, and executed it. It creates the tables, but it keeps giving me errors when it tries to put the data in them.
ERROR: syntax error at or near "t"
LINE 474: t 2011-05-24 16:45:01.768633 2011-05-24 16:45:01.768633 view...
The lines:
COPY tb_abilities (active, creation, modtime, id, lang, title, description) FROM stdin;
t 2011-05-24 16:45:01.768633 2011-05-24 16:45:01.768633 view nl ...
I've also tried to do this with the command prompt but I can't find the command that I need.
If I do
psql mydatabase < C:/database/db-backup.sql;
I get the error
ERROR: syntax error at or near "psql"
LINE 1: psql mydatabase < C:/database/db-backu...
^
What's the best way to import the database?
psql databasename < data_base_dump
That's the command you are looking for.
Beware: databasename must be created before importing.
Have a look at the PostgreSQL Docs Chapter 23. Backup and Restore.
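A minimal end-to-end sketch of that, using the placeholder names above:
createdb databasename
psql databasename < data_base_dump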
Here is the command you are looking for.
psql -h hostname -d databasename -U username -f file.sql
I believe that you want to run in psql:
\i C:/database/db-backup.sql
That worked for me:
sudo -u postgres psql db_name < 'file_path'
I'm not sure if this works for the OP's situation, but I found that running the following command in the interactive console was the most flexible solution for me:
\i 'path/to/file.sql'
Just make sure you're already connected to the correct database. This command executes all of the SQL commands in the specified file.
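For example, an interactive session might look like this (database name and path are placeholders):
$ psql -U postgres
postgres=# \c mydatabase
mydatabase=# \i 'path/to/file.sql'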
This works pretty well from the command line; all arguments are required, and -W forces a password prompt
psql -h localhost -U user -W -d database_name -f path/to/file.sql
Just for funsies, if your dump is compressed you can do something like
gunzip -c filename.gz | psql dbname
As Jacob mentioned, the PostgreSQL docs describe all this quite well.
Make sure the database you want to import into is created; then you can import the dump with
sudo -u postgres -i psql testdatabase < db-structure.sql
If you want to overwrite the whole database, first drop the database
# be sure you drop the right database !!!
#sudo -u postgres -i psql -c "drop database testdatabase;"
and then recreate it with
sudo -u postgres -i psql -c "create database testdatabase;"
Follow the steps:
Go to the psql shell
\c db_name
\i path_of_dump (e.g. C:/db_name.pgsql)
I tried many different solutions for restoring my postgres backup. I ran into permission-denied problems on macOS, and no solution seemed to work.
Here's how I got it to work:
Postgres comes with pgAdmin4. If you use macOS, you can press CMD+SPACE and type pgadmin4 to run it. This will open up a browser tab in Chrome.
If you run into errors getting pgadmin4 to work, try killall pgAdmin4 in your terminal, then try again.
Steps to getting pgadmin4 + backup/restore
1. Create the backup
Do this by rightclicking the database -> "backup"
2. Give the file a name.
Like test12345. Click backup. This creates a binary dump file; it's not in .sql format.
3. See where it downloaded
There should be a popup at the bottom right of your screen. Click the "more details" page to see where your backup downloaded to.
4. Find the location of downloaded file
In this case, it's /users/vincenttang
5. Restore the backup from pgadmin
Assuming you did steps 1 to 4 correctly, you'll have a restore binary file. There might come a time your coworker wants to use your restore file on their local machine. Have said person go to pgadmin and restore
Do this by rightclicking the database -> "restore"
6. Select file finder
Make sure to select the file location manually. DO NOT drag and drop a file onto the uploader fields in pgadmin, because you will run into permission errors. Instead, find the file you just created:
7. Find said file
You might have to change the filter at the bottom right to "All files". Find the file from step 4, then hit the "Select" button at the bottom right to confirm.
8. Restore said file
You'll see this page again, with the location of the file selected. Go ahead and restore it
9. Success
If all is good, a popup should appear at the bottom right indicating a successful restore. You can navigate over to your tables to see if the data has been restored properly in each table.
10. If it wasn't successful:
Should step 9 fail, try deleting your old public schema on your database. Go to "Query Tool"
Execute this code block:
DROP SCHEMA public CASCADE; CREATE SCHEMA public;
Now try steps 5 to 9 again, it should work out
Summary
This is how I had to back up and restore my backup on Postgres when I had permission errors and could not log in as a superuser or set read/write permissions with chmod on folders. This workflow works for the default "Custom" binary dump format from pgAdmin. I assume .sql works the same way, but I have not yet tested that.
I use:
cat /home/path/to/dump/file | psql -h localhost -U <user_name> -d <db_name>
Hope this will help someone.
If you are using a file with .dump extension use:
pg_restore -h hostname -d dbname -U username filename.dump
I noticed that many examples are overcomplicated for localhost, where in many cases just the postgres user with no password exists:
psql -d db_name -f dump.sql
You can do it in pgadmin3. Drop the schema(s) that your dump contains. Then right-click on the database and choose Restore. Then you can browse for the dump file.
I used this
psql -d dbName -U username -f /home/sample.sql
PostgreSQL 12
From a plain SQL file (pg_restore cannot read plain-text dumps, so use psql):
psql -d database -f file.sql
From a custom-format file:
pg_restore -Fc -d database < file.dump
I had more than 100 MB of data, so I could not restore the database using pgAdmin4.
I simply used the postgres client and ran the command below.
postgres@khan:/$ pg_restore -d database_name /home/khan/Downloads/dump.sql
It worked fine and took a few seconds. See the link below for more information.
https://www.postgresql.org/docs/8.1/app-pgrestore.html

Copying PostgreSQL database to another server

I'm looking to copy a production PostgreSQL database to a development server. What's the quickest, easiest way to go about doing this?
You don't need to create an intermediate file. You can do
pg_dump -C -h localhost -U localuser dbname | psql -h remotehost -U remoteuser dbname
or
pg_dump -C -h remotehost -U remoteuser dbname | psql -h localhost -U localuser dbname
using psql or pg_dump to connect to a remote host.
With a big database or a slow connection, dumping a file and transferring the file compressed may be faster.
As Kornel said, there is no need to dump to an intermediate file; if you want to work compressed, you can use a compressed tunnel
pg_dump -C dbname | bzip2 | ssh remoteuser@remotehost "bunzip2 | psql dbname"
or
pg_dump -C dbname | ssh -C remoteuser@remotehost "psql dbname"
but this solution also requires a session on both ends.
Note: pg_dump is for backing up and psql is for restoring. So, the first command in this answer is to copy from local to remote and the second one is from remote to local. More -> https://www.postgresql.org/docs/9.6/app-pgdump.html
pg_dump the_db_name > the_backup.sql
Then copy the backup to your development server, restore with:
psql the_new_dev_db < the_backup.sql
Use pg_dump, and later psql or pg_restore, depending on whether you choose the -Fp or -Fc option to pg_dump.
Example of usage:
ssh production
pg_dump -C -Fp -f dump.sql -U postgres some_database_name
scp dump.sql development:
rm dump.sql
ssh development
psql -U postgres -f dump.sql
If you are looking to migrate between versions (eg you updated postgres and have 9.1 running on localhost:5432 and 9.3 running on localhost:5434) you can run:
pg_dumpall -p 5432 -U myuser91 | psql -U myuser94 -d postgres -p 5434
Check out the migration docs.
pg_basebackup seems to be the better way of doing this now, especially for large databases.
You can copy a database from a server with the same or older major version. Or more precisely:
pg_basebackup works with servers of the same or an older major version, down to 9.1. However, WAL streaming mode (-X stream) only works with server version 9.3 and later, and tar format mode (--format=tar) of the current version only works with server version 9.5 or later.
For that you need on the source server:
1. listen_addresses = '*' to be able to connect from the target server. Make sure port 5432 is open for that matter.
2. At least one available replication connection: max_wal_senders = 1 (-X fetch), 2 for -X stream (the default in PostgreSQL 12), or more.
3. wal_level = replica or higher to be able to set max_wal_senders > 0.
4. host replication postgres DST_IP/32 trust in pg_hba.conf. This grants access to the pg cluster to anyone from the DST_IP machine. You might want to resort to a more secure option.
Changes 1, 2, 3 require server restart, change 4 requires reload.
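Put together, the source-server settings from the list above might look like this (a sketch; DST_IP is a placeholder, and trust is insecure, so consider a stronger auth method):
# postgresql.conf
listen_addresses = '*'
max_wal_senders = 2
wal_level = replica
# pg_hba.conf
host replication postgres DST_IP/32 trust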
On the target server:
# systemctl stop postgresql@VERSION-NAME
postgres$ pg_basebackup -h SRC_IP -U postgres -D VERSION/NAME --progress
# systemctl start postgresql@VERSION-NAME
The accepted answer is correct, but if you want to avoid entering the password interactively, you can use this:
PGPASSWORD={{export_db_password}} pg_dump --create -h {{export_db_host}} -U {{export_db_user}} {{export_db_name}} | PGPASSWORD={{import_db_password}} psql -h {{import_db_host}} -U {{import_db_user}} {{import_db_name}}
Run this command with the name of the database you want to back up to take a dump of the DB.
pg_dump -U {user-name} {source_db} -f {dumpfilename.sql}
eg. pg_dump -U postgres mydbname -f mydbnamedump.sql
Now scp this dump file to the remote machine where you want to copy the DB.
eg. scp mydbnamedump.sql user01@remotemachineip:~/some/folder/
On the remote machine, run the following command in ~/some/folder to restore the DB.
psql -U {user-name} -d {destination_db} -f {dumpfilename.sql}
eg. psql -U postgres -d mynewdb -f mydbnamedump.sql
Dump your database: pg_dump database_name > backup.sql
Import your database back: psql db_name < backup.sql
I struggled quite a lot and eventually the method that allowed me to make it work with Rails 4 was:
on your old server
sudo su - postgres
pg_dump -c --inserts old_db_name > dump.sql
I had to use the postgres Linux user to create the dump. I also had to use -c to force the creation of the database on the new server. --inserts tells it to use the INSERT syntax, which otherwise would not work for me :(
Then, on the new server, simply:
sudo su - postgres
psql new_database_name < dump.sql
To transfer the dump.sql file between servers, I simply used cat to print the content and then nano to recreate it, copy-pasting the content.
Also, the ROLE I was using on the two databases was different, so I had to find-and-replace all the owner names in the dump (see the sketch below).
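For example, a hedged sed one-liner for that rename (old_role and new_role are placeholders; plain-text pg_dump output records ownership as "OWNER TO role"):
sed -i 's/OWNER TO old_role/OWNER TO new_role/g' dump.sql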
Let me share a Linux shell script to copy your table data from one server to another PostgreSQL server.
Reference taken from this blog:
Linux Bash Shell Script for data migration between PostgreSQL Servers:
#!/bin/bash
psql \
-X \
-U user_name \
-h host_name1 \
-d database_name \
-c "\\copy tbl_Students to stdout" \
| \
psql \
-X \
-U user_name \
-h host_name2 \
-d database_name \
-c "\\copy tbl_Students from stdin"
I am just migrating the data; please create a blank table at your destination/second database server.
This is a utility script. You can further modify it for generic use, for example by adding parameters for host_name, database_name, table_name and others, as in the sketch below.
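A sketch of that parameterized variant (argument names are mine; error handling kept minimal):
#!/bin/bash
# Usage: ./copy_table.sh table_name host_name1 host_name2 database_name user_name
TBL=$1; SRC=$2; DST=$3; DB=$4; USR=$5
psql -X -U "$USR" -h "$SRC" -d "$DB" -c "\\copy $TBL to stdout" \
| psql -X -U "$USR" -h "$DST" -d "$DB" -c "\\copy $TBL from stdin"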
Here is an example using pg_basebackup
I chose to go this route because it backs up the entire database cluster (users, databases, etc.).
I'm posting this as a solution here because it details every step I had to take. Feel free to add recommendations or improvements after reading the other answers here and doing some more research.
For Postgres 12 and Ubuntu 18.04 I had to do these actions:
On the server that is currently running the database:
Update pg_hba.conf, for me located at /etc/postgresql/12/main/pg_hba.conf
Add the following line (substitute 192.168.0.100 with the IP address of the server you want to copy the database to).
host replication postgres 192.168.0.100/32 trust
Update postgresql.conf, for me located at /etc/postgresql/12/main/postgresql.conf. Add the following line:
listen_addresses = '*'
Restart postgres:
sudo service postgresql restart
On the host you want to copy the database cluster to:
sudo service postgresql stop
sudo su root
rm -rf /var/lib/postgresql/12/main/*
exit
sudo -u postgres pg_basebackup -h 192.168.0.101 -U postgres -D /var/lib/postgresql/12/main/
sudo service postgresql start
Big picture: stop the service, delete everything in the data directory (mine is in /var/lib/postgresql/12). The permissions on this directory are drwx------ with user and group postgres. I could only do this as root, not even with sudo -u postgres; I'm unsure why. Ensure you are doing this on the new server you want to copy the database to! You are deleting the entire database cluster.
Make sure to change the IP address from 192.168.0.101 to the IP address you are copying the database from. Copy the data from the original server with pg_basebackup. Start the service.
Update pg_hba.conf and postgresql.conf to match the original server configuration from before you made any changes, i.e. without the added replication line and the listen_addresses line (in my case I had to add the ability to log in locally via md5 to pg_hba.conf).
Note there are considerations for max_wal_senders and wal_level that can be found in the documentation. I did not have to do anything with this.
If you are more comfortable with a GUI, you can use the pgAdmin software.
Connect to your source and destination servers
Right-click on the source db > backup
Right-click on the destination server > create > database. Use the same properties as the source db (you can see the properties of the source db by right-click > properties)
Right-click on the created db > restore.
