Postgres pg_dump times out

I am running this command on Webfaction:
ionice -c2 -n6 pg_dump --blobs -U mhjohnson_flavma -f dump.sql
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: canceling statement due to statement timeout
Any ideas on how to change the timeout?

Your server probably has a statement timeout configured in one way or another (in postgresql.conf, or per role or per database).
As a quick solution, you could use PGOPTIONS="-c statement_timeout=0" pg_dump [...] to temporarily override this setting for the dumping process.
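Applied to the command from the question, that would look something like this (a sketch; the PGOPTIONS environment variable is picked up by libpq, so it reaches the session pg_dump opens):
PGOPTIONS="-c statement_timeout=0" ionice -c2 -n6 pg_dump --blobs -U mhjohnson_flavma -f dump.sql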

Related

Postgres database export and import, problem with $$PATH$$

I am in the process of doing an export and import with a Postgres database.
I used the following command to take a backup of the Postgres DB:
C:\dirs> pg_dump -U postgres -p 15432 -W -F t cgate-next-demo > .\dbexport_10th_February_2022.tar
Password:*****
I extracted the dbexport_10th_February_2022.tar file and proceeded with the database import. As an initial step, I dropped the database.
#drop database if exists "cgate-next-demo";
And I had recreated the empty database.
#create database "cgate-next-demo";
In order to do this, I first logged in to psql:
C:\dirs> psql -U postgres -p 15432
Password for user postgres:*****
postgres=#
For the database import I used the following command:
C:\dirs> psql -U postgres -p 15432 -d cgate-next-demo <restore.sql
When I did that, I got the following error (an excerpt from the console logs):
ERROR: could not open file "$$PATH$$/6052.dat" for reading: No such file or directory
HINT: COPY FROM instructs the PostgreSQL server process to read a file. You may want a client-side facility such as psql's \copy.
Can someone advise on what could have caused this issue?
You are doing this the wrong way. Rather than unpacking the archive, pass it as an argument to pg_restore; it will do everything for you.
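For the archive from the question, that would look something like this (a sketch; it assumes the freshly recreated empty database from the steps above):
C:\dirs> pg_restore -U postgres -p 15432 -d cgate-next-demo dbexport_10th_February_2022.tar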

Executing large script file (1.7GB) on Microsoft SQL Server using SQLCMD

I have a generated script file, 1.7 GB in size, and I'm trying to run it using the following sqlcmd command:
sqlcmd -m1 -S 192.168.100.10\SQLHQ -U sa -P 123456 -i "E:/VMS2008R2.sql"
When I run this line in CMD as administrator, the command takes some time and gives me no error; the database schema is created in the database, but without data and without any success or finished message!
UPDATE
When I run the command without -m1, I get the following exception:
TCP Provider: An existing connection was forcibly closed by the remote host.
Communication link failure

Restoring from a Postgres SQL file

I have a backup of a PostgreSQL database created this way:
/usr/bin/pg_dump --no-owner --no-acl > dump.sql
When I try to restore on a different machine:
psql db < dump.sql
it throws many errors: invalid command \N
When I try to use pg_restore:
pg_restore dump.sql -d db
I get a different error: pg_restore: [archiver] input file appears to be a text format dump. Please use psql.
According to the documentation this should be a non-issue.
Any way to tell psql that \N character is a null value?
Oh hi me, it's me
Make sure:
Database exists
Importing user has permissions
Get the first error before \N ones flood in
psql -d database -f backup.sql -U user should do the trick for importing
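To get at that first error instead of wading through the \N flood, you can make psql stop at the first failure (a sketch using psql's ON_ERROR_STOP variable):
psql -d database -f backup.sql -U user -v ON_ERROR_STOP=1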
Also eat healthy and get good rest. k bye
This may also happen if the columns in the table and the columns in the file do not match.

Cassandra: Request did not complete within rpc_timeout

I was working with Cassandra 1.2.4. After restoring some keyspaces, when I tried to query a keyspace it gave me Request did not complete within rpc_timeout,
so I checked system.log and output.log under the /var/log/cassandra path
and found just this exception:
Exception in thread Thread[ReadStage:42,5,main]
java.lang.RuntimeException: org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
What is the reason, and how can I get rid of the rpc_timeout?
Thanks in advance.
It seems your SSTables are somehow corrupted. You can try rebuilding them using nodetool's scrub [keyspace] operation.
If you can't access a specific keyspace,
> ./nodetool -u <username> -pw <password> -h <cassandra_ip> scrub <keyspace>
or if you can't access any keyspace,
> ./nodetool -u <username> -pw <password> -h <cassandra_ip> scrub
cqlsh returns rpc_timeout when any error occurs on the server (the remote procedure call to the server timed out).
I think your problem is that the restore step after your backup may not have been performed correctly, leaving your SSTables corrupted.

PostgreSQL - Backup and Restore Database Tables with Partitions

I'm working on PostgreSQL 8.4 and I'd like to do a backup and restore (from Ubuntu 11.10 to Ubuntu 12.04).
I want to include all partitions, clusters, roles and stuff.
My commands:
Back up:
pg_dumpall > filename
Compress:
zip -f mybackup
Uncompress and restore:
sudo gunzip -c /home/ubuntu/Desktop/backupFile.zip | psql -U postgres
The issue is in the restore process; I got an error:
invalid command \.
ERROR: syntax error at or near "2"
LINE 1: 2 2 1
^
invalid command \.
ERROR: syntax error at or near "1"
LINE 1: ...
^
out of memory
Plus, the tables with partitions were not restored, and some tables were restored without any data!
Please help!
EDIT
I used pgAdmin to do the backup, using the "backup server" option.
If you used zip to compress the output, then you should use unzip to uncompress it, not gunzip; they use different formats/algorithms.
I'd suggest using gzip and gunzip instead. For instance, if you generated a backup named mybackup.sql, you can gzip it with:
gzip mybackup.sql
It will generate a file named mybackup.sql.gz. Then, to restore, you can use:
gunzip -c mybackup.sql.gz | psql -U postgres
Also, I'd suggest avoiding pgAdmin for the dump. Not that it can't do it; it's just that you can't automate it. You can easily use pg_dumpall the same way:
pg_dumpall -U postgres -f mybackup.sql
You can even dump and compress without intermediate files by using a pipe:
pg_dumpall -U postgres | gzip -c > mybackup.sql.gz
BTW, I'd really suggest avoiding pg_dumpall and using pg_dump with the custom format for each database, since with that you already get the result compressed and easier to use later. But pg_dumpall is OK for small databases.
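For example, a per-database dump and restore with the custom format might look like this (mydb is a placeholder database name):
pg_dump -U postgres -Fc -f mydb.dump mydb
pg_restore -U postgres -d mydb mydb.dump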
