I want to export a PostgreSQL database named "kd" together with all roles, tablespaces, etc. When I run the command
pg_dumpall -U sce -h localhost -p 5450 -d kd > /tmp/db.sql
I get the error:
pg_dumpall: missing "=" after "kd5" in connection info string
When I run the command
pg_dumpall -U sce -h localhost -p 5450 > /tmp/db.sql
instead, I get the error:
pg_dumpall: query was: SELECT oid, rolname, rolsuper, rolinherit, rolcreaterole, rolcreatedb, rolcanlogin, rolconnlimit, rolpassword, rolvaliduntil, rolreplication, pg_catalog.shobj_description(oid, 'pg_authid') as rolcomment, rolname = current_user AS is_current_user FROM pg_authid ORDER BY 2
How can I fix this problem?
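A minimal sketch of the usual workaround (my own assumption, not taken from this post): pg_dumpall always dumps the entire cluster, and its -d option expects a libpq connection string such as "dbname=kd" rather than a bare database name, which is why the "=" error appears. Globals and a single database are therefore typically dumped separately:
# roles and tablespaces only (reading pg_authid generally requires a superuser role)
pg_dumpall -U sce -h localhost -p 5450 --globals-only > /tmp/globals.sql
# the "kd" database itself
pg_dump -U sce -h localhost -p 5450 kd > /tmp/kd.sql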
While this workload (the SqlSecondaryIndex workload from https://github.com/YugaByte/yb-sample-apps/) is still running:
% java -jar yb-sample-apps.jar --workload SqlSecondaryIndex --nodes 127.0.0.1:5433 --num_threads_read 4 --num_threads_write 2
an attempt to use ysql_dump to export the table causes a "Query error: Restart read required" error:
$ ./ysql_dump -h 127.0.0.1 -d postgres --data-only --table sqlsecondaryindex -f out.txt
ysql_dump: Dumping the contents of table "sqlsecondaryindex" failed: PQgetResult() failed.
ysql_dump: Error message from server: ERROR: Query error: Restart read required at: { read: { physical: 1592265362684030 } local_limit: { physical: 1592265375906038 } global_limit: <min> in_txn_limit: <max> serial_no: 0 }
But if the same command is executed while the workload is stopped, the ysql_dump command completes without any issues. Is this expected behavior?
To read against a consistent snapshot and avoid running into the "read restart" error, pass the --serializable-deferrable option to ysql_dump. For example:
~/tserver/postgres/bin/ysql_dump -h 127.0.0.1 -d postgres \
--data-only --table sqlsecondaryindex \
--serializable-deferrable -f data1.csv
I can use the following command to copy query results from a remote database into a local table, as long as I create the table and the appropriate columns first. I would like the command to create the table for me based on the results of my query.
psql -h remote.host -U myuser -p 5432 -d remotedb -c "copy (SELECT view.column FROM schema.view LIMIT 10) to stdout" | psql -h localhost -U localuser -d localdb -c "copy localtable from stdin"
Again, it populates the data properly if I create the table and columns ahead of time, but it would be much easier if I could automate that with a command that creates the table according to the results of my query.
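One hedged sketch of how the missing CREATE TABLE step could be handled (my own suggestion, not from the original post; it assumes psql 11 or newer on the client): ask the remote server to describe the query's result columns with \gdesc, create a matching local table from that output, then stream the data as before.
echo "SELECT view.column FROM schema.view LIMIT 10 \gdesc" | psql -h remote.host -U myuser -p 5432 -d remotedb
# build the local table from the reported names/types; adjust the column type to the \gdesc output
psql -h localhost -U localuser -d localdb -c 'CREATE TABLE localtable ("column" text)'
psql -h remote.host -U myuser -p 5432 -d remotedb -c "copy (SELECT view.column FROM schema.view LIMIT 10) to stdout" | psql -h localhost -U localuser -d localdb -c "copy localtable from stdin"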
I have set up a cron job on my GoDaddy server to take a DB backup. For testing purposes, I run the cron job every minute. The command is:
mysqldump tuniv_results > /home/username/public_html/DB-VVS/tuniv_results.sql
In my DB-VVS folder one file, tuniv_results.sql, is created, but it is zero bytes. Could you please let me know why it is not being created properly?
Thanks in advance.
------------UPDATE-------------------
$user="****";
$password="****";
$database="*****";
$dumpCommand='/usr/bin/mysqldump';
$dumpCommand.=" -e -f -h <ipaddress> -u$user -p$password";
$dumpCommand.=" $database";
$dumpCommand.=" > bekap.sql";
$results=$dumpCommand;
exec($dumpCommand);
echo "result: ".$results;
I created a file in the root folder and put the absolute path of that file in the cron Command text field as /home/username/cronfile.php. But there is no bekap.sql file in the root folder. Please let me know what might be the issue.
Try this one:
$user = "*********";
$password = "*****";
$database = "*********";
$dumpCommand  = '/usr/bin/mysqldump';
$dumpCommand .= " -e -f -h host.name.com -u$user -p$password";
$dumpCommand .= " $database";
$dumpCommand .= " > bekap.sql";
exec($dumpCommand, $output, $status); // capture the command's output lines and exit code
echo "result: " . $status;            // 0 means mysqldump succeeded
Another solution:
I think this will help you.
Open a terminal and type:
sudo tcsh
pico /etc/crontab
or
nano /etc/crontab
Then add one of the following lines, depending on your situation. This schedules the backup at 1 AM every day.
Remote Host Backup (mysqldump on the PATH):
0 1 * * * mysqldump -h mysql.host.com -uusername -ppassword --opt database > /path/to/directory/filename.sql
Remote Host Backup:
0 1 * * * /usr/local/mysql/bin/mysqldump -h mysql.host.com -uusername -ppassword --opt database > /path/to/directory/filename.sql
Local Host mysql Backup:
0 1 * * * /usr/local/mysql/bin/mysqldump -uroot -ppassword --opt database > /path/to/directory/filename.sql
(There is no space between -p and the password or between -u and the username; replace root with the correct database username.)
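If the dump file still ends up empty, a hedged debugging step (my own suggestion; paths and credentials are placeholders) is to use the full path to mysqldump, pass the credentials explicitly, and redirect stderr to a log file, since cron runs with a minimal environment and its error messages are otherwise invisible:
/usr/bin/mysqldump -u username -p'password' tuniv_results > /home/username/public_html/DB-VVS/tuniv_results.sql 2> /home/username/public_html/DB-VVS/tuniv_results.err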
Every week, I have to run a script that truncates a bunch of tables. Then I use the export data task to move the data to another server (same database name).
The servers aren't linked, I can't save the export job, and my permissions/settings are limited by the DBA (I am an admin on the databases). I have Windows authentication only on both servers. The servers are different versions (2005 and 2008).
My question is: is there a way to automate this given my limited ability to modify the servers? Perhaps using PowerShell?
Selecting all these tables in the export wizard week after week is a pain.
If you have access to the SQL Server client tools (bcp and sqlcmd), try something like this from a different system.
C:\> bcp prod.dbo.[Table] out ExportImportFile.inp -b 10000 -S %SQLSERVER% -U %USERNAME% -P %PASSWORD% -T -c > C:\Temp\ExportImport.log
C:\> sqlcmd -S %SQLSERVER% -U %USERNAME% -P %PASSWORD% -Q "USE Prod; TRUNCATE TABLE [Table];" >> C:\Temp\ExportImport.log
C:\> bcp prod.dbo.[Table] in ExportImportFile.inp -b 10000 -S %SQLSERVER% -U %USERNAME% -P %PASSWORD% -T -c >> C:\Temp\ExportImport.log
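To cover the weekly automation part, one hedged option (my own sketch, not from the original answer; the script path, task name, and schedule are placeholders) is to save those three commands in a batch file and register it with the Windows Task Scheduler:
C:\> schtasks /Create /SC WEEKLY /D MON /ST 06:00 /TN "WeeklyTableCopy" /TR "C:\Scripts\ExportImport.bat"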
You can use dbatools:
$splat = @{
    SqlInstance         = '{source instance}'
    Database            = 'tempdb'
    Destination         = '{dest instance}'
    DestinationDatabase = 'tempdb'
    Table               = 'table1' # you can provide a list of tables
    AutoCreateTable     = $true
    Truncate            = $true
}
Copy-DbaDbTableData @splat
If you don't have dbatools: https://dbatools.io/getting-started/
I'm working with Postgres 9.0, and I have an application where I need to insert images into a database on a remote server. So I use:
"C:\Program Files\PostgreSQL\9.0\bin\psql.exe" -h 192.168.1.12 -p 5432 -d myDB -U my_admin -c "\lo_import 'C://im/zzz4.jpg'";
where:
192.168.1.12 is the IP address of the server system,
5432 is the port number,
myDB is the server database name,
my_admin is the username, and
"\lo_import 'C://im/zzz4.jpg'" is the command that is run.
After the image has been inserted into the database, I need to update a row in a table like this:
UPDATE species
SET speciesimages = 17755  -- the OID from the previous command... but how do I get it?
WHERE species = 'ACCOAA';
So my question is: how do I get the OID returned after the \lo_import in psql?
I tried running \lo_import 'C://im/zzz4.jpg' as an SQL statement, but I get an error:
ERROR: syntax error at or near ""\lo_import 'C://im/zzz4.jpg'""
LINE 1: "\lo_import 'C://im/zzz4.jpg'"
I also tried this:
update species
set speciesimages=\lo_import 'C://im/zzz4.jpg'
where species='ACAAC04';
But I get this error:
ERROR: syntax error at or near "\"
LINE 2: set speciesimages=\lo_import 'C://im/zzz4.jpg'
^
As your file resides on your local machine and you want to import the blob to a remote server, you have two options:
1) Transfer the file to the server and use the server-side function:
UPDATE species
SET speciesimages = lo_import('/path/to/server-local/file/zzz4.jpg')
WHERE species = 'ACAAC04';
2) Use the psql meta-command as you have it.
But you cannot mix psql meta-commands with SQL commands; that's impossible.
Use the psql variable :LASTOID in an UPDATE command that you launch immediately after the \lo_import meta command in the same psql session:
UPDATE species
SET speciesimages = :LASTOID
WHERE species = 'ACAAC04';
To script that (works in Linux, I am not familiar with Windows shell scripting):
echo "\lo_import '/path/to/my/file/zzz4.jpg' \\\\ UPDATE species SET speciesimages = :LASTOID WHERE species = 'ACAAC04';" | \
psql -h 192.168.1.12 -p 5432 -d myDB -U my_admin
\\ is the separator meta-command. You need to double each \ inside the double-quoted ("") string because the shell interprets one layer of escaping.
The \ before the newline is just line continuation in Linux shells.
Alternative syntax (tested on Linux again):
psql -h 192.168.1.12 -p 5432 -d myDB -U my_admin << EOF
\lo_import '/path/to/my/file/zzz4.jpg'
UPDATE species
SET speciesimages = :LASTOID
WHERE species = 'ACAAC04';
EOF
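For the Windows side (a hedged sketch of my own, since the answer above only covers Linux; the file name import.sql is a placeholder), the same two steps can be placed in a script file and run with psql -f, because :LASTOID persists for the rest of that psql session:
-- contents of import.sql
\lo_import 'C://im/zzz4.jpg'
UPDATE species SET speciesimages = :LASTOID WHERE species = 'ACAAC04';
Then run:
"C:\Program Files\PostgreSQL\9.0\bin\psql.exe" -h 192.168.1.12 -p 5432 -d myDB -U my_admin -f import.sql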
After importing an image with this command:
\lo_import '$imagePath' '$imageName'
You can then find the oid of the binary by querying the pg_catalog.pg_largeobject_metadata table, matching on the comment you gave as the second argument to \lo_import, i.e.:
"SELECT oid as `"ID`",
pg_catalog.obj_description(oid, 'pg_largeobject') as `"Description`"
FROM pg_catalog.pg_largeobject_metadata WHERE pg_catalog.obj_description(oid,'pg_largeobject') = '$image' limit 1 "
Here's how to do it if your field is of type bytea:
\lo_import '/cygdrive/c/Users/Chloe/Downloads/Contract.pdf'
update contracts set contract = lo_get(:LASTOID) where id = 77;
Use \lo_list and \lo_unlink after you import.
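For example (my own hedged addition; it assumes your psql version interpolates :LASTOID in meta-command arguments), you can list the large objects and then remove the one you just imported, since lo_get has already copied its bytes into the bytea column:
\lo_list
\lo_unlink :LASTOID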