Unable to delete Postgres database using CLI

I am unable to delete a PostgreSQL database in Azure using the command below:
az PostgreSQL db delete
Is there any other way, such as a bash script, to clean up an Azure PostgreSQL database?

I think you should use postgres, not PostgreSQL: az postgres db delete

Try the following command to delete database 'testdb' in the server 'testsvr':
az postgres db delete -g testgroup -s testsvr -n testdb
Required Parameters
--name -n
The name of the database.
Optional Parameters
--ids
One or more resource IDs (space-delimited). It should be a complete resource ID containing all information of 'Resource Id' arguments. You should provide either --ids or other 'Resource Id' arguments.
--resource-group -g
Name of resource group. You can configure the default group using az configure --defaults group=<name>.
--server-name -s
Name of the server. The name can contain only lowercase letters, numbers, and the hyphen (-) character. Minimum 3 characters and maximum 63 characters.
--yes -y
Do not prompt for confirmation.
To delete an Azure Database for PostgreSQL flexible server, run:
az postgres flexible-server delete --resource-group myresourcegroup --name mydemoserver
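Since the question also asks for a bash script: here is a minimal cleanup sketch that deletes several databases on one server non-interactively. The resource group, server, and database names are hypothetical placeholders to substitute with your own.
#!/bin/bash
# Delete a list of databases on an Azure Database for PostgreSQL server.
# testgroup, testsvr, and the database names are placeholders.
RESOURCE_GROUP="testgroup"
SERVER="testsvr"
for DB in testdb1 testdb2 testdb3; do
  # --yes skips the confirmation prompt so the loop runs unattended
  az postgres db delete -g "$RESOURCE_GROUP" -s "$SERVER" -n "$DB" --yes
done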

Related

Postgres database export and import, problem with $$PATH$$

I am in the process of doing an export and import with a Postgres database.
I used the following command to take a backup of the Postgres DB:
C:\dirs> pg_dump -U postgres -p 15432 -W -F t cgate-next-demo > .\dbexport_10th_February_2022.tar
Password:*****
I unpacked the dbexport_10th_February_2022.tar file and proceeded with the database import. As an initial step, I dropped the database:
#drop database if exists "cgate-next-demo";
And I recreated the empty database:
#create database "cgate-next-demo";
In order to do this, I logged in to psql once:
C:\dirs> psql -U postgres -p 15432
Password for user postgres:*****
postgres=#
For the database import I used the following command:
C:\dirs> psql -U postgres -p 15432 -d cgate-next-demo <restore.sql
When I did that, I got the following error (excerpt from the console logs):
ERROR: could not open file "$$PATH$$/6052.dat" for reading: No such file or directory
HINT: COPY FROM instructs the PostgreSQL server process to read a file. You may want a client-side facility such as psql's \copy.
Can someone advise on what might have caused this issue?
You are doing this the wrong way. Rather than unpacking the archive, pass it as an argument to pg_restore, which will do everything for you.
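For example, using the port and file name from the question, the whole restore could look like this (a sketch; adjust user, port, and names to your setup):
pg_restore -U postgres -p 15432 -d cgate-next-demo dbexport_10th_February_2022.tar
pg_restore reads the data members directly out of the tar archive, so the $$PATH$$ placeholders in the embedded restore.sql never come into play.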

Install and connect to database with psql

I want to set up the psql terminal tool on CentOS 6.6.
I have been given access to a database and I just want to use the terminal for writing queries against it. I have no prior experience with psql, but I want to move on from the pgAdmin3 GUI.
I started off by installing psql:
yum install postgresql
but when I try to access it, i.e. typing [root@localhost]# psql, I get the following error:
psql: FATAL: database "root" does not exist
I've tried using:
psql --host=<DB instance endpoint> --port=<port> --username=<master user name> --password --dbname=<database name>
but that fails to work too. Maybe this is really basic, but I'm completely lost setting this up.
By default, psql connects as your current OS user to a database of the same name; you are logged in as root, and no database named "root" exists, hence the error. Specify the user and database explicitly. Use:
psql -U my_pgadmin_username postgres
or
psql -U my_pgadmin_username -h localhost postgres
Alternately, more typical usage:
sudo -u postgres psql
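If you would rather be able to run a bare psql as your own user, one option (a sketch, assuming you have superuser access; myuser is a hypothetical username) is to create a matching role and database:
sudo -u postgres createuser --interactive myuser
sudo -u postgres createdb -O myuser myuser
After that, running psql as myuser connects to the myuser database without any flags.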

PostgreSQL: duplication by one command [duplicate]

I'm looking to copy a production PostgreSQL database to a development server. What's the quickest, easiest way to go about doing this?
You don't need to create an intermediate file. You can do
pg_dump -C -h localhost -U localuser dbname | psql -h remotehost -U remoteuser dbname
or
pg_dump -C -h remotehost -U remoteuser dbname | psql -h localhost -U localuser dbname
using psql or pg_dump to connect to a remote host.
With a big database or a slow connection, dumping to a file and transferring the file compressed may be faster.
As Kornel said, there is no need to dump to an intermediate file; if you want to work compressed, you can use a compressed tunnel
pg_dump -C dbname | bzip2 | ssh remoteuser@remotehost "bunzip2 | psql dbname"
or
pg_dump -C dbname | ssh -C remoteuser@remotehost "psql dbname"
but this solution also requires a session on both ends.
Note: pg_dump is for backing up and psql is for restoring. So, the first command in this answer is to copy from local to remote and the second one is from remote to local. More -> https://www.postgresql.org/docs/9.6/app-pgdump.html
pg_dump the_db_name > the_backup.sql
Then copy the backup to your development server, restore with:
psql the_new_dev_db < the_backup.sql
Use pg_dump, and later psql or pg_restore, depending on whether you chose the -Fp or -Fc option to pg_dump.
Example of usage:
ssh production
pg_dump -C -Fp -f dump.sql -U postgres some_database_name
scp dump.sql development:
rm dump.sql
ssh development
psql -U postgres -f dump.sql
If you are looking to migrate between versions (e.g. you updated Postgres and have 9.1 running on localhost:5432 and 9.3 running on localhost:5434) you can run:
pg_dumpall -p 5432 -U myuser91 | psql -U myuser94 -d postgres -p 5434
Check out the migration docs.
pg_basebackup seems to be the better way of doing this now, especially for large databases.
You can copy a database from a server with the same or older major version. Or more precisely:
pg_basebackup works with servers of the same or an older major version, down to 9.1. However, WAL streaming mode (-X stream) only works with server version 9.3 and later, and tar format mode (--format=tar) of the current version only works with server version 9.5 or later.
For that you need on the source server:
listen_addresses = '*' to be able to connect from the target server. Make sure port 5432 is open for that matter.
At least 1 available replication connection: max_wal_senders = 1 for -X fetch, 2 for -X stream (the default as of PostgreSQL 12), or more.
wal_level = replica or higher to be able to set max_wal_senders > 0.
host replication postgres DST_IP/32 trust in pg_hba.conf. This grants access to the pg cluster to anyone from the DST_IP machine. You might want to resort to a more secure option.
Changes 1, 2, 3 require server restart, change 4 requires reload.
On the target server:
# systemctl stop postgresql@VERSION-NAME
postgres$ pg_basebackup -h SRC_IP -U postgres -D VERSION/NAME --progress
# systemctl start postgresql@VERSION-NAME
The accepted answer is correct, but if you want to avoid entering the password interactively, you can use this:
PGPASSWORD={{export_db_password}} pg_dump --create -h {{export_db_host}} -U {{export_db_user}} {{export_db_name}} | PGPASSWORD={{import_db_password}} psql -h {{import_db_host}} -U {{import_db_user}} {{import_db_name}}
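Alternatively, instead of putting passwords inline, you can store the credentials in ~/.pgpass (which must be chmod 600). The field layout below is the standard hostname:port:database:username:password format; the host, database, and user values are placeholders:
# ~/.pgpass
export-host.example.com:5432:source_db:export_user:secret1
import-host.example.com:5432:import_db:import_user:secret2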
Run this command with the name of the database you want to back up to take a dump of the DB:
pg_dump -U {user-name} {source_db} -f {dumpfilename.sql}
e.g. pg_dump -U postgres mydbname -f mydbnamedump.sql
Now scp this dump file to the remote machine where you want to copy the DB:
e.g. scp mydbnamedump.sql user01@remotemachineip:~/some/folder/
On the remote machine, run the following command in ~/some/folder to restore the DB:
psql -U {user-name} -d {destination_db} -f {dumpfilename.sql}
e.g. psql -U postgres -d mynewdb -f mydbnamedump.sql
Dump your database: pg_dump database_name > backup.sql
Import your database back: psql db_name < backup.sql
I struggled quite a lot, and eventually the method that allowed me to make it work with Rails 4 was:
on your old server
sudo su - postgres
pg_dump -c --inserts old_db_name > dump.sql
I had to use the postgres Linux user to create the dump. I also had to use -c to force dropping and re-creating the objects on the new server. --inserts tells it to use INSERT syntax, which otherwise would not work for me :(
then, on the new server, simply:
sudo su - postgres
psql new_database_name < dump.sql
To transfer the dump.sql file between servers I simply used cat to print the content and then nano to recreate it, copy-pasting the content.
Also, the ROLE I was using on the two databases was different, so I had to find-and-replace all the owner names in the dump.
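For instance, a quick-and-dirty way to do that find-and-replace (old_role and new_role are hypothetical names; check the dump afterwards, since this replaces every occurrence, including any that happen to appear in the data):
sed -i 's/old_role/new_role/g' dump.sql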
Let me share a Linux shell script to copy your table data from one server to another PostgreSQL server.
Reference taken from this blog:
Linux Bash Shell Script for data migration between PostgreSQL Servers:
#!/bin/bash
psql \
-X \
-U user_name \
-h host_name1 \
-d database_name \
-c "\\copy tbl_Students to stdout" \
| \
psql \
-X \
-U user_name \
-h host_name2 \
-d database_name \
-c "\\copy tbl_Students from stdin"
I am just migrating the data; please create a blank table at your destination/second database server.
This is a utility script. You can further modify it for generic use, for example by adding parameters for host_name, database_name, table_name, and so on, as in the sketch below.
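A minimal parameterized variant might look like this (a sketch; the positional-argument interface is my own assumption, and user_name is kept from the original script):
#!/bin/bash
# Usage: ./copy_table.sh <src_host> <dst_host> <database> <table>
SRC_HOST="$1"; DST_HOST="$2"; DB="$3"; TABLE="$4"
# Stream the table out of the source server straight into the destination
psql -X -U user_name -h "$SRC_HOST" -d "$DB" -c "\\copy $TABLE to stdout" \
  | psql -X -U user_name -h "$DST_HOST" -d "$DB" -c "\\copy $TABLE from stdin"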
Here is an example using pg_basebackup
I chose to go this route because it backs up the entire database cluster (users, databases, etc.).
I'm posting this as a solution here because it details every step I had to take after reading the other answers and doing some more research; feel free to add recommendations or improvements.
For Postgres 12 and Ubuntu 18.04 I had to do these actions:
On the server that is currently running the database:
Update pg_hba.conf, for me located at /etc/postgresql/12/main/pg_hba.conf
Add the following line (substitute 192.168.0.100 with the IP address of the server you want to copy the database to).
host replication postgres 192.168.0.100/32 trust
Update postgresql.conf, for me located at /etc/postgresql/12/main/postgresql.conf. Add the following line:
listen_addresses = '*'
Restart postgres:
sudo service postgresql restart
On the host you want to copy the database cluster to:
sudo service postgresql stop
sudo su root
rm -rf /var/lib/postgresql/12/main/*
exit
sudo -u postgres pg_basebackup -h 192.168.0.101 -U postgres -D /var/lib/postgresql/12/main/
sudo service postgresql start
Big picture - stop the service, delete everything in the data directory (mine is in /var/lib/postgresql/12). The permissions on this directory are drwx------ with user and group postgres. I could only do this as root, not even with sudo -u postgres. I'm unsure why. Ensure you are doing this on the new server you want to copy the database to! You are deleting the entire database cluster.
Make sure to change the IP address from 192.168.0.101 to the IP address you are copying the database from. Copy the data from the original server with pg_basebackup. Start the service.
Update pg_hba.conf and postgresql.conf to match the original server configuration - before you made any changes adding the replication line and the listen_addresses line (in my case I had to add the ability to log in locally via md5 to pg_hba.conf).
Note there are considerations for max_wal_senders and wal_level that can be found in the documentation. I did not have to do anything with this.
If you are more comfortable with a GUI, you can use the pgAdmin software.
Connect to your source and destination servers
Right-click on the source db > backup
Right-click on the destination server > create > database. Use the same properties as the source db (you can see the properties of the source db by right-click > properties)
Right-click on the created db > restore.

Creating Hive Metastore Database Tables Error

I'm running through the Cloudera Manager (free edition) setup and have reached the point where the wizard creates the Hive Metastore database.
This error is shown and halts the configuration process.
using /var/run/cloudera-scm-agent/process/40-hive-metastore-create-tables/hadoop-conf as HADOOP_CONF_DIR
I can't seem to find any information on what might cause this.
Everything has been configured correctly up to this point; everything is installed, and the usernames and passwords are correct.
Has anybody seen this error before? Thoughts?
Error Log:
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1116)
at com.mysql.jdbc.MysqlIO.readPacket(MysqlIO.java:688)
at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1094)
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2337)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2370)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2154)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:792)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:49)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:381)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:305)
at java.sql.DriverManager.getConnection(DriverManager.java:582)
at java.sql.DriverManager.getConnection(DriverManager.java:185)
at com.cloudera.enterprise.dbutil.SqlRunner.open(SqlRunner.java:109)
at com.cloudera.enterprise.dbutil.SqlRunner.runSingleQuery(SqlRunner.java:80)
at com.cloudera.cmf.service.hive.HiveMetastoreDbUtil.countTables(HiveMetastoreDbUtil.java:191)
... 2 more
Caused by: java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost.
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:2540)
at com.mysql.jdbc.MysqlIO.readPacket(MysqlIO.java:612)
... 20 more
OK: Cloudera is using Hive 0.10, which doesn't support remote login, but you can work around that bug. Log in to the server that is getting the error; Cloudera Manager will tell you its IP.
1) Log in to the server that fails to install Hive.
2) Set $HADOOP_HOME:
export HADOOP_HOME="/usr/lib/hadoop/"
3) Install PostgreSQL on the server that fails:
$ sudo apt-get install postgresql
$ cat /etc/postgresql/9.1/main/postgresql.conf | grep -e listen -e standard_conforming_strings
Modify these two lines in the file:
listen_addresses = '*'
standard_conforming_strings = off
You also need to configure authentication for your network in pg_hba.conf. Make sure that the PostgreSQL user you will create in the next step will have access to the server from a remote host. To do this, add a new line to pg_hba.conf with the following information:
host <database> <user> <network address> <mask> password
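For instance, a hypothetical entry allowing hiveuser to reach the metastore database from the 192.168.2.0/24 network would be (substitute your own database, user, and addresses):
host metastore hiveuser 192.168.2.0 255.255.255.0 password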
Start PostgreSQL Server
$ sudo service postgresql start
Use the chkconfig utility to ensure that your PostgreSQL server starts at boot time:
chkconfig postgresql on
You can also use chkconfig to verify that the PostgreSQL server will be started at boot time, for example:
chkconfig --list postgresql
Step 2: Install the Postgres JDBC Driver
Before you can run the Hive metastore with a remote PostgreSQL database, you must configure a JDBC driver to the remote PostgreSQL database, set up the initial database schema, and configure the PostgreSQL user account for the Hive user.
To install the PostgreSQL JDBC Driver on a Debian/Ubuntu system:
Install libpostgresql-jdbc-java and symbolically link the file into the /usr/lib/hive/lib/ directory.
$ sudo apt-get install libpostgresql-jdbc-java
$ ln -s /usr/share/java/postgresql-jdbc4.jar /usr/lib/hive/lib/postgresql-jdbc4.jar
Step 3: Create the metastore database and user account
bash# sudo -u postgres psql
bash$ psql
postgres=# CREATE USER hiveuser WITH PASSWORD 'mypassword';
postgres=# CREATE DATABASE metastore;
postgres=# \c metastore;
You are now connected to database 'metastore'.
postgres=# \i /usr/lib/hive/scripts/metastore/upgrade/postgres/hive-schema-0.10.0.postgres.sql
SET
SET
...
Now you need to grant permission for all metastore tables to user hiveuser. PostgreSQL does not have statements to grant the permissions for all tables at once; you'll need to grant the permissions one table at a time. You could automate the task with the following SQL script:
bash# sudo -u postgres psql
metastore=# \o /tmp/grant-privs
metastore=# SELECT 'GRANT SELECT,INSERT,UPDATE,DELETE ON "' || schemaname || '"."' || tablename || '" TO hiveuser ;'
metastore-# FROM pg_tables
metastore-# WHERE tableowner = CURRENT_USER and schemaname = 'public';
metastore=# \o
metastore=# \i /tmp/grant-privs
You can verify the connection from the machine where you'll be running the metastore service as follows:
psql -h myhost -U hiveuser -d metastore
metastore=#
Step 4: Configure the Metastore Service to Communicate with the PostgreSQL Database
Change the IP to that of your AWS master server (or your own master server); don't use a DNS name.
$ find / -name hive-site.xml
$ nano /run/cloudera-scm-agent/process/27-hive-metastore-create-tables/hive-site.xml
In the file, search for:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:postgresql://myhost/metastore</value>
</property>
and change it to the correct IP, i.e. that of the master Hadoop server where you are running Cloudera Manager. Likewise, every reference in that file that does not point correctly to the Hadoop master / Cloudera Manager connector will have to be changed to the correct IP, as in the example below.
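For instance, assuming a hypothetical master IP of 192.168.0.10, the corrected property would read:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:postgresql://192.168.0.10/metastore</value>
</property>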
After all this, just go back to the Cloudera Manager auto-install and run it again, and it will all be good :)
That is all the installation work you need to do to get around it without a Cloudera support contract (that's their business) :)
All of this worked fine for me when I had this problem on Cloudera CDH 4.x with Solr.
Regards
Go to this link:
http://www.cloudera.com/documentation/enterprise/5-7-x/topics/cm_ig_mysql.html
Go to the topic "Installing the MySQL JDBC Driver" and follow the instructions. Finally, restart your Hive service.
Thx, Kumar
