PHP Script to copy database from one server to another

I have a scenario where I need to copy the production database to my dev database on a daily basis. The two databases are on different servers. My plan is to write a cron job that does this, and I have written a PHP script. I connect to the remote production server via sshpass, take a dump, and then load that dump.
exec("sshpass -p 'mypassword' ssh root#IP_ADDRESS:PORT");
exec("mysqldump -u root -p DB > production_dump.sql");
exec("mysql -u root -p test < production_dump.sql");
But the first line throws an error stating:
ssh: Could not resolve hostname IP_ADDRESS:PORT: Name or service not known
I have tried the solutions given on the internet but none of them worked. Can anyone please explain what I am doing wrong?

Your command is failing because the host argument is not formatted correctly. You need to use one of the following formats:
sshpass -p 'mypassword' ssh -p PORT root@IP_ADDRESS
sshpass -p 'mypassword' ssh root@IP_ADDRESS -p PORT
sshpass -p 'mypassword' ssh ssh://root@IP_ADDRESS:PORT
However, I'm not sure the rest of the script will work, especially if it starts asking for a password. A bash script would be the way to go.
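For instance, here is a minimal cron-ready bash sketch (hostnames, ports, credentials, and database names are placeholders, not values confirmed by the question). Note that each exec() call spawns its own process, so the ssh session opened by the first call never applies to the later ones; running mysqldump over ssh in a single pipeline avoids that:
#!/bin/bash
# Placeholder host and credentials - substitute your own.
PROD_HOST="IP_ADDRESS"
PROD_PORT="PORT"
# Dump the production DB over ssh and load it straight into the local dev DB.
sshpass -p 'mypassword' ssh -p "$PROD_PORT" root@"$PROD_HOST" \
  "mysqldump -u root -p'mypassword' production_db" \
  | mysql -u root -p'devpassword' test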

Related

Mssql login fail ECONNREFUSED 127.0.0.1:1433

While trying to log in using mssql -u sa -p mypassword, I get this error: Error: Failed to connect to localhost:1433 - connect ECONNREFUSED 127.0.0.1:1433
I installed SQL Server on Docker using this tutorial https://www.microsoft.com/en-us/sql-server/developer-get-started/java-mac and started it.
I am using macOS Sierra. I have searched all over the internet, including Stack Overflow, but found no answer. The only answer I get is to enable TCP/IP using SQL Server Configuration Manager, but macOS doesn't have a Configuration Manager, so I can't enable TCP/IP. Kindly assist.
I finally found the solution: Docker set the memory to 2GB while MS SQL Server requires 3.25GB. All I had to do was go to the Docker preferences and change the memory to 4GB, and it works :). I was using SQL Server on Docker on a Mac.
I'm using Docker to set up containers and then sql-cli to access SQL Server. This is how I resolved the error I got after running mssql -u sa -p mypassword.
What I didn't realize at the beginning was that I had provided too simple a password when setting up the Docker container:
docker run -d --name Homer -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=myPassw0rd' -p 1433:1433 microsoft/mssql-server-linux
The terminal doesn't say this; only after going to Docker > Kitematic and checking the logs of the just-created container did I see such a security warning. I deleted that container and created a new one with a strong password.
Later I got the error again after starting the wrong container (the connection failed because I was providing the password for a different container). Since then, I prefer to use Kitematic to manage and access my containers. Before I type mssql -u sa -p mypassword in the terminal and start to work, I just go to Docker > Kitematic and start my container.
In my case the container exited due to an insecure mssql password.
Try reading the container logs.
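For instance, a quick way to see why a container exited (standard Docker commands; substitute your own container name or ID):
docker ps -a                 # lists all containers, including exited ones
docker logs container_name   # prints its logs, including SQL Server's password-policy errors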
In my case, I just needed to start the container.
docker start {container_name}
In my case (I was following this tutorial https://database.guide/how-to-install-sql-server-on-a-mac/) the problem was the host address.
I was trying to connect to localhost and got the ECONNREFUSED message, but then I realized that I needed to use the local IP Docker assigned to the container (it was something like 192.168.xxx.xxx), so:
mssql -s 192.168..... -o 1433 -u sa -p 'mypassword'
finally worked.
I had the same problem; in my case I noticed that the problem was the PORT, so:
1) Make sure the container is running:
docker start "container_name"
2) Then, get the correct PORT with:
docker ps
3) Run it:
mssql -o "PORT" -u sa -p "pwd"
I'm adding this answer to complement Krzysztof Tomasz's answer.
I was following this guide: How to Install SQL Server on a Mac
Everything was going well, but when connecting to the container with this command:
mssql -u sa -p mypass1
I got:
Error: Failed to connect to localhost:1433 - connect ECONNREFUSED
127.0.0.1:1433
Then I opened the Docker app, clicked the container, and in the Logs menu I saw the following:
2020-02-05 16:26:45.71 spid20s ERROR: Unable to set system
administrator password: Password validation failed. The password does
not meet SQL Server password policy requirements because it is too
short. The password must be at least 8 characters..
The password I set had only 7 chars. :o)
Now this makes sense.
This is also documented in the Microsoft docs here:
Quickstart: Run SQL Server container images with Docker
Solved this problem by removing the container and launching it again...
As I only had one container I ran the following command:
docker rm $(docker ps -a -q)
Then launched sql server image again with a stronger password:
docker run -d --name sql_server_demo -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=MyPass11' -p 1433:1433 microsoft/mssql-server-linux
I resolved this issue by updating the port from 1422 to 1433, I used Kitematic to implement this update.
I had the same, and it was a RAM issue. HOWEVER, 4GB didn't do it for me; for some reason, in my case, I needed 6GB, and then it worked.
Make sure you have started the container in Docker.
Command to start a container: docker start "containerName"
and then try to connect with mssql
I had the same problem too. After studying the logs with two commands:
docker ps -a
then
docker logs 99373f58f2ff
I understood that the problem was that the password does not meet the SQL Server password policy. That's it.

Run a batch file on remote host using PsExec

I am experimenting with PsExec and I am trying to run a batch file on a remote host from a local PC on the same LAN. The batch file has been tested on the local PC and works fine. I managed to connect to the remote host via PsExec using the below commands.
PsExec -u Username -p Password \\Remote_Host_IP C:\Path_to_batch_file\Batch.bat
I am getting this error:
PsExec could not start C:\Path_to_batch_file\Batch.bat on Remote_Host_IP:
The system cannot find the file specified.
This is probably occurring since it is searching for the file on the remote host while the file is located on the local PC thus not finding the file.
I do not want to make any manual intervention on the remote host.
After trying hard to find the correct commands on the net, I still cannot solve this issue.
If you want you can try this:
PsExec.exe @pc_list.txt >>pc_log.txt -c D:\PC\pc.bat
Where:
pc_list.txt is a list of all the PCs on your network
pc_log.txt is a log file
D:\PC\pc.bat is the path to your script
You can schedule it with Task Scheduler from a server that has access to your whole network.
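For reference, the @file argument is just a plain-text file with one computer name or IP address per line, e.g. (hypothetical hosts):
PC-001
PC-002
192.168.0.55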
Try this.
Psexec \\remotehost -u username -p password -c local_path\file.bat
Run it as:
PsExec \\Remote_Host_IP -u Username -p Password cmd /c "C:\Path_to_batch_file\Batch.bat"
This should fix it

Mysqldump connecting issue

I'm trying to make a dump with the following command:
mysqldump -v -u root -p -h 127.0.0.1 -P 3308 -x --add-drop-table --add-locks --create-options -K -e -q -A > database.sql
The result (after the password input) is the message "Connecting to 127.0.0.1...". After this there is nothing (no errors, just waiting).
database.sql is an empty file.
Why do I see no activity? Is it a bug?
From http://linuxcommand.org/man_pages/mysqldump1.html
The password to use when connecting to the server. If you use the
short option form (-p), you cannot have a space between the option and
the password. If you omit the password value following the --password
or -p option on the command line, you are prompted for one.
The system may be waiting for you to input a password.
If you want to avoid that just add the password in the command. Assuming your password is "FLOWER":
mysqldump -v -u root -pFLOWER -h 127.0.0.1 -P 3308 -x --add-drop-table --add-locks --create-options -K -e -q -A > database.sql
This problem, as you describe it, can be caused by the MySQL server not running or not being available on the host (in your case, localhost), or by it running on a different port.
What kind of a system is it? If it is a flavor of linux/unix, you can run
ps -ef | egrep mysql
to see if the mysql server is running. Check the equivalent command on Windows or whatever else you may be running. Also, you can verify that this is the problem by seeing if this works:
mysql -u root -p -h 127.0.0.1 -P 3308
The solution is to start the server:
/etc/init.d/mysqld start
or the equivalent on your system.
Note: if it is running, determine what port it is on - it is possible that you are not specifying the right port number. The default is 3306 - it is unusual that you are using a non-standard port.
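If you're not sure which port the server is actually listening on, one way to check (standard MySQL and Linux commands; adjust for your system):
mysql -u root -p -e "SHOW VARIABLES LIKE 'port';"
or, at the OS level:
sudo netstat -tlnp | grep mysqld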

Copying PostgreSQL database to another server

I'm looking to copy a production PostgreSQL database to a development server. What's the quickest, easiest way to go about doing this?
You don't need to create an intermediate file. You can do
pg_dump -C -h localhost -U localuser dbname | psql -h remotehost -U remoteuser dbname
or
pg_dump -C -h remotehost -U remoteuser dbname | psql -h localhost -U localuser dbname
using psql or pg_dump to connect to a remote host.
With a big database or a slow connection, dumping to a file and transferring the file compressed may be faster.
As Kornel said, there is no need to dump to an intermediate file; if you want to work compressed, you can use a compressed tunnel
pg_dump -C dbname | bzip2 | ssh remoteuser@remotehost "bunzip2 | psql dbname"
or
pg_dump -C dbname | ssh -C remoteuser@remotehost "psql dbname"
but this solution also requires a session on both ends.
Note: pg_dump is for backing up and psql is for restoring. So, the first command in this answer copies from local to remote, and the second one from remote to local. More -> https://www.postgresql.org/docs/9.6/app-pgdump.html
pg_dump the_db_name > the_backup.sql
Then copy the backup to your development server, restore with:
psql the_new_dev_db < the_backup.sql
Use pg_dump, and later psql or pg_restore - depending on whether you choose the -Fp or -Fc option to pg_dump.
Example of usage:
ssh production
pg_dump -C -Fp -f dump.sql -U postgres some_database_name
scp dump.sql development:
rm dump.sql
ssh development
psql -U postgres -f dump.sql
If you are looking to migrate between versions (e.g. you upgraded Postgres and have 9.1 running on localhost:5432 and 9.3 running on localhost:5434) you can run:
pg_dumpall -p 5432 -U myuser91 | psql -U myuser94 -d postgres -p 5434
Check out the migration docs.
pg_basebackup seems to be the better way of doing this now, especially for large databases.
You can copy a database from a server with the same or older major version. Or more precisely:
pg_basebackup works with servers of the same or an older major version, down to 9.1. However, WAL streaming mode (-X stream) only works with server version 9.3 and later, and tar format mode (--format=tar) of the current version only works with server version 9.5 or later.
For that you need, on the source server:
1. listen_addresses = '*' to be able to connect from the target server. Make sure port 5432 is open for that matter.
2. At least 1 available replication connection: max_wal_senders = 1 (-X fetch), 2 for -X stream (the default in the case of PostgreSQL 12), or more.
3. wal_level = replica or higher to be able to set max_wal_senders > 0.
4. host replication postgres DST_IP/32 trust in pg_hba.conf. This grants access to the pg cluster to anyone from the DST_IP machine. You might want to resort to a more secure option.
Changes 1, 2, 3 require a server restart; change 4 requires a reload (see the config sketch below).
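Putting that together, a sketch of the source-server changes (file paths and the service name assume a Debian/Ubuntu-style PostgreSQL 12 install; DST_IP is a placeholder for the target server's address):
# /etc/postgresql/12/main/postgresql.conf
listen_addresses = '*'
max_wal_senders = 2      # 2 allows -X stream
wal_level = replica
# /etc/postgresql/12/main/pg_hba.conf
host replication postgres DST_IP/32 trust
# apply: changes 1-3 need a restart (a reload is enough for the pg_hba.conf change alone)
sudo systemctl restart postgresql@12-main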
On the target server:
# systemctl stop postgresql@VERSION-NAME
postgres$ pg_basebackup -h SRC_IP -U postgres -D VERSION/NAME --progress
# systemctl start postgresql@VERSION-NAME
The accepted answer is correct, but if you want to avoid entering the password interactively, you can use this:
PGPASSWORD={{export_db_password}} pg_dump --create -h {{export_db_host}} -U {{export_db_user}} {{export_db_name}} | PGPASSWORD={{import_db_password}} psql -h {{import_db_host}} -U {{import_db_user}} {{import_db_name}}
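Alternatively, to keep passwords off the command line entirely (where they can show up in ps output), libpq also reads a ~/.pgpass file. A minimal sketch with placeholder values:
# ~/.pgpass - one line per server: hostname:port:database:username:password
export_db_host:5432:export_db_name:export_db_user:export_db_password
import_db_host:5432:import_db_name:import_db_user:import_db_password
Then chmod 600 ~/.pgpass, since libpq ignores the file if its permissions are more permissive.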
Run this command with the name of the database you want to back up to take a dump of the DB.
pg_dump -U {user-name} {source_db} -f {dumpfilename.sql}
e.g. pg_dump -U postgres mydbname -f mydbnamedump.sql
Now scp this dump file to the remote machine where you want to copy the DB.
e.g. scp mydbnamedump.sql user01@remotemachineip:~/some/folder/
On the remote machine, run the following command in ~/some/folder to restore the DB.
psql -U {user-name} -d {destination_db} -f {dumpfilename.sql}
e.g. psql -U postgres -d mynewdb -f mydbnamedump.sql
Dump your database: pg_dump database_name > backup.sql
Import your database back: psql db_name < backup.sql
I struggled quite a lot, and eventually the method that allowed me to make it work with Rails 4 was:
on your old server
sudo su - postgres
pg_dump -c --inserts old_db_name > dump.sql
I had to use the postgres Linux user to create the dump. I also had to use -c (--clean), which adds DROP commands so objects are dropped and recreated on the new server. --inserts tells it to dump the data as INSERT statements instead of COPY, which otherwise would not work for me :(
then, on the new server, simply:
sudo su - postgres
psql new_database_name < dump.sql
To transfer the dump.sql file between servers I simply used "cat" to print the content and then "nano" to recreate it, copy-pasting the content.
Also, the ROLE I was using on the two databases was different, so I had to find-and-replace all the owner names in the dump (a sed sketch follows).
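A sketch of that find-and-replace with sed (old_owner and new_owner are placeholders for the two role names; on macOS use -i '' instead of -i):
sed -i 's/old_owner/new_owner/g' dump.sql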
Let me share a Linux shell script to copy your table data from one server to another PostgreSQL server.
Reference taken from this blog:
Linux Bash Shell Script for data migration between PostgreSQL Servers:
#!/bin/bash
psql \
-X \
-U user_name \
-h host_name1 \
-d database_name \
-c "\\copy tbl_Students to stdout" \
| \
psql \
-X \
-U user_name \
-h host_name2 \
-d database_name \
-c "\\copy tbl_Students from stdin"
I am just migrating the data; please create a blank table on your destination/second database server.
This is a utility script. You can further modify it for generic use, for example by adding parameters for host_name, database_name, and table_name, as sketched below.
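For example, a minimal parameterized variant of the same script (an untested sketch; user_name is kept from the original, and the destination table must already exist):
#!/bin/bash
# Usage: ./copy_table.sh SRC_HOST DST_HOST DB_NAME TABLE_NAME
SRC_HOST="$1"; DST_HOST="$2"; DB_NAME="$3"; TABLE_NAME="$4"
psql -X -U user_name -h "$SRC_HOST" -d "$DB_NAME" \
    -c "\\copy $TABLE_NAME to stdout" \
| psql -X -U user_name -h "$DST_HOST" -d "$DB_NAME" \
    -c "\\copy $TABLE_NAME from stdin"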
Here is an example using pg_basebackup
I chose to go this route because it backs up the entire database cluster (users, databases, etc.).
I'm posting this as a solution here because it details every step I had to take; feel free to add recommendations or improvements after reading the other answers and doing some more research.
For Postgres 12 and Ubuntu 18.04 I had to do these actions:
On the server that is currently running the database:
Update pg_hba.conf, for me located at /etc/postgresql/12/main/pg_hba.conf
Add the following line (substitute 192.168.0.100 with the IP address of the server you want to copy the database to).
host replication postgres 192.168.0.100/32 trust
Update postgresql.conf, for me located at /etc/postgresql/12/main/postgresql.conf. Add the following line:
listen_addresses = '*'
Restart postgres:
sudo service postgresql restart
On the host you want to copy the database cluster to:
sudo service postgresql stop
sudo su root
rm -rf /var/lib/postgresql/12/main/*
exit
sudo -u postgres pg_basebackup -h 192.168.0.101 -U postgres -D /var/lib/postgresql/12/main/
sudo service postgresql start
Big picture - stop the service, delete everything in the data directory (mine is in /var/lib/postgresql/12). The permissions on this directory are drwx------ with user and group postgres. I could only do this as root, not even with sudo -u postgres; I'm unsure why. Ensure you are doing this on the new server you want to copy the database to! You are deleting the entire database cluster.
Make sure to change the IP address from 192.168.0.101 to the IP address you are copying the database from. Copy the data from the original server with pg_basebackup. Start the service.
Update pg_hba.conf and postgresql.conf to match the original server configuration - before you made any changes adding the replication line and the listen_addresses line (in my case I had to add the ability to log in locally via md5 to pg_hba.conf).
Note there are considerations for max_wal_senders and wal_level that can be found in the documentation. I did not have to do anything with this.
If you are more comfortable with a GUI, you can use the pgAdmin software.
Connect to your source and destination servers
Right-click on the source db > backup
Right-click on the destination server > create > database. Use the same properties as the source db (you can see the properties of the source db by right-click > properties)
Right-click on the created db > restore.
