mysqldump data through command line without affecting web application - database

I read some topics on restoring and copying a MySQL database from one server to another,
but I wanted to be sure of the impact it might have on my production web app.
So basically here is my situation:
Server A has a database called enrollment.
Server B has a database called enrollment.
Through the command line, how do I do the following:
1. Create a backup copy of 'enrollment' on Server A
2. Drop database enrollment on Server A
3. Copy/Dump database enrollment from Server B to Server A (do I need to ssh or copy the sql file, or can I do it through mysql?)
The database size is about 64 MB.
While I do the above, how long will the production web app be impacted?
Based on my research, this was my thinking, but I wanted to be careful since I am dealing with production data:
On server B, mysqldump --databases enrollment > enrollment_backup.sql
scp enrollment_backup.sql from Server B to Server A
drop database enrollment
mysql < enrollment_backup.sql
Note: I have root access on server A & server B.

Do the drop of the database only as the last step:
1) back up server A
2) load A's dump on server B
3) change the web app to point to B
4) if everything is OK, you can drop the database on server A

You can dump any remote server to your local one. Or even any remote server to any other remote server.
mysqldump -hproduction -uroot1 -p --add-drop-database --add-drop-table --extended-insert --lock-all-tables --routines --databases mydatabase | \
mysql -hlocalhost -uroot2 -ppassword
This will connect to the server production with user root1 and a password you will enter via a prompt. The database mydatabase (you can give a list of databases as well) will be dumped.
The standard output with all the commands is then redirected to another MySQL server running on localhost (can be any other host as well), using user root2 and the password password. The second password has to be given on the command line.
Advanced possibilities:
If both systems are not in a VPN or secured by any other means, you can create an SSH tunnel and then dump using this tunnel.
If you do not want to pass the password via command line or do not want to enter it at all, let mysqldump or mysql read it from an individual options file via --defaults-extra-file.
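For illustration, here is a minimal sketch combining both ideas; the tunnel port, SSH user, and options-file name below are assumptions, not part of the original setup:
# open an SSH tunnel so the remote MySQL port appears locally on port 3307
ssh -f -N -L 3307:127.0.0.1:3306 sshuser@production
# dump.cnf (chmod 600) holds the password so it never appears on the command line:
# [client]
# password=secret
mysqldump --defaults-extra-file=dump.cnf -h127.0.0.1 -P3307 -uroot1 \
  --lock-all-tables --routines --databases mydatabase | \
mysql -hlocalhost -uroot2 -ppassword
Note that --defaults-extra-file has to be the first option on the command line.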

Related

Import/Export PostgreSQL db "without" pg_dump or sql file / backup, etc...?

I need to import an old DB into a new Postgres server.
Is there a way to migrate an old database to a new server without using pg_dump?
I don't have the sql file, or the old server backup file, nor the user and password, just the physical files in the "\data" folder. Is there any way to do this?
The target server is the same version as the old server.
Thanks.
Well as a test you could try:
pg_ctl start -D $DATA
Where pg_ctl comes from the target version and $DATA is the /data directory. You have not said how you came to have just a /data directory. If this came from an unclean shutdown or a corrupted drive, the possibility exists that the server will not start.
UPDATE
To get around the auth failure, find pg_hba.conf and create or modify the local connection entry to use the trust method. For more info see pg_hba and trust. Then you should be able to connect like:
psql -d some_db -U postgres
Once in you can use ALTER ROLE to change password:
ALTER ROLE <role_name> WITH PASSWORD 'new_password';
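For reference, a minimal sketch of what the relaxed pg_hba.conf entry could look like (the exact file location and existing entries depend on your installation):
# in pg_hba.conf: allow local socket connections as postgres without a password
# TYPE  DATABASE  USER      METHOD
local   all       postgres  trust
# reload the configuration, then connect
pg_ctl reload -D $DATA
psql -d some_db -U postgres
Remember to switch the method back (e.g. to md5 or peer) once the password has been reset.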

How to push data from local SQL Server to Tableau Server on AWS

We are developing Tableau dashboards and deploying the workbooks on an EC2 Windows instance in AWS. One of the data sources is the company SQL Server inside the firewall. The server is managed by IT and we only have read permission to one of the databases. The current solution is to build the workbook locally in Tableau Desktop by connecting to the company SQL Server. Before the workbooks are published to Tableau Server, the data is extracted from the data sources, and the static extract is uploaded with the workbook when it is published.
Instead of linking to a static extract on Tableau Server, we would like to set up a database on AWS (e.g. PostgreSQL), probably on the same instance, and push the data from the company SQL Server to the AWS database.
There may be a way to push directly from SQL Server to Postgres on AWS. But since we don't have much control of the server, plus the IT folks are probably not willing to push data externally, this is not an option. What I can think of is as follows:
Set up Postgres on AWS instance and create the tables with same schemas as the ones in SQL server.
Extract data from SQL server and save as CSV files. One table per file.
Enable file system sharing on AWS windows instance. So the instance can read files from local file system directly.
Load data from CSV to Postgres tables.
Set up the data connection on Tableau Server on AWS to read data from Postgres.
I don't know if others have come across a situation like this and what their solutions are, but I think this is not an uncommon scenario. One change would be to have both local Tableau Desktop and AWS Tableau Server connect to Postgres on AWS. Not sure if local Tableau could access Postgres on AWS, though.
We also want to automate the whole process as much as possible. On the local server, I can probably run a Python script as a cron job to frequently export data from SQL Server and save it to CSVs. On the server side, something similar would run to load data from the CSVs into Postgres. If the files are big, though, it may be pretty slow to import data from CSV into Postgres. But there is no better way to transfer files from local to the AWS EC2 instance programmatically, since it is a Windows instance.
I am open to any suggestions.
A. Platform choice
If you use a database other than SQL Server on AWS (say Postgres), you need to perform one (or maybe two) conversions:
In the integration from the on-prem SQL Server to the AWS database you need to map from SQL Server datatypes to Postgres datatypes
I don't know much about Tableau, but if it is currently pointing at SQL Server, you probably need some kind of conversion to point it at Postgres
These two steps alone might make it worth your while to investigate a SQL Express RDS. SQL Express has no licencing cost but obviously Windows does. You can also run SQL Express on Linux, which would have no licencing costs, but would require a lot of fiddling about to get running (i.e. I doubt if there is a SQL Express Linux RDS available)
B. Integration Approach
Any process external to your network (i.e. on the cloud) that is pulling data from your network will need the firewall opened. Assuming this is not an option, that leaves us only with push-from-on-prem options.
Just as an aside on this point, Power BI achieves its desktop data integration by using a desktop 'gateway' that coordinates data transfer, meaning that cloud Power BI doesn't need to open a port to get what it needs; it uses the desktop gateway to push it out.
Given that we only have push options, we need something on-prem to push data out. Yes, this could be a cron job on Linux or a Windows scheduled task. Please note, this is where you start creating shadow IT.
To get data out of SQL Server to be pushed to the cloud, the easiest way is to use BCP.EXE to generate flat files. If these are going into SQL Server, they should be in native format (to save complexity). If they are going to Postgres, they should be tab delimited.
If these files are being uploaded to SQL Server, then it's just another BCP command to push the native files into SQL Server tables (prior to this you need to run a SQLCMD.EXE command to truncate the target tables).
So for three tables, assuming you'd installed the free* SQL Server client tools, you'd have a batch file something like this:
REM STEP 1: Clear staging folder
DEL /Q C:\Staging\*.TXT
REM STEP 2: Generate the export files
BCP database.dbo.Table1 OUT C:\Staging\Table1.TXT -T -S LocalSQLServer -N
BCP database.dbo.Table2 OUT C:\Staging\Table2.TXT -T -S LocalSQLServer -N
BCP database.dbo.Table3 OUT C:\Staging\Table3.TXT -T -S LocalSQLServer -N
REM STEP 3: Clear target tables
REM Your SQL RDS is unlikely to support single sign on
REM so need to use user/pass here
SQLCMD -U username -P password -S RDSSQLServerName -d databasename -Q"TRUNCATE TABLE Table1; TRUNCATE TABLE Table2; TRUNCATE TABLE Table3;"
REM STEP 4: Push data in
BCP database.dbo.Table1 IN C:\Staging\Table1.TXT -U username -P password -S RDSSQLServerName -N
BCP database.dbo.Table2 IN C:\Staging\Table2.TXT -U username -P password -S RDSSQLServerName -N
BCP database.dbo.Table3 IN C:\Staging\Table3.TXT -U username -P password -S RDSSQLServerName -N
(I'm pretty sure that BCP and SQLCMD are free... not sure but you can certainly download the free SQL Server tools and see)
If you wanted to push to PostgreSQL instead:
in step 2, you'd need to replace the -N option with -c, which makes the file plain text, tab delimited, readable by anything
in step 3 and step 4 you'd need to use the associated Postgres command line tool (psql), but you'd need to deal with data types etc. (which can be a pain - ambiguous date formats alone are always a huge problem); a rough sketch follows below
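For illustration, the Postgres variant of steps 2-4 might look something like this (table and file names are carried over from the batch file above; the Postgres host, user, and database names are placeholders):
REM STEP 2 (Postgres variant): export as tab-delimited text instead of native format
BCP database.dbo.Table1 OUT C:\Staging\Table1.TXT -T -S LocalSQLServer -c
REM STEP 3/4 (Postgres variant): truncate and reload via psql's \copy
REM (set PGPASSWORD beforehand, or use a pgpass file, to avoid a password prompt)
psql -h pghost -U username -d databasename -c "TRUNCATE TABLE table1;"
psql -h pghost -U username -d databasename -c "\copy table1 FROM 'C:/Staging/Table1.TXT'"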
Also note here the AWS RDS instance is just another database with a hostname, login, password. The only thing you have to do is make sure the firewall is open on the AWS side to accept incoming connections from your IP Address
There are many more layers of sophistication you can build into your integration: differential replication, retries etc. but given the 'shadow IT status' this might not be worth it
Also be aware that I think AWS charges for data uploads, so if you are replicating a 1G database everyday, that's going to add up. (Azure doesn't charge for uploads but I'm sure you'll pay in some other way!)
For this type of problem I would strongly recommend use of SymmetricDS - https://www.symmetricds.org/
The main caveat is that the SQL Server would require the addition of some triggers to track changes but at that point SymmetricDS will handle the push of the data.
An alternative approach, similar to what you suggested, would be to have a script export the data into CSV files, upload them to S3, and then have a bucket event trigger on the S3 bucket that kicks off a Lambda to load the data when it arrives.
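If you went the S3 route, the on-prem side could be as simple as a scheduled script along these lines (the bucket name is made up for illustration; the Lambda that loads the files into Postgres is not shown):
REM export each table to CSV first (e.g. with BCP -c as above), then push the folder to S3
aws s3 sync C:\Staging\ s3://my-staging-bucket/incoming/
The S3 bucket event on the incoming/ prefix would then trigger the Lambda to load each file as it arrives.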

pgAdmin 9.5 not showing all databases

I have a problem with my database. I installed PostgreSQL 9.5 on my Ubuntu server. I changed the postgresql.conf file to bind the PostgreSQL server to localhost. This allows me to run pgAdmin and connect to my database by also forwarding port 5432, where PostgreSQL runs.
The problem I am experiencing is that I only see the default database 'postgres', but not my newly created one, 'games' (I created this database by running create database games as the postgres user connected to the server).
And here is my screenshot of the pgAdmin application with all the property values I use to connect to my server.
As you can see from the first picture, I use the same permissions as for the postgres database - it is blank, which should grant access to everyone. I know I have to change that later and limit it to the postgres user I have, but for now I will leave it that way. Once I manage to see my 'games' database, I will start to tighten the security more.
UPDATE: I granted all access to the database 'games', which is visible in the third screenshot - the access privilege is different. This did not help; I would still not see the database when connecting to the server with pgAdmin. I saw someone with a similar problem who right-clicked on the server and clicked 'New database'. This seems to have created a new database, because as you can see in pgAdmin, the application managed to find the score table I created inside pgAdmin. The reason I believe this is the case is that running the same SQL while connected to the server, postgres=# select * from score;, results in ERROR: relation "score" does not exist LINE 1: select * from score;.
I managed to find the problem. One of my problems was that I had (without being aware of it) installed a PostgreSQL server on my local machine. It seems I installed it along with my pgAdmin install. So every time I would connect to my server, I would establish a connection to the localhost server and not my remote server. So I just uninstalled that server and installed only the pgAdmin client.
The second problem I had was that the file /etc/postgresql/9.5/main/pg_hba.conf had to be changed. So I ran:
sudo vi /etc/postgresql/9.5/main/pg_hba.conf
and changed the line
# Database administrative login by Unix domain socket
local all postgres peer
to
# Database administrative login by Unix domain socket
local all postgres md5
Once that was changed, I had to reload the configuration by executing:
sudo /etc/init.d/postgresql reload
I would also point out that it is important to have the postgres user as both a Unix and a DB user with the same passwords. I found all this information here.
Try granting access privileges explicitly for your new table.
I believe a blank access privileges column means the table has DEFAULT access privileges. The default could be no public access for tables, columns, schemas, and tablespaces. For more info: http://www.postgresql.org/docs/9.4/static/sql-grant.html
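For example, a minimal sketch of what explicit grants could look like here (run as a superuser; the database and table are the ones mentioned in the question):
-- grant explicit privileges on the database, and on a table inside it
GRANT CONNECT ON DATABASE games TO PUBLIC;
GRANT SELECT ON TABLE score TO PUBLIC;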

OpenERP 6.1 Database Migration to new VPS through shell

I've been trying to migrate my old openerp server installation to a new VPS so I tried to migrate the database.
I need to do it via shell because of the size of the database and unstable connection.
What I've done is log in to server1 and then
su postgres
pg_dump dbname > db.dump
then I transferred the file to the new server and restored it like this
createdb dbname
psql dbname < db.dump
The database itself was restored and I can browse through the tables if I want to, but when I try to get into OpenERP the database is not available in the select box where the databases are listed. If I create new databases using the OpenERP interface, they appear correctly in the select box and I can connect.
I tried to create the db with UTF8 encoding and using template1 as well, but nothing was different. I also tried to create the database via the interface, drop the tables, and restore the backup, but this gives errors after I log in, like "product.product relation does not exist".
Any ideas what else I could try? Thanks in advance.
When restoring the database take care to restore it with the correct ownership.
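A minimal sketch of what that could look like, assuming the OpenERP database role is called openerp (substitute the role your server actually uses):
# create the database owned by the OpenERP role, then restore as that role
createdb -O openerp dbname
psql -U openerp dbname < db.dump
Alternatively, change the owner after the restore with ALTER DATABASE dbname OWNER TO openerp; (individual tables may still need their ownership fixed).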
You may want to take a look at this question

How to restore a MySQL .dump file in remote host

I created a backup of a localhost MySQL database (Drupal) using phpMyAdmin (file format *.sql). The size of this backup is 20 MB. I created a new database in phpMyAdmin on my live (online) server. Now, when I import the backup SQL file I see this error:
#2006 - MySQL server has gone away
I know this error can be fixed with this:
edit ../sql/bin/my.ini
set max_allowed_packet to e.g. 16M
but my server support said a better way is to restore MySQL using:
mysql -u username -p dbname < file.sql
Now, I don't know how to use this command line for the remote server.
You need SSH access to your server to execute that command using a terminal. If support told you you should use that command, I would think you have SSH access. The SQL file would have to be on your server, so you'd need to transfer it there first (using for example scp).
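A minimal sketch of that workflow (hostname, username, and paths are placeholders):
# copy the dump to the server, then run the restore there over SSH
scp file.sql user@your-server:/tmp/
ssh user@your-server
mysql -u username -p dbname < /tmp/file.sql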
But if you're not used to the command line, I would recommend first spending some time learning the basics before jumping right into it ;)
