Skip all databases and run only a specific one

I have a SQL file generated by "mysqldump --all-databases". It contains many databases. What I want to do is update my local copy of one specific database, not all of them. I tried "mysql -database=db_name < file.sql", but it updated all the databases. Is there a way to skip every database except the one I want?

You can try doing:
mysql -D example_database -o < dump.sql
This will only execute the SQL statements for the specified database and skip the statements for all other databases. The -o ("one database") option is critical for this to work as expected; it tells mysql to ignore statements that relate to other databases.
Here, dump.sql is the file produced by mysqldump --all-databases.
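For reference, the same invocation with long-form option names (a minimal sketch; example_database and dump.sql are placeholders):
mysql --one-database --database=example_database < dump.sql
Note that --one-database filters statements based only on the USE statements in the dump, so it relies on the dump containing the usual per-database USE headers (mysqldump --all-databases writes these by default).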

It's getting dirty from here on, so stop reading now or consider yourself warned ;)
It's not pretty, and maybe there is a better (correct) way to achieve this. But assuming the dumps you are working with aren't too big, you might be quicker to import the full dump into a temporary database server and then create a fresh dump of just the database you'd like to import. (As I said, not pretty.)
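A rough sketch of that workaround, assuming you can spin up a throwaway MySQL instance (here via Docker; the container name, port, password, and database name are all placeholders):
docker run --name tmp-mysql -e MYSQL_ROOT_PASSWORD=secret -d -p 3307:3306 mysql
# wait for the temporary server to come up, then load the full dump into it
mysql -h 127.0.0.1 -P 3307 -u root -psecret < full_dump.sql
# dump only the database you actually want
mysqldump -h 127.0.0.1 -P 3307 -u root -psecret example_database > example_database_only.sql
docker rm -f tmp-mysql
The single-database dump can then be imported into your local server as usual.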
Additionally, you should really make sure that you're able to restore the backups you make (in every way you might need to). It could get really embarrassing the day you need them urgently.

A couple of ideas:
You can edit the dump file (it's just a text file, after all), extracting only the items related to the database you want to restore. Each database is written to the file as its own block, so this is a matter of identifying the one big block you want and deleting the bits before and after it -- i.e., not as much of a chore as it sounds. Look for the CREATE DATABASE and USE statements; there's also (at least in my version, with my options) a banner comment "Current Database: foo" at the top of each section. This is easy to do with vi or anything else that handles large edits comfortably (see the sed sketch after this answer). You can then search through the result to make sure there are no cross-references to the databases you don't want updated.
You can back up the databases you don't want updated, do the update, then restore them. I mean, before you do this sort of thing, you have a backup anyway, right? ;-)
You can restore to a blank MySQL instance, then grab the backup of the one you want. Blech.
(Variation on #3) You can do a rename on all of your current databases, import the file, then drop the ones you don't want and rename the originals back. Also blech.
I'd probably go with #1 (after a full backup). Good luck with it.
Edit: No, I'd go with codaddict's -D databasename -o solution (after a full backup). Nice one.
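For idea #1, a hedged sketch of pulling out the relevant block without opening the whole file in an editor, assuming the dump contains the "-- Current Database:" banner comments described above (wanted_db and full_dump.sql are placeholders):
sed -n '/^-- Current Database: `wanted_db`/,/^-- Current Database: `/p' full_dump.sql > wanted_db.sql
The range ends at the banner of the next database, so the last line of the output is that next banner comment; it is harmless (it is only an SQL comment), but you can delete it if you prefer a clean file. You may also want to keep the SET statements from the very top of the original dump when restoring, since they set the character set and other session options.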

Related

How can I safely back up a huge database?

I need to back up a Drupal database, and it is huge: it has over 1500 tables (don't blame me, it's a Drupal thing) and is 10 GB in size.
I couldn't do it with phpMyAdmin; I just got an error when it started to build the .sql file.
I want to make sure I won't break anything or take the server down when I try to back it up.
I was going to attempt a mysqldump on my server and then copy the file down locally, but realised that this may cause unforeseen problems. So my question to you is: is it safe to use mysqldump on so many tables at once, and even if it is safe, are there any problems such a huge file could lead to in the future when rebuilding the database?
Thanks for the input guys.
is it safe to use mysqldump on so many tables at once
I run daily backups with mysqldump on servers literally 10x this size: 15000+ tables, 100+ GB.
If you have not examined the contents of a file produced by mysqldump ... you should, because to see its output is to understand why it is an intrinsically safe backup utility:
The backups are human-readable, and consist entirely of the necessary SQL statements to create a database exactly like the one you backed up.
In this form, their content is easily manipulated with ubiquitous tools like sed and grep and perl, which can be used to pluck out just one table from a file for restoration, for example.
If a restoration fails, the error will indicate the line number within the file where the error occurred. This is usually related to buggy behavior in the version of the server where the backup was created (e.g. MySQL Server 5.1 allowed you to create views in some situations where the server itself would not accept the output of its own SHOW CREATE VIEW statement. The create statement was not considered -- by the same server -- to be a valid view definition, but this was not a defect in mysqldump, or in the backup file, per se.)
Restoring from a mysqldump-created backup is not lightning fast, because the server must execute all of those SQL statements, but from the perspective of safety, I would argue that there isn't a safer alternative, since it is the canonical backup tool and any bugs are likely to be found and fixed by virtue of the large user base, if nothing else.
Do not use the --force option, except in emergencies. It will cause the backup to skip over any errors encountered on the server while the backup is running, causing your backup to be incomplete with virtually no warning. Instead, find and fix any errors that occur. Typical errors during backup are related to views that are no longer valid because they reference tables or columns that have been renamed or dropped, or where the user who originally created the view has been removed from the server. Fix these by redefining the view, correctly.
Above all, test your backups by restoring them to a different server. If you haven't done this, you don't really have backups.
The output file can be compressed, usually substantially, with gzip/pigz, bzip2/pbzip2, xz/pixz, or zpaq. These are listed in approximate order by amount of space saved (gzip saves the least, zpaq saves the most) and speed (gzip is the fastest, zpaq is the slowest). pigz, pbzip2, pixz, and zpaq will take advantage of multiple cores, if you have them. The others can only use a single core at a time.
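To tie this together, a hedged sketch of a compressed backup plus a restore test on a different server (database, host, and user names are placeholders; --single-transaction gives a consistent snapshot of InnoDB tables without locking them):
mysqldump --single-transaction --routines --events --triggers drupal_db | pigz > drupal_db.sql.gz
# restore test on another server -- if this completes without errors, you have a backup
gunzip -c drupal_db.sql.gz | mysql -h test-server -u backup_user -p drupal_restore_test
The target database (drupal_restore_test here) has to exist before the restore, since a single-database dump contains no CREATE DATABASE statement by default.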
Use mysqlhotcopy; it works well with large databases.
It works only with MyISAM and ARCHIVE tables.
It works only on the server where the database is stored.
This utility was deprecated in MySQL 5.6.20 and removed in MySQL 5.7.
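For reference, a minimal invocation on a server where the utility is still available (a sketch; db_name, the credentials, and the target directory are placeholders, and the tool needs filesystem access to the data directory):
mysqlhotcopy -u root -p root_password db_name /path/to/backup_dir
Given the MyISAM/ARCHIVE-only limitation and the removal in 5.7, mysqldump is usually the safer choice today.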

MySQL Database Dump Restores to Smaller Size

I just started using HeidiSQL to manage my databases at work. I was previously using NaviCat, and I was able to simply drag and drop a database from one server to a database on another server and copy all the tables and data--piece of cake, backed up. Similar functionality isn't obvious in Heidi, but I decided using mysqldump is a standard way of backing up a database and I should get to know it.
I used Heidi to perform the dump -- creating databases, creating tables, inserting data, and producing one single SQL file. This was run against a single database that is 801.7 MB (checksum: 1755734665). When I executed the dump file against my local doppelganger database it appeared to work flawlessly; however, the resulting database size is 794.0 MB (checksum: 2937674450).
So, by creating a new database from a mysqldump of a database, I lost 7.7 MB of something (and the two databases have different checksums -- not sure what that means, though). I searched and found a post saying that performing a mysqldump can optimize a database, but could not find any conclusive information.
Am I being a helicopter parent? Is there any easy way to make sure I didn't lose data/structure somehow?
Thank you,
Kai
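Size differences like this are expected: rebuilding tables from a dump removes fragmentation and unused space, so the restored copy is often a bit smaller even when the data is identical. To check the data itself, one option (a hedged sketch; host, database, and table names are placeholders) is to compare per-table checksums on both servers:
mysql -h source-server -u user -p -e "CHECKSUM TABLE mydb.table_one, mydb.table_two"
mysql -h local-server -u user -p -e "CHECKSUM TABLE mydb.table_one, mydb.table_two"
Matching checksums for every table are a strong indication that no rows were lost or altered during the dump and restore.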

Remove `information_schema` from SQL dump

I am attempting to import a 40GB SQL backup on one of my servers and the software I used to automate the backups apparently included the "information_schema". Are there any tools/scripts/etc. that could be used to remove this data?
Due to the size of the SQL file, Notepad++ cannot open it (file too large), and other text editors make it very difficult to tell which statements belong to the information_schema.
I'm about at my wit's end and hoping there is something that could simplify removing this data from the SQL dump. I tried running the import with "-f" to force past it, but it made what appears to be a bit of a mess.
I have tried to figure this out, but the only idea I have is to use grep to remove the information_schema tables, like this (this example removes 3 tables from dump.sql):
egrep -v '(GLOBAL_VARIABLES|CHARACTER_SETS|COLLATIONS)' dump.sql > new_dump.sql
In practice, about 40 table names would have to be listed....
I hope you succeed.
FYI: one database is 40 GB? How many tables are in there? To restore a dump file quickly, consider separating big tables or databases from each other in future dumps and loading them concurrently.
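As an alternative to listing every table name, a hedged sketch that drops the whole information_schema block by keying on mysqldump's per-database banner comment (it assumes the dump actually contains those "-- Current Database:" comments, so verify on a small sample first):
awk '/^-- Current Database: /{skip = /`information_schema`/} !skip' dump.sql > new_dump.sql
Every line is copied except those between the information_schema banner and the banner of the next database.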

How to automate PostgreSQL import and export of functions and views

I inherited a legacy project that uses a PostgreSQL database with 19 stored procedures (functions) and some 70 views.
Now we did an update on the live database, and because the functions changed -- and because of the Postgres requirement to drop and recreate dependent functions and views -- we spent quite some time doing that.
Is there an automated way of changing functions and views in Postgres that takes care of the dependencies and applies the changes in the proper order?
We have basic views that other, higher-level views are built on top of ... it's a bit of a complex database, at least for me :)
Thanks
I think the easiest way to do this is to backup a database to a text file:
pg_dump database_name > database_name.pg_dump
They'll be in a proper dependency order, as otherwise restoring a database from the backup would be hard. You can edit the function and view definitions in the backup file and restore it into a new database.
If the database backup file is too big to be edited in your editor, from Postgres 9.2 onwards you can split it into 3 sections:
pg_dump --section=pre-data database_name > database_name.1.pg_dump
pg_dump --section=data database_name > database_name.2.pg_dump
pg_dump --section=post-data database_name > database_name.3.pg_dump
You'll edit only the first section, which will be small. In older versions you could use, for example, the split utility.
If you cannot afford the downtime required for a backup and restore, it gets trickier, but I'd still recommend working with the backup file. Remember that Postgres supports DDL in transactions: if you import functions and views in a transaction and there is an error, you can simply roll back all the changes, make corrections, and try again.
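A hedged sketch of that transactional import (the file and database names are placeholders):
psql --single-transaction -v ON_ERROR_STOP=1 -d database_name -f functions_and_views.sql
--single-transaction wraps the whole script in a single BEGIN/COMMIT, and ON_ERROR_STOP makes the first error abort the script, so a failed run rolls everything back and leaves the database unchanged.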
There is no "easy" way. The best approach IMO is to be prepared first and then set up a way to do this using SQL scripts and version control.
What we do in LedgerSMB is keep the function definitions in a series of .sql files, which are tracked in Subversion. We then have a script that reloads them. This will take some work to set up if you haven't done it before. The easiest way to start is:
pg_dump -s > ddl_statements_for_mydb.sql
Then you can copy/paste the function definitions (change CREATE to CREATE OR REPLACE, or add a DROP ... IF EXISTS where appropriate). Then you will want to modularize these into usable chunks and have a script that reloads all chunks in the right order into your db. The time and effort that goes into setting this up now will save many times that in the future, because you can apply changes in a predictable way to testing, staging, and production accounts with no appreciable downtime (perhaps even no downtime at all, depending on how you structure it).
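A minimal sketch of such a reload script (purely illustrative; the directory layout, numeric file-name prefixes, and database name are assumptions, not LedgerSMB's actual structure):
#!/bin/sh
# Reload all function and view definitions in one transaction,
# in the order given by the numeric file-name prefixes.
set -e
cat sql/modules/[0-9][0-9]_*.sql | psql --single-transaction -v ON_ERROR_STOP=1 -d mydb -f -
Running everything in a single transaction means a mistake in any one file rolls back the whole reload, which keeps the live database consistent while you fix it.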

Backing up an Oracle database with sqlplus: is it possible?

I need to make some structural changes to a database (alter tables, add new columns, change some rows, etc.), but I need to make sure that if something goes wrong I can roll back to the initial state:
All needed changes are inside a SQL script file.
I don't have administrative access to database.
I really need to ensure the backup is done on the server side, since the DB has more than 30 GB of data.
I need to use sqlplus (in a dedicated SSH session over a VPN).
It's not possible to use "flashback database"! It's off, and I can't stop the database.
Am i in really deep $#$%?
Any ideas on how to back up the database using sqlplus, leaving the backup on the DB server?
Better than exp/imp, you should use RMAN. It's built specifically for this purpose: it can do hot backup/restore, and if you completely screw up, you're still OK.
One 'gotcha' is that you have to back up the $ORACLE_HOME directory too (in my experience), because you need that locally stored information to recover the control files.
A search for rman on Google gives some VERY good information on the first page.
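For reference, a minimal RMAN session looks something like this (a sketch only; it assumes SYSDBA/SYSBACKUP access and archive log mode, which the asker may not have):
rman target /
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
RMAN> LIST BACKUP SUMMARY;
Without administrative rights, this has to be run by (or arranged with) the DBA.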
An alternate approach might be to create a new schema that contains your modified structures and data and actually test with that. That presumes you have enough space on your DB server to hold all the test data. You really should have a pretty good idea your changes are going to work before dumping them on a production environment.
I wouldn't use sqlplus to do this. Take a look at export/import. The export utility will grab the definitions and data for your database (can be done in read consistent mode). The import utility will read this file and create the database structures from it. However, access to these utilities does require permissions to be granted, particularly if you need to backup the whole database, not just a schema.
That said, it's somewhat troubling that you're expected to perform the tasks of a DBA (alter tables, backup database, etc) without administrative rights. I think I would be at least asking for the help of a DBA to oversee your approach before you start, if not insisting that the DBA (or someone with appropriate privileges and knowledge) actually perform the modifications to the database and help recover if necessary.
Trying to back up 30 GB of data through sqlplus is insane. It will take several hours, require 3x to 5x as much disk space, and may not be possible to restore without more testing.
You need to use exp and imp. These are command-line tools designed to back up and restore the database. If you have access to sqlplus via your SSH session, you also have access to exp/imp. You don't need administrator access to use them. They will dump the database (with all tables, triggers, views, and procedures) for the user(s) you have access to.
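A hedged sketch of how that typically looks from the same SSH session (user names, passwords, and file paths are placeholders; the dump file stays on the database server, as required):
exp userid=app_user/app_password owner=app_user file=/backup/app_user_before_change.dmp log=/backup/app_user_exp.log
# later, if the changes have to be rolled back:
imp userid=app_user/app_password fromuser=app_user touser=app_user file=/backup/app_user_before_change.dmp log=/backup/app_user_imp.log
Note that imp does not drop objects that already exist, so a real rollback usually means dropping or recreating the affected objects (or the whole schema) first and then importing into the clean schema.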
