How do I restore a single table's data from a huge dump file into the database?
If I understand your question correctly - you already have a dump file of many tables and you only need to restore one table (right?).
I think the only way to do that is to actually restore the whole file into a new DB, then copy the data from the new DB to the existing one, OR dump only the table you just restored from the new DB using:
mysqldump -u username -p db_name table_name > dump.sql
And restore it again wherever you need it.
To make things a little quicker and save some disk space, you can kill the first restore operation after the desired table has been completely restored, so I hope the table name begins with one of the first letters of the alphabet :)
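If the dump is too big to restore in full, you can also carve the one table's section straight out of the dump file with a one-liner. A minimal sketch, assuming the default mysqldump section markers; the table names and the miniature dump file here are stand-ins for your real data:

```shell
# Demo stand-in for the real dump file (table names are hypothetical).
cat > dump.sql <<'EOF'
-- Table structure for table `my_table`
CREATE TABLE `my_table` (id INT);
INSERT INTO `my_table` VALUES (1);
-- Table structure for table `other_table`
CREATE TABLE `other_table` (id INT);
INSERT INTO `other_table` VALUES (2);
EOF

# Print lines from my_table's section header until the next table's header:
# each "-- Table structure" line toggles printing on (our table) or off (any other).
awk '/^-- Table structure for table/ { p = /`my_table`/ } p' dump.sql > my_table.sql
```

The resulting my_table.sql can be restored on its own with the usual mysql client.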
There are some suggestions on how you might do this in the following articles:
http://blog.tsheets.com/2008/tips-tricks/mysql-restoring-a-single-table-from-a-huge-mysqldump-file.html
http://blog.tsheets.com/2008/tips-tricks/extract-a-single-table-from-a-mysqldump-file.html
I found these by searching for "load single table from mysql database dump" on Google: http://www.google.com/search?q=load+single+table+from+mysql=database+dump
Related
I'm looking for a way to automate a daily copy of a database on the same SQL Server.
For example, I have a database MyDB. Each day, I would like to copy MyDB into MyDB_TEST on the same server.
Is there any simple script to do this "simple" task?
I found this script:
backup database OriginalDB
to disk = 'D:\backup\OriginalDB_full.bak'
with init, stats =10;
restore database new_db_name
from disk = 'D:\backup\OriginalDB_full.bak'
with stats =10, recovery,
move 'logical_Data_file' to 'D:\data\new_db_name.mdf',
move 'logical_log_file' to 'L:\log\new_db_name_log.ldf'
But I don't understand what to replace 'logical_Data_file' and 'logical_log_file' with.
It's a move, and I want a copy of my database... Why are these last two lines "move"?
I think I misunderstand this script... could anyone help me, please?
EDIT :
I just edited my code like this:
backup database MY_DB
to disk = 'D:\BACKUP\MY_DB.bak'
with init, stats =10;
restore database MY_DB_NEW
from disk = 'D:\BACKUP\MY_DB.bak'
with stats =10, recovery,
move 'D:\Microsoft SQL Server\MSSQL12.SQLDIVALTO\MSSQL\DATA\MY_DB.mdf' to 'D:\Microsoft SQL Server\MSSQL12.SQLDIVALTO\MSSQL\DATA\MY_DB_new.mdf',
move 'D:\Microsoft SQL Server\MSSQL12.SQLDIVALTO\MSSQL\DATA\MY_DB_log.mdf' to 'D:\Microsoft SQL Server\MSSQL12.SQLDIVALTO\MSSQL\DATA\MY_DB_new_log.ldf'
And I sadly get an error telling me the logical file "MY_DB.mdf" is not part of the MY_DB_NEW database... use RESTORE FILELISTONLY to get the logical file names.
I don't understand where my mistake is in this script, any input?
When you RESTORE a database, unless you specify otherwise it will create the same files that it had previously: same name, same path. As you want a copy, that would mean overwriting the existing files, which would obviously fail, as those files are in use by your original database.
Therefore you need to tell the instance to put the database files ("move" them) in a different path, hence the MOVE clause. This way the two databases don't conflict by trying to use or write to each other's files.
Side note: this type of thing does, however, normally tend to suggest an XY problem, though that is a different question.
CORRECTION:
I wasn't using the logical file name but the physical file name instead.
Just put the logical file name without the path!
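For reference, the logical names can be read straight from the backup. A sketch using the paths from the question above; the logical names MY_DB and MY_DB_log are an assumption (SQL Server defaults them to the database name), so check the FILELISTONLY output first:

```sql
-- List the logical file names stored inside the backup:
restore filelistonly
from disk = 'D:\BACKUP\MY_DB.bak';

-- Use the LogicalName column values in the MOVE clauses (no path, no extension):
restore database MY_DB_NEW
from disk = 'D:\BACKUP\MY_DB.bak'
with stats = 10, recovery,
move 'MY_DB' to 'D:\Microsoft SQL Server\MSSQL12.SQLDIVALTO\MSSQL\DATA\MY_DB_new.mdf',
move 'MY_DB_log' to 'D:\Microsoft SQL Server\MSSQL12.SQLDIVALTO\MSSQL\DATA\MY_DB_new_log.ldf';
```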
I have a postgresql database for an app (flask, ember) that is being developed. I did a pg_dump to back up the existing data. Then I added a column in the code. I have to create the database again so the new column will be in the database. When I try to restore the data with psql -d dbname -f dumpfile I get many errors such as 'relation "xx" already exists', "violates foreign key constraint", etc.
I'm new to this. Is there a way to restore old data to a new empty database that has all the relationships set up already? Or do I have add a column "by hand" to the database when I add a column in the code, to keep the data?
The correct way to proceed is to use ALTER TABLE to add a column to the table.
When you upgrade code, you can simply replace the old code with new one. Not so with a database, because it holds state. You will have to provide SQL statements that modify the existing database so that it changes to the desired new state.
To keep this manageable, use specialized software like Flyway or Liquibase.
When you did the pg_dump, you only dumped the data and table structure, but did not drop any tables. Now you are trying to restore the dump, and that will attempt to re-create the tables.
You have a couple options (the first is what I'd recommend):
Add --clean to your pg_dump command -- this will DROP all the tables when you go to restore the dump file.
You can also add --data-only to your pg_dump command -- this will dump only the existing data and will not attempt to re-create the tables. However, you will have to find a way to truncate your tables (or delete the data out of them) so as not to encounter any FK errors or PK collisions.
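As a sketch, the two dump commands would look like this (dbname and the file name are placeholders for your own):

```shell
# Option 1: include DROP statements, so restoring replaces the existing tables
pg_dump --clean -d dbname -f dumpfile.sql

# Option 2: data only; empty the target tables before restoring to avoid conflicts
pg_dump --data-only -d dbname -f dumpfile.sql

# Restore either one the same way as before:
psql -d dbname -f dumpfile.sql
```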
I am using SSMS and need to create a .bak of a table in one of the databases containing the content of that table.
The reason behind this is that I might need to populate the database again with this data at a later time (it is test data) and generating it again using the script I wrote takes too much time.
How do I do this in SSMS?
Taking your response to WEI_DBA very literally and accepting that you prefer a bak file, I suggest you create a second database, then copy the content of this specific table (e.g. with 'select into') into the second database. You can then back up the second database in the regular way.
create database foo
select *
into foo.dbo.table1
from dbo.table1
backup database foo
to disk = 'c:\temp.foo.bak'
with format
, medianame = 'foobak'
, name = 'Full backup foo'
The thing is that your instinct tells you that you need a backup (.bak) file. This is not the best way to share the contents of a table: users would have to restore it to a dummy database and then copy the data out anyway.
Sharing the table content via csv or xml is in my opinion a better way (and for that you can indeed use the Import/Export wizard as mentioned earlier).
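For the csv route, the bcp command-line utility that ships with SQL Server can export and reload the table directly. A sketch; the server name is hypothetical and -T assumes Windows authentication, so adjust to your setup:

```shell
# Export dbo.table1 from MyDB as character data (-c), comma-separated (-t,)
bcp MyDB.dbo.table1 out table1.csv -c -t, -S MYSERVER\SQLEXPRESS -T

# Later, load it back into an (already created) table:
bcp MyDB.dbo.table1 in table1.csv -c -t, -S MYSERVER\SQLEXPRESS -T
```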
Wondering about a little convenience problem of mine at the moment.
Say I got a database with a variable amount of tables and I want a dump of that database BUT only the table structure and the data of one specific table.
It is, of course, basically possible to do that, but the command would be rather long and I'd have to know all the tables.
But I'd need a command without knowing the names or how many other tables there are, only the one table whose data I want should be relevant, the others are basically just cattle, and in the end I would like to have it all in one file.
Looking forward to reading some suggestions or maybe some pointers on how to solve my problem. Really curious :)
The default pg_dump output format is a psql script, so you can just concatenate them:
pg_dump -d my_db --schema-only > dump.sql
pg_dump -d my_db -t my_table --data-only >> dump.sql
How do I combine several SQLite databases (one table per file) into one big SQLite database containing all the tables? E.g. you have the database files db1.dat, db2.dat, db3.dat... and you want to create one file dbNew.dat which contains the tables from all of db1, db2, ...
Several similar questions have been asked on various forums. I posted this question (with answer) for a particular reason: when you are dealing with several tables and have indexed many fields, it causes unnecessary confusion to create the indexes properly in the destination database tables. You may miss one or two indexes, and that's just annoying. The given method can also deal with large amounts of data, i.e. when you really have GBs of tables. Following are the steps to do so:
Download sqlite expert: http://www.sqliteexpert.com/download.html
Create a new database dbNew: File-> New Database
Load the 1st sqlite database db1 (containing a single table): File-> Open Database
Click on the 'DDL' option. It gives you a list of commands which are needed to create the particular sqlite table CONTENT.
Copy these commands and select the 'SQL' option. Paste the commands there and change the name of the destination table (from the default name CONTENT) to whatever you want, e.g. DEST.
Click on 'Execute SQL'. This should give you a copy of the table CONTENT from db1 with the name DEST. The main benefit of doing it this way is that all the indexes are also created in the DEST table as they were in the CONTENT table.
Now just click and drag the DEST table from the database db1 to the database dbNew.
Now just delete the database db1.
Go back to step 3 and repeat with the next database db2, etc.
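If you'd rather script the steps above than click through a GUI, SQLite's own ATTACH DATABASE can do the same copy. A minimal sketch with throwaway files and hypothetical table names; note that CREATE TABLE ... AS SELECT does not copy index definitions, so they are re-created explicitly, which is exactly the pitfall described above:

```shell
# Build a small source database standing in for db1.dat.
sqlite3 db1.dat <<'EOF'
CREATE TABLE content (id INTEGER PRIMARY KEY, val TEXT);
CREATE INDEX idx_content_val ON content(val);
INSERT INTO content VALUES (1, 'a'), (2, 'b');
EOF

# Attach the source to the destination, then copy the table and its index.
sqlite3 dbNew.dat <<'EOF'
ATTACH DATABASE 'db1.dat' AS src;
CREATE TABLE dest AS SELECT * FROM src.content;
-- CREATE TABLE ... AS SELECT drops index definitions; list them with
--   SELECT sql FROM src.sqlite_master WHERE type = 'index';
-- and re-create them on the new table:
CREATE INDEX idx_dest_val ON dest(val);
DETACH DATABASE src;
EOF
```

Repeat the second block once per source file, then delete the source databases when done.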