I have 10+ tables and I want to export them to another database. How could I do that?
I tried SELECT * FROM table_a, table_b INTO OUTFILE "/tmp/tmp.data", but it joined the two tables.
It's probably too late, but for the record:
Export an entire database:
mysqldump -u user -p database_name > filename.sql
Export only one table of the database:
mysqldump -u user -p database_name table_name > filename.sql
Export multiple tables of the database:
Just like exporting one table, but keep writing table names after the first one, separated by spaces. Example exporting 3 tables:
mysqldump -u user -p database_name table_1 table_2 table_3 > filename.sql
Notes:
The tables are exported (i.e. written in the file) in the order in which they are written down in the command.
All of the examples above export both the structure and the data of the database or table. To export only the structure, use the --no-data option. Example exporting only one table of the database, but with --no-data:
mysqldump -u user -p --no-data database_name table_name > filename.sql
Export: mysqldump -u user -p mydatabasename > filename.sql
Import: mysql -u user -p anotherdatabase < filename.sql
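If both servers are reachable from one machine, the two steps can also be combined into a single pipeline with no intermediate file. A sketch (user and database names are placeholders; the command is built into a variable and echoed for review, since in a pipeline both ends would try to prompt for a password at once — in practice you would supply credentials via a ~/.my.cnf option file):

```shell
# Build the dump-and-restore pipeline; echo it for review instead of running it.
SRC_DB="mydatabasename"   # placeholder source database
DST_DB="anotherdatabase"  # placeholder target database
CMD="mysqldump -u user ${SRC_DB} | mysql -u user ${DST_DB}"
echo "$CMD"
```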
Related
I need a simple example of how to copy data from database DB1 table T1 to database DB2 table T2.
T2 has the same structure as T1 (same column names and properties; just different data).
DB2 runs on the same server as DB1, but on a different port.
In case the two databases are on two different server instances, you could export in CSV from db1 and then import the data into db2:
COPY (SELECT * FROM t1) TO '/home/export.csv' WITH CSV;
and then load it back into db2:
COPY t2 FROM '/home/export.csv' WITH CSV;
Note that COPY with a file path reads and writes files on the database server and requires superuser privileges; from a client, psql's \copy command is the client-side equivalent.
Again, the two tables on the two different database instances must have the same structure.
Using the command line tools pg_dump and psql, you could also do it this way:
pg_dump -U postgres -t t1 db1 | psql -U postgres -d db2
You can pass command line arguments to both pg_dump and psql to specify the address and/or port of each server.
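For example, with db1 on port 5432 and db2 on port 5433 (hosts and ports here are assumptions, per the "different port" setup in the question), the pipeline could look like this — built into a variable and echoed so it can be inspected before running:

```shell
# -h / -p select the server address and port for each end of the pipeline.
CMD="pg_dump -U postgres -h localhost -p 5432 -t t1 db1 | psql -U postgres -h localhost -p 5433 -d db2"
echo "$CMD"
```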
Another option would be to use an external tool like openDBcopy to perform the migration/copy of the table.
You can try this one -
pg_dump -t table_name_to_copy source_db | psql target_db
MySQL allows you to export a complete database at once, but I found it really very tough to import the complete database at once.
I used mysqldump -u root -p --all-databases > alldb.sql and when I try to import the complete database with mysql -u root -p < alldb.sql, it gives me a very weird error.
Error
SQL query:
--
-- Database: `fadudeal_blog`
--
-- --------------------------------------------------------
--
-- Table structure for table `wp_commentmeta`
--
CREATE TABLE IF NOT EXISTS `wp_commentmeta` (
`meta_id` BIGINT( 20 ) UNSIGNED NOT NULL AUTO_INCREMENT ,
`comment_id` BIGINT( 20 ) UNSIGNED NOT NULL DEFAULT '0',
`meta_key` VARCHAR( 255 ) COLLATE utf8mb4_unicode_ci DEFAULT NULL ,
`meta_value` LONGTEXT COLLATE utf8mb4_unicode_ci,
PRIMARY KEY ( `meta_id` ) ,
KEY `comment_id` ( `comment_id` ) ,
KEY `meta_key` ( `meta_key` ( 191 ) )
) ENGINE = INNODB DEFAULT CHARSET = utf8mb4 COLLATE = utf8mb4_unicode_ci AUTO_INCREMENT =1;
MySQL said:
#1046 - No database selected
It's saying #1046 - No database selected. My question is: if MySQL knows that I exported the complete database at once, then how can I specify just one database name? I don't know if I am right or wrong, but I tried it multiple times and got the same problem. Please let me know how we can upload or import a complete database at once.
Rohit, I think you will have to create the database and then issue a USE and then run the SQL. I looked at the manual here https://dev.mysql.com/doc/refman/5.7/en/mysql-batch-commands.html and it also provides an option for you to mention the db_name when you connect itself,
something like mysql -u root -p db_name < sql.sql (assuming the db is already created). It may be worth a try doing it that way.
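A sketch of both suggestions combined (the database name comes from the error message above; commands are built into variables and printed for review rather than executed):

```shell
# 1) Make sure the database exists, 2) import with it selected as the default,
# so statements in the dump that lack a USE run against the right schema.
DB="fadudeal_blog"
STEP1="mysql -u root -p -e \"CREATE DATABASE IF NOT EXISTS ${DB}\""
STEP2="mysql -u root -p ${DB} < alldb.sql"
printf '%s\n%s\n' "$STEP1" "$STEP2"
```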
shell> mysqldump --databases db1 db2 db3 > dump.sql
The --databases option causes all names on the command line to be treated as database names. Without this option, mysqldump treats the first name as a database name and those following as table names.
With --all-databases or --databases, mysqldump writes CREATE DATABASE and USE statements prior to the dump output for each database. This ensures that when the dump file is reloaded, it creates each database if it does not exist and makes it the default database so database contents are loaded into the same database from which they came. If you want to cause the dump file to force a drop of each database before recreating it, use the --add-drop-database option as well. In this case, mysqldump writes a DROP DATABASE statement preceding each CREATE DATABASE statement.
From: https://dev.mysql.com/doc/refman/5.7/en/mysqldump-sql-format.html
I want to make a dump file of a DB, but all I want from the DB is the rows that are associated with a specific value. For example, I want to create a dump file for all the tables, with only the rows related to an organization_id of 23e4r. Is there a way to do that?
mysqldump has a --where option, which lets you specify a WHERE clause exactly as if you were writing a query, e.g.:
mysqldump -u<user> -p<password> --where="organization_id=23e4r" <database> <table> > dumpfile.sql
If you want to dump the results from multiple tables that match that criterion, it's:
for T in table1 table2 table3; do mysqldump -u<user> -p<password> --where="organization_id=23e4r" <database> $T >> dumpfile.sql;done
Assuming you are using a bash shell, or equivalent.
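Since --where applies to every table named after the database, the loop can also be collapsed into a single invocation, assuming all three tables have an organization_id column. A sketch that assembles the command into a variable and echoes it for review:

```shell
TABLES="table1 table2 table3"
# One mysqldump call; the same WHERE filter is applied to each listed table.
CMD="mysqldump -u user -p --where=\"organization_id='23e4r'\" database ${TABLES} > dumpfile.sql"
echo "$CMD"
```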
I recently decided to switch the company through which I get my hosting, so to move my old db into my new db, I have been trying to run this:
mysqldump --host=ipaddress --user=username --password=password db_name table_name | mysql -u username -ppassword -h new_url new_db_name
and this seemed to be working fine, but because my database is so freaking massive, I would get timeout errors in the middle of my tables. So I was wondering if there was an easy way to do a mysqldump on just part of my table.
I would assume the workflow would look something like this:
create temp_table
move rows from old_table where id > 2,500,000 into temp_table
somehow dump the temp table into the new db's table (which has the same name as old_table)
but I'm not exactly sure how to do those steps.
Add --where="id>2500000" to the mysqldump command (see the MySQL 5.1 Reference Manual).
In your case the mysqldump command would look like
mysqldump --host=ipaddress \
--user=username \
--password=password \
--where="id>2500000" \
db_name table_name
If you dump twice, the second dump will also contain the table creation info, but the second time you only want to add the new rows. So for the second dump, add the --no-create-info option to the mysqldump command line.
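A sketch of that two-pass approach (credentials and the cut-off id are placeholders from the question; the commands are built into variables and printed for review rather than executed):

```shell
# Pass 1: table structure plus the first batch of rows.
PASS1='mysqldump --user=username --password=password --where="id<=2500000" db_name table_name'
# Pass 2: rows only; --no-create-info omits CREATE TABLE, so restoring it appends.
PASS2='mysqldump --user=username --password=password --no-create-info --where="id>2500000" db_name table_name'
printf '%s\n%s\n' "$PASS1" "$PASS2"
```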
I've developed a tool for this job. It's called mysqlsuperdump and can be found here:
https://github.com/hgfischer/mysqlsuperdump
With it you can specify the full WHERE clause for each table, so it's possible to apply different rules to each table.
You can also replace the values of any column, per table, in the dump. This is useful, for example, when you want to export a database dump for use in a development environment.
I am trying to make a normal (i.e. restorable) backup of a MySQL database. My problem is that I only need to back up a single table, the one that was last created or edited. Is it possible to get mysqldump to do that? MySQL can find the last inserted table, but how can I include that in the mysqldump command? I need to do this without locking the table, and the DB has partitioning enabled. Thanks for the help.
You can use this SQL to get the last inserted/updated table:
select table_schema, table_name
from information_schema.tables
where table_schema not in ("mysql", "information_schema", "performance_schema")
order by greatest(create_time, update_time) desc limit 1;
Once you have the result of this query, you can incorporate it into any other language (for example bash) to produce the exact table dump.
./mysqldump -uroot -proot mysql user > mysql_user.sql
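A minimal bash sketch of that glue, using the same -uroot -proot placeholder credentials as the example above. --single-transaction gives a consistent dump without locking the table (InnoDB only); the query is the one shown above, and a sample fallback is included so the sketch runs even without a live server:

```shell
# The inner mysql call returns "schema<TAB>table"; read -r splits it in two.
QUERY='select table_schema, table_name
  from information_schema.tables
  where table_schema not in ("mysql","information_schema","performance_schema")
  order by greatest(create_time, update_time) desc limit 1'
# Fall back to sample names so the sketch is runnable without a live server.
LAST=$(mysql -N -uroot -proot -e "$QUERY" 2>/dev/null || echo "mydb mytable")
read -r SCHEMA TABLE <<EOF
$LAST
EOF
# --single-transaction: consistent InnoDB dump without locking the table.
echo "mysqldump --single-transaction -uroot -proot $SCHEMA $TABLE > $TABLE.sql"
```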
For dumping a single table use the below command.
Open a cmd prompt and change to the MySQL bin directory, e.g. C:\Program Files\MySQL\bin.
Now type the command:
mysqldump -u username -ppassword databasename tablename > C:\backup\filename.sql
Here username - your MySQL username
password - your MySQL password (note: no space between -p and the password)
databasename - your database name
tablename - your table name
C:\backup\filename.sql - the path where the file should be saved, plus the filename.
If you want to load the backed-up table into another database, you can do it by following these steps:
open a cmd prompt
type the below command
mysql -u username -ppassword databasename < C:\backup\filename.sql