I recently decided to switch hosting companies, so to move my old DB into my new DB I have been trying to run this:
mysqldump --host=ipaddress --user=username --password=password db_name table_name | mysql -u username -ppassword -h new_url new_db_name
And this seemed to be working fine, but because my database is so freaking massive, I would get timeout errors in the middle of my tables. So I was wondering if there is an easy way to do a mysqldump on just part of my table.
I assume the workflow will look something like this:
create temp_table
move rows from old_table where id>2,500,000 into temp_table
somehow dump the temp table into the new DB's table (which has the same name as old_table)
But I'm not exactly sure how to do those steps.
Add --where="id>2500000" at the end of the mysqldump command (see the MySQL 5.1 Reference Manual).
In your case the mysqldump command would look like
mysqldump --host=ipaddress \
--user=username \
--password=password \
db_name table_name \
--where="id>2500000
If you dump twice, the second dump will contain the table creation info again, but this time you only want to add the new rows. So for the second dump, add the --no-create-info option to the mysqldump command line.
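For example, the second run might look like the sketch below, reusing the placeholder names from the question; <last_copied_id> stands for the highest id the first dump already transferred:
mysqldump --host=ipaddress \
--user=username \
--password=password \
--no-create-info \
--where="id><last_copied_id>" \
db_name table_name | mysql -u username -ppassword -h new_url new_db_name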
I've developed a tool for this job. It's called mysqlsuperdump and can be found here:
https://github.com/hgfischer/mysqlsuperdump
With it you can specify the full WHERE clause for each table, so it's possible to apply different rules to each table.
You can also replace the values of individual columns, table by table, in the dump. This is useful, for example, when you want to export a database dump to use in a development environment.
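If you'd rather stick with stock mysqldump, a small shell loop can approximate the per-table WHERE behaviour. A minimal sketch (requires bash 4 for associative arrays; the table names and conditions are invented for illustration):
declare -A WHERE=(
  [users]="created_at > '2012-01-01'"
  [orders]="organization_id = 42"
)
for T in "${!WHERE[@]}"; do
  mysqldump -u username -ppassword db_name "$T" --where="${WHERE[$T]}" >> dump.sql
done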
I want to find all the columns with a name that includes a specific string using PSQL in a Pervasive database. How do I do that?
You can query the X$Field table for your string. Something like:
select file.xf$name, field.xe$name from x$field field
join x$file file on xe$file = xf$id
where xe$name like '%some string%'
This query should work for both original and v2 (long metadata) databases, but it will only work if you have the DDFs (FILE.DDF, FIELD.DDF, and INDEX.DDF at a minimum) and have a PSQL database set up pointing to the DDFs.
I want to make a dump file of a DB, but all I want from the DB is the rows that are associated with a specific value. For example, I want to create a dump file for all the tables with rows related to an organization_id of 23e4r. Is there a way to do that?
mysqldump has a --where option, which lets you specify a WHERE clause, exactly as if you were writing a query, eg:
mysqldump -u<user> -p<password> --where="organization_id=23e4r" <database> <table> > dumpfile.sql
If you want to dump the results from multiple tables that match that criterion, it's:
for T in table1 table2 table3; do mysqldump -u<user> -p<password> --where="organization_id=23e4r" <database> $T >> dumpfile.sql;done
This assumes you are using a bash shell or equivalent.
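If you don't know up front which tables contain organization_id, you could build the table list from information_schema first. A sketch using the same placeholder conventions:
TABLES=$(mysql -u<user> -p<password> -N -e "select table_name from information_schema.columns where table_schema='<database>' and column_name='organization_id'")
for T in $TABLES; do mysqldump -u<user> -p<password> --where="organization_id=23e4r" <database> $T >> dumpfile.sql; done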
I am exporting a simple Hive table to SQL Server. Both tables have exactly the same schema. There is an identity column in SQL Server, and I have run "SET IDENTITY_INSERT table_name ON" on it.
But when I export from Sqoop to SQL Server, Sqoop gives me an error saying that "IDENTITY_INSERT is set to OFF".
If I export to a SQL Server table that has no identity column, everything works fine.
Any idea about this? Has anyone faced this issue while exporting from Sqoop to SQL Server?
Thanks
In Short:
Append -- --identity-insert to your Sqoop export command.
Detailed:
Here is an example for anyone searching (and possibly for my own later reference).
SQLSERVER_JDBC_URI="jdbc:sqlserver://<address>:<port>;username=<username>;password=<password>"
HIVE_PATH="/user/hive/warehouse/"
TABLENAME=<tablename>
sqoop-export \
-D mapreduce.job.queuename=<queuename> \
--connect $SQLSERVER_JDBC_URI \
--export-dir "$HIVE_PATH""$TABLENAME" \
--input-fields-terminated-by , \
--table "$TABLENAME" \
-- --schema <schema> \
--identity-insert
Note the particular bits on the last two lines: -- --schema <schema> --identity-insert. You can omit the schema part, but keep the extra --.
That allows you to turn on identity insert for that table within your Sqoop session. (source)
Tell SQL Server to let you insert into the table with the IDENTITY column. That's an autoincrement column that you normally can't write to. But you can change that. See here or here. It'll still fail if one of your values conflicts with one that already exists in that column.
The SET IDENTITY_INSERT statement is session-specific. So if you set it by opening a query window and executing the statement, and then ran the export anywhere else, IDENTITY_INSERT was only set in that session, not in the export session. You need to modify the export itself if possible. If not, a direct export from Sqoop to MSSQL will not be possible; instead you will need to dump the data from Sqoop to a file that MSSQL can read (such as tab-delimited) and then write a statement that first does SET IDENTITY_INSERT ON, then BULK INSERTs the file, then does SET IDENTITY_INSERT OFF.
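For the fallback route, the MSSQL side might look something like the sketch below. The table name, file path, and delimiters are placeholders, and the file must be readable from the SQL Server machine; note that for bulk loads it is the KEEPIDENTITY option that actually preserves the incoming identity values:
SET IDENTITY_INSERT dbo.my_table ON;
BULK INSERT dbo.my_table
FROM 'C:\loads\my_table.tsv'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', KEEPIDENTITY);
SET IDENTITY_INSERT dbo.my_table OFF;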
I am trying to make a normal (that is, restorable) backup of a MySQL database. My problem is that I only need to back up a single table, the one that was last created or edited. Is it possible to set up mysqldump to do that? MySQL can find the last inserted table, but how can I include that in the mysqldump command? I need to do this without locking the table, and the DB has partitioning enabled... Thanks for the help.
You can use this SQL to get the last created or updated table:
select table_schema, table_name
from information_schema.tables
where table_schema not in ('mysql', 'information_schema', 'performance_schema')
order by greatest(create_time, coalesce(update_time, create_time)) desc
limit 1;
Once you have the results from this query, you can incorporate it into any other language (for example bash) to produce the exact table dump. (The coalesce is there because update_time is NULL for tables that have never been updated, and greatest() would otherwise return NULL.)
./mysqldump -uroot -proot mysql user > mysql_user.sql
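Putting the two pieces together in bash might look like the sketch below. The --single-transaction flag lets mysqldump dump InnoDB tables without locking them, which matches the no-locking requirement (MyISAM tables would still be locked):
read -r SCHEMA TABLE <<< "$(mysql -uroot -proot -N -e "
  select table_schema, table_name
  from information_schema.tables
  where table_schema not in ('mysql','information_schema','performance_schema')
  order by greatest(create_time, coalesce(update_time, create_time)) desc
  limit 1;")"
mysqldump -uroot -proot --single-transaction "$SCHEMA" "$TABLE" > "${SCHEMA}_${TABLE}.sql"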
To dump a single table, use the command below.
Open a command prompt and change to the MySQL bin directory, e.g. C:\Program Files\MySQL\bin.
Now type the command:
mysqldump -u username -ppassword databasename tablename > C:\backup\filename.sql
Here:
username - your MySQL username
password - your MySQL password (written directly after -p, with no space)
databasename - your database name
tablename - your table name
C:\backup\filename.sql - the path where the file should be saved, plus the filename
If you want to load the backed-up table into another database, you can do it with the following steps:
open a command prompt
type the command below:
mysql -u username -ppassword databasename < C:\backup\filename.sql
Well... the question is descriptive enough, I guess. What I am looking for is an exact Oracle equivalent of the MySQL command below:
mysqldump --xml --no-data -u[username] -p[pass] [db_instance] > [someXMLfile]
Where on a Linux box would I have to run the Oracle command? Straight from the shell?
You can get an XML representation of any given table using the GET_XML function in the DBMS_METADATA package. The DBMS_METADATA documentation has an example of generating the DDL for all tables in a schema, which you can adapt by swapping GET_DDL for GET_XML (the example also excludes the storage clauses, though you can obviously eliminate that call):
set pagesize 0
set long 90000
execute DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'STORAGE', false);
SELECT DBMS_METADATA.GET_DDL('TABLE', u.table_name)
FROM USER_ALL_TABLES u
WHERE u.nested = 'NO'
AND (u.iot_type IS NULL OR u.iot_type = 'IOT');
execute DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'DEFAULT');
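To run it from a Linux box, as asked: save the statements as a script (for example dump_tables.sql), append a final EXIT so SQL*Plus returns to the shell, and invoke it straight from the shell. A minimal sketch; the connect string and file names are placeholders:
sqlplus -S username/password@//dbhost:1521/orcl @dump_tables.sql > table_defs.out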