I need to import a very large backup of my database.
I'm using this command for importing all databases:
mysqldump -u root -p --all-databases < localhost.sql
It works, but only 5 of the 6 databases were imported.
The file has 700,000 lines, so it's very difficult to select only the last database I care about.
Any advice? Thank you!
EDIT:
Using
mysqldump -u root -p joomla < localhost.sql
got an error
'[root@tp lota]# mysqldump -u root -p joomla < localhost.sql
Enter password:
-- MySQL dump 10.13  Distrib 5.1.69, for redhat-linux-gnu (x86_64)
--
-- Host: localhost    Database: joomla
-- ------------------------------------------------------
-- Server version       5.1.69
/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;
mysqldump: Got error: 1049: Unknown database 'joomla' when selecting the database'
EDIT #2: the problem was the information_schema database inside the dump. After deleting it, everything went OK. Thank you for your answers.
Use mysql (not mysqldump) to import the data:
mysql -u root -p < localhost.sql
mysqldump is for exporting data. Also, you may need to create the (empty) database before importing.
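For example, a minimal sketch of that workflow using the database name from the question (create the empty database, then feed the dump to mysql):

# create the (empty) target database first, then import with mysql, not mysqldump
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS joomla"
mysql -u root -p < localhost.sql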
Open a terminal and enter the commands below.
mysql -u root -p
e.g. mysql -u abcd -p
Check which databases are present:
mysql> show databases;
Create the database if it does not already exist:
mysql> create database <database_name>;
e.g. create database ABCD;
Then select that new database, ABCD:
mysql> USE ABCD;
Source the SQL file using its path on the machine:
mysql> source /home/Desktop/new_file.sql;
Press Enter and wait a while; once everything has executed:
mysql> exit
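If you prefer to run the import in one shot from the shell rather than the mysql prompt, a sketch using the same example names as above:

mysql -u abcd -p ABCD < /home/Desktop/new_file.sql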
I am trying to dump only the data from a PostgreSQL database using pg_dump and then restore that data into another one. But the SQL script this tool generates also adds some comments and settings to the output file.
Running this command:
pg_dump --column-inserts --data-only my_db > my_dump.sql
I get something like :
--
-- PostgreSQL database dump
--
-- Dumped from database version 8.4.22
-- Dumped by pg_dump version 10.8 (Ubuntu 10.8-0ubuntu0.18.04.1)
SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = off;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET escape_string_warning = off;
SET row_security = off;
--
-- Data for Name: sf_guard_user; Type: TABLE DATA; Schema: public; Owner: admin
--
INSERT INTO public.....
Is there any way to stop pg_dump from generating those comments and settings?
I could write a small script to remove every line before the first INSERT, but comments are also generated throughout the file, and I'm sure there is a cleaner way to proceed; I just haven't found one.
I don't think there is. I'd simply pipe through grep to filter out lines that start with the comment delimiter:
pg_dump --column-inserts --data-only my_db | grep -v "^--" > my_dump.sql
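If you also want to drop the session settings and blank lines (the question mentions the settings too), one rough sketch is to extend the filter; the patterns below are assumptions about the dump's layout, so adjust as needed:

pg_dump --column-inserts --data-only my_db | grep -v -E '^(--|SET |SELECT pg_catalog\.set_config|$)' > my_dump.sql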
I ran the following script to try to export all tables in my DB (trying to back up the data as CSVs).
SELECT 'sqlcmd -S . -d '+DB_NAME()+' -E -s, -W -Q "SET NOCOUNT ON; SELECT * FROM '+table_schema+'.'+TABLE_name+'" > "C:\Temp\'+Table_Name+'.csv"'
FROM [INFORMATION_SCHEMA].[TABLES]
I saved the results as a batch file and ran the batch file as Administrator.
That runs without an error, but I get no data exported. All it does is create blank CSV files.
I ran this as well: 'EXEC sp_configure 'remote access',1 reconfigure'.
Still, nothing is exported. CSVs are created, but no data is exported...
Any thoughts?
I ended up using R to do the task...
library("RODBC")
conn <- odbcDriverConnect('driver={SQL Server};server=Server_Name;database=DB_Name;trusted_connection=true')
data <- sqlQuery(conn, "SELECT * FROM DB.dbo.TBL#1")
write.csv(data,file=paste("C:/Users/TBL#1.csv",sep=""),row.names=FALSE)
data <- sqlQuery(conn, "SELECT * FROM DB.dbo.TBL#2")
write.csv(data,file=paste("C:/Users/TBL#2.csv",sep=""),row.names=FALSE)
Gotta love the IT teams in corporate America...especially when they lock down your system so tight, you need to come up with all kinds of weird hacks just so you can do the job that you were hired to do...
Is there a word for negative synergy?
I wrote a simple script to create a user (TestV100), create a table (Xy100) in that schema, and export a tab-delimited flat file from Hadoop to this Oracle table.
This is the shell script (ExportOracleTestV100.sh):
#!/bin/bash
# Testing connectivity to Oracle DB
#sqoop eval --connect jdbc:oracle:thin:@hostname:1521:orcl --username test --password password --query "SELECT count(*) as bob FROM \"TestV1\".\"Test\"" --verbose
HOST=$1
USER=$2
PASS=$3
SCHEMA=$4
PORT=$5
SID=$6
SQOOP=/usr/bin/sqoop
JDBC="jdbc:oracle:thin:@$1:$5:$6"
SQOOP_EVAL="$SQOOP eval --connect $JDBC --username $USER --password $PASS --query"
#Create Schema and Tables;
${SQOOP_EVAL} "CREATE USER \"TestV100\" identified by \"password\""
${SQOOP_EVAL} "GRANT CONNECT TO \"TestV100\""
${SQOOP_EVAL} "ALTER USER \"TestV100\" QUOTA UNLIMITED ON USERS"
${SQOOP_EVAL} "DROP TABLE \"TestV100\".\"Xy100\""
${SQOOP_EVAL} "CREATE TABLE \"TestV100\".\"Xy100\"( \"a\" NVARCHAR2(255) DEFAULT NULL, \"x\" NUMBER(10,0) DEFAULT NULL, \"y\" NUMBER(10,0) DEFAULT NULL )"
############################
## Load Data into tables; ##
############################
SQOOP_EXPORT="/usr/bin/sudo -u hdfs $SQOOP export --connect ${JDBC} --username $USER --password $PASS --export-dir"
${SQOOP_EXPORT} "/tmp/rv/TestV100/xy100.txt" --table "\"\"$SCHEMA\".\"Xy100\"\"" --fields-terminated-by "\t" --input-null-string null -m 1
And this is the input file (cat /tmp/rv/TestV100/Xy100.txt):
c 8 3
a 1 4
c 6 1
c 2 0
a 7 7
c 4 2
c 7 5
a 0 0
c 5 6
a 2 2
a 5 5
a 3 6
c 9 7
a 4 1
c 3 4
a 6 3
b 6 5
b 8 7
b 5 1
b 7 3
b 2 4
b 1 0
b 4 6
b 3 2
This is how the shell script is called:
sh ./ExportOracleTestV100.sh oracle11 test password TestV100 1521 orcl --verbose
Note: 'test' user has full access on TestV100 schema.
Output:
[root@abc-repo-app1 rv]# sh ./ExportOracleTestV100.sh oracle11 test password TestV100 1521 orcl --verbose
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
15/11/02 12:40:07 INFO sqoop.Sqoop: Running Sqoop version: 1.4.5-cdh5.4.1
15/11/02 12:40:07 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
15/11/02 12:40:07 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.
15/11/02 12:40:07 INFO manager.SqlManager: Using default fetchSize of 1000
15/11/02 12:40:07 INFO tool.CodeGenTool: Beginning code generation
15/11/02 12:40:08 INFO manager.OracleManager: Time zone has been set to GMT
15/11/02 12:40:08 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM "TestV100"."xy100" t WHERE 1=0
15/11/02 12:40:08 ERROR sqoop.Sqoop: Got exception running Sqoop: java.lang.IllegalArgumentException: There is no column found in the target table "TestV100"."xy100". Please ensure that your table name is correct.
java.lang.IllegalArgumentException: There is no column found in the target table "TestV100"."xy100". Please ensure that your table name is correct.
at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1658)
at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:96)
at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:64)
at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:100)
at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
As you can see above, the Sqoop version is 1.4.5-cdh5.4.1.
I somehow managed to get it this far. I see a lot of posts online about sqoop import and this error, and the solution is to change the table name to UPPERCASE in the command. But I am running an export. Also, the Oracle table HAS to be created with mixed case.
I hope I gave all the required information here. Can someone please help, or point me to something that can help me get past this error?
The table name in uppercase worked.
sqoop export --connect jdbc:oracle:thin:@xyzx:1569:xyz --username xyz --password xyz --table CIPHADOOPCUSTOMERREPORT --export-dir /apps/hive/warehouse/ciprpt.db/dtd_customer_report --input-fields-terminated-by "\t" --input-lines-terminated-by "\n" --verbose -m 8 --input-null-string '\N' --input-null-non-string '\N'
Supply the table name in upper case in the --table argument.
Sounds silly, but yes - it works with the table name in upper case.
I had this problem because the target table for sqoop export was one column short.
Solution:
Specify the list of columns with the --columns parameter, or regenerate the target table to match the schema of the input table.
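Combining the two answers above with the script from the question, a hedged sketch (not verified against that exact Oracle setup) would be to upper-case the table name and list the columns explicitly:

sqoop export --connect jdbc:oracle:thin:@oracle11:1521:orcl \
    --username test --password password \
    --table TESTV100.XY100 --columns "a,x,y" \
    --export-dir /tmp/rv/TestV100/xy100.txt \
    --fields-terminated-by "\t" --input-null-string null -m 1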
In the command line, this will successfully update table1:
pt-table-sync --execute h=host1,D=db1,t=table1 h=host2,D=db2
However if I want to update more than one table, I'm not sure how to write it. This only updates table1 as well and ignores the other tables:
pt-table-sync --execute h=host1,D=db1,t=table1,table2,table3 h=host2,D=db2
And this gives me an error:
pt-table-sync --execute h=host1,D=db1 --tables table1,table2,table3 h=host2,D=db2
Does anyone have an example of how to list the '--tables'... so that it successfully updates all the tables in the list?
The --tables option seems to be incompatible with the DSN notation; you get this error:
You specified a database but not a table in h=localhost,D=test.
Are you trying to sync only tables in the 'test' database?
If so, use '--databases test' instead.
As suggested in that error message, you can use --databases and then you can use --tables successfully.
For example, I created tables test.foo and test.bar, filled each with three rows, then deleted the rows from test.bar on the second server dewey.
I ran this:
$ pt-table-sync h=huey h=dewey --databases test --tables foo,bar --execute --verbose
# Syncing h=dewey
# DELETE REPLACE INSERT UPDATE ALGORITHM START END EXIT DATABASE.TABLE
# 0 0 3 0 Chunk 15:26:15 15:26:15 2 test.bar
# 0 0 0 0 Chunk 15:26:15 15:26:15 0 test.foo
It successfully re-inserted the 3 missing rows in test.bar.
Other tables in my test database were ignored.
This is an old question, but I searched everywhere for an answer. pt-table-sync only does one table. There is no tool that does the same thing for a list of tables or a full database schema. Specifically, I want to run a Live server and be able to sync back to a Staging server, then edit code and files on the Staging server without fear of messing up Live or being overwritten by Live... and I want it to be free :)
I ended up writing a shell script called mysql_sync_live_to_stage.sh as follows:
#!/bin/bash
# sync db live to staging
error_log_file='./mysql_sync_errors.log'
echo $(date +"%Y %m %d %H:%M") > $error_log_file
function sync_table()
{
  # $1 = live database name, $2 = staging database name, $3 = table name
  pt-table-sync --no-foreign-key-checks --execute \
    h=DB_1_HOST,u=DB_1_USER,p=DB_1_PASSWORD,D=$1,t=$3 \
    h=DB_2_HOST,u=DB_2_USER,p=DB_2_PASSWORD,D=$2,t=$3 >> $error_log_file
}
# SYNC ALL TABLES IN name_of_live_database
mysql -h "DB_1_HOST" -u "DB_1_USER" -pDB_1_PASSWORD -D "DB_1_DBNAME" -e "SHOW TABLES" |
egrep -i '[0-9a-z\-\_]+' | egrep -i -v 'Tables_in' | while read -r table ; do
echo "Processing $table"
sync_table "name_of_live_database" "name_of_staging_database" $table
done
# FIX Config Settings For Staging
echo "Cleanup Queries..."
mysql -h "DB_2_HOST" -u "DB_2_USER" -pDB_2_PASSWORD -D "DB_2_DBNAME"
-e "UPDATE name_of_staging_database.nameofmyconfigtable SET value='bar'
WHERE config_id='foo'"
mysql -h "DB_2_HOST" -u "DB_2_USER" -pDB_2_PASSWORD -D "DB_2_DBNAME"
-e "UPDATE name_of_staging_database.nameofmyconfigtable SET value='bar2'
WHERE config_id='foo2'"
echo "Done"
This reads a list of table names from the live site, then executes a sync on each one via the do loop. It goes through the list alphabetically, so I recommend keeping the --no-foreign-key-checks flag.
It's not perfect... It won't sync tables that don't exist in both databases, but when combined with a "git pull -f origin master" I get a complete sync in a couple of minutes.
I have a database on a server with 120 tables.
I want to clone the whole database with a new db name and the copied data.
Is there an efficient way to do this?
$ mysqldump yourFirstDatabase -u user -ppassword > yourDatabase.sql
$ mysql yourSecondDatabase -u user -ppassword < yourDatabase.sql
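If yourSecondDatabase does not exist yet, create it first; a minimal sketch with the same placeholder names:

$ mysql -u user -ppassword -e "CREATE DATABASE yourSecondDatabase"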
mysqldump -u <user> --password=<password> <DATABASE_NAME> | mysql -u <user> --password=<password> -h <hostname> <DATABASE_NAME_NEW>
Like the accepted answer, but without an intermediate .sql file:
mysqldump sourcedb -u <USERNAME> -p<PASS> | mysql destdb -u <USERNAME> -p<PASS>
If you use phpMyAdmin:
Select the database you wish to copy (by clicking on the database from the phpMyAdmin home screen).
Once inside the database, select the Operations tab.
Scroll down to the section where it says "Copy database to:"
Type in the name of the new database.
Select "structure and data" to copy everything. Alternately, you can select "Structure only" if you want the columns but not the data.
Check the box "CREATE DATABASE before copying" to create a new database.
Check the box "Add AUTO_INCREMENT value."
Click on the Go button to proceed.
There is the mysqldbcopy tool from the MySQL Utilities package.
http://dev.mysql.com/doc/mysql-utilities/1.3/en/mysqldbcopy.html
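A minimal sketch of its usage (the connection strings are placeholders; check the linked manual for the exact options in your version):

mysqldbcopy --source=root:password@localhost --destination=root:password@localhost original_db:new_db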
If you want to make sure it is an exact clone, the receiving database needs to be entirely cleared / dropped. This way, the new db only has the tables in your import file and nothing else. Otherwise, your receiving database could retain tables that weren't specified in your import file.
Example, building on the prior answers:
DB1 == tableA, tableB
DB2 == tableB, tableC
DB1 imported to -> DB2
DB2 == tableA, tableB, tableC //true clone should not contain tableC
The change is easy with --databases and --add-drop-database (see the MySQL docs). This adds a DROP DATABASE statement to the dump, so your new database will be an exact replica:
$ mysqldump -h $ip -u $user -p$pass --databases $dbname --add-drop-database > $file.sql
$ mysql -h $ip $dbname -u $user -p$pass < $file.sql
Of course, replace the $ variables, and as always there is no space between -p and the password. For extra security, strip -p$pass from your command.
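For example, you can pass -p with no value so mysqldump prompts for the password instead of reading it from the command line:

$ mysqldump -h $ip -u $user -p --databases $dbname --add-drop-database > $file.sql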
// Clone every table from the source schema 'rds' into a new database
// whose name is the previous year
$newdb = (date('Y')-1);

// Recreate the target database from scratch
$mysqli->query("DROP DATABASE `".$newdb."`;");
$mysqli->query("CREATE DATABASE `".$newdb."`;");

// List all tables in the source schema
$query = "
    SELECT TABLE_NAME
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_SCHEMA LIKE 'rds'
";
$result = $mysqli->query($query)->fetch_all(MYSQLI_ASSOC);

// Copy each table's structure, then its data
foreach ($result as $val) {
    echo $val['TABLE_NAME'].PHP_EOL;
    $mysqli->query("CREATE TABLE `".$newdb."`.`".$val['TABLE_NAME']."` LIKE rds.`".$val['TABLE_NAME']."`");
    $mysqli->query("INSERT `".$newdb."`.`".$val['TABLE_NAME']."` SELECT * FROM rds.`".$val['TABLE_NAME']."`");
}