Importing all MySQL databases at once

MySQL allows you to export every database at once, but I found it really tough to import the complete dump at once.
I used mysqldump -u root -p --all-databases > alldb.sql, and when I try to import the complete dump with the command mysql -u root -p < alldb.sql, it gives me a very weird error.
Error
SQL query:
--
-- Database: `fadudeal_blog`
--
-- --------------------------------------------------------
--
-- Table structure for table `wp_commentmeta`
--
CREATE TABLE IF NOT EXISTS `wp_commentmeta` (
`meta_id` BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT,
`comment_id` BIGINT(20) UNSIGNED NOT NULL DEFAULT '0',
`meta_key` VARCHAR(255) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`meta_value` LONGTEXT COLLATE utf8mb4_unicode_ci,
PRIMARY KEY (`meta_id`),
KEY `comment_id` (`comment_id`),
KEY `meta_key` (`meta_key`(191))
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci AUTO_INCREMENT=1;
MySQL said:
#1046 - No database selected
It says #1046 - No database selected. My question is: if MySQL knows I exported the complete set of databases at once, why should I have to specify just one database name?
I don't know if I am right or wrong, but I tried it multiple times and hit the same problem. Please let me know how we can upload or import the complete dump at once.

Rohit, I think you will have to create the database, issue a USE, and then run the SQL. I looked at the manual here https://dev.mysql.com/doc/refman/5.7/en/mysql-batch-commands.html and it also provides an option for you to mention the db_name when you connect itself,
something like mysql -u root -p db_name < sql.sql (assuming the db is already created). It may be worth a try doing it that way.
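A minimal sketch of that approach, borrowing the database name fadudeal_blog from the error above (adjust to your own):
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS fadudeal_blog"
mysql -u root -p fadudeal_blog < alldb.sql
Note this loads everything into that one default database, which only makes sense for a single-database dump; for a true multi-database restore, the mysqldump options quoted below are the cleaner route.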
shell> mysqldump --databases db1 db2 db3 > dump.sql
The --databases option causes all names on the command line to be treated as database names. Without this option, mysqldump treats the first name as a database name and those following as table names.
With --all-databases or --databases, mysqldump writes CREATE DATABASE and USE statements prior to the dump output for each database. This ensures that when the dump file is reloaded, it creates each database if it does not exist and makes it the default database so database contents are loaded into the same database from which they came. If you want to cause the dump file to force a drop of each database before recreating it, use the --add-drop-database option as well. In this case, mysqldump writes a DROP DATABASE statement preceding each CREATE DATABASE statement.
from : https://dev.mysql.com/doc/refman/5.7/en/mysqldump-sql-format.html
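A minimal sketch of the round trip the manual describes (the dump file name alldb.sql is carried over from the question):
mysqldump -u root -p --all-databases --add-drop-database > alldb.sql
mysql -u root -p < alldb.sql
Because the dump file now carries its own CREATE DATABASE and USE statements, the reload needs no default database, and the #1046 error above should not occur.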

Related

Create table using sqlcmd || Azure SQL Server

I have a CREATE TABLE statement in a file. Using the sqlcmd command, I want to create a table. Below is the table structure present in the file columns.sql:
CREATE TABLE [dbname].[accessforms].tblename1
(
pk_column int PRIMARY KEY,
column_1 int NOT NULL
);
GO
I run it like this:
sqlcmd -S server_name -U username -P password -i /home/usr/columns.sql -o /home/usr/columns.txt
And I am getting this error:
Reference to database and/or server name in 'dbname.accessforms.tblename1' is not supported in this version of SQL Server
Could you please help me? Why am I getting this error, and how can we solve it?
You're running that query in the cloud.
Azure SQL Database doesn't allow three-part naming, such as database_name.schema_name.object_name.
You'll have to drop the database name from your reference and use only schema.object.
Your script will have to become:
CREATE TABLE [accessforms].tblename1
(
pk_column int PRIMARY KEY,
column_1 int NOT NULL
);
GO
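As a side note, sqlcmd can also pick the target database for the whole session with its -d switch, so a hedged variant of the original command (assuming the database really is named dbname) would be:
sqlcmd -S server_name -d dbname -U username -P password -i /home/usr/columns.sql -o /home/usr/columns.txt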

Unknown database [...] when selecting the database (Ubuntu)

I'm trying to work with MySQL on my laptop (Ubuntu), and whenever I import a .sql file into the database, the console shows me the same message: "Unknown database Spotify (for example) when selecting the database".
The SQL script is correct and should work, but it always shows the same message; any solution?
CREATE DATABASE Spotify;
USE Spotify ;
DROP TABLE IF EXISTS Spotify.Usuarios ;
CREATE TABLE IF NOT EXISTS Spotify.Usuarios
(
iduser INT NULL ,
user VARCHAR(10) NULL ,
password VARCHAR(45) NULL ,
reg VARCHAR(45) NULL
) ENGINE = InnoDB;
Finally, I solved it: there was a problem with the Ubuntu packages, and the MySQL installation hadn't finished correctly.
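For anyone who hits the same thing, a minimal sketch of repairing a half-finished package installation on Ubuntu, assuming the stock mysql-server package:
sudo dpkg --configure -a
sudo apt-get install --reinstall mysql-server
dpkg --configure -a finishes any interrupted package configuration, and --reinstall redoes the installation in place.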

"IDENTITY_INSERT is set to off" sqoop error while exporting table to Sql Server

I am exporting a simple Hive table to SQL Server. Both tables have the exact same schema. There is an identity column in SQL Server, and I have run SET IDENTITY_INSERT table_name ON on it.
But when I export from Sqoop to SQL Server, Sqoop gives me an error saying that "IDENTITY_INSERT is set to OFF".
If I export to a SQL Server table that has no identity column, all works fine.
Any idea about this? Has anyone faced this issue while exporting from Sqoop to SQL Server?
Thanks
In Short:
Append -- --identity-insert to your Sqoop export command
Detailed:
Here is an example for anyone searching (and possibly for my own later reference).
SQLSERVER_JDBC_URI="jdbc:sqlserver://<address>:<port>;username=<username>;password=<password>"
HIVE_PATH="/user/hive/warehouse/"
TABLENAME=<tablename>
sqoop-export \
-D mapreduce.job.queuename=<queuename> \
--connect "$SQLSERVER_JDBC_URI" \
--export-dir "$HIVE_PATH""$TABLENAME" \
--input-fields-terminated-by , \
--table "$TABLENAME" \
-- --schema <schema> \
--identity-insert
Note the particular bits on the last two lines: -- --schema <schema> --identity-insert. You can omit the schema part, but leave in the extra --.
That allows you to set the identity insert ability for that table within your sqoop session. (source)
Tell SQL Server to let you insert into the table with the IDENTITY column. That's an auto-increment column that you normally can't write to, but you can change that. It'll still fail if one of your values conflicts with one that already exists in that column.
The SET IDENTITY_INSERT statement is session-specific. So if you set it by opening a query window and executing the statement, and then ran the export anywhere else, IDENTITY_INSERT was only set in that session, not in the export session. You need to modify the export itself if possible. If not, a direct export from Sqoop to MSSQL will not be possible; instead you will need to dump the data from Sqoop to a file that MSSQL can read (such as tab-delimited) and then write a statement that first does SET IDENTITY_INSERT ON, then BULK INSERTs the file, then does SET IDENTITY_INSERT OFF.
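A hedged sketch of that fallback, using a hypothetical table dbo.mytable and file path; note that BULK INSERT also has its own KEEPIDENTITY option, which keeps the identity values already present in the file:
sqlcmd -S server_name -U username -P password -Q "BULK INSERT dbo.mytable FROM 'C:\dumps\mytable.tsv' WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', KEEPIDENTITY)"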

Using BCP to import data to SQL Server while preserving accents, Asian characters, etc

I'm trying to import a PostgreSQL dump of data into SQL Server using bcp. I've written a Python script that switches delimiters to '^' and eliminates other bad formatting, but I cannot find the correct switches to preserve Unicode formatting for the strings when importing into SQL Server.
In Python, if I print out the lines that are causing me trouble, the row looks like this with the csv module:
['12', '\xe4\xb8\x89\xe5\x8e\x9f \xe3\x81\x95\xe3\x81\xa8\xe5\xbf\x97']
The database table only has 2 columns: one integer, one varchar.
My statement (simplified) for creating the table is only:
CREATE TABLE [dbo].[example](
[ID] [int] NOT NULL,
[Comment] [nvarchar](max)
)
And to run bcp, I'm using this line:
c:\>bcp dbo.example in fileinput -S servername -T -t^^ -c
It successfully imports about a million rows, but all of my accented characters are broken.
For example, "Böhm, Rüdiger" is turned into "B+¦hm, R++diger". Does anyone have experience with how to properly set switches or other hints with bcp?
Edit: I switched varchar to nvarchar, but this does not fix the issue. This output in Python (read with the csv module):
['62', 'B\xc3\xb6hm, R\xc3\xbcdiger']
is displayed as this in SSMS from the destination DB (delimiters matched for consistency):
select * from dbo.example where id = 62
62;"B├╢hm, R├╝diger"
where in pgAdmin, using the original DB, I have this:
62;"Böhm, Rüdiger"
You may need to modify your BCP command to support wide character sets (note the use of -w instead of -c switch)
bcp dbo.example in fileinput -S servername -T -t^^ -w
BCP documentation reference
See also http://msdn.microsoft.com/en-us/library/ms188289.aspx
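One caveat worth noting: the escaped bytes in the Python output above (\xc3\xb6 for ö) are UTF-8, while the -w switch expects UTF-16LE input. If the dump file is UTF-8, it may need converting first, for example with iconv:
iconv -f UTF-8 -t UTF-16LE fileinput > fileinput.utf16
bcp dbo.example in fileinput.utf16 -S servername -T -t^^ -w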
If you need to preserve Unicode, change varchar to nvarchar...

Mysql dump of single table

I am trying to make a normal (that is, restorable) backup of a MySQL database. My problem is that I only need to back up a single table, the one that was last created or edited. Is it possible to get mysqldump to do that? MySQL can find the last inserted table, but how can I include that in the mysqldump command? I need to do it without locking the table, and the DB has partitioning enabled... Thanks for the help...
You can use this SQL to get the last created or updated table:
select table_schema, table_name
from information_schema.tables
where table_schema not in ('mysql', 'information_schema', 'performance_schema')
-- update_time is NULL for tables that have never been updated, and
-- GREATEST() returns NULL if any argument is NULL, so fall back to create_time
order by greatest(create_time, coalesce(update_time, create_time)) desc
limit 1;
Once you have the result of this query, you can incorporate it into any other language (for example bash) to produce the dump of exactly that table.
./mysqldump -uroot -proot mysql user > mysql_user.sql
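A minimal bash sketch tying the two steps together; the credentials are placeholders, and --single-transaction is one way to dump InnoDB tables without taking read locks:
#!/bin/sh
# Find the most recently created or updated table, then dump just that table.
SCHEMA_TABLE=$(mysql -u root -proot -N -B -e "
    select table_schema, table_name
    from information_schema.tables
    where table_schema not in ('mysql','information_schema','performance_schema')
    order by greatest(create_time, coalesce(update_time, create_time)) desc
    limit 1")
set -- $SCHEMA_TABLE   # split "schema<TAB>table" into $1 and $2
mysqldump -u root -proot --single-transaction "$1" "$2" > "$1.$2.sql"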
For dumping a single table, use the command below.
Open a command prompt and cd to the MySQL bin directory, for example c:\program files\mysql\bin.
Now type the command:
mysqldump -u username -ppassword databasename tablename > C:\backup\filename.sql
Here username is your MySQL username,
password is your MySQL password (note there is no space after -p; use -p on its own to be prompted instead),
databasename is your database name,
tablename is your table name,
and C:\backup\filename.sql is the path and filename under which the dump should be saved.
If you want to load the backed-up table into another database, run the following from the command prompt:
mysql -u username -ppassword databasename < C:\backup\filename.sql
