I have a CREATE TABLE statement in a file. Using the sqlcmd command, I want to create a table. Below is the table structure present in the file columns.sql:
CREATE TABLE [dbname].[accessforms].tblename1
(
pk_column int PRIMARY KEY,
column_1 int NOT NULL
);
GO
I run it like this:
sqlcmd -S server_name -U username -P password -i /home/usr/columns.sql -o /home/usr/columns.txt
And I am getting this error:
Reference to database and/or server name in 'dbname.accessforms.tblename1' is not supported in this version of SQL Server
Could you please help me? Why am I getting this error, and how can we solve it?
You're running that query against Azure SQL Database.
Azure SQL Database doesn't support three-part naming such as database_name.schema_name.object_name.
You'll have to drop the database name from your reference and use only schema.object.
Your script will have to become:
CREATE TABLE [accessforms].tblename1
(
pk_column int PRIMARY KEY,
column_1 int NOT NULL
);
GO
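In addition, you can tell sqlcmd which database to run the script against with its -d option, so the connection lands in the right database even without the prefix (dbname here stands for your actual database):

sqlcmd -S server_name -d dbname -U username -P password -i /home/usr/columns.sql -o /home/usr/columns.txt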
Related
I need a simple example of how to copy data from database DB1 table T1 to database DB2 table T2.
T2 has a structure identical to T1 (same column names and properties, just different data).
DB2 runs on the same server as DB1, but on a different port.
If the two databases are on two different server instances, you could export to CSV from db1 and then import the data into db2:
COPY (SELECT * FROM t1) TO '/home/export.csv';
and then load it back into db2:
COPY t2 FROM '/home/export.csv';
Again, the two tables on the two different database instances must have the same structure.
Using the command-line tools pg_dump and psql, you could even do it this way:
pg_dump -U postgres -t t1 db1 | psql -U postgres -d db2
You can specify command-line arguments to both pg_dump and psql to set the address and/or port of each server.
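For example, a one-liner piping the dump straight from one instance into the other (the host and ports 5432/5433 are assumed values; adjust them to your setup):

pg_dump -U postgres -h localhost -p 5432 -t t1 db1 | psql -U postgres -h localhost -p 5433 -d db2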
Another option would be to use an external tool like openDBcopy to perform the migration/copy of the table.
You can try this one:
pg_dump -t table_name_to_copy source_db | psql target_db
MySQL allows you to export a complete database at once, but I found it really tough to import the complete database at once.
I used mysqldump -u root -p --all-databases > alldb.sql, and when I try to import the complete database with the mysql -u root -p < alldb.sql command, it gives me a very weird error.
Error
SQL query:
--
-- Database: `fadudeal_blog`
--
-- --------------------------------------------------------
--
-- Table structure for table `wp_commentmeta`
--
CREATE TABLE IF NOT EXISTS `wp_commentmeta` (
`meta_id` BIGINT( 20 ) UNSIGNED NOT NULL AUTO_INCREMENT ,
`comment_id` BIGINT( 20 ) UNSIGNED NOT NULL DEFAULT '0',
`meta_key` VARCHAR( 255 ) COLLATE utf8mb4_unicode_ci DEFAULT NULL ,
`meta_value` LONGTEXT COLLATE utf8mb4_unicode_ci,
PRIMARY KEY ( `meta_id` ) ,
KEY `comment_id` ( `comment_id` ) ,
KEY `meta_key` ( `meta_key` ( 191 ) )
) ENGINE = INNODB DEFAULT CHARSET = utf8mb4 COLLATE = utf8mb4_unicode_ci AUTO_INCREMENT =1;
MySQL said:
#1046 - No database selected
It's saying #1046 - No database selected. My question is: if MySQL knows that I exported the complete database at once, how can I specify just one database name?
I don't know if I am right or wrong, but I tried it multiple times and found the same problem. Please let me know how we can import a complete database at once.
Rohit, I think you will have to create the database first, issue a USE, and then run the SQL. I looked at the manual here https://dev.mysql.com/doc/refman/5.7/en/mysql-batch-commands.html and it also provides an option for you to mention the db_name when you connect itself,
something like mysql -u root -p db_name < sql.sql (assuming the db is already created). It may be worth a try doing it that way.
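Putting that together as a minimal sketch (the database name fadudeal_blog is taken from the error output above; substitute your own):

mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS fadudeal_blog"
mysql -u root -p fadudeal_blog < alldb.sql

Note the second command loads the whole dump into fadudeal_blog, which is only what you want if the dump contains a single database and no CREATE DATABASE/USE statements of its own.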
shell> mysqldump --databases db1 db2 db3 > dump.sql
The --databases option causes all names on the command line to be treated as database names. Without this option, mysqldump treats the first name as a database name and those following as table names.
With --all-databases or --databases, mysqldump writes CREATE DATABASE and USE statements prior to the dump output for each database. This ensures that when the dump file is reloaded, it creates each database if it does not exist and makes it the default database so database contents are loaded into the same database from which they came. If you want to cause the dump file to force a drop of each database before recreating it, use the --add-drop-database option as well. In this case, mysqldump writes a DROP DATABASE statement preceding each CREATE DATABASE statement.
from : https://dev.mysql.com/doc/refman/5.7/en/mysqldump-sql-format.html
I'm trying to work with MySQL on my laptop (Ubuntu), and every time I import a .sql file into the database, the console shows me the same message: "Unknown database 'Spotify'" (for example) when selecting the database.
The SQL script is correct and should work, but it always shows the same message; any solution?
CREATE DATABASE Spotify;
USE Spotify ;
DROP TABLE IF EXISTS Spotify.Usuarios ;
CREATE TABLE IF NOT EXISTS Spotify.Usuarios
(
iduser INT NULL ,
user VARCHAR(10) NULL ,
password VARCHAR(45) NULL ,
reg VARCHAR(45) NULL
) ENGINE = InnoDB;
Finally, I solved it: there was a problem with the Ubuntu packages, and the MySQL installation didn't finish correctly.
I am exporting a simple Hive table to SQL Server. Both tables have the exact same schema. There is an identity column in the SQL Server table, and I have run "SET IDENTITY_INSERT table_name ON" on it.
But when I export from Sqoop to SQL Server, Sqoop gives me an error saying "IDENTITY_INSERT is set to OFF".
If I export to a SQL Server table that has no identity column, everything works fine.
Any idea about this? Has anyone faced this issue while exporting from Sqoop to SQL Server?
Thanks
In short:
Append -- --identity-insert to your Sqoop export command.
Detailed:
Here is an example for anyone searching (and possibly for my own later reference).
SQLSERVER_JDBC_URI="jdbc:sqlserver://<address>:<port>;username=<username>;password=<password>"
HIVE_PATH="/user/hive/warehouse/"
TABLENAME=<tablename>
sqoop-export \
-D mapreduce.job.queuename=<queuename> \
--connect $SQLSERVER_JDBC_URI \
--export-dir "$HIVE_PATH""$TABLENAME" \
--input-fields-terminated-by , \
--table "$TABLENAME" \
-- --schema <schema> \
--identity-insert
Note the particular bits at the end: -- --schema <schema> --identity-insert. You can omit the schema part, but keep the extra -- that precedes it.
That allows you to set the identity insert ability for that table within your sqoop session. (source)
Tell SQL Server to let you insert into the table with the IDENTITY column. That's an auto-increment column that you normally can't write to, but you can change that. See here or here. It'll still fail if one of your values conflicts with one that already exists in that column.
The SET IDENTITY_INSERT statement is session-specific. So if you set it by opening a query window, executing the statement, and then ran the export anywhere else, IDENTITY_INSERT was only set in that session, not in the export session. You need to modify the export itself if possible. If not, a direct export from sqoop to MSSQL will not be possible; instead you will need to dump the data from sqoop to a file that MSSQL can read (such as tab delimited) and then write a statement that first does SET IDENTITY_INSERT ON, then BULK INSERTs the file, then does SET IDENTITY_INSERT OFF.
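A minimal T-SQL sketch of that fallback, assuming a tab-delimited file C:\export\t1.tsv produced from the Sqoop output and a target table dbo.t1 (both names are placeholders):

SET IDENTITY_INSERT dbo.t1 ON;

-- KEEPIDENTITY makes BULK INSERT keep the identity values from the file
-- instead of generating new ones
BULK INSERT dbo.t1
FROM 'C:\export\t1.tsv'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', KEEPIDENTITY);

SET IDENTITY_INSERT dbo.t1 OFF;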
I am trying to make a normal (that is, restorable) backup of a MySQL database. My problem is that I only need to back up a single table, the one that was last created or edited. Is it possible to get mysqldump to do that? MySQL can find the last inserted table, but how can I include it in the mysqldump command? I need to do this without locking the table, and the DB has partitioning enabled. Thanks for the help.
You can use this SQL to get the last created / updated table (update_time is NULL for tables that have never been updated, and greatest() returns NULL if any argument is NULL, so wrap both columns in IFNULL before comparing):
select table_schema, table_name
from information_schema.tables
where table_schema not in ('mysql', 'information_schema', 'performance_schema')
order by greatest(ifnull(create_time, '1970-01-01'), ifnull(update_time, '1970-01-01')) desc
limit 1;
Once you have the result of this query, you can feed it into any other language (for example bash) to produce the exact table dump.
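A bash sketch of that idea (it assumes mysql and mysqldump are on the PATH and credentials come from ~/.my.cnf; --single-transaction avoids locking InnoDB tables during the dump):

# Find the most recently created/updated table as "schema table"
TABLE=$(mysql -N -B -e "
  select concat(table_schema, ' ', table_name)
  from information_schema.tables
  where table_schema not in ('mysql', 'information_schema', 'performance_schema')
  order by greatest(ifnull(create_time, '1970-01-01'), ifnull(update_time, '1970-01-01')) desc
  limit 1")

# mysqldump expects: db_name table_name; $TABLE expands to both words
mysqldump --single-transaction $TABLE > last_table.sql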
./mysqldump -uroot -proot mysql user > mysql_user.sql
For dumping a single table, use the command below.
Open a command prompt and change to the MySQL bin directory, e.g. c:\program files\mysql\bin.
Now type the command:
mysqldump -u username -p databasename tablename > C:\backup\filename.sql
Here username - your mysql username (the -p option will prompt for your password)
databasename - your database name
tablename - your table name
C:\backup\filename.sql - the path where the file should be saved, and the filename.
If you want to load the backed-up table into another database, you can do it as follows:
from the command prompt, type the command below
mysql -u username -p databasename < C:\backup\filename.sql