I have a database (db4) that was created by MySQL4, and a database (db5) that was created by MySQL5. db4 contains several tables with the charset latin1 and several indices, but no data that was encrypted using the MySQL "PASSWORD" function. db5 is empty.
I want to migrate all tables and indices from db4 to db5 (which are actually on the same server). Ideally this should be done without any loss of information and within a short period of time.
Which terminal commands do I need to export the complete database from MySQL 4 and then load the data into db5? Do I have to re-create the indices?
You can make a dump of the database in MySQL 4 using mysqldump, and then load it into MySQL 5 using the mysql command.
mysqldump dbname > file
mysql dbname < file
All the indexes will be recreated automatically.
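Applied to the databases from the question (user and password are placeholders, and this assumes the account has the needed privileges on both db4 and db5), that looks roughly like:
mysqldump -u user -p db4 > db4.sql
mysql -u user -p db5 < db4.sql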
In case anyone else needs to move a database from MySQL 4 to MySQL 5, here's what I did.
dump the database from the mysql4 server
mysqldump -uuser -ppass db4 > db4.sql
fix some syntax problems (source)
# change comment style from -- to #
sed -r -i -e 's/^--(.*)$/#\1/' db4.sql
# change type declaration keyword from "TYPE" to "ENGINE"
sed -i -e 's/) TYPE=/) ENGINE=/' db4.sql
# adapt timestamp field definition
sed -i -e 's/timestamp(14) NOT NULL,$/timestamp NOT NULL default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP,/' db4.sql
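If you prefer a single pass, the three fixes above can be combined into one sed invocation; this is just a sketch of the same expressions (the first one is rewritten in basic-regex syntax because -r is dropped):
sed -i \
    -e 's/^--\(.*\)$/#\1/' \
    -e 's/) TYPE=/) ENGINE=/' \
    -e 's/timestamp(14) NOT NULL,$/timestamp NOT NULL default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP,/' \
    db4.sql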
on the mysql5 server you can now import the modified SQL dump
mysql -uuser -ppass db5 < db4.sql
Sven, I think doing a backup of your data in db4 and restoring it in db5 will work for you.
Backup
mysqldump database_name > file_name.sql
Restore
mysql database_name < file_name.sql
It can also be done in a single step, using the following command:
mysqldump -u dbo4 --password="..." --default-character-set="latin1" db4 | mysql -S /tmp/mysql5.sock -u dbo5 --password="..." --default-character-set="latin1" db5
Unfortunately, default values containing special characters are not imported correctly, and there seems to be no way to avoid that: How to maintain the character set of default values when uploading a MySQL dump.
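If you want to check how the defaults came through after the import, you could inspect a table definition on the MySQL 5 side; some_table here is just a placeholder for one of your tables:
mysql -uuser -ppass db5 -e "SHOW CREATE TABLE some_table\G"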
I used Red Gate SQL Data Compare and generated a .sql file so that I could run it on my local machine. But the problem is that the file is over 300 MB, which means I can't copy and paste it because the clipboard can't handle it, and when I try to open the file in SQL Server Management Studio I get an error about the file being too large.
Is there a way to run a large .sql file? The file basically contains data for two new tables.
From the command prompt, start up sqlcmd:
sqlcmd -S <server> -i C:\<your file here>.sql
Just replace <server> with the location of your SQL box and <your file here> with the name of your script. Don't forget, if you're using a SQL instance the syntax is:
sqlcmd -S <server>\instance.
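For example, a sketch with made-up server, database, and path names, using -E for a trusted connection and -o to write the output to a log file:
sqlcmd -S MYSERVER\SQLEXPRESS -d MyDatabase -E -i C:\scripts\bigscript.sql -o C:\scripts\bigscript.log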
Here is the list of all arguments you can pass sqlcmd:
Sqlcmd [-U login id] [-P password]
[-S server] [-H hostname] [-E trusted connection]
[-d use database name] [-l login timeout] [-t query timeout]
[-h headers] [-s colseparator] [-w screen width]
[-a packetsize] [-e echo input] [-I Enable Quoted Identifiers]
[-c cmdend] [-L[c] list servers[clean output]]
[-q "cmdline query"] [-Q "cmdline query" and exit]
[-m errorlevel] [-V severitylevel] [-W remove trailing spaces]
[-u unicode output] [-r[0|1] msgs to stderr]
[-i inputfile] [-o outputfile] [-z new password]
[-f | i:[,o:]] [-Z new password and exit]
[-k[1|2] remove[replace] control characters]
[-y variable length type display width]
[-Y fixed length type display width]
[-p[1] print statistics[colon format]]
[-R use client regional setting]
[-b On error batch abort]
[-v var = "value"...] [-A dedicated admin connection]
[-X[1] disable commands, startup script, environment variables [and exit]]
[-x disable variable substitution]
[-? show syntax summary]
I had exactly the same issue and struggled with it for a while, then finally found the solution, which is to set the -a parameter of sqlcmd in order to change its default packet size:
sqlcmd -S [servername] -d [databasename] -i [scriptfilename] -a 32767
You can use this tool as well. It is really useful.
BigSqlRunner
Open a command prompt with administrator privileges.
Change directory to where the .sql file is stored.
Execute the following command:
sqlcmd -S 'your server name' -U 'user name of server' -P 'password of server' -d 'db name' -i script.sql
I am using MSSQL Express 2014 and none of the solutions worked for me; they all just crashed SQL. As I only needed to run a one-off script with many simple INSERT statements, I got around it by writing a little console app as a very last resort:
class Program
{
    static void Main(string[] args)
    {
        RunScript();
    }

    private static void RunScript()
    {
        // Entity Framework context for the target database
        My_DataEntities db = new My_DataEntities();

        // Read the script line by line; this assumes every statement fits on a single line.
        using (System.IO.StreamReader file =
                   new System.IO.StreamReader("c:\\ukpostcodesmssql.sql"))
        {
            string line;
            while ((line = file.ReadLine()) != null)
            {
                // Skip blank lines and execute each statement individually.
                if (!string.IsNullOrWhiteSpace(line))
                {
                    db.Database.ExecuteSqlCommand(line);
                }
            }
        }
    }
}
Run it at the command line with osql, see here:
http://metrix.fcny.org/wiki/display/dev/How+to+execute+a+.SQL+script+using+OSQL
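As a sketch (the server, database, credentials, and script path below are placeholders), an osql call looks something like:
osql -S MYSERVER -d MyDatabase -U sa -P secret -i C:\yourScript.sql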
Hope this helps you!
sqlcmd -U UserName -S <ServerName\InstanceName> -i U:\<Path>\script.sql
I had a similar problem. My file with the SQL script was over 150 MB in size (with almost 900k very simple INSERTs). I used the solution advised by Takuro (in the answer to this question), but I still got an error with a message saying that there was not enough memory ("There is insufficient system memory in resource pool 'internal' to run this query").
What helped me was putting a GO command after every 50k INSERTs, as in the sketch below.
(This doesn't directly address the question (file size), but I believe it resolves a problem that is indirectly connected with the large size of the SQL script itself; in my case, many INSERT commands.)
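If you want to automate that, here is a rough sketch, assuming each INSERT sits on its own line (the file names are placeholders), that adds a GO after every 50,000 lines:
awk '{print} NR % 50000 == 0 {print "GO"}' big_script.sql > big_script_batched.sql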
sqlcmd -S [servername] -d [databasename] -i [scriptfilename] -a 32767
I successfully ran this command with a 365 MB SQL file.
This syntax runs in about 15 minutes.
It helped me solve a problem that took me a long time to figure out.
Run the script file
Open a command prompt window.
In the Command Prompt window, type: sqlcmd -S <ServerName\InstanceName> -i C:\yourScript.sql
Press ENTER.
Your question is quite similar to this one
You can save your file/script as .txt or .sql and run it from SQL Server Management Studio (I think the menu is Open/Query; then just run the query in the SSMS interface). You might have to update the first line, indicating the database to be created or selected on your local machine.
If you have to do this data transfer very often, you could then go for replication. Depending on your needs, snapshot replication could be ok. If you have to synch the data between your two servers, you could go for a more complex model such as merge replication.
EDIT: I didn't notice that you had problems with SSMS linked to file size. Then you can go for command-line, as proposed by others, snapshot replication (publish on your main server, subscribe on your local one, replicate, then unsubscribe) or even backup/restore
The file basically contains data for two new tables.
Then you may find it simpler to just DTS (or SSIS, if this is SQL Server 2005+) the data over, if the two servers are on the same network.
If the two servers are not on the same network, you can backup the source database and restore it to a new database on the destination server. Then you can use DTS/SSIS, or even a simple INSERT INTO SELECT, to transfer the two tables to the destination database.
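As a rough sketch of that last step, with made-up database and table names (and assuming the two new tables already exist in the destination database):
-- copy the two new tables from the restored copy into the destination database
INSERT INTO TargetDb.dbo.NewTable1 SELECT * FROM RestoredCopy.dbo.NewTable1;
INSERT INTO TargetDb.dbo.NewTable2 SELECT * FROM RestoredCopy.dbo.NewTable2;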
Here is another option for anyone still encountering problems importing really large SQL dumps.
If you have access to the server, you could export the database in multiple parts: first the structure, then the data per table (or per group of related objects) in smaller pieces, instead of one big file.
If you don't have access to the server and/or are required to use the existing big file, you could try to split it into parts with SQLDumpSplitter: https://philiplb.de/sqldumpsplitter3/.
Then import the pieces to get a full copy of the database.
Good luck, guys.
I have a database which contains 50 tables (5 schemas, 5 tablespaces), and I tried to take a backup of a few tables (each table in a different tablespace) using the following command:
$psql -U my_db_user my_db_name -t my_table_1 -t my_table_2 -t my_table_3 > ttables.sql
The above command works fine for taking the SQL backup, but some table columns contain null values. While restoring the dump using the following command, I get errors caused by the null (\N) values in the backup file (ttables.sql).
$cat ttables.sql | psql -d new_db -U new_db_user
Is there any way to avoid the \N characters in the backup dump file? Or is there anything wrong with the backup/restore commands I have used?
(Postgres version 9.1)
I have a huge database which I want to dump out using BCP and then load it up elsewhere. I have done quite a bit of research on the Sybase version of BCP (being more familiar with the MSSQL one) and I see how to USE an Import file but I can't figure out for the life of me how to create one.
I am currently making my Sybase bcp out files of data like this:
bcp mytester.dbo.XTABLE out XTABLE.bcp -U sa -P mypass -T -n
and trying to import them back in like this:
bcp mytester.dbo.XTABLE in XTABLE.bcp -E -n -S Sybase_157 -U sa -P SyAdmin
Right now, the IN part gives me an error about IDENTITY_INSERT regardless of whether the table has an identity or not:
Server Message: Sybase157 - Msg 7756, Level 16, State 1: Cannot use
'SET IDENTITY_INSERT' for table 'mytester.dbo.XTABLE' because the
table does not have the identity property.
I have often used the great info on this page for help, but this is the first time I've put in a question, so I humbly request any guidance you all can provide :)
In your BCP in, the -E flag tells bcp to take identity column values from the input file. I would try running it without that flag. Format (.fmt) files in Sybase are a bit finicky, and I would try to avoid them if possible. So as long as your schemas are the same between your systems, the following command should work:
bcp mytester.dbo.XTABLE in XTABLE.bcp -n -S Sybase_157 -U sa -P SyAdmin
Also, the -T flag on your bcp out seems odd. I know that in SQL Server -T is a security setting, but in Sybase it indicates the max size of a text or image column, and is followed by a number, e.g. -T 32000 (which would be 32 KB).
But to answer the question in your title: if you run bcp out interactively (without specifying -c, -n, or -f) it will step through each column, prompting for information. At the end it will ask if you want to create a format file, and allow you to specify the name of the file.
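As a sketch of that workflow, assuming you save the format file as XTABLE.fmt during the interactive export (the name is whatever you choose):
# interactive export: bcp prompts for each column and offers to save a format file
bcp mytester.dbo.XTABLE out XTABLE.bcp -S Sybase_157 -U sa -P SyAdmin
# load using the saved format file
bcp mytester.dbo.XTABLE in XTABLE.bcp -f XTABLE.fmt -S Sybase_157 -U sa -P SyAdmin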
For reference, here is the syntax and available flags:
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc30191.1550/html/utility/X14951.htm
And the chapter in the Utility Guide:
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc30191.1550/html/utility/BABGCCIC.htm
I need to copy a postgres DB from one server to another, but the credentials I have do not have permission to lock the database so a pg_dump fails. I have full read/update/insert rights to the DB in question.
How can I make a copy of this database? I'm not worried about inconsistencies (it is a small database on a dev server, so minimal risks of inconsistencies during the extract)
[edit] Full error:
$ pg_dump --username=bob mydatabase > /tmp/dump.sql
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: permission denied for relation sl_node
pg_dump: The command was: LOCK TABLE _replication.sl_node IN ACCESS SHARE MODE
ERROR: permission denied for relation sl_node
This is your real problem.
Make sure the user bob has SELECT privilege for _replication.sl_node. Is that by any chance a Slony system table or something?
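Two possible ways around it, sketched with the names from the error message (run the GRANT as the table owner or a superuser):
# grant read access on the offending table
psql -d mydatabase -c 'GRANT SELECT ON _replication.sl_node TO bob;'
# or leave the replication schema out of the dump entirely
pg_dump --username=bob --exclude-schema=_replication mydatabase > /tmp/dump.sql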
This worked for me
sudo -u postgres pg_dump -Fc -c db_name > file_name.pgdump
Then create a DB and restore it with pg_restore:
sudo -u postgres /usr/local/pgsql/bin/pg_restore -U postgres -d db_name -v file_name.pgdump
pg_dump doesn't lock the entire database; it does, however, take an explicit lock on all the tables it is going to dump. This lock is taken in "access share mode", which is the same lock level required by a SELECT statement: it's intended just to guard against one of the tables being dropped between pg_dump deciding which tables to dump and then getting the data.
So it sounds like your problem might actually be that it is trying to dump a table you don't have permission for. PostgreSQL doesn't have database-level read/update/insert rights, so maybe you're just missing the SELECT privilege on a single table somewhere...
As Frank H. suggested, post the full error message and we'll try to help decode it.
You need SELECT permissions (read) on all database objects to make a dump, not LOCK permissions (whatever that may be). What's the complete error message when you start pg_dump to make a dump?
https://forums.aws.amazon.com/thread.jspa?threadID=151526
This link helped me a lot. It refers to another one:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.html#Appendix.PostgreSQL.CommonDBATasks.PostGIS
I first changed the ownership to rds_superuser, then ran this piece of code:
CREATE FUNCTION exec(text) returns text language plpgsql volatile AS $f$
BEGIN EXECUTE $1; RETURN $1; END; $f$;
SELECT exec('ALTER TABLE ' || quote_ident(s.nspname) || '.' || quote_ident(s.relname) || ' OWNER TO rds_superuser')
FROM (
SELECT nspname, relname
FROM pg_class c JOIN pg_namespace n ON (c.relnamespace = n.oid)
WHERE nspname in ('tiger','topology') AND
relkind IN ('r','S','v') ORDER BY relkind = 'S')
s;
Thereafter, I was able to dump my whole database.
Did you run pg_dump with the correct -U (the user who owns that DB)? If yes, then, as the other poster said, check the permissions.
HTH
This worked for me, with -d dbname and -n schemaname:
pg_dump -v -Fc -h <host> -U <username> -p <port> -d <db_name> -n <schema_name> > file_name.pgdump
The default schema is public.
I have a MySQL dump with 5 databases and would like to know if there is a way to import just one of them (using mysqldump or another tool).
Suggestions appreciated.
You can use the mysql command-line client's --one-database option:
mysql -u root -p --one-database YOURDBNAME < YOURFILE.SQL
Of course be careful when you do this.
You can also use a mysql dumpsplitter.
You can pipe the dumped SQL through sed and have it extract the database for you. Something like:
cat mysqldumped.sql | \
sed -n -e '/^CREATE DATABASE.*`the_database_you_want`/,/^CREATE DATABASE/ p' | \
sed -e '$d' | \
mysql
The two sed commands:
Only print the lines matching between the CREATE DATABASE lines (including both CREATE DATABASE lines), and
Delete the last CREATE DATABASE line from the output since we don't want mysqld to create a second database.
If your dump does not contain the CREATE DATABASE lines, you can also match against the USE lines.
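A sketch of that variant, assuming the dump was created with --databases so it contains USE statements (the database name is a placeholder); as before, the final sed drops the trailing line that belongs to the next database:
sed -n -e '/^USE `the_database_you_want`/,/^USE `/ p' mysqldumped.sql | \
sed -e '$d' | \
mysql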