I have a large production database of around 30 GB running on MySQL on a remote machine. I want to make a copy of that database on my local MySQL setup, but I don't want to use SQL dump files.
Is there any alternative way to copy the production database to my local machine without using SQL dump files?
Thanks in advance.
If you are using MySQL 5.x, you can use the replication mechanism to maintain a "mirror" database. Once replication is running, you can stop the slave and back it up very quickly without stopping the master.
If you want to use replication for backups, you can find more information here:
Using Replication for Backups at dev.mysql.com
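As a rough sketch of that backup cycle on the slave (assuming replication is already set up; the paths, credentials, and service names here are illustrative, not from the original answer):

    # on the replica only; the master keeps serving traffic
    mysql -u root -p -e "STOP SLAVE;"    # pause replication at a consistent point
    sudo systemctl stop mysql            # cold, file-level copy; no SQL dump involved
    sudo cp -a /var/lib/mysql /backup/mysql-replica-copy
    sudo systemctl start mysql
    mysql -u root -p -e "START SLAVE;"   # the replica catches up with the master

The copied data directory can then be dropped into a local MySQL instance of the same version.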
In a project that I support, we already have PostgreSQL databases in different environments: Development, Integration, and Production.
I know we can take a backup of the Integration database with pg_dump and restore it into Development in order to sync those databases.
However, I want to understand whether I can use the backup file from pg_dump to create the database locally on my system.
A "local database" is not substantially different from a "remote database", so yes, that should work.
As always, note that restoring a dump on a lower PostgreSQL version than the one where it was taken is not supported (and will often fail).
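As a minimal sketch (host, user, and database names below are placeholders, not from the original post):

    # dump the Integration database in pg_dump's custom format
    pg_dump -h integration-host -U app_user -Fc -f integration.dump app_db
    # create an empty local database and restore into it
    createdb -U postgres app_db_local
    pg_restore -U postgres -d app_db_local integration.dump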
Goal: Backup and Restore a SQL Server database multiple times onto an Amazon RDS SQL Server instance with different database and file names.
So Amazon RDS added the ability to "import" and "export" SQL Server database backups, yay! But you can't change the database name or the file names, boo!
For non-production databases, I want to put them on a single RDS instance, e.g. dev, test, integration, etc. since I don't need much performance and it would save a lot of money.
I have been trying to come up with a solution for cloning a database onto an Amazon RDS instance while specifying the database name. I don't want to (i.e. am not allowed to) spend $6000 for Red Gate SQL Clone. Trying to hack together a combination of scripting, bcp, import/export, etc. would likely take a lot of time.
With the introduction of importing/exporting a database in RDS via native SQL backups, I have a new option. The problem is that I can't specify the database and file names on "import" (restore).
I thought about writing a script that gets the database backup from RDS, restores it to a local SQL Server Express instance (specifying the database name and file names I want on the destination, roughly as in the sketch below), backs that up, and then imports/restores it to Amazon. This is an option, but it will take WAY longer than is probably practical.
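The local rename step would be an ordinary RESTORE ... WITH MOVE; all names and paths below are placeholders (the real logical file names come from RESTORE FILELISTONLY):

    -- inspect the logical file names inside the backup first
    RESTORE FILELISTONLY FROM DISK = N'C:\temp\prod.bak';
    -- restore under a new database name, relocating and renaming the files
    RESTORE DATABASE dev
    FROM DISK = N'C:\temp\prod.bak'
    WITH MOVE N'prod_data' TO N'C:\data\dev.mdf',
         MOVE N'prod_log'  TO N'C:\data\dev_log.ldf';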
So... my final thought at this point and my question: is there a reliable way to simply edit/patch the backup file to change the database and file names?
Even if you could afford SQL Clone, I'm not sure it would function on AWS, as I believe it requires Windows Hyper-V, which isn't supported on Windows Server VMs on AWS.
WinDocks has also just released support for SQL Server cloning, but they also use a Hyper-V based approach... so if you have options outside of AWS, I believe their solution fits your budget... but again, not on AWS.
Disclosure: I am the Co-Founder of WinDocks
My scenario: my live database is on Azure (database name vlproduction), and I am using SQL Server 2014 on my local machine (database name testvlproduction). For some reason, my testvlproduction database was deleted.
I want to regenerate testvlproduction to be the same as vlproduction, but I found there is no way to take a direct backup of the live database, so I generated a script with data. The script is too big (300 MB), and whenever I try to run it locally it shows
System.OutOfMemoryException
Please tell me what to do to fix this, or is there another way to generate a local database that is the same as the live one?
Is any such functionality already built into SQL Server?
Maybe this question is a repeat, but I still have no solution for my issue.
Feel free to ask any questions.
Thanks
This is a limitation of SQL Server Management Studio, not the SQL Server service: it is Management Studio that is running out of memory.
This is likely caused by the size of the result set that you are returning to Management Studio.
See https://support.microsoft.com/en-in/kb/2874903 for more details.
For more, see "SQL Server Management Studio can't handle large files".
Since your SSMS runs out of memory, you should try a newer release of SSMS. It should be compatible with SQL Server 2014.
If you have some idle time on your production database, take a backup and restore it locally: use Export Data-tier Application and select the proper version where you want to restore. If this is not an option for you, I'd suggest taking #mohan111's suggestion of running your script in batches.
I believe you have two problems:
System out of memory exception - this usually happens while running big scripts in SSMS. One workaround is to run the script from the command prompt using sqlcmd (see the link https://msdn.microsoft.com/en-us/library/ms180944.aspx and the sketch after this list)
Creating a test database with the same schema as production - you can make use of the SSDT tool in Visual Studio. It helps you create a database project that mimics the production database, and you can use the publish functionality to create a database with the same schema wherever required.
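A minimal sqlcmd invocation for item 1 (server, database, and file names below are placeholders):

    :: runs the generated script outside SSMS, so SSMS memory limits don't apply
    sqlcmd -S localhost -d testvlproduction -E -i generated_script.sql -o script_run.log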
I'm using a C# application to back up and restore databases on a remote server using microsoft.sqlserver.smo.dll.
Testing with my local machine, I can browse backup files to select the backup to use. Can this be done through code for the remote SQL Server using the SQL credentials, similar to the way SSMS does it?
My backups are saved with a certain naming convention (e.g. "Ebuy_full_2013_8_7_H13_M40.bak"), and I would like to be able to show these in the application so a decision can be made about which backup file to restore.
Thanks,
Rick
SOLVED: Based on a comment on another question, I ran SQL Profiler to determine what calls SSMS was making, found it was using master.dbo.xp_dirtree, and was able to duplicate this in my app.
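For reference, xp_dirtree is an undocumented procedure that lists a directory as the SQL Server service itself sees it; a quick way to try it (connection details and path below are placeholders) is:

    sqlcmd -S remote-server -U backup_user -P secret -Q "EXEC master.dbo.xp_dirtree N'D:\Backups', 1, 1"

The third argument of 1 includes files as well as subdirectories; issuing the same query from the C# application returns rows that can be filtered by the backup naming convention.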
For SQL Server, we are able to send the database over to offshore staff pretty easily, for the most part.
Is this possible with the AS/400, or can they only VPN in to work?
Every database engine has a slightly different dialect of SQL. DB2 for i at V5R4 has differences from DB2 LUW 9.7, and both are different from SQL Server and MySQL at any version. So the quick answer is no, you can't simply make a copy of a DB2 for i database and run it on MySQL or SQL Server. You'd normally do exactly as you are doing with SQL Server: have one machine here and another machine there, and unload/reload the data as needed.
Having said that, the differences between SQL dialects are not usually crippling. Use the IBM Navigator for i and extract all of the DDL for the IBM database, then try to execute the DDL on the SQL Server machine. You'll have some syntax problems, but you should be able to work them out with someone who is knowledgeable in both dialects. Keep track of the changes to the DDL because you'll need them in order to extract the data from the IBM side.
Once you have the empty database created on the new machine, it's time to extract the data. Write some CL programs that use CPYTOIMPF (along the lines of the sketch below) to generate CSV files, flat files, or whatever it is that SQL Server wants in order to import properly. Then FTP that data to the new machine and write some scripts to do the import.
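A hedged CL sketch of the export step (library, file, and path names are placeholders):

    /* copy one table to a comma-delimited stream file for transfer */
    CPYTOIMPF FROMFILE(MYLIB/CUSTOMER) +
              TOSTMF('/home/xfer/customer.csv') +
              MBROPT(*REPLACE) STMFCODPAG(*PCASCII) +
              RCDDLM(*CRLF) DTAFMT(*DLM) FLDDLM(',') +
              STRDLM(*DBLQUOTE)

On the SQL Server side, the resulting CSV can then be loaded with bcp or BULK INSERT.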
As you can tell, this is not going to be a simple process, and it will take some time to develop and debug. I'd go with having the offshore staff use a VPN to the local IBM machine.
The easiest way I can think of would be to create a Save File (SAVF), then FTP that save file to the other IBM i and restore it (http://pic.dhe.ibm.com/infocenter/iseries/v6r1m0/index.jsp?topic=/cl/rstobj.htm).
In the PC world this is similar to zipping up a directory, FTPing it to another machine and then unzipping it.
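In CL terms, the round trip looks roughly like this (library and save-file names are placeholders):

    CRTSAVF FILE(QGPL/XFER)                        /* create an empty save file */
    SAVLIB LIB(MYLIB) DEV(*SAVF) SAVF(QGPL/XFER)   /* save the library into it  */
    /* FTP QGPL/XFER to the other system in binary mode, then on that system: */
    RSTLIB SAVLIB(MYLIB) DEV(*SAVF) SAVF(QGPL/XFER)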
If this isn't what you mean, can you elaborate on what you're wanting?
The offshore site probably has its own SQL Server, likely running the same version as yours.
But unless they also have an IBM Power System running the same release of IBM i, then they will most likely need to access your system.