Transferring a SQL Database on AWS - sql-server

We have a SQL Server 2005 database on our local server, and I have to transfer it to our SQL Server 2012 instance on Amazon RDS (of course, I'll have to repeat the procedure for the other databases).
I right-clicked the database, selected Generate Scripts - All tables - Copy Schema and Data, and saved everything as a .sql file.
At this point I attempted to use the SQL Azure Migration Wizard v5.15 (in a question here I saw that it works with AWS too; way to go, Microsoft!) to transfer the database to AWS.
However, it crashes.
No problem, I tried to use SQL Server Management Studio to import the file instead, but as soon as the RAM consumed by the program reached 1 GB (the database is 3.4 GB), BOOM - out of memory error!
What should I do now?

You'll need to do this part by part. I faced that problem a while back; my scripts reached something like 4 GB with just the schemas, tables, and so on. So I think you should first generate the scripts that create schemas, users, and logins. After that, tables, views, and procedures. Then the other objects, like jobs, functions... Finally, export all of your data to RDS through the Import/Export Wizard in SSMS.
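For the script files themselves, sqlcmd can stream each file from disk instead of loading it into SSMS, which sidesteps the out-of-memory problem. A minimal sketch in SQLCMD syntax (run it with sqlcmd -i, or in SSMS with SQLCMD Mode enabled); the file names and the RDS endpoint are hypothetical placeholders:

    -- Hypothetical endpoint and credentials; split your Generate Scripts
    -- output into these files in dependency order.
    :connect mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -U myadmin -P myPassword
    :r C:\scripts\01_schemas_logins_users.sql
    :r C:\scripts\02_tables_views_procedures.sql
    :r C:\scripts\03_jobs_functions_other_objects.sql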
I've followed these steps and they worked for me.
Good luck!

Related

Copy SQL Server database between two servers

I have a production SQL Server database on one server. The test and development databases are also on this server, which for a long time was no big deal performance-wise. But now I have to test some quite expensive SELECTs for a nightly report, and my users are getting into heavy trouble using the database during working hours.
We have a redundant server running SQL Server which we never really used, but now we think its time has come. Until now, to get a fresh copy of the production system, I do a full backup, copy it into a folder on server A, and map that folder as a network drive on server B. Then I copy the file into a folder where SQL Server has read permissions and restore the backup into the database on B.
On server A I have set up maintenance plans that back up the production system into one of the other databases when the plan is executed.
I want a solution like this for server B, but I can't get it running. I can SELECT from a table on server A while connected to server B, so the two servers know about and can see each other.
I've tried the Copy Database Wizard, but it crashes because it couldn't delete the database on server B, even when I had removed it manually before the attempt.
I also tried importing the database on server B from server A, but that didn't work either.
Googling my problem didn't work out, because I always get solutions for writing backup files to some kind of server.
I hope someone can help me automate this process.
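For reference, the manual round trip described above comes down to two statements, which is also what a scheduled job would run; a sketch, with the database name, logical file names, and paths all being hypothetical placeholders:

    -- On server A: full backup straight into the folder shared with server B.
    BACKUP DATABASE ProdDB
    TO DISK = N'\\ServerB\SQLBackups\ProdDB.bak'
    WITH INIT;

    -- On server B: restore over the previous copy, relocating the files locally.
    -- 'ProdDB' and 'ProdDB_log' stand in for the real logical file names.
    RESTORE DATABASE ProdDB_Test
    FROM DISK = N'D:\SQLBackups\ProdDB.bak'
    WITH REPLACE,
         MOVE N'ProdDB' TO N'D:\Data\ProdDB_Test.mdf',
         MOVE N'ProdDB_log' TO N'D:\Data\ProdDB_Test_log.ldf';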

Microsoft Access Forms with SSMS 2012

We maintain a Microsoft Access 2016 database for locker management inventory. The database contains complex queries (computed columns) and reports, and the back-end database file is on the same computer.
We are required to use SQL Server 2012 (managed through SQL Server Management Studio 2012) as the back end for the Microsoft Access forms, and the forms need to be accessible to 10 users on the LAN. Please give a step-by-step procedure for this task.
Thanks in anticipation.
Access is inherently multi-user and has no problem linking to tables on a SQL Server instance on the same LAN.
The design steps are typically to build the application in Access, unsplit, in a single file. Then, when the app is ready, one splits the file into a back end and a front end that are linked. One can copy the front end multiple times for multiple users.
These steps are all pure Access, and you'll find good documentation online and in any Access textbook.
The final step, if one must use SQL Server as the back end, is to import the tables from the Access back-end file into SQL Server. This is not uncommon, but the instructions for it belong on the SQL Server side.
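If it helps, one server-side way to do that import is OPENROWSET with the Access database engine (ACE) provider; this is only a sketch, assuming the ACE OLE DB provider is installed on the SQL Server machine and ad hoc distributed queries are enabled, and the path and table names are hypothetical:

    -- One-time enablement (requires elevated rights on the SQL Server).
    EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
    EXEC sp_configure 'Ad Hoc Distributed Queries', 1; RECONFIGURE;

    -- Copy one Access table into a new SQL Server table (hypothetical names).
    SELECT *
    INTO dbo.Lockers
    FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                    'C:\Data\LockerInventory_be.accdb';'Admin';'',
                    'SELECT * FROM Lockers');

The SQL Server Import and Export Wizard or the SQL Server Migration Assistant for Access will do the same job through a UI.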
Hope this helps.
One must understand the fundamental architecture: a multi-user Access application is two files, a front end and a back end (that is actually what they are called). There is only one back-end file; it is where the tables are held. The front-end file can be copied repeatedly, one per user, and all front ends link to the single back-end file.
In your case, SQL Server is the back end and Access is the front end.
One only really needs the SQL Server product as the back end if the payload is particularly high; payload is a combination of the quantity of records and the number of simultaneous users.
The basic approach is to split your database into two parts; a database splitter is built into Access. Once that's done, you could in theory just place the back-end .accdb file in a server folder that is shared with everyone, and then use the Linked Table Manager in Access to link to the back-end file.
Since you have been "mandated" to use SQL Server, you would instead migrate the back-end .accdb tables to SQL Server and then re-link (again using the built-in Linked Table Manager) so the tables point to SQL Server. The process for linking to an Access file and linking to SQL Server is nearly identical. The end result is that your Access application will be using tables that reside on SQL Server; from a developer's and even a user's point of view, your forms, code, reports, etc. will continue to run as before.
As a general rule, it only makes sense to run SQL Server on your own machine for development use. You would then have to copy/transfer the SQL database to that "mandated" instance of SQL Server, which is presumably running on some server. It would not make sense to mandate the use of SQL Server without providing you with a server that runs it.
It certainly is possible to let users connect to YOUR computer running SQL Server, but that seems less than ideal, since if you reboot, shut down, or your computer freezes, all the other users suffer a broken connection and much inconvenience.
Right now, if you have a shared folder for all users, such folders are typically NOT placed on one user's computer, but on some dedicated machine that acts as a kind of server for everyone.
So if they are telling you to use SQL Server, it is pretty much assumed they are providing a dedicated machine running SQL Server. Certainly for development you can (and even should) run SQL Server on your own machine. However, you would then have to transfer that database to the computer/server they are telling you to use, and re-link your Access application's tables to point at that production server. The last step would be to distribute this correctly linked application to each user. Just like all software you purchase, such as Word or Excel, you STILL install it on each computer. Now that you are the one building software, you as a developer adopt the same concept: distribute and install your application on each workstation. So while you might run Word or Excel (or Access) from the local computer, you may well "share" some data files (but NOT the application) on the server.
As long as the application you distribute to each workstation is correctly linked and points to the server-based instance of SQL Server, you will all be sharing the same database. In practice you do NOT have multiple users opening the same copy of the application you created; you distribute that Access application to each workstation. How you "get" the application onto each workstation is not really different from how you would supply a Word document or an Excel sheet; the only requirement is that each workstation gets its own copy. Since each user's copy has linked tables that point to SQL Server, you are all working on and sharing a common database.
So the first concept to grasp is that of splitting a database. I explain the concept here:
http://www.kallal.ca/Articles/split/index.htm
As for some steps and migrating to sql server, here is a great starting point:
https://www.fmsinc.com/MicrosoftAccess/SQLServerUpsizing/index.html

Edit SQL Server backup file to change database and file paths to allow restoring multiple times to same Amazon RDS instance as different databases

Goal: back up and restore a SQL Server database multiple times onto an Amazon RDS SQL Server instance, with different database and file names each time.
So Amazon RDS added the ability to access SQL Server database backups and "import" and "export", yay! But you can't change the database name or the file names, boo!
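For context, the "import" and "export" here are the RDS native backup/restore stored procedures, roughly as follows (the ARNs and names are hypothetical, and the instance's option group must have SQLSERVER_BACKUP_RESTORE configured). As I understand the limitation, @restore_db_name sets the new database's name, but the physical file names embedded in the .bak are reused, which is what blocks restoring the same backup twice to one instance:

    -- "Export": back up an RDS database to S3 (hypothetical ARN).
    EXEC msdb.dbo.rds_backup_database
         @source_db_name = N'ProdDB',
         @s3_arn_to_backup_to = N'arn:aws:s3:::my-bucket/ProdDB.bak';

    -- "Import": restore from S3; the embedded file names come along as-is.
    EXEC msdb.dbo.rds_restore_database
         @restore_db_name = N'DevDB',
         @s3_arn_to_restore_from = N'arn:aws:s3:::my-bucket/ProdDB.bak';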
For non-production databases (dev, test, integration, etc.), I want to put them all on a single RDS instance, since I don't need much performance there and it would save a lot of money.
I have been trying to come up with a solution for cloning a database onto an Amazon RDS instance while specifying the database name. I don't want to (i.e. I am not allowed to) spend $6,000 on Red Gate SQL Clone. Hacking together some combination of scripting, bcp, import/export, etc. is likely going to take a lot of time.
With the introduction of importing/exporting a database in RDS via SQL backups, I have a new option. The problem is that I can't specify the database and file names on "import" (restore).
I thought about writing a script that gets the database backup from RDS, restores it to a local SQL Server Express instance while specifying the database name and file names I want on the destination, backs it up again, and then imports/restores that to Amazon. This is an option, but it will take WAY longer than is probably practical.
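For what it's worth, the T-SQL half of that round trip is short (all names and paths hypothetical); the impractical part is moving the multi-gigabyte backup files around:

    -- 1. Restore the downloaded RDS backup locally under the new names.
    RESTORE DATABASE DevDB
    FROM DISK = N'C:\Temp\ProdDB.bak'
    WITH MOVE N'ProdDB' TO N'C:\Data\DevDB.mdf',
         MOVE N'ProdDB_log' TO N'C:\Data\DevDB_log.ldf';

    -- 2. Back it up again; the new .bak now embeds the renamed physical files.
    BACKUP DATABASE DevDB TO DISK = N'C:\Temp\DevDB.bak' WITH INIT;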
So... my final thought at this point, and my question: is there a reliable way to simply edit/patch the backup file to change the database and file names?
Even if you could afford SQL Clone, I'm not sure it would function on AWS, as I believe it requires Windows Hyper-V, which isn't supported on Windows Server VMs on AWS.
Windocks has also just released support for SQL Server cloning, but they also use a Hyper-V based approach... so if you have options outside of AWS, I believe their solution fits your budget... but again, not on AWS.
Disclosure: I am the co-founder of Windocks.

Trouble with Copy Database Wizard between two SQL 2008R2 Servers

I am trying to use Copy Database Wizard to copy from my live server (shared hosting) to my local machine. Both the live and local servers are SQL 2008 R2.
I have used CDW for several years with perfect success when copying from a live SQL 2000 server to my local 2008 R2. But now that I have migrated my live database to SQL 2008 R2 the CDW is giving me this error:
Could not read metadata, possibly due to insufficient access rights.
I've learned that this error can be predicted before you even complete the CDW setup: on the page where the CDW asks for your desired destination database name, it is SUPPOSED to populate the list of .mdf and .ldf files with their names-to-be and sizes (in MB or GB).
But in my case these file names and sizes are not shown (the area is simply blank in the wizard), and then of course when I attempt to execute the package it gives me the error above.
After much research I believe the reason for this error is the CDW requirement that "You must be a member of the sysadmin fixed server role on both the source and destination servers."
On my local server, my Windows Authentication login is listed as a member of the sysadmin server role. However, on my live server (keep in mind it is a shared SQL server with 250+ databases) the only member listed is [sa].
Am I right in thinking that the only way to satisfy this requirement would be to add my SQL login to the live/source server's sysadmin role (Server > Security > Server Roles > sysadmin)? I'm guessing that would never be done on a shared server, right? Or is there some other way to make it work by adjusting the specific database's properties/users/roles?
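Here is a quick check you can run on the live server before going through the wizard again; 'MyDb' is a placeholder, and the second query shows the kind of file metadata the wizard is presumably failing to read:

    -- Returns 1 if the current login is in the sysadmin fixed server role.
    SELECT IS_SRVROLEMEMBER('sysadmin') AS am_i_sysadmin;

    -- File names and sizes; on a shared server an ordinary login
    -- typically cannot see these rows for other people's databases.
    SELECT name, physical_name, size
    FROM sys.master_files
    WHERE database_id = DB_ID('MyDb');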
I can't explain why CDW works from the live SQL 2000 server but not from 2008 R2. I HOPE it is simply that something isn't set up right on the live database, but maybe it is due to changes made to SQL Server security over the years.
In case it matters, I must use the SMO method instead of detach/attach because it is a live database that I don't want to take down. Historically the CDW from SQL 2000 takes only 3 minutes with the SMO method, so speed isn't an issue anyway.
Here's my preference for a solution:
1. Find a way to get CDW to work, most likely by changing something on the live server. Is this possible? What would it be?
2. If that fails: use CDW to create the package, but then go into BIDS and manipulate something in the package to circumvent the sysadmin role requirement. (Does it really need the metadata? I don't need anything besides the actual data tables.) Is this possible?
UPDATE 6/14/2016: Editing a CDW package in BIDS won't work, as it appears to simply use the .mdf and .ldf files, which of course I don't have access to on the shared server. I think an alternative is to use the Import/Export Wizard to create a package, then edit it in BIDS. The annoying part is that without access to the metadata the Import/Export Wizard doesn't seem to be aware of foreign keys, and thus doesn't know what order to process the tables in.
3. If that fails too: is there any other way to easily automate a daily copy from my live server to my local machine? The reason I like CDW is that it is super simple to use (when it works), it can be scheduled to run daily as a SQL Agent job, and it requires no manual work on my part. Is there a "next best thing" if CDW can't be made to work?
You'd think that a very common scenario for all websites out there would be "how do I get a copy of my live database onto my local SQL Server, daily and automatically"? But maybe I'm the weird one!
Another simple solution would be the Import/Export Wizard.
In SSMS, right-click the database you want transferred and select Tasks > Export Data.... This opens a wizard very similar to the CDW. The difference is that I could not find a sysadmin requirement for using it.
At the end it gives you the option to run immediately and/or save the SSIS package. If you save the SSIS package (I prefer saving it to disk), you can then create a schedule for it via a SQL Agent job.
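Creating that job can itself be scripted; a minimal sketch, with the job name, schedule, and package path all hypothetical:

    -- A daily 2:00 AM SQL Agent job with one SSIS step.
    EXEC msdb.dbo.sp_add_job @job_name = N'Nightly live-to-local copy';
    EXEC msdb.dbo.sp_add_jobstep @job_name = N'Nightly live-to-local copy',
         @step_name = N'Run export package',
         @subsystem = N'SSIS',
         @command = N'/FILE "C:\Packages\ExportLiveDb.dtsx"';
    EXEC msdb.dbo.sp_add_jobschedule @job_name = N'Nightly live-to-local copy',
         @name = N'Daily 2am',
         @freq_type = 4,           -- daily
         @freq_interval = 1,       -- every 1 day
         @active_start_time = 020000;
    EXEC msdb.dbo.sp_add_jobserver @job_name = N'Nightly live-to-local copy';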

Tool to copy SQL Server 2008 db to SQL Server 2008 Express?

I have a typical dev scenario: a SQL Server 2008 database that I want to copy every so often to my local instance of SQL Server 2008 Express, so that I can do development, make changes, etc. on the local copy. I have some constraints, though: the source DB is part of a live e-commerce site on shared hosting, so I can't detach it, and the hosting service wants me to pay $5 for each ad hoc backup I invoke.
What I'd like is a tool I can invoke ad hoc to take a snapshot (complete, not incremental) of the live DB that I can then import into my local one. I've tried the SSMS 2008 Copy Database Wizard, but it gives me an error saying I can't do that with Express. I tried the Generate Scripts tool and thought it was going to work - the export to my local disk succeeded, but when I went to import it using SQLCMD (the script was 1 GB, so SSMS errored when I tried to open it there), it told me there was a syntax error a few thousand lines in.
Coming from the MySQL world, this process is trivial. All I want is an analog of mysqldump and then a command-line way to import that file into a DB. Surely there's an easy way to do this in the SQL Server world? This seems like the most basic use case for developers.
[ Yes, I've seen a few other questions here that seem similar but I didn't think they had the same constraints. ]
Best answer: full backup, restore, pay the $5. Anything else seems to me like it'd waste a lot more than $5 worth of time.
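If you go that route, restoring the purchased .bak onto local Express is only a couple of statements; a sketch with hypothetical names and paths:

    -- Look up the logical file names embedded in the backup first...
    RESTORE FILELISTONLY FROM DISK = N'C:\Temp\shop.bak';

    -- ...then restore, mapping those logical names to local paths.
    RESTORE DATABASE ShopDev
    FROM DISK = N'C:\Temp\shop.bak'
    WITH MOVE N'shop' TO N'C:\Data\ShopDev.mdf',
         MOVE N'shop_log' TO N'C:\Data\ShopDev_log.ldf';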
If they don't charge you to run queries against the database, these tools may help. Granted, they are not free, but they are handy on so many fronts that it would be worth buying one. They can diff your source and target DBs - data and structure, or just one or the other - and optionally sync the target database to match the source.
http://www.innovartis.co.uk/
http://www.red-gate.com/products/sql%5Fdata%5Fcompare/index.htm
Try SQL Dumper.
SQL Server Dumper enables you to dump selected SQL Server database tables into SQL INSERT statements that are saved as local .sql files and contain all the data required to create a duplicate table, or to be used for backup purposes. You can choose to create an individual .sql file for each table, or combine all selected tables into a single file.
SQL Server Database Publishing Wizard and osql usually do the trick for me with large databases.
