I have SQL Server 2012 hosted on a standalone machine. I want to migrate it to my existing AWS Redshift data warehouse.
My question is whether this is possible via AWS Database Migration Service (DMS)?
I am also open to other efficient methods for migration. Currently I am doing the following steps:
taking a backup of the SQL Server DB on the standalone server.
uploading it to AWS S3.
dropping and restoring the DB from S3 in AWS RDS (SQL Server).
I would like this data to be present in my data warehouse, i.e. AWS Redshift.
Thanks for the help in advance!
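For reference, the piece I'm missing is the last hop into Redshift. My understanding is that this is normally done by exporting tables to flat files in S3 and issuing a COPY; a rough sketch of that step, where the cluster endpoint, bucket, table, and IAM role names are all made up:

```python
# Rough sketch only: load a CSV that was exported to S3 into an existing
# Redshift table. Cluster endpoint, credentials, bucket and role are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="dw", user="rs_user", password="...")
conn.autocommit = True

with conn.cursor() as cur:
    cur.execute("""
        COPY staging.orders
        FROM 's3://my-bucket/exports/orders.csv'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
        CSV IGNOREHEADER 1;
    """)
```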
There are two types of migration within DMS:
"one off" data migration, where the data is copied using SQL statements
"continuous replication", where the "change data capture" system on the source is used to capture and process just the updates.
SQL Server can be used as a source for both of these types; however, there are caveats and limitations that should be read and understood thoroughly:
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SQLServer.html
So long as you follow the instructions and meet the limitations that are documented, it will work great.
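To make the two task types concrete, here is a minimal boto3 sketch of wiring a SQL Server source and a Redshift target into a DMS task. All identifiers, hosts, credentials and ARNs are placeholders, and it assumes a replication instance already exists:

```python
# Minimal sketch, not a drop-in script: all identifiers/ARNs/hosts are placeholders.
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

src = dms.create_endpoint(
    EndpointIdentifier="sqlserver-source",
    EndpointType="source",
    EngineName="sqlserver",
    ServerName="standalone-host.example.com",
    Port=1433,
    Username="dms_user",
    Password="...",
    DatabaseName="MyAppDb",
)
tgt = dms.create_endpoint(
    EndpointIdentifier="redshift-target",
    EndpointType="target",
    EngineName="redshift",
    ServerName="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    Port=5439,
    Username="rs_user",
    Password="...",
    DatabaseName="dw",
)

# Copy every table in dbo; "full-load" = one-off copy, "full-load-and-cdc" = keep replicating.
table_mappings = {
    "rules": [{
        "rule-type": "selection", "rule-id": "1", "rule-name": "all-dbo-tables",
        "object-locator": {"schema-name": "dbo", "table-name": "%"},
        "rule-action": "include",
    }]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="sqlserver-to-redshift",
    SourceEndpointArn=src["Endpoint"]["EndpointArn"],
    TargetEndpointArn=tgt["Endpoint"]["EndpointArn"],
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)
```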
I'm relatively new to Azure and am having trouble finding what options are out there for connecting to an existing SQL database to push data into it.
The situation is that we have an external client who needs to connect to our Azure SQL database to push data into it, on an on-going basis. We can't give them permission to get into our database, so we're looking at what we can do to allow data in. At this point the best option seems to be to create a web service deployed in Azure that will validate the data and then push it into our database.
The question I have is, are there other options to do this in an easier way? Are there Azure services or processes that can be set up to automatically process a file and pull the data into a database? Any other go-between options when each side has their own database and for security reasons can't just open up access to it?
Azure Data Factory works great for basic ETL. If neither party can grant direct access, you can use an intermediate repository like Blob Storage to drop CSV/XML/JSON files for ingestion. If they'll grant you access to pull, you can set up a linked service that more or less functions the same as a linked server in MSSQL. As of the latest release, ADF now supports Azure-hosted SSIS packages too.
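If you go the intermediate-Blob-Storage route, the external client's side can be as small as this; a hedged sketch using the azure-storage-blob SDK, with the connection string, container and file names invented for illustration:

```python
# Hedged sketch: the external client drops a validated file into an agreed
# Blob Storage container; ADF (or your web service) picks it up from there.
# Connection string, container and blob names are made up.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(
    "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;EndpointSuffix=core.windows.net")
blob = service.get_blob_client(container="incoming", blob="orders-2024-01-01.csv")

with open("orders.csv", "rb") as data:
    blob.upload_blob(data, overwrite=True)
```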
I would do this via SSIS using SQL Server Management Studio (if it's a one-time operation). If you plan to do this repeatedly, you could set the SSIS job to execute on a schedule. SSIS will do bulk inserts using small batches, so you shouldn't have transaction log issues and it should be efficient (because of the bulk inserting). Before you do this insert, though, you will probably want to consider your performance tier so you don't get major throttling by Azure and possible timeouts.
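SSIS handles the batching for you, but if it helps to see the idea, here is a rough Python sketch of the same batched-insert pattern; the table, columns and connection details are invented, and this is not the code SSIS generates:

```python
# Illustration only: insert in small committed batches so the Azure SQL
# transaction log stays small and throttling/timeouts are less likely.
import pyodbc

rows = [(1, 19.99, "2024-01-01"), (2, 5.00, "2024-01-02")]  # stand-in for the real feed

cn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=ClientData;UID=loader;PWD=...")
cur = cn.cursor()
cur.fast_executemany = True  # pyodbc's bulk-style parameter binding

BATCH = 1000
for i in range(0, len(rows), BATCH):
    cur.executemany(
        "INSERT INTO dbo.IncomingOrders (OrderId, Amount, PlacedAt) VALUES (?, ?, ?)",
        rows[i:i + BATCH])
    cn.commit()  # commit per batch rather than one huge transaction
```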
Goal: back up and restore a SQL Server database multiple times onto an Amazon RDS SQL Server instance with different database and file names.
So Amazon RDS added the ability to access SQL Server database backups and "import" and "export", yay! But you can't change the database name or the file names, boo!
For non-production databases, I want to put them on a single RDS instance, e.g. dev, test, integration, etc. since I don't need much performance and it would save a lot of money.
I have been trying to come up with a solution for cloning a database onto an Amazon RDS instance while specifying the database name. I don't want to (i.e., am not allowed to) spend $6000 on Red Gate SQL Clone. Trying to hack together a combination of scripting, bcp, import/export, etc. is likely going to take a lot of time.
With the introduction of import/export of a database in RDS via SQL backups, I have a new option. The problem is I can't specify database and file names on "import" (restore).
I thought about writing a script that gets the database backup from RDS, restores it to a local SQL Server Express instance specifying the database name and files that I'll want on the destination, then backup this, then import/restore to Amazon. This is an option but it will take WAY longer than is probably practical.
So... my final thought at this point and my question: is there a reliable way to simply edit/patch the backup file to change the database and file names?
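For reference, the long way around that I described would look roughly like this. It's only a sketch, and every name, path and ARN below is made up:

```python
# Sketch of the backup -> local rename-restore -> backup -> RDS restore loop.
# Assumes the RDS option group already has native backup/restore enabled, and
# that the .bak files are moved to/from S3 separately. Names are hypothetical.
import pyodbc

# 1) On RDS: native-backup the source database to S3.
rds = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myrds.xxxx.us-east-1.rds.amazonaws.com;"
    "DATABASE=msdb;UID=admin;PWD=...", autocommit=True)
rds.execute("EXEC msdb.dbo.rds_backup_database "
            "@source_db_name='AppDb', "
            "@s3_arn_to_backup_to='arn:aws:s3:::my-bucket/AppDb.bak';")

# 2) Locally (SQL Server Express): restore under the new database/file names.
local = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=.\\SQLEXPRESS;"
    "DATABASE=master;Trusted_Connection=yes;", autocommit=True)
local.execute("""
RESTORE DATABASE AppDb_dev
FROM DISK = N'C:\\backups\\AppDb.bak'
WITH MOVE N'AppDb'     TO N'C:\\data\\AppDb_dev.mdf',   -- logical file names assumed
     MOVE N'AppDb_log' TO N'C:\\data\\AppDb_dev_log.ldf',
     REPLACE;""")
local.execute("BACKUP DATABASE AppDb_dev TO DISK = N'C:\\backups\\AppDb_dev.bak';")

# 3) Upload AppDb_dev.bak to S3, then on RDS restore it under the new name.
rds.execute("EXEC msdb.dbo.rds_restore_database "
            "@restore_db_name='AppDb_dev', "
            "@s3_arn_to_restore_from='arn:aws:s3:::my-bucket/AppDb_dev.bak';")
```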
Even if you could afford SQL Clone, I'm not sure it would function on AWS, as I believe it requires Windows Hyper-V, which isn't supported on Windows Server VMs on AWS.
Windocks has also just released support for SQL Server cloning, but they also use a Hyper-V-based approach... so if you have options outside of AWS, I believe their solution fits your budget... but again, not on AWS.
Disclosure: I am the Co-Founder of WinDocks
I need to migrate a SQL database from Sybase to MS SQL Server. Before doing the actual migration on the production server, I first created an SSIS package with SQL Server Management Studio's Import/Export Wizard for testing with other databases. The test migration was successful, and I would now like to deploy my SSIS package to the real servers.
However, it seems I cannot simply run the package in Management Studio choosing different data sources for it - it only runs on the same databases for which it was created. It can be edited in something called SQL Server Business Intelligence Development Studio (or BIDS for short; I am using the SQL Server 2008 version), but going through every data flow task and changing the destination manually for each of the ~150 tables I am moving is inefficient and also introduces a possibility for error.
Is there a way to quickly change which data source is to be used for ALL destinations in ALL the data flow tasks of an SSIS package? If not, what simple method is there for testing the migration with test databases first and simply changing the data sources when deploying?
I am using ODBC data sources, but for some of them the package shows OLE DB sources in BIDS instead.
I hope I was clear enough. If you have additional questions, please ask! Thank you!
I would use a variable for the ConnectionString property of the connection manager. A package-level configuration can be very useful for accomplishing this task. There are several ways to do this; I prefer to use a table in SQL Server that holds all the configurations for all packages. This can be especially effective if you have multiple packages and need to dynamically change a set of connection managers across those packages.
The basic steps are:
Right-click on your SSIS design surface and select "Package Configurations..."
Create a package level configuration of Configuration Type "SQL Server"
Store your connection string in a Configuration table in SQL Server (see the sketch after these steps)
Alter your Connection Manager to use a variable for the ConnectionString Property
Populate that variable from the Configuration table via your package level configuration
When it comes time to switch from Test to Production, simply update the connection string in your configuration table
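For step 3, the table the package-configuration wizard generates (when you pick the "SQL Server" configuration type) looks like the following. This is a hedged sketch run from Python, with the server, database, package and variable names invented:

```python
# Hedged sketch: the configuration table layout the SSIS wizard generates for
# the "SQL Server" configuration type, plus one row pointing the package's
# connection-string variable at the production server. All names are examples.
import pyodbc

cn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=CONFIGSERVER;"
    "DATABASE=ETLConfig;Trusted_Connection=yes;", autocommit=True)

cn.execute("""
IF OBJECT_ID('dbo.[SSIS Configurations]') IS NULL
    CREATE TABLE dbo.[SSIS Configurations] (
        ConfigurationFilter NVARCHAR(255) NOT NULL,
        ConfiguredValue     NVARCHAR(255) NULL,
        PackagePath         NVARCHAR(255) NOT NULL,
        ConfiguredValueType NVARCHAR(20)  NOT NULL);""")

cn.execute("""
INSERT INTO dbo.[SSIS Configurations]
    (ConfigurationFilter, ConfiguredValue, PackagePath, ConfiguredValueType)
VALUES
    ('MigrationPackage',
     'Data Source=PRODSERVER;Initial Catalog=TargetDb;Provider=SQLNCLI10.1;Integrated Security=SSPI;',
     '\\Package.Variables[User::DestConnectionString].Properties[Value]',
     'String');""")
```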
This is part of a larger package management framework that I implemented using this book:
Microsoft SQL Server 2008 Integration Services: Problem, Design, Solution
I highly recommend it. It should take less than a day to set up, and the book has step-by-step instructions.
This question and its associated answers are also helpful.
I am required to develop new software that must be built on SharePoint and use Microsoft SQL Server 2012.
I have a DB in Postgres, and some of its tables will be used in this new project, so I must import these tables every day. I'd like to use a web service to do it, but they want it done DB-to-DB directly.
Postgres-to-Postgres is already done and it "works", but importing to Microsoft is being troublesome.
Does anybody know some MSSQL tool that can connect to Postgres and do the import?
Typically for this sort of situation (assuming it'll be a process that's repeated on a regular basis) you'd use SSIS, which comes with most editions of MS SQL Server. Have a look at the first several hits on this Google search, especially the first one.
Another option is to connect to the Postgres database directly from your application via ODBC, and eliminate the redundant copy of the data - and get real-time updates instead of having to wait for the next execution of the SSIS job.
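If SSIS isn't an option in your edition, a scheduled script can do the same DB-to-DB pull. A rough Python sketch follows; the hosts, credentials, table and column names are all placeholders, and the destination table is assumed to already exist:

```python
# Rough sketch of a scripted daily pull from Postgres into SQL Server.
# Hosts, credentials, table and column names are placeholders.
import psycopg2
import pyodbc

pg = psycopg2.connect(host="pg-host", dbname="sourcedb", user="reader", password="...")
ms = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=mssql-host;"
    "DATABASE=SharePointData;UID=loader;PWD=...")
ms_cur = ms.cursor()
ms_cur.fast_executemany = True

ms_cur.execute("TRUNCATE TABLE dbo.customers")  # full refresh each day

with pg.cursor(name="daily_pull") as src:       # named cursor streams rows
    src.execute("SELECT id, name, updated_at FROM public.customers")
    while True:
        rows = src.fetchmany(5000)
        if not rows:
            break
        ms_cur.executemany(
            "INSERT INTO dbo.customers (id, name, updated_at) VALUES (?, ?, ?)",
            rows)

ms.commit()
```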
I'm using SQL Azure on a project and it works great. The problem is that the usual backup features do not exist. I have exported the database a couple of times using SQLAzureMW (http://sqlazuremw.codeplex.com/), but this tool is now choking trying to download the database data with bcp. In any case, it's not as nice a solution as SQL Server backups.
Is anyone aware of a commercial or open source tool, or other technique, for making reliable backups of SQL Azure databases? This is really a showstopper.
Starting with update 4, SQL Azure now supports database copies. You can make a copy of your database, kept in Azure, and use that to retrieve data in the event of an accidental deletion or schema bugaboo:
http://msdn.microsoft.com/en-us/library/ff951624.aspx
It's still up to you to get that database off Azure and onto your own local SQL Server, though, but at least you've got a mechanism for making a point-in-time copy.
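If it helps, the copy itself is a single T-SQL statement run against the master database. A minimal sketch from Python, with the server and database names invented:

```python
# Minimal sketch: kick off a SQL Azure database copy. Server and database
# names are made up. CREATE DATABASE can't run inside a transaction, hence
# autocommit; the copy itself runs asynchronously on the service.
import pyodbc

master = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=master;"
    "UID=admin_user;PWD=...", autocommit=True)

master.execute("CREATE DATABASE MyDb_copy AS COPY OF myserver.MyDb;")
# Progress can be checked in sys.dm_database_copies while the copy is running.
```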
Microsoft takes care of the backups for you. There is no reason to back up SQL Azure databases yourself.
Yes, we had the same problem and couldn't find any good/simple solutions, so we cobbled together a solution using Red Gate: http://mooneyblog.mmdbsolutions.com/index.php/2011/01/11/simple-database-backups-with-sql-azure
SQL Azure will support PIT (point-in-time) backup/restore (mainly restore) later this year (2011), with a CTP in the summer. There is some (limited) preliminary info here.