I need to migrate .SQB files to Snowflake.
I have a data relay where SQL Server database backups are saved in .SQB format (Redgate) and made available over SFTP, with full backups every week and hourly backups in between.
Our data warehouse is Snowflake, which already holds the rest of our data from other sources. I'm looking for the simplest, most cost-effective solution to get this data into Snowflake.
My current ETL process is as follows (a sketch of the first two steps follows the list):
1. An AWS EC2 instance (Windows) downloads the files and applies Redgate's SQL Backup file converter (https://documentation.red-gate.com/sbu7/tools-and-utilities/sql-backup-file-converter) to convert them to .BAK. This tool requires a license.
2. Restore the SQL Server database on the same EC2 instance.
3. Migrate the SQL Server database to Snowflake via Fivetran.
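For concreteness, here is a rough PowerShell sketch of steps 1 and 2 on the EC2 instance. The SQBConverter argument order is an assumption (check the Redgate link above), the paths and database/file names are placeholders, and Invoke-Sqlcmd comes from the SqlServer module:

# Convert the downloaded .SQB file to a native .BAK (assumed usage: SQBConverter <in> <out> [password]).
& "C:\Program Files\Red Gate\SQL Backup 7\SQBConverter.exe" "D:\incoming\vendor_full.sqb" "D:\incoming\vendor_full.bak"

# Restore the converted backup onto the local SQL Server instance (names and paths are placeholders).
Invoke-Sqlcmd -ServerInstance "localhost" -Query @"
RESTORE DATABASE VendorDb
FROM DISK = N'D:\incoming\vendor_full.bak'
WITH MOVE N'VendorDb'     TO N'D:\Data\VendorDb.mdf',
     MOVE N'VendorDb_log' TO N'D:\Data\VendorDb_log.ldf',
     REPLACE, STATS = 10;
"@

Fivetran then syncs the restored database to Snowflake as in step 3.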
Is there a simpler / better solution? I'd love to eliminate the need for the intermediate EC2 if possible.
The .SQB files come from an external vendor and there is no way to have them change the file format or delivery method.
This isn't a full solution to your problem, but it might help to know that you're okay to use the SQL Backup file converter wherever you need to, free of any licensing restrictions. This is true for all of SQL Backup's desktop and command-line tools. Licensing only gets involved when dealing with the Server Components, but once a .SQB file has been created you're free to use SQBConverter.exe to convert it to a .BAK file wherever you need to.
My advice would be to either install SQL Backup on whichever machine you want to use the tooling on, or just copy all the files from an existing installation. Both should work fine, so pick whichever is easiest for you.
(FYI: I'm a current Redgate software engineer and I used to work on SQL Backup until fairly recently.)
You can do it in three steps:
Step 1: Export the data from SQL Server as CSV files using SQL Server Management Studio.
Step 2: Upload the CSV files to an Amazon S3 bucket.
Step 3: Load the data into Snowflake from S3 using the COPY INTO command.
You can use your own AWS S3 bucket for this and create an external stage pointing to it, or you can upload the files into an internal Snowflake stage (a sketch of the external-stage route follows the links below).
Copy Into from External Stage -
https://docs.snowflake.com/en/sql-reference/sql/copy-into-table.html#loading-files-from-a-named-external-stage
Copy Into from an Internal Stage -
https://docs.snowflake.com/en/sql-reference/sql/copy-into-table.html#loading-files-from-an-internal-stage
Creating External Stage-
https://docs.snowflake.com/en/sql-reference/sql/create-stage.html
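To make the external-stage route concrete, here is a hedged PowerShell sketch that runs the stage creation and load through the snowsql CLI; the stage, table, bucket, and credential values are placeholders, so check the syntax against the Snowflake docs linked above:

# Snowflake SQL for the stage and the load; all object names and credentials are placeholders.
$sql = @"
CREATE STAGE IF NOT EXISTS my_csv_stage
  URL = 's3://my-bucket/exports/'
  CREDENTIALS = (AWS_KEY_ID = 'xxxx' AWS_SECRET_KEY = 'yyyy')
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);

COPY INTO my_db.my_schema.my_table
  FROM @my_csv_stage
  PATTERN = '.*\.csv';
"@

# Write the script to a file and run it with snowsql (-a account, -u user, -f file); auth details omitted.
$sql | Set-Content -Path "load_from_s3.sql"
snowsql -a my_account -u my_user -f "load_from_s3.sql"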
Hi guys, I know it is a general question, so let me give you my scenario:
I have a client who sends me a bunch of Excel files, and I use my on-premises SSIS package to export them to a database located on Azure. My SSIS package calls stored procedures on the Azure SQL server to manipulate the data.
I want to move the whole process to the cloud, and I want to know the best way to achieve it. I was thinking I could use blob storage in a container, provide my client a cloud folder located on Azure, and let them drop the files there. Then a service such as Data Factory could detect those files and run my SSIS package that is deployed on Azure "somehow".
Any ideas or sample code would be great.
Thanks!
You can try the manual approach below:
1. Copy all CSV files to ADLS (Azure Data Lake Storage). (For automation, you can use the Copy activity together with ForEach and Lookup activities.)
2. For any transformation of the data, use U-SQL jobs (ADLA), whose output is also stored in ADLS. You can save the U-SQL scripts in blob storage (for ADF automation).
3. To transfer the data from the ADLS files to the Azure SQL database, use the Copy activity of Azure Data Factory, with SQL as the sink and CSV as the source format.
I am looking for the most efficient and fastest way to transfer a huge amount of data from a SQL Server located in Europe to a SQL Server located in the USA.
The traditional approaches are taking too long:
Linked Server
SQL bulk copy or BCP
SQL database replication
SQL Import Wizard
Cloud is an option, but it comes with data privacy issues. I am not looking for an offline copy using backup and restore, or a transfer via hard disk.
Can anyone suggest the best way to overcome this issue?
You can ask the company in Europe to back everything up to a hard drive and ship it securely. My workplace does it this way, shipping Oracle DB copies from LA.
Alternative 1: using a compressed full backup file of the database
Take a full backup of the database.
Compress the backup file and split it into smaller chunks of 500 MB (or less) using a zip tool.
Note: you can back up to multiple files with compression (which saves about 60%) in one or more locations using SSMS or a T-SQL script, and multiple threads will be used. This makes the backup faster and removes the need for a zip tool (see the sketch after this list).
Host the files on an FTP server or HTTP upload server.
Copy the files from the source over HTTP/FTP.
On the target server, uncompress the files and reassemble the backup file.
Restore the database.
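As a sketch of the multi-file compressed backup and restore mentioned in the note above (server names, paths, and the database name are placeholders; Invoke-Sqlcmd is from the SqlServer module):

# Back up to three compressed files in one go; no separate zip tool needed.
Invoke-Sqlcmd -ServerInstance "EU-SQL01" -Query @"
BACKUP DATABASE SalesDb
TO DISK = N'D:\xfer\SalesDb_1.bak',
   DISK = N'D:\xfer\SalesDb_2.bak',
   DISK = N'D:\xfer\SalesDb_3.bak'
WITH COMPRESSION, STATS = 10;
"@

# ...ship the three files over HTTP/FTP, then on the US server:
Invoke-Sqlcmd -ServerInstance "US-SQL01" -Query @"
RESTORE DATABASE SalesDb
FROM DISK = N'D:\xfer\SalesDb_1.bak',
     DISK = N'D:\xfer\SalesDb_2.bak',
     DISK = N'D:\xfer\SalesDb_3.bak'
WITH RECOVERY, STATS = 10;
"@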
Update:
Alternative 2: using compressed bcp files
Use SQL bulk copy or BCP to export the data as native-format files.
Compress the files using zip.
Host the files on an FTP server or HTTP upload server.
Copy the files from the source over HTTP/FTP.
On the target server, uncompress the files.
BCP in from the data files.
Notes:
You can use batch files or PowerShell scripts to automate these tasks (a bcp sketch follows below).
Network speed is controlled by your service provider; contact your ISP to get the maximum speed available.
We avoided an online interaction between the source and target SQL Servers to avoid network timeouts.
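A rough PowerShell sketch of Alternative 2; the instance names, the table, and the paths are placeholders:

# BCP out as native format (-n), compress, and stage for transfer.
bcp "SalesDb.dbo.Orders" out "D:\xfer\Orders.dat" -S "EU-SQL01" -T -n
Compress-Archive -Path "D:\xfer\Orders.dat" -DestinationPath "D:\xfer\Orders.zip"

# ...copy Orders.zip to the US server over HTTP/FTP, then on the target:
Expand-Archive -Path "D:\xfer\Orders.zip" -DestinationPath "D:\xfer"
bcp "SalesDb.dbo.Orders" in "D:\xfer\Orders.dat" -S "US-SQL01" -T -n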
Goal: back up and restore a SQL Server database multiple times onto an Amazon RDS SQL Server instance, with different database and file names each time.
So Amazon RDS added the ability to access SQL Server database backups and "import" and "export", yay! But you can't change the database name or the file names, boo!
For non-production databases, I want to put them on a single RDS instance, e.g. dev, test, integration, etc. since I don't need much performance and it would save a lot of money.
I have been trying to come up with a solution for cloning a database onto an Amazon RDS instance while specifying the database name. I don't want to (i.e., am not allowed to) spend $6,000 on Red Gate SQL Clone. Trying to hack together a combination of scripting, bcp, import/export, etc. is likely going to take a lot of time.
With the introduction of importing/exporting a database in RDS via SQL backups, I have a new option. The problem is I can't specify the database and file names on "import" (restore).
I thought about writing a script that gets the database backup from RDS, restores it to a local SQL Server Express instance with the database name and file names that I'll want on the destination, backs that up, and then imports/restores it to Amazon. This is an option, but it will take WAY longer than is probably practical.
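In T-SQL terms, that local rename step is just a RESTORE ... WITH MOVE; a minimal PowerShell sketch (the database name, logical file names, and paths are placeholders, and Invoke-Sqlcmd is from the SqlServer module):

# Restore the RDS backup locally under a new database name and new file names.
Invoke-Sqlcmd -ServerInstance ".\SQLEXPRESS" -Query @"
RESTORE DATABASE AppDb_Dev
FROM DISK = N'C:\Temp\AppDb.bak'
WITH MOVE N'AppDb'     TO N'C:\Data\AppDb_Dev.mdf',
     MOVE N'AppDb_log' TO N'C:\Data\AppDb_Dev_log.ldf',
     RECOVERY, STATS = 10;
"@
# ...then BACKUP DATABASE AppDb_Dev to a new .bak and import that into RDS.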
So... my final thought at this point and my question: is there a reliable way to simply edit/patch the backup file to change the database and file names?
Even if you could afford SQL Clone, I'm not sure it would function on AWS, as I believe it requires Windows Hyper-V, which isn't supported on Windows Server VMs on AWS.
Windocks has also just released support for SQL Server cloning, but they also use a Hyper-V based approach, so if you have options outside of AWS I believe their solution fits your budget, but again, not on AWS.
Disclosure: I am the Co-Founder of WinDocks
I have been attempting to move from a regular SQL Server on a Win2008 Server to the SQL Server on Amazon AWS RDS.
I thought a simple backup and restore would work. However, AWS RDS doesn't seem to provide access to a file system, so the SQL scripts all seem to need a local file system on both the source and destination server. I attempted a script along the following lines:
exec sp_addlinkedserver @server='test.xxxx.us-east-1.rds.amazonaws.com'
-- Verify that the servers were linked (lists linked servers)
exec sp_linkedservers
EXEC ('RESTORE DATABASE [orchard] FROM DISK = ''C:\Temp\orchard.bak'' WITH FILE = 1, NOUNLOAD, STATS = 10')
AT [test.xxxx.us-east-1.rds.amazonaws.com]
Any suggestions would be helpful.
Download the free 'SQL Azure Migration Wizard' from CodePlex -- I did a short blog/screencast about this. Be sure to set the 'TO' setting in the wizard to the AWS DNS name, and then use 'SQL Server 2008' and not 'SQL Azure'.
The official word I got from AWS support on migrating SQL databases using .bak files is that it is not supported, so no more quick restores from .bak files. They offered the official help for migrating existing databases here:
Official AWS database migration guide
And they also gave me an unofficial wink at the Azure database migration tool. Just use it to generate a script of your schema and/or data and execute it against your RDS instance. It's a good tool. You will have to import the .bak into a non-RDS SQL Server first to do this.
SQL Azure migration tool
You will probably find that the Data-tier Applications BACPAC format will provide you with the most convenient solution. You can use Export to produce a file that contains both the database schema and data. Import will create a new database that is populated with data based on that file.
In contrast to the Backup and Restore operations, Export and Import do not require access to the database server's file system.
You can work with BACPAC files using SQL Server Management Studio or via the API in .NET, PowerShell, MSBuild, etc.
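For example, the export/import round trip can be scripted with SqlPackage.exe from PowerShell; the server, database, and file names below are placeholders, and authentication arguments are omitted:

# Export schema and data from the source database into a BACPAC file.
SqlPackage.exe /Action:Export /SourceServerName:"source-server" /SourceDatabaseName:"AppDb" /TargetFile:"C:\Temp\AppDb.bacpac"

# Import the BACPAC into the RDS instance, creating a new database.
SqlPackage.exe /Action:Import /SourceFile:"C:\Temp\AppDb.bacpac" /TargetServerName:"myinstance.xxxx.us-east-1.rds.amazonaws.com" /TargetDatabaseName:"AppDb"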
Note that there are issues using this method to Export and then Import from and to Amazon RDS. As a new database is created on RDS, the following two objects are created within it.
A User with membership in the db_owner role.
The rds_deny_backups_trigger Trigger
During the import there will be a conflict, because these objects are both present in the BACPAC file and automatically created by RDS as the new database is created.
If you have a non-RDS instance of SQL Server handy, then you can Import the BACPAC to that instance, drop the objects above and then export the database to create a new BACPAC file. This one will not have any conflicts when you restore it to an RDS instance.
Otherwise, it is possible to work around this issue using the following steps (a PowerShell sketch of the checksum step follows the list).
Edit the model.xml file within the BACPAC file (BACPACs are just zip files).
Remove elements with the following values in their Type attributes that are related to the objects listed above (those that are automatically added by RDS).
SqlRoleMembership
SqlPermissionStatement
SqlLogin
SqlUser
SqlDatabaseDdlTrigger
Generate a checksum for the modified version of the model.xml file using one of the ComputeHash methods on the SHA256 class.
Use the BitConverter.ToString() method to convert the hash to a hexadecimal string (you will need to remove the separators).
Replace the existing hash in the Checksum element in the origin.xml file (also contained within the BACPAC file) with the new one.
Create a new BACPAC file by zipping the contents of the original with both the model.xml and origin.xml files replaced with the new versions. Do NOT use System.IO.Compression.ZipFile for this purpose as there seems to be some conflict with the zip file that is produced - the data is not included in the import. I used 7Zip without any problems.
Import the new BACPAC file and you should not have any conflicts with the objects that are automatically generated by RDS.
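Here is a small PowerShell sketch of the checksum portion of those steps (the extracted-BACPAC path is a placeholder); it mirrors the SHA256/BitConverter approach described above:

# Recompute the SHA-256 hash of the edited model.xml.
$bytes = [System.IO.File]::ReadAllBytes("C:\Temp\bacpac\model.xml")
$hash  = [System.Security.Cryptography.SHA256]::Create().ComputeHash($bytes)

# BitConverter inserts '-' separators; strip them to get the bare hex string
# that goes into the <Checksum> element of origin.xml.
$checksum = [System.BitConverter]::ToString($hash) -replace '-', ''
Write-Output $checksum

Then re-zip the contents with 7-Zip (not System.IO.Compression.ZipFile, per the note above) and import as usual.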
Note: there is another, related problem with importing a BACPAC to RDS using SQL Server Management Studio, which I explain here.
I wrote up some step-by-step instructions on how to restore a .bak file to RDS using the SQL Azure Migration Tool based on Lynn's screencast. This is a much simpler method than the official instructions, and it worked well for several databases I migrated.
Use the Export wizard in SQL Server Management Studio on your source database. Right-click on the database > Tasks > Export Data. There is a wizard that walks you through sending the whole database to a remote SQL Server.
There is a tool designed by AWS that will answer most, if not all, of your compatibility questions - the Schema Conversion Tool for SQL Server: https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.SQLServer.html
Because not all SQL Server database objects are supported by RDS, and support even varies across SQL Server versions, the assessment report will be well worth your time as well:
https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_AssessmentReport.html
Lastly, definitely leverage Database Migration Service:
https://aws.amazon.com/dms/
The following article, discussing how to Copy Database With Data – Generate T-SQL For Inserting Data From One Table to Another Table, is what I needed:
http://blog.sqlauthority.com/2009/07/29/sql-server-2008-copy-database-with-data-generate-t-sql-for-inserting-data-from-one-table-to-another-table/
I have an Oracle DB on Windows Server 2003. How do I export it with all the data and put it onto another Windows server?
Use RMAN to take a full backup. Then restore it on the new server.
See Clone using RMAN Article
You can use Oracle Data Pump to export and import the database. A quote from the documentation:
Oracle Data Pump is a feature of Oracle Database 11g Release 2 that enables very fast bulk data and metadata movement between Oracle databases.
The procedure is like this (a sketch follows the link below):
Export the existing database using the expdp utility.
Install the Oracle database server on the new Windows server.
Import the database on the new server using the impdp utility.
Check this link: Oracle Data Pump. There you will find the complete documentation and examples of how to use this utility.
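A minimal sketch of that expdp/impdp round trip run from PowerShell; the user, password, connect strings, and dump file name are placeholders, and DATA_PUMP_DIR must map to a real directory object on each server:

# On the old server: export the full database.
expdp system/password@OLDDB full=y directory=DATA_PUMP_DIR dumpfile=olddb.dmp logfile=expdp_olddb.log

# Copy olddb.dmp to the new server's DATA_PUMP_DIR path, then on the new server:
impdp system/password@NEWDB full=y directory=DATA_PUMP_DIR dumpfile=olddb.dmp logfile=impdp_olddb.log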
If you want to create an exact copy of an existing database on a new server with the same operating system (though not necessarily the same O/S version) and the same Oracle version, the quickest and least problematic method is to just copy the database files. This is often referred to as database cloning, and it is a common method DBAs use to set up development and test databases that are intended to be exact duplicates of production databases.
Stop all instances of the database on the existing system. You could log in to each instance "as sysdba" using SQL*Plus and run the "shutdown immediate" command. You could also stop the Windows services for the instances; they are named OracleServiceSID, where "SID" is the instance name. Usually there is just one instance, but there could be multiple instances of a single database. All instances must be stopped for this procedure.
Locate the database files. Look for an "oradata" folder somewhere below the Oracle root folder and then find the folder for the database sid in there. (There could be multiple oradata folders. You need to find the one that has the folder named for the SID of your database.) There are also the files in the Admin folder for the sid as well as the %ORACLE_HOME%/database folder. If DBCA had been used to create the database, then the location of all of these files varies by the Oracle version.
Once you have identified all of the files for the database, you can use any method at your disposal to copy these files to the same locations on the new server. (Note: The database files, control files, and redo logs must be placed in the same locations (i.e., file system paths) where they exist on the old server. Otherwise, configuration files must be changed and commands must be run to alter the database's internal file paths.) The parameter file (initSID.ora) and server parameter file (spfileSID.ora) must be placed in the %ORACLE_HOME%/database folder.
On the new server, you must run the oradim utility. (Note: oradim is an Oracle utility that is specific to Windows and is used to create, maintain, and delete instance services.) Here is a sample command:
oradim -new -sid yourdbsid -startmode automatic
Start up the database with SQL*Plus, and you should be in business.
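A minimal sketch of that last step from PowerShell, assuming the Oracle tools are on the PATH and your Windows account is in the local ORA_DBA group (the SID is a placeholder):

# Point the session at the instance created with oradim above, then open the database.
$env:ORACLE_SID = "yourdbsid"
sqlplus / as sysdba    # at the SQL> prompt, run: STARTUP;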
This is a general overview of the process, but it should help you get the job done quickly and easily. The problem with other tools is the need to create an empty database on the target server before loading the data by whatever means. If the target server has a different version of Oracle, it will be necessary to run data dictionary scripts to upgrade or downgrade the database. (Note: A downgrade may not always be possible.) If the new server has a different O/S, then the above procedure would require additional steps that would significantly increase its complexity.
It is also possible to duplicate a database using RMAN. Google the words "clone oracle database using rman" to find some good sites on how this is done using that tool. If you are not already using RMAN, the procedure I have described above would probably be the way to go.