I have a 5TB Postgres database in a different account and a different region which I want to migrate to our account and region. The source is PostgreSQL 11.5 on EC2 (standard Ubuntu Linux). The destination would be Aurora PostgreSQL (Serverless) in our account and region, so this is a cross-account and cross-region transfer and import.
In the source account (us-west-2), create an S3 bucket with cross-region replication and an appropriate replication role.
From the source account, run pg_dump and place the dump in the S3 bucket above. Due to cross-region replication, the data will be copied over to the replica bucket in us-east-2.
In the source account, provision an IAM user for the destination account to access the source account's S3 replica bucket.
In the destination account, we'll create Aurora PostgreSQL and import the data without having to bear the cross-region data transfer cost.
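Roughly what I have in mind for the replication setup is the sketch below; the bucket names, role ARN, and account ID are placeholders, and both buckets would need versioning enabled for replication to work:

    # enable versioning (run against each bucket in its own region)
    aws s3api put-bucket-versioning --bucket source-dump-bucket \
        --versioning-configuration Status=Enabled

    # replication.json would look roughly like this (role and bucket ARNs are placeholders):
    # {
    #   "Role": "arn:aws:iam::111111111111:role/s3-crr-role",
    #   "Rules": [
    #     { "ID": "crr-dumps", "Status": "Enabled", "Prefix": "",
    #       "Destination": { "Bucket": "arn:aws:s3:::replica-dump-bucket" } }
    #   ]
    # }
    aws s3api put-bucket-replication --bucket source-dump-bucket \
        --replication-configuration file://replication.json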
I need help with the following.
Does S3 cross-region replication eliminate the cross-region data transfer cost for the pg_dump migration, since S3 replication is free and the import in the destination will happen from S3 in the same region?
Does Aurora PostgreSQL (Serverless) support pg_restore/import from the AWS CLI?
Thanks in advance folks!
If you want to use S3 to limit costs, doing a pg_dump to S3 is reasonable.
To migrate directly, I would recommend using pgloader.
You can use it to migrate from one database to the other:
pgloader postgresql:///source postgresql:///destination
If you want to use pg_restore, it should work; otherwise psql -f dump.sql also works.
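A minimal sketch of the S3 route with a custom-format dump; the host names and bucket names below are placeholders, and for a multi-TB database you may need to help the CLI size the streaming upload:

    # dump on the source side straight into the source-region bucket
    # (for a multi-TB stream, add --expected-size <bytes> so multipart parts are sized correctly)
    pg_dump -h source-host -U postgres -d mydb -Fc \
        | aws s3 cp - s3://source-dump-bucket/mydb.dump

    # restore on the destination side, reading from the replica bucket in the same region
    aws s3 cp s3://replica-dump-bucket/mydb.dump - \
        | pg_restore -h my-aurora-endpoint -U postgres -d mydb --no-owner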
Azure SQL Database has two layers: one for compute and the other for storage.
The storage layer stores the MDF/LDF files in a storage account. Can we see the storage account that Azure SQL Database uses to store these files, and also the .BAK files that are generated as part of the 7-day point-in-time backup (without needing any storage account configuration)?
Reference: https://learn.microsoft.com/en-in/azure/azure-sql/database/high-availability-sla#basic-standard-and-general-purpose-service-tier-locally-redundant-availability
Information about automated backups and the storage used by Azure SQL Database is not available to customers, as it is managed by Azure behind the scenes; that is part of the benefit of using platform as a service.
For example, when you go to restore a database in the portal, you can see the options you have, and you can only select which point in time you want to restore to. This is part of the idea of getting a platform as a service (PaaS)... it is managed for you. You only need to choose the point in time, and the Azure service maps it to the internal file(s) behind the scenes. As those files are managed by Azure, you cannot see them or their attributes; that type of information is available on IaaS, not on PaaS or SaaS.
If you want more control over backup files on Azure SQL, choose to back up databases to storage accounts using the export feature, the SqlPackage utility, or Azure Automation.
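As a rough sketch of the export route (the server, credentials, and storage names below are placeholders, not anything Azure gives you by default):

    # export the database to a .bacpac file you control
    sqlpackage /Action:Export \
        /SourceServerName:myserver.database.windows.net \
        /SourceDatabaseName:MyDb \
        /SourceUser:sqladmin /SourcePassword:'***' \
        /TargetFile:MyDb.bacpac

    # upload the .bacpac to your own storage account
    az storage blob upload --account-name mystorageacct \
        --container-name backups --name MyDb.bacpac --file MyDb.bacpac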
I want to migrate an RDS database that is located in AWS account "A" to AWS account "B", and I want to have my own copy of the RDS database in account "B".
I want to know the easiest way to do this so that I end up with an independent copy in account "B".
The best way to do this is going to be to take a manual snapshot in the account currently hosting the RDS database.
Once the snapshot is complete you can then share this with the other account.
The other account can then launch a new RDS instance from this snapshot. Once this instance has been launched, the snapshot can be removed. Be aware that if the accounts are in different regions, you will also need to copy the snapshot to the destination region before sharing it with the other account.
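If you prefer the CLI over the console, the flow is roughly the following sketch; the identifiers, region, and account IDs are placeholders:

    # account A: take a manual snapshot and share it with account B
    aws rds create-db-snapshot --db-instance-identifier mydb \
        --db-snapshot-identifier mydb-manual-snap
    aws rds modify-db-snapshot-attribute --db-snapshot-identifier mydb-manual-snap \
        --attribute-name restore --values-to-add 222222222222

    # account B: copy the shared snapshot (also how you move it between regions), then restore
    aws rds copy-db-snapshot \
        --source-db-snapshot-identifier arn:aws:rds:us-west-2:111111111111:snapshot:mydb-manual-snap \
        --target-db-snapshot-identifier mydb-copy
    aws rds restore-db-instance-from-db-snapshot \
        --db-instance-identifier mydb-account-b --db-snapshot-identifier mydb-copy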
Some links that may be of use to you:
How do I share manual Amazon RDS DB snapshots or Aurora DB cluster snapshots with another AWS account?
Restore a DB Instance from a DB Snapshot
For staging in Snowflake, we need an AWS S3 bucket, Azure storage, or a local machine. Instead of this, can we FTP a file from a source team directly to Snowflake internal storage, so that Snowpipe can pick up the file from there and load it into our Snowflake table?
If yes, please tell me how. If not, please confirm that as well. If not, wouldn't it be a big drawback of Snowflake to depend on other platforms every time?
You can use just about any driver from Snowflake to move files to an internal stage in Snowflake: ODBC, JDBC, Python, SnowSQL, etc. FTP isn't a very common protocol in the cloud, though. Snowflake has a lot of customers without any presence on AWS, Azure, or GCP that are using Snowflake without issues in this manner.
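For example, a rough sketch with SnowSQL, using a plain COPY INTO rather than Snowpipe; the account, stage, table, and file names are placeholders:

    # create an internal (Snowflake-managed) stage once
    snowsql -a myaccount -u myuser -q "CREATE STAGE IF NOT EXISTS my_int_stage"

    # upload the file into the internal stage
    snowsql -a myaccount -u myuser \
        -q "PUT file:///tmp/orders.csv @my_int_stage AUTO_COMPRESS=TRUE"

    # load it into the target table
    snowsql -a myaccount -u myuser \
        -q "COPY INTO orders FROM @my_int_stage FILE_FORMAT=(TYPE=CSV SKIP_HEADER=1)"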
I'm researching the differences between AWS and Azure for my company. We are going to build a web-based application that will span 3 regions, and each region needs to have an MS SQL database.
But I can't figure out how to do the following with AWS: the databases need to sync between regions (two-way), so the data stays the same in every database.
Why do we want this? For example, a customer* from the EU adds a record to the database. This database then needs to sync with the other regions, so that a customer from the US region can see the added records. (*Customers can add products to the database.)
Do you guys have any idea how we can achieve this?
It's a requirement to use MS SQL.
If you are using SQL Server on EC2 instances, then the only way to achieve multi-region, multi-master for MS SQL Server is to use Peer-to-Peer Transactional Replication; however, it doesn't protect against individual row conflicts.
https://technet.microsoft.com/en-us/library/ms151196.aspx
This isn't a feature of AWS RDS for MS SQL; however, there is another product for multi-region replication that's available on the AWS Marketplace, but it only works for read replicas.
http://cloudbasic.net/aws/rds/alwayson/
At present, AWS doesn't support read replicas for SQL Server RDS databases.
However, replication between AWS RDS SQL Server databases can be done using DMS (Database Migration Service). Refer to the link below for more details:
https://aws.amazon.com/blogs/database/introducing-ongoing-replication-from-amazon-rds-for-sql-server-using-aws-database-migration-service/
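As a rough sketch of the DMS side with the AWS CLI, assuming the replication instance and the source/target endpoints already exist (all ARNs and names below are placeholders):

    # create an ongoing-replication (CDC) task between the two RDS SQL Server endpoints
    aws dms create-replication-task \
        --replication-task-identifier sqlserver-crossregion-repl \
        --source-endpoint-arn arn:aws:dms:us-east-1:111111111111:endpoint:SRC \
        --target-endpoint-arn arn:aws:dms:eu-west-1:111111111111:endpoint:TGT \
        --replication-instance-arn arn:aws:dms:us-east-1:111111111111:rep:INSTANCE \
        --migration-type full-load-and-cdc \
        --table-mappings file://table-mappings.json

    # start it once it reports "ready"
    aws dms start-replication-task \
        --replication-task-arn arn:aws:dms:us-east-1:111111111111:task:TASK \
        --start-replication-task-type start-replication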
I have created an application on Bluemix. I need to copy my database to Bluemix so that it can be accessed from my adapter. Can anyone give me detailed steps on how to proceed?
First thing: if your database is reachable through the Internet and you only need to connect to it from the application, please note that a cf application on Bluemix can access the public network, so it is already able to connect to your DB in this scenario.
Assuming that you have a requirement to migrate the DB to Bluemix (you didn't specify which kind of database you want to migrate), here are the main (not all) possibilities you currently have:
RDBMS:
PostgreSQL by Compose (you need an account on compose.io)
SQL Database (DB2, only Premium plan available)
ClearDB (MySQL)
ElephantSQL (this is basically PostgreSQL as a Service - that is, you work with the db via API)
you could use the RDBMS capability of dashDB
No-SQL:
Cloudant (document-oriented)
Redis by Compose (ultra fast key-value db. You need an account on compose.io)
MongoDB by Compose (you need an account on compose.io)
IBM Graph (graph No-SQL db)
I suggest you take a look at the Bluemix Catalog (subcategory Data and Analytics) and refer to the Docs as well.
You can create a dashDB service in your Bluemix space and copy/upload your data to the Bluemix dashDB database, using the dashDB VCAP credentials to connect to it from your adapter, or you can bind your dashDB service to your application on Bluemix.
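As a rough sketch of that second route with the cf CLI (the app name, service instance name, and plan name are assumptions; check the catalog for the plans actually offered):

    # create the dashDB service instance (plan name is a placeholder)
    cf create-service dashDB Entry my-dashdb

    # bind it to the app and restage so VCAP_SERVICES is injected
    cf bind-service my-app my-dashdb
    cf restage my-app

    # inspect the generated VCAP credentials your adapter can use
    cf env my-app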