What happens if a hardware failure occurs during the Time Travel period?
My assumption is that if a hardware failure happens during the Time Travel period we can restore the database on our own, while a hardware failure during the Fail-safe period can be handled only by Snowflake.
Yes, you are right.
Time Travel can be used to restore objects from your end, as long as the object is still within its defined data retention period.
Fail-safe ensures historical data is protected in the event of a system failure; objects in Fail-safe can be restored only through Snowflake Support.
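For example, a self-service restore within the Time Travel window looks roughly like this (a minimal sketch; the table name, timestamp, and retention settings are assumptions):

```sql
-- Recover a table you dropped yourself, while it is still within its
-- Time Travel retention period (no Snowflake Support involvement needed).
UNDROP TABLE orders;

-- Or rebuild a table as it existed at an earlier point in time by cloning
-- its historical state (timestamp is illustrative).
CREATE TABLE orders_restored CLONE orders
  AT (TIMESTAMP => '2024-01-15 08:00:00'::TIMESTAMP_LTZ);
```

Once an object has aged out of Time Travel into Fail-safe, neither of these statements works any more and only Snowflake Support can recover it.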
Snowflake is deployed on your CSP of choice, either AWS, Azure, or GCP, and the micro-partitions live in a Snowflake-managed bucket.
That means Snowflake relies on the CSP's native multi-AZ protection of data, so the likelihood of such a failure is very low.
Our system is experiencing higher load, which causes the database CPU usage and queue depth to increase as traffic to the system grows.
This is practically a read-only database that we sync daily. Would adding read replicas to this system help scale the database and handle the increased read load? As I understand it, Aurora automatically distributes the load, right?
We are using an Aurora Postgres instance that is db.t3.medium.
Would adding read replicas to this system help scaling up the database and handling the increased read load?
If the database CPU is truly your bottleneck, then yes, adding a read replica and distributing some of your reads to it should help.
As I understand it, Aurora automatically distributes the load right?
Not really. It provides a DNS-load-balanced read-only endpoint. As long as you configure your database connections to use that endpoint for read-only queries, they should be fairly evenly distributed across the replicas.
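As a quick sanity check (assuming Aurora PostgreSQL here, since you mentioned a db.t3.medium Postgres instance), you can verify that a connection made through the reader endpoint actually landed on a replica:

```sql
-- Connect through the cluster's reader endpoint and run this:
-- returns true on a reader instance, false on the writer.
SELECT pg_is_in_recovery();
```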
I have a master DB in one region, and I want to create a read replica of it in another region just for disaster recovery purposes.
I do not want it to be that costly, but I want the replication to work.
My current master DB is a db.t2.medium.
My question is:
What type should I keep for my read replica? Is db.t2.small fine for my replica?
It should not have much of an effect, since read replica (RR) replication is asynchronous:
Amazon RDS then uses the asynchronous replication method for the DB engine to update the read replica whenever there is a change to the primary DB instance.
This means that your RR will always lag behind the master; exactly by how much depends on your setup. You should therefore monitor the lag as shown in Monitoring read replication, because you may find the lag is unacceptably large for the RR to be useful for DR purposes (i.e. a large RPO).
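For example (assuming a PostgreSQL engine; for other engines use the CloudWatch ReplicaLag metric instead), you can estimate the lag directly on the replica:

```sql
-- Run on the read replica: how far behind the primary is the replayed WAL?
-- Note: on an idle primary this can overstate the actual lag.
SELECT now() - pg_last_xact_replay_timestamp() AS replication_lag;
```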
I have a problem using AWS Database Migration Service to implement transactional replication from SQL Server as the source database engine; any help is highly appreciated.
The 'safeguardPolicy' connection attribute defaults to 'RELY_ON_SQL_SERVER_REPLICATION_AGENT'. The tool starts mimicking a transaction in the database to prevent the log from being reused, so that it can read as many changes as possible from the active log.
But what is the intended behavior of these safeguard transactions? Will those sessions be stopped at some point? What is the mechanism that starts such a transaction, lets it run for some time, and then stops it?
The production databases I manage are in the Full recovery model, with log backups every half hour. The log grows to an enormous size because log truncation cannot succeed while those safeguard transactions initiated by the DMS tool are open.
For now, the only workaround for a full transaction log caused by this DMS behavior (log reuse blocked by LOG_SCAN) is to stop the DMS tasks and manually truncate the log to release unused space. But that is not a solution at all if we have to stop the replication every time the problem occurs, knowing that it will occur often.
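For context, this is how we confirm what is holding up truncation (the database name is illustrative):

```sql
-- Shows why the transaction log of a database cannot be truncated;
-- in our case this reports LOG_SCAN while the DMS safeguard transactions are active.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyProductionDb';
```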
Please share some internals about the tool if possible.
Thanks
I have a SQL Server 2008 database set up for mirroring and was wondering if there was any way to generate a report for an audit showing that the data is being mirrored correctly and failing over would not result in any data loss. I can show using the database mirroring monitor that data is being transferred, but need a way to verify that the data matches (preferably without having to break the mirror).
Just query sys.database_mirroring: if mirroring_state_desc is 'SYNCHRONIZED', then the data is in the mirror. Make sure the transaction safety (mirroring_safety_level) is FULL to guarantee no data loss on failover; see Mirroring States:
If transaction safety is set to FULL, automatic failover and manual failover are both supported in the SYNCHRONIZED state; there is no data loss after a failover.
If transaction safety is off, some data loss is always possible, even in the SYNCHRONIZED state.
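A minimal sketch of that check (run it on the principal; the column aliases are just for readability):

```sql
-- List mirrored databases with their mirroring state and safety level.
SELECT DB_NAME(database_id)        AS database_name,
       mirroring_state_desc,
       mirroring_safety_level_desc
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;  -- filter out databases that are not mirrored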
If the auditors don't trust the official product documentation, you can show the data content of a database snapshot taken on the mirror (since the mirror itself is not accessible); see Database Snapshots. Obviously, to do a meaningful comparison with a frozen snapshot you would have to freeze the source first, take the snapshot on the mirror, run the comparison, then unfreeze the source. This implies the database is effectively read-only for the duration, because any change will cause it to diverge from the snapshot and fail the comparison. It is an exercise in futility, with downtime, given that the documentation clearly states that a synchronized, fully protected mirror is guaranteed to be identical to the source.
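If you do go down that path anyway, creating the snapshot on the mirror looks roughly like this (database name, logical file name, and path are all assumptions):

```sql
-- Create a static, readable snapshot of the mirror database for the comparison.
CREATE DATABASE MirrorDb_AuditSnapshot
ON (NAME = MirrorDb_Data, FILENAME = 'D:\Snapshots\MirrorDb_Audit.ss')
AS SNAPSHOT OF MirrorDb;
```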
I am tasked with setting up disaster recovery for one of our systems. The primary server is in Florida and the secondary is in Germany. The application is used globally within my company.
I am not sure whether I should use log shipping or mirroring. What I have read is that mirroring will have an adverse effect on the performance of my application. Is this true? Does this mean that any time a user modifies or saves a record it will take longer to get a positive response?
Thanks
Mirroring can have different performance impacts depending on the operating mode you choose. There are three operating modes: high protection (with or without automatic failover) and high performance.
Basically, these amount to synchronous and asynchronous mirroring. With high protection, your application waits for the mirroring to finish before the transaction is considered complete. In high-performance mode, your application does not wait for the mirrored log to be committed; in fact, there is no guarantee at any point in time that all of the most recent transactions have been saved in the mirror's transaction log.
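To make the distinction concrete, the mode of an existing mirroring session is switched with the SAFETY setting (the database name is illustrative; run on the principal):

```sql
-- High protection: commits wait for the mirror (synchronous).
ALTER DATABASE SalesDb SET PARTNER SAFETY FULL;

-- High performance: commits do not wait for the mirror (asynchronous).
ALTER DATABASE SalesDb SET PARTNER SAFETY OFF;
```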
One of the main factors to consider with mirroring is the round-trip time of your network: the higher the latency, the heavier the impact on performance. You will need to weigh that performance cost against your specific recovery (and failover) requirements.
If you haven't already, you should read Database Mirroring in SQL Server 2005 and Database Mirroring Best Practices and Performance Considerations.
Mirroring would keep the primary and DR environments in sync at all times and thus eliminate the possibility of data loss. However, as you noted, this has an adverse effect on performance, but it may be necessary in situations that cannot tolerate any data loss (e.g. financial applications). Shipping logs and applying them to the standby database at the DR site doesn't have the same impact on user response time, but it opens a small window during which data loss could potentially occur.
Mirroring operates synchronously (it waits until the log is committed on the mirror) and is usually deployed over a good network connection (LAN).
Log shipping operates asynchronously (it does not wait for the log to be applied on the secondary) and is usually deployed over MPLS/VPN or a slower network.
So for your objective, you should use log shipping.
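In case it helps, log shipping boils down to a backup/copy/restore cycle like this (database name, schedule, and paths are illustrative):

```sql
-- On the primary (FL): back up the transaction log on a schedule.
BACKUP LOG SalesDb TO DISK = N'\\flserver\logship\SalesDb_20240115_1230.trn';

-- The backup file is copied to the DR site, then on the secondary (Germany):
RESTORE LOG SalesDb
FROM DISK = N'D:\logship\SalesDb_20240115_1230.trn'
WITH NORECOVERY;  -- keep the secondary restoring so further logs can be applied
```

The gap between two log backups is the window of potential data loss mentioned in the other answer.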