I have an Aurora PostgreSQL cluster with read replicas.
I want to find out (preferably from the AWS docs) whether it provides strong consistency for reads from the read-only replicas after writes, or whether it is "eventually consistent", meaning that the replicas can return stale data while replication is in progress.
Unfortunately, this is not super clear from the documentation.
What I have found here is:
As a result, all Aurora Replicas return the same data for query results with minimal replica lag.
This lag is usually much less than 100 milliseconds after the primary instance has written an update.
However, I am not sure how to interpret this - does it always return the same data at the cost of higher latency from the added replication lag, or can it return stale data during replication?
Also, I am not sure if it depends on the underlying DB engine (Postgres in my case).
It is asynchronous replication, as written in the docs. This means that Aurora replicas can return old data even though the new data has already been written on the writer instance.
As already said, data from the reader nodes is considered "eventually consistent", even though the reader node(s) and the R/W node share the same managed storage.
The reason for this eventual consistency is that the propagation of WAL logs to the buffer pool of the reader node(s) is asynchronous. I.e. the writer node doesn't wait for the reader node(s) to confirm that they have applied the WAL logs to their buffer pools before acknowledging the transaction to the client app. But this "eventual consistency" only appears if the reader node happens to have data affected by those WAL logs in its buffer pool. Otherwise the reader node reads from the managed storage, which is always strongly consistent.
After some milliseconds, the reader nodes' buffer pools will be updated anyway.
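If you want to observe this lag yourself, Aurora PostgreSQL exposes a status function for it. A minimal sketch (I'm assuming the replica_lag_in_msec column here; exact column names can vary by engine version):

    -- run on any instance in the cluster; shows per-replica lag in milliseconds
    SELECT server_id, session_id, replica_lag_in_msec
    FROM aurora_replica_status();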
Our system is experiencing higher load, which is causing both the database CPU and the queue depth to increase as traffic to the system grows.
This system is practically a read-only database which we sync daily. Would adding read replicas to this system help scale the database and handle the increased read load? As I understand it, Aurora automatically distributes the load, right?
We are using an Aurora Postgres instance that is db.t3.medium.
Would adding read replicas to this system help scale the database
and handle the increased read load?
If the database CPU is truly your bottleneck, then yes, adding a read replica and distributing some of your reads to it should help.
As I understand it, Aurora automatically distributes the load, right?
Not really. It provides a DNS-load-balanced read-only endpoint. As long as you configure your database connections to use that endpoint for read-only queries, they should be distributed fairly evenly across the replicas.
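If you want to verify where a given connection actually landed, readers run in recovery mode, so a plain PostgreSQL check works (nothing Aurora-specific assumed here):

    -- returns true on a read replica, false on the writer instance
    SELECT pg_is_in_recovery();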
I have a master db in one region, and I want to create a read replica of it in another region just for disaster recovery purposes.
I do not want it to be too costly, but I want the replication to work.
My current master db is a db.t2.medium.
My question is:
What instance type should I use for my read replica? Is db.t2.small fine for my replica?
It should not have much effect, as read replica (RR) replication is asynchronous:
Amazon RDS then uses the asynchronous replication method for the DB engine to update the read replica whenever there is a change to the primary DB instance.
This means that your RR will always be lagging behind the master. Exactly how much depends on your setup, so you should monitor the lag as shown in Monitoring read replication. This is needed because you may find that the lag is unacceptably large for the RR to be useful for DR purposes (i.e. a large RPO).
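Besides the ReplicaLag CloudWatch metric, if your engine is PostgreSQL you can also estimate the lag from inside the RR itself; a minimal sketch using standard functions:

    -- run on the read replica: time since the last replayed transaction
    -- (note: this reads as growing lag while the master is idle)
    SELECT now() - pg_last_xact_replay_timestamp() AS replication_lag;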
How similar or different is YugaByte DB's replication model compared to PostgreSQL master-slave replication?
PostgreSQL is a single-node RDBMS. Tables are not horizontally partitioned/sharded into smaller components, since all data is served from a single node anyway. In a highly available (HA) deployment, a master-slave node pair is used. The master is responsible for handling writes/updates to the data. Committed data gets asynchronously copied over to a completely independent "slave" instance. Upon master failure, the application clients can start talking to the slave instance, with the caveat that they won't see the data that was recently committed at the master but not yet copied over to the slave. Usually the master-to-slave failover, master repair, and slave-to-master failback are handled manually.
On the other hand, YugaByte DB is a Google Spanner-inspired distributed RDBMS where HA deployments start with a minimum of 3 nodes. Tables are horizontally partitioned/sharded into smaller components called "tablets". Tablets are distributed to all the available nodes evenly. Each tablet is made resilient to failures by automatically storing 2 additional copies on 2 additional nodes, leading to a total of 3 copies. These 3 copies are known as a tablet group. Instead of managing replication at the overall instance level involving all of the data (as PostgreSQL does), YugaByte DB's replication is managed at the individual tablet group level using a distributed consensus protocol called Raft.
Raft (along with a mechanism called Leader Leases) ensures that only 1 of the 3 copies can be a leader (responsible for serving writes/updates and strongly consistent reads) at any time. Writes for a given row are handled by the corresponding tablet leader, which commits data locally as well as on at least 1 follower (a Raft majority, i.e. 2 of the 3 copies) before acknowledging success to the client application. Loss of a node leads to loss of the tablet leaders hosted on that node. New updates on those tablets cannot be processed till new tablet leaders get auto-elected among the remaining followers. This auto-election usually takes a few seconds and is primarily dependent on the network latency across the nodes. After the election completes, the cluster is ready to accept writes even for the tablets that were impacted by the node failure.
The Google Spanner design followed by YugaByte DB does require committing the data at one more copy than PostgreSQL, which means increased write latency compared to PostgreSQL. But in return, the benefits derived are that of built-in repair (through leader auto-election) and failover (to the new leader after election). Even failback (after the failed node is back online) is automatic. This design is a better bet when infrastructure running the DB cluster is expected to fail more often than before.
For more details, refer to my post on this topic.
I know there have been many articles written about database replication. Trust me, I spent some time reading those articles, including this SO one that explains the pros and cons of replication. This SO article goes in depth about replication and clustering individually, but doesn't answer these simple questions that I have:
When do you replicate your database, and when do you cluster?
Can both be performed at the same time? If yes, what are the motivations for each?
Thanks in advance.
MySQL currently supports two different solutions for creating a high availability environment and achieving multi-server scalability.
MySQL Replication
The first form is replication, which MySQL has supported since MySQL version 3.23. Replication in MySQL is currently implemented as an asynchronous master-slave setup that uses a logical log-shipping backend.
A master-slave setup means that one server is designated to act as the master. It is then required to receive all of the write queries. The master executes and logs these queries, which are then shipped to the slave to execute, keeping the same data across all of the replication members.
Replication is asynchronous, which means that the slave server is not guaranteed to have the data when the master performs the change. Normally, replication will be as close to real time as possible. However, there is no guarantee about the time required for the change to propagate to the slave.
Replication can be used for many reasons. Some of the more common reasons include scalability, server failover, and backup solutions.
Scalability can be achieved because you can run SELECT queries across any of the slaves. Write statements, however, are generally not improved, because writes have to occur on each of the replication members.
Failover can be implemented fairly easily using an external monitoring utility that uses a heartbeat or similar mechanism to detect the failure of the master server. MySQL does not currently do automatic failover, as the logic is generally very application dependent. Keep in mind that, because replication is asynchronous, it is possible that not all of the changes done on the master will have propagated to the slave.
MySQL replication works very well even across slower connections, and with connections that aren't continuous. It can also be used across different hardware and software platforms. It is possible to use replication with most storage engines, including MyISAM and InnoDB.
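For reference, here is a minimal sketch of pointing a slave at a master using the classic commands (hostname, credentials, and binlog coordinates are placeholders; recent MySQL versions spell this CHANGE REPLICATION SOURCE TO):

    -- run on the slave
    CHANGE MASTER TO
        MASTER_HOST = 'master.example.com',
        MASTER_USER = 'repl',
        MASTER_PASSWORD = 'secret',
        MASTER_LOG_FILE = 'mysql-bin.000001',
        MASTER_LOG_POS = 4;
    START SLAVE;

    -- check replication health, e.g. the Seconds_Behind_Master field
    SHOW SLAVE STATUS;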
MySQL Cluster
MySQL Cluster is a shared-nothing, distributed partitioning system that uses synchronous replication in order to maintain high availability and performance.
MySQL Cluster is implemented through a separate storage engine called NDB Cluster. This storage engine will automatically partition data across a number of data nodes. The automatic partitioning of data allows for parallelization of queries that are executed. Both reads and writes can be scaled in this fashion since the writes can be distributed across many nodes.
Internally, MySQL Cluster also uses synchronous replication in order to remove any single point of failure from the system. Since two or more nodes are always guaranteed to have each data fragment, at least one node can fail without any impact on running transactions. Failure detection is handled automatically, with the dead node being removed transparently to the application. Upon node restart, it will automatically be re-integrated into the cluster and begin handling requests as soon as possible.
There are a number of limitations that currently exist and have to be kept in mind while deciding if MySQL Cluster is the correct solution for your situation.
Currently all of the data and indexes stored in MySQL Cluster are stored in main memory across the cluster. This does restrict the size of the database based on the systems used in the cluster.
MySQL Cluster is designed to be used on an internal network as latency is very important for response time.
As a result, it is not possible to run a single cluster across a wide geographic distance. In addition, while MySQL Cluster will work over commodity network setups, in order to attain the highest performance possible special clustering interconnects can be used.
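Note that cluster participation is chosen per table through the storage engine. A minimal sketch (the table definition is just illustrative):

    -- this table's rows are automatically partitioned across the cluster's data nodes
    CREATE TABLE orders (
        id    INT NOT NULL PRIMARY KEY,
        total DECIMAL(10,2)
    ) ENGINE = NDBCLUSTER;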
In a Master-Slave configuration, the write operations are performed by the master and reads by the slaves. So all SQL requests first reach the master, a queue of requests is maintained, and the read operations get executed only after completion of the writes. There is a common problem in Master-Slave configurations, which I have also witnessed: when the queue becomes too large for the master to maintain, this architecture collapses and a slave starts behaving like the master.
For clusters, I have worked with Cassandra, where a request can reach any node; a record of the changes made on that node is maintained, and the other nodes are updated based on it. So here, no operation depends on a single node.
We used Master-Slave when the write volume was not big in size and count; otherwise we use clusters.
Clusters are expensive in space and Master-Slave in time, so your decision of what to choose depends on what you want to save.
We can also use both at the same time; I have done this at my current company.
We moved the tables with the most write operations to Cassandra, and we wrote four APIs to perform the CRUD operations on the tables in Cassandra. Whenever an HTTP request comes in, it first hits our web server; from the code running on the web server we decide which operation has to be performed (among CRUD), and then we call that particular API to make changes to the Cassandra database.
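On the Cassandra side, the replication factor is declared when the keyspace is created; a minimal CQL sketch (keyspace name, data center name, and factor are placeholders):

    -- every row in this keyspace is stored on 3 replica nodes in dc1
    CREATE KEYSPACE app_data
    WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};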
Could anybody tell me more about the difference between physical replication and logical replication in PostgreSQL?
TL;DR: Logical replication sends row-by-row changes, physical replication sends disk block changes. Logical replication is better for some tasks, physical replication for others.
Note that in PostgreSQL 12 (current at time of update) logical replication is stable and reliable, but quite limited. Use physical replication if you are asking this question.
Streaming replication can be logical replication. It's all a bit complicated.
WAL-shipping vs streaming
There are two main ways to send data from master to replica in PostgreSQL:
WAL-shipping or continuous archiving, where individual write-ahead-log files are copied from pg_xlog by the archive_command running on the master to some other location. A restore_command configured in the replica's recovery.conf runs on the replica to fetch the archives so the replica can replay the WAL.
This is what's used for point-in-time recovery (PITR), which is used as a method of continuous backup.
No direct network connection is required to the master server. Replication can have long delays, especially without an archive_timeout set. WAL shipping cannot be used for synchronous replication.
streaming replication, where each change is sent to one or more replica servers directly over a TCP/IP connection as it happens. The replicas must have a direct network connection to the master, configured in their recovery.conf's primary_conninfo option.
Streaming replication has little or no delay so long as the replica is fast enough to keep up. It can be used for synchronous replication. You cannot use streaming replication for PITR, so it's not much use for continuous backup. If you drop a table on the master, oops, it's dropped on the replicas too.
Thus, the two methods have different purposes. However, both of them transport physical WAL archives from primary to replica; they differ only in the timing, and whether the WAL segments get archived somewhere else along the way.
You can and usually should combine the two methods, using streaming replication usually, but with archive_command enabled. Then on the replica, set a restore_command to allow the replica to fall back to restore from WAL archives if there are direct connectivity issues between primary and replica.
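A minimal sketch of that combined setup (PostgreSQL 12+, where restore_command and primary_conninfo are ordinary settings; older versions put the replica side in recovery.conf; paths and hostnames are placeholders):

    -- on the primary: stream to replicas AND archive WAL
    ALTER SYSTEM SET archive_mode = on;    -- requires a server restart
    ALTER SYSTEM SET archive_command = 'cp %p /mnt/server/archive/%f';

    -- on the replica: stream from the primary, fall back to the archive
    ALTER SYSTEM SET primary_conninfo = 'host=primary.example.com user=replicator';
    ALTER SYSTEM SET restore_command = 'cp /mnt/server/archive/%f %p';
    SELECT pg_reload_conf();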
Asynchronous vs synchronous streaming
On top of that, there's synchronous and asynchronous streaming replication:
In asynchronous streaming replication the replica(s) are allowed to fall behind the master in time when the master is faster/busier. If the master crashes you might lose data that wasn't replicated yet.
If the asynchronous replica falls too far behind the master, the master might throw away information the replica needs if wal_keep_segments (renamed wal_keep_size in PostgreSQL 13) is too low and no replication slot is used, meaning you have to re-create the replica from scratch. Or the master's pg_wal (formerly pg_xlog) might fill up and stop the master from working until disk space is freed, if wal_keep_segments is set too high or a slot retains too much WAL.
In synchronous replication the master doesn't finish committing until a replica has confirmed it received the transaction. You never lose data if the master crashes and you have to fail over to a replica. The master will never throw away data the replica needs or fill up its WAL and run out of disk space because of replica delays. In exchange, it can cause the master to slow down or even stop working if replicas have problems, and it always has some performance impact on the master due to network latency.
When there are multiple replicas, by default only one is synchronous at a time; since PostgreSQL 9.6 you can wait for several. See synchronous_standby_names.
You can't have synchronous log shipping.
You can actually combine log shipping and asynchronous replication to protect against having to recreate a replica if it falls too far behind, without risking affecting the master. This is an ideal configuration for many deployments, combined with monitoring how far the replica is behind the master to ensure it's within acceptable disaster recovery limits.
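A minimal sketch of enabling synchronous replication for one named standby ('replica1' is a placeholder and must match the standby's application_name; the FIRST syntax is PostgreSQL 10+):

    -- on the master
    ALTER SYSTEM SET synchronous_standby_names = 'FIRST 1 (replica1)';
    SELECT pg_reload_conf();

    -- verify: sync_state should read 'sync' for that standby
    SELECT application_name, state, sync_state FROM pg_stat_replication;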
Logical vs physical
On top of that we have logical vs physical streaming replication, as introduced in PostgreSQL 9.4:
In physical streaming replication changes are sent at nearly disk block level, like "at offset 14 of disk page 18 of relation 12311, wrote tuple with hex value 0x2342beef1222....".
Physical replication sends everything: the contents of every database in the PostgreSQL install, all tables in every database. It sends index entries, it sends the whole new table data when you VACUUM FULL, it sends data for transactions that rolled back, etc. So it generates a lot of "noise" and sends a lot of excess data. It also requires the replica to be completely identical, so you cannot do anything that'd require a transaction, like creating temp or unlogged tables. Querying the replica delays replication, so long queries on the replica need to be cancelled.
In exchange, it's simple and efficient to apply the changes on the replica, and the replica is reliably exactly the same as the master. DDL is replicated transparently, just like everything else, so it requires no special handling. It can also stream big transactions as they happen, so there is little delay between commit on the master and commit on the replica even for big changes.
Physical replication is mature, well tested, and widely adopted.
logical streaming replication, new in 9.4, sends changes at a higher level, and much more selectively.
It replicates only one database at a time. It sends only row changes and only for committed transactions, and it doesn't have to send vacuum data, index changes, etc. It can selectively send data only for some tables within a database. This makes logical replication much more bandwidth-efficient.
Operating at a higher level also means that you can do transactions on the replica databases. You can create temporary and unlogged tables. Even normal tables, if you want. You can use foreign data wrappers, views, create functions, whatever you like. There's no need to cancel queries if they run too long either.
Logical replication can also be used to build multi-master replication in PostgreSQL, which is not possible using physical replication.
In exchange, though, it can't (currently) stream big transactions as they happen. It has to wait until they commit. So there can be a long delay between a big transaction committing on the master and being applied to the replica.
It replays transactions strictly in commit order, so small fast transactions can get stuck behind a big transaction and be delayed quite a while.
DDL isn't handled automatically. You have to keep the table definitions in sync between master and replica yourself, or the application using logical replication has to have its own facilities to do this. It can be complicated to get this right.
The apply process itself is more complicated than "write some bytes where I'm told to" as well. It also takes more resources on the replica than physical replication does.
Logical replication implementations (as of 9.4, when this was originally written) are not mature or widely adopted, or particularly easy to use.
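Since PostgreSQL 10 there is an in-core publish/subscribe form of logical replication; a minimal sketch (names and connection string are placeholders, and the table definition must already exist on both sides):

    -- on the publishing database
    CREATE PUBLICATION my_pub FOR TABLE orders;

    -- on the subscribing database
    CREATE SUBSCRIPTION my_sub
        CONNECTION 'host=primary.example.com dbname=mydb user=replicator'
        PUBLICATION my_pub;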
Too many options, tell me what to do
Phew. Complicated, huh? And I haven't even got into the details of delayed replication, slots, max_wal_size, timelines, how promotion works, Postgres-XL, BDR and multimaster, etc.
So what should you do?
There's no single right answer. Otherwise PostgreSQL would only support that one way. But there are a few common use cases:
For backup and disaster recovery use pgbarman to make base backups and retain WAL for you, providing easy-to-manage continuous backup. You should still take periodic pg_dump backups as extra insurance.
For high availability with zero data loss risk use streaming synchronous replication.
For high availability with low data loss risk and better performance you should use asynchronous streaming replication. Either have WAL archiving enabled for fallback or use a replication slot. Monitor how far the replica is behind the master using external tools like Icinga.
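If you take the replication slot route, the slot lives on the master and the replica references it; a minimal sketch ('replica1_slot' is a placeholder):

    -- on the master: the slot makes the master retain WAL until the replica has consumed it
    SELECT pg_create_physical_replication_slot('replica1_slot');

    -- on the replica (PostgreSQL 12+; older versions set this in recovery.conf)
    ALTER SYSTEM SET primary_slot_name = 'replica1_slot';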
References
continuous archiving and PITR
high availability, load balancing and replication
replication settings
recovery.conf
pgbarman
repmgr
wiki: replication, clustering and connection pooling