How best to batch insert queries in Grakn?

What is best practice for batching Grakn insert queries?
from the docs:
"Keep the number of operations per transaction minimal. Although it is technically possible to commit a write transaction once after many operations, it is not recommended. To avoid lengthy rollbacks, running out of memory and conflicting operations, it is best to keep the number of queries per transaction minimal, ideally to one query per transaction."
On the other hand, I've heard some recommend 500-1000 queries per commit.
What are the possible gains, bottlenecks and risks?

In general, we would like applications to use many small, lightweight transactions, and we keep this as the recommended advice where possible. However, in Grakn versions < 2.0 there are still significant speed gains to be made by batching queries into a larger transaction, each of which can insert between 500-1000 concepts before committing.
In Grakn versions before 2.0, Grakn's storage layer has relatively expensive transaction open and commit operations; these should be much lighter in 2.0 and onwards. Small transactions also mean smaller retries in the cases where a transaction fails for any reason (for example, due to conflicts with other write transactions).
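To make the batching pattern concrete, here is a minimal sketch assuming the Grakn 1.x Java client; the server address, keyspace, batch size, and exact method names are illustrative and may vary between client versions:
import grakn.client.GraknClient;
import graql.lang.Graql;
import java.util.List;

public class BatchedInsert {
    private static final int BATCH_SIZE = 500; // commit every 500 insert queries

    public static void insertAll(List<String> insertQueries) {
        GraknClient client = new GraknClient("localhost:48555");
        GraknClient.Session session = client.session("my_keyspace");
        GraknClient.Transaction tx = session.transaction().write();
        int count = 0;
        for (String query : insertQueries) {
            tx.execute(Graql.parse(query).asInsert());
            if (++count % BATCH_SIZE == 0) {
                tx.commit();                        // commit the batch
                tx = session.transaction().write(); // open a fresh write transaction
            }
        }
        tx.commit(); // commit whatever is left in the final partial batch
        session.close();
        client.close();
    }
}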

Related

Extensive benchmark for Flink in stream processing

I am using Flink at my company and I am considering applying several scenarios to see the performance of each case.
Below are the scenarios that I will work on:
Experiments
End-to-End
Exactly-once or at-least-once
source : kafka
sink : Mysql and Redis
logic : simple counting logic
For the exactly-once case, I will use the TwoPhaseCommitSink to achieve it.
Before doing the experiment, I am wondering about some issues, as below.
The performance of the sink
As you can see, I will use MySQL (an RDBMS) for the sink. Are there any descriptive benchmark results for using an RDBMS sink with at-least-once or exactly-once? I think that when the sink writes to a database, throughput will be affected because it takes some time to connect to and communicate with the database. But I cannot find any documents or technical blogs showing detailed Flink benchmark results when using an RDBMS sink.
In particular, I am also wondering whether exactly-once will have more degraded performance than at-least-once, and whether it is too slow for commercial use.
So my questions are as below.
Are there any informative results for the two semantics (at-least-once, exactly-once) using a database sink (MySQL or Redis)?
Will end-to-end exactly-once semantics be very slow when using the MySQL sink? I will apply the TwoPhaseCommitSink.
Thanks.
A few reactions:
Simple, generic Flink benchmarks are pretty useless as predictors of specific application performance. So much depends on what a specific job is doing, and there's a lot of room for optimization.
Exactly-once with two-phase commit sinks is costly in terms of latency, but not so bad with respect to throughput. The issue is that the commit has to be done in concert with a checkpoint. If you need to checkpoint frequently in order to reduce the latency, then that will more significantly harm the throughput.
Unaligned checkpoints and the changelog state backend can make a big difference for some use cases. If you want to include these in your testing, be sure to use Flink 1.16, which saw significant improvements in these areas.
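To make those checkpointing knobs concrete, here is a minimal configuration sketch for the DataStream API; the 10-second interval is illustrative and should be tuned against your latency target:
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointConfigSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Exactly-once checkpoints; a two-phase-commit sink only commits when a
        // checkpoint completes, so this interval directly bounds end-to-end latency.
        env.enableCheckpointing(10_000L, CheckpointingMode.EXACTLY_ONCE);

        // Unaligned checkpoints (Flink 1.11+) can shorten checkpoint times under backpressure.
        env.getCheckpointConfig().enableUnalignedCheckpoints();
    }
}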
The Flink project has invested quite a bit in having a suite of benchmarks that run on every commit. See https://github.com/apache/flink-benchmarks and http://codespeed.dak8s.net:8000/ for more info.

Is it possible to achieve Exactly Once Semantics using a BASE-fashioned database?

In stream processing applications (e.g. based on Apache Flink or Apache Spark Streaming) it is sometimes necessary to process data exactly once.
In the database world something equivalent can be achieved by using databases that follow the ACID criteria (correct me if I'm wrong here).
However there are a lot of (non relational) databases that do not follow ACID but BASE.
Now my question is: If I'm going to integrate such a BASE database into a stream processing application (exactly once), can I still guarantee exactly once processing for the whole pipeline? And if this is possible, under what circumstances?
Exactly-once semantics means the processing framework (such as Flink) can guarantee that each incoming record (event) will be processed exactly one time, even if the pipeline fails in any way.
This is done by having checkpoints after each operation in the pipeline, so that when the application recovers from a failure, successful operations will not be executed again.
It depends on what kind of operations you are trying to do with the database. In most cases the database is used as a sink to write processing results into. In that case the operation involving the database is a simple insert that will not be executed again after one successful run, so it is still exactly-once regardless of ACID support.
You might be tempted to group operations together for databases that support ACID, but that would be bad practice in a parallel streaming pipeline, since it creates multiple transactions whose locks might block the whole process. Instead, a BASE (NoSQL) database that is fast under intensive reads and updates is preferable; you just need to make your operations idempotent, so that partially re-executed statements (if the job fails halfway through, they might all be executed again after recovery) won't result in incorrect data.
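To make the idempotency point concrete, here is a minimal sketch of an idempotent write to Redis using the Jedis client: the value is keyed by the record's natural key, so re-executing the same write after recovery simply overwrites the previous value instead of double-counting. The key names and connection details are illustrative.
import redis.clients.jedis.Jedis;

public class IdempotentRedisWriter {
    // Writes (or overwrites) the count for a word; replaying the same record is harmless.
    public static void writeCount(String word, long count) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.hset("word_counts", word, String.valueOf(count)); // keyed upsert, not an increment
        }
    }
}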

How does a multi-table schema create data consistency issues?

As per this answer, it is recommended to go for a single table in Cassandra.
Cassandra 3.0
We are planning the schema below:
The second table has a composite key: PK(domain_id, item_id). So domain_id is the partition key and item_id is the clustering key.
The GET request handler will access (read) the two tables.
The POST request handler will access (write) the two tables.
The PUT request handler will access (write) the details table (only).
As per the CAP theorem:
What are the consistency issues in having a multi-table schema in Cassandra?
Can we avoid consistency issues in Cassandra with QUORUM, consistency levels, etc.?
recommended to go for a single table in Cassandra.
I would recommend the opposite. If you have to support multiple queries for the same data in Apache Cassandra, you should have one table for each query.
What are the consistency issues in having a multi-table schema in Cassandra?
Consistency issues between query tables can happen when writes are applied to one table but not the other(s). In that case, the application should have a way to gracefully handle it. If it becomes problematic, perhaps running a nightly job to keep them in-sync might be necessary.
You can also have consistency issues within a table. Maybe something happens (node crashes, down longer than 3 hours, hints not replayed) during the write process. In that case, a given data point may have only a subset of its intended replicas.
This scenario can be countered by running regularly-scheduled repairs. Additionally, consistency can be increased on a per-query basis (QUORUM vs. ONE, etc), and consistency levels of QUORUM and higher will occasionally trigger a read-repair (which syncs all replicas in the current operation).
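To show what per-query consistency looks like in code, here is a minimal sketch assuming the DataStax Java driver 4.x; the keyspace, table, and column names are taken loosely from the question and are illustrative:
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.DefaultConsistencyLevel;
import com.datastax.oss.driver.api.core.cql.SimpleStatement;

public class QuorumReadSketch {
    public static void main(String[] args) {
        try (CqlSession session = CqlSession.builder().build()) {
            // Raise the consistency level for this one query; QUORUM reads may
            // also trigger a read-repair that syncs the replicas involved.
            SimpleStatement stmt = SimpleStatement
                .newInstance("SELECT * FROM my_ks.domain_details_table WHERE domain_id = ?", "example.com")
                .setConsistencyLevel(DefaultConsistencyLevel.QUORUM);
            session.execute(stmt);
        }
    }
}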
Can we avoid consistency issues in Cassandra with QUORUM, consistency levels, etc.?
So Apache Cassandra was engineered to be highly-available (HA), thereby embracing the paradigm of eventual consistency. Some might interpret that to mean Cassandra is inconsistent by design, and they would not be incorrect. I can say after several years of supporting hundreds of clusters at web/retail scale that consistency issues (while they do happen) are rare, and are usually caused by failures of components outside of a Cassandra cluster.
Ultimately though, it comes down to the business requirements of the application. For some applications like product reviews or recommendations, a little inconsistency shouldn't be a problem. On the other hand, things like location-based pricing may need a higher level of query consistency. And if 100% consistency is indeed a hard requirement, I would question whether or not Cassandra is the proper choice for data storage.
Edit
I did not get this: "Consistency issues between query tables can happen when writes are applied to one table but not the other(s)." When writes are applied to one table but not the other(s), what happens?
So let's say that a new domain is added. Perhaps a scenario arises where the domain_details_table gets updated, but the id_table does not. Nothing wrong here on the database side. Except that when the application expects to find that domain_id in the id_table, but cannot.
In that case, maybe the application can retry using a secondary index on domain_details_table.domain_id. It won't be fast, but the decision to be made is more around which scenario is more preferable; no answer, or a slow answer? Again, application requirements come into play here.
For your point: "You can also have consistency issues within a table. Maybe something happens (node crashes, down longer than 3 hours, hints not replayed) during the write process." How does an RDBMS (like MySQL) deal with this?
So the answer to this used to be simple. RDBMSs only run on a single server, so there's only one replica to keep in-sync. But today, most RDBMSs have HA solutions which can be used, and thus have to be kept in-sync. In that case (from what I understand), most of them will asynchronously update the secondary replica(s), while restricting traffic only to the primary.
It's also good to remember that RDBMSs enforce consistency through locking strategies, as well. So even a single-instance RDBMS will lock a data point during an update, blocking any reads until the lock is released.
In a node-down scenario, a single-instance RDBMS will be completely offline, so instead of inconsistent data you'd have data loss. In an HA RDBMS scenario, there would be a short pause (during which you would likely encounter connection/query failures) until it has failed over to the new primary. Once the replica comes up, there would probably be additional time necessary to sync up the replicas until HA can be restored.

GAE transaction failure and idempotency

The Google App Engine documentation contains this paragraph:
"Note: If your application receives an exception when committing a transaction, it does not always mean that the transaction failed. You can receive DatastoreTimeoutException, ConcurrentModificationException, or DatastoreFailureException exceptions in cases where transactions have been committed and eventually will be applied successfully. Whenever possible, make your Datastore transactions idempotent so that if you repeat a transaction, the end result will be the same."
Wait, what? It seems like there's a very important class of transactions that just simply cannot be made idempotent because they depend on current datastore state. For example, a simple counter, as in a like button. The transaction needs to read the current count, increment it, and write out the count again. If the transaction appears to "fail" but doesn't REALLY fail, and there's no way for me to tell that on the client side, then I need to try again, which will result in one click generating two "likes." Surely there is some way to prevent this with GAE?
Edit:
it seems that this is a problem inherent in distributed systems, as per none other than Guido van Rossum -- see this link:
app engine datastore transaction exception
So it looks like designing idempotent transactions is pretty much a must if you want a high degree of reliability.
I was wondering if it was possible to implement a global system across a whole app for ensuring idempotency. The key would be to maintain a transaction log in the datastore. The client would generate a GUID, and then include that GUID with the request (the same GUID would be re-sent on retries of the same request). On the server, at the start of each transaction, it would look in the datastore for a record in the Transactions entity group with that ID. If it found one, then this is a repeated transaction, so it would return without doing anything.
Of course this would require enabling cross-group transactions, or having a separate transaction log as a child of each entity group. Also there would be a performance hit if failed entity-key lookups are slow, because almost every transaction would include a failed lookup (most GUIDs would be new).
In terms of the additional cost of extra datastore interactions, this would probably still be less than making every transaction idempotent, since that would require a lot of checking of what's in the datastore at each level.
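As a sketch of what that transaction log could look like with the low-level Datastore Java API (the "TxnLog" kind, the GUID plumbing, and the Runnable are illustrative; in practice the guarded writes would have to share the entity group or use a cross-group transaction):
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.EntityNotFoundException;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.KeyFactory;
import com.google.appengine.api.datastore.Transaction;

public class IdempotentApply {
    public static void applyOnce(String requestGuid, Runnable mutation) {
        DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
        Transaction txn = ds.beginTransaction();
        Key logKey = KeyFactory.createKey("TxnLog", requestGuid);
        try {
            ds.get(txn, logKey);             // found: this GUID was already applied
            txn.rollback();                  // so do nothing on this retry
        } catch (EntityNotFoundException notAppliedYet) {
            mutation.run();                  // the real write(s); must share the txn in a real app
            ds.put(txn, new Entity(logKey)); // record the GUID so later retries become no-ops
            txn.commit();
        }
    }
}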
dan wilkerson, simon goldsmith, et al. designed a thorough global transaction system on top of app engine's local (per entity group) transactions. at a high level, it uses techniques similar to the GUID one you describe. dan dealt with "submarine writes," ie the transactions you describe that report failure but later surface as succeeded, as well as many other theoretical and practical details of the datastore. erick armbrust implemented dan's design in tapioca-orm.
i don't necessarily recommend that you implement his design or use tapioca-orm, but you'd definitely be interested in the research.
in response to your questions: plenty of people implement GAE apps that use the datastore without idempotency. it's only important when you need transactions with certain kinds of guarantees like the ones you describe. it's definitely important to understand when you do need them, but you often don't.
the datastore is implemented on top of megastore, which is described in depth in this paper. in short, it uses multi-version concurrency control within each entity group and Paxos for replication across datacenters, both of which can contribute to submarine writes. i don't know if there are public numbers on submarine write frequency in the datastore, but if there are, searches with these terms and on the datastore mailing lists should find them.
amazon's S3 isn't really a comparable system; it's more of a CDN than a distributed database. amazon's SimpleDB is comparable. it originally only provided eventual consistency, and eventually added a very limited kind of transactions they call conditional writes, but it doesn't have true transactions. other NoSQL databases (redis, mongo, couchdb, etc.) have different variations on transactions and consistency.
basically, there's always a tradeoff in distributed databases between scale, transaction breadth, and strength of consistency guarantees. this is best known by eric brewer's CAP theorem, which says the three axes of the tradeoff are consistency, availability, and partition tolerance.
The best way I came up with for making counters idempotent is to use a set instead of an integer to count. Thus, when a person "likes" something, instead of incrementing a counter I add the like to the thing, like this:
import java.util.HashSet;
import java.util.Set;

class Thing {
    // Who liked this thing; adding the same user twice has no effect.
    private final Set<User> likes = new HashSet<>();

    public void like(User u) {
        likes.add(u);
    }

    public Integer getLikeCount() {
        return likes.size();
    }
}
This is in Java, but I hope you get my point even if you are using Python.
This method is idempotent: you can add a single user as many times as you like, and it will only be counted once. Of course, it has the penalty of storing a potentially huge set instead of a simple counter. But hey, don't you need to keep track of likes anyway? If you don't want to bloat the Thing object, create another object ThingLikes, and cache the like count on the Thing object.
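For example, a retried request is harmless (currentUser() is a hypothetical lookup; User is assumed to implement equals/hashCode on its identity):
Thing post = new Thing();
User alice = currentUser(); // hypothetical lookup of the requesting user
post.like(alice);
post.like(alice);           // retried request after an apparent commit failure
post.getLikeCount();        // still 1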
another option worth looking into is app engine's built in cross-group transaction support, which lets you operate on up to five entity groups in a single datastore transaction.
if you prefer reading on stack overflow, this SO question has more details.

Are long high-volume transactions bad?

Hello,
I am writing a database application that does a lot of inserts and updates with a fake serialisable isolation level (snapshot isolation).
To avoid doing tonnes of network round trips, I'm batching inserts and updates into one transaction with PreparedStatements. They should fail very seldom because the inserts are pre-checked and nearly conflict-free with respect to other transactions, so rollbacks don't occur often.
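Roughly, the batching looks like this (a minimal JDBC sketch; the table, columns, and connection details are illustrative):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.List;

public class BulkInsert {
    public static void insertAll(List<String[]> rows) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password")) {
            conn.setAutoCommit(false); // one big transaction instead of one per statement
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO events (id, payload) VALUES (?, ?)")) {
                for (String[] row : rows) {
                    ps.setString(1, row[0]);
                    ps.setString(2, row[1]);
                    ps.addBatch();     // queue locally instead of a round trip per row
                }
                ps.executeBatch();     // send all queued inserts at once
            }
            conn.commit();             // single commit, so the WAL is flushed in larger chunks
        }
    }
}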
Having big transactions should be good for the WAL, because it can flush big chunks and doesn't have to flush for each mini transaction.
1.) I can only see positive effects of a big transaction. But I often read that they are bad. Why could they be bad in my use case?
2.) Is the checking for conflicts so expensive when the local snapshots are merged back into the real database? The database will have to compare all write sets of possibly conflicting (parallel) transactions. Or does it take some high-speed shortcut? Or is that quite cheap anyway?
[EDIT] It might be interesting if someone could bring some clarity into how a snapshot-isolation database checks whether transactions that overlap on the timeline have disjoint write sets. Because that's what the fake serializable isolation level is all about.
The real issues here are twofold. The first possible problem is bloat: large transactions can result in a lot of dead tuples showing up at once. The other possible problem is long-running transactions: as long as a long-running transaction is running, the tables it touches can't be vacuumed, so they can collect lots of dead tuples as well.
I'd say just use check_postgres.pl to check for bloat issues. As long as you don't see a lot of table bloat after your long transactions, you're OK.
1) The manual says that it is good: http://www.postgresql.org/docs/current/interactive/populate.html
I can also recommend: use COPY, remove indexes (but test first), increase maintenance_work_mem, increase checkpoint_segments, and run ANALYZE (or VACUUM ANALYZE) afterwards.
I would not recommend the following unless you are sure about them: removing foreign key constraints, disabling WAL archival and streaming replication.
2) Data are always merged on commit, but there are no checks; the data are just written. Read again: http://www.postgresql.org/docs/current/interactive/transaction-iso.html
If your inserts/updates do not depend on other inserts/updates, you don't need a "wholly consistent view". You can use read committed and the transaction will never fail.
