What is best practice for batching Grakn insert queries?
from the docs:
"Keep the number of operations per transaction minimal. Although it is technically possible to commit a write transaction once after many operations, it is not recommended. To avoid lengthy rollbacks, running out of memory and conflicting operations, it is best to keep the number of queries per transaction minimal, ideally to one query per transaction."
On the other hand, I've heard some recommend 500-1000 queries per commit.
What are the possible gains, bottlenecks and risks?
In general, we recommend using many small, lightweight transactions wherever possible. However, in Grakn versions < 2.0 there are still significant speed gains to be made by batching queries into larger transactions, each of which can insert 500-1000 concepts before committing.
In Grakn versions before 2.0, Grakn's storage system has relatively expensive transaction open and commit operations, which should be much lighter in 2.0 and onwards. By using small transactions, you also end up with smaller retry operations for the cases where transactions fail for any reason (for example, due to conflicts with other write transactions).
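As a rough sketch, batching in a pre-2.0 setup might look like the following. The client API shown (GraknClient, session.transaction().write(), Graql.parse(...).asInsert()) is assumed from a 1.x Java client and may differ in your version; the point is the pattern of committing every N insert queries rather than once per query or once at the very end.

```java
import java.util.List;
import grakn.client.GraknClient;   // package/class names may differ per client version
import graql.lang.Graql;

public class BatchLoader {
    private static final int BATCH_SIZE = 500;   // anywhere in the suggested 500-1000 range

    public static void load(GraknClient.Session session, List<String> insertQueries) {
        GraknClient.Transaction tx = session.transaction().write();
        int count = 0;
        for (String query : insertQueries) {
            tx.execute(Graql.parse(query).asInsert());
            if (++count % BATCH_SIZE == 0) {
                tx.commit();                          // commit the current batch
                tx = session.transaction().write();   // open a fresh transaction
            }
        }
        tx.commit();                                  // commit any remaining queries
    }
}
```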
I have a web application where I want to handle concurrency with a DB-level lock on the object I am trying to update. I want to make sure that a batch change, another user, or another process does not end up introducing inconsistency in the DB.
I see that isolation levels ensure read consistency, and an optimistic lock with a @Version field can ensure data is written in a consistent state.
My question is: can't we ensure consistency with the isolation level alone? By making any transaction that updates the record Serializable (not considering performance), won't I ensure that a proper lock is taken by that transaction, so that any other transaction trying to update or lock the same record will fail?
Do I really need version or timestamp management for this?
Depending on the isolation level you've chosen, a specific resource is going to be locked until the given transaction commits or rolls back - it can be a lock on a whole table, a row, or a block of SQL. This is pessimistic locking, and it is enforced at the database level while the transaction runs.
Optimistic locking, on the other hand, assumes that multiple transactions rarely interfere with each other, so no locks are required in this approach. It is an application-side check that uses a @Version attribute to establish whether the version of a record has changed between fetching it and attempting to update it.
It is reasonable to use the optimistic locking approach in web applications, as most operations span multiple HTTP requests. Usually you fetch some information from the database in one request and update it in another. It would be very expensive and unwise to keep transactions open with locks on database resources for that long. That's why we assume that nobody else is going to use the set of data we're working on - it's cheaper. If the assumption turns out to be wrong and the version has changed between requests because of someone else, Hibernate won't update the row and will throw an OptimisticLockingException. As a developer, you are responsible for handling this situation.
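For context, a minimal JPA/Hibernate sketch of the @Version approach described above (entity and field names are just examples):

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Item {

    @Id
    private Long id;

    private String description;

    @Version               // checked and incremented automatically on every update
    private Long version;

    // getters and setters omitted
}
```

If two requests load the row at version 5 and both try to write it back, the second UPDATE carries WHERE version = 5, matches no rows, and the provider raises the optimistic locking exception mentioned above.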
Simple example. An online auction service - you're viewing an item page. You read its description and specification. All of it takes, let's say, 5 minutes. With pessimistic locking and some isolation levels you'd block other users from this particular item page (or even all of the items!). With optimistic locking everybody can access it. After reading about the item you decide to bid on it, so you click the appropriate button. If any other user watching this item changed its state in the meantime (the owner changed its description, someone else bid on it), you will probably (depending on the app implementation) be informed about the changes before the application accepts your bid, because the version you've got is not the same as the version persisted in the database.
Hope that clarifies a few things for you.
Unless we are talking about some small, isolated web application (the only app working on the database), making all of your transactions Serializable would mean having a lot of confidence in your design, while ignoring the fact that yours may not be the only application hitting that particular database.
In my opinion, incorporating the Serializable isolation level, or in other words a pessimistic lock, should be a very well-thought-out decision, applied for:
Large databases and short transactions that update only a few rows.
Cases where the chance that two concurrent transactions will modify the same rows is relatively low.
Cases where relatively long-running transactions are primarily read-only.
Based on my experience, in most cases using just optimistic locking is the most beneficial decision, as concurrent modification of the same data happens in only a small percentage of cases.
Optimistic locking definitely also helps other applications run faster (don't think only of yourself!).
So on the pessimistic-optimistic locking spectrum, in my opinion the truth lies closer to optimistic locking, with a flavor of Serializable here and there.
I really cannot reference anything here, as the answer is based on my personal experience with many complex web projects and on my notes from when I was preparing for my JPA certificate.
Hope that helps.
I'm building a system which has the potential to require support for 500+ concurrent users, each making dozens of queries (selects, inserts AND updates) each minute. Based on these requirements and tables with many millions of rows I suspect that there will be the need to use database replication in the future to reduce some of the query load.
Having not used replication in the past, I am wondering if there is anything I need to consider in the schema design?
For instance, I was once told that it is necessary to use GUIDs for primary keys to enable replication. Is this true?
What special considerations or best practices for database design are there for a database that will be replicated?
Due to time constraints on the project I don't want to waste any time by implementing replication when it may not be needed. (I have enough definite problems to overcome at the moment without worrying about having to solve possible ones.) However, I don't want to have to make potentially avoidable schema changes when/if replication is required in the future.
Any other advice on this subject, including good places to learn about implementing replication, would also be appreciated.
While every replicated table must have a rowguid column, you are not required to use a GUID for your primary key. In reality, you aren't even required to have a primary key (though you will be stoned to death for failing to create one). Even if you define your primary key as a GUID, not making it the rowguid column will result in Replication Services creating an additional column for you. You can definitely do this, and it's not a bad idea, but it is by no means necessary nor particularly advantageous.
Here are some tips:
Keep table (or, rather, row) sizes small; unless you use column-level replication, you'll be downloading/uploading the entire contents of a row, even if only one column changes. Additionally, smaller tables make conflict resolution both easier and less frequent.
Don't use sequential or deterministic algorithm-driven primary keys. This includes identity columns. Yes, Replication Services will handle identity columns and allocating key allotments by itself, but it's a headache that you don't want to deal with. This alone is a great argument for using a Guid for your primary key.
Don't let your applications perform needless updates. This is obviously a bad idea to begin with, but this issue is made exponentially worse in replication scenarios, both from a bandwidth usage and a conflict resolution perspective.
You may want to use GUIDs for primary keys - in a replicated system rows must be unique throughout your entire topology, and GUID PKs is one way of achieving this.
Here's a short article about use of GUIDs in SQL Server
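For illustration, one common way to get topology-wide unique keys is to generate them in the application layer. A tiny sketch (the Order class is hypothetical):

```java
import java.util.UUID;

public class Order {
    // Generated client-side, so rows created on different replicas never collide.
    private final UUID id = UUID.randomUUID();
    private String customer;

    public UUID getId() { return id; }
}
```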
I'd say your real question is not how to handle replication, but how to handle scale out, or at least scale out for queryability. And while there are various answers to this conundrum, one answer will stand out: not using replication.
The problem with replication, especially with merge replication, is that writes get multiplied. Say you have a system which handles a load of 100 queries (90 reads and 10 writes) per second. You want to scale out and you choose replication. Now you have 2 systems, each handling 50 queries: 45 reads and 5 writes each. Those writes have to be replicated, so the actual number of writes is not 5+5, but 5+5 (the original writes) plus another 5+5 (the replica writes), so you have 90 reads and 20 writes. While the load on each system was reduced, the ratio of writes to reads has increased. This not only changes the IO patterns, but most importantly it changes the concurrency pattern of the load. Add a third system and you'll have 90 reads and 30 writes, and so on and so forth. Soon you'll have more writes than reads, and the replication update latency combined with the concurrency issues and merge conflicts will derail your project. The gist of it is that 'soon' is much sooner than you expect. It is soon enough to justify looking into scaling up instead, since you're talking about a scale out of 6-8 peers at best anyway, and a 6-8 times capacity increase through scaling up will be faster, much simpler, and possibly even cheaper to start with.
And keep in mind that these are just purely theoretical numbers. In practice, the replication infrastructure is not free; it adds its own load on the system. Writes need to be tracked, changes have to be read, a distributor has to exist to store changes until they are distributed to subscribers, and then changes have to be written and mediated for possible conflicts. That's why I've seen very few deployments that could claim success with a replication-based scale-out strategy.
One alternative is to scale out only the reads, and here replication does work, usually using transactional replication, but so does log-shipping or mirroring with a database snapshot.
The real alternative is partitioning (i.e. sharding). Requests are routed in the application to the proper partition and land on the server containing the appropriate data. Changes on one partition that need to be reflected on another partition are shipped via asynchronous (usually messaging-based) means. Data can only be joined within a partition. For a more detailed discussion of what I'm talking about, read how MySpace does it. Needless to say, such a strategy has a major impact on the application design and cannot simply be glued in after v1.
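To make the routing idea concrete, here is a minimal sketch of application-side partitioning, assuming a fixed list of shard connection URLs; the ShardRouter class and connectionFor method are hypothetical names, not from any particular library:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.List;

public class ShardRouter {
    private final List<String> shardUrls;   // one JDBC URL per partition

    public ShardRouter(List<String> shardUrls) {
        this.shardUrls = shardUrls;
    }

    /** Route a user id to its shard by a stable hash; all of that user's data lives there. */
    public Connection connectionFor(long userId) throws SQLException {
        int shard = (int) Math.floorMod(userId, (long) shardUrls.size());
        return DriverManager.getConnection(shardUrls.get(shard));
    }
}
```

Joins only work within one shard, so the partition key (here the user id) has to be chosen so that related data lands together.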
I want to know: what is the best way to arrive at the right isolation level for a transaction?
This is a good link describing the available isolation levels.
It would be nice if someone could explain the various isolation levels of a transaction.
Update: Clarified and corrected explanation.
Isolation levels just indicate how much of your transaction is affected by other concurrent transactions. The higher the isolation level, the less affected it is.
The effort will show up in CPU load, memory load, and perhaps commit latency. In addition, write conflicts are more likely at higher isolation levels, which may mean that you have to abort your transaction and retry the whole thing. (This only affects transactions that perform updates or inserts, not transactions that only perform selects.)
In general, the rule of thumb is to use the lowest level that gives your application the consistency it needs.
The partial transaction isolation provided by Read Committed mode is adequate for many applications, and this mode is fast and simple to use; however, it is not sufficient for all cases. Applications that do complex queries and updates might require a more rigorously consistent view of the database than Read Committed mode provides.
The Serializable mode provides a rigorous guarantee that each transaction sees a wholly consistent view of the database. However, the application has to be prepared to retry transactions when concurrent updates make it impossible to sustain the illusion of serial execution. Since the cost of redoing complex transactions can be significant, serializable mode is recommended only when updating transactions contain logic sufficiently complex that they might give wrong answers in Read Committed mode. Most commonly, Serializable mode is necessary when a transaction executes several successive commands that must see identical views of the database.
( http://www.postgresql.org/docs/8.4/interactive/transaction-iso.html is very nice. )
If you're not sure about the differences in isolation levels, then stick to the default. Changing the level can have peculiar side-effects. 99% of applications are fine with the default.
The default, I think, varies with each JDBC driver, although some frameworks like JPA may enforce one; I can't recall offhand. The most common default is read_committed, because it gives the best general-purpose balance between transactional safety and concurrency. If you pick a different isolation level, you sacrifice either safety or concurrency, and you have to be aware of the compromise.
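For reference, setting the level explicitly through plain JDBC looks roughly like this (the connection URL and credentials are placeholders):

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class IsolationDemo {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password")) {
            conn.setAutoCommit(false);
            // The usual default; raise to TRANSACTION_SERIALIZABLE only where the logic needs it.
            conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
            // ... run queries and updates here ...
            conn.commit();
        }
    }
}
```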
What the heck is the question?!
Isolation levels define the lock type and lock granularity used by the DBMS. Locking is essential in the context of a DBMS, as transactions are executed concurrently, potentially by many users. Higher transaction isolation, such as SERIALIZABLE, is safer - you can eliminate dirty reads and phantom reads - but it imposes a penalty: serialized transactions limit concurrency and therefore scalability.
What to do? Architect the application such that the logic limits the possibility of "bad data" by judiciously using serialized transactions when they're absolutely needed, but not such that concurrency is unnecessarily hampered.
Transaction Isolation Levels are about solving data reading problems in concurrent transactions (when, within one transaction, we read the same data that another transaction changes at the same time).
There are 4 isolation levels. Each level solves one related problem plus the problems of all previous levels:
1. Read Uncommitted - solves Lost Update: only the last of the concurrent transactions affects the data; the impact of the other transactions is lost.
2. Read Committed - solves Dirty Read: the data read was changed by a transaction that was then rolled back.
3. Repeatable Read - solves Non-repeatable Read: a second read of the same data gives a result that differs from the first read, because the data was changed by another transaction between the reads.
4. Serializable - solves Phantom Read: a second select with the same parameters returns a different set of rows than the first one, because the data was changed by another transaction between the reads.
Isolation levels refer to the DB layer.
Isolation levels are a tradeoff between data accuracy and performance: higher level gives higher accuracy, but lower speed.
The default database level is usually Read Committed (PostgreSQL) or Repeatable Read (MySQL).
I understand the differences between optimistic and pessimistic locking. Now, could someone explain to me when I would use either one in general?
And does the answer to this question change depending on whether or not I'm using a stored procedure to perform the query?
But just to check, optimistic means "don't lock the table while reading" and pessimistic means "lock the table while reading."
Optimistic Locking is a strategy where you read a record, take note of a version number (other methods to do this involve dates, timestamps or checksums/hashes), and check that the version hasn't changed before you write the record back. When you write the record back, you filter the update on the version to make sure it's atomic (i.e. the record hasn't been updated between when you checked the version and when you write it to disk) and update the version in one hit.
If the record is dirty (i.e. different version to yours) you abort the transaction and the user can re-start it.
This strategy is most applicable to high-volume systems and three-tier architectures where you do not necessarily maintain a connection to the database for your session. In this situation the client cannot actually maintain database locks as the connections are taken from a pool and you may not be using the same connection from one access to the next.
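A plain-JDBC sketch of the version-filtered update described above (table and column names such as account, balance and version are just examples):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class OptimisticUpdate {
    /** Returns true if the write won; false means another session changed the row first. */
    public static boolean updateBalance(Connection conn, long id,
                                        long expectedVersion, int newBalance) throws SQLException {
        String sql = "UPDATE account SET balance = ?, version = version + 1 "
                   + "WHERE id = ? AND version = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, newBalance);
            ps.setLong(2, id);
            ps.setLong(3, expectedVersion);
            return ps.executeUpdate() == 1;   // 0 rows updated = stale version: abort and let the user retry
        }
    }
}
```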
Pessimistic Locking is when you lock the record for your exclusive use until you have finished with it. It has much better integrity than optimistic locking but requires you to be careful with your application design to avoid Deadlocks. To use pessimistic locking you need either a direct connection to the database (as would typically be the case in a two tier client server application) or an externally available transaction ID that can be used independently of the connection.
In the latter case you open the transaction with the TxID and then reconnect using that ID. The DBMS maintains the locks and allows you to pick the session back up through the TxID. This is how distributed transactions using two-phase commit protocols (such as XA or COM+ Transactions) work.
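For comparison, a plain-JDBC sketch of pessimistic locking using SELECT ... FOR UPDATE (supported by e.g. MySQL, PostgreSQL and Oracle); table and column names are again just examples, and the row lock is held until commit or rollback, so the transaction should stay short:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PessimisticUpdate {
    public static void withdraw(Connection conn, long id, int amount) throws SQLException {
        conn.setAutoCommit(false);
        try {
            int balance;
            try (PreparedStatement lock = conn.prepareStatement(
                    "SELECT balance FROM account WHERE id = ? FOR UPDATE")) {
                lock.setLong(1, id);
                try (ResultSet rs = lock.executeQuery()) {
                    if (!rs.next()) throw new SQLException("no such account");
                    balance = rs.getInt("balance");   // other writers now block on this row
                }
            }
            try (PreparedStatement upd = conn.prepareStatement(
                    "UPDATE account SET balance = ? WHERE id = ?")) {
                upd.setInt(1, balance - amount);
                upd.setLong(2, id);
                upd.executeUpdate();
            }
            conn.commit();                            // releases the row lock
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}
```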
When dealing with conflicts, you have two options:
You can try to avoid the conflict, and that's what Pessimistic Locking does.
Or, you could allow the conflict to occur, but you need to detect it upon committing your transactions, and that's what Optimistic Locking does.
Now, let's consider the following Lost Update anomaly:
The Lost Update anomaly can happen in the Read Committed isolation level.
In the diagram above we can see that Alice believes she can withdraw 40 from her account but does not realize that Bob has just changed the account balance, and now there are only 20 left in this account.
Pessimistic Locking
Pessimistic locking achieves this goal by taking a shared or read lock on the account so Bob is prevented from changing the account.
In the diagram above, both Alice and Bob will acquire a read lock on the account table row that both users have read. On SQL Server, the database acquires these locks when using Repeatable Read or Serializable.
Because both Alice and Bob have read the account with the PK value of 1, neither of them can change it until one user releases the read lock. This is because a write operation requires a write/exclusive lock acquisition, and shared/read locks prevent write/exclusive locks.
Only after Alice has committed her transaction and the read lock on the account row has been released will Bob's UPDATE resume and apply the change. Until Alice releases the read lock, Bob's UPDATE blocks.
Optimistic Locking
Optimistic Locking allows the conflict to occur but detects it upon applying Alice's UPDATE as the version has changed.
This time, we have an additional version column. The version column is incremented every time an UPDATE or DELETE is executed, and it is also used in the WHERE clause of the UPDATE and DELETE statements. For this to work, we need to issue the SELECT and read the current version prior to executing the UPDATE or DELETE, as otherwise, we would not know what version value to pass to the WHERE clause or to increment.
Application-level transactions
Relational database systems emerged in the late '70s and early '80s, when a client would typically connect to a mainframe via a terminal. That's why we still see database systems define terms such as SESSION settings.
Nowadays, over the Internet, we no longer execute reads and writes in the context of the same database transaction, and ACID is no longer sufficient.
For instance, consider the following use case:
Without optimistic locking, there is no way this Lost Update would have been caught even if the database transactions used Serializable. This is because reads and writes are executed in separate HTTP requests, hence on different database transactions.
So, optimistic locking can help you prevent Lost Updates even when using application-level transactions that incorporate the user-think time as well.
Conclusion
Optimistic locking is a very useful technique, and it works just fine even when using less-strict isolation levels, like Read Committed, or when reads and writes are executed in subsequent database transactions.
The downside of optimistic locking is that a rollback will be triggered by the data access framework upon catching an OptimisticLockException, therefore losing all the work done so far by the currently executing transaction.
The more contention, the more conflicts, and the greater the chance of aborting transactions. Rollbacks can be costly for the database system as it needs to revert all current pending changes which might involve both table rows and index records.
For this reason, pessimistic locking might be more suitable when conflicts happen frequently, as it reduces the chance of rolling back transactions.
Optimistic locking is used when you don't expect many collisions. It costs less to do a normal operation, but if a collision DOES occur you pay a higher price to resolve it, as the transaction is aborted.
Pessimistic locking is used when a collision is anticipated. The transactions which would violate synchronization are simply blocked.
To select the proper locking mechanism, you have to estimate the number of reads and writes and plan accordingly.
Optimistic assumes that nothing's going to change while you're reading it.
Pessimistic assumes that something will and so locks it.
If it's not essential that the data you read is perfectly current, use optimistic. You might get the odd 'dirty' read - but it's far less likely to result in deadlocks and the like.
Most web applications are fine with dirty reads - on the rare occasion the data doesn't exactly tally the next reload does.
For exact data operations (like in many financial transactions) use pessimistic. It's essential that the data is accurately read, with no un-shown changes - the extra locking overhead is worth it.
Oh, and Microsoft SQL Server defaults to page locking - basically locking the row you're reading and a few on either side. Row locking is more accurate but much slower. It's often worth setting your transactions to read-committed or no-lock to avoid deadlocks while reading.
I would think of one more case when pessimistic locking would be a better choice.
For optimistic locking, every participant in data modification must agree to use this kind of locking. If someone modifies the data without taking care of the version column, this spoils the whole idea of optimistic locking.
There are basically two popular answers here. The first one says:
Optimistic locking suits three-tier architectures where you do not necessarily maintain a connection to the database for your session, whereas pessimistic locking is when you lock the record for your exclusive use until you have finished with it. It has much better integrity than optimistic locking, but it requires a direct connection to the database.
Another answer is
optimistic (versioning) is faster because there is no locking, but (pessimistic) locking performs better when contention is high, and it is better to prevent the work than to discard it and start over.
or
Optimistic locking works best when you have rare collisions
As it is put on this page.
I created my answer to explain how "keep connection" is related to "low collisions".
To understand which strategy is best for you, think not about the transactions per second your DB handles but about the duration of a single transaction. Normally, you open a transaction, perform an operation, and close the transaction. This is the short, classical transaction ANSI had in mind, and it is fine to get away with locking there. But how do you implement a ticket reservation system where many clients reserve the same rooms/seats at the same time?
You browse the offers and fill in the form with lots of available options and current prices. It takes a lot of time, and the options can become obsolete and the prices invalid between the moment you started filling in the form and the moment you press the "I agree" button, because there was no lock on the data you accessed and somebody else, more agile, has interfered and changed all the prices, so you need to restart with the new prices.
You could lock all the options as you read them instead. This is the pessimistic scenario. You see why it sucks. Your system can be brought down by a single clown who simply starts a reservation and goes out for a smoke. Nobody can reserve anything before he finishes. Your cash flow drops to zero. That is why optimistic reservations are used in reality. Those who dawdle too long have to restart their reservation at higher prices.
In this optimistic approach you have to record all the data that you read (as in Repeatable Read) and come to the commit point with your version of the data (I want to buy shares at the price you displayed in this quote, not the current price). At this point, an ANSI transaction is created, which locks the DB, checks that nothing has changed, and commits/aborts your operation. IMO, this is an effective emulation of MVCC, which is also associated with optimistic CC and also assumes that your transaction restarts in case of abort, that is, you will make a new reservation. A transaction here involves human user decisions.
I am far from understanding how to implement MVCC manually, but I think that long-running transactions with the option of restart are the key to understanding the subject. Correct me if I am wrong anywhere. My answer was motivated by this Alex Kuznecov chapter.
In most cases, optimistic locking is more efficient and offers higher performance. When choosing between pessimistic and optimistic locking, consider the following:
Pessimistic locking is useful if there are a lot of updates and relatively high chances of users trying to update data at the same time. For example, if each operation can update a large number of records at a time (the bank might add interest earnings to every account at the end of each month), and two applications are running such operations at the same time, they will have conflicts.
Pessimistic locking is also more appropriate in applications that contain small tables that are frequently updated. In the case of these so-called hotspots, conflicts are so probable that optimistic locking wastes effort in rolling back conflicting transactions.
Optimistic locking is useful if the possibility for conflicts is very low – there are many records but relatively few users, or very few updates and mostly read-type operations.
One use case for optimistic locking is to have your application use the database to allow one of your threads / hosts to 'claim' a task. This is a technique that has come in handy for me on a regular basis.
The best example I can think of is a task queue implemented using a database, with multiple threads claiming tasks concurrently. If a task has a status of 'Available', 'Claimed', or 'Completed', a db query can say something like "SET status='Claimed' WHERE status='Available'". If multiple threads try to change the status in this way, all but the first thread will fail because of dirty data.
Note that this is a use case involving only optimistic locking. So as an alternative to saying "Optimistic locking is used when you don't expect many collisions", it can also be used where you expect collisions but want exactly one transaction to succeed.
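A JDBC sketch of that claim pattern, where the WHERE clause doubles as the optimistic check (table and column names such as task, status and claimed_by are just examples):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TaskClaimer {
    /** Returns true only for the one worker whose UPDATE actually matched the row. */
    public static boolean claim(Connection conn, long taskId, String workerId) throws SQLException {
        String sql = "UPDATE task SET status = 'Claimed', claimed_by = ? "
                   + "WHERE id = ? AND status = 'Available'";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, workerId);
            ps.setLong(2, taskId);
            return ps.executeUpdate() == 1;   // 0 means another worker claimed it first
        }
    }
}
```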
A lot of good things have been said above about optimistic and pessimistic locking.
One important point to consider is as follows:
When using optimistic locking, we need to be cautious about how the application will recover from these failures.
Especially in asynchronous, message-driven architectures, this can lead to out-of-order message processing or lost updates.
Failure scenarios need to be thought through.
Let's say in an e-commerce app, a user wants to place an order. This code will get executed by multiple threads. In pessimistic locking, when we get the data from the DB, we lock it so no other thread can modify it. We process the data, update it, and then commit. Only after that do we release the lock. The locking duration is long here: we have locked the database record from the beginning until the commit.
In optimistic locking, we read and process the data without locking, so multiple threads can execute that part of the code concurrently, which speeds things up. Only when we update do we check the data: we have to verify that no other thread updated that record in the meantime. For example, if we had 100 items in inventory, our code might set it to 99 (because the code does quantity = quantity - 1), but if another thread already used one, it should be 98. We have a race condition here. In this case, we restart the thread, i.e. we execute the same code from the beginning. But this is an expensive operation: you already got to the end and then have to restart. If we had only a few race conditions, that would not be a big deal, but if contention is high, there will be a lot of threads restarting, and we might end up looping. If contention is high, we should be using pessimistic locking.
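A sketch of that retry loop with a compare-and-set style UPDATE; table and column names (inventory, quantity) and the retry limit are illustrative:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class InventoryService {
    public static boolean reserveOne(Connection conn, long itemId) throws SQLException {
        String readSql  = "SELECT quantity FROM inventory WHERE id = ?";
        String writeSql = "UPDATE inventory SET quantity = ? WHERE id = ? AND quantity = ?";
        for (int attempt = 0; attempt < 5; attempt++) {           // give up after a few conflicts
            int current = 0;
            try (PreparedStatement read = conn.prepareStatement(readSql)) {
                read.setLong(1, itemId);
                try (ResultSet rs = read.executeQuery()) {
                    if (!rs.next()) return false;                 // no such item
                    current = rs.getInt(1);
                }
            }
            if (current <= 0) return false;                       // sold out
            try (PreparedStatement write = conn.prepareStatement(writeSql)) {
                write.setInt(1, current - 1);
                write.setLong(2, itemId);
                write.setInt(3, current);                         // optimistic check: quantity unchanged?
                if (write.executeUpdate() == 1) return true;      // we won the race
            }
            // another thread changed the quantity first; loop and retry with fresh data
        }
        return false;   // heavy contention: a pessimistic SELECT ... FOR UPDATE may fit better
    }
}
```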
Optimistic locking means an exclusive lock is not taken when reading a row, so lost updates or write skew are not prevented. So, use optimistic locking:
If lost updates or write skew don't occur.
Or, if there is no problem even if lost updates or write skew occur.
Pessimistic locking means an exclusive lock is taken when reading a row, so lost updates and write skew are prevented. So, use pessimistic locking:
If lost updates or write skew do occur.
Or, if there are problems when lost updates or write skew occur.
In MySQL and PostgreSQL, you can take an exclusive lock with SELECT FOR UPDATE.
You can check my answer for lost update and write skew examples with optimistic locking (without SELECT FOR UPDATE) and pessimistic locking (with SELECT FOR UPDATE) in MySQL.
On a more practical note, when updating a distributed system, optimistic locking in the DB may be inadequate to provide the consistency needed across all parts of the distributed system.
For example, in applications built on AWS, it is common to have data in both a DB (e.g. DynamoDB) and a storage service (e.g. S3). If an update touches both DynamoDB and S3, optimistic locking in DynamoDB could still leave the data in S3 inconsistent. In this type of case, it is probably safer to use a pessimistic lock that is held in DynamoDB until the S3 update is finished. In fact, AWS provides a locking library for this purpose.
Optimistic locking and pessimistic locking are two models for locking data in a database.
Optimistic locking: where a record is locked only when changes are committed to the database.
Pessimistic locking: where a record is locked while it is edited.
Note: in both data-locking models, the lock is released after the changes are committed to the database.