Why is concurrent execution of transactions desirable? - sql-server

Can somebody please explain to me why concurrent execution of transactions is desirable? I couldn't find a clear answer even though I searched for hours. Thank you.

why concurrent execution of transactions is desirable
Without concurrent execution of transactions, your application throughput is limited to the inverse of your transaction duration.
E.g. if each transaction takes 50 ms, your throughput is limited to 20 transactions per second, which means you can't perform a lot of transactions or support a large number of users.
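The ceiling is just the reciprocal of the transaction duration; a quick sanity check (the 50 ms figure is the example value from above, not a measurement):

```python
# Serial throughput ceiling: with no concurrency, at most 1/duration
# transactions can complete per second.
tx_duration_s = 0.050  # 50 ms per transaction, as in the example above
max_serial_tps = 1 / tx_duration_s
print(max_serial_tps)  # 20.0 transactions per second
```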

Concurrency does not help in the single-core/single-node case (except when reading from disk takes a lot of time that could instead be used to process something else).
Because at any time you can only run one transaction, you can run all the transactions one after the other, without any interleaving of operations between different transactions, and get the same performance.
In the multi-core/multi-node case, you can make use of parallel processing and run multiple transactions in parallel.
If you run the transactions one by one, you can't make use of the multiple cores/nodes. So we need to run transactions in parallel on the multiple cores/nodes, and make sure concurrency control protocols are in place to keep the database consistent when multiple transactions run concurrently.
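A toy sketch of the interleaving payoff, simulating transactions whose time is dominated by I/O waits (a sleep stands in for disk/network time; nothing here is a real database):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction(i):
    # Simulate a 50 ms transaction dominated by I/O waits (assumption).
    time.sleep(0.05)
    return i

# One after the other: roughly 10 * 50 ms.
start = time.perf_counter()
for i in range(10):
    transaction(i)
serial = time.perf_counter() - start

# Interleaved: all ten sleeps overlap, roughly 50 ms total.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=10) as pool:
    list(pool.map(transaction, range(10)))
concurrent = time.perf_counter() - start

print(f"serial {serial:.2f}s, concurrent {concurrent:.2f}s")
```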

Related

Snowflake as backend for high demand API

My team and I have been using Snowflake daily for the past eight months to transform/enrich our data (with DBT) and make it available in other tools.
While the platform seems great for heavy, long-running queries on large datasets and for powering analytics tools such as Metabase and Mode, it just doesn't seem to behave well when we need to run really small queries ("grab me one line of table A") behind a high-demand API. Snowflake sometimes takes as much as 100 ms or even 300 ms on an XLARGE-2XLARGE warehouse to fetch one row from a fairly small table (200k computed records/aggregates); added to the network latency, that makes for a very poor setup when we want to use it as a backend to power a high-demand analytics API.
We've tested multiple setups with Node.js + Fastify, as well as Python + FastAPI, with connection pooling (10-20-50-100) and without connection pooling (one connection per request, not ideal at all), deployed in the same AWS region as our SF deployment. Yet we weren't able to sustain anything close to 50-100 requests/sec with 1 s latency (acceptable); we were only able to get 10-20 requests/sec with as much as 15-30 s latency. Both languages/frameworks behave well on their own, even with just acquiring/releasing connections; what actually takes the longest and demands a lot of I/O is actually running the queries and waiting for a response. We've yet to try a Golang setup, but it all seems to boil down to how quickly Snowflake can return results for such queries.
We'd really like to use Snowflake as the database to power a read-only REST API that is expected to receive something like 300 requests/second, while keeping response times in the neighborhood of 1 s. (But we are also ready to accept that it was just not meant for that.)
Is anyone using Snowflake in a similar setup? What is the best tool/config to get the most out of Snowflake in such conditions? Should we spin up many servers and hope that we'll get to a decent request rate? Or should we just copy transformed data over to something like Postgres to be able to have better response times?
I don't claim to be the authoritative answer on this, so people can feel free to correct me, but:
At the end of the day, you're trying to use Snowflake for something it's not optimized for. First, I'm going to run SELECT 1; to demonstrate the lower bound of latency you can ever expect to see. The result takes 40 ms to return. Looking at the breakdown, that is 21 ms in the query compiler and 19 ms to execute. The compiler is designed to come up with really smart ways to process huge, complex queries, not to compile small simple queries quickly.
After it has its query plan, it must find worker node(s) to execute it on. A virtual warehouse is a collection of worker nodes (servers/cloud VMs), with each warehouse size being a function of how many worker nodes it has, not necessarily the VM size of each worker (e.g. EC2 instance size). So the compiled query gets sent off to a different machine to be run, where a worker process is spun up. Like the query planner, the worker process is not likely optimized to run small queries quickly, so the spin-up and tear-down of that process might be significant (at least relative to, say, a PostgreSQL worker process).
Putting my SELECT 1; example aside in favor of a "real" query, let's talk caching. First, Snowflake does not buffer tables in memory the way a typical RDBMS does; RAM is reserved for compute resources. This makes sense, since in traditional usage you're dealing with tables many GBs to TBs in size, so a typical LRU cache would purge that data before it was ever accessed again anyway. This means a trip to an SSD must occur. This is where your performance starts to depend on how homogeneous/heterogeneous your API queries are. If you're lucky, you get a cache hit on SSD; otherwise it's off to S3 to get your table files. Table files are not redundantly cached across all worker nodes, so while the query planner will try to schedule a computation on the node most likely to have the needed files in cache, there is no guarantee that a subsequent query will benefit from the cache left by the first query if it is assigned to a different worker node. The likelihood of this happening increases if you're firing hundreds of queries per second at the warehouse.
Lastly (and this could be the bulk of your problem, but I have saved it for last since I am the least certain about it): a small query can run on a subset of the workers in a virtual warehouse. In this case the warehouse can run different queries concurrently on different nodes. BUT, I am not sure whether a given worker node can process more than one query at once. If it can't, your concurrency will be limited by the number of nodes in the warehouse; e.g. a warehouse with 10 worker nodes can run at most 10 queries in parallel, and what you're seeing is queries piling up at the query planner stage while it waits for worker nodes to free up.
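If that last theory holds, a back-of-the-envelope capacity check makes the observed pile-up plausible (the node count and per-query time below are assumptions for illustration, not measured Snowflake values):

```python
# Rough capacity model: N worker nodes, one query per node at a time (assumption).
nodes = 10           # hypothetical worker nodes in the warehouse
query_time_s = 0.3   # hypothetical per-query execution time (upper end reported)
max_qps = nodes / query_time_s
# Roughly 33 queries/second ceiling; offered load beyond that queues up,
# which would explain multi-second latencies at 100+ requests/second.
print(max_qps)
```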
Maybe for this type of workload, the new SF feature Search Optimization Service could help you speed up performance ( https://docs.snowflake.com/en/user-guide/search-optimization-service.html ).
I have to agree with @Danny C that Snowflake is NOT designed for very low (sub-second) latency on single queries.
To demonstrate this consider the following SQL statements (which you can execute yourself):
create or replace table customer as
select *
from SNOWFLAKE_SAMPLE_DATA.TPCH_SF1.CUSTOMER
limit 500000;
-- Execution time 840ms
create or replace table customer_ten as
select *
from SNOWFLAKE_SAMPLE_DATA.TPCH_SF1.CUSTOMER
limit 10;
-- Execution time 431ms
I just ran this on an XSMALL warehouse, and it demonstrates that currently (November 2022) Snowflake can copy HALF A MILLION ROWS in 840 milliseconds - yet still takes 431 ms to copy just 10 rows.
Why is Snowflake so slow compared (for example) to Oracle 11g on premises?
Well, here's what Snowflake has to complete:
Compile the query and produce an efficient execution plan (plans are not currently cached as they often lead to a sub-optimal plan being executed on data which has significantly increased in volume)
Resume a virtual warehouse (if suspended)
Execute the query and write results to cloud storage
Synchronously replicate the data to two other data centres (typically a few miles apart)
Return OK to the user
Oracle, on the other hand, needs to:
Compile the query (if the query plan is not already cached)
Execute the query
Write results to local disk
If you REALLY want sub-second query performance on SELECT, INSERT, UPDATE and DELETE on Snowflake - it's coming soon. Just check out Snowflake Unistore and Hybrid Tables Explained
Hope this helps.

Two Phase Commit and merging execution and prepare phase

As far as I understand, before two-phase commit even runs, a round trip of communication is needed to send the transaction to each site. Each site executes its part of the transaction, and when the coordinator gets a response from all sites, it runs two-phase commit. This initiates the prepare phase, etc.
Why is it necessary for the prepare phase to be separate from the execution that precedes two-phase commit? Is there a reason for not merging execution and the prepare phase, thus cutting out a round trip of communication cost?
This is a followup to my previous question.
There are several good reasons for doing it like this:
Operations may require data from other sites. How would you implement a swap(a,b) operation between data items at different sites if you merge execution and prepare phases?
The coordinator is likely to become a performance bottleneck. Merging the execution and prepare phases would involve it in relaying application data, further overloading it.
Transparency and encapsulation. Note that code between begin/commit in the reply to your previous question (i.e., business logic) is not concerned with distributed transactions at all. It doesn't need to know which sites, or even how many, will be involved! It invokes arbitrary procedures that may be (or not...) remote calls to unknown sites. To merge execution and prepare you'd have to explicitly package your application logic in independent callbacks for each participant.
Moreover, when thinking about performance with persistence involved, you should be concerned about round trips that imply flushing the log. Invocations during the execution phase don't need to flush the log, and thus should be much cheaper than the prepare and commit phases, which do.
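The protocol as described can be sketched as follows (the participants are plain local objects standing in for remote sites; a real implementation adds timeouts, retries, and a log flush at each phase):

```python
# Minimal two-phase-commit coordinator sketch (assumption: participants are
# local objects; in a real system these calls would be network RPCs).

class Participant:
    def __init__(self, name, will_commit=True):
        self.name = name
        self.will_commit = will_commit
        self.state = "executing"

    def prepare(self):
        # Phase 1: the site durably records its vote before replying.
        self.state = "prepared" if self.will_commit else "aborted"
        return self.will_commit

    def commit(self):
        self.state = "committed"

    def abort(self):
        self.state = "aborted"

def two_phase_commit(participants):
    # Phase 1: collect votes from every site.
    if all(p.prepare() for p in participants):
        # Phase 2: everyone voted yes, so the decision is now irrevocable.
        for p in participants:
            p.commit()
        return "committed"
    # Any "no" vote (or, in a real system, a timeout) aborts everywhere.
    for p in participants:
        p.abort()
    return "aborted"
```

Note that the execution phase (the business logic that ran before this) never appears here; that separation is exactly the transparency point made above.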

Slow XA transactions in JBoss

We are running JBoss 4.2.2 with SQL Server 2005 (sqljdbc driver 1.2).
We have recently installed New Relic and can see a large bottleneck in our transactions.
Generally, for any one web request, the bottleneck sits on one of these:
master..xp_sqljdbc_xa_start
master..xp_sqljdbc_xa_commit
org.jboss.resource.adapter.jdbc.WrapperDataSource.getConnection()
master..xp_sqljdbc_xa_end
Several hundred ms are spent on one of these items (in some cases several seconds). Cumulatively most of the response time is spent on these items.
I'm trying to identify whether it's any of the following:
Will moving away from XA transactions help?
Is there a larger problem at my database that I dont have visibility over?
Can I upgrade my SQL driver to help with this?
Or is this an indication that there are just a lot of queries, and we should start by looking at our code, and trying to lower the number of queries overall?
XA transactions are necessary if you are performing work against more than one resource in a single transaction; if you need consistency, then you need XA. However, you are talking in terms of "queries", which might imply that you are mostly doing read-only activities, in which case XA may be overkill. Furthermore, you don't mention using multiple databases or other transactional resources, so do you really need XA at all?
So, first step: understand the requirements. Do you need transaction scopes that span several database interactions? If you are just doing a single query, then XA is not needed. If you have a mixture of activities needing XA and simple queries not needing XA, then use two different connections, one with XA and one without; this clarifies your intention. However, I would expect XA drivers to use single-resource optimisations so that if XA is not needed you don't pay the overhead, so I suspect something more is going on here. (Disclaimer: I don't use JBoss, so my intuition is suspect.)
So look to see whether your transaction scopes are appropriate, isolation levels are sensible and so on. Are you getting contention because transactions are unreasonably long, for example are transactions held over user think time?
Next, those multi-second waits: that implies contention (or some bizarre network issue). The only reason I can think of for an xa_start being slow is that writing a transaction log is taking unreasonably long; are your logs perhaps on some slow network device? Waits for getConnection() might simply imply that your connection pool is too small (or that you're holding connections for too long). If xa_commit and xa_end are taking a long time, I'd want to know what the resource managers are doing; can you get any info from the database server?
My overall position: if you truly need XA, then you will pay some logging and network-message overheads, but these should not be costing you hundreds or thousands of milliseconds. Most business systems need XA in a small subset of their overall resource accesses, typically when updating two otherwise independent systems, and almost never in read-only scenarios; absolute consistency across distributed systems is pretty much meaningless, so using XA for queries is almost certainly overkill.

Is relational database appropriate for soft real-time system?

I'm working on a real-time video analysis system which processes the video stream frame by frame. At each frame it can generate several events which should be recorded and some delivered to another system via network. The system is soft real-time, i.e. message latencies higher than 25ms are highly undesirable, but not fatal.
Are relational databases (specifically, MySQL and Postgres) appropriate as the datastore for such system?
Can I expect the DB to work well when it is installed on its own server and has ~50 streams at 25 fps of single-row SQL inserts (~1250 inserts/second) coming in over the network?
EDIT: I think in general performance would not be a problem, but I worry about the latency variance. If it will occasionally delay for 1000 ms, that would be very bad.
Oh, and the system runs 24/7 so the DB could grow arbitrarily big. Does that degrade the insert latency?
I wouldn't worry too much about performance when choosing a relational database over another type of datastore; choose the solution that best meets your requirements for accessing that data later. However, if you do choose not only an RDBMS but one over the network, then you might want to consider briefly buffering events to a local disk on their way to the DB. Use a separate thread or process to push events into the DB, to keep the real-time system unaffected.
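A minimal sketch of that buffering idea: events go into an in-memory queue and a background thread drains them to the database, so the real-time loop never blocks on the network. The db_insert function here is a hypothetical stand-in for the actual single-row INSERT:

```python
import queue
import threading

events = queue.Queue()
stored = []  # stand-in for the database table (assumption: real code does an INSERT)

def db_insert(event):
    stored.append(event)  # in reality: execute a single-row INSERT over the network

def writer():
    # Background thread: drains the queue; any DB latency spikes are absorbed here.
    while True:
        event = events.get()
        if event is None:  # sentinel used to shut the writer down
            break
        db_insert(event)

t = threading.Thread(target=writer, daemon=True)
t.start()

# Real-time side: enqueueing is an in-memory operation and never waits on the DB.
for frame_event in range(100):
    events.put(frame_event)

events.put(None)  # signal shutdown
t.join()
```

For durability across crashes, the queue could be backed by a local append-only file instead of memory, at the cost of one local disk write per event.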
The biggest problems are how unpredictable the latency will be and how it never goes down, always up. But modern hardware comes to the rescue: specify a machine with enough CPU cores. You can count on at least two, and getting four is easy. So you can spin up a thread and dedicate one core to the database updates, isolating them from your soft real-time code. Now you don't care about the variability in the delays, at least as long as the database updates don't take so long that you generate data faster than it can be consumed.
Set up a database server and load it up with fake data, double the amount you think it will ever need to store. Test continuously while you develop, and add the instrumentation code you need to measure how it is doing at an early stage in the project.
As I've written, if you queue the rows that need to be saved and save them asynchronously (so as not to stall the "main" thread), there shouldn't be any problem... BUT!!!
You want to save them in a DB... so someone else may read the rows AT THE SAME TIME they are being written. Sadly, it's normally quite difficult to tell a DB "this work is very high priority; everything else can be stalled, but not this". So if someone does:
BEGIN TRANSACTION
SELECT COUNT(*) FROM TABLE
WAITFOR DELAY '01:00:00'
(I'm using T-SQL here, but I think it's quite clear: ask for the COUNT(*) of the table, so that there is a lock on the table, and then WAITFOR an hour.)
then the writes could be stalled and time out. In general, if you configure everyone but the app to be able to do only reads, these problems shouldn't be present.

Are long high volume transactions bad

Hello,
I am writing a database application that does a lot of inserts and updates under a fake serialisable isolation level (snapshot isolation).
To avoid tons of network round trips, I'm batching inserts and updates into one transaction with PreparedStatements. They should fail very seldom, because the inserts are pre-checked and nearly conflict-free with other transactions, so rollbacks don't occur often.
Having big transactions should be good for the WAL, because it can flush big chunks and doesn't have to flush for mini transactions.
1.) I can only see positive effects of a big transaction. But I often read that they are bad. Why could they be bad in my use case?
2.) Is checking for conflicts expensive when the local snapshots are merged back into the real database? The database will have to compare the write sets of all possibly conflicting (parallel) transactions. Or does it have some high-speed shortcut? Or is it quite cheap anyway?
[EDIT] It might be interesting if someone could bring some clarity into how a snapshot isolation database checks whether transactions with overlapping parts on the timeline have disjoint write sets, because that's what fake serializable isolation is all about.
The real issues here are twofold. The first possible problem is bloat: large transactions can result in a lot of dead tuples showing up at once. The other possible problem is long-running transactions: as long as a long-running transaction is running, the tables it touches can't be vacuumed, so they can collect lots of dead tuples as well.
I'd say just use check_postgresql.pl to check for bloating issues. As long as you don't see a lot of table bloat after your long transactions you're ok.
1) The manual says that it is good: http://www.postgresql.org/docs/current/interactive/populate.html
I can also recommend: use COPY, remove indexes (but test first), increase maintenance_work_mem, increase checkpoint_segments, and run ANALYZE (or VACUUM ANALYZE) afterwards.
I would not recommend, unless you are sure: removing foreign key constraints, or disabling WAL archival and streaming replication.
2) Data are always merged on commit, and there are no checks; the data are just written. Read again: http://www.postgresql.org/docs/current/interactive/transaction-iso.html
If your inserts/updates do not depend on other inserts/updates, you don't need a "wholly consistent view". You may use read committed, and the transactions will never fail.
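For the [EDIT] question, a toy first-committer-wins model shows the kind of write-set check a snapshot isolation database can perform at commit time. The timestamps and data structures below are simplified assumptions for illustration, not PostgreSQL internals:

```python
# Toy model of write-set conflict detection under snapshot isolation
# (first-committer-wins): a transaction aborts if some transaction that
# committed after it started wrote an overlapping key.

class SnapshotDB:
    def __init__(self):
        self.commit_log = []  # list of (commit_ts, write_set)
        self.ts = 0           # logical clock

    def begin(self):
        self.ts += 1
        return {"start_ts": self.ts, "writes": set()}

    def write(self, txn, key):
        # Writes go to the transaction's local snapshot; only the key is tracked.
        txn["writes"].add(key)

    def commit(self, txn):
        # Check only transactions that committed after this one started;
        # earlier commits were already visible in its snapshot.
        for commit_ts, writes in self.commit_log:
            if commit_ts > txn["start_ts"] and writes & txn["writes"]:
                return "aborted"
        self.ts += 1
        self.commit_log.append((self.ts, txn["writes"]))
        return "committed"
```

Note the check is against overlapping committers only, not every transaction ever run, which is one reason it stays cheap in practice.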