Can someone please help me understand which layer Snowflake is fetching data from in the plan below? I understand Snowflake uses one of three sources (besides metadata-only results for queries like SELECT COUNT(*)): the result cache, the warehouse cache, or remote disk I/O. In the plan below it's not the result cache (the plan would then say 'query result reuse'), it's not showing any remote disk I/O, and the cache usage is 0%.
So it's not very clear how the data is being read here. Any thoughts or pointers will be helpful.
The picture says that 0.44MB were scanned.
The picture says that 0% of those 0.44MB came from the local cache.
Hence 0.44MB were read from the main storage layer.
The data is read from the storage layer. I will assume AWS, so from the S3 bucket where your table is stored. There are three primary reasons for a remote read:
It is the first time this warehouse has used this data. This is the same thing that happens if you stop/start the warehouse.
The data has changed (which can be anything from a 0% to 100% change of partitions). Given that in your example there is only one partition, any insertion happening in the background will cause 100% cache invalidation.
The data was flushed from the local cache by more active data. If you read this table once every 30 minutes, but in between read gigabytes of other tables, then like all caches, the low-usage data gets dropped.
The result cache can be used, but it can also be turned off for a session; the local disk cache still applies either way. And your WHERE 20 = 20 in theory might bust the result cache, but as it's a meaningless predicate it might not. Given your results it seems that, at this point in time, it is enough to trick the result cache. Which implies that if you do not want to avoid the result cache, stop changing the number, and if you do want to avoid it, this seems to work.
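If the goal is to avoid the result cache rather than rely on changing a literal, Snowflake has a session parameter for exactly that. A minimal sketch (my_table stands in for your table):

-- Disable reuse of cached query results for this session only.
ALTER SESSION SET USE_CACHED_RESULT = FALSE;

SELECT COUNT(*) FROM my_table;

-- Re-enable it when you are done profiling.
ALTER SESSION SET USE_CACHED_RESULT = TRUE;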
I see you have highlighted the two spilling options; those occur when working-state data is too large for memory, and then too large for local disk, so it gets sent to remote storage (S3). The former is a sign your warehouse is undersized, and both are a hint that something in your query is rather bloated. Maybe that is what you want/need, but it slows things down very much. To know if there is perhaps "another way": if in the profile plan there is some step that goes 100M rows -> 100B rows -> 42 rows, that implies a giant mess was made and then some filter smashed nearly all of it away, which implies the work could be done differently to avoid that large explosion and filtering.
I know Oracle automatically preserves frequently accessed data in memory. I'm curious: is there any way to keep a table in memory manually for more performance?
Yes, you could certainly do that. You need to pin the table in the KEEP buffer pool of the database cache.
For example,
ALTER TABLE table_name STORAGE (buffer_pool KEEP);
By the way, in Oracle 11g and up you can have a look at the RESULT CACHE. It is quite useful.
Have a look at this AskTom link https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:253415112676
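For example, a query can opt in to the result cache with a hint. A minimal sketch (the table and columns here are placeholders, not from the question):

SELECT /*+ RESULT_CACHE */ customer_id, SUM(amount)
FROM orders
WHERE order_date >= DATE '2023-01-01'
GROUP BY customer_id;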
The short answer is no, and you don't want to.
If you need that high a level of retrieval performance, then consider using an in-memory DB like TimesTen.
Think about what you are asking the DB to do. You are asking the DB to dedicate n amount of cache memory to a single table and hold it there indefinitely. In a busy DB this will simply kill performance to the point of the DB being useless. Let's say you have a DB with a few hundred tables in it, some of them small, some large and some very large, and throw in a few PKs and indexes.
A query comes along that asks for say 100K rows of data that are 1 Kbyte each and the index is a 100 byte long string. The DB will allocate sufficient memory to load up the index, and then start grabbing 8K chunks of data off the disk and putting those into cache.
If you ask it to store a few gigabytes of data in RAM, permanently, you will run out of memory in a big hurry unless you have a VERY expensive machine with 512 gigs of RAM in it. You will start hitting the swap file, and at that point your performance is toast.
If you are having performance issues on queries, run an explain plan and learn how to use it to discover the bottlenecks. I have a 24-core machine with 48 gigs of RAM, but I have tables with billions of rows of data. I keep a close eye on my cache hits and execution plans.
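In Oracle that looks roughly like this (a sketch; the table and predicate are placeholders):

EXPLAIN PLAN FOR
  SELECT id, payload
  FROM big_table
  WHERE created_at >= SYSDATE - 1;

-- Display the plan that was just explained.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);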
Also consider materialized views.
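A minimal sketch of one (the names are placeholders; pick the build and refresh options to match your workload):

CREATE MATERIALIZED VIEW mv_orders_by_customer
  BUILD IMMEDIATE
  REFRESH ON DEMAND
AS
  SELECT customer_id, COUNT(*) AS order_count, SUM(amount) AS total_amount
  FROM orders
  GROUP BY customer_id;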
The project requires storing binary data in a PostgreSQL database (a project requirement). For that purpose we made a table with the following columns:
id : integer, primary key, generated by client
data : bytea, for storing client binary data
The client is a C++ program, running on Linux.
The rows must be inserted (initialized with a chunk of binary data) and after that updated (concatenating additional binary data to the data field).
Simple tests have shown that this yields better performance.
Depending on your input, we will make the client use concurrent threads to insert / update data (with different DB connections), or a single thread with only one DB connection.
We don't have much experience with PostgreSQL, so could you help us with some pointers concerning possible bottlenecks, and whether using multiple threads to insert data is better than using a single thread?
Thank you :)
Edit 1:
More detailed information:
there will be only one client accessing the database, using only one Linux process
database and client are on the same high-performance server, but this should not matter; the client must be fast on any machine, without additional client configuration
we will get a new stream of data every 10 seconds; each stream provides 16000 new bytes every 0.5 seconds (CBR, but we can use buffering and only do inserts every 4 seconds at most)
stream will last anywhere between 10 seconds and 5 minutes
It makes very little sense that you would get better performance by inserting a row and then appending to it if you are using bytea.
PostgreSQL's MVCC design means that an UPDATE is logically equivalent to a DELETE and an INSERT. When you insert the row and then update it, what happens is that the original tuple you inserted is marked as deleted and a new tuple is written that contains the concatenation of the old and added data.
I question your testing methodology - can you explain in more detail how you determined that insert-then-append was faster? It makes no sense.
Beyond that, I think this question is too broad as written to really say much of use. You've given no details or numbers; no estimates of binary data size, rowcount estimates, client count estimates, etc.
bytea insert performance is no different to any other insert performance tuning in PostgreSQL. All the same advice applies: Batch work into transactions, use multiple concurrent sessions (but not too many; rule of thumb is number_of_cpus + number_of_hard_drives) to insert data, avoid having transactions use each others' data so you don't need UPDATE locks, use async commit and/or a commit_delay if you don't have a disk subsystem with a safe write-back cache like a battery-backed RAID controller, etc.
Given the updated stats you provided in the main comments thread, the amount of data you want to consume sounds entirely practical with appropriate hardware and application design. Your peak load might be achievable even on a plain hard drive if you had to commit every block that came in, since it'd require about 60 transactions per second. You could use a commit_delay to achieve group commit and significantly lower fsync() overhead, or even use synchronous_commit = off if you can afford to lose a time window of transactions in case of a crash.
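As a rough sketch, those knobs can be set per session (the values are illustrative only, and commit_delay may require superuser rights depending on your PostgreSQL version):

-- Cheapest commits: asynchronous; may lose the last few transactions on a crash.
SET synchronous_commit = off;

-- Or keep synchronous commits but let concurrent commits share one fsync().
-- commit_delay is in microseconds.
SET commit_delay = 5000;
SET commit_siblings = 5;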
With a write-back caching storage device like a battery-backed cache RAID controller or an SSD with reliable power-loss-safe cache, this load should be easy to cope with.
I haven't benchmarked different scenarios for this, so I can only speak in general terms. If designing this myself, I'd be concerned about checkpoint stalls with PostgreSQL, and would want to make sure I could buffer a bit of data. It sounds like you can so you should be OK.
Here's the first approach I'd test, benchmark and load-test, as it's in my view probably the most practical:
One connection per data stream, synchronous_commit = off + a commit_delay.
INSERT each 16kb record as it comes in into a staging table (if possible UNLOGGED or TEMPORARY if you can afford to lose incomplete records) and let Pg synchronize and group up commits. When each stream ends, read the byte arrays, concatenate them, and write the record to the final table.
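A rough sketch of the tables that approach assumes, using the same names as the aggregate example below (types and constraints are placeholders):

-- Staging table: one row per incoming 16 kB block.
-- UNLOGGED skips WAL, so its contents are lost on a crash.
CREATE UNLOGGED TABLE temp_stream_table (
    stream_id  integer NOT NULL,
    block_no   integer NOT NULL,
    data_block bytea   NOT NULL,
    PRIMARY KEY (stream_id, block_no)
);

-- Final table: one row per completed stream.
CREATE TABLE final_table (
    stream_id integer PRIMARY KEY,
    data      bytea   NOT NULL
);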
For absolutely best speed with this approach, implement a bytea_agg aggregate function for bytea as an extension module (and submit it to PostgreSQL for inclusion in future versions). In reality it's likely you can get away with doing the bytea concatenation in your application by reading the data out, or with the rather inefficient and nonlinearly scaling:
CREATE AGGREGATE bytea_agg(bytea) (SFUNC=byteacat,STYPE=bytea);
INSERT INTO final_table SELECT stream_id, bytea_agg(data_block) FROM temp_stream_table GROUP BY stream_id;
You would want to be sure to tune your checkpointing behaviour, and if you were using an ordinary or UNLOGGED table rather than a TEMPORARY table to accumulate those 16kb records, you'd need to make sure it was being quite aggressively VACUUMed.
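A sketch of the knobs involved (the values are placeholders, not recommendations; the checkpoint settings live in postgresql.conf, while the autovacuum settings can be applied per table):

-- postgresql.conf: spread checkpoints out so bulk writes don't stall.
--   checkpoint_completion_target = 0.9
--   checkpoint_segments = 32   (older releases; newer ones size WAL with max_wal_size)

-- Make autovacuum much more aggressive on the staging table.
ALTER TABLE temp_stream_table SET (
    autovacuum_vacuum_scale_factor = 0.05,
    autovacuum_vacuum_threshold    = 1000
);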
See also:
Whats the fastest way to do a bulk insert into Postgres?
How to speed up insertion performance in PostgreSQL
I'm new to Ehcache and am searching for how to do this, but I'm not quite sure if this is a normal use case. I am working on an application that isn't a traditional web app; it's something that is only used by a few people at a time and is for retrieving data from a very large dataset, so rather than making a call to the DB each time I want to use caching to cache this large table. However, there is a chance that a new entry could be added to this table, and I need this reflected in the cache, but I don't want to reload the entire cache each time as it's quite large. Any advice on how to approach this / further resources is appreciated.
You should learn about the Hibernate query cache. In simple words: it works on top of the second-level cache (L2) and stores the results of queries. But it only stores the ids of the records that should be returned by the query, rather than the entities themselves. This means that you need to have L2 working and fine-tuned.
In your scenario, suppose you have 1M records in table T and a query that returns 1K of them on average. The first time you run this query it will miss the query cache and:
run the SQL
fetch 1K records
put all of them in L2
put 1K ids in query cache
The next time you execute the query it will hit the query cache and look up all the results from L2. The interesting part comes when you modify table T. Hibernate will figure out that the results in the query cache might be stale, and it will invalidate the whole query cache but not L2. It will basically repeat points 1-4, but refreshing only the query cache (most of the entities from table T are already in L2).
In some scenarios it works great; in others it introduces N+1 problems at unpredictable moments. This is just the tip of the iceberg; you should be really careful, as this mechanism is very fragile and requires a deep understanding.
Can someone give me a relative idea of when it makes more sense to hit the database many times for small query results vs caching a large number of rows and querying that?
For example, if I have a query returning 2,000 results. And then I have additional queries on those results that take maybe 10-20 items, would it be better to cache the 2000 results or hit the database every time for each set of 10 or 20 results?
Other answers here are correct -- the RDBMS and your data are key factors. However, another key factor is how much time it will take to sort and/or index your data in memory versus in the database. We have one application where, for performance, we added code to grab about 10,000 records into an in-memory DataSet and then do subqueries on that. As it turns out, keeping that data up to date and selecting out subsets is actually slower than just leaving all the data in the database.
So my advice is: do it the simplest possible way first, then profile it and see if you need to optimize for performance.
It depends on a variety of things. I will list some points that come to mind:
If you have a .Net web app that is caching data in the client, you do not want to pull 2k rows.
If you have a web service, they are almost always better Chunky than Chatty because of the added overhead of XML on the transport.
In a fairly decently normalized and optimized database, there really should be very few times that you have to pull 2k rows out at a time unless you are doing reports.
If the underlying data is changing at a rapid pace, then you should really be careful caching it on the middle tier or the presentation layer, because what you present will be out of date.
Reports (any DSS) will pull and chomp through much larger data sets, but since they are not interactive, we denormalize and let them have their fun.
In cases of cascading dropdowns and such, AJAX techniques will prove to be more efficient and effective.
I guess I'm not really giving you one answer to your question. "It depends" is the best I can do.
Unless there is a big performance problem (e.g. a highly latent db connection), I'd stick with leaving the data in the database and letting the db take care of things for you. A lot of things are done efficiently on the database level, for example
isolation levels (what happens if other transactions update the data you're caching)
fast access using indexes (the db may be quicker to access a few rows than you searching through your cached items, especially if that data already is in the db cache like in your scenario)
updates in your transaction to the cached data (do you want to deal with updating your cached data as well or do you "refresh" everything from the db)
There are a lot of potential issues you may run into if you do your own caching. You need to have a very good performance reason before starting to take on all that complexity.
So, the short answer: it depends, but unless you have some good reasons, this smells like premature optimization to me.
In general, network round-trip latency costs several orders of magnitude more time than it takes a database to generate and feed data onto the network, or a client box to consume it from a network connection.
But look at the bandwidth of your network (bits/sec) and compare that to the average round-trip time for a database call...
On 100baseT Ethernet, for example, you get about 12 MBytes/sec of data transfer. If your average round-trip time is, say, 200 ms, then your network can deliver roughly 2.4 MBytes in each 200 ms round-trip call.
If you're on gigabit Ethernet, that number jumps to roughly 24 MBytes per round trip.
So if you split up a request for data into two round trips, well, that's 400 ms, and each query would have to return over 2.4 MBytes (or 24 MBytes for gigabit) before that would be faster...
This likely varies from RDBMS to RDBMS, but my experience has been that pulling in bulk is almost always better. After all, you're going to have to pull the 2000 records anyway, so you might as well do it all at once. And 2000 records isn't really a large amount, but that depends largely on what you're doing.
My advice is to profile and see what works best. RDBMSes can be tricky beasts performance-wise and caching can be just as tricky.
"I guess I'm not really giving you one answer to your question. "It depends" is the best I can do."
yes, "it depends". It depends on the volatility of the data that you are intending to cache, and it depends on the level of "accuracy" and reliability that you need for the responses that you generate from the data that you intend to cache.
If volatility on your "base" data is low, then any caching you do on those data has a higher probability of remaining valid and correct for a longer time.
If "caching-fault-tolerance" on the results you return to your users is zero percent, you have no option.
The type of data you're bringing back affects the decision as well. You don't want to be caching volatile data, or data subject to updates that would leave your cache stale.
I'm creating an app that will have to put at max 32 GB of data into my database. I am using B-tree indexing because the reads will have range queries (like from 0 < time < 1hr).
At the beginning (database size = 0 GB), I get between 60 and 70 writes per millisecond. After, say, 5 GB, the three databases I've tested (H2, Berkeley DB, Sybase SQL Anywhere) have REALLY slowed down, to under 5 writes per millisecond.
Questions:
Is this typical?
Would I still see this scalability issue if I REMOVED indexing?
What are the causes of this problem?
Notes:
Each record consists of a few ints
Yes; indexing improves fetch times at the cost of insert times. Your numbers sound reasonable - without knowing more.
You can benchmark it. You'll need to have a reasonable amount of data stored. Consider whether or not to index based upon the queries. Heavy fetch and light insert? Index everywhere a WHERE clause might use it. Light fetch, heavy inserts? Probably avoid indexes. Mixed workload? Benchmark it!
When benchmarking, you want as real or realistic data as possible, both in volume and on data domain (distribution of data, not just all "henry smith" but all manner of names, for example).
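A sketch of the kind of comparison worth running (generic SQL; the table and column names are placeholders, and the exact syntax varies a little by engine):

-- Index supporting the time-range reads.
CREATE INDEX idx_readings_ts ON readings (ts);

-- The range query the index should serve.
SELECT * FROM readings WHERE ts >= :start_ts AND ts < :end_ts;

-- To measure insert cost, time a bulk load with the index present,
-- then drop it and time the same load again.
DROP INDEX idx_readings_ts;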
It is typical for indexes to sacrifice insert speed for access speed. You can see this taken to an extreme in a database table (and I've seen these in the wild) that indexes every single column. There's nothing inherently wrong with that if the number of updates is small compared to the number of queries.
However, given that:
1/ You seem to be concerned that your writes slow down to 5/ms (that's still 5000/second),
2/ You're only writing a few integers per record; and
3/ Your queries are only based on time ranges,
you may want to consider bypassing a regular database and rolling your own sort-of-database (my thoughts are that you're collecting real-time data such as device readings).
If you're only ever writing sequentially-timed data, you can just use a flat file and periodically write the 'index' information separately (say at the start of every minute).
This will greatly speed up your writes but still allow a relatively efficient read process - worst case is you'll have to find the start of the relevant period and do a scan from there.
This of course depends on my assumption of your storage being correct:
1/ You're writing records sequentially based on time.
2/ You only need to query on time ranges.
Yes, indexes will generally slow inserts down, while significantly speeding up selects (queries).
Do keep in mind that not all inserts into a B-tree are equal. It's a tree; if all you do is insert into it, it has to keep growing. The data structure allows for some padding, but if you keep inserting into it numbers that are growing sequentially, it has to keep adding new pages and/or shuffle things around to stay balanced. Make sure that your tests are inserting numbers that are well distributed (assuming that's how they will come in real life), and see if you can do anything to tell the B-tree how many items to expect from the beginning.
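As an illustration of that padding idea, some engines let you control how full index pages are packed when the index is built. A sketch in PostgreSQL-style syntax (whether H2, Berkeley DB or SQL Anywhere expose an equivalent knob is something to check in their docs; names are placeholders):

-- Leave 30% free space in each index page so later inserts cause fewer page splits.
CREATE INDEX idx_readings_ts ON readings (ts) WITH (fillfactor = 70);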
Totally agree with #Richard-t - it is quite common in offline/batch scenarios to remove indexes completely before bulk updates to a corpus, only to reapply them when the update is complete.
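In plain SQL that pattern looks something like this (a sketch; the table and index names are placeholders):

-- Drop secondary indexes before the bulk load...
DROP INDEX idx_readings_ts;

-- ...load the data with the table unindexed...
INSERT INTO readings SELECT * FROM staging_readings;

-- ...then rebuild the index once, which is far cheaper than maintaining it row by row.
CREATE INDEX idx_readings_ts ON readings (ts);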
The type of index applied also influences insertion performance - for example, with a SQL Server clustered index, update I/O is used for data distribution as well as the index update, whereas nonclustered indexes are updated in separate (and therefore more expensive) I/O operations.
As with any engineering project, the best advice is to measure with real datasets (skew, page distribution, tearing, etc.).
I think somewhere in the BDB docs they mention that page size greatly affects this behavior in B-trees. Assuming you aren't doing much in the way of concurrency and you have fixed record sizes, you should try increasing your page size.