How to optimize read time in a PostgreSQL database

I have a slight problem with a postgresql database with exploding read times.
Background info:
two tables, both with only 4 columns: uuid (uuid), timestamp (bigint), type (text), and value (double precision) in one, values (double precision[]) in the other. (Yes, I thought about combining them into one table... the decision on that isn't in my hands.)
Given that only a fairly small amount of the held data is needed for each "projectrun", I'm already copying the needed data to tables dedicated to each projectrun. Now the interesting part starts when I try to read the data:
CREATE TABLE fake_timeseries1
(
"timestamp" bigint,
uuid uuid,
value double precision,
type text COLLATE pg_catalog."default"
)
WITH (
OIDS = FALSE
)
TABLESPACE pg_default;
ALTER TABLE fake_timeseries1
OWNER to user;
CREATE INDEX fake_timeseries1_timestamp_idx
ON fake_timeseries1 USING btree
(timestamp)
TABLESPACE pg_default;
ALTER TABLE fake_timeseries1
CLUSTER ON fake_timeseries1_timestamp_idx;
From that temporary table I do:
"SELECT * FROM table_name WHERE timestamp BETWEEN ? AND ? ;"
Simple enough, should work rather fast, right? Wrong.
At the moment I'm testing with small batches (only x*40k rows, returning 25% of them).
For 10k rows it takes "only" 6 seconds, for 20k already 34 seconds, and for 40k rows (out of a mere 160k) it already takes 3 minutes per table... 6 minutes for a mere 6 MB of data. (Yes, we are on a gigabit line, so the network is probably not the bottleneck.)
I already tried using an index and clustering on timestamp, but that slows it down even more. Interestingly, not during the creation of the temporary tables, but rather when reading the data.
What could I do to speed up the read process? It needs to be able to read those 10-50k rows in less than 5 minutes (preferably less than 1 minute) from a table that holds not 160k rows, but rather tens of millions.
What could be responsible for a simple SELECT being as slow as creating the whole table in the first place? (3 min read vs. 3.5 min create)
Thank you in advance.
As requested, an EXPLAIN ANALYZE (for 20k out of 80k rows):
"Execution Time": 27.501,
"Planning Time": 0.514,
"Plan": {
"Filter": "((\"timestamp\" >= '1483224970970'::bigint) AND (\"timestamp\" <= '1483232170970'::bigint))",
"Node Type": "Seq Scan",
"Relation Name": "fake_timeseries1",
"Alias": "fake_timeseries1",
"Actual Rows": 79552,
"Rows Removed by Filter": 0,
"Actual Loops": 1
},
"Triggers": []
The real execution time was 34.047 seconds.
UPDATE:
I continued testing with different test data sets. The following is an EXPLAIN ANALYZE from a significantly larger test set, where I read only 0.25% of the data... still using a seq scan. Does anyone have an idea?
[
{
"Execution Time": 7121.59,
"Planning Time": 0.124,
"Plan": {
"Filter": "((\"timestamp\" >= '1483224200000'::bigint) AND (\"timestamp\" <= '1483233200000'::bigint))",
"Node Type": "Seq Scan",
"Relation Name": "fake_forecast",
"Alias": "fake_forecast",
"Actual Rows": 171859,
"Rows Removed by Filter": 67490381,
"Actual Loops": 1
},
"Triggers": []
}
]
UPDATE: After even more testing, on a second PostgreSQL database, it seems that I have somehow hit a hard cap.
Whatever I do, the maximum I can get is 3.3k rows per second from those two tables. And that's only if I use the sweet spot of requesting 20-80k rows in a large batch, which takes 6 and 24 seconds respectively, even on a DB on my own machine.
Is there nothing that can be done (except better hardware) to speed this up?
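For reference, a minimal diagnostic sketch of things one might still try here, assuming the larger fake_forecast table was created like fake_timeseries1; the BRIN index and the enable_seqscan toggle are illustrative assumptions, not part of the original setup:
-- Refresh planner statistics so the selectivity of the timestamp range
-- can be estimated from current data.
ANALYZE fake_forecast;
-- A BRIN index is often a cheap option for large, roughly
-- timestamp-ordered tables; whether it helps depends on the physical row order.
CREATE INDEX fake_forecast_timestamp_brin
ON fake_forecast USING brin ("timestamp");
-- Temporarily discourage sequential scans (per session, for testing only)
-- to check whether the planner is simply mis-pricing the index scan.
SET enable_seqscan = off;
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM fake_forecast
WHERE "timestamp" BETWEEN 1483224200000 AND 1483233200000;
RESET enable_seqscan;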

Related

indexed query to decimate time series results

The context here is that I'm scoping a design that slices time-series data at user-defined intervals - too many permutations to simply roll up the data (e.g., a second, hourly table, rolled up as (or after) the data is ingested). I am not a database expert, so I am hoping to learn whether there are standard approaches to this that rely on table indexes, without duplicating the data in a new table/collection or otherwise encoding it a priori by file structure, etc. Especially if there is a particular DB suited to or supporting this load. A prototype of the backend is in Mongo, but we can easily pivot to a more suitable store at this time.
QUESTION: In any mainstream database or similar, is it possible to query a ~time series over a given time window, (efficiently) returning only data points at a consistent interval? (Which DB, and what would an example query look like, specifically?)
My data is a few tens of GB today, but growing if we're successful. I'd expect indexes against timestamp to continue to fit ~OK in memory. A custom file-based schema such as Parquet might work, but an off-the-shelf DB would be ideal. By consistent interval, I mean some "skip" meaningful to a human, such as
"every nth value", if the data could be assumed to be at a reliable cadence
or perhaps "first sample per hour", if not and the timestamps were epoch times
Eg, if my data is
ts      value
1001    1
1002    2
1003    3
...     (continuous)
1010    10
1011    11
...     (etc., large data set)
and the query had the parameters:
- skip_value: 10
- ts:
  - after: 1000
  - before: 2000
the returned set would be:
[ (1001, 1), (1011,11), (1021,21) .... (1991,991) ]
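For what it's worth, both variants can be expressed directly in PostgreSQL; this is only a sketch, assuming a table samples(ts bigint, value bigint) with an index on ts, and a reliable cadence for the modulo version:
-- "every nth value": keep one row per skip_value, anchored at the first
-- timestamp after the window start.
SELECT ts, value
FROM samples
WHERE ts > 1000 AND ts < 2000
  AND (ts - 1001) % 10 = 0
ORDER BY ts;
-- "first sample per bucket": works even with an irregular cadence,
-- here one row per bucket of 10 timestamp units.
SELECT DISTINCT ON (ts / 10) ts, value
FROM samples
WHERE ts > 1000 AND ts < 2000
ORDER BY ts / 10, ts;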

Table Scan very high "actual rows" when filter placed on different table

I have a query, that I did not write, that takes 2.5 minutes to run. I am trying to optimize it without being able to modify the underlying tables, i.e. no new indexes can be added.
During my optimization troubleshooting I commented out a filter, and all of a sudden my query ran in 0.5 seconds. I have played around with the formatting and placement of that filter, and if it is there the query takes 2.5 minutes; without it, 0.5 seconds. The biggest problem is that the filter is not on the table that is being table-scanned (with over 300k records); it is on a table with 300 records.
The "Actual Execution Plan" of both the 0:0:0.5 and the 0:2:30 runs is identical, down to the exact percentage costs of all steps:
Execution Plan
The only difference is that on the table-scanned table, the "Actual Number of Rows" in the 2.5-minute query shows 3.7 million rows, although the table only has 300k rows; the 0.5-second query shows an Actual Number of Rows of 2,063. The filter is actually being placed on the FS_EDIPartner table, which only has 300 rows.
With the filter I get the correct 51 records, but it takes 2.5 minutes to return. Without the filter I get duplication, so I get 2,796 rows, but it only takes half a second to return.
I cannot figure out why adding the filter to a table with 300 rows and a correct index causes the table scan of a different table to show such a significant difference in the actual number of rows. I am even wrapping the table-scanned table in a sub-query to filter its records down from 300k to 17k prior to doing the join. Here is the actual query in its current state; sorry the tables don't make a lot of sense, I could not reproduce this behavior with test data.
SELECT dbo.FS_ARInvoiceHeader.CustomerID
, dbo.FS_EDIPartner.PartnerID
, dbo.FS_ARInvoiceHeader.InvoiceNumber
, dbo.FS_ARInvoiceHeader.InvoiceDate
, dbo.FS_ARInvoiceHeader.InvoiceType
, dbo.FS_ARInvoiceHeader.CONumber
, dbo.FS_EDIPartner.InternalTransactionSetCode
, docs.DocumentName
, dbo.FS_ARInvoiceHeader.InvoiceStatus
FROM dbo.FS_ARInvoiceHeader
INNER JOIN dbo.FS_EDIPartner ON dbo.FS_ARInvoiceHeader.CustomerID = dbo.FS_EDIPartner.CustomerID
LEFT JOIN (Select DocumentName
FROM GentranDatabase.dbo.ZNW_Documents
WHERE DATEADD(SECOND,TimeCreated,'1970-1-1') > '2016-06-01'
AND TransactionSetID = '810') docs on dbo.FS_ARInvoiceHeader.InvoiceNumber = docs.DocumentName COLLATE Latin1_General_BIN
WHERE docs.DocumentName IS NULL
AND dbo.FS_ARInvoiceHeader.InvoiceType = 'I'
AND dbo.FS_ARInvoiceHeader.InvoiceStatus <> 'Y'
--AND (dbo.FS_EDIPartner.InternalTransactionSetCode = '810')
AND (NOT (dbo.FS_ARInvoiceHeader.CONumber LIKE 'CB%'))
AND (NOT (dbo.FS_ARInvoiceHeader.CONumber LIKE 'DM%'))
AND InvoiceDate > '2016-06-01'
The commented-out line in the WHERE clause is the culprit; uncommenting it causes the 2.5-minute run.
It could be that the table statistics have gotten out of whack. These include the number of records in each table, which is used to choose the best query plan. Try running this and then running the query again:
EXEC sp_updatestats
Using @jeremy's comment as a guideline (pointing out that the Actual Number of Rows was not my problem, but rather the number of executions), I figured out that the Hash Match took 0.5 seconds and the Nested Loops took 2.5 minutes. Trying to force the Hash Match using LEFT HASH JOIN was inconsistent depending on what the other filters were set to; changing dates took it from 0.5 seconds to 30 seconds sometimes. So forcing the hash (which is highly discouraged anyway) wasn't a good solution. Finally I resorted to moving the poorly performing view into a stored procedure and splitting out both of the tables related to the poor performance into table variables, then joining those table variables. This resulted in the most consistently good performance. On average the SP returns in less than 1 second, which is far better than the 2.5 minutes it started at.
@Jeremy gets the credit, but since his wasn't an answer, I thought I would document what was actually done in case someone else stumbles across this later.
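For illustration, a rough sketch of the table-variable split described above; the column types and the exact pre-filters are assumptions based on the query in the question, not the actual stored procedure:
DECLARE @Invoices TABLE (
    CustomerID int, InvoiceNumber varchar(50), InvoiceDate datetime,
    InvoiceType char(1), CONumber varchar(20), InvoiceStatus char(1));
DECLARE @Partners TABLE (
    CustomerID int, PartnerID varchar(20), InternalTransactionSetCode varchar(10));
-- Pre-filter each base table on its own, so the final join works on
-- small, already-reduced row sets.
INSERT INTO @Invoices
SELECT CustomerID, InvoiceNumber, InvoiceDate, InvoiceType, CONumber, InvoiceStatus
FROM dbo.FS_ARInvoiceHeader
WHERE InvoiceType = 'I' AND InvoiceStatus <> 'Y' AND InvoiceDate > '2016-06-01';
INSERT INTO @Partners
SELECT CustomerID, PartnerID, InternalTransactionSetCode
FROM dbo.FS_EDIPartner
WHERE InternalTransactionSetCode = '810';
-- The final join then mirrors the original query, but runs against the
-- small table variables instead of the base tables.
SELECT i.CustomerID, p.PartnerID, i.InvoiceNumber, i.InvoiceDate
FROM @Invoices AS i
INNER JOIN @Partners AS p ON p.CustomerID = i.CustomerID;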

Adding datalength condition makes query slow

I have a table mytable with some columns including the column datekey (which is a date and has an index), a column contents which is a varbinary(max), and a column stringhash which is a varchar(100). The stringhash and the datekey together form the primary key of the table. Everything is running on my local machine.
Running
SELECT TOP 1 * FROM mytable where datekey='2012-12-05'
returns 0 rows and takes 0 seconds.
But if I add a datalength condition:
SELECT TOP 1 * FROM mytable where datekey='2012-12-05' and datalength(contents)=0
it runs for a very long time and does not return anything before I give up waiting.
My question:
Why? How do I find out why this takes such a long time?
Here is what I checked so far:
When I click "Display estimated execution plan" it also takes a very long time and does not return anything before I give up waiting.
If I do
SELECT TOP 1000 datalength(contents) FROM mytable order by datalength(contents) desc
it takes 7 seconds and returns a list 4228081, 4218689 etc.
exec sp_spaceused 'mytable'
returns
rows reserved data index_size unused
564019 50755752 KB 50705672 KB 42928 KB 7152 KB
So the table is quite large at 50 GB.
Running
SELECT TOP 1000 * FROM mytable
takes 26 seconds.
The sqlservr.exe process uses around 6 GB, which is the limit I have set for the database.
It takes a long time because your query needs DATALENGTH to be evaluated for every row and then the results sorted before it can return the 1st record.
If the DATALENGTH of the field (or whether it contains any value) is something you're likely to query repeatedly, I would suggest an additional indexed field (perhaps a persisted computed field) holding the result, and searching on that.
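For example, a persisted computed column with an index might look like this; the column and index names are made up for illustration:
-- Store the length once, at write time, instead of touching the LOB data
-- on every query.
ALTER TABLE mytable
ADD contents_length AS DATALENGTH(contents) PERSISTED;
CREATE INDEX IX_mytable_contents_length
ON mytable (contents_length);
-- The original query can then filter on the precomputed value:
SELECT TOP 1 * FROM mytable
WHERE datekey = '2012-12-05' AND contents_length = 0;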
This old MSDN blog post seems to agree with @MartW's answer that DATALENGTH is evaluated for every row. But it's good to understand what is really meant by "evaluated" and what the real root of the performance degradation is.
As mentioned in the question, the size of every value in the contents column may be large. This means that every value bigger than ~8 KB is stored in special LOB storage. Taking into account the size of the other columns, it's clear that most of the space occupied by the table is taken up by this LOB storage, i.e. around 50 GB.
Even if the length of the contents column has already been evaluated for every row, as shown in the post linked above, the value itself is still stored in the LOB. So the engine still needs to read parts of the LOB storage to execute the query.
If the LOB storage isn't in RAM at the time of query execution, it has to be read from disk, which is of course much slower than reading from RAM. Also, reads of LOB parts are likely random rather than sequential, which is even slower, as it tends to increase the total number of blocks that have to be read from disk.
At the moment it probably won't be using the primary key, because the stringhash column comes before the datekey column in it. Try adding an additional index that contains just the datekey column. Once that index is created, if it's still slow you could also try a query hint such as:
SELECT TOP 1 * FROM mytable WITH (INDEX(IX_datekey)) WHERE datekey='2012-12-05' AND datalength(contents)=0
You could also create a separate length column that's updated either in your application or in an INSERT/UPDATE trigger.

Paginated searching... does performance degrade heavily after N records?

I just tried the following query on YouTube:
http://www.youtube.com/results?search_query=test&search=tag&page=100
and received the error message:
Sorry, YouTube does not serve more than 1000 results for any query.
(You asked for results starting from 2000.)
I also tried Google search for "test", and although it said there were about 3.44 billion results, I was only able to get to page 82 (or about 820 results).
This leads me to wonder: does performance start to degrade with paginated searches after N records (I'm specifically wondering about ROW_NUMBER() in SQL Server or similar features in other DB systems), or are YouTube/Google doing this for other reasons? Granted, it's pretty unlikely that most people would need to go past the first 1000 results for a query, but I would imagine the limitation is put in place for some technical reason.
Then again Stack Overflow lets you page through 47k results: https://stackoverflow.com/questions/tagged/c?page=955&sort=newest&pagesize=50
Yes. High offsets are slow and inefficient.
The only way to find the records at an offset is to compute all the records that came before and then discard them.
(I don't know ROW_NUMBER(), but in MySQL it would be LIMIT. So
SELECT * FROM table LIMIT 1999,20
)
.. in the above example, the first 2000 records have to be fetched first and then discarded. Generally the engine can't skip ahead or use indexes to jump right to the correct location in the data, because normally there will be a WHERE clause filtering the results.
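For comparison, the ROW_NUMBER() pattern the question asks about has the same basic problem in SQL Server: the rows before the requested window still have to be produced and numbered before they can be thrown away. A rough sketch (table and column names are invented):
WITH numbered AS (
    SELECT id,
           title,
           ROW_NUMBER() OVER (ORDER BY created_at DESC) AS rn
    FROM dbo.SearchResults
)
SELECT id, title
FROM numbered
WHERE rn BETWEEN 2000 AND 2019;  -- the same window as LIMIT 1999,20 above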
It is possible to cache the results, which is probably what SO does, so it doesn't actually have to compute the large offsets each and every time. (Most of SO's searches are over a "small" set of known tags, so it's quite feasible to cache them. An arbitrary search query will have far too many variations to cache, making it impractical.)
(Alternatively it might be using some other implementation that does allow arbitrary offsets)
Other places talking about similar things:
http://sphinxsearch.com/docs/current.html#conf-max-matches
Back-of-the-envelope test:
mysql> select gridimage_id from gridimage_search where moderation_status = "geograph" order by imagetaken limit 100999,3;
...
3 rows in set (11.32 sec)
mysql> select gridimage_id from gridimage_search where moderation_status = "geograph" order by imagetaken limit 3;
...
3 rows in set (4.59 sec)
(Arbitrary query chosen so as not to use indexes very well; if indexes can be used, the difference is less pronounced and harder to see. But in a production system running lots of queries, even a 1 or 2 ms difference is huge.)
Update (to show an indexed query):
mysql> select gridimage_id from gridimage_search order by imagetaken limit 10;
...
10 rows in set (0.00 sec)
mysql> select gridimage_id from gridimage_search order by imagetaken limit 100000,10;
...
10 rows in set (1.70 sec)
It's effectively a TOP clause, designed to limit the number of physical reads that the database has to perform, which limits the amount of time the query takes. Imagine you have 82 billion links to stories about "Japan" in your database. What if someone queries "Japan"? Are all 82 billion results really going to be clicked? No. The user needs the top 1000 most relevant results. When the search is generic, like "test", there is no way to determine relevance. In that case, YouTube/Google has to limit the volume returned so other users aren't affected by generic searches. What's faster, returning 1,000 results or 82,000,000,000 results?

11 seconds to delete 240 rows in SQL Server

I am running a delete statement:
DELETE FROM TransactionEntries
WHERE SessionGUID = @SessionGUID
The actual execution plan of the delete is:
Execution Tree
--------------
Clustered Index Delete(
OBJECT:([GrobManagementSystemLive].[dbo].[TransactionEntries].[IX_TransactionEntries_SessionGUIDTransactionGUID]),
WHERE:([TransactionEntries].[SessionGUID]=[@SessionGUID])
)
The table is clustered by SessionGUID, so the 240 rows are physically together.
The table has no triggers on it.
The operation takes:
Duration: 11821 ms
CPU: 297
Reads: 14340
Writes: 1707
The table contains 11 indexes:
1 clustered index (SessionGUID)
1 unique (primary key) index
9 other non-unique, non-clustered indexes
How can I figure out why this delete operation is performing 14,340 reads and takes 11 seconds?
the Avg. Disk Read Queue Length reaches 0.8
the Avg. Disk sec/Read never exceeds 4ms
the Avg. Disk Write Queue Length reaches 0.04
the Avg. Disk sec/Write never exceeds 4ms
What are the other reads for? The execution plan gives no indication of what it's reading.
Update:
EXECUTE sp_spaceused TransactionEntries
TransactionEntries
Rows 6,696,199
Data: 1,626,496 KB (249 bytes per row)
Indexes: 7,303,848 KB (1117 bytes per row)
Unused: 91,648 KB
============
Reserved: 9,021,992 KB (1380 bytes per row)
With 1,380 bytes per row and 240 rows, that's roughly 330 KB to be deleted.
It is counterintuitive that deleting a mere 330 KB can be so difficult.
Update Two: Fragmentation
Name Scan Density Logical Fragmentation
============================= ============ =====================
IX_TransactionEntries_Tran... 12.834 48.392
IX_TransactionEntries_Curr... 15.419 41.239
IX_TransactionEntries_Tran... 12.875 48.372
TransactionEntries17 98.081 0.0049325
TransactionEntries5 12.960 48.180
PK_TransactionEntries 12.869 48.376
TransactionEntries18 12.886 48.480
IX_TranasctionEntries_CDR... 12.799 49.157
IX_TransactionEntries_CDR... 12.969 48.103
IX_TransactionEntries_Tra... 13.181 47.127
I defragmented TransactionEntries17:
DBCC INDEXDEFRAG (0, 'TransactionEntries', 'TransactionEntries17')
since INDEXDEFRAG is an "online operation" (i.e. it only holds IS (Intent Shared) locks). I was then going to manually defragment the others, but business operations called saying that the system was dead, and they switched to doing everything on paper.
What say you: could 50% fragmentation and only 12% scan density cause horrible index scan performance?
As @JoeStefanelli points out in the comments, it's the extra non-clustered indexes.
You are deleting 240 rows from the table.
This equates to 2,640 index rows (240 rows times 11 indexes), 240 of which include all fields in the table.
Depending on how wide they are and how many included fields you have, this could equate to all the extra read activity you are seeing.
The non-clustered index rows will definitely NOT be grouped together on disk, which will increase delays.
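If you want to see exactly which indexes the DELETE has to maintain, and how wide they are, the catalog views give a quick overview (a generic sketch; only the table name comes from the question):
SELECT i.name,
       i.type_desc,
       COUNT(ic.column_id) AS columns_in_index
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
    ON ic.object_id = i.object_id
    AND ic.index_id = i.index_id
WHERE i.object_id = OBJECT_ID('dbo.TransactionEntries')
GROUP BY i.name, i.type_desc
ORDER BY i.type_desc, i.name;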
I think the indexing might be the likeliest culprit but I wanted to throw out another possibility. You mentioned no triggers, but are there any tables that have a foreign key relationship to this table? They would have to be checked to make sure no records are in them and if you have cascade delete turned on, those records would have to be deleted as well.
Having banged my head against many a SQL performance issue, my standard operating procedure for something like this is to:
Back up the data
Delete one of the indexes on the table in question
Measure the operation (a rough sketch of this step follows the list)
Restore DB
Repeat with #2 until #3 shows a drastic change. That's likely your culprit.
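A minimal way to do the measurement step without touching the data, assuming the same delete as in the question, is to wrap it in a transaction and roll it back:
-- Time the delete with I/O statistics on, then roll back so each
-- measurement starts from the same state.
SET STATISTICS IO, TIME ON;
BEGIN TRANSACTION;
    DELETE FROM TransactionEntries
    WHERE SessionGUID = @SessionGUID;
ROLLBACK TRANSACTION;
SET STATISTICS IO, TIME OFF;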
