Why does Postgres not use a better index for my query?

I have a table where I keep a record of who is following whom on a Twitter-like application:
\d follow
Table "public.follow" .
Column | Type | Modifiers
---------+--------------------------+-----------------------------------------------------
xid | text |
followee | integer |
follower | integer |
id | integer | not null default nextval('follow_id_seq'::regclass)
createdAt | timestamp with time zone |
updatedAt | timestamp with time zone |
source | text |
Indexes:
"follow_pkey" PRIMARY KEY, btree (id)
"follow_uniq_users" UNIQUE CONSTRAINT, btree (follower, followee)
"follow_createdat_idx" btree ("createdAt")
"follow_followee_idx" btree (followee)
"follow_follower_idx" btree (follower)
The number of entries in the table is more than a million, and when I run EXPLAIN ANALYZE on the query I get this:
explain analyze SELECT "follow"."follower"
FROM "public"."follow" AS "follow"
WHERE "follow"."followee" = 6
ORDER BY "follow"."createdAt" DESC
LIMIT 15 OFFSET 0;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.43..353.69 rows=15 width=12) (actual time=5.456..21.497 rows=15 loops=1)
-> Index Scan Backward using follow_createdat_idx on follow (cost=0.43..61585.45 rows=2615 width=12) (actual time=5.455..21.488 rows=15 loops=1)
Filter: (followee = 6)
Rows Removed by Filter: 62368
Planning time: 0.068 ms
Execution time: 21.516 ms
Why is it doing a backward index scan on follow_createdat_idx, when execution could have been faster if it had used follow_followee_idx?
The query takes around 33 ms on the first run, and subsequent calls take around 22 ms, which I feel is on the high side.
I am using Postgres 9.5 provided by Amazon RDS. Any idea what could be going wrong here?

The multicolumn index on (followee, "createdAt") that user1937198 suggested is perfect for the query - as you found in your test already.
Since "createdAt" can be NULL (it is not defined NOT NULL), you may want to add NULLS LAST to the query and the index:
...
ORDER BY "follow"."createdAt" DESC NULLS LAST
And:
"follow_follower_createdat_idx" btree (follower, "createdAt" DESC NULLS LAST)
More:
PostgreSQL sort by datetime asc, null first?
There are other minor performance implications:
The multicolumn index on (followee, "createdAt") is 8 bytes per row bigger than the simple index on (followee) - 44 bytes vs. 36. More (btree indexes have mostly the same page layout as tables):
Making sense of Postgres row sizes
Columns involved in an index in any way cannot be changed with a HOT update. Adding "createdAt" to the index rules out HOT updates for changes to that column - although such changes seem particularly unlikely given the column name. And since you already have another index on just ("createdAt"), that's not an issue anyway. More:
PostgreSQL Initial Database Size
There is no downside to having another index on just ("createdAt"), other than the maintenance cost for each index (which matters for write performance, not read performance). Both indexes support different queries. You may or may not need the index on just ("createdAt") additionally. Detailed explanation:
Is a composite index also good for queries on the first field?
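To illustrate the linked point about the first index column with a quick sketch (assuming the multicolumn index above exists): a filter on the leading column alone can also use it, so the single-column index on (followee) may turn out to be redundant - verify with EXPLAIN on your own data:
-- Can use (followee, "createdAt") even though only the leading column is filtered
EXPLAIN ANALYZE
SELECT "follower"
FROM public.follow
WHERE followee = 6;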

Related

Is my PSQL database performance within expectations for my hardware?

I'm brand new to the concept of database administration, so I have no basis for what to expect. I am working with approximately 100GB of data in the form of five different tables. Descriptions of the data, as well as the first few rows of each file, can be found here.
I'm currently just working with the flows table in an effort to gauge performance. Here are the results from \d flows:
Table "public.flows"
Column | Type | Modifiers
------------+-------------------+-----------
time | real |
duration | real |
src_comp | character varying |
src_port | character varying |
dest_comp | character varying |
dest_port | character varying |
protocol | character varying |
pkt_count | real |
byte_count | real |
Indexes:
"flows_dest_comp_idx" btree (dest_comp)
"flows_dest_port_idx" btree (dest_port)
"flows_protocol_idx" btree (protocol)
"flows_src_comp_idx" btree (src_comp)
"flows_src_port_idx" btree (src_port)
Here are the results from EXPLAIN ANALYZE SELECT src_comp, COUNT(DISTINCT dest_comp) FROM flows GROUP BY src_comp;, which I thought would be a relatively simple query:
GroupAggregate (cost=34749736.06..35724568.62 rows=200 width=64) (actual time=1292299.166..1621191.771 rows=11154 loops=1)
Group Key: src_comp
-> Sort (cost=34749736.06..35074679.58 rows=129977408 width=64) (actual time=1290923.435..1425515.812 rows=129977412 loops=1)
Sort Key: src_comp
Sort Method: external merge Disk: 2819360kB
-> Seq Scan on flows (cost=0.00..2572344.08 rows=129977408 width=64) (actual time=26.842..488541.987 rows=129977412 loops=1)
Planning time: 6.575 ms
Execution time: 1636290.138 ms
(8 rows)
If I'm interpreting this correctly (which I might not be, since I'm new to PSQL), it's saying that my query took almost 30 minutes to execute, which is much, much longer than I would expect, even with ~130 million rows.
My computer is running with an 8th-gen i7 quad-core CPU, 16 GB of RAM, and a 2TB HDD (full specs can be found here).
My questions then are: 1) is this performance to be expected, and 2) is there anything I can do to speed it up, other than buying an external SSD?
1 - src_comp and dest_comp, which are used by the query, are both indexed. However, they are indexed independently. If you had a composite index on (src_comp, dest_comp), there is a possibility that the database could process this all via indexes, eliminating a full table scan (see the sketch after these points).
2 - src_comp and dest_comp are character varying. That is NOT a good thing for indexed fields, unless necessary. What are these values really? Numbers? IP addresses? Computer network names? If there is a relatively finite number of these items and they can be identified as they are added to the database, change them to integers that are used as foreign keys into other tables. That will make a HUGE difference in this query. If they can't be stored that way, but they at least have a definite finite length - e.g., 15 characters for IPv4 addresses in dotted quad format - then set a maximum length for the fields, which should help some.
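A rough sketch of both suggestions, using the table and column names from the question; the comps lookup table, its columns, and the back-fill step are hypothetical and only illustrate the idea:
-- 1) Composite index so the aggregate can be served from the index
--    instead of a full scan followed by an on-disk sort
CREATE INDEX flows_src_dest_idx ON flows (src_comp, dest_comp);

-- 2) Replace the repeated varchar values with integer keys into a lookup table
CREATE TABLE comps (
    comp_id   serial PRIMARY KEY,
    comp_name character varying UNIQUE NOT NULL
);

ALTER TABLE flows ADD COLUMN src_comp_id  integer REFERENCES comps (comp_id);
ALTER TABLE flows ADD COLUMN dest_comp_id integer REFERENCES comps (comp_id);

-- (after back-filling the new integer columns, the query becomes)
SELECT src_comp_id, COUNT(DISTINCT dest_comp_id)
FROM flows
GROUP BY src_comp_id;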

Index in SQL Server 2008 database [duplicate]

I've created composite indexes (indices for you mathematical folk) on tables before with an assumption of how they worked. I was just curious if my assumption is correct or not.
I assume that when you list the order of columns for the index, you are also specifying how the indexes will be grouped. For instance, if you have columns a, b, and c, and you specify the index in that same order a ASC, b ASC, and c ASC then the resultant index will essentially be many indexes for each "group" in a.
Is this correct? If not, what will the resultant index actually look like?
Composite indexes work just like regular indexes, except they have multi-value keys.
If you define an index on the fields (a, b, c), the records are sorted first on a, then b, then c.
Example:
| A | B | C |
-------------
| 1 | 2 | 3 |
| 1 | 4 | 2 |
| 1 | 4 | 4 |
| 2 | 3 | 5 |
| 2 | 4 | 4 |
| 2 | 4 | 5 |
A composite index is like a plain alphabetical index in a dictionary, but covering two or more letters, like this:
AA - page 1
AB - page 12
etc.
Table rows are ordered first by the first column in the index, then by the second one etc.
It's usable when you search by both columns OR by the first column alone. If your index is like this:
AA - page 1
AB - page 12
…
AZ - page 245
BA - page 246
…
you can use it for searching on 2 letters ( = 2 columns in a table), or like a plain index on one letter:
A - page 1
B - page 246
…
Note that in the case of a dictionary, the pages themselves are alphabetically ordered. That's an example of a CLUSTERED index.
In a plain, non-CLUSTERED index, the references to pages are ordered, like in a history book:
Gaul, Alesia: pages 12, 56, 78
Gaul, Augustodonum Aeduorum: page 145
…
Gaul, Vellaunodunum: page 24
Egypt, Alexandria: pages 56, 194, 213, 234, 267
Composite indexes may also be used when you ORDER BY two or more columns. In this case a DESC clause may come in handy.
See this article in my blog about using DESC clause in a composite index:
Descending indexes
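As a small hedged sketch of that point in SQL Server syntax (the table, index, and column names are made up): an index whose key directions match the ORDER BY lets the engine return rows already sorted.
-- Hypothetical table: ascending on one key column, descending on the other
CREATE INDEX ix_orders_customer_date
ON dbo.Orders (CustomerID ASC, OrderDate DESC);

-- Rows can be returned in the requested order without an extra sort
SELECT TOP (10) CustomerID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 42
ORDER BY OrderDate DESC;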
The most common implementation of indices uses B-trees to allow somewhat rapid lookups, and also reasonably rapid range scans. It's too much to explain here, but here's the Wikipedia article on B-trees. And you are right, the first column you declare in the create index will be the high order column in the resulting B-tree.
A search on the high order column amounts to a range scan, and a B-tree index can be very useful for such a search. The easiest way to see this is by analogy with the old card catalogs you have in libraries that have not yet converted to on line catalogs.
If you are looking for all the cards for Authors whose last name is "Clemens", you just go to the author catalog, and very quickly find a drawer that says "CLE- CLI" on the front. That's the right drawer. Now you do a kind of informal binary search in that drawer to quickly find all the cards that say "Clemens, Roger", or "Clemens, Samuel" on them.
But suppose you want to find all the cards for the authors whose first name is "Samuel". Now you're up the creek, because those cards are not gathered together in one place in the Author catalog. A similar phenomenon happens with composite indices in a database.
Different DBMSes differ in how clever their optimizer is at detecting index range scans, and accurately estimating their cost. And not all indices are B-trees. You'll have to read the docs for your specific DBMS to get the real info.
No. The resultant index will be a single index, but with a compound key.
KeyX = A,B,C,D; KeyY = 1,2,3,4;
An index on (KeyX, KeyY) will actually be: A1,A2,A3,B1,B3,C3,C4,D2
So if you need to find something by KeyX and KeyY, that will be fast and will use the single index - something like SELECT ... WHERE KeyX = 'B' AND KeyY = 3.
But it's important to understand: WHERE KeyX = ? queries will use that index, while WHERE KeyY = ? queries will NOT use it at all.
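To make that concrete, a small sketch (table and index names are illustrative):
-- Composite index with KeyX as the leading column
CREATE INDEX ix_t_keyx_keyy ON dbo.T (KeyX, KeyY);

-- Can seek on the index (leading column alone, or both columns):
SELECT * FROM dbo.T WHERE KeyX = 'B';
SELECT * FROM dbo.T WHERE KeyX = 'B' AND KeyY = 3;

-- KeyY alone is not a prefix of the key, so this cannot seek on that index:
SELECT * FROM dbo.T WHERE KeyY = 3;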

Cassandra's ORDER BY does not work as expected

So, I'm storing some statistics in Cassandra.
I want to get the top 10 entries based on a specific column. The column in this case is kills.
As there is no ORDER BY like in MySQL, I have to create a PARTITION KEY.
I've created the following table:
CREATE TABLE IF NOT EXISTS stats ( uuid uuid, kills int, deaths int, playedGames int, wins int, srt int, PRIMARY KEY (srt, kills) ) WITH CLUSTERING ORDER BY (kills DESC);
The problem I have is the following: as you see above, I'm using the column srt for ordering, because when I use the column uuid for ordering, the result of my select query is totally random and not sorted as expected.
So I tried to add a column that always has the same value for my PARTITION KEY. Sorting works now, but not really well. When I now try SELECT * FROM stats;, the result is the following:
srt | kills | deaths | playedgames | uuid | wins
-----+-------+--------+-------------+--------------------------------------+------
0 | 49 | 35 | 48 | 6f284e6f-bd9a-491f-9f52-690ea2375fef | 2
0 | 48 | 21 | 30 | 4842ad78-50e4-470c-8ee9-71c5a731c935 | 4
0 | 47 | 48 | 14 | 91f41144-ef5a-4071-8c79-228a7e192f34 | 42
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
0 | 2 | 32 | 20 | 387448a7-a08e-46d4-81a2-33d8a893fdb6 | 31
0 | 1 | 16 | 17 | fe4efbcd-34c3-419a-a52e-f9ae8866f2bf | 12
0 | 0 | 31 | 25 | 82b13d11-7eeb-411c-a521-c2c2f9b8a764 | 10
The problem with the result is that per kills value there is only one row - but there should definitely be more.
So, any idea how to use sorting in Cassandra without getting data stripped out?
I also heard about DataStax Enterprise (DSE), which supports Solr in queries, but DSE is only free for non-production use (and also only for 6 months) and the paid version is, at least from what I've heard, pretty expensive (around $4000 per node). So, is there any alternative like a DataStax Enterprise Community Edition? That doesn't quite make sense, but I'm just asking. I haven't found anything from googling, so: can I also use Solr with "normal" Cassandra?
Thank you for your help!
PS: Please don't mark this as a duplicate of "order by caluse not working in Cassandra query" because it didn't help me. I already googled for about an hour and a half for a solution.
EDIT:
Because my primary key is PRIMARY KEY (srt, kills), the combination of (srt, kills) must be unique, which basically means that rows with the same amount of kills get overwritten by each other. I would use PRIMARY KEY (uuid, kills), which would solve the problem with overwriting rows, but when I then do SELECT * FROM stats LIMIT 10, the results are totally random and not sorted by kills.
If you want to use a column for sorting, take it out of the partition key. Rows will be sorted by this column within every partition - Cassandra splits data between nodes using the partition key and orders it within each partition using the clustering key:
PRIMARY KEY ((srt), kills)
EDIT:
You need to understand the concepts a little better; I suggest you take one of the free courses on the DSE site, it can help you with further development.
Anyway, about your question:
Primary key is a set of columns that make each row unique.
There are 2 types of columns in this primary key - partition key columns and clustering columns.
You can't use the partition key for sorting or range queries - it is against Cassandra's model - such a query would be split across several nodes, or even all nodes and SSTables. If you want to use both of the listed columns for sorting, you can use another column for partitioning (a random number from 1 to 100, for example) and then execute your query for each such "bucket"; or simply use another column that has a high enough number of unique values (at least 100), where the data is evenly distributed between those values and is accessed through all of them - otherwise you will end up with hot nodes/partitions.
Primary key ((another_column), kills, srt)
What you have to understand is that you can order your data only within partitions, not between partitions.
Regarding "per kills value there is only one row" - can you elaborate? There is only one row per primary key in Cassandra; if you insert several rows with the same key, they overwrite each other and keep the last inserted values (read about upserts).

Advice on sql server index performance

I have "UserLog" table with 15 millions rows.
On this table I have a cluster index on User_ID field which is of type bigint identity, and a non clustered index on User_uid field which is of type varchar(35) (a fake uniqueidentifier).
On my application we can have 2 categories of users connection. 1 of them concerns only 0.007% of rows (about 1150 raws over 15 millions) and the 2nd concerns the remaining rows (99%).
The objective is to improve performance of the 0.007% users connection.
That's why I create a 'split' field "Userlog_ID" with type bit with default value of 0. So for each user connection we insert a new row in Userlog (with 0 as a value for User_log).
This field (User_Log) will then be update and it will take either 0 (for more then 99% of rows) or 1 (for 0.007% of rows) depending on the user category.
I create then a non clustered index on this field (User_log).
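For context, a sketch of the setup described above; the names follow the question, and the filtered variant (keyed on User_UID, which the query below seeks on) is an assumption about what the filtered option could look like:
-- Split column with a default of 0, as described
ALTER TABLE dbo.UserLog ADD User_Log bit NOT NULL DEFAULT 0;

-- Plain non-clustered index on the split column
CREATE NONCLUSTERED INDEX IX_UserLog_User_Log ON dbo.UserLog (User_Log);

-- Filtered variant covering only the rare category (0.007% of rows)
CREATE NONCLUSTERED INDEX IX_UserLog_User_UID_filtered
ON dbo.UserLog (User_UID)
WHERE User_Log = 1;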
The SELECT statement I want to optimize is:
SELECT User_UID, User_LastAuthentificationDate,
Language_ID,User_SecurityString
FROM dbo.UserLog
WHERE User_Active = 1
AND User_UID = '00F5AA38-C288-48C6-8ED1922601819486'
So the idea now is to add a filter on the User_Log field to optimize performance (specifically the index seek operator), but only when the user belongs to category 1 (the 0.007%):
SELECT User_UID, User_LastAuthentificationDate, Language_ID,User_SecurityString
FROM dbo.UserLog
WHERE User_Active = 1
AND User_UID = '00F5AA38-C288-48C6-8ED1922601819486'
and User_Log = 1
In my mind, since we add this filter, the index seek should perform better because we now have a smaller result set.
Unfortunately, when I compare the 2 queries with the estimated execution plan, I get 50% for each query. For both queries the optimizer uses an index seek on the User_UID non-clustered index and then a key lookup on the clustered index (User_ID).
So, in conclusion, by adding the split field and a non-clustered index (either normal or filtered) on it, I don't improve the performance.
Can anyone explain why? Maybe my reasoning and my interpretation are totally wrong.
Thank you

best way to query on over 15m rows?

I have a table that has over 15m rows in PostgreSQL. The users can save these rows (let's say items) into their library, and when they request their library, the system loads it.
The query in PostgreSQL is like
SELECT item.id, item.name
FROM items JOIN library ON (library.item_id = item.id)
WHERE library.user_id = 1
The table is already indexed and denormalized, so I don't need any other JOIN.
If a user has many items in the library (like 1k items), the query time naturally increases (for example, for 1k items the query time is 7s). My aim is to reduce the query time for large datasets.
I already use Solr for full-text searching, and I tried queries like ?q=id:1 OR id:100 OR id:345 but I'm not sure whether it's efficient or not in Solr.
I want to know my alternatives for querying this dataset. The bottleneck in my system seems to be disk speed. Should I buy a server with over 15 GB of memory and stay with PostgreSQL with an increased shared memory setting, or try something like MongoDB or another memory-based database, or should I create a cluster and replicate the data in PostgreSQL?
items:
Column | Type
--------------+-------------------
id | text
mbid | uuid
name | character varying
length | integer
track_no | integer
artist | text[]
artist_name | text
release | text
release_name | character varying
rank | numeric
user_library:
Column | Type | Modifiers
--------------+-----------------------------+--------------------------------------------------------------
user_id | integer | not null
recording_id | character varying(32) |
timestamp | timestamp without time zone | default now()
id | integer | primary key nextval('user_library_idx_pk'::regclass)
-------------------
explain analyze
SELECT recording.id,name,track_no,artist,artist_name,release,release_name
FROM recording JOIN user_library ON (user_library.recording_id = recording.id)
WHERE user_library.user_id = 1;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop (cost=0.00..10745.33 rows=1036539 width=134) (actual time=0.168..57.663 rows=1000 loops=1)
Join Filter: (recording.id = (recording_id)::text)
-> Seq Scan on user_library (cost=0.00..231.51 rows=1000 width=19) (actual time=0.027..3.297 rows=1000 loops=1) (my opinion: because user_library has only 2 rows, Postgresql didn't use index to save resources.)
Filter: (user_id = 1)
-> Append (cost=0.00..10.49 rows=2 width=165) (actual time=0.045..0.047 rows=1 loops=1000)
-> Seq Scan on recording (cost=0.00..0.00 rows=1 width=196) (actual time=0.001..0.001 rows=0 loops=1000)
-> Index Scan using de_recording3_table_pkey on de_recording recording (cost=0.00..10.49 rows=1 width=134) (actual time=0.040..0.042 rows=1 loops=1000)
Index Cond: (id = (user_library.recording_id)::text)
Total runtime: 58.589 ms
(9 rows)
First, if your interesting (commonly used) set of data, along with all indexes, fits comfortably in memory, you will have much better performance, so yes, more RAM will help. However, with 1k records, part of your time will be spent on materializing records and sending them to the client.
A couple other initial points:
There is no substitute for real profiling. You may be surprised as to where the time is being spent. On PostgreSQL use explain analyze to do profiling of a query.
Look into cursors and limit/offset.
I don't think one can come up with better advice until these are done.
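A minimal sketch of the last two points against the query from the question (page size, cursor name, and the ORDER BY column for stable paging are arbitrary):
-- LIMIT/OFFSET paging (simple, but the OFFSET cost grows as you page deeper)
SELECT recording.id, name, track_no, artist, artist_name, release, release_name
FROM recording
JOIN user_library ON user_library.recording_id = recording.id
WHERE user_library.user_id = 1
ORDER BY user_library.id
LIMIT 100 OFFSET 0;

-- Or a server-side cursor, fetched in chunks inside one transaction
BEGIN;
DECLARE library_cur CURSOR FOR
    SELECT recording.id, name, track_no, artist, artist_name, release, release_name
    FROM recording
    JOIN user_library ON user_library.recording_id = recording.id
    WHERE user_library.user_id = 1;
FETCH 100 FROM library_cur;
-- repeat FETCH 100 FROM library_cur; until it returns no rows
CLOSE library_cur;
COMMIT;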
