I can't find a definitive answer to this question in the documentation. If a column is an array type, will all the entered values be individually indexed?
I created a simple table with one int[] column and put a unique index on it. I noticed that I couldn't add the same array of ints, which leads me to believe the index is a composite of the array items, not an index of each item.
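The table was created along these lines (the exact DDL isn't shown here; the schema and index names are my assumption):
CREATE SCHEMA "Test";
CREATE TABLE "Test"."Test" ("Column1" int[]);
-- the unique index on the whole array column:
CREATE UNIQUE INDEX test_column1_uidx ON "Test"."Test" ("Column1");
-- inserting '{10, 15, 20}' twice now fails with a unique violation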
INSERT INTO "Test"."Test" VALUES ('{10, 15, 20}');
INSERT INTO "Test"."Test" VALUES ('{10, 20, 30}');
SELECT * FROM "Test"."Test" WHERE 20 = ANY ("Column1");
Is the index helping this query?
Yes, you can index an array, but you have to use the array operators and the GIN index type.
Example:
CREATE TABLE "Test"("Column1" int[]);
INSERT INTO "Test" VALUES ('{10, 15, 20}');
INSERT INTO "Test" VALUES ('{10, 20, 30}');
CREATE INDEX idx_test on "Test" USING GIN ("Column1");
-- To enforce index usage because we have only 2 records for this test...
SET enable_seqscan TO off;
EXPLAIN ANALYZE
SELECT * FROM "Test" WHERE "Column1" #> ARRAY[20];
Result:
Bitmap Heap Scan on "Test" (cost=4.26..8.27 rows=1 width=32) (actual time=0.014..0.015 rows=2 loops=1)
Recheck Cond: ("Column1" #> '{20}'::integer[])
-> Bitmap Index Scan on idx_test (cost=0.00..4.26 rows=1 width=0) (actual time=0.009..0.009 rows=2 loops=1)
Index Cond: ("Column1" #> '{20}'::integer[])
Total runtime: 0.062 ms
Note
it appears that in many cases the gin__int_ops option (from the additional intarray module) is required:
create index <index_name> on <table_name> using GIN (<column> gin__int_ops)
I have not yet seen a case where the && and @> operators would work without the gin__int_ops option.
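For example, with the intarray extension installed (which is where gin__int_ops comes from), the index on the table above would look like this:
CREATE EXTENSION IF NOT EXISTS intarray;
CREATE INDEX idx_test_intarray ON "Test" USING GIN ("Column1" gin__int_ops);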
@Tregoreg raised a question in a comment on the bounty he offered:
I didn't find the current answers working. Using GIN index on
array-typed column does not increase the performance of ANY()
operator. Is there really no solution?
@Frank's accepted answer tells you to use array operators, which is still correct for Postgres 11. The manual:
... the standard distribution of PostgreSQL includes a GIN operator
class for arrays, which supports indexed queries using these
operators:
<@
@>
=
&&
The complete list of built-in operator classes for GIN indexes in the standard distribution is in the manual.
In Postgres indexes are bound to operators (which are implemented for certain types), not data types alone or functions or anything else. That's a heritage from the original Berkeley design of Postgres and very hard to change now. And it's generally working just fine. Here is a thread on pgsql-bugs with Tom Lane commenting on this.
Some PostGIS functions (like ST_DWithin()) seem to violate this principle, but that is not so. Those functions are rewritten internally to use respective operators.
The indexed expression must be to the left of the operator. For most operators (including all of the above) the query planner can achieve this by flipping operands if you place the indexed expression to the right, given that a COMMUTATOR has been defined. The ANY construct can be used in combination with various operators and is not an operator itself. When used as constant = ANY (array_expression), only indexes supporting the = operator on array elements would qualify, and we would need a commutator for = ANY(). GIN indexes are out.
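A quick way to see this for yourself (plan output will vary; this assumes the GIN index from Frank's answer exists):
-- can use the GIN index:
EXPLAIN SELECT * FROM "Test" WHERE "Column1" @> ARRAY[20];
-- cannot use the GIN index; only a sequential scan (or another index type) qualifies:
EXPLAIN SELECT * FROM "Test" WHERE 20 = ANY ("Column1");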
Postgres is not currently smart enough to derive a GIN-indexable expression from it. For starters, constant = ANY (array_expression) is not completely equivalent to array_expression @> ARRAY[constant]. Array operators raise an error if any NULL elements are involved, while the ANY construct can deal with NULL on either side. And there are different results for data type mismatches.
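A small demonstration of the asymmetry around NULL (my example, using the standard operators; intarray's variants raise an error for NULL elements):
SELECT 15 = ANY ('{10,NULL,20}'::int[]);   -- NULL (unknown), not false
SELECT '{10,NULL,20}'::int[] @> ARRAY[15]; -- false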
Related answers:
Check if value exists in Postgres array
Index for finding an element in a JSON array
SQLAlchemy: how to filter on PgArray column types?
Can IS DISTINCT FROM be combined with ANY or ALL somehow?
Asides
While working with integer arrays (int4, not int2 or int8) without NULL values (like your example implies), consider the additional module intarray, which provides specialized, faster operators and index support. See:
How to create an index for elements of an array in PostgreSQL?
Compare arrays for equality, ignoring order of elements
As for the UNIQUE constraint in your question that went unanswered: That's implemented with a btree index on the whole array value (like you suspected) and does not help with the search for elements at all. Details:
How does PostgreSQL enforce the UNIQUE constraint / what type of index does it use?
It's now possible to index the individual array elements. For example:
CREATE TABLE test (foo int[]);
INSERT INTO test VALUES ('{1,2,3}');
INSERT INTO test VALUES ('{4,5,6}');
CREATE INDEX test_index on test ((foo[1]));
SET enable_seqscan TO off;
EXPLAIN ANALYZE SELECT * from test WHERE foo[1]=1;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------
Index Scan using test_index on test (cost=0.00..8.27 rows=1 width=32) (actual time=0.070..0.071 rows=1 loops=1)
Index Cond: (foo[1] = 1)
Total runtime: 0.112 ms
(3 rows)
This works on at least Postgres 9.2.1. Note that you need to build a separate index for each array subscript; in my example I only indexed the first element.
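For instance, covering the second element as well would take its own index (my addition to the example):
CREATE INDEX test_index_2 ON test ((foo[2]));
EXPLAIN ANALYZE SELECT * from test WHERE foo[2]=5;  -- can use test_index_2, but not test_index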
Related
I want to index an array column with either GIN or GiST. The fact that GIN is slower in insert/update operations, however, made me wonder if it would have any impact on performance - even though the indexed column itself will remain static.
So, assuming that, for instance, I have a table with columns (A, B, C) and that B is indexed, does the index get updated if I update only column C?
It depends :^)
Normally, PostgreSQL will have to modify the index, even if nothing changes in the indexed column, because an UPDATE in PostgreSQL creates a new row version, so you need a new index entry to point to the new location of the row in the table.
Since this is unfortunate, there is an optimization called “HOT update”: If none of the indexed columns are modified and there is enough free space in the block that contains the original row, PostgreSQL can create a “heap-only tuple” that is not referenced from the outside and therefore does not require a new index entry.
You can lower the fillfactor on the table to increase the likelihood for HOT updates.
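A minimal sketch of that knob (the table name is just an example; existing rows are only repacked by a table rewrite such as VACUUM FULL or CLUSTER):
ALTER TABLE test SET (fillfactor = 70);  -- leave ~30% of each block free for HOT updates
VACUUM FULL test;                        -- rewrite so existing pages honor the new fillfactor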
For details, you may want to read my article on the topic.
Laurenz Albe's answer is great. The following is my interpretation.
GIN's array_ops cannot support index-only scans. That means that even if you query only the array column, the best you can get is a bitmap index scan, which still visits the heap. For such a bitmap scan, a low fillfactor means you probably won't have to visit extra pages.
demo:
begin;
create table test_gin_update(cola int, colb int[]);
insert into test_gin_update values (1,array[1,2]);
insert into test_gin_update values (1,array[1,2,3]);
insert into test_gin_update(cola, colb) select g, array[g, g + 1] from generate_series(10, 10000) g;
commit;
For example, take select colb from test_gin_update where colb = array[1,2]; and see the following query plan.
Because GIN cannot distinguish array[1,2] from array[1,2,3], even after creating a GIN index (create index on test_gin_update using gin(colb array_ops);) we can only get a bitmap index scan, and the recheck discards the false match.
QUERY PLAN
-----------------------------------------------------------------------------
Bitmap Heap Scan on test_gin_update (actual rows=1 loops=1)
Recheck Cond: (colb = '{1,2}'::integer[])
Rows Removed by Index Recheck: 1
Heap Blocks: exact=1
-> Bitmap Index Scan on test_gin_update_colb_idx (actual rows=2 loops=1)
Index Cond: (colb = '{1,2}'::integer[])
(6 rows)
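As an aside (my addition, not part of the demo): for whole-array equality a plain btree index can be an alternative, since btree stores the complete array value, supports =, and permits index-only scans:
create index test_gin_update_colb_btree on test_gin_update using btree (colb);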
I have a table with a jsonb field named "data" with the following content:
{
  "customerId": 1,
  "something": "...",
  "list": [{ "nestedId": 1, "attribute": "a" }, { "nestedId": 2, "attribute": "b" }]
}
I need to retrieve the whole row based on its 'nestedId' attribute; note that the field is inside an array.
After checking the query plans I found out I could benefit from an index. So I added:
CREATE INDEX i1 ON mytable USING gin ((data->'list') jsonb_path_ops);
From what I understood from the docs, this creates index items for the values in "list", and it solves my problem.
For the sake of completeness, here is the query I use to retrieve my data:
SELECT data FROM mytable WHERE data->'list' @> '[{"nestedId": 1}]'
Still, I wonder if there is a more optimal index I could create. Is it possible to create an index only for the "nestedId" field, for example?
You can index only the numeric values, and not the keys as well, by using a functional index. You will probably need to write a helper function to do so.
create function jsonb_objarray_to_intarray(jsonb,text) returns int[] immutable language sql as
$$ select array_agg((x->>$2)::int) from jsonb_array_elements($1) f(x) $$;
create index on mytable using gin (jsonb_objarray_to_intarray(data->'list','nestedId'));
SELECT data FROM mytable where jsonb_objarray_to_intarray(data->'list','nestedId') @> ARRAY[3];
I wrote it this way so the function could be reused in other similar situations. If you don't care about it being re-used, you can make the code that uses it look prettier by hard coding the dereference and the key value into the function:
create function mytable_to_intarray(jsonb) returns int[] immutable language sql as
$$ select array_agg((x->>'nestedId')::int) from jsonb_array_elements($1->'list') f(x) $$;
create index on mytable using gin (mytable_to_intarray(data));
SELECT data FROM mytable where mytable_to_intarray(data) @> ARRAY[3];
Now those indexes do take longer to make than your original, but they are about half the size and are at least as fast to query. More importantly, the planner has better statistics about the selectivity, and so in more complicated queries is likely to come up with better query plans.
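If you want to verify the size claim yourself, pg_relation_size is handy (the index names here are my guesses; check your catalog for the real ones):
SELECT pg_size_pretty(pg_relation_size('i1'));                               -- original index
SELECT pg_size_pretty(pg_relation_size('mytable_mytable_to_intarray_idx')); -- functional index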
We are changing DB(PostgreSQL 10.11) structure for one of our projects. And one of the changes is moving field of type uuid[] (called “areasoflawid”) into the jsonb field (called “data”).
So, we have a table which look like this:
CREATE TABLE public.documents
(
id serial,
areasoflawid uuid[], --the field to be moved into the ‘data’
data jsonb,
….
)
We are not changing the values of the array or its structure.
(i.e. documents.data->'metadata'->'areaoflawids' contains the same items as documents.areasoflawid).
After data migration, the JSON stored in the “data” field has following structure:
{
...
"metadata": {
...
"areaoflawids": [
"e34e0ee5-78e0-4d92-9186-ac69c109408b",
"b3af9163-d910-4d19-8f40-0602b75c25b0",
"50dc7fd8-ebdf-4cd2-bcab-b8d755fe96e8",
"8955c062-363f-4a1a-ac3c-d1c2ffe96c9b",
"bdb79f9f-4539-45f5-ac82-92baaf915f6c"
],
....
},
...
}
So, after migrating the data we started benchmarking the jsonb-related queries and found that searching over the array field documents.data->'metadata'->'areaoflawids' takes MUCH longer than searching over the uuid[] field documents.areasoflawid.
Here are the queries:
--search over jsonb array field, takes 6.2 sec, returns 13615 rows
SELECT id FROM documents WHERE data->'metadata'->'areaoflawids' @> '"e34e0ee5-78e0-4d92-9186-ac69c109408b"'
--search over uuid[] field, takes 600ms, returns 13615 rows
SELECT id FROM documents WHERE areasoflawid @> ARRAY['e34e0ee5-78e0-4d92-9186-ac69c109408b']::uuid[]
Here is the index over jsonb field:
CREATE INDEX test_documents_aols_gin_idx
ON public.documents
USING gin
(((data -> 'metadata'::text) -> 'areaoflawids'::text) jsonb_path_ops);
And here is the execution plan:
EXPLAIN ANALYZE SELECT id FROM documents WHERE data->'metadata'->'areaoflawids' @> '"e34e0ee5-78e0-4d92-9186-ac69c109408b"'
"Bitmap Heap Scan on documents (cost=6.31..390.78 rows=201 width=4) (actual time=2.297..5859.886 rows=13614 loops=1)"
" Recheck Cond: (((data -> 'metadata'::text) -> 'areaoflawids'::text) @> '"e34e0ee5-78e0-4d92-9186-ac69c109408b"'::jsonb)"
" Heap Blocks: exact=4859"
" -> Bitmap Index Scan on test_documents_aols_gin_idx (cost=0.00..6.30 rows=201 width=0) (actual time=1.608..1.608 rows=13614 loops=1)"
" Index Cond: (((data -> 'metadata'::text) -> 'areaoflawids'::text) @> '"e34e0ee5-78e0-4d92-9186-ac69c109408b"'::jsonb)"
"Planning time: 0.133 ms"
"Execution time: 5862.807 ms"
Other queries over the jsonb field run at acceptable speed, but this particular search is about 10 times slower than the search over the separate field. We expected it to be a bit slower, but not that much. We are considering leaving “areasoflawid” as a separate field, but we would definitely prefer to move it inside the JSON. I've been playing with different indexes and operators (also ? and ?|) but the search is still slow. Any help is appreciated!
Finding the 13,614 candidate matches in the index is very fast (1.608 milliseconds). The slow part is reading all of those rows from the table itself. If you turn on track_io_timing, then do EXPLAIN (ANALYZE, BUFFERS), I'm sure you will find you are waiting on IO. If you run the query several times in a row, does it get faster?
I think you are doing an unequal benchmark here, where one table is already in cache and the alternative table is not. But it could also be that the new table is too large to actually fit in cache.
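A sketch of that check (track_io_timing can only be changed by a superuser; look for "I/O Timings" in the Buffers lines of the output):
SET track_io_timing = on;
EXPLAIN (ANALYZE, BUFFERS)
SELECT id FROM documents
WHERE data->'metadata'->'areaoflawids' @> '"e34e0ee5-78e0-4d92-9186-ac69c109408b"';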
Thank you for your response! We came up with another solution, taken from this post: https://www.postgresql.org/message-id/CAONrwUFOtnR909gs+7UOdQQB12+pXsGUYu5YHPtbQk5vaE9Gaw#mail.gmail.com . The query now takes about 600-800ms to execute.
So, here is the solution:
CREATE OR REPLACE FUNCTION aol_uuids(data jsonb) RETURNS TEXT[] AS
$$
SELECT
array_agg(value::TEXT) as val
FROM
jsonb_array_elements(case jsonb_typeof(data) when 'array' then data else '[]' end)
$$ LANGUAGE SQL IMMUTABLE;
SELECT id FROM documents WHERE aol_uuids(data->'metadata'->'areaoflawids') @> ARRAY['"e34e0ee5-78e0-4d92-9186-ac69c109408b"']
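Presumably this is paired with a matching expression index like the following (the post doesn't show it; this is my assumption):
CREATE INDEX documents_aol_uuids_idx ON documents
USING gin (aol_uuids(data->'metadata'->'areaoflawids'));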
I have a table with 3 columns: "Id", "A", "B".
All of them are searchable. Id is an identity column used only to look up exact rows, so that one is clear. But I have doubts about "A" and "B". There are three search cases in my application: by "A", by "B", and by "A" and "B" simultaneously. So I'm not sure which index type to choose. Should I use two single-column indexes or one multi-column index? Or maybe it's better to combine single-column indexes with a multi-column one (3 indexes in total)? I don't really care about INSERT/UPDATE/DELETE duration; my top priority is to make SELECT as fast as possible.
I use SQL Server 2017.
Thank you.
I think two additional indexes will be enough:
CREATE INDEX IDX_YourTable_AB ON YourTable(A,B) -- put first whichever column has more distinct values
CREATE INDEX IDX_YourTable_B ON YourTable(B) INCLUDE(A)
If you have other columns in this table you can create included indexes:
CREATE INDEX IDX_YourTable_AB ON YourTable(A,B) INCLUDE(C,D,E,...)
CREATE INDEX IDX_YourTable_B ON YourTable(B) INCLUDE(A,C,D,E,...)
Index IDX_YourTable_AB might be used for conditions such as WHERE A='...', WHERE A='...' AND B='...', or WHERE A LIKE '...%' AND B='...' - that is, filters on column A alone or on A and B together.
Index IDX_YourTable_B might be used for conditions on column B only (WHERE B='...' or WHERE B LIKE '...%').
Also try testing CREATE INDEX IDX_YourTable_BA ON YourTable(B,A) instead of CREATE INDEX IDX_YourTable_B ON YourTable(B) INCLUDE(A). Maybe it will perform better.
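To illustrate which index serves which search (hypothetical queries):
-- served by IDX_YourTable_AB:
SELECT * FROM YourTable WHERE A = 'x' AND B = 'y';
SELECT * FROM YourTable WHERE A = 'x';
-- served by IDX_YourTable_B (A comes from the INCLUDE column, so no extra lookup is needed):
SELECT A, B FROM YourTable WHERE B = 'y';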
Below is the table structure with about 6 million records:
CREATE TABLE "ip_loc" (
"start_ip" inet,
"end_ip" inet,
"iso2" varchar(4),
"state" varchar(100),
"city" varchar(100)
);
CREATE INDEX "index_ip_loc" on ip_loc using gist(iprange(start_ip,end_ip));
It takes about 1 second to do the query.
EXPLAIN ANALYZE select * from ip_loc where iprange(start_ip,end_ip) @> '180.167.1.25'::inet;
Bitmap Heap Scan on ip_loc (cost=1080.76..49100.68 rows=28948 width=41) (actual time=1039.428..1039.429 rows=1 loops=1)
Recheck Cond: (iprange(start_ip, end_ip) @> '180.167.1.25'::inet)
Heap Blocks: exact=1
-> Bitmap Index Scan on index_ip_loc (cost=0.00..1073.53 rows=28948 width=0) (actual time=1039.411..1039.411 rows=1 loops=1)
Index Cond: (iprange(start_ip, end_ip) @> '180.167.1.25'::inet)
Planning time: 0.090 ms
Execution time: 1039.466 ms
iprange is a customized type:
CREATE TYPE iprange AS RANGE (
SUBTYPE = inet
);
Is there a way to do the query faster?
The inet type is a composite type, not the simple 32 bits needed to represent an IPv4 address; it includes a netmask, for instance. That makes storage, indexing and retrieval needlessly complex if all you are interested in is actual IP addresses (i.e. the 32 bits of the address itself, as opposed to addresses with netmasks, such as you would get from a web server listing the clients of an app) and you do not manipulate the IP addresses inside the database. If that is the case, you could store start_ip and end_ip as simple integers and operate on them using plain integer comparison. (The same can be done for IPv6 addresses using an integer[4] data type.)
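A possible migration sketch for IPv4 (my assumption, not from the answer; inet - inet yields a bigint, so '0.0.0.0' works as a zero point):
ALTER TABLE ip_loc
    ADD COLUMN start_ip_n bigint,
    ADD COLUMN end_ip_n bigint;
UPDATE ip_loc SET
    start_ip_n = start_ip - '0.0.0.0'::inet,
    end_ip_n   = end_ip   - '0.0.0.0'::inet;
CREATE INDEX ON ip_loc (start_ip_n, end_ip_n);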
A point to keep in mind is that the default range constructor behaviour is to include the lower bound and exclude the upper bound, so in your index and query the actual end_ip is not included.
Lastly, if you stick with a range type, you should specify the range_ops operator class on your index for maximum performance.
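Putting those two points together, the index could use the three-argument range constructor (to make the upper bound inclusive) and name the opclass explicitly; queries must then use the same expression to match the index:
CREATE INDEX index_ip_loc ON ip_loc USING gist (iprange(start_ip, end_ip, '[]') range_ops);
SELECT * FROM ip_loc WHERE iprange(start_ip, end_ip, '[]') @> '180.167.1.25'::inet;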
These ranges are non-overlapping? I'd try a btree index on end_ip and do:
with candidate as (
    select * from ip_loc
    where end_ip >= '38.167.1.53'::inet
    order by end_ip
    limit 1
)
select * from candidate
where start_ip <= '38.167.1.53'::inet;
Works in 0.1ms on 4M rows on my computer.
Remember to analyze table after populating it with data.
Add a clustered index on end_ip only.