In Postgres, how can I enforce referential integrity (RI) on array elements?

With regular RI:
create table t2(
c1 serial primary key,
c2 int references t1(c1) on delete cascade
);
create index t2_c2 on t2(c2);
What if c2 were an array of integers? Is there any way to reference the elements of the array? And what is the best way to make sure the elements are indexed?
create table t2(
c1 serial primary key,
c2 int[] -- ??
);
create index t2_c2 on t2( ?? );
(I'm sure this has been asked and answered a thousand times, but for some reason it's outside my Google-able sphere.)
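There is no declarative foreign key for individual array elements, but for the indexing half of the question a GIN index is the usual tool. A sketch, assuming t1 and t2 as above:

```sql
-- No built-in RI for array elements, but a GIN index lets Postgres
-- use the index for containment queries on the array column.
create index t2_c2_gin on t2 using gin (c2);

-- Queries of this form can then use the index:
select * from t2 where c2 @> array[42];
```

RI itself would still have to be enforced with triggers or by normalizing the array into a child table.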

How to create index on AgensGraph?

When traversing a vertex or edge, it is very slow.
I want to create an index to speed this up.
# match (n:v{id:1}) return n;
n
-----------------
v[3.1]{"id": 1}
(1 row)
Time: 693.100 ms
How can I create an index for a vertex or edge?
Use the CREATE PROPERTY INDEX statement to create an index on a graph object.
# match (n:v{id:1}) return n;
n
-----------------
v[3.1]{"id": 1}
(1 row)
Time: 693.100 ms
# create property index on v ( id );
CREATE PROPERTY INDEX
Time: 2227.147 ms
# match (n:v{id:1}) return n;
n
-----------------
v[3.1]{"id": 1}
(1 row)
Time: 5.935 ms
In this case, the query was accelerated more than a hundred times.
Creating Index
agens=> CREATE PROPERTY INDEX ON [VERTEX OR EDGE LABEL] ([PROPERTY])
agens=> CREATE PROPERTY INDEX ON CUSTOMER (AGE)
Creating Unique Index (Allow only one edge between two vertices)
agens=> CREATE UNIQUE INDEX [INDEX NAME] ON [GRAPH_PATH.VERTEX OR EDGE LABEL] ([PROPERTIES])
agens=> CREATE UNIQUE INDEX STUDENT_UNIQ_INDEX ON [AGENS_GRAPH.CUSTOMER] ("start", "end")
Creating Unique Constraint
agens=> CREATE CONSTRAINT ON [VERTEX OR EDGE LABEL] [PROPERTY] IS UNIQUE
agens=> CREATE CONSTRAINT ON CUSTOMER CUSTOMER_ID IS UNIQUE

Postgres select by array element range

In my table I've got a column facebook where I store Facebook data (comment count, share count, etc.) as an array. For example:
{{total_count,14},{comment_count,0},{comment_plugin_count,0},{share_count,12},{reaction_count,2}}
Now I'm trying to SELECT rows whose facebook total_count is between 5 and 10. I've tried this:
SELECT * FROM pl where regexp_matches(array_to_string(facebook, ' '), '(\d+).*')::numeric[] BETWEEN 5 and 10;
But I'm getting an error:
ERROR: operator does not exist: numeric[] >= integer
Any ideas?
There is no need to convert the array to text and use regexp. You can access a particular element of the array, e.g.:
with pl(facebook) as (
values ('{{total_count,14},{comment_count,0},{comment_plugin_count,0},{share_count,12},{reaction_count,2}}'::text[])
)
select facebook[1][2] as total_count
from pl;
total_count
-------------
14
(1 row)
Your query may look like this:
select *
from pl
where facebook[1][2]::numeric between 5 and 10
Update: you could avoid the troubles described in the comments by using the word null instead of empty strings ('').
with pl(id, facebook) as (
values
(1, '{{total_count,14},{comment_count,0}}'::text[]),
(2, '{{total_count,null},{comment_count,null}}'::text[]),
(3, '{{total_count,7},{comment_count,10}}'::text[])
)
select *
from pl
where facebook[1][2]::numeric between 5 and 10
id | facebook
----+--------------------------------------
3 | {{total_count,7},{comment_count,10}}
(1 row)
However, it would be unfair to leave your problem without an additional comment. This case could serve as an example in a lecture titled "How not to use arrays in Postgres". You have at least a few better options. The most performant and natural is to simply use regular integer columns:
create table pl (
...
facebook_total_count integer,
facebook_comment_count integer,
...
);
If for some reason you need to separate this data from others in the table, create a new secondary table with a foreign key to the main table.
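Such a secondary table could look like this (a sketch; the name pl_facebook and the column names are hypothetical):

```sql
-- One row of counters per row of pl, kept in a separate table
-- and removed automatically when the parent row is deleted.
create table pl_facebook (
    pl_id integer primary key references pl(id) on delete cascade,
    total_count integer,
    comment_count integer,
    share_count integer,
    reaction_count integer
);
```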
If for some mysterious reason you have to store the data in a single column, use the jsonb type, example:
with pl(id, facebook) as (
values
(1, '{"total_count": 14, "comment_count": 0}'::jsonb),
(2, '{"total_count": null, "comment_count": null}'::jsonb),
(3, '{"total_count": 7, "comment_count": 10}'::jsonb)
)
select *
from pl
where (facebook->>'total_count')::integer between 5 and 10
hstore can be an alternative to jsonb.
All these ways are much easier to maintain and much more efficient than your current model. Time to move to the bright side of the Force.
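If range queries on total_count are frequent, the jsonb variant can also be backed by an expression index (a sketch, assuming a real table pl with a jsonb column facebook):

```sql
-- A btree index on the extracted, cast value lets range predicates
-- like BETWEEN 5 AND 10 use the index instead of scanning every row.
create index pl_facebook_total_count_idx
    on pl (((facebook->>'total_count')::integer));
```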

Oracle db unique constraint design consultant

I have 3 tables: A,B,C.
A consists of a column D.
B consists of columns E,F,G,H,I,J (PK is J).
C consists of foreign key K to table B.
Now I need F, G, H to be unique; but if G is null, then F, H must be unique, and I and E unique (and exactly one of G and I must be null).
Is there a way I can do this in the database rather than programmatically?
Thanks.
I'm pretty sure you can do this with unique indexes and the fact that NULL is ignored for a unique index.
First, create an index on F, G, and H:
create unique index idx_b_f_g_h on b(f, g, h)
This handles the "F, G, H unique" case. To handle the "G is null, then F, H unique" do:
create unique index idx_b_f_h_j on b(f, h, (case when G is null then 0 else j end));
This replaces non-NULL values of G with the primary key, which is always unique, so those rows never conflict. It uses an arbitrary "constant" value when G is null, so F and H must be unique among those rows. (Note the constant should be of the same type as the primary key.)
To handle "I and E unique", you can also use a function-based index. I think you mean:
create unique index idx_b_i_e_j on b(coalesce(i, e));
You can handle the fact that i or e is NULL using a check constraint.
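To see the effect, a hypothetical demonstration (column types assumed numeric; the index names are the ones created above):

```sql
-- Assuming b(e, f, g, h, i, j) with PK j and the two indexes above.
insert into b (f, g, h, i, j) values (1, 1, 1, null, 100);    -- ok
insert into b (f, g, h, i, j) values (1, 1, 1, null, 101);    -- rejected: duplicates (1,1,1) in idx_b_f_g_h
insert into b (f, g, h, i, j) values (2, null, 2, 5, 102);    -- ok (G null, I set)
insert into b (f, g, h, i, j) values (2, null, 2, 6, 103);    -- rejected: both G-null rows map to (2, 2, 0) in idx_b_f_h_j
```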

SQLServer choosing primary key type

I have a list of objects, each with its own id, and I need to create a table for them in a database. It's a good idea to use their ids (since they are unique) as the primary key, but there's one problem: all ids are integers except for one object, which has 2 subobjects with ids 142.1 and 142.2, so the id list is 140, 141, 142.1, 142.2, 143...
Now if I choose double as the type of the primary key, it will store 6 unnecessary bytes (since double is 8 bytes and INT is 2) just to support two double values, and I can't choose INT. So what type should I use if I cannot change the list of objects?
Floating-point math is imprecise; you shouldn't use double for discrete numbers like money or object ids. Consider using decimal(p,s) instead, where p is the total number of digits and s is the number of digits after the decimal point. For example, a decimal(5,2) could store 123.45, but not 1234 or 12.345.
Another option is a composite primary key for two integers n1, n2:
alter table YourTable add constraint PK_YourTable primary key (n1, n2)
An int is four bytes, not two, so the size difference to a double is not so big.
However, you should definitely not use a floating-point number as a key, as a floating-point number isn't stored as an exact value, but as an approximation.
You can use a decimal with one fractional digit, like decimal(5,1), to store a value like that. A decimal is a fixed point number, so it's stored as an exact value, not an approximation.
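A minimal sketch of that choice (the table and column names are hypothetical):

```sql
-- decimal(5,1) is an exact fixed-point type, so 142.1 and 142.2
-- are stored and compared exactly, unlike float/double.
create table objects (
    id decimal(5,1) not null primary key
);
insert into objects (id) values (140), (141), (142.1), (142.2), (143);
```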
Choose VARCHAR of an appropriate length, with CHECK constraints to ensure the data conforms to your domain rules e.g. based on the small sample data you posted:
CREATE TABLE Ids
(
id VARCHAR(5) NOT NULL UNIQUE
CONSTRAINT id__pattern
CHECK (
id LIKE '[0-9][0-9][0-9]'
OR id LIKE '[0-9][0-9][0-9].[1-9]'
)
);

Database Design for 2D Matrix Algebra

Can anyone advise on a database design/DBMS for storing 2D time-series matrix data, to allow for quick back-end algebraic calculations? E.g.:
Table A,B,C..
Col1: Date (timestamp)
Col2: Data (array? matrix data)
SQL pseudocode:
INSERT INTO TABLE C
SELECT
Multiply A.Data A by B.Data
Where Matrix A Start Date = Matrix B Start Date
And Matrix A End Date = Matrix B End Date
Essentially set the co-ordinates for the calculation.
The difficulty with matrix algebra is determining what is a domain on the matrix for data modelling purposes. Is it a value? Is it a matrix as a whole? This is not a pre-defined question, so I will give you two solutions and what the tradeoffs are.
Solution 1: Value in a matrix cell is a domain:
CREATE TABLE matrix_info (
x_size int,
y_size int,
id serial not null unique,
"timestamp" timestamp not null
);
CREATE TABLE matrix_cell (
matrix_id int references matrix_info(id),
x int,
y int,
value numeric not null,
primary key (matrix_id, x, y)
);
The big concern is that this does not enforce matrix sizes very well. Additionally a missing value could be used to represent 0, or might not be allowed. The idea of using a matrix as a whole as a domain has some attractiveness. In this case:
CREATE TABLE matrix (
id serial not null unique,
"timestamp" timestamp not null,
matrix_data numeric[]
);
Note that many databases, including PostgreSQL, will enforce that a multidimensional array is rectangular, i.e. actually a matrix. Then you'd need to write your own functions for multiplication etc. I would recommend doing this in an object-relational way, and on PostgreSQL, since it is quite programmable for this sort of thing. Something like:
CREATE FUNCTION matrix(int) RETURNS matrix LANGUAGE SQL AS
$$ select * from matrix where id = $1 $$;
CREATE FUNCTION multiply(matrix, matrix) RETURNS matrix LANGUAGE plpgsql AS
$$
DECLARE
matrix1 numeric[] := $1.matrix_data;
matrix2 numeric[] := $2.matrix_data;
begin
...
end;
$$;
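The elided body could be sketched as a helper that works directly on the arrays (multiply_arrays is a hypothetical name; assumes 1-based, rectangular numeric arrays as PostgreSQL stores them by default):

```sql
-- Naive O(n*m*p) matrix multiplication over numeric[][] arrays.
CREATE FUNCTION multiply_arrays(a numeric[], b numeric[]) RETURNS numeric[]
LANGUAGE plpgsql AS
$$
DECLARE
    n int := array_length(a, 1);  -- rows of a
    m int := array_length(a, 2);  -- cols of a = rows of b
    p int := array_length(b, 2);  -- cols of b
    result numeric[] := array_fill(0::numeric, ARRAY[n, p]);
    s numeric;
BEGIN
    FOR i IN 1..n LOOP
        FOR j IN 1..p LOOP
            s := 0;
            FOR k IN 1..m LOOP
                s := s + a[i][k] * b[k][j];
            END LOOP;
            result[i][j] := s;
        END LOOP;
    END LOOP;
    RETURN result;
END;
$$;
```

The multiply(matrix, matrix) function above could then delegate to this helper and wrap the result back into a matrix row.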
Then you can call the matrix multiplication as:
SELECT * FROM multiply(matrix(1), matrix(2));
You could even insert into the table the product of two other matrices:
INSERT INTO matrix (matrix_data)
SELECT matrix_data FROM multiply(matrix(1), matrix(2));
