postgres convert json array to columns - arrays

I'm working in Postgres 9.6 and still getting my head around JSON.
I have a column with a JSON object containing an array of numbers that represent recurrence frequency and the days of the week:
{"every":"1","weekdays":["1"]}
{"every":"1","weekdays":["1","3"]}
{"every":"1","weekdays":["1","2","3","4","5"]}
ROW1 -[1] : MON
ROW2 -[1,3] : MON , WED
ROW3 -[1,2,3,4,5] : MON , TUE , WED , THU , FRI
I want to expand these into columns such that:
|ROW- |MON | TUE| WED|THU|FRI|
------------------------------
|ROW1 |Y |N |N |N |N |
|ROW2 |Y |N |Y |N |N |
|ROW3 |Y |Y |Y |Y |Y |
I can get the elements out using jsonb_array_elements(pattern), but then what?
I thought to use the 'contains' operator to build each column
(pattern #> '{1}', pattern #> '{2}', etc.) but I couldn't construct an object that would give a hit.

Example data:
create table my_table(id serial primary key, pattern jsonb);
insert into my_table (pattern) values
('{"weekdays": [1]}'),
('{"weekdays": [1, 3]}'),
('{"weekdays": [1, 2, 3, 4, 5]}');
You can use the containment operator @> in this way:
select
    id,
    pattern->'weekdays' @> '[1]' as mon,
    pattern->'weekdays' @> '[2]' as tue,
    pattern->'weekdays' @> '[3]' as wed,
    pattern->'weekdays' @> '[4]' as thu,
    pattern->'weekdays' @> '[5]' as fri
from my_table;
id | mon | tue | wed | thu | fri
----+-----+-----+-----+-----+-----
1 | t | f | f | f | f
2 | t | f | t | f | f
3 | t | t | t | t | t
(3 rows)

It seems I was on the right track with 'contains', but I had confused myself about what was in the array: I should have been looking for a string, not a number.
, bookings.pattern->'weekdays' @> '"1"'::jsonb
Thanks to Pitto for the prompt to put the outer JSON in the question, which made it obvious.
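The string-vs-number distinction that tripped things up here can be checked outside the database; a minimal Python sketch of the same membership logic (illustrative only, not Postgres code):

```python
import json

# jsonb keeps JSON types: the string "1" is not the number 1.
weekdays = json.loads('{"every": "1", "weekdays": ["1", "3"]}')["weekdays"]

print(1 in weekdays)    # False: the numeric 1 is not an element
print("1" in weekdays)  # True: the string "1" is
```

The same mismatch is why `@> '[1]'` finds nothing when the stored array holds `"1"`, while `@> '"1"'::jsonb` hits.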

Related

data matrix html table from flattened data in Vue

I'm using Vue. Let's say I have a database table with the historical price of a few kinds of fruit for the last few years, so: fruit, year and price columns.
| fruit | year | price |
|--------|------|-------|
| apple | 2018 | 52 |
| apple | 2019 | 57 |
| apple | 2020 | 56 |
| apple | 2021 | 50 |
| banana | 2018 | 25 |
| banana | 2019 | 26 |
| banana | 2021 | 28 |
| pear | 2018 | 61 |
| pear | 2019 | 65 |
| pear | 2020 | 67 |
| pear | 2021 | 64 |
Now I want to create an HTML table which has fruit names on one axis and years on the other, where the cells contain the price for the given fruit/year combination, as below. Some combinations might be missing from the data.
What features and template syntax would you use? Please do not suggest transforming the raw data: it comes straight from a database, there will be many tables like this, and I need a generic solution.
| | 2018 | 2019 | 2020 | 2021 |
|--------|------|------|------|------|
| apple | 52 | 57 | 56 | 50 |
| banana | 25 | 26 | n/a | 28 |
| pear | 61 | 65 | 67 | 64 |
I'm looking for elegant, "Vue-like" solutions. For now I created getRows() and getColumns() functions which collect all possible row and column values, and then a getCell(col, row) function to pick the right value from the dataset, but this might force Vue to rebuild the display more times than necessary when I edit the underlying data.
The broader question is how you work with relational data in Vue, because this is just a basic example; normally the name of the fruit would come from another base table...
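Independent of Vue, the index-then-look-up idea behind getRows/getColumns/getCell can be sketched (here in Python for brevity; in Vue the index would live in a computed property so it is only rebuilt when the rows change):

```python
rows = [
    ("apple", 2018, 52), ("apple", 2019, 57), ("apple", 2020, 56), ("apple", 2021, 50),
    ("banana", 2018, 25), ("banana", 2019, 26), ("banana", 2021, 28),
    ("pear", 2018, 61), ("pear", 2019, 65), ("pear", 2020, 67), ("pear", 2021, 64),
]

# Index the flat rows once by (fruit, year).
cells = {(fruit, year): price for fruit, year, price in rows}
fruits = sorted({fruit for fruit, _, _ in rows})
years = sorted({year for _, year, _ in rows})

# Render by iterating all row/column values; missing combinations fall back to "n/a".
matrix = [[cells.get((fruit, year), "n/a") for year in years] for fruit in fruits]
```

This keeps the per-cell lookup O(1) instead of scanning the flat dataset for every cell.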

Postgres sum of array stored in jsonb

I have a postgres database where some data are stored as jsonb arrays:
id | start | duration | value
----+------------------------+--------------+------------
1 | 2019-01-04 18:34:00+01 | [60] | [7]
2 | 2019-01-04 18:44:00+01 | [60] | [9]
3 | 2019-01-04 19:00:00+01 | [60] | [6]
4 | 2019-01-04 19:06:00+01 | [60] | [17]
5 | 2019-01-04 19:19:00+01 | [60] | [9]
6 | 2019-01-04 19:41:00+01 | [60, 60, 60] | [13, 8, 9]
7 | 2019-01-04 19:46:00+01 | [60] | [7]
8 | 2019-01-04 19:49:00+01 | [60] | [0]
I would like to get the sum of all the values in the array in the 'value'-field.
I can get all the values from the arrays using jsonb_array_elements:
=# select jsonb_array_elements(value),value from step limit 20;
jsonb_array_elements | value
----------------------+------------
7 | [7]
9 | [9]
6 | [6]
17 | [17]
9 | [9]
13 | [13, 8, 9]
8 | [13, 8, 9]
9 | [13, 8, 9]
7 | [7]
and so on. So I thought
select sum(jsonb_array_elements(value)::integer),start from step group by start
would do it, but I am told:
ERROR: aggregate function calls cannot contain set-returning function calls
HINT: You might be able to move the set-returning function into a LATERAL FROM item.
I have been looking a little bit into LATERAL FROM, but I still don't really get what postgres wants me to do...
Would it be easier to do this if I store the duration and value as arrays rather than json?
Use the function in a lateral join:
select start, sum(number::int)
from step s
cross join jsonb_array_elements_text(value) as number
group by start
start | sum
------------------------+-----
2019-01-04 19:00:00+01 | 6
2019-01-04 19:46:00+01 | 7
2019-01-04 18:44:00+01 | 9
2019-01-04 19:19:00+01 | 9
2019-01-04 18:34:00+01 | 7
2019-01-04 19:06:00+01 | 17
2019-01-04 19:49:00+01 | 0
2019-01-04 19:41:00+01 | 30
(8 rows)
This cross join is an implicit lateral join: the set-returning function is executed once for each row from step.
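What the lateral join does, expanding each row's array and then aggregating, can be sketched in Python (hypothetical data with the same shape as the step table):

```python
from collections import defaultdict

# Each row: (start, value) where value is the jsonb array of numbers.
step = [
    ("18:34", [7]),
    ("19:41", [13, 8, 9]),
    ("19:49", [0]),
]

# "cross join jsonb_array_elements_text(value)" yields one output row per
# array element; "group by start" then sums the elements per original row.
sums = defaultdict(int)
for start, value in step:
    for number in value:
        sums[start] += number

print(dict(sums))  # {'18:34': 7, '19:41': 30, '19:49': 0}
```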

SSRS 'where clause'

I've got a table that contains sales information for several companies. Each sales transaction the company makes is stored in the table, and the week of the year (1-52) that the sale took place within is stored also. Here's a small example of the database table that I'm querying to produce the SSRS report.
|---------------------|------------------|------------------|
| Company | Week |Sales_Transaction |
|---------------------|------------------|------------------|
| Alpha | 20 | 1.00 |
|---------------------|------------------|------------------|
| Alpha | 20 | 2.00 |
|---------------------|------------------|------------------|
| Beta | 20 | 9.00 |
|---------------------|------------------|------------------|
| Alpha | 21 | 5.00 |
|---------------------|------------------|------------------|
| Coolbeans | 21 | 5.50 |
|---------------------|------------------|------------------|
| Alpha | 22 | 2.00 |
|---------------------|------------------|------------------|
| Alpha | 22 | 2.00 |
|---------------------|------------------|------------------|
| Coolbeans | 22 | 3.00 |
|---------------------|------------------|------------------|
I have a matrix with a row group which produces a line in the matrix for each company. The matrix has 52 additional columns for each week of the year. Here's a condensed version of the matrix and data I want to see.
|--------------|---------------|----------------|----------------|
| Company | # Sales Wk 20 | # Sales Wk 21 | # Sales Wk 22 |
|--------------|---------------|----------------|----------------|
| Alpha | 2 | 1 | 2 |
|--------------|---------------|----------------|----------------|
| Beta | 1 | 0 | 0 |
|--------------|---------------|----------------|----------------|
| Coolbeans | 0 | 1 | 1 |
|--------------|---------------|----------------|----------------|
To count the number of sales transactions for each week for each company, I'm using this expression like this for each column:
=Count(IIF(Fields!Sales_Week_Number.Value = "20", Fields!Sales.Value, 0))
Using the example expression above which I'm placing in the # Sales Wk 20 matrix column, the problem is that instead of counting ONLY the transactions that occurred in week 20, it counts transactions for all weeks for the company. The result is that in column # Sales Wk 20, it shows a 5 for Alpha, a 1 for Beta, and a 2 for Coolbeans.
What do I need to do to make it only count the sales transaction from the specific week?
Side note: Regarding the 52 columns for each week of the year, I intentionally did not use a column group because I need to do some other calculations/comparisons with another matrix, which doesn't play nice when column groups are used. I did, however, use a row group for the companies.
Your expression should SUM an indicator instead of using COUNT, keeping the week filter:
=Sum(IIF(Fields!Sales_Week_Number.Value = "20", 1, 0))
COUNT counts every non-null result, and the IIF returns 0 (not Nothing) for the other weeks, so every row gets counted.
I think you may be going down the wrong path here. Since you're using a matrix in SSRS, the easiest way is to make SSRS handle the separation for you rather than building a WHERE.
Try just adding =CountRows() as part of your formula, and SSRS handles the grouping for you.
Use this expression in your matrix's value column:
=IIf(Fields!Sales_Transaction.Value > 0, Count(Fields!Sales_Transaction.Value), 0)
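Why Count over an IIF over-counts can be seen with plain numbers; a Python sketch using just the weeks column of the Alpha rows from the example:

```python
# Alpha's sales weeks from the example table.
weeks = [20, 20, 21, 22, 22]

# Count(IIF(week = 20, sale, 0)): the IIF still yields a value (0) for
# other weeks, so COUNT sees five non-null results.
count_style = len([0 if w != 20 else 1 for w in weeks])

# Sum(IIF(week = 20, 1, 0)): only week-20 rows contribute.
sum_style = sum(1 if w == 20 else 0 for w in weeks)

print(count_style, sum_style)  # 5 2
```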

Postgres Insert based on JSON data

Trying to do an insert based on whether a value already exists or not in a JSON blob in a Postgres table.
| a | b | c | d | metadata |
_____________________________________________________
| 1 | 2 | 3 | 4 | {"other-key": 1} |
| 2 | 1 | 4 | 4 | {"key": 99} |
| 3 | 1 | 4 | 4 | {"key": 99, "other-key": 33} |
Currently trying to use something like this.
INSERT INTO mytable (a, b, c, d, metadata)
SELECT :a, :b, :c, :d, :metadata::JSONB
WHERE
(:metadata->>'key'::TEXT IS NULL
OR :metadata->>'key'::TEXT NOT IN (SELECT :metadata->>'key'::TEXT
FROM mytable));
But I keep getting ERROR: operator does not exist: unknown ->> boolean
Hint: No operator matches the given name and argument type(s). You might need to add explicit type casts.
It was just a simple casting issue:
INSERT INTO mytable (a, b, c, d, metadata)
SELECT :a, :b, :c, :d, :metadata::JSONB
WHERE
((:metadata::JSONB->>'key') is NULL or
:metadata::JSONB->>'key' NOT IN (
SELECT metadata->>'key' FROM mytable
WHERE
metadata->>'key' = :metadata::JSONB->>'key'));

Store weekdays and time of the day table in database

I have the following data (this is not a real table in the database, just a group of information I need to store with each post in my database):
X = yes / true
O = no / false
Weekday | Morning | Day | Evening | Night |
---------------------------------------------
Monday | X | O | O | X |
Tuesday | X | O | O | X |
Wednesday | O | O | X | X |
Thursday | O | X | O | O |
Friday | X | X | X | X |
Saturday | O | O | X | O |
Sunday | X | X | X | O |
How should I store data like this in a database? I'm not too experienced with database design, and all the possible ways I could think of waste a lot of space. Normalization is not a requirement for this.
I don't need to query by this data; I just need to store it efficiently in the parent object/entity.
From what you've said, I'd create a table with 4 columns (Morning, Day, Evening, Night) and a Primary Key to reference the individual datasets (like the one you've shown). Then I'd use a bitfield for each row entry. For example Monday = 1, Tuesday = 2, Wednesday = 4, Thurs = 8, Friday = 16, Saturday = 32, and Sunday = 64.
Your dataset provided (here as PK 001) could be saved in a single row as:
PK | Morning | Day | Evening | Night |
001 83 88 116 23
The morning value is 83, because Monday (1) + Tues (2) + Friday (16) + Sunday (64) = 83.
Each column value is at most 127 (all seven day bits set), so any small integer type is enough; most databases have a smallint or tinyint type that would work.
You would then use the bitwise & operator to test whether a day is represented by a particular value:
83 (Morning value) & 16 (Friday) = 16 (Friday, therefore true)
88 (Day value) & 4 (Wednesday) = 0 (therefore false)
116 (Evening value) & 32 (Saturday) = 32 (Saturday, therefore true)
Alternatively, you could create a bitfield column for each Day of the Week, and the value would be Morning = 1, Day = 2, Evening = 4, and Night = 8.
Your given dataset would be represented as a single row in the database as:
PK | Mon | Tue | Wed | Thu | Fri | Sat | Sun |
001 9 9 12 2 15 4 7
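The encoding and the bitwise membership tests can be sketched as follows (the values match the example dataset above):

```python
# Day-of-week bit values, as in the answer above.
MON, TUE, WED, THU, FRI, SAT, SUN = 1, 2, 4, 8, 16, 32, 64

# Encode the example dataset's four columns.
morning = MON | TUE | FRI | SUN   # 83
day     = THU | FRI | SUN         # 88
evening = WED | FRI | SAT | SUN   # 116
night   = MON | TUE | WED | FRI   # 23

# Bitwise AND tests membership: a nonzero result means the day bit is set.
print(morning & FRI)  # 16 -> Friday morning is set
print(day & WED)      # 0  -> Wednesday day is not set
```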
You may think of creating a table with 28 columns, with each group of 4 columns corresponding to the 4 times of a day. Something like:
CREATE TABLE <name> ( SUNDAY_MORNING VARCHAR(1) ,
SUNDAY_DAY VARCHAR(1) ,
SUNDAY_EVENING VARCHAR(1) ,
SUNDAY_NIGHT VARCHAR(1) ,
MONDAY_MORNING VARCHAR(1) ,
MONDAY_DAY VARCHAR(1) ,
MONDAY_EVENING VARCHAR(1) ,
MONDAY_NIGHT VARCHAR(1) ,
...
) ;
Of course, KEYs may need to be defined, but it is not possible to suggest anything based on the provided info.
As for your concern about SPACE, this structure will use MUCH LESS space than you might imagine.
