I have a table like this:
CREATE TABLE test (
id BIGSERIAL PRIMARY KEY,
data JSONB
);
INSERT INTO test(data) VALUES('[1,2,"a",4,"8",6]'); -- id = 1
INSERT INTO test(data) VALUES('[1,2,"b",4,"7",6]'); -- id = 2
How do I update elements data->1 and data->3 to something else, without PL/*?
For Postgres 9.5 or later use jsonb_set(). See the later answer by adriaan.
You cannot manipulate selected elements of a json / jsonb type directly. Functionality for that is still missing in Postgres 9.4. You have to do 3 steps:
Unnest / decompose the JSON value.
Manipulate selected elements.
Aggregate / compose the value back again.
To replace the element at data->3 (the 4th element, since JSON arrays are 0-based; with the 1-based rn from WITH ORDINALITY that's rn = 4) in the row with id = 1 with a given (new) value ('<new_value>'):
UPDATE test t
SET    data = t2.data
FROM  (
   SELECT id
        , array_to_json(
             array_agg(CASE WHEN rn = 4 THEN '<new_value>' ELSE elem END
                       ORDER BY rn)
          )::jsonb AS data
   FROM   test t2
        , jsonb_array_elements_text(t2.data) WITH ORDINALITY x(elem, rn)
   WHERE  id = 1
   GROUP  BY 1
   ) t2
WHERE  t.id = t2.id
AND    t.data <> t2.data; -- avoid empty updates
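Note a caveat of going through the _text variant: jsonb_array_elements_text() returns every element as text, so after re-aggregation all elements come back as JSON strings, numbers included. A minimal demonstration:
SELECT array_to_json(array_agg(elem ORDER BY rn))::jsonb
FROM   jsonb_array_elements_text('[1,2,"a",4,"8",6]'::jsonb)
       WITH ORDINALITY x(elem, rn);
-- ["1", "2", "a", "4", "8", "6"]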
About json_array_elements_text():
How to turn JSON array into Postgres array?
About WITH ORDINALITY:
PostgreSQL unnest() with element number
You can do this from PostgreSQL 9.5 with jsonb_set:
INSERT INTO test(data) VALUES('[1,2,"a",4,"8",6]');
UPDATE test SET data = jsonb_set(data, '{2}','"b"', false) WHERE id = 1
Try it out with a simple select:
SELECT jsonb_set('[1,2,"a",4,"8",6]', '{2}','"b"', false)
-- [1, 2, "b", 4, "8", 6]
And if you want to update two elements, you can nest the calls:
SELECT jsonb_set(jsonb_set('[1,2,"a",4,"8",6]', '{0}','100', false), '{2}','"b"', false)
-- [100, 2, "b", 4, "8", 6]
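Applied to the question's actual targets (data->1 and data->3), with hypothetical replacement values 20 and "c", the nested form looks like this:
UPDATE test
SET data = jsonb_set(jsonb_set(data, '{1}', '20', false), '{3}', '"c"', false)
WHERE id = 1;
-- data for id = 1 becomes: [1, 20, "a", "c", "8", 6]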
Let's say, for example, I have the following table:
CREATE TABLE temp
(
id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
arr bigint[] NOT NULL
);
And insert rows into it:
INSERT INTO temp (arr) VALUES
(ARRAY[2, 3]),
(ARRAY[2,3,4]),
(ARRAY[4]),
(ARRAY[1, 2, 3])
So, I have now in the table:
 id |   arr
----+---------
  1 | {2,3}
  2 | {2,3,4}
  3 | {4}
  4 | {1,2,3}
I want a query that returns only the arrays that are unique, in the sense that they are not contained in any other row's array.
So the result will be rows 2 and 4 (the arr column).
This can be done using a NOT EXISTS condition:
select t1.*
from temp t1
where not exists (select *
from temp t2
where t1.id <> t2.id
and t2.arr @> t1.arr); -- @> means "contains"
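With the sample data, this should return the two arrays that no other row's array contains:
 id |   arr
----+---------
  2 | {2,3,4}
  4 | {1,2,3}
(2 rows)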
I have the following values in a SQL Server table:
offerId  contractTypes
-------  ------------------------------
1        [ "Hlavni pracovni pomer" ]
2        [ "ÖCVS", "Staz", "Prahovne" ]
But I need to build a query whose output looks like this:
offerId  contractTypes
-------  ---------------------
1        Hlavni pracovni pomer
2        ÖCVS
2        Staz
2        Prahovne
I know that I should probably use a combination of SUBSTRING and CHARINDEX, but I have no idea how to do it.
Could you please help me with how the query should look?
Thank you!
Try the following; it may work. Note that EXPLODE is a Hive/Spark SQL function rather than T-SQL, so this only applies if you query the data through such an engine.
SELECT
offerId,
cTypes
FROM yourTable AS mt
CROSS APPLY
EXPLODE(mt.contractTypes) AS dp(cTypes);
You can use the string_split() function:
select t.offerid, trim(translate(tt.value, '[]"', ' ')) as contractTypes
from table t cross apply
string_split(t.contractTypes, ',') tt(value);
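Assuming SQL Server 2017 or later (string_split() needs compatibility level 130 or higher, and translate() was added in 2017), this should return:
offerid  contractTypes
-------  ---------------------
1        Hlavni pracovni pomer
2        ÖCVS
2        Staz
2        Prahovne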
The data in each row of the contractTypes column is a valid JSON array, so you may use OPENJSON() with an explicit schema (the result is a table with the columns defined in the WITH clause) to parse this array and get the expected results:
Table:
CREATE TABLE Data (
offerId int,
contractTypes varchar(1000)
)
INSERT INTO Data
(offerId, contractTypes)
VALUES
(1, '[ "Hlavni pracovni pomer" ]'),
(2, '[ "ÖCVS", "Staz", "Prahovne" ]')
Statement:
SELECT d.offerId, j.contractTypes
FROM Data d
OUTER APPLY OPENJSON(d.contractTypes) WITH (contractTypes varchar(100) '$') j
Result:
offerId contractTypes
1 Hlavni pracovni pomer
2 ÖCVS
2 Staz
2 Prahovne
As an additional option, if you want to return the position of the contract type in the contractTypes array, you may use OPENJSON() with the default schema (the result is a table with columns key, value and type, where the value in the key column is the 0-based index of the element in the array):
SELECT
d.offerId,
CONVERT(int, j.[key]) + 1 AS contractId,
j.[value] AS contractType
FROM Data d
OUTER APPLY OPENJSON(d.contractTypes) j
ORDER BY CONVERT(int, j.[key])
Result:
offerId contractId contractType
1 1 Hlavni pracovni pomer
2 1 ÖCVS
2 2 Staz
2 3 Prahovne
If I have this table
CREATE TABLE tmp (
a integer,
b integer,
c text
);
INSERT INTO tmp (a, b, c) VALUES (1, 2, 'foo');
And this json:
{
"a": 4,
"c": "bar"
}
Where the keys map to the column names, and the values are the new values.
How can I update the tmp table without touching columns that aren't in the map?
I thought about constructing a dynamic SQL UPDATE statement and executing it in PL/pgSQL, but it seems the number of arguments passed to USING must be predetermined. The actual number of arguments is determined by the number of keys in the map, which is dynamic, so this seems like a dead end.
I know I can update the table using multiple UPDATE statements as I loop over the keys, but the problem is that I have a trigger set up on the table that revisions it (by inserting changed columns into another table), so the columns must be updated in a single UPDATE statement.
I wonder if it's possible to dynamically update a table with a json map?
Use coalesce(). Example table:
drop table if exists my_table;
create table my_table(id int primary key, a int, b text, c date);
insert into my_table values (1, 1, 'old text', '2017-01-01');
and query:
with jsondata(jdata) as (
values ('{"id": 1, "b": "new text"}'::jsonb)
)
update my_table set
a = coalesce((jdata->>'a')::int, a),
b = coalesce((jdata->>'b')::text, b),
c = coalesce((jdata->>'c')::date, c)
from jsondata
where id = (jdata->>'id')::int;
select * from my_table;
id | a | b | c
----+---+----------+------------
1 | 1 | new text | 2017-01-01
(1 row)
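One caveat of the coalesce() approach: it cannot distinguish a key that is absent from a key explicitly set to JSON null, so it can never set a column to NULL. If you need that, here is a sketch using the jsonb ? (key-exists) operator instead:
with jsondata(jdata) as (
   values ('{"id": 1, "b": null}'::jsonb)
)
update my_table set
   a = case when jdata ? 'a' then (jdata->>'a')::int  else a end,
   b = case when jdata ? 'b' then (jdata->>'b')::text else b end,
   c = case when jdata ? 'c' then (jdata->>'c')::date else c end
from jsondata
where id = (jdata->>'id')::int;
-- here b is set to NULL, while a and c are left unchanged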
Consider a table temp1
create temporary table temp1 (
id integer,
days integer[]
);
insert into temp1 values (1, '{}');
And another table temp2
create temporary table temp2(
id integer
);
insert into temp2 values (2);
insert into temp2 values (5);
insert into temp2 values (6);
I want to use the temp2 id values as indices into the days array of temp1, i.e. I want to set
days[index] = 99 where index is an id value from temp2. I want to accomplish this in a single query, or failing that, in the most optimal way.
Here is what I am trying, and it updates only one index, not all of them. Is it possible to update multiple indices of the array? I understand it can be done using a loop, but I was hoping a more optimized solution is possible:
update temp1
set days[temp2.id] = 99
from temp2;
select * from temp1;
id | days
----+------------
1 | [2:2]={99}
(1 row)
TL;DR: Don't use arrays for this. Really. Just because you can doesn't mean you should.
PostgreSQL's arrays are really not designed for in-place modification; they're data values, not dynamic data structures. I don't think what you're trying to do makes much sense, and suggest you re-evaluate your schema now before you dig yourself into a deeper hole.
You can't just construct a single null-padded array value from temp2 and do a slice-update because that'll overwrite values in days with nulls. There is no "update only non-null array elements" operator.
So we have to do this by decomposing the array into a set, modifying it, and recomposing it into an array.
To solve that, what I'm doing is:
Taking all rows from temp2 and adding the associated value, to produce (index, value) pairs
Doing a generate_series over the range from 1 to the highest index on temp2 and doing a left join on it, so there's one row for each index position
Left joining all that on the unnested original array and coalescing away nulls
... then doing an array_agg ordered by index to reconstruct the array.
With a more realistic/useful starting array state:
create temporary table temp1 (
id integer primary key,
days integer[]
);
insert into temp1 values (1, '{42,42,42}');
Development step 1: index/value pairs
First associate values with each index:
select id, 99 from temp2;
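With the sample data this gives:
 id | ?column?
----+----------
  2 |       99
  5 |       99
  6 |       99
(3 rows)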
Development step 2: add nulls for missing indexes
then join on generate_series to add entries for missing indexes:
SELECT gs.i, temp2values.newval
FROM (
SELECT id AS newvalindex, 99 as newval FROM temp2
) temp2values
RIGHT OUTER JOIN (
SELECT i FROM generate_series(1, (select max(id) from temp2)) i
) gs
ON (temp2values.newvalindex = gs.i);
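With temp2 holding 2, 5 and 6, this should produce:
 i | newval
---+--------
 1 |
 2 |     99
 3 |
 4 |
 5 |     99
 6 |     99
(6 rows)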
Development step 3: merge the original array values in
then join that on the unnested original array. You could use UNNEST ... WITH ORDINALITY for this in PostgreSQL 9.4, but I'm guessing you're not running that yet, so I'll show the old approach with row_number(). Note the use of a full outer join, and the change to the upper bound of the generate_series, to handle the case where the original values array is longer than the highest index in the new values list:
SELECT gs.i, coalesce(temp2values.newval, originals.val) AS val
FROM (
SELECT id AS newvalindex, 99 as newval FROM temp2
) temp2values
RIGHT OUTER JOIN (
SELECT i FROM generate_series(1, (select greatest(max(temp2.id), array_length(days,1)) from temp2, temp1 group by temp1.id)) i
) gs
ON (temp2values.newvalindex = gs.i)
FULL OUTER JOIN (
SELECT row_number() OVER () AS index, val
FROM temp1, LATERAL unnest(days) val
WHERE temp1.id = 1
) originals
ON (originals.index = gs.i)
ORDER BY gs.i;
This produces something like:
i | val
---+----------
1 | 42
2 | 99
3 | 42
4 |
5 | 99
6 | 99
(6 rows)
Development step 4: Produce the desired new array value
so now we just need to turn it back into an array by removing the ORDER BY clause at the end and using array_agg:
SELECT array_agg(coalesce(temp2values.newval, originals.val) ORDER BY gs.i)
FROM (
SELECT id AS newvalindex, 99 as newval FROM temp2
) temp2values
RIGHT OUTER JOIN (
SELECT i FROM generate_series(1, (select greatest(max(temp2.id), array_length(days,1)) from temp2, temp1 group by temp1.id)) i
) gs
ON (temp2values.newvalindex = gs.i)
FULL OUTER JOIN (
SELECT row_number() OVER () AS index, val
FROM temp1, LATERAL unnest(days) val
WHERE temp1.id = 1
) originals
ON (originals.index = gs.i);
with a result like:
array_agg
-----------------------
{42,99,42,NULL,99,99}
(1 row)
Final query: Use it in an UPDATE
UPDATE temp1
SET days = newdays
FROM (
SELECT array_agg(coalesce(temp2values.newval, originals.val) ORDER BY gs.i)
FROM (
SELECT id AS newvalindex, 99 as newval FROM temp2
) temp2values
RIGHT OUTER JOIN (
SELECT i FROM generate_series(1, (select greatest(max(temp2.id), array_length(days,1)) from temp2, temp1 group by temp1.id)) i
) gs
ON (temp2values.newvalindex = gs.i)
FULL OUTER JOIN (
SELECT row_number() OVER () AS index, val
FROM temp1, LATERAL unnest(days) val
WHERE temp1.id = 1
) originals
ON (originals.index = gs.i)
) calc_new_days(newdays)
WHERE temp1.id = 1;
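Afterwards the row holds the merged array:
select * from temp1;
 id |         days
----+-----------------------
  1 | {42,99,42,NULL,99,99}
(1 row)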
Note, however, that this only works for a single entry in temp1.id, and I've specified temp1.id twice in the query: once inside the query that generates the new array value, and once in the update predicate.
To avoid that, you'd need a key in temp2 that references temp1.id, and you'd need to make some changes to allow the generated padding rows to have the correct id value.
I hope this convinces you that you should probably not be using arrays for what you're doing, because it's horrible.
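For contrast, here's a minimal sketch of the normalized alternative (the table name temp1_days is hypothetical): one row per (id, index) pair, after which the whole exercise collapses into a single upsert (PostgreSQL 9.5+ for ON CONFLICT):
create table temp1_days (
    id        integer,
    day_index integer,
    value     integer,
    primary key (id, day_index)
);
-- "set days[index] = 99 for every index in temp2" becomes a plain upsert:
insert into temp1_days (id, day_index, value)
select 1, id, 99 from temp2
on conflict (id, day_index) do update set value = excluded.value;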
I have a query that selects rows that are all identical except for one column, let's call it column X.
What I want to do is combine the X column values of those repeated rows into one value, separated by the ',' character.
The query I use:
SELECT App.ID,App.Name,Grp.ColumnX
FROM (
SELECT * FROM CustomersGeneralGroups AS CG WHERE CG.GeneralGroups_ID IN(1,2,3,4)
) AS GroupsCustomers
LEFT JOIN Appointments AS App ON GroupsCustomers.Customers_ID = App.CustomerID
INNER JOIN Groups AS Grp ON Grp.ID = GroupsCustomers.GeneralGroups_ID
WHERE App.AppointmentDateTimeStart > @startDate AND App.AppointmentDateTimeEnd < @endDate
The ID and Name columns will be the same; only ColumnX will differ.
For example, if the query returns rows like these:
ID Name ColumnX
1 test1 1
1 test1 2
1 test1 3
The result I want to be is:
ID Name ColumnX
1 test1 1,2,3
I don't mind if I have to do it with LINQ instead of SQL.
I used GroupBy in LINQ, but it did not combine the ColumnX values into one.
If you have this data loaded in objects, you can use LINQ methods to achieve this like so:
var groupedRecords =
items
.GroupBy(item => new { item.Id, item.Name })
.Select(grouping => new
{
grouping.Key.Id,
grouping.Key.Name,
columnXValues = string.Join(",", grouping.Select(g => g.ColumnX))
});
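Alternatively, if you are on SQL Server 2017 or later, you can do the aggregation server-side with STRING_AGG; a sketch against the question's query (same tables and parameters assumed):
SELECT App.ID, App.Name,
       STRING_AGG(Grp.ColumnX, ',') AS ColumnX
FROM (
    SELECT * FROM CustomersGeneralGroups AS CG WHERE CG.GeneralGroups_ID IN (1,2,3,4)
) AS GroupsCustomers
LEFT JOIN Appointments AS App ON GroupsCustomers.Customers_ID = App.CustomerID
INNER JOIN Groups AS Grp ON Grp.ID = GroupsCustomers.GeneralGroups_ID
WHERE App.AppointmentDateTimeStart > @startDate
  AND App.AppointmentDateTimeEnd < @endDate
GROUP BY App.ID, App.Name;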