How to insert JSON data into PostgreSQL

I have a JSON object like the one below which I want to store in the database:
{
  "id": 1,
  "name": "test entity 1",
  "description": "a test entity for some guy's blog",
  "status": "passed",
  "web_url": "http://localhost:3000",
  "jobs": [
    {
      "id": "1",
      "name": "test1",
      "status": "passed"
    },
    {
      "id": "2",
      "name": "test2",
      "status": "passed"
    },
    {
      "id": "3",
      "name": "test3",
      "status": "failed"
    }
  ]
}
I proceeded as follows. To create the table:
CREATE TABLE test3 (id INT PRIMARY KEY, name VARCHAR, description VARCHAR, status VARCHAR, web_url VARCHAR, jobs JSON[]);
and to insert the data:
sqlStatement := `
INSERT INTO test3 (id, name, description, status, web_url, jobs)
VALUES ($1, $2, $3, $4, $5, $6)
ON CONFLICT (id) DO UPDATE
SET status = $4
RETURNING id`
id := 0
err = database.Db.QueryRow(sqlStatement, y[i].ID, y[i].Name, y[i].Description, y[i].Status, y[i].WebURL, jobsdata).Scan(&id)
if err != nil {
panic(err)
}
But it doesn't work. I get this error:
panic: sql: converting argument $6 type: unsupported type handler.Jobs, a slice of struct
What I want:
postgres=# SELECT * FROM test3;
 id | name          | description                       | status | web_url               | jobs
----+---------------+-----------------------------------+--------+-----------------------+------------------------------------------------------------
  1 | test entity 1 | a test entity for some guy's blog | passed | http://localhost:3000 | {id: "1", name: "test1", status: "passed"}, {id: "2", name: "test2", status: "passed"}, {id: "3", name: "test3", status: "failed"}

As the error indicates, you're trying to bind the sixth value from an unsupported data type, handler.Jobs. You haven't told us what this type is, but from the error it's clear that it does not implement the driver.Valuer interface, so the driver has no way of knowing how to represent that value to the database.
You'll need to implement that interface by adding a Value() method to the handler.Jobs type, or use a different data type.
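For instance, here is a minimal sketch of that approach; the error message says handler.Jobs is "a slice of struct", but the exact fields below are a guess:

import (
    "database/sql/driver"
    "encoding/json"
)

// Jobs is assumed to look roughly like this; adjust to your actual type.
type Jobs []struct {
    ID     string `json:"id"`
    Name   string `json:"name"`
    Status string `json:"status"`
}

// Value implements driver.Valuer: it tells database/sql how to
// represent the slice to Postgres, here as a single JSON document.
func (j Jobs) Value() (driver.Value, error) {
    return json.Marshal(j)
}

Note that this stores the whole slice as one JSON document, so the jobs column would be declared json or jsonb rather than JSON[].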

sqlx has a type JSONText in github.com/jmoiron/sqlx/types that will do what you need.
Example
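A minimal sketch of that route, reusing the variables from the question and assuming y[i].Jobs marshals cleanly with encoding/json:

import (
    "encoding/json"

    "github.com/jmoiron/sqlx/types"
)

// Marshal the jobs slice to raw JSON bytes first.
raw, err := json.Marshal(y[i].Jobs)
if err != nil {
    panic(err)
}

// types.JSONText implements driver.Valuer and sql.Scanner,
// so it can be bound directly as $6.
jobsdata := types.JSONText(raw)

err = database.Db.QueryRow(sqlStatement,
    y[i].ID, y[i].Name, y[i].Description, y[i].Status, y[i].WebURL, jobsdata,
).Scan(&id)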

Related

With Rails 6 and PostgreSQL, return an object collection using an array item as reference

Using Rails 6 and Postgres, I need to generate a collection of objects from the array items in the relations field, returning each person 'duplicated' once per relation they have.
The database field was created with this migration:
t.integer :relations, array: true, null: false, default: []
...
add_index :people, :relations, using: :gin
And it created this table:
 id | name  | relations
----+-------+-------------
  1 | João  | {nil,1,2,3}
  2 | Maria | {nil,1}
I need a result like this, without the nils:
[
{id: 1, name: 'João', relation: 1},
{id: 1, name: 'João', relation: 2},
{id: 1, name: 'João', relation: 3},
{id: 2, name: 'Maria', relation: 1}
]
Something like this crude SQL:
SELECT id, name, '1' as relation FROM people WHERE 1 = ANY (relations)
union
SELECT id, name, '2' as relation FROM people WHERE 2 = ANY (relations)
union
SELECT id, name, '3' as relation FROM people WHERE 3 = ANY (relations);
Thanks :)
This should give you the expected result:
Person.joins(", unnest(people.relations) rel")
.where("rel is not null")
.select("people.id, people.name, rel")
(For Postgres version > 9.4)

"wrong element type" when using JSONBArray using pgx

I am trying to insert a new row that has an inventory column with data type jsonb[]:
elements := []pgtype.Text{{String: `{"adsda": "asdasd"}`, Status: pgtype.Present}}
dimensions := []pgtype.ArrayDimension{{Length: 1, LowerBound: 1}}
inventory := pgtype.JSONBArray{Elements: elements, Dimensions: dimensions, Status: pgtype.Present}
row = db.pool.QueryRow(context.Background(), `INSERT INTO user ("email", "password", "inventory") VALUES($1, $2, $3) RETURNING uuid, email, "password"`, requestEmail, requestPassword, inventory)
But I get the following error:
"Severity": "ERROR",
"Code": "42804",
"Message": "wrong element type",
"Detail": "",
"Hint": "",
"Position": 0,
"InternalPosition": 0,
"InternalQuery": "",
"Where": "",
"SchemaName": "",
"TableName": "",
"ColumnName": "",
"DataTypeName": "",
"ConstraintName": "",
"File": "arrayfuncs.c",
"Line": 1316,
"Routine": "array_recv"
Postgres table definition:
CREATE TABLE public.user (
uuid uuid NOT NULL DEFAULT uuid_generate_v4(),
email varchar(64) NOT NULL,
"password" varchar(32) NOT NULL,
inventory _jsonb NULL,
CONSTRAINT user_pk PRIMARY KEY (uuid)
);
What might be the issue? Any idea would help.
Type jsonb[] in pgx was broken
As for the error message you report:
"Severity": "ERROR",
"Code": "42804",
"Message": "wrong element type",
...
The GitHub page on pgx reveals:
The binary format can be substantially faster, which is what the pgx interface uses.
So you are using the binary protocol. For this, the data types have to use a compatible binary format, and it seems that an ARRAY of jsonb was not encoded properly. Related:
PostgreSQL/PostGIS - PQexecParams - wrong element type
Luckily for you, the author seems to have fixed this just yesterday (!):
jackc: Fix JSONBArray to have elements of JSONB
Your problem should go away once you install the latest version containing commit 79b05217d14ece98b13c69ba3358b47248ab4bbc.
jsonb[] vs. jsonb with nested JSON array
It might be simpler to use a plain jsonb instead of jsonb[]. JSON can nest arrays by itself. Consider:
SELECT '[{"id": 1}
, {"txt": "something"}]'::jsonb AS jsonb_array
, '{"{\"id\": 1}"
,"{\"txt\": \"something\"}"}'::jsonb[] AS pg_array_of_jsonb;
Either can be unnested in Postgres:
SELECT jsonb_array_elements('[{"id": 1}, {"txt": "something"}]'::jsonb) AS jsonb_element_from_json_array;
SELECT unnest('{"{\"id\": 1}","{\"txt\": \"something\"}"}'::jsonb[]) AS jsonb_element_from_pg_array;
Same result.
That should also avoid your error.
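On the Go side, here is a minimal sketch of that plain-jsonb route, reusing the variables from the question and assuming an inventory column of type jsonb (pgx can bind a []byte or string holding JSON text to a jsonb parameter):

import (
    "context"
    "encoding/json"
)

// Build the inventory as a regular Go value and marshal it into one
// JSON document instead of a Postgres array of jsonb.
inventory, err := json.Marshal([]map[string]string{
    {"adsda": "asdasd"},
})
if err != nil {
    panic(err)
}

// Note the quoted "user"; it is a reserved word (see below).
row := db.pool.QueryRow(context.Background(),
    `INSERT INTO "user" (email, password, inventory)
     VALUES ($1, $2, $3)
     RETURNING uuid, email, password`,
    requestEmail, requestPassword, inventory)

var uuid, email, password string
if err := row.Scan(&uuid, &email, &password); err != nil {
    panic(err)
}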
Additional error
Your INSERT command:
INSERT INTO user ("email", "password", "inventory") VALUES ...
... should really raise this:
ERROR: syntax error at or near "user"
Because user is a reserved word. You would have to double-quote it to make it work. But rather, don't use user as a Postgres identifier. Ever.
The table creation works because there the table name is schema-qualified, which makes it unambiguous:
CREATE TABLE public.user ( ...

Snowflake: pivot attribute values into columns from an array of objects

EDIT: I gave bad example data. Updated some details and switched out dummy data for sanitized, actual data.
Source system: Freshdesk via Stitch
Table Structure:
create or replace TABLE TICKETS (
CC_EMAILS VARIANT,
COMPANY VARIANT,
COMPANY_ID NUMBER(38,0),
CREATED_AT TIMESTAMP_TZ(9),
CUSTOM_FIELDS VARIANT,
DUE_BY TIMESTAMP_TZ(9),
FR_DUE_BY TIMESTAMP_TZ(9),
FR_ESCALATED BOOLEAN,
FWD_EMAILS VARIANT,
ID NUMBER(38,0) NOT NULL,
IS_ESCALATED BOOLEAN,
PRIORITY FLOAT,
REPLY_CC_EMAILS VARIANT,
REQUESTER VARIANT,
REQUESTER_ID NUMBER(38,0),
RESPONDER_ID NUMBER(38,0),
SOURCE FLOAT,
SPAM BOOLEAN,
STATS VARIANT,
STATUS FLOAT,
SUBJECT VARCHAR(16777216),
TAGS VARIANT,
TICKET_CC_EMAILS VARIANT,
TYPE VARCHAR(16777216),
UPDATED_AT TIMESTAMP_TZ(9),
_SDC_BATCHED_AT TIMESTAMP_TZ(9),
_SDC_EXTRACTED_AT TIMESTAMP_TZ(9),
_SDC_RECEIVED_AT TIMESTAMP_TZ(9),
_SDC_SEQUENCE NUMBER(38,0),
_SDC_TABLE_VERSION NUMBER(38,0),
EMAIL_CONFIG_ID NUMBER(38,0),
TO_EMAILS VARIANT,
PRODUCT_ID NUMBER(38,0),
GROUP_ID NUMBER(38,0),
ASSOCIATION_TYPE NUMBER(38,0),
ASSOCIATED_TICKETS_COUNT NUMBER(38,0),
DELETED BOOLEAN,
primary key (ID)
);
Note the variant field, "custom_fields". It undergoes an unfortunate transformation between the API and Snowflake. The resulting field contains an array of 3 or more objects, each one a custom field. I do not have the ability to change the data format. Examples:
# values could be null
[
{
"name": "cf_request",
"value": "none"
},
{
"name": "cf_related_with",
"value": "none"
},
{
"name": "cf_question",
"value": "none"
}
]
# or values could have a combination of null and non-null values
[
{
"name": "cf_request",
"value": "none"
},
{
"name": "cf_related_with",
"value": "none"
},
{
"name": "cf_question",
"value": "concern"
}
]
# or they could all have non-null values
[
{
"name": "cf_request",
"value": "issue with timer"
},
{
"name": "cf_related_with",
"value": "timer stopped"
},
{
"name": "cf_question",
"value": "technical problem"
}
]
I would essentially like to pivot these into fields in a select query where the name attribute's value becomes a column header, making the output similar to the following:
+----+------------------+-----------------+-------------------+-----------------------------+
| id | cf_request | cf_related_with | cf_question | all_other_fields |
+----+------------------+-----------------+-------------------+-----------------------------+
| 5 | issue with timer | timer stopped | technical problem | more data about this ticket |
| 6 | hq | laptop issues | some value | more data |
| 7 | a thing | about a thing | about something | more data |
+----+------------------+-----------------+-------------------+-----------------------------+
Is there a function that searches the values of array objects and returns objects with qualifying values? Something like:
select
id,
get_object_where(name = 'category', value) as category,
get_object_where(name = 'subcategory', value) as category,
get_object_where(name = 'subsubcategory', value) as category
from my_data_table
Unfortunately, PIVOT requires an aggregate function. I tried using min and max, but only got null values back. Something similar to this approach would be great if there is another syntax that doesn't require aggregation.
with arr as (
select
id,
cs.value:name col_name,
cs.value:value col_value
from my_data_table,
lateral flatten(input => custom_fields) cs
)
select
*
from arr
pivot(col_value for col_name in ('category', 'subcategory', 'subsubcategory'))
as p (id, category, subcategory, subsubcategory);
It is possible to use the following approach, but it is flawed in that any time a new custom field is added I have to add cases to account for new positions within the array.
select
id,
case
when custom_fields[0]:name = 'cf_request' then custom_fields[0]:value
when custom_fields[1]:name = 'cf_request' then custom_fields[1]:value
when custom_fields[2]:name = 'cf_request' then custom_fields[2]:value
when custom_fields[3]:name = 'cf_request' then custom_fields[3]:value
else null
end cf_request,
case
when custom_fields[0]:name = 'cf_related_with' then custom_fields[0]:value
when custom_fields[1]:name = 'cf_related_with' then custom_fields[1]:value
when custom_fields[2]:name = 'cf_related_with' then custom_fields[2]:value
when custom_fields[3]:name = 'cf_related_with' then custom_fields[3]:value
else null
end cf_related_with,
case
when custom_fields[0]:name = 'cf_question' then custom_fields[0]:value
when custom_fields[1]:name = 'cf_question' then custom_fields[1]:value
when custom_fields[2]:name = 'cf_question' then custom_fields[2]:value
when custom_fields[3]:name = 'cf_question' then custom_fields[3]:value
else null
end cf_question,
created_at
from my_db.my_schema.tickets;
I think you almost had it. You just need to add a max() or min() around your col_value. As you stated, PIVOT needs an aggregate function, and max() or min() works here, since it aggregates over the name/value pairs that you have. If you had two subcategory values, for example, it would pick the min/max value; from your example, that doesn't appear to be an issue, so it'll always choose the value you want. I was able to replicate your scenario with this query:
WITH x AS (
SELECT parse_json('[{"name": "category","value": "Bikes"},{"name": "subcategory","value": "Mountain Bikes"},{"name": "subsubcategory","value": "hardtail bikes"}]')::VARIANT as field_var
),
arr as (
select
seq,
cs.value:name::varchar col_name,
cs.value:value::varchar col_value
from x,
lateral flatten(input => x.field_var) cs
)
select
*
from arr
pivot(max(col_value) for col_name in ('category','subcategory','subsubcategory')) as p (seq, category, subcategory, subsubcategory);

Postgres jsonb cast recordset from UNIX to timestamp

I'm working with a Postgres table that has a jsonb column. I've been able to create a recordset to turn the JSON into rows from the jsonb object. I'm struggling to convert the timestamp from UNIX to a readable timestamp.
This is what the jsonb object looks like with timestamp stored as UNIX:
{
"signal": [
{
"id": "e80",
"on": true,
"unit": "sample 1",
"timestamp": 1521505355
},
{
"id": "97d",
"on": false,
"unit": "sample 2",
"timestamp": 1521654433
},
{
"id": "97d",
"on": false,
"unit": "sample 3",
"timestamp": 1521654433
}
]
}
Ideally I'd like it to look like this, but I get an error for the timestamp:
id | on | unit | timestamp
---+------+----------+--------------------------
e80|true | sample 1 | 2018-03-20 00:22:35+00:00
97d|false | sample 2 | 2018-03-21 17:47:13+00:00
97d|false | sample 3 | 2018-03-21 17:47:13+00:00
This is what I have so far; it returns the expected values for the columns but gives an error for the timestamp column:
select b.*
from device d
cross join lateral jsonb_to_recordset(d.events->'signal') as
b("id" integer, "on" boolean, "unit" text, "timestamp" timestamp)
The timestamp data type is throwing an error:
[22008] ERROR: date/time field value out of range
Any help or suggestions for casting the timestamp from UNIX to an actual timestamp are greatly appreciated.
You may specify it as INTEGER in the column definition list and then convert it to TIMESTAMP using TO_TIMESTAMP.
Furthermore, the id which you are trying to define can't be an integer; values like 'e80' aren't numeric.
Query 1:
SELECT b.id
     , b."on"
     , b.unit
     , to_timestamp(b."timestamp") AS "timestamp"
FROM device d
CROSS JOIN LATERAL jsonb_to_recordset(d.events -> 'signal')
     AS b("id" text, "on" boolean, "unit" text, "timestamp" int)
Results:
| id | on | unit | timestamp |
|-----|-------|----------|----------------------|
| e80 | true | sample 1 | 2018-03-20T00:22:35Z |
| 97d | false | sample 2 | 2018-03-21T17:47:13Z |
| 97d | false | sample 3 | 2018-03-21T17:47:13Z |

Postgres/JSON - update all array elements

Given the following json:
{
"foo": [
{
"bar": true
},
{
"bar": true
}
]
}
How can I select the following:
{
"foo": [
{
"bar": false
},
{
"bar": false
}
]
}
?
So far I've figured out how to manipulate a single array value:
SELECT
jsonb_set(
'{
"foo": [
{
"bar": true
},
{
"bar": true
}
]
}'::jsonb, '{foo,0,bar}', to_jsonb(false)
)
But how do I set all elements within an array?
You might want to kill two birds with one stone: update an existing key in every object in the array, or insert the key with a given value. jsonb_set is a perfect match here, but it requires us to pass the index of each object, so we have to iterate over the array first.
The implementation is HIGHLY inspired by klin's answer below, which didn't solve my problem (updating and inserting) and didn't work when there were multiple keys in the object.
So, the implementation is as follows:
-- the params are the same as in aforementioned `jsonb_set`
CREATE OR REPLACE FUNCTION update_array_elements(target jsonb, path text[], new_value jsonb)
RETURNS jsonb language sql AS $$
-- aggregate the jsonb from parts created in LATERAL
SELECT jsonb_agg(updated_jsonb)
-- split the target array to individual objects...
FROM jsonb_array_elements(target) individual_object,
-- operate on each object and apply jsonb_set to it. The results are aggregated in SELECT
LATERAL jsonb_set(individual_object, path, new_value) updated_jsonb
$$;
And that's it... :)
I hope it'll help someone with the same problem I had.
There is no standard function to update json array elements by key.
A custom function is probably the simplest way to solve the problem:
create or replace function update_array_elements(arr jsonb, key text, value jsonb)
returns jsonb language sql as $$
select jsonb_agg(jsonb_build_object(k, case when k <> key then v else value end))
from jsonb_array_elements(arr) e(e),
lateral jsonb_each(e) p(k, v)
$$;
select update_array_elements('[{"bar":true},{"bar":true}]'::jsonb, 'bar', 'false');
update_array_elements
----------------------------------
[{"bar": false}, {"bar": false}]
(1 row)
Your query may look like this:
with a_data(js) as (
values(
'{
"foo": [
{
"bar": true
},
{
"bar": true
}
]
}'::jsonb)
)
select
jsonb_set(js, '{foo}', update_array_elements(js->'foo', 'bar', 'false'))
from a_data;
jsonb_set
-------------------------------------------
{"foo": [{"bar": false}, {"bar": false}]}
(1 row)
