I'm currently cleaning up some tables in our application. Some of them have a JSONB column whose values are just true or false for a given key. Seeing that, I want to get rid of the JSONB and instead have an enum array containing the former JSONB keys whose value was true.
For instance, I want to transform this table
CREATE TABLE my_table (
...
my_field JSONB
...
);
to
CREATE TYPE field_enum AS ENUM (
'key1',
'key2',
'key3',
'key4',
'key5',
'key6'
);
CREATE TABLE my_table (
...
my_field field_enum[] DEFAULT NULL,
...
);
And I want the data to be migrated (for this example) from
{"key1": true, "key2": null, "key3": false, "key4": true}
to
['key1','key4']
I'm trying to alter my column type and do the data migration in a single command, which I think could be something like:
ALTER TABLE
my_table
ALTER COLUMN
my_field
TYPE
field_enum[]
USING __XXXXXX___;
The USING part is where I'm having difficulties. Does anybody have an idea of how I should alter my column without losing data?
I'm also open to creating a new field, filling it with an UPDATE on the table, and renaming it afterwards.
I was thinking of using the jsonb_object_keys function, but it gives me all the keys, not just those whose values are true, and it also gives me a set of records which I don't manage to cast to an enum[].
Going deeper (though this is not mandatory for me, as I can do it as a post-treatment), it may happen that a JSON key has to be mapped to an enum value that is not the same; let's say key1 should be converted to NEW_KEY (upper case and name change). Do you think it's possible to include that in the same PostgreSQL command?
If anybody has an idea of what I could do, I would appreciate any help.
Thanks!
X.
You need a function to convert a jsonb object to your enum array:
create or replace function jsonb_to_field_enums(jsonb)
returns field_enum[] language sql immutable
as $$
select array_agg(key)::field_enum[]
from jsonb_each_text($1)
where value::bool -- keeps only keys whose value is true; false and null are filtered out
$$;
db<>fiddle.
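With that function in place, the whole conversion can be done in one statement; a minimal sketch, reusing the table and column names from the question:

```sql
-- One-shot conversion: the USING clause feeds each old jsonb value
-- through the function to produce the new enum-array value.
ALTER TABLE my_table
  ALTER COLUMN my_field
  TYPE field_enum[]
  USING jsonb_to_field_enums(my_field);
```

For the key renaming (key1 to NEW_KEY), a CASE expression over key inside the function's SELECT, applied before the cast to field_enum[], would handle it, assuming NEW_KEY is an actual label of field_enum.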
I read the document on "CREATE TABLE" at https://learn.microsoft.com/en-us/sql/t-sql/statements/create-table-transact-sql?view=sql-server-ver15
It says:
timestamp data types must be NOT NULL.
However, when I create a table, I can create a field with the timestamp type and make it nullable. So what is the problem?
Update
When using the following query:
USE MyDB6;
CREATE TABLE MyTable (Col1 timestamp NULL);
I expect an error saying the column Col1 cannot be NULL but nothing happens.
After creating the table, I run the following query:
USE MyDB6
SELECT COLUMNPROPERTY(OBJECT_ID('MyTable', 'U'), 'Col1', 'AllowsNull');
I expect the result is 0, but actually it is 1.
So my question is: though the document says "timestamp data types must be NOT NULL", and in real cases this data type will indeed never be NULL, why does the CREATE TABLE query not prevent me from making the column nullable, and why does the system still save the column as nullable?
Like marc_s said in their comment, this data type is handled internally and will never be null. Try the following:
declare @test timestamp = null -- ROWVERSION would be less confusing
select @test
It does not return NULL.
As to why you're allowed to mark it nullable: what would be gained by creating this deviation from the standard? You cannot INSERT NULL into a TIMESTAMP/ROWVERSION column, and you cannot UPDATE it at all. I imagine it would be quite a lot of trouble to alter the CREATE syntax to make certain data types not nullable; more trouble than it's worth.
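That behaviour can be sketched with a scratch table (the table and column names here are illustrative, not from the question):

```sql
-- Hypothetical scratch table with a nullable rowversion column.
CREATE TABLE RvDemo (Id int NOT NULL, Rv rowversion NULL);

-- Fails: an explicit value (NULL included) cannot be inserted into a rowversion column.
INSERT INTO RvDemo (Id, Rv) VALUES (1, NULL);

-- Works: omit the rowversion column and the engine generates the value.
INSERT INTO RvDemo (Id) VALUES (1);

-- Fails: a rowversion column cannot be updated directly.
UPDATE RvDemo SET Rv = NULL WHERE Id = 1;
```

So even with NULL in the column definition, the engine never actually lets a NULL in.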
I am trying to transform data stored in an HSTORE column ('data') of Postgres.
My rows have the key "entity", and the value is an array:
"entity"=>"[{'id': .............}]
I used the following code:
Alter TABLE my_table
ALTER COLUMN h_store_column TYPE jsonb
USING hstore_to_jsonb_loose(data -> 'entity');
which resulted in the following value in the new column:
"[{'id': .............}]"
but with quotes "". This makes the value a scalar string in the JSONB column and doesn't let me run my queries.
How can I change the value of every row in the new JSONB column named 'entity', without the quotes?
[{'id': .............}]
SAMPLE CODE TO GENERATE SIMILAR DATA:
"key" => "[json_text_array]"
stored in hstore data type column.
When changed to JSONB type, I get {'key':'[array]'}, whereas I am after {'key': [array]} with no quotes. I tried the loose functions in Postgres; no help.
From what I understand of your question, you have a column of type hstore with a key named entity whose value is a JSON array. Here is an explanation of your problem and the solution.
The ALTER query mentioned in your question will throw an error, because the hstore_to_jsonb_loose function accepts an hstore value but you are passing text. The correct query should be:
Alter TABLE my_table
ALTER COLUMN h_store_column TYPE jsonb
USING hstore_to_jsonb_loose(data) -> 'entity';
The above query converts the hstore key-value pairs into a jsonb object and stores it in the column h_store_column.
So the function hstore_to_jsonb_loose converts the data to { "entity": "[{'id':..........}]" }, from which you extract the JSON value of the key 'entity', which is "[{'id':..........}]".
You want to store the value fetched from hstore_to_jsonb_loose(data) -> 'entity' as a full JSON array. The value stored in your hstore column looks like JSON, but it is not valid JSON: in JSON, keys and values (other than numbers and booleans) are surrounded by ", but in your string they are surrounded by '. So it cannot be stored as-is in a JSONB column.
Assuming there is no other problem in the structure of the value (other than the '), we should replace every ' with " and store the result as JSONB. Try this query:
Alter TABLE test
ALTER COLUMN h_store_column TYPE jsonb
USING replace(hstore_to_jsonb_loose(data)->>'entity','''','"')::jsonb;
DEMO1
hstore_to_jsonb_loose is not even required in your case. You can write your ALTER statement as below:
Alter TABLE test
ALTER COLUMN h_store_column TYPE jsonb
USING replace((data)->'entity','''','"')::jsonb;
DEMO2
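Once the column holds a real JSON array, it can be queried element by element; a minimal sketch, assuming the array elements carry an id key as in the sample data:

```sql
-- Expand the jsonb array into one row per element and read the (assumed) id key.
SELECT elem ->> 'id' AS id
FROM test,
     jsonb_array_elements(h_store_column) AS elem;
```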
I have a table which looks like this:
CREATE TABLE "USER"
( "NUMBER" VARCHAR2(8) NOT NULL ENABLE,
"ROLE" VARCHAR2(100) NOT NULL ENABLE,
"QUESTION_ORDER" "T_NUMARRAY",
"FORENAME" VARCHAR2(20),
"SURNAME" VARCHAR2(20),
CONSTRAINT "USER_PK" PRIMARY KEY ("NUMBER")
USING INDEX ENABLE
)
VARRAY "QUESTION_ORDER" STORE AS SECUREFILE LOB
/
I'm trying to update the column QUESTION_ORDER with an array filled with numbers.
My code, which generates the array:
DECLARE
TYPE T_NUMARRAY IS TABLE OF number INDEX BY BINARY_INTEGER;
numArray T_NUMARRAY;
BEGIN
SELECT PAGE_ID BULK COLLECT INTO numArray FROM APEX_APPLICATION_PAGES WHERE APPLICATION_ID = 943 AND PAGE_NAME LIKE '%Questions_%' ORDER BY PAGE_ID ASC;
FOR i IN 1 .. numArray.Count Loop
UPDATE USER SET Question_Order = numArray WHERE QNUMMER = :APP_USER;
END LOOP;
END
When I try to update the entries in the table, I get the error:
ORA-06550: line 9, column 48: PLS-00382: expression is of wrong type
I don't know how to insert the array correctly. Maybe someone can help me? :)
If you really have a table with a column of type T_NUMARRAY, then T_NUMARRAY has to be defined at the schema level using the CREATE TYPE statement. If that is true, then you must not define T_NUMARRAY locally. Remove the line right after DECLARE where you define a local collection with the same name.
I would feel much better about this answer if you provided more information in your question. Please add the DDL that creates the table for starters.
Do you really, really need to store the data as an array? I would avoid that if possible and instead store the data in the "normal" way.
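A minimal sketch of the corrected block, assuming T_NUMARRAY is already defined at schema level (e.g. as a VARRAY of NUMBER, as the table DDL implies) and that the key column is "NUMBER" as in the question's DDL:

```sql
DECLARE
  -- No local type declaration: reuse the schema-level T_NUMARRAY the column uses.
  numArray T_NUMARRAY;
BEGIN
  SELECT PAGE_ID
  BULK COLLECT INTO numArray
  FROM APEX_APPLICATION_PAGES
  WHERE APPLICATION_ID = 943
    AND PAGE_NAME LIKE '%Questions_%'
  ORDER BY PAGE_ID ASC;

  -- A single UPDATE assigns the whole array; no loop is needed.
  UPDATE "USER" SET QUESTION_ORDER = numArray WHERE "NUMBER" = :APP_USER;
END;
/
```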
CREATE TABLE people(
name_ varchar(50) NOT NULL,
count int NOT NULL DEFAULT 0
);
CREATE TABLE person_added(
date_ date NOT NULL,
all_people_ people[],
all_people_count int NOT NULL
);
CREATE TABLE all_people_array_table(
id SERIAL,
people_array person_added[]
);
{"(2016-02-27,{(Jack,3),(John,6)},1000)","(2016-03-27,{(Ben,3),(Francis,6)},2000)"}
people_array consists of person_added composite values, and I need to get the dates in this array (2016-02-27 and 2016-03-27).
I guess I need to use a slice, but it didn't work for me.
You can use the unnest() function to convert the array into a set of rows:
SELECT date_
FROM all_people_array_table, unnest(people_array);
The unnest() function is a table function, so you use it like you would a table name. Such table functions can use columns from previously specified relations (see the "LATERAL Subqueries" section of the PostgreSQL documentation).
But you should really review your table design. Working with arrays like this undermines the relational structure of the data. Your design looks more like a hierarchical model, the database design paradigm of the 1960s. Instead of arrays, use keys from one table to the next.
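A normalized version of the design above could look roughly like this (a sketch; the exact foreign-key layout is an assumption about your data):

```sql
-- One row per batch instead of an array of composites.
CREATE TABLE person_added (
    id    serial PRIMARY KEY,
    date_ date NOT NULL
);

-- One row per person, keyed back to the batch it belongs to.
CREATE TABLE people (
    person_added_id int REFERENCES person_added (id),
    name_           varchar(50) NOT NULL,
    count           int NOT NULL DEFAULT 0
);

-- The dates then come from a plain query, no unnest() needed.
SELECT date_ FROM person_added;
```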
We have a tool, and this tool creates insert scripts to add rows to one of our Oracle tables.
The scripts are not modifiable and we can't even see them. So,
when a data row comes in (via an insert script whose structure we don't know), we need to supply a primary key. We could use a trigger, but we don't want to, for performance reasons.
The code below doesn't work:
CREATE TABLE qname
(
qname_id integer NOT NULL default qname_id_seq.nextval PRIMARY KEY
);
Any idea?
Thanks..
.nextval cannot be used in a column's default value before Oracle 12c; that is classically achieved with a trigger and a sequence when you want a serialized number that anyone can easily read/remember/understand. But if you don't want to manage an ID column (like qname_id) that way, and the value of this column is not much of a concern, you can use SYS_GUID() at table creation to get auto-increment-like behaviour:
CREATE TABLE qname
(qname_id RAW(16) DEFAULT SYS_GUID() PRIMARY KEY,
name VARCHAR2(30));
(or by modifying the column)
Now your qname_id column will accept a "globally unique identifier value".
You can insert a row while ignoring the qname_id column, like this:
INSERT INTO qname (name) VALUES ('name value');
This will insert a unique value into your qname_id column.
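If you are on Oracle 12c or later, a sequence default or an identity column works directly, without a trigger; a minimal sketch:

```sql
-- 12c+: an identity column manages the sequence for you.
CREATE TABLE qname (
    qname_id NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name     VARCHAR2(30)
);

-- 12c+ also allows a sequence in a DEFAULT clause:
-- CREATE TABLE qname (
--     qname_id INTEGER DEFAULT qname_id_seq.NEXTVAL PRIMARY KEY,
--     name     VARCHAR2(30)
-- );
```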