How to have a not-null constraint on keys inside a jsonb column in Postgres
I have created a Postgres table with just one column called id, like this:
create table tablea (
    id jsonb,
    check ((id->>'test1', id->>'test2') != (null, null))
);
The caller will insert data into the table in the JSON format below:
[
  {
    "test1": "",
    "test2": "",
    "test3": ""
  },
  {
    "test1": "",
    "test2": "",
    "test3": ""
  }
]
My goal is that when a caller inserts data into the id column, the keys test1 and test2 must not be null. How can I achieve that? My table creation logic is shown above. I am trying to insert data like this:
insert into tablea(id) values
('[{"test1":null,"test2":"a","test3":""}]');
Ideally this insert statement should throw an error, but it inserts the data into the table. Can anyone help me out?
You will need to create a function that iterates through your array and validates every array element.
Something like this:
create or replace function validate_json(p_input jsonb)
  returns boolean
as
$$
  -- true only if every array element has a non-empty test1 and test2
  select not exists (select *
                     from jsonb_array_elements(p_input) as t(element)
                     where nullif(element ->> 'test1', '') is null
                        or nullif(element ->> 'test2', '') is null);
$$
language sql
stable;
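A quick sanity check of the function on its own, using the payload from the question (expected results in comments):
select validate_json('[{"test1":null,"test2":"a","test3":""}]'::jsonb);  -- false
select validate_json('[{"test1":"a","test2":"b","test3":""}]'::jsonb);   -- true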
You can't compare null with = or <>; you need IS NULL / IS NOT NULL for that. It also seems you want to treat an empty string the same way as null, which is what the nullif() calls above do.
Then you can use the function to define a check constraint:
create table tablea
(
    id jsonb,
    constraint check_json check (validate_json(id))
);
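With the constraint in place, the insert from the question should now fail with something like:
insert into tablea(id) values
('[{"test1":null,"test2":"a","test3":""}]');
-- ERROR:  new row for relation "tablea" violates check constraint "check_json"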
I have the following table:
CREATE TABLE fun (
    id uuid NOT NULL,
    tag varchar[] NOT NULL,
    CONSTRAINT fun_pkey PRIMARY KEY (id, tag)
);
CREATE UNIQUE INDEX idx_fun_id ON fun USING btree (id);
Then I inserted a row into the table:
insert into fun (id, tag)
values ('d7f17de9-c1e9-47ba-9e3d-cd1021c644d2', array['123','234']);
So currently, the value of my tag is ["123", "234"]
How can I append values to the array while ignoring any that already exist, so that only new values are added?
Currently, this is how I approach it:
update fun
set tag = tag || array['234','345']
where id = 'd7f17de9-c1e9-47ba-9e3d-cd1021c644d2'
but my tag becomes ["123", "234", "234", "345"]; the value 234 is duplicated. What I need is for tag to become ["123", "234", "345"].
There is no built-in function to only append unique elements, but it's easy to write one:
create function append_unique(p_one text[], p_two text[])
  returns text[]
as
$$
  -- union removes duplicates across (and within) both arrays
  select array(select *
               from unnest(p_one)
               union
               select *
               from unnest(p_two));
$$
language sql
immutable;
Then you can use it like this:
update fun
set tag = append_unique(tag,array['234','345'])
where id = 'd7f17de9-c1e9-47ba-9e3d-cd1021c644d2'
Note that this does not preserve the order of the items.
A function that preserves the order of the elements of the existing array and appends the elements of the second one in the order provided would be:
create or replace function append_unique(p_one text[], p_two text[])
  returns text[]
as
$$
  -- keep p_one unchanged, then append only those elements of p_two
  -- that are not already present, in their given order
  select p_one || array(select x.item
                        from unnest(p_two) with ordinality as x(item, idx)
                        where x.item <> all (p_one)
                        order by x.idx);
$$
language sql
immutable;
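Applied to the example from the question, this version produces exactly the requested result:
select append_unique(array['123','234'], array['234','345']);
-- {123,234,345}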
I have been trying to create a production ERP using C# and SQL Server.
I want to create a table where an insert should only happen when at least one of the main columns has a different value. The main columns are prod_date, order_no, mach_no, shift_no, prod_type. If all of these values are repeated, the row must not be inserted.
create table p1_order (
    id int not null,
    order_no int not null,
    prod_date date not null,
    prod_type nvarchar(5),
    shift_no int not null,
    mach_no nvarchar(5) not null,
    prod_qty float not null
)
Based on the information you provided, you should check for identical values in your application code before executing the insert query. For example (pseudocode; RowWithSameKeyValuesExists stands for whatever existence check your data layer provides):
if (RowWithSameKeyValuesExists(prod_date, order_no, mach_no, shift_no, prod_type))
{
    // error: identical values already present
}
else
{
    // your insert query
}
The best way to implement this is by creating a unique constraint on the table.
alter table p1_order
add constraint UC_Order unique (prod_date,order_no,mach_no,shift_no,prod_type);
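To see the constraint in action (sample values are made up for illustration; all five key columns repeat in the second insert, only prod_qty differs):
insert into p1_order (id, order_no, prod_date, prod_type, shift_no, mach_no, prod_qty)
values (1, 123, '2022-09-20', 't1', 1, 'M1', 10.5);

-- same five key values again: rejected with something like
-- Violation of UNIQUE KEY constraint 'UC_Order'. Cannot insert duplicate key in object 'dbo.p1_order'.
insert into p1_order (id, order_no, prod_date, prod_type, shift_no, mach_no, prod_qty)
values (2, 123, '2022-09-20', 't1', 1, 'M1', 99.9);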
If for some reason you are not able to create a unique constraint, you can write your query using NOT EXISTS:
insert into p1_order (order_no, prod_date, prod_type, <remaining columns>)
select 123, '2022-09-20 15:11:43.680', 't1', <remaining values>
where not exists
    (select 1
     from p1_order
     where order_no = 123 and prod_date = '2022-09-20 15:11:43.680' <additional condition>)
I have an array of varchar (nullable, no default value), and I want to convert it into an array of (varchar, integer) pairs.
For the integer value, just set 0 for now.
For example, ["a", "b"] becomes [("a", 0), ("b", 0)].
I already know that I can create pair type via:
CREATE TYPE keycount AS (key VARCHAR, count INTEGER);
but I have no idea how to use SET to alter the column.
Thanks for any advice!
First create the type
CREATE TYPE keycount AS (key VARCHAR, count INTEGER);
Now you need to create the cast:
CREATE FUNCTION text_to_keycount(a text)
  RETURNS keycount AS
$$
  SELECT ($1, 0)::keycount
$$ LANGUAGE sql;

CREATE CAST (text AS keycount) WITH FUNCTION text_to_keycount(text);

-- the element cast also makes the array-to-array cast work:
SELECT ARRAY['asdf','asdf']::text[]::keycount[];
-- {"(asdf,0)","(asdf,0)"}
Now you can create the table and convert the column type with a USING clause:
CREATE TABLE foo (a text[]);
INSERT INTO foo (a) VALUES (ARRAY['1','2','3']);

ALTER TABLE foo
  ALTER COLUMN a
  SET DATA TYPE keycount[]
  USING CAST (a AS keycount[]);
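A quick check of the converted column (the comment shows how Postgres renders an array of composites):
SELECT a FROM foo;
-- {"(1,0)","(2,0)","(3,0)"}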
If all the rows in the column have valid data, i.e. there aren't any rows like '{"abc", ""}', then the following will work. Any invalid data in the column has to be cleaned up first, before changing the column type.
The trick is to create a temporary pair column, populate it with data, drop the old column, and rename the temporary column back to the original.
This is needed because there is no way for PostgreSQL to figure out how to convert a varchar[] to a keycount automatically.
example:
-- setup a test table
BEGIN;
CREATE TABLE test (col VARCHAR[]);
INSERT INTO test (col) VALUES ('{"abc", "1"}'::varchar[]);
SELECT col FROM test;
-- returns {abc,1}
CREATE TYPE keycount AS (key VARCHAR, count INTEGER);
ALTER TABLE test ADD COLUMN col2 keycount;
UPDATE test SET col2 = (col[1], col[2]::int)::keycount;
ALTER TABLE test DROP COLUMN col;
ALTER TABLE test RENAME COLUMN col2 TO col;
SELECT col FROM test;
-- returns (abc,1)
COMMIT;
EDIT
The above isn't required. Although it works, it is overly complicated. It is possible to specify the type cast directly in the ALTER statement via a USING clause:
ALTER TABLE test ALTER COLUMN col TYPE keycount USING (col[1], col[2]::INTEGER)::keycount;
There's no need to stage the data, drop, and rename.
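The end state is the same as with the staged approach:
SELECT col FROM test;
-- returns (abc,1)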
I have a table called student, with id and name as fields in PostgreSQL:
Create table student (id int, name text[]);
I need to add a constraint for the name field so that it accepts only alphabetic characters. But the name field is a text array.
I tried this check constraint:
Alter table student
add constraint stud_const check (ALL(name) NOT LIKE '%[^a-zA-Z]%');
But it throws this error:
ERROR:  syntax error at or near "all"
LINE 1: ... student add constraint stud_const check (all(name) ...
How can I solve this problem? The constraint should apply to the whole array.
It is necessary to unnest the array to match it to a regular expression:
select bool_and(n ~ '^[a-zA-Z]*$')
from unnest(array['John','Mary']) a(n);
bool_and
----------
t
Since it is not possible to use a subquery in a check constraint, wrap the query in a function:
create function check_text_array_regex (
a text[], regex text
) returns boolean as $$
select bool_and (n ~ regex)
from unnest(a) s(n);
$$ language sql immutable;
and use the function in the check constraint:
create table student (
id serial,
name text[] check (check_text_array_regex (name, '^[a-zA-Z]*$'))
);
Test it:
insert into student (name) values (array['John', 'Mary']);
INSERT 0 1
insert into student (name) values (array['John', 'Mary2']);
ERROR: new row for relation "student" violates check constraint "student_name_check"
DETAIL: Failing row contains (2, {John,Mary2}).
I have this function:
select * from dbo.flsSplitString('1,2,3',',')
It returns three rows: 1, 2, and 3.
Now I have declared a table variable:
DECLARE @IDList TABLE
(
    ID varchar(200)
);
Now I have to insert the rows returned by the split function into the table variable. The function may return any number of rows. How can I do that?
Maybe something like this:
Insert Into @IDList (ID)
SELECT [value] -- use the column name returned by the split function
FROM dbo.flsSplitString('1,2,3',',')
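As an aside: on SQL Server 2016 or later, the built-in STRING_SPLIT function follows the same pattern and also returns its result in a column named value (whether it is a drop-in replacement for dbo.flsSplitString is an assumption; the custom function may behave differently):
DECLARE @IDList TABLE
(
    ID varchar(200)
);

INSERT INTO @IDList (ID)
SELECT [value]
FROM STRING_SPLIT('1,2,3', ',');

SELECT ID FROM @IDList; -- returns three rows: 1, 2, 3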