Let's say, for example, I have the following table:
CREATE TABLE temp
(
    id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    arr bigint[] NOT NULL
);
And insert rows into it:
INSERT INTO temp (arr) VALUES
(ARRAY[2, 3]),
(ARRAY[2, 3, 4]),
(ARRAY[4]),
(ARRAY[1, 2, 3]);
So now I have in the table:
 id |   arr
----+---------
  1 | {2,3}
  2 | {2,3,4}
  3 | {4}
  4 | {1,2,3}
I want a query that returns only the arrays that are unique, in the sense that they are not contained by any other array.
So the result will be rows 2 & 4 (the arr column).
This can be done using a NOT EXISTS condition:
select t1.*
from temp t1
where not exists (select *
                  from temp t2
                  where t1.id <> t2.id
                    and t2.arr @> t1.arr);
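Equivalently, the containment test can be written from t1's side with the "is contained by" operator <@; this is just a sketch of the same query from the other direction, not a different technique:

select t1.*
from temp t1
where not exists (select *
                  from temp t2
                  where t1.id <> t2.id
                    and t1.arr <@ t2.arr); -- t1.arr is contained by t2.arr

One caveat with either form: if two rows hold identical arrays, each contains the other, so both rows are filtered out.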
I have one table (Table1) with several columns used in combination: Name, TestName, DevName, Dept. When each of these 4 columns has a value, the record is inserted into Table2. I need to confirm that all of the records with existing values in each of these fields within Table1 were correctly copied into Table2.
I have created a query for it:
SELECT DISTINCT wr.Name, wr.TestName, wr.DEVName, wr.Dept
FROM table2 wr
WHERE NOT EXISTS (
    SELECT NULL
    FROM TABLE1 ym
    WHERE ym.Name = wr.Name
      AND ym.TestName = wr.TestName
      AND ym.DEVName = wr.DEVName
      AND ym.Dept = wr.Dept
)
My counts are not adding up, so I believe that this is incorrect. Can you advise me on the best way to write this query for my needs?
You can use the EXCEPT set operator for this one if the table definitions are identical.
SELECT DISTINCT ym.Name, ym.TestName, ym.DEVName, ym.Dept
FROM table1 ym
EXCEPT
SELECT DISTINCT wr.Name, wr.TestName, wr.DEVName, wr.Dept
FROM table2 wr
This returns distinct rows from the first table where there is not a match in the second table. Read more about EXCEPT and INTERSECT here: https://learn.microsoft.com/en-us/sql/t-sql/language-elements/set-operators-except-and-intersect-transact-sql?view=sql-server-2017
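One behavioral difference worth knowing when choosing between EXCEPT and NOT EXISTS: EXCEPT compares rows by distinctness, so two NULLs count as equal, while a NOT EXISTS written with = treats NULL = NULL as UNKNOWN. A minimal sketch with throwaway table variables (not your real tables):

DECLARE @t1 TABLE (Name varchar(10));
DECLARE @t2 TABLE (Name varchar(10));
INSERT INTO @t1 VALUES ('a'), (NULL);
INSERT INTO @t2 VALUES ('a'), (NULL);

-- EXCEPT treats the two NULLs as equal: returns no rows
SELECT Name FROM @t1
EXCEPT
SELECT Name FROM @t2;

-- NOT EXISTS with = does not: returns the NULL row
SELECT a.Name
FROM @t1 a
WHERE NOT EXISTS (SELECT 1 FROM @t2 b WHERE b.Name = a.Name);

So if any of the four columns is nullable, the two approaches can produce different counts.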
Your query is close, but it has the tables reversed: it returns Table2 rows that have no match in Table1. To find the Table1 rows that were never copied into Table2, check the other direction:
SELECT ym.Name, ym.TestName, ym.DEVName, ym.Dept
FROM Table1 ym
WHERE NOT EXISTS (
    SELECT 1
    FROM table2 wr
    WHERE ym.Name = wr.Name
      AND ym.TestName = wr.TestName
      AND ym.DEVName = wr.DEVName
      AND ym.Dept = wr.Dept
)
If the structure of both tables is the same, EXCEPT is probably simpler.
IF OBJECT_ID(N'tempdb..#table1') IS NOT NULL DROP TABLE #table1
IF OBJECT_ID(N'tempdb..#table2') IS NOT NULL DROP TABLE #table2

CREATE TABLE #table1 (id int, value varchar(10))
CREATE TABLE #table2 (id int)

INSERT INTO #table1 (id, value) VALUES (1, 'value1'), (2, 'value2'), (3, 'value3')
--test here. Comment next line
INSERT INTO #table2 (id) VALUES (1) --Comment/Uncomment

SELECT * FROM #table1
SELECT * FROM #table2

-- rows that matched, or every row when #table2 is empty
SELECT #table1.*
FROM #table1
LEFT JOIN #table2
    ON #table1.id = #table2.id
WHERE (#table2.id IS NOT NULL OR NOT EXISTS (SELECT * FROM #table2))
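With the insert into #table2 left in, the final query should return only the matched row:

id          value
----------- ----------
1           value1

Comment that insert out and all three #table1 rows should come back, because the NOT EXISTS branch then applies.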
I need to prove the existence of the values from table1 in an MS SQL DB.
The table1 for proving has the following values:
MANDT  DOKNR       LFDNR
-----  ----------  -----
1      0020999956
1      0020999958
1      0020999960  2
1      0020999960  3
1      0020999960
1      0020999962
As you can see, there are single rows, and then there are special cases where values are duplicated with a running number (meaning the value exists three times in the source), so all 2nd/3rd/further entries get an increasing number in LFDNR.
The target table2 (where I need to prove the amount/existence) has two columns with matching data:
DataID    Facet
--------  ----------
42101976  0020999956
42100240  0020999958
65688960  0020999960
65694287  0020999960
65697507  0020999960
42113401  0020999962
I would like to insert the DataID from the 2nd table into the first table as a 'proof', to see whether anything is missing from table2, and keep table1 as the record of that proof.
I tried to use joins, and then thought about a loop script running down all the rows, but my knowledge stops at creating scripts for this.
Edit:
Output should be then:
MANDT  DOKNR       LFDNR  DataID
-----  ----------  -----  --------
1      0020999956         42101976
1      0020999958         42100240
1      0020999960  2      65688960
1      0020999960  3      65694287
1      0020999960         65697507
1      0020999962         42113401
But it could be, for example, that a row in table 2 is missing, so a DataID would be empty then (and show that one is missing).
Any help appreciated!
You can use ROW_NUMBER to calculate [LFDNR] for each row in the second table, and then update the first table. If [DataID] is NULL after the update, we have a mismatch.
CREATE TABLE #table1
(
[MANDT] INT
,[DOKNR] VARCHAR(32)
,[LFDNR] INT
,[DataID] INT
);
CREATE TABLE #table2
(
[DataID] INT
,[Facet] VARCHAR(32)
);
INSERT INTO #table1 ([MANDT], [DOKNR], [LFDNR])
VALUES (1, '0020999956', NULL)
,(1, '0020999958', NULL)
,(1, '0020999960', 2)
,(1, '0020999960', 3)
,(1, '0020999960', NULL)
,(1, '0020999962', NULL);
INSERT INTO #table2 ([DataID], [Facet])
VALUES (42101976, '0020999956')
,(42100240, '0020999958')
,(65688960, '0020999960')
,(65694287, '0020999960')
,(65697507, '0020999960')
,(42113401, '0020999962');
WITH DataSource ([DataID], [DOKNR], [LFDNR]) AS
(
SELECT *
,ROW_NUMBER() OVER (PARTITION BY [Facet] ORDER BY [DataID])
FROM #table2
)
UPDATE T
SET [DataID] = DS.[DataID]
FROM #table1 T
INNER JOIN DataSource DS
ON T.[DOKNR] = DS.[DOKNR]
AND ISNULL(T.[LFDNR], 1) = DS.[LFDNR];
SELECT *
FROM #table1;
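With the sample data, the final SELECT should return something like this (note that the NULL-LFDNR row pairs with ROW_NUMBER 1, i.e. the lowest DataID per Facet, which is a slightly different pairing than the output sketched in the question):

MANDT  DOKNR       LFDNR  DataID
-----  ----------  -----  --------
1      0020999956  NULL   42101976
1      0020999958  NULL   42100240
1      0020999960  2      65694287
1      0020999960  3      65697507
1      0020999960  NULL   65688960
1      0020999962  NULL   42113401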
I have a simple problem. How can I add a unique constraint for a table that treats the pair of values as a set, without tying each value to its column? For example, I have this table
ID_A ID_B
----------
1 2
... ...
In that example, I have the record (1,2). For me, (1,2) = (2,1), so I don't want to allow my database to store both. I know I can accomplish this using triggers or checks and functions, but I was wondering if there is any instruction like
CREATE UNIQUE CONSTRAINT AS A SET_CONSTRAINT
You could write a view like this:
select 1 as Dummy
from T t1
join T t2 on t1.ID_A = t2.ID_B AND t1.ID_B = t2.ID_A --join to corresponding row
cross join TwoRows
And create a unique index on Dummy. TwoRows is a table that contains two rows with arbitrary contents; it exists to make the unique index on Dummy fail as soon as the view produces even one row. Any row in this view indicates a uniqueness violation.
You can do this using an INSTEAD OF INSERT trigger.
Demo
Table Schema
CREATE TABLE te (ID_A INT, ID_B INT)
INSERT INTO te VALUES (1, 2)
Trigger
GO
CREATE TRIGGER trg_name
ON te
INSTEAD OF INSERT
AS
BEGIN
    IF EXISTS (SELECT 1
               FROM inserted a
               WHERE EXISTS (SELECT 1
                             FROM te b
                             WHERE (a.id_a = b.id_b AND a.id_b = b.id_a)
                                OR (a.id_a = b.id_a AND a.id_b = b.id_b)))
    BEGIN
        PRINT 'duplicate record'
        ROLLBACK
    END
    ELSE
        INSERT INTO te
        SELECT id_a, id_b
        FROM inserted
END
SELECT * FROM te
Insert Script
INSERT INTO te VALUES (2,1) -- Duplicate
INSERT INTO te VALUES (1,2) --Duplicate
INSERT INTO te VALUES (3,2) --Will work
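Running the three inserts above should print 'duplicate record' twice (each duplicate statement is rolled back), leaving te with the rows (1,2) and (3,2).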
Consider a table temp1
create temporary table temp1 (
id integer,
days integer[]
);
insert into temp1 values (1, '{}');
And another table temp2
create temporary table temp2(
id integer
);
insert into temp2 values (2);
insert into temp2 values (5);
insert into temp2 values (6);
I want to use temp2's id values as indices into the days array of temp1, i.e. I want to update days[index] = 99 where index is an id value from temp2. I want to accomplish this in a single query or, if that's not possible, in the most optimal way.
Here is what I am trying; it updates only one index, not all of them. Is it possible to update multiple indices of the array? I understand it can be done using a loop, but I was hoping a more optimized solution is possible.
update temp1
set days[temp2.id] = 99
from temp2;
select * from temp1;
id | days
----+------------
1 | [2:2]={99}
(1 row)
TL;DR: Don't use arrays for this. Really. Just because you can doesn't mean you should.
PostgreSQL's arrays are really not designed for in-place modification; they're data values, not dynamic data structures. I don't think what you're trying to do makes much sense, and suggest you re-evaluate your schema now before you dig yourself into a deeper hole.
You can't just construct a single null-padded array value from temp2 and do a slice-update because that'll overwrite values in days with nulls. There is no "update only non-null array elements" operator.
So we have to do this by decomposing the array into a set, modifying it, recomposing it into an array.
To solve that, what I'm doing is:
Taking all rows from temp2 and adding the associated value, to produce (index, value) pairs
Doing a generate_series over the range from 1 to the highest index on temp2 and doing a left join on it, so there's one row for each index position
Left joining all that on the unnested original array and coalescing away nulls
... then doing an array_agg ordered by index to reconstruct the array.
With a more realistic/useful starting array state:
create temporary table temp1 (
id integer primary key,
days integer[]
);
insert into temp1 values (1, '{42,42,42}');
Development step 1: index/value pairs
First associate values with each index:
select id, 99 from temp2;
Development step 2: add nulls for missing indexes
then join on generate_series to add entries for missing indexes:
SELECT gs.i, temp2values.newval
FROM (
SELECT id AS newvalindex, 99 as newval FROM temp2
) temp2values
RIGHT OUTER JOIN (
SELECT i FROM generate_series(1, (select max(id) from temp2)) i
) gs
ON (temp2values.newvalindex = gs.i);
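With the sample temp2 rows (2, 5, 6), this intermediate result should look something like:

 i | newval
---+--------
 1 |
 2 |     99
 3 |
 4 |
 5 |     99
 6 |     99
(6 rows)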
Development step 3: merge the original array values in
then join that on the unnested original array. You can use UNNEST ... WITH ORDINALITY for this in PostgreSQL 9.4, but I'm guessing you're not running that yet so I'll show the old approach with row_number. Note the use of a full outer join and the change to the outer bound of the generate_series to handle the case where the original values array is longer than the highest index in the new values list:
SELECT gs.i, coalesce(temp2values.newval, originals.val) AS val
FROM (
SELECT id AS newvalindex, 99 as newval FROM temp2
) temp2values
RIGHT OUTER JOIN (
SELECT i FROM generate_series(1, (select greatest(max(temp2.id), array_length(days,1)) from temp2, temp1 group by temp1.id)) i
) gs
ON (temp2values.newvalindex = gs.i)
FULL OUTER JOIN (
SELECT row_number() OVER () AS index, val
FROM temp1, LATERAL unnest(days) val
WHERE temp1.id = 1
) originals
ON (originals.index = gs.i)
ORDER BY gs.i;
This produces something like:
i | val
---+----------
1 | 42
2 | 99
3 | 42
4 |
5 | 99
6 | 99
(6 rows)
Development step 4: Produce the desired new array value
so now we just need to turn it back into an array, dropping the trailing ORDER BY and instead aggregating with array_agg ordered by gs.i:
SELECT array_agg(coalesce(temp2values.newval, originals.val) ORDER BY gs.i)
FROM (
SELECT id AS newvalindex, 99 as newval FROM temp2
) temp2values
RIGHT OUTER JOIN (
SELECT i FROM generate_series(1, (select greatest(max(temp2.id), array_length(days,1)) from temp2, temp1 group by temp1.id)) i
) gs
ON (temp2values.newvalindex = gs.i)
FULL OUTER JOIN (
SELECT row_number() OVER () AS index, val
FROM temp1, LATERAL unnest(days) val
WHERE temp1.id = 1
) originals
ON (originals.index = gs.i);
with a result like:
array_agg
-----------------------
{42,99,42,NULL,99,99}
(1 row)
Final query: Use it in an UPDATE
UPDATE temp1
SET days = newdays
FROM (
SELECT array_agg(coalesce(temp2values.newval, originals.val) ORDER BY gs.i)
FROM (
SELECT id AS newvalindex, 99 as newval FROM temp2
) temp2values
RIGHT OUTER JOIN (
SELECT i FROM generate_series(1, (select greatest(max(temp2.id), array_length(days,1)) from temp2, temp1 group by temp1.id)) i
) gs
ON (temp2values.newvalindex = gs.i)
FULL OUTER JOIN (
SELECT row_number() OVER () AS index, val
FROM temp1, LATERAL unnest(days) val
WHERE temp1.id = 1
) originals
ON (originals.index = gs.i)
) calc_new_days(newdays)
WHERE temp1.id = 1;
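Checking the result:

select * from temp1;

 id |         days
----+-----------------------
  1 | {42,99,42,NULL,99,99}
(1 row)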
Note, however, that this only works for a single entry in temp1.id, and I've specified temp1.id twice in the query: once inside the query that generates the new array value, and once in the update predicate.
To avoid that, you'd need a key in temp2 that references temp1.id and you'd need to make some changes to allow the generated padding rows to have the correct id value.
I hope this convinces you that you should probably not be using arrays for what you're doing, because it's horrible.
I have 2 tables
Table A (Column A1, Column A2) and
Table B (Column B1, Column B2)
Column A1 is not unique and is not the PK, but I want to put a constraint on column B1 so that it cannot have values other than those found in column A1. Can it be done?
It cannot be done with a FK. Instead you can use a check constraint that calls a function to see whether the B value exists in A.
Example (the function must exist before the constraint can reference it):

create function dbo.fn_ValidateBValue(@B1 int)
returns bit as
begin
    declare @ValueExists bit
    select @ValueExists = 0
    if exists (select 1 from TableA where A1 = @B1)
        select @ValueExists = 1
    return @ValueExists
end
go
alter table TableB add constraint CK_BValueCheck check (dbo.fn_ValidateBValue(B1) = 1)
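A quick smoke test, assuming TableB has an int column B1, and TableA currently holds a row with A1 = 10 but none with A1 = 99:

insert into TableB (B1) values (10) -- passes the check
insert into TableB (B1) values (99) -- fails with a CK_BValueCheck violation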
You cannot have a dynamic constraint to limit the values in TableB. Instead you can either put a trigger on TableB, or limit all inserts and updates on TableB to values selected from column A only:
Insert into TableB
Select Col from Table where Col in (Select ColumnA from TableA)
or
Update TableB
Set ColumnB = <somevalue>
where <somevalue> in (Select ColumnA from TableA)
Also, I would add that it's a very bad design practice and cannot guarantee accuracy all the time.
It's the long way around, but you could add an identity column to A and declare the PK as (iden, A1).
In B, iden would just be an integer (not an identity).
You asked for any other ways.
You could create a 3rd table that both reference via FK, but that does not ensure B1 is in A.
Here's the design I'd go with, if I'm free to create tables and triggers in the database, and still want TableA to allow multiple A1 values. I'd introduce a new table:
create table TableA (ID int not null,A1 int not null)
go
create table UniqueAs (
A1 int not null primary key,
Cnt int not null
)
go
create trigger T_TableA_MaintainAs
on TableA
after insert, update, delete
as
set nocount on
;With UniqueCounts as (
select A1,COUNT(*) as Cnt from inserted group by A1
union all
select A1,COUNT(*) * -1 from deleted group by A1
), CombinedCounts as (
select A1,SUM(Cnt) as Cnt from UniqueCounts group by A1
)
merge into UniqueAs a
using CombinedCounts cc
on
a.A1 = cc.A1
when matched and a.Cnt = -cc.Cnt then delete
when matched then update set Cnt = a.Cnt + cc.Cnt
when not matched then insert (A1,Cnt) values (cc.A1,cc.Cnt);
And test it out:
insert into TableA (ID,A1) values (1,1),(2,1),(3,2)
go
update TableA set A1 = 2 where ID = 1
go
delete from TableA where ID = 2
go
select * from UniqueAs
Result:
A1 Cnt
----------- -----------
2 2
Now we can use a genuine foreign key from TableB to UniqueAs. This should all be relatively efficient - the usual FK mechanisms are available between TableB and UniqueAs, and the maintenance of this table is always by PK reference - and we don't have to needlessly rescan all of TableA - we just use the trigger pseudo-tables.