Is it possible to UNION select from 2 tables? - pact-lang

Is it possible to do something like a SQL UNION select with Pact, or to select a common row from 2 separate tables and return a combined result?
For example, we have 2 tables, and we want to select the row ("Bob") from both tables (ages, favorite-food) and return a single object composed of data (age, food) from both tables.
;Table 1 schema
(defschema ages-schema
  age:decimal)
;Table 1
(deftable ages:{ages-schema})
;Table 2 schema
(defschema food-schema
  food:string)
;Table 2
(deftable favorite-food:{food-schema})
In such a case, we would like a single function that returns an object containing Bob's age and his favorite food.
Is such a query possible in Pact?

Thank you kitty_kad! I originally meant to ask about a use case that takes a list as input. Here is what I came up with, in case anyone else needs a use case with a list instead of just a single key:
(defschema user-schema
  account:string
  petId:string
  balance:decimal)
(deftable users:{user-schema})

(defschema pets-schema
  petId:string
  owner:string
  petType:string)
(deftable pets:{pets-schema})

(defun get-user-pet-details ( account:string )
  (let ((x (select users ['petId]
             (and? (where 'account (= account))
                   (where 'balance (< 0.0))))))
    (map (get-pet-details) x)))

(defun get-pet-details ( x:object )
  (bind x { "petId" := id }
    (with-read pets id
      { "petId" := pid
      , "owner" := powner
      , "petType" := ptype
      }
      { "ID": pid
      , "OWNER": powner
      , "TYPE": ptype
      })))

You can use let and read together:
(let ( (age (at "age" (read ages "Bob" ["age"])))
       (food (at "food" (read favorite-food "Bob" ["food"]))) )
  { "age": age, "food": food })
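The lookup-and-merge shape of that answer (one read per table, keyed by the same row key, merged into one object) can be sketched outside Pact; a minimal Python illustration over two dicts with hypothetical data:

```python
# Two "tables" keyed by name, mirroring the ages and favorite-food tables
ages = {"Bob": {"age": 42.0}}
favorite_food = {"Bob": {"food": "pizza"}}

def get_details(key):
    # One read per table, merged into a single object
    return {"age": ages[key]["age"], "food": favorite_food[key]["food"]}

print(get_details("Bob"))  # {'age': 42.0, 'food': 'pizza'}
```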

Related

PostgreSQL: how to select a value from multiple JSONs inside an array on a jsonb column

I have this table
create table <table_name>(attr jsonb)
And this is the data inside
{
  "rules": [
    {
      "id": "foo",
      "name": "test_01",
      ...
    },
    {
      "id": "bar",
      "name": "test_02",
      ...
    }
  ]
}
What I want is to select both names. What I have accomplished so far is this:
select attr -> 'rules' -> 0 -> 'name' from <table_name>;
which returns test_01
select attr -> 'rules' -> 1 -> 'name' from <table_name>;
which returns test_02
I want to return something like this:
test_01,test_02
or if it's possible to return them in multiple lines, that would be even better
This is sample data to illustrate my problem; for reasons beyond my control, it is not possible to store each rule on a distinct line.
You can use jsonb_array_length together with generate_series to get each name, then use string_agg to aggregate the list of names. No plpgsql needed, and it's a single statement (see demo):
with jl(counter) as ( select jsonb_array_length(attr->'rules') from table_name )
select string_agg(name,' ') "Rule Names"
from (select attr->'rules'-> n ->> 'name' name
from table_name
cross join ( select generate_series(0,counter-1) from jl ) gs(n)
) rn;
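For comparison, the per-index extraction that the SQL generalizes (walking every element of rules and pulling out name, then aggregating) looks like this in application code; a Python sketch over the sample document, not a replacement for the SQL:

```python
import json

# Hypothetical stand-in for the jsonb column's content
attr = json.loads("""
{
  "rules": [
    {"id": "foo", "name": "test_01"},
    {"id": "bar", "name": "test_02"}
  ]
}
""")

# Equivalent of attr -> 'rules' -> n ->> 'name' for every index n,
# followed by the string_agg step
names = [rule["name"] for rule in attr["rules"]]
print(",".join(names))  # test_01,test_02
```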
If anyone else gets stuck in a situation like this, this is the solution that I found:
create or replace function func_get_name() returns text
language plpgsql
as $$
declare
    len integer;
    names character varying(255);
    res character varying(255);
begin
    select jsonb_array_length(attr->'rules') into len from <table_name>;
    res := '';
    for counter in 0..len - 1 loop
        select attr->'rules'-> counter ->> 'name'
        into names
        from <table_name>;
        if names is not null then
            res := res || ' ' || names;
        end if;
    end loop;
    return res;
end;
$$;

select func_get_name();
Is it a solution: yes. Is it a good solution: I have no idea.

PostgreSQL JSON building an array without null values

The following query
SELECT jsonb_build_array(jsonb_build_object('use', 'Home'),
CASE WHEN 1 = 2 THEN jsonb_build_object('use', 'Work')
END)
produces
[{"use":"Home"},null]
When I actually want
[{"use":"Home"}]
How do I go about doing this? json_strip_nulls() does not work for me.
By using a PostgreSQL array like that:
SELECT array_to_json(array_remove(ARRAY[jsonb_build_object('use', 'Home'),
CASE WHEN 1 = 2 THEN jsonb_build_object('use', 'Work') END], null))
which does produce:
[{"use": "Home"}]
while, to be sure:
SELECT array_to_json(array_remove(ARRAY[jsonb_build_object('use', 'Home'),
CASE WHEN 1 = 2 THEN jsonb_build_object('use', 'Work') END,
jsonb_build_object('real_use', 'NotHome')], null))
does produce:
[{"use": "Home"},{"real_use": "NotHome"}]
Creating a custom function seems to be the simplest way.
create or replace function jsonb_build_array_without_nulls(variadic anyarray)
returns jsonb language sql immutable as $$
select jsonb_agg(elem)
from unnest($1) as elem
where elem is not null
$$;
select
jsonb_build_array_without_nulls(
jsonb_build_object('use', 'home'),
case when 1 = 2 then jsonb_build_object('use', 'work') end
)
jsonb_build_array_without_nulls
---------------------------------
[{"use": "home"}]
(1 row)
I'm assuming this query is generated dynamically, somehow. If you're in control of the SQL generation, you can also use ARRAY_AGG(...) FILTER (...) instead, which, depending on your real-world query, might be more convenient than using all the different array conversion functions suggested by Patrick.
SELECT (
SELECT json_agg(v) FILTER (WHERE v IS NOT NULL)
FROM (
VALUES
(jsonb_build_object('use', 'Home')),
(CASE WHEN 1 = 2 THEN jsonb_build_object('use', 'Work') END)
) t (v)
)
Or also:
SELECT (
SELECT json_agg(v)
FROM (
VALUES
(jsonb_build_object('use', 'Home')),
(CASE WHEN 1 = 2 THEN jsonb_build_object('use', 'Work') END)
) t (v)
WHERE v IS NOT NULL
)
One other way that this can be handled is the following:
SELECT jsonb_build_array(
jsonb_build_object('use', 'Home'),
CASE
WHEN 1 = 2 THEN jsonb_build_object('use', 'Work')
ELSE '"null"'
END
) - 'null'
(Unfortunately it's not really possible to do much with null by itself in postgres -- or most other DBs)
In the case above, '"null"' could be replaced with just about any unique string that would not be mistaken for live data in the array. I would not use numbers since - 0 would actually try to remove the first item from the array, rather than a number within the array. But you could probably use '"0"' and remove using something like - '0' if you wanted.
For those not using CASE, a COALESCE could be used to convert nulls into the desired string (alas, there is no NVL, IFNULL, or ISNULL in Postgres, but at least COALESCE is portable).
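The idea shared by all of these answers (materialize the candidate elements, drop the nulls, then aggregate what remains into an array) can be sketched outside the database; a minimal Python illustration of the pattern, not PostgreSQL-specific:

```python
import json

def build_array_without_nulls(*elems):
    # Same idea as the SQL approaches: keep only non-null elements,
    # then aggregate the remainder into an array.
    return [e for e in elems if e is not None]

home = {"use": "Home"}
work = {"use": "Work"} if 1 == 2 else None  # mirrors CASE WHEN 1 = 2 ... END
print(json.dumps(build_array_without_nulls(home, work)))  # [{"use": "Home"}]
```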

Listagg for large data with values enclosed in quotes

I would like to get all the type names of a user, separated by commas and enclosed in single quotes. The problem I have is that the &apos; entity is displayed in the output instead of '.
Trial 1
SELECT LISTAGG(TYPE_NAME, ''',''') WITHIN GROUP (ORDER BY TYPE_NAME)
FROM ALL_TYPES
WHERE OWNER = 'USER1';
ORA-01489: result of string concatenation is too long
01489. 00000 - "result of string concatenation is too long"
*Cause:  String concatenation result is more than the maximum size.
*Action: Make sure that the result is less than the maximum size.
Trial 2
SELECT '''' || RTRIM(XMLAGG(XMLELEMENT(E,TYPE_NAME,q'$','$' ).EXTRACT('//text()')
ORDER BY TYPE_NAME).GetClobVal(),q'$','$') AS LIST
FROM ALL_TYPES
WHERE OWNER = 'USER1';
&apos;TYPE1&apos;,&apos;TYPE2&apos;, ............... ,&apos;TYPE3&apos;,&apos;
Trial 3
SELECT
dbms_xmlgen.CONVERT(XMLAGG(XMLELEMENT(E,TYPE_NAME,''',''').EXTRACT('//text()')
ORDER BY TYPE_NAME).GetClobVal())
AS LIST
FROM ALL_TYPES
WHERE OWNER = 'USER1';
TYPE1&amp;apos;,&amp;apos;TYPE2&amp;apos;, ......... ,&amp;apos;TYPE3&amp;apos;,&amp;apos;
I don't want to call the replace function and then take a substring, as follows:
With tbla as (
SELECT REPLACE('''' || RTRIM(XMLAGG(XMLELEMENT(E,TYPE_NAME,q'$','$' ).EXTRACT('//text()')
ORDER BY TYPE_NAME).GetClobVal(),q'$','$'),'&apos;',''') AS LIST
FROM ALL_TYPES
WHERE OWNER = 'USER1')
select SUBSTR(list, 1, LENGTH(list) - 2)
from tbla;
Is there any other way ?
Use dbms_xmlgen.convert(col, 1) to prevent escaping.
According to the official docs, the second parameter, flag, is:
flag
The flag setting; ENTITY_ENCODE (default) for encode, and
ENTITY_DECODE for decode.
ENTITY_ENCODE - 0 (default)
ENTITY_DECODE - 1
Try this:
select
''''||substr(s, 1, length(s) - 2) list
from (
select
dbms_xmlgen.convert(xmlagg(xmlelement(e,type_name,''',''')
order by type_name).extract('//text()').getclobval(), 1) s
from all_types
where owner = 'USER1'
);
Tested the similar code below with 100000 rows:
with t (s) as (
select level
from dual
connect by level < 100000
)
select
''''||substr(s, 1, length(s) - 2)
from (
select
dbms_xmlgen.convert(xmlagg(XMLELEMENT(E,s,''',''') order by s desc).extract('//text()').getClobVal(), 1) s
from t);

What is the most efficient way to check for path non-existence in a non-selective query?

I have a graph model that contains three types of vertices (User, Group, Document) and two types of edges (member_of, permissions). The relationships can be expressed as:
User,Group --- member_of ---> Group (depth can be arbitrary)
Group --- permissions ---> Document (depth is 1)
I'm working to write a query that would answer "What are all of the users that have no permissions on any document?". This is a very non-selective query, as I'm not specifying an id for the User class.
I've come up with this solution:
SELECT id, name FROM User
LET $p = (
SELECT expand(outE('permissions')) FROM (
TRAVERSE out('member_of') FROM $parent.$current
)
)
WHERE $p.size() = 0
This solution appears to work, but is taking between 12-15 seconds to execute. Currently in my graph there are 10,000 Users, Groups and Documents each. There are ~10,000 permissions and ~50,000 member_of.
What is the most efficient way to check for path non-existence? Is there any way to improve the performance of my existing query or am I taking the wrong approach?
There are a few ways to improve your query. First, it isn't necessary to expand the permissions edges; you can simply check the number of edges stored on the record. We can also limit this check so that it stops at the first group with permissions edges, rather than checking them all (credit to Luigi D for giving me this idea). Thus the query becomes as follows.
SELECT * FROM User
LET $p = (
SELECT FROM (
TRAVERSE out('Member_Of') FROM $parent.$current
) WHERE out('Permissions').size() > 0 LIMIT 1
)
WHERE $p.size() = 0
It's hard for me to check any query improvements without a sizeable dataset, but there may be a minute improvement by using the more explicit out_Member_Of and out_Permissions properties, rather than the out(field) functions.
There might be another opportunity to slightly improve the query by 'removing' the User record from the traverse results, thus reducing the number of records checked by the WHERE clause. This could be done via:
SELECT * FROM User
LET $p = (
SELECT FROM (
TRAVERSE out('Member_Of') FROM (SELECT out('Member_Of') FROM $parent.$parent.$current)
) WHERE out('Permissions').size() > 0 LIMIT 1
)
WHERE $p.size() = 0
The previous query can also be rearranged, although I suspect this one will be slower due to it checking all of the traversed results, rather than stopping at the first. It's just another option for you to try.
SELECT * FROM User
LET $p = (TRAVERSE out('Member_Of') FROM (SELECT out('Member_Of') FROM $parent.$current))
WHERE $p.out('Permissions').size() = 0
Now I'm going to diverge from that query. Perhaps it will be quicker to pre-compute whether a group has access to docs, and then check each user's groups against the precomputed ones. This may save a lot of repetitive traversal.
I think the best way is to get all the Groups without docs. This way all groups with docs can be eliminated before traversing their other groups.
SELECT * FROM (SELECT FROM Group WHERE out('Permissions').size() = 0)
LET $p = (
SELECT FROM (
TRAVERSE out('Member_Of') FROM $parent.$current
) WHERE out('Permissions').size() > 0 LIMIT 1
)
WHERE $p.size() = 0
Perhaps creating and using an index will make the previous query even more performant, although the process currently seems a bit janky. Before you can create an index on out_Permissions, you need to create the property with create property Group.out_Permissions LINKBAG, and then you can create the index with CREATE INDEX hasDocument ON Groups (out_Permissions, #rid) notunique METADATA {ignoreNullValues: false} (creating the index this way seems strange, but it was the only way I could get it to work, hence my janky comment). You can then query the index with select expand(rid) from index:hasDocument where key = null, which will return all the Groups without permission edges, and that would replace SELECT FROM Group WHERE out('Permissions').size() = 0 in the previous query.
So here is the query that gets the groups without docs and checks the users against them. It correctly returns users without groups too.
SELECT expand($users)
LET $groups_without_docs = (
SELECT FROM (SELECT FROM Group WHERE out('Permissions').size() = 0)
LET $p = (
SELECT FROM (
TRAVERSE out('Member_Of') FROM $parent.$current
) WHERE out('Permissions').size() > 0 LIMIT 1
)
WHERE $p.size() = 0
),
$users = (
SELECT FROM User
LET $groups = (SELECT expand(out('Member_Of')) FROM $current)
WHERE $groups containsall (#rid in $parent.$groups_without_docs)
)
Note I think $users = (SELECT FROM User WHERE out('Member_Of') containsall (#rid in $parent.$groups_without_docs)) should work, but it doesn't. I think this may be related to a bug I've previously posted, see https://github.com/orientechnologies/orientdb/issues/4692.
I am very interested to know if the various queries above improve your query, so please comment back.
As you said, this is a very non-selective query, so it's hard to optimize.
Have you tried to add a LIMIT to the inner query?
SELECT id, name FROM User
LET $p = (
SELECT expand(outE('permissions')) FROM (
TRAVERSE out('member_of') FROM $parent.$current
) LIMIT 1
)
WHERE $p.size() = 0
or even
SELECT id, name FROM User
LET $p = (
SELECT sum(outE('permissions').size()) as s FROM (
TRAVERSE out('member_of') FROM $parent.$current
)
)
WHERE $p[0].s = 0

Removing the repeating elements from a row in a sqlite table

Please let me know if there is a query with which I can remove the repeating entries in a row.
For example, I have a table which has a name with 9 telephone numbers:
Name Tel0 Tel1 Tel2 Tel3 Tel4 Tel5 Tel6 Tel7 Tel8
John 1    2    2    2    3    3    4    5    1
The final result should be as shown below:
Name Tel0 Tel1 Tel2 Tel3 Tel4 Tel5 Tel6 Tel7 Tel8
John 1    2    3    4    5
regards
Maddy
I fear that it will be more complicated to keep this format than to split the table in two as I suggested. If you insist on keeping the current schema, then I would suggest that you query the row, organise the fields in application code, and then perform an update on the database.
You could also try using the SQL UNION operator to give you a list of the numbers; a UNION by default removes all duplicate rows:
SELECT Name, Tel FROM
(SELECT Name, Tel0 AS Tel FROM Person UNION
SELECT Name, Tel1 FROM Person UNION
SELECT Name, Tel2 FROM Person) ORDER BY Name ;
Which should give you a result set like this:
John|1
John|2
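That deduplicating behaviour of UNION is easy to verify; a minimal runnable sketch with Python's built-in sqlite3 module, using the hypothetical Person table and only the first three Tel columns (the ORDER BY is extended with Tel for a deterministic check):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Person (Name TEXT, Tel0, Tel1, Tel2)")
conn.execute("INSERT INTO Person VALUES ('John', 1, 2, 2)")

# UNION (as opposed to UNION ALL) removes duplicate rows,
# so the repeated '2' collapses into a single row.
rows = conn.execute(
    "SELECT Name, Tel FROM "
    "(SELECT Name, Tel0 AS Tel FROM Person UNION "
    " SELECT Name, Tel1 FROM Person UNION "
    " SELECT Name, Tel2 FROM Person) ORDER BY Name, Tel"
).fetchall()
print(rows)  # [('John', 1), ('John', 2)]
```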
You will then have to step through the result set, saving each number into a separate variable (skipping those variables that do not exist) until the "Name" field changes.
Tel1 := Null; Tel2 := Null;
Name := ResultSet['Name'];
Tel0 := ResultSet['Tel'];
ResultSet.Next();
if (Name == ResultSet['Name']) {
    Tel1 := ResultSet['Tel'];
} else {
    // UPDATE here.
    StartAgain;
}
ResultSet.Next();
if (Name == ResultSet['Name']) {
    Tel2 := ResultSet['Tel'];
} else {
    // UPDATE here.
    StartAgain;
}
I am not recommending you do this, it is very bad use of a relational database but once implemented in a real language and debugged that should work.
