Filter Condition in Business Objects (BO)

I have a problem with a filter condition in BO.
Imagine that I have this table:
ID | DESC
---+--------
 0 | None
 1 | Company
 2 | All
In BO I have a filter that asks "Where do you want to find the objects?" with two options:
"Company" or "All".
If I choose "All" then I should get all the rows with ID 0, 1, 2, and if I choose "Company" only the row with ID 1.
So I did something like this:
TABLE_NAME.ID <= (CASE WHEN #Prompt('where do you want to find the objects','A',{'Company', 'All'},mono,constrained,not_persistent,{'Company'}) = 'Company' THEN 1 ELSE 2 END)
This filter is OK when I choose "All" because I get all the IDs less than or equal to 2, i.e. 0, 1, 2.
But it does not work when my option is "Company", because it also returns the row with ID 0.
I would need something that combines "=" with "<=".

If it's really only that simple, the following will work:
TABLE_NAME.ID =
(CASE #Prompt('where do you want to find the objects',
              'A',
              {'Company', 'All'},
              mono,
              constrained,
              not_persistent,
              {'Company'})
   WHEN 'Company' THEN 1
   WHEN 'All' THEN TABLE_NAME.ID
 END)
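The trick generalizes beyond BO prompts: in the 'All' branch the CASE returns the row's own ID, so the predicate compares the column with itself and is true for every row. A minimal sketch in plain SQL, with a hypothetical bind parameter :choice standing in for the prompt value:

SELECT *
FROM TABLE_NAME
WHERE TABLE_NAME.ID = (CASE :choice
                         WHEN 'Company' THEN 1
                         WHEN 'All'     THEN TABLE_NAME.ID
                       END)
-- 'Company' keeps only ID = 1; 'All' reduces to ID = ID, which every
-- (non-NULL) ID satisfies.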

Related

Return Parts of an Array in Postgres

I have a text column in my Postgres DB (v10) with JSON content.
As far as I know, it has an array format.
Here is a fiddle example: Fiddle
If table1 = persons and change_type = create, then I only want to return the name and firstname concatenated as one field and clear the rest of the text.
Output should be like this:
id | table1  | did | execution_date | change_type | attr      | context_data
---+---------+-----+----------------+-------------+-----------+--------------
 1 | Persons |   1 | 2021-01-01     | Create      | Name      | [["+","name","Leon Bill"]]
 1 | Persons |   2 | 2021-01-01     | Update      | Firt_name | [["+","cur_nr","12345"],["+","art_cd","1"],["+","name","Leon"],["+","versand_art",null],["+","email",null],["+","firstname","Bill"],["+","code_cd",null]]
 1 | Users   |   3 | 2021-01-01     | Create      | Street    | [["+","cur_nr","12345"],["+","art_cd","1"],["+","name","Leon"],["+","versand_art",null],["+","email",null],["+","firstname","Bill"],["+","code_cd",null]]
Disassemble the JSON array into a SETOF using the json_array_elements function, then assemble it back into the structure you want.
select m.*
     , case
         when m.table1 = 'Persons' and m.change_type = 'Create'
         then (select '[["+","name",' || to_json(string_agg(a.value->>2, ' ' order by a.value->>1 desc))::text || ']]'
               from json_array_elements(m.context_data::json) a
               where a.value->>1 in ('name','firstname'))
         else m.context_data
       end as context_data
from mutations m
modified fiddle
(Notes:
- Relying on the alphabetical ordering of the required field names is a little dirty; an explicit ORDER BY CASE would improve readability.
- The resulting JSON is assembled from string literals as much as possible, since you didn't specify whether the "+" should be taken from any of the original array elements.
- The to_json()::text is just for safety against injection.
)
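To see the disassemble/reassemble step in isolation, here is a minimal sketch run against a JSON literal built from the example data above:

select string_agg(a.value->>2, ' ' order by a.value->>1 desc)
from json_array_elements('[["+","name","Leon"],["+","firstname","Bill"]]'::json) a
where a.value->>1 in ('name','firstname');
-- 'name' sorts after 'firstname', so descending order yields: Leon Bill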

Query for condition in array of JSON objects in PostgreSQL

Let's assume we have a PostgreSQL DB with a table with rows of the following kind:
id | doc
---+-----------------
1 | JSON Object
2 | JSON Object
3 | JSON Object
...
The JSON has the following structure:
{
  "header": {
    "info": "foo"
  },
  "data": [
    {"a": 1, "b": 123},
    {"a": 2, "b": 234},
    {"a": 1, "b": 543},
    ...
    {"a": 1, "b": 123},
    {"a": 4, "b": 452}
  ]
}
with arbitrary values for 'a' and 'b' in 'data' in all rows of the table.
First question: how do I query for rows in the table where the following condition holds:
there exists an object in the array under the key 'data' where a == i and b > j.
For example, for i=1 and j=400 the condition would be fulfilled for the example above and the respective row would be returned.
Second question:
In my problem I have to deal with time-series data in JSON. Every measurement is represented by one JSON document and therefore one row in the table. I want to identify measurements where certain events occurred. In case the above structure is unsuitable for easy querying: what could such a time series look like to be more easily queryable?
Thanks a lot!
I believe a query like this should answer your first question:
select distinct id, doc
from (
  select id, doc, jsonb_array_elements(doc->'data') as elem
  from docs
) as docelem
where (elem->>'a')::int = 4 and (elem->>'b')::int > 400
db<>fiddle here
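An equivalent formulation avoids the DISTINCT by testing the array elements in an EXISTS subquery; a sketch against the same assumed table docs:

select id, doc
from docs d
where exists (
  select 1
  from jsonb_array_elements(d.doc->'data') as elem
  where (elem->>'a')::int = 4 and (elem->>'b')::int > 400
);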

Case insensitive Postgres query with array contains

I have records which contain an array of tags like these:
id | title | tags
--------+-----------------------------------------------+----------------------
124009 | bridge photo | {bridge,photo,Colors}
124018 | Zoom 5 | {Recorder,zoom}
123570 | Sint et | {Reiciendis,praesentium}
119479 | Architecto consectetur | {quia}
I'm using the following SQL query to fetch a specific record by tags ('bridge', 'photo', 'Colors'):
SELECT "listings".* FROM "listings" WHERE (tags #> ARRAY['bridge', 'photo', 'Colors']::varchar[]) ORDER BY "listings"."id" ASC LIMIT $1 [["LIMIT", 1]]
And this returns the first record in this table.
The problem is that I have mixed-case tags, and I would like this to return the same result if I search for: bridge, photo, colors. Essentially I need to make this search case-insensitive but can't find a way to do so with Postgres.
This is the SQL query I've tried which is throwing errors:
SELECT "listings".* FROM "listings" WHERE (LOWER(tags) #> ARRAY['bridge', 'photo', 'colors']::varchar[]) ORDER BY "listings"."id" ASC LIMIT $1
This is the error:
PG::UndefinedFunction: ERROR: function lower(character varying[]) does not exist
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
You can convert the text array elements to lower case like this:
select lower(tags::text)::text[]
from listings;
lower
--------------------------
{bridge,photo,colors}
{recorder,zoom}
{reiciendis,praesentium}
{quia}
(4 rows)
Use this in your query:
SELECT *
FROM listings
WHERE lower(tags::text)::text[] #> ARRAY['bridge', 'photo', 'colors']
ORDER BY id ASC;
id | title | tags
--------+--------------+-----------------------
124009 | bridge photo | {bridge,photo,Colors}
(1 row)
You can't apply LOWER() to an array directly, but you can unpack the array, apply it to each element, and reassemble it when you're done:
... WHERE ARRAY(SELECT LOWER(UNNEST(tags))) #> ARRAY['bridge', 'photo', 'colors']
You could also install the citext (case-insensitive text) module; if you declare listings.tags as type citext[], your query should work as-is.
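A sketch of the citext route, assuming the extension is available and tags is currently varchar[]:

CREATE EXTENSION IF NOT EXISTS citext;
ALTER TABLE listings ALTER COLUMN tags TYPE citext[] USING tags::citext[];
-- citext compares case-insensitively, so lower-case search terms now
-- match {bridge,photo,Colors}:
SELECT * FROM listings WHERE tags @> ARRAY['bridge','photo','colors']::citext[];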

Solr demote all documents with condition

I want to demote all documents that have inv=0 (possible values from 0 to 1000) to the end of the result set. I also have other sort options, like name desc, as part of the query.
For example below are my solr documents
Doc1 : name=apple , Inv=2
Doc2 : name=ball , Inv=1
Doc3 : name=cat , Inv=0
Doc4 : name=dog , Inv=0
Doc5 : name=fish , Inv=4
Doc6 : name=Goat , Inv=5
I want to achieve the sorting below: push all documents with inv=0 down to the bottom and then apply "name asc" sorting.
Doc1
Doc2
Doc5
Doc6
Doc3
Doc4
My Solr request is like:
bq=(*:* AND -inv:"0")^999.0 & defType=edismax
Here, 999 is the boost that I gave to demote results.
This boosting query works fine: it moves all documents with inv=0 down to the bottom.
But when I add &sort=name asc to the Solr query, it prioritizes sort over bq. I am seeing the results below with "name asc":
Doc1 : name=apple , Inv=2
Doc2 : name=ball , Inv=1
Doc3 : name=cat , Inv=0
Doc4 : name=dog , Inv=0
Doc5 : name=fish , Inv=4
Doc6 : name=Goat , Inv=5
Can anyone please help me out?
The easy solution: you can just sort by inv first, then the other values. This requires that inv only has true (1) or false (0) values. I guess that is not the case, so:
You can sort by a function query, using the function if to return different values based on whether inv is zero or non-zero:
sort=if(inv, 1, 0) desc, name asc
If Solr fails to resolve inv by itself, you can use field(inv), but it shouldn't be necessary.
Another option is to use the function min to get either 1 or 0 for the sort field, depending on whether it's in inventory or not:
sort=min(1, inv) desc, name asc
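Putting it together, a full request could look like this (host, core name, and q are placeholders; spaces in the sort clause must be URL-encoded):

http://localhost:8983/solr/mycore/select?q=*:*&defType=edismax&sort=if(inv,1,0)%20desc,name%20asc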

In ArangoDB, will querying, with filters, from the neighbor(s) be done in O(n)?

I've been reading AQL Graph Operations and Graphs, and have found no concrete example or performance explanation for the use case of SQL-Traverse.
E.g.:
If I have a collection Users, which has a company relation to collection Company;
collection Company has a location relation to collection Location;
collection Location is either a city, country, or region, and has relations city, country, and region to itself.
Now, I would like to query all users who belong to companies in Germany or the EU.
SELECT from Users where Users.company.location.city.country.name="Germany";
SELECT from Users where Users.company.location.city.parent.name="Germany";
or
SELECT from Users where Users.company.location.city.country.region.name="europe";
SELECT from Users where Users.company.location.city.parent.parent.name="europe";
Assuming that Location.name is indexed, can I have the two queries above executed in O(n), with n being the number of documents in Location (O(1) for graph traversal, O(n) for index scanning)?
Of course, I could just save regionName or countryName directly in company, as these cities and countries are in the EU and probably won't change, but what if I have other use cases which require constant updates?
I'm going to explain this using ArangoDB 2.8 traversals.
We create these collections to match your schema using arangosh:
db._create("countries")
db.countries.save({_key:"Germany", name: "Germany"})
db.countries.save({_key:"France", name: "France"})
db.countries.ensureHashIndex("name")
db._create("cities")
db.cities.save({_key: "Munich"})
db.cities.save({_key: "Toulouse")
db._create("company")
db.company.save({_key: "Siemens"})
db.company.save({_key: "Airbus"})
db._create("employees")
db.employees.save({lname: "Kraxlhuber", cname: "Xaver", _key: "user1"})
db.employees.save({lname: "Heilmann", cname: "Vroni", _key: "user2"})
db.employees.save({lname: "Leroy", cname: "Marcel", _key: "user3"})
db._createEdgeCollection("CityInCountry")
db._createEdgeCollection("CompanyIsInCity")
db._createEdgeCollection("WorksAtCompany")
db.CityInCountry.save("cities/Munich", "countries/Germany", {label: "beautiful South near the mountains"})
db.CityInCountry.save("cities/Toulouse", "countries/France", {label: "crowded city at the mediteranian Sea"})
db.CompanyIsInCity.save("company/Siemens", "cities/Munich", {label: "darfs ebbes gscheits sein? Oder..."})
db.CompanyIsInCity.save("company/Airbus", "cities/Toulouse", {label: "Big planes Ltd."})
db.WorksAtCompany.save("employees/user1", "company/Siemens", {employeeOfMonth: true})
db.WorksAtCompany.save("employees/user2", "company/Siemens", {veryDiligent: true})
db.WorksAtCompany.save("employees/user3", "company/Eurocopter", {veryDiligent: true})
In AQL we would write this query the other way around:
we start with the constant-time FILTER on the indexed attribute name and start our traversals from there.
Therefore, we filter for the country "Germany":
db._explain("FOR country IN countries FILTER country.name == 'Germany' RETURN country ")
Query string:
FOR country IN countries FILTER country.name == 'Germany' RETURN country
Execution plan:
 Id   NodeType        Est.   Comment
  1   SingletonNode      1   * ROOT
  6   IndexNode          1     - FOR country IN countries   /* hash index scan */
  5   ReturnNode         1     - RETURN country

Indexes used:
 By   Type   Collection   Unique   Sparse   Selectivity   Fields       Ranges
  6   hash   countries    false    false    66.67 %       [ `name` ]   country.`name` == "Germany"
Optimization rules applied:
Id RuleName
1 use-indexes
2 remove-filter-covered-by-index
Now that we have our well-filtered start node, we do a graph traversal in the reverse direction. Since we know that employees are exactly 3 steps away from the start vertex, and we're not interested in the path, we only return the 3rd layer:
db._query("FOR country IN countries FILTER country.name == 'Germany' FOR v IN 3 INBOUND country CityInCountry, CompanyIsInCity, WorksAtCompany RETURN v")
[
{
"cname" : "Xaver",
"lname" : "Kraxlhuber",
"_id" : "employees/user1",
"_rev" : "1286703864570",
"_key" : "user1"
},
{
"cname" : "Vroni",
"lname" : "Heilmann",
"_id" : "employees/user2",
"_rev" : "1286729095930",
"_key" : "user2"
}
]
Some words about this query's performance:
Locating Germany via the hash index takes constant time -> O(1)
Based on that, we traverse m paths, where m is the number of employees in Germany; each of them can be traversed in constant time.
-> O(m) at this step.
Returning the result takes constant time -> O(1)
All combined, we need O(m), where we expect m to be less than n (the number of employees) as used in your SQL-Traverse.
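For the region case from the question ("europe"), the same pattern extends by one step; a sketch assuming a regions collection and a CountryInRegion edge collection (both names hypothetical):

db._query("FOR region IN regions FILTER region.name == 'europe' FOR v IN 4 INBOUND region CountryInRegion, CityInCountry, CompanyIsInCity, WorksAtCompany RETURN v")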
