ST_ApproximateMedialAxis(geometry) does not exist? - postgis

This doesn't make sense; the function has existed since v2.2:
SELECT distinct geometrytype(geom) from t; -- POLYGON
SELECT ST_ApproximateMedialAxis(geom) from t;
-- ERROR: function st_approximatemedialaxis(geometry) does not exist
-- LINE 1: select ST_ApproximateMedialAxis(geom) from t...
SELECT PostGIS_Version();
-- "3.0 USE_GEOS=1 USE_PROJ=1 USE_STATS=1"
SELECT postgis_full_version();
-- POSTGIS="3.0.1 ec2a9aa" [EXTENSION] PGSQL="120" GEOS="3.8.0-CAPI-1.13.1 " PROJ="6.3.1" LIBXML="2.9.10" LIBJSON="0.13.1" LIBPROTOBUF="1.3.3" WAGYU="0.4.3 (Internal)"
SELECT Version();
-- "PostgreSQL 12.3 (Ubuntu 12.3-1.pgdg20.04+1) ... 64-bit"
\df st_area and all the others are there...
\df public.ST_ApproximateMedialAxis = no function!
This last check shows that it was not installed (!)... Well, the guide says "This method needs SFCGAL backend", so how do I check that?

It turns out to be simple:
CREATE EXTENSION postgis_sfcgal;
\df public.ST_ApproximateMedialAxis
                                 List of functions
 Schema |           Name           | Result data type | Argument data types | Type
--------+--------------------------+------------------+---------------------+------
 public | st_approximatemedialaxis | geometry         | geometry            | func
Thanks to @JGH and https://gis.stackexchange.com/a/179618/7505
Now postgis_full_version() shows also SFCGAL version, SFCGAL="1.3.7".
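For the original "how to check it?" question, a quick sketch (assuming the extension packaging shown above) is to ask the catalog whether the SFCGAL extension is shipped and installed, and then ask PostGIS for the backend version:
-- is the SFCGAL extension available with this PostGIS build, and is it installed?
SELECT name, default_version, installed_version
FROM pg_available_extensions
WHERE name = 'postgis_sfcgal';
-- after CREATE EXTENSION postgis_sfcgal, this reports the backend version directly
SELECT postgis_sfcgal_version();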

Related

Adding multiple records from a string

I have a string of email addresses. For example, "a@a.com; b@a.com; c@a.com"
My database is:
record | flag1 | flag2 | emailaddress
-------+-------+-------+--------------
1      | 0     | 0     | a@a.com
2      | 0     | 0     | b@a.com
3      | 0     | 0     | c@a.com
What I need to do is parse the string, and if the address is not in the database, add it.
Then, return a string of just the record numbers that correspond to the email addresses.
So, if the call is made with "A@a.com; c@a.com; d@a.com", the routine would add "d@a.com", then return "1, 3, 4", corresponding to the records that match the email addresses.
What I am doing now is calling the database once per email address to look it up and confirm it exists (adding it if it doesn't), then looping through them again to get the addresses one by one from my PowerShell app to collect the record numbers.
There has to be a way to just pass all of the addresses to SQL at the same time, right?
I have it working in PowerShell... but slowly.
I'd love a response from SQL as shown above of just the record number for each email address in a single response. That is, "1,2,4" etc.
My PowerShell code is:
$EmailList2 = $EmailList.split(";")
# let's get the ID # for each email address
foreach($x in $EmailList2)
{
    $data = exec-query "select Record from emailaddresses where emailAddress = @email" -parameter @{email=$x.trim()} -conn $connection
    if ($($data.Tables.record) -gt 0)
    {
        $ResponseNumbers = $ResponseNumbers + "$($data.Tables.record), "
    }
}
$ResponseNumbers = $($ResponseNumbers+"XX").replace(", XX","")
return $ResponseNumbers
You'd have to do this in two steps: first INSERT the new values, and then use a SELECT to get the values back. This answer uses DelimitedSplit8K (not DelimitedSplit8K_LEAD) as you're still using SQL Server 2008. On the note of 2008, I strongly suggest looking at upgrade paths soon, as you have about 6 weeks of support left.
You can use the function to split the values and then INSERT/SELECT appropriately:
DECLARE @Emails varchar(8000) = 'a@a.com;b@a.com;c@a.com';

WITH Emails AS(
    SELECT DS.Item AS Email
    FROM dbo.DelimitedSplit8K(@Emails,';') DS)
INSERT INTO dbo.YourTable (emailaddress) --I don't know what the other columns' values should be, so have excluded them
SELECT E.Email
FROM Emails E
     LEFT JOIN dbo.YourTable YT ON YT.emailaddress = E.Email
WHERE YT.emailaddress IS NULL;

SELECT YT.record
FROM dbo.YourTable YT
     JOIN dbo.DelimitedSplit8K(@Emails,';') DS ON DS.Item = YT.emailaddress;
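Since the goal is one comma-separated string of record numbers, the final SELECT can also be collapsed into a single value on SQL Server 2008 with FOR XML PATH and STUFF (a sketch reusing the assumed dbo.YourTable name and the @Emails variable from the batch above; STRING_AGG only exists from SQL Server 2017 on):
-- build "1,2,3" in a single result, removing the leading comma with STUFF
SELECT STUFF((SELECT ',' + CAST(YT.record AS varchar(10))
              FROM dbo.YourTable YT
              JOIN dbo.DelimitedSplit8K(@Emails,';') DS ON DS.Item = YT.emailaddress
              ORDER BY YT.record
              FOR XML PATH('')), 1, 1, '') AS RecordList;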

Check a value in an array inside a object json in PostgreSQL 9.5

I have a JSON object containing an array and other properties.
I need to check the first value of the array for each row of my table.
Here is an example of the JSON:
{"objectID2":342,"objectID1":46,"objectType":["Demand","Entity"]}
So I need, for example, to get all rows with objectType[0] = 'Demand' and objectID1 = 46.
These are the table columns:
id | relationName | content
The content column contains the JSON.
Just query them, like:
t=# with table_name(id, rn, content) as (values(1,null,'{"objectID2":342,"objectID1":46,"objectType":["Demand","Entity"]}'::json))
select * From table_name
where content->'objectType'->>0 = 'Demand' and content->>'objectID1' = '46';
id | rn | content
----+----+-------------------------------------------------------------------
1 | | {"objectID2":342,"objectID1":46,"objectType":["Demand","Entity"]}
(1 row)
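Applied to the real table it looks like this (a sketch: the table name my_table is a placeholder, and the optional expression index only pays off if this lookup runs often):
SELECT *
FROM my_table
WHERE content->'objectType'->>0 = 'Demand'
  AND content->>'objectID1' = '46';
-- optional: index the two extracted values to avoid a sequential scan
CREATE INDEX idx_content_lookup
    ON my_table (((content->'objectType'->>0)), ((content->>'objectID1')));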

Case insensitive Postgres query with array contains

I have records which contain an array of tags like these:
id | title | tags
--------+-----------------------------------------------+----------------------
124009 | bridge photo | {bridge,photo,Colors}
124018 | Zoom 5 | {Recorder,zoom}
123570 | Sint et | {Reiciendis,praesentium}
119479 | Architecto consectetur | {quia}
I'm using the following SQL query to fetch a specific record by tags ('bridge', 'photo', 'Colors'):
SELECT "listings".* FROM "listings" WHERE (tags #> ARRAY['bridge', 'photo', 'Colors']::varchar[]) ORDER BY "listings"."id" ASC LIMIT $1 [["LIMIT", 1]]
And this returns a first record in this table.
The problem with this is that I have mixed type cases and I would like this to return the same result if I search for: bridge, photo, colors. Essentially I need to make this search case-insensitive but can't find a way to do so with Postgres.
This is the SQL query I've tried, which throws an error:
SELECT "listings".* FROM "listings" WHERE (LOWER(tags) @> ARRAY['bridge', 'photo', 'colors']::varchar[]) ORDER BY "listings"."id" ASC LIMIT $1
This is the error:
PG::UndefinedFunction: ERROR: function lower(character varying[]) does not exist
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
You can convert the text array elements to lower case like this:
select lower(tags::text)::text[]
from listings;
lower
--------------------------
{bridge,photo,colors}
{recorder,zoom}
{reiciendis,praesentium}
{quia}
(4 rows)
Use this in your query:
SELECT *
FROM listings
WHERE lower(tags::text)::text[] @> ARRAY['bridge', 'photo', 'colors']
ORDER BY id ASC;
id | title | tags
--------+--------------+-----------------------
124009 | bridge photo | {bridge,photo,Colors}
(1 row)
You can't apply LOWER() to an array directly, but you can unpack the array, apply it to each element, and reassemble it when you're done:
... WHERE ARRAY(SELECT LOWER(UNNEST(tags))) @> ARRAY['bridge', 'photo', 'colors']
You could also install the citext (case-insensitive text) module; if you declare listings.tags as type citext[], your query should work as-is.
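A sketch of that citext route, assuming the column can be altered in place and that tags should compare case-insensitively everywhere, not only in this one query:
CREATE EXTENSION IF NOT EXISTS citext;
ALTER TABLE listings
    ALTER COLUMN tags TYPE citext[] USING tags::citext[];
-- citext compares case-insensitively, so the original containment query now matches
SELECT *
FROM listings
WHERE tags @> ARRAY['bridge', 'photo', 'colors']::citext[]
ORDER BY id ASC;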

Why this strange behaviour with SETOF in PostgreSQL?

I have the following code snippets (CODE#1, CODE#2, CODE#3) in my database.
CODE#1: the CREATE statement of a table "class_type", which is used like an enum in Java.
It contains data such as {"class-A", "class-B", "sports-A", "RED", "BLUE", ...}.
Now I am trying to fetch these values using a stored function, given as CODE#2 and CODE#3.
Expected output of CODE#2 and CODE#3:
{
| classtype character varying |
| class-A |
| class-B |
| sports-A |
| RED |
| BLUE |
| ....... |
}
What did I find strange?
CODE#2 sometimes returns the expected output and sometimes the "unexpected output" below. What is the reason behind this?
CODE#3 works fine and produces the expected output every time.
Unexpected output of CODE#2:
{
| get_class_type_list character varying |
| class-A |
| class-B |
| sports-A |
| RED |
| BLUE |
| ....... |
}
Here are the code snippets:
CODE#1
{
CREATE TABLE test.class_type
(
value character varying(80) NOT NULL,
is_active boolean NOT NULL DEFAULT true,
sort_order integer,
CONSTRAINT class_type_pkey PRIMARY KEY (value)
)
WITH (
OIDS=FALSE
);
}
CODE#2
{
CREATE OR REPLACE FUNCTION test.get_class_type_list()
RETURNS SETOF character varying AS
$BODY$
DECLARE
SQL VARCHAR;
BEGIN
RETURN QUERY
(SELECT value AS "classType"
FROM TEST.CLASS_TYPE);
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100
ROWS 1000;
}
CODE#3
{
CREATE OR REPLACE FUNCTION test.get_class_type_list()
RETURNS TABLE(classType character varying) AS
$BODY$
DECLARE
SQL VARCHAR;
BEGIN
RETURN QUERY
(SELECT value AS "classType"
FROM TEST.CLASS_TYPE);
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100
ROWS 1000;
}
SQL Fiddle Sample Code
Edited:
I want the column name of the returned set to be "classType", not the function name.
The only difference I see in the results is the column name (which is actually an alias). You have full control over column aliases at the calling context: sqlfiddle. (The reason it sometimes behaves differently is that RETURNS SETOF <primitive-type> is a special returning clause.)
The rule of thumb is: if you have an alias in the function definition (such as OUT parameters, RETURNS TABLE or RETURNS SETOF <composite/row-type>, but not an alias inside the function body itself), PostgreSQL will use it unless the caller supplies an explicit alias. If you use RETURNS SETOF <primitive/simple-type> without OUT parameters, the default column alias is the function name.
Note: the reason I didn't post this as an answer the first time is that, unfortunately, I couldn't find any reference for this in the docs.
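A minimal sketch of that calling-context control, using the SETOF variant (CODE#2); the column alias list after the function call overrides the default name:
-- with RETURNS SETOF character varying the column defaults to the function name...
SELECT * FROM test.get_class_type_list();
-- ...but an explicit column alias at the call site renames it to classtype
SELECT * FROM test.get_class_type_list() AS t(classtype);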

Apply function to every element of an array in a SELECT statement

I am listing all functions of a PostgreSQL schema and need the human-readable types for every argument of each function. The OIDs of the types are represented as an array in proallargtypes. I can unnest the array and apply format_type() to it, but that splits the query into multiple rows for a single function. To avoid that I have to add an outer SELECT to group the argument types again because, apparently, one cannot group an unnested array. All columns depend on proname, but since proname is not a primary key I still have to list every column in the GROUP BY clause, which is unnecessary.
Is there a better way to achieve my goal of an output like this:
proname | ... | protypes
-------------------------------------
test | ... | {integer,integer}
I am currently using this query:
SELECT
proname,
prosrc,
pronargs,
proargmodes,
array_agg(proargtypes), -- see here
proallargtypes,
proargnames,
prodefaults,
prorettype,
lanname
FROM (
SELECT
p.proname,
p.prosrc,
p.pronargs,
p.proargmodes,
format_type(unnest(p.proallargtypes), NULL) AS proargtypes, -- and here
p.proallargtypes,
p.proargnames,
pg_get_expr(p.proargdefaults, 0) AS prodefaults,
format_type(p.prorettype, NULL) AS prorettype,
l.lanname
FROM pg_catalog.pg_proc p
JOIN pg_catalog.pg_language l
ON l.oid = p.prolang
JOIN pg_catalog.pg_namespace n
ON n.oid = p.pronamespace
WHERE n.nspname = 'public'
) x
GROUP BY proname, prosrc, pronargs, proargmodes, proallargtypes, proargnames, prodefaults, prorettype, lanname
You can use the internal "undocumented" function pg_catalog.pg_get_function_arguments(p.oid).
postgres=# SELECT pg_catalog.pg_get_function_arguments('fufu'::regproc);
pg_get_function_arguments
---------------------------
a integer, b integer
(1 row)
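For the original goal of listing a whole schema, that helper can simply be applied per function (a sketch; the schema name 'public' is taken from the question):
SELECT p.proname,
       pg_catalog.pg_get_function_arguments(p.oid) AS arguments
FROM pg_catalog.pg_proc p
JOIN pg_catalog.pg_namespace n ON n.oid = p.pronamespace
WHERE n.nspname = 'public';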
Now, there is no built-in "map" function, so unnest plus array_agg is the only possibility. You can simplify life with your own custom function:
CREATE OR REPLACE FUNCTION format_types(oid[])
RETURNS text[] AS $$
SELECT ARRAY(SELECT format_type(unnest($1), null))
$$ LANGUAGE sql IMMUTABLE;
and the result:
postgres=# SELECT format_types('{21,22,23}');
format_types
-------------------------------
{smallint,int2vector,integer}
(1 row)
Then your query would be:
SELECT proname, format_types(proallargtypes)
FROM pg_proc
WHERE pronamespace = 2200 AND proallargtypes IS NOT NULL;
But the result will probably not be what you expect, because the proallargtypes field is only non-empty when OUT parameters are used; it is usually empty. You should look at the proargtypes field instead, but it is of type oidvector, so you have to transform it to oid[] first.
postgres=# SELECT proname, format_types(string_to_array(proargtypes::text,' ')::oid[])
FROM pg_proc
WHERE pronamespace = 2200
LIMIT 10;
proname | format_types
------------------------------+----------------------------------------------------
quantile_append_double | {internal,"double precision","double precision"}
quantile_append_double_array | {internal,"double precision","double precision[]"}
quantile_double | {internal}
quantile_double_array | {internal}
quantile | {"double precision","double precision"}
quantile | {"double precision","double precision[]"}
quantile_cont_double | {internal}
quantile_cont_double_array | {internal}
quantile_cont | {"double precision","double precision"}
quantile_cont | {"double precision","double precision[]"}
(10 rows)
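Putting this together with the original query shape (a sketch: format_types() is the custom function defined above, and the COALESCE falls back to proargtypes when proallargtypes is NULL), the GROUP BY disappears entirely:
SELECT p.proname,
       format_types(COALESCE(p.proallargtypes,
                             string_to_array(p.proargtypes::text, ' ')::oid[])) AS protypes,
       format_type(p.prorettype, NULL) AS prorettype,
       l.lanname
FROM pg_catalog.pg_proc p
JOIN pg_catalog.pg_language l ON l.oid = p.prolang
JOIN pg_catalog.pg_namespace n ON n.oid = p.pronamespace
WHERE n.nspname = 'public';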
