I have the following code snippets { CODE#1, CODE#2, CODE#3 } in my database.
CODE#1: the CREATE statement of a table "class_type", which is used like an enum in Java.
It contains data such as { "class-A", "class-B", "sports-A", "RED", "BLUE", ... }
Now I am trying to fetch these values using a stored procedure, shown in CODE#2 and CODE#3.
Expected Output of CODE#2 and CODE#3 :
{
| classtype character varying |
| class-A |
| class-B |
| sports-A |
| RED |
| BLUE |
| ....... |
}
What did I find strange?
CODE#2 sometimes returns the expected output and sometimes the "unexpected output" below. What is the reason behind this?
CODE#3 works fine and returns the expected output every time.
Unexpected Output of CODE#2 :
{
| get_class_type_list character varying |
| class-A |
| class-B |
| sports-A |
| RED |
| BLUE |
| ....... |
}
Following are the Code Snippets :
CODE#1
{
CREATE TABLE test.class_type
(
value character varying(80) NOT NULL,
is_active boolean NOT NULL DEFAULT true,
sort_order integer,
CONSTRAINT class_type_pkey PRIMARY KEY (value)
)
WITH (
OIDS=FALSE
);
}
CODE#2
{
CREATE OR REPLACE FUNCTION test.get_class_type_list()
RETURNS SETOF character varying AS
$BODY$
DECLARE
SQL VARCHAR;
BEGIN
RETURN QUERY
(SELECT value AS "classType"
FROM TEST.CLASS_TYPE);
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100
ROWS 1000;
}
CODE#3
{
CREATE OR REPLACE FUNCTION test.get_class_type_list()
RETURNS TABLE(classType character varying) AS
$BODY$
DECLARE
SQL VARCHAR;
BEGIN
RETURN QUERY
(SELECT value AS "classType"
FROM TEST.CLASS_TYPE);
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100
ROWS 1000;
}
SQL Fiddle Sample Code
Edited:
I want the column name of the function's result to be "classType", not the function name.
The only difference I see in the results is the column name (which is actually an alias). You have full control over column aliases at the calling context: sqlfiddle. The reason it sometimes behaves differently is that RETURNS SETOF <primitive-type> is a special returning clause.
The rule of thumb is: if there is an alias in the function definition (such as OUT parameters, RETURNS TABLE, or RETURNS SETOF <composite/row-type>; but not an alias inside the function body itself), PostgreSQL will use that, unless the caller supplies an explicit alias. If you use RETURNS SETOF <primitive/simple-type> without OUT parameters, the default alias for that column is the function name.
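For example, a minimal sketch using the names from the question: an explicit column alias in the calling query overrides the default, regardless of which variant of the function is installed.
-- the quoted alias preserves the camelCase column name "classType"
SELECT * FROM test.get_class_type_list() AS t("classType");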
Note: the reason I didn't post this as an answer the first time is that, unfortunately, I couldn't find any reference for this in the docs.
This makes no sense; the function has existed since v2.2:
SELECT distinct geometrytype(geom) from t; -- POLYGON
SELECT ST_ApproximateMedialAxis(geom) from t;
-- ERROR: function st_approximatemedialaxis(geometry) does not exist
-- LINE 1: select ST_ApproximateMedialAxis(geom) from t...
select PostGIS_Version() = "3.0 USE_GEOS=1 USE_PROJ=1 USE_STATS=1"
select postgis_full_version() = POSTGIS="3.0.1 ec2a9aa" [EXTENSION] PGSQL="120" GEOS="3.8.0-CAPI-1.13.1 " PROJ="6.3.1" LIBXML="2.9.10" LIBJSON="0.13.1" LIBPROTOBUF="1.3.3" WAGYU="0.4.3 (Internal)"
select Version() = "PostgreSQL 12.3 (Ubuntu 12.3-1.pgdg20.04+1) ... 64-bit"
\df st_area, and all the others, are there...
\df public.ST_ApproximateMedialAxis = no function!
This last check shows that it was not installed (!)... Well, the guide says "This method needs SFCGAL backend"; how do I check that?
Seems so simple
CREATE EXTENSION postgis_sfcgal;
\df public.ST_ApproximateMedialAxis
List of functions
Schema | Name | Result data type | Argument data types | Type
--------+--------------------------+------------------+---------------------+------
public | st_approximatemedialaxis | geometry | geometry | func
Thanks to @JGH and https://gis.stackexchange.com/a/179618/7505
Now postgis_full_version() also shows the SFCGAL version: SFCGAL="1.3.7".
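As a quick sanity check (the sample polygon below is made up, not from the question), the function should now resolve:
-- hypothetical test polygon; returns the approximate medial axis as a geometry
SELECT ST_ApproximateMedialAxis('POLYGON((0 0, 4 0, 4 2, 0 2, 0 0))'::geometry);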
I'm doing a simple concatenation in SSIS.
For example I have a table like this:
+--------+-----------+------------+--------+
| ID | COL_A | COL_B | COL_C |
+--------+-----------+------------+--------+
| 110-99 | | APPLE | Orange |
+--------+-----------+------------+--------+
| 111-01 | Mango | Palm | |
+--------+-----------+------------+--------+
| 111-02 | | Strawberry | |
+--------+-----------+------------+--------+
| 111-05 | Pineapple | Guava | Lemom |
+--------+-----------+------------+--------+
I'm doing this in an SSIS Derived Column transformation, concatenating the 3 columns with a pipe (|):
COL_A +"|"+COL_B+"|"+COL_C
Actual Result:
|APPLE|Orange
MANGO|Palm|
|Strawberry|
Pineapple|Guava|Lemom
Expected Result:
APPLE|Orange
MANGO|Palm
Strawberry
Pineapple|Guava|Lemom
I'm not sure how to remove those extra | characters when a value is empty. I have tried using CASE, but it is not working; actually, I don't know how to use CASE in a Derived Column expression.
You execute conditional logic in SSIS expressions using the ?: syntax (see "? : (Conditional) (SSIS Expression)" in the documentation). It works much like an inline IIF.
Along these lines:
boolean_expression ? returnIfTrue : returnIfFalse
In order to get your desired results, I think I'd use two derived column transformations. In the first one, I'd create the pipe delimited string, then in the second one, I'd trim off the trailing pipe if there was one after building the string. Otherwise, the conditional logic would get pretty hairy in order to avoid leaving a trailing delimiter.
Step one - If each column is NULL or an empty string, return an empty string. Otherwise, return the contents of the column with a pipe concatenated to it:
(ISNULL(COL_A) || COL_A == "") ? "" : COL_A + "|"
Repeat that logic for all three columns, putting this expression into your derived column (Line breaks added for readability here):
((ISNULL(COL_A) || COL_A == "") ? "" : COL_A + "|") +
((ISNULL(COL_B) || COL_B == "") ? "" : COL_B + "|") +
((ISNULL(COL_C) || COL_C == "") ? "" : COL_C)
(No pipe after COL_C, since nothing follows it.)
Then, in the second transformation, trim the trailing pipes from the instances where the last column or two were empty:
(RIGHT(NewColumnFromAbove,1)=="|") ? LEFT(NewColumnFromAbove,LEN(NewColumnFromAbove)-1) : NewColumnFromAbove
On the other hand, if there are lots of columns, or if performance gets bogged down, I would strongly consider writing the concatenation into a stored procedure, using CONCAT_WS, and then invoke that from an Execute SQL task instead.
In SQL Server, one option is concat_ws(), which ignores null values by design. If you have empty strings, you can turn them into null values with nullif().
concat_ws(
    '|',
    nullif(col_a, ''),
    nullif(col_b, ''),
    nullif(col_c, '')
)
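For instance, a minimal sketch of a full query using it (dbo.Fruits is a placeholder table name; CONCAT_WS needs SQL Server 2017 or later):
SELECT ID,
       CONCAT_WS('|',
                 NULLIF(COL_A, ''),
                 NULLIF(COL_B, ''),
                 NULLIF(COL_C, '')) AS Combined
FROM dbo.Fruits;  -- placeholder table name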
Suppose I have a table PRODUCTS with many columns, and that I want to insert/update a row using a MERGE statement. It is something along these lines:
MERGE INTO PRODUCTS AS Target
USING (VALUES(42, 'Foo', 'Bar', 0, 14, 200, NULL)) AS Source (ID, Name, Description, IsSpecialPrice, CategoryID, Price, SomeOtherField)
ON Target.ID = Source.ID
WHEN MATCHED THEN
-- update
WHEN NOT MATCHED BY TARGET THEN
-- insert
To write the UPDATE and INSERT "sub-statements", it seems I have to specify each and every column once again. So -- update would be replaced by
UPDATE SET ID = Source.ID, Name = Source.Name, Description = Source.Description...
and -- insert by
INSERT (ID, Name, Description...) VALUES (Source.ID, Source.Name, Source.Description...)
This is very error-prone, hard to maintain, and apparently not really needed in the simple case where I just want to merge two "field sets" each representing a full table row. I appreciate that the update and insert statements could actually be anything (I've already used this in an unusual case in the past), but it would be great if there was a more concise way to represent the case where I just want "Target = Source" or "insert Source".
Does a better way to write the update and insert statements exist, or do I really need to specify the full column list every time?
You have to write the complete column lists.
You can check the documentation for MERGE here. Most SQL Server statement documentation starts with a syntax definition that shows you exactly what is allowed. For instance, the section for UPDATE is defined as:
<merge_matched>::=
{ UPDATE SET <set_clause> | DELETE }
<set_clause>::=
SET
{ column_name = { expression | DEFAULT | NULL }
| { udt_column_name.{ { property_name = expression
| field_name = expression }
| method_name ( argument [ ,...n ] ) }
}
| column_name { .WRITE ( expression , @Offset , @Length ) }
| @variable = expression
| @variable = column = expression
| column_name { += | -= | *= | /= | %= | &= | ^= | |= } expression
| @variable { += | -= | *= | /= | %= | &= | ^= | |= } expression
| @variable = column { += | -= | *= | /= | %= | &= | ^= | |= } expression
} [ ,...n ]
As you can see, the only options in <set_clause> are individual column assignments. There is no "bulk" assignment option. Lower down in the documentation you'll find that the options for INSERT also require individual expressions (at least in the VALUES clause; you can omit the column names after the INSERT, but that's generally frowned upon).
SQL tends to favour verbose, explicit syntax.
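For the example in the question, a fully spelled-out MERGE might look like this (column names taken from the question's Source list; the match key ID is left out of the UPDATE because it already matches via the ON clause):
MERGE INTO PRODUCTS AS Target
USING (VALUES(42, 'Foo', 'Bar', 0, 14, 200, NULL)) AS Source (ID, Name, Description, IsSpecialPrice, CategoryID, Price, SomeOtherField)
ON Target.ID = Source.ID
WHEN MATCHED THEN
    UPDATE SET Name = Source.Name,
               Description = Source.Description,
               IsSpecialPrice = Source.IsSpecialPrice,
               CategoryID = Source.CategoryID,
               Price = Source.Price,
               SomeOtherField = Source.SomeOtherField
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ID, Name, Description, IsSpecialPrice, CategoryID, Price, SomeOtherField)
    VALUES (Source.ID, Source.Name, Source.Description, Source.IsSpecialPrice, Source.CategoryID, Source.Price, Source.SomeOtherField);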
I have multiple checkboxes on a form generated from a model to view which is presented this way:
{{ Form::open(array('action' => 'LaboratoryController@store')) }}
@foreach (Accounts::where('accountclass', $i)->get() as $accounttypes)
    {{ Form::checkbox('accounttype[]', $accounttypes->id) }}
@endforeach
{{ Form::submit('Save') }}
{{ Form::close() }}
When I return the Input::all() from my controller store method, it outputs like this:
{"client":"1","accounttype":["2","3","5","12","13","14","16","31","32","33"]}
Now I want to store the accounttype array values in the accounts table by looping through the array, storing each value in its own row with the same client id.
The same accounttype will also be inserted into a second table, but with different data.
So, my accounts table:
+-------------+---------------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+---------------------------+------+-----+---------+----------------+
| accountno | int(11) unsigned zerofill | NO | PRI | NULL | auto_increment |
| accounttype | int(11) | NO | | NULL | |
| client | int(11) | NO | | NULL | |
| created_at | datetime | NO | | NULL | |
+-------------+---------------------------+------+-----+---------+----------------+
My controller store method:
public function store()
{
    $accounttypes = Input::get('accounttype');
    if (is_array($accounttypes))
    {
        for ($i = 0; $i < count($accounttypes); $i++)
        {
            // insert data into the first table (accounts table)
            $accountno = DB::table('accounts')->insertGetId(
                array('client' => Input::get('client'), 'accounttype' => $accounttypes[$i])
            );
            // insert data into the second table (account summary table) using the account no above
            // DB::table('accountsummary')...blah blah
        }
    }
    return Redirect::to('some/path');
}
The function seems to work, but only for the first array value, which is "2". I don't know what's wrong with the code, but the loop doesn't seem to go through the rest of the values. I tried other loop constructs like while and foreach, but the loop variable ($i) still stays at zero.
I was wondering whether Laravel controllers don't allow loops in POST methods.
Your inputs are greatly appreciated. Thanks.
foreach and DB::insert() work for me:
foreach ($accounttypes as $accounttype) {
    DB::insert('INSERT INTO tb_accounts (accounttype, client) VALUES (?, ?)', array($accounttype, Input::get('client')));
}
I just need to create a separate query to get the last insert id, because DB::insertGetId doesn't work the way I want. But that's another issue. Anyway, thanks.
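If it helps, a minimal sketch of that separate query, assuming the backend is MySQL (which the int(11)/auto_increment column definitions above suggest):
-- returns the auto-increment id generated by the most recent INSERT on this connection
SELECT LAST_INSERT_ID();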
I am listing all functions of a PostgreSQL schema and need the human-readable types for every argument of the functions. The OIDs of the types are represented as an array in proallargtypes. I can unnest the array and apply format_type() to it, which causes the query to split into multiple rows for a single function. To avoid that, I have to create an outer SELECT to GROUP the argument types again because, apparently, one cannot group an unnested array. All columns depend on proname, but I have to list all of them in the GROUP BY clause, which is unnecessary, yet proname is not a primary key.
Is there a better way to achieve my goal of an output like this:
proname | ... | protypes
-------------------------------------
test | ... | {integer,integer}
I am currently using this query:
SELECT
proname,
prosrc,
pronargs,
proargmodes,
array_agg(proargtypes), -- see here
proallargtypes,
proargnames,
prodefaults,
prorettype,
lanname
FROM (
SELECT
p.proname,
p.prosrc,
p.pronargs,
p.proargmodes,
format_type(unnest(p.proallargtypes), NULL) AS proargtypes, -- and here
p.proallargtypes,
p.proargnames,
pg_get_expr(p.proargdefaults, 0) AS prodefaults,
format_type(p.prorettype, NULL) AS prorettype,
l.lanname
FROM pg_catalog.pg_proc p
JOIN pg_catalog.pg_language l
ON l.oid = p.prolang
JOIN pg_catalog.pg_namespace n
ON n.oid = p.pronamespace
WHERE n.nspname = 'public'
) x
GROUP BY proname, prosrc, pronargs, proargmodes, proallargtypes, proargnames, prodefaults, prorettype, lanname
You can use the internal "undocumented" function pg_catalog.pg_get_function_arguments(p.oid).
postgres=# SELECT pg_catalog.pg_get_function_arguments('fufu'::regproc);
pg_get_function_arguments
---------------------------
a integer, b integer
(1 row)
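For instance, a minimal sketch of how it could slot into the query from the question (the nspname filter is taken from there):
SELECT p.proname,
       pg_catalog.pg_get_function_arguments(p.oid) AS arguments,
       format_type(p.prorettype, NULL) AS prorettype,
       l.lanname
FROM pg_catalog.pg_proc p
JOIN pg_catalog.pg_language l ON l.oid = p.prolang
JOIN pg_catalog.pg_namespace n ON n.oid = p.pronamespace
WHERE n.nspname = 'public';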
Now, there is no built-in "map" function, so unnest + array_agg is the only possibility. You can simplify life with your own custom function:
CREATE OR REPLACE FUNCTION format_types(oid[])
RETURNS text[] AS $$
SELECT ARRAY(SELECT format_type(unnest($1), null))
$$ LANGUAGE sql IMMUTABLE;
and result
postgres=# SELECT format_types('{21,22,23}');
format_types
-------------------------------
{smallint,int2vector,integer}
(1 row)
Then your query could be:
SELECT proname, format_types(proallargtypes)
FROM pg_proc
WHERE pronamespace = 2200 AND proallargtypes IS NOT NULL;
But the result will probably not be what you expect, because the proallargtypes field is only populated when OUT parameters are used; usually it is empty. You should look at the proargtypes field instead, but it is of type oidvector, so you have to transform it to oid[] first.
postgres=# SELECT proname, format_types(string_to_array(proargtypes::text,' ')::oid[])
FROM pg_proc
WHERE pronamespace = 2200
LIMIT 10;
proname | format_types
------------------------------+----------------------------------------------------
quantile_append_double | {internal,"double precision","double precision"}
quantile_append_double_array | {internal,"double precision","double precision[]"}
quantile_double | {internal}
quantile_double_array | {internal}
quantile | {"double precision","double precision"}
quantile | {"double precision","double precision[]"}
quantile_cont_double | {internal}
quantile_cont_double_array | {internal}
quantile_cont | {"double precision","double precision"}
quantile_cont | {"double precision","double precision[]"}
(10 rows)