Create postgres view from table with dynamic casting - arrays

I want to create a generic query that will allow me to create a view (from a table) and convert all Array columns into strings.
Something like:
CREATE OR REPLACE VIEW view_1 AS
SELECT *
for each column_name in columns
CASE WHEN pg_typeof(column_name) == TEXT[] THEN array_to_string(column_name)
ELSE column_name
FROM table_1;
I guess I could do that with a stored procedure, but I'm looking for a pure SQL solution, if it isn't too complex.

Here is a query that generates such a conversion statement. You can then customize it to create the view and execute it (for example with \gexec in psql, as shown below).
SELECT
'CREATE OR REPLACE VIEW my_table_view AS SELECT ' || string_agg(
CASE
WHEN pg_catalog.format_type(pg_attribute.atttypid, pg_attribute.atttypmod) LIKE '%[]' THEN 'array_to_string(' || pg_attribute.attname || ', '','') AS ' || pg_attribute.attname
ELSE pg_attribute.attname
END, ', ' ORDER BY attnum ASC)
|| ' FROM ' || min(pg_class.relname) || ';'
FROM
pg_catalog.pg_attribute
INNER JOIN
pg_catalog.pg_class ON pg_class.oid = pg_attribute.attrelid
INNER JOIN
pg_catalog.pg_namespace ON pg_namespace.oid = pg_class.relnamespace
WHERE
pg_attribute.attnum > 0
AND NOT pg_attribute.attisdropped
AND pg_namespace.nspname = 'my_schema'
AND pg_class.relname = 'my_table'
; \gexec
Example:
create table tarr (id integer, t_arr1 text[], regtext text, t_arr2 text[], int_arr integer[]);
==>
CREATE OR REPLACE VIEW my_table_view AS SELECT id, array_to_string(t_arr1, ',') AS t_arr1, regtext, array_to_string(t_arr2, ',') AS t_arr2, array_to_string(int_arr, ',') AS int_arr FROM tarr;
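If you prefer not to rely on psql's \gexec (for example, when running from application code), a minimal sketch of the same idea as an anonymous PL/pgSQL block could look like this; it assumes the same my_schema.my_table source and my_table_view target as above:
DO $$
DECLARE
    col_list text;
BEGIN
    -- Build the column list, wrapping every array column in array_to_string().
    SELECT string_agg(
               CASE
                   WHEN format_type(a.atttypid, a.atttypmod) LIKE '%[]'
                       THEN format('array_to_string(%I, '','') AS %I', a.attname, a.attname)
                   ELSE quote_ident(a.attname)
               END, ', ' ORDER BY a.attnum)
      INTO col_list
      FROM pg_catalog.pg_attribute a
      JOIN pg_catalog.pg_class c     ON c.oid = a.attrelid
      JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
     WHERE a.attnum > 0
       AND NOT a.attisdropped
       AND n.nspname = 'my_schema'
       AND c.relname = 'my_table';

    -- Build and run the CREATE VIEW statement from the generated column list.
    EXECUTE format('CREATE OR REPLACE VIEW my_table_view AS SELECT %s FROM my_schema.my_table', col_list);
END
$$;
The format(%I)/quote_ident calls also quote column names that need it, which the plain string concatenation above does not.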

Related

Trying to create a JavaScript-based UDTF in Snowflake by passing the external arguments (Arg1, Arg2) into the control SELECT in the function

Can you please help with the below requirement in Snowflake? We need to pass external arguments into the query, build the query based on the external argument values, and then get the output of the query in tabular format.
Below is the JavaScript-based UDTF we are trying to create but couldn't finish, as we are new to JavaScript.
CREATE OR REPLACE FUNCTION GetV_Test(I_VendorName varchar(100),I_Department DOUBLE)
RETURNS TABLE(VendorName VARCHAR(100), Vendor VARCHAR(100))
LANGUAGE JAVASCRIPT
AS
$$
if ((I_VendorName != null || I_VendorName !='') && (I_Department == null || I_Department == 0))
{
SELECT ' ' AS VendorName, ' ' AS Vendor
UNION
SELECT distinct(Cast(Vendor as varchar(1000)) || ' ' || VendorName) as VendorName, Vendor
FROM vdrs WHERE VendorName LIKE '%'||I_VendorName||'%'
}
else if((I_VendorName == null || I_VendorName == ' ') && (I_Department != null || I_Department!=0))
{
SELECT ' ' AS VendorName, ' ' AS Vendor
UNION
SELECT distinct (Cast(Vendor as varchar(1000)) || ' ' || VendorName) as VendorName,Vendor
FROM vdrs WHERE department =I_Department order by Vendor
}
else
{
SELECT ' ' AS VendorName, ' ' AS Vendor
UNION
Select distinct (CAST(vdrs2.Vendor as varchar(1000)) || ' ' || vdrs2.VendorName) as VendorName,vdrs2.Vendor
from (SELECT (Cast(Vendor as varchar(1000)) || ' ' || VendorName) as VendorName,Vendor
FROM vdrs WHERE department =I_Department)as temp1
Inner join vdrs2 on vdrs2.Vendor=temp1.Vendor
and vdrs2.VendorName LIKE '%'||I_VendorName||'%'
order by Vendor
}
$$
;
You cannot combine SQL and JavaScript like this:
if () {
SQL statement
} else {
SQL statement
}
It's not possible to read from other tables within a JavaScript UDTF. Please check the samples:
https://docs.snowflake.com/en/developer-guide/udf/javascript/udf-javascript-tabular-functions.html
As a workaround:
You may create a JavaScript stored procedure, run these SQL statements to fill a transient or temporary table, and then query that table with a separate SQL statement.
Or you may try to combine these SQLs into a single SQL UDTF (a sketch follows the link below):
https://docs.snowflake.com/en/developer-guide/udf/sql/udf-sql-tabular-functions.html
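For example, a rough sketch of such a SQL UDTF might look like the one below. It assumes the vdrs table with Vendor, VendorName and department columns from the question, folds the first two branches into a single WHERE clause (the third branch, which joins vdrs2, would need similar treatment or a stored procedure), and is named GetV_Test_SQL here only to distinguish it from the JavaScript attempt:
CREATE OR REPLACE FUNCTION GetV_Test_SQL(I_VendorName VARCHAR, I_Department DOUBLE)
RETURNS TABLE (VendorName VARCHAR, Vendor VARCHAR)
AS
$$
    SELECT DISTINCT
           CAST(Vendor AS VARCHAR) || ' ' || VendorName AS VendorName,
           CAST(Vendor AS VARCHAR)                      AS Vendor
    FROM vdrs
    -- Each argument is optional: a NULL (or empty/zero) value disables that filter.
    WHERE (I_VendorName IS NULL OR I_VendorName = '' OR VendorName LIKE '%' || I_VendorName || '%')
      AND (I_Department IS NULL OR I_Department = 0 OR department = I_Department)
$$;
-- A UDTF is called like any other table function:
-- SELECT * FROM TABLE(GetV_Test_SQL('someVendor', NULL));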

Build Dynamic SQL with Joins on multiple keys

I want to build a dynamic query where all the data points are stored in a table.
e.g.
DECLARE @IncrementalLoad varchar(2000) = 'Id,CategoryID,SubcatID'
I need the below output:
on b.id = b2.id and b.CategoryID = b2.CategoryID and b.SubcatID = b2.SubcatID
where b.id is null and b.CategoryID is null and b.SubcatID is null
I was able to accomplish the task. Thanks!
SELECT *
, 'b.'+Value Sources
, 'b2.'+Value Destination
, 'b.'+Value +'='+ 'b2.'+Value JoinCondition
, 'b2.'+Value + ' IS NULL ' JoinFilters
into #temp
FROM dbo.Split(@IncrementalLoad, ',')
SELECT STRING_AGG(JoinCondition, ' and ') AS JoinCondition
, STRING_AGG(JoinFilters, ' and ') AS JoinFilters
from #temp
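On SQL Server 2016 or later you could also skip the user-defined dbo.Split function and use the built-in STRING_SPLIT (STRING_AGG needs 2017 or later). A minimal sketch of the same idea, assuming the same comma-separated @IncrementalLoad value:
DECLARE @IncrementalLoad varchar(2000) = 'Id,CategoryID,SubcatID';

SELECT
    -- b.col = b2.col pairs joined with "and"
    STRING_AGG('b.' + value + ' = b2.' + value, ' and ') AS JoinCondition,
    -- b2.col IS NULL filters joined with "and"
    STRING_AGG('b2.' + value + ' IS NULL', ' and ')      AS JoinFilters
FROM STRING_SPLIT(@IncrementalLoad, ',');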

Oracle: update multiple columns with dynamic query

I am trying to update all the columns of type NVARCHAR2 to some random string in my database. I iterated through all the columns in the database of type nvarchar2 and executed an update statement for each column.
for i in (
select
table_name,
column_name
from
user_tab_columns
where
data_type = 'NVARCHAR2'
) loop
execute immediate
'update ' || i.table_name || ' set ' || i.column_name ||
' = DBMS_RANDOM.STRING(''X'', length('|| i.column_name ||'))
where ' || i.column_name || ' is not null';
end loop;
Instead of running an update statement for every column of type NVARCHAR2, I want to update all the NVARCHAR2 columns of a particular table with a single update statement, for efficiency (that is, one update statement per table). For this, I tried to bulk collect all the NVARCHAR2 columns of a table into temporary storage, but I am stuck at writing the dynamic update statement. Could you please help me with this? Thanks in advance!
You can try this one. However, depending on your table, it may not be the fastest solution.
for aTable in (
select table_name,
listagg(column_name||' = nvl2('||column_name||', DBMS_RANDOM.STRING(''XX'', length('||column_name||')), NULL)', ', ') WITHIN GROUP (ORDER BY column_name) as upd,
listagg(column_name, ', ') WITHIN GROUP (ORDER BY column_name) as con
from user_tab_columns
where DATA_TYPE = 'NVARCHAR2'
group by table_name
) loop
execute immediate
'UPDATE '||aTable.table_name ||
' SET '||aTable.upd ||
' WHERE COALESCE('||aTable.con||') IS NOT NULL';
end loop;
Resulting UPDATE (verify with DBMS_OUTPUT.PUT_LINE(..)) should look like this:
UPDATE MY_TABLE SET
COL_A = nvl2(COL_A, DBMS_RANDOM.STRING('XX', length(COL_A)), NULL),
COL_B = nvl2(COL_B, DBMS_RANDOM.STRING('XX', length(COL_B)), NULL)
WHERE COALESCE(COL_A, COL_B) IS NOT NULL;
Please try this:
DECLARE
CURSOR CUR IS
SELECT
TABLE_NAME,
LISTAGG(COLUMN_NAME||' = DBMS_RANDOM.STRING(''X'', length(NVL('||
COLUMN_NAME ||',''A'')))',', ')
WITHIN GROUP (ORDER BY COLUMN_ID) COLUMN_NAME
FROM DBA_TAB_COLUMNS
WHERE DATA_TYPE = 'NVARCHAR2'
GROUP BY TABLE_NAME;
TYPE TAB IS TABLE OF CUR%ROWTYPE INDEX BY PLS_INTEGER;
T TAB;
S VARCHAR2(4000);
BEGIN
OPEN CUR;
LOOP
FETCH CUR BULK COLLECT INTO T LIMIT 1000;
EXIT WHEN T.COUNT = 0;
FOR i IN 1..T.COUNT LOOP
S := 'UPDATE ' || T(i).TABLE_NAME || ' SET ' || T(i).COLUMN_NAME;
EXECUTE IMMEDIATE S;
END LOOP;
END LOOP;
COMMIT;
END;
/
I think that would do it. But as I said in the comments, you need to validate the syntax since I don't have an Oracle instance to test it.
for i in (
select table_name,
'update ' || table_name || ' set ' ||
listagg( column_name || ' = NVL2( ' || column_name || ', '
|| 'DBMS_RANDOM.STRING(''X'', length('|| column_name ||') ), NULL )'
, ', '
) WITHIN GROUP (ORDER BY column_name) as updCommand
from user_tab_columns
where DATA_TYPE = 'NVARCHAR2'
group by table_name
) loop
execute immediate i.updCommand;
end loop;
If you find any error, let me know in the comments so I can fix it.

Do a row count in all tables in a DB where columnA=1 and columnB=DEV

I need to do a row count on a Netezza system but only where 2 columns have certain values
I have
SELECT TABLENAME, RELTUPLES
FROM _V_TABLE
WHERE objtype = 'TABLE' and tablename like 'MY%STUFF'
This will show me all table names and their row counts,
but I need to add a WHERE clause to it: columnA = 1 and columnB = 'ABC'.
Every table has these 2 columns in it.
Thanks,
Craig
If you want to do this via SQL, you can accomplish it with a stored procedure. Here's one that will do what I think you're asking for.
My assumption is that you only want to count the rows in tables that actually match the specified column values, so RELTUPLES in _v_table won't be of use for that. The sample stored procedure code also assumes that each of the columns is varchar(1000); you'll have to alter the code if you want to match other data types.
Prior to creating the stored procedure, create a reference table like so:
create table reftable_sp_row_count (tablename varchar(1000), rowcount bigint) distribute on random;
Then create the stored procedure like so:
CREATE OR REPLACE PROCEDURE SP_ROW_COUNT(VARCHAR(254), VARCHAR(254), VARCHAR(254), VARCHAR(254), VARCHAR(254))
RETURNS REFTABLE(REFTABLE_SP_ROW_COUNT)
EXECUTE AS OWNER
LANGUAGE NZPLSQL AS
BEGIN_PROC
DECLARE
pTablePattern ALIAS FOR $1;
pColOneName ALIAS FOR $2;
pColOneValue ALIAS FOR $3;
pColTwoName ALIAS FOR $4;
pcolTwoValue ALIAS FOR $5;
vRecord RECORD;
BEGIN
for vRecord in
EXECUTE 'SELECT schema
|| ''.''
|| tablename tablename
FROM _v_table v
WHERE tablename LIKE ''' || upper(pTablePattern) || '''
AND EXISTS
(
SELECT attname
FROM _v_relation_column c
WHERE c.objid = v.objid
AND attname = ''' || upper(pColOneName) || '''
)
AND EXISTS
(
SELECT attname
FROM _v_relation_column c
WHERE c.objid = v.objid
AND attname = ''' || upper(pColTwoName) || '''
)'
LOOP
EXECUTE IMMEDIATE 'INSERT INTO ' || REFTABLENAME ||
' SELECT ''' || vRecord.tablename || ''' , COUNT(1) from ' || vRecord.tablename ||
' where ' || upper(pColOneName) || ' = ''' || pColOneValue || ''' and ' || upper(pColTwoName) || ' = ''' || pColTwoValue || ''' ;';
-- Note that if you change the data types for a given column to a different type then you'll want to change ''' || pColOneValue || ''' to ' || pColOneValue || ' as appropriate
END LOOP;
RETURN REFTABLE;
END;
END_PROC;
Here is some sample output.
TESTDB.ADMIN(ADMIN)=> select * from table_1 order by colA, colB;
COLA | COLB
------+------
ABC | BLUE
ABC | BLUE
ABC | BLUE
ABC | RED
ABC | RED
(5 rows)
TESTDB.ADMIN(ADMIN)=> select * from table_2 order by colA, colB;
COLA | COLB
------+--------
ABC | BLUE
ABC | BLUE
XYZ | BLUE
XYZ | BLUE
XYZ | YELLOW
(5 rows)
TESTDB.ADMIN(ADMIN)=> call sp_row_count('TABLE_%', 'COLA', 'ABC', 'COLB','BLUE');
TABLENAME | ROWCOUNT
---------------+----------
ADMIN.TABLE_1 | 3
ADMIN.TABLE_2 | 2
(2 rows)

PostgreSQL "DESCRIBE TABLE"

How do you perform the equivalent of Oracle's DESCRIBE TABLE in PostgreSQL with the psql command?
Try this (in the psql command-line tool):
\d+ tablename
See the manual for more info.
In addition to the PostgreSQL way (\d 'something', \dt 'table', \ds 'sequence', and so on), there is the SQL-standard way, as shown here:
select column_name, data_type, character_maximum_length, column_default, is_nullable
from INFORMATION_SCHEMA.COLUMNS where table_name = '<name of table>';
It's supported by many db engines.
If you want to obtain it from a query instead of psql, you can query the system catalogs. Here's a complex query that does that:
SELECT
f.attnum AS number,
f.attname AS name,
f.attnum,
f.attnotnull AS notnull,
pg_catalog.format_type(f.atttypid,f.atttypmod) AS type,
CASE
WHEN p.contype = 'p' THEN 't'
ELSE 'f'
END AS primarykey,
CASE
WHEN p.contype = 'u' THEN 't'
ELSE 'f'
END AS uniquekey,
CASE
WHEN p.contype = 'f' THEN g.relname
END AS foreignkey,
CASE
WHEN p.contype = 'f' THEN p.confkey
END AS foreignkey_fieldnum,
CASE
WHEN p.contype = 'f' THEN g.relname
END AS foreignkey,
CASE
WHEN p.contype = 'f' THEN p.conkey
END AS foreignkey_connnum,
CASE
WHEN f.atthasdef = 't' THEN d.adsrc
END AS default
FROM pg_attribute f
JOIN pg_class c ON c.oid = f.attrelid
JOIN pg_type t ON t.oid = f.atttypid
LEFT JOIN pg_attrdef d ON d.adrelid = c.oid AND d.adnum = f.attnum
LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
LEFT JOIN pg_constraint p ON p.conrelid = c.oid AND f.attnum = ANY (p.conkey)
LEFT JOIN pg_class AS g ON p.confrelid = g.oid
WHERE c.relkind = 'r'::char
AND n.nspname = '%s' -- Replace with Schema name
AND c.relname = '%s' -- Replace with table name
AND f.attnum > 0 ORDER BY number
;
It's pretty complex, but it does show you the power and flexibility of the PostgreSQL system catalogs and should get you on your way to pg_catalog mastery ;-). Be sure to change out the %s's in the query: the first is the schema name and the second is the table name.
You can do that with a psql slash command:
\d myTable describe table
It also works for other objects:
\d myView describe view
\d myIndex describe index
\d mySequence describe sequence
Source: faqs.org
The psql equivalent of DESCRIBE TABLE is \d table.
See the psql portion of the PostgreSQL manual for more details.
This should be the solution:
SELECT * FROM information_schema.columns
WHERE table_schema = 'your_schema'
AND table_name = 'your_table'
You may do a \d *search_pattern* with asterisks to find tables that match the search pattern you're interested in.
In addition to the command line \d+ <table_name> you already found, you could also use the information schema to look up the column data, using information_schema.columns:
SELECT *
FROM information_schema.columns
WHERE table_schema = 'your_schema'
AND table_name = 'your_table'
Use the following SQL statement
SELECT DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'tbl_name'
AND COLUMN_NAME = 'col_name'
If you replace tbl_name and col_name, it displays the data type of the particular column you are looking for.
You can use this:
SELECT attname
FROM pg_attribute,pg_class
WHERE attrelid=pg_class.oid
AND relname='TableName'
AND attstattarget <>0;
In MySQL: DESCRIBE table_name
In PostgreSQL: \d table_name
Or, you can use this longer command:
SELECT
a.attname AS Field,
t.typname || '(' || a.atttypmod || ')' AS Type,
CASE WHEN a.attnotnull = 't' THEN 'YES' ELSE 'NO' END AS Null,
CASE WHEN r.contype = 'p' THEN 'PRI' ELSE '' END AS Key,
(SELECT substring(pg_catalog.pg_get_expr(d.adbin, d.adrelid), '\'(.*)\'')
FROM
pg_catalog.pg_attrdef d
WHERE
d.adrelid = a.attrelid
AND d.adnum = a.attnum
AND a.atthasdef) AS Default,
'' as Extras
FROM
pg_class c
JOIN pg_attribute a ON a.attrelid = c.oid
JOIN pg_type t ON a.atttypid = t.oid
LEFT JOIN pg_catalog.pg_constraint r ON c.oid = r.conrelid
AND r.conname = a.attname
WHERE
c.relname = 'tablename'
AND a.attnum > 0
ORDER BY a.attnum
This variation of the query (as explained in other answers) worked for me.
SELECT
COLUMN_NAME
FROM
information_schema.COLUMNS
WHERE
TABLE_NAME = 'city';
It's described in detail here:
http://www.postgresqltutorial.com/postgresql-describe-table/
To improve on the other answer's SQL query (which is great!), here is a revised query. It also includes constraint names, inheritance information, and the data type broken into its constituent parts (type, length, precision, scale). It also filters out columns that have been dropped (which still exist in the system catalogs).
SELECT
n.nspname as schema,
c.relname as table,
f.attname as column,
f.attnum as column_id,
f.attnotnull as not_null,
f.attislocal not_inherited,
f.attinhcount inheritance_count,
pg_catalog.format_type(f.atttypid,f.atttypmod) AS data_type_full,
t.typname AS data_type_name,
CASE
WHEN f.atttypmod >= 0 AND t.typname <> 'numeric' THEN (f.atttypmod - 4) -- first 4 bytes are for storing the actual length of data
END AS data_type_length,
CASE
WHEN t.typname = 'numeric' THEN (((f.atttypmod - 4) >> 16) & 65535)
END AS numeric_precision,
CASE
WHEN t.typname = 'numeric' THEN ((f.atttypmod - 4) & 65535)
END AS numeric_scale,
CASE
WHEN p.contype = 'p' THEN 't'
ELSE 'f'
END AS is_primary_key,
CASE
WHEN p.contype = 'p' THEN p.conname
END AS primary_key_name,
CASE
WHEN p.contype = 'u' THEN 't'
ELSE 'f'
END AS is_unique_key,
CASE
WHEN p.contype = 'u' THEN p.conname
END AS unique_key_name,
CASE
WHEN p.contype = 'f' THEN 't'
ELSE 'f'
END AS is_foreign_key,
CASE
WHEN p.contype = 'f' THEN p.conname
END AS foreignkey_name,
CASE
WHEN p.contype = 'f' THEN p.confkey
END AS foreign_key_columnid,
CASE
WHEN p.contype = 'f' THEN g.relname
END AS foreign_key_table,
CASE
WHEN p.contype = 'f' THEN p.conkey
END AS foreign_key_local_column_id,
CASE
WHEN f.atthasdef = 't' THEN d.adsrc
END AS default_value
FROM pg_attribute f
JOIN pg_class c ON c.oid = f.attrelid
JOIN pg_type t ON t.oid = f.atttypid
LEFT JOIN pg_attrdef d ON d.adrelid = c.oid AND d.adnum = f.attnum
LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
LEFT JOIN pg_constraint p ON p.conrelid = c.oid AND f.attnum = ANY (p.conkey)
LEFT JOIN pg_class AS g ON p.confrelid = g.oid
WHERE c.relkind = 'r'::char
AND f.attisdropped = false
AND n.nspname = '%s' -- Replace with Schema name
AND c.relname = '%s' -- Replace with table name
AND f.attnum > 0
ORDER BY f.attnum
;
You can also check using the query below; it returns no rows but still shows the column headers:
Select * from schema_name.table_name limit 0;
Example: my table has 2 columns, name and pwd.
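In psql, the zero-row result still prints the column headers, roughly like this:
 name | pwd
------+-----
(0 rows)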
In Postgres, \d is used to describe the table structure,
e.g. \d schema_name.table_name
This command will provide you the basic info of the table, such as columns, types and modifiers.
If you want more info about the table, use
\d+ schema_name.table_name
This will give you extra info, such as storage, stats target and description.
The best way to describe a table (its columns, types, column modifiers, etc.):
\d+ tablename or \d tablename
When your table is not part of the default schema, you should write:
\d+ schema_name.table_name
Otherwise, you would get an error saying that the relation does not exist.
When your table name starts with a capital letter, you should put the table name in quotation marks.
Example: \d "Users"
Use this command:
\d tablename
For example:
\d queuerecords
Table "public.queuerecords"
Column | Type | Modifiers
-----------+-----------------------------+-----------
id | uuid | not null
endtime | timestamp without time zone |
payload | text |
queueid | text |
starttime | timestamp without time zone |
status | text |
1) PostgreSQL DESCRIBE TABLE using psql
In the psql command-line tool, use \d table_name or \d+ table_name to find the information on the columns of a table.
2) PostgreSQL DESCRIBE TABLE using information_schema
Use a SELECT statement to query the column name, data type, and character maximum length of the table's columns from the information_schema:
SELECT
COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
from INFORMATION_SCHEMA.COLUMNS where table_name = 'tablename';
For more information, see https://www.postgresqltutorial.com/postgresql-describe-table/
I'll add the pg_dump command even though the psql command was requested, because it generates output more familiar to former MySQL users.
# sudo -u postgres pg_dump --table=my_table_name --schema-only mydb
\dt is the command which lists all the tables present in a database. Using the
\d command and \d+ we can get the details of a table. The syntax is
\d table_name (or) \d+ table_name
The command below can describe multiple tables simply
\dt <table> <table>
The command below can describe multiple tables in detail:
\d <table> <table>
The command below can describe multiple tables in more detail:
\d+ <table> <table>
I worked out the following script to get the table schema:
SELECT 'CREATE TABLE ' || 'yourschema.yourtable' || E'\n(\n' ||
array_to_string(
array_agg(
' ' || column_expr
)
, E',\n'
) || E'\n);\n'
from
(
SELECT ' ' || column_name || ' ' || data_type ||
coalesce('(' || character_maximum_length || ')', '') ||
case when is_nullable = 'YES' then ' NULL' else ' NOT NULL' end as column_expr
FROM information_schema.columns
WHERE table_schema || '.' || table_name = 'yourschema.yourtable'
ORDER BY ordinal_position
) column_list;
