Updating sequences for a single schema not working (Postgres 12)

PostgreSQL 12
I am trying to fix a schema's sequences, but for some reason the generated statements reference columns from other tables in some cases. I assume I am leaving something out of the WHERE clause that would limit the results to the sequence-owning columns I expect:
SELECT 'SELECT SETVAL(' ||
quote_literal(quote_ident(PGT.schemaname) || '.' || quote_ident(S.relname)) ||
', COALESCE(MAX(' ||quote_ident(C.attname)|| '), 1) ) FROM ' ||
quote_ident(PGT.schemaname)|| '.'||quote_ident(T.relname)|| ';'
FROM pg_class AS S,
pg_depend AS D,
pg_class AS T,
pg_attribute AS C,
pg_tables AS PGT,
pg_namespace AS N
WHERE S.relkind = 'S'
AND N.nspname = 'my_schema'
AND S.oid = D.objid
AND S.relnamespace = N.oid
AND D.refobjid = T.oid
AND D.refobjid = C.attrelid
AND D.refobjsubid = C.attnum
AND T.relnamespace = N.oid
AND T.relname = PGT.tablename
AND PGT.schemaname = 'my_schema'
ORDER BY S.relname;
Must be something obvious, just not seeing it.
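One thing worth checking, as a hedged suggestion rather than part of the original question: pg_depend also records other kinds of dependencies (for example on the schema itself), so restricting the join to auto/identity dependencies between pg_class entries is a common way to keep unrelated rows out. A sketch of the same generator with those filters added:
SELECT 'SELECT SETVAL(' ||
       quote_literal(quote_ident(N.nspname) || '.' || quote_ident(S.relname)) ||
       ', COALESCE(MAX(' || quote_ident(C.attname) || '), 1)) FROM ' ||
       quote_ident(N.nspname) || '.' || quote_ident(T.relname) || ';'
FROM pg_class S
JOIN pg_namespace N ON N.oid = S.relnamespace
JOIN pg_depend   D ON D.objid = S.oid
                  AND D.classid = 'pg_class'::regclass      -- the dependent object is the sequence
                  AND D.refclassid = 'pg_class'::regclass   -- the referenced object is a table
                  AND D.deptype IN ('a', 'i')               -- OWNED BY / identity links only
JOIN pg_class    T ON T.oid = D.refobjid
                  AND T.relnamespace = N.oid
JOIN pg_attribute C ON C.attrelid = D.refobjid
                   AND C.attnum = D.refobjsubid
WHERE S.relkind = 'S'
  AND N.nspname = 'my_schema'
ORDER BY S.relname;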

Related

Flatten JSON dynamically in Snowflake

I am using this
select 'select '
|| (select listagg(distinct 'json_col:'||key::text||'::string'||' as '||key::text, ', ') from GAURAV.PUBLIC.jsondata, lateral flatten(input=>json_col, mode=>'OBJECT'))
|| ' from GAURAV.PUBLIC.jsondata'
but it can't pick the keys dynamically, and the output it shows is not what I expect.
I have multiple JSON documents in a table.
How do I flatten them?
If I have a table of json like:
('{"key_a":"a", "key_b":"b"}'),
('{"key_c":"a", "key_d":"b"}')
this SQL
select
'select ' || listagg(distinct 'json_col:'|| f.key::text || '::string as ' || f.key::text, ', ') || ' from GAURAV.PUBLIC.jsondata' as output
from jsondata as jd,
table(flatten(input=>jd.json_col, mode=>'OBJECT')) f
group by f.seq
gives:
OUTPUT
select json_col:key_d::string as key_d, json_col:key_c::string as key_c from GAURAV.PUBLIC.jsondata
select json_col:key_a::string as key_a, json_col:key_b::string as key_b from GAURAV.PUBLIC.jsondata
The major points are to select from the table itself as the first-order selection, and to group the LISTAGG by the SEQ that FLATTEN produces, which is a distinct value per input row: if you have 100 input rows you get 100 different SEQ values, so the keys from each row are kept separated.
It can be written with that output line split apart also:
select
'select ' ||
listagg(distinct 'json_col:'|| f.key::text || '::string as ' || f.key::text, ', ') ||
' from GAURAV.PUBLIC.jsondata' as output
from jsondata as jd,
table(flatten(input=>jd.json_col, mode=>'OBJECT')) f
group by f.seq
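The examples above assume a table with a single VARIANT column named json_col; a hedged setup sketch (this DDL is not shown in the original post) that reproduces the sample rows:
create or replace table GAURAV.PUBLIC.jsondata (json_col variant);
insert into GAURAV.PUBLIC.jsondata
    select parse_json(column1)
    from values ('{"key_a":"a", "key_b":"b"}'),
                ('{"key_c":"a", "key_d":"b"}');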

Create postgres view from table with dynamic casting

I want to create a generic query that will allow me to create a view (from a table) and convert all Array columns into strings.
Something like:
CREATE OR REPLACE VIEW view_1 AS
SELECT *
for each column_name in columns
CASE WHEN pg_typeof(column_name) == TEXT[] THEN array_to_string(column_name)
ELSE column_name
FROM table_1;
I suppose I could do that with a stored procedure, but I'm looking for a pure-SQL solution, as long as it isn't too complex.
Here is a query to do such conversion. You can then customize it to create the view and execute it.
SELECT
'CREATE OR REPLACE VIEW my_table_view AS SELECT ' || string_agg(
CASE
WHEN pg_catalog.format_type(pg_attribute.atttypid, pg_attribute.atttypmod) LIKE '%[]' THEN 'array_to_string(' || pg_attribute.attname || ', '','') AS ' || pg_attribute.attname
ELSE pg_attribute.attname
END, ', ' ORDER BY attnum ASC)
|| ' FROM ' || min(pg_class.relname) || ';'
FROM
pg_catalog.pg_attribute
INNER JOIN
pg_catalog.pg_class ON pg_class.oid = pg_attribute.attrelid
INNER JOIN
pg_catalog.pg_namespace ON pg_namespace.oid = pg_class.relnamespace
WHERE
pg_attribute.attnum > 0
AND NOT pg_attribute.attisdropped
AND pg_namespace.nspname = 'my_schema'
AND pg_class.relname = 'my_table'
; \gexec
Example:
create table tarr (id integer, t_arr1 text[], regtext text, t_arr2 text[], int_arr integer[]);
==>
CREATE OR REPLACE VIEW my_table_view AS SELECT id, array_to_string(t_arr1, ',') AS t_arr1, regtext, array_to_string(t_arr2, ',') AS t_arr2, array_to_string(int_arr, ',') AS int_arr FROM tarr;
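A hedged follow-up check, not part of the original answer, assuming the view keeps the default name my_table_view used in the generator query:
INSERT INTO tarr VALUES (1, ARRAY['a','b'], 'plain', ARRAY['c'], ARRAY[1,2]);
SELECT * FROM my_table_view;
-- id | t_arr1 | regtext | t_arr2 | int_arr
-- ----+--------+---------+--------+---------
--   1 | a,b    | plain   | c      | 1,2
The array columns now come back as plain comma-separated text.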

Error 512: Subquery returned more than 1 value

my SQL is:
SELECT DB1.IdUtente
,DB2.Gruppo
,DB1.Username
,DB1.Psw
,CASE WHEN DB1.RagioneSociale IS NOT NULL
AND DB1.RagioneSociale <> ''
THEN DB1.RagioneSociale
ELSE DB1.Cognome + ' ' + DB1.Nome
END AS Nominativo
,DB1.Indirizzo + ' - ' + DB1.Cap+ ' ' + DB1.Citta + '(' + DB1.Provincia + ')' AS IndirizzoCompleto
,DB1.Telefono + ' ' + DB1.Email AS Contatti
,(SELECT DISTINCT COUNT (*)
FROM DB3
WHERE DB3.IdAttivazione = DB1.IdUtente
) AS NumeroAccessi
,(SELECT DB4.NumTarga
FROM DB4
WHERE DB4.IdUtente = DB1.IdUtente
) AS NumeroTarghe
,DB1.DataRegistrazione
,DB1.DataScadenza
,DB1.Attivo
FROM DB1
INNER JOIN DB2
ON DB1.IdGruppo = DB2.IdGruppo
WHERE DB1.Demo = 0
ORDER BY DB1.RagioneSociale
Why do I receive this error from SQL Server?
Error 512: Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
What am I doing wrong?
Sorry for my bad English.
Thanks for any help.
Kind regards,
M.W.
Your sub-selects should be rewritten as joins, even if your code was working without issue.
If you do this and test across your whole dataset, you will see where the duplication is coming from: one of those subqueries returns more than one row for some user, which is what causes the error you are seeing:
SELECT DB1.IdUtente
,DB2.Gruppo
,DB1.Username
,DB1.Psw
,CASE WHEN DB1.RagioneSociale IS NOT NULL
AND DB1.RagioneSociale <> ''
THEN DB1.RagioneSociale
ELSE DB1.Cognome + ' ' + DB1.Nome
END AS Nominativo
,DB1.Indirizzo + ' - ' + DB1.Cap+ ' ' + DB1.Citta + '(' + DB1.Provincia + ')' AS IndirizzoCompleto
,DB1.Telefono + ' ' + DB1.Email AS Contatti
,DB3.NumeroAccessi
-- Somewhere in your data you will have at least two rows with different values in this field.
,DB4.NumTarga AS NumeroTarghe
,DB1.DataRegistrazione
,DB1.DataScadenza
,DB1.Attivo
FROM DB1
INNER JOIN DB2
ON DB1.IdGruppo = DB2.IdGruppo
INNER JOIN (SELECT IdAttivazione
,COUNT(*) as NumeroAccessi
FROM DB3
GROUP BY IdAttivazione
) DB3
ON DB3.IdAttivazione = DB1.IdUtente
INNER JOIN DB4
ON DB4.IdUtente = DB1.IdUtente
WHERE DB1.Demo = 0
ORDER BY DB1.RagioneSociale
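To see why the scalar subquery fails while the join merely multiplies rows, here is a minimal, self-contained repro with hypothetical temp tables (not from the original post):
CREATE TABLE #Users  (IdUtente int);
CREATE TABLE #Plates (IdUtente int, NumTarga varchar(10));
INSERT INTO #Users  VALUES (1);
INSERT INTO #Plates VALUES (1, 'AA111AA'), (1, 'BB222BB');

-- Fails with "Subquery returned more than 1 value", because user 1 has two plates:
-- SELECT u.IdUtente,
--        (SELECT p.NumTarga FROM #Plates p WHERE p.IdUtente = u.IdUtente) AS NumeroTarghe
-- FROM #Users u;

-- Works: one output row per plate, which exposes where the duplication comes from.
SELECT u.IdUtente, p.NumTarga AS NumeroTarghe
FROM #Users u
JOIN #Plates p ON p.IdUtente = u.IdUtente;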

Sql Select value in From Clause

I am having a tricky time doing this so hopefully someone can help. This is what I want my end result to be:
SELECT 'PRODUCT' AS ItemType,
'x' + CAST(MB_StaticOrderProducts.Quantity AS varchar(50)),
MB_StaticOrderProducts.ProductName + ' (' + CAST((MB_StaticOrderProducts.ProductSize) AS varchar(50)) + ' ' + MB_StaticProductMeasure.Value + ')' AS Name,
MB_StaticOrderProducts.ProductSizeID,
GTIN as BarCode
FROM
MB_StaticOrderProducts
INNER JOIN MB_StaticOrderVersions ON MB_StaticOrderProducts.StaticOrderVersionId = MB_StaticOrderVersions.StaticOrderVersionId
INNER JOIN MB_StaticProductMeasure ON MB_StaticOrderProducts.StaticProductMeasureId = MB_StaticProductMeasure.StaticProductMeasureId
Inner Join ProductVariantAttributeCombination pvac on (pvac.Id = (select id from (select Id, cast(AttributesXml as xml) data from ProductVariantAttributeCombination) d cross apply data.nodes('//ProductVariantAttributeValue[Value[1] = 32]') data(d)))
WHERE
MB_StaticOrderProducts.StaticOrderVersionId = '8D803EAE-2CFC-455C-9CE7-0849618E1548'
I would like the column MB_StaticOrderProducts.ProductSizeID to be used in the ProductVariantAttributeCombination join of the FROM clause, where the literal 32 currently is. Is there a way to use that column's value in that spot?
For anyone who ever needs it: I changed the data.nodes line to be
data.nodes('//ProductVariantAttributeValue[Value[1] = sql:column("MB_StaticOrderProducts.ProductSizeID")]')
and all worked. Thanks.
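As a minimal sketch of why sql:column() works here (hypothetical table and data, not from the original post), the function lets the XQuery predicate use a value from the current SQL row instead of a hard-coded literal:
CREATE TABLE #sizes (ProductSizeID int, AttributesXml xml);
INSERT INTO #sizes VALUES
    (32, '<ProductVariantAttributeValue><Value>32</Value></ProductVariantAttributeValue>'),
    (40, '<ProductVariantAttributeValue><Value>32</Value></ProductVariantAttributeValue>');

SELECT s.ProductSizeID
FROM #sizes AS s
CROSS APPLY s.AttributesXml.nodes(
    '//ProductVariantAttributeValue[Value[1] = sql:column("s.ProductSizeID")]') AS x(n);
-- Only the row whose ProductSizeID matches the XML value (32) survives the predicate.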
Use the concat function, like this:
data.nodes( concat('//ProductVariantAttributeValue[Value[1] = ',
                   MB_StaticOrderProducts.ProductSizeId,
                   ']') )

PostgreSQL "DESCRIBE TABLE"

How do you perform the equivalent of Oracle's DESCRIBE TABLE in PostgreSQL with psql command?
Try this (in the psql command-line tool):
\d+ tablename
See the manual for more info.
In addition to the PostgreSQL way (\d 'something', \dt 'table', \ds 'sequence', and so on), there is the SQL-standard way, shown here:
select column_name, data_type, character_maximum_length, column_default, is_nullable
from INFORMATION_SCHEMA.COLUMNS where table_name = '<name of table>';
It's supported by many db engines.
If you want to obtain it from a query instead of psql, you can query the system catalogs. Here's a complex query that does that:
SELECT
f.attnum AS number,
f.attname AS name,
f.attnum,
f.attnotnull AS notnull,
pg_catalog.format_type(f.atttypid,f.atttypmod) AS type,
CASE
WHEN p.contype = 'p' THEN 't'
ELSE 'f'
END AS primarykey,
CASE
WHEN p.contype = 'u' THEN 't'
ELSE 'f'
END AS uniquekey,
CASE
WHEN p.contype = 'f' THEN g.relname
END AS foreignkey,
CASE
WHEN p.contype = 'f' THEN p.confkey
END AS foreignkey_fieldnum,
CASE
WHEN p.contype = 'f' THEN g.relname
END AS foreignkey,
CASE
WHEN p.contype = 'f' THEN p.conkey
END AS foreignkey_connnum,
CASE
WHEN f.atthasdef = 't' THEN pg_get_expr(d.adbin, d.adrelid)  -- pg_attrdef.adsrc was removed in PostgreSQL 12
END AS default
FROM pg_attribute f
JOIN pg_class c ON c.oid = f.attrelid
JOIN pg_type t ON t.oid = f.atttypid
LEFT JOIN pg_attrdef d ON d.adrelid = c.oid AND d.adnum = f.attnum
LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
LEFT JOIN pg_constraint p ON p.conrelid = c.oid AND f.attnum = ANY (p.conkey)
LEFT JOIN pg_class AS g ON p.confrelid = g.oid
WHERE c.relkind = 'r'::char
AND n.nspname = '%s' -- Replace with Schema name
AND c.relname = '%s' -- Replace with table name
AND f.attnum > 0 ORDER BY number
;
It's pretty complex but it does show you the power and flexibility of the PostgreSQL system catalog and should get you on your way to pg_catalog mastery ;-). Be sure to change out the %s's in the query. The first is Schema and the second is the table name.
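As a hedged psql convenience (not part of the original answer), you can fill the placeholders with psql variables instead of editing the query by hand:
\set schemaname 'my_schema'
\set tablename  'my_table'
-- then replace the '%s' literals above with :'schemaname' and :'tablename',
-- which psql expands to properly quoted string literals.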
You can do that with a psql slash command:
\d myTable      (describe table)
It also works for other objects:
\d myView       (describe view)
\d myIndex      (describe index)
\d mySequence   (describe sequence)
Source: faqs.org
The psql equivalent of DESCRIBE TABLE is \d table.
See the psql portion of the PostgreSQL manual for more details.
This should be the solution:
SELECT * FROM information_schema.columns
WHERE table_schema = 'your_schema'
AND table_name = 'your_table'
You can do \d *searchpattern* with asterisks to find tables that match the search pattern you're interested in.
In addition to the command line \d+ <table_name> you already found, you can also use the information schema to look up the column data, via information_schema.columns:
SELECT *
FROM information_schema.columns
WHERE table_schema = 'your_schema'
AND table_name = 'your_table'
Use the following SQL statement
SELECT DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'tbl_name'
AND COLUMN_NAME = 'col_name'
If you replace tbl_name and col_name, it displays the data type of the particular column you are looking for.
You can use this :
SELECT attname
FROM pg_attribute,pg_class
WHERE attrelid=pg_class.oid
AND relname='TableName'
AND attstattarget <>0;
In MySQL, DESCRIBE table_name
In PostgreSQL, \d table_name
Or, you can use this long command:
SELECT
a.attname AS Field,
t.typname || '(' || a.atttypmod || ')' AS Type,
CASE WHEN a.attnotnull = 't' THEN 'YES' ELSE 'NO' END AS Null,
CASE WHEN r.contype = 'p' THEN 'PRI' ELSE '' END AS Key,
(SELECT substring(pg_catalog.pg_get_expr(d.adbin, d.adrelid), '''(.*)''')
FROM
pg_catalog.pg_attrdef d
WHERE
d.adrelid = a.attrelid
AND d.adnum = a.attnum
AND a.atthasdef) AS Default,
'' as Extras
FROM
pg_class c
JOIN pg_attribute a ON a.attrelid = c.oid
JOIN pg_type t ON a.atttypid = t.oid
LEFT JOIN pg_catalog.pg_constraint r ON c.oid = r.conrelid
AND r.conname = a.attname
WHERE
c.relname = 'tablename'
AND a.attnum > 0
ORDER BY a.attnum
This variation of the query (as explained in other answers) worked for me.
SELECT
COLUMN_NAME
FROM
information_schema.COLUMNS
WHERE
TABLE_NAME = 'city';
It's described in detail here:
http://www.postgresqltutorial.com/postgresql-describe-table/
To improve on the other answer's SQL query (which is great!), here is a revised query. It also includes constraint names, inheritance information, and data types broken into their constituent parts (type, length, precision, scale). It also filters out columns that have been dropped (which still exist in the system catalogs).
SELECT
n.nspname as schema,
c.relname as table,
f.attname as column,
f.attnum as column_id,
f.attnotnull as not_null,
f.attislocal not_inherited,
f.attinhcount inheritance_count,
pg_catalog.format_type(f.atttypid,f.atttypmod) AS data_type_full,
t.typname AS data_type_name,
CASE
WHEN f.atttypmod >= 0 AND t.typname <> 'numeric' THEN (f.atttypmod - 4) -- the first 4 bytes store the actual length of the data
END AS data_type_length,
CASE
WHEN t.typname = 'numeric' THEN (((f.atttypmod - 4) >> 16) & 65535)
END AS numeric_precision,
CASE
WHEN t.typname = 'numeric' THEN ((f.atttypmod - 4)& 65535 )
END AS numeric_scale,
CASE
WHEN p.contype = 'p' THEN 't'
ELSE 'f'
END AS is_primary_key,
CASE
WHEN p.contype = 'p' THEN p.conname
END AS primary_key_name,
CASE
WHEN p.contype = 'u' THEN 't'
ELSE 'f'
END AS is_unique_key,
CASE
WHEN p.contype = 'u' THEN p.conname
END AS unique_key_name,
CASE
WHEN p.contype = 'f' THEN 't'
ELSE 'f'
END AS is_foreign_key,
CASE
WHEN p.contype = 'f' THEN p.conname
END AS foreignkey_name,
CASE
WHEN p.contype = 'f' THEN p.confkey
END AS foreign_key_columnid,
CASE
WHEN p.contype = 'f' THEN g.relname
END AS foreign_key_table,
CASE
WHEN p.contype = 'f' THEN p.conkey
END AS foreign_key_local_column_id,
CASE
WHEN f.atthasdef = 't' THEN pg_get_expr(d.adbin, d.adrelid)  -- pg_attrdef.adsrc was removed in PostgreSQL 12
END AS default_value
FROM pg_attribute f
JOIN pg_class c ON c.oid = f.attrelid
JOIN pg_type t ON t.oid = f.atttypid
LEFT JOIN pg_attrdef d ON d.adrelid = c.oid AND d.adnum = f.attnum
LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
LEFT JOIN pg_constraint p ON p.conrelid = c.oid AND f.attnum = ANY (p.conkey)
LEFT JOIN pg_class AS g ON p.confrelid = g.oid
WHERE c.relkind = 'r'::char
AND f.attisdropped = false
AND n.nspname = '%s' -- Replace with Schema name
AND c.relname = '%s' -- Replace with table name
AND f.attnum > 0
ORDER BY f.attnum
;
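To make the atttypmod arithmetic above concrete: for a numeric(10,2) column, atttypmod stores (precision << 16) | scale plus a 4-byte header offset, so the shifts and masks recover 10 and 2. A small worked check (table and column names are hypothetical):
-- atttypmod = (10 << 16) + 2 + 4 = 655366
-- precision = ((655366 - 4) >> 16) & 65535 = 10
-- scale     = (655366 - 4) & 65535         = 2
SELECT atttypmod,
       ((atttypmod - 4) >> 16) & 65535 AS numeric_precision,
       (atttypmod - 4) & 65535         AS numeric_scale
FROM pg_attribute
WHERE attrelid = 'my_numeric_table'::regclass  -- hypothetical table with a numeric(10,2) column
  AND attname = 'amount';                      -- hypothetical column name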
You can also check using the query below:
Select * from schema_name.table_name limit 0;
Example: my table has two columns, name and pwd. (Using pgAdmin 3.)
In Postgres, \d is used to describe the table structure, e.g.:
\d schema_name.table_name
This command gives you basic info about a table, such as columns, types, and modifiers.
If you want more info about the table, use:
\d+ schema_name.table_name
This gives you extra info such as storage, stats target, and description.
The best way to describe a table (columns, types, modifiers of columns, etc.) is:
\d+ tablename or \d tablename
When your table is not part of the default schema, you should write:
\d+ schema_name.table_name
Otherwise, you would get an error saying that the relation does not exist.
When your table name starts with a capital letter, you should put the table name in double quotes.
Example: \d "Users"
Use this command:
\d tablename
For example:
\d queuerecords
Table "public.queuerecords"
  Column   |            Type             | Modifiers
-----------+-----------------------------+-----------
 id        | uuid                        | not null
 endtime   | timestamp without time zone |
 payload   | text                        |
 queueid   | text                        |
 starttime | timestamp without time zone |
 status    | text                        |
1) PostgreSQL DESCRIBE TABLE using psql
In the psql command-line tool, use \d table_name or \d+ table_name to find information on the columns of a table.
2) PostgreSQL DESCRIBE TABLE using information_schema
Use a SELECT statement to query the column name, data type, and character maximum length from the columns table in information_schema:
SELECT
COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
from INFORMATION_SCHEMA.COLUMNS where table_name = 'tablename';
For more information, see https://www.postgresqltutorial.com/postgresql-describe-table/
I'll add the pg_dump command even though the psql command was requested, because it generates output that will look familiar to former MySQL users.
# sudo -u postgres pg_dump --table=my_table_name --schema-only mydb
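A hedged, abridged sketch of the kind of output this produces (the real dump also includes SET statements, ownership, and constraint DDL; the table definition here is hypothetical):
CREATE TABLE public.my_table_name (
    id integer NOT NULL,
    name text
);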
\dt is the command which lists all the tables present in a database. Using the \d and \d+ commands we can get the details of a table. The syntax is:
\d table_name (or) \d+ table_name
The command below describes multiple tables briefly:
\dt <table> <table>
The command below describes multiple tables in detail:
\d <table> <table>
The command below describes multiple tables in even more detail:
\d+ <table> <table>
I worked out the following script to get a table's schema.
SELECT
'CREATE TABLE ' || 'yourschema.yourtable' || E'\n(\n' ||
array_to_string(
array_agg(
' ' || column_expr
)
, E',\n'
) || E'\n);\n'
from
(
SELECT ' ' || column_name || ' ' || data_type ||
coalesce('(' || character_maximum_length || ')', '') ||
case when is_nullable = 'YES' then ' NULL' else ' NOT NULL' end as column_expr
FROM information_schema.columns
WHERE table_schema || '.' || table_name = 'yourschema.yourtable'
ORDER BY ordinal_position
) column_list;
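A hedged example of the output, assuming yourschema.yourtable were defined with columns (id integer not null, name varchar(50)); this definition is not from the original answer:
CREATE TABLE yourschema.yourtable
(
  id integer NOT NULL,
  name character varying(50) NULL
);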
