In Snowflake database, I'm trying to extract string from an array column.
The name of the column in the table is: mbus.
So, if you query the table:
select PRO.JSON_DATA:mbus
FROM SOURCE_TABLE1 PRO
the result will be:
[{"region":"EAME"},{"region":"LA"},[{"region":"NA"},[{"region":"NAP"},[{"region":"SAP"}]
I'm using ARRAY_TO_STRING function:
SELECT ARRAY_TO_STRING(PRO.JSON_DATA:mbus:region, ', ')
FROM SOURCE_TABLE1 PRO
but the result is NULL.
The final result should be: EAME, LA, NA, NAP, SAP (extracted from the column).
Could you help me with this one? I need to build a query that extracts the proper strings from the array.
Use FLATTEN to transform the JSON array into rows, and LISTAGG to combine them back into a single string:
SELECT LISTAGG(f.value:region::STRING, ', ') AS col
FROM SOURCE_TABLE1 PRO
, LATERAL FLATTEN(input => PRO.JSON_DATA:mbus) f
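The FLATTEN-then-LISTAGG pattern can be illustrated outside SQL. The following is a minimal Python sketch of the same two steps, using a hypothetical value matching the example array above:

```python
import json

# Hypothetical value of JSON_DATA:mbus for one row, matching the example above.
mbus = json.loads(
    '[{"region":"EAME"},{"region":"LA"},{"region":"NA"},'
    '{"region":"NAP"},{"region":"SAP"}]'
)

# FLATTEN step: one row per array element.
regions = [element["region"] for element in mbus]

# LISTAGG step: aggregate the rows back into one delimited string.
result = ", ".join(regions)
print(result)  # EAME, LA, NA, NAP, SAP
```

Note this also shows why ARRAY_TO_STRING alone fails: the `:region` lookup has to happen per element, after flattening, not on the array as a whole.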
We have a scenario (sample table below) where we need to split the date_clmn column, of varchar type, into its individual values.
tableA
emp_id date_clmn
123 ("2021-01-01", "2021-03-03")
456 ("2021-02-01", "2021-04-03")
We need to apply DML operations against this table. For example:
DELETE FROM tableA WHERE cast(BEGIN(date_clmn) as DATE FORMAT 'YYYY-MM-DD') =current_date AND END(date_clmn) IS UNTIL_CHANGED ;
We need to convert this to Snowflake syntax, taking the first and the second value from the date_clmn column, where date_clmn is of varchar datatype.
So, in Snowflake, how do we get the first and second value from each row when accessing the column in the filter clause?
This is actually the Teradata PERIOD datatype, which we want to emulate in Snowflake.
If there are always just 2 dates in there, easiest to me would be:
to_date(split_part(column_1, ',', 1)) as date_1
to_date(split_part(column_1, ',', 2)) as date_2
You might have to clean up the parentheses afterwards with REPLACE().
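To make the SPLIT_PART approach concrete, here is a small Python sketch of the same logic on a hypothetical period-style value (the `split_part` helper is a rough stand-in for Snowflake's 1-based SPLIT_PART):

```python
from datetime import date

# Hypothetical varchar value in the Teradata-period style shown above.
date_clmn = '("2021-01-01", "2021-03-03")'

def split_part(s, delim, n):
    # Rough Python analogue of Snowflake's SPLIT_PART (1-based index).
    return s.split(delim)[n - 1]

# Take each half, then strip the parentheses, quotes and spaces
# (the cleanup that REPLACE would do in SQL).
date_1 = date.fromisoformat(split_part(date_clmn, ',', 1).strip(' ()"'))
date_2 = date.fromisoformat(split_part(date_clmn, ',', 2).strip(' ()"'))
print(date_1, date_2)  # 2021-01-01 2021-03-03
```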
Below is a script that helps you pull the needed data:
https://docs.snowflake.com/en/sql-reference/functions/split_to_table.html
Setup Data:
create or replace table splittable (a number, v varchar);
insert into splittable (a, v) values (456, '("2021-02-01", "2021-04-03")');
Output Query:
select a, regexp_replace(value,'\\("|"|\\)') as value
from splittable, lateral split_to_table(splittable.v, ',');
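The SPLIT_TO_TABLE plus REGEXP_REPLACE combination can be sketched in Python as follows, using the same sample value as the setup data (the trailing `.strip()` is an extra step to drop the space left after the comma, which the SQL version would leave in place):

```python
import re

# Hypothetical row value matching the setup data above.
v = '("2021-02-01", "2021-04-03")'

# SPLIT_TO_TABLE step: one piece per comma-separated element.
pieces = v.split(',')

# REGEXP_REPLACE step: the pattern '\\("|"|\\)' deletes '("', '"' and ')'.
cleaned = [re.sub(r'\("|"|\)', '', p).strip() for p in pieces]
print(cleaned)  # ['2021-02-01', '2021-04-03']
```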
Hello, I need to create a new table from another table that has a nested column (e.g. a metrics column) which I need to unnest for the new table, without writing out each column (because... what if I have 100 columns?):
INSERT INTO new_table
SELECT sales_uk, sales_ca, sales_sp, sales_us, sales, metrics[0]:category::string FROM og_table
is there another way? I've tried this but it didn't work:
select io.metrics[0]:category::string as new_id, io.*
from og_table io
If I understand your question correctly (you have a variant column that you want to structure into one column per object key), and it is acceptable to use copy/paste, then there is an approach like this: use LATERAL FLATTEN on the variant column to produce a list of copy/paste-able columns for your new table definition.
with og as (select array_construct(object_construct(
'category', 1,
'bar', 2,
'baz', 3
)) metrics)
select 'metrics'||metrics_array.path||':'||metrics_object.path||'::string as '||metrics_object.path||',' newcol from og
,lateral flatten(metrics) metrics_array
,lateral flatten(metrics_array.value) metrics_object;
--Produces output like:
metrics[0]:bar::string as bar,
metrics[0]:baz::string as baz,
metrics[0]:category::string as category,
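The same "generate the SELECT list from the object keys" idea can be sketched in Python, with a hypothetical dict standing in for the first element of the metrics array:

```python
# Hypothetical first element of the metrics array, as in the CTE above.
metrics_object = {"category": 1, "bar": 2, "baz": 3}

# Same idea as the LATERAL FLATTEN query: emit one SELECT-list line per
# object key, ready to paste into the new table definition.
lines = [f"metrics[0]:{key}::string as {key}," for key in sorted(metrics_object)]
print("\n".join(lines))
```

Either way, the output is copy/paste material, so remember to drop the trailing comma on the last line.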
In column A there is a list of tasks.
In column B each task has an associated group.
How to, using built-in formulas, generate sequence like in column D?
Here is a screenshot:
try:
=ARRAYFORMULA(TRIM(TRANSPOSE(SPLIT(QUERY(TRANSPOSE(QUERY(QUERY(IF(A2:B="",,A2:B&"♦"),
"select max(Col1) where Col1 is not null group by Col1 pivot Col2", 0)
,,999^99)),,999^99), "♦"))))
This should work as well as player0's. I keep trying to get him to use FLATTEN() :)
=QUERY(FLATTEN(TRANSPOSE(QUERY(A2:B,"select Max(A) group by A pivot B"))),"where Col1<>''")
So I have a Postgres database where one of the columns is an array of strings.
If I do the query
SELECT count(*) FROM table WHERE column @> ARRAY['string']::varchar[];
I get a certain set of data back, but if I want to query that array with a wildcard on the string I can't seem to figure that out, something like
SELECT count(*) FROM table WHERE column LIKE ARRAY['%string%']::varchar[];
Any help is appreciated!
use unnest():
WITH t(arr) AS ( VALUES
(ARRAY['foo','bar']),
(ARRAY['foo1','bar1'])
)
SELECT count(*) FROM t,unnest(t.arr) AS str
WHERE str ILIKE '%foo%';
Result:
count
-------
2
(1 row)
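The unnest-plus-ILIKE logic can be sketched in Python with the same hypothetical two rows as the CTE above:

```python
# Hypothetical rows, each holding an array of strings (as in the CTE above).
rows = [
    ["foo", "bar"],
    ["foo1", "bar1"],
]

# unnest + ILIKE '%foo%': count the unnested elements that match
# (case-insensitive substring test).
count = sum(
    1
    for arr in rows
    for element in arr
    if "foo" in element.lower()
)
print(count)  # 2
```

One caveat of counting after the lateral join: a single row whose array contains two matching elements is counted twice. Count `DISTINCT` row identifiers instead if you want rows, not elements.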
I am hoping it is straightforward to do the following:
Given rows containing jsonb of the form
{
  "a": "hello",
  "b": ["jim", "bob", "kate"]
}
I would like to be able to get all the 'b' fields from a table (as in select jsondata->'b' from mytable) and then form a list consisting of all strings which occur in at least one 'b' field. (Basically a set-union.)
How can I do this? Or am I better off using a python script to extract the 'b' entries, do the set-union there, and then store it back into the database somewhere else?
This gives you the union set of elements in list 'b' of the json.
SELECT array_agg(a ORDER BY a)
FROM (
  SELECT DISTINCT unnest(txt_arr) AS a
  FROM (
    SELECT ARRAY(
      SELECT trim(elem::text, '"')
      FROM jsonb_array_elements(jsondata->'b') elem
    ) AS txt_arr
    FROM jtest1
  ) y
) z;
Query explanation:
Gets the list from 'b' as jsondata->'b'.
Expands the JSON array into a set of JSON values with the jsonb_array_elements() function.
Trims the '"' characters from the elements with the trim() function.
Converts back to an array with the ARRAY() constructor after trimming.
Gets the distinct values by unnesting with the unnest() function.
Finally, array_agg() forms the expected result.
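If you do end up doing the set-union in Python instead, the logic is short. A minimal sketch with hypothetical rows shaped like the JSON above:

```python
# Hypothetical jsondata values from two rows of jtest1, shaped as above.
table = [
    {"a": "hello", "b": ["jim", "bob", "kate"]},
    {"a": "world", "b": ["bob", "sue"]},
]

# Set-union of every 'b' list, sorted like array_agg(a ORDER BY a).
union = sorted({name for row in table for name in row["b"]})
print(union)  # ['bob', 'jim', 'kate', 'sue']
```

Doing it in SQL keeps the work in the database, though; the round-trip through a script is only worth it if you need the result in application code anyway.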