Snowflake: Copy from S3 into Table with nested JSON

Requirement: load a nested JSON file from S3 into Snowflake.
Error: SQL compilation error: COPY statement only supports simple SELECT from stage statements for import.
I know I can create a temporary table from the SQL. Is there a better way to load directly from S3 into Snowflake?
COPY INTO schema.table_A FROM (
  WITH s3 AS (
    SELECT $1 AS json_array
    FROM '@public.stage'
      (file_format => 'public.json',
       pattern => 'abc/xyz/.*')
  )
  SELECT DISTINCT
    CURRENT_TIMESTAMP() AS exec_t,
    json_array AS data,
    json_array:id AS id,
    json_array:code::text AS code
  FROM s3, TABLE(FLATTEN(s3.json_array)) f
);

Transformations during loading come with certain limitations, see here: https://docs.snowflake.com/en/user-guide/data-load-transform.html#transforming-data-during-a-load
If you still want to keep your code and not apply the transformations later, you can create a view on top of the stage and then INSERT into another table based on a SELECT from that view.
Avoiding the CTE may already help.
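A minimal sketch of that workaround, reusing the stage, file format, and target table from the question (schema.stage_v is a hypothetical view name, and whether a view over a stage fits your setup is worth verifying):
CREATE OR REPLACE VIEW schema.stage_v AS
  SELECT $1 AS json_array
  FROM '@public.stage'
    (file_format => 'public.json',
     pattern => 'abc/xyz/.*');

-- A regular INSERT ... SELECT is not subject to the COPY transformation limits
INSERT INTO schema.table_A
SELECT DISTINCT
  CURRENT_TIMESTAMP() AS exec_t,
  s.json_array AS data,
  s.json_array:id AS id,
  s.json_array:code::text AS code
FROM schema.stage_v s, TABLE(FLATTEN(s.json_array)) f;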

Related

How to use inline file format to query data from stage in Snowflake data warehouse

Is there any way to query data from a stage with an inline file format without copying the data into a table?
When using a COPY INTO table statement, I can specify an inline file format:
COPY INTO <table>
FROM (
SELECT ...
FROM @my_stage/some_file.csv
)
FILE_FORMAT = (
TYPE = CSV,
...
);
However, the same thing doesn't work when running the same select query directly, outside of the COPY INTO command:
SELECT ...
FROM @my_stage/some_file.csv
(FILE_FORMAT => (
TYPE = CSV,
...
));
Instead, the best I can do is to use a pre-existing file format:
SELECT ...
FROM @my_stage/some_file.csv
(FILE_FORMAT => 'my_file_format');
But this doesn't allow me to programmatically change the file format when creating the query. I've tried every syntax variation possible, but this just doesn't seem to be supported right now.
I don't believe it is possible, but as a workaround, can't you create the file format programmatically, use that named file format in your SQL, and then, if necessary, drop it?
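A rough sketch of that workaround (the format name temp_csv_format and its options are hypothetical placeholders that your code would generate):
-- 1. Create (or replace) a named file format with the options you need
CREATE OR REPLACE FILE FORMAT temp_csv_format
  TYPE = CSV
  FIELD_DELIMITER = ','
  SKIP_HEADER = 1;

-- 2. Reference it when querying the stage
SELECT $1, $2
FROM @my_stage/some_file.csv
  (FILE_FORMAT => 'temp_csv_format');

-- 3. Clean up once it is no longer needed
DROP FILE FORMAT IF EXISTS temp_csv_format;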

How to view timestamp when pipe finished copying data from stage to table?

I've created a Pipe from an S3 stage, and with a Python script I'm generating the timestamps of when I generate the data from a streaming service into file batches. I would also like to be able to add the timestamp of when the files were actually copied into the table from the S3 stage. I've found some documentation regarding the PIPE_USAGE_HISTORY function, but although I've run quite a few tests over the past days, the query below returns an empty table. What am I doing wrong?
select * from table(information_schema.pipe_usage_history(
date_range_start=>dateadd('day',-14,current_date()),
date_range_end=>current_date())
)
I found the answer. There is another function I should be using: copy_history.
The above query would be rewritten as follows:
select * from table(information_schema.copy_history(
table_name => '{replace with your schema.table}',
start_time => dateadd(days, -14, current_timestamp()),
end_time => current_timestamp())
)

BigQuery or SQL Server SPLIT query

I have searched around and cannot find much on this topic. I have a table that gets logging information. As a result, the column I am interested in contains multiple values that I need to search against. The column is formatted in a PHP URL style, i.e.:
/test/test.aspx?DS_Vendor=55039&DS_ProdVer=7.90.100.0&DS_ProdLang=EN&DS_Product=MTT&DS_OfficeBits=32
This makes all searches end up with really long regexes to get data, and then join statements to combine the data.
Is there a way in BigQuery, or SQL Server that I can pull the information from that column and put it into new columns?
Example:
The information I would like extracted begins after the ? and ends at each &. The string can sometimes be longer and contain additional headers.
Thanks,
Below is for BigQuery Standard SQL and addresses the below aspect of your question:
Is there a way in BigQuery, ... that I can pull the information from that column and put it into new columns?
#standardSQL
CREATE TEMP FUNCTION parseColumn(kv STRING, column_name STRING) AS (
  IF(SPLIT(kv, '=')[OFFSET(0)] = column_name, SPLIT(kv, '=')[OFFSET(1)], NULL)
);
WITH `project.dataset.table` AS (
  SELECT '/test/test.aspx?extra=abc&DS_Vendor=55039&DS_ProdVer=7.90.100.0&DS_ProdLang=EN&DS_Product=MTT&DS_OfficeBits=32' AS url UNION ALL
  SELECT '/test/test.aspx?DS_Vendor=55192&DS_ProdVer=4.30.100.0&more=123&DS_ProdLang=DE&DS_Product=MTE&DS_OfficeBits=64'
)
SELECT
  MIN(parseColumn(kv, 'DS_Vendor')) AS DS_Vendor,
  MIN(parseColumn(kv, 'DS_ProdVer')) AS DS_ProdVer,
  MIN(parseColumn(kv, 'DS_ProdLang')) AS DS_ProdLang,
  MIN(parseColumn(kv, 'DS_Product')) AS DS_Product,
  MIN(parseColumn(kv, 'DS_OfficeBits')) AS DS_OfficeBits
FROM `project.dataset.table`,
UNNEST(REGEXP_EXTRACT_ALL(url, r'[?&]([^?&]+)')) AS kv
GROUP BY url
with the result as below:
Row  DS_Vendor  DS_ProdVer  DS_ProdLang  DS_Product  DS_OfficeBits
1    55039      7.90.100.0  EN           MTT         32
2    55192      4.30.100.0  DE           MTE         64
Below is also addressed
The string can sometimes be longer, and contains additional headers.
One example using BigQuery (with standard SQL):
SELECT REGEXP_EXTRACT_ALL(url, r'[?&]([^?&]+)')
FROM (
SELECT '/test/test.aspx?DS_Vendor=55039&DS_ProdVer=7.90.100.0&DS_ProdLang=EN&DS_Product=MTT&DS_OfficeBits=32' AS url
)
This returns the parts of the URL as an ARRAY<STRING>. To go one step further, you can get back an ARRAY<STRUCT<key STRING, value STRING>> with a query of this form:
SELECT
ARRAY(
SELECT AS STRUCT
SPLIT(part, '=')[OFFSET(0)] AS key,
SPLIT(part, '=')[OFFSET(1)] AS value
FROM UNNEST(REGEXP_EXTRACT_ALL(url, r'[?&]([^?&]+)')) AS part
) AS keys_and_values
FROM (
SELECT '/test/test.aspx?DS_Vendor=55039&DS_ProdVer=7.90.100.0&DS_ProdLang=EN&DS_Product=MTT&DS_OfficeBits=32' AS url
)
...or with the keys and values as top-level columns:
SELECT
SPLIT(part, '=')[OFFSET(0)] AS key,
SPLIT(part, '=')[OFFSET(1)] AS value
FROM (
SELECT '/test/test.aspx?DS_Vendor=55039&DS_ProdVer=7.90.100.0&DS_ProdLang=EN&DS_Product=MTT&DS_OfficeBits=32' AS url
)
CROSS JOIN UNNEST(REGEXP_EXTRACT_ALL(url, r'[?&]([^?&]+)')) AS part

Creating XML Schema for Bulk Load to SQL Server - Child Element Describes Parent

I have an XML document that I'm working to build a schema for in order to bulk load these documents into a SQL Server table. The XML I'm focusing on looks like this:
<Coverage>
<CoverageCd>BI</CoverageCd>
<CoverageDesc>BI</CoverageDesc>
<Limit>
<FormatCurrencyAmt>
<Amt>30000.00</Amt>
</FormatCurrencyAmt>
<LimitAppliesToCd>PerPerson</LimitAppliesToCd>
</Limit>
<Limit>
<FormatCurrencyAmt>
<Amt>85000.00</Amt>
</FormatCurrencyAmt>
<LimitAppliesToCd>PerAcc</LimitAppliesToCd>
</Limit>
</Coverage>
<Coverage>
<CoverageCd>PD</CoverageCd>
<CoverageDesc>PD</CoverageDesc>
<Limit>
<FormatCurrencyAmt>
<Amt>50000.00</Amt>
</FormatCurrencyAmt>
<LimitAppliesToCd>Coverage</LimitAppliesToCd>
</Limit>
</Coverage>
Inside the Limit element, there's a child LimitAppliesToCd that I need to use to determine where the Amt element's value actually gets stored inside my table. Is this possible to do using the standard XML Bulk Load feature of SQL Server? Normally in XML I'd expect that the element would have an attribute containing the "PerPerson" or "PerAcc" information, but this standard we're using does not call for that.
If anyone has worked with the ACORD standard before, you might know what I'm working with here. Any help is greatly appreciated.
Don't know exactly what you are talking about, but this is a solution to get the information out of your XML.
Assumption: your XML is already bulk-loaded into a declared variable @xml of type XML.
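For testing, that variable might be set up like this (a minimal sketch; the fragments from the question are wrapped in a single <root> element so the paths below resolve):
DECLARE @xml XML =
N'<root>
  <Coverage>
    <CoverageCd>BI</CoverageCd>
    <CoverageDesc>BI</CoverageDesc>
    <Limit>
      <FormatCurrencyAmt>
        <Amt>30000.00</Amt>
      </FormatCurrencyAmt>
      <LimitAppliesToCd>PerPerson</LimitAppliesToCd>
    </Limit>
  </Coverage>
</root>';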
A CTE will pull the information out of your XML. The final query will then use PIVOT to put your data into the right column.
With a fitting table structure, the actual insert should be simple (see the sketch after the query below).
WITH DerivedTable AS
(
    SELECT cov.value('CoverageCd[1]','varchar(max)') AS CoverageCd
          ,cov.value('CoverageDesc[1]','varchar(max)') AS CoverageDesc
          ,lim.value('(FormatCurrencyAmt/Amt)[1]','decimal(14,4)') AS Amt
          ,lim.value('LimitAppliesToCd[1]','varchar(max)') AS LimitAppliesToCd
    FROM @xml.nodes('/root/Coverage') AS A(cov)
    CROSS APPLY cov.nodes('Limit') AS B(lim)
)
SELECT p.*
FROM
    (SELECT * FROM DerivedTable) AS tbl
PIVOT
(
    MIN(Amt) FOR LimitAppliesToCd IN (PerPerson, PerAcc, Coverage)
) AS p
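Wiring that into an actual load might look like this (a sketch only, with dbo.CoverageLimits as a hypothetical target table whose columns match the pivoted output):
WITH DerivedTable AS
(
    SELECT cov.value('CoverageCd[1]','varchar(max)') AS CoverageCd
          ,cov.value('CoverageDesc[1]','varchar(max)') AS CoverageDesc
          ,lim.value('(FormatCurrencyAmt/Amt)[1]','decimal(14,4)') AS Amt
          ,lim.value('LimitAppliesToCd[1]','varchar(max)') AS LimitAppliesToCd
    FROM @xml.nodes('/root/Coverage') AS A(cov)
    CROSS APPLY cov.nodes('Limit') AS B(lim)
)
INSERT INTO dbo.CoverageLimits (CoverageCd, CoverageDesc, PerPerson, PerAcc, Coverage)
SELECT p.CoverageCd, p.CoverageDesc, p.PerPerson, p.PerAcc, p.Coverage
FROM DerivedTable
PIVOT
(
    MIN(Amt) FOR LimitAppliesToCd IN (PerPerson, PerAcc, Coverage)
) AS p;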

Parse json arrays using HIVE

I have many json arrays stored in a table (jt) that looks like this:
[{"ts":1403781896,"id":14,"log":"show"},{"ts":1403781896,"id":14,"log":"start"}]
[{"ts":1403781911,"id":14,"log":"press"},{"ts":1403781911,"id":14,"log":"press"}]
Each array is a record.
I would like to parse this table in order to get a new table (logs) with 3 fields: ts, id, log.
I tried to use the get_json_object method, but it seems that method is not compatible with json arrays because I only get null values.
This is the code I have tested:
CREATE TABLE logs AS
SELECT get_json_object(jt.value, '$.ts') AS ts,
get_json_object(jt.value, '$.id') AS id,
get_json_object(jt.value, '$.log') AS log
FROM jt;
I tried to use other functions but they seem really complicated.
Thank you! :)
Update!
I solved my issue by performing a regexp:
CREATE TABLE jt_reg AS
select regexp_replace(regexp_replace(value,'\\}\\,\\{','\\}\\\n\\{'),'\\[|\\]','') as valuereg from jt;
CREATE TABLE logs AS
SELECT get_json_object(jt_reg.valuereg, '$.ts') AS ts,
get_json_object(jt_reg.valuereg, '$.id') AS id,
get_json_object(jt_reg.valuereg, '$.log') AS log
FROM jt_reg;
I just ran into this problem, with the JSON array stored as a string in the Hive table.
The solution is a bit hacky and ugly, but it works and doesn't require SerDes or external UDFs:
SELECT
get_json_object(single_json_table.single_json, '$.ts') AS ts,
get_json_object(single_json_table.single_json, '$.id') AS id,
get_json_object(single_json_table.single_json, '$.log') AS log
FROM ( SELECT explode (
           split(regexp_replace(substr(json_array_col, 2, length(json_array_col)-2),
                                '"}","', '"}",,,,"'), ',,,,')
       ) AS single_json FROM src_table) single_json_table;
I broke the lines up so that it would be a little easier to read.
I'm using substr() to strip the first and last characters, removing [ and ]. I'm then using regexp_replace() to match the separator between records in the JSON array, changing the separator to something unique that can then be used easily with split() to turn the string into a Hive array of JSON objects, which can then be used with explode() as described in the previous solution.
Note: the separator regex used here ("}",") wouldn't work with the original data set; the regex would have to be ("},\{") and the replacement would then need to be "},,,,{", e.g.:
split(regexp_replace(substr(json_array_col, 2, length(json_array_col)-2),
'"},\\{"', '"},,,,{"'), ',,,,')
Use the explode() function:
hive (default)> CREATE TABLE logs AS
> SELECT get_json_object(single_json_table.single_json, '$.ts') AS ts,
> get_json_object(single_json_table.single_json, '$.id') AS id,
> get_json_object(single_json_table.single_json, '$.log') AS log
> FROM
> (SELECT explode(json_array_col) as single_json FROM jt) single_json_table ;
Automatically selecting local only mode for query
Total MapReduce jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
hive (default)> select * from logs;
OK
ts id log
1403781896 14 show
1403781896 14 start
1403781911 14 press
1403781911 14 press
Time taken: 0.118 seconds, Fetched: 4 row(s)
hive (default)>
where json_array_col is the column in jt which holds your array of JSON strings.
hive (default)> select json_array_col from jt;
json_array_col
["{"ts":1403781896,"id":14,"log":"show"}","{"ts":1403781896,"id":14,"log":"start"}"]
["{"ts":1403781911,"id":14,"log":"press"}","{"ts":1403781911,"id":14,"log":"press"}"]
Because get_json_object doesn't support a JSON array string, you can concat it into a JSON object, like this:
SELECT
get_json_object(concat(concat('{"root":', jt.value), '}'), '$.root')
FROM jt;
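Building on that, individual fields could then be pulled out by array index in the JSON path (a sketch, assuming the jt.value column from the question and that get_json_object's [n] index syntax covers your case):
SELECT
  get_json_object(concat('{"root":', jt.value, '}'), '$.root[0].ts')  AS first_ts,
  get_json_object(concat('{"root":', jt.value, '}'), '$.root[0].id')  AS first_id,
  get_json_object(concat('{"root":', jt.value, '}'), '$.root[0].log') AS first_log
FROM jt;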
