Not able to create stage in Snowflake

create or replace stage AWS_OWNER1
url = 's3 url'
credentials = (aws_key_id = 'aws_key_name'
aws_secret_key = 'aws_secret_key')
file_format = CSV;
When I run the above query I get the error: "SQL compilation error: File format 'CSV' does not exist or not authorized."
Please send a valid answer to solve this issue.
Thank You

The syntax is:
file_format = (type = 'CSV')
However, as CSV is the default, you can leave this out entirely.
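For reference, a corrected version of the statement might look like the sketch below (the URL, key ID, and secret key are placeholders, not real values):

-- create the stage with an inline CSV file format instead of referencing a named format
create or replace stage AWS_OWNER1
url = 's3://<your-bucket>/<your-path>/'
credentials = (aws_key_id = '<aws_key_id>'
aws_secret_key = '<aws_secret_key>')
file_format = (type = 'CSV');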

Related

snowflake: continue on error but also list all the errors

I am inserting data into Snowflake using the statement below:
copy into "sampletable"
from 's3://test/test/' credentials=(aws_key_id='xxxx' aws_secret_key='yyyyy')
file_format = (type = csv field_delimiter = '|' skip_header = 1)
on_error = 'continue';
But after the ingestion is done, I also want to know which rows were not inserted and for what reason, since I am using the option on_error = 'continue'.
Any idea how I can do this?
You can use the VALIDATE function: https://docs.snowflake.com/en/sql-reference/functions/validate.html
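For example, assuming the COPY above was the most recent load job in your session, a minimal sketch of the validation query would be:

-- return the rows rejected by the last COPY INTO executed in this session
select * from table(validate("sampletable", job_id => '_last'));

You can also pass a specific query ID instead of '_last' to validate an older load.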

Unload Snowflake data to S3 without the extension/file format

How can I unload Snowflake data to S3 without any file extension?
For unloading the data with a specific extension we use a file format in Snowflake.
E.g. code
copy into 's3://mybucket/unload/'
from mytable
storage_integration = myint
file_format = (format_name = my_csv_format);
But what I want is to store data without any extension.
SINGLE is what I was looking for. It is one of the parameters we can use with the COPY command, and it creates the file without an extension.
Code:
copy into 's3://mybucket/unload/'
from mytable
storage_integration = myint
file_format = (format_name = my_csv_format)
SINGLE = TRUE;
Go through the note at the link below for a better understanding:
https://docs.snowflake.com/en/sql-reference/sql/create-file-format.html#:~:text=comma%20(%2C)-,FILE_EXTENSION,-%3D%20%27string%27%20%7C%20NONE
You can add the parameter FILE_EXTENSION = NONE to your file format. With this parameter, Snowflake does not add a file extension based on your file format (in this case .csv), but uses the passed extension (NONE or any other).
copy into 's3://mybucket/unload/'
from mytable
storage_integration = myint
file_format = (format_name = my_csv_format file_extension = NONE);
https://docs.snowflake.com/en/sql-reference/sql/copy-into-location.html

Load data into Snowflake using Pentaho

I am using Pentaho 7.1 and trying to load data to Snowflake. The SQL is running fine on Snowflake, but in Pentaho I am getting the error:
Couldn't execute SQL: copy into "DEMO_DB"."PUBLIC"."STG_DIMACTIVITY" Table 'DEMO_DB.PUBLIC.STG_DIMACTIVITY' does not exist
The SQL used is:
copy into "DEMO_DB"."PUBLIC"."STG_DIMACTIVITY"
from @my_s3_stage
FILES = ('MB_ACTIVITY.txt_0')
--pattern='.*MB_ACTIVITY.txt_0.*'
file_format = (type = csv field_delimiter = '|' skip_header = 1)
force=true;
Please let me know what I am missing here. Any help is much appreciated.

Snowflake COPY command not copying data inside

I'm getting started with Snowflake and there is something I don't understand. I tried to issue a COPY command as below, but it shows no rows processed.
copy into customer
from @bulk_copy_example_stage
FILES = ('dataDec-9-2020.csv')
file_format = (type = csv field_delimiter = '|' skip_header = 1)
FORCE=TRUE;
I tried with another file from the same S3 folder
copy into customer
from @bulk_copy_example_stage
FILES = ('generated_customer_data.csv')
file_format = (type = csv field_delimiter = '|' skip_header = 1)
FORCE=TRUE;
And this worked.
At this stage I'm pretty sure that something was wrong with my first file, but my question is: how do we get to print out what the error was? All it shows in the console is as below, which is not really helpful.
You could try looking at the copy_history to find out what's wrong with the file.
Reference: copy_history
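For example, a sketch of a COPY_HISTORY query for this table (assuming the load ran within the last hour) would be:

-- show load results for the CUSTOMER table, including error counts and the first error message per file
select *
from table(information_schema.copy_history(
table_name => 'CUSTOMER',
start_time => dateadd(hours, -1, current_timestamp())));

The ERROR_COUNT and FIRST_ERROR_MESSAGE columns usually explain why a file was skipped or only partially loaded.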

What regex parser is used for the files_pattern for the 'COPY INTO' sql query?

(Submitted on behalf of a Snowflake User)
I have a test S3 folder called s3://bucket/path/test=integration_test_sanity/file.parquet
I want to be able to load this into Snowflake using the COPY INTO command, but I want to load all the test folders which have a structure like test=*/file.parquet.
I've tried:
COPY INTO raw.test_sanity_test_parquet
FROM 's3://bucket/path/'
CREDENTIALS=(AWS_KEY_ID='XXX' AWS_SECRET_KEY='XXX')
PATTERN='test=(.*)/.*'
FILE_FORMAT = (TYPE = parquet)
and also
COPY INTO raw.test_sanity_test_parquet
FROM 's3://bucket/path/'
CREDENTIALS=(AWS_KEY_ID='XXX' AWS_SECRET_KEY='XXX')
PATTERN='test=.*/.*'
FILE_FORMAT = (TYPE = parquet)
Neither of these works. I was wondering what regex parser is used by Snowflake and which regex I should use to get this to work.
This works, but I can't filter on just the test folders, which can cause issues:
COPY INTO raw.test_sanity_test_parquet
FROM 's3://bucket/path/'
CREDENTIALS=(AWS_KEY_ID='XXX' AWS_SECRET_KEY='XXX')
PATTERN='.*/.*'
FILE_FORMAT = (TYPE = parquet)
Any recommendations? Thanks!
Try this:
COPY INTO raw.test_sanity_test_parquet
FROM 's3://bucket/path/'
CREDENTIALS=(AWS_KEY_ID='XXX' AWS_SECRET_KEY='XXX')
PATTERN='.*/test.*[.]parquet'
FILE_FORMAT = (TYPE = parquet)
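If the bucket is also exposed through a named external stage, one quick way to sanity-check the regex before running the COPY is to list the files it matches (my_ext_stage below is a hypothetical stage name pointing at s3://bucket/path/):

-- list only the files the pattern would pick up
list @my_ext_stage pattern = '.*/test.*[.]parquet';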
