Copy the same file into a table using COPY command & Snowpipe - snowflake-cloud-data-platform

I couldn't load the same file into a table in Snowflake using the COPY command / Snowpipe.
I always get the following result:
Copy executed with 0 files processed.
I have re-created the table and truncated it, but copy_history doesn't show any data:
select * from table(information_schema.copy_history(table_name=>'mytable', start_time=> dateadd(hours, -10, current_timestamp())));
I have used FORCE = TRUE in the COPY command, but it still didn't load the same file into the table. I have explicitly mentioned the file path in the COPY command:
FROM
@STAGE_DEV/myfile/05/28/16/myfile_1.csv
) file_format = (
format_name = STANDARD_CSV_FORMAT Skip_header = 1 FIELD_OPTIONALLY_ENCLOSED_BY = '"' NULL_IF = 'NULL'
)
on_error = continue
Force = True;
Has anyone faced a similar issue, and what would the process be to load the same file again using the COPY command or Snowpipe? I don't have the option to change the file name or put the files in a different S3 bucket.
ls @stage shows the following files:

I have reloaded the files to the S3 bucket and it's working now. Thank you guys for all the responses.
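For reference, a minimal sketch of the two reload paths, assuming the stage from the question and a placeholder pipe name (my_pipe is not from the thread). A one-off COPY with FORCE = TRUE re-loads a file that the load metadata already knows about; ALTER PIPE ... REFRESH asks Snowpipe to re-scan staged files under a prefix, although Snowpipe still skips files it considers already loaded, which is why re-uploading the files worked here.

copy into mytable
from @STAGE_DEV/myfile/05/28/16/
files = ('myfile_1.csv')
file_format = (format_name = STANDARD_CSV_FORMAT)
force = true;

alter pipe my_pipe refresh prefix = 'myfile/05/28/16/';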

Related

SQL Compilation error while loading CSV file from S3 to Snowflake

We are facing the below issue while loading a CSV file from S3 to Snowflake:
SQL Compilation error: Insert column value list does not match column list expecting 7 but got 6
We have tried removing a column from the table and tried again, but this time it shows "expecting 6 but got 5".
Below are the commands that we have used for stage creation and the COPY command.
create or replace stage mystage
url='s3://test/test'
STORAGE_INTEGRATION = test_int
file_format = (type = csv FIELD_OPTIONALLY_ENCLOSED_BY='"' COMPRESSION=GZIP);
copy into mytable
from @mystage
MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY='"' COMPRESSION=GZIP error_on_column_count_mismatch=false TRIM_SPACE=TRUE NULL_IF=(''))
FORCE = TRUE
ON_ERROR = Continue
PURGE=TRUE;
You cannot use MATCH_BY_COLUMN_NAME for CSV files; this is why you get this error.
This copy option is supported for the following data formats:
JSON
Avro
ORC
Parquet
https://docs.snowflake.com/en/sql-reference/sql/copy-into-table.html
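As a workaround, a minimal sketch that maps CSV columns by position instead of by name (the column list and the $1, $2, ... mapping below are placeholders; adjust them to the actual target table):

copy into mytable (col1, col2, col3, col4, col5, col6, col7)
from (select $1, $2, $3, $4, $5, $6, $7 from @mystage)
FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY='"' COMPRESSION=GZIP TRIM_SPACE=TRUE NULL_IF=(''))
ON_ERROR = Continue;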

S3 into Snowflake: COPY INTO with purge = true is not deleting files in S3 bucket

Can't find much documentation on why I'm seeing this issue. I believe I have the permissions to delete objects in S3, as I can go into the bucket on AWS and delete files myself. However, when I'm running the code below, it'll load into the staging table successfully but then I'll still see all of the files in the S3 bucket.
Is this a permission issue or is there something wrong with the way I'm creating the stage or copy statement?
create or replace stage s3_stage
url='s3://bucket_name/folder/'
credentials = (
aws_key_id= 'my_key_id'
aws_secret_key='my_secret_key')
file_format = (type = 'parquet');
copy into staging_table
from (
select
$1:id,
COALESCE($1:Op, 'F'),
$1:timestamp,
$1
from @s3_stage)
pattern = ".*folder/.*"
file_format = (type = 'parquet')
purge = TRUE;
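One way to tell a permissions problem apart from a statement problem (my suggestion, not from the original post): a failed purge does not make the COPY itself fail, Snowflake simply leaves the files in place, so a quick check is whether the same credentials can delete a staged file directly with REMOVE, which needs the same delete access that PURGE does. The file name below is hypothetical.

list @s3_stage pattern = '.*folder/.*';
remove @s3_stage/folder/some_test_file.parquet;  -- hypothetical file name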

Snowflake-Internal Stage data load error: How to load "\" character

In a file, a few of the rows have \ in a column value. For example, I have rows in the below format:
101,Path1,Z:\VMC\PSPS,abc
102,Path5,C:\wintm\PSPS,abc
I was wondering how to load the \ character.
COPY INTO TEST_TABLE from @database.schema.stage_name FILE_FORMAT = ( TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"' SKIP_HEADER = 1 );
Is there anything I can mention in the file_format line?
Are you still getting this error? I just tried to recreate it by creating a CSV based off your sample data and a test table. I loaded the CSV into an internal stage and then ran your COPY command. It worked for me. Please see the screenshot below.
Could you provide more details on the error you are facing? Perhaps there was something off with your table definition.
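If the backslashes do turn out to be the problem (for example, if they vanish from the loaded values), one setting worth checking, as an assumption beyond the original answer, is the CSV escape handling: ESCAPE_UNENCLOSED_FIELD defaults to a backslash, so \ in an unenclosed field can be treated as an escape character. Setting it to NONE keeps the backslash literal:

COPY INTO TEST_TABLE
FROM @database.schema.stage_name
FILE_FORMAT = (
  TYPE = CSV
  FIELD_OPTIONALLY_ENCLOSED_BY = '"'
  SKIP_HEADER = 1
  ESCAPE_UNENCLOSED_FIELD = NONE  -- keep '\' as a literal character
);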

SNOWFLAKE COPY Command not copying data inside

I'm getting started with Snowflake and there is something I don't understand. I tried to issue a COPY command as below, but it shows no rows processed.
copy into customer
from @bulk_copy_example_stage
FILES = ('dataDec-9-2020.csv')
file_format = (type = csv field_delimiter = '|' skip_header = 1)
FORCE=TRUE;
I tried with another file from the same S3 folder:
copy into customer
from @bulk_copy_example_stage
FILES = ('generated_customer_data.csv')
file_format = (type = csv field_delimiter = '|' skip_header = 1)
FORCE=TRUE;
And this worked.
At this stage I'm pretty sure that something was wrong with my first file, but my question is: how do we print out what the error was? All the console shows is the output below, which is not really helpful.
You could try looking at the copy_history to find out what's wrong with the file.
Reference: copy_history
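A minimal sketch of that check, assuming the target table is CUSTOMER and the load ran within the last day:

select *
from table(information_schema.copy_history(
  table_name => 'CUSTOMER',
  start_time => dateadd(hours, -24, current_timestamp())));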

Snowflake ON_ERROR=CONTINUE aborts the COPY command for a file

The Snowflake documentation for the COPY INTO command states (for copy options):
ON_ERROR = CONTINUE | SKIP_FILE | SKIP_FILE_num | SKIP_FILE_num% | ABORT_STATEMENT
Continue loading the file. The COPY statement returns an error message
for a maximum of one error encountered per data file. Note that the
difference between the ROWS_PARSED and ROWS_LOADED column values
represents the number of rows that include detected errors. However,
each of these rows could include multiple errors. To view all errors
in the data files, use the VALIDATION_MODE parameter or query the
VALIDATE function.
But for me it just doesn't seem to obey: I see the default value, i.e. SKIP_FILE, being applied, as files are getting skipped on any error in the file.
create or replace file format jsonThing type = 'json' DATE_FORMAT='yyyy-mm-dd'
TIMESTAMP_FORMAT='YYYY-MM-DD"T"HH24:MI:SSZ' TRIM_SPACE=TRUE NULL_IF=('\\N', 'NULL','');
create or replace stage snowflake_json_stage
storage_integration = snowflake_json_storage_integration
url = 'azure://snowflakejson.blob.core.windows.net/cdrs'
file_format = jsonThing
COPY_OPTIONS = (ON_ERROR=CONTINUE PURGE=TRUE MATCH_BY_COLUMN_NAME=CASE_INSENSITIVE)
COMMENT='The snowflake json stage';
CREATE or REPLACE PIPE SNOWFLAKE_JSON_PIPE
AUTO_INGEST = TRUE
integration = snowflake_json_notification_integration
as
COPY INTO purge.public.cdrs
from @SNOWFLAKE_JSON_STAGE
ON_ERROR=CONTINUE
MATCH_BY_COLUMN_NAME=CASE_INSENSITIVE;
Does the ON_ERROR=CONTINUE option work with a PIPE?
NOTE: The file is an NDJSON file.
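As an aside (not part of the original question), the row-level errors that ON_ERROR = CONTINUE is supposed to skip can be inspected after the fact. For Snowpipe loads, one option is the VALIDATE_PIPE_LOAD table function, a sketch assuming the pipe defined above and a recent load window:

select *
from table(information_schema.validate_pipe_load(
  pipe_name => 'SNOWFLAKE_JSON_PIPE',
  start_time => dateadd(hours, -1, current_timestamp())));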
