Found character ':' instead of field delimiter ','

Again I am facing an issue loading a file into Snowflake.
My file format is:
TYPE = CSV
FIELD_DELIMITER = ','
FIELD_OPTIONALLY_ENCLOSED_BY = '\042'
NULL_IF = ''
ERROR_ON_COLUMN_COUNT_MISMATCH = FALSE
[ COMMENT = '<string_literal>' ]
Now, by running:
copy into trips from @citibike_trips
file_format = CSV;
I am receiving the following error:
Found character ':' instead of field delimiter ','
File 'citibike-trips-json/2013-06-01/data_01a304b5-0601-4bbe-0045-e8030021523e_005_7_2.json.gz', line 1, character 41
Row 1, column "TRIPS"["STARTTIME":2]
If you would like to continue loading when an error is encountered, use other values such as 'SKIP_FILE' or 'CONTINUE' for the ON_ERROR option. For more information on loading options, please run 'info loading_data' in a SQL client.
I am a little confused about the file I am trying to load. I got it from a tutorial on YouTube, and in the video the load works properly. However, the stage contains not only CSV data sets but also JSON and Parquet. I think this could be the problem, but I am not sure how to solve it, since the command above already specifies file_format = CSV.

Remove FIELD_OPTIONALLY_ENCLOSED_BY = '\042', recreate the file format, and run the COPY statement again.

You're trying to import a JSON file using a CSV file format. In most cases all you need to do is specify JSON as the file type in the COPY INTO statement.
[ FILE_FORMAT = ( { FORMAT_NAME = '[<namespace>.]<file_format_name>' |
TYPE = { CSV | JSON | AVRO | ORC | PARQUET | XML } [ formatTypeOptions ] } ) ]
You're using CSV, but it should be JSON:
FILE_FORMAT = (TYPE = JSON)
If you're more comfortable using a named file format, use the builder to create a named file format that's of type JSON:
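For example, a minimal sketch of a named JSON file format and a COPY that references it (the name my_json_format is illustrative, not from the original thread):
create or replace file format my_json_format
type = json;
copy into trips from @citibike_trips
file_format = (format_name = 'my_json_format');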

I found a thread in the Snowflake Community forum that explains what I think you might have been facing. There are now three different kinds of files in the stage: CSV, Parquet, and JSON. The copy process given in the tutorial expects there to be only CSV. You can use this syntax to exclude the non-CSV files from the copy:
copy into trips from @citibike_trips
pattern = '.*\.csv\.gz$'
file_format = csv
on_error = skip_file;
Using the PATTERN option with a regular expression, you can filter so that only the CSV files are loaded.
https://community.snowflake.com/s/feed/0D53r0000AVKgxuCQD
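As a quick sanity check (not part of the original answer), you can list the stage with the same pattern first to see exactly which files the COPY will pick up:
list @citibike_trips pattern = '.*\.csv\.gz';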
And if you also run into an error related to timestamps, you will want to create this file format before you do the copy:
create or replace file format
citibike.public.csv
type = 'csv'
field_optionally_enclosed_by = '\042';

How to solve the error "Field delimiter ',' found while expecting record delimiter '\n'" while loading JSON data to the stage

I am trying to use the COPY INTO command to load data from S3 into Snowflake.
Below are the steps I followed to create the stage and load the file from the stage into Snowflake.
JSON file
{
"Name":"Umesh",
"Desigantion":"Product Manager",
"Location":"United Kingdom"
}
create or replace stage emp_json_stage
url='s3://mybucket/emp.json'
credentials=(aws_key_id='my id' aws_secret_key='my key');
-- create the table with a VARIANT column
CREATE TABLE emp_json_raw (
json_data_raw VARIANT
);
-- load data from the stage into Snowflake
COPY INTO emp_json_raw from @emp_json_stage;
I am getting below error
Field delimiter ',' found while expecting record delimiter '\n'
File 'emp.json', line 2, character 18
Row 2, column "emp_json_raw"["JSON_DATA_RAW":1]
I am using a simple JSON file, and I don't understand this error.
What causes it and how can I solve it?
The file format is not specified, so it defaults to CSV, hence the error.
Try this:
COPY INTO emp_json_raw
from @emp_json_stage
file_format=(TYPE=JSON);
There are other options besides TYPE that can be specified with FILE_FORMAT. Refer to the documentation here: https://docs.snowflake.com/en/sql-reference/sql/copy-into-table.html#type-json
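Once the load succeeds, the JSON lands in the VARIANT column and can be queried with Snowflake's path syntax. A quick illustration (not part of the original answer):
-- pull individual keys out of the raw JSON; the ::string casts are optional
select json_data_raw:Name::string as name,
       json_data_raw:Location::string as location
from emp_json_raw;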
try:
file_format = (type = csv field_optionally_enclosed_by='"')
The default settings do not expect the " wrapping around your data. So you could strip all the " characters, or just set FIELD_OPTIONALLY_ENCLOSED_BY to a ". This does mean that if your data itself contains ", things get messy.
https://docs.snowflake.com/en/user-guide/getting-started-tutorial-copy-into.html
https://docs.snowflake.com/en/sql-reference/sql/create-file-format.html#type-csv
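On that messy case: with FIELD_OPTIONALLY_ENCLOSED_BY set to ", Snowflake expects a literal " inside an enclosed field to be escaped by doubling it. A small sketch (my_table and my_stage are illustrative names):
-- a row like: 1,"say ""hello""",ok  loads the middle field as: say "hello"
copy into my_table
from @my_stage
file_format = (type = csv field_optionally_enclosed_by = '"');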
Also, make it a standard practice to specify the type of the file, whether CSV, JSON, Avro, Parquet, etc.
https://docs.snowflake.com/en/sql-reference/sql/create-file-format.html

SQL Compilation error while loading CSV file from S3 to Snowflake

We are facing the issue below while loading a CSV file from S3 to Snowflake:
SQL Compilation error: Insert column value list does not match column list expecting 7 but got 6
We tried removing a column from the table and loading again, but this time it says expecting 6 but got 5.
Below are the commands we used to create the stage and run the COPY.
create or replace stage mystage
url='s3://test/test'
STORAGE_INTEGRATION = test_int
file_format = (type = csv FIELD_OPTIONALLY_ENCLOSED_BY='"' COMPRESSION=GZIP);
copy into mytable
from @mystage
MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY='"' COMPRESSION=GZIP error_on_column_count_mismatch=false TRIM_SPACE=TRUE NULL_IF=(''))
FORCE = TRUE
ON_ERROR = Continue
PURGE = TRUE;
You cannot use MATCH_BY_COLUMN_NAME for CSV files; this is why you get this error.
This copy option is supported for the following data formats:
JSON
Avro
ORC
Parquet
https://docs.snowflake.com/en/sql-reference/sql/copy-into-table.html
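For CSV, columns are matched by position instead. If the file's column order does not line up with the table, you can reorder with a SELECT from the stage. A minimal sketch, assuming a seven-column target (the names are illustrative):
copy into mytable
from (select $1, $2, $3, $4, $5, $6, $7 from @mystage)
file_format = (type = csv field_optionally_enclosed_by = '"' compression = gzip);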

Snowflake-Internal Stage data load error: How to load "\" character

In a file, a few of the rows have \ in a column value. For example, I have rows in the format below:
101,Path1,Z:\VMC\PSPS,abc
102,Path5,C:\wintm\PSPS,abc
I was wondering how to load the \ character.
COPY INTO TEST_TABLE from @database.schema.stage_name FILE_FORMAT = ( TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '\"' SKIP_HEADER = 1 );
Is there anything I can add to the FILE_FORMAT line?
Are you still getting this error? I just tried to recreate it by creating a CSV from your sample data and a test table. I loaded the CSV into an internal stage and then ran your COPY command. It worked for me.
Could you provide more details on the error you are facing? Perhaps there was something off with your table definition.
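One thing worth checking if the load does reject those rows: ESCAPE_UNENCLOSED_FIELD defaults to backslash ('\134'), so a \ in an unquoted field can be consumed as an escape character. A sketch that disables it (same statement as above, one option added):
COPY INTO TEST_TABLE from @database.schema.stage_name
FILE_FORMAT = ( TYPE = CSV
                FIELD_OPTIONALLY_ENCLOSED_BY = '\"'
                SKIP_HEADER = 1
                ESCAPE_UNENCLOSED_FIELD = NONE ); -- default is '\134', a backslash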

How to create a csv file format definition to load data into snowflake table

I have a CSV file, a sample of it looks like this:
[Image of CSV file]
Snowpipe is failing to load this CSV file with the following error:
Number of columns in file (5) does not match that of the corresponding table (3), use file format option error_on_column_count_mismatch=false to ignore this error
Can someone advise me on a CSV file format definition that will accommodate the load without failing?
The issue is that the data you are trying to load contains commas (,) inside the data itself. Snowflake thinks that those commas represent new columns which is why it thinks there are 5 columns in your file. It is then trying to load these 5 columns into a table with only 3 columns resulting in an error.
You need to tell Snowflake that anything inside double-quotes (") should be loaded as-is, and not to interpret commas inside quotes as column delimiters.
When you create your file format via the web interface there is an option which allows you to tell Snowflake to do this. Set the "Field optionally enclosed by" dropdown to "Double Quote".
Alternatively, if you're creating your file-format with SQL then there is an option called FIELD_OPTIONALLY_ENCLOSED_BY that you can set to \042 which does the same thing:
CREATE FILE FORMAT "SIMON_DB"."PUBLIC".sample_file_format
TYPE = 'CSV'
COMPRESSION = 'AUTO'
FIELD_DELIMITER = ','
RECORD_DELIMITER = '\n'
SKIP_HEADER = 0
FIELD_OPTIONALLY_ENCLOSED_BY = '\042' -- <---------------- set to double quote
TRIM_SPACE = FALSE
ERROR_ON_COLUMN_COUNT_MISMATCH = TRUE
ESCAPE = 'NONE'
ESCAPE_UNENCLOSED_FIELD = '\134'
DATE_FORMAT = 'AUTO'
TIMESTAMP_FORMAT = 'AUTO';
If possible, share the file format and one sample record to help figure out the issue. It seems to be an issue with the number of columns. Can you include the FIELD_OPTIONALLY_ENCLOSED_BY option in your COPY statement and try once?
When the TAB character is unlikely to occur in the data, I tend to use TAB-delimited files, which, together with a header, also make the source files more human-readable in case they need to be opened to troubleshoot loading failures:
FIELD_DELIMITER = '\t'
Also (although a bit off-topic), note that Snowflake suggests files to be compressed: https://docs.snowflake.com/en/user-guide/data-load-prepare.html#data-file-compression
I mostly use GZip compression type:
COMPRESSION = GZIP
A (working) example:
CREATE FILE FORMAT Public.CSV_GZIP_TABDELIMITED_WITHHEADER_QUOTES_TRIM
FIELD_DELIMITER = '\t'
SKIP_HEADER = 1
TRIM_SPACE = TRUE
NULL_IF = ('NULL')
COMPRESSION = GZIP
;

Snowflake COPY command not copying data

I'm getting started with Snowflake and there is something I don't understand. I tried to issue a COPY command as below, but it shows no rows processed.
copy into customer
from @bulk_copy_example_stage
FILES = ('dataDec-9-2020.csv')
file_format = (type = csv field_delimiter = '|' skip_header = 1)
FORCE=TRUE;
I tried with another file from the same S3 folder
copy into customer
from @bulk_copy_example_stage
FILES = ('generated_customer_data.csv')
file_format = (type = csv field_delimiter = '|' skip_header = 1)
FORCE=TRUE;
And this worked.
At this point I'm pretty sure that something was wrong with my first file, but my question is: how do we find out what the error was? All the console shows is that no rows were processed, which is not really helpful.
You could try looking at the copy_history to find out what's wrong with the file.
Reference: COPY_HISTORY (https://docs.snowflake.com/en/sql-reference/functions/copy_history.html)
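For instance, a quick sketch of that query (the table name and the 24-hour window are illustrative choices):
select *
from table(information_schema.copy_history(
    table_name => 'CUSTOMER',
    start_time => dateadd(hours, -24, current_timestamp())
));
The FIRST_ERROR_MESSAGE and FIRST_ERROR_LINE_NUMBER columns in the output usually pinpoint the offending record.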
