The following error is logged when the procedure runs from a task, but the procedure works fine when I run it manually:
Execution error in store procedure STAGE_SERVICEBUS_ORDER: "Query
code" missing from JSON response At Statement.execute, line 4 position
60.
The procedure looks like this:
CREATE OR REPLACE PROCEDURE "STAGE_SERVICEBUS_ORDER"(YEARMONTH VARCHAR)
RETURNS VARCHAR(16777216)
LANGUAGE JAVASCRIPT
EXECUTE AS OWNER
AS '
snowflake.createStatement({ sqlText: `Truncate table DM.STG.SERVICEBUS_ORDER`}).execute();
var copy_into_statement = `copy into DM.STG.SERVICEBUS_ORDER (FILE_NAME,OBJECT) from ( select metadata$filename, $1 from @SERVICEBUS_ORDER`+YEARMONTH+` ) file_format = (type = ''JSON'' strip_outer_array = false) force=true ON_ERROR = CONTINUE `;
snowflake.createStatement({ sqlText: copy_into_statement}).execute();
return ''Done'';
';
Please try replacing $1 with PARSE_JSON($1).
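With that change, the COPY statement built inside the procedure would be equivalent to the following (a sketch; `<YEARMONTH>` stands for the concatenated parameter value):

```sql
copy into DM.STG.SERVICEBUS_ORDER (FILE_NAME, OBJECT)
from (
  select metadata$filename, parse_json($1)
  from @SERVICEBUS_ORDER<YEARMONTH>
)
file_format = (type = 'JSON' strip_outer_array = false)
force = true
ON_ERROR = CONTINUE;
```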
Here's an example from Snowflake docs illustrating this solution for accessing JSON file metadata in a COPY statement:
-- Create a file format
CREATE OR REPLACE FILE FORMAT my_json_format
TYPE = 'json';
-- Create an internal stage
CREATE OR REPLACE STAGE mystage2
FILE_FORMAT = my_json_format;
-- Stage a data file
PUT file:///tmp/data1.json @mystage2;
-- Query the filename and row number metadata columns
-- and the regular data columns in the staged file
SELECT METADATA$FILENAME, METADATA$FILE_ROW_NUMBER, parse_json($1)
FROM @mystage2/data1.json.gz;
https://docs.snowflake.com/en/user-guide/querying-metadata.html#example-2-querying-the-metadata-columns-for-a-json-file
I am having trouble getting the following code to work:
create or replace secure procedure create_wh (wh_name varchar)
returns varchar
language sql
comment = '<string_literal>'
execute as owner
as
begin
create warehouse if not exists :wh_name
warehouse_size = xsmall
auto_suspend = 60
auto_resume = true
initially_suspended = true;
return 'SUCCES';
end;
The idea is that the SP can be called with a name for a warehouse. Running the above code fails with an "unexpected 'if'" error after the create warehouse statement.
I am guessing I am missing something in relation to binding the parameter to the query, but I can't figure out what.
It is possible to provide the warehouse name as a parameter by using IDENTIFIER(:wh_name):
create or replace secure procedure create_wh (wh_name varchar)
returns varchar
language sql
comment = '<string_literal>'
execute as owner
as
begin
create warehouse if not exists IDENTIFIER(:wh_name)
warehouse_size = xsmall
auto_suspend = 60
auto_resume = true
initially_suspended = true;
return 'SUCCES';
end;
CALL create_wh('test');
SHOW WAREHOUSES;
I am trying to create a stored procedure to copy from the external stage (s3 bucket) and use a pattern for the file name. The pattern is based on the concatenated current date but I need to set a variable to use as a pattern. Is it possible to do something like this?
CREATE OR REPLACE PROCEDURE test_copy()
RETURNS STRING
LANGUAGE JAVASCRIPT
EXECUTE AS CALLER
AS
$$
SET my_Date=(select concat('.*', regexp_replace(current_date(),'-',''), '.*.parquet' );
var sql_command = '
COPY INTO table1
FROM '@s3bucket'
(file_format => PARQUET, pattern=>$my_Date)
);
'
snowflake.execute(
{
sqlText: sql_command
});
return "Successfully executed.";
$$;
Since you can generate the current date in JavaScript, why not create my_Date purely in JavaScript?
You then build sql_command by concatenating the required strings and variables together.
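For example (a sketch of the procedure body; the table and stage names are taken from the question, and the exact COPY options are assumptions), the pattern and the COPY command can both be built in JavaScript:

```javascript
// Build the pattern string in JavaScript: ".*YYYYMMDD.*.parquet"
var today = new Date();
var yyyymmdd = today.toISOString().slice(0, 10).replace(/-/g, "");
var my_date = ".*" + yyyymmdd + ".*.parquet";

// Concatenate the COPY command. Inside a Snowflake JavaScript
// procedure you would then run it with:
//   snowflake.createStatement({ sqlText: sql_command }).execute();
var sql_command =
  "COPY INTO table1 " +
  "FROM @s3bucket " +
  "FILE_FORMAT = (TYPE = PARQUET) " +
  "PATTERN = '" + my_date + "'";
```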
I am trying to load data from azure blob storage.
The data has already been staged.
But, the issue is when I try to run
copy into random_table_name
from @stage_name_i_created
file_format = (type='csv')
pattern ='*.csv'
Below is the error I encounter:
raise error_class(
snowflake.connector.errors.ProgrammingError: 001757 (42601): SQL compilation error:
Table 'random_table_name' does not exist
Basically, it says the table does not exist, which it does not, but my syntax is the same as the one on the website.
COPY INTO query on Snowflake returns TABLE does not exist error
In my case the table name is case-sensitive. Snowflake seems to convert everything to upper case. I changed the database/schema/table names to all upper-case and it started working.
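For example (illustrative names, not from the original post), an unquoted identifier is folded to upper case, while a double-quoted one must match exactly everywhere it is used:

```sql
-- Unquoted names are folded to upper case, so these refer to the same table:
create table random_table_name (c1 varchar);
copy into RANDOM_TABLE_NAME
from @stage_name_i_created
file_format = (type='csv');

-- A double-quoted, mixed-case name must be quoted identically in every statement:
create table "Random_Table_Name" (c1 varchar);
copy into "Random_Table_Name"
from @stage_name_i_created
file_format = (type='csv');
```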
First run the below query to fetch the column headers
select $1 FROM @stage_name_i_created/filename.csv limit 1
Assuming the below is the header line from your csv file:
id;first_name;last_name;email;age;location
Create a file_format csv
create or replace file format semicolon
type = 'CSV'
field_delimiter = ';'
skip_header=1;
Then you should define the datatype and field name as below
create or replace table <yourtable> as
select $1::varchar as id
,$2::varchar as first_name
,$3::varchar as last_name
,$4::varchar as email
,$5::int as age
,$6::varchar as location
FROM @stage_name_i_created/yourfile.csv
(file_format => semicolon );
The table must exist prior to running a COPY INTO command. In your post, you say that the table does not exist...so that is your issue.
If your table exists, try forcing the full table path like this:
copy into <database>.<schema>.<random_table_name>
from @stage_name_i_created
file_format = (type='csv')
pattern ='*.csv'
or set the context first and then copy:
use database <database_name>;
use schema <schema_name>;
copy into random_table_name
from @stage_name_i_created
file_format = (type='csv')
pattern ='*.csv';
rbachkaniwala, what do you mean by 'How do I create a table? (according to snowflake syntax it is not possible to create empty tables)'?
You can just do below to create a table
CREATE TABLE random_table_name (FIELD1 VARCHAR, FIELD2 VARCHAR)
The table does need to exist. You should check the documentation for COPY INTO.
Other areas to consider are
do you have the right context set for the database & schema
does the user / role have access to the table or object.
It basically seems like you don't have the table defined yet. You should
ensure the table is created
ensure all columns in the CSV exist as columns in the table
ensure the order of the columns are the same as in the CSV
I'd check data types too.
"COPY INTO" is not a query command, it is the actual data transfer execution from source to destination, which both must exist as others commented here but If you want just to query without loading the files then run the following SQL:
//Display list of files in the stage to verify stage
LIST @stage_name_i_created;
//Create a file format
CREATE OR REPLACE FILE FORMAT RANDOM_FILE_CSV
type = csv
COMPRESSION = 'GZIP' FIELD_DELIMITER = ',' RECORD_DELIMITER = '\n' SKIP_HEADER = 0 FIELD_OPTIONALLY_ENCLOSED_BY = '\042'
TRIM_SPACE = FALSE ERROR_ON_COLUMN_COUNT_MISMATCH = FALSE ESCAPE = 'NONE' ESCAPE_UNENCLOSED_FIELD = 'NONE' DATE_FORMAT = 'AUTO' TIMESTAMP_FORMAT = 'AUTO'
NULL_IF = ('\\N');
//Now select the data in the files
Select $1 as first_col,$2 as second_col //can add as necessary number of columns ...etc
from @stage_name_i_created
(FILE_FORMAT => RANDOM_FILE_CSV)
More information can be found in the documentation link here
https://docs.snowflake.com/en/user-guide/querying-stage.html
I have a
DROP VIEW IF EXISTS mydatabase.myschema.myname;
CREATE OR REPLACE TABLE mydatabase.myschema.myname AS ...
that fails with error code 2203: SQL compilation error: Object found is of type 'TABLE', not specified type 'VIEW'.
My intention was to create a script to "convert" a set of existing views into tables (updated periodically via tasks). I wanted the script to be repeatable, so I thought I could use DROP VIEW IF EXISTS xxx to drop the view if it exists, but this fails if there is already a table with the same name. The first time, the script runs fine: it drops the view and creates the table. If I run the script again, it fails because now there is a table with that same name.
So is there any way to ignore the error from DROP VIEW IF EXISTS xxx, or to run the command only if there is a VIEW with that name?
You have a number of options.
You can have your script read from INFORMATION_SCHEMA to get the list of views to delete. This SQL gets a list of all views except those in INFORMATION_SCHEMA:
select * from INFORMATION_SCHEMA.VIEWS where TABLE_SCHEMA <> 'INFORMATION_SCHEMA';
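You can also use that query to generate the DROP statements themselves (a sketch; run the generated statements separately):

```sql
select 'DROP VIEW IF EXISTS "' || TABLE_CATALOG || '"."' || TABLE_SCHEMA || '"."' || TABLE_NAME || '";' as drop_stmt
from INFORMATION_SCHEMA.VIEWS
where TABLE_SCHEMA <> 'INFORMATION_SCHEMA';
```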
If you just want to drop the view names and avoid running into errors, here's a stored procedure you can call to try dropping a view without generating an error:
create or replace procedure DropView(viewName string)
returns string
language JavaScript
execute as OWNER
as
$$
var outString;
var sql_command = 'drop view ' + VIEWNAME;
try {
var stmt = snowflake.createStatement( {sqlText: sql_command} );
var resultSet = stmt.execute();
while (resultSet.next()) {
outString = resultSet.getColumnValue('status');
}
}
catch (err) {
outString = err; // Capture the error message and return it instead of raising it.
}
return outString;
$$;
If you want to loop through every database and schema in the entire account, I wrote a stored procedure to do that. It's designed for dependency checking on all views, but could be modified to delete them too.
https://snowflake.pavlik.us/index.php/2019/10/14/object-dependency-checking-in-snowflake
My suggestion would be to create a stored procedure that loops through all of your views and creates tables from them. In that stored procedure, you could check to see if the object exists already as a table and skip that object.
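One way to do that check (a sketch; MYSCHEMA and MYNAME are placeholders) is to look at TABLE_TYPE in INFORMATION_SCHEMA.TABLES, which reports 'VIEW' for views and 'BASE TABLE' for tables:

```sql
select TABLE_TYPE
from INFORMATION_SCHEMA.TABLES
where TABLE_SCHEMA = 'MYSCHEMA'
  and TABLE_NAME = 'MYNAME';
```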
I have the following code which is working with no errors and returning the expected output when I print the results of the pyodbc cursor I created.
cnxn = pyodbc.connect(MY_URL)
cursor = cnxn.cursor()
cursor.execute(
'''
CREATE TABLE tablename(
filename VARCHAR(100),
synopsis TEXT,
abstract TEXT,
original TEXT,
PRIMARY KEY (filename)
)
'''
)
for file in file_names_1:
try:
query = produce_row_query(file, tablename, find_tag_XML)
cursor.execute(query)
except pyodbc.DatabaseError as p:
print(p)
result = cursor.execute(
'''
SELECT filename,
DATALENGTH(synopsis),
DATALENGTH(abstract),
original
FROM ml_files
'''
)
for row in cursor.fetchall():
print(row)
However, no new tables are showing up in my actual MS SQL server. Am I missing a step to push the changes or something of that nature?
You need to commit changes or else they will not be updated in your actual database.
cnxn.commit()
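To see why the commit matters, here is a small self-contained demonstration using sqlite3 as a stand-in for pyodbc (the commit semantics are the same, and the table and file names are illustrative): a second connection, like your MS SQL Server client, does not see uncommitted rows.

```python
import os
import sqlite3
import tempfile

# Illustrative demo with sqlite3 standing in for pyodbc:
# a second connection only sees rows after the writer commits.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

cnxn = sqlite3.connect(path)
cursor = cnxn.cursor()
cursor.execute("CREATE TABLE tablename (filename TEXT PRIMARY KEY)")
cnxn.commit()  # commit the DDL so the demo focuses on the row below

cursor.execute("INSERT INTO tablename VALUES ('a.xml')")

other = sqlite3.connect(path)  # e.g. your database GUI
rows_before = other.execute("SELECT * FROM tablename").fetchall()

cnxn.commit()  # the missing step in the question's code
rows_after = other.execute("SELECT * FROM tablename").fetchall()
```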