I'm using snowsql in a ksh script that loads data from a named external stage into a Snowflake table.
I want to check whether a file exists before loading: if there are no files in the stage, I'd like to exit without doing the load.
I tried using
snowsql -c $CONNECT -w $WAREHOUSE -s $SCHEMA -d $DATABASE \
-o exit_on_error=true \
-q "ls ${external_stage};" \
but when the stage is empty, ls simply returns no rows - that is not treated as an error.
How should I approach this?
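One way to approach it (a minimal sketch, not from the original post) is to list the stage first, count the rows that come back, and only run the load when at least one file is present. The COPY INTO target table is a placeholder, and this assumes that with header=False and friendly=False an empty stage produces no output lines:

#!/bin/ksh
# List the stage; plain, headerless output keeps the parsing simple.
file_count=$(snowsql -c $CONNECT -w $WAREHOUSE -s $SCHEMA -d $DATABASE \
    -o friendly=False -o header=False -o timing=False -o output_format=plain \
    -q "ls ${external_stage};" | grep -c .)

if [ "$file_count" -eq 0 ]; then
    echo "No files found in ${external_stage}; skipping load."
    exit 0
fi

# Files exist, so run the actual load (my_table is a placeholder table name).
snowsql -c $CONNECT -w $WAREHOUSE -s $SCHEMA -d $DATABASE \
    -o exit_on_error=true \
    -q "COPY INTO my_table FROM ${external_stage};"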
I am calling a stored procedure through SnowSQL and am getting the error below.
002141 (42601): SQL compilation error:
Unknown user-defined function ETL_SCHEMA.PROC
Below is the snowsql query:
snowsql -c newConnection -o log_level=DEBUG -r ACCT_ROLE -w ETL_XS_WH -d ETL_DEV_DB -s ETL_SCHEMA -q "CALL ETL_SCHEMA.PROC('202')" -o friendly=False -o header=False -o output_format=plain -o timing=False
Is anything wrong here?
Is CALL ETL_SCHEMA.PROC('202') working in the Snowflake web UI? Maybe it's not a stored procedure but a user-defined function.
The issue you are having is either permissions based or it's a search path issue.
I'd recommend prefixing "ETL_SCHEMA" with the database name (i.e., using the fully qualified name) and trying that. You can also run select current_role(), current_database(), current_schema(); instead of the CALL command to see what the session context actually is; you might have something in the config file that is overriding the arguments passed in on the command line.
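For illustration, a hedged sketch of both suggestions, reusing the connection, role, and warehouse from the question (whether PROC really lives in ETL_DEV_DB is an assumption):

# Check what context the session actually ends up in.
snowsql -c newConnection -r ACCT_ROLE -w ETL_XS_WH -d ETL_DEV_DB -s ETL_SCHEMA \
    -q "select current_role(), current_database(), current_schema();"

# Call the procedure by its fully qualified name so the search path cannot interfere.
snowsql -c newConnection -r ACCT_ROLE -w ETL_XS_WH \
    -q "CALL ETL_DEV_DB.ETL_SCHEMA.PROC('202');"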
We are exporting data into a CSV file using a Unix shell script (via snowsql).
Below is the script:
#!/bin/ksh
snowsql -c newConnection -o log_level=DEBUG \
    -o log_file=~/snowsql_sso_debug.log \
    -r SRVC_ACCT_ROLE -w LOAD_WH -d ETL_DEV_DB -s CTL_DB \
    -q "select * from mytable" \
    -o friendly=False -o header=False -o output_format=pipe -o timing=False > test_file.csv
The output starts with something like this:
|:--------|:-----------|
I don't want these separator lines in my CSV file. What option do I need to add to my snowsql command?
Appreciate your response.
Thanks.
Providing my comment as an answer, just in case it works better for you.
I would leverage a COPY INTO command to create a CSV file to an internal stage location:
https://docs.snowflake.com/en/sql-reference/sql/copy-into-location.html
And then use a GET statement to pull the file down to your Unix machine.
https://docs.snowflake.com/en/sql-reference/sql/get.html
This gives you greater control over the output format and will likely perform faster, as well.
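A minimal sketch of that approach, run through snowsql (the stage path under the user stage @~, the local target directory, and the pipe delimiter are assumptions):

# Unload the table to a single CSV file in the user's internal stage, then GET it locally.
snowsql -c newConnection -r SRVC_ACCT_ROLE -w LOAD_WH -d ETL_DEV_DB -s CTL_DB -q "
  COPY INTO @~/exports/test_file.csv
    FROM mytable
    FILE_FORMAT = (TYPE = CSV COMPRESSION = NONE FIELD_DELIMITER = '|')
    SINGLE = TRUE
    OVERWRITE = TRUE;
  GET @~/exports/test_file.csv file:///tmp/;
"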
I'm trying to create tags for *.c, *.x and *.h files.
These are the commands I executed:
find <absolute_path_of_code> -name *.c -o -name *.x -o -name *.h > cscope.files
cscope -bkqc cscope.files
Up to this point everything is fine.
But after this, when I execute the command
cscope -Rb
I get the following message on the console:
cscope: -c or -T option mismatch between command line and old symbol database
How do I resolve this?
If you generate a database using the -c or -T options (you use -c in your original command) you are required to pass those options to every subsequent invocation of cscope. Just add -c to your second command (making it cscope -Rbc) and it should work.
cscope -Rb generates only the cscope.out file, but cscope -bkqc -i cscope.files generates cscope.in.out, cscope.po.out, and cscope.out. So there is no need to execute cscope -Rb at all.
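Putting the pieces together, a hedged sketch of the whole workflow (the source path is a placeholder, and keeping -k and -c is an assumption carried over from the original command):

# Collect the source files; quoting the patterns stops the shell from expanding them early.
find /path/to/code -name "*.c" -o -name "*.x" -o -name "*.h" > cscope.files

# Build the cross-reference from the file list (-i), with an inverted index (-q),
# kernel mode (-k), and an uncompressed database (-c).
cscope -bkqc -i cscope.files

# Browse the existing database: -d skips rebuilding, and -c matches the build options.
cscope -d -c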
I have a text document that contains a bunch of URLs in this format:
URL = "sitehere.com"
What I'm looking to do is run curl -K myfile.txt and save the response cURL returns into a file.
How can I do this?
curl -K myconfig.txt -o output.txt
Writes the first output received in the file you specify (overwrites if an old one exists).
curl -K myconfig.txt >> output.txt
Appends all output you receive to the specified file.
Note: The -K is optional.
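Since the question's file holds lines like URL = "sitehere.com", here is a hedged sketch of what a config file for -K can look like; the URLs and filenames are placeholders, and each output is paired with a url in order:

# myconfig.txt -- long curl options, one per line, without the leading dashes
url = "https://sitehere.com"
output = "sitehere_response.txt"

url = "https://othersite.com"
output = "othersite_response.txt"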
For a single file you can use -O instead of -o filename to use the last segment of the URL path as the filename. Example:
curl http://example.com/folder/big-file.iso -O
will save the results to a new file named big-file.iso in the current folder. In this way it works similar to wget but allows you to specify other curl options that are not available when using wget.
There are several options to make curl write its output to a file:
# saves it to myfile.txt
curl http://www.example.com/data.txt -o myfile.txt
# The #1 is replaced by the part of the URL matched by the globbing pattern,
# so each filename contains the matched part of the url
curl "http://www.example.com/data[1-3].txt" -o "file_#1.txt"
# saves to data.txt, the filename extracted from the URL
curl http://www.example.com/data.txt -O
# saves to filename determined by the Content-Disposition header sent by the server.
curl http://www.example.com/data.txt -O -J
Either curl or wget can be used in this case. All 3 of these commands do the same thing, downloading the file at http://path/to/file.txt and saving it locally into "my_file.txt":
wget http://path/to/file.txt -O my_file.txt # my favorite--it has a progress bar
curl http://path/to/file.txt -o my_file.txt
curl http://path/to/file.txt > my_file.txt
Notice the first one's -O is the capital letter "O".
The nice thing about the wget command is it shows a nice progress bar.
You can prove the files downloaded by each of the 3 techniques above are exactly identical by comparing their sha512 hashes: run sha512sum my_file.txt after each of the commands above and compare the results; all 3 hashes come out the same, meaning the files are byte-for-byte identical.
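For instance, a quick sketch of that check (the URL is a placeholder):

# Download the same file three ways...
wget http://example.com/file.txt -O wget_file.txt
curl http://example.com/file.txt -o curl_o_file.txt
curl http://example.com/file.txt > curl_redirect_file.txt

# ...then compare the checksums; all three should be identical.
sha512sum wget_file.txt curl_o_file.txt curl_redirect_file.txt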
See also: wget command to download a file and save as a different filename
For those of you who want to copy the cURL output to the clipboard instead of writing it to a file, you can use pbcopy (on macOS) by piping the cURL output into it with |.
Example: curl https://www.google.com/robots.txt | pbcopy. This will copy all the content from the given URL to your clipboard.
Linux version: curl https://www.google.com/robots.txt | xclip
Windows version: curl https://www.google.com/robots.txt | clip
Use --trace-ascii output.txt to output the curl details to the file output.txt.
You may need to add quotation marks around the "URL" and the "file_output"; otherwise, curl (or rather your shell) may not recognize the URL or the output file name correctly when they contain special characters.
Format
curl "url" -o filename
Example
curl "https://en.wikipedia.org/wiki/Quotation_mark" -o output_file.txt
Example_2
curl "https://en.wikipedia.org/wiki/Quotation_mark" > output_file.txt
Just make sure to add quotation marks.
A tad bit late, but I think the OP was looking for something like:
curl -K myfile.txt --trace-ascii output.txt
If you want to store the output on your desktop, use the command below (a POST request run from Git Bash). It worked for me.
curl https://localhost:8080 \
  --request POST \
  --header "Content-Type: application/json" \
  -o "C:\Desktop\test.json"