I have a text document that contains a bunch of URLs in this format:
URL = "sitehere.com"
What I'm looking to do is run curl -K myfile.txt and save the response cURL returns to a file.
How can I do this?
curl -K myconfig.txt -o output.txt
Writes the output for the first URL to the file you specify (overwriting it if it already exists).
curl -K myconfig.txt >> output.txt
Appends all output you receive to the specified file.
Note: the -K part is optional; the output options work the same if you pass URLs directly on the command line.
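For reference, a -K config file can also name the output file for each URL itself; a minimal sketch (URLs and filenames are placeholders) might look like:
url = "http://example.com/page1.html"
output = "page1.html"
url = "http://example.com/page2.html"
output = "page2.html"
curl pairs each output with its url in order, so every URL ends up in its own file.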
For a single file you can use -O instead of -o filename to use the last segment of the URL path as the filename. Example:
curl http://example.com/folder/big-file.iso -O
will save the result to a new file named big-file.iso in the current folder. In this way it works similarly to wget, but lets you specify other curl options that are not available when using wget.
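For instance (just as an illustration of combining -O with other curl options), this follows redirects and resumes a partially-downloaded file:
curl -L -C - -O http://example.com/folder/big-file.iso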
There are several options to make curl write its output to a file:
# saves it to myfile.txt
curl http://www.example.com/data.txt -o myfile.txt
# The #1 gets replaced by the current value of the first globbing pattern in the URL ([1-3] here), so each download gets its own filename
curl "http://www.example.com/data_[1-3].txt" -o "file_#1.txt"
# saves to data.txt, the filename extracted from the URL
curl http://www.example.com/data.txt -O
# saves to filename determined by the Content-Disposition header sent by the server.
curl http://www.example.com/data.txt -O -J
Either curl or wget can be used in this case. All 3 of these commands do the same thing, downloading the file at http://path/to/file.txt and saving it locally into "my_file.txt":
wget http://path/to/file.txt -O my_file.txt # my favorite--it has a progress bar
curl http://path/to/file.txt -o my_file.txt
curl http://path/to/file.txt > my_file.txt
Notice the first one's -O is the capital letter "O".
The nice thing about the wget command is that it shows a progress bar.
You can prove the files downloaded by each of the 3 techniques above are identical by comparing their sha512 hashes. Running sha512sum on the file produced by each of the commands above and comparing the results shows that all 3 files have the same hash, meaning they are identical byte-for-byte.
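As a concrete sketch of that check (saving to three different filenames so the downloads don't overwrite each other; the names are arbitrary):
wget http://path/to/file.txt -O file_wget.txt
curl http://path/to/file.txt -o file_curl_o.txt
curl http://path/to/file.txt > file_curl_redirect.txt
sha512sum file_wget.txt file_curl_o.txt file_curl_redirect.txt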
See also: wget command to download a file and save as a different filename
For those of you who want to copy the cURL output to the clipboard instead of writing it to a file, you can pipe the cURL command into pbcopy (on macOS).
Example: curl https://www.google.com/robots.txt | pbcopy. This will copy all the content from the given URL to your clipboard.
Linux version: curl https://www.google.com/robots.txt | xclip -selection clipboard (plain xclip copies to the primary selection rather than the clipboard)
Windows version: curl https://www.google.com/robots.txt | clip
Use --trace-ascii output.txt to output the curl details to the file output.txt.
You need to put quotation marks around the URL (and around the output filename if it contains characters the shell treats specially); otherwise the shell may mangle them and curl won't see the URL or the file name you intended.
Format
curl "url" -o filename
Example
curl "https://en.wikipedia.org/wiki/Quotation_mark" -o output_file.txt
Example 2
curl "https://en.wikipedia.org/wiki/Quotation_mark" > output_file.txt
Just make sure to add quotation marks.
A tad bit late, but I think the OP was looking for something like:
curl -K myfile.txt --trace-ascii output.txt
If you want to save the output to your desktop, the following POST command, run from Git Bash, worked for me (note the line continuations):
curl https://localhost:8080 \
  --request POST \
  --header "Content-Type: application/json" \
  -o "C:\Desktop\test.json"
Related
I am trying to run a snowsql query from the command line and also to pass config file while calling snowsql. On this blog there is an option presented:
--config PATH   SnowSQL config file path.
I tried including this:
#!/bin/bash
snowsql -f training-data.sql \
-o quiet=true \
-o friendly=false \
-o header=false \
-config=./config
When I attempt to run this, I get:
No connection could be found for onfig=./config
It's odd because previously, I could swear the error message was (Note onfig Vs. nfig!):
No connection could be found for nfig=./config
How can I tell snowsql to use ./config as the config file when running the query?
You need a double dash and no equals sign. It should just be:
--config ./config
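Applied to the script from the question (assuming the double-dashed --config option shown in the quoted docs), that would look something like:
#!/bin/bash
snowsql -f training-data.sql \
  -o quiet=true \
  -o friendly=false \
  -o header=false \
  --config ./config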
I am trying to write a shell script that reads a file line by line and executes a command with its arguments taken from the space-delimited fields of each line.
To be more precise, I need to download a file from a URL given in the second column to the path given in the first column, using wget. But I don't know how to read this file and get the values in a script.
File.txt
file-18.log https://example.com/temp/file-1.log
file-19.log https://example.com/temp/file-2.log
file-20.log https://example.com/temp/file-3.log
file-21.log https://example.com/temp/file-4.log
file-22.log https://example.com/temp/file-5.log
file-23.pdf https://example.com/temp/file-6.pdf
Desired output is
wget url[1] -o url[0]
wget https://example.com/temp/file-1.log -o file-18.log
wget https://example.com/temp/file-2.log -o file-19.log
...
...
wget https://example.com/temp/file-6.pdf -o file-23.pdf
Use read and a while loop in bash to iterate over the file line-by-line and call wget on each iteration:
while read -r NAME URL; do wget "$URL" -O "$NAME"; done < File.txt
(Note: wget's -O, with a capital O, names the downloaded file; lowercase -o would write wget's log there instead.)
Turning a file into arguments to a command is a job for xargs:
xargs -a File.txt -L1 wget -O
xargs -a File.txt: read arguments from the File.txt file instead of stdin.
-L1: use (at most) one input line per command invocation.
wget -O: the command to run; the two fields of each line are appended to it, so each call becomes wget -O <filename> <url>. (As above, -O rather than -o so the download, not the log, goes to the named file.)
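If you want to preview the commands before downloading anything, you can (with GNU xargs) put echo in front of wget:
xargs -a File.txt -L1 echo wget -O
This just prints the wget command that would be run for each line of File.txt.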
You can count using a for loop and the output of seq. In bash, you can do arithmetic with $((...)), for example $((C+3)). That gives you:
COUNT=6
OFFSET=18
for C in $(seq "$((COUNT-1))"); do
  wget "https://example.com/temp/file-${C}.log" -O "file-$((C+OFFSET-1)).log"
done
wget "https://example.com/temp/file-${COUNT}.pdf" -O "file-$((COUNT+OFFSET-1)).pdf"
Edit: Sorry, I misread your question. So if you have a file with the file mappings, you can use awk to get the URL and the FILE and then do the download:
cat File.txt | while read -r L; do
  FILE="$(echo "${L}" | awk '{print $1}')"
  URL="$(echo "${L}" | awk '{print $2}')"
  wget "${URL}" -O "${FILE}"
done
We are exporting data into a CSV file using a Unix shell script (with snowsql).
Below is the script:
#!/bin/ksh
snowsql -c newConnection -o log_level=DEBUG \
  -o log_file=~/snowsql_sso_debug.log -r SRVC_ACCT_ROLE -w LOAD_WH \
  -d ETL_DEV_DB -s CTL_DB -q "select * from mytable" -o friendly=False \
  -o header=False -o output_format=pipe -o timing=False > test_file.csv
The output starts with something like this:
|:--------|:-----------|
I don't want these lines in my CSV file. What option do I need to use in my snowsql query?
Appreciate your response.
Thanks.
Providing my comment as an answer, just in case it works better for you.
I would leverage a COPY INTO command to write a CSV file to an internal stage location:
https://docs.snowflake.com/en/sql-reference/sql/copy-into-location.html
And then use a GET statement to pull the file down to your Unix machine.
https://docs.snowflake.com/en/sql-reference/sql/get.html
This gives you greater control over the output format and will likely perform faster, as well.
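As a rough sketch (the stage path, file format options, and local target directory below are placeholders; the table name comes from the question), you could put the two statements in a .sql file and run it with snowsql:
-- unload.sql
COPY INTO @~/unload/mytable FROM mytable
  FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = '|' COMPRESSION = NONE)
  OVERWRITE = TRUE;
-- download the unloaded files to a local directory (create it first if needed)
GET @~/unload/mytable file:///tmp/mytable_export/;
Then run it with:
snowsql -c newConnection -f unload.sql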
I have the following batch file that is executing cURL POST command:
curl -k -X POST -F "upload=#xxx0101.csv" -F "mail=******" -F "pwd=******" -F "orgid=2729" -F "response=JSON" https:************* >> log.txt
SET Today=%Date:~10,4%%Date:~4,2%%Date:~7,2%
mkdir %cd%\Backup-%Today%
move %cd%\*.csv %cd%\Backup-%Today%\
I would like to conditionally execute the latter part of the script (after the cURL command has been executed) based on the success/failure of the cURL POST command/file transfer.
Could you please help me with this.
I am trying to create a tar with the following command:
tar -cvf myFile.tar -X exclude-files.txt myContentDirectory
and my exclude file has the following patterns to exclude:
**/*.bak
**/*.db
**/*.html
But I don't see these file types being excluded from my tar.
What am I doing wrong here?
I found that when I have just one pattern in my exclude-files.txt, let's say only
**/*.bak
it does work, but not with multiple file patterns (each on a new line).
I think this:
*.bak
*.db
*.html
is the correct format for the exclude file. If you want to exclude a particular directory you could do:
some-dir/*.db
Also your command should look like this:
tar -cvf myFile.tar -X exclude-files.txt myContentDirectory
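A quick way to check whether the exclusions actually took effect is to list the archive and grep for the unwanted extensions (no output means nothing slipped through):
tar -tf myFile.tar | grep -E '\.(bak|db|html)$'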
Sorry if this answer is a little late.
tar -cO --exclude=*.bak myContentDirectory | tar -O --delete '*.db' | tar -O --delete '*.html' > myFile.tar
See, what you're doing here is creating the tar but sending it to stdout instead of to a file, then piping that into tar to delete the stuff you don't want (one or more times), and finally writing the output to a file.
You can even test it first like this:
tar -cO --exclude=*.bak myContentDirectory | tar -O --delete '*.db' | tar -O --delete '*.html' | tar -tv
Which will spit out a list of all the files remaining in the archive.
Most likely the order of the command is incorrect.
tar -cvf myFile.tar -X exclude-files.txt myContentDirectory
should be something like
tar cv -X exclude-files.txt -f myFile.tar myContentDirectory
PS. I haven't looked into the filters themselves. Most likely the order of the parameters is the issue.
If the issue is in the filters/patterns, it's easier to test them one by one with the --exclude option.
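For example, with GNU tar you can pass the patterns directly on the command line while experimenting, and move them back into the -X file once they behave as expected:
tar cv --exclude='*.bak' --exclude='*.db' --exclude='*.html' -f myFile.tar myContentDirectory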