Curl batch file - batch-file

I have the following batch file that executes a cURL POST command:
curl -k -X POST -F "upload=@xxx0101.csv" -F "mail=******" -F "pwd=******" -F "orgid=2729" -F "response=JSON" https:************* >> log.txt
SET Today=%Date:~10,4%%Date:~4,2%%Date:~7,2%
mkdir %cd%\Backup-%Today%
move %cd%\*.csv %cd%\Backup-%Today%\
I would like to conditionally execute the latter part of the script (everything after the cURL call) based on the success or failure of the cURL POST command / file transfer.
Could you please help me with this?
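For reference, a minimal sketch of one common approach (the URL below is a placeholder): curl exits with a non-zero code when the transfer fails, and adding --fail also makes HTTP error responses count as failures, so the batch file can test ERRORLEVEL before the backup steps run.
curl -k --fail -X POST -F "upload=@xxx0101.csv" -F "response=JSON" https://example.com/upload >> log.txt
if errorlevel 1 (
    echo Upload failed, skipping backup >> log.txt
    exit /b 1
)
rem the existing SET Today / mkdir / move lines go here and run only on success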

Related

Shell script to execute command on each line of a file with space-delimited fields

I am trying to write a shell script that reads a file line by line and executes a command with its arguments taken from the space-delimited fields of each line.
To be more precise, I need to download the file from the URL given in the second column to the path given in the first column using wget, but I don't know how to read this file and pull the values out in a script.
File.txt
file-18.log https://example.com/temp/file-1.log
file-19.log https://example.com/temp/file-2.log
file-20.log https://example.com/temp/file-3.log
file-21.log https://example.com/temp/file-4.log
file-22.log https://example.com/temp/file-5.log
file-23.pdf https://example.com/temp/file-6.pdf
Desired output is
wget url[1] -o url[0]
wget https://example.com/temp/file-1.log -o file-18.log
wget https://example.com/temp/file-2.log -o file-19.log
...
...
wget https://example.com/temp/file-6.pdf -o file-23.pdf
Use read and a while loop in bash to iterate over the file line-by-line and call wget on each iteration:
while read -r NAME URL; do wget "$URL" -o "$NAME"; done < File.txt
Turning a file into arguments to a command is a job for xargs:
xargs -a File.txt -L1 wget -o
xargs -a File.txt: Extract arguments from the File.txt file.
-L1: Use at most one line of input per command invocation.
wget -o: the command to run; xargs appends the two fields from each line after it, so every invocation becomes wget -o <filename> <url>.
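If you want to preview the commands before running them, a quick dry-run sketch (GNU xargs assumed) is to put echo in front of wget; each line of File.txt then prints as the command it would become:
xargs -a File.txt -L1 echo wget -o
This prints, for example, wget -o file-18.log https://example.com/temp/file-1.log for the first line.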
You can count using a for loop over the output of seq; in bash, you can add numbers using $((C+3)). This will get you:
COUNT=6
OFFSET=18
for C in $(seq "$((COUNT-1))"); do
    wget https://example.com/temp/file-${C}.log -o file-$((C+OFFSET-1)).log
done
wget https://example.com/temp/file-${COUNT}.pdf -o file-$((COUNT+OFFSET-1)).pdf
Edit: Sorry, I misread your question. So if you have a file with the file mappings, you can use awk to get the URL and the FILE and then do the download:
cat File.txt | while read -r L; do
    URL="$(echo "${L}" | awk '{print $2}')"
    FILE="$(echo "${L}" | awk '{print $1}')"
    wget "${URL}" -o "${FILE}"
done

Problem with executing only first element into array [duplicate]

This question already has answers here:
While loop stops reading after the first line in Bash
(5 answers)
Closed 1 year ago.
I thought my problem was trivial, but I cannot figure out why my script only runs for the first element of the array.
I have a Jenkins job (a bash script). The job gathers hostnames and sends SSH commands through a second script, using the gathered info:
rm /tmp/hosts
docker exec -t tmgnt_consul_1 consul members -status=alive | grep -v Node | awk '{print $1}' | cut -d : -f1 >> /tmp/hosts
sed -i '/someunnecessaryinfo/d' /tmp/hosts
echo >> /tmp/hosts
shopt -s lastpipe
while IFS= read -r line; do
echo "host is >>$line<<";
url="http://111.111.111.111:8500/v1/catalog/nodes"
term_IP=`curl -s $url | jq -r --arg Node "${line}" '.[] | select(.Node == "'${line}'" )|.Address' --raw-output`
echo $term_IP
sudo bash -x /home/rtm/t_mgnt/check_fw $term_IP
done < /tmp/hosts
Second script:
#!/bin/bash
term_IP=$1
sudo sshpass -p 'some.pass' ssh -o StrictHostKeyChecking=no user@$term_IP "sudo test -d /root/nv9"
if [ $? != 0 ]; then
sudo sshpass -p 'some.pass' \
scp -n -o StrictHostKeyChecking=no -r /home/rtm/t_mgnt/nv9 user@$term_IP:
sudo sshpass -p 'some.pass' \
ssh -n -o StrictHostKeyChecking=no user@$term_IP "sudo mv nv9 /root/"
sudo sshpass -p 'some.pass' \
ssh -n -o StrictHostKeyChecking=no user@$term_IP "sudo dpkg -i /root/nv9/libudev0_175-0ubuntu9_amd64.deb"
sudo sshpass -p 'some.pass' \
ssh -n -o StrictHostKeyChecking=no user@$term_IP "sudo /root/nv9/DetectValidator"
else
sudo sshpass -p 'some.pass' \
ssh -n -o StrictHostKeyChecking=no user@$term_IP "sudo /root/nv9/DetectValidator"
fi
The job works fine and returns correct values, but only for the first element of the array.
PS - I already searched through this and other sites, and the following answer didn't help me: Shell script while read line loop stops after the first line (I am already using "ssh -n -o").
Perhaps you can point out what I missed.
Possibly this ssh call eats your input:
sudo sshpass -p 'some.pass' ssh -o StrictHostKeyChecking=no user@$term_IP "sudo test -d /root/nv9"
^^^
Try adding -n.
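A minimal sketch of the corrected call in the second script (same variables as above); -n stops ssh from reading the while loop's stdin, and redirecting stdin from /dev/null achieves the same effect:
sudo sshpass -p 'some.pass' ssh -n -o StrictHostKeyChecking=no user@$term_IP "sudo test -d /root/nv9"
# or, equivalently, redirect stdin instead:
sudo sshpass -p 'some.pass' ssh -o StrictHostKeyChecking=no user@$term_IP "sudo test -d /root/nv9" < /dev/null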

Heredoc commands for find . -exec sh {} +

I'm trying to convert a hierarchy of TIFF image files into JPG, and out of boredom, I want to do the find and the ffmpeg in a single file.
So I set find to invoke sh with the -s flag, like this:
#!/bin/sh
export IFS=""
find "$#" -iname 'PROC????.tif' -exec sh -s {} + << \EOF
for t ; do
ffmpeg -y -v quiet -i $t -c:v mjpeg ${t%.*}.jpg
rm $t
done
EOF
However, there are just too many files in the directory hierarchy, so find chopped the filename list into several smaller chunks, and sh -s was only successfully invoked for the first chunk of arguments.
The question being: how can one feed such an in-body command to every sh invocation made by find?
Update
The tag "heredoc" on the question is intended for receiving answers that do not rely on external file or self-referencing through $0. It is also intended that no filename would go through string-array processing such as padding with NUL-terminator or newline, and can be directly passed as arguments.
The heredoc is attached to find's standard input, which the first sh -s invocation inherits and consumes, so later invocations see an empty script. I think your best bet is to not use a heredoc at all, but just pass the script as a string:
#!/bin/sh
find "$#" -iname 'PROC????.tif' -exec sh -c '
for t ; do
ffmpeg -y -v quiet -i "$t" -c:v mjpeg "${t%.*}.jpg" &&
rm "$t"
done
' sh {} +
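As an aside, the literal sh after the quoted script only fills $0 inside sh -c; the filenames found by find land in $1 onward, which is exactly what the bare for t loop iterates over. A tiny illustration of that argument placement (hypothetical path):
sh -c 'echo "zeroth arg: $0 / first arg: $1"' sh /tmp/PROC0001.tif
# prints: zeroth arg: sh / first arg: /tmp/PROC0001.tif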
I am rewriting your code below:
#!/bin/bash
find "$1" -name "PROC????.tif" > /tmp/imagefile_list.txt
while IFS= read -r filepath
do
    path=${filepath%/*}
    imgfile=${filepath##*/}
    jpgFile=${imgfile%.*}.jpg
    cd "$path"
    ffmpeg -y -v quiet -i "$imgfile" -c:v mjpeg "$jpgFile"
    rm -f "$imgfile"
done < /tmp/imagefile_list.txt
If you don't want to change the current directory, you can do it like below:
#!/bin/bash
find "$1" -name "PROC????.tif" > /tmp/imagefile_list.txt
# If you want to list all the .tif files, use the command below instead
# find "$1" -name "*.tif" > /tmp/imagefile_list.txt
while IFS= read -r filepath
do
    path=${filepath%/*}
    imgfile=${filepath##*/}
    jpgFile=$path/${imgfile%.*}.jpg
    ffmpeg -y -v quiet -i "$filepath" -c:v mjpeg "$jpgFile"
    rm -f "$filepath"
done < /tmp/imagefile_list.txt
rm -f /tmp/imagefile_list.txt
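If you'd rather skip the temporary list file, the same loop can read find's output directly (a sketch; it assumes filenames contain no newlines):
#!/bin/bash
find "$1" -name "PROC????.tif" | while IFS= read -r filepath
do
    jpgFile="${filepath%.*}.jpg"
    ffmpeg -y -v quiet -i "$filepath" -c:v mjpeg "$jpgFile" && rm -f "$filepath"
done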

Post data from a remote file

I know how to post data from a local file with curl:
curl -i -X POST -H 'Content-Type: text/plain' -d @foo.txt http://bar.com/foobar
But I would like to do the same from a remote file, for example:
curl -i -X POST -H 'Content-Type: text/plain' -d @http://www.google.fr/robots.txt http://bar.com/foobar
If I try this command, I get the warning Couldn't read data from file, which makes an empty POST.
Is it possible to do that?
I suppose my answer is nothing new to you, but why can't you do this:
curl http://www.google.fr/robots.txt > /tmp/foo.txt
curl -i -X POST -H 'Content-Type: text/plain' -d @/tmp/foo.txt http://bar.com/foobar
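If you'd rather not touch the disk at all, curl can read the POST body from standard input via @-, so the two requests can be piped together (a sketch reusing the same URLs; --data-binary preserves the newlines that -d would strip):
curl -s http://www.google.fr/robots.txt | curl -i -X POST -H 'Content-Type: text/plain' --data-binary @- http://bar.com/foobar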

How to capture cURL output to a file?

I have a text document that contains a bunch of URLs in this format:
URL = "sitehere.com"
What I'm looking to do is run curl -K myfile.txt and capture the response cURL returns in a file.
How can I do this?
curl -K myconfig.txt -o output.txt
Writes the first output received to the file you specify (overwrites if an old one exists).
curl -K myconfig.txt >> output.txt
Appends all output you receive to the specified file.
Note: The -K is optional.
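Since the question feeds curl a config file with -K, the output file names can also live in that config (a sketch; it assumes curl's config-file syntax, where long option names appear without leading dashes and the nth output line is matched to the nth url line):
# myfile.txt
url = "http://sitehere.com"
output = "sitehere.txt"
url = "http://othersite.com"
output = "othersite.txt"
Running curl -K myfile.txt then writes each response to its own file.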
For a single file you can use -O instead of -o filename to use the last segment of the URL path as the filename. Example:
curl http://example.com/folder/big-file.iso -O
will save the results to a new file named big-file.iso in the current folder. In this way it works similar to wget but allows you to specify other curl options that are not available when using wget.
There are several options to make curl write its output to a file:
# saves it to myfile.txt
curl http://www.example.com/data.txt -o myfile.txt
# when the URL contains a glob such as {data,other}, #1 is replaced by the current match, so the filename reflects it
curl "http://www.example.com/{data,other}.txt" -o "file_#1.txt"
# saves to data.txt, the filename extracted from the URL
curl http://www.example.com/data.txt -O
# saves to filename determined by the Content-Disposition header sent by the server.
curl http://www.example.com/data.txt -O -J
Either curl or wget can be used in this case. All 3 of these commands do the same thing, downloading the file at http://path/to/file.txt and saving it locally into "my_file.txt":
wget http://path/to/file.txt -O my_file.txt # my favorite--it has a progress bar
curl http://path/to/file.txt -o my_file.txt
curl http://path/to/file.txt > my_file.txt
Notice the first one's -O is the capital letter "O".
The nice thing about the wget command is it shows a nice progress bar.
You can prove the files downloaded by each of the 3 techniques above are exactly identical by comparing their sha512 hashes. Running sha512sum my_file.txt after running each of the commands above, and comparing the results, reveals all 3 files to have the exact same sha hashes (sha sums), meaning the files are exactly identical, byte-for-byte.
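For example, saving each download under a distinct (hypothetical) name makes the comparison concrete:
wget http://path/to/file.txt -O file_wget.txt
curl http://path/to/file.txt -o file_curl.txt
curl http://path/to/file.txt > file_redirect.txt
sha512sum file_wget.txt file_curl.txt file_redirect.txt   # the three hashes should be identical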
See also: wget command to download a file and save as a different filename
For those of you who want to copy the cURL output to the clipboard instead of writing it to a file, you can use pbcopy by piping the output with | after the cURL command.
Example: curl https://www.google.com/robots.txt | pbcopy. This will copy all the content from the given URL to your clipboard.
Linux version: curl https://www.google.com/robots.txt | xclip
Windows version: curl https://www.google.com/robots.txt | clip
Use --trace-ascii output.txt to output the curl details to the file output.txt.
You need to put quotation marks around the URL and the output file name, as in "URL" -o "file_output"; otherwise the shell may mangle the URL or the file name.
Format
curl "url" -o filename
Example
curl "https://en.wikipedia.org/wiki/Quotation_mark" -o output_file.txt
Example_2
curl "https://en.wikipedia.org/wiki/Quotation_mark" > output_file.txt
Just make sure to add quotation marks.
A tad bit late, but I think the OP was looking for something like:
curl -K myfile.txt --trace-ascii output.txt
If you want to store the output on your desktop, run the command below as a POST request in Git Bash. It worked for me.
curl https://localhost:8080 \
    --request POST \
    --header "Content-Type: application/json" \
    -o "C:\Desktop\test.json"
