Chunked HTTP Post Request: Arduino/telnet - c

I am trying to send chunks of data from an SD card on the Arduino to a server through a chunked POST request. I need chunked transfer encoding because the data will not all fit into the Arduino's memory at once. This is what my chunked POST request looks like:
POST /upload HTTP/1.1
User-Agent: Arduino
Host: ooboontoo
Accept: */*
Transfer-Encoding: chunked
25
this is the text, of this file, wooo!
1d
more test, of this file
0
However, I am getting an error when the server tries to parse the request:
{"25\r\nthis is the text, of this file, wooo!\r\n"=>nil} Invalid
request: Invalid HTTP format, parsing fails.
I tried reading the documentation on chunked requests, but it appears there are multiple ways of doing the same thing, so I am a bit confused about what the correct way is.
Any advice would be appreciated, thanks!
EDIT:
Here is the bash script I wrote to output the above request through telnet.
Run it with: ./testit.sh | telnet
#! /bin/bash
#testit.sh
#Arduino Telnet HTTP POST tests
header="POST /upload HTTP/1.1\n"
header+="User-Agent: Arduino\n"
header+="Host: localhost 3000\n"
header+="Accept: */*\n"
header+="Transfer-Encoding: chunked\n"
thisfile="this is the text, of this file"
thisfilelen=${#thisfile}
printf -v hexlen '%x' $thisfilelen
thisfile2="more test, of this file"
thisfilelen2=${#thisfile2}
printf -v hexlen2 '%x' $thisfilelen2
end="0"
echo "open localhost 3000"
sleep 2
echo -e $header
echo -e $hexlen
echo -e $thisfile
echo -e $hexlen2
echo -e $thisfile2
echo -e $end
sleep 2
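For reference, here is a minimal sketch of what a well-formed chunked request looks like on the wire (assumptions: the server is listening on localhost 3000, and the two chunk bodies are the strings from the script above). Note the CRLF line endings, the blank line that terminates the headers, the CRLF after every chunk-size line and every chunk body, and the final blank line after the terminating 0-size chunk:
#!/bin/bash
# chunked-sketch.sh -- emit one well-formed chunked POST on stdout
body1="this is the text, of this file, wooo!"
body2="more test, of this file"
printf 'POST /upload HTTP/1.1\r\n'
printf 'User-Agent: Arduino\r\n'
printf 'Host: localhost:3000\r\n'
printf 'Accept: */*\r\n'
printf 'Transfer-Encoding: chunked\r\n'
printf '\r\n'                                  # blank line ends the header block
printf '%x\r\n%s\r\n' "${#body1}" "$body1"     # chunk = hex length, CRLF, data, CRLF
printf '%x\r\n%s\r\n' "${#body2}" "$body2"
printf '0\r\n\r\n'                              # zero-length chunk ends the body
Piping it through something like nc localhost 3000 avoids telnet's own line-ending translation.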

Related

How to save the http response to a curl GET request

My C program performs a GET request using curl, redirecting the output to a file so that later on I can open, parse, and use its content inside my code.
snprintf(full_data, sizeof(full_data),
         "curl -m 10 -k -X GET \"%s/"
         "data/json/get?key=key"
         "&fields=%s&separator=,\""
         " -H \"Authorization: Bearer %s\""
         " > /home/pi/Documents/file.txt",
         server,
         data_to_get,
         authtkn
);
FILE *pf;
pf = popen(full_data, "r");
if (!pf)
    return -1;
pclose(pf);
Now, I want to make sure that my request status is OK (200/201 etc.), but I don't know how to get the HTTP response code (and save it somewhere so that I can use it inside my code).
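One common approach is to have curl itself print the status code with -w '%{http_code}' and read that single value back through popen(). A sketch of the command the C code would need to build (URL, token, and file path are the ones from the question; shell variables stand in for the snprintf format arguments):
# write the body to the file, print only the HTTP status code on stdout,
# so that reading the pipe returned by popen() picks up e.g. "200"
curl -m 10 -k -s -o /home/pi/Documents/file.txt \
     -w '%{http_code}' \
     -H "Authorization: Bearer $authtkn" \
     "$server/data/json/get?key=key&fields=$data_to_get&separator=,"
In the C program that means replacing the > redirection with -o and -w in the snprintf format string, then reading one short line from pf (for example with fgets) before calling pclose().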

Bash: works at the command line but getting 'curl: (1) Protocol ""https" not supported or disabled in libcurl' when used in a script

I got stuck with a situation where a curl request works fine on the command line but does not work in a script, giving weird errors. Here is the relevant part of the script:
#we already got the $token
#vars defining curl parameters
dc="NA EU AU SEA JP"
curl_url='https://example.com/api/config/v1/'
curl_request='alertingProfiles?'
curl_properties='tenant=all&stage=prd&cluster='
curl_auth='"Authorization: Bearer '"$token"'"'
for k in $dc; do
# Download the json for each DC using the curl request
curl_combined=\""$curl_url$curl_request$curl_properties$k"\"; echo "$curl_combined"
curlCMD=( -vv --location --request GET "$curl_combined" -H "$curl_auth" -o \""$k.json"\")
echo "${curlCMD[#]}"
curl "${curlCMD[#]}"
done
When I run this in a script I see the following output:
First, the output from the first echo command
"https://example.com/api/config/v1/alertingProfiles?tenant=all&stage=prd&cluster=NA"
Then, the output from the second echo command, which is what curl should take as parameters. I've skipped the bearer token, but trust me, it is not a problem here.
-vv --location --request GET "https://example.com/api/config/v1/alertingProfiles?tenant=all&stage=prd&cluster=NA" -H "Authorization: Bearer " -o "NA.json"
And I am getting the following output:
Note: Unnecessary use of -X or --request, GET is already inferred.
* Protocol ""https" not supported or disabled in libcurl
* Closing connection -1
curl: (1) Protocol ""https" not supported or disabled in libcurl
Please note two double quotes in the message above in front of https.
If I simply copy/paste the above second echo output in the command line right after curl and execute, it works as expected and downloads JSON in the file.
If I remove " from curl_combined var, the request is sent without double quotes and now I am getting the message curl: (56) Unexpected EOF from the script.
If I copy/paste the second echo output (without double quotes), supply it to curl, and execute from the command line, I am getting 403 message until I put double or single quotes around the https request.
In short - it seems that double quotes around the https request are necessary, but for some reason curl does not take them.
So what went wrong here? I don't even mind writing the constructed URL to a file and reading from it, or using a herestring if that helps, but I want to know what exactly went wrong here.
Thanks.
P.S. Removing --location or --request GET or both does not change the outcome.
The script is passing the "${curlCMD[@]}" array as parameters to curl. This approach handles the correct expansion of the URL, regardless of any special characters in it. There is no need to wrap curl_combined in additional quotes using the \" construct.
Suggest setting curl_combined like this instead:
curl_combined="$curl_url$curl_request$curl_properties$k"
echo "$curl_combined"
Alternatively, it also works to replace the array-based curl request with the following explicit command:
curl -vv \
--location \
--request GET \
"$curl_combined" \
--header "Authorization: Bearer ${token}" \
-o "$k.json"
But it is ugly, to put it politely...
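Putting both suggestions together, a sketch of the loop that keeps the array but drops the extra \" quoting (including the quotes embedded in curl_auth) could look like this:
for k in $dc; do
    # no literal quotes baked into the values; the quoted array expansion does the quoting
    curl_combined="$curl_url$curl_request$curl_properties$k"
    echo "$curl_combined"
    curlCMD=( -vv --location --request GET "$curl_combined" -H "Authorization: Bearer $token" -o "$k.json" )
    curl "${curlCMD[@]}"
done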

Add content of a text file to array in Bash

I am trying to execute curl for 300 requests at the same time using an array, but I do not know how to bring the content of my file into the array. The code I wrote is below.
array=();
for i in {1..300}; do
array+=( file.txt ) ;
done;
curl "${array[#]}";
The file.txt includes the following content:
--next 'https://d16.server.com/easy/api/OmsOrder' -H 'Connection: keep-alive'
-H 'Pragma: no-cache' -H 'Cache-Control: no-cache'
-H 'Accept: application/json, text/plain, */*' -H 'Sec-Fetch-Dest: empty'
-H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36'
-H 'Content-Type: application/json' -H 'Origin: https://d.server.com'
-H 'Sec-Fetch-Site: same-site' -H 'Sec-Fetch-Mode: cors'
-H 'Referer: https://d.server.com/' -H 'Accept-Language: en-US,en;q=0.9,fa;q=0.8'
--data-binary '{"isin":"IRO3TPEZ0001","financeId":1,"quantity":50000,"price":5400}'
--compressed
array=();
for i in {1..300}; do
array+=( $(cat file.txt|head -$i | tail -1) );
done;
curl "${array[#]}";
You have a file with shell-formatted words that you are trying to repeat over and over in a command.
Since the words are shell-formatted, you'll need to interpret them, e.g. using eval:
contents=$(< file.txt)
eval "words=( $contents )"
arguments=()
for i in {1..300}
do
arguments+=( "${words[@]}" )
done
curl "${arguments[@]}"
A more robust design would be to not use shell quoting and instead format one argument per line:
--next
https://d16.server.com/easy/api/OmsOrder
-H
Connection: keep-alive
-H
Pragma: no-cache
You can then use the above code and replace the eval line with:
mapfile -t words < file.txt
The answer to this question should have been "put each request into a file, one option per line, and use -K/--config to include the file into the command line." That certainly should allow for 300 requests in a single curl command without exceeding the limit on the size of a shell command. (By "request" here, I mean "a URL with associated options". If you only want to use 300 URLs without modifying any other option, you can easily do that by just listing the URLs, on the command line if they aren't too long or otherwise in a file.)
Unfortunately, it doesn't work. I believe that it is supposed to work, and the fact that it doesn't is a bug. If you specify multiple -K options and each of them refers to a file which includes one request and the --next option, then curl will execute only the first and last file. If you instead put the --next options on the command-line in between the -K options, all the request options will be merged, and in addition curl will complain about a missing URL.
However, you can use the -K option by concatenating all 300 requests and passing them through stdin, using -K - to read from stdin. To test that, I created the file containing a single request:
$ cat post-req
--next
-H "Connection: keep-alive"
-H "Pragma: no-cache"
-H "Cache-Control: no-cache"
-H "Accept: application/json, text/plain, */*"
-H "Sec-Fetch-Dest: empty"
-H "User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36"
-H "Content-Type: application/json"
-H "Origin: https://d.server.com"
-H "Sec-Fetch-Site: same-site"
-H "Sec-Fetch-Mode: cors"
-H "Referer: https://d.server.com/"
-H "Accept-Language: en-US,en;q=0.9,fa;q=0.8"
--data-binary "{\"isin\":\"IRO3TPEZ0001\",\"financeId\":1,\"quantity\":50000,\"price\":5400}"
--compressed
--url "http://localhost/foo"
and then set up a little webserver that just returns the requested path, and invoked curl with:
for i in $(seq 300); do cat post-req; done | curl -K -
Indeed, all three hundred requests are passed through.
For what it's worth, I reported the bug as https://github.com/curl/curl/issues/5120, and many thanks to Daniel Stenberg for being incredibly responsive by committing a fix in less than two days. So probably the issue will be resolved in the next curl release.

Send a POST request with some data, including a file, using Curl

How can I include a file in a curl request from my working directory?
Below I've got a POST request that includes data for "first_name" and for "last_name", but now I need to add in the input for a file. There are examples out there where someone is ONLY sending a file along, but I'm trying to send one or more files, plus other data.
curl
-H "Content-Type: application/json"
-d '{ first_name: "Donny", last_name: "P", my_file: ???? }'
https://sender.blockspring.com/api/blocks/319bfef4aad7f3477745048a2da3ae6a?api_key=2e0ef0c216078d60630d1321e67b243a
This can only be done with a multipart request.
Manually building a multipart may be complex, so curl has a built-in -F option.
curl localhost:8000 -F "my_file=@file.ext" -F "name=daniel;last=P" -v
from man curl
-F, --form
(HTTP) This lets curl emulate a filled-in form in which a user has pressed the submit button. This causes curl to POST data using the Content-Type multipart/form-data according to RFC 2388. This enables uploading of binary files etc.
To force the 'content' part to be a file, prefix the file name with an @ sign. To just get the content part from a file, prefix the file name with the symbol <. The difference between @ and < is then that @ makes a file get attached in the post as a file upload, while the < makes a text field and just gets the contents for that text field from a file.
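Applied to the original request, a sketch that sends the two text fields plus one file as multipart form data (the field name my_file and the local file name data.txt are assumptions; the URL is the one from the question):
curl -F "first_name=Donny" \
     -F "last_name=P" \
     -F "my_file=@data.txt" \
     "https://sender.blockspring.com/api/blocks/319bfef4aad7f3477745048a2da3ae6a?api_key=2e0ef0c216078d60630d1321e67b243a"
Additional files can be attached by repeating -F with another @-prefixed file name.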

why is checking a URL failing when run through icinga?

I created my own command to check a specific URL
define command{
command_name check_url
command_line /usr/lib/nagios/plugins/check_http -f follow -H '$HOSTNAME$' -I '$HOSTADDRESS$' -u '$ARG1$'
}
If I run my command from the command line, it works:
/usr/lib/nagios/plugins/check_http -f follow -H www.example.com -u http://www.example.com/server-status
HTTP OK: HTTP/1.1 200 OK - 4826 bytes in 0.011 second response time |time=0.010625s;;;0.000000 size=4826B;;;0
But when run through Icinga, I'm getting
HTTP WARNING: HTTP/1.1 404 NOT FOUND - 314 bytes in 0.011 second response time
My guess is that for the check_http plugin's -u option you should provide the URL path appended after the server name, not the whole URL.
Example:
/usr/lib/nagios/plugins/check_http -f follow -H www.example.com -u /server-status
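So the fix is on the Icinga side: pass only the path as the argument to the existing check_url command. A sketch of a matching service definition (the host name and the generic-service template are assumptions):
define service{
use                  generic-service
host_name            www.example.com
service_description  server-status
check_command        check_url!/server-status
}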
Your manual test is not equivalent to your command definition.
The distinction with -H/-I is subtle, but very important.
When I have problems like this, where Icinga is abstracting away exactly how it is executing the command, I find it helpful to find out precisely what Icinga is executing. I would accomplish this as follows:
Move check_http to a temporary location
# mv /usr/lib/nagios/plugins/check_http /usr/lib/nagios/plugins/check_http_actual
Make a bash script that Icinga will call instead of the actual check_http script
# vi /usr/lib/nagios/plugins/check_http
In that file, create this simple bash script, which simply echoes the command-line arguments it was called with, then exits:
#!/bin/bash
echo "$@"
Then of course, make that bash script executable:
# chmod +x /usr/lib/nagios/plugins/check_http
Now in Icinga, run the check_http command. At that point, the return status shown in the Icinga web interface will show exactly how Icinga is calling check_http. Seeing the raw command, it should be obvious as to what Icinga is doing wrong. Once you correct Icinga's mistake, you can simply move the original check_http script back into place:
# mv /usr/lib/nagios/plugins/{check_http_actual,check_http}
