Does anyone have any experience posting image files with cURL in C?
I am writing a program to post to a Facebook-type web service, and everything is going fine except when I attempt to post image files.
There's a special format the server needs, or it will not accept the post. Something like this:
---webformkitXXXXXXXX\r\n
filename"somefile.jpg"\r\n
JPEG or IMAGE FILE HERE (in binary)
---webformkitXXXXXXXX\r\n
END----
So when I am finally able to memcpy together the different pieces I need, I can save the result to a file and it looks just fine. But I can see from packet captures that cURL doesn't like taking the binary data: it appears to truncate the buffer at the first '\0', because it only sends about 300 bytes when it should be sending 80K.
I've been using this: curl_easy_setopt(curl, CURLOPT_POSTFIELDS, data);
Thank You!
CURLOPT_POSTFIELDS on its own treats the data as a NUL-terminated string, so you need to tell libcurl the real length with CURLOPT_POSTFIELDSIZE before the transfer. Alternatively, plug in a read callback you have written via CURLOPT_READFUNCTION/CURLOPT_READDATA to supply the POST data yourself.
I'm trying to read a binary file from a local filesystem, send it over HTTP, then in a different application I need to receive the file and write it out to the local file system, all using Apache Camel.
My (simplified) client code looks like this:
from("file:<path_to_local_directory>")
.setHeader(Exchange.HTTP_PATH, header("CamelFileNameOnly"))
.setHeader(Exchange.CONTENT_TYPE, constant("application/octet-stream"))
.to("http4:localhost:9095");
And my server code is:
restConfiguration()
.component("spark-rest")
.port(9095);
rest("/{fileName}")
.post()
.consumes("application/octet-stream")
.to("file:<path_to_output_dir>?fileName=${header.fileName}");
As you can see, I'm using the Camel HTTP4 Component to send the file and the Spark-Rest component to receive it.
When I run this, and drop a file into the local directory, both the client and server applications work and the file is transmitted, received and written out again. The problem I'm seeing is that the original file is 5860kb, but the received file is 9932kb. As it's a binary file it's not really readable, but when I open it in a text editor I can easily see that it has changed and many characters are different.
It feels like it's being treated as a text file and it's being received and written out in a different character set to that in which it is written. As a binary file, I don't want it to be treated as a text file which is why I'm handling it as application/octet-stream, but this doesn't seem to be honoured. Or maybe it's not a character set problem and could be something else? Plain text files are transmitted and received correctly, with no corruption, which leads me to think that it is the special characters in the binary file that are causing the problem.
I'd like to resolve this so that the received file is identical to the sent file, so any help would be appreciated.
I had the same issue. By default, Camel will serialize the body as a String when producing to the HTTP endpoint.
You should explicitly convert the GenericFile body to byte[] with a simple .convertBodyTo(byte[].class) before your .to("http4:..").
I am trying to put a file into a Google Cloud Storage (GCS) bucket from the command line. At a later stage this shall be used in a deployed script at the user end without any type of user-visible authentication.
So far I generate a signed url like this:
gsutil signurl -p notasecret -m PUT -d 1d myserviceaccount.p12 gs://mybucket/testfile
which will generate something like
https://storage.googleapis.com/mybucket/testfile?GoogleAccessId=myserviceaccount@developer.gserviceaccount.com&Expires=1430963040&Signature=gMf2h95bNmolizUGYrsQ%2F%2F%2FiHxW14I%2F0EOU3ZSFWtfCwNqSyok3iweQiuPxYXH4b26FeDSrmFOXB58%2B%2B%2BiAOJ%2B1gdLC9Y%2BkeUdbrjH0eGTW0NVsM1AWY2LsQ3dYf5Ho%2Bos1Fk26EsLJlD096Ku9aWqLW%2FpL%2FBSsUIfHijrFJPdI%3D
The next step (at the user end) would be curl uploading the file with a PUT request. Like so:
curl -X PUT --data-binary @testfile 'https://storage.googleapis.com/mybucket/testfile?GoogleAccessId=myserviceaccount@developer.gserviceaccount.com&Expires=1430963040&Signature=gMf2h95bNmolizUGYrsQ%2F%2F%2FiHxW14I%2F0EOU3ZSFWtfCwNqSyok3iweQiuPxYXH4b26FeDSrmFOXB58%2B%2B%2BiAOJ%2B1gdLC9Y%2BkeUdbrjH0eGTW0NVsM1AWY2LsQ3dYf5Ho%2Bos1Fk26EsLJlD096Ku9aWqLW%2FpL%2FBSsUIfHijrFJPdI%3D'
I can get this to work with an existing file in the bucket and a GET request (for downloading), but it does not seem to work for uploading. curl shows me the server's response with error messages like this:
<?xml version='1.0' encoding='UTF-8'?>
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided.
Check your Google secret key and signing method.</Message>
<StringToSign>PUT
application/x-www-form-urlencoded
1430963040
/mybucket/testfile</StringToSign>
</Error>
And this makes sense to me, as obviously I am not just making a bare PUT request, but one for a particular file of a specific size, whereas the signature computed by 'gsutil signurl' would not know about these details at the time it is computed.
Somehow I was under the impression (e.g., based on the last usage case described in gsutil signurl documentation and also in the post How to allow anonymous uploads to cloud storage) that it should be possible to generate a generic signed url for uploading purposes and then use it later. Am I just mistaken about this point or is there a way to fix the curl request?
Any thoughts about this are appreciated. However, I'd like this to work with "minimal tools", i.e., ideally shell and curl only, but no other programming languages.
EDIT:
Organising one's thoughts by formulating the exact problem is the first step towards the solution. I realise now that
curl -X PUT -T - [request-url] < testfile
does actually solve the immediate problem. However, this means multiple users would write to the same file if they use the same signed url. The documentation suggests you can omit the object name in the creation of the signed url, i.e., use
gsutil signurl -p notasecret -m PUT -d 1d myserviceaccount.p12 gs://mybucket/
This, supposedly, would allow anyone with the resulting signed URL to put any object of any type into my bucket. Only I cannot get this to work, as I don't see how you can then tell GCS which object you are actually writing to.
This was driving me mad too. It turns out it was the --data-binary part of the curl command: that option makes curl send Content-Type: application/x-www-form-urlencoded, which the server includes in the string-to-sign (you can see it in the error response), while gsutil signed the URL without any content type, so the signatures no longer match. Try this instead:
curl -X PUT --upload-file me.jpeg $SIGNED_URL
If the resource does not specify a single object, you can name one per request by adding a URL parameter with the object's name. For example:
curl -X PUT -T - [request-url]?name=[object-name] < testfile
This definitely works with storage/v1, although I have not tried it myself with a signed URL yet.
I encountered a similar problem (403 Forbidden).
It turned out that the JSON library I use to marshal each response replaced & with \u0026 for security reasons. So the URL was correct in the program but invalid on the client side.
So I guess there might be a similar string-encoding bug inside the Signature query string of your URL, since an error in the signature string is much harder to spot than my \u0026 was.
I am trying to send a camera settings file through a .cgi with wget. I have this code:
wget --http-user=admin --http-password=aaa --post-file=file.bin http://192.168.1.54/restore.cgi
but the answer is: HTTP request sent, awaiting response... No data received. Retrying.
Do you have any ideas where the problem could be?
Thanks
You are showing a file named 'file.bin', which suggests it is a raw binary file. wget can only send URL-encoded data, so you will need to reverse-engineer the form fields normally sent to restore.cgi and duplicate that format in 'file.bin'.
From wget(1):
In particular, --post-file is not for transmitting files as form
attachments: those must appear as "key=value" data (with appropriate
percent-coding) just like everything else. Wget does not currently
support "multipart/form-data" for transmitting POST data; only
"application/x-www-form-urlencoded".
I have a problem in my Grails application, which reads a txt file stored on disk and then sends it to the client.
Right now I achieve this by reading the file line by line and storing the lines in a String array.
After reading all lines from the file, the String array is sent to the client as JSON.
In my GSP's JavaScript I get that array and display its contents in a textarea as
textarea.value = arr.join("\n\n");
This operation repeats every minute, which is achieved using Ajax.
My problem is that the txt file the server is reading consists of about 10,000 to 20,000 lines.
Reading all those 10,000+ lines and sending them as an array causes problems in IE8, which hangs and finally crashes.
Is there any other easy way of sending the whole file over HTTP and displaying it in the browser?
Any help would be greatly appreciated.
Thanks in advance.
EDIT:
On Googling I found that streaming the file is a better way to display its contents in a browser, but I couldn't find an example of how to do it.
Can anyone share an example?
I'm working on a simple server that has to work with a browser. When I give it some command, it replies with some HTML, so I can see the list of files in a folder, etc. I can access my server using localhost:port/somecommand.
Now I'm working on downloading a file from the local hard disk. What I want to do is enter a URL like localhost:port/download/filepath and make the browser download it. When I create the reply I include everything the browser needs to understand that there is a file to download, and in fact I get the classical pop-up window asking me to download the file. But what I receive is bigger than the original file on the hard disk, and the file is corrupted.
Here is the code.
This is the HTTP header I send back:
HTTP/1.0 200 OK
Date: Tue Apr 10 16:23:55 2012
Content-Type: application/octet-stream
Content-Disposition: attachment; filename=mypic.jpg
Content-Length: 2574359
//empty line here
(I followed the description of the Content-Disposition header field.)
Then I read the file and send the header first, followed by what I read from the file:
int file = open(name, O_RDONLY);
void *data = malloc(buf.st_size + 1);    // buf.st_size is the correct file size
size_t readed = read(file, data, buf.st_size);
send(sd_current, html, DIRSIZE, 0);      // html is the header string shown above
send(sd_current, data, readed, 0);
As a result, the file I download using localhost:port/download/filepath is bigger than the original and therefore corrupted, but I can't figure out why. Can someone help me?
Things to check:
Is DIRSIZE really the size of the HTTP header? (The size should vary, and capitals normally indicate a constant.)
Is the read successful?
How is the file corrupted?
Are the line endings in the HTTP header correct? They should be \r\n.
EDIT:
If DIRSIZE is not the size of the header, then the rest of the buffer (containing NULs or junk) will be sent to the other side.
So the other side sees the HTTP header, then starts receiving data: first the rest of the html buffer, then the contents of the file.
Then, depending on the receiver, it either stops at the Content-Length size or carries on while the stream is still delivering data.
Does that match your result: some junk at the beginning, followed by the expected file contents?
Also, the 2574359 in Content-Length should be the size of your jpg file, but I see nothing in your code that puts that value into the header. Is buf.st_size == 2574359?