libcurl C API > POST ranges of data obtained from URL

How can I POST ranges of data obtained from a URL rather than from a file? Say I need to read bytes 150-250000 from http://localhost/video.mp4 (A) and POST that data to http://172.32.144.12 (B) in chunks, so that it looks as though the data is streamed from (A) to (B).

Why not simply start downloading from A (using a range request if you don't want the whole thing), and once you have received enough data, pass it along to site B in a separate request? Meanwhile you continue downloading from A into an alternate buffer, and so on.
You can do this with two threads, or even in a single thread using libcurl's multi interface.
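The buffering half of that approach can be sketched as below. This is only a sketch, not libcurl's API: fwd_buf, CHUNK_THRESHOLD, and ready_to_forward are made-up names. In a real program, on_download would be registered on the download handle with CURLOPT_WRITEFUNCTION (with CURLOPT_RANGE set to "150-250000"), and the accumulated bytes would feed the upload handle toward B.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical forwarding buffer: the download handle's write callback
 * appends here; once CHUNK_THRESHOLD bytes accumulate, that slice would
 * be handed to the upload request targeting site B. */
#define CHUNK_THRESHOLD 4096

struct fwd_buf {
    char  *data;
    size_t len;
    size_t flushed;   /* bytes already passed on to site B */
};

/* Matches the CURLOPT_WRITEFUNCTION signature. */
size_t on_download(char *ptr, size_t size, size_t nmemb, void *userdata)
{
    struct fwd_buf *b = (struct fwd_buf *)userdata;
    size_t n = size * nmemb;
    char *grown = realloc(b->data, b->len + n);
    if (!grown)
        return 0;             /* returning less than n aborts the transfer */
    memcpy(grown + b->len, ptr, n);
    b->data = grown;
    b->len += n;
    return n;                 /* tell libcurl we consumed everything */
}

/* How much unflushed data is ready to send on to B? */
size_t ready_to_forward(const struct fwd_buf *b)
{
    size_t pending = b->len - b->flushed;
    return pending >= CHUNK_THRESHOLD ? pending : 0;
}
```

With the multi interface, both the download from A and the upload to B can be driven from the same loop, checking ready_to_forward between calls to curl_multi_perform.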

Related

Pipe data from curl_easy_perform

I am trying to use libcurl to pipe data from an arbitrary (user given) url to my application:
The https.c example shows how to retrieve content from a URL and immediately write it somewhere as it comes in, for example to stdout or a file.
The sendrecv.c example shows how to set up a pipe by making the application repeatedly call curl_easy_recv to retrieve chunks of data.
However I don't understand how to combine the two. It seems like curl_easy_recv only works when:
/* Do not do the transfer - only connect to host */
curl_easy_setopt(curl, CURLOPT_CONNECT_ONLY, 1L);
When this option is set, curl_easy_perform does not retrieve any data; it just connects. In the example, the application proceeds by manually sending an HTTP request using curl_easy_send. However, I just want to retrieve the data specified in the URL, without writing manual HTTP or FTP commands.
Is there a way to use curl_easy_recv, or something similar, in combination with curl_easy_perform's default behavior of automatically retrieving the content specified in the URL?
First, curl_easy_send and curl_easy_recv are really only meant to be used if you're not speaking one of the protocols libcurl already supports, so in most cases they are not the right tool. It doesn't sound like you need them.
curl_easy_perform() transfers the given URL and invokes your CURLOPT_WRITEFUNCTION callback as soon as data arrives, at which point you can use that data or send it wherever you choose. Is that not enough?
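A minimal sketch of such a write callback, assuming the goal is to pipe the body to a FILE* (stdout, a file, or one end of a pipe). The setopt calls shown in the comment are the standard way to hook it up; pipe_chunk is a made-up name.

```c
#include <stdio.h>

/* Write-callback sketch: stream each chunk straight to a FILE* as
 * curl_easy_perform() delivers it. Hooked up with:
 *   curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, pipe_chunk);
 *   curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);
 */
size_t pipe_chunk(char *ptr, size_t size, size_t nmemb, void *userdata)
{
    FILE *out = (FILE *)userdata;
    size_t n = fwrite(ptr, size, nmemb, out);
    return n * size;   /* bytes handled; anything less aborts the transfer */
}
```

No CURLOPT_CONNECT_ONLY, no manual protocol commands: libcurl performs the whole transfer and the callback sees the body as it arrives.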

How to send a variable length list or array of parameters with an HTTP POST request in JMeter?

I'm making JMeter load tests for an ASP.NET web application, and the tests should post some data to the server. Specifically, they should post grades for all the pupils in a class. However, the tests are supposed to be general, so that they can be run towards different schools with a small change in the configuration.
However, this creates a problem when the grades are posted, since the number of parameters in the post request (pupils in the class) can vary from run to run, or even from thread to thread. Currently I only know how to pass parameters through the HTTP request form as shown below:
However, in the next thread there could be a saveModel.PupilOrderAndBehaviours[2] or even up to 30. I have all of this information available directly from csv files. That is, I can tell JMeter ahead of time how many pupils will be in each class, and what grades each of them should receive, so I do not need to read this out from previous responses or anything like that.
Is there a way, potentially using BeanShell, I can configure JMeter to do this correctly?
It can be done with a Beanshell PreProcessor:
// the count could also come from a CSV-driven variable, e.g.
// int count = Integer.parseInt(vars.get("pupilCount"));
int count = 10;
for (int i = 1; i <= count; i++) {
    sampler.addArgument("Parameter" + i, "Value" + i);
}
This adds 10 parameters to the sampler at run time.
For a worked example, see http://theworkaholic.blogspot.com/2010/03/dynamic-parameters-in-jmeter.html

How to implement lossless URL shortening

First, a bit of context:
I'm trying to implement URL shortening on my own server (in C, if that matters). The aim is to avoid long URLs while being able to restore a context from a shortened URL.
Currently I have an implementation that creates a session on the server, identified by an ID. This works, but it consumes memory on the server, which is undesirable since it's an embedded server with limited resources, and the device's main purpose isn't serving web pages but doing other cool stuff.
Another option would be to use cookies or HTML5 webstorage to store the session information in the client.
What I'm searching for, though, is a way to pack the shortened URL's parameters into a single parameter appended to the URL, from which the original parameters can be reconstructed.
My first thought was to Base64-encode all the parameters into one, but this produces an even longer URL.
Currently I'm thinking of compressing the URL parameters (using some compression algorithm like zip, bz2, ...), Base64-encoding the compressed binary blob, and using that as the context. When I get the parameter back, I can Base64-decode it, decompress the result, and have the original parameters in hand.
The question is: is there any other possibility I'm overlooking that I could use to losslessly compress a large list of URL parameters into a single, smaller one?
Update:
After the comments from home, I realized I had overlooked that compression itself adds overhead, which can make the compressed data larger than the original (for example, the headers a zip container adds to the content).
So (as home states in his comments) I'm starting to think that compressing the whole list of URL parameters only pays off once the parameters exceed a certain length; otherwise I could end up with an even longer URL than before.
You can always roll your own compression. If you simply apply some Huffman coding, the result will usually be smaller (but Base64-encoding it afterwards grows it again by about a third, so the net effect may not be a win).
I'm using a custom compression strategy on an embedded project I work on: first LZJB (a Lempel-Ziv derivative; follow the link for a really tight implementation, from OpenSolaris), then Huffman coding of the compressed result.
The LZJB algorithm doesn't perform well on very short inputs, though (around 16 bytes), in which case I leave the data uncompressed.
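The Base64 overhead mentioned above is easy to see concretely. Below is a minimal URL-safe Base64 encoder sketch (RFC 4648 "base64url" alphabet, unpadded); every 3 input bytes become 4 output characters, so encoding alone grows the compressed blob by about a third, which is why compression must win back more than that to be worthwhile.

```c
#include <stddef.h>

/* URL-safe Base64 alphabet: '-' and '_' replace '+' and '/' so the result
 * can sit in a URL parameter without percent-encoding. */
static const char b64url[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";

/* Encodes n input bytes into out (caller provides >= 4*((n+2)/3)+1 bytes).
 * Returns the encoded length; output is NUL-terminated and unpadded. */
size_t b64url_encode(const unsigned char *in, size_t n, char *out)
{
    size_t i, o = 0;
    for (i = 0; i + 2 < n; i += 3) {          /* full 3-byte groups */
        unsigned v = (unsigned)(in[i] << 16) | (in[i+1] << 8) | in[i+2];
        out[o++] = b64url[(v >> 18) & 63];
        out[o++] = b64url[(v >> 12) & 63];
        out[o++] = b64url[(v >> 6) & 63];
        out[o++] = b64url[v & 63];
    }
    if (n - i == 1) {                         /* 1 trailing byte -> 2 chars */
        unsigned v = (unsigned)(in[i] << 16);
        out[o++] = b64url[(v >> 18) & 63];
        out[o++] = b64url[(v >> 12) & 63];
    } else if (n - i == 2) {                  /* 2 trailing bytes -> 3 chars */
        unsigned v = (unsigned)(in[i] << 16) | (in[i+1] << 8);
        out[o++] = b64url[(v >> 18) & 63];
        out[o++] = b64url[(v >> 12) & 63];
        out[o++] = b64url[(v >> 6) & 63];
    }
    out[o] = '\0';
    return o;
}
```

So for the scheme in the question, the compressed blob needs to shrink the parameter list by more than 25% before compress-then-encode beats leaving the parameters alone.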

Preventing download of video files from outside the web server (via .htaccess)?

I've got video files stored as:
www.example.com/video_files/abc.flv
My application uses FlowPlayer which streams this video file to the end user.
How can I restrict access to these files to the application within the server, and prevent people from typing the file URL directly into the browser and downloading the file?
The short answer is you can't, if the media is streamed unprotected. People with packet sniffers will always be able to dump the stream as it's sent to their browser.
If this is really important to you, you should investigate a DRM solution. (But note that DRM is not unbreakable either.)
There is no way to prevent it completely.
The best you can do is add a hash and a timestamp to the link.
For example www.example.com/video_files/abc.flv => www.example.com/video_files/12345678901234567890123456789012/12345678/abc.flv
12345678901234567890123456789012 is the hash
12345678 is the timestamp until which the link is valid.
As the hash function you could use, for example, something like:
hash = md5(abc.flv12345678somesecretkey)
After receiving a request, the web server must check the hash and the timestamp, and only then serve the file to the user (or throw an error).
For each user you generate their own URL with a short lifetime, so users can't redistribute URLs because they expire very quickly.
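The scheme above can be sketched in C as follows. To keep the example self-contained, a toy FNV-1a hash stands in for the md5() the answer suggests (a real deployment should use MD5 or, better, an HMAC); sign_url, check_url, and the "key" secret are made-up names for illustration.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Toy stand-in for md5(): FNV-1a over the concatenated message.
 * Used here only to keep the sketch dependency-free. */
uint64_t toy_hash(const char *s)
{
    uint64_t h = 1469598103934665603ULL;     /* FNV offset basis */
    while (*s) {
        h ^= (unsigned char)*s++;
        h *= 1099511628211ULL;               /* FNV prime */
    }
    return h;
}

/* Builds /video_files/<hash>/<expires>/<file>, hashing file + expiry +
 * secret exactly as in hash = md5(abc.flv12345678somesecretkey). */
void sign_url(char *out, size_t outlen,
              const char *file, long expires, const char *secret)
{
    char msg[256];
    snprintf(msg, sizeof msg, "%s%ld%s", file, expires, secret);
    snprintf(out, outlen, "/video_files/%016llx/%ld/%s",
             (unsigned long long)toy_hash(msg), expires, file);
}

/* Server-side check: reject expired links, then recompute and compare. */
int check_url(const char *file, long expires, const char *secret,
              uint64_t presented_hash, long now)
{
    char msg[256];
    if (now > expires)
        return 0;                            /* link has expired */
    snprintf(msg, sizeof msg, "%s%ld%s", file, expires, secret);
    return toy_hash(msg) == presented_hash;
}
```

Because the secret never leaves the server, a user cannot forge a fresh hash after the timestamp passes; they can only share a link that is about to die anyway.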

Determining response length from an ISAPI filter

I am working on an ISAPI Filter to strip certain content out of responses. I need to collect all the body of the response before I do the processing, as the content I'm stripping could overlap send buffers.
To do this I'd like to buffer the response content with each SF_NOTIFY_SEND_RAW_DATA notification until I reach the last one, then send the translated data. I would like to know the best way to determine which SF_NOTIFY_SEND_RAW_DATA is actually the last. If I wait for the SF_NOTIFY_END_OF_REQUEST notification, then I don't know how to send the data I've buffered.
One approach would be to use the Content-Length. This would require detecting the end of the headers, and it would also require assuming the Content-Length header is correct (is that guaranteed?). Since HTTP doesn't even require a Content-Length header, I'm not even sure it will always be there. It seems like there should be an easier way.
I'm assuming the response is not chunked, so I am not handling dechunking before I modify the response. Also, my modifications to the response body do not change its size, so I do not need to go back and update the Content-Length.
I eventually found some good discussions via Google.
This post answers my questions, and also raises issues that a more complicated filter would have to address: http://groups.google.com/group/microsoft.public.platformsdk.internet.server.isapi-dev/browse_thread/thread/85a5e75f342fad2b/cbb638f9a85c9e03?q=HTTP_FILTER_RAW_DATA&_done=%2Fgroups%3Fq%3DHTTP_FILTER_RAW_DATA%26start%3D20%26&_doneTitle=Back+to+Search&&d&pli=1
The filter I have is buffering the full response into its own buffer, then using SF_NOTIFY_END_OF_REQUEST to send the contents. The modification it makes does not change the size, and it precludes the possibility that the response is chunked, so in my case the filter is relatively simple.
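The accumulation side of such a filter can be sketched as a growable buffer; resp_buf and resp_append are made-up names, and the ISAPI notification hooks are only indicated in comments, since wiring up HTTP_FILTER_CONTEXT is installation-specific.

```c
#include <stdlib.h>
#include <string.h>

/* Per-request accumulation buffer: each SF_NOTIFY_SEND_RAW_DATA chunk is
 * appended here (and suppressed from the wire); on SF_NOTIFY_END_OF_REQUEST
 * the filter rewrites the body and emits the whole thing in one go, e.g.
 * via pfc->WriteClient. */
struct resp_buf {
    char  *data;
    size_t len;
    size_t cap;
};

/* Append one raw-data chunk, doubling capacity as needed.
 * Returns 1 on success, 0 on allocation failure. */
int resp_append(struct resp_buf *b, const void *chunk, size_t n)
{
    if (b->len + n > b->cap) {
        size_t cap = b->cap ? b->cap * 2 : 8192;
        while (cap < b->len + n)
            cap *= 2;
        char *p = realloc(b->data, cap);
        if (!p)
            return 0;
        b->data = p;
        b->cap = cap;
    }
    memcpy((char *)b->data + b->len, chunk, n);
    b->len += n;
    return 1;
}
```

Buffering by notification rather than by Content-Length sidesteps the question of whether that header is present or trustworthy, which is exactly the simplification the answer above relies on.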

Resources