I have implemented code to download a file from a server, and it works fine.
Now I want to write my own progress bar function that computes things like the remaining seconds, the data rate per second, and so on.
I found that curl offers a progress callback option for this, and I have already enabled it.
My code is below. In it, my_progress_func is called at whatever interval the curl library chooses. I want to change that interval to 1 second. Is it possible to set some option in libcurl so that my_progress_func is called once every second?
Code :
#include <stdio.h>
#include <curl/curl.h>
long test =0;
struct FtpFile {
const char *filename;
FILE *stream;
long iAppend;
};
static size_t my_fwrite(void *buffer, size_t size, size_t nmemb, void *stream)
{
struct FtpFile *out=(struct FtpFile *)stream;
if(out && !out->stream) {
/* open file for writing */
out->stream=fopen(out->filename, out->iAppend ? "ab":"wb");
if(!out->stream)
return -1; /* failure, can't open file to write */
}
out->iAppend += nmemb;
return fwrite(buffer, size, nmemb, out->stream);
}
int my_progress_func(void *bar,
double t, /* dltotal */
double d, /* dlnow */
double ultotal,
double ulnow)
{
printf("%f : %f \n", d, t);
return 0;
}
int main(void)
{
CURL *curl;
CURLcode res;
int c;
struct FtpFile ftpfile={
"dev.zip", /* name to store the file as if successful */
NULL,
};
curl_global_init(CURL_GLOBAL_DEFAULT);
curl = curl_easy_init();
if(curl) {
curl_easy_setopt(curl, CURLOPT_URL,
"sftp://root:xyz_#192.170.10.1/mnt/xyz.tar");
curl_easy_setopt(curl, CURLOPT_TIMEOUT, 120L);
/* Define our callback to get called when there's data to be written */
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, my_fwrite);
/* Set a pointer to our struct to pass to the callback */
curl_easy_setopt(curl, CURLOPT_WRITEDATA, &ftpfile);
curl_easy_setopt(curl, CURLOPT_FTPPORT, "-");
/* Switch on full protocol/debug output */
curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);
curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L);
curl_easy_setopt(curl, CURLOPT_PROGRESSFUNCTION, my_progress_func);
res = curl_easy_perform(curl);
printf("res is %d, data received %ld\n", res, ftpfile.iAppend);
/* Retry up to 100 times if a timeout or connection drop occurs */
for (c = 0; (res != CURLE_OK) && (c < 100); c++) {
curl_easy_setopt(curl, CURLOPT_RESUME_FROM , ftpfile.iAppend);
res = curl_easy_perform(curl);
if(res == CURLE_OK) c =0;
printf("%d: res is %d, data received %ld\n", c, res, ftpfile.iAppend);
}
/* always cleanup */
curl_easy_cleanup(curl);
}
if(ftpfile.stream)
fclose(ftpfile.stream); /* close the local file */
curl_global_cleanup();
return 0;
}
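The remaining-seconds and data-rate figures the question asks about can be derived from the callback's dlnow/dltotal arguments plus elapsed wall time. A sketch of that arithmetic, factored into a helper so it can be tested on its own (the function name and signature are invented for illustration, not part of libcurl):

```c
#include <stdio.h>

/* Given bytes downloaded so far, total bytes, and elapsed seconds,
   compute the average rate (bytes/s) and the estimated seconds left.
   Returns 0 on success, -1 if the inputs don't allow an estimate yet. */
int estimate_progress(double dlnow, double dltotal, double elapsed_sec,
                      double *rate, double *eta_sec)
{
    if (elapsed_sec <= 0 || dlnow <= 0)
        return -1;                      /* nothing to estimate yet */
    *rate = dlnow / elapsed_sec;        /* average bytes per second */
    if (dltotal > 0 && dltotal >= dlnow)
        *eta_sec = (dltotal - dlnow) / *rate;
    else
        *eta_sec = -1;                  /* total size unknown (dltotal == 0) */
    return 0;
}
```

Inside my_progress_func you would feed this with dlnow, dltotal, and the seconds elapsed since the transfer started.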
According to the curl documentation:
http://curl.haxx.se/libcurl/c/curl_easy_setopt.html
Function pointer that should match the curl_progress_callback prototype found in <curl/curl.h>. This function gets called by libcurl instead of its internal equivalent with a frequent interval during operation (roughly once per second or sooner) no matter if data is being transferred or not. Unknown/unused argument values passed to the callback will be set to zero (like if you only download data, the upload size will remain 0). Returning a non-zero value from this callback will cause libcurl to abort the transfer and return CURLE_ABORTED_BY_CALLBACK.
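That abort behavior can be put to use, for example, to enforce a download size cap. A minimal sketch (the callback name and the 10 MB limit are made-up examples, not from the question):

```c
/* Hypothetical cap: abort any download that exceeds 10 MB. Returning
   non-zero from a progress callback makes curl_easy_perform() fail
   with CURLE_ABORTED_BY_CALLBACK. */
#define DL_CAP_BYTES (10.0 * 1024 * 1024)

int cap_progress(void *clientp, double dltotal, double dlnow,
                 double ultotal, double ulnow)
{
    (void)clientp; (void)dltotal; (void)ultotal; (void)ulnow;
    return dlnow > DL_CAP_BYTES ? 1 : 0;  /* 1 aborts the transfer */
}
```

You would install it with curl_easy_setopt(curl, CURLOPT_PROGRESSFUNCTION, cap_progress) just like my_progress_func above.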
If it's being called too frequently, you can use time() and a couple of static variables to limit it, something like this:
static time_t prevtime;
time_t currtime;
double dif;
static int first = 1;
if(first) {
time(&prevtime);
first = 0;
}
time(&currtime);
dif = difftime(currtime, prevtime);
if(dif < 1.0)
return 0; /* the progress callback must return an int; 0 continues the transfer */
prevtime = currtime;
Obviously, you run the risk that curl might not call this function again for fully another second.
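The same throttling logic can be factored into a small helper so it is testable without a live transfer (struct tick_state and tick_elapsed are names invented for this sketch):

```c
#include <time.h>

/* State for a once-per-interval gate; the caller owns one per transfer. */
struct tick_state { time_t prev; int first; };

/* Return 1 if at least `interval` seconds elapsed since the last
   accepted tick (the first call is always accepted), else 0. */
int tick_elapsed(struct tick_state *st, time_t now, double interval)
{
    if (st->first) {
        st->prev = now;
        st->first = 0;
        return 1;               /* always accept the first call */
    }
    if (difftime(now, st->prev) < interval)
        return 0;               /* too soon: skip this callback */
    st->prev = now;
    return 1;
}
```

Inside the progress callback you would call tick_elapsed(&st, time(NULL), 1.0) and only print when it returns 1.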
I would like to create a C client that makes asynchronous API calls with libcurl and saves the responses; there are about a hundred calls at the same time. I have been looking for tutorials and examples of curl_multi_* and curl_multi_socket with epoll for 4 days (I use Linux), but they seem not to exist, and the few examples there are aren't understandable to a beginner like me. Apparently I'm the only one interested in doing such a thing in C.
I also looked at the examples in the official documentation, but they use a maximum of 2 connections at the same time, declaring two variables and calling curl_easy_init() for each. The problem is that my program does not make a fixed number of requests, so I cannot declare the handles a priori (and declaring 100 variables isn't practical anyway).
The example of curl_multi_socket with epoll that I found is difficult to understand and replicate for my case without an explanation of how it works.
Is there anyone who can give me a code example of how to use curl_multi_* for multiple simultaneous connections to start with? It would be much appreciated.
EDIT:
After hours of research, I finally found an example that might fit; the problem is that it crashes often, and for various reasons.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <curl/curl.h>

#define NUM_URLS 64
#define MAX_PARALLEL 10 /* undefined in the original snippet; any small value works */
typedef struct data { // 24 / 24 Bytes
struct curl_slist * header;
char ** sub_match_json;
int nbr_sub_match;
int response_counter;
} data_t;
// list of the same URL repeated multiple times
// assume there are 64 url for example
static char *urls[NUM_URLS] = { /* ... */ };
void make_header(data_t * data) {
//many curl_slist_append();
}
void init_data(data_t *data) {
data->sub_match_json = (char **)malloc(sizeof(char *) * NUM_URLS);
data->response_counter = 0;
data->nbr_sub_match = NUM_URLS;
make_header(data);
}
static size_t write_cb(void *response, size_t size, size_t nmemb, void *userp)
{
size_t realsize = size * nmemb;
data_t * data = (data_t *) userp;
data->sub_match_json[data->response_counter] = malloc(realsize + 1);
if(data->sub_match_json[data->response_counter] == NULL)
{
fprintf(stderr, "Memory allocation failed: %s\n", strerror(errno));
return 0; /* out of memory! */
}
memcpy(data->sub_match_json[data->response_counter], response, realsize);
data->sub_match_json[data->response_counter][realsize] = 0;
data->response_counter++;
return realsize;
}
static void add_transfer(CURLM *cm, int i, data_t *data)
{
CURL *curl = curl_easy_init();
curl_easy_setopt(curl, CURLOPT_BUFFERSIZE, 1<<23);
// curl_easy_setopt(curl, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_2_0);
// curl_easy_setopt(curl, CURLOPT_TCP_FASTOPEN, 1L);
// curl_easy_setopt(curl, CURLOPT_TCP_NODELAY, 1L);
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
curl_easy_setopt(curl, CURLOPT_WRITEDATA, (void *)data);
curl_easy_setopt(curl, CURLOPT_VERBOSE, 0L);
curl_easy_setopt(curl, CURLOPT_HTTPHEADER, data->header);
curl_easy_setopt(curl, CURLOPT_HEADER, 1L);
curl_easy_setopt(curl, CURLOPT_URL, urls[i]);
curl_easy_setopt(curl, CURLOPT_PRIVATE, urls[i]);
curl_multi_add_handle(cm, curl);
}
int main(void)
{
CURLM *cm;
CURLMsg *msg;
data_t global_data;
unsigned int transfers = 0;
int msgs_left = -1;
int still_alive = 1;
curl_global_init(CURL_GLOBAL_ALL);
cm = curl_multi_init();
init_data(&global_data); // my function
/* Limit the amount of simultaneous connections curl should allow: */
curl_multi_setopt(cm, CURLMOPT_MAXCONNECTS, (long)MAX_PARALLEL);
for(transfers = 0; transfers < MAX_PARALLEL; transfers++)
add_transfer(cm, transfers, &global_data);
do {
curl_multi_perform(cm, &still_alive);
while((msg = curl_multi_info_read(cm, &msgs_left))) {
if(msg->msg == CURLMSG_DONE) {
char *url;
CURL *e = msg->easy_handle;
curl_easy_getinfo(msg->easy_handle, CURLINFO_PRIVATE, &url);
fprintf(stderr, "R: %d - %s <%s>\n",
msg->data.result, curl_easy_strerror(msg->data.result), url);
curl_multi_remove_handle(cm, e);
curl_easy_cleanup(e);
}
else {
fprintf(stderr, "E: CURLMsg (%d)\n", msg->msg);
}
if(transfers < global_data.nbr_sub_match)
add_transfer(cm, transfers++, &global_data);
}
if(still_alive)
curl_multi_wait(cm, NULL, 0, 1000, NULL);
} while(still_alive || (transfers < NUM_URLS));
curl_multi_cleanup(cm);
curl_global_cleanup();
while (global_data.response_counter-- > 0) {
printf("%s\n", global_data.sub_match_json[global_data.response_counter]);
}
return EXIT_SUCCESS;
}
Error:
api_calls(75984,0x100088580) malloc: Incorrect checksum for freed object 0x100604c30: probably modified after being freed.
Corrupt value: 0x600002931f10
api_calls(75984,0x100088580) malloc: *** set a breakpoint in malloc_error_break to debug
This happens on curl_easy_cleanup(e);
Exception has occurred.
EXC_BAD_ACCESS (code=1, address=0x0)
Otherwise, when no error occurs, sub_match_json contains raw bytes rather than readable strings. Why is this?
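One plausible cause (an assumption based on the snippet, not confirmed): write_cb is invoked once per received chunk, not once per complete response. So response_counter advances on every chunk, a single response gets split across several slots (which would also explain seeing partial bytes instead of complete strings), and once the counter passes NUM_URLS the writes run off the end of sub_match_json and corrupt the heap. A sketch of a per-transfer growable buffer that appends each chunk instead:

```c
#include <stdlib.h>
#include <string.h>

/* One growable, NUL-terminated buffer per transfer. */
struct membuf { char *data; size_t len; };

size_t append_cb(void *chunk, size_t size, size_t nmemb, void *userp)
{
    struct membuf *buf = (struct membuf *)userp;
    size_t realsize = size * nmemb;
    char *p = realloc(buf->data, buf->len + realsize + 1);
    if (!p)
        return 0;               /* out of memory aborts the transfer */
    buf->data = p;
    memcpy(buf->data + buf->len, chunk, realsize);
    buf->len += realsize;
    buf->data[buf->len] = '\0';
    return realsize;
}
```

Each easy handle would get its own struct membuf via CURLOPT_WRITEDATA (and, if needed, via CURLOPT_PRIVATE so it can be recovered when CURLMSG_DONE arrives).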
I use libcurl in my C code to download files given their urls. My code looks similar to this:
#include <stdio.h>
#include <curl/curl.h>
#include <pthread.h>
static size_t write_data(void *ptr, size_t size, size_t nmemb, void *stream)
{
size_t written = fwrite(ptr, size, nmemb, (FILE *)stream);
return written;
}
int progress_func(void *ptr, double TotalToDownload, double NowDownloaded,
double TotalToUpload, double NowUploaded)
{
struct my_custom_struct *my_dummy_data = (struct my_custom_struct *) ptr;
//do some stuffs here
return 0;
}
void *download_with_curl(void *data)
{
char *url = (char *) data;
int res = 0;
// My custom struct to store data
struct my_custom_struct my_dummy_data;
char errbuff[CURL_ERROR_SIZE] = {0};
CURL *curl_handle;
/* init the curl session */
curl_handle = curl_easy_init();
/* set URL to get here */
curl_easy_setopt(curl_handle, CURLOPT_URL, url);
/* disable progress meter, set to 0L to enable*/
curl_easy_setopt(curl_handle, CURLOPT_NOPROGRESS, 0L);
/* send all data to this function*/
curl_easy_setopt(curl_handle, CURLOPT_WRITEFUNCTION, write_data);
curl_easy_setopt(curl_handle, CURLOPT_LOW_SPEED_TIME, RESPOND_TIME);
curl_easy_setopt(curl_handle, CURLOPT_LOW_SPEED_LIMIT, 30L);
/* set the progress function */
curl_easy_setopt(curl_handle, CURLOPT_PROGRESSFUNCTION, progress_func);
/* set the progress data */
curl_easy_setopt(curl_handle, CURLOPT_PROGRESSDATA, &my_dummy_data);
/* provide a buffer to store errors in */
curl_easy_setopt(curl_handle, CURLOPT_ERRORBUFFER, errbuff);
FILE *pagefile = fopen(path_to_where_I_want_to_store_the_file, "wb");
/* write the page body to this file handle */
curl_easy_setopt(curl_handle, CURLOPT_WRITEDATA, pagefile);
/* get the file*/
int status = curl_easy_perform(curl_handle);
res = 0;
long response_code; /* CURLINFO_RESPONSE_CODE expects a long */
curl_easy_getinfo(curl_handle, CURLINFO_RESPONSE_CODE, &response_code);
fclose(pagefile);
if (status != 0) {
log_warn("CURL ERROR %d: %s", status, errbuff);
response_code = -status;
}
/* cleanup curl stuff */
curl_easy_cleanup(curl_handle);
return NULL;
}
int main()
{
// sockfd = create a sockfd
// bind, listen
do {
// accept new connection
char *url;
// receive the url from client
pthread_t tid;
pthread_create(&tid, NULL, download_with_curl, url);
} while (1);
}
When I send a single download request, the code works fine. "Works fine" means that the md5sum values of the original file and the downloaded file are equal. However, when I send multiple requests to download multiple files, only the first file that is downloaded has the correct md5sum value. To be clear, if I send requests to download files A (200MB), B (5MB) and C (50MB) in that order, only file B is correctly downloaded because it is finished first. Files A and C will have incorrect md5sum values. Moreover, when I check the content of files A and C, it looks like curl just inserts random segments of data into them. If the original file content is
This is the content of a file
then the downloaded file is like
This is the #$%!##%#% content of $%(#(!)$()$%||##$%*&) a file
After spending two days of debugging, I finally solved the problem (I hope so). All I did was just flushing the data after calling fwrite. The function write_data now looks like this:
static size_t write_data(void *ptr, size_t size, size_t nmemb, void *stream)
{
size_t written = fwrite(ptr, size, nmemb, (FILE *)stream);
fflush((FILE *) stream);
return written;
}
I do not know if this completely solves the problem or not. Could anyone explain why it behaves that way and give me a proper solution?
UPDATE 1
It seems that there is something to do with fwrite()'s internal buffer. Changing from fwrite(ptr, size, nmemb, stream) to write(fileno(stream), ptr, size * nmemb) seems to give the same result as using fflush().
UPDATE 2
Using the default write function (remove the option CURLOPT_WRITEFUNCTION) of libcurl gives the same problem.
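One thing worth checking in any multi-threaded libcurl program (this comes from libcurl's thread-safety documentation, though it may or may not be the cause of the corruption here): every easy handle used from a thread should set CURLOPT_NOSIGNAL, so libcurl does not use signals for timeout handling, which is unsafe with multiple threads. A minimal fragment:

```c
/* Inside download_with_curl(), right after curl_easy_init(): */
curl_easy_setopt(curl_handle, CURLOPT_NOSIGNAL, 1L);
```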
I download files with curl quite often. However, the server does a tricky thing: it returns a non-200 code and still sends some data. The problem is that I only have the HTTP code after the data has been written, and I do not want to write anything if the code is non-200. Does anyone know a way to do that without first storing the data on disk or in memory?
curl_easy_setopt(curl, CURLOPT_URL, url);
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, curlWriteHandler);
curl_easy_setopt(curl, CURLOPT_WRITEDATA, ptr);
res = curl_easy_perform(curl);
if (res == CURLE_OK) {
return 0;
}
long response_code;
curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &response_code);
if (response_code != 200) {
return 0;
}
size_t curlWriteHandler(char* chars, size_t size, size_t nmemb, void* userp) {
// write to file
return size * nmemb;
}
Setting CURLOPT_FAILONERROR should do it for 4xx and 5xx errors.
When this option is used and an error is detected, it will cause the connection to get closed and CURLE_HTTP_RETURNED_ERROR is returned.
curl_easy_setopt(curl, CURLOPT_FAILONERROR, 1L);
Closing the connection is not good for me; it is important to reuse it. Can you think of anything else?
Unfortunately I can't find a way to make CURLOPT_FAILONERROR not close the connection.
The other option is to make the write function aware of the response. Unfortunately the curl handle is not passed into the callback.
We could make the curl variable global. Or we can take advantage of the void *userdata option to the write callback and pass in a struct containing both the curl handle and the buffer.
Here's a rough sketch demonstrating how the write callback can get access to the response code and also save the response body.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <curl/curl.h>
typedef struct {
CURL *curl;
char *buf;
} curl_write_data;
size_t curlWriteHandler(char* chars, size_t size, size_t nmemb, void* userp) {
curl_write_data *curl_data = (curl_write_data*)userp;
size_t realsize = size * nmemb;
long response_code;
curl_easy_getinfo(curl_data->curl, CURLINFO_RESPONSE_CODE, &response_code);
printf("Response: %ld\n", response_code);
// Now we can save if we like.
if( response_code < 300 ) {
/* `chars` is not NUL-terminated, so copy by length. Note this rough
sketch only keeps the latest chunk; a real version would append. */
free(curl_data->buf);
curl_data->buf = malloc(realsize + 1);
if( curl_data->buf == NULL )
return 0; /* out of memory aborts the transfer */
memcpy(curl_data->buf, chars, realsize);
curl_data->buf[realsize] = '\0';
return realsize;
}
else {
/* returning less than the full size makes libcurl abort the transfer */
return 0;
}
}
int main() {
CURL *curl = curl_easy_init();
if(!curl) {
fprintf(stderr, "Can't init curl\n");
return 1;
}
curl_write_data curl_data = { .curl = curl, .buf = NULL };
curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/alsdfjalj");
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, curlWriteHandler);
curl_easy_setopt(curl, CURLOPT_WRITEDATA, &curl_data);
curl_easy_perform(curl);
if( curl_data.buf ) {
puts(curl_data.buf);
}
}
I'm not sure if this is the best idea; it's just what I came up with.
How do I stop all this info from being shown?
All I did was lightly edit the FTP example, and I don't want that info to be shown.
Edit: added the full code from main.c
image in link: http://www.mediafire.com/convkey/71e9/oyhctzcdjxakzxzfg.jpg
#include <stdio.h>
#include <stdlib.h>
#include <curl/curl.h>
struct FtpFile {
const char *filename;
FILE *stream;
};
static size_t my_fwrite(void *buffer, size_t size, size_t nmemb, void *stream)
{
struct FtpFile *out=(struct FtpFile *)stream;
if(out && !out->stream) {
/* open file for writing */
out->stream=fopen(out->filename, "wb");
if(!out->stream)
return -1; /* failure, can't open file to write */
}
return fwrite(buffer, size, nmemb, out->stream);
}
int main()
{
CURL *curl;
CURLcode res;
struct FtpFile version={"version.txt", /* name to store the file as if successful */NULL};
curl_global_init(CURL_GLOBAL_DEFAULT);
curl = curl_easy_init();
FILE *file_verzija;
int trenutna_verzija;
int nova_verzija;
char pitanje_za_update;
file_verzija=fopen("version.txt","r");
fscanf(file_verzija,"%i",&trenutna_verzija);
fclose(file_verzija);
printf("Current version %i",trenutna_verzija);
printf("\nChecking for updates...\n");
if(curl)
{
/*You better replace the URL with one that works!*/curl_easy_setopt(curl, CURLOPT_URL,"http://elektro-srb.com/site/muffin_finder_files/version.txt");
/* Define our callback to get called when there's data to be written */
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, my_fwrite);
/* Set a pointer to our struct to pass to the callback */
curl_easy_setopt(curl, CURLOPT_WRITEDATA, &version);
/* Switch on full protocol/debug output */
curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);
res = curl_easy_perform(curl);
/* always cleanup */
curl_easy_cleanup(curl);
if(CURLE_OK != res)
{
/* we failed */
printf("\nerror");
}
}
if(version.stream)
fclose(version.stream); /* close the local file */
file_verzija=fopen("version.txt","r");
fscanf(file_verzija,"%i",&nova_verzija);
fclose(file_verzija);
if(trenutna_verzija != nova_verzija)
{
printf("\nUpdate found! New version is %i",nova_verzija);
}
else
{
printf("You are running latest version of Muffin Finder!");
}
if(trenutna_verzija != nova_verzija)
{
printf("\nUpdate? y/n");
scanf("\n%c",&pitanje_za_update);
if((pitanje_za_update == 'y') || (pitanje_za_update == 'Y'))
{
//UPDATE
}
else if((pitanje_za_update == 'n') || (pitanje_za_update == 'N'))
{
//run the old version
}
}
curl_global_cleanup();
return 0;
}
You should set the CURLOPT_WRITEFUNCTION option to stop libcurl from printing the downloaded data to stdout.
See here: http://curl.haxx.se/libcurl/c/curl_easy_setopt.html.
Search for "WRITEFUNCTION". You need to implement the function yourself (and I assume you would like it to just discard the data).
EDIT: As the manual states, you should do the following:
Implement a function to replace the default write-to-stdout behavior (it must report the full byte count as consumed, otherwise libcurl aborts the transfer with CURLE_WRITE_ERROR):
size_t outputFunction(char *ptr, size_t size, size_t nmemb, void *userdata) { return size * nmemb; /* discard */ }
When you initialize the CURL structure, use this option:
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, outputFunction);
Comment out the following line:
curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);
I have to transfer files using the FTP protocol and libcurl in C. It works fine, but I have two problems.
1) If a transfer has started, but at a certain point I disconnect from the network, my program stays stuck inside the libcurl call; the speed drops to 0 and I can't do anything else. I tried setting a timeout (CURLOPT_TIMEOUT), but that is just a limit on the total transfer time.
2) The second problem, which is linked to the first one: how can I know whether the transfer completed successfully?
My transfer code is:
struct FtpFile {
const char *filename;
FILE *stream;
};
long int size;
static size_t my_fwrite(void *buffer, size_t size, size_t nmemb, void *stream)
{
struct FtpFile *out=(struct FtpFile *)stream;
if(out && !out->stream) {
/* open file for writing */
out->stream=fopen(out->filename, "ab");
if(!out->stream)
return -1; /* failure, can't open file to write */
}
return fwrite(buffer, size, nmemb, out->stream);
}
int sz;
int main(void)
{
CURL *curl;
CURLcode res;
char prova;
struct stat statbuf;
FILE *stream;
/* open the file to read its size */
if ((stream = fopen("Pdf.pdf", "rb")) == NULL)
{
fprintf(stderr, "No file to open, the download will start from zero.\n");
}
else
{
/* Get information about the file */
fstat(fileno(stream), &statbuf);
fclose(stream);
size = statbuf.st_size;
printf("File size in bytes: %ld\n", size);
}
struct FtpFile ftpfile={
"Pdf.pdf", /* name to store the file as if successful */
NULL
};
curl_global_init(CURL_GLOBAL_DEFAULT);
curl = curl_easy_init();
if(curl) {
curl_easy_setopt(curl, CURLOPT_URL, "ftp://....");
/* Define our callback to get called when there's data to be written */
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, my_fwrite);
/* Set a pointer to our struct to pass to the callback */
curl_easy_setopt(curl, CURLOPT_WRITEDATA, &ftpfile);
/* Switch on full protocol/debug output */
curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);
/*Pass a long as parameter. It contains the offset in number of bytes that you want the transfer to start from.
Set this option to 0 to make the transfer start from the beginning (effectively disabling resume). For FTP, set
this option to -1 to make the transfer start from the end of the target file (useful to continue an interrupted
upload).
When doing uploads with FTP, the resume position is where in the local/source file libcurl should try to resume
the upload from and it will then append the source file to the remote target file. */
if(stream == NULL)
{
curl_easy_setopt(curl, CURLOPT_RESUME_FROM, 0);
}
else
{
curl_easy_setopt(curl, CURLOPT_RESUME_FROM, size);
}
/*Used to show file progress*/
curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0);
res = curl_easy_perform(curl);
/* always cleanup */
curl_easy_cleanup(curl);
if(CURLE_OK != res) {
/* we failed */
fprintf(stderr, "curl told us %d\n", res);
}
}
if(ftpfile.stream)
fclose(ftpfile.stream); /* close the local file */
curl_global_cleanup();
return 0;
}
I have recently answered a similar question to this.
1) This is by design. If you want to time out a stalled connection, try using these options instead:
curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, dl_lowspeed_bytes); /* e.g. 1000L bytes/s */
curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, dl_lowspeed_time); /* e.g. 10L seconds */
If your download rate stays below dl_lowspeed_bytes per second for dl_lowspeed_time seconds, the transfer aborts; you can then check the connectivity and take whatever action you see fit.
NB: Added in 7.25.0: CURLOPT_TCP_KEEPALIVE, CURLOPT_TCP_KEEPIDLE
so these might be another suitable alternative for you.
2) Like this (note that both values are doubles, and CURLINFO_CONTENT_LENGTH_DOWNLOAD is -1 when the server did not report a size):
double dl_bytes_remaining, dl_bytes_received;
curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &dl_bytes_remaining);
curl_easy_getinfo(curl, CURLINFO_SIZE_DOWNLOAD, &dl_bytes_received);
if (dl_bytes_remaining >= 0 && dl_bytes_remaining == dl_bytes_received)
printf("our work here is done ;)\n");