I'm writing a program that continuously and recursively checks an FTP server for new files. When a file is detected, it is downloaded.
I wrote the whole thing using the curl easy interface, since blocking calls to curl_easy_perform() are great for the control channel and listing operations. But when it comes to downloading files, the multi interface seems a lot more appropriate. I thought about switching the entire thing to multi, but it gets very complicated for directory listings.
So here's my question: can I use both interfaces, easy and multi, inside the same thread? If so, can they share the same connection to the server?
EDIT 1
Instead of using curl_easy_perform(), is there a way to check the status of a single transfer? Then I could use the curl_multi_* interface for all my transfers, and only check my LIST command's status right after I issue it. This would let me simulate blocking behavior without interfering with my file transfers, which would be handled and checked elsewhere.
From what I saw, curl_multi_info_read() doesn't allow this:
When you fetch a message using this function, it is removed from the internal queue so calling this function again will not return the same message again.
Does this answer your question:
When an easy handle is setup and ready for transfer, then instead of using curl_easy_perform like when using the easy interface for transfers, you should add the easy handle to the multi handle with curl_multi_add_handle. You can add more easy handles to a multi handle at any point, even if other transfers are already running.
From libcurl - multi interface overview (ONE MULTI HANDLE MANY EASY HANDLES)
can I use both interfaces, easy and multi, inside the same thread?
Yes, absolutely. But note that the easy API is mostly blocking and the multi API is mostly non-blocking, so if you combine them the wrong way you can end up in a situation where your multi transfers are hanging or slow because your thread is stuck blocking in a curl_easy_* call.
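As a rough sketch of what that combination can look like (the URLs and the single-shot flow here are placeholders, not your actual program): do the blocking LIST with curl_easy_perform() on a dedicated easy handle, and give each download its own easy handle added to a multi handle that you drive with curl_multi_perform()/curl_multi_wait() in your main loop:
#include <curl/curl.h>

int main()
{
    curl_global_init(CURL_GLOBAL_ALL);
    CURLM *multi = curl_multi_init();

    // 1) blocking directory listing on its own easy handle (easy interface)
    CURL *list_handle = curl_easy_init();
    curl_easy_setopt(list_handle, CURLOPT_URL, "ftp://example.com/incoming/");
    CURLcode list_result = curl_easy_perform(list_handle); // blocks until LIST finishes
    if (list_result != CURLE_OK) { /* handle or retry the listing */ }
    // ... parse the listing, decide which files are new ...

    // 2) non-blocking download(s) via the multi interface
    CURL *dl_handle = curl_easy_init();
    curl_easy_setopt(dl_handle, CURLOPT_URL, "ftp://example.com/incoming/new_file.bin");
    curl_multi_add_handle(multi, dl_handle);

    // 3) drive the downloads without blocking for their whole duration
    int still_running = 0;
    do {
        curl_multi_perform(multi, &still_running);
        curl_multi_wait(multi, nullptr, 0, 100, nullptr); // wait up to 100 ms for socket activity
        // a real program would also go back to step 1 periodically to re-list
    } while (still_running);

    curl_multi_remove_handle(multi, dl_handle);
    curl_easy_cleanup(dl_handle);
    curl_easy_cleanup(list_handle);
    curl_multi_cleanup(multi);
    curl_global_cleanup();
}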
If so, can they share the same connection to the server?
Strictly speaking yes, at least in some situations, but you really should let libcurl worry about connection-reuse details unless you're deep in a micro-optimization phase (and given your questions, you're not there yet).
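Purely for completeness, if you ever do get to that phase: libcurl's share interface is the documented way to let separate easy handles use a common connection cache. A sketch, reusing the hypothetical list_handle/dl_handle names from the example above and assuming a libcurl new enough (7.57.0+) to support CURL_LOCK_DATA_CONNECT:
CURLSH *share = curl_share_init();
curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_CONNECT); // share the connection cache

curl_easy_setopt(list_handle, CURLOPT_SHARE, share);
curl_easy_setopt(dl_handle, CURLOPT_SHARE, share);

// ... run the transfers as before ...
// clean up the share object only after every handle using it is done
curl_share_cleanup(share);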
is there a way to check for a single transfer status
check status of a single transfer from a curl_multi list of transfers?
Honestly, I don't know. When I use curl_multi, I usually only check on transfers once they're no longer active, as reported by curl_multi_info_read() and friends. You could instead wrap each transfer in its own object with its own dedicated download thread, and keep track of each transfer with CURLOPT_WRITEFUNCTION and friends.
The program below will output:
transfer #1 is 4.70178% downloaded. running: true
transfer #2 is 6.51742% downloaded. running: true
transfer #3 is 6.14288% downloaded. running: true
transfer #4 is 6.01199% downloaded. running: true
transfer #0 is 12.3027% downloaded. running: true
transfer #1 is 8.73407% downloaded. running: true
transfer #2 is 14.0515% downloaded. running: true
transfer #3 is 12.8638% downloaded. running: true
transfer #4 is 11.8516% downloaded. running: true
(...)
transfer #0 is 94.8156% downloaded. running: true
transfer #1 is 88.5291% downloaded. running: true
transfer #2 is 98.8117% downloaded. running: true
transfer #3 is 92.01% downloaded. running: true
transfer #4 is 100% downloaded. running: false
transfer #0 is 100% downloaded. running: false
transfer #1 is 100% downloaded. running: false
transfer #2 is 100% downloaded. running: false
transfer #3 is 100% downloaded. running: false
It keeps track of each individual transfer in its own thread, and the main thread can easily check up on any individual transfer via transfers[x]->running, transfers[x]->response_body.size(), and so on.
#include <iostream>
#include <thread>
#include <chrono>
#include <string>
#include <string_view>
#include <atomic>
#include <vector>
#include <memory>
#include <curl/curl.h>

class Curl_Transfer
{
public:
    std::string url;
    std::string response_headers;
    std::string response_body;
    CURL *ch = nullptr;
    CURLcode curl_easy_perform_code = CURLcode(0);
    std::atomic<bool> running{true};        // written by the download thread, read by main
    std::thread dedicated_thread;
    std::atomic<int64_t> expected_size{0};  // << Content-Length reported size

    Curl_Transfer(std::string url) : url(std::move(url))
    {
        this->dedicated_thread = std::thread([this]() -> void {
            this->ch = curl_easy_init();
            curl_easy_setopt(this->ch, CURLOPT_URL, this->url.c_str());
            curl_easy_setopt(this->ch, CURLOPT_WRITEDATA, this);
            curl_easy_setopt(this->ch, CURLOPT_HEADERDATA, this);
            curl_easy_setopt(this->ch, CURLOPT_WRITEFUNCTION, &Curl_Transfer::WRITEFUNCTION_cb);
            curl_easy_setopt(this->ch, CURLOPT_HEADERFUNCTION, &Curl_Transfer::HEADERFUNCTION_cb);
            CURLcode code = curl_easy_perform(this->ch); // blocks, but only in this dedicated thread
            //std::cout << "code: " << code << std::endl;
            this->curl_easy_perform_code = code;
            this->running = false;
        });
    }
    ~Curl_Transfer()
    {
        std::cout << "DESTRUCTING!" << std::endl;
        this->dedicated_thread.join();
        curl_easy_cleanup(this->ch);
    }
private:
    // libcurl callbacks must be plain C-style functions, which is why these are static;
    // the "this" pointer is smuggled in through CURLOPT_WRITEDATA / CURLOPT_HEADERDATA
    static size_t WRITEFUNCTION_cb(char *data, size_t size, size_t nmemb, void *userdata)
    {
        auto *fthis = static_cast<Curl_Transfer *>(userdata);
        fthis->response_body.append(data, size * nmemb);
        //std::cout << "got body data! " << size*nmemb << "\n";
        return size * nmemb;
    }
    static size_t HEADERFUNCTION_cb(char *data, size_t size, size_t nmemb, void *userdata)
    {
        auto *fthis = static_cast<Curl_Transfer *>(userdata);
        //std::cout << "got headers! " << size*nmemb << "\n";
        fthis->response_headers.append(data, size * nmemb);
        std::string_view svd(data, size * nmemb);
        const std::string_view needle = "Content-Length: ";
        auto clp = svd.find(needle);
        if (clp != std::string_view::npos)
        {
            std::string value(svd.substr(clp + needle.size()));
            fthis->expected_size = std::stoll(value, nullptr, 10);
        }
        return size * nmemb;
    }
};

int main()
{
    curl_global_init(CURL_GLOBAL_ALL);
    std::vector<Curl_Transfer *> transfers;
    for (int i = 0; i < 5; ++i)
        transfers.push_back(new Curl_Transfer("http://speedtest.tele2.net/100MB.zip"));
    for (;;)
    {
        std::this_thread::sleep_for(std::chrono::seconds(5));
        for (size_t i = 0; i < transfers.size(); ++i)
        {
            // guard against division by zero before the Content-Length header has arrived
            const double expected = double(transfers[i]->expected_size);
            const double percent = expected > 0 ? double(transfers[i]->response_body.size()) / expected * 100 : 0;
            std::cout << "transfer #" << i << " is " << percent << "% downloaded."
                      << " running: " << (transfers[i]->running ? "true" : "false") << "\n";
        }
    }
}
There's probably a better way to do this; there has to be. But until someone smarter comes along, this at least works.
Apparently I did all that threading work just to avoid using the curl_multi API.
One caveat: you're using C, not C++. You can of course do all of the above in C as well, but I'm not comfortable enough with C to enjoy rewriting the example in C (anyone is free to rewrite it in C if they want to).
I'll cut to the chase. I'm attempting to understand the filesystem library, but there's very little information I've been able to find. I managed to get it to compile, and I understand the filesystem::path type quite well, but I don't seem to understand how to get filesystem::directory_iterator to work properly. I'm not sure if I'm using it for a purpose it wasn't designed for. So here is what I'm attempting to do:
I want to create a program that opens every text file within a specified folder. For this I need to obtain each file's name and path, but I want the program to discover this information on its own, dynamically, so that if I add or remove text files it will still work.
I'm able to create a directory_iterator that holds the first file just by giving it the directory, like this:
const char address[]{ "C:\\Users\\c-jr8\\ProgramBranch\\PersonalPlatform\\log extruder\\logs" };
fs::directory_iterator myIterator(address);
For testing, the folder contains four text files called "attempt 1" to "attempt 4". When reading the information at:
https://learn.microsoft.com/en-us/cpp/standard-library/directory-iterator-class?view=vs-2019#op_star
It mentions two functions for moving the iterator to the next file: increment(), which is the intended method for iterating through the files, and operator++().
Now, increment() hasn't worked for me because it takes an error_code variable, and I haven't been able to find much information about how that is meant to be used. operator++() works beautifully and gives me the path to every file, but I'm having trouble detecting when operator++() runs out of files. Once it has iterated through every file, the program sort of crashes when it moves on to the next one. Here's that piece of code:
string tempString;
for (int i = 0; i < 5; i++) { // the limit is 5 on purpose, so it iterates once more than the number of files, to see how it responds
tempString = myIterator.operator*().path().generic_string();
ifstream tempFile(tempString);
if (!tempFile.is_open()) {
cout << "Looking at file: " << i + 1 << "; failed to open." << endl << endl;
cin.get();
return 0;
}
{
//do things with file...
}
tempFile.close();
myIterator.operator++();
}
What I want is a way to stop the for loop once the iterator goes past the final file.
Any information about how the filesystem library works would be very much appreciated.
std::directory_iterator is a classic iterator that allows for iterating over ranges, and those are usually designated by a pair of iterators, one indicating the beginning of a sequence and another representing the past-the-end iterator.
Some iterator types, like those providing access to streams, don't have an actual end location in memory. A similar situation applies to a directory iterator. In such a case, the idiomatic approach is to use a default-constructed iterator object that will serve as an end indicator.
Having said that, you could rewrite your loop to:
for (fs::directory_iterator myIterator(address), end{}; myIterator != end; ++myIterator) {
Alternatively, you can utilize a range-based for loop:
for (auto& p : fs::directory_iterator(address)) {
tempString = p.path().generic_string();
// ...
Also, note that an iterator's interface is supposed to look and behave like a pointer's, hence it uses operator overloading to allow for concise syntax. So instead of:
myIterator.operator++();
you should be using:
++myIterator;
Similarly, instead of:
myIterator.operator*().path().generic_string();
just use:
(*myIterator).path().generic_string();
or:
myIterator->path().generic_string();
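For completeness, since the question also mentioned increment(): it is the non-throwing counterpart of operator++ and reports failure through a std::error_code out-parameter. A minimal sketch (the directory path is just an example):
#include <filesystem>
#include <iostream>
#include <system_error>

namespace fs = std::filesystem;

int main()
{
    std::error_code ec;
    // the error_code overloads report problems via ec instead of throwing
    for (fs::directory_iterator it("C:\\some\\folder", ec), end; !ec && it != end; it.increment(ec))
    {
        std::cout << it->path().generic_string() << '\n';
    }
    if (ec)
        std::cerr << "directory iteration failed: " << ec.message() << '\n';
}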
You should compare myIterator with a default constructed directory_iterator to check if the last file has been processed. You can also use a much simpler form to access the operators (shown in the code below):
string tempString;
// loop until myIterator == fs::directory_iterator{}
for(size_t i = 1; myIterator != fs::directory_iterator{}; ++i) {
// access path() through the iterator's operator->
tempString = myIterator->path().generic_string();
ifstream tempFile(tempString);
if(!tempFile.is_open()) {
cout << "Looking at file: " << i << "; failed to open." << endl << endl;
cin.get();
return 0;
}
{
std::cout << tempString << " opened\n";
}
// tempFile.close(); // closes automatically when it goes out of scope
// simpler form to use myIterator.operator++():
++myIterator;
}
An even simpler approach would be to use a range-based for-loop:
for(const fs::directory_entry& dirent : fs::directory_iterator(address)) {
const fs::path& path = dirent.path();
ifstream tempFile(path);
if(!tempFile) {
cout << "Looking at file: " << path << "; failed to open.\n\n";
cin.get();
return 0;
}
std::cout << path << " opened\n";
}
I'm attempting to use the mbed OS scheduler for a small project.
As mbed OS is asynchronous, I need to avoid blocking code.
However, the library for my wireless receiver uses this blocking line:
while (!(wireless.isRxData()));
Is there an alternative way to do this that won't block all the code until a message is received?
static void listen(void) {
wireless.quickRxSetup(channel, addr1);
sprintf(ackData,"Ack data \r\n");
wireless.acknowledgeData(ackData, strlen(ackData), 1);
while (!(wireless.isRxData()));
len = wireless.getRxData(msg);
}
static void motor(void) {
pc.printf("Motor\n");
m.speed(1);
n.speed(1);
led1 = 1;
wait(0.5);
m.speed(0);
n.speed(0);
}
static void sendData() {
wireless.quickTxSetup(channel, addr1);
strcpy(accelData, "Robot");
wireless.transmitData(accelData ,strlen(accelData));
}
void app_start(int, char**) {
minar::Scheduler::postCallback(listen).period(minar::milliseconds(500)).tolerance(minar::milliseconds(1000));
minar::Scheduler::postCallback(motor).period(minar::milliseconds(500));
minar::Scheduler::postCallback(sendData).period(minar::milliseconds(500)).delay(minar::milliseconds(3000));
}
You should remove the while (!(wireless.isRxData())); loop in your listen function. Replace it with:
if (wireless.isRxData()) {
len = wireless.getRxData(msg);
// Process data
}
Then, you can process your data in that if statement, or you can call postCallback on another function that will do your processing.
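As a rough sketch of the whole thing, reusing the names from the question and assuming the quickRxSetup/acknowledgeData calls only need to run once at startup (check your radio library's documentation on that):
// one-time radio setup, done once instead of on every poll
static void radioInit(void) {
    wireless.quickRxSetup(channel, addr1);
    sprintf(ackData, "Ack data \r\n");
    wireless.acknowledgeData(ackData, strlen(ackData), 1);
}

// periodic, non-blocking poll: runs every 500 ms via minar
static void listen(void) {
    if (wireless.isRxData()) {      // nothing to wait on; just check and return
        len = wireless.getRxData(msg);
        // process msg here, or post another callback to do the processing
    }
}

void app_start(int, char**) {
    radioInit();
    minar::Scheduler::postCallback(listen).period(minar::milliseconds(500));
    // ... the motor and sendData callbacks as before ...
}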
Instead of looping until data is available, you'll want to poll for data. If RX data is not available, exit the function and set a timer to go off after a short interval. When the timer goes off, check for data again. Repeat until data is available. I'm not familiar with your OS so I can't offer any specific code. This may be as simple as adding a short "sleep" call inside the while loop, or may involve creating another callback from the scheduler.
I have developed my own hybrid stream cipher, and for the GUI I am using Qt. Initially I wrote it on a single thread, but being a stream cipher it made the GUI unresponsive when operating on large files, so I moved the encryption/decryption to a separate QThread. To show progress I added a standard QProgressBar to the GUI. The encryption/decryption works perfectly, but the progress bar doesn't update properly: after the whole operation completes, it suddenly jumps from 0% to 100%, showing that it never got a chance to update during the operation. In the code, I emit the completed percentage from FileCrypto to the main GUI thread, connected to the QProgressBar's setValue(int) slot. Since that didn't work, I also tried passing an int pointer to the FileCrypto thread, updating it with the percentage, and using a QTimer on the GUI thread to read the value locally and update the progress bar, but I got exactly the same result.
Here is my code:
The FileCrypto class:
#include <QThread>
#include <QFile>
#include <PolyVernam.h> //my algo header
class FileCrypto : public QThread
{
Q_OBJECT
public:
FileCrypto(QString, QString, int);
bool stopIt;
protected:
void run();
signals:
void completed(int);
void msg(QString);
void pathMsg1(QString);
void pathMsg2(QString);
void keyMsg(QString);
private:
QFile src, dest;
QString tag;
int mode;
qint64 length;
PolyVernam pv;
};
The Code:
#include <FileCrypto.h>
FileCrypto::FileCrypto(QString input, QString keyFile, int mode)
{
stopIt = false;
this->mode = mode;
src.setFileName(input);
if(mode == 1)
{
emit msg("Current Encryption/Decryption status: Encrypting file... :D:D");
tag = "-encrypted";
pv.setMode("encrypt", "");
}
else
{
emit msg("Current Encryption/Decryption status: Decrypting file... :D:D");
tag = "-decrypted";
pv.setMode("decrypt", keyFile);
}
dest.setFileName(QFileInfo(src).absolutePath() + "/" + QFileInfo(src).baseName()
+ tag + "." + QFileInfo(src).completeSuffix());
length = src.bytesAvailable();
}
void FileCrypto::run()
{
qint64 done = 0;
quint8 r, outChar;
char ch;
QDataStream in(&src);
in.setVersion(QDataStream::Qt_4_7);
src.open(QIODevice::ReadOnly);
QDataStream out(&dest);
out.setVersion(QDataStream::Qt_4_7);
dest.open(QIODevice::WriteOnly);
while(!in.atEnd() && !stopIt)
{
done++;
in >> r;
ch = char(r);
if(mode == 1)
outChar = pv.encrypt(QString(ch)).at(0).toAscii();
else
outChar = pv.decrypt(QString(ch)).at(0).toAscii();
out << outChar;
emit completed(int((done / length) * 100));
}
src.close();
dest.close();
if(stopIt)
this->exit(0);
if(mode == 1)
{
emit pathMsg1(QFileInfo(src).absoluteFilePath());
emit pathMsg2(QFileInfo(dest).absoluteFilePath());
}
else
{
emit pathMsg1(QFileInfo(dest).absoluteFilePath());
emit pathMsg2(QFileInfo(src).absoluteFilePath());
}
emit keyMsg(pv.keyFilePath);
emit msg("Current Encryption/Decryption status: Idle... :'(");
}
This is how I am making the thread and connecting it on the main GUI thread:
FileCrypto *fc = new FileCrypto(ui->lineEdit_4->text(), "", 1);
connect(fc, SIGNAL(completed(int)), ui->progressBar, SLOT(setValue(int)));
connect(fc, SIGNAL(msg(QString)), ui->statusBar, SLOT(showMessage(QString)));
connect(fc, SIGNAL(pathMsg1(QString)), ui->lineEdit_4, SLOT(setText(QString)));
connect(fc, SIGNAL(pathMsg2(QString)), ui->lineEdit_5, SLOT(setText(QString)));
connect(fc, SIGNAL(keyMsg(QString)), ui->lineEdit_2, SLOT(setText(QString)));
connect(fc, SIGNAL(keyMsg(QString)), this, SLOT(done()));
If I don't update the progress bar (i.e. don't emit the percentage), the process runs much faster. I also tried printing the percentage; it slows things down a lot, but the values are fine. Could you also suggest a way to change this to buffered I/O?
Any help is much appreciated.
The problem does not lie in the fact that you are calling from a different thread. It is located in:
emit completed(int((done / length) * 100));
Since done and length are integer types (qint64 here) and done <= length, the integer division done / length evaluates to 0 for every byte except the very last one. So change it to:
emit completed(100 * done / length);
(Note that multiplying first means 100 * done could in principle overflow, though with qint64 that only matters for absurdly large files.)
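On a related note, since you observed that emitting a signal for every byte slows the transfer down dramatically, a common trick is to emit only when the integer percentage actually changes. A sketch, reusing the names from your run() loop:
int lastPercent = -1;
while(!in.atEnd() && !stopIt)
{
    done++;
    // ... existing per-byte encrypt/decrypt code ...
    int percent = int((100 * done) / length);
    if(percent != lastPercent)      // emits at most ~100 times per file
    {
        lastPercent = percent;
        emit completed(percent);
    }
}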
I would like to authenticate users of my C network application with PAM, and I have found a nice PAM example here on Stack Overflow, which I attach at the bottom. The problem is that my development machine has a fingerprint reader that PAM is set up to use, as in /etc/pam.d/common-auth:
#%PAM-1.0
#
# This file is autogenerated by pam-config. All changes
# will be overwritten.
#
# Authentication-related modules common to all services
#
# This file is included from other service-specific PAM config files,
# and should contain a list of the authentication modules that define
# the central authentication scheme for use on the system
# (e.g., /etc/shadow, LDAP, Kerberos, etc.). The default is to use the
# traditional Unix authentication mechanisms.
#
auth required pam_env.so
auth sufficient pam_fprint.so
auth optional pam_gnome_keyring.so
auth required pam_unix2.so
pam_fprint.so is the fingerprint reader plugin. When you log in normally, the scan can fail and you are then prompted for a password. However, the sshd daemon does not initiate the fingerprint scan at all, and I would like to understand how it skips it, because /etc/pam.d/sshd includes the common-auth module, so it must pull it in:
#%PAM-1.0
auth requisite pam_nologin.so
auth include common-auth
account requisite pam_nologin.so
account include common-account
password include common-password
session required pam_loginuid.so
session include common-session
session optional pam_lastlog.so silent noupdate showfailed
I have tried referencing the 'sshd' scheme from the C program, but it still initiates the fingerprint reader. I want to skip the fingerprint reader somehow from C while keeping my default fingerprint configuration.
#include <stdlib.h>
#include <iostream>
#include <fstream>
#include <security/pam_appl.h>
#include <unistd.h>
// To build this:
// g++ test.cpp -lpam -o test
struct pam_response *reply;
//function used to get user input
int function_conversation(int num_msg, const struct pam_message **msg, struct pam_response **resp, void *appdata_ptr)
{
*resp = reply;
return PAM_SUCCESS;
}
int main(int argc, char** argv)
{
if(argc != 2) {
fprintf(stderr, "Usage: check_user <username>\n");
exit(1);
}
const char *username;
username = argv[1];
const struct pam_conv local_conversation = { function_conversation, NULL };
pam_handle_t *local_auth_handle = NULL; // this gets set by pam_start
int retval;
// local_auth_handle gets set based on the service
retval = pam_start("common-auth", username, &local_conversation, &local_auth_handle);
if (retval != PAM_SUCCESS)
{
std::cout << "pam_start returned " << retval << std::endl;
exit(retval);
}
reply = (struct pam_response *)malloc(sizeof(struct pam_response));
// *** Get the password by any method, or maybe it was passed into this function.
reply[0].resp = getpass("Password: ");
reply[0].resp_retcode = 0;
retval = pam_authenticate(local_auth_handle, 0);
if (retval != PAM_SUCCESS)
{
if (retval == PAM_AUTH_ERR)
{
std::cout << "Authentication failure." << std::endl;
}
else
{
std::cout << "pam_authenticate returned " << retval << std::endl;
}
exit(retval);
}
std::cout << "Authenticated." << std::endl;
retval = pam_end(local_auth_handle, retval);
if (retval != PAM_SUCCESS)
{
std::cout << "pam_end returned " << retval << std::endl;
exit(retval);
}
return retval;
}
I doubt that sshd is actually skipping that module. Rather, I suspect that the fingerprint reader authentication module (sensibly) is checking whether the authenticating user appears to be on the local system or is coming over the network (which it can figure out from PAM data like rhost) and just silently does nothing if this is a network authentication. You could try looking at the source code to see if it has such a test, or try setting PAM_RHOST via pam_set_item and see if that changes the behavior.
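If you want to test that theory from your program, the call would look roughly like this (the address is just an example value), placed after pam_start and before pam_authenticate:
// pretend the authentication request arrived over the network; locally-oriented
// modules such as a fingerprint reader may then skip themselves
retval = pam_set_item(local_auth_handle, PAM_RHOST, "203.0.113.10");
if (retval != PAM_SUCCESS)
    std::cout << "pam_set_item returned " << retval << std::endl;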
To answer your actual question, I don't believe there is a way to tell PAM to run a particular PAM group except for one module. The expected way to do what you want to do is to create a new configuration file in /etc/pam.d that matches the application name you pass to pam_start that does not include common-auth but instead contains just the modules that you want to run.
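For example, a sketch of such a service file (the service name "mynetworkapp" is made up; the module list simply mirrors your common-auth minus the fingerprint module), saved as /etc/pam.d/mynetworkapp:
#%PAM-1.0
# same modules as common-auth, without pam_fprint.so
auth  required  pam_env.so
auth  required  pam_unix2.so
The corresponding call in the program then becomes pam_start("mynetworkapp", username, &local_conversation, &local_auth_handle).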
I'm trying to develop a simple RESTful API using FastCGI (and restcgi). When I tried to implement the POST method, I noticed that the input stream (representing the request body) is wrong. I did a little test, and it looks like when I read the stream only every other character is received.
Body sent: name=john&surname=smith
Received: aejh&unm=mt
I've tried more clients just to make sure it's not the client messing with the data.
My code is:
int main(int argc, char* argv[]) {
// FastCGI initialization.
FCGX_Init();
FCGX_Request request;
FCGX_InitRequest(&request, 0, 0);
while (FCGX_Accept_r(&request) >= 0) {
// FastCGI request setup.
fcgi_streambuf fisbuf(request.in);
std::istream is(&fisbuf);
fcgi_streambuf fosbuf(request.out);
std::ostream os(&fosbuf);
std::string str;
is >> str;
std::cerr << str; // this way I can see it in apache error log
// restcgi code here
}
return 0;
}
I'm using the FastCGI module with Apache (not sure if that makes any difference).
Any idea what am I doing wrong?
The problem is in fcgio.cpp.
The fcgi_streambuf class is defined using char_type, but the int underflow() method downcasts its return value to (unsigned char); it should cast to (char_type).
I encountered this problem as well, on an unmodified Debian install.
I found that the problem went away if I supplied a buffer to the fcgi_streambuf constructor:
const size_t LEN = ... // whatever, it doesn't have to be big.
vector<char> v (LEN);
fcgi_streambuf buf (request.in, &v[0], v.size());
iostream in (&buf);
string s;
getline(in, s); // s now holds the correct data.
After finding no answer anywhere (not even on the FastCGI mailing list), I dumped the original FastCGI libraries and tried the fastcgi++ library instead. The problem disappeared. There are also other benefits: C++, more features, easier to use.
Use is.read(), not is >> ...
Sample from the restcgi documentation:
// clenstr holds the CONTENT_LENGTH value taken from the request environment,
// e.g. char *clenstr = FCGX_GetParam("CONTENT_LENGTH", request->envp);
clen = strtol(clenstr, &clenstr, 10);
if (*clenstr)
{
cerr << "can't parse \"CONTENT_LENGTH="
<< FCGX_GetParam("CONTENT_LENGTH", request->envp)
<< "\"\n";
clen = STDIN_MAX;
}
// *always* put a cap on the amount of data that will be read
if (clen > STDIN_MAX) clen = STDIN_MAX;
*content = new char[clen];
is.read(*content, clen);
clen = is.gcount();