Sending images to server?

I have to write a C++ application that has to read images from a local directory on the client computer (Linux, Ubuntu) and send them to a server (Linux, Ubuntu).
There will be almost 1000 such clients.
Assuming that the rest of my program is written in C++, I need a hint on which libraries and technologies to use to achieve this goal.

That would depend on a number of variables.
First, determine what format the server accepts. Is it SOAP or not? If so, you can stream the data to the server; otherwise, you need to read the entire file first and then send it.
Second, here is a very good article on how to create a web request in C++. Have a look at it: How do you make an HTTP request with C++?
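If the server ends up accepting plain HTTP uploads, a minimal sketch of the client side with libcurl (a C API, usable from C++) could look like this. The upload URL and the form field name "image" are assumptions for illustration, and curl_global_init() is assumed to be called once at program start:

```c
/* Minimal sketch: POST one image file as multipart/form-data with libcurl.
   The URL and the "image" field name are placeholders, not a real API. */
#include <stdio.h>
#include <curl/curl.h>

int upload_image(const char *path)
{
    CURLcode res = CURLE_FAILED_INIT;
    CURL *curl = curl_easy_init();
    if (!curl)
        return -1;

    /* Build a multipart/form-data body containing the file. */
    curl_mime *mime = curl_mime_init(curl);
    curl_mimepart *part = curl_mime_addpart(mime);
    curl_mime_name(part, "image");
    curl_mime_filedata(part, path);

    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
    curl_easy_setopt(curl, CURLOPT_MIMEPOST, mime);

    res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        fprintf(stderr, "upload failed: %s\n", curl_easy_strerror(res));

    curl_mime_free(mime);
    curl_easy_cleanup(curl);
    return res == CURLE_OK ? 0 : -1;
}
```

With roughly 1000 clients you would also want retries and some back-off on the client side, so they don't all hit the server at the same moment.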

Related

Best way to write an FTP client program to list files on the server?

I am trying to write a client-server program in C on Windows. The objective is to receive the directory listing from the server. I am trying to design the client and server to make the best use of network resources.
One way to implement it is for the server to make a single send() call per file, so if there are 100 files, it makes 100 calls. But I feel that is a waste of network resources. As far as I know, the buffer size for send() or recv() on Windows is 8 KB, but the info for a single file will hardly be 1 KB. So is there a way to make a send() call that sends the info of multiple files (the file info is stored in structures, so they basically form a linked list)? Maybe I can send the info of at least 8 files in a single send() call; that should reduce the total number of send() calls to at most 13.
So basically, is there a way to send a linked list via send()? Please let me know if you can think of any alternative method.
Good question! +1 for that.
But do you really want or need to write your code to use Winsock? There are good reasons to do so -- including that it's fun and a challenge. But if you don't need to, you might want to consider using the libcurl ftp library, which is free, multi-platform (including win32, of course), just works, and might make your job a lot easier.
The only way I know of to do this with FTP is to use multiple connections to the FTP server. If this is allowed by the server, there can be a significant performance boost when listing, because the many protocol exchanges needed to list a complete folder tree can be run in parallel.
Rgds,
Martin
TCP is a byte stream. There is no guarantee of a 1-to-1 relation between the number of items you want to send and the number of calls to send() (or recv()) you need to make. That is simply not how TCP works. You format the data the way you need to, and then you keep calling send() until it tells you that all of the data has been sent.
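To make that concrete, here is a rough Winsock sketch of the usual approach: serialize each entry of the linked list into a text line, then loop on send() until the whole buffer has gone out. The FileInfo struct and its fields are made up for illustration:

```c
#include <stdio.h>
#include <winsock2.h>

struct FileInfo {
    char name[256];
    unsigned long size;
    struct FileInfo *next;
};

/* Keep calling send() until all len bytes are written or an error occurs. */
static int send_all(SOCKET s, const char *buf, int len)
{
    int sent = 0;
    while (sent < len) {
        int n = send(s, buf + sent, len - sent, 0);
        if (n == SOCKET_ERROR)
            return -1;
        sent += n;
    }
    return 0;
}

/* Walk the linked list and send one text line per file. */
int send_listing(SOCKET s, const struct FileInfo *head)
{
    char line[512];
    for (; head != NULL; head = head->next) {
        int len = snprintf(line, sizeof(line), "%s %lu\r\n",
                           head->name, head->size);
        if (len < 0 || len >= (int)sizeof(line))
            return -1;
        if (send_all(s, line, len) != 0)
            return -1;
    }
    return 0;
}
```

The receiver then splits the stream on the \r\n delimiters; how many send() calls the bytes actually take on the wire is up to the TCP stack, not you.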
Regarding FTP, please read RFC 959 and RFC 3659 to learn how the ftp protocol actually works. Before the introduction of the MLST and MLSD commands, directory listings had no standardized format. FTP servers were free to use whatever formatting they wanted. Many servers just piped the raw data from the OS's own dir or list commands. Indy, for example, includes several dozen parsers in its FTP client for handling non-standard directory listings.

Interprocess communication with a Daemon

I want to implement a Unix daemon (let's call it myUnixd), and want the user to be able to interact with this daemon via the command line, for example:
myUnixd --help # will display help information
myUnixd --show # will show some data (the daemon should be doing the work)
So my question is: How can I communicate with the daemon? I was thinking about Unix domain sockets. Can someone tell me the right way to do this?
Thanks.
Use Berkeley sockets. Specifically, you can create a "UNIX domain socket" (also known as a "local domain socket"), which appears as a file in the filesystem. Write to it to send text to the daemon, and read from it to receive text from the daemon. You can implement this with a few function calls.
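As a rough illustration, here is a minimal client talking to the daemon over a UNIX domain socket; the socket path /tmp/myUnixd.sock and the "show\n" command are made-up placeholders:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    struct sockaddr_un addr;
    char reply[512];
    ssize_t n;
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/tmp/myUnixd.sock", sizeof(addr.sun_path) - 1);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    /* Send a command and print whatever the daemon answers. */
    write(fd, "show\n", 5);
    while ((n = read(fd, reply, sizeof(reply))) > 0)
        fwrite(reply, 1, (size_t)n, stdout);

    close(fd);
    return 0;
}
```

The daemon side is the mirror image: socket(), bind() to the same path, listen(), accept(), then read commands and write replies.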
If you want something more advanced, you can also use DBus, which offers a more sophisticated interface, but which is more complicated to learn.
Use a TCP socket if you want to be able to use telnet to communicate with your daemon.
One could also use Remote Procedure Call (RPC) for such client-server communication. There are different message formats (protocols) that can be used with it; one of them is JSON.
The JSON-RPC protocol is very well accepted for such tasks. You can find different tools and libraries to embed in your software; a quick search on Google gives this C library. The advantage of such libraries is that, from a JSON specification file where you define all your remote function calls, they create client and/or server stubs that you can use in your code out of the box.
As a listener one can use sockets, as the other responses state, or just an embedded HTTP server like microhttpd (and libcurl for the client). There are plenty of examples out there to reuse. HTTP also allows you to run your client behind a proxy.

Server in C. How do I do it with query strings?

So, I am assuming that I will need to use sockets (I am a newbie to C).
The program will be for Windows (in pure C), and I shall be using these examples:
http://cs.baylor.edu/~donahoo/practical/CSockets/winsock.html
My question is: instead of the client program connecting via TCP, I want the server to accept connections from a web browser, i.e. via HTTP.
So if the server program is running, you type http://yourip:port/?gettemps and the server responds. But how do I do it?
As you might have guessed, this program will be for monitoring temps, remotely, via a web browser. But not for the CPU; for the GPU, using AMD's ADL library (so yeah, only AMD cards).
The simplest option that is supported by most web servers is CGI - Common Gateway Interface.
Microsoft, of course, has their own way of running web apps - ISAPI.
HTTP is quite a big standard, you might want to use some library such as libcurl to handle the details for you.
If you decide to code it yourself, HTTP is running over TCP so you first need to open a TCP socket at the standard HTTP port 80. Then simply listen on the socket and parse the incoming HTTP data - a great summary is given here: http://www.jmarshall.com/easy/http/.
Web browsers send an HTTP GET request to the server over TCP. If you are writing a web server from scratch, you will need to parse the data coming from the web browser. An HTTP GET request is a string, for example GET /images/logo.png HTTP/1.1, so tokenize that string as it arrives over TCP and extract the command.
Once you have received the command, call the appropriate functions to handle the request.
Here is a great example of a simple HTTP server. You might want to make the server multi-threaded, as you may have multiple simultaneous users.
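Putting those pieces together, a bare-bones Winsock sketch might look like the following; port 8080 is an arbitrary choice, proper HTTP parsing and error handling are omitted, and the response body is a placeholder where the real program would call the ADL library:

```c
#include <stdio.h>
#include <string.h>
#include <winsock2.h>

#pragma comment(lib, "ws2_32.lib")

int main(void)
{
    WSADATA wsa;
    SOCKET srv, cli;
    struct sockaddr_in addr;
    char req[2048], resp[512];
    int n, len;

    WSAStartup(MAKEWORD(2, 2), &wsa);

    srv = socket(AF_INET, SOCK_STREAM, 0);
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(8080);              /* arbitrary port for the example */
    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 5);

    for (;;) {
        cli = accept(srv, NULL, NULL);
        n = recv(cli, req, sizeof(req) - 1, 0);
        if (n > 0) {
            req[n] = '\0';
            /* The request line looks like: GET /?gettemps HTTP/1.1 */
            const char *body = strstr(req, "gettemps")
                             ? "GPU temperatures would be reported here\n" /* query ADL here */
                             : "unknown command\n";
            len = snprintf(resp, sizeof(resp),
                           "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n"
                           "Content-Length: %d\r\nConnection: close\r\n\r\n%s",
                           (int)strlen(body), body);
            send(cli, resp, len, 0);
        }
        closesocket(cli);
    }
}
```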
If you have already set up your web server to run the app on the appropriate port, you can use getenv("QUERY_STRING") to access the web equivalent of command line parameters.
It would be better to call your program directly rather than just using the server to access a single default program as your example does; that way you could use http://yourip:port/yourprogram?cmd=gettemps. In this example, getenv("QUERY_STRING") would return 'cmd=gettemps'.
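For the CGI route the program itself stays tiny. Here is a sketch, assuming the web server runs it and sets QUERY_STRING, using the cmd=gettemps convention from the example above:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const char *q = getenv("QUERY_STRING");

    /* CGI output starts with headers, then a blank line, then the body. */
    printf("Content-Type: text/plain\r\n\r\n");

    if (q != NULL && strcmp(q, "cmd=gettemps") == 0)
        printf("GPU temperatures would be reported here\n"); /* call ADL in real code */
    else
        printf("unknown or missing command: %s\n", q ? q : "(none)");

    return 0;
}
```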

Send data over the Internet using C

I need to send a signal from my remote PC over the Internet that lets me know whether this PC is connected.
I could send a request with GET values to my page and then, from that PHP page, make a query to the database.
How do I send this value from a C program that runs on this remote PC?
Thanks!
(It's a Windows PC.)
For making HTTP requests I recommend libcurl, which is the library that almost everybody seems to be using.
http://curl.haxx.se/libcurl/
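A minimal sketch of that "ping my page" idea with libcurl might look like this; the URL is an assumption, and URL-encoding of the PC name is skipped for brevity:

```c
#include <stdio.h>
#include <curl/curl.h>

/* Report "I'm alive" by issuing one GET request with the PC name
   in the query string. The URL is a placeholder. */
int report_alive(const char *pcname)
{
    CURLcode res = CURLE_FAILED_INIT;
    char url[256];
    CURL *curl = curl_easy_init();

    if (curl) {
        snprintf(url, sizeof(url),
                 "http://example.com/alive.php?pcname=%s", pcname);
        curl_easy_setopt(curl, CURLOPT_URL, url);
        res = curl_easy_perform(curl);   /* fires the GET request */
        curl_easy_cleanup(curl);
    }
    return res == CURLE_OK ? 0 : -1;
}
```

Run it from the Task Scheduler every few minutes and the PHP page just records the timestamp in the database.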
What operating system? Linux? Windows? Does the program need to be cross-platform? The reason I ask is that it influences whether you should use a library, or TCP/IP sockets, given that the request will be very simple.
Also, why not use Perl, or better yet, wget? You could schedule a task on Windows, or a cron job on Unix, to run wget http://yoururl/path?pcname=`uname` or similar.
What about using a client like dyndns? I'm not sure using a C program would be such a good idea for that purpose; it's a system administration task and using scripting for this would work best, unless you have a specific need in mind.

Runtime information in C daemon

The user, administrators and support staff need detailed runtime and monitoring information from a daemon developed in C.
In my case this information includes, for example:
the current system health, like throughput (MB/s), already written data, ...
the current configuration
I would use JMX in the Java world and the procfs (or sysfs) interface for a kernel module. A log file doesn't seem to be the best way.
What is the best way to provide such an information interface for a C daemon?
I thought about opening a socket and implementing a bare-bones HTTP or XML-RPC server, but that seems to be overkill. What are the alternatives?
You can use a signal handler in your daemon that reacts to, say, USR1 and dumps the information to the screen/log/net. This way, you can just send the process a USR1 signal whenever you need the info.
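A sketch of that, assuming the statistics live in the daemon's main loop; the handler only sets a flag (to stay async-signal-safe) and the loop does the actual dump:

```c
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t dump_requested = 0;

static void on_usr1(int sig)
{
    (void)sig;
    dump_requested = 1;              /* do the real work outside the handler */
}

int main(void)
{
    struct sigaction sa;
    unsigned long bytes_written = 0; /* placeholder statistic */

    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_usr1;
    sigaction(SIGUSR1, &sa, NULL);

    for (;;) {
        /* ... the daemon's real work updates bytes_written here ... */
        if (dump_requested) {
            dump_requested = 0;
            fprintf(stderr, "written so far: %lu bytes\n", bytes_written);
        }
        sleep(1);
    }
}
```

Then `kill -USR1 <pid>` triggers a dump whenever support staff need it.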
You could listen on a UNIX domain socket and regularly write the current status (say, once a second) to anyone who connects to it. You don't need to implement a protocol like HTTP or XML-RPC; since the communication will be one-way, just regularly write a single line of plain text containing the state.
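A rough sketch of that one-way status socket, with a made-up socket path and placeholder counters:

```c
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    struct sockaddr_un addr;
    double mbps = 0.0;               /* placeholder counters that the real */
    unsigned long written_mb = 0;    /* daemon would keep updated          */
    int srv = socket(AF_UNIX, SOCK_STREAM, 0);

    signal(SIGPIPE, SIG_IGN);        /* a disconnecting reader must not kill us */

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/run/mydaemon.status", sizeof(addr.sun_path) - 1);
    unlink(addr.sun_path);           /* remove a stale socket from a previous run */
    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 1);

    for (;;) {
        int cli = accept(srv, NULL, NULL);
        /* Write one plain-text line per second until the reader goes away. */
        for (;;) {
            char line[128];
            int len = snprintf(line, sizeof(line),
                               "throughput_MBps=%.1f written_MB=%lu\n",
                               mbps, written_mb);
            if (write(cli, line, (size_t)len) < 0)
                break;
            sleep(1);
        }
        close(cli);
    }
}
```

Reading the status is then a one-liner with a tool like socat, or a few lines of C.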
If you are using a relational database anyway, create another table and fill it with the current status as frequently as necessary. If you don't have a relational database, write the status to a file, and implement some rotation scheme to avoid overwriting a file that somebody is reading at that very moment.
Write to a file. Use a file locking protocol to force atomic reads and writes. Anything you agree on will work. There's probably a UUCP locking library floating around that you can use. In a previous life I found one for Linux. I've also implemented it from scratch. It's fairly trivial to do that too.
Check out the lockdev(3) library on Linux. It's for devices, but it may work for plain files too.
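For illustration, here is a sketch of the locked-status-file idea using flock(2), a simpler alternative to the UUCP/lockdev schemes mentioned above; the path and field names are placeholders:

```c
#include <stdio.h>
#include <sys/file.h>

/* Rewrite the status file under an exclusive lock; readers take LOCK_SH. */
int write_status(double mb_per_s, unsigned long written_mb)
{
    FILE *f = fopen("/var/run/mydaemon.status", "w");
    if (!f)
        return -1;

    flock(fileno(f), LOCK_EX);
    fprintf(f, "throughput_MBps=%.1f\nwritten_MB=%lu\n",
            mb_per_s, written_mb);
    fflush(f);
    flock(fileno(f), LOCK_UN);
    fclose(f);
    return 0;
}
```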
I like the socket idea best. There's no need to support HTTP or any RPC protocol. You can create a simple application specific protocol that returns requested information. If the server always returns the same info, then handling incoming requests is trivial, though the trivial approach may cause problems down the line if you ever want to expand on the possible queries. The main reason to use a pre-existing protocol is to leverage existing libraries and tools.
Speaking of leveraging, another option is to use SNMP and access the daemon as a managed component. If you need to query/manage the daemon remotely, this option has its advantages, but otherwise can turn out to be greater overkill than an HTTP server.
