I'm trying to write a client-server application in C using two FIFOs (client_to_server and server_to_client).
A version of the app where the client writes a command and the server reads it works well, but when I add the lines in the client that read the answer from the server, it stops working: the server blocks while reading the command from the client, as if there were nothing in the client_to_server FIFO, even though the client wrote to it. What could be the problem in this case?
You are using fputs to send data to the server. That means the data can stay in a local stdio buffer until the buffer is full or you explicitly flush it. When you do not wait for the answer but exit from the client, the FIFO is implicitly flushed and closed, so the server receives something. But if you start waiting in the client without a prior flush, you end up in a deadlock: the server is waiting for a command that never left your client's buffer, while the client is waiting for an answer to it.
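A minimal sketch of the client-side fix (the stream names are hypothetical, not from your code): flush before you block on the reply.

```c
#include <stdio.h>

/* Hypothetical client-side snippet: fp_out is the FILE* opened on the
 * client_to_server FIFO, fp_in on server_to_client. */
void send_command(FILE *fp_out, FILE *fp_in, const char *cmd)
{
    char answer[256];

    fputs(cmd, fp_out);
    fflush(fp_out);     /* push the command out of stdio's buffer NOW */

    /* only now is it safe to block waiting for the server's answer */
    if (fgets(answer, sizeof answer, fp_in) != NULL)
        printf("server answered: %s", answer);
}
```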
But remember: pipes were invented for one-way communication. If you want two-way communication with acknowledgements and/or synchronization, you should consider using sockets.
I have a server that is running a select() loop that sometimes continues blocking when the client closes the connection from its side. The select() loop handles all other read/write operations correctly and sets the correct file descriptor in the fd_set, leading me to believe that it is not an issue with the file descriptor setup on the server-side.
The way I planned on handling the client closing the connection was to have the select() break due to activity on the socket (closing it from the client-side), see that the fd was set for that socket, and then try to read from it - and if the read returned 0, then close the connection. However, because the select() doesn't always return when the client side closes the connection, there is no attempt to check the fd_set and subsequently try to read from the socket.
As a workaround, I implemented a "stop code" that the client writes to the server just before closing the connection; this write causes the select() to break, and the server reads the "stop code" and knows to close the socket. The only problem with this solution is that the "stop code" is an arbitrary string of bytes that could potentially appear in regular traffic, as the normal data being written can contain random strings that could contain the "stop code". Is there a better way to handle the client closing the connection from its end? Or is the method I described the general "best practice"?
I think my issue has something to do with OpenSSL, as the connection in question is an OpenSSL tunnel, and it is the only file descriptor in the set giving me issues.
The way I planned on handling the client closing the connection was to have the select() break due to activity on the socket (closing it from the client-side), see that the fd was set for that socket, and then try to read from it - and if the read returned 0, then close the connection. However, because the select() doesn't always return when the client side closes the connection, there is no attempt to check the fd_set and subsequently try to read from the socket.
Regardless of whether you are using SSL or not, select() can tell you when the socket is readable (has data available to read), and a graceful closure is a readable condition (a subsequent read operation reports 0 bytes read). It is only abnormal disconnects that select() can't report (unless you use the exceptfds parameter, but even that is not always guaranteed). The best way to handle abnormal disconnects is to simply use timeouts in your own code. If you don't receive data from the client for a while, just close the connection. The client will have to send data periodically, such as a small heartbeat command, if it wants to stay connected.
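A sketch of that timeout idea, assuming a single client socket and an idle limit you pick yourself:

```c
#include <sys/select.h>
#include <unistd.h>

/* Sketch: returns 1 if client_fd became readable within idle_secs,
 * 0 if it stayed silent (treated as a dead peer and closed), -1 on error. */
int wait_or_drop(int client_fd, int idle_secs)
{
    fd_set rfds;
    struct timeval tv = { .tv_sec = idle_secs, .tv_usec = 0 };

    FD_ZERO(&rfds);
    FD_SET(client_fd, &rfds);

    int r = select(client_fd + 1, &rfds, NULL, NULL, &tv);
    if (r == 0) {              /* not even a heartbeat arrived in time */
        close(client_fd);      /* assume an abnormal disconnect */
        return 0;
    }
    return r < 0 ? -1 : 1;
}
```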
Also, when using OpenSSL, if you are using the traditional SSL_... API functions (SSL_new(), SSL_set_fd(), SSL_read(), SSL_write(), etc.), make sure you are NOT just blindly calling select() whenever you want; call it ONLY when OpenSSL tells you to (when an SSL read/write operation reports an SSL_ERROR_WANT_(READ|WRITE) error). This is an area where a lot of OpenSSL newbies make the same mistake: they try to use OpenSSL on top of pre-existing socket logic that waits for a readable notification before reading data. This is the wrong way to use the SSL_... API. You are expected to ask OpenSSL to perform a read/write operation unconditionally, and if it needs to wait for new data to arrive, or for pending data to be sent out, it will tell you, and you can then call select() accordingly before retrying the SSL read/write operation.
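A sketch of that pattern, assuming a non-blocking socket fd already attached to the SSL* with SSL_set_fd(); the function name is made up:

```c
#include <openssl/ssl.h>
#include <sys/select.h>

/* Sketch: try the SSL read first; call select() only when OpenSSL asks. */
int ssl_read_retry(SSL *ssl, int fd, void *buf, int len)
{
    for (;;) {
        int n = SSL_read(ssl, buf, len);
        if (n > 0)
            return n;                 /* got decrypted data */

        fd_set rfds, wfds;
        FD_ZERO(&rfds);
        FD_ZERO(&wfds);

        switch (SSL_get_error(ssl, n)) {
        case SSL_ERROR_WANT_READ:
            FD_SET(fd, &rfds);        /* wait until the socket is readable */
            break;
        case SSL_ERROR_WANT_WRITE:    /* renegotiation may need to send */
            FD_SET(fd, &wfds);
            break;
        default:
            return -1;                /* real error or clean shutdown */
        }
        if (select(fd + 1, &rfds, &wfds, NULL, NULL) < 0)
            return -1;
    }
}
```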
On the other hand, if you are using the BIO_... API functions (BIO_new(), BIO_read(), BIO_write(), etc.), you can take control of the underlying socket I/O instead of letting OpenSSL manage it for you, and then you can do whatever you want with select() (or any other socket API).
As a workaround, I implemented a "stop code" that the client writes to the server just before closing the connection, and this write causes the select() to break and the server reads the "stop code" and knows to close the socket.
That is a very common approach in many Internet protocols, regardless of whether SSL is used or not. It is a very distinct and explicit way for the client to say "I'm done" and both parties can then close their respective sockets.
The only problem with this solution is that the "stop code" is an arbitrary string of bytes that could potentially appear in regular traffic, as the normal data being written can contain random strings that could contain the "stop code".
Then either your communication protocol is not designed properly, or your code is not processing the protocol correctly. In a properly-designed and correctly-processed protocol, there will not be any such ambiguity. There needs to be a clear distinction between the various commands that your protocol defines. Your "stop code" would be one such command amongst other commands. Random data in one command should not be mistakenly treated as a different command. If you are experiencing that problem, you need to fix it.
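One common way to get that distinction is length-prefixed framing, so that payload bytes can never be mistaken for a command. A minimal sketch (the command codes are made up for illustration):

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>
#include <unistd.h>

#define CMD_DATA 1   /* hypothetical command codes */
#define CMD_STOP 2

/* Frame = 1-byte command + 4-byte big-endian payload length + payload.
 * Because the receiver always knows exactly how many payload bytes
 * follow, a payload that happens to contain the bytes of CMD_STOP is
 * never interpreted as a stop command. */
int send_frame(int fd, uint8_t cmd, const void *payload, uint32_t len)
{
    uint8_t hdr[5];
    uint32_t be = htonl(len);

    hdr[0] = cmd;
    memcpy(hdr + 1, &be, 4);
    if (write(fd, hdr, 5) != 5)
        return -1;
    return (len == 0 || write(fd, payload, len) == (ssize_t)len) ? 0 : -1;
}
```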
I'm trying to write a server program in C that will be able to handle a badly written client program. The client sends a bunch of commands to the server and then closes the socket. After the server executes each command, it's supposed to send either a 0 or a 1 to the client, depending on whether the command failed or not.
If I don't try to send the client that one byte after each command, everything is fine and I can continue reading commands server-side after the client has closed the socket. However, if I do try writing that one byte, then after reading one command from the client I can't read any more commands (connection reset by peer).
Is there a way to handle this? As in, to be able to write and read all the commands?
In this case, you need to know whether the client waits for your answer to each command before sending the next one.
In a typical client-server connection, the client starts the communication. Since your client is sending a bunch of commands, there are two possibilities:
1. The server replies OK or NOK once, at the end of the whole operation.
2. The server replies OK or NOK to each message the client sends (sketched below).
In addition, I suggest you post some trace output, so we can evaluate which solution would fit your case better.
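For the second variant (one status byte per command), the server loop might look like this sketch; handle_command() is a hypothetical stand-in for your command execution:

```c
#include <unistd.h>

/* Hypothetical: executes one command, returns nonzero on success. */
extern int handle_command(const char *cmd, size_t len);

void serve(int client_fd)
{
    char buf[512];
    ssize_t n;

    while ((n = read(client_fd, buf, sizeof buf)) > 0) {
        unsigned char status = handle_command(buf, (size_t)n) ? 1 : 0;
        /* If the peer has already closed its socket, this write fails
         * (EPIPE/SIGPIPE) and later reads report ECONNRESET; a client
         * that fires all its commands and closes immediately causes
         * exactly the symptom described in the question. */
        if (write(client_fd, &status, 1) != 1)
            break;
    }
}
```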
I am writing a client-server program in C. The problem:
While the server is listening for and accepting new connections, it also stores the IPs it is connected to. Now, if we type a command, say LIST, into the still-running server program's window, it should display the list of IPs it is connected to.
I am using the select() function for each client.
In short, how do I accept input from the keyboard while answering incoming connections?
Just include the file descriptor for standard input (STDIN_FILENO, aka 0) in the set of file descriptors passed into select(2). Then, if input is available for reading on that, you read from it and process the command; otherwise, process the sockets as usual.
Alternatively, you could run a separate thread to handle user input, but given that you already have the select call in place, it's probably easier to continue using that.
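A sketch of folding stdin into the select() loop (listen_fd stands for your listening socket; the client bookkeeping is omitted):

```c
#include <string.h>
#include <sys/select.h>
#include <unistd.h>

void event_loop(int listen_fd)
{
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(listen_fd, &rfds);
        FD_SET(STDIN_FILENO, &rfds);    /* watch the keyboard too */

        if (select(listen_fd + 1, &rfds, NULL, NULL, NULL) < 0)
            break;

        if (FD_ISSET(STDIN_FILENO, &rfds)) {
            char line[128];
            ssize_t n = read(STDIN_FILENO, line, sizeof line - 1);
            if (n > 0) {
                line[n] = '\0';
                if (strncmp(line, "LIST", 4) == 0) {
                    /* print the stored list of connected client IPs here */
                }
            }
        }
        if (FD_ISSET(listen_fd, &rfds)) {
            /* accept() the new client and record its IP as usual */
        }
    }
}
```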
You may want to check out D. J. Bernstein's tcpserver (see http://cr.yp.to/ucspi-tcp/tcpserver.html). Basically, you can simply run your program under tcpserver, and tcpserver will handle everything as far as setting up the sockets, listening for incoming connections on whatever port you are using, and so on. When an incoming connection arrives on the port you specify, tcpserver spawns an instance of your program, pipes incoming data from the client to your program's stdin, and pipes your program's stdout back to the client. This way you can concentrate on your program's core logic (simply reading from stdin and writing to stdout) and let tcpserver do the heavy lifting with the sockets; you can also accept multiple simultaneous incoming connections this way.
As far as knowing which client IPs are currently connected: this can be checked at the command line by running netstat while the server is running.
I am writing a simple instant messenger program in C on Linux.
Right now I have a program that binds a socket to a port on the local machine, and listens for text data being sent by another program that connected to my local machine IP and port.
Well, I can have this client send text data to my program and have it displayed via stdout on my local machine; however, I cannot work out a way to send data back to the client machine, because my program is busy listening for and displaying the text sent by the client machine.
How would I go about either creating a new process (one that listens for and displays the text sent by the client machine, passing that text to the other program's stdout, while the other program forwards its stdin to the client machine), or creating two programs that do the separate jobs (sending, receiving, and displaying) and send the appropriate data to one another?
Sorry if that is weirdly worded; I will clarify if need be. I looked into exec, execve, fork, etc., but am confused as to whether this is the appropriate path to look into, or whether there is a simpler way that I am missing.
Any help would be greatly appreciated, Thank you.
EDIT: In retrospect, I figured that this would be much more easily accomplished with two separate programs: one the IM server, and the others the IM clients.
The IM clients would connect to the IM server program and send whatever text they wanted to it. The IM server would then record the data sent to it in a buffer/file, with the name/IP of the sending client prepended to the text (in the format name:text), and send that line to each client that is connected.
This would remove the need for complicated inter-process communication over stdin and stdout, and instead use a simple client/server model, with the client programs displaying text sent from the server via stdout and using stdin to send text to the server (the broadcast step is sketched below).
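A sketch of that broadcast step, assuming the server keeps an array of connected client fds; names and buffer sizes are made up:

```c
#include <stdio.h>
#include <unistd.h>

/* Relay one client's text to every connected client as "name:text". */
void broadcast(int fds[], int nclients, const char *name, const char *text)
{
    char line[600];
    int len = snprintf(line, sizeof line, "%s:%s", name, text);

    if (len < 0)
        return;
    if (len >= (int)sizeof line)
        len = sizeof line - 1;        /* truncated; send what fits */

    for (int i = 0; i < nclients; i++)
        write(fds[i], line, (size_t)len);
}
```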
With this said, I am still interested in someone answering my original question: for science. Thank you all for reading, and hopefully someone will benefit from my mental brainstorming, or whatever answers come from the community.
however, I cannot work out a way to send data back to the client machine, because my program is busy listening for and displaying the text sent by the client machine.
The same socket that accept() returned from the listening socket can be used for both sending and receiving data. So your socket is never "busy" just because you're reading from it; you can write back on the same socket.
If you need to both read and write concurrently, then share the socket returned from accept() across two different threads. Since two different buffers are being used by the networking stack for sending and receiving on the socket, a dedicated thread for reading and another dedicated thread for writing to the socket will be thread-safe without the use of mutexes.
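A sketch of that two-thread split, assuming conn_fd came from accept():

```c
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Reader thread: drain the socket and print to stdout. */
static void *reader(void *arg)
{
    int fd = *(int *)arg;
    char buf[512];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    return NULL;
}

/* Writer thread: forward stdin lines to the socket. */
static void *writer(void *arg)
{
    int fd = *(int *)arg;
    char line[512];
    while (fgets(line, sizeof line, stdin))
        if (write(fd, line, strlen(line)) < 0)
            break;
    return NULL;
}

/* Both threads share conn_fd; no mutex needed, as explained above. */
void chat(int conn_fd)
{
    pthread_t r, w;
    pthread_create(&r, NULL, reader, &conn_fd);
    pthread_create(&w, NULL, writer, &conn_fd);
    pthread_join(r, NULL);
    pthread_join(w, NULL);
}
```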
I would go with fork(): create a child process, and now you have two different processes that can do two different things with the same connected socket; one can receive while the other sends. I have no personal experience with coding a client/server like this yet, but that would be my first stab at solving your issue...
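A sketch of the fork() variant, again assuming conn_fd came from accept(); both processes inherit the one connected socket:

```c
#include <sys/wait.h>
#include <unistd.h>

void chat_fork(int conn_fd)
{
    pid_t pid = fork();
    char buf[512];
    ssize_t n;

    if (pid == 0) {                       /* child: receive and display */
        while ((n = read(conn_fd, buf, sizeof buf)) > 0)
            write(STDOUT_FILENO, buf, (size_t)n);
        _exit(0);
    } else if (pid > 0) {                 /* parent: forward stdin */
        while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0)
            if (write(conn_fd, buf, (size_t)n) < 0)
                break;
        wait(NULL);                       /* reap the child */
    }
}
```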
As #bdonlan mentioned in a comment, you definitely need a multiplexing call like select or preferably poll (or related syscalls like pselect, ppoll ...). These multiplexing calls are the primitive to wait on several channels at once (with pselect and ppoll able to atomically wait for both I/O events and signals). Read also the select tutorial man page. Of course, you can wait for several file descriptors, and you can wait for both reading & writing abilities (even on the same socket, if needed), in the same select or poll syscall.
All event-based loops and frameworks are using these multiplexing calls (like poll or select). You could also use libevent, or even (particularly when coding a graphical user interface application) some GUI toolkit like Gtk or Qt, which are all based around a central event loop.
I don't think that having a multi-process or multi-threaded application is useful in your case. You just need some event loop.
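For example, a minimal poll()-based loop over a socket and stdin might look like this sketch:

```c
#include <poll.h>
#include <unistd.h>

void event_loop(int sock_fd)
{
    struct pollfd fds[2] = {
        { .fd = sock_fd,      .events = POLLIN },
        { .fd = STDIN_FILENO, .events = POLLIN },
    };

    while (poll(fds, 2, -1) >= 0) {       /* block until either is ready */
        char buf[512];
        ssize_t n;

        if (fds[0].revents & POLLIN) {    /* network data: show it */
            n = read(sock_fd, buf, sizeof buf);
            if (n <= 0) break;            /* peer closed or error */
            write(STDOUT_FILENO, buf, (size_t)n);
        }
        if (fds[1].revents & POLLIN) {    /* keyboard data: send it */
            n = read(STDIN_FILENO, buf, sizeof buf);
            if (n <= 0) break;
            write(sock_fd, buf, (size_t)n);
        }
    }
}
```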
You might also arrange to get a SIGIO signal when data arrives on your socket, using fcntl() with F_SETOWN (and O_ASYNC), but that is not very useful in your case; you would then usually want your socket to be non-blocking as well.
I have written a client-server program which sends some data from a file on the server to the client. I don't want the client to wait indefinitely if the server is not running, so I am using the select() system call, which takes a timeout argument telling the client how long to wait for the server to send the data. The problem now is that the client receives data only for that number of seconds (as specified in select()) and then stops doing the actual work.
NOTE: I am using UDP.
Can anyone solve this problem?
Do you actually read after select() returns? You must read from the fd that select() marked as ready.
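A sketch of that pattern for a UDP client; note that the fd_set and the timeout are re-initialized before every select() call, because select() modifies both:

```c
#include <stdio.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

void receive_loop(int udp_fd)
{
    for (;;) {
        fd_set rfds;
        struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };  /* per-call wait */

        FD_ZERO(&rfds);
        FD_SET(udp_fd, &rfds);          /* re-arm: select() modifies the set */

        int r = select(udp_fd + 1, &rfds, NULL, NULL, &tv);
        if (r < 0)
            break;                      /* error */
        if (r == 0) {
            fprintf(stderr, "server not responding\n");
            break;                      /* give up: server likely not running */
        }
        if (FD_ISSET(udp_fd, &rfds)) {  /* actually read the marked fd */
            char buf[1024];
            ssize_t n = recvfrom(udp_fd, buf, sizeof buf, 0, NULL, NULL);
            if (n <= 0)
                break;
            fwrite(buf, 1, (size_t)n, stdout);
        }
    }
}
```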