I'm trying to write a server program in C that will be able to handle a badly written client program. The client sends a bunch of commands to the server and then closes the socket. After the server executes each command, it's supposed to send either a 0 or a 1 to the client depending on whether the command failed or not.
If I don't try to send the client that one byte after each command, everything is fine and I can continue reading commands server-side after the client has closed the socket. However, if I do try writing that one byte, then after reading one command from the client I can't read any more commands (connection reset by peer).
Is there a way to handle this? As in, to be able to write and read all the commands?
In this case, you need to know whether the client awaits your answer to each command before sending the next one.
In a typical client-server connection, the client starts the communication. Since your client is sending a bunch of commands, there are two possibilities:
1) At the end of the whole operation, your socket returns OK or NOK.
2) For each message the client sends, your return is OK or NOK.
In addition, I suggest you post any trace information, so we can evaluate which solution would fit your case better.
I'm trying to write a client-server application in C using two FIFOs (client_to_server and server_to_client).
A version of the app where the client writes a command and the server reads it works well, but when I add the lines in the client to read the answer from the server, it doesn't work anymore: the server gets blocked reading the command from the client (as if there were nothing in the client_to_server FIFO, although the client has written to it). What could be the problem here?
You are using fputs to send data to the server. That means the data can stay in a local stdio buffer until the buffer is full or you explicitly flush it. When you do not wait for the answer but exit from the client, the FIFO is implicitly flushed and closed, causing the server to receive something. But if you start waiting in the client without a prior flush, you end up in a deadlock.
But remember: pipes were invented for one-way communication. If you want two-way communication with acknowledgements and/or synchronization, you should consider using sockets.
I am writing a client server program in C. The problem:
While the server is listening and accepting new connections, it also stores the IPs it is connected to. Now if we enter a command, say LIST, in the still-running server program window, it should display the list of IPs it is connected to.
I am using the select() function for each client.
In short, how do I accept input from the keyboard while answering incoming connections?
Just include the file descriptor for standard input (STDIN_FILENO, aka 0) in the set of file descriptors passed into select(2). Then, if input is available for reading on that, you read from it and process the command; otherwise, process the sockets as usual.
Alternatively, you could run a separate thread to handle user input, but given that you already have the select call in place, it's probably easier to continue using that.
You may want to check out D. J. Bernstein's tcpserver (see http://cr.yp.to/ucspi-tcp/tcpserver.html). Basically, you can simply run your program under tcpserver, and tcpserver will handle everything as far as setting up the sockets, listening for incoming connections on whatever port you are using, etc. When an incoming connection arrives on the port you specify, tcpserver will spawn an instance of your program, pipe incoming data from the client to your program's stdin, and pipe outgoing data from your program's stdout back to the client. This way, you can concentrate on your program's core logic (simply reading from stdin and writing to stdout) and let tcpserver handle all of the heavy lifting on the socket side; you can also accept multiple simultaneous incoming connections this way.
As for knowing which client IPs are currently connected: this can be done at the command line by using netstat while the server is running.
I have a program that displays its log output on stdout.
So if I open a telnet session to my target Linux box and then launch my program in that session, I get the log messages displayed on the telnet session.
My program also has a little HTTP server running in it. Now, if I change the IP address of the target Linux box and restart the interface (the HTTP server restarts automatically, because I detect the change of IP address with netlink), the telnet session is closed, stdout is redirected to the socket opened by my HTTP server, and the printf of the log messages blocks.
I tried to detect this situation with select, but without success: how do I use select with stdout?
select returns success right before the printf (which then blocks).
Any suggestions to avoid this problem?
If I understand correctly, the telnet session (why aren't you using SSH??) under which the HTTP server is running becomes broken due to the change of IP address.
What will happen after that if the program continues to write data to this session (which is its stdout) is that, at first the writes will succeed as the system buffers up data, then eventually the writes will block (not "lock"). Finally, the TCP connection will time out and the writes will return an error. It may or may not take a long time for the TCP session to time out, but it eventually will.
You can make your log output code use non-blocking writes to stdout if you want to avoid blocking (e.g. if your application is event-driven and must not block). You will need to use fcntl to change stdout to non-blocking and you will probably need to avoid stdio altogether because stdio is not designed to work with non-blocking output. You must implement your own buffering and write directly to file descriptor 1.
You also mentioned that you want to log to an HTTP connection after the stdout log becomes broken. You could do that too (triggered once you get an error writing to stdout), but it will be a lot more work. You will have to manage your log buffer internally in your application until an HTTP client connects and requests it. You will also want to add a provision for discarding the log if it gets too big, in case no HTTP client connects. All of that is beyond the scope of an SO question...
I am working on a TCP server side app, which forwards data to a client.
The problem I am facing is finding out, on the server side, whether my client disconnected, and which data was sent and which was not.
My research showed that there are basically two ways to find that out:
1) read from the socket and check if a FIN came back
2) wait for the SIGPIPE signal on the send call
The first solution doesn't seem reliable to me, as I can't guarantee that the client doesn't send any random data and as such would make my test succeed even though it shouldn't.
The problem with the second solution is that I only get the SIGPIPE after X subsequent calls to send, and as such I can't guarantee which data was really sent and which was not. I read here on SO and on other sites that the SIGPIPE is only supposed to come after the second call to send; I can reproduce that behavior if I only send and receive over localhost, but not if I really use the network.
My question is whether it's normal that X can vary and, if yes, which parameters I might look at to alter that behavior, or whether that is not reliably possible due to the nature of TCP.
A TCP connection is bidirectional. A FIN from the client signals that the client won't be sending any more data, but data in the other direction (from the server to the client) can still be sent (as long as the client does not reset the connection with an RST). The reliable way to detect the FIN from the client is to read from the client socket (if you are using the socket interface) until read returns 0.
TCP guarantees that if both ends terminate the connection with a FIN that is acknowledged, all data exchanged within the connection was received by the other side. If the connection is terminated with an RST, TCP by itself gives you no way to determine which data was successfully read by the other side. To do that, you need some application-level mechanism, such as application-level acknowledgements. But the best approach is to design your protocol so that, under normal circumstances, the connection is always closed gracefully (FINs from both sides, no RSTs).
I am experimenting with shutdown(2) system call.
According to the manual, it does what I want.
When I invoke it in a TCP server in the following way:
shutdown(clntSocket, SHUT_RDWR)
then clients must be able to observe that the TCP connection was closed.
I guess, this means that clients must be able to notice that no further data can be sent/received. This is the theory which I am not able to corroborate.
In this simple experiment I define a TCP server and a TCP client. The server receives 3 bytes from the client, then invokes shutdown(2). The client sends 3 bytes and subsequently it sends another 3 bytes. Both send operations succeed. Shouldn't the second send operation fail?
Thanks in advance for the help.
A send operation succeeding just means the data was queued for sending. It doesn't mean it was actually sent or received. After calling shutdown, you can call read if you want to confirm that the other end has completed its part of the shutdown process. Once read returns zero or an error, you know the connection has been shut down.
When the server calls shutdown(2) with SHUT_WR or SHUT_RDWR, a TCP packet with the FIN flag is sent. FIN means that the sender will not send any more data. It says nothing about the intent to receive data.
The client has no way to know whether the server has called shutdown with SHUT_RD. It doesn't seem to affect the client in any way.