I have written a client-server program that sends some data from a file on the server to the client. I don't want the client to wait indefinitely if the server is not running, so I am using the select() system call: it takes a timeout argument that tells the client how long to wait for the server to send the data. Now the problem is that it only sends the data for that number of seconds (as specified in select()); it's not doing the actual work.
NOTE: I am using a UDP connection.
Can anyone solve this problem?
Do you actually read after select() returns? You must read from the fd that select() marked as ready.
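For reference, a minimal sketch of that pattern, assuming a UDP client socket `sockfd` that has already been created: wait with select(), and only if the fd is reported readable call recvfrom() on it. The 5-second timeout is just an example value.

```c
#include <stdio.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Sketch: wait up to 5 seconds for a UDP reply, then read it from the same
 * fd that select() marked readable. */
ssize_t recv_with_timeout(int sockfd, char *buf, size_t len)
{
    fd_set readfds;
    struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };  /* how long to wait, not how long to keep receiving */

    FD_ZERO(&readfds);
    FD_SET(sockfd, &readfds);

    int ready = select(sockfd + 1, &readfds, NULL, NULL, &tv);
    if (ready < 0) {
        perror("select");
        return -1;
    }
    if (ready == 0) {
        fprintf(stderr, "server did not answer within 5 seconds\n");
        return -1;
    }
    /* select() said the fd is readable, so this recvfrom() will not block */
    return recvfrom(sockfd, buf, len, 0, NULL, NULL);
}
```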
I'm trying to write a client-server application in C using 2 fifos (client_to_server and server_to_client).
A version of the app where the client writes a command and the server reads it works well, but when I add the lines in the client to read the answer from the server, it doesn't work anymore: the server gets blocked reading the command from the client (as if there were nothing in the client_to_server fifo, although the client has written to it). What could be the problem in this case?
You are using fputs to send data to the server. That means the data can stay in a local buffer until the buffer is full or you explicitly flush it. When you do not wait for the answer but exit from the client, the fifo is implicitly flushed and closed, so the server receives something. But if you start waiting in the client without a prior flush, you end up with a deadlock.
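As an illustration, here is a minimal sketch of the client's request/reply step; the stream names `to_server`/`from_server` and the helper function are hypothetical, and the point is the fflush() before blocking on the answer.

```c
#include <stdio.h>

/* Sketch, assuming the two fifos were already opened with fopen():
 * to_server ("client_to_server", "w") and from_server ("server_to_client", "r").
 * Names are illustrative only. */
void send_command(FILE *to_server, FILE *from_server,
                  const char *command, char *answer, size_t len)
{
    fputs(command, to_server);  /* lands in stdio's user-space buffer */
    fflush(to_server);          /* force it into the fifo before we block on the reply */

    if (fgets(answer, (int)len, from_server) == NULL)
        perror("fgets");
}
```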
But remember: pipes were invented for one-way communication. If you want two-way communication with acknowledgements and/or synchronization, you should consider using sockets.
I'm trying to write a server program in C that will be able to handle a badly written client program. The client sends a bunch of commands to the server and then closes the socket. After the server executes each command, it's supposed to send either a 0 or a 1 to the client, depending on whether the command failed or not.
If I don't try to send the client that one byte after each command, everything is fine and I can continue reading commands server-side after the client has closed the socket. However, if I do try writing that one byte after reading one command from the client, I can't read any more commands (connection reset by peer).
Is there a way to handle this? As in, to be able to write and read all the commands?
In this case, you need to know whether the client awaits your answer to each command before sending another one.
In a typical client-server connection, the client starts the communication. Since your client is sending a bunch of commands, there are two possibilities:
At the end of the whole operation, you return OK or NOK on the socket.
For each message the client sends, you return OK or NOK (sketched below).
In addition, I suggest you post any trace information, so we can evaluate which solution would fit your case better.
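For what it's worth, here is a rough sketch of the second possibility (one status byte per command), assuming commands are newline-terminated; `run_command` is a hypothetical helper, and SIGPIPE is ignored so that writing to a client that has already closed its socket only returns an error instead of killing the server.

```c
#include <signal.h>
#include <stdio.h>
#include <sys/socket.h>

int run_command(const char *cmd);   /* hypothetical: executes one command, returns 1 on success, 0 on failure */

void serve_commands(int connfd)     /* connfd: the socket returned by accept() */
{
    signal(SIGPIPE, SIG_IGN);       /* a failed send() should return -1, not kill the process */

    FILE *in = fdopen(connfd, "r"); /* stdio for the read side; we keep send()ing on the raw fd */
    if (in == NULL)
        return;

    char line[256];
    while (fgets(line, sizeof line, in) != NULL) {   /* one newline-terminated command per loop */
        char status = run_command(line) ? '1' : '0';
        if (send(connfd, &status, 1, 0) < 0)         /* client may already be gone; keep reading anyway */
            perror("send");
    }
    fclose(in);                     /* also closes connfd */
}
```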
I realise that I'll get at least one answer along the lines of "(re)write the code so it doesn't hang" but let's assume we don't live in that shiny happy utopia just yet...
In our embedded system we have a big SDK including a web-server (Boa) which is the primary method of user interaction.
It's possible, during certain phases of the moon, that something causes the web server to hang or otherwise become stuck in such a way that the process appears to be running normally (not crashed/dead/using 100% CPU) but does not serve any web pages.
So, the question is, how do we test/detect this situation?
To test whether the server is hung, create a TCP socket and connect to port 80 on IP address 127.0.0.1 (the loopback address). Then send the following text over the socket:
GET / HTTP/1.1\r\n\r\n
Most servers will interpret that as a request for index.html. Alternatively, you could implement an undocumented URL for testing (which allows for a shorter, predetermined response), e.g.
GET /test/fdoaoqfaf12491r2h1rfda HTTP/1.1\r\n\r\n
You then need to read the response from the server. This involves using select with a reasonable timeout to determine whether any data came back from the server, and if so, use recv to read the data. The response from the server will consist of a header followed by content. The header consists of lines of text, with a blank line at the end of the header. Lines end with \r\n, so the end of the header is \r\n\r\n.
Getting the content involves calling select and recv until recv returns 0. This assumes that the server will send the response and then close the socket. Some sophisticated servers will leave a socket open to allow multiple requests over the same socket. A simple embedded server should not be doing that. (If your server is trying to use the same socket for multiple requests, then you need to figure out how to turn that feature off.)
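Put together, a health check along those lines could look roughly like this. It is only a sketch, and it deliberately uses HTTP/1.0 so that no Host header is required and the server closes the connection after responding.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Returns 1 if the local web server answered within timeout_sec, 0 otherwise. */
int web_server_alive(int timeout_sec)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 0;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(80);
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);   /* 127.0.0.1 */

    const char req[] = "GET / HTTP/1.0\r\n\r\n";     /* 1.0: no Host header needed, server closes after reply */
    int alive = 0;

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == 0 &&
        send(fd, req, sizeof req - 1, 0) == (ssize_t)(sizeof req - 1)) {
        fd_set rfds;
        struct timeval tv = { .tv_sec = timeout_sec, .tv_usec = 0 };
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);

        char buf[512];
        if (select(fd + 1, &rfds, NULL, NULL, &tv) > 0 &&
            recv(fd, buf, sizeof buf, 0) > 0)
            alive = 1;                               /* any bytes back means it is still serving */
    }
    close(fd);
    return alive;
}
```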
That's all very well and good, but you really need to rewrite your code so it doesn't hang.
The most likely cause of the problem is that the server has a bunch of dangling sockets, i.e. connections from clients that were never properly cleaned up. Dangling sockets will eventually prevent the server from accepting more connections, either because the server has a limit on the number of open connections, or because the process that's running the server uses up all of its file descriptors.
The first thing to check is the TCP timeout value. One project that I worked on had a default timeout of 5 hours, which meant that dangling sockets stayed open for 5 hours. A reasonable timeout is 1 minute.
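If the timeout in question is the kernel's TCP keepalive timer (rather than a setting in the web server itself), then on Linux it can be shortened per socket roughly as follows; the TCP_KEEP* option names are Linux-specific and the values are only examples.

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Sketch: shorten the keepalive timeout on one socket (Linux-specific options). */
void set_short_keepalive(int fd)
{
    int on = 1;
    int idle = 30;      /* seconds of silence before the first probe */
    int interval = 10;  /* seconds between probes */
    int count = 3;      /* failed probes before the connection is declared dead */

    setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on);
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof idle);
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof interval);
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof count);
}
```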
Then you need to create a client that deliberately misbehaves. Clients can misbehave by
leaving a socket open without reading the server's response
abruptly closing the socket while reading the response
gracefully closing the socket while reading the response
The first situation should be handled by the TCP timeout. The other two need to be properly handled by the server code. Graceful and abrupt socket closure is controlled via the SO_LINGER socket option (set with setsockopt) and the shutdown function. After the client misbehaves, check the number of open file descriptors in the server process to verify that the server has handled the situation correctly.
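As a sketch, the abrupt-closure case can be produced in the misbehaving test client with SO_LINGER set to zero, which turns close() into an abortive close (RST instead of FIN).

```c
#include <sys/socket.h>
#include <unistd.h>

/* Sketch of a deliberately misbehaving client: an abortive close.
 * With l_onoff = 1 and l_linger = 0, close() resets the connection
 * instead of shutting it down gracefully. */
void abort_connection(int fd)
{
    struct linger lng = { .l_onoff = 1, .l_linger = 0 };
    setsockopt(fd, SOL_SOCKET, SO_LINGER, &lng, sizeof lng);
    close(fd);  /* peer sees "connection reset" rather than a clean close */
}
```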
I have a load-test application that I want to start multiple threads; each of those threads will open a socket to the same server and communicate with it. Is this possible, or must I fork() or run multiple instances of a single-threaded app?
[Update from comments:]
The problem I seem to be getting is that the multiple calls to socket() all seem to return a value of 0. Therefore, when the threads try to communicate with the server, only one of them succeeds while the rest wait for a response and time out.
Sure! The only time this would be a problem is if they were all acting as servers and trying to listen on the same port. It sounds like you're using them as clients, and in that case you can have as many as you want (as long as the OS doesn't run out of file descriptors for your process).
Yes, you can create multiple client socket connections to the same server IP/Port, as long as you are not binding those client sockets to the same local IP/Port at the same time. By default, connect() does an implicit bind() to a random local port unless bind() was explicitly called beforehand.
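A bare-bones sketch of such a multi-threaded load client is below. The port number is a placeholder, and note that each thread gets its own fd from socket(); a return value of 0 would only happen if fd 0 (stdin) had been closed first, and several calls cannot all return 0 while the earlier sockets stay open.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#define NUM_THREADS 8
#define SERVER_PORT 9000            /* hypothetical test server on localhost */

/* Each thread opens its own connection to the same server; connect() does an
 * implicit bind() to a random local port, so the connections do not collide. */
static void *worker(void *arg)
{
    (void)arg;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return NULL;
    }

    struct sockaddr_in srv = {0};
    srv.sin_family = AF_INET;
    srv.sin_port = htons(SERVER_PORT);
    srv.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    if (connect(fd, (struct sockaddr *)&srv, sizeof srv) == 0) {
        const char msg[] = "hello\n";
        send(fd, msg, sizeof msg - 1, 0);   /* stand-in for the real load traffic */
    } else {
        perror("connect");
    }
    close(fd);
    return NULL;
}

int main(void)
{
    pthread_t tids[NUM_THREADS];
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&tids[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(tids[i], NULL);
    return 0;
}
```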
I am writing a simple instant messenger program in C on Linux.
Right now I have a program that binds a socket to a port on the local machine, and listens for text data being sent by another program that connected to my local machine IP and port.
Well, I can have this client send text data to my program and have it displayed via stdout on my local machine; however, I cannot work out a way to send data back to the client machine, because my program is busy listening for and displaying the text sent by the client machine.
How would I go about either creating a new process (one that listens for and displays the text sent to it by the client machine, then passes that text to the other program's stdout, while the other program takes care of sending stdin to the client machine), or creating two programs that do the separate jobs (sending, receiving, and displaying) and send the appropriate data to one another?
Sorry if that is weirdly worded; I will clarify if need be. I looked into exec, execve, fork, etc., but I am confused as to whether this is the appropriate path to look into, or if there is a simpler way that I am missing.
Any help would be greatly appreciated, Thank you.
EDIT: In retrospect, I figured that this would be much easier accomplished with 2 separate programs. One, the IM server, and the others, the IM clients.
The IM clients would connect to the IM server program and send whatever text they wanted to the IM server. Then the IM server would just record the data sent to it in a buffer/file, with the names/IPs of the clients attached to the text each client sent, and send that text (in the format name:text) to each client that is connected.
This would remove the need for complicated inter-process communication over stdin and stdout, and instead use a simple client/server model, with the client programs displaying text sent to them by the server via stdout and using stdin to send text to the server.
With this said, I am still interested in someone answering my original question: for science. Thank you all for reading, and hopefully someone will benefit from my mental brainstorming, or whatever answers come from the community.
However, I cannot program a way to send data back to the client machine, because my program is busy listening and displaying the text sent by the client machine.
The same socket that accept() returned from a listening socket can be used for both sending and receiving data. So your socket is never "busy" just because you're reading from it; you can write back on the same socket.
If you need to both read and write concurrently, share the socket returned by accept() across two different threads. Since the networking stack uses separate buffers for sending and receiving on the socket, a dedicated thread for reading and another dedicated thread for writing are safe without the use of mutexes (see the sketch below).
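A minimal sketch of that arrangement, assuming `connfd` is the socket returned by accept(): one thread only receives and prints, while the calling thread only reads stdin and sends.

```c
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Reader thread: one direction only, so no mutex is needed. */
static void *reader(void *arg)
{
    int fd = *(int *)arg;
    char buf[512];
    ssize_t n;
    while ((n = recv(fd, buf, sizeof buf - 1, 0)) > 0) {
        buf[n] = '\0';
        fputs(buf, stdout);                 /* display what the peer sent */
    }
    return NULL;
}

void chat_on(int connfd)                    /* connfd: socket returned by accept() */
{
    pthread_t tid;
    pthread_create(&tid, NULL, reader, &connfd);

    char line[512];
    while (fgets(line, sizeof line, stdin) != NULL)   /* our side comes from stdin */
        send(connfd, line, strlen(line), 0);          /* written on the same socket */

    shutdown(connfd, SHUT_WR);              /* done sending; the reader drains the rest */
    pthread_join(tid, NULL);
}
```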
I would go with fork(): create a child process, and now you have two processes that share the connected socket and can do two different things with it, so one can receive while the other sends. I have no personal experience with coding a client/server like this yet, but that would be my first stab at solving your issue...
As #bdonlan mentioned in a comment, you definitely need a multiplexing call like select or preferably poll (or related syscalls like pselect, ppoll ...). These multiplexing calls are the primitive to wait on several channels at once (with pselect and ppoll able to atomically wait for both I/O events and signals). Read also the select tutorial man page. Of course, you can wait for several file descriptors, and you can wait for both reading & writing abilities (even on the same socket, if needed), in the same select or poll syscall.
All event-based loops and frameworks are using these multiplexing calls (like poll or select). You could also use libevent, or even (particularly when coding a graphical user interface application) some GUI toolkit like Gtk or Qt, which are all based around a central event loop.
I don't think that having a multi-process or multi-threaded application is useful in your case. You just need some event loop.
You might also ask to get a SIGIO signal when data arrives on your socket, using fcntl with F_SETOWN, but this is not very useful in your case. With that approach you usually also want your socket to be non-blocking.
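To make the event-loop suggestion concrete, here is a small poll()-based sketch for the instant-messenger case, assuming `connfd` is an already-connected socket; it multiplexes stdin and the socket in a single thread.

```c
#include <poll.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Sketch of the single-threaded event loop: wait on stdin and the connected
 * socket at the same time, so the program is never "busy" on just one of them. */
void im_loop(int connfd)                       /* connfd: an already-connected socket */
{
    struct pollfd fds[2] = {
        { .fd = 0,      .events = POLLIN },    /* stdin: text typed by the local user */
        { .fd = connfd, .events = POLLIN },    /* socket: text sent by the peer */
    };
    char buf[512];

    for (;;) {
        if (poll(fds, 2, -1) < 0) {            /* block until either fd is readable */
            perror("poll");
            return;
        }
        if (fds[0].revents & POLLIN) {         /* local input -> send to peer */
            ssize_t n = read(0, buf, sizeof buf);
            if (n <= 0)
                return;
            send(connfd, buf, (size_t)n, 0);
        }
        if (fds[1].revents & (POLLIN | POLLHUP)) {   /* peer data -> display */
            ssize_t n = recv(connfd, buf, sizeof buf, 0);
            if (n <= 0)
                return;                        /* peer closed the connection */
            fwrite(buf, 1, (size_t)n, stdout);
            fflush(stdout);
        }
    }
}
```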