I'm making a server/client program in C (Linux). I have 4 programs in a folder: a clientUNIX, a clientTCP, a serverUNIX and a serverTCP. They all work flawlessly. Now my goal is to make a server that supports both clients.
The easiest way of doing this, for me, was to start a new program (serverTCPUNIX) that does the following:
In main(), create a thread to handle TCP clients and another thread to handle UNIX clients.
Is there a better way of achieving this? Because this way, I'd have 2 threads looping through clients. I want to know if I can have only 1 thread and 1 loop that supports both types of clients.
Thanks.
Maybe you could switch the server's listening sockets to non-blocking mode and then use select() to wait for either of them to receive a connection, handle it as needed (for example by creating a thread to handle the client's request), and then go back to the same select() to wait for the next incoming connection.
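A rough sketch of that single-loop approach, with one select() watching both listening sockets (the helper names, port, and socket path here are illustrative, not from the question):

```c
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <netinet/in.h>
#include <sys/select.h>

/* Create a listening TCP socket; port 0 lets the kernel pick one. */
int make_tcp_listener(unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;
    struct sockaddr_in a = {0};
    a.sin_family = AF_INET;
    a.sin_addr.s_addr = htonl(INADDR_ANY);
    a.sin_port = htons(port);
    if (bind(fd, (struct sockaddr *)&a, sizeof a) < 0 ||
        listen(fd, 16) < 0) { close(fd); return -1; }
    return fd;
}

/* Create a listening UNIX-domain socket at the given path. */
int make_unix_listener(const char *path)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) return -1;
    struct sockaddr_un a = {0};
    a.sun_family = AF_UNIX;
    strncpy(a.sun_path, path, sizeof a.sun_path - 1);
    unlink(path);                       /* remove a stale socket file */
    if (bind(fd, (struct sockaddr *)&a, sizeof a) < 0 ||
        listen(fd, 16) < 0) { close(fd); return -1; }
    return fd;
}

/* One loop serving both listeners: select() wakes when either is ready. */
void serve_both(int tcp_fd, int unix_fd)
{
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(tcp_fd, &rfds);
        FD_SET(unix_fd, &rfds);
        int maxfd = tcp_fd > unix_fd ? tcp_fd : unix_fd;
        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0) break;
        if (FD_ISSET(tcp_fd, &rfds)) {
            int c = accept(tcp_fd, NULL, NULL);
            if (c >= 0) { /* handle the TCP client ... */ close(c); }
        }
        if (FD_ISSET(unix_fd, &rfds)) {
            int c = accept(unix_fd, NULL, NULL);
            if (c >= 0) { /* handle the UNIX client ... */ close(c); }
        }
    }
}
```

Since accept() on a socket that select() reported readable returns immediately, the loop never blocks on the wrong listener.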
I have a program that needs to:
Handle 20 connections. My program will act as client in every connection, each client connecting to a different server.
Once connected my client should send a request to the server every second and wait for a response. If no request is sent within 9 seconds, the server will time out the client.
It is unacceptable for one connection to cause problems for the rest of the connections.
I do not have access to threads and I do not have access to non-blocking sockets. I have a single-threaded program with blocking sockets.
Edit: The reason I cannot use threads and non-blocking sockets is that I am on a non-standard system. I have a single RTOS (Real-Time Operating System) task available.
To solve this, use of select is necessary but I am not sure if it is sufficient.
Initially I connect to all the servers. But select can only be used to see whether a read or write will block, not whether a connect will.
So suppose I have connected to 2 servers and both are waiting to be served; if the 3rd connect does not work, it will block, causing the first 2 connections to time out as well.
Can this be solved?
I think the connection issue can be solved by setting a timeout for the connect operation, so that it fails fast enough. Of course that will limit you if the network really is working but you have a very long (slow) path to some of the servers. That's bad design, but your requirements are pretty harsh.
See this answer for details on connection-timeouts.
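For example, on Linux a blocking connect() honours the SO_SNDTIMEO socket option, so one way to make it fail fast is a small helper like this (the name and timeout are illustrative; on other systems you may need the non-blocking connect + select() technique from the linked answer):

```c
#include <unistd.h>
#include <sys/socket.h>
#include <sys/time.h>

/* Bound how long a blocking connect() may take.
   On Linux, connect() honours SO_SNDTIMEO; portability of this
   behaviour to other systems is not guaranteed. */
int set_connect_timeout(int fd, int seconds)
{
    struct timeval tv = { .tv_sec = seconds, .tv_usec = 0 };
    return setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof tv);
}

/* usage sketch:
   int fd = socket(AF_INET, SOCK_STREAM, 0);
   set_connect_timeout(fd, 3);
   if (connect(fd, addr, addrlen) < 0)
       ... fails within ~3 seconds instead of minutes ...
*/
```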
It seems you need to isolate the connections. Well, if you cannot use threads you can always resort to good-old-processes.
Spawn each client by forking your server process and use traditional IPC mechanisms if communication between them is required.
If you cannot use a multi-process approach either, I'm afraid you'll have a hard time.
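A minimal sketch of that fork-per-connection idea (the echo handler is just a placeholder for real per-client work):

```c
#include <unistd.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/wait.h>

/* Placeholder per-client work: echo until the peer closes. */
static void handle_client(int fd)
{
    char buf[256];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        if (write(fd, buf, (size_t)n) < 0) break;
}

/* Give each connection its own process, so a stall in one client
   cannot block the others.  Returns the child's pid (or -1). */
pid_t spawn_handler(int client_fd)
{
    pid_t pid = fork();
    if (pid == 0) {              /* child: serve this client, then exit */
        handle_client(client_fd);
        close(client_fd);
        _exit(0);
    }
    close(client_fd);            /* parent: drop its copy of the fd */
    return pid;
}
```

The parent only ever touches its listening socket; everything blocking happens in children, which is exactly the isolation the question asks for.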
I want two functionalities to be implemented on my udp server application.
Creating thread that continuously receives data coming from any client.
Creating a thread that continuously sends data on the server socket after a specific time period and waits for a reply from the client. (I implemented this so that whenever a client goes down, no data comes back from it and the server knows the client is down.)
Now, the problem I am facing is that since the two threads share the same connected socket, whenever both threads try to access it simultaneously, a deadlock occurs.
One of the solutions I found was to create two sockets: one that continuously receives data, and another for sending data from the server from time to time and waiting for the clients' responses. But since a server socket must be bind()ed, and I have already bind()ed my socket to INADDR_ANY, how would I create a separate socket for sending data from the server and waiting for replies from clients?
Please help me with this complication.
Also do let me know if there is some other better way of its implementation.
Thanks in advance :)
You will have to use non-blocking socket functions and a mutex to ensure no two threads access the socket at once.
A single thread may, however, be enough, if you use non-blocking functions. Using many threads will probably not improve performance, but may make the code more readable.
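If a single thread is enough, the usual shape is one loop around select() with a timeout: receive when data arrives, send the periodic probe when the timeout expires. A sketch, assuming a one-second interval (helper name is illustrative):

```c
#include <unistd.h>
#include <sys/select.h>
#include <sys/time.h>

/* Wait up to timeout_ms for fd to become readable.
   Returns 1 if readable, 0 on timeout, -1 on error. */
int wait_readable(int fd, int timeout_ms)
{
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    struct timeval tv = { timeout_ms / 1000, (timeout_ms % 1000) * 1000 };
    return select(fd + 1, &rfds, NULL, NULL, &tv);
}

/* Sketch of the single-threaded loop (sock is the one connected socket):

   for (;;) {
       int r = wait_readable(sock, 1000);
       if (r > 0)       recv(sock, buf, sizeof buf, 0);   // client data
       else if (r == 0) send(sock, "ping", 4, 0);         // periodic probe
       else             break;                            // error
   }
*/
```

With one thread there is no concurrent access to the socket, so the locking problem disappears entirely.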
I have two questions regarding using sockets for client server communication. Assume there is only 1 client in both the cases.
1) I know that we can send and receive data between client and server using a single socket. But in that case, what will happen when both the server and the client try to send the data at the same time?
2) Which of these is the best model?
i) Using single thread, single socket for sending and receiving
ii) Using 2 threads(one for sending and one for receiving), single socket
iii) Using 2 sockets and 2 threads, one for sending and one for receiving.
The connection is full-duplex, meaning that sends and receives can happen at the same time. So in answer to question one, both client and server will be able to send/read data from their socket simultaneously.
In terms of which "model" is best, it depends on your application and what you're trying to achieve. Incidentally, you don't need to multi-thread. You could:
Multi-process (fork)
Use non-blocking sockets (select/poll)
Use asynchronous notification (signals)
All of which have pros and cons.
For question number one, nothing special will happen. TCP is fully duplex, both sides of a connection can send simultaneously.
And as there is no problem with sending/receiving simultaneously, the first alternative in your second question is going to be the simplest.
In that scenario you don't need threads. The sockets themselves buffer incoming data until you read it from the file descriptor; more precisely, there are multiple levels of buffering, starting at the hardware. You will not miss data because you were writing at the same time; it simply waits until you next read from the socket's file descriptor.
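A small demonstration of that buffering, using a socketpair() so it runs without a network: both ends write before either reads, and nothing is lost.

```c
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>

/* Both ends send before either reads: the kernel buffers each
   direction independently, so neither write is lost.
   Returns 1 on success, 0 on mismatch, -1 on error. */
int full_duplex_demo(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) return -1;

    write(sv[0], "from A", 6);      /* A sends...        */
    write(sv[1], "from B", 6);      /* ...while B sends  */

    char a[16] = {0}, b[16] = {0};
    read(sv[0], a, sizeof a);       /* A reads B's data  */
    read(sv[1], b, sizeof b);       /* B reads A's data  */

    close(sv[0]);
    close(sv[1]);
    return strcmp(a, "from B") == 0 && strcmp(b, "from A") == 0;
}
```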
There is no inherent need for multithreading if you want to poll at multiple sockets.
All you really have to do is use select().
To achieve this you define an fd_set (file-descriptor set) and add to it all the sockets you want to poll. You hand this set to select(), which returns once any of the file descriptors has pending data, leaving the set reduced to the ready descriptors.
See the man pages for select() and fd_set, and a select() tutorial.
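A sketch of that pattern, assuming a hypothetical poll_fds() helper that checks an array of descriptors with one select() call:

```c
#include <unistd.h>
#include <sys/select.h>
#include <sys/time.h>

/* Fill ready[] with 1 for each fd in fds[] that has pending data.
   Returns the number of ready descriptors, or -1 on error.
   timeout_ms bounds the wait. */
int poll_fds(const int *fds, int n, int *ready, int timeout_ms)
{
    fd_set rfds;
    FD_ZERO(&rfds);
    int maxfd = -1;
    for (int i = 0; i < n; i++) {
        FD_SET(fds[i], &rfds);          /* add every socket to the set */
        if (fds[i] > maxfd) maxfd = fds[i];
    }
    struct timeval tv = { timeout_ms / 1000, (timeout_ms % 1000) * 1000 };
    int r = select(maxfd + 1, &rfds, NULL, NULL, &tv);
    if (r < 0) return -1;
    for (int i = 0; i < n; i++)         /* report which ones are ready */
        ready[i] = FD_ISSET(fds[i], &rfds) ? 1 : 0;
    return r;
}
```

Note that select() modifies the set in place, so it must be rebuilt before every call, as the loop above does.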
I am in the middle of a multi-threaded TCP server design using the Berkeley socket API under Linux, in system-independent C. The server has to perform I/O multiplexing, as it is a centralized controller that manages the clients (which maintain a persistent connection with the server forever, unless the machine a client is running on fails). The server needs to handle a minimum of 500 clients.
I have a 16-core machine. What I want is to spawn 16 threads (one per core) plus a main thread. The main thread will listen() for connections and dispatch each pending connection to a thread, which will then call accept() and use the select() syscall to perform I/O multiplexing. Now the problem is: how do I know when to dispatch a thread to call accept()? I mean, how do I find out in the main thread that there is a connection pending at the listening socket, so that I can assign a thread to handle it? All help much appreciated.
Thanks.
The listen() function call prepares a socket to accept incoming connections. You then use select() on that socket and get a notification that a new connection has arrived. You then call accept() on the server socket and a new socket descriptor will be returned. If you like, you can then pass that descriptor on to your thread.
What I would do is have a single thread for accepting connections and receiving data which then dispatches the data to a queue as a work item for processing.
Note that if each of your 16 threads is going to be running select (or poll, or whatever) anyway, there is no problem with them all adding the server socket to their select sets.
More than one may wake when the server socket has an incoming connection, but only one will successfully call accept(), so it should work.
Pro: easy to code.
Con:
a naive implementation doesn't balance load (it would need e.g. global stats on the number of accepted sockets handled by each thread, with high-load threads removing the server socket from their select sets)
thundering herd behaviour could be problematic at high accept rates
epoll or aio/asio. I suspect you got no replies to your earlier post because you didn't specify Linux when you asked for a scalable high-performance solution. Asynchronous solutions on different OSes are implemented with substantial kernel support, and Linux aio, Windows IOCP etc. are different enough that 'system independent' does not really apply - nobody could give you an answer.
Now that you have narrowed the OS down to linux, look up the appropriate asynchronous solutions.
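On Linux, the readiness-notification part of such a design is usually done with epoll; a minimal sketch (helper names are illustrative):

```c
#include <unistd.h>
#include <sys/epoll.h>

/* Create an epoll instance watching fd for readability.
   Returns the epoll descriptor, or -1 on error. */
int epoll_watch(int fd)
{
    int ep = epoll_create1(0);
    if (ep < 0) return -1;
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
    if (epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev) < 0) {
        close(ep);
        return -1;
    }
    return ep;
}

/* Wait up to timeout_ms for an event; returns the ready fd, or -1
   on timeout or error. */
int epoll_next(int ep, int timeout_ms)
{
    struct epoll_event ev;
    int n = epoll_wait(ep, &ev, 1, timeout_ms);
    return n == 1 ? ev.data.fd : -1;
}
```

Unlike select(), the interest set is registered once with epoll_ctl() rather than rebuilt on every call, which is what makes it scale to hundreds of descriptors.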
Okay I'm brand new to socket programming and my program is not behaving like I'd expect it to. In all the examples that I see of socket programming they use accept() and all the code after assumes that a connection has been made.
But my accept() is called as soon as I start the server. Is this supposed to happen? Or is the server supposed to wait for a connection before executing the rest of the program?
EDIT: Oops I forgot to mention it is a TCP connection.
I think this is what you're after.
http://www.sockets.com/winsock.htm#Accept
The main concept in Winsock programming is that you're working with either blocking or non-blocking sockets. Most of the time, if you're using blocking sockets, you can query the socket's receive set to see whether a call would block.
For starting off, UDP is easier, considering it's a datagram protocol; TCP, on the other hand, is a streaming protocol. With UDP it's easier to think in terms of discrete blocks of data that are sent and received.
For a server, you:
Create the socket - socket().
Bind it to an address - bind().
Mark it as listening - listen().
Then enter a loop in which you:
Accept connection attempts - accept()
Process them
It is not clear from your description whether you are doing all those steps.
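Those steps, in order, might look like this (note that listen() is called once, while accept() sits in the loop; names and the placeholder processing are illustrative):

```c
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Steps 1-3: socket(), bind(), listen().  Port 0 = any free port. */
int setup_listener(unsigned short port)
{
    int sfd = socket(AF_INET, SOCK_STREAM, 0);            /* 1. create */
    if (sfd < 0) return -1;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    if (bind(sfd, (struct sockaddr *)&addr, sizeof addr) < 0   /* 2. bind */
        || listen(sfd, 8) < 0) {                               /* 3. listen */
        close(sfd);
        return -1;
    }
    return sfd;
}

/* Step 4: the accept-and-process loop. */
void accept_loop(int sfd)
{
    for (;;) {
        int cfd = accept(sfd, NULL, NULL);  /* blocks until a client connects */
        if (cfd < 0) break;
        /* ... process the request on cfd ... */
        close(cfd);
    }
}
```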
There are multiple options for the 'process them' phase, depending on your design:
a single-threaded, single process that handles one request before accepting the next
a multi-threaded single process, with one thread accepting requests and creating other threads to do the processing (while that thread waits for the next incoming connection)
a forking process, with the child handling the new request while the parent goes back to listening for the next one
You are supposed to enter your acceptance loop after you have started listening for connections. Use select() to detect when a pending client connection is ready to be accepted, then call accept() to accept it.
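A sketch of that select()-then-accept() pattern (the helper name and timeout handling are illustrative):

```c
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/select.h>
#include <sys/time.h>

/* Wait up to timeout_ms for a pending connection on listen_fd,
   then accept it.  Returns the client fd, or -1 on timeout/error. */
int accept_when_ready(int listen_fd, int timeout_ms)
{
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(listen_fd, &rfds);
    struct timeval tv = { timeout_ms / 1000, (timeout_ms % 1000) * 1000 };
    if (select(listen_fd + 1, &rfds, NULL, NULL, &tv) <= 0)
        return -1;                    /* timed out, or select() failed */
    return accept(listen_fd, NULL, NULL);   /* will not block now */
}
```

Because select() has already reported the listening socket readable, the accept() that follows returns immediately instead of blocking the loop.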