Server:

    socket()
    bind()
    listen()
    for (;;) {
        select()
        if it is listenfd {
            accept()
            add to fd_set
        } else {
            add task to thread_pool work queue
            threadpool_add(thread_routine)
        }
    }

    thread_routine() {
        get connection fd
        read()
        write()
        close(connection fd)
    }
This design has a problem: while select() waits for data on a connection fd, another thread may close() that fd. select() then returns, and read(socket_fd) fails with EBADF. What is the right design?
It's basically okay. The mistake is in having thread_routine call close on the socket. It is never okay to destroy a resource while another thread is, or might be, using it. If this is TCP, a better option would be to call shutdown on the socket.
Perhaps this would be a better way to design your application:
One listen-thread accepts connections (i.e. only selects the listen fd)
For each accepted connection, immediately create a new thread or hand the fd to an existing idle thread
Select the client fds in their own threads for input (not in the listener thread)
This is a more intuitive design and should eliminate your problem. You might even be able to use blocking IO (without select) in the client threads, depending on the protocol used.
I am new to TCP client/server programming. I want to develop an application in C that authenticates clients and receives their data. I know I need to use threads to handle multiple clients, but I am unsure how to call each function on the server side from a thread, and whether I need to create more threads in the server (like worker threads for each function). My server has a lot of functions, like fun1(), fun2(), fun3(), fun4(), to handle the client data. Is there any problem or delay when I use threads? When multiple clients connect within the same second, how does the server handle that? My logic looks like this:
server_fun()
{
    // thread function calling fun1()
}

void *fun1(void *arg)
{
    fun2();
    pthread_exit((void *)xx);
}

fun2()
{
    fun3();
}

fun3()
{
}
When you are using C you have to use the function accept() for incoming connections. accept() is a blocking function, so it waits until a connection is established. Its return value is a new socket for that connection.
So your next statement after accept() should be the creation of a thread, with that socket as its input parameter. In your thread you can call your functions fun1(), fun2(), ...
Of course there is a little delay, but it's only milliseconds. When multiple clients connect at the same time, they are queued (in the listen backlog) until accept() picks them up.
The advantage of a parallel server over a serial server is that one client can't block your service.
Hi, I am figuring out a way to listen on a socket and connect to a different socket (on the same IP but a different port) simultaneously in the same program. When I listen on a socket, it blocks until it receives some message, so I am not able to listen and connect at the same time.
I actually need to simulate the exchange of LSP packets between different routers, so I am writing a program that simulates one router, to be run n (the number of routers) times.
Could anyone please share how to proceed?
If I understood your problem correctly, one of these might help.
Multi-thread or Multi-process
Basically, when you receive a client you can process that client in a separate thread or a new process. You will then still be able to receive incoming connections, and to connect to new clients from other sources, while the already-connected ones are being processed.
Pseudo code:

main() {
    while (1) {
        accept client
        /*
         * After the fork or creation of the new thread, the loop goes
         * back to accepting clients while connected clients are being
         * processed.
         */
        fork, or create a new thread, and pass the client socket to it
    }
}

processClient() {
    do whatever you need to do...
}
Select
Select is another good way of doing non-blocking sockets. select() basically waits until something arrives at the server (i.e. data, new client requests) and lets you process the events one by one. The server does not block on accept(); it waits in select() until there is something to process.
Pseudo code:
main() {
    while (1) {
        wait on select
        if new client {
            accept it
        }
        for client in clients {
            if client has data {
                process it
            }
        }
    }
}
epoll (if you're on Linux)
epoll is similar to select, only it can handle WAY more clients and it's a lot sexier.
Here's a repo that has each of those. My code isn't perfect here as it was a project that I did while in school.
https://github.com/koralarts/ServerBenchmarking
First, a little background to explain the motivation: I'm working on a very simple select()-based TCP "mirror proxy", that allows two firewalled clients to talk to each other indirectly. Both clients connect to this server, and as soon as both clients are connected, any TCP bytes sent to the server by client A is forwarded to client B, and vice-versa.
This more or less works, with one slight gotcha: if client A connects to the server and starts sending data before client B has connected, the server doesn't have anywhere to put the data. I don't want to buffer it up in RAM, since that could end up using a lot of RAM; and I don't want to just drop the data either, as client B might need it. So I go for the third option, which is to not select()-for-read-ready on client A's socket until client B has also connected. That way client A just blocks until everything is ready to go.
That more or less works too, but the side effect of not selecting-for-read-ready on client A's socket is that if client A decides to close his TCP connection to the server, the server doesn't get notified about that fact -- at least, not until client B comes along and the server finally selects-for-read-ready on client A's socket, reads any pending data, and then gets the socket-closed notification (i.e. recv() returning 0).
I'd prefer it if the server had some way of knowing (in a timely manner) when client A closed his TCP connection. Is there a way to know this? Polling would be acceptable in this case (e.g. I could have select() wake up once a minute and call IsSocketStillConnected(sock) on all sockets, if such a function existed).
If you want to check whether the socket has actually been closed, rather than just checking for data, you can add the MSG_PEEK flag to recv() to see whether data has arrived or whether you get 0 or an error.
/* handle readable on A */
if (B_is_not_connected) {
    char c;
    ssize_t x = recv(A_sock, &c, 1, MSG_PEEK);
    if (x > 0) {
        /* ...have data; leave it in the socket buffer until B connects */
    } else if (x == 0) {
        /* ...handle FIN from A */
    } else {
        /* ...handle errors */
    }
}
Even if A closes after sending some data, your proxy probably wants to forward that data to B before forwarding the FIN, so there is no point in learning that A has sent a FIN any sooner than after you have read all the data it sent.
A TCP connection isn't considered closed until after both sides send FIN. However, if A has forcibly shutdown its endpoint, you will not know that until after you attempt to send data on it, and receive an EPIPE (assuming you have suppressed SIGPIPE).
After reading about your mirror proxy a bit more: since this is a firewall-traversal application, it seems you actually need a small control protocol that lets you verify these peers are allowed to talk to each other. Once you have a control protocol, many solutions become available, but the one I would advocate is to have one of the connections describe itself as the server and the other describe itself as the client. Then you can reset the client's connection if there is no server present to take it. You can let servers wait for a client connection up to some timeout. A server should not initiate any data, and if it does so without a connected client, you can reset the server's connection. This eliminates the issue of buffering data for a dead connection.
It appears the answer to my question is "no, not unless you are willing and able to modify your TCP stack to get access to the necessary private socket-state information".
Since I'm not able to do that, my solution was to redesign the proxy server to always read data from all clients, and throw away any data that arrives from a client whose partner hasn't connected yet. This is non-optimal, since it means that the TCP streams going through the proxy no longer have the stream-like property of reliable in-order delivery that TCP-using programs expect, but it will suffice for my purpose.
For me the solution was to poll the socket status.
On Windows 10, the following code seemed to work (but equivalent implementations seem to exist for other systems):
WSAPOLLFD polledSocket;
polledSocket.fd = socketItf;
polledSocket.events = POLLRDNORM | POLLWRNORM;

if (WSAPoll(&polledSocket, 1, 0) > 0)
{
    if (polledSocket.revents & (POLLERR | POLLHUP))  // test, not &=
    {
        // socket closed
        return FALSE;
    }
}
I don't see the problem the way you see it. Let's say A connects to the server, sends some data, and closes; it does not need any message back. The server won't read A's data until B connects; once B does, the server reads socket A and sends the data to B. The first read returns the data A sent, and the second returns either 0 or -1; in either case the socket is closed, so the server closes B. If A sends a big chunk of data, A's send() will block until the server starts reading and consumes the buffer.
I would use a function with a select which returns 0, 1, 2, 11, 22 or -1, where:
0 = no data on either socket (timeout)
1 = A has data to read
2 = B has data to read
11 = socket A has an error (disconnected)
22 = socket B has an error (disconnected)
-1 = one or both sockets are not valid
int WhichSocket(int sd1, int sd2, int seconds, int microsecs) {
    fd_set sfds, efds;
    struct timeval timeout = {0, 0};
    int bigger;
    int ret;

    FD_ZERO(&sfds);
    FD_ZERO(&efds);          /* efds must be cleared too before FD_SET */
    FD_SET(sd1, &sfds);
    FD_SET(sd2, &sfds);
    FD_SET(sd1, &efds);
    FD_SET(sd2, &efds);
    timeout.tv_sec = seconds;
    timeout.tv_usec = microsecs;
    if (sd1 > sd2) bigger = sd1;
    else bigger = sd2;
    /* bigger+1 is required by Berkeley sockets; Microsoft ignores this param. */
    ret = select(bigger + 1, &sfds, NULL, &efds, &timeout);
    if (ret > 0) {
        if (FD_ISSET(sd1, &sfds)) return 1;   /* sd1 has data */
        if (FD_ISSET(sd2, &sfds)) return 2;   /* sd2 has data */
        if (FD_ISSET(sd1, &efds)) return 11;  /* sd1 has an error */
        if (FD_ISSET(sd2, &efds)) return 22;  /* sd2 has an error */
    }
    else if (ret < 0) return -1;  /* one of the sockets is not valid */
    return 0;  /* timeout */
}
Since Linux 2.6.17, you can poll/epoll for POLLRDHUP/EPOLLRDHUP. See epoll_ctl(2):
EPOLLRDHUP (since Linux 2.6.17)
Stream socket peer closed connection, or shut down writing half of connection. (This flag is especially useful for writing simple code to detect peer shutdown when using Edge Triggered monitoring.)
If your proxy must be a general-purpose proxy for any protocol, then you should also handle clients that send data and immediately call close after the send (one-way data transfer only).
So if client A sends data and closes the connection before the connection to B is opened, don't worry: just forward the data to B normally (once the connection to B is opened).
There is no need to implement special handling for this scenario.
Your proxy will detect the closed connection when:
read returns zero after the connection to B is opened and all pending data from A has been read, or
your program tries to send data (from B) to A.
You could check whether a socket is still connected by trying to write to its file descriptor. If the return value of the write is -1 with errno == EPIPE, you know that socket has been closed. For example:
int isSockStillConnected(int *fileDescriptors, int numFDs) {
    int i;
    ssize_t n;
    for (i = 0; i < numFDs; i++) {
        /* fileDescriptors[i], not fileDescriptors+i: write takes an fd,
         * not a pointer */
        n = write(fileDescriptors[i], "heartbeat", 9);
        if (n < 0) return -1;  /* covers EPIPE among other errors */
    }
    // made it here, must be okay
    return 0;
}
I have a small server-client application that doesn't do very much (a client connects to the server, sends a number through a pipe, and receives another number back).
But it only works with one connection at a time (while one client is connected, no other client has access to the server).
I want to make it possible for multiple clients to connect to the server at one time, and I plan to do this with worker threads.
Note:
#define CONNECT_NAMEDPIPE "\\\\.\\pipe\\ClientToServer"
Server:
HANDLE namedPipe = CreateNamedPipe(CONNECT_NAMEDPIPE,
                                   PIPE_ACCESS_DUPLEX,
                                   PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT,
                                   2,                   // max instances
                                   sizeof(MinMax),      // out buffer size
                                   sizeof(NumberList),  // in buffer size
                                   0,                   // timeout
                                   NULL);
if (namedPipe == INVALID_HANDLE_VALUE) {
    printf("Unable to create named pipe\r\nServer closing\r\n");
    printf("CreateNamedPipe failed, GLE=%d.\r\n", GetLastError());
} // Error: unable to create pipe
else {
    printf("Server created\r\n");
    printf("Awaiting connection\r\n");
    ConnectNamedPipe(namedPipe, NULL);
    // etc ...
}
So the server waits in ConnectNamedPipe until a client connects, and is then unavailable for any other connections.
If I'd like to enable multiple connections, how should I create the worker threads?
Should every connection attempt create a new pipe (with a new pipe name/path, since CONNECT_NAMEDPIPE can't be used for all)?
How do I know when someone else is trying to connect? Where should my threads be? I'm stuck.
I think Berkeley sockets are better suited for this. If you must go with pipes, something like this could work:
The client sends a connection request through the main named pipe to the control thread.
The server creates (or fetches from a pool) a worker thread that listens on another, unique pipe.
The control thread answers with the name of this new pipe.
The client closes the control pipe and sends real request data to the new pipe.
The worker thread reads the request, processes it, and sends back the response.
Meanwhile the control thread is ready to read another connection request from another client.