hi there, I'm working on a multithreaded TCP server in C and I have a little issue with it. Everything works fine: more than one client can connect to the server. But whenever a client writes "exit" (the condition on which a client disconnects from the server), the server shuts itself down as well, so communication with the other clients is lost. Logically, it should keep waiting for other clients even if some current clients disconnect. Here is part of main; the server sits in an endless loop waiting for clients. hsock is the server's socket descriptor and csock points to a client's.
int main() {
    ...
    while (1) {
        if (counter == 0)
            printf("waiting for a connection\n");
        csock = (int*)malloc(sizeof(int));
        if ((*csock = accept(hsock, (struct sockaddr*)&sadr, &addr_size)) != -1) {
            printf("---------------------\nReceived connection from %s\n", inet_ntoa(sadr.sin_addr));
            pthread_create(&thread_id, 0, &SocketHandler, (void*)csock);
            counter++;
        }
        else {
            fprintf(stderr, "Error accepting %d\n", errno);
        }
    } // end while
    ...
    return 0;
}
As you can see, whenever a client disconnects, the server should keep waiting for other clients. On the other hand, this is the last part of SocketHandler, which is the thread function.
void* SocketHandler(void* csock) {
    ...
    printf("Client disconnected\n");
    free(csock);
    return 0;
}
After the return 0 statement, isn't it supposed to go back to the while(1) loop in main? I will be glad if you can help, and thanks anyway.
Threads run asynchronously once created, which means that the main thread (the one doing the accept) continues looping and goes back to accepting new connections, whatever the child thread does.
Some advice:
if your thread can run autonomously, call pthread_detach after the create so the thread handles its own termination gracefully.
don't forget to close(csock) before ending the thread.
you don't necessarily need to allocate the int that holds the socket descriptor; you can pass it as the void * directly (but I guess you will be passing more information than just the socket in a structure in a future version of your code).
After return 0 statement isn't it necessary to return back to while(1)
loop in main.
No. The thread exits. It has nothing to do with what happens in other threads.
I don't see how you can expect anybody to help you with a networking problem that happens on client disconnect when you don't show any code that handles the networking to the client.
I started writing a C web server a while ago (Windows 8), but I tried using only threads by myself, without the select() option.
This is my main loop; I'm opening each new thread like this:
uintptr_t new_thread;
while (client_sock = accept(server->sock, (struct sockaddr *)&client_info, &size))
{
    if (client_sock <= 0) quit();
    printf("\n[***] : Got a connection from localhost on port %d\n", ntohs(client_info.sin_port));
    code = init_connection(client_sock);
    if (code)
    {
        new_thread = _beginthread(handle_connection, 0, ID++, client_sock);
        if (new_thread == -1)
        {
            fprintf(stderr, "Could not create thread for sending data: %d\n", GetLastError());
            closesocket(client_sock);
            quit();
        }
    }
    else
    {
        debug("Failed to init connection");
        closesocket(client_sock);
        debug("Connection to client ended");
    }
}
First of all, I would love to hear if I can make this code better.
Testing this program by trying to reach localhost from Chrome, I see that no more data is sent (after receiving one HTTP request).
My question is: what would be the best way for the program to act then? Close the thread, and when another request is made open a new one? If so, how do I close that thread? If not, when should I close it?
Normally, when implementing a server that forks separate processes, I would make the child process stay alive to serve a predefined number of requests (e.g. 100) and then kill itself. This reduces the overhead created by forking and, on the other hand, recovers from possible memory leaks or other problems in the process. Threads are lighter than processes, so it may make sense to close them faster.
I think you should compare the benefits and drawbacks: measure the overhead of thread creation and closing compared to keeping threads alive. In any case, you must make sure that there is a limit on the number of threads you have alive at one time.
For the Windows specifics of creating and closing threads, you could look at e.g. this response.
I am writing a two-daemon application: a client and a server. It is a very basic version of a distributed shell. Clients connect to the server, and the server issues a command that is propagated to every client.
I don't know how to structure the socket logic on the server side. I did some testing, and for now I accept connections in a loop and fork a child to process every incoming connection:
while (1) {
    clisockfd = accept(sockfd, (struct sockaddr *) &cliaddr, &clilen);
    if (clisockfd < 0) {
        log_err("err on opening client socket");
        exit(EXIT_FAILURE);
    }
    /* create a new child to process the connection */
    if ((pid = fork()) < 0) {
        log_err("err on forking, something is really broken!");
        exit(EXIT_FAILURE);
    }
    if (!pid) {
        /* here we are in the forked process, so we don't need sockfd */
        close(sockfd);
        /* function that handles the connection */
        handle_connection(clisockfd);
        exit(EXIT_FAILURE);
    } else {
        close(clisockfd);
    }
}
However, what I have now has some disadvantages: I can accept a connection, do something with it, and return to the main process (the forked process has to return, and then execution in the main process resumes). I would like to keep every socket fd somewhere (a list?) and be able to choose one of them (or all of them) and send to that fd a command that I want to issue on my client(s). I assume that I can't do it in the traditional accept->fork->return-to-main-process manner.
So it probably should look like:
a client connects -> the server sets up a new socket fd and saves it somewhere -> the server drops to a shell where I can choose one of the sockets and send it a command -> and somewhere in the whole process it should also wait for the next incoming client connections; but where?
Could someone give me an idea of what mechanisms I should use to create the logic I need? Maybe it would be better to issue the connection from server to client, not from client to server.
Regards,
Krzysztof
I assume that I cant do it in traditional accept->fork->return to main process manner.
You could, but it would be hard to design and maintain.
The best solution is to use select() (POSIX), epoll() (Linux), kqueue() (BSD) or I/O Completion Ports (Windows) depending on your platform.
There are good examples and explanations of select() in Beej's network programming guide.
I have a program that works as a simple TCP server; it accepts and serves as many as 4 clients. I wrote the following code snippet to achieve this goal. For clarity, I have replaced some macros with "magic numbers".
while (1) {
    int ret = 0;
    //
    // sock_ is the listening socket; rset_conn_ only contains sock_
    //
    if (select(sock_ + 1, &rset_conn_, NULL, NULL, &timeo_) <= 0) {
        ret = 0;
    }
    if (FD_ISSET(sock_, &rset_conn_)) {
        ret = 1;
    }
    //
    // because the rules by which select() clears file descriptors and updates
    // the timeout are complex, for simplicity reset sock_ and the timeout
    //
    FD_SET(sock_, &rset_conn_);
    timeo_.tv_sec = 0;
    timeo_.tv_usec = 500000;
    if (!ret) {
        continue;
    }
    //
    // csocks_ is a 4-integer array, used for storing the clients' sockets
    //
    for (int i = 0; i < 4; ++i) {
        if ((-1 == csocks_[i]) && (-1 != (csocks_[i] = accept(sock_, NULL, NULL)))) {
            break;
        }
    }
}
Now, when fewer than 4 clients connect, this code works normally. But when a 5th client connects, as you can see, it is never accepted in the for loop, which causes the following problem: in subsequent iterations of the while loop, select() keeps returning with the sock_ bit set in rset_conn_, precisely because I don't accept it!
I realize that I should not just ignore it (not accept) in the for loop, but how can I handle it properly, i.e. tell the kernel "I don't want to accept this client, so stop telling me this client wants to connect"?
Any help will be appreciated.
Update:
I cannot stop calling select(), because already-connected clients may disconnect, and then my program will have room for new clients. I need to keep listening on the socket to accept connections, not stop listening once it reaches some point.
Why are you asking a question you don't want the answer to? Stop calling select on the listening socket if you don't want to be told that there's an unaccepted connection.
I can not stop calling select() call, because the clients already connected may disconnect, and then my program will have room for new clients. I need to keep listening on a socket to accept connection, not stop listening when reaches one point.
You can stop calling select now. When a client disconnects, then start calling select again. Only ask questions you want the answers to.
There is no socket API to tell the kernel to ignore pending connections (well, on Windows, there is a callback in WSAAccept() for that, but that is a Microsoft-specific extension).
On most platforms, you have to either:
close() the listening socket when your array fills up, then reopen it when a client disconnects.
accept() the client but immediately close() it if the array is full.
stop calling select() and accept() while your array is full. Any clients that are in the backlog will time out eventually, unless your array fills up and you accept() a pending client before it times out.
You can remove the listening socket from the readFDs while there are four existing clients, and put it back when one drops off, but I don't see the point. Why only four clients? Why not remove the limit and eliminate the whole problem? Real servers are not built this way.
I have a socket programming problem. I am running the server and then it waits for the client. Once I run the client though, nothing happens, it just terminates and brings back the prompt. Basically it compiles alright but it doesn't run at all. It terminates as soon as I run it. This only happens when I use threads in the client code.
This is the code I'm using:
if (pthread_create(&threadID[i++], NULL, (void *)dostuff, (void *)(intptr_t)sock) != 0)
{
    perror("Thread create error");
}
On the other hand, if I type in simply
dostuff(sock);
The client program does execute. I need threading because I need to implement I/O multiplexing. Could you tell me how to stop the client from terminating when I use threads?
You'll need to wait for the thread to finish before exiting the program, for example using pthread_join
// do this before returning from main
pthread_join(threadID[i], NULL);
I'm trying to connect two machines, say machine A and machine B, and send a TCP message from A to B (one way). In the normal scenario this works fine. But if, while communication is running smoothly, the socket on B is closed, send() on A gets stuck forever, and it puts the process into a zombie state. The socket on machine A is in blocking mode. Below is the code that gets stuck forever.
if (send(txSock, &txSockbuf, sizeof(sockstruct), 0) == -1) {
    printf("Error in sending the socket Data\n");
}
else {
    printf("The SENT String is %s \n", sock_buf);
}
How do I find out if the socket on the other side is closed? What does send() return if the destination socket is closed? Would select() be helpful?
A process in the "zombie" state means that it has already exited, but its parent has not yet read its return code. What's probably happening is that your process is receiving a SIGPIPE signal (this is what you'll get by default when you write to a closed socket), your program has already terminated, but the zombie state hasn't yet been resolved.
This related question gives more information about SIGPIPE and how to handle it: SIGPIPE, Broken pipe