epoll not accepting some clients - C

I have implemented a server using epoll in what I believe to be the standard way; in fact, when I implemented it using the example from the epoll man page, I got the same behavior.
This leads me to believe that there must be a problem with my client, and that I'm making an assumption somewhere I shouldn't. The main method of my client forks n clients, which then connect to the server. What I'm seeing is that usually a subset of these clients don't trigger the epoll and never reach the
accept() call. The three-way handshake completes because there is a listening socket, so the client behaves as if it were accepted, but it never gets served, because the server doesn't know about it. I can't figure out why this is happening, and I haven't been able to find similar questions online. Thoughts?
Here's the relevant server code:
// wrapper which binds to port and exits on error
listenFD = tcpSocket(host, port);
SetNonblocking(listenFD);
// wrapper which listens and exits on error
tcpListen(listenFD);

epollFD = epoll_create(EPOLL_QUEUE_LEN);
if (epollFD == -1)
    errSystem("epoll_create");

// Add the server socket to the epoll event loop
event.events = EPOLLIN | EPOLLERR | EPOLLHUP | EPOLLET;
event.data.fd = listenFD;
if (epoll_ctl(epollFD, EPOLL_CTL_ADD, listenFD, &event) == -1)
    errSystem("epoll_ctl");

while (TRUE) {
    //struct epoll_event events[MAX_EVENTS];
    numFDs = epoll_wait(epollFD, events, EPOLL_QUEUE_LEN, -1);
    for (i = 0; i < numFDs; i++) {
        // Case 1: Error condition
        if (events[i].events & (EPOLLHUP | EPOLLERR)) {
            errMessage("epoll: EPOLLERR");
            Close(events[i].data.fd);
            printf("Closed connection to %d\n", events[i].data.fd);
            fflush(stdout);
            continue;
        }
        // Case 2: Server is receiving a connection request
        if (events[i].data.fd == listenFD) {
            // socklen_t
            clientLen = sizeof(client);
            newFD = Accept(listenFD, (SA *)&client, &clientLen);
            SetNonblocking(newFD);
            // Set receive low water mark to message size
            SetSockOpt(newFD, SOL_SOCKET, SO_RCVLOWAT, &lowWater, sizeof(lowWater));
            // Add the new socket descriptor to the epoll loop
            event.data.fd = newFD;
            event.events = EPOLLIN | EPOLLET;
            if (epoll_ctl(epollFD, EPOLL_CTL_ADD, newFD, &event) == -1)
                errSystem("epoll_ctl");
            printf("Connected to client on socket %d\n", newFD);
            // tell the client we're connected and ready
            // (this is an attempt to fix my issue. I'd rather not do this...)
            Writen(newFD, buffer, CLIENT_MESSAGE_SIZE);
            continue;
        }
        if (events[i].events & EPOLLIN) {
            // serve the client
        }
    }
}
And this is the client code. One instance of it works, but if I fork more than 5 or so (sometimes as few as 2), a number of them won't get accepted.
int Client(const char *host, const int port, const int timeLen, const int clientNum){
    long double delta;
    PTSTRUCTS ptstructs = (PTSTRUCTS) malloc(sizeof(TSTRUCTS));
    size_t result;

    stop = FALSE;
    cNum = clientNum;
    Signal(SIGINT, closeFD);
    Signal(SIGALRM, sendMessage);
    nsockets = 0;

    // wrapper which calls connect() and exits with message on error
    connectFD = tcpConnect(host, port);
    printf("%d connected to server:\n", clientNum);
    fflush(stdout);

    bzero(sbuf, CLIENT_MESSAGE_SIZE);
    // initialize client message
    strcpy(sbuf, CLIENT_MESSAGE);
    // get the start time
    getStartTime(ptstructs);
    getEndTime(ptstructs);
    while ((delta = getTimeDelta(ptstructs)) < timeLen) {
        // One or more clients blocks here forever
        if ((result = receiveMessage()) < CLIENT_MESSAGE_SIZE) {
            break;
        }
        //sendMessage();
        alarm(1);
        //delay(USER_DELAY);
        getEndTime(ptstructs);
    }
    stop = TRUE;
    Close(connectFD);
    getEndTime(ptstructs);
    delta = getTimeDelta(ptstructs);
    printf("Client %d served %ld bytes in %.6Lf seconds.\n", clientNum, byteCount, delta);
    fflush(stdout);
    // free heap memory
    free(ptstructs);
    return (1);
}
I should probably note that I'm seeing the same behavior if I don't set EPOLLET. I originally thought this might be a result of edge-triggered behavior, but nope.
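For what it's worth, the pattern the epoll(7) man page uses for an edge-triggered listening socket is to drain accept() in a loop until it returns EAGAIN, since one readiness notification can stand for several completed connections. A minimal sketch of that loop (it needs <errno.h>, and calls the raw accept() rather than the exiting wrapper so that EAGAIN isn't fatal):

// Case 2 rewritten: drain the accept queue completely.
if (events[i].data.fd == listenFD) {
    for (;;) {
        clientLen = sizeof(client);
        newFD = accept(listenFD, (SA *)&client, &clientLen);
        if (newFD == -1) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                break;                  // no more pending connections
            errSystem("accept");
        }
        SetNonblocking(newFD);
        event.data.fd = newFD;
        event.events = EPOLLIN | EPOLLET;
        if (epoll_ctl(epollFD, EPOLL_CTL_ADD, newFD, &event) == -1)
            errSystem("epoll_ctl");
    }
    continue;
}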

Is your backlog argument to listen() greater than the number of clients?
@some-programmer-dude, sorry for not commenting: a closed fd is removed from the epoll event set automatically. http://man7.org/linux/man-pages/man7/epoll.7.html
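If the backlog does turn out to be the issue, the simplest fix (assuming the tcpListen() wrapper exposes the backlog argument) is to pass the largest value the system allows:

// A small backlog can drop or delay handshakes when many clients
// connect at once; SOMAXCONN asks for the system maximum.
listen(listenFD, SOMAXCONN);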

Related

How do I use blocking-status of socket as a condition?

I am maintaining/developing an existing code base. We have a Raspberry Pi controlling a bunch of hardware, some of which is modular. The code, written in C (it might be C++), communicates over an IPv4 socket (using socket.h) with a GUI on Windows. I'm pretty sure we don't have multithreading implemented in the Raspberry Pi code, or this would be much easier.
Some of the modular hardware is not interacting with the code. Without the extra hardware, the Raspberry Pi sits there waiting until the GUI sends something. That's how we want it behaving.
The problem is, when we have this extra hardware connected, the Raspberry Pi also needs to run some code in response to one of its IO pins, which is tied to a button on the hardware.
I tried adding (various) conditions to a loop in main(), which is where I thought the code was idling. I always got either the hardware OR the GUI control working, but not both.
I eventually figured out that read(), which is called near the end of that loop, is blocking.
Now I'm trying to figure out how to execute one chunk of code (for the hardware control) while read() is blocking, and another when it stops.
Something like:
(Pseudocode)
read socket
while (blocking)
{
    check for hardware signal; if found, runTest();
}
{use result of read()}
The loop in main:
while(true)
{
    initPort();
    while(listening)
    {
        openPort();
        if (netOpen)
        {
            //puts("// Tester is connected /////////////////////////////////");
            loadCalFactors();
            loadCalResistors();
            loadExtConf(true);
            inOn(false);
            outOn(false);
            // Loop until client terminates connection.
            initPins();
            while(netOpen)
            {
                /*
                 * Moving the block to work().
                 */
//              oldI = i;
//              i = digitalRead(iTSW);
//
//              if((i ^ oldI) == 0 && oldI == 1) // TSW has been held on
//                  testing = true;
//              else if((i ^ oldI) == 1 && oldI == 0)
//                  testing = false;
//
//              if(testLoaded && usingTermCtrl && i && !testing)
//              {
//                  RunTp(false);
//              }
//              else
                work();
            }
        }
    }
}
Here's the code where it's blocking:
void work()
{
    char bs[1];
    int n;

    // socket Sockfd is blocking here, so we only briefly return to the
    // loop in main right after an action
    n = read(Sockfd, bs, 1);
    if (n > 0)
        doAct(bs[0]);
    else
        checkForSOT();
}
I already tried putting all my hardware checks in checkForSOT() (which was previously empty), but it didn't do any better than the loop in main.
And here is where the socket is being set up:
void initPort()
{
    struct sockaddr_in serv_addr;

    printf("Initializing Port %d\n", HOST_PORT);
    listening = false;
    sockListen = socket(AF_INET, SOCK_STREAM, 0);
    printf("sockListen=%d\n", sockListen);
    if (sockListen < 0)
    {
        puts("Error opening socket");
        delay(1000);
    }
    else
    {
        bzero((char *)&serv_addr, sizeof(serv_addr));
        serv_addr.sin_family = AF_INET;
        serv_addr.sin_addr.s_addr = INADDR_ANY;
        serv_addr.sin_port = htons(HOST_PORT);
        if (bind(sockListen, (struct sockaddr *) &serv_addr,
                sizeof(serv_addr)) < 0)
        {
            puts("Error on binding");
            delay(1000);
        }
        else
        {
            listen(sockListen, 1);
            listening = true;
        }
    }
}
void openPort()
{
    int clilen;
    int newsockfd;

    printf("\nTester waiting for connection on socket %d\n", sockListen);
    clilen = sizeof(cli_addr);
    // infinite wait on a connection
    if ((newsockfd = accept(sockListen,
            (struct sockaddr *) &cli_addr, (socklen_t *) &clilen)) < 0)
        puts("Error on accept");
    else
    {
        if (DEBUG) debugMode = true;
        netOpen = true;
        Sockfd = newsockfd;
        sendVersion();
        netWrite("Connected\n");
        sendExtConfMessage();
        sendExtConf();
    }
}
Most of the posts I found on blocking sockets mention poll() and select() (along with their p- and e-upgrades, pselect()/ppoll() and epoll), but don't go into clear enough detail on how they work for me to figure out whether they are what I need.
I'm also not sure what it would take to change the socket to non-blocking while maintaining the same behavior.
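For reference, making an existing socket non-blocking is a two-line fcntl() call; read() then returns -1 with errno set to EAGAIN/EWOULDBLOCK instead of waiting, which every caller has to be prepared to handle:

#include <fcntl.h>

/* Switch the connected socket to non-blocking mode. */
int flags = fcntl(Sockfd, F_GETFL, 0);
fcntl(Sockfd, F_SETFL, flags | O_NONBLOCK);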
Note: I am still trying to read through and wrap my head around Beej's guide to network programming, so if there's a specific section of that that'll help me, please be specific.
Also, if anyone knows of (or could write) a good guide to setting up remote debugging of the Raspberry Pi through NetBeans 12 (on Windows 10), that would be a HUGE help!
poll() isn't very hard. Make a loop that runs at least every 5 msec.

while (running) {
    // Initialize, then wait for something to happen
    struct pollfd fds = {fd, POLLIN, 0};
    int rc = poll(&fds, 1, 5);            // wait for fd or a 5 msec timeout
    if (rc < 0) {
        perror("poll");
        exit(1);
    } else if (rc > 0) {
        if (fds.revents & POLLIN) {
            char buf[256];                // declared here for the sketch
            recv(fd, buf, sizeof buf, 0); // fd is ready to read
        }
    }
    // check button here
}
Obviously there should be more checking and cleanup on error conditions.
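Applied to the question's code, work() could poll Sockfd with a short timeout and fall through to the hardware check whenever nothing has arrived; a sketch under that assumption, reusing Sockfd, doAct(), netOpen, and checkForSOT() from above:

#include <poll.h>

void work()
{
    char bs[1];
    struct pollfd pfd = { Sockfd, POLLIN, 0 };

    int rc = poll(&pfd, 1, 5);          /* wait at most 5 ms for GUI data */
    if (rc > 0 && (pfd.revents & POLLIN)) {
        int n = read(Sockfd, bs, 1);    /* won't block: poll says it's ready */
        if (n > 0)
            doAct(bs[0]);
        else
            netOpen = false;            /* 0 or -1: peer closed or error */
    }
    /* Timed out or no socket data: service the hardware button. */
    checkForSOT();
}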

TCP Server workers with kqueue

I recently did some testing with kernel events and came up with the following questions:
Does it make sense to use a kernel event for accepting sockets? My testing showed that I was only able to handle one accept at a time (even if the eventlist array is bigger), which makes sense to me, because .ident == sockfd is only true for one socket.
I thought the main use of kevent is to read from multiple sockets at once. Is that true?
Is this how a TCP server is done with a kqueue implementation?
Listening Thread (without kqueue)
Accepts new connections and adds FD to a worker kqueue.
QUESTION: Is this even possible? My testing showed yes, but is it guaranteed that the worker thread will be aware of the changes, and is kevent really thread-safe?
Worker thread (with kqueue)
Waits on reads on file descriptors added from the listening thread.
QUESTION: How many sockets at once would it make sense to check for updates?
Thanks
This is not really an answer, but I made a little server program with kqueue that illustrates the problem:
#include <stdio.h>      // fprintf
#include <sys/event.h>  // kqueue
#include <netdb.h>      // addrinfo
#include <arpa/inet.h>  // AF_INET
#include <sys/socket.h> // socket
#include <assert.h>     // assert
#include <string.h>     // bzero
#include <stdbool.h>    // bool
#include <unistd.h>     // close

int main(int argc, const char * argv[])
{
    /* Initialize server socket */
    struct addrinfo hints, *res;
    int sockfd;

    bzero(&hints, sizeof(hints));
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    assert(getaddrinfo("localhost", "9090", &hints, &res) == 0);
    sockfd = socket(AF_INET, SOCK_STREAM, res->ai_protocol);
    assert(sockfd > 0);
    {
        unsigned opt = 1;
        assert(setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt)) == 0);
#ifdef SO_REUSEPORT
        assert(setsockopt(sockfd, SOL_SOCKET, SO_REUSEPORT, &opt, sizeof(opt)) == 0);
#endif
    }
    assert(bind(sockfd, res->ai_addr, res->ai_addrlen) == 0);
    freeaddrinfo(res);

    /* Start to listen */
    (void)listen(sockfd, 5);
    {
        struct kevent kevSet;     /* kevent set */
        struct kevent events[20]; /* events */
        int nevents;              /* kevent() returns an int count */
        int kq;                   /* kq */
        char buf[20];             /* buffer */
        ssize_t readlen;          /* length */

        kevSet.data   = 5; // backlog is set to 5
        kevSet.fflags = 0;
        kevSet.filter = EVFILT_READ;
        kevSet.flags  = EV_ADD;
        kevSet.ident  = sockfd;
        kevSet.udata  = NULL;
        assert((kq = kqueue()) > 0);
        /* Update kqueue */
        assert(kevent(kq, &kevSet, 1, NULL, 0, NULL) == 0);

        /* Enter loop */
        while (true) {
            /* Wait for events to happen */
            nevents = kevent(kq, NULL, 0, events, 20, NULL);
            assert(nevents >= 0);
            fprintf(stderr, "Got %d events to handle...\n", nevents);
            for (int i = 0; i < nevents; ++i) {
                struct kevent event = events[i];
                int clientfd = (int)event.ident;
                /* Handle disconnect */
                if (event.flags & EV_EOF) {
                    /* Simply close socket */
                    close(clientfd);
                    fprintf(stderr, "A client has left the server...\n");
                } else if (clientfd == sockfd) {
                    int nclientfd = accept(sockfd, NULL, NULL);
                    assert(nclientfd > 0);
                    /* Add to event list */
                    kevSet.data   = 0;
                    kevSet.fflags = 0;
                    kevSet.filter = EVFILT_READ;
                    kevSet.flags  = EV_ADD;
                    kevSet.ident  = nclientfd;
                    kevSet.udata  = NULL;
                    assert(kevent(kq, &kevSet, 1, NULL, 0, NULL) == 0);
                    fprintf(stderr, "A new client connected to the server...\n");
                    (void)write(nclientfd, "Welcome to this server!\n", 24);
                } else if (event.filter == EVFILT_READ) { /* filter, not a flag bit */
                    /* sleep for "processing" time */
                    readlen = read(clientfd, buf, sizeof(buf));
                    if (readlen > 0) {
                        buf[readlen - 1] = 0;
                        fprintf(stderr, "bytes %zu are available to read... %s \n", (size_t)event.data, buf);
                    }
                    sleep(4);
                } else {
                    fprintf(stderr, "unknown event: %8.8X\n", event.flags);
                }
            }
        }
    }
    return 0;
}
Every time a client sends something, the server experiences a "lag" of 4 seconds. (I exaggerated a bit, but it's quite reasonable for testing.) So how do I get around that problem? I see worker threads (a pool), each with its own kqueue, as a possible solution; then no connection lag would occur. (Each worker thread would read a certain "range" of file descriptors.)
Normally, you use kqueue as an alternative to threads. If you're going to use threads, you can just set up a listening thread and a worker thread pool with one thread per accepted connection. That's a much simpler programming model.
In an event-driven framework, you would put both the listening socket and all the accepted sockets into the kqueue, and then handle events as they occur. When you accept a socket, you add it to the kqueue, and when a socket handler finishes its work, it could remove the socket from the kqueue. (The latter is not normally necessary, because closing an fd automatically removes any associated events from any kqueue.)
Note that every event registered with a kqueue has a void* udata field, which can be used to identify the desired action when the event fires. So it's not necessary for every event to go through a single handler; in fact, it is common to have a variety of handlers. (For example, you might also want to handle a control channel set up through a named pipe.)
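As a concrete illustration of the udata idea, each registration can carry a pointer to its own handler, so the event loop becomes a generic dispatcher. A minimal sketch; the conn struct, make_conn(), and handle_client() are hypothetical, while EV_SET() and kevent() are the real API:

/* Hypothetical per-fd context stored in the event's udata. */
typedef struct conn {
    int fd;
    void (*on_ready)(struct conn *c);   /* what to do when fd is readable */
} conn_t;

/* Registering a connection together with its handler: */
conn_t *c = make_conn(clientfd, handle_client);      /* hypothetical ctor */
struct kevent kev;
EV_SET(&kev, c->fd, EVFILT_READ, EV_ADD, 0, 0, c);
kevent(kq, &kev, 1, NULL, 0, NULL);

/* Dispatching: no fd comparisons, just call the stored handler. */
for (int i = 0; i < nevents; ++i) {
    conn_t *ready = events[i].udata;
    ready->on_ready(ready);
}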
Hybrid event/thread models are certainly possible; without them, you cannot take advantage of multicore CPUs. One possible strategy is to use the event queue as a dispatcher in a producer-consumer model. The queue handler would directly handle events on the listening socket, accepting the connection and adding the accepted fd to the event queue. When a client connection event occurs, it would be posted to a work queue for later handling. It's also possible to have multiple work queues, one per thread, and have the accepter guess which work queue a new connection should be placed in, presumably on the basis of that thread's current load.

Socket performance

I just wondered how instant messengers and online games can accept and deliver messages so fast (network programming with sockets).
I've read that this is done with nonblocking sockets.
I tried blocking sockets with pthreads (each client gets its own thread) and nonblocking sockets with kqueue. Then I profiled both servers with a program that made 99 connections (each connection in its own thread) and then wrote some garbage to each (with a sleep of 1 second). Once all threads were set up, I measured in the main thread how long it took to get a connection from the server (wall clock time) while the "99 users" were writing to it.
threads (avg): 0.000350 // only a small difference from kqueue
kqueue (avg): 0.000300 // and this is not even stable (client side)
The problem is that while testing with kqueue I repeatedly got a SIGPIPE error (client side). (With a little delay, usleep(50), the error was fixed.) I think this is really bad, because a server should be capable of handling thousands of connections. (Or is it my fault on the client side?) The crazy thing about it is that the infamous pthread approach did just fine (with and without the delay).
So my question is: how can you build a stable socket server in C which can handle thousands of clients "asynchronously"? I only see the threads approach as a good thing, but this is considered bad practice.
Greetings
EDIT:
My test code:
// Globals shared by main() and the threads (declared here so the snippet
// stands on its own); socket_close() is presumably a close() wrapper.
static struct addrinfo hint, *res;

double get_wall_time() {
    struct timeval time;
    if (gettimeofday(&time, NULL)) {
        // Handle error
        return 0;
    }
    return (double)time.tv_sec + (double)time.tv_usec * .000001;
}

#define NTHREADS 100

volatile unsigned n_threads = 0;
volatile unsigned n_writes = 0;

pthread_mutex_t main_ready;
pthread_mutex_t stop_mtx;
volatile bool running = true;

void stop(void)
{
    pthread_mutex_lock(&stop_mtx);
    running = false;
    pthread_mutex_unlock(&stop_mtx);
}

bool shouldRun(void)
{
    bool copy;
    pthread_mutex_lock(&stop_mtx);
    copy = running;
    pthread_mutex_unlock(&stop_mtx);
    return copy;
}

#define TARGET_HOST "localhost"
#define TARGET_PORT "1336"

void *thread(void *args)
{
    char tmp = 0x01;
    if (__sync_add_and_fetch(&n_threads, 1) == NTHREADS) {
        pthread_mutex_unlock(&main_ready);
        fprintf(stderr, "All %u Threads are ready...\n", (unsigned)n_threads);
    }
    int fd = socket(res->ai_family, SOCK_STREAM, res->ai_protocol);
    if (connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
        socket_close(fd);
        fd = -1;
    }
    if (fd <= 0) {
        fprintf(stderr, "socket_create failed\n");
    }
    if (write(fd, &tmp, 1) <= 0) {
        fprintf(stderr, "pre-write failed\n");
    }
    do {
        /* Write some garbage */
        if (write(fd, &tmp, 1) <= 0) {
            fprintf(stderr, "in-write failed\n");
            break;
        }
        __sync_add_and_fetch(&n_writes, 1);
        /* Wait some time */
        usleep(500);
    } while (shouldRun());
    socket_close(fd);
    return NULL;
}

int main(int argc, const char * argv[])
{
    pthread_t threads[NTHREADS];

    pthread_mutex_init(&main_ready, NULL);
    pthread_mutex_lock(&main_ready);
    pthread_mutex_init(&stop_mtx, NULL);

    bzero((char *)&hint, sizeof(hint));
    hint.ai_socktype = SOCK_STREAM;
    hint.ai_family = AF_INET;
    if (getaddrinfo(TARGET_HOST, TARGET_PORT, &hint, &res) != 0) {
        return -1;
    }

    for (int i = 0; i < NTHREADS; ++i) {
        pthread_create(&threads[i], NULL, thread, NULL);
    }

    /* wait for all threads to be set up */
    pthread_mutex_lock(&main_ready);
    fprintf(stderr, "Main thread is ready...\n");
    {
        double start, end;
        int fd;

        start = get_wall_time();
        fd = socket(res->ai_family, SOCK_STREAM, res->ai_protocol);
        if (connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
            socket_close(fd);
            fd = -1;
        }
        end = get_wall_time();
        if (fd > 0) {
            fprintf(stderr, "Took %f ms\n", (end - start) * 1000);
            socket_close(fd);
        }
    }

    /* Stop all running threads */
    stop();
    /* Waiting for termination */
    for (int i = 0; i < NTHREADS; ++i) {
        pthread_join(threads[i], NULL);
    }
    fprintf(stderr, "Performed %u successful writes\n", (unsigned)n_writes);
    freeaddrinfo(res);
    return 0;
}
SIGPIPE comes when I try to connect to the kqueue server (after 10 connections are made, the server is "stuck"?). And when too many users are writing stuff, the server cannot open a new connection. (kqueue server code from http://eradman.com/posts/kqueue-tcp.html)
SIGPIPE means you're trying to write to a socket (or pipe) whose other end has already been closed (so no one will be able to read it). If you don't care about that, you can ignore SIGPIPE signals (call signal(SIGPIPE, SIG_IGN)) and the signals won't be a problem. Of course, the write (or send) calls on the sockets will still fail (with EPIPE), so you need to make your code robust enough to deal with that.
The reason SIGPIPE normally kills the process is that it's too easy to write programs that ignore errors on write/send calls and run amok, using up 100% of the CPU time, otherwise. As long as you always carefully check for errors and deal with them, you can safely ignore SIGPIPEs.
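A minimal sketch of that approach: ignore the signal once at startup, then treat EPIPE like any other write error.

#include <errno.h>
#include <signal.h>

signal(SIGPIPE, SIG_IGN);           /* once, at program startup */

ssize_t n = write(fd, buf, len);
if (n < 0) {
    if (errno == EPIPE) {
        /* peer is gone: close fd and drop this connection's state */
    } else {
        /* handle other write errors */
    }
}

On Linux, send() with the MSG_NOSIGNAL flag suppresses the signal for that single call instead of changing the process-wide disposition.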
Or is it my fault?
It was your fault. TCP works. Most probably you didn't read all the data that was sent.
And when too many users are writing stuff, the server cannot open a new connection
Servers don't open connections. Clients open connections. Servers accept connections. If your server stops doing that, there's something wrong with your accept loop. It should only do two things: accept a connection, and start a thread.
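That loop really can be that small. A sketch of a thread-per-connection accepter (listenfd and client_thread are placeholders for the listening socket and the per-connection handler):

/* Accept loop: accept a connection, hand it to a thread, repeat. */
for (;;) {
    int *fd = malloc(sizeof *fd);
    *fd = accept(listenfd, NULL, NULL);
    if (*fd < 0) {                      /* transient error: keep accepting */
        free(fd);
        continue;
    }
    pthread_t tid;
    pthread_create(&tid, NULL, client_thread, fd); /* thread owns *fd */
    pthread_detach(tid);
}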

C - Using poll to multiplex between socket(s) and stdin - Server

I'm writing a client/server application and I'm using poll to multiplex between several client sockets and stdin, where I can enter commands (for example: stop the server). I believe the structure (the "logic") of my code is correct; however, it's not behaving the way I expect it to:
struct pollfd pfd[NSERVER]; // defined as 10
pfd[0].fd = fileno(stdin);
pfd[0].events = POLLIN;
pfd[1].fd = socktfd; // server bind, listen socket
pfd[1].events = POLLIN;

struct sockaddr_storage remoteaddr; // client address
socklen_t addrlen;
char remoteIP[INET6_ADDRSTRLEN];
addrlen = sizeof remoteaddr;
char buf[1024]; // buffer
int pos = 2;

while (poll(pfd, 1, 0) >= 0)
{
    if (pfd[0].revents & POLLIN) { // stdin
        // process input and perform command
    }
    if (pfd[1].revents & POLLIN) {
        /* new connection */
        int connsockfd = accept(socktfd, (struct sockaddr *)&remoteaddr, &addrlen);
        pfd[pos].fd = connsockfd;
    }
    int i = 2;
    // Loop through the fds in pfd for events
    while (i <= NSERVER)
    {
        if (pfd[i].revents & POLLIN) {
            int c = recv(pfd[i].fd, buf, sizeof buf, 0);
            if (c <= 0) {
                if (c == 0)
                {
                    /* Client closed socket */
                    close(pfd[i].fd);
                }
            } else
            {   // Client sent some data
                c = send(pfd[i].fd, sbuff, z, 0);
                if (c <= 0)
                {
                    Error;
                }
                free(sbuff);
            }
        }
        i++;
    }
}
I've removed some code inside the recv and send to make the code easier to read.
It fails to behave: it just hangs, and neither accepts connections nor reacts to input from stdin.
Note: I would prefer to use poll over select, so please don't point to select :-).
Thanks in advance for any assistance.
You should set every unused pfd[i].fd to -1, so they get ignored initially by poll().
poll(pfd, 1, 0) is wrong and should be at least poll(pfd, 2, 0), or even poll(pfd, NSERVER, 0).
while (i <= NSERVER) should be while (i < NSERVER).
Your program probably hangs because you loop through the pfd array, which is not initialized and contains random values for .fd and .revents, so it wants to send() or recv() on some random fd, which might block. Do if (pfd[i].fd < 0) { i++; continue; } in the i < NSERVER loop.
You also don't set pfd[pos].events = POLLIN on newly accepted sockets. Don't set POLLOUT unless you have something to send, because it will trigger almost every time.
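Putting those points together, a corrected skeleton of the loop might look like this (keeping the question's variable names; error handling trimmed):

struct pollfd pfd[NSERVER];

pfd[0].fd = fileno(stdin);  pfd[0].events = POLLIN;
pfd[1].fd = socktfd;        pfd[1].events = POLLIN;
for (int i = 2; i < NSERVER; i++)
    pfd[i].fd = -1;                     /* negative fds are ignored by poll() */

while (poll(pfd, NSERVER, -1) >= 0) {   /* watch every slot, block until ready */
    if (pfd[0].revents & POLLIN) {
        /* read a command from stdin */
    }
    if (pfd[1].revents & POLLIN) {
        int connsockfd = accept(socktfd, (struct sockaddr *)&remoteaddr, &addrlen);
        for (int i = 2; i < NSERVER; i++) {
            if (pfd[i].fd < 0) {        /* first free slot */
                pfd[i].fd = connsockfd;
                pfd[i].events = POLLIN; /* don't forget the event mask */
                break;
            }
        }
    }
    for (int i = 2; i < NSERVER; i++) {
        if (pfd[i].fd < 0 || !(pfd[i].revents & POLLIN))
            continue;
        char buf[1024];
        int c = recv(pfd[i].fd, buf, sizeof buf, 0);
        if (c <= 0) {                   /* closed or error */
            close(pfd[i].fd);
            pfd[i].fd = -1;             /* free the slot */
        } else {
            /* echo / process buf */
        }
    }
}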

Boss Worker Pthreads Web Server in C - Server crashes if more requests are sent than there are threads

I'm writing a web server in C (which I suck with) using pthreads (which I suck with even more) and I'm stuck at this point. The model for the server is boss-worker, so the boss thread instantiates all worker threads at the beginning of the program. There is a global queue that stores the sockets of the incoming connections. The boss thread adds items (sockets) to the queue as connections are accepted. All of the worker threads then wait for an item to be added to the global queue in order to take up the processing.
The server works fine as long as I connect to it fewer times than the number of worker threads the server has. Because of that, I think that either something is wrong with my mutexes (maybe the signals are getting lost?) or the threads are being disabled after they run once (which would explain why, if there are 8 threads, the server can only parse the first 8 HTTP requests).
Here is my global queue variable.
int queue[QUEUE_SIZE];
This is the main thread. It creates a queue struct (defined elsewhere) with methods enqueue, dequeue, empty, etc. When the server accepts a connection, it enqueues the socket that the incoming connection is on. The worker threads, which were dispatched at the beginning, constantly check this queue to see if any jobs have been added; if there are jobs, a worker dequeues the socket and reads/parses/writes the incoming HTTP request on it.
int main(int argc, char* argv[])
{
    int hSocket, hServerSocket;  /* handle to socket */
    struct hostent* pHostInfo;   /* holds info about a machine */
    struct sockaddr_in Address;  /* Internet socket address struct */
    int nAddressSize = sizeof(struct sockaddr_in);
    int nHostPort;
    int numThreads;
    int i;

    init(&head, &tail);

    //**********************************************
    // ALL OF THIS JUST SETS UP THE SERVER (ADDR STRUCT, PORT, HOST INFO, ETC)
    if (argc < 3) {
        printf("\nserver-usage port-num num-thread\n");
        return 0;
    }
    else {
        nHostPort = atoi(argv[1]);
        numThreads = atoi(argv[2]);
    }

    printf("\nStarting server");
    printf("\nMaking socket");
    /* make a socket */
    hServerSocket = socket(AF_INET, SOCK_STREAM, 0);
    if (hServerSocket == SOCKET_ERROR)
    {
        printf("\nCould not make a socket\n");
        return 0;
    }

    /* fill address struct */
    Address.sin_addr.s_addr = INADDR_ANY;
    Address.sin_port = htons(nHostPort);
    Address.sin_family = AF_INET;

    printf("\nBinding to port %d\n", nHostPort);
    /* bind to a port */
    if (bind(hServerSocket, (struct sockaddr*)&Address, sizeof(Address)) == SOCKET_ERROR) {
        printf("\nCould not connect to host\n");
        return 0;
    }
    /* get port number */
    getsockname(hServerSocket, (struct sockaddr *) &Address, (socklen_t *)&nAddressSize);
    printf("Opened socket as fd (%d) on port (%d) for stream i/o\n", hServerSocket, ntohs(Address.sin_port));
    printf("Server\n\
    sin_family = %d\n\
    sin_addr.s_addr = %d\n\
    sin_port = %d\n"
        , Address.sin_family
        , Address.sin_addr.s_addr
        , ntohs(Address.sin_port)
    );
    // Up to this point is boring server set up stuff. I need help below this.
    //**********************************************

    // instantiate all threads
    pthread_t tid[numThreads];
    for (i = 0; i < numThreads; i++) {
        pthread_create(&tid[i], NULL, worker, NULL);
    }

    printf("\nMaking a listen queue of %d elements", QUEUE_SIZE);
    /* establish listen queue */
    if (listen(hServerSocket, QUEUE_SIZE) == SOCKET_ERROR) {
        printf("\nCould not listen\n");
        return 0;
    }

    while (1) {
        pthread_mutex_lock(&mtx);
        printf("\nWaiting for a connection");
        while (!empty(head, tail)) {
            pthread_cond_wait(&cond2, &mtx);
        }
        /* get the connected socket */
        hSocket = accept(hServerSocket, (struct sockaddr*)&Address, (socklen_t *)&nAddressSize);
        printf("\nGot a connection");
        enqueue(queue, &tail, hSocket);
        pthread_mutex_unlock(&mtx);
        pthread_cond_signal(&cond); // wake worker thread
    }
}
Here is the worker thread. It should always be running, checking for new requests (by seeing whether the queue is not empty). At the end of this method, it should defer back to waiting on the queue for the next time it is needed.
void *worker(void *threadarg) {
    pthread_mutex_lock(&mtx);
    while (empty(head, tail)) {
        pthread_cond_wait(&cond, &mtx);
    }
    int hSocket = dequeue(queue, &head);

    unsigned nSendAmount, nRecvAmount;
    char line[BUFFER_SIZE];

    nRecvAmount = read(hSocket, line, sizeof line);
    printf("\nReceived %s from client\n", line);

    //***********************************************
    // DO ALL HTTP PARSING (Removed for the sake of space; I can add it back if needed)
    //***********************************************

    nSendAmount = write(hSocket, allText, sizeof(allText));
    if (nSendAmount != -1) {
        totalBytesSent = totalBytesSent + nSendAmount;
    }
    printf("\nSending result: \"%s\" back to client\n", allText);

    printf("\nClosing the socket");
    /* close socket */
    if (close(hSocket) == SOCKET_ERROR) {
        printf("\nCould not close socket\n");
        return 0;
    }

    pthread_mutex_unlock(&mtx);
    pthread_cond_signal(&cond2);
}
Any help would be greatly appreciated. I can post more of the code if anyone needs it; just let me know. I'm not the best with OS stuff, especially in C, but I know the basics of mutexes, condition variables, semaphores, etc. Like I said, I'll take all the help I can get. (Also, I'm not sure if I posted the code exactly right, since this is my first question. Let me know if I should change the formatting to make it more readable.)
Thanks!
Time for a workers' revolution.
The work threads seem to be missing a while(true) loop. After the HTTP exchange and closing the socket, they should be looping back to wait on the queue for more sockets/requests.
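Concretely, wrapping the existing body in an outer loop, and unlocking the mutex before the blocking I/O so the workers don't serialize on it, might look like this sketch:

void *worker(void *threadarg) {
    for (;;) {                          /* serve requests forever */
        pthread_mutex_lock(&mtx);
        while (empty(head, tail))
            pthread_cond_wait(&cond, &mtx);
        int hSocket = dequeue(queue, &head);
        pthread_mutex_unlock(&mtx);     /* don't hold the lock during I/O */
        pthread_cond_signal(&cond2);    /* let the boss accept again */

        /* ... read/parse/write the HTTP request on hSocket ... */

        close(hSocket);
    }
    return NULL;                        /* not reached */
}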
