This question is related to Socket programming in C and Sleeping a worker thread in a file server.
I am very new to socket as well as pthreads and having to handle quite a large project.
I would like to know if a scenario as below is possible and how?
I have multiple clients connecting to a server, and each client sends multiple messages to the server. Each client is serviced by a task/worker thread. A client sends a message and, upon receiving a reply, sends the next message until it is done and closes the connection. The task thread processes one request from the client, sends its reply, and sleeps until it receives the next message from the same client; this repeats until the client closes the connection and the thread exits.
Now, as I said, multiple clients connect at the same time. Will the server process all messages from one client and then service the next, or receive messages in an interleaved manner as they arrive, keeping the connections of all 'live' clients open?
The server process can handle multiple clients at the same time or in an interleaved manner, depending on your CPU and program architecture.
Threaded programming on a multi-core or multi-CPU machine can handle those requests at the same time. ^_^
I have a multithreaded TCP server that waits for a number of predefined clients to write something to the server side, and, based on the requests, the server will write a message to all the clients. I'm stuck at the part where the clients that have already sent a message must wait until all the clients have sent their respective messages. How can I do this? I attempted to write two different thread functions, the first one calling the second one, but I'm not sure if this is the right way. Is there a way to make the clients wait until the server writes to all of them?
I want two functionalities to be implemented in my UDP server application.
Creating thread that continuously receives data coming from any client.
Creating a thread that continuously sends data on the server socket after a specific time period and waits for a reply from the client. (I implemented this so that whenever any client goes down and no data comes back from it, the server comes to know that the client is down.)
Now, the problem I am facing is that since the two threads share the same connected socket, whenever both threads try to access this socket simultaneously, a deadlock occurs.
One of the solutions I found was to create two sockets: one that continuously receives data, and another meant for sending data from the server to the clients from time to time and waiting for their responses. But since the server socket must be bind()ed, and I have already bind()ed my socket to INADDR_ANY once, how would I create a separate socket for sending data from the server and waiting for replies from the clients?
Please help me with this complication.
Also do let me know if there is some other better way of its implementation.
Thanks in advance :)
You will have to use non-blocking network functions and use a mutex to ensure no two threads access the socket at once.
A single thread may, however, be enough, if you use non-blocking functions. Using many threads will probably not improve performance, but may make the code more readable.
My application creates threads for file transfer on both the server and client sides. Right now I'm using delaying tactics (a simple for loop) on the client side so that thread creation in the client happens after the thread creation process in the server.
The app works fine. But this is crude, if not ugly. I need a proper way to ensure the client thread is not started until it knows that the server thread has been started.
I tried to use a send() from the server to the client. The client's recv() should block waiting for the server's signal, but apparently it doesn't: the message on the client console says the connection was refused by the server. Any hints, please?
select() may be what you're looking for: you give it a set of sockets and it blocks until something happens on one of those sockets (and you can provide a timeout to avoid waiting forever).
Call select() to wait until data is received on the client side, then recv() to ensure what was received is the right message from the server.
It seems you are using a connectionless transport. In this case I would suggest playing a ping-pong game: the client sends a "ping" UDP packet to the server in a loop (with a reasonable period) until it receives a "pong" UDP packet from the server or times out.
I am trying to implement a TCP server which is part of a larger project. Basically, the server should be able to maintain a TCP connection with any number of clients (a minimum of 32) and service any client that requests servicing. In our scenario it is assumed that once a client is connected to the server, it will never close the connection unless some sort of failure occurs (e.g. the machine running the client breaks down), and it will repeatedly request service from the server. The same is the case with all the other clients, i.e. each will maintain a connection with the server and perform transactions. So, to sum up, the server will be maintaining the connections with the clients while simultaneously serving each client as needed, and should also be able to accept any other clients that want to connect.
Now, I implemented the above functionality using the select() system call of the Berkeley socket API, and it works fine when we have a small number of clients (say 10). But the server needs to scale as far as possible, as we are implementing it on a 16-core machine. For that I looked through various multithreading design techniques, e.g. one thread per client, and the best one in my opinion is a thread-pool design. As I was about to implement it, I ran into some problems:
If I designate the main thread to accept any number of incoming connections and save each connection's file descriptor in a data structure, and I have a pool of threads, how would I get the threads to poll whether a particular client is requesting service or not? The design is simple enough for scenarios in which a client contacts the server and, after getting the service, closes the connection: we can pick a thread from the pool, service the client, and then push the thread back into the pool for future connection handling. But when we have to service a set of clients that maintain their connections and request services intermittently, what would be the best approach? All help will be much appreciated, as I am really stuck on this.
Thanks.
Use pthreads, with one thread per CPU plus one extra thread.
The extra thread (the main thread) listens for new connections with the listen() system call, accepts the new connections with accept(), then determines which worker thread currently has the least number of connections, acquires a lock/mutex for that worker thread's "pending connections" FIFO queue, places the descriptor for the accepted connection onto the worker thread's "pending connections" FIFO queue, and sends a "check your queue" notification (e.g. using a pipe) to the worker thread.
The worker threads use "select()", and send/receive data to whatever connections they've accepted. If/when a worker thread receives a "check your queue" notification from the main thread it would acquire the lock/mutex for its "pending connections" FIFO queue and add any newly accepted connections to its "fd_set" list.
For 1024 connections and 16 CPUs, you might end up with one main thread waiting for new connections (but doing almost nothing, as you wouldn't be expecting many new connections) and 16 worker threads handling an average of 64 connections each.
One thread per client is almost certainly the best design. Make sure you always have at least one thread blocked in accept waiting for a new connection - this means that after accept succeeds, you might need to create a new thread before proceeding if it was the last one. I've found semaphores to be a great primitive for keeping track of the need to spawn new listening threads.
This question is related to Many processes executed by one thread and Socket server or file server implementation using multiple threads: concept not clear.
I am still unclear about a few things. Now, the client and server of a socket server or a file server need not be on different machines (of course, they can be).
The requests the server receives come from different PROCESSES, but they are processed by threads (say one per process), and these task threads belong to a different process (the server process). What I am confused about is: how can calls from different processes be processed by threads of a single process, with those threads communicating through a "shared memory" architecture, which is so "THREAD"-like and very unlike "PROCESSES"?
Thanks
Some simple groundwork. Your server process contains one or more threads that process requests from any number of client processes. The clients and the server can be on the same machine or on different machines. Clients and server are "connected" by sockets, which are used to send requests from a client to the server. The same socket is used to provide a response to the client once the request has been processed. Each client has a unique connection to the server.
There are many ways to implement a server as described above. One possibility is that the server has one thread which handles the sockets using select(). Let's call this the Main Thread. The server process will also have several threads that are responsible for processing the requests and responding to the clients. Let's call these Worker Threads.
When the Main Thread receives a message from one of its clients' sockets, it will take this request and hand it to one of the Worker Threads for processing. The Worker Thread will process the request and then respond using the original socket.
This server model uses a producer/consumer model where the Main Thread is the producer (in that it takes a request from the socket and produces a piece of work requiring processing) and the consumers are the Worker Threads.
There are several challenges in implementing this type of server, all of which are documented and discussed in various data structures and algorithms texts, not the least of which are:
How do the Main and Worker threads communicate?
How do I protect data shared by the various threads from simultaneous modification?
How do I select which Worker thread should process a request?
I hope this helps.