I have a multithreaded TCP server that waits for a number of predefined clients to write something to the server and, based on the requests, writes a message back to all the clients. I'm stuck at the part where the clients that have already sent a message must wait until all the clients have sent their respective messages. How can I do this? I attempted to write two different thread functions, the first one calling the second one, but I'm not sure this is the right way. Is there a way to make the clients wait until the server has written to all of them?
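One way to do this (a minimal sketch under my own assumptions, not your actual code: NUM_CLIENTS is the predefined client count and handle_client is a hypothetical per-connection thread function) is a pthread barrier. Each thread reads its client's request, waits at the barrier until all NUM_CLIENTS threads have arrived, and only then writes the reply back; the clients "wait" simply because they block in read() until their reply arrives:

```c
#include <pthread.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

#define NUM_CLIENTS 4                 /* assumed number of predefined clients */

/* Initialized once in main(): pthread_barrier_init(&barrier, NULL, NUM_CLIENTS); */
static pthread_barrier_t barrier;

/* One thread per accepted connection; arg carries the client's socket fd. */
static void *handle_client(void *arg)
{
    int fd = (int)(intptr_t)arg;
    char request[256];

    ssize_t n = read(fd, request, sizeof(request) - 1);
    if (n <= 0) { close(fd); return NULL; }
    request[n] = '\0';

    /* Block here until every predefined client has sent its message. */
    pthread_barrier_wait(&barrier);

    /* All requests are in; build the reply (based on the requests) and send it. */
    const char *reply = "all clients have reported in\n";
    write(fd, reply, strlen(reply));

    close(fd);
    return NULL;
}
```

Note that if one client drops out before reaching the barrier, the remaining threads will wait forever, so real code would need a timeout or error path there.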
I have a question that I think is more theoretical than practical.
I have a server that receives connections from clients using the select() function.
The server follows a master-slave pattern, where the slaves are threads that take a file descriptor from the master and handle the requests of a specific client.
(Every request from a client starts with a connection request.)
I think select() must somehow recognize that the same client is making several requests, because if the same client makes two requests and select() can't tell it is the same client, it will create two file descriptors for the same client with accept(), and then two different threads will handle different requests from the same client. I think this can create a lot of concurrency problems, because different threads write to different file descriptors while the real channel to the client is the same.
So, I would like to know: how can select() recognise a client after the first request?
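For what it's worth, accept() only comes into play when a brand-new connection arrives; once a client is connected, its own file descriptor becomes readable each time that same client sends another request. Below is a minimal sketch of that pattern (listen_fd and the buffer handling are illustrative, and it assumes the client keeps its connection open between requests):

```c
/* select() watches the listening socket plus every connected client fd.
 * accept() runs only when listen_fd itself is readable (a new connection);
 * further requests from an already-connected client make that client's own
 * fd readable, so no second fd is created for the same client. */
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

void serve(int listen_fd)
{
    fd_set master, readable;
    int maxfd = listen_fd;

    FD_ZERO(&master);
    FD_SET(listen_fd, &master);

    for (;;) {
        readable = master;
        if (select(maxfd + 1, &readable, NULL, NULL, NULL) < 0)
            break;

        for (int fd = 0; fd <= maxfd; fd++) {
            if (!FD_ISSET(fd, &readable))
                continue;

            if (fd == listen_fd) {
                /* New client: accept once, then keep watching its fd. */
                int client = accept(listen_fd, NULL, NULL);
                if (client >= 0) {
                    FD_SET(client, &master);
                    if (client > maxfd) maxfd = client;
                }
            } else {
                /* Existing client sent another request on the same fd;
                 * here the master could hand `fd` to a slave thread. */
                char buf[512];
                ssize_t n = recv(fd, buf, sizeof(buf), 0);
                if (n <= 0) {            /* client closed or error */
                    close(fd);
                    FD_CLR(fd, &master);
                }
                /* ... otherwise process buf[0..n) ... */
            }
        }
    }
}
```

If instead every request really does open a fresh connection, each connection legitimately gets its own fd, and the server can only tell clients apart by what they send (for example, an identifier inside the request).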
Context: this is a web/SQLite application. One process receives new data over TCP and feeds it into an SQLite database. Other processes (their number varies) are launched as required when clients connect and request updates over HTML5's server-sent events interface (this might change to WebSockets in the future).
The idea is to make the client apps block, and to find a way for the server to create a notification that will wake up all waiting clients.
Note that the clients aren't fork'ed from the server.
I'm hoping for a solution that:
doesn't require clients to register themselves to the server
allows the server to broadcast even if no client is listening - and doesn't create a huge pile of unprocessed notifications
allows clients to detect that server isn't present
allows clients to define a custom timeout (maximum wait time for an event)
Solutions checked:
sqlite3_update_hook() - only works within a single process (damned, that would have been sleek)
signals: I still have nightmares about the last time I used signals. Maybe signalfd would be better (the server creates a folder, each client creates a unique file, and the server notifies all files in that folder)
inotify - haven't read enough about this one
semaphores / locks / shared memory - I can't think of a non-hacky way to use these. The server could update a shared memory area with the row ID of the line just inserted into the DB, but then what?
I'm sure I'm missing something obvious - but what? At this time, polling seems to be the best option!
Thanks.
Just as a suggestion, could you try message queues? Multiple clients can connect to the same queue and receive one broadcast message, and each client can have its own message queue if it requires communication with the server.
Message queues are implemented by the Linux kernel and they are very reliable. I personally use message queues to pass messages from several clients to a central routing daemon, with the clients being responsible for processing and returning the altered data.
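To make the per-client-queue idea concrete, here is a rough sketch using POSIX message queues (one possible flavour; the answer above doesn't specify which kind, and the queue names, prefix convention and row-ID payload are my assumptions, not from the question). Each client creates its own queue and waits with a client-defined timeout; after an insert, the server pushes the new row ID to every client queue it can open, which amounts to a broadcast:

```c
/* Link with -lrt on older glibc. Queue names must start with '/'. */
#include <mqueue.h>
#include <fcntl.h>
#include <time.h>
#include <stdio.h>
#include <errno.h>

/* Client: create a private queue and wait for a notification with a timeout. */
int wait_for_update(const char *qname, long timeout_sec, char *buf, size_t buflen)
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = (long)buflen };
    mqd_t q = mq_open(qname, O_CREAT | O_RDONLY, 0600, &attr);
    if (q == (mqd_t)-1)
        return -1;

    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    ts.tv_sec += timeout_sec;                 /* client-defined maximum wait */

    ssize_t n = mq_timedreceive(q, buf, buflen, NULL, &ts);
    mq_close(q);
    return (n < 0 && errno == ETIMEDOUT) ? 0 : (int)n;
}

/* Server: after inserting a row, push the row ID to one client's queue.
 * Repeating this for every queue found under a known prefix gives a broadcast. */
int notify_one(const char *qname, long long rowid)
{
    mqd_t q = mq_open(qname, O_WRONLY | O_NONBLOCK);
    if (q == (mqd_t)-1)
        return -1;                            /* that client is gone */
    char msg[32];
    int len = snprintf(msg, sizeof(msg), "%lld", rowid);
    int rc = mq_send(q, msg, (size_t)len + 1, 0);
    mq_close(q);                  /* O_NONBLOCK: a full queue fails with EAGAIN
                                   * instead of piling up notifications */
    return rc;
}
```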
I want two functionalities to be implemented in my UDP server application:
Creating a thread that continuously receives data coming from any client.
Creating a thread that continuously sends data on the server socket after a specific time period and waits for a reply from the client. (I implemented this to make sure that whenever any client goes down and the data is not received back from it, the server comes to know that the client is down.)
Now, the problem I am facing is that since the two threads share the same connected socket, whenever both threads try to access this socket simultaneously, a deadlock occurs.
One of the solutions I found was to create two sockets: one that continuously receives data, and another meant for sending data from the server to the clients from time to time and waiting for their responses. But since the server socket must be bind()ed, and I have already bind()ed my socket to INADDR_ANY once, how would I create a separate socket for sending data from the server and waiting for replies from the clients?
Please help me with this complication.
Also do let me know if there is some other better way of its implementation.
Thanks in advance :)
You will have to use non-blocking network functions and use a mutex to ensure no two threads access the socket at once.
A single thread may, however, be enough if you use non-blocking functions. Using many threads will probably not improve performance, but it may make the code more readable.
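A minimal sketch of that suggestion (assuming a receiver thread and a periodic heartbeat thread; sock_lock, hb_arg and the 5-second interval are illustrative, not from the question): the socket is used non-blockingly and every send/receive is wrapped in the same mutex, so the two threads never operate on it at once.

```c
#include <pthread.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <stdint.h>
#include <errno.h>
#include <unistd.h>

static pthread_mutex_t sock_lock = PTHREAD_MUTEX_INITIALIZER;

/* Receiver thread: non-blocking receive, so the lock is never held while waiting. */
void *receiver(void *arg)
{
    int fd = (int)(intptr_t)arg;
    char buf[1500];
    struct sockaddr_storage peer;
    socklen_t plen;

    for (;;) {
        plen = sizeof(peer);
        pthread_mutex_lock(&sock_lock);
        ssize_t n = recvfrom(fd, buf, sizeof(buf), MSG_DONTWAIT,
                             (struct sockaddr *)&peer, &plen);
        pthread_mutex_unlock(&sock_lock);

        if (n > 0) {
            /* ... handle datagram in buf[0..n) ... */
        } else if (n < 0 && errno != EAGAIN && errno != EWOULDBLOCK) {
            break;                  /* real error */
        } else {
            usleep(10000);          /* nothing pending; back off briefly */
        }
    }
    return NULL;
}

struct hb_arg { int fd; struct sockaddr_in addr; };  /* socket + one client address */

/* Heartbeat thread: periodically send a probe to a known client address. */
void *heartbeat(void *argp)
{
    struct hb_arg *a = argp;
    for (;;) {
        pthread_mutex_lock(&sock_lock);
        sendto(a->fd, "ping", 4, 0, (struct sockaddr *)&a->addr, sizeof(a->addr));
        pthread_mutex_unlock(&sock_lock);
        sleep(5);                   /* the "specific time period" */
    }
    return NULL;
}
```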
I'm currently working on a distributed networking project for some networking practice. The idea is to send a file from my server to a few different clients (after breaking up the file), and the clients will find the frequency of a string and return it to the server.
The problem I'm running into is how to identify each client and send data to each one.
The solution I've been working on is to identify each client by its port. The problem arises in how I handle multiple connections and ports. I know I have to use send() to send the data to a port once I open a connection, etc., but I have no idea how to do this across multiple connections (I can do this with a single client and server, but not with multiple clients).
Does anyone have any suggestions from a high-level standpoint? I got one suggestion from a friend who said:
Open a socket
Listen for connections
When a connection request is received, spawn a new thread to handle the connection.
The main process will go back to step 2 to listen for new connections, while the new thread will handle all data flow with the associated client.
But I'm not really sure I understand this... I've also been referencing http://shoe.bocks.com/net/#socket
Thanks
Your friend is correct. Follow the first three steps (mentioned by him) and then you need to:
After spawning the thread, send the data (read from the file) to the new socket.
Once the entire file has been sent, you should disconnect and exit the thread. On the client side, you should handle the disconnect and probably exit as well.
NOTES:
Also, you can use sendfile() instead of send() if you wish. You can use select() if you wish to handle all connections without spawning threads.
Refer http://beej.us/guide/bgnet/ for details.
EDIT:
How to identify each client? Ans: This is the classic port discovery problem, but in your case it's simple. The server should be listening on a well-known port (say 12345), and all the clients will connect to it. Once they are connected, the server has all the sockfds. You need to use these sockfds to send data and to identify the clients.
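Putting the steps together, here is a minimal sketch of the accept-and-spawn pattern described above (the well-known port 12345 comes from the answer; handle_client and the way each client's chunk of the file is produced are placeholders you would replace with your own splitting logic):

```c
#include <pthread.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <stdint.h>
#include <unistd.h>

static void *handle_client(void *arg)
{
    int fd = (int)(intptr_t)arg;

    /* Send this client's piece of the file, then hang up. */
    const char chunk[] = "...this client's share of the file...";
    send(fd, chunk, sizeof(chunk) - 1, 0);

    close(fd);                  /* client sees EOF and knows the transfer is done */
    return NULL;
}

int main(void)
{
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(12345),
                                .sin_addr.s_addr = htonl(INADDR_ANY) };

    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 16);

    for (;;) {
        /* Each accepted connection gets its own sockfd; that sockfd is how
         * the server identifies and talks to that particular client. */
        int client = accept(listen_fd, NULL, NULL);
        if (client < 0)
            continue;

        pthread_t tid;
        pthread_create(&tid, NULL, handle_client, (void *)(intptr_t)client);
        pthread_detach(tid);
    }
}
```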
If you check out networkComms.net, an open source network communication library, once you have created a connection with a client you can keep track of that specific client by looking at its NetworkIdentifier tag, a GUID unique to each client.
If you will be sending large files to all of your clients, also check out the included DistributedFileSystem, which is specifically designed for that purpose.
This question is related to Socket programming in C and Sleeping a worker thread in a file server.
I am very new to sockets as well as pthreads and have to handle quite a large project.
I would like to know if a scenario as below is possible and how?
I have multiple clients connecting to a server, and each client sends multiple messages to the server. Each client is serviced by a task/worker thread. A client sends a message and, upon receiving a reply, sends the next message until it is done and closes the connection. The task thread processes one request from the client, sends its reply, and sleeps until it receives the next message from the same client, until the client closes the connection and the thread exits.
Now, as I said, multiple clients connect at the same time. Will the server process all messages from one client and then service the next, or will it receive messages in an interleaved manner as they arrive, keeping the connections of all 'live' clients open?
Will the server process all messages from one client and then service the next, or receive messages in an interleaved manner as they arrive, keeping connections of all 'live' clients open?
The server process can handle multiple clients at the same time or in an interleaved manner, depending on your CPU and programming architecture.
Threaded programming plus a multi-core or multi-CPU machine can handle those requests at the same time. ^_^
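As a rough illustration of the thread-per-client scenario described in the question (worker and handle_request are made-up names), each worker simply blocks in recv() on its own client's socket; while one worker sleeps waiting for its client's next message, the other workers keep running, so requests from different clients are naturally serviced in an interleaved fashion:

```c
#include <sys/socket.h>
#include <stdint.h>
#include <unistd.h>

static void handle_request(int fd, const char *req, size_t len)
{
    /* ... build a reply for this particular request ... */
    send(fd, "ok\n", 3, 0);
    (void)req; (void)len;
}

/* One of these threads per connected client; arg carries the client's fd. */
void *worker(void *arg)
{
    int fd = (int)(intptr_t)arg;
    char buf[1024];

    for (;;) {
        /* Blocks (the thread sleeps) until this client's next message arrives. */
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n <= 0)
            break;                 /* client closed the connection */
        handle_request(fd, buf, (size_t)n);
    }

    close(fd);
    return NULL;                   /* thread exits when its client is done */
}
```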