I am writing a UDP server/client application.
I want my single server to handle 40 clients at a time. For this, I want to create 40 dedicated threads, each dedicated to a single client. Since there are 40 threads, one for each client, I want to create 40 dedicated sockets as well.
But the problem is that:
I don't know what the 40 IP addresses to which I shall bind() my sockets will be. (Since, as far as I know, I have to bind() to my server's IP address.) Normally I bind() to "INADDR_ANY" when there is only a single socket.
But what should be the IP addresses at which I should bind() each of my 40 sockets?
Please help me. Any comment/ help is appreciated.
One common way to do this with UDP is:
Server bind()s to a well-known port.
Client sends the initial packet to that well-known port.
Server receives the first packet from the client on the well-known port.
Server creates a new socket with a random port.
Server replies to the client from this new socket.
Client receives the reply, notices it comes from a different port than the well-known server port, and uses that port as the destination for further communication.
You'll use the address filled in by the recvfrom() call to learn the remote address (getpeername() only works on a connected socket).
Keep in mind that UDP is connection-less; you'll need some way to signal the end of a conversation or to time out your sockets.
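As a rough illustration, here is a minimal sketch of the server side of that handshake in C, assuming IPv4 and an arbitrary well-known port 5000 (error handling mostly omitted):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int listener = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in well_known = {0};
    well_known.sin_family = AF_INET;
    well_known.sin_addr.s_addr = htonl(INADDR_ANY);
    well_known.sin_port = htons(5000);              /* placeholder well-known port */
    bind(listener, (struct sockaddr *)&well_known, sizeof well_known);

    char buf[1500];
    struct sockaddr_in client;
    socklen_t clen = sizeof client;
    /* The client's address/port are filled into 'client'; that is how the
       server learns where to send its reply. */
    recvfrom(listener, buf, sizeof buf, 0, (struct sockaddr *)&client, &clen);

    /* Dedicated socket for this client, bound to an OS-chosen (ephemeral) port. */
    int dedicated = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in local = {0};
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(0);                      /* 0 = let the OS pick a port */
    bind(dedicated, (struct sockaddr *)&local, sizeof local);

    /* Replying from 'dedicated' is what tells the client about the new port:
       the client sees it as the source port of this packet. */
    const char reply[] = "hello";
    sendto(dedicated, reply, sizeof reply, 0, (struct sockaddr *)&client, clen);

    close(dedicated);
    close(listener);
    return 0;
}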
bind only needs the local address, not the remote address.
If you want one socket for each client, then you'll need to use different ports for each (using bind). That way, each client can send its traffic to a dedicated port, and you can have a thread for each socket/port.
It's probably a better idea to only have one socket (and one port) though, and have logic in your code to assign traffic to a thread based on the remote address (retrieved using recvfrom(), for example).
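To illustrate both points: bind() only takes a local address, and the remote address of each datagram comes back from recvfrom(). A minimal sketch, assuming IPv4 and an arbitrary port 6000:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in local = {0};
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);      /* local address only */
    local.sin_port = htons(6000);                   /* arbitrary example port */
    bind(s, (struct sockaddr *)&local, sizeof local);

    char buf[1500];
    struct sockaddr_in peer;
    socklen_t plen = sizeof peer;
    ssize_t n = recvfrom(s, buf, sizeof buf, 0, (struct sockaddr *)&peer, &plen);
    if (n >= 0) {
        char ip[INET_ADDRSTRLEN];
        inet_ntop(AF_INET, &peer.sin_addr, ip, sizeof ip);
        /* The (address, port) pair identifies the client; a dispatcher could
           use it as a key to pick the thread handling this client. */
        printf("%zd bytes from %s:%u\n", n, ip, (unsigned)ntohs(peer.sin_port));
    }
    close(s);
    return 0;
}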
The usual way is to bind a single socket and accept incoming connections. Each connection will be assigned a unique socket by accept.
Since you are using UDP, I would simply use TCP as described above to let each client learn the address/port of its dedicated server UDP socket.
Create a single listening socket in a dedicated listening thread.
When it receives a new packet, use the packet's remote addr/port, or put a unique clientID in the packet payload, to uniquely identify the client.
Create a new thread for that client if one does not already exist, pass the packet to that thread for further processing, and go back to listening.
If a given client thread does not receive any packets for a while, it can terminate itself.
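A rough sketch of that dispatch loop might look like the following; find_or_create_client_thread() and queue_packet_for_client() are hypothetical helpers standing in for your own client table and per-thread queues:

#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>

struct client;  /* your per-client state, owned by its thread */

struct client *find_or_create_client_thread(const struct sockaddr_in *addr);
void queue_packet_for_client(struct client *c, const char *data, ssize_t len);

void listener_loop(int sock)
{
    char buf[1500];
    struct sockaddr_in from;
    socklen_t flen;

    for (;;) {
        flen = sizeof from;
        ssize_t n = recvfrom(sock, buf, sizeof buf, 0,
                             (struct sockaddr *)&from, &flen);
        if (n < 0)
            continue;   /* or handle the error */

        /* The (address, port) pair uniquely identifies the client. */
        struct client *c = find_or_create_client_thread(&from);
        queue_packet_for_client(c, buf, n);   /* hand off, go back to listening */
    }
}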
I'm attempting to have two daemons running on the same port and IP, but one is a server and the other is a client. Is there a method using socket options that would allow each socket to have a copy of the packet and let the daemons filter out the messages based on the protocol? It looks like reuse-address blocks the first configured port, and reuse-port might just balance the packets between the two daemons.
Otherwise, I guess I could create another daemon to read the socket and send the packets to the correct daemon.
Thanks
You are correct at the end: you will need a third process that binds to the port and forwards the packets to the correct daemon.
The other way to do it would be to use three ports and a firewall rule to redirect from the front-end port to the back-end ports, but that is a lot more complicated and not portable. In the end you could also use QoS or something similar; there is a wide range of possible use cases behind the word "protocol".
If the UDP packets you are receiving are multicast or broadcast packets, then you can set SO_REUSEADDR (and, for BSD-based OS's, SO_REUSEPORT) on the socket (using setsockopt()) before bind()-ing the socket, and then both sockets will receive a copy of each incoming UDP packet. (If the UDP packets are regular old unicast packets, OTOH, then doing the above will result in each received packet being received by only one of the two UDP sockets, which isn't what you want).
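As a hedged illustration of the multicast case, a receiver might be set up roughly like this; the group 239.0.0.1 and port 5000 are placeholders, and SO_REUSEPORT is only set where the platform defines it:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int make_multicast_receiver(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    int on = 1;

    setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &on, sizeof on);
#ifdef SO_REUSEPORT
    setsockopt(s, SOL_SOCKET, SO_REUSEPORT, &on, sizeof on);   /* BSD-based OS's, newer Linux */
#endif

    struct sockaddr_in local = {0};
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(5000);                              /* placeholder port */
    bind(s, (struct sockaddr *)&local, sizeof local);

    /* Join the multicast group so the kernel delivers its packets here. */
    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr("239.0.0.1");        /* placeholder group */
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof mreq);

    return s;   /* both daemons can do this and each gets a copy of every packet */
}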
Note, however: You refer to the two daemons as being a "client" and a "server" -- the implication of those terms is that the two daemons are going to communicate with each other. If that is the case, then the typical approach would be to have the server-daemon bind to a well-known port number, and the client-daemon could bind to any port number (e.g. it could pass 0 as the port-number to bind(), and let the OS choose an available port-number for it). Then the client-daemon could start the conversation by sending one or more UDP packets to the server's well-known port number, and the server can find out which port the client is sending from (and therefore what port to send reply packets to) by examining the fifth argument of its recvfrom() call. In this case there is no need for the two programs to bind to the same port, and therefore no need for packet-forwarding.
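A minimal sketch of that server-side pattern, assuming the socket is already bound to the server's well-known port, and replying from the same socket:

#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>

void serve_one_request(int srv)   /* srv is already bound to the well-known port */
{
    char req[1500];
    struct sockaddr_in client;
    socklen_t clen = sizeof client;

    /* The fifth/sixth arguments receive the client's address and port. */
    ssize_t n = recvfrom(srv, req, sizeof req, 0,
                         (struct sockaddr *)&client, &clen);
    if (n < 0)
        return;

    /* Reply to whatever port the client happened to be bound to. */
    const char reply[] = "ack";
    sendto(srv, reply, sizeof reply, 0, (struct sockaddr *)&client, clen);
}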
I have a Linux computer with a program in C that must communicate over UDP with 4 different pieces of equipment. The computer sends different commands to each device and receives responses, sometimes in parallel ...
I am a complete beginner, and I managed to communicate with one device using a UDP socket. But now I'm looking for a way to communicate with all of these devices, what I would like to call "multiple sockets", but I don't know where to look / what terms to search for to find a way ...
My Linux computer is the client and all the devices are servers. I only have one Ethernet port on the computer and will have to use a switch to have access to all the equipment. I would like to create functions like:
sendcmd(IPnumber, PORTnumber, cmd , ...)
readbuff(IPnumber, PORTnumber, buff, ...)
so I can choose which IP will receive cmd ... I don't know if it's possible, or if I need to open the socket, then close it and redo the operation with another IP ...
So, if I ever managed to make myself understood, where should I look for a solution to my problem?
Thank you !
You can use a single UDP socket for your scenario. You can keep the socket open for the lifetime of your application.
UDP is not connection oriented. UDP sockets are also not classified into client sockets and server sockets. UDP sockets are always bound to a local port, either implicitly (typically for pure clients) or explicitly (which is usually the case for servers). In your case you do not care about the port for your UDP client.
To send to your four UDP servers you can use sendto(). This lets you specify the destination IP address and port the UDP packet gets sent to.
To receive from your four UDP servers you can use recvfrom(). This will tell you the IP address and port the UDP packet came from.
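As one possible shape for the sendcmd()/readbuff() functions from the question (the names and the use of dotted-quad IP strings are just illustrative), built on a single UDP socket kept open for the lifetime of the application. Note that readbuff() here reports which device a datagram came from rather than filtering by it:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>

static int udp_sock = -1;   /* created once, kept open for the program's lifetime */

int udp_init(void)
{
    udp_sock = socket(AF_INET, SOCK_DGRAM, 0);
    return udp_sock;
}

/* Send 'cmd' to the device at ip:port, e.g. sendcmd("192.168.0.10", 5000, ...)
   (address and port are made-up examples). */
ssize_t sendcmd(const char *ip, unsigned short port, const void *cmd, size_t len)
{
    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(port);
    inet_pton(AF_INET, ip, &dst.sin_addr);
    return sendto(udp_sock, cmd, len, 0, (struct sockaddr *)&dst, sizeof dst);
}

/* Receive one datagram and report which device it came from. */
ssize_t readbuff(char *ip_out, size_t ip_len, unsigned short *port_out,
                 void *buf, size_t buflen)
{
    struct sockaddr_in src;
    socklen_t slen = sizeof src;
    ssize_t n = recvfrom(udp_sock, buf, buflen, 0, (struct sockaddr *)&src, &slen);
    if (n >= 0) {
        inet_ntop(AF_INET, &src.sin_addr, ip_out, (socklen_t)ip_len);
        *port_out = ntohs(src.sin_port);
    }
    return n;
}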
You most likely want to have a receive loop of some kind. If you want to do anything else in your application you most likely want to either make recvfrom() non-blocking or you want to have the receive loop in its own thread. But this goes beyond your question.
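One possible sketch of such a loop, using poll() with a timeout so the same thread can also do other work (the 100 ms timeout is arbitrary):

#include <netinet/in.h>
#include <poll.h>
#include <sys/socket.h>

void poll_loop(int sock)
{
    struct pollfd pfd = { .fd = sock, .events = POLLIN };

    for (;;) {
        int ready = poll(&pfd, 1, 100 /* ms timeout */);
        if (ready > 0 && (pfd.revents & POLLIN)) {
            char buf[1500];
            struct sockaddr_in from;
            socklen_t flen = sizeof from;
            recvfrom(sock, buf, sizeof buf, 0, (struct sockaddr *)&from, &flen);
            /* ...handle the response from this device... */
        }
        /* ...do other periodic work here... */
    }
}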
The most important aspect of UDP is that it is not a protocol (despite its name which is misleading). It is one puzzle piece for a protocol. It is a tool to develop your own protocol. But I assume you already have a specific protocol at hand defined by your peripherals.
I have a small doubt in socket programming. I am able to send my data from the client to the server, and my server processes the data. I want to send the output of the processed data back to my client. So can we "write" the data back to the client using the same socket? I mean, a server listens on a port before accepting a connection and receiving data; so, similarly, do I need to make my client listen on some other port (bind it to another socket) and make my server connect to that socket to transfer the data back? Any kind of example, explanation or references would be appreciated. Thanks a lot in advance.
Check out Beej's Network Programming Guide first of all.
The basic screenplay of a server/client connection goes like this:
Server listen()s on a fixed port, with a given socket.
Client connect()s to the server port; client obtains a socket.
Server accept()s the connection, and accept() returns a new socket for the connection.
(Server continues listening on the original port with the original socket.)
For the specific connection with the client, the server write()s to the new socket it obtained when accept()ing the incoming connection. A busy server will have many, many sockets, but it will only ever need to bind() to one port. All connections come in to that one port, but the OS's networking protocol stack separates the data and makes it available at the connection-specific socket.
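A minimal sketch of that flow in C, assuming an arbitrary port 8000 and with error handling mostly omitted; the reply goes out on the same socket that accept() returned:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8000);                 /* placeholder port */
    bind(lfd, (struct sockaddr *)&addr, sizeof addr);
    listen(lfd, 5);

    for (;;) {
        int cfd = accept(lfd, NULL, NULL);       /* per-connection socket */
        if (cfd < 0)
            continue;

        char buf[1024];
        ssize_t n = read(cfd, buf, sizeof buf);  /* request from the client */
        if (n > 0)
            write(cfd, buf, n);                  /* reply on the same socket */

        close(cfd);
    }
}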
You don't need a new socket.
A socket is a duplex connection: you can send data in both directions, and you can even shut down the socket in one direction (when you don't want to write anymore) while data still flows in the other direction.
Your socket is bi-directional, so there is no need to create another one. Unless you are using some sort of middleware, such as Pub/Sub, a single socket is enough for bi-directional communication.
Technically it is right, the socket is duplex and you can send the data to the same socket you read from:
int s = socket(AF_INET, SOCK_STREAM, 0);
/* ... connect() ... */
char buf[1024];
int size = recv(s, buf, sizeof buf, 0);  /* read the request */
/* ... make the response ... */
send(s, buf, size, 0);                   /* reply on the same socket */
But in practice it depends on what you are going to do. It is possible to hang the sockets if you have the following situation:
Process 1 sends very big data (100K or more) over the socket in one send operation.
Process 2 receives the data from 1 in portions and sends small packets (~20 bytes) back to 1. These are not confirmations, but some external events.
The situation can turn into a hang where the send buffer of 2 fills up (because 1 is stuck in its own big send and is not reading), so 2 stops sending to 1 and also stops receiving from it.
2 and 1 are then both hanging in their send operations, making a deadlock.
In this case I'd recommend using two sockets. One for read, one for write.
(Late answer, so mainly for anyone else who comes here looking for help)
I recently put up an example client/server application that closely follows Beej's Guide to Network Programming (which was also recommended by Kerrek SB in his answer). If you're looking for a simple working example of client/server communication, maybe this will help:
https://github.com/countvajhula/dummyclientserver
In particular, no, your client does not need to set up a separate listening socket to receive data from the server -- after the server has accepted the connection from the client, the server can simply send data back to the client on the same socket.
In my own experience, I bind one socket and dispatch the requests to other threads.
But the famous web server nginx bind()s multiple sockets on the same destination port.
What's the benefit to do it this way?
Looking through nginx source, I don't see that possibility. To quote from the man page ip(7)
"When a process wants to receive new incoming packets or connections, it should bind a socket to a local interface address using bind(2). Only one IP socket may be bound to any given local (address, port) pair."
So, I think that there is something else going on. Can you mention how you determined that nginx was doing this?
I am writing a server-client program in C using the socket API, where I am trying to send control information to the client so that it uses a different TCP connection.
Whenever the server has created a new (TCP) socket, I want it to notify the client to use the new socket for further communication. Currently I have thought of sending a UDP packet to the client as the notification. Once the client receives the packet, it sends back an ACK to the server and at the same time switches to the other TCP connection.
I want to know: is there a better way to communicate the control data across the network besides using UDP, as it is unreliable? Thanks
I would like to elaborate on what I am trying to achieve. I am going to measure parameters like bandwidth, latency, receive window etc. as metrics for an IPv4 and an IPv6 TCP connection. Based on the observed performance I will be switching between the two protocols, whichever provides better performance. Once the decision is made to switch, I have to notify the partner (maybe the client or the server, based on the type of bandwidth I am measuring: upload or download). I start with the IPv4 connection and at the same time open another connection - IPv6 - for measuring the bandwidth and latency.
If the IPv6 connection provides better performance, I need to tell the client to switch to IPv6. In this case both connections are open for monitoring the bandwidth periodically to decide on switching. So I have two queries on this aspect. 1. Is it a good idea to keep two connections open at a time? I can create the other connection only when I need to measure the metric, as the path followed between the two machines will hardly change. If yes, I can pass the control information using the other TCP connection to tell the client to switch. In this way I can also measure the bandwidth and notify the client.
2. If it is not a good idea to have two TCP connections, I can use UDP to send the control information. I am avoiding sending control information on the connection used to transfer actual data, as it would be an overhead. My code will work like a middleware for transferring the data, where the application will call my functions/macros to transfer the data, and the internal code will take care of measuring the bandwidth and the decision to switch. I hope I am clear on what I want to achieve. Thanks for your feedback in advance.
The normal sequence of operations for a server-side listening TCP socket is:
int fd1 = socket(...); // Create socket - assuming TCP, not UDP
bind(fd1, ...); // Bind it to a local address
listen(fd1, ...); // Set it in listen mode
int fd2;
while ((fd2 = accept(fd1, ...)) != -1) // Accept an incoming connection
{
    ...communicate with client via fd2...
    close(fd2); // When finished
}
close(fd1);
Now, at the point when you get fd2, that is a new socket for this particular connection; it shares the same local port as fd1 (correction per EJP). There is no need to communicate any specific information back to the client; the TCP/IP implementation deals with that for you. The client will have a socket that is connected to the server, with a (probably ephemeral) port on its own machine connected to the server's well-known port.
The point is that the completed connection will have a unique 4-tuple of (client IP address, client port, server IP address, server port).
When it comes to the processing, there are various ways of dealing with it. You can use an iterative server which deals with one request before dealing with any others. Or you can create a concurrent server in either of two different ways. One way is with a threaded server where a thread (from a pool, or newly created for each incoming connection) deals with the new request while the main thread goes back to accept new incoming requests. The alternative is that you create a forking server where the main server process forks and the child process deals with the new connection while the parent process closes the connection and goes back to listening for the next connection request.
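For the forking variant, a rough sketch might look like this; handle_client() is a placeholder for your per-connection processing:

#include <signal.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

void handle_client(int fd);   /* placeholder for your per-connection processing */

void forking_accept_loop(int listen_fd)
{
    signal(SIGCHLD, SIG_IGN);            /* let the kernel reap exited children */

    for (;;) {
        int conn_fd = accept(listen_fd, NULL, NULL);
        if (conn_fd < 0)
            continue;

        pid_t pid = fork();
        if (pid == 0) {                  /* child: serve this client */
            close(listen_fd);
            handle_client(conn_fd);
            close(conn_fd);
            _exit(0);
        }
        close(conn_fd);                  /* parent: back to accepting */
    }
}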