Is it possible to have multiple sockets, which can be either TCP or UDP, in one program?
For example:
socketOne: TCP socket at port 4567; socketTwo: TCP socket at port 8765; socketThree: UDP socket at port 7643.
The families will be AF_INET, and addresses will be INADDR_ANY for each.
I bind and listen for TCP, and just bind for UDP.
What makes me doubt being able to do this is: how do I wait for a client on all the sockets at once?
I know that the code below won't work, but I don't know what else, or how to, explain what I'm trying to say.
while (1)
{
    connected = accept(socketOne, (struct sockaddr *)&client_addr, &sin_size);
    connected = accept(socketTwo, (struct sockaddr *)&client_addr, &sin_size);
    bytes_read = recvfrom(socketThree, recv_data, 1024, 0, (struct sockaddr *)&client_addr, &addr_len);
}
You need the select function: http://linux.die.net/man/2/select
More user-friendly: http://beej.us/guide/bgnet/html/single/bgnet.html#select
man select.
There are a few real-world examples of this. FTP has a control and a data port, which both use TCP, and multimedia applications will use SIP or RTSP connections for control (TCP) and multiple RTP and RTCP ports (UDP) for each data stream received.
select or poll are used on Unix; on Windows there are the OVERLAPPED APIs to do this without preemption. Alternatively, this can be done with multiple threads.
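A sketch of the select() approach the answers point to, assuming the three sockets from the question (two listening TCP sockets plus one bound UDP socket) are already set up; the helper just reports which socket is readable:

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Block until one of the given sockets is readable and return its fd,
 * or -1 on error. The caller then decides whether to accept() (for the
 * TCP listeners) or recvfrom() (for the UDP socket). */
int wait_readable(const int *fds, int nfds)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    int maxfd = -1;
    for (int i = 0; i < nfds; i++) {
        FD_SET(fds[i], &readfds);
        if (fds[i] > maxfd)
            maxfd = fds[i];
    }
    if (select(maxfd + 1, &readfds, NULL, NULL, NULL) == -1)
        return -1;
    for (int i = 0; i < nfds; i++)
        if (FD_ISSET(fds[i], &readfds))
            return fds[i];      /* first ready socket */
    return -1;
}
```

The while (1) loop from the question then becomes: put socketOne, socketTwo, and socketThree in an array; when wait_readable() returns socketOne or socketTwo, call accept(); when it returns socketThree, call recvfrom().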
Related
I am trying to create a Linux tool with multiple TCP connections which supports both IPv4 and IPv6, so I chose to use "sockaddr_storage".
Now, my question is, how do I bind client side socket to a specified (or random) TCP port?
On the TCP client side, in one thread, if I just create 10 sockets and then connect() to the server, those 10 sockets will use sequential TCP ports on the client side, for example starting from 54594, then 54596, 54600, 54602, etc.
Now, I would like to bind those client sockets to different (randomized) TCP ports. How do I do that with sockaddr_storage?
Thanks!
=============adding code ============
struct sockaddr_storage local_addr;
sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
(*(struct sockaddr_in *)&local_addr).sin_port = 0;
local_addr_size = sizeof(local_addr);
bind(sockfd, (struct sockaddr *)&local_addr, local_addr_size);
............
connect(sockfd, p->ai_addr, p->ai_addrlen);
I would like to bind those client sockets to different (randomized) TCP ports
That happens automatically when you call connect() without calling bind() first. You don't need to write any code for this, and sockaddr_storage therefore doesn't come into it at all.
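A small sketch illustrating that auto-bind: connect() without any bind(), then getsockname() to observe the ephemeral local port the stack picked. The helper name and the address parameters are illustrative, not from the question:

```c
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Connect to ip:port without calling bind() first, then report which
 * local port the kernel auto-bound us to. Returns the port, or -1. */
int local_port_after_connect(const char *ip, unsigned short port)
{
    int sockfd = socket(AF_INET, SOCK_STREAM, 0);
    if (sockfd == -1)
        return -1;

    struct sockaddr_in server;
    memset(&server, 0, sizeof(server));
    server.sin_family = AF_INET;
    server.sin_port = htons(port);
    inet_pton(AF_INET, ip, &server.sin_addr);

    if (connect(sockfd, (struct sockaddr *)&server, sizeof(server)) == -1) {
        close(sockfd);
        return -1;
    }

    /* The kernel has now chosen an ephemeral local port for us. */
    struct sockaddr_in local;
    socklen_t len = sizeof(local);
    getsockname(sockfd, (struct sockaddr *)&local, &len);
    int p = ntohs(local.sin_port);
    close(sockfd);
    return p;
}
```

Calling this ten times yields ten different ephemeral ports with no bind() and no sockaddr_storage handling at all.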
I'm trying to create asynchronous high performance UDP client. I'm implementing UDP tracker protocol.
Let's say I have 1000 torrent hashes. I need to make 1000/74 ~= 14 UDP requests, assuming that the UDP tracker doesn't impose any limits. The UDP tracker protocol supports up to 74 hashes per request, so I need to create 14 UDP sockets.
I need to use epoll, not poll, select, libevent, libev or libuv.
Every epoll UDP example I find is for server, not client.
I'm having troubles with understanding application logic.
First, I need to create 14 sockets:
#define MAX_CLIENTS 14
int fd[MAX_CLIENTS];
for (i = 0; i < MAX_CLIENTS; i++) {
    if ((fd[i] = socket(AF_INET, SOCK_DGRAM, 0)) == -1) {
        perror("socket");
        exit(EXIT_FAILURE);
    }
    setnonblock(fd[i]);
}
Then
int efd = epoll_create(MAX_EVENTS);
Now I need to send data via sendto for each of this socket, and receive results via epoll. How can I do that ?
I don't need someone to write code for me, I just want to understand epoll logic better.
This is highly theoretical question so I can understand epoll better. Please don't refer me to pthreads or libevent, this is not my question. Also, I'm not interested in HTTP implementation of Torrent Tracker protocol.
Similar threads:
Could you recommend some guides about Epoll on Linux
Is there any good examples or tutorial about epoll UDP
Multiple UDP sockets using epoll - Not able to receive data
You should just do everything on a single socket. Since UDP is connectionless, you can re-use the same socket for all your sends and receives--you just pass the address explicitly every time you sendto(), and optionally use recvfrom() to know who replied to you.
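To connect this back to epoll: a hedged sketch of a single-socket client round-trip, where the tracker address and the request bytes are placeholders for the real tracker protocol. The add/send/wait/recv shape is the point; with 14 sockets you would add each fd to the same epoll instance and dispatch on events[i].data.fd:

```c
#include <unistd.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Send one request datagram, then use epoll to wait (up to 5 s) for
 * the reply. Returns the number of bytes received, or -1. */
int run_client(int sockfd, const struct sockaddr_in *tracker,
               const char *request, size_t reqlen)
{
    int efd = epoll_create1(0);
    if (efd == -1)
        return -1;

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = sockfd };
    if (epoll_ctl(efd, EPOLL_CTL_ADD, sockfd, &ev) == -1) {
        close(efd);
        return -1;
    }

    /* Fire the request; a UDP sendto() of one datagram won't block. */
    sendto(sockfd, request, reqlen, 0,
           (const struct sockaddr *)tracker, sizeof(*tracker));

    /* Wait for the socket to become readable. */
    struct epoll_event events[1];
    int n = epoll_wait(efd, events, 1, 5000);
    int got = -1;
    if (n == 1 && (events[0].events & EPOLLIN)) {
        char buf[2048];
        struct sockaddr_in from;
        socklen_t fromlen = sizeof(from);
        got = (int)recvfrom(events[0].data.fd, buf, sizeof(buf), 0,
                            (struct sockaddr *)&from, &fromlen);
    }
    close(efd);
    return got;
}
```

With the single-socket approach from this answer, you would sendto() all 14 requests up front and then loop on epoll_wait()/recvfrom() until all replies (or timeouts) are accounted for.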
To receive data on a particular UDP socket you need to bind it:
int sockfd, n;
struct sockaddr_in servaddr, cliaddr;
socklen_t len;

sockfd = socket(AF_INET, SOCK_DGRAM, 0);

bzero(&servaddr, sizeof(servaddr));
servaddr.sin_family = AF_INET;
servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
servaddr.sin_port = htons(32000);

bind(sockfd, (struct sockaddr *)&servaddr, sizeof(servaddr));
Then call recv or read.
You can open different fds like you did, add them to an fd set, and use select.
Then you can use FD_ISSET to figure out which fd has data.
See the man page; it's very useful.
Also, to answer your question in the comments:
"will i receive data in any particual order ? will i be able to parse it ?"
The IP world makes no guarantees about ordering: it's a connectionless world, and packet drops and ECMP sometimes ensure data arrives out of order :).
After the read, recv, or recvfrom you will have the data in a buffer, which you can easily parse.
I am going through Beej's guide and I wanted to elaborate on one of the examples, a stream client/server example. In the example the server sends messages, and the client receives.
I would like to make a program that sends AND receives messages. In this case, it would no longer be a server/client architecture, since both the former server and client would perform the same duties. They would be very similar.
In the example the server does the following :
getaddrinfo(NULL, PORT, &hints, &p);
sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
bind(sockfd, p->ai_addr, p->ai_addrlen);
listen(sockfd, BACKLOG);
new_fd = accept(sockfd, (struct sockaddr *)&their_addr, &sin_size);
send(new_fd, "Hello, world!", 13, 0);
What do I need to add in order to receive messages as well from the same socket? Is it possible?
I tried many things that didn't work, such as trying to connect() using the original sockfd and the destination's information. In the end I used two sockets and bound them to the same port with the help of setsockopt(), but I would like to know if there is a better or more efficient method.
You can send and recv from any connected socket.
The direction of the data flow does not have anything to do with the client/server relationship.
It is very common for clients and servers to both send and receive. The pattern they use to send and expect answers is called a protocol (in the sense of an application defined protocol).
They say "it takes two to tango".
The same is true for client/server communication (the protocol).
Your problems may stem from the fact that the server does not understand that your client has finished sending the data and does not produce the reply.
There are several options to signal the end of communication, here just a few examples:
Shut down socket output from the client; this will make the server sense EOF.
Tell the server how many bytes you are sending. The server will read that count, and after reading that many bytes it will send you a reply.
Code some magic byte sequence that signals the End-Of-Request.
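The first option above might be sketched like this, assuming an already-connected stream socket; request_reply() is a hypothetical helper name:

```c
#include <sys/types.h>
#include <sys/socket.h>

/* Send a request, then shut down the write half so the peer's recv()
 * returns 0 (EOF) once it has drained our data. The read half stays
 * open, so we can still receive the reply. */
ssize_t request_reply(int sockfd, const char *req, size_t reqlen,
                      char *reply, size_t replylen)
{
    if (send(sockfd, req, reqlen, 0) == -1)
        return -1;

    /* Signal "no more data from me"; receiving is still allowed. */
    if (shutdown(sockfd, SHUT_WR) == -1)
        return -1;

    return recv(sockfd, reply, replylen, 0);
}
```

The server sees recv() return 0 after the request, produces its reply, and then closes; the same socket carries data in both directions.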
I have seen two examples that illustrate how the client socket can receive messages from server.
Example 1:
server code
http://man7.org/tlpi/code/online/book/sockets/ud_ucase_sv.c.html
client code
http://man7.org/tlpi/code/online/book/sockets/ud_ucase_cl.c.html
The client program creates a socket and binds the socket to an address, so that the server can send its reply.
if (bind(sfd, (struct sockaddr *) &claddr, sizeof(struct sockaddr_un)) == -1)
    errExit("bind");            // snippet from ud_ucase_cl.c
Example 2:
server code
http://man7.org/tlpi/code/online/book/sockets/i6d_ucase_sv.c.html
client code
http://man7.org/tlpi/code/online/book/sockets/i6d_ucase_cl.c.html
In example 2, the client code doesn't bind its socket to an address.
Question:
Is it necessary for the client code to bind the socket to an address in order to receive messages from the server?
Why do we have to bind the client socket to an address in the first example, but not in the second?
The difference is the socket family - the first example uses AF_UNIX, while the second uses AF_INET6. According to Stevens' UNP you need to explicitly bind a pathname to a Unix client socket so that the server has a pathname to which it can send its reply:
... sending a datagram to an unbound Unix domain datagram socket does not implicitly bind a pathname to the socket. Therefore, if we omit this step, the server's call to recvfrom ... returns a null pathname ...
This is not required for INET{4,6} sockets since they are "auto-bound" to an ephemeral port.
For the client (TCP) or sender (UDP), calling bind() is optional; it is a way to specify the interface. Suppose you have two interfaces, which are both routable to your destination:
eth0: 10.1.1.100/24
eth1: 10.2.2.100/24
route: 10.1.1.0/24 via 10.2.2.254 # router for eth1
0.0.0.0 via 10.1.1.254 # general router
Now if you just say connect() to 12.34.56.78, you don't know which local interface furnishes the local side of the connection. By calling bind() first, you make this specific.
The same is true for UDP traffic: without bind()ing, your sendto() will use a source address chosen by the routing table and an ephemeral source port, but with bind() you make the source specific.
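A sketch of that bind-then-connect pattern; connect_from() is a hypothetical helper, and the addresses would be the ones from the routing example (e.g. 10.2.2.100 to force eth1):

```c
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Pin the source address before connect(). Returns the connected fd,
 * or -1 on failure. */
int connect_from(const char *local_ip, const char *dest_ip,
                 unsigned short dest_port)
{
    int sockfd = socket(AF_INET, SOCK_STREAM, 0);
    if (sockfd == -1)
        return -1;

    struct sockaddr_in local;
    memset(&local, 0, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_port = 0;                 /* port is still ephemeral */
    inet_pton(AF_INET, local_ip, &local.sin_addr);

    /* bind() fixes the source address of the connection... */
    if (bind(sockfd, (struct sockaddr *)&local, sizeof(local)) == -1) {
        close(sockfd);
        return -1;
    }

    /* ...and connect() then uses it as the local side. */
    struct sockaddr_in dest;
    memset(&dest, 0, sizeof(dest));
    dest.sin_family = AF_INET;
    dest.sin_port = htons(dest_port);
    inet_pton(AF_INET, dest_ip, &dest.sin_addr);

    if (connect(sockfd, (struct sockaddr *)&dest, sizeof(dest)) == -1) {
        close(sockfd);
        return -1;
    }
    return sockfd;
}
```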
If you have not bound AF_INET/AF_INET6 client socket before connecting/sending something, TCP/IP stack will automatically bind it to ephemeral port on outbound address.
By contrast, UNIX domain sockets (AF_UNIX) are not automatically bound when sending, so you can send messages via SOCK_DGRAM but can't get any replies.
I would like to establish an IPC connection between several processes on Linux. I have never used UNIX sockets before, and thus I don't know if this is the correct approach to this problem.
One process receives data (unformatted, binary) and shall distribute this data via a local AF_UNIX socket using the datagram protocol (i.e. similar to UDP with AF_INET). The data sent from this process to a local Unix socket shall be received by multiple clients listening on the same socket. The number of receivers may vary.
To achieve this the following code is used to create a socket and send data to it (the server process):
struct sockaddr_un ipcFile;
memset(&ipcFile, 0, sizeof(ipcFile));
ipcFile.sun_family = AF_UNIX;
strcpy(ipcFile.sun_path, filename.c_str());

int sockfd = socket(AF_UNIX, SOCK_DGRAM, 0);
bind(sockfd, (struct sockaddr *) &ipcFile, sizeof(ipcFile));
...
// buf contains the data, buflen contains the number of bytes
int bytes = write(sockfd, buf, buflen);
...
close(sockfd);
unlink(ipcFile.sun_path);
This write returns -1 with errno reporting ENOTCONN ("Transport endpoint is not connected"). I guess this is because no receiving process is currently listening to this local socket, correct?
Then, I tried to create a client who connects to this socket.
struct sockaddr_un ipcFile;
memset(&ipcFile, 0, sizeof(ipcFile));
ipcFile.sun_family = AF_UNIX;
strcpy(ipcFile.sun_path, filename.c_str());

int sockfd = socket(AF_UNIX, SOCK_DGRAM, 0);
bind(sockfd, (struct sockaddr *) &ipcFile, sizeof(ipcFile));
...
char buf[1024];
int bytes = read(sockfd, buf, sizeof(buf));
...
close(sockfd);
Here, the bind fails ("Address already in use"). So, do I need to set some socket options, or is this generally the wrong approach?
Thanks in advance for any comments / solutions!
There's a trick to using Unix domain sockets with the datagram configuration. Unlike stream sockets (TCP or Unix domain stream), datagram sockets need endpoints defined for both the server AND the client. When one establishes a connection over stream sockets, an endpoint for the client is implicitly created by the operating system. Whether this corresponds to an ephemeral TCP/UDP port or a temporary inode for the Unix domain, the endpoint for the client is created for you. That's why you don't normally need to issue a call to bind() for stream sockets in the client.
The reason you're seeing "Address already in use" is because you're telling the client to bind to the same address as the server. bind() is about asserting external identity. Two sockets can't normally have the same name.
With datagram sockets, specifically unix domain datagram sockets, the client has to bind() to its own endpoint, then connect() to the server's endpoint. Here is your client code, slightly modified, with some other goodies thrown in:
char * server_filename = "/tmp/socket-server";
char * client_filename = "/tmp/socket-client";
struct sockaddr_un server_addr;
struct sockaddr_un client_addr;
memset(&server_addr, 0, sizeof(server_addr));
server_addr.sun_family = AF_UNIX;
strncpy(server_addr.sun_path, server_filename, 104); // XXX: should be limited to about 104 characters, system dependent
memset(&client_addr, 0, sizeof(client_addr));
client_addr.sun_family = AF_UNIX;
strncpy(client_addr.sun_path, client_filename, 104);
// get socket
int sockfd = socket(AF_UNIX, SOCK_DGRAM, 0);
// bind client to client_filename
bind(sockfd, (struct sockaddr *) &client_addr, sizeof(client_addr));
// connect client to server_filename
connect(sockfd, (struct sockaddr *) &server_addr, sizeof(server_addr));
...
char buf[1024];
int bytes = read(sockfd, buf, sizeof(buf));
...
close(sockfd);
At this point your socket should be fully setup. I think theoretically you can use read()/write(), but usually I'd use send()/recv() for datagram sockets.
Normally you'll want to check error after each of these calls and issue a perror() afterwards. It will greatly aid you when things go wrong. In general, use a pattern like this:
if ((sockfd = socket(AF_UNIX, SOCK_DGRAM, 0)) < 0) {
perror("socket failed");
}
This goes for pretty much any C system calls.
The best reference for this is Stevens' "Unix Network Programming". In the 3rd edition, section 15.4, pages 415-419 show some examples and list many of the caveats.
By the way, in reference to
I guess this is because no receiving process is currently listening to this local socket, correct?
I think you're right about the ENOTCONN error from write() in the server. A UDP socket would normally not complain because it has no facility to know if the client process is listening. However, unix domain datagram sockets are different. In fact, the write() will actually block if the client's receive buffer is full rather than drop the packet. This makes unix domain datagram sockets much superior to UDP for IPC because UDP will most certainly drop packets when under load, even on localhost. On the other hand, it means you have to be careful with fast writers and slow readers.
The proximate cause of your error is that write() doesn't know where you want to send the data to. bind() sets the name of your side of the socket - ie. where the data is coming from. To set the destination side of the socket, you can either use connect(); or you can use sendto() instead of write().
The other error ("Address already in use") is because only one process can bind() to an address.
You will need to change your approach to take this into account. Your server will need to listen on a well-known address, set with bind(). Your clients will need to send a message to the server at this address to register their interest in receiving datagrams. The server will receive the registration messages from clients using recvfrom(), and record the address used by each client. When it wants to send a message, it will have to loop over all the clients it knows about, using sendto() to send the message to each one in turn.
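That registration scheme might be sketched like this, with hypothetical handle_register()/publish() helpers and an illustrative MAX_SUBS limit; the server socket is assumed to be an AF_UNIX datagram socket already bound to its well-known path:

```c
#include <stddef.h>
#include <sys/socket.h>
#include <sys/un.h>

#define MAX_SUBS 32   /* illustrative cap on registered clients */

struct subs {
    struct sockaddr_un addr[MAX_SUBS];
    socklen_t len[MAX_SUBS];
    int count;
};

/* Called when a registration datagram arrives on the server socket:
 * record the sender's address so we can reach it later. */
void handle_register(int srvfd, struct subs *s)
{
    char msg[64];
    struct sockaddr_un cli;
    socklen_t clilen = sizeof(cli);
    if (recvfrom(srvfd, msg, sizeof(msg), 0,
                 (struct sockaddr *)&cli, &clilen) >= 0 &&
        s->count < MAX_SUBS) {
        s->addr[s->count] = cli;
        s->len[s->count] = clilen;
        s->count++;
    }
}

/* "Broadcast" by sending the message to each registered client in turn. */
void publish(int srvfd, const struct subs *s, const void *buf, size_t n)
{
    for (int i = 0; i < s->count; i++)
        sendto(srvfd, buf, n, 0,
               (const struct sockaddr *)&s->addr[i], s->len[i]);
}
```

Each client must bind() to its own path (as in the answer above) so that recvfrom() on the server yields a usable return address.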
Alternatively, you could use local IP multicast instead of UNIX domain sockets (UNIX domain sockets don't support multicast).
If the question was intended to be about broadcasting (as I understand it), then according to unix(4) - UNIX-domain protocol family, broadcasting is not available with UNIX domain sockets:
The UNIX-domain protocol family does not support
broadcast addressing or any form of "wildcard" matching
on incoming messages. All addresses are absolute- or
relative-pathnames of other UNIX-domain sockets.
Maybe multicast could be an option, but as far as I know it's not available in POSIX, although Linux supports UNIX domain socket multicast.
Also see: Introducing multicast Unix sockets.
This error happens because:
the server or client died before unlinking/removing the file associated with bind(), or
something is still using this bind path when you try to run the server again.
Solution:
when you want to bind again, check whether the file already exists, and if so unlink it.
Steps:
first check for the file with access(2);
if it exists, unlink(2) it.
Put this piece of code before the bind() call; its exact position doesn't matter.
if (access(filename.c_str(), F_OK) == 0)
    unlink(filename.c_str());
For more, read unix(7).
Wouldn't it be easier to use shared memory or named pipes? A socket is a connection between two processes (on the same or a different machine). It isn't a mass communication method.
If you want to give something to multiple clients, you create a server that waits for connections and then all the clients can connect and it gives them the information. You can accept concurrent connections by making the program multi-threaded or by forking processes. The server establishes multiple socket-based connections with multiple clients, rather than having one socket that multiple clients connect to.
You should look into IP multicasting instead of Unix-domain anything. At present you are just trying to write to nowhere. And if you connect to one client you will only be writing to that client.
This stuff doesn't work the way you seem to think it does.
You can solve the bind error with the following code:
int use = 1;
setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, (char *)&use, sizeof(int));
With the UDP protocol, you must invoke connect() if you want to use write() or send(); otherwise you should use sendto() instead.
To achieve your requirements, the following pseudo code may be of help:
sockfd = socket(AF_INET, SOCK_DGRAM, 0)
set REUSEADDR with setsockopt
bind()
while (1) {
    recvfrom()
    sendto()
}
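That pseudo code, filled in as a minimal sketch; the buffer size is illustrative, and passing port 0 lets the kernel pick one:

```c
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Create a UDP socket with SO_REUSEADDR set and bind it to the given
 * port on all interfaces. Returns the fd, or -1 on failure. */
int make_udp_server(unsigned short port)
{
    int sockfd = socket(AF_INET, SOCK_DGRAM, 0);
    if (sockfd == -1)
        return -1;

    int use = 1;
    setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, &use, sizeof(use));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    if (bind(sockfd, (struct sockaddr *)&addr, sizeof(addr)) == -1) {
        close(sockfd);
        return -1;
    }
    return sockfd;
}

/* One iteration of the while (1) { recvfrom(); sendto(); } loop:
 * receive a datagram and echo it back to whoever sent it. */
void echo_once(int sockfd)
{
    char buf[1024];
    struct sockaddr_in cli;
    socklen_t clilen = sizeof(cli);
    ssize_t n = recvfrom(sockfd, buf, sizeof(buf), 0,
                         (struct sockaddr *)&cli, &clilen);
    if (n > 0)
        sendto(sockfd, buf, (size_t)n, 0,
               (const struct sockaddr *)&cli, clilen);
}
```

Because recvfrom() hands back the client's address, sendto() can reply on the same unconnected socket, which is exactly why connect() is optional here.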