A question about the wall-clock time of socket communication.
I have a function that finds the servers registered at a central server.
I am adding a network check on top of this function: I extract each server's hostname and port number from its discovery URL and try to connect to it as a simple TCP client.
If connect() succeeds, the network is working fine; if it returns -1, the network is broken.
printf("--Checking for network connectivity--\n");
for(size_t i = 0; i < serverOnNetworkSize; i++) {
UA_ServerOnNetwork *server = &serverOnNetwork[i];
A[i] = (char *)UA_malloc(server->discoveryUrl.length+1);
memcpy(A[i],server->discoveryUrl.data,server->discoveryUrl.length);
A[i][server->discoveryUrl.length] = 0;
int length = strlen(A[i]);
//discovery URLs are of the form : opc.tcp://hostname:port
//new addition to extract port
B[i] = A[i] + 10;
//printf("Hostname: %s\n", B[i]);
char *p = strrchr(B[i], ':');
int port = strtoul(p+1, NULL, 10);
//printf("%d\n",port);
B[i][length-5]='\0';
//printf("Hostname: %s\n", B[i]);
//removing the port
A[i][length-5]='\0';
//without initial tcp binding
C[i] = A[i] + 10;
//printf("Hostname: %s\n", C[i]);
// FIND IP OF THAT HOST
if(i!=0){
char ip_address[50];
find_ip_address(C[i],ip_address);
socketCommunication(ip_address,C[i],port);
}
}
printf("--Checks done!--\n");
Global functions:
int find_ip_address(char *hostname, char *ip_address)
{
    struct hostent *host_name;
    struct in_addr **ipaddress;
    int count;
    if((host_name = gethostbyname(hostname)) == NULL)
    {
        herror("\nIP Address Not Found\n");
        return 1;
    }
    else
    {
        ipaddress = (struct in_addr **) host_name->h_addr_list;
        for(count = 0; ipaddress[count] != NULL; count++)
        {
            // use the first resolved address
            strcpy(ip_address, inet_ntoa(*ipaddress[count]));
            return 0;
        }
    }
    return 1;
}
void socketCommunication(char *ip_address, char *hostname, int port) {
    int clientSocket, ret;
    struct sockaddr_in serverAddr;
    clientSocket = socket(AF_INET, SOCK_STREAM, 0);
    if(clientSocket < 0) {
        printf("Error in connection\n");
        exit(1);
    }
    //printf("Client socket is created\n");
    memset(&serverAddr, '\0', sizeof(serverAddr));
    serverAddr.sin_port = htons(port);
    serverAddr.sin_family = AF_INET;
    serverAddr.sin_addr.s_addr = inet_addr(ip_address);
    ret = connect(clientSocket, (struct sockaddr *)&serverAddr, sizeof(serverAddr));
    if(ret < 0) {
        printf("\nLOOKS LIKE NETWORK CONNECTION HAS FAILED. HAVE A LOOK AT THE NETWORK CONNECTIVITY at host : %s\n", hostname);
        printf("\n----Updated Status Information----:\n");
        printf("Discovery URL : opc.tcp://%s:%d\n", hostname, port);
        printf("Status: CONNECTION TIMED OUT\n");
        printf("\n");
    }
    close(clientSocket);
}
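The elapsed time reported below was measured around the connectivity check. The original timing code is not shown; a minimal sketch with clock_gettime() (an assumption, not the poster's code) would look like:

#include <stdio.h>
#include <time.h>

// Wrap the connectivity-check loop with a monotonic clock to get wall-clock time.
struct timespec start, end;
clock_gettime(CLOCK_MONOTONIC, &start);
// ... run the connectivity checks ...
clock_gettime(CLOCK_MONOTONIC, &end);
printf("Time measured: %ld seconds.\n", (long)(end.tv_sec - start.tv_sec));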
To test this, I switch off the network on one of the registered servers.
When I measure the time, it shows inconsistent values: 18 seconds, 24 seconds, 38 seconds, etc.
These values occur when I switch off the server's network and run my application. On a second run of the same application, the value drops to 2 seconds, or sometimes 1 second.
Output:
LOOKS LIKE NETWORK CONNECTION HAS FAILED. HAVE A LOOK AT THE NETWORK CONNECTIVITY at host : o755-gksr
----Updated Status Information----:
Discovery URL : opc.tcp://o755-gksr:4841
Status: CONNECTION TIMED OUT
--Checks done!--
Time measured: 18 seconds.
Output on another try
--Checking for network connectivity--
LOOKS LIKE NETWORK CONNECTION HAS FAILED. HAVE A LOOK AT THE NETWORK CONNECTIVITY at host : o755-gksr
----Updated Status Information----:
Discovery URL : opc.tcp://o755-gksr:4841
Status: CONNECTION TIMED OUT
--Checks done!--
Time measured: 0 seconds.
My question is: why does it show inconsistent values? If the connection is not possible, shouldn't connect() return -1 and report the error quickly?
Is there some background process that retries the connection a finite number of times before giving up?
The behavior of connect() and its timeouts depends heavily on the underlying network. There are several reasons why connect() fails when the target machine is down. In most cases the error is one of:
ETIMEDOUT - the client sent SYNs but received no response at all. This is a TCP timeout and can be quite long (minutes).
EHOSTUNREACH - either the local ARP query failed, or the client sent a SYN and an ICMP Host Unreachable error came back. An ARP query failure is detected within a few seconds. The ICMP Host Unreachable error is usually returned by a remote router when its own ARP query fails.
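To see which of the two you are hitting, inspect errno after the failed connect(). A small sketch, reusing clientSocket and serverAddr from the question's socketCommunication():

#include <errno.h>
#include <stdio.h>
#include <string.h>

// errno distinguishes the failure modes after connect() returns -1.
if (connect(clientSocket, (struct sockaddr *)&serverAddr, sizeof(serverAddr)) < 0) {
    if (errno == ETIMEDOUT)
        printf("TCP timeout: SYNs sent, no response (%s)\n", strerror(errno));
    else if (errno == EHOSTUNREACH)
        printf("Host unreachable: ARP or ICMP failure (%s)\n", strerror(errno));
    else
        printf("connect() failed: %s\n", strerror(errno));
}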
So here is what happens in your case if the server is on the same network as your client:
1. The client has the server's MAC address in its ARP cache.
2. You "switch off the network from one of the registered servers" - probably by disconnecting a cable from the server or something similar.
3. The client calls connect(). The SYN is sent directly to the MAC address from the ARP cache, and in the worst case connect() returns ETIMEDOUT after two minutes.
4. The client deletes the entry from its ARP cache.
5. A subsequent connect() needs ARP resolution. Either it fails after 3 ARP requests (about 3 seconds), or it fails immediately while the negative entry in the ARP cache is still valid. It may be valid for only a few seconds.
If the server is in a remote network, the situation is similar; the ARP cache of the remote router is the culprit in that case. If the remote router cannot resolve the IP address to a MAC address, it sends ICMP Host Unreachable almost immediately, but if it still has the destination IP in its ARP cache, it takes some time before it realizes the cache entry is stale and the MAC address is no longer available.
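If you need the check itself to finish in bounded time regardless of these kernel timeouts, the usual pattern is a non-blocking connect() polled with select(). A minimal sketch (error handling trimmed; the 3-second budget is arbitrary):

#include <errno.h>
#include <fcntl.h>
#include <sys/select.h>
#include <sys/socket.h>

// Put the socket in non-blocking mode so connect() returns immediately.
fcntl(clientSocket, F_SETFL, O_NONBLOCK);
if (connect(clientSocket, (struct sockaddr *)&serverAddr, sizeof(serverAddr)) < 0
        && errno == EINPROGRESS) {
    fd_set wfds;
    struct timeval tv = { .tv_sec = 3, .tv_usec = 0 }; // our own timeout budget
    FD_ZERO(&wfds);
    FD_SET(clientSocket, &wfds);
    if (select(clientSocket + 1, NULL, &wfds, NULL, &tv) > 0) {
        int err = 0;
        socklen_t len = sizeof(err);
        getsockopt(clientSocket, SOL_SOCKET, SO_ERROR, &err, &len);
        // err == 0 means connected; otherwise err holds the errno of the failure
    } else {
        // our own timeout expired: report the host as unreachable
    }
}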
Related
My code, which is written in C using the C client binding for ZooKeeper, runs perfectly on my local computer using the same IP (not localhost:2181). However, compiling and executing my code on another computer yields a connection-loss error. I was not able to connect to my ZooKeeper server using my public IP (I got my public IP by looking up whatsmyip on Google). I ran ifconfig in my terminal to get 10.111.129.199; I am assuming this is a private IP, as it starts with 10. The machine I have ssh'd into is running SolarisOS. This forced me to change a single function in the ZooKeeper source code from sync_fetch_and_add (I think) to atomic_add, because sync_fetch_and_add is not supported on SolarisOS. According to the ZooKeeper documentation, SolarisOS is not currently supported by ZooKeeper. I am able to compile ZooKeeper perfectly fine, and I am told someone else at my company had implemented ZooKeeper on our systems before.
My program tries to create a single node on the ZooKeeper server. My code looks like this:
int main(int argc, char *argv[]){
    //zh is a global zookeeper_handle for now.
    zh = zookeeper_init(host_port, my_watcher_func, 20000, 0, NULL, 0);
    if(zh == NULL){
        fprintf(stderr, "Error connecting to ZooKeeper Server!! \n");
        exit(EXIT_FAILURE);
    }
    int retval = create("/TFS/pool", "1");
    printf("return value of create = %d\n", retval);
    return 0;
}
int create(char* path, char* data){
    int value_length = -1;
    if(data != NULL){
        value_length = (int) strlen(data);
    }
    printf("creating node at path: %s with data %s\n", path, data);
    int retval = zoo_create(zh, path, data, value_length,
                            &ZOO_OPEN_ACL_UNSAFE, 0, NULL, 0);
    return retval;
}
/*empty watcher function*/
//I have no idea why this is needed.
void my_watcher_func(zhandle_t *zzh, int type, int state,
const char *path, void *watcherCtx) {}
Both systems are running the GCC compiler. The problem, I think, isn't in the code, since it runs fine locally; it is the connection issue I am facing.
I would have assumed that zookeeper_init() returns NULL if the connection to ZooKeeper failed. That does not happen, however, and execution continues into create().
creating node at path: /TFS/pool with data abc
2018-07-16 10:30:44,232:16332(0x2):ZOO_ERROR#handle_socket_error_msg#1670: Socket [10.111.129.190:2181] zk retcode=-4, errno=0(Error 0): connect() call failed
return value of create = -4
When I telnet to the ip:port, it connects. I also know that ZooKeeper detects my connection during telnet, because I am running it in the foreground. The following is the output of zkServer.sh running in the foreground when I connect via telnet 10.111.129.190 2181:
2018-07-16 11:04:03,807 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#215] - Accepted socket connection from /10.7.1.70:61479
The expected output should have been:
creating node at path: /TFS/pool with data 1
2018-07-16 12:14:37,078:3180(0x70000c98c000):ZOO_INFO#check_events#1764: initiated connection to server [10.111.129.190:10101]
2018-07-16 12:14:37,107:3180(0x70000c98c000):ZOO_INFO#check_events#1811: session establishment complete on server [10.111.129.190:10101], sessionId=0x10000590d2b0000, negotiated timeout=20000
return value of create = 0
This output has always confused me, because the ZooKeeper connection is established after the zookeeper handle is initiated: it is established upon zoo_create() rather than zookeeper_init(). It doesn't affect anything, but it is an interesting time to establish a connection.
I understand that retcode=-4 means CONNECTIONLOSS, but it is not even able to establish a connection with the server. If there is any way I could fix this, please do tell!
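On that last point: zookeeper_init() only starts an asynchronous connection attempt, which is why the session is reported as established later, around the first operation. A common pattern, sketched here with a mutex and condition variable (the synchronization names are illustrative, not from the original code), is to block until the watcher reports ZOO_CONNECTED_STATE before calling zoo_create():

#include <pthread.h>
#include <zookeeper/zookeeper.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int connected = 0;

// The watcher exists for exactly this: session-state notifications.
void my_watcher_func(zhandle_t *zzh, int type, int state,
                     const char *path, void *watcherCtx) {
    if (type == ZOO_SESSION_EVENT && state == ZOO_CONNECTED_STATE) {
        pthread_mutex_lock(&lock);
        connected = 1;
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
    }
}

// Call this in main() after zookeeper_init(), before the first zoo_create().
void wait_for_connection(void) {
    pthread_mutex_lock(&lock);
    while (!connected)
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
}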
I've got some new troubles for you. I'm studying sockets and TCP/IPv4 in C. I have to write a battleship game and I'm quite excited about it. But I can't really solve some problems. So this is my situation.
I've got one server and multiple clients. Each client can challenge or be challenged by another client. The server waits for a client to write a unique ID. Once the ID is written, the server scans an array of struct sockaddr_in and finds the one related to the previously read ID. It then calls inet_ntoa() to turn the IP into a string and writes it to the client. Once the client has read the IP, it creates a new direct connection to the other client and starts playing battleship.
Okay, so this is the scenario. What I'm struggling with is, first, how to get a unique IP address out of that inet_ntoa(), since I'm executing every client instance in a terminal on the same host machine. Second, every client should accept incoming connections from other clients, so I gave each client two sockets: one for the client-server communication, and one that listens for possible pending connections (this socket will be put in the readfd set of a select()). The thing is, once I start a new client instance - if another client is already connected and waiting to challenge or be challenged - I have to specify the IP of the server the client will connect to. Whether I use 127.0.0.1 or the host machine's IP address, the new instance of the client tries to connect to the already-connected client (since its socket is in the listen state) and not to the server. I think it has to do with ports, but I don't know how to achieve what I want (I've already tried changing ports, etc.).
Here are some lines of the code. I omitted obvious things (don't kill me for this last sentence, please).
CLIENT CODE:
struct sockaddr_in servaddr, claddr, cl2addr;
memset(&servaddr, 0, sizeof(servaddr)); // zero the address structures before filling them
memset(&cl2addr, 0, sizeof(cl2addr));
servaddr.sin_family = AF_INET;                 // used for the client-server connection
servaddr.sin_port = htons(8888);               // port 8888 for the client-server communication
servaddr.sin_addr.s_addr = inet_addr(argv[1]); // argv[1] = IP of the server
cl2addr.sin_family = AF_INET;                  // this will be used for the new client-client connection
cl2addr.sin_port = htons(9990);                // listen on port 9990
cl2addr.sin_addr.s_addr = INADDR_ANY;          // accept a connection from any client
servsock = socket(PF_INET, SOCK_STREAM, 0);
listenfd = socket(PF_INET, SOCK_STREAM, 0);
if (bind(listenfd, (struct sockaddr *)&cl2addr, sizeof(cl2addr)) < 0)
{
    perror("Binding: "); // This prints "Address already in use"
    exit(-1);
}
if (listen(listenfd, 1) < 0)
{
    perror("Listening: ");
    exit(-1);
}
if (servsock < 0)
{
    perror("Creating socket: ");
    exit(-1);
}
if (connect(servsock, (struct sockaddr *)&servaddr, sizeof(servaddr)) < 0) // establish a connection with the main server
{
    perror("Connect: ");
    exit(-1);
}
// some code with the select etc. inside //
cllen = sizeof(claddr); // value-result argument: must be initialized before accept()
if ((mastfd = accept(listenfd, (struct sockaddr *)&claddr, &cllen)) < 0)
{
    perror("Accept: ");
    exit(-1);
}
SERVER CODE:
struct sockaddr_in servaddr, claddr[10]; // the claddr array holds every client IP in the sin_addr.s_addr field
int conn = 0;
memset(&servaddr, 0, sizeof(servaddr)); // zero the address structure before filling it
servaddr.sin_family = AF_INET;
servaddr.sin_port = htons(8888);
servaddr.sin_addr.s_addr = INADDR_ANY;
listenfd = socket(PF_INET, SOCK_STREAM, 0);
if (listenfd < 0)
{
    perror("Socket: ");
    exit(-1);
}
if (setsockopt(listenfd, SOL_SOCKET, SO_REUSEADDR, (char *)&opt, sizeof(opt)) < 0)
{
    perror("setsockopt");
    exit(-1);
}
if (bind(listenfd, (struct sockaddr *)&servaddr, sizeof(servaddr)) < 0)
{
    perror("Bind: ");
    exit(-1);
}
if (listen(listenfd, 10) < 0)
{
    perror("Listen: ");
    exit(-1);
}
// here goes a while loop //
{
    FD_SET(listenfd, &readfd);
    select(maxfd, &readfd, NULL, NULL, &elaps);
    len = sizeof(claddr[conn]); // value-result argument: reset before every accept()
    accept(listenfd, (struct sockaddr *)&claddr[conn], &len);
    conn++;
    // here the server waits for a client to write the ID to challenge,
    // finds the relative position in the claddr array, then writes the IP to the client //
    m = strlen(ipbuff); // calculate the length of the IP
    nwrite = write(att, &m, sizeof(m)); // write the length to the client
    if (nwrite < 0)
    {
        perror("Writing LEN: ");
        _exit(-1);
    }
    nwrite = write(att, ipbuff, m); // write the IP to the client
    if (nwrite < 0)
    {
        perror("Writing IP: ");
        _exit(-1);
    }
}
P.S. Very often, the IP written from the server to the client has len = 2 and is something like 00 or 0.0.0.0. This is because it returns the local machine's IP address, right?
Thank you guys, by the way. If the code is badly indented or anything else is off, feel free to throw s*** at me.
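One note on that P.S.: accept() treats its last argument as a value-result parameter, so if len is not reset before each call, the peer address can come back empty (hence the 0.0.0.0). A minimal sketch of a correct accept-and-format step, with variable names following the snippet above:

#include <arpa/inet.h>
#include <string.h>

// len must be re-initialized before every accept(); it is a value-result argument.
socklen_t len = sizeof(claddr[conn]);
int att = accept(listenfd, (struct sockaddr *)&claddr[conn], &len);
if (att >= 0) {
    // claddr[conn] now holds the peer's address; format it for sending.
    char ipbuff[INET_ADDRSTRLEN];
    strcpy(ipbuff, inet_ntoa(claddr[conn].sin_addr));
}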
I have a UDP socket up and listening on a port (localhost), and I am trying to send a Scapy packet from localhost as well. For some reason, my C code never actually captures the packet, yet I can see the packet show up in Wireshark just fine. It's been a while since I've used sockets, but are there some special socket options I have to set? Why would I be able to see the packet in Wireshark just fine but not with the C socket?
Note: I was able to successfully catch a packet when I wrote corresponding socket code to send out packets (from localhost); however, I am still unable to get the listening code to catch the packet when it is sent from another computer.
I found a similar question, but when I tried their approach (using UDP instead of TCP), I still couldn't get netcat to catch the Scapy packet.
C code (condensed for clarity's sake):
int main() {
    int sock, dataLen;
    socklen_t inLen;
    struct sockaddr_in inAddr;
    short listen_port = 8080;
    char buffer[2048];
    // note the extra parentheses: assign first, then compare
    if ((sock = socket(AF_INET, SOCK_DGRAM, 0)) < 0) {
        printf("ERROR: unable to establish socket\n");
        return -1;
    }
    // zero out address structure
    memset(&inAddr, 0, sizeof(inAddr));
    inAddr.sin_family = AF_INET;
    inAddr.sin_addr.s_addr = htonl(INADDR_ANY);
    inAddr.sin_port = htons(listen_port);
    if (bind(sock, (struct sockaddr *)&inAddr, sizeof(inAddr)) < 0) {
        printf("ERROR: unable to bind\n");
        return -1;
    }
    printf("Now listening on port %d\n", listen_port);
    while (1) {
        inLen = sizeof(inAddr); // value-result argument: reset on every iteration
        dataLen = recvfrom(sock, buffer, 1500, 0, (struct sockaddr *)&inAddr, &inLen);
        if (dataLen < 0)
            printf("Error receiving datagram\n");
        else
            printf("Received packet of length %d\n", dataLen);
    }
    return 0;
}
Scapy Script
# set interface
conf.iface="lo0"
# create IP packet
ip_pkt = IP()/UDP()
ip_pkt.payload = "payload test message"
ip_pkt.dport = 8080
ip_pkt.dst = "127.0.0.1"
ip_pkt.src = "127.0.0.1"
# send out packet
send(ip_pkt)
Scapy needs to be configured slightly differently to work on the loopback interface; see http://www.secdev.org/projects/scapy/doc/troubleshooting.html under the heading "I can’t ping 127.0.0.1. Scapy does not work with 127.0.0.1 or on the loopback interface".
I used the code given there and sent a Scapy packet, which was received by a C socket. Specifically:
from scapy.all import *
conf.L3socket=L3RawSocket
packet=IP()/UDP(dport=32000)/"HELLO WORLD"
send(packet)
This was then received on a UDP C socket bound to lo on port 32000 (Scapy defaults to sending IP packets over the loopback interface).
I have the same problem: a UDP socket does not receive the Scapy packet.
I suppose it might be related to this post: Raw Socket Help: Why UDP packets created by raw sockets are not being received by kernel UDP?
What works for me is the socket.IP_HDRINCL option. Here is the working code for both sender and receiver.
sender:
import socket
from scapy.all import *
rawudp=socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_UDP)
rawudp.bind(('0.0.0.0',56789))
rawudp.setsockopt(socket.SOL_IP, socket.IP_HDRINCL,1)
pkt = IP()/UDP(sport=56789, dport=7890)/'hello'
rawudp.sendto(pkt.build(), ('127.0.0.1',7890))
receiver:
import socket
so = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
so.bind(('0.0.0.0',7890))
while True:
print so.recv(1024)
Verified on Fedora 14, although it doesn't work on my MBP...
I think the problem is an incompatible combination of interface, src address, and dst address.
When the destination is loopback (127.0.0.1), the interface should be lo and the addresses (assuming both client and server run on the same host):
ip_pkt.dst = "127.0.0.1"
ip_pkt.src = "127.0.0.1"
Another way is to send to the Ethernet address (assuming 192.168.1.1 is configured on eth0 and both client and server run on the same host):
ip_pkt.dst = "192.168.1.1"
ip_pkt.src = "192.168.1.1"
If you are using different hosts, then 127.0.0.1 and lo are not an option. Set src to the client machine's IP and dst to the server machine's IP.
I have a client on a PC and a server on another PC. The client and server are connected via a router whose firmware is based on Linux.
The client sends a packet to the server and receives a response. The router must intercept the packets and modify them. It is something like sniffing, but it is not sniffing, because I need to modify the packets.
I have to write a program for this.
I tried to open a raw socket on the router, but recvfrom() on the raw socket does not intercept the packet, it only copies it; the packet keeps going.
Could you suggest any way to solve this problem?
I'd use a mix of iptables and libnetfilter_queue (assuming your kernel is relatively recent).
Add an iptables rule that forwards all UDP packets to NFQUEUE 0, in order to pass packets from kernel space to user space:
iptables -A INPUT -p udp -m udp --dport xxxxx -j NFQUEUE --queue-num 0
Then build a process that listens on NFQUEUE number 0, modifies the payload, and gives the full packet back to kernel space using the libnetfilter_queue capabilities. Follow this link to learn how to do it.
In a nutshell, you open queue 0 (nfq_create_queue), set the mode so that you receive the content of each packet (nfq_set_mode), then loop on recv() to get every UDP packet filtered by iptables:
fd = nfq_fd(h);
while ((rv = recv(fd, buf, sizeof(buf), 0)) >= 0) {
    printf("pkt received\n");
    nfq_handle_packet(h, buf, rv);
}
Each time nfq_handle_packet() is called, the callback defined during the nfq_create_queue() phase is invoked. In that callback you have to modify the payload, update the size, and recalculate the checksum, then hand the packet back as "valid" with nfq_set_verdict().
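A sketch of such a callback, assuming the usual nfq_open()/nfq_bind_pf()/nfq_create_queue(h, 0, &cb, NULL) setup from the linked tutorial; the payload-rewriting step is left as a comment:

#include <stdint.h>
#include <arpa/inet.h>
#include <linux/netfilter.h>
#include <libnetfilter_queue/libnetfilter_queue.h>

static int cb(struct nfq_q_handle *qh, struct nfgenmsg *nfmsg,
              struct nfq_data *nfa, void *data)
{
    unsigned char *payload;
    struct nfqnl_msg_packet_hdr *ph = nfq_get_msg_packet_hdr(nfa);
    uint32_t id = ph ? ntohl(ph->packet_id) : 0;
    int len = nfq_get_payload(nfa, &payload);
    if (len >= 0) {
        // modify the payload here, then fix up lengths and checksums
    } else {
        payload = NULL;
        len = 0;
    }
    // hand the (possibly modified) packet back to the kernel
    return nfq_set_verdict(qh, id, NF_ACCEPT, len, payload);
}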
I wrote a kernel module and some applications. The module uses netfilter and diverts the packets I need to netfilter_queue. The application processes the queue, and I decide what to do with each packet.
uint hook_main(uint hooknum,
               struct sk_buff *skb,
               const struct net_device *in,
               const struct net_device *out,
               int (*okfn)(struct sk_buff *))
{
    struct iphdr *ip;
    struct udphdr *udp;
    if (skb->protocol == htons(ETH_P_IP)) {
        ip = (struct iphdr *)(skb->data);
        if (ip->version == 4 && ip->protocol == IPPROTO_UDP) {
            udp = (struct udphdr *)(skb->data + sizeof(struct iphdr));
            if (ntohs(udp->dest) == SOME_PORT) {
                return NF_QUEUE;
            }
        }
    }
    return NF_ACCEPT;
}
int init_module(void)
{
    printk("[udp-catch] start udp-catch\n");
    catch_hook.hook = hook_main;
    catch_hook.owner = THIS_MODULE;
    catch_hook.pf = PF_INET;
    catch_hook.hooknum = NF_INET_FORWARD;
    catch_hook.priority = NF_IP_PRI_FIRST;
    nf_register_hook(&catch_hook);
    return 0;
}
The userspace application is a reworked sample from netfilter.org.
Routers will automatically send out whatever they receive on their other ports.
E.g., for a 4-port router, what comes in on port 1 will be sent out on ports 2, 3 and 4.
To do what you require, you need another PC with two network cards. Connect your client PC to one network card and the server PC to the other.
Your program will then need to recvfrom() on one network card, modify the packet, and sendto() on the other network card.
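A rough sketch of that forwarder on Linux, using AF_PACKET sockets (the interface names eth0/eth1 and the buffer size are assumptions; this needs root privileges):

#include <arpa/inet.h>
#include <linux/if_packet.h>
#include <net/ethernet.h>
#include <net/if.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

// Open a raw socket bound to one interface so frames can be read from and written to it.
static int open_raw(const char *ifname)
{
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    struct sockaddr_ll sll;
    memset(&sll, 0, sizeof(sll));
    sll.sll_family = AF_PACKET;
    sll.sll_protocol = htons(ETH_P_ALL);
    sll.sll_ifindex = if_nametoindex(ifname);
    bind(fd, (struct sockaddr *)&sll, sizeof(sll));
    return fd;
}

// Forward frames from one card to the other, modifying them in between.
void bridge(void)
{
    int a = open_raw("eth0"), b = open_raw("eth1");
    unsigned char buf[2048];
    for (;;) {
        ssize_t n = recv(a, buf, sizeof(buf), 0);
        if (n > 0) {
            // inspect/modify buf here (and fix checksums), then re-send
            send(b, buf, n, 0);
        }
    }
}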
I am using blocking TCP sockets for my client and server. Whenever I read, I first check whether data is available on the stream using select(). I always read and write 40 bytes at a time. While most reads take a few milliseconds or less, some take more than half a second - and that is after I know that data is available on the socket.
I am also using TCP_NODELAY.
What could be causing it?
EDIT 2
I analyzed the timestamps of each packet sent and received and saw that this delay happens only when the client tries to read an object before the next object has been written by the server. For instance, the server wrote object number x, and the client then tried to read object x before the server was able to begin writing object number x+1. This makes me suspect that some kind of coalescing is taking place on the server side.
EDIT
The server listens on 3 different ports; the client connects to each of these ports one by one.
There are three connections: one that frequently sends data from the server to the client, a second that only sends data from the client to the server, and a third that is used very rarely to send a single byte of data. I am facing the problem with the first connection. I check with select() that data is available on that connection, and when I timestamp the 40-byte read, I find that the read took about half a second.
Any pointers on how to profile this would be very helpful.
Using gcc on Linux.
void rdrr_server_start(void)
{
    int rr_sd;
    int input_sd;
    int ack_sd;
    int fp_sd;
    startTcpServer(&rr_sd, remote_rr_port);
    startTcpServer(&input_sd, remote_input_port);
    startTcpServer(&ack_sd, remote_ack_port);
    startTcpServer(&fp_sd, remote_fp_port);
    connFD_rr = getTcpConnection(rr_sd);
    connFD_input = getTcpConnection(input_sd);
    connFD_ack = getTcpConnection(ack_sd);
    connFD_fp = getTcpConnection(fp_sd);
}
static int getTcpConnection(int sd)
{
    socklen_t len;
    struct sockaddr_in clientAddress;
    len = sizeof(clientAddress);
    int connFD = accept(sd, (struct sockaddr *)&clientAddress, &len);
    nodelay(connFD);
    fflush(stdout);
    return connFD;
}
static void
startTcpServer(int *sd, const int port)
{
    *sd = socket(AF_INET, SOCK_STREAM, 0);
    ASSERT(*sd > 0);
    // Set socket option so that the port can be reused
    int enable = 1;
    setsockopt(*sd, SOL_SOCKET, SO_REUSEADDR, &enable, sizeof(int));
    struct sockaddr_in a;
    memset(&a, 0, sizeof(a));
    a.sin_family = AF_INET;
    a.sin_port = port; // note: no htons(); the client side skips it too, so the (byte-swapped) ports still match
    a.sin_addr.s_addr = INADDR_ANY;
    int bindResult = bind(*sd, (struct sockaddr *)&a, sizeof(a));
    ASSERT(bindResult == 0);
    listen(*sd, 2);
}
static void nodelay(int fd) {
int flag=1;
ASSERT(setsockopt(fd, SOL_TCP, TCP_NODELAY, &flag, sizeof flag)==0);
}
void startTcpClient() {
    connFD_rr = socket(AF_INET, SOCK_STREAM, 0);
    connFD_input = socket(AF_INET, SOCK_STREAM, 0);
    connFD_ack = socket(AF_INET, SOCK_STREAM, 0);
    connFD_fp = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in a;
    memset(&a, 0, sizeof(a));
    a.sin_family = AF_INET;
    a.sin_port = remote_rr_port;
    a.sin_addr.s_addr = inet_addr(remote_server_ip);
    int CONNECT_TO_SERVER = connect(connFD_rr, (struct sockaddr *)&a, sizeof(a));
    ASSERT(CONNECT_TO_SERVER == 0);
    a.sin_port = remote_input_port;
    CONNECT_TO_SERVER = connect(connFD_input, (struct sockaddr *)&a, sizeof(a));
    ASSERT(CONNECT_TO_SERVER == 0);
    a.sin_port = remote_ack_port;
    CONNECT_TO_SERVER = connect(connFD_ack, (struct sockaddr *)&a, sizeof(a));
    ASSERT(CONNECT_TO_SERVER == 0);
    a.sin_port = remote_fp_port;
    CONNECT_TO_SERVER = connect(connFD_fp, (struct sockaddr *)&a, sizeof(a));
    ASSERT(CONNECT_TO_SERVER == 0);
    nodelay(connFD_rr);
    nodelay(connFD_input);
    nodelay(connFD_ack);
    nodelay(connFD_fp);
}
I would be suspicious of this line of code:
ASSERT(setsockopt(fd, SOL_TCP, TCP_NODELAY, &flag, sizeof flag)==0);
If you are running a release build, ASSERT is most likely defined to nothing, so the call would not actually be made. The setsockopt() call should not be inside the ASSERT statement; instead, its return value (stored in a variable) should be verified in the assert. Asserts with side effects are generally a bad thing, so even if this is not the problem, it should probably be changed.
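For example, the side-effect-free version might look like this:

// The call now executes unconditionally; only the check compiles away in release builds.
int rc = setsockopt(fd, SOL_TCP, TCP_NODELAY, &flag, sizeof flag);
ASSERT(rc == 0);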
One client and multiple connections?
Some of the socket functions might be blocking your execution (i.e., waiting for the result of a call). I would suggest opening a new thread (on the server side) for each connection, so they won't interfere with each other...
but I'm shooting in the dark; you'll need to send some additional info...
Your statement is still confusing, i.e., "multiple TCP connections with only one client". Obviously you have a single server listening on one port. Now if you have multiple connections, this means more than one client is connecting to the server, each connected on a different TCP client port. The server runs select() and responds to whichever client has data (meaning the client sent some data on its socket). If two clients send data simultaneously, the server can only process them sequentially, so the second client won't get processed until the server is done with the first.
select() only allows the server to monitor more than one descriptor (socket) and process whichever has data available. It does not do the processing in parallel. You need multiple threads or processes for that.
Maybe it is something related to the timeout argument.
What do you set as the timeout argument of the select() call?
Try changing the timeout to a bigger value and observe the latency. A timeout that is too small, and the very frequent system calls it causes, can actually kill throughput. You may achieve better results if you accept a slightly higher latency that is actually attainable.
I suspect a timeout issue or some code bug.
You may try using TCP_CORK (CORKed mode) with the kernel extensions GRO, GSO, and TSO disabled via ethtool:
sending inside a TCP_CORK-flagged session will ensure that the data is not sent in partial segments
disabling generic-receive-offload, generic-segmentation-offload, and tcp-segmentation-offload will ensure that the kernel does not introduce artificial delays to collect additional TCP segments before moving data to/from user space
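A sketch of both pieces (the interface name eth0 is an assumption; fd and buf stand for the connected socket and the 40-byte record). The offloads are disabled once from the shell with: ethtool -K eth0 gro off gso off tso off

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>

// Cork the stream, write the complete record, then uncork to flush it as one segment.
int on = 1, off = 0;
setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));
write(fd, buf, 40);
setsockopt(fd, IPPROTO_TCP, TCP_CORK, &off, sizeof(off));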