I am trying to implement ZeroMQ to get an application on a Raspberry Pi 3 (Raspbian Stretch) to communicate with an application on a separate machine (in this case Windows 7 64bit OS) linked by a wired or WLAN connection.
I have compiled ZeroMQ with the C library interface on both machines (using Cygwin on Windows) and the Hello World example (which I modified slightly to print the pointer values to assure me that the functions were 'working'). Both machines are connected (in this case via a wired Ethernet link and a router) and the connection is good (I link to RPi from PC via Xrdp or SSH OK).
The problem I have is that the client/server ZeroMQ programs don't appear to be 'seeing' each other, even though they do appear to run. My question is: what are the first steps I should take to investigate why this is happening? Are there any command-line or GUI tools that can help me find out what's causing the blockage (like port activity monitors or something)?
I know very little about networking, so consider me a novice in all things sockety/servicey in your reply. The source code on the RPi (server) is:
// ZeroMQ Test Server
// Compile with
// gcc -o zserver zserver.c -lzmq
#include <zmq.h>
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <assert.h>
int main (void)
{
    void *context = NULL, *responder = NULL;
    int rc = 1;

    // Socket to talk to clients
    context = zmq_ctx_new ();
    printf("Context pointer = %p\n",context);
    responder = zmq_socket (context, ZMQ_REP);
    printf("Responder pointer = %p\n",responder);
    rc = zmq_bind (responder, "tcp://*:5555");
    printf("rc = %d\n",rc);
    assert (rc == 0);

    while (1) {
        char buffer [10];
        zmq_recv (responder, buffer, 10, 0);
        printf ("Received Hello\n");
        sleep (1);          // Do some 'work'
        zmq_send (responder, "World", 5, 0);
    }
    return 0;
}
The source code on the PC (Cygwin) client is:
// ZeroMQ Test Client
// Compile with:
// gcc -o zclient zclient.c -L/usr/local/lib -lzmq
#include <zmq.h>
#include <string.h>
#include <stdio.h>
#include <unistd.h>
int main (void)
{
    void *context = NULL, *requester = NULL;
    printf ("Connecting to hello world server\n");

    context = zmq_ctx_new ();
    printf("Context pointer = %p\n",context);
    requester = zmq_socket (context, ZMQ_REQ);
    printf("Requester pointer = %p\n",requester);
    zmq_connect (requester, "tcp://localhost:5555");

    int request_nbr;
    for (request_nbr = 0; request_nbr != 10; request_nbr++) {
        char buffer [10];
        printf ("Sending Hello %d\n", request_nbr);
        zmq_send (requester, "Hello", 5, 0);
        zmq_recv (requester, buffer, 10, 0);
        printf ("Received World %d\n", request_nbr);
    }
    zmq_close (requester);
    zmq_ctx_destroy (context);
    return 0;
}
On the RPi LXTerminal I run the server and get this:
Context pointer = 0xefe308
Responder pointer = 0xf00e08
rc = 0
and on the Cygwin Bash shell I run the client and get this:
Connecting to hello world server
Context pointer = 0x60005ab90
Requester pointer = 0x60005f890
Sending Hello 0
... and there they both hang - one listening, the other sending but neither responding to each other.
Any clue how to start investigating this would be appreciated.
+1 for taking care to explicitly release resources with zmq_close() and zmq_ctx_term() ...
If this is your first time working with ZeroMQ,
you may enjoy first looking at "ZeroMQ Principles in less than Five Seconds" before diving into further details
Q : What are the first steps I should take to investigate why this is happening?
A Line-of-Sight test makes no sense as step zero here.
Interfaces placed on the same localhost can hardly fail to "see" one another.
As a first step, test the { .bind() | .connect() }-methods using an explicit address like tcp://127.0.0.1:56789 (so as to avoid both the expansion of the *-wildcard and the translation of the symbolic name localhost).
Always be ready to read and evaluate the API-provided errno, which ZeroMQ keeps reporting for the error state of the last ZeroMQ API operation.
Best read the native ZeroMQ API documentation, which is well maintained from version to version, so as to fully understand the comfort of the API-designed signalling/messaging meta-plane.
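For illustration only, a minimal sketch (not from the original post) of that advice: bind to an explicit address and report the errno text if the call fails. It assumes a context created as in the listings above.

    void *responder = zmq_socket (context, ZMQ_REP);
    // an explicit address avoids wildcard expansion and name-resolution surprises
    if (zmq_bind (responder, "tcp://127.0.0.1:56789") != 0)
        fprintf (stderr, "zmq_bind() failed: %s\n", zmq_strerror (zmq_errno ()));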
Mea culpa: the Line-of-Sight is indeed not established by the O/P code:
the RPi .bind()-s on its own local interface (and cannot do otherwise),
the PC .connect()-s not to the RPi's interface, but to the PC's own local one.
A PC-side .connect( "tcp://<address_of_RPi>:5555" ) will make it work (use the same IP address you use for Xrdp or SSH to reach the RPi, or read it explicitly on the RPi terminal after ~$ ip address and use that one in the PC-side client code); see the sketch after the two annotated listings below.
Two disjoint ZeroMQ AccessPoints have zero way to communicate once there is no transport-"wire" leading from A to B.
// Zero MQ Test Server
// Compile with
// gcc -o zserver zserver.c -lzmq
#include <zmq.h>
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <assert.h>
int main (void)
{
    void *context = NULL, *responder = NULL;
    int rc = 1;
    // Socket to talk to clients
    context = zmq_ctx_new (); printf("Context pointer = %p\n",context);
    responder = zmq_socket (context, ZMQ_REP); printf("Responder pointer = %p\n",responder);
    rc = zmq_bind (responder, "tcp://*:5555"); printf("rc = %d\n",rc);
    /* ----------------------------------^^^^^^------------RPi interface-----------*/
    assert (rc == 0);

    while (1) {
        char buffer [10];
        zmq_recv (responder, buffer, 10, 0); printf("Received Hello\n");
        sleep (1); // Do some 'work'
        zmq_send (responder, "World", 5, 0);
    }
    return 0;
}
The source code on the PC (Cygwin) client is:
// ZeroMQ Test Client
// Compile with:
// gcc -o zclient zclient.c -L/usr/local/lib -lzmq
#include <zmq.h>
#include <string.h>
#include <stdio.h>
#include <unistd.h>
int main (void)
{
    void *context = NULL, *requester = NULL;
    printf("Connecting to hello world server\n");
    context = zmq_ctx_new (); printf("Context pointer = %p\n",context);
    requester = zmq_socket (context, ZMQ_REQ); printf("Requester pointer = %p\n",requester);
    zmq_connect (requester, "tcp://localhost:5555");
    /*---------------------------------^^^^^^^^^^^^^^---------PC-local-interface------*/
    int request_nbr;
    for (request_nbr = 0; request_nbr != 10; request_nbr++) {
        char buffer [10]; printf("Sending Hello %d\n", request_nbr);
        zmq_send (requester, "Hello", 5, 0);
        zmq_recv (requester, buffer, 10, 0); printf("Received World %d\n", request_nbr);
    }
    zmq_close (requester);
    zmq_ctx_destroy (context);
    return 0;
}
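A minimal sketch of the PC-side fix (the address 192.168.1.50 is just a placeholder; substitute the RPi address that works for your SSH/Xrdp sessions):

    // hypothetical RPi address -- use the one shown by `ip address` on the RPi
    if (zmq_connect (requester, "tcp://192.168.1.50:5555") != 0)
        fprintf (stderr, "zmq_connect() failed: %s\n", zmq_strerror (zmq_errno ()));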
You may also like to read more on ZeroMQ-related subjects here.
Epilogue :
The trouble reported in the O/P is actually masked and remains hidden from detection by the API. ZeroMQ permits one AccessPoint to have 0+ transport-class connections simultaneously, given a proper syntax and other conditions are met.
A call to zmq_connect( requester, "tcp://<address-not-intended-but-correct>:<legal-port>" ) will result in a legally fair state, and none of the defined and documented cases of possible error-states would get reported, because none of them actually happened:
EINVAL
The endpoint supplied is invalid.
EPROTONOSUPPORT
The requested transport protocol is not supported.
ENOCOMPATPROTO
The requested transport protocol is not compatible with the socket type.
ETERM
The ØMQ context associated with the specified socket was terminated.
ENOTSOCK
The provided socket was invalid.
EMTHREAD
No I/O thread is available to accomplish the task.
There is some chance to at least somehow "detect" the trouble by enforcing another sort of exception/error, deferred into the calls of { zmq_send() | zmq_recv() } in their non-blocking form, where these may turn into reporting EAGAIN, or possibly EFSM for not having completed the end-to-end-confirmed ZMTP-protocol handshaking (no counterparty was, or ever would be, met on the PC-localhost port by the remote RPi server side). This also requires prior settings like zmq_setsockopt( requester, ZMQ_IMMEDIATE, 1 ) and other configuration details.
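A minimal sketch of that deferred-detection idea (my illustration, not the O/P's code), assuming the client's requester socket from the listing above; note that ZMQ_IMMEDIATE has to be set before zmq_connect():

    int immediate = 1;                       // don't queue onto not-yet-completed connections
    zmq_setsockopt (requester, ZMQ_IMMEDIATE, &immediate, sizeof (immediate));
    ...
    char buffer [10];
    if (zmq_recv (requester, buffer, sizeof (buffer), ZMQ_DONTWAIT) == -1)
    {
        if (zmq_errno () == EAGAIN) fprintf (stderr, "nothing has arrived (yet?)\n");
        if (zmq_errno () == EFSM)   fprintf (stderr, "REQ not in a state to recv (no send completed)\n");
    }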
Next, in ZeroMQ v4.+, there is a chance to inspect a subset of the AccessPoint's internally reported events, using an "inspection-socket" via a rather complex strategy of instantiating int zmq_socket_monitor (void *socket, char *endpoint, int events); attached to the AccessPoint's internals via the inproc:// transport-class, here as "inproc://myPCsocketAccessPOINT_monitor", like this:
rc = zmq_socket_monitor( responder,                                 // AccessPoint to monitor
                         "inproc://myPCsocketAccessPOINT_monitor",  // symbolic name
                         ZMQ_ALL_EVENTS                             // scope of Events
                         );
The internal monitoring "inspection-socket" created this way may next get zmq_connect()-ed to, like this:
void *my_end_of_monitor_socket = zmq_socket ( context, ZMQ_PAIR );
rc = zmq_connect( my_end_of_monitor_socket,                 // local-end PAIR-socket AccessPoint
                  "inproc://myPCsocketAccessPOINT_monitor"  // symbolic name
                  );
and finally, we can use this to read a sequence of events (and act accordingly ):
int event = get_monitor_event( my_end_of_monitor_socket, NULL, NULL );
if (event == ZMQ_EVENT_CONNECT_DELAYED) { ...; }
if (event == ... ) { ...; }
using as a tool a trivialised get_monitor_event() like this one, which handles some of the internal rules for reading and interpreting the multi-part messages that arrive, in order, from the instantiated "internal" monitor attached to the AccessPoint:
// Read one event off the monitor socket; return value and address
// by reference, if not null, and event number by value. Returns -1
// in case of error.
static int
get_monitor_event ( void *monitor, int *value, char **address )
{
    // First frame in message contains event number and value
    zmq_msg_t msg;
    zmq_msg_init (&msg);
    if (zmq_msg_recv (&msg, monitor, 0) == -1)
        return -1;                              // Interrupted, presumably
    assert (zmq_msg_more (&msg));

    uint8_t *data  = (uint8_t *) zmq_msg_data (&msg);
    uint16_t event = *(uint16_t *) (data);
    if (value)
        *value = *(uint32_t *) (data + 2);

    // Second frame in message contains event address
    zmq_msg_init (&msg);
    if (zmq_msg_recv (&msg, monitor, 0) == -1)
        return -1;                              // Interrupted, presumably
    assert (!zmq_msg_more (&msg));

    if (address) {
        uint8_t *data = (uint8_t *) zmq_msg_data (&msg);
        size_t   size = zmq_msg_size (&msg);
        *address = (char *) malloc (size + 1);
        memcpy (*address, data, size);
        (*address)[size] = 0;
    }
    return event;
}
What internal API events can be monitored?
As of the v4.2 API, this is the set of monitor(able) internal API events:
ZMQ_EVENT_CONNECTED
The socket has successfully connected to a remote peer. The event value is the file descriptor (FD) of the underlying network socket. Warning: there is no guarantee that the FD is still valid by the time your code receives this event.
ZMQ_EVENT_CONNECT_DELAYED
A connect request on the socket is pending. The event value is unspecified.
ZMQ_EVENT_CONNECT_RETRIED
A connect request failed, and is now being retried. The event value is the reconnect interval in milliseconds. Note that the reconnect interval is recalculated at each retry.
ZMQ_EVENT_LISTENING
The socket was successfully bound to a network interface. The event value is the FD of the underlying network socket. Warning: there is no guarantee that the FD is still valid by the time your code receives this event.
ZMQ_EVENT_BIND_FAILED
The socket could not bind to a given interface. The event value is the errno generated by the system bind call.
ZMQ_EVENT_ACCEPTED
The socket has accepted a connection from a remote peer. The event value is the FD of the underlying network socket. Warning: there is no guarantee that the FD is still valid by the time your code receives this event.
ZMQ_EVENT_ACCEPT_FAILED
The socket has rejected a connection from a remote peer. The event value is the errno generated by the accept call.
ZMQ_EVENT_CLOSED
The socket was closed. The event value is the FD of the (now closed) network socket.
ZMQ_EVENT_CLOSE_FAILED
The socket close failed. The event value is the errno returned by the system call. Note that this event occurs only on IPC transports.
ZMQ_EVENT_DISCONNECTED
The socket was disconnected unexpectedly. The event value is the FD of the underlying network socket. Warning: this socket will be closed.
ZMQ_EVENT_MONITOR_STOPPED
Monitoring on this socket ended.
ZMQ_EVENT_HANDSHAKE_FAILED
The ZMTP security mechanism handshake failed. The event value is unspecified.
NOTE: in DRAFT state, not yet available in stable releases.
ZMQ_EVENT_HANDSHAKE_SUCCEED
NOTE: as new events are added, the catch-all value will start returning them. An application that relies on a strict and fixed sequence of events must not use ZMQ_EVENT_ALL in order to guarantee compatibility with future versions.
Each event is sent as two frames. The first frame contains an event number (16 bits), and an event value (32 bits) that provides additional data according to the event number. The second frame contains a string that specifies the affected TCP or IPC endpoint.
In zmq_connect, you must indicate the IP address of the Raspberry Pi (the machine which executed zmq_bind):
It should have been:
// on PC, remote ip is the raspberry one, the one you use for ssh for instance
rc = zmq_connect(requester, "tcp://<remote ip>:5555");
Related
Before I get started. Yes, I could use leJOS, ev3dev, or some others, but I'd like to do it this way because that is how I learn.
I am using the CodeSourcery arm-2009q1 arm toolchain. I fetched the required libraries (bluetooth) from here: https://github.com/mindboards/ev3sources.
I am uploading the programs to the brick by using this tool: https://github.com/c4ev3/ev3duder
I have also fetched the brick's shared libraries, but I can not get them to work properly and there is 0 documentation on how to write a c program for the ev3 using the shared libraries. If I could get that working I might be able to use the c_com module to handle bluetooth, but right now bluez and rfcomm in conjunction with: https://github.com/c4ev3/EV3-API for motor and sensor control seems to be my best bet.
Now, with that out of the way:
I'd like to run the EV3 as a bluetooth "server" meaning that I start a program on it and the program opens a socket, binds it, listens for a connection, and then accepts a single connection.
I am able to open a socket, bind it to anything but channel 1 (I believe this might be the crux of my issue), and listen. These all return 0 (OK) and everything is fine.
Then I try to accept a connection. That instantly returns -1 and sets the remote to address 00:00:00:00:00:00.
My code is pretty much the same as can be found here: https://people.csail.mit.edu/albert/bluez-intro/x502.html
Here it is:
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <bluetooth/bluetooth.h>
#include <bluetooth/rfcomm.h>
#include <ev3.h>
int main(int argc, char **argv)
{
    InitEV3();

    struct sockaddr_rc loc_addr = { 0 }, rem_addr = { 0 };
    char buf[1024] = { 0 };
    int sock, client, bytes_read;
    socklen_t opt = sizeof(rem_addr);

    sock = socket(AF_BLUETOOTH, SOCK_STREAM, BTPROTO_RFCOMM);

    loc_addr.rc_family = AF_BLUETOOTH;
    loc_addr.rc_bdaddr = *BDADDR_ANY;
    loc_addr.rc_channel = 2;    // <-- Anything but 1. 1 seems to be taken

    bind(sock, (struct sockaddr *)&loc_addr, sizeof(loc_addr));
    listen(sock, 1);

    // accept one connection <-- PROGRAM FAILS HERE AS accept() returns -1
    client = accept(sock, (struct sockaddr *)&rem_addr, &opt);

    // ---- All following code is irrelevant because accept fails ----
    ba2str( &rem_addr.rc_bdaddr, buf );
    fprintf(stderr, "accepted connection from %s\n", buf);
    memset(buf, 0, sizeof(buf));

    bytes_read = read(client, buf, sizeof(buf));
    if( bytes_read > 0 )
        printf("received [%s]\n", buf);

    close(client);
    close(sock);
    FreeEV3();
    return 0;
}
I am able to get the same code working on my pi. Even communication back and forth when the ev3api-specific functions are commented out. I just can't fathom why it won't work on the EV3.
I figured it out.
On my Raspberry Pi, the accept call worked as expected with no quirks. On the EV3, however, the accept call is non-blocking even though it has not been told to behave that way.
The solution was to place the accept call in a loop until an incoming connection was in the queue.
while (errno == EAGAIN && ButtonIsUp(BTNEXIT) && client < 0)
    client = accept(sock, (struct sockaddr *)&rem_addr, &opt);  // accept() wants a socklen_t*, hence &opt
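A slightly fuller sketch of the same workaround (my illustration, assuming the sock, rem_addr, opt and client variables from the listing above, plus <errno.h> and the EV3-API's ButtonIsUp()); it spells out the initialisation the loop relies on and avoids busy-spinning:

    client = -1;
    errno  = EAGAIN;                                 // prime the loop for the first pass
    while (client < 0 && errno == EAGAIN && ButtonIsUp(BTNEXIT))
    {
        client = accept(sock, (struct sockaddr *)&rem_addr, &opt);
        if (client < 0 && errno == EAGAIN)
            usleep(10000);                           // poll ~100x per second instead of spinning
    }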
I'll upload the code to github. Contact me if you'd like to do something similar with the EV3.
Using the examples provided by the ZeroMQ docs, I cannot get them to work with a server written in C and a Node.js client.
The examples I use are:
http://zguide.zeromq.org/js:rrclient
for Node.js:
// Hello World client in Node.js
// Connects REQ socket to tcp://localhost:5559
// Sends "Hello" to server, expects "World" back
var zmq = require('zmq')
  , requester = zmq.socket('req');

requester.connect('tcp://localhost:5560');

var replyNbr = 0;
requester.on('message', function(msg) {
  console.log('got reply', replyNbr, msg.toString());
  replyNbr += 1;
});

for (var i = 0; i < 10; ++i) {
  requester.send("Hello");
}
and
https://github.com/booksbyus/zguide/blob/master/examples/C/rrworker.c
for the C server:
// Hello World worker
// Connects REP socket to tcp://localhost:5560
// Expects "Hello" from client, replies with "World"
#include "zhelpers.h"
#include <unistd.h>
int main (void)
{
    void *context = zmq_ctx_new ();

    // Socket to talk to clients
    void *responder = zmq_socket (context, ZMQ_REP);
    //zmq_connect (responder, "tcp://localhost:5560");
    // using bind instead of connect
    zmq_bind (responder, "tcp://localhost:5560");

    while (1) {
        // Wait for next request from client
        char *string = s_recv (responder);
        printf ("Received request: [%s]\n", string);
        free (string);

        // Do some 'work'
        sleep (1);

        // Send reply back to client
        s_send (responder, "World");
    }
    // We never get here, but clean up anyhow
    zmq_close (responder);
    zmq_ctx_destroy (context);
    return 0;
}
I changed the port, so they now match ( 5560 ). However I get no data transmitted. Neither the client nor the server gets any message.
Why? Simply because they both just remained listening.
Where?
requester.connect(..) // in Node.js copy/paste code
resp.
zmq_connect ( responder, "tcp://localhost:5560" ); // in C copy/paste code
The logic of a ZeroMQ signalling / messaging infrastructure is a bit more complex.
One side of the REQ/REP has to .bind() and all the others may try to .connect().
This is valid in principle, applicable to all ZeroMQ Scalable Formal Communication Pattern archetypes, not just to the REQ/REP one.
So, in this use-case,
either side -- be it a Node.js or the C -- may start with the .bind()
and
the other one will be able to try to .connect() to such a .bind()-prepared and ready IP:port# target.
..
int rc = zmq_bind( responder, "tcp://localhost:5560" );
/* zmq_bind()
returns:
* zero if successful.
* -1 otherwise
and
sets errno to one of the values
as defined in API.
*/
..
There are many good practices to follow in the ZeroMQ domain. Registering and handling the return codes from the function calls is one such topic among ZeroMQ best practices. Do not hesitate to learn faster by reading through the many man*years of collective experience collected here.
Binding to localhost does not seem to work, so I tried 127.0.0.1 and it works.
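For completeness, a sketch (my addition, not part of the original answer) of the usual alternative: bind the C side to all interfaces with the * wildcard, which sidesteps the localhost name resolution entirely and still lets clients reach it via 127.0.0.1, and keep checking the return code:

    int rc = zmq_bind (responder, "tcp://*:5560");   // all interfaces, incl. 127.0.0.1
    assert (rc == 0);                                // -1 would mean the bind itself failed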
Why is the following code slow? And by slow I mean 100x-1000x slow. It just repeatedly performs read/write directly on a TCP socket. The curious part is that it remains slow only if I use two function calls for both read AND write as shown below. If I change either the server or the client code to use a single function call (as in the comments), it becomes super fast.
Code snippet:
int main(...) {
  int sock = ...; // open TCP socket
  int i;
  char buf[100000];
  for(i=0;i<2000;++i)
  { if(amServer)
    { write(sock,buf,10);
      // read(sock,buf,20);
      read(sock,buf,10);
      read(sock,buf,10);
    }else
    { read(sock,buf,10);
      // write(sock,buf,20);
      write(sock,buf,10);
      write(sock,buf,10);
    }
  }
  close(sock);
}
We stumbled on this in a larger program that was actually using stdio buffering. It mysteriously became sluggish the moment the payload size exceeded the buffer size by a small margin. Then I did some digging around with strace, and finally boiled the problem down to this. I can solve it by fooling around with the buffering strategy, but I'd really like to know what on earth is going on here. On my machine, it goes from 0.030 s to over a minute (tested both locally and with remote machines) when I change the two read calls to a single call.
These tests were done on various Linux distros, and various kernel versions. Same result.
Fully runnable code with networking boilerplate:
#include <netdb.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/ip.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
static int getsockaddr(const char* name,const char* port, struct sockaddr* res)
{
    struct addrinfo* list;
    if(getaddrinfo(name,port,NULL,&list) < 0) return -1;
    for(;list!=NULL && list->ai_family!=AF_INET;list=list->ai_next);
    if(!list) return -1;
    memcpy(res,list->ai_addr,list->ai_addrlen);
    freeaddrinfo(list);
    return 0;
}
// used as sock=tcpConnect(...); ...; close(sock);
static int tcpConnect(struct sockaddr_in* sa)
{
    int outsock;
    if((outsock=socket(AF_INET,SOCK_STREAM,0))<0) return -1;
    if(connect(outsock,(struct sockaddr*)sa,sizeof(*sa))<0) return -1;
    return outsock;
}
int tcpConnectTo(const char* server, const char* port)
{
    struct sockaddr_in sa;
    if(getsockaddr(server,port,(struct sockaddr*)&sa)<0) return -1;
    int sock=tcpConnect(&sa); if(sock<0) return -1;
    return sock;
}
int tcpListenAny(const char* portn)
{
    in_port_t port;
    int outsock;
    if(sscanf(portn,"%hu",&port)<1) return -1;
    if((outsock=socket(AF_INET,SOCK_STREAM,0))<0) return -1;
    int reuse = 1;
    if(setsockopt(outsock,SOL_SOCKET,SO_REUSEADDR,
                  (const char*)&reuse,sizeof(reuse))<0) return fprintf(stderr,"setsockopt() failed\n"),-1;
    struct sockaddr_in sa = { .sin_family=AF_INET, .sin_port=htons(port)
                            , .sin_addr={INADDR_ANY} };
    if(bind(outsock,(struct sockaddr*)&sa,sizeof(sa))<0) return fprintf(stderr,"Bind failed\n"),-1;
    if(listen(outsock,SOMAXCONN)<0) return fprintf(stderr,"Listen failed\n"),-1;
    return outsock;
}
int tcpAccept(const char* port)
{
    int listenSock, sock;
    listenSock = tcpListenAny(port);
    if((sock=accept(listenSock,0,0))<0) return fprintf(stderr,"Accept failed\n"),-1;
    close(listenSock);
    return sock;
}
void writeLoop(int fd,const char* buf,size_t n)
{
    // Don't even bother incrementing buffer pointer
    while(n) n-=write(fd,buf,n);
}
void readLoop(int fd,char* buf,size_t n)
{
    while(n) n-=read(fd,buf,n);
}
int main(int argc,char* argv[])
{
    if(argc<3)
    { fprintf(stderr,"Usage: round {server_addr|--} port\n");
      return -1;
    }
    bool amServer = (strcmp("--",argv[1])==0);
    int sock;
    if(amServer) sock=tcpAccept(argv[2]);
    else sock=tcpConnectTo(argv[1],argv[2]);
    if(sock<0) { fprintf(stderr,"Connection failed\n"); return -1; }

    int i;
    char buf[100000] = { 0 };
    for(i=0;i<4000;++i)
    {
        if(amServer)
        { writeLoop(sock,buf,10);
          readLoop(sock,buf,20);
          //readLoop(sock,buf,10);
          //readLoop(sock,buf,10);
        }else
        { readLoop(sock,buf,10);
          writeLoop(sock,buf,20);
          //writeLoop(sock,buf,10);
          //writeLoop(sock,buf,10);
        }
    }
    close(sock);
    return 0;
}
EDIT: This version is slightly different from the other snippet in that it reads/writes in a loop. So in this version, two separate writes automatically causes two separate read() calls, even if readLoop is called only once. But otherwise the problem still remains.
Interesting. You are a victim of Nagle's algorithm together with TCP delayed acknowledgements.
Nagle's algorithm is a mechanism used in TCP to defer transmission of small segments until enough data has accumulated to make it worth building and sending a segment over the network. From the Wikipedia article:
Nagle's algorithm works by combining a number of small outgoing
messages, and sending them all at once. Specifically, as long as there
is a sent packet for which the sender has received no acknowledgment,
the sender should keep buffering its output until it has a full
packet's worth of output, so that output can be sent all at once.
However, TCP typically employs something known as TCP delayed acknowledgements, which is a technique that consists of accumulating together a batch of ACK replies (because TCP uses cumulative ACKS), to reduce network traffic.
That wikipedia article further mentions this:
With both algorithms enabled, applications that do two successive
writes to a TCP connection, followed by a read that will not be
fulfilled until after the data from the second write has reached the
destination, experience a constant delay of up to 500 milliseconds,
the "ACK delay".
(Emphasis mine)
In your specific case, since the server doesn't send more data before reading the reply, the client is causing the delay: if the client writes twice, the second write will be delayed.
If Nagle's algorithm is being used by the sending party, data will be
queued by the sender until an ACK is received. If the sender does not
send enough data to fill the maximum segment size (for example, if it
performs two small writes followed by a blocking read) then the
transfer will pause up to the ACK delay timeout.
So, when the client makes 2 write calls, this is what happens:
Client issues the first write.
The server receives some data. It doesn't acknowledge it in the hope that more data will arrive (so it can batch up a bunch of ACKs in one single ACK).
Client issues the second write. The previous write has not been acknowledged, so Nagle's algorithm defers transmission until more data arrives (until enough data has been collected to make a segment) or the previous write is ACKed.
Server is tired of waiting and after 500 ms acknowledges the segment.
Client finally completes the 2nd write.
With 1 write, this is what happens:
Client issues the first write.
The server receives some data. It doesn't acknowledge it in the hope that more data will arrive (so it can batch up a bunch of ACKs in one single ACK).
The server writes to the socket. An ACK is part of the TCP header, so if you're writing, you might as well acknowledge the previous segment at no extra cost. Do it.
Meanwhile, the client wrote once, so it was already waiting on the next read - there was no 2nd write waiting for the server's ACK.
If you want to keep writing twice on the client side, you need to disable Nagle's algorithm. This is the solution proposed by the algorithm's author himself:
The user-level solution is to avoid write-write-read sequences on
sockets. write-read-write-read is fine. write-write-write is fine. But
write-write-read is a killer. So, if you can, buffer up your little
writes to TCP and send them all at once. Using the standard UNIX I/O
package and flushing write before each read usually works.
(See the citation on Wikipedia)
As mentioned by David Schwartz in the comments, this may not be the greatest idea for various reasons, but it illustrates the point and shows that this is indeed causing the delay.
To disable it, you need to set the TCP_NODELAY option on the sockets with setsockopt(2).
This can be done in tcpConnectTo() for the client:
int tcpConnectTo(const char* server, const char* port)
{
    struct sockaddr_in sa;
    if(getsockaddr(server,port,(struct sockaddr*)&sa)<0) return -1;
    int sock=tcpConnect(&sa); if(sock<0) return -1;

    int val = 1;
    if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &val, sizeof(val)) < 0)
        perror("setsockopt(2) error");

    return sock;
}
And in tcpAccept() for the server:
int tcpAccept(const char* port)
{
    int listenSock, sock;
    listenSock = tcpListenAny(port);
    if((sock=accept(listenSock,0,0))<0) return fprintf(stderr,"Accept failed\n"),-1;
    close(listenSock);

    int val = 1;
    if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &val, sizeof(val)) < 0)
        perror("setsockopt(2) error");

    return sock;
}
It's interesting to see the huge difference this makes.
If you'd rather not mess with the socket options, it's enough to ensure that the client writes once - and only once - before the next read. You can still have the server read twice:
for(i=0;i<4000;++i)
{
    if(amServer)
    { writeLoop(sock,buf,10);
      //readLoop(sock,buf,20);
      readLoop(sock,buf,10);
      readLoop(sock,buf,10);
    }else
    { readLoop(sock,buf,10);
      writeLoop(sock,buf,20);
      //writeLoop(sock,buf,10);
      //writeLoop(sock,buf,10);
    }
}
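Another user-level way to avoid the write-write-read pattern, without touching socket options, is to coalesce the two small client writes into a single system call. A sketch using writev(2), assuming the same sock and buf as above (note that, like write(), writev() may write fewer bytes than requested):

    #include <sys/uio.h>                             // for struct iovec / writev()

    struct iovec iov[2] = {
        { .iov_base = buf,      .iov_len = 10 },     // first small "message"
        { .iov_base = buf + 10, .iov_len = 10 },     // second small "message"
    };
    ssize_t n = writev(sock, iov, 2);                // one syscall, one segment, no Nagle stall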
I'm trying to use czmq; the first test was OK with the inproc protocol and with the "puller" and the "pusher" in the same program.
But I want to use it across different processes. I also tried ipc and tcp, and I cannot get the server and the client to communicate.
The server:
#include <czmq.h>
int main (void)
{
    zctx_t *ctx = zctx_new ();
    void *reader = zsocket_new (ctx, ZMQ_PULL);
    int rc = zsocket_connect (reader, "tcp://localhost:5555");

    printf("wait for a message...\n");
    char *message = zstr_recv (reader);
    printf("Message: %s",message);

    zctx_destroy (&ctx);
    return 0;
}
and the client:
#include <czmq.h>
int main (void)
{
    zctx_t *ctx = zctx_new ();
    void *writer = zsocket_new (ctx, ZMQ_PUSH);
    int rc = zsocket_bind (writer, "tcp://*:5555");
    assert (rc == 5555);    // zsocket_bind() returns the port number it bound to

    zstr_send (writer, "HELLO");

    zsocket_destroy (ctx, writer);
    return 0;
}
Could you tell me what is wrong with my code? I have also tried other sample code I found, but without more success.
Update
The server is waiting for messages in zstr_recv, but the messages sent by the client trigger nothing on the server process.
After sending the message, the client process is destroying the socket too quickly. With inproc, you "get away with it" because inproc is fast, while TCP has to go through more hurdles before the message gets to the TCP stack.
It is true that zsocket_destroy() should block until the message is sent, if ZMQ_LINGER = -1 (the default with raw ZMQ), but the default linger for CZMQ is 0. That means dropping in-transit messages when the socket is destroyed.
Try setting the linger (with zctx_set_linger) to something bigger than zero; 10ms perhaps, but use whatever value is good for you.
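A sketch of the client with a non-zero linger (my illustration, using the same czmq zctx/zsocket API as above; the 100 ms value is an arbitrary choice):

    zctx_t *ctx = zctx_new ();
    zctx_set_linger (ctx, 100);                  // allow up to 100 ms for in-transit messages
    void *writer = zsocket_new (ctx, ZMQ_PUSH);
    int rc = zsocket_bind (writer, "tcp://*:5555");
    zstr_send (writer, "HELLO");
    zctx_destroy (&ctx);                         // now waits up to the linger period before dropping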
I have an application that reads large files from a server and hangs frequently on a particular machine. It has worked successfully under RHEL5.2 for a long time. We have recently upgraded to RHEL6.1 and it now hangs regularly.
I have created a test app that reproduces the problem. It hangs approx 98 times out of 100.
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/param.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>
#include <sys/time.h>
int mFD = 0;
void open_socket()
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof(hints));
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_family = AF_INET;

    if (getaddrinfo("localhost", "60000", &hints, &res) != 0)
    {
        fprintf(stderr, "Exit %d\n", __LINE__);
        exit(1);
    }

    mFD = socket(res->ai_family, res->ai_socktype, res->ai_protocol);

    if (mFD == -1)
    {
        fprintf(stderr, "Exit %d\n", __LINE__);
        exit(1);
    }

    if (connect(mFD, res->ai_addr, res->ai_addrlen) < 0)
    {
        fprintf(stderr, "Exit %d\n", __LINE__);
        exit(1);
    }

    freeaddrinfo(res);
}
void read_message(int size, void* data)
{
    int bytesLeft = size;
    int numRd = 0;

    while (bytesLeft != 0)
    {
        fprintf(stderr, "reading %d bytes\n", bytesLeft);

        /* Replacing MSG_WAITALL with 0 works fine */
        int num = recv(mFD, data, bytesLeft, MSG_WAITALL);

        if (num == 0)
        {
            break;
        }
        else if (num < 0 && errno != EINTR)
        {
            fprintf(stderr, "Exit %d\n", __LINE__);
            exit(1);
        }
        else if (num > 0)
        {
            numRd += num;
            data += num;
            bytesLeft -= num;
            fprintf(stderr, "read %d bytes - remaining = %d\n", num, bytesLeft);
        }
    }
    fprintf(stderr, "read total of %d bytes\n", numRd);
}
int main(int argc, char **argv)
{
    open_socket();

    uint32_t raw_len = atoi(argv[1]);
    char raw[raw_len];
    read_message(raw_len, raw);

    return 0;
}
Some notes from my testing:
If "localhost" maps to the loopback address 127.0.0.1, the app hangs on the call to recv() and NEVER returns.
If "localhost" maps to the ip of the machine, thus routing the packets via the ethernet interface, the app completes successfully.
When I experience a hang, the server sends a "TCP Window Full" message, and the client responds with a "TCP ZeroWindow" message (see image and attached tcpdump capture). From this point, it hangs forever with the server sending keep-alives and the client sending ZeroWindow messages. The client never seems to expand its window, allowing the transfer to complete.
During the hang, if I examine the output of "netstat -a", there is data in the servers send queue but the clients receive queue is empty.
If I remove the MSG_WAITALL flag from the recv() call, the app completes successfully.
The hanging issue only arises using the loopback interface on 1 particular machine. I suspect this may all be related to timing dependencies.
As I drop the size of the 'file', the likelihood of the hang occurring is reduced
The source for the test app can be found here:
Socket test source
The tcpdump capture from the loopback interface can be found here:
tcpdump capture
I reproduce the issue by issuing the following commands:
> gcc socket_test.c -o socket_test
> perl -e 'for (1..6000000){ print "a" }' | nc -l 60000
> ./socket_test 6000000
This sees 6000000 bytes sent to the test app which tries to read the data using a single call to recv().
I would love to hear any suggestions on what I might be doing wrong or any further ways to debug the issue.
MSG_WAITALL should block until all data has been received. From the manual page on recv:
This flag requests that the operation block until the full request is satisfied.
However, the buffers in the network stack probably are not large enough to contain everything, which is the reason for the error messages on the server. The client network stack simply can't hold that much data.
The solution is either to increase the buffer sizes (the SO_RCVBUF option to setsockopt), split the message into smaller pieces, or receive smaller chunks and put them into your own buffer. The last is what I would recommend.
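For reference, a sketch of the first option (my illustration): ask the kernel for a larger receive buffer inside open_socket(), after socket() and before connect(); the value is arbitrary and the kernel may cap or adjust it:

    int rcvbuf = 4 * 1024 * 1024;                    // request ~4 MB of receive buffer
    if (setsockopt(mFD, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
        perror("setsockopt(SO_RCVBUF)");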
Edit: I see in your code that you already do what I suggested (read smaller chunks with your own buffering), so just remove the MSG_WAITALL flag and it should work.
Oh, and when recv returns zero, that means the other end has closed the connection, and you should do so too.
Consider these two possible rules:
The receiver may wait for the sender to send more before receiving what has already been sent.
The sender may wait for the receiver to receive what has already been sent before sending more.
We can have either of these rules, but we cannot have both of these rules.
Why? Because if the receiver is permitted to wait for the sender, that means the sender cannot wait for the receiver to receive before sending more, otherwise we deadlock. And if the sender is permitted to wait for the receiver, that means the receiver cannot wait for the sender to send before receiving more, otherwise we deadlock.
If both of these things happen at the same time, we deadlock. The sender will not send more until the receiver receives what has already been sent, and the receiver will not receive what has already been sent unless the sender sends more. Boom.
TCP chooses rule 2 (for reasons that should be obvious). Thus it cannot support rule 1. But in your code, you are the receiver, and you are waiting for the sender to send more before you receive what has already been sent. So this will deadlock.
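In code terms, this is the receiver-side fix already hinted at above: drop MSG_WAITALL and request bounded chunks, so the receive buffer keeps draining and the sender's window can reopen. A sketch against the read_message() loop above (the 64 KB chunk size is arbitrary):

    /* inside the while (bytesLeft != 0) loop of read_message() */
    int chunk = bytesLeft < 65536 ? bytesLeft : 65536;   // never ask for more than 64 KB at once
    int num   = recv(mFD, data, chunk, 0);               // no MSG_WAITALL -> returns what's available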