Connection loss between Meteor and C nopoll application

I use nopoll (http://www.aspl.es/nopoll/) for my C application to communicate with Meteor.
Meteor periodically sends ping messages.
When my application polls the websocket, it replies with a pong message: everything is fine.
Next, to avoid polling, I replaced the polling with a callback initialized with sigaction(SIGIO, ...).
Then, when a ping is received, I send a pong, but sometimes the server stops sending pings and no other messages can be exchanged.
Is there any timeout between a ping and the associated pong message?
Is there any mechanism to notify me of a connection loss, because nopoll_conn_is_ok() and nopoll_conn_is_ready() always return nopoll_true?

It is difficult to say why Meteor stops sending content. However, two points are worth considering in your case:
You don't have to send a PONG every time you receive a PING when using noPoll, because that is done automatically by noPoll's engine (see the nopoll_conn_get_msg() implementation at nopoll_conn.c:2453). Maybe this is causing Meteor to fail.
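For instance, a plain read with nopoll_conn_get_msg() lets noPoll answer incoming PINGs internally; a minimal sketch (error handling omitted, the wrapper function name is made up):

#include <stdio.h>
#include <nopoll.h>

/* Poll for one message; noPoll's engine answers incoming PINGs itself. */
void check_for_message (noPollConn *conn)
{
    noPollMsg *msg = nopoll_conn_get_msg (conn);
    if (msg != NULL) {
        printf ("got %d bytes of payload\n", nopoll_msg_get_payload_size (msg));
        nopoll_msg_unref (msg);
    }
}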
About getting a connection-close notification: use nopoll_conn_set_on_close (conn, handler, ptr) to be notified when the connection is closed. See working examples here: https://dolphin.aspl.es/svn/publico/nopoll/trunk/test/nopoll-regression-client.c
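A minimal sketch of registering such a handler (the helper function name is a placeholder; only the noPoll calls come from the library's API):

#include <stdio.h>
#include <nopoll.h>

/* Handler invoked by noPoll when the connection is closed. */
static void on_close (noPollCtx *ctx, noPollConn *conn, noPollPtr user_data)
{
    fprintf (stderr, "connection %d was closed\n", nopoll_conn_get_id (conn));
    /* flag the loss here so the application stops using the connection */
}

/* Somewhere after the connection has been created: */
void install_close_handler (noPollConn *conn)
{
    nopoll_conn_set_on_close (conn, on_close, NULL);
}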
Best Regards,

Related

Closing websocket connections using libcurl when server sends close signal

I'm not an advanced user, so please bear with me.
I'm trying to implement a WebSocket client using libcurl, and I'm good until the last step of a connection - termination.
The general logic is as follows:
Client connects and sends an upgrade request.
Websocket server accepts/upgrades and starts sending gibberish.
Client adds up all the gibberish sizes.
Server sends a closing signal after 10 secs.
So far so good. I'm not processing the payloads of incoming messages, and I don't want to. I have very limited resources and I don't want to experience any performance loss in order to check each payload and search for a close signal.
I'm using libcurl's easy interface and receive data with curl_easy_perform(). Is there any way to detect a close signal, or close the websocket connection after 10 secs?
Close signals are part of the WebSocket protocol at the framing layer (see RFC 6455 Sections 1.4, 5, and 5.5.1).
AFAIK, libcurl doesn't natively support WebSockets, just HTTP (which a WebSocket uses for its opening handshake, so you can fake it with libcurl). So, if libcurl doesn't process the WebSocket frames for you, you would have to process them yourself, even if you ignore their payloads.
Otherwise, just set a 10-second timer for yourself and close the underlying TCP connection directly, which you can get from libcurl using curl_easy_getinfo(CURLINFO_ACTIVESOCKET).
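As a rough illustration of that idea (a sketch only, not the poster's code; the 10-second cutoff and the choice to abort from the write callback are assumptions), the write callback can add up the sizes and abort the transfer once the timer expires:

#include <time.h>
#include <curl/curl.h>

struct state {
    time_t start;    /* set to time(NULL) just before curl_easy_perform() */
    size_t total;    /* running total of received payload bytes */
};

static size_t on_data(char *ptr, size_t size, size_t nmemb, void *userdata)
{
    struct state *st = userdata;
    st->total += size * nmemb;               /* add up the "gibberish" */

    /* Our own 10-second timer: returning a short count makes
       curl_easy_perform() fail with CURLE_WRITE_ERROR and libcurl drops
       the connection. Alternatively, the raw socket could be fetched with
       curl_easy_getinfo(handle, CURLINFO_ACTIVESOCKET, &fd) and closed
       directly, as mentioned above. */
    if (time(NULL) - st->start >= 10)
        return 0;

    return size * nmemb;
}

The callback would be installed with CURLOPT_WRITEFUNCTION and CURLOPT_WRITEDATA before calling curl_easy_perform(), with state.start set just beforehand.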
But, if the server is sending you a close signal, you SHOULD send one back, per Section 5.5.1, which means parsing the frames properly (a minimal sketch follows the quoted text below):
If an endpoint receives a Close frame and did not previously send a Close frame, the endpoint MUST send a Close frame in response. (When sending a Close frame in response, the endpoint typically echos the status code it received.) It SHOULD do so as soon as practical. An endpoint MAY delay sending a Close frame until its current message is sent (for instance, if the majority of a fragmented message is already sent, an endpoint MAY send the remaining fragments before sending a Close frame). However, there is no guarantee that the endpoint that has already sent a Close frame will continue to process data.
After both sending and receiving a Close message, an endpoint considers the WebSocket connection closed and MUST close the underlying TCP connection. The server MUST close the underlying TCP connection immediately; the client SHOULD wait for the server to close the connection but MAY close the connection at any time after sending and receiving a Close message, e.g., if it has not received a TCP Close from the server in a reasonable time period.
If a client and server both send a Close message at the same time, both endpoints will have sent and received a Close message and should consider the WebSocket connection closed and close the underlying TCP connection.
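For completeness, here is a rough sketch of the minimal frame handling this implies. It assumes each received buffer starts on a frame boundary and that control frames are small and unfragmented; a robust client would keep parsing state across reads:

#include <stdint.h>
#include <stddef.h>
#include <sys/socket.h>

/* A Close frame carries opcode 0x8 in the low nibble of the first byte. */
static int is_close_frame(const uint8_t *buf, size_t len)
{
    return len >= 2 && (buf[0] & 0x0F) == 0x8;
}

/* Reply with an empty Close frame. Client-to-server frames must be masked
   (RFC 6455 Section 5.3): FIN + opcode 0x8, mask bit with length 0, then
   the 4-byte masking key (arbitrary here since the payload is empty). */
static void send_close(int sockfd)
{
    uint8_t frame[6] = { 0x88, 0x80, 0x12, 0x34, 0x56, 0x78 };
    send(sockfd, frame, sizeof frame, 0);
}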

Client-server program using message queues

I am trying to design a client-server style application in which my server is a daemon that accepts client requests and sends the client's data over a serial channel to the other side (an MCU whose firmware replies to the server's request over the same serial channel). My client can be a CLI application or any other system program.
My idea of design is -
Use message queues for communication between Client and Server since this is a local application and message queues are bidirectional and fast.
Implement a LIBRARY that acts as an interface between multiple clients and the server. It basically packetizes client data into a message (using my own protocol), creates message queues, connects to the server, sends/receives data, and then passes it to the respective client (using callbacks). This library also exposes an API that can be used by clients. Thus the library gives me the flexibility to add support for new clients while keeping the server program unchanged.
The server gets the data over serial from the other side and passes it to the library over a message queue. The library uses callbacks to send the data to the client.
EDIT:
I am thinking of creating message queues on the fly when client requests arrive. If I do this, how does the server daemon (which has already started at Linux boot-up) get information about this message queue? Does the message queue have a name that persists and can be used by other programs? I want to implement clients that block until they get a response from the server.
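For reference, POSIX message queues are identified by a slash-prefixed name that outlives the creating process, so a daemon started at boot can open the same queue by that name whenever it is created. A minimal sketch (the name /myapp_server, the sizes, and the function name are made up for the example; link with -lrt on Linux):

#include <fcntl.h>
#include <sys/stat.h>
#include <mqueue.h>

/* Create (or open, if it already exists) a named queue. Any other process
   that knows the name can mq_open() it later and exchange messages with
   mq_send()/mq_receive(), which block by default. */
mqd_t open_server_queue(void)
{
    struct mq_attr attr;
    attr.mq_flags   = 0;
    attr.mq_maxmsg  = 10;    /* queue capacity */
    attr.mq_msgsize = 256;   /* max bytes per message */
    attr.mq_curmsgs = 0;

    return mq_open("/myapp_server", O_CREAT | O_RDWR, 0660, &attr);
}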
Could you please review this design and tell me whether my approach is correct? Please reply if you have any other recommendations.
Thanks in advance.

I have to send 2 signals to send a message immediately in IE and Firefox (SSE)

I'm developing a SignalR app with a Silverlight client, and my project structure is:
A web app containing the server (hub)
a WPF app containing the WPF client (the first client)
a web app for the other web (Silverlight) clients
The problem I'm having is that when I try to send a message to the other clients from one of the web clients using Firefox or IE, I need to wait around 2 seconds. But if I send another signal, or 2 signals at the same time, it works fine. I can only be sure my signals are sent on time if I send 2 messages.
Could this be because of the transport, or something that I need to configure? Clients work fine with Chrome.
OK. Forcing the transport to long polling has solved my problem.
Here is how I start the connection:
IClientTransport transport = new LongPollingTransport();
await HubConnection.Start(transport);
You are correct; we have documented the issue in the release notes:
Server-sent events have known issues on Silverlight.
Messages are delayed when using server-sent events on Silverlight. To force long polling, use the following:
connection.Start(new LongPollingTransport());

Designing an interactive client

I'm trying to design a client program that connects to a remote server, sends various messages/requests to it, and expects responses based on the requests sent (e.g. send a join message and wait for a response, then either query for some resource or ask for some info, etc., in no particular order).
I would like to design the client so that the user can choose any of the possible requests to send after joining the server (after completing one request and getting a response, if any, it should allow them to carry out further requests or quit). Something like a menu of actions that it returns to each time (while also waiting for any data from the server)? However, I can't figure out how this could be done. Is there a way to do it (preferably without getting into forking/threads)?
Any inputs on this would be really great. TIA
I would start off with a simple chat server to get a feel for socket programming. Google "example TCP chat server" or something similar; you'll end up with simple examples like this: http://www.cs.ucsb.edu/~almeroth/classes/W01.176B/hw2/examples/tcp-server.c. Once you are able to telnet to your server and read/write to your clients, you should be able to progress from there and perform actions when your clients issue a specific command, and that sort of thing.
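If forking and threads are off the table, one common pattern is to multiplex standard input and the server socket with select(), so the menu and the server can be serviced from a single loop. A rough sketch (the protocol and menu handling are placeholders):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/select.h>

void interactive_loop(int server_fd)
{
    char line[512], buf[4096];

    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(STDIN_FILENO, &rfds);
        FD_SET(server_fd, &rfds);

        /* block until either the user types something or the server sends data */
        if (select(server_fd + 1, &rfds, NULL, NULL, NULL) < 0)
            break;

        if (FD_ISSET(STDIN_FILENO, &rfds)) {        /* user picked a menu item */
            if (!fgets(line, sizeof line, stdin))
                break;
            write(server_fd, line, strlen(line));    /* send the request */
        }
        if (FD_ISSET(server_fd, &rfds)) {            /* response (or push) from server */
            ssize_t n = read(server_fd, buf, sizeof buf);
            if (n <= 0)
                break;                               /* server closed the connection */
            fwrite(buf, 1, (size_t)n, stdout);
        }
    }
}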

How to achieve interrupt-driven communication from server to client with servlets?

We wrote a screen-sharing application in C++ based on sending screenshots.
It works by establishing a TCP connection between the server and the client, where the server forwards every new screenshot received for a user through the connection, and the client pops it up.
Now, we are trying to host this on Google App Engine, and therefore need to 'servlet'-ize and 'sandbox' the server code, so as to implement this forwarding through HTTP requests.
I imagine the following:
1. POST request with the screenshot as multipart form data (Apache uploads ..).
But now the server needs to contact the specified client (who is logged in) to send/forward the screenshot.
I'm not sure how to 'initiate' such a connection from the servlet to the client. The client doesn't run any servlet environment (of course).
I know HTTP 1.1 maintains a TCP connection, but it seems GAE won't let me use it.
One approach that comes to mind is to send a 100 Continue to every logged-in user at login, and respond with the screenshot once it arrives. Upon receipt the client makes another request, and so on.
An alternative (inspired by setting the refresh header for a browser) would be to have the app poll on a regular basis (every 5 secs).
You're not going to be able to do this effectively on GAE.
Problem 1: All output is buffered until your handler returns.
Problem 2: Quotas & Limits:
Some features impose limits unrelated to quotas to protect the stability of the system. For example, when an application is called to serve a web request, it must issue a response within 30 seconds. If the application takes too long, the process is terminated and the server returns an error code to the user. The request timeout is dynamic, and may be shortened if a request handler reaches its timeout frequently to conserve resources.
Comet support is on the product roadmap, but to me your app still seems like a poor fit for a GAE application.
Long polling is the concept used for this kind of asynchronous communication between server and client.
In long polling, the servlet keeps a map of clients and their associated messages, the key being the client id and the value being the list of messages to be sent to that client. When a client opens a connection with the server (sends a request to a servlet), the servlet checks the map for any messages waiting for it. If it finds some, it sends the messages to the client and returns from the method. On receiving the messages, the client opens a new connection to the server. If the servlet does not find any messages for the given client, it waits until the map is updated with messages for that client.
This is a late reply, I'm aware, but I believe that Google have an answer for this requirement: the Channel API.

Resources