I'm currently working on an embedded TCP server using the lwIP stack. I would like to use the keepalive functions in the server to let it detect when a client has unexpectedly lost the connection. Do you know whether keepalive can be used from the server side, or does it only apply to the client side?
Thanks in advance.
Yes: keepalive is a per-socket property, not a client-only feature. Using lwIP's BSD-like sockets API, you can enable it with the setsockopt() call on each accepted connection.
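For example, a minimal sketch for an accepted server socket (the timing values here are placeholders; the TCP_KEEP* options require LWIP_TCP_KEEPALIVE to be enabled in lwipopts.h, and depending on LWIP_COMPAT_SOCKETS the call may be spelled lwip_setsockopt()):

    #include "lwip/sockets.h"

    /* Enable keepalive on one accepted connection; values are in seconds. */
    static int enable_keepalive(int sock)
    {
        int on       = 1;   /* turn keepalive on                    */
        int idle     = 60;  /* first probe after 60 s of silence    */
        int interval = 10;  /* 10 s between unanswered probes       */
        int count    = 5;   /* declare the peer dead after 5 misses */

        if (setsockopt(sock, SOL_SOCKET,  SO_KEEPALIVE,  &on,       sizeof(on))       < 0 ||
            setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,     sizeof(idle))     < 0 ||
            setsockopt(sock, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval)) < 0 ||
            setsockopt(sock, IPPROTO_TCP, TCP_KEEPCNT,   &count,    sizeof(count))    < 0)
            return -1;
        return 0;
    }

Call it on the socket returned by accept(); once the probes go unanswered, subsequent recv()/read() calls on that socket fail and the server can drop the client.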
I am using the OpenSSL library to implement a TLS server.
How do I configure the heartbeat request timeout and retry count using the OpenSSL API to control the keepalive message flow?
I'm assuming you really do mean TLS, as you said, and not DTLS. Using heartbeats in TLS is quite unusual, although OpenSSL does support it in version 1.0.2. That support was removed in OpenSSL 1.1.0, so for that reason I would advise against using it in a new application. Use TCP keepalives instead.
The Heartbeat API is really quite simple. You can do three things:
1) Send a heartbeat using SSL_heartbeat()
2) Find out if a previously sent heartbeat is still pending a response using SSL_get_tlsext_heartbeat_pending()
3) Set the Heartbeat mode to disallow the peer from sending heartbeat requests using SSL_set_tlsext_heartbeat_no_requests()
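Put together, a liveness probe might look like the following minimal sketch against OpenSSL 1.0.2 only (the function name hb_probe and the calling policy are my invention; how often you call it is up to the application):

    #include <openssl/ssl.h>

    /* Returns 0 if the last heartbeat went unanswered (peer presumed
     * dead); otherwise queues a fresh HeartbeatRequest and returns
     * SSL_heartbeat()'s result (1 on success, <= 0 on failure). Call
     * once per timer tick after the handshake has completed. */
    static int hb_probe(SSL *ssl)
    {
        if (SSL_get_tlsext_heartbeat_pending(ssl))
            return 0;               /* previous request still unanswered */

        return SSL_heartbeat(ssl);  /* send a new HeartbeatRequest       */
    }

If you never want the peer to probe you, call SSL_set_tlsext_heartbeat_no_requests(ssl, 1) before the handshake; the three calls above are the entire surface of the API.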
Anything else is up to the application. Retries should not be necessary in TLS because it is designed to run over a reliable transport layer. If the connection is alive, the heartbeat will get there; if it isn't, it won't. The TCP layer will handle retransmission of lost packets. Timeouts should also really be done at the TCP layer. If the TCP connection times out, the SSL connection will fail.
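If you do want a hard upper bound on how long sent data may sit unacknowledged, that knob also lives at the TCP layer. A Linux-specific sketch (this is a kernel socket option, not an OpenSSL facility):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Abort the connection if sent data stays unacknowledged for longer
     * than `ms` milliseconds (Linux >= 2.6.37); the next SSL_read() or
     * SSL_write() on top of this socket then fails. */
    static int set_tcp_deadline(int fd, unsigned int ms)
    {
        return setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT, &ms, sizeof(ms));
    }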
I have an application that uses Camel Netty4 component as a consumer endpoint which is configured as a TCP client (clientMode set to true) with the reconnect option enabled. The reconnect feature works well, the TCP client automatically reconnects to the remote server after a connection outage. Unfortunately it seems that this reconnect behavior runs indefinitely until the connection is established. Is there some way to set a limit to this reconnect feature, i.e. put a limit on how many reconnect attempts can be made before throwing a connection error?
Another question, but this one is for the Netty4 component implemented as a producer that sends a payload to a remote server. Is there a way to configure the endpoint to enable the reconnect feature, which would allow the TCP client to try to establish a connection for a number of attempts before throwing a connection error?
In Camel 2.17-SNAPSHOT, there is no way to limit the number of reconnection attempts. The reconnection is handled by ClientModeTCPNettyServerBootstrapFactory#scheduleReconnect.
Currently it doesn't track the number of attempts, but it would be pretty simple to implement this functionality by adding a counter inside the anonymous Runnable.
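The actual fix would be Java inside that Runnable, but the shape of the bounded-retry idea is language-neutral; here it is as a plain C sketch over BSD sockets (all names and limits are invented):

    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Try to connect at most max_attempts times, sleeping interval_ms
     * between tries; return the connected socket, or -1 to surface a
     * connection error to the caller. */
    static int connect_with_limit(const char *ip, int port,
                                  int max_attempts, unsigned interval_ms)
    {
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port   = htons((unsigned short)port);
        inet_pton(AF_INET, ip, &addr.sin_addr);

        for (int attempt = 1; attempt <= max_attempts; attempt++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd >= 0 && connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
                return fd;                   /* connected                  */
            if (fd >= 0)
                close(fd);
            usleep(interval_ms * 1000u);     /* reconnectInterval analogue */
        }
        return -1;                           /* limit reached: give up     */
    }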
Could you please open a ticket in the Camel JIRA?
Thanks!
I don't think a limit for the retry feature is available at present for the consumer, but you can specify the interval at which these retries happen; the time unit is milliseconds.
I am trying to design a client/server application in which my server is a daemon that accepts client requests and sends the client's data over a serial channel to the other side (an MCU whose firmware will reply to the server's request over the same serial channel). My client can be a CLI application or any other system program.
My idea of the design is:
Use message queues for communication between client and server, since this is a local application and message queues are bidirectional and fast.
Implement a library that acts as an interface between multiple clients and the server. It handles packetizing client data into a message (using a self-defined protocol), creating the message queues, connecting to the server, sending/receiving data, and passing results back to the respective client (using callbacks). The library also exposes an API that clients can use, which gives me the flexibility to add support for new clients while keeping the server program unchanged.
The server gets the data over serial from the other side and passes it to the library over a message queue. The library uses callbacks to deliver the data to the client (a sketch of this path follows below).
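For illustration, here is a minimal C sketch of the client-side path under this design, using POSIX message queues (the queue names, struct layout, and sizes are all invented; link with -lrt on Linux). Named queues also answer the discovery question in the edit below: the daemon opens a well-known request queue at boot, and each client creates a private reply queue and passes its name inside the request.

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    struct request {
        char reply_q[64];    /* client tells the daemon where to answer */
        char payload[192];   /* packetized client data (own protocol)   */
    };

    /* Send one request to the daemon and block until it replies. */
    static ssize_t send_and_wait(const char *payload, char *reply, size_t len)
    {
        struct request r;
        memset(&r, 0, sizeof(r));
        snprintf(r.reply_q, sizeof(r.reply_q), "/demo_client.%d", (int)getpid());
        snprintf(r.payload, sizeof(r.payload), "%s", payload);

        /* Private per-client reply queue, created on the fly. */
        struct mq_attr attr = { .mq_maxmsg = 4, .mq_msgsize = (long)len };
        mqd_t rq = mq_open(r.reply_q, O_CREAT | O_RDONLY, 0600, &attr);
        if (rq == (mqd_t)-1)
            return -1;

        /* Well-known request queue the daemon opened at boot. */
        mqd_t sq = mq_open("/demo_server", O_WRONLY);
        if (sq == (mqd_t)-1) {
            mq_close(rq);
            mq_unlink(r.reply_q);
            return -1;
        }

        mq_send(sq, (const char *)&r, sizeof(r), 0);
        ssize_t n = mq_receive(rq, reply, len, NULL);   /* blocks the client */

        mq_close(sq);
        mq_close(rq);
        mq_unlink(r.reply_q);
        return n;
    }

One caveat on the "bidirectional" point: all readers of a single queue compete for its messages, so request/reply traffic is usually split across queues as above rather than sharing one.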
EDIT:
I am thinking of creating message queues on the fly when client requests arrive. If I do this, how does the server daemon (which has already started at Linux boot-up) get information about this new message queue? Does the message queue have a name that is persistent and usable by other programs? I also want clients to block until they get a response from the server.
Could you please review this design and tell me whether my approach is correct? Please reply if you have any other recommendations.
Thanks in advance.
I have a C application that receives commands/info from a client application on localhost but needs to send data to the client at another IP address. Does anyone have any previous experience doing such a thing? From what I've read, you cannot redirect a TCP connection?
Thanks
I understand how I can use a raw socket to listen to a server application and receive information, but I need an easy-to-access API, and I am very familiar with REST.
Is there a way to push data (not by using long polling) using a WCF service?
Here's my idea of how things should happen, at least at the beginning:
The client accesses a URI with its access parameters (ip, port, apikey).
The server responds with success/failure.
The server opens a socket for each channel with the client's details.
The server accesses a URI indicating that all channels are now streaming.
But how do I wrap the client or the server socket to access a URI?
Edit:
Maybe I should open a socket that notifies about changes on a channel, and on the client side require that it listen and raise the event accordingly.
That's not a very generic solution though, is it?
You should look into the Net.TCP binding, as described by Tomek (one of the WCF team members) here. You use it more-or-less like you would use the HTTP Duplex binding (i.e., the HTTP Long Poll), but it's much, much faster. It's still more complicated than REST, but it's dramatically easier than sockets, and I don't think you'll find a REST-type solution that does what you need.