libssh2 session cleanup without blocking? - c

My app uses libssh2 to communicate over SSH, and generally works fine. One problem I have is when the remote host dies unexpectedly -- the remote host in this case is an embedded device that can lose power at any time, so this isn't uncommon.
When that happens, my app detects that the remote computer has stopped responding to pings, and tears down the local end of the SSH connection like this:
void SSHSession :: CleanupSession()
{
   if (_uploadFileChannel)
   {
      libssh2_channel_free(_uploadFileChannel);
      _uploadFileChannel = NULL;
   }
   if (_sendCommandsChannel)
   {
      libssh2_channel_free(_sendCommandsChannel);
      _sendCommandsChannel = NULL;
   }
   if (_session)
   {
      libssh2_session_disconnect(_session, "bye bye");
      libssh2_session_free(_session);
      _session = NULL;
   }
}
Pretty straightforward, but the problem is that the libssh2_channel_free() calls can block for a long time waiting for the remote end to respond to the "I'm going away now" message, which it will never do because it's powered off... but in the meantime, my app is frozen (blocked in the cleanup-routine), which isn't good.
Is there any way (short of hacking libssh2) to avoid this? I'd like to just tear down the local SSH data structures, and never block during this tear-down. (I suppose I could simply leak the SSH session memory, or delegate it to a different thread, but those seem like ugly hacks rather than proper solutions)

I'm not experienced with libssh2, but perhaps you can get different behavior out of it by using libssh2_session_disconnect_ex with a different disconnect reason: SSH_DISCONNECT_CONNECTION_LOST.
libssh2_session_disconnect is equivalent to calling libssh2_session_disconnect_ex with the reason SSH_DISCONNECT_BY_APPLICATION. If libssh2 knows that the connection is lost, maybe it won't try to talk to the other side.
http://libssh2.sourceforge.net/doc/#libssh2sessiondisconnectex
http://libssh2.sourceforge.net/doc/#sshdisconnectcodes
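A minimal sketch of that idea applied to the cleanup routine above (whether this reason code actually stops libssh2 from waiting on the dead peer is an assumption you would have to verify against your libssh2 version):
if (_session)
{
   /* Tell libssh2 the transport is already gone rather than "going away".
      SSH_DISCONNECT_CONNECTION_LOST may let it skip waiting for a reply;
      this is untested and version-dependent. */
   libssh2_session_disconnect_ex(_session, SSH_DISCONNECT_CONNECTION_LOST,
                                 "connection lost", "");
   libssh2_session_free(_session);
   _session = NULL;
}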

Set the session to non-blocking mode and take control of reading data from the socket yourself by installing a callback function with libssh2_session_callback_set, using LIBSSH2_CALLBACK_RECV for cbtype:
void *libssh2_session_callback_set(LIBSSH2_SESSION *session, int cbtype, void *callback);
If you can't read data from the socket because of the error ENOTCONN, that means the remote end has closed the socket or the connection has failed; in that case return -ENOTCONN from your callback function.
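A rough sketch of what that could look like (my_recv_cb is a name of my own choosing; it assumes a POSIX socket and needs <errno.h> and <sys/socket.h>):
/* Replacement receive callback: report a dead socket instead of blocking. */
static ssize_t my_recv_cb(libssh2_socket_t sock, void *buffer, size_t length,
                          int flags, void **abstract)
{
    ssize_t n = recv(sock, buffer, length, flags);
    if (n < 0 && (errno == ENOTCONN || errno == ECONNRESET))
        return -ENOTCONN;            /* tell libssh2 the peer is gone */
    return (n < 0) ? -errno : n;
}

/* After creating the session: */
libssh2_session_set_blocking(_session, 0);
libssh2_session_callback_set(_session, LIBSSH2_CALLBACK_RECV, (void *)my_recv_cb);
Note that with the session in non-blocking mode, libssh2_channel_free() can return LIBSSH2_ERROR_EAGAIN instead of blocking, so the cleanup code has to be prepared to retry or give up.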

Related

change file descriptor without re-initializing the handle of uv_poll_t type

I have an application project running in a Linux environment, which includes libuv and another third-party library. The third-party library provides APIs for starting a TCP connection to a remote server (say xxx_connect()) and for getting the file descriptor of the active connection (say xxx_get_socket()). So far I have managed to get a valid file descriptor from xxx_get_socket() after xxx_connect() completed successfully, and to initialize a uv_poll_t handle with that file descriptor in my program.
Currently I am working on a reconnection function. After reconnecting to the same server (by running xxx_connect() again), xxx_get_socket() returns a different file descriptor, which means it is necessary to update the io_watcher.fd member of the uv_poll_t handle to receive data on the new active connection.
AFAIK uv_poll_init() internally invokes uv__io_check_fd(), uv__nonblock() and uv__io_init(), so it seems possible to modify io_watcher.fd of a uv_poll_t handle directly, instead of closing the handle and then initializing it again (which has extra latency); see the sample code below. However, I'm not sure whether it is safe to do so; I don't know whether the io_watcher.fd member of a uv_poll_t handle is referenced elsewhere in libuv (e.g. in uv_run()), which makes things more complex. Is my approach feasible, or should I re-initialize the uv_poll_t handle in such a case? I appreciate any feedback.
Possible approach, simplified sample code:
int uv_poll_change_fd( uv_poll_t *handle, int new_fd ) {
    int err;
    if (uv__fd_exists(handle->loop, new_fd))
        // ..... some code ....
    err = uv__io_check_fd(handle->loop, new_fd);
    if (err)
        // ..... some code ....
    err = uv__nonblock(new_fd, 1);
    // ..... some code ....
    handle->io_watcher.fd = new_fd;
    return 0;
}
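For comparison, the conventional route (the one the question is trying to avoid) is to stop and close the old handle and start a fresh uv_poll_t on the new descriptor, roughly like this. This is a sketch only: poll_handle is the existing uv_poll_t from the question, and my_poll_cb and the xxx_* calls stand in for the real poll callback and the third-party API.
/* Close the old poll handle, then re-initialize it on the new fd
   once the close callback has fired. */
static void on_poll_closed(uv_handle_t *handle) {
    uv_poll_t *poll = (uv_poll_t *)handle;
    int new_fd = xxx_get_socket();                 /* fd of the reconnected session */
    uv_poll_init(handle->loop, poll, new_fd);
    uv_poll_start(poll, UV_READABLE, my_poll_cb);
}

uv_poll_stop(&poll_handle);
uv_close((uv_handle_t *)&poll_handle, on_poll_closed);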

How to execute net.exe from Windows Runtime in C?

To introduce my problem: I'm currently working on a project to remotely shut down stationary PCs via a Siemens S7 PLC. Those PCs are used for an experimental manufacturing line for relays at my university. The principle is quite simple: the PLC sends the IP of a computer via UDP to a special "always on" PC with monitoring functions, on which a UDP server listens for packets (this PC starts up together with the manufacturing line; the OS is Windows 10; the server is written in C). If it receives a UDP packet, it triggers a net use command followed by a shutdown command to that specific IP. This works just fine if the server.exe is started manually.
The goal is to get the server working when it is started automatically, e.g. by the Task Scheduler. Exactly this is the main issue here: it does not work as a background task. It receives the packets (I tested it), but then nothing happens and no computer shuts down. So I guessed it must be the net use or the shutdown command. At first I tried to set up the net use with a system() call:
char command[100] = { 0 };
snprintf(command, sizeof(command), "#NET USE \\\\%s\\IPC$ %s /USER:%s", ip, pw, usr);
system(command);
As this didn't work, I found the following statement about system() on the Windows API reference page:
This API cannot be used in applications that execute in the Windows Runtime.
So I figured I had to find an alternative, which led me to my next try with the ShellExecute() function. It seemed like there are no problems concerning execution from the Windows Runtime, because I couldn't find any word about it on the reference page. I tried:
char programpath[100] = "C:\\Windows\\System32\\net.exe";
char cmdline[100] = { 0 };
snprintf(cmdline, sizeof(cmdline), "Use \\\\%s\\IPC$ %s /USER:%s", ip, pw, usr);
ShellExecute(NULL, "open", programpath, cmdline, NULL, SW_HIDE);
But nope, that won't work from the background either. When I asked my professor about it, he said he is more into Linux, so he could only guess that there might be a problem with permissions. He suspected that my server possibly has no rights to start those two programs when they are called from a background process. But even after a long time of searching the internet, I can't find a proper solution that fits my problem.

nonblocking BIO_do_connect blocked when there is no internet connected

I am using OpenSSL 0.9.8x as follows:
bio = BIO_new_ssl_connect(ctx);
BIO_get_ssl(bio, & ssl);
SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY);
BIO_set_nbio(bio, 1);
in_addr_t serverIP = inet_addr(HTTPS_SERVER_IP);
BIO_set_conn_ip(bio, &serverIP );
BIO_set_conn_port(bio, HTTPS_SERVER_PORT_STR);
while(1) {
    printf("BIO_do_connect start>>>>\n");
    if(BIO_do_connect(bio) <= 0 && BIO_should_retry(bio)) {
        sleep(1);
        printf("BIO_do_connect retry>>>>\n");
    }
    else {
        printf("Connect success.\n");
    }
}
It works fine when the internet connection is OK (i.e. it can connect to the server). But when the internet connection is limited (i.e. it can't connect to the server), BIO_do_connect() blocks after one or more retries.
The output is as follows:
BIO_do_connect start>>>>
BIO_do_connect retry>>>>
BIO_do_connect start>>>>
BIO_do_connect retry>>>>
BIO_do_connect start>>>>
Finally, it blocks in BIO_do_connect(...). Why does this happen?
It is probably your use of SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY).
From the 0.9.8 man page:
SSL_MODE_AUTO_RETRY
Never bother the application with retries if the transport is blocking.
If a renegotiation take place during normal operation, a
SSL_read() or SSL_write() would return
with -1 and indicate the need to retry with SSL_ERROR_WANT_READ.
In a non-blocking environment applications must be prepared to handle
incomplete read/write operations.
In a blocking environment, applications are not always prepared to
deal with read/write operations returning without success report. The
flag SSL_MODE_AUTO_RETRY will cause read/write operations to only
return after the handshake and successful completion.
The effect of SSL_MODE_AUTO_RETRY is to automatically retry operations that would otherwise return back to application code (even when using blocking connections). It doesn't make any sense to use it when you want non-blocking operation.
Try removing that line completely.
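With that mode removed, the retry loop might look roughly like this (a sketch only; unlike the loop in the question, it also breaks out on hard errors):
int rc;
while ((rc = BIO_do_connect(bio)) <= 0) {
    if (!BIO_should_retry(bio)) {
        fprintf(stderr, "Connect failed.\n");   /* hard error, e.g. host unreachable */
        break;
    }
    sleep(1);   /* or select()/poll() on the underlying fd instead of sleeping */
}
if (rc > 0)
    printf("Connect success.\n");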
By the way 0.9.8 is out of support and is no longer receiving security updates. You really ought to upgrade to a more recent version.
Add the following:
in_addr_t serverIP = inet_addr(HTTPS_SERVER_IP);
BIO_set_conn_ip(bio, &serverIP );
BIO_set_conn_port(bio, HTTPS_SERVER_PORT_STR);

libmpdclient: detect lost connection to MPD daemon

I'm writing a plugin for my statusbar to print the MPD state, currently using the libmpdclient library. It has to be robust enough to properly handle lost connections in case MPD is restarted, but simply checking with mpd_connection_get_error on the existing mpd_connection object does not work; it can detect the error only when the initial mpd_connection_new fails.
This is the simplified code I'm working with:
#include <stdio.h>
#include <unistd.h>
#include <mpd/client.h>

int main(void) {
    struct mpd_connection* m_connection = NULL;
    struct mpd_status* m_status = NULL;
    char* m_state_str;

    m_connection = mpd_connection_new(NULL, 0, 30000);

    while (1) {
        // this check works only on start up (i.e. when mpd_connection_new failed),
        // not when the connection is lost later
        if (mpd_connection_get_error(m_connection) != MPD_ERROR_SUCCESS) {
            fprintf(stderr, "Could not connect to MPD: %s\n", mpd_connection_get_error_message(m_connection));
            mpd_connection_free(m_connection);
            m_connection = NULL;
        }

        m_status = mpd_run_status(m_connection);

        if (mpd_status_get_state(m_status) == MPD_STATE_PLAY) {
            m_state_str = "playing";
        } else if (mpd_status_get_state(m_status) == MPD_STATE_STOP) {
            m_state_str = "stopped";
        } else if (mpd_status_get_state(m_status) == MPD_STATE_PAUSE) {
            m_state_str = "paused";
        } else {
            m_state_str = "unknown";
        }

        printf("MPD state: %s\n", m_state_str);
        sleep(1);
    }
}
When MPD is stopped during the execution of the above program, it segfaults with:
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00007fb2fd9557e0 in mpd_status_get_state () from /usr/lib/libmpdclient.so.2
The only way I can think of to make the program safe is to establish a new connection in every iteration, which I was hoping to avoid. But then what if the connection is lost between individual calls to libmpdclient functions? How often, and more importantly how exactly, should I check if the connection is still alive?
The only way I could find that really works (beyond re-establishing a connection with each run) is using the idle command. If mpd_recv_idle (or mpd_run_idle) returns 0, it is an error condition, and you can take that as a cue to free your connection and reconnect from there.
It's not a perfect solution, but it does let you keep a live connection between runs, and it helps you avoid segfaults (though I don't think you can completely avoid them, because if you send a command and mpd is killed before you recv it, I'm pretty sure the library still segfaults). I'm not sure if there is a better solution. It would be fantastic if there was a reliable way to detect if your connection was still alive via the API, but I can't find anything of the sort. It doesn't seem like libmpdclient is well-built for very long-lived connections that have to deal with mpd instances that go up and down over time.
Another lower-level option is to use sockets to interact with MPD through its protocol directly, though in doing that you'd likely reimplement much of libmpdclient itself anyway.
EDIT: Unfortunately, the idle command does block until something happens, and can sit blocking for as long as a single audio track will last, so if you need your program to do other things in the interim, you have to find a way to implement it asynchronously or in another thread.
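A minimal sketch of the idle-based check (reconnection parameters copied from the question; error handling trimmed):
/* Block in idle; a return value of 0 means error / connection lost. */
enum mpd_idle events = mpd_run_idle(m_connection);
if (events == 0) {
    mpd_connection_free(m_connection);
    m_connection = mpd_connection_new(NULL, 0, 30000);   /* reconnect */
}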
Assuming "conn" is a connection created with "mpd_connection_new":
if (mpd_connection_get_error(conn) == MPD_ERROR_CLOSED) {
    // mpd_connection_get_error_message(conn)
    // will return "Connection closed by the server"
}
You can run this check after almost any libmpdclient call, including "mpd_recv_idle" or (as per your example) "mpd_run_status".
I'm using libmpdclient 2.18, and this certainly works for me.
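One way to wire that into the loop from the question is to check both the returned status pointer and the connection error after every call, and reconnect when either indicates failure (a sketch, reusing the question's connection parameters):
m_status = mpd_run_status(m_connection);
if (m_status == NULL ||
    mpd_connection_get_error(m_connection) != MPD_ERROR_SUCCESS) {
    fprintf(stderr, "MPD error: %s\n", mpd_connection_get_error_message(m_connection));
    mpd_connection_free(m_connection);
    m_connection = mpd_connection_new(NULL, 0, 30000);   /* reconnect */
    continue;
}
/* ... use m_status as before, then release it ... */
mpd_status_free(m_status);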

Using DHCP libraries results in an infinite loop

I have code that uses the DHCP libraries (package 4.2.6) to get the hardware address of the DHCP client connected to the system. During this process, after the DHCP objects are initialized, I call dhcpctl_connect() as follows, which results in an infinite loop.
dhcpctl_initialize ();
status=dhcpctl_connect (&connection, "127.0.0.1", 7911, 0);
When I tried to debug the issue, I found that the function "omapi_wait_for_completion" (in omapi/dispatch.c) has a do-while check on the waiter object and its status; the object should change its state to ready in order to come out of this loop, but this never happens, which results in an infinite loop.
Here I am just copying the loop as a reference.
do {
    status = omapi_one_dispatch ((omapi_object_t *)waiter, t);
    if (status != ISC_R_SUCCESS)
        return status;
} while (!waiter || !waiter -> ready);
NOTE:
There is no issue when I run the binary generated from this code on the system command line, but when I trigger the same command through an application we see this issue.
The application that triggers my binary doesn't use DHCP libraries or files.
Please note that the same binary with the same application works fine with the older DHCP package (3.0.5).
Thanks in advance for your help.
