I'm writing a plugin for my statusbar to print the MPD state, currently using the libmpdclient library. It has to be robust enough to handle lost connections in case MPD is restarted, but simply checking with mpd_connection_get_error on the existing mpd_connection object does not work: it detects an error only when the initial mpd_connection_new fails.
This is a simplified version of the code I'm working with:
#include <stdio.h>
#include <unistd.h>
#include <mpd/client.h>

int main(void) {
    struct mpd_connection* m_connection = NULL;
    struct mpd_status* m_status = NULL;
    char* m_state_str;

    m_connection = mpd_connection_new(NULL, 0, 30000);

    while (1) {
        // this check works only on start-up (i.e. when mpd_connection_new failed),
        // not when the connection is lost later
        if (mpd_connection_get_error(m_connection) != MPD_ERROR_SUCCESS) {
            fprintf(stderr, "Could not connect to MPD: %s\n",
                    mpd_connection_get_error_message(m_connection));
            mpd_connection_free(m_connection);
            m_connection = NULL;
        }

        m_status = mpd_run_status(m_connection);
        if (mpd_status_get_state(m_status) == MPD_STATE_PLAY) {
            m_state_str = "playing";
        } else if (mpd_status_get_state(m_status) == MPD_STATE_STOP) {
            m_state_str = "stopped";
        } else if (mpd_status_get_state(m_status) == MPD_STATE_PAUSE) {
            m_state_str = "paused";
        } else {
            m_state_str = "unknown";
        }

        printf("MPD state: %s\n", m_state_str);
        sleep(1);
    }
}
When MPD is stopped during the execution of the above program, it segfaults with:
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00007fb2fd9557e0 in mpd_status_get_state () from /usr/lib/libmpdclient.so.2
The only way I can think of to make the program safe is to establish a new connection in every iteration, which I was hoping to avoid. But then what if the connection is lost between individual calls to libmpdclient functions? How often, and more importantly how exactly, should I check if the connection is still alive?
The only way I could find that really works (beyond reestablishing a connection with each run) is using the idle command. If mpd_recv_idle (or mpd_run_idle) returns 0, it is an error condition, and you can take that as a cue to free your connection and reconnect from there.
It's not a perfect solution, but it does let you keep a live connection between runs, and it helps you avoid segfaults (though I don't think you can avoid them completely: if you send a command and MPD is killed before you recv it, I'm pretty sure the library still segfaults). I'm not sure there is a better solution. It would be fantastic if there were a reliable way to detect via the API whether your connection is still alive, but I can't find anything of the sort. libmpdclient doesn't seem well-built for very long-lived connections that have to deal with MPD instances going up and down over time.
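A minimal sketch of that pattern (the "conn" variable and the reconnect policy are mine, not from the docs):

enum mpd_idle events = mpd_run_idle(conn);
if (events == 0) {
    // 0 signals an error: free the dead connection and start over
    mpd_connection_free(conn);
    conn = mpd_connection_new(NULL, 0, 30000);
}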
Another lower-level option is to use sockets to interact with MPD through its protocol directly, though in doing that you'd likely reimplement much of libmpdclient itself anyway.
EDIT: Unfortunately, the idle command does block until something happens, and can sit blocking for as long as a single audio track lasts, so if you need your program to do other things in the interim, you have to implement it asynchronously or in another thread; one non-blocking approach is sketched below.
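One way around the blocking, assuming you are willing to watch the socket yourself: libmpdclient exposes mpd_connection_get_fd() and splits idle into mpd_send_idle()/mpd_recv_idle(), so you can poll() the fd with a timeout and only receive when data is actually ready. A sketch (the timeout and error handling are placeholders):

#include <poll.h>

mpd_send_idle(conn);    // enter idle mode without waiting for the reply

struct pollfd pfd = { .fd = mpd_connection_get_fd(conn), .events = POLLIN };
int n = poll(&pfd, 1, 1000 /* ms */);

if (n > 0) {
    enum mpd_idle events = mpd_recv_idle(conn, true);   // won't block now
    if (events == 0) {
        // connection died: free it and reconnect
    }
} else if (n == 0) {
    mpd_run_noidle(conn);   // timed out: leave idle mode to send other commands
}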
Assuming "conn" is a connection created with "mpd_connection_new":
if (mpd_connection_get_error(conn) == MPD_ERROR_CLOSED) {
    // mpd_connection_get_error_message(conn)
    // will return "Connection closed by the server"
}
You can run this check after almost any libmpdclient call, including "mpd_recv_idle" or (as per your example) "mpd_run_status".
I'm using libmpdclient 2.18, and this certainly works for me.
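For example, folding that check into your status loop (a sketch; the reconnect-every-second policy is my own choice, and the error branch also covers the initial connect failing):

#include <stdio.h>
#include <unistd.h>
#include <mpd/client.h>

int main(void) {
    struct mpd_connection *conn = NULL;

    while (1) {
        if (conn == NULL) {
            conn = mpd_connection_new(NULL, 0, 30000);
        }

        struct mpd_status *status = NULL;
        if (conn != NULL && mpd_connection_get_error(conn) == MPD_ERROR_SUCCESS) {
            status = mpd_run_status(conn);
        }

        if (status == NULL) {
            // connect failed, or the server went away
            // (mpd_connection_get_error(conn) == MPD_ERROR_CLOSED in the latter case)
            if (conn != NULL) {
                fprintf(stderr, "MPD error: %s\n", mpd_connection_get_error_message(conn));
                mpd_connection_free(conn);
                conn = NULL;
            }
        } else {
            const char *state_str;
            switch (mpd_status_get_state(status)) {
            case MPD_STATE_PLAY:  state_str = "playing"; break;
            case MPD_STATE_STOP:  state_str = "stopped"; break;
            case MPD_STATE_PAUSE: state_str = "paused";  break;
            default:              state_str = "unknown"; break;
            }
            printf("MPD state: %s\n", state_str);
            mpd_status_free(status);
        }

        sleep(1);   // poll/retry interval
    }
}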
I have an application project running in a Linux environment which includes libuv and another third-party library. The third-party library provides APIs for starting a TCP connection to a remote server (say xxx_connect()) and getting the file descriptor of the active connection (say xxx_get_socket()). So far I have managed to get a valid file descriptor from xxx_get_socket() after xxx_connect() completed successfully, and to initialize a uv_poll_t handle with that file descriptor in my program.
Currently I am working on a reconnection function. After reconnecting to the same server (by running xxx_connect() again), xxx_get_socket() returns a different file descriptor, which means it is necessary to update the io_watcher.fd member of the uv_poll_t handle to receive data on the new active connection.
AFAIK uv_poll_init() internally invokes uv__io_check_fd(), uv__nonblock() and uv__io_init(), so it seems possible to modify io_watcher.fd of a uv_poll_t handle (see sample code below) instead of closing the handle and then initializing it again, which has extra latency. However, I'm not sure whether it is safe to do so: I don't know whether the io_watcher.fd member of a uv_poll_t handle is referenced elsewhere in libuv (e.g. in uv_run()), which makes things more complex. Is my approach feasible, or should I re-initialize the uv_poll_t handle in such a case? I'd appreciate any feedback.
Possible approach, simplified sample code:
int uv_poll_change_fd( uv_poll_t *handle, int new_fd ) {
    if (uv__fd_exists(handle->loop, new_fd))
        // ..... some code ....
    err = uv__io_check_fd(handle->loop, new_fd);
    if (err)
        // ..... some code ....
    err = uv__nonblock(new_fd, 1);
    // ..... some code ....
    handle->io_watcher.fd = new_fd;
}
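For comparison, the re-initialization path I'm trying to avoid would look roughly like this (on_poll is my read callback):

// Close the old handle, then set up a poll on the fresh fd once the close
// callback has run (which happens on the next loop iteration; that's the latency).
static void poll_close_cb(uv_handle_t *handle) {
    uv_poll_t *poll = (uv_poll_t *) handle;
    int new_fd = xxx_get_socket();              // fd of the new connection
    uv_poll_init(handle->loop, poll, new_fd);   // the struct may be reused here
    uv_poll_start(poll, UV_READABLE, on_poll);
}

void reconnect(uv_poll_t *poll) {
    uv_poll_stop(poll);
    uv_close((uv_handle_t *) poll, poll_close_cb);
}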
I'm trying to make a daemon simulate a keypress. I already got it working for a non-daemon process.
#include <xdo.h>

int main()
{
    xdo_t * x = xdo_new(NULL);    // NULL: connect to the display named by $DISPLAY
    xdo_enter_text_window(x, CURRENTWINDOW, "Hallo xdo!", 500000);
    xdo_free(x);                  // release the display connection
    return 0;
}
If I try the same code from my daemon, I get the following error:
Error: Can't open display: (null)
Is there a way to still make it work with xdo or something else?
Your process must have the $DISPLAY environment variable set, specifying the X session to send these keys to, and yours doesn't seem to have it set (hence the "(null)" in the error message).
That also means it needs all the necessary privileges: access to the ~/.Xauthority of the session's owner, and to the sockets in /tmp/.X11-unix.
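For example, a daemon could export those variables itself before calling xdo_new(); the display name ":0" and the .Xauthority path below are assumptions about your setup:

#include <stdlib.h>
#include <xdo.h>

int main(void)
{
    // These values depend on which X session/user you are targeting.
    setenv("DISPLAY", ":0", 1);
    setenv("XAUTHORITY", "/home/youruser/.Xauthority", 1);

    xdo_t *x = xdo_new(NULL);    // NULL: use $DISPLAY
    if (x == NULL)
        return 1;                // still cannot reach the display

    xdo_enter_text_window(x, CURRENTWINDOW, "Hallo xdo!", 500000);
    xdo_free(x);
    return 0;
}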
I have code that uses the DHCP libraries (package: 4.2.6) to get the hardware address of the DHCP client connected to the system. During this process, after the DHCP objects got initialized, I tried dhcpctl_connect() as follows, which results in an infinite loop:
dhcpctl_initialize ();
status = dhcpctl_connect (&connection, "127.0.0.1", 7911, 0);
When I tried to debug the issue, I found that the function omapi_wait_for_completion (in omapi/dispatch.c) has a do-while check on the waiter object and its status: the object should change its state to ready to exit the loop, but that never happens, resulting in an infinite loop.
Here I am just copying the loop as a reference:
do {
    status = omapi_one_dispatch ((omapi_object_t *)waiter, t);
    if (status != ISC_R_SUCCESS)
        return status;
} while (!waiter || !waiter -> ready);
NOTE:
There is no issue when I run the binary generated from the code on the system command line, but when I trigger the same command through an application, we see this issue.
The application that triggers my binary doesn't use the DHCP libraries or files.
Please note that the same binary triggered by the same application works fine with the older DHCP package (3.0.5).
Thanks in advance for your help.
My app uses libssh2 to communicate over SSH, and generally works fine. One problem I have is when the remote host dies unexpectedly -- the remote host in this case is an embedded device that can lose power at any time, so this isn't uncommon.
When that happens, my app detects that the remote computer has stopped responding to pings, and tears down the local end of the SSH connection like this:
void SSHSession::CleanupSession()
{
    if (_uploadFileChannel)
    {
        libssh2_channel_free(_uploadFileChannel);
        _uploadFileChannel = NULL;
    }
    if (_sendCommandsChannel)
    {
        libssh2_channel_free(_sendCommandsChannel);
        _sendCommandsChannel = NULL;
    }
    if (_session)
    {
        libssh2_session_disconnect(_session, "bye bye");
        libssh2_session_free(_session);
        _session = NULL;
    }
}
Pretty straightforward, but the problem is that the libssh2_channel_free() calls can block for a long time waiting for the remote end to respond to the "I'm going away now" message, which it will never do because it's powered off... but in the meantime, my app is frozen (blocked in the cleanup-routine), which isn't good.
Is there any way (short of hacking libssh2) to avoid this? I'd like to just tear down the local SSH data structures, and never block during this tear-down. (I suppose I could simply leak the SSH session memory, or delegate it to a different thread, but those seem like ugly hacks rather than proper solutions)
I'm not experienced with libssh2, but perhaps we can get different behavior out of libssh2 by using libssh2_session_disconnect_ex and a different disconnect reason: SSH_DISCONNECT_CONNECTION_LOST.
libssh2_session_disconnect is equivalent to using libssh2_session_disconnect_ex with the reason SSH_DISCONNECT_BY_APPLICATION. If libssh2 knows that the connection is lost, maybe it won't try to talk to the other side.
http://libssh2.sourceforge.net/doc/#libssh2sessiondisconnectex
http://libssh2.sourceforge.net/doc/#sshdisconnectcodes
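Something like this (a sketch; I haven't verified whether libssh2 actually skips the network round-trip for this reason code):

if (_session)
{
    // Tell the library the link is already gone, rather than "going away"
    libssh2_session_disconnect_ex(_session, SSH_DISCONNECT_CONNECTION_LOST,
                                  "Connection lost", "");
    libssh2_session_free(_session);
    _session = NULL;
}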
Set the session to non-blocking mode and take control of reading data from the socket into your own hands by installing a callback that reads from the socket, using libssh2_session_callback_set with LIBSSH2_CALLBACK_RECV as cbtype:
void *libssh2_session_callback_set(LIBSSH2_SESSION *session, int cbtype, void *callback);
If you can't read data from the socket because of the error ENOTCONN, the remote end has closed the socket or the connection has failed; return -ENOTCONN from your callback in that case.
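A sketch of such a callback (assuming the session was made non-blocking with libssh2_session_set_blocking(session, 0); libssh2 expects a negative errno value on failure):

#include <errno.h>
#include <sys/socket.h>
#include <libssh2.h>

// Custom recv: report a dead peer as -ENOTCONN so libssh2 stops waiting
// on the socket instead of blocking forever.
static ssize_t my_recv(libssh2_socket_t sockfd, void *buffer, size_t length,
                       int flags, void **abstract)
{
    (void) abstract;
    ssize_t n = recv(sockfd, buffer, length, flags);
    if (n < 0 && errno == ENOTCONN)
        return -ENOTCONN;
    return n;
}

// After creating the session:
libssh2_session_callback_set(session, LIBSSH2_CALLBACK_RECV, (void *) my_recv);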
I have an application which uses the libuv library. It runs the default loop:
uv_run(uv_default_loop());
How can the application exit gracefully in case of a failure? Currently I am doing it as in the following example:
uv_tcp_t* tcp = malloc(sizeof(uv_tcp_t));

int r = uv_tcp_init(uv_default_loop(), tcp);
if (r) {
    free(tcp);
    uv_loop_delete(uv_default_loop());
    exit(EXIT_FAILURE);
}
Should uv_loop_delete function be called? What does it do? Does it drop all pending callback functions? Does it close all currently opened TCP connections? Do I have to do it manually before exiting?
uv_loop_delete is declared in uv.h; its implementation looks like this:
void uv_loop_delete(uv_loop_t* loop) {
  uv_ares_destroy(loop, loop->channel);
  ev_loop_destroy(loop->ev);
#if __linux__
  if (loop->inotify_fd == -1) return;
  ev_io_stop(loop->ev, &loop->inotify_read_watcher);
  close(loop->inotify_fd);
  loop->inotify_fd = -1;
#endif
#if HAVE_PORTS_FS
  if (loop->fs_fd != -1)
    close(loop->fs_fd);
#endif
}
It will, effectively, close every file descriptor it can: TCP connections, inotify watches, the socket used to read events, pipe fds, and so on.
=> Yes, this function will close everything you have opened through libuv.
NB: In any case, when your application exits, your operating system will clean up and close everything you have left open, without any mercy.
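If you want the orderly version anyway, the usual pattern with the same old-style API as in the question is to close your handles and let the loop run the pending close callbacks before deleting it. A sketch:

uv_close((uv_handle_t*) tcp, NULL);   // request close; the callback may be NULL
uv_run(uv_default_loop());            // returns once nothing active remains
uv_loop_delete(uv_default_loop());
exit(EXIT_FAILURE);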