When calling dbus_connection_send_with_reply through the D-Bus C API in Linux, I pass in a timeout of 1000ms, but the timeout never occurs when the receiving application doesn't reply.
If the receiving application does send a reply then this is received correctly.
Could this be due to the way that I'm servicing libdbus?
I am calling dbus_connection_read_write and dbus_connection_dispatch periodically for servicing.
Thanks
It is highly recommended that you use a D-Bus library other than libdbus, as libdbus is fiddly to use correctly, as you are finding. If possible, use GDBus or QtDBus instead, as they are much higher-level bindings which are easier to use. If you need a lower-level binding, sd-bus is more modern than libdbus.
If you use GDBus, you can use GMainLoop to implement a main loop to handle timeouts, and set the timeout period with g_dbus_proxy_set_default_timeout() or in the arguments to individual g_dbus_proxy_call() calls. If you use sd-bus, you can use sd-event.
I have a concrete problem that in a higher-level language I would solve using async/await: we have a blocking system/hardware/network call that takes several seconds to complete. We would like that call to happen in the background while we make several other such calls in parallel.
I have thought of a couple of solutions and there might be better ones than these:
- start a thread and signal a condition variable/semaphore once it's done;
- provide a callback that is executed when the call finishes (old JavaScript style);
- create your own custom scheduler to actually mimic async/await.
What's the ideal solution to this in a systems language such as Odin or C?
I would recommend using whatever asynchronous approach is most common in your system / language. I do not recommend using a separate thread, and I do not recommend trying to port a high-level style of asynchronous programming into a lower-level language / platform. You want your consumers using something that feels natural to them, rather than learning a whole new paradigm just to call your API.
If you're on Windows, you should be able to signal a ManualResetEvent on completion. Explicit callbacks would also be acceptable.
I haven't written asynchronous code on Linux, but I suspect adopting libevent or libuv would be the way to go.
If you're exposing an API for others to consume and you want it to feel the most platform-like, I believe you'd have to do that at the driver level. That allows you to fully implement support for OVERLAPPED I/O (on Windows) or epoll (on Linux).
As we all know, libuv is an asynchronous network library; it will do its best to send out the data. However, in some cases we cannot use all the bandwidth: the transmission speed needs to be capped at a specified value. How can this be done with the libuv API?
libuv does not provide a built-in mechanism to do this, but it does give you enough information to build it yourself. Assuming you're using TCP, you'd be calling uv_write repeatedly. You can then query write_queue_size (http://docs.libuv.org/en/v1.x/stream.html#c.uv_stream_t.write_queue_size) and stop writing until it has drained a bit. You can do this check in the callback passed to uv_write.
I am working on a project which requires monitoring a socket. I know how to do busy waiting with a while loop that keeps reading incoming data whenever there is any.
Is there a way to set up a callback function, so that whenever there is data on the I/O, it will read the data and call my callback?
There are more-or-less universally supported socket calls: poll(), select(), epoll(), which don't provide a callback but are better than a simple blocking read(). On a fully compliant POSIX system, there is posix_aio. For cross-platform support, there are several libraries (not part of the standard C library) that provide what you want, like libuv, libevent, etc. – srdjan.veljkovic
I am writing a cross-platform library which emulates socket behaviour, adding functionality in between (App->mylib->sockets).
I want it to be as transparent as possible for the programmer, so primitives like select and poll must work correctly with this lib.
The problem is that when data becomes available (for instance) on the real socket, it still has to go through a lot of processing, so if select points at the real socket fd, the app will be blocked for a long time. I want select/poll to unblock only when the data is ready to be consumed (after my lib has done all the processing).
So I came across eventfd, which allows me to do exactly what I want, i.e. to manipulate select/poll behaviour on a given fd.
Since I am much more familiar with the Linux environment, I don't know what the Windows equivalent of eventfd is. I tried searching but had no luck.
Note:
Another approach would be to use an extra socket connected to the interface, but that seems like a lot of overhead: making a system call with all the data just because Windows (apparently) doesn't have this functionality.
Or I could just implement my own select, reinventing the wheel. =/
There is none. eventfd is a Linux-specific feature -- it's not even available on other UNIXy operating systems, such as BSD and Mac OS X.
Yes, but it's ridiculous. You can make a Layered Service Provider (globally installed...) that fiddles with the system's network stack. You get to implement all the WinSock2 functions yourself, and forward most of them to the underlying TCP. This is often used by firewalls or antivirus programs to insert themselves into the stack and see what's going on.
In your case, you'd want to use an ioctl to turn on "special" behaviour for your application. Whenever the app tries to create a socket, the call gets forwarded to your function, which in turn opens a real TCP socket (say). Instead of returning that HANDLE, though, you use a WinSock function to ask the kernel for a dummy handle, and give that to the application instead. You do your stuff in a thread. Then, when the app calls WinSock functions on the dummy handle, they end up in your implementations of read, select, etc. You can decouple select notifications on the dummy handle from those on the actual handle. This lets you do things like, for example, transparently give an app a socket that wraps data each way in encryption, indistinguishably from the original socket. (Almost indistinguishably! You can call some LSP APIs on a handle to find out whether there's actually an underlying handle you weren't given.)
Pretty heavy-weight, and monstrous in some ways. But, it's there... Hope that's a useful overview.
I'm working on a basic UDP socket file transfer server/client setup, using go-back-n windowing, and unfortunately am stuck doing it using Winsock due to assignment constraints.
Normally, in order to manage timeouts on outstanding packets I would just use signal(), but I am unsure how/whether this actually works on Windows, and whether it is the best solution anyway. Is there some best way to handle these sorts of socket timeouts? Or am I best off just managing timeouts with select()?
If your application has a main() function, then using select() to manage timeouts is the most convenient approach: it uses only socket API calls, so the code should work on any platform supporting a BSD-style socket API, and it doesn't require a Windows message loop.
If you are writing a windowed GUI-style application, usually with a WinMain() entry point and a message loop, then WSAAsyncSelect() on a socket handle will get read- (and write-) ready notification messages posted to an HWND. SetTimer() likewise posts periodic WM_TIMER notifications, and GetTickCount() can be used to detect which socket has been idle too long.