How to control the transmission speed under libuv?

As we all know, libuv is an asynchronous network library, and it will do its best to send data out as fast as it can. In some cases, however, we cannot use all of the bandwidth, and the transmission speed needs to be held at a specified value. How can this be done with the libuv API?

libuv does not provide a built-in mechanism to do this, but it does give you enough information to build it yourself. Assuming you're using TCP, you'd be calling uv_write repeatedly. You can then query write_queue_size (http://docs.libuv.org/en/v1.x/stream.html#c.uv_stream_t.write_queue_size) and, if it is still large, stop writing until it has drained a bit. You can do this check in the callback passed to uv_write.
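
A minimal sketch of that idea in C, assuming a connected TCP stream and a timer initialised with uv_timer_init(); MAX_QUEUED, the 100 ms back-off, and send_next_chunk() are our own illustrative names, not part of libuv:

    #include <stdlib.h>
    #include <uv.h>

    /* Illustrative threshold: pause writing while more than this many
     * bytes are still queued inside libuv (the name is ours). */
    #define MAX_QUEUED (64 * 1024)

    static uv_timer_t retry_timer;  /* assumed initialised elsewhere */

    /* Hypothetical helper: fetches the next block of data and calls
     * uv_write() on a heap-allocated request with on_write as callback. */
    static void send_next_chunk(uv_stream_t *stream);

    static void on_retry(uv_timer_t *timer) {
        send_next_chunk((uv_stream_t *) timer->data);
    }

    static void on_write(uv_write_t *req, int status) {
        uv_stream_t *stream = req->handle;
        free(req);  /* req was heap-allocated by send_next_chunk */
        if (status < 0)
            return;  /* real code would handle the error */

        if (stream->write_queue_size > MAX_QUEUED) {
            /* Too much unsent data still buffered: check again in
             * 100 ms instead of writing immediately. */
            retry_timer.data = stream;
            uv_timer_start(&retry_timer, on_retry, 100, 0);
        } else {
            send_next_chunk(stream);
        }
    }

Tuning the threshold and the back-off interval is what sets the effective rate; a stricter approach would meter bytes per tick with a token bucket, but the queue check above is the piece libuv gives you for free.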

Related

Does it make sense to reduce calls to send on a socket?

Currently working with sockets, I am wondering whether it makes sense to reduce the number of calls to send for performance.
As far as I understand, there is a send buffer and the data is not dispatched immediately (?), but then I am wondering how long the kernel waits before actually sending the data, and how much overhead is caused if I call send multiple times instead of once?
For TCP, there is a send buffer controlled by the Nagle algorithm (and its interaction with delayed ACKs from the receiver).
There is no equivalent delay/buffering mechanism for UDP.
You haven't said which protocol you're using, but if it is TCP you probably don't need to do anything. For latency-sensitive code it can still be worth buffering writes just to avoid the syscall overhead, but I suppose you'd already know if that was your situation.
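
If you do decide to coalesce writes, a minimal sketch of a user-space write buffer in C might look like this (the struct and function names are ours, error handling is abbreviated, and each appended piece is assumed to fit in the buffer):

    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Collect small pieces and hand them to the kernel in one send(). */
    struct wbuf {
        char   data[4096];
        size_t used;
    };

    static int wbuf_flush(int fd, struct wbuf *b) {
        /* Real code must handle short sends, EINTR and EAGAIN. */
        ssize_t n = send(fd, b->data, b->used, 0);
        if (n < 0)
            return -1;
        b->used = 0;
        return 0;
    }

    static int wbuf_append(int fd, struct wbuf *b, const void *p, size_t len) {
        /* Assumes len <= sizeof(b->data). */
        if (b->used + len > sizeof(b->data) && wbuf_flush(fd, b) < 0)
            return -1;
        memcpy(b->data + b->used, p, len);
        b->used += len;
        return 0;
    }

Call wbuf_append() for each small piece and wbuf_flush() once a complete message has been assembled; that turns many send() calls into one.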

dbus_connection_send_with_reply timeout

When calling dbus_connection_send_with_reply through the D-Bus C API in Linux, I pass in a timeout of 1000ms, but the timeout never occurs when the receiving application doesn't reply.
If the receiving application does send a reply then this is received correctly.
Could this be due to the way that I'm servicing libdbus?
I am calling dbus_connection_dispatch periodically for servicing.
Thanks
It is highly recommended that you use a D-Bus library other than libdbus, as libdbus is fiddly to use correctly, as you are finding. If possible, use GDBus or QtDBus instead, as they are much higher-level bindings which are easier to use. If you need a lower-level binding, sd-bus is more modern than libdbus.
If you use GDBus, you can use GMainLoop to implement a main loop to handle timeouts, and set the timeout period with g_dbus_proxy_set_default_timeout() or in the arguments to individual g_dbus_proxy_call() calls. If you use sd-bus, you can use sd-event.
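
As a concrete illustration, here is a minimal synchronous GDBus call with a 1000 ms timeout; the bus name, object path, interface, and method are hypothetical placeholders:

    #include <gio/gio.h>

    int main(void) {
        GError *error = NULL;
        GDBusProxy *proxy = g_dbus_proxy_new_for_bus_sync(
            G_BUS_TYPE_SESSION, G_DBUS_PROXY_FLAGS_NONE, NULL,
            "com.example.Service",   /* hypothetical bus name */
            "/com/example/Service",  /* hypothetical object path */
            "com.example.Iface",     /* hypothetical interface */
            NULL, &error);
        if (proxy == NULL)
            return 1;

        /* Either set a default timeout for every call on this proxy... */
        g_dbus_proxy_set_default_timeout(proxy, 1000);

        /* ...or pass the timeout (in ms) to an individual call. */
        GVariant *result = g_dbus_proxy_call_sync(
            proxy, "SomeMethod", NULL, G_DBUS_CALL_FLAGS_NONE,
            1000 /* ms */, NULL, &error);
        if (result == NULL) {
            /* With no reply from the peer, the call fails here with a
             * timeout error instead of hanging. */
            g_error_free(error);
        } else {
            g_variant_unref(result);
        }
        g_object_unref(proxy);
        return 0;
    }

The asynchronous variant, g_dbus_proxy_call(), takes the same timeout argument and delivers its result through your GMainLoop.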

Sending and receiving DMX-512 using DMXSerial Arduino Library

I am currently working with the DMXSerial library written for Arduino.
Depending on how it is initialised, this library can be used as a transmitter (controller) or as a receiver.
The transmitter should be initialised as follows:
DMXSerial.init(DMXController);
whereas the initialisation for a receiver is as follows:
DMXSerial.init(DMXReceiver);
I now want to create an implementation that receives and controls.
Does anybody have an idea how to do this without missing certain important interrupts or timing constraints?
That library doesn't look like it will easily do bidirectional. But, since DMX512 is a simple serial protocol, there's nothing stopping you from writing your own routines that manipulate the UART directly. The library will be a great guide for this.
Now, having said that: what kind of situation do you have where you want a device to both control and receive? The DMX512 protocol is explicitly unidirectional, and at the physical layer it's a daisy-chain network, which prevents multiple masters on the bus (and inherently creates a unidirectional bus). If you are a slave and you are manipulating the bus, you risk clobbering incoming packets from the master. If you are clever about it, and queue the incoming packets, you could then perhaps safely retransmit both the incoming data and your own data, but be aware that this is a decidedly nonstandard (and almost certainly standards-violating) behavior.

select() equivalence in I/O Completion Ports

I am developing a proxy server using WinSock 2.0 on Windows. If I were developing it with the blocking model, select() would be the way to wait for data from either the client or the remote server. Is there an equivalent way to do this using I/O Completion Ports?
I have two contexts for the two directions of data using I/O Completion Ports, but with a WSARecv pending I couldn't receive any data from the remote server! I couldn't find the problem.
Thanks in advance.
EDIT: Here's the worker thread code of the I/O Completion Ports version I'm currently developing. But what I am asking about is how to implement a select() equivalent.
I/O Completion Ports provide an indication of when an I/O operation completes, they do not indicate when it is possible to initiate an operation. In many situations this doesn't actually matter. Most of the time the overlapped I/O model will work perfectly well if you assume it is always possible to initiate an operation. The underlying operating system will, in most cases, simply do the right thing and queue the data for you until it is possible to complete the operation.
However, there are some situations when this is less than ideal. For example, you can always send to a socket using overlapped I/O. You can do this even when the remote peer is not reading and the TCP stack has started to use flow control and has filled the TCP window... This simply uses resources on your local machine in an uncontrolled manner (well, not entirely uncontrolled, but controlled by the peer, which is not ideal). I write about this here, and in many situations you DO need to actively manage this kind of thing by tracking how many outstanding I/O write requests you have and using that as an indication of 'readiness to send'.
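
A rough sketch of that bookkeeping in C with WinSock: count the bytes of each posted WSASend, refuse (or queue) new sends over a limit, and subtract when the completion arrives. The connection struct, counter, and limit are our own names:

    #include <winsock2.h>

    #define MAX_PENDING_SEND_BYTES (256 * 1024)  /* illustrative limit */

    typedef struct {
        SOCKET        sock;
        volatile LONG pending_send_bytes;  /* posted but not yet completed */
    } CONNECTION;

    /* Returns 1 if the send was posted, 0 if the caller should queue
     * the data until completions drain the counter. */
    static int try_send(CONNECTION *c, char *data, DWORD len, WSAOVERLAPPED *ov) {
        if (c->pending_send_bytes + (LONG) len > MAX_PENDING_SEND_BYTES)
            return 0;  /* not 'ready to send' */

        WSABUF buf;
        buf.buf = data;
        buf.len = len;
        InterlockedExchangeAdd(&c->pending_send_bytes, (LONG) len);
        if (WSASend(c->sock, &buf, 1, NULL, 0, ov, NULL) == SOCKET_ERROR
            && WSAGetLastError() != WSA_IO_PENDING) {
            InterlockedExchangeAdd(&c->pending_send_bytes, -(LONG) len);
            return 0;  /* real code would handle the error */
        }
        return 1;
    }

    /* In the worker thread, when GetQueuedCompletionStatus() reports a
     * completed send of 'transferred' bytes:
     *   InterlockedExchangeAdd(&c->pending_send_bytes, -(LONG) transferred);
     * then retry any queued data once the counter is below the limit. */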
Likewise if you want a 'readiness to recv' indication you could issue a 'zero byte' read on the socket. This is a read which is issued with a zero length buffer. The read returns when there is data to read but no data is returned. This would give you the indication that there is data to be read on the connection but is, IMHO, pointless unless you are suffering from the very unlikely situation of hitting the I/O page lock limit, as you may as well read the data when it becomes available rather than forcing multiple kernel to user mode transitions.
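
A zero-byte read is just a WSARecv posted with a zero-length buffer; a minimal sketch, assuming the socket is already associated with the completion port:

    #include <winsock2.h>

    /* Completes when data arrives, but returns none of it. */
    static int post_zero_byte_read(SOCKET s, WSAOVERLAPPED *ov) {
        WSABUF buf;
        DWORD flags = 0;
        buf.buf = NULL;
        buf.len = 0;  /* zero-length buffer: a pure readability probe */
        if (WSARecv(s, &buf, 1, NULL, &flags, ov, NULL) == SOCKET_ERROR
            && WSAGetLastError() != WSA_IO_PENDING)
            return -1;
        return 0;
    }

When the completion for this read is dequeued, the socket has data waiting, and you would then issue a real WSARecv with a non-empty buffer.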
In summary, you don't really need an answer to your question. You need to look at how the API works and write your code to work with it rather than trying to force the API to work in a way that other APIs that you are familiar with work.

Wait until playback has completed

I'm using PortAudio as a front-end to a speech synthesis (Text to Speech) engine, and I want to provide a synchronous speak function that waits until playback has completed.
It seems like all of the PortAudio functions that deal with this only wait until the underlying API has finished consuming the audio data, not until playback has finished.
Is this possible with PortAudio? If not, are there any good cross-platform alternatives to PortAudio (has to include a C interface) that might support this?
I am not sure whether the streamFinished callback, as documented here:
http://portaudio.com/docs/v19-doxydocs/portaudio_8h.html#aa11e7b06b2cde8621551f5d527965838
is what you want. It may suffer from the same issue, but I think it would work.
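
For what it's worth, a minimal sketch of wiring that callback up to a synchronous wait; the flag and helper names are ours, and real code would use a condition variable rather than polling:

    #include <portaudio.h>

    static volatile int g_playback_done = 0;

    static void on_stream_finished(void *userData) {
        (void) userData;
        g_playback_done = 1;
    }

    /* Call before Pa_StartStream(stream). */
    static void arm_finished_flag(PaStream *stream) {
        g_playback_done = 0;
        Pa_SetStreamFinishedCallback(stream, on_stream_finished);
    }

    /* The callback fires when the stream becomes inactive, i.e. after
     * it is stopped or the stream callback returns paComplete. */
    static void wait_for_finish(void) {
        while (!g_playback_done)
            Pa_Sleep(10);
    }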
Two other possibilities are:
Use lower latency settings.
Use the hardware timing. This information is available from calls like Pa_GetStreamTime(). For example (a code sketch follows these steps):
get the current time
push x seconds of audio to the hardware
wait for the hardware clock to show the start time plus x seconds
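
A sketch of those three steps in C; write_x_seconds_of_audio() is a hypothetical stand-in for your own playback code:

    #include <portaudio.h>

    void write_x_seconds_of_audio(PaStream *stream, double seconds);  /* hypothetical */

    void play_and_wait(PaStream *stream, double x_seconds) {
        PaTime start = Pa_GetStreamTime(stream);  /* 1. current stream time */

        write_x_seconds_of_audio(stream, x_seconds);  /* 2. push the audio */

        /* 3. wait for the stream clock to pass start + x; the clock
         * tracks hardware time, not how much data has been queued. */
        while (Pa_GetStreamTime(stream) < start + x_seconds)
            Pa_Sleep(10);
    }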
You might also be interested in this document:
http://www.rossbencina.com/static/writings/portaudio_sync_acmc2003.pdf
I'm afraid I don't know of another API with better support for this sort of thing.
