Timestamp for v4l2 image capture

I have a Linux application that processes camera images. Currently I provide buffers to the v4l2 kernel subsystem that are filled with image data.
However I need to know, as precisely as possible, when this frame was captured (by the camera). With buffers, I may not know exactly when this happened, because I may not be able to process all frames in a timely manner (i.e. I may request an image at a time when it has already been available for a few milliseconds).
What I am looking for is a way to determine (or estimate) the time an image was captured (or its age), e.g. by having the kernel record it somehow, or, in the worst case, by not having images streamed to me but only sent upon my explicit request.
Environment: UVC web camera, Linux kernel 2.6.3x, V4L2 API

The v4l2_buffer structure has a timestamp field. But see also this question: Where does v4l2_buffer->timestamp value starts counting?
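For illustration, a minimal sketch of reading that field after dequeuing a buffer (assuming streaming I/O with memory-mapped buffers is already set up; error handling omitted). On 2.6.3x kernels the timestamp is typically a wall-clock (gettimeofday-style) value taken when the driver received the frame, as discussed in the linked question:

    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Dequeue a filled capture buffer and print the driver-provided timestamp. */
    static void dequeue_and_print_timestamp(int fd)
    {
        struct v4l2_buffer buf;

        memset(&buf, 0, sizeof(buf));
        buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;

        if (ioctl(fd, VIDIOC_DQBUF, &buf) == 0) {
            /* buf.timestamp (a struct timeval) is filled in by the driver when
             * the frame arrives, not when we dequeue it, so it stays meaningful
             * even if the buffer waited in the queue for a while. */
            printf("frame %u captured at %ld.%06ld\n",
                   buf.sequence,
                   (long)buf.timestamp.tv_sec,
                   (long)buf.timestamp.tv_usec);

            /* Hand the buffer back to the driver after processing the image. */
            ioctl(fd, VIDIOC_QBUF, &buf);
        }
    }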

Related

web-gRPC Performance Rate per second

I want to develop a system for tracing and debugging an external device via a COM port.
The main service will be developed in Python to receive, analyse and store the log data.
We decided to stream the log data to a web browser with the gRPC protocol and draw live charts.
The highest data rate is 50K signals per second, and the maximum size of each signal is just 10 bytes.
The system will be used on a local network or on the same PC, so we do not have bandwidth limits.
We want to make sure the gRPC-Web platform can handle this rate.
Thanks for your recommendations.
The throughput limit is mostly decided by the browser and the protobuf overhead. Since the latter is application specific, you should do a benchmark with real data on your preferred browsers.
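For scale, a quick back-of-the-envelope calculation (mine, not from the question): 50,000 signals/s × 10 bytes ≈ 500 KB/s ≈ 4 Mbit/s of raw payload, before protobuf field tags and gRPC-Web framing are added. That is modest for a local network or loopback, so the practical limit is more likely to be per-message overhead and browser-side processing than bandwidth, which is exactly what the benchmark should measure.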

How to send data from one ESP32 to another not using WiFi

I'm currently trying to send a small amount of data (around 10 bytes) from one ESP32 board to another. The preferred architecture is that many 'slave' nodes send data to one 'master' node. All nodes are ESP32 microcontrollers and the maximum distance is ~3 m.
I have already implemented this architecture using WiFi 'HTTP_GET' requests, but since I also need each node to scan for BLE beacons and read the RSSI values, the ESP32 flash memory was not enough.
The following error was thrown:
Sketch uses 1661386 bytes (126%) of program storage space. Maximum is 1310720 bytes.
That is the main reason why I want to avoid using the WiFi library. Note: I tried to use sub-header files of WiFi.h, but that was not enough.
Question
Is there a lightweight implementation to simply send a small amount of data from one ESP32 to another using, for example, BLE signals? If yes, it would be nice to see a code sample!
Edit
I resolved the memory problem. As it turns out, by default the ESP32 is not configured to use its full flash storage capacity. Minimizing the SPIFFS partition helped; now the sketch uses 84% of program storage. But the question still remains.
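For reference (my addition, not part of the original fix): with the Arduino ESP32 core the flash layout is selected via the Tools > Partition Scheme menu or a custom partitions.csv, and a scheme that shrinks SPIFFS in favour of the application partition looks roughly like this on a 4 MB module (the offsets and sizes here are illustrative and vary between core versions):

    # Name,   Type, SubType, Offset,   Size,     Flags
    nvs,      data, nvs,     0x9000,   0x5000,
    otadata,  data, ota,     0xe000,   0x2000,
    app0,     app,  ota_0,   0x10000,  0x300000,
    spiffs,   data, spiffs,  0x310000, 0xF0000,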

How to create a video stream from a series of bitmaps and send it over IP network?

I have a bare-metal application running on a tiny 16-bit microcontroller (ST10) with 10BASE-T Ethernet (CS8900) and a TCP/IP implementation based upon the EasyWeb project.
The application's main job is to control an LED matrix display for public transport passenger information. It generates display information at about 41 fps with a configurable display size of e.g. 160 × 32 pixels and 1-bit color depth (each LED can only be either on or off).
There is a tiny webserver implemented which provides the respective framebuffer content (equal to the LED matrix display content) as PNG or BMP for download (both uncompressed because of CPU load and the 1-bit color depth). So I can get snapshots with e.g.:
wget http://$IP/content.png
or
wget http://$IP/content.bmp
or put appropriate HTML code into the controller's index.html to view it in a web browser.
I could also write HTML/JavaScript code to update that picture periodically, e.g. every second, so that the user can see changes of the display content.
Now, for the next step, I want to provide the display content as some kind of video stream and then put appropriate HTML code into my index.html, or just open that "streaming URI" with e.g. VLC.
As my framebuffer bitmaps are built uncompressed, I expect a constant bitrate.
I'm not sure what's the best way to start with this.
(1) Which video format is the easiest to generate if I already have a PNG for each frame (but I have that PNG only for a couple of milliseconds and cannot buffer it for a longer time)?
Note that my target system is very resource restricted in both memory and computing power.
(2) Which way should I distribute it over IP?
I already have some TCP sockets open, listening on port 80. I could stream the video over HTTP (after a request has been received) by using chunked transfer encoding, with each frame as its own chunk (a minimal sketch follows below).
(Maybe HTTP Live Streaming works like this?)
I have also read about things like SCTP, RTP and RTSP, but that looks like more work to implement on my target. And as there is also the potential firewall drawback, I think I prefer HTTP for transport.
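A minimal sketch of pushing each frame as one HTTP chunk (send_all() is a placeholder for whatever routine the existing web server uses to write a buffer to the TCP socket):

    #include <stdio.h>

    /* Send one frame as a single chunk of a "Transfer-Encoding: chunked"
     * HTTP response. The response headers must have been sent already, and
     * the stream is terminated with the zero-length chunk "0\r\n\r\n". */
    static void send_frame_chunk(int sock, const unsigned char *frame, unsigned int len)
    {
        char head[16];
        int n = sprintf(head, "%X\r\n", len);     /* chunk size in hex */

        send_all(sock, head, n);                  /* placeholder send routine */
        send_all(sock, (const char *)frame, len); /* chunk payload            */
        send_all(sock, "\r\n", 2);                /* chunk trailer            */
    }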
Please note that the application is coded in plain C, without an operating system or powerful libraries. Everything is coded from scratch, even the web server and the PNG generation.
Edit 2017-09-14, tryout with APNG
As suggested by Nominal Animal, I gave APNG a try.
I extended my code to produce appropriate fcTL and fdAT chunks for each frame and to provide the result as bla.apng with HTTP Content-Type image/apng.
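For reference, a sketch of the 26-byte data part of such an fcTL chunk, as laid out in the APNG specification (the usual PNG chunk wrapper of length, "fcTL" type and CRC32 still has to be added around it, just as for IDAT/fdAT; the helper names are made up):

    /* Write a 32-bit value big-endian, as PNG/APNG requires. */
    static void put_u32(unsigned char *p, unsigned long v)
    {
        p[0] = (unsigned char)(v >> 24);
        p[1] = (unsigned char)(v >> 16);
        p[2] = (unsigned char)(v >> 8);
        p[3] = (unsigned char)(v);
    }

    /* Fill the 26-byte fcTL chunk data for one full-size frame. */
    static void fill_fctl(unsigned char d[26], unsigned long seq,
                          unsigned long width, unsigned long height,
                          unsigned int delay_num, unsigned int delay_den)
    {
        put_u32(d + 0,  seq);      /* sequence_number, shared with fdAT chunks */
        put_u32(d + 4,  width);    /* frame width in pixels  */
        put_u32(d + 8,  height);   /* frame height in pixels */
        put_u32(d + 12, 0);        /* x_offset */
        put_u32(d + 16, 0);        /* y_offset */
        d[20] = (unsigned char)(delay_num >> 8);  /* delay numerator, e.g. 24    */
        d[21] = (unsigned char)(delay_num);
        d[22] = (unsigned char)(delay_den >> 8);  /* delay denominator, e.g. 1000 */
        d[23] = (unsigned char)(delay_den);
        d[24] = 0;                 /* dispose_op: APNG_DISPOSE_OP_NONE */
        d[25] = 0;                 /* blend_op:   APNG_BLEND_OP_SOURCE */
    }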
After downloading, that bla.apng looks fine when opened in e.g. Firefox or Chrome (but not in Konqueror, VLC, Dragon Player or Gwenview).
Trying to stream that APNG works nicely, but only with Firefox.
Chrome first wants to download the file completely.
So APNG might be a solution, but with the disadvantage that it currently only works with Firefox. After further testing I found out that 32-bit versions of Firefox (55.0.2) crash after about 1 h of APNG playback, by which time about 100 MiB of data have been transferred. It looks like they don't discard old/obsolete frames.
Further restriction: as APNG needs a 32-bit "sequence number" in each animation chunk (two per frame), there might be a limit on the maximum playback duration. But with two sequence numbers per frame that still allows about 2^31 frames, which at my frame interval of 24 ms is roughly 600 days, so I can live with it.
Note that the APNG MIME type was specified by mozilla.org to be image/apng. But in my tests I found that it is a bit better supported when my HTTP server delivers the APNG with Content-Type image/png instead. E.g. Chromium and Safari on iOS will then play my APNG files after download (but still not while streaming). Even the Wikipedia server delivers e.g. this beach ball APNG with Content-Type image/png.
Edit 2017-09-17, tryout with animated GIF
As also suggested by Nominal Animal, I now tried animated GIF.
Looks ok in some browsers and viewers after complete download (of e.g. 100 or 1000 frames).
Trying live streaming, it looks ok in Firefox, Chrome, Opera, Rekonq and Safari (on macOS Sierra).
Not working: Safari (on OS X El Capitan and iOS 10.3.1), Konqueror, VLC, Dragon Player, Gwenview.
E.g. Safari (tested on iOS 10.3.3 and OS X El Capitan) first wants to download the GIF completely before displaying/playing it.
Drawback of using GIF: for some reason (e.g. CPU usage) I don't want to implement data compression for the generated frame pictures. For PNG, for example, I use uncompressed data in the IDAT chunk, and for a 160x32 PNG with 1-bit color depth I got about 740 bytes per frame. But when using GIF without compression, especially for 1-bit black/white bitmaps, the pixel data blows up by a factor of 3-4.
First of all, low-level embedded devices are not very friendly with very complex modern web browsers; it is a bad idea to connect the two directly. But if your spec really has these strong requirements...
MJPEG is well known for streaming video, but in your case it is a poor fit: it needs a lot of CPU, gives a bad compression ratio and hurts image quality. That is the nature of JPEG compression: it is best with photographs (images with many gradients) but bad with pixel art (images with sharp lines).
It looks like they don't discard old/obsolete frames.
And this is correct behavior, since APNG is not a video format but an animation format and can be repeated! It will be exactly the same with GIF. MJPEG may behave better, since it is established as a video stream.
If I were doing this project, I would do something like this:
(1) No browser at all. Write a very simple native player with WinAPI or some low-level library that just creates a window, receives UDP packets and displays the binary data. On the controller side, you just fill UDP packets and send them to the client (a sketch of the controller side follows this list). UDP is better for realtime streaming: it drops packets (frames) in case of latency and is very simple to maintain.
(2) Stream raw data (1 bit per pixel) over TCP. TCP will always introduce some latency and buffering; you can't avoid that. Otherwise the same as before, but you don't need a handshaking mechanism to start the video stream. You could also write your client in good old technologies like Flash or Java applets, read a raw socket, and embed the app in a web page.
(3) You can try to stream AVI files with raw data over TCP (HTTP). Without indexes it will be unplayable almost everywhere except VLC. A strange solution, but if you can't write client code and want VLC, it will work.
(4) You can write a transcoder on an intermediate server. For example, your controller sends UDP packets to this server, the server transcodes them to H.264 and streams via RTMP to YouTube... Your clients can play it with browsers or VLC, and the stream will be of good quality up to a few Mbit/s. But you need a server.
(5) And finally, what I think is the best solution: send the client only text, coordinates, animations and so on, i.e. everything your controller renders. With Emscripten you can convert your sources to JS and run the exact same renderer in the browser. As transport you can use WebSockets, or some tricks with a never-ending HTML page containing multiple <script> elements, like we did in the old days.
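As an illustration of option (1), a sketch of the controller side (written against BSD sockets for readability; on the ST10/EasyWeb stack the same 640-byte payload would be handed to its own raw UDP send routine):

    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    #define FRAME_BYTES (160 * 32 / 8)   /* 1 bit per pixel -> 640 bytes */

    /* Pack one framebuffer into a single UDP datagram and send it. */
    static void send_frame_udp(int sock, const struct sockaddr_in *dst,
                               unsigned long frame_no,
                               const unsigned char fb[FRAME_BYTES])
    {
        unsigned char pkt[4 + FRAME_BYTES];

        /* 4-byte frame counter so the player can detect lost or
         * reordered datagrams. */
        pkt[0] = (unsigned char)(frame_no >> 24);
        pkt[1] = (unsigned char)(frame_no >> 16);
        pkt[2] = (unsigned char)(frame_no >> 8);
        pkt[3] = (unsigned char)(frame_no);
        memcpy(pkt + 4, fb, FRAME_BYTES);

        sendto(sock, pkt, sizeof(pkt), 0,
               (const struct sockaddr *)dst, sizeof(*dst));
    }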
Please tell me which country/city has this public transport passenger information display? It looks very cool. In my city every bus already has an LED panel, but it just shows static text; it's a pity that the huge potential of these devices is not used.
Have you tried just piping this through a WebSocket and handling the binary data in JavaScript?
Every WebSocket frame sent would match a frame of your animation.
You would then take this data and draw it onto an HTML canvas. This would work in every browser with WebSocket support, which is quite a lot, and would give you all the flexibility you need (and the player could be more high-end than the "encoder" in the embedded device).

How to determine the last time the audio device was playing a file?

I would like to use C in order to get the last time the soundboard was playing a file. Is there a way I could do that?
None of the components you are using (tools, libraries, sound servers, drivers, kernel) logs the time when a sound is played.
If you are using one specific tool to play sounds, you could modify it to log the time.
Otherwise, you have to actively monitor the current status of the sound device.
(With ALSA, you could poll /proc/asound/card*/pcm*/sub*/status.)
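A hedged sketch of such monitoring in C (the card/device/substream numbers in the path are only an example; a real tool would scan all playback substreams under /proc/asound/):

    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    /* Return 1 if the given PCM substream status file reports RUNNING. */
    static int pcm_is_running(const char *path)
    {
        char line[128];
        int running = 0;
        FILE *f = fopen(path, "r");

        if (!f)
            return 0;                        /* no such device or not readable */
        while (fgets(line, sizeof(line), f)) {
            if (strstr(line, "RUNNING")) {   /* "state: RUNNING" while playing */
                running = 1;
                break;
            }
        }
        fclose(f);
        return running;
    }

    int main(void)
    {
        time_t last_playing = 0;             /* last time playback was observed */

        for (;;) {
            if (pcm_is_running("/proc/asound/card0/pcm0p/sub0/status"))
                last_playing = time(NULL);
            printf("last observed playback: %ld\n", (long)last_playing);
            sleep(1);
        }
    }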
I think it's not possible, because ALSA (Advanced Linux Sound Architecture) is just the kernel component that provides device drivers for the sound card. But I don't know whether some user-space APIs and libraries (like alsa-utils) can do that. My advice: it may be better to check the logs of sound-player applications (VLC etc.).

what causes the .NET SerialPort class DataReceived event to fire?

I understand from the MSDN docs that the event DataReceived will not necessarily fire once per byte.
But does anyone know what exactly is the mechanism that causes the event to fire?
Does the receipt of each byte restart a timer that has to reach, say 10 ms between bytes, before the event fires?
I ask because I'm trying to write an app that reads XML data coming in from a serial port.
Because my laptop has no serial ports, I use a virtual serial port emulator. (I know, I know--I can't do anything about it ATM).
When I pass data through the emulated port to my app, the event fires once for each XML record (about 1500 bytes). Perfect. But when a colleague at another office tries it with two computers connected by an actual cable, the DataReceived event fires repeatedly, after every 10 or so bytes of XML, which totally throws off the app.
DataReceived can fire at any time one or more bytes are ready to read. Exactly when it is fired depends on the OS and drivers, and also there will be a small delay between the data being received and the event being fired in .NET.
You shouldn't rely on the timing of DataReceived events for control flow.
Instead, parse the underlying protocol and if you haven't received a complete message, wait for more. If you receive more than one message, make sure to keep the leftovers from parsing the first message, because they will be the start of the next one.
As Mark Byers pointed out, this depends on the OS and drivers. At the lowest level, a standard RS232 chip (for the life of me, I can't remember the designation of the one that everyone copied to make the 'standard') will fire an interrupt when it has data in its inbound register. The 'bottom end' of the driver has to go get that data (which could be any amount up to the buffer size of the chip), store it in the driver's buffer, and signal to the OS that it has data.
It's at this point that the .NET framework can start finding out that the data is available. Depending on when the OS signals the application that opened the serial port (which is an OS-level operation and provides the 'real' link from the .NET framework to the OS/driver-level implementation), there could literally be any amount of data > 1 byte in the buffer, because the driver's bottom end could have loaded more data in the meantime. My bet is that on your system the driver is providing a huge buffer and only signalling after a significant pause in the data stream, while your colleague's system signals far more frequently.
Again, Mark Byers' advice to parse the protocol is spot on. I've implemented a similar system over TCP sockets, and the only way to handle the situation is to buffer the data until you've got a complete protocol message, then hand the full message over to the application.
