H.264/SVC mobile support

I am new to the H.264/SVC standard. I researched and found out that mobile devices such as Android phones, iPads and other iOS devices support H.264/AVC. Since H.264/SVC is an extension of H.264/AVC with a multilayered structure (a base layer plus enhancement layers), I was wondering whether devices that support H.264/AVC also support H.264/SVC.

From my (merely theoretical) research into the matter, an H.264 AVC player will NOT be able to decode an H.264 SVC stream out of the box.
But using SVC as the encoding format on your server does not necessarily mean that you stream the SVC-encoded data to your clients. The transformation from SVC to AVC can be done server-side with very little computational effort; it is NOT a re-encoding! There is an increasing number of solutions on the market that use the H.264 SVC file format on the server but still send AVC streams with tailored data rates to their clients after determining the available network bandwidth. This way the streaming system stays compatible with the existing client base but can already use the advantages of SVC on the servers (e.g. only one file per video and a very low storage overhead).
On the other hand, it is indeed possible to send streams in SVC format if your clients are capable of processing them. If needed, these SVC streams can have a reduced data rate, depending on available bandwidth, which can be extracted from the original SVC file easily and with little computing power by dropping SVC layers. Rebuilding the stream on the fly and sending a reduced layer set (down to the plain base layer) makes sense in many scenarios, namely whenever the available bandwidth does not allow streaming the full SVC file. After all, this is what SVC is all about: the ability to quickly generate reduced-bandwidth versions from a single master file or high-bandwidth SVC stream.
Dropping layers from SVC streams can even take place on the fly at specialized intermediate network nodes and proxies on the path from the server to the client. This allows the signal's data rate to be reduced whenever the following section of the network connection has a bandwidth that is too low for the full incoming stream.
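To illustrate why this extraction is so cheap: in the Annex B byte stream, every SVC NAL unit announces its layer in a 3-byte header extension (H.264 Annex G), so a node only has to inspect a few bytes per NAL unit to decide whether to forward or drop it. A minimal sketch of that decision in C (the function name is mine; a complete extractor must also rewrite parameter sets and handle prefix NAL units together with the AVC slices they annotate):

    #include <stdint.h>
    #include <stdbool.h>

    /* Decide whether to keep one NAL unit when extracting a reduced SVC
     * layer set. 'nal' points at the NAL unit header (after the start
     * code). This assumes svc_extension_flag = 1 (SVC, not MVC). */
    bool keep_nal_unit(const uint8_t *nal, int max_dep, int max_qual, int max_temp)
    {
        int type = nal[0] & 0x1F;              /* nal_unit_type */

        /* Only types 14 and 20 carry the 3-byte SVC header extension. */
        if (type != 14 && type != 20)
            return true;                       /* plain AVC NAL unit: keep */

        int dependency_id = (nal[2] >> 4) & 0x07;
        int quality_id    =  nal[2]       & 0x0F;
        int temporal_id   = (nal[3] >> 5) & 0x07;

        return dependency_id <= max_dep &&
               quality_id    <= max_qual &&
               temporal_id   <= max_temp;
    }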
Some providers I found of systems that generate adapted SVC or AVC streams from SVC files on the fly are:
Radvision (video conferencing),
Vidyo (video conferencing) and
Stretch (video hardware).
For some more details on H.264 SVC see these links:
http://www.streaminglearningcenter.com/articles/h264--scalable-video-coding---what-you-need-to-know.html
http://www.eetimes.com/design/signal-processing-dsp/4017613/Tutorial-The-H-264-Scalable-Video-Codec-SVC-?pageNumber=0

Related

gRPC-Web performance: rate per second

I want to develop a system for tracing and debugging an external device via a COM port.
The main service will be developed in Python to receive, analyse and store log data.
We decided to stream the log data to a web browser via the gRPC protocol and draw live charts.
The highest data rate is 50K signals per second, and the maximum size of each signal is just 10 bytes.
The system will be used on a local network or on the same PC, so we have no bandwidth limits.
We want to make sure the gRPC-Web platform can sustain this rate per second.
Thanks for your recommendations.
The throughput limit is mostly decided by the browser and the protobuf overhead. Note that 50K signals × 10 bytes is only about 500 KB/s of payload before framing overhead, so the per-message cost matters more than raw bandwidth here; batching many signals into each streamed message helps a lot. Since the overhead is application-specific, you should do a benchmark with real data on your preferred browsers.

How to stream data via BLE?

I am currently evaluating the Maxim Integrated MAXREFDES100# board. As part of my thesis, I would like to examine whether it is possible to use BLE for data streaming.
The supplied software is fully functional and can already transfer data, though only smaller data packets over a longer period, such as temperature or precalculated heart-rate values. I am aware that BLE is not suitable for, and has not been designed for, transmitting data continuously. Nevertheless, I would like to try to transfer the raw ECG data via the BLE interface as well. To check whether my data arrives completely and correctly in time (a prerequisite for ECG data), I have developed the following test setup:
Currently, I store a sine wave in a large FIFO and transfer 20-byte sine packets to a characteristic. So far everything works wonderfully. Using an Android device and a BLE sniffer I capture the data packets and check whether the sine has any interruptions. The following graphic is taken from a longer measurement (I have tested my FIFO, and the sine data is complete without any interruption). This is a capture from my Android device and my BLE sniffer: data packets are missing.
Sine & Spectrogram Image
Further measurements show that the errors do not occur at the same positions; they can be found at different times in each run. (So the FIFO implementation must be correct.)
Sine & Spectrogram Image 2
Since my phone and my Adafruit BLE sniffer show identical errors in the same places, the data packets must already be corrupt on the board side. I suspect they are lost at some protocol level (GAP, maybe?). Interestingly, each error affects at least 200 consecutive bytes (256 bytes of data make up one complete sine period). This means that at least 10 data packets of 20 bytes each must be lost in a row to produce such a graphic.
What options are there for continuously sending data via the BLE interface?
Is it possible to send the data directly via L2CAP without GATT/ATT?
Are there any other wireless, low-power protocols that allow data streaming? (I.e. with acknowledgments, like TCP, for completeness and chronological correctness.)
What options are there for continuously sending data via the BLE interface?
Is it possible to send the data directly via L2CAP without GATT/ATT?
BLE v4.1 introduced LE Connection-Oriented Channels, which are a way to send data directly via L2CAP without GATT/ATT (think of network sockets). See also Bluetooth Core Specification Vol. 3, Part A, Section 3.4.
The board you are using has an EM9301 BLE chip, which seems to support BLE v4.1. However, the mbed library you are using doesn't let you use them. See here: you get an interface for connect/disconnect callbacks and GATT characteristic read/write/notification, but nothing about LE Connection-Oriented Channels.
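Just to show what a CoC looks like where it is available: on a Linux/BlueZ host (e.g. acting as the central), an LE Connection-Oriented Channel is literally a socket. A rough sketch, with the PSM and peer address as placeholders (the peripheral must have registered that PSM; link with -lbluetooth):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <bluetooth/bluetooth.h>
    #include <bluetooth/l2cap.h>

    int main(void)
    {
        /* LE CoC: an L2CAP socket on the LE transport. */
        int s = socket(AF_BLUETOOTH, SOCK_SEQPACKET, BTPROTO_L2CAP);
        if (s < 0) { perror("socket"); return 1; }

        struct sockaddr_l2 addr;
        memset(&addr, 0, sizeof addr);
        addr.l2_family      = AF_BLUETOOTH;
        addr.l2_psm         = htobs(0x0080);            /* placeholder LE PSM */
        addr.l2_bdaddr_type = BDADDR_LE_PUBLIC;         /* LE, not BR/EDR */
        str2ba("00:11:22:33:44:55", &addr.l2_bdaddr);   /* placeholder address */

        if (connect(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("connect");
            return 1;
        }

        /* Each recv() returns one L2CAP SDU sent by the peripheral. */
        unsigned char buf[256];
        ssize_t n = recv(s, buf, sizeof buf, 0);
        printf("received %zd bytes\n", n);
        close(s);
        return 0;
    }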

How to create a video stream from a series of bitmaps and send it over IP network?

I have a bare-metal application running on a tiny 16-bit microcontroller (ST10) with 10BASE-T Ethernet (CS8900) and a TCP/IP implementation based upon the EasyWeb project.
The application's main job is to control an LED matrix display for public transport passenger information. It generates display data at about 41 fps with a configurable display size of e.g. 160 × 32 pixels and 1-bit color depth (each LED is simply either on or off).
Example: [image of the display content]
There is a tiny web server implemented which provides the current frame buffer content (equal to the LED matrix display content) as PNG or BMP for download (both uncompressed, because of CPU load and the 1-bit color depth). So I can fetch snapshots with e.g.:
wget http://$IP/content.png
or
wget http://$IP/content.bmp
or put appropriate HTML code into the controller's index.html to view it in a web browser.
I could also write HTML/JavaScript code to update that picture periodically, e.g. every second, so that the user can see changes of the display content.
Now, as the next step, I want to provide the display content as some kind of video stream, and then either put appropriate HTML code into my index.html or just open that "streaming URI" with e.g. VLC.
As my frame buffer bitmaps are generated uncompressed, I expect a constant bit rate.
I'm not sure of the best way to start with this.
(1) Which video format is the easiest to generate if I already have a PNG for each frame (but have each PNG only for a couple of milliseconds and cannot buffer it for longer)?
Note that my target system is very resource restricted in both memory and computing power.
(2) Which way should I distribute it over IP?
I already have some TCP sockets open, listening on port 80. I could stream the video over HTTP (once a request is received) using chunked transfer encoding (each frame as its own chunk).
(Maybe HTTP Live Streaming works like this?)
I'd also read about things like SCTP, RTP and RTSP, but it looks like more work to implement these on my target. And as there is also the potential firewall drawback, I think I prefer HTTP for transport.
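For what it's worth, the chunked-transfer idea needs almost no code on top of an existing TCP send path: each frame just gets a hex length prefix and a CRLF terminator. A sketch, with tcp_send() standing in for whatever the EasyWeb stack actually provides:

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    void tcp_send(const uint8_t *data, size_t len);   /* assumed low-level send */

    /* Send one frame as one HTTP/1.1 chunk; the response headers must
     * have announced "Transfer-Encoding: chunked" beforehand. */
    void send_chunk(const uint8_t *frame, size_t len)
    {
        char hdr[16];
        int n = snprintf(hdr, sizeof hdr, "%zx\r\n", len);  /* size in hex */
        tcp_send((const uint8_t *)hdr, (size_t)n);
        tcp_send(frame, len);
        tcp_send((const uint8_t *)"\r\n", 2);               /* chunk end */
    }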
Please note that the application is coded in plain C, without an operating system or powerful libraries. Everything is coded from scratch, even the web server and the PNG generation.
Edit 2017-09-14, tryout with APNG
As suggested by Nominal Animal, I gave APNG a try.
I extended my code to produce appropriate fcTL and fdAT chunks for each frame and to serve that bla.apng with HTTP Content-Type image/apng.
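In case it helps others: the per-frame wire format boils down to two PNG chunks. A rough sketch of the per-frame output (write_chunk() and the 740-byte payload bound are simplifications from my setup; note that the very first frame goes into the regular IDAT chunk, so fdAT is only used from the second frame on):

    #include <stdint.h>
    #include <string.h>

    /* assumed helper: emits length, type, data and CRC of one PNG chunk
     * (the PNG code already needs a CRC32 for IDAT anyway) */
    void write_chunk(const char type[4], const uint8_t *data, uint32_t len);

    static uint32_t seq;   /* sequence number shared by all fcTL/fdAT chunks */

    static void put_be32(uint8_t *p, uint32_t v)
    {
        p[0] = v >> 24; p[1] = v >> 16; p[2] = v >> 8; p[3] = v;
    }

    /* Emit one animation frame; zdata/zlen is the same zlib payload an IDAT
     * chunk would carry (here at most 740 bytes per 160x32 1-bit frame). */
    void write_frame(const uint8_t *zdata, uint32_t zlen,
                     uint32_t w, uint32_t h,
                     uint16_t delay_num, uint16_t delay_den)
    {
        uint8_t fctl[26];
        put_be32(fctl +  0, seq++);   /* sequence_number */
        put_be32(fctl +  4, w);       /* width */
        put_be32(fctl +  8, h);       /* height */
        put_be32(fctl + 12, 0);       /* x_offset */
        put_be32(fctl + 16, 0);       /* y_offset */
        fctl[20] = delay_num >> 8; fctl[21] = delay_num & 0xFF;
        fctl[22] = delay_den >> 8; fctl[23] = delay_den & 0xFF;
        fctl[24] = 0;                 /* dispose_op: none */
        fctl[25] = 0;                 /* blend_op: source */
        write_chunk("fcTL", fctl, sizeof fctl);

        uint8_t fdat[4 + 740];        /* 4-byte sequence number + payload */
        put_be32(fdat, seq++);
        memcpy(fdat + 4, zdata, zlen);
        write_chunk("fdAT", fdat, 4 + zlen);
    }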
After downloading, that bla.apng looks fine when opened in e.g. Firefox or Chrome (but not in Konqueror, VLC, Dragon Player or Gwenview).
Streaming that APNG works nicely, but only with Firefox.
Chrome first wants to download the file completely.
So APNG might be a solution, but with the disadvantage that it currently only works with Firefox. After further testing I found that 32-bit versions of Firefox (55.0.2) crash after about one hour of APNG playback, by which time about 100 MiB of data had been transferred. It looks like they don't discard old/obsolete frames.
Further restrictions: as APNG needs a 32-bit "sequence number" in each animation chunk (two per frame), there might be a limit on the maximum playback duration. With 2^32 sequence numbers and two per frame, that is 2^31 frames; at my frame interval of 24 ms this comes to roughly 600 days, which I can live with.
Note that the APNG MIME type was specified by mozilla.org as image/apng. But in my tests I found that it is somewhat better supported when my HTTP server delivers APNG with Content-Type image/png instead. E.g. Chromium and Safari on iOS will play my APNG files after download (but still not stream them). Even the Wikipedia servers deliver e.g. this beach ball APNG with Content-Type image/png.
Edit 2017-09-17, tryout with animated GIF
As also suggested by Nominal Animal, I have now tried animated GIF.
It looks OK in some browsers and viewers after a complete download (of e.g. 100 or 1000 frames).
Live streaming looks OK in Firefox, Chrome, Opera, Rekonq and Safari (on macOS Sierra).
Not working: Safari (on OS X El Capitan and iOS 10.3.1), Konqueror, VLC, Dragon Player, Gwenview.
E.g. Safari (tested on iOS 10.3.3 and OS X El Capitan) first wants to download the GIF completely before displaying/playing it.
Drawback of using GIF: for various reasons (e.g. CPU usage) I don't want to implement data compression for the generated frame pictures. For PNG, I use uncompressed data in the IDAT chunk, and for a 160×32 PNG with 1-bit color depth I get about 740 bytes per frame. But when using GIF without compression, especially for 1-bit black/white bitmaps, the pixel data blows up by a factor of 3-4: GIF's LZW minimum code size is 2 even for 1-bit images, so each literal pixel code costs at least 3 bits, versus 1 bit in the raw bitmap.
First of all, low-level embedded devices are not very friendly with very complex modern web browsers; it is a bad idea to connect two such worlds directly. But if your technical spec really has these strong requirements...
MJPEG is well known for streaming video, but in your case it is a very poor fit: it requires a lot of CPU resources and produces a bad compression ratio with a heavy impact on graphics quality. This is the nature of JPEG compression: it is best with photographs (images with many gradients) but bad with pixel art (images with sharp lines).
It looks like they don't discard old/obsolete frames.
And this is correct behavior, since this is not a video format but an animation format, and animations can be repeated! It will be exactly the same with GIF. MJPEG may fare better here, since it is established as a video stream format.
If I were doing this project, I would do something like this:
No browser AT ALL. Write a very simple native player with WinAPI or some low-level library that just creates a window, receives UDP packets and displays the binary data. On the controller side, you just fill UDP packets and send them to the client. UDP is better for real-time streaming: it drops packets (frames) in case of latency and is very simple to maintain. (A minimal sender sketch follows these options below.)
Stream with TCP, but raw data (1 bit per pixel). TCP will always introduce some latency and buffering; you can't avoid that. Otherwise the same as before, except you don't need a handshake mechanism to start the video stream. Also, you could write your client in good old technologies like Flash or Java applets, read a raw socket, and place your app in a web page.
You can try to stream AVI files with raw data over TCP (HTTP). Without indexes it will be unplayable almost everywhere except in VLC. A strange solution, but if you can't write client code and want VLC, it will work.
You can run a transcoder on an intermediate server. For example, your controller sends UDP packets to this server, the server transcodes them to H.264 and streams them via RTMP to YouTube... Your clients can then play the stream with browsers or VLC, in good quality, at up to a few Mbit/s. But you need a server for this.
And finally, I think this is the best solution: send the client only text, coordinates, animations and so on, i.e. everything your controller renders. With Emscripten you can compile your sources to JS and run exactly the same renderer in the browser. As transport you can use WebSockets, or some trick with a long-lived HTML page containing multiple <script> elements, like we did in the old days.
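As referenced in the first option, the controller side of the UDP variant is tiny. A sketch written with POSIX sockets for readability (on the ST10 the same datagram would be assembled with the EasyWeb stack; the port, client address and render_frame() hook are placeholders):

    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    #define FRAME_BYTES (160 * 32 / 8)   /* 640 bytes: one frame, one datagram */

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);

        struct sockaddr_in dst;
        memset(&dst, 0, sizeof dst);
        dst.sin_family = AF_INET;
        dst.sin_port   = htons(5004);                      /* placeholder port */
        inet_pton(AF_INET, "192.0.2.10", &dst.sin_addr);   /* placeholder client */

        unsigned char frame[FRAME_BYTES];
        for (;;) {
            /* render_frame(frame);  <- firmware fills the 1-bpp buffer here */
            sendto(s, frame, sizeof frame, 0,
                   (struct sockaddr *)&dst, sizeof dst);
            usleep(24000);                                 /* ~41 fps */
        }
    }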
Please tell me which country/city has this public transport passenger information display? It looks very cool. In my city every bus already has an LED panel, but it only shows static text; it's a shame that the huge potential of these devices is not used.
Have you tried just piping this through a WebSocket and handling the binary data in JavaScript?
Every WebSocket frame sent would match a frame of your animation.
You would then take this data and draw it onto an HTML canvas. This would work in every browser with WebSocket support, which is quite a lot, and would give you all the flexibility you need (and the player could be more high-end than the "encoder" in the embedded device).
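The device side of this is cheap too: once the HTTP upgrade handshake is done, each framebuffer can be wrapped in a binary WebSocket frame with a 4-byte header (server-to-client frames are unmasked). A sketch, with tcp_send() standing in for the low-level send routine and assuming the 640-byte payload from the question (so the 16-bit length form applies):

    #include <stdint.h>
    #include <stddef.h>

    void tcp_send(const uint8_t *data, size_t len);   /* assumed helper */

    /* Wrap one raw 1-bpp framebuffer in a binary WebSocket frame.
     * Valid for 126 <= len <= 65535 (16-bit extended length form). */
    void ws_send_frame(const uint8_t *fb, uint16_t len)
    {
        uint8_t hdr[4];
        hdr[0] = 0x82;            /* FIN = 1, opcode 0x2 = binary */
        hdr[1] = 126;             /* marker: 16-bit extended payload length */
        hdr[2] = len >> 8;
        hdr[3] = len & 0xFF;
        tcp_send(hdr, sizeof hdr);
        tcp_send(fb, len);        /* payload: the framebuffer itself */
    }

On the browser side, setting ws.binaryType = 'arraybuffer' and painting each received message onto the canvas would complete the loop.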

How do I increase the speed of my USB cdc device?

I am upgrading the processor in an embedded system for work. This is all in C, with no OS. Part of the upgrade includes migrating the processor-to-PC communications interface from IEEE-488 to USB. I finally got the USB firmware written and have been testing it. It was going great until I tried to push through lots of data, only to discover that my USB connection is slower than the old IEEE-488 connection. I have the USB device enumerating as a CDC device with a baud rate of 115200 bps, but it is clear that I am not even reaching that throughput. I thought that number was a dummy value held over from the RS-232 days, but I might be wrong. I control every aspect of this, from the front end on the PC to the firmware on the embedded system.
I am assuming my issue is how I write to the USB on the embedded-system side. Right now my USB_Write function runs in free time and is just a while loop that writes one char at a time to the USB port until the write buffer is empty. Is there a more efficient way to do this?
One concern I have is that in the old system we had a board dedicated to communications. The CPU would just write data across a bus to this board, and it would handle the communications, which means the CPU didn't have to spend free time on the actual communications but could offload them to a "coprocessor" (not a CPU, but functionally the same here). Even with this concern, I figured I should be getting faster speeds, given that full-speed USB is on the order of MB/s while IEEE-488 is on the order of kB/s.
In short: is this more likely a fundamental system constraint or a software optimization issue?
I thought that number was a dummy value held over from the RS-232 days, but I might be wrong.
You are correct, the baud number is a dummy value. If you were creating a CDC/RS-232 adapter you would use it to configure your RS-232 hardware; in this case it means nothing.
Is there a more efficient way to do this?
Absolutely! You should be writing chunks of data the same size as your USB endpoint for maximum transfer speed. Depending on the device you are using, your stream of single-byte writes may be gathered into a single packet before sending, but from my experience (and your results) this is unlikely.
Depending on your latency requirements, you can put a circular buffer in between and only pass data from it to the USB_Write function once you have ENDPOINT_SZ bytes available. If this results in excessive latency, or your interface is not always communicating, you may want to implement Nagle's algorithm.
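A minimal sketch of that scheme (ENDPOINT_SZ, usb_write_packet() and the ring size are placeholders for whatever your vendor USB stack actually provides):

    #include <stdint.h>

    #define ENDPOINT_SZ 64        /* full-speed bulk endpoint payload size */
    #define RING_SZ     1024      /* power of two for cheap wrap-around */

    static uint8_t  ring[RING_SZ];
    static uint16_t head, tail;   /* free-running producer/consumer counters */

    void usb_write_packet(const uint8_t *pkt, uint16_t len);  /* vendor stack */

    void tx_enqueue(uint8_t c)    /* producer: called wherever data is made */
    {
        ring[head++ & (RING_SZ - 1)] = c;
    }

    /* Consumer: called from free time instead of the byte-at-a-time loop. */
    void tx_pump(void)
    {
        while ((uint16_t)(head - tail) >= ENDPOINT_SZ) {
            uint8_t pkt[ENDPOINT_SZ];
            for (int i = 0; i < ENDPOINT_SZ; i++)
                pkt[i] = ring[tail++ & (RING_SZ - 1)];
            usb_write_packet(pkt, ENDPOINT_SZ);
        }
        /* A real implementation would also flush a short packet after a
         * timeout so the tail of a message is not stuck waiting (cf. Nagle). */
    }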
One concern I have is that in the old system we had a board dedicated to communications.
The NXP part you mentioned in the comments is without a doubt fast enough to saturate a USB full speed connection.
In short: is this more likely a fundamental system constraint or a software optimization issue?
I would consider this a software design issue rather than an optimisation one, but no, it is unlikely you are fundamentally stuck.
Do take care to figure out exactly what sort of USB connection you are using, though: with USB 1.1 you will be limited to 64 KB/s, and with USB 2.0 full speed to 512 KB/s. If you require higher throughput, you should migrate to using a separate bulk endpoint for the data transfer.
I would recommend reading through the USB made simple site to get a good overview of the various USB speeds and their capabilities.
One final issue: vendor CDC libraries are not always the best, and implementations of the CDC standard vary. You can theoretically get more data through a CDC endpoint by using larger endpoints, but I have seen this bring host-side drivers to their knees; if you go this route, create a custom driver using bulk endpoints.
Try testing your device on multiple systems; you may find you get quite different results between Windows and Linux. This will help point the finger at the host end.
And finally, make sure you are doing big buffered reads on the host side; USB will stop transferring data once the host-side buffers are full.

Multicast / UDP Router

I am looking into how to build a multicast (video stream) router with the following requirements:
receiving and sending multicast streams at 3-30 Mbit/s (VLAN forwarding)
in-RAM (or on-storage) delaying to compensate for network congestion
TCP tunneling (UDP to TCP and vice versa)
rate shaping of output UDP streams with up to 1 second of jitter
for TCP tunneling, multi-homed network support
hundreds of streams at 3-30 Mbit/s
I have done extensive research and could not find any networking or video broadcast product that actually matches these requirements.
I have implemented a C Linux app which does the above for a single stream, but now I would need to add a web interface, multiple-stream support, etc., so I was wondering if there is something that can accomplish the above with the quality and reliability suitable for video broadcast, i.e. some kind of product?
Doing this in C is not easy, so I was wondering if there are any higher-level languages that could match the performance. Would Perl, Python or Java be a good choice?
How do I architect this kind of software? I am currently using a C application running on Red Hat with an RT kernel, with a command-line interface and single-stream support.
I want to build an application that runs 100 streams 24/7 (using an 8- or 16-core system with 64 GB RAM) and is easy to configure on the fly using either a command line or a web interface.
I just can't see any better option than the current Red Hat RT kernel and a simple C userspace app. It seems to be the best and easiest option to go with.
The usual division when architecting such an application is to have the high-performance parts done by C code and to write the low-performance components, like a user interface for configuration, in a higher-level language like Python or Ruby or what have you. It would be hard to achieve the performance requirements in a high-level language, but it would be unnecessarily masochistic to write a web configuration system in C.
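For a sense of scale: the per-stream data plane the C side keeps doing is small (join a group, read datagrams, resend); the real work is the delaying, shaping and tunneling logic around it. A minimal sketch with placeholder groups and ports, error handling omitted:

    #include <string.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int in  = socket(AF_INET, SOCK_DGRAM, 0);
        int out = socket(AF_INET, SOCK_DGRAM, 0);

        struct sockaddr_in local;
        memset(&local, 0, sizeof local);
        local.sin_family      = AF_INET;
        local.sin_port        = htons(1234);          /* placeholder port */
        local.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(in, (struct sockaddr *)&local, sizeof local);

        struct ip_mreq mreq;
        memset(&mreq, 0, sizeof mreq);
        inet_pton(AF_INET, "239.1.1.1", &mreq.imr_multiaddr);  /* input group */
        mreq.imr_interface.s_addr = htonl(INADDR_ANY);
        setsockopt(in, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof mreq);

        struct sockaddr_in dst;
        memset(&dst, 0, sizeof dst);
        dst.sin_family = AF_INET;
        dst.sin_port   = htons(1234);
        inet_pton(AF_INET, "239.2.2.2", &dst.sin_addr);        /* output group */

        char buf[2048];   /* large enough for a 7x188-byte MPEG-TS datagram */
        for (;;) {
            ssize_t n = recv(in, buf, sizeof buf, 0);
            if (n > 0)    /* the delaying / rate-shaping logic would go here */
                sendto(out, buf, (size_t)n, 0,
                       (struct sockaddr *)&dst, sizeof dst);
        }
    }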
