Mux telemetry data into an MPEG-TS file using GStreamer - C

I recently started using GStreamer and have succeeded in muxing an audio stream and two camera streams into an MPEG-TS file using mpegtsmux. Now I want to inject telemetry data from an accelerometer into the data stream. I was thinking of using teletext for this, since it is supported by mpegtsmux, and then using appsrc to inject the data into the pipeline. Has anyone succeeded in doing this before? I can't seem to find any examples of injecting teletext into a data stream.

How about using the sound channel without compression?

Use the application/x-teletext caps to create a pad on the mux.
I have yet to do it successfully. If you have, I'd be interested in how you used appsrc.
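For reference, here is a rough, untested sketch in C of what such an appsrc branch could look like. The pad-template name "sink_%d" and the exact caps your mpegtsmux build accepts are assumptions on my side; check them with gst-inspect-1.0 mpegtsmux.

/* Hypothetical sketch: push telemetry packets into mpegtsmux as teletext
 * through an appsrc. Assumes `pipeline` and `mux` (the mpegtsmux element)
 * already exist and the pipeline is not yet PLAYING; error handling omitted. */
#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

static GstElement *add_teletext_branch(GstPipeline *pipeline, GstElement *mux)
{
    GstElement *src = gst_element_factory_make("appsrc", "telemetry-src");

    /* The muxer accepts application/x-teletext on its request sink pads. */
    GstCaps *caps = gst_caps_new_empty_simple("application/x-teletext");
    g_object_set(src,
                 "caps", caps,
                 "format", GST_FORMAT_TIME,   /* we push timestamped buffers */
                 "is-live", TRUE,
                 NULL);
    gst_caps_unref(caps);

    gst_bin_add(GST_BIN(pipeline), src);

    /* Request a new sink pad on the muxer and link the appsrc to it.
     * "sink_%d" is the template name I believe mpegtsmux uses; verify with
     * gst-inspect-1.0 for your GStreamer version. */
    GstPad *sinkpad = gst_element_get_request_pad(mux, "sink_%d");
    GstPad *srcpad  = gst_element_get_static_pad(src, "src");
    gst_pad_link(srcpad, sinkpad);
    gst_object_unref(srcpad);
    gst_object_unref(sinkpad);

    return src;
}

/* Call this whenever a new telemetry sample is available: wrap the bytes in
 * a GstBuffer, timestamp it and hand it to the appsrc. */
static void push_telemetry(GstElement *appsrc, const guint8 *data, gsize len,
                           GstClockTime pts)
{
    GstBuffer *buf = gst_buffer_new_allocate(NULL, len, NULL);
    gst_buffer_fill(buf, 0, data, len);
    GST_BUFFER_PTS(buf) = pts;
    gst_app_src_push_buffer(GST_APP_SRC(appsrc), buf); /* takes ownership */
}

The timestamps on those buffers need to be on the same running time as the audio and video branches, otherwise the muxer cannot interleave the telemetry correctly.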

Related

How do I capture the audio of a WPF window or cscore output in C#?

I made a music player in WPF using cscore. Now I want to add a feature so I can stream the output in real time (like a radio) to another instance of the music player over the internet. I can work out how to stream the data later, but first I need to know how to get the bytes of the audio output. I'm asking for help because I'm lost; I've done some research and found nothing but how to stream the desktop audio. That's not a solution, because I want to listen to the same music with some friends while hanging out on Discord, and if I stream the desktop audio they will hear themselves as well as the music. Any help will be welcome. Thanks in advance!
I have not used cscore; I mainly use NAudio, a similar library that facilitates getting audio to and from the sound card. So I will try to answer in a way that lets you find what you are looking for in cscore.
In your player code you will be pulling data from the audio file. In NAudio this is done with an audio file reader; I think it is called a WaveFileReader in cscore. The reader translates the audio file into a stream of audio samples in the form of byte arrays, and those byte arrays are then fed to the WASAPI output so the audio plays on the sound card.
The ideal place to start with your streaming system is in the middle of those two processes. Rather than just passing the audio samples to the sound card, you need to take a copy of the byte array containing the samples. It is this data you will need to stream to your friends.
From here you will need to look at compressing the audio and at streaming protocols like RTP; all of this can be done in C#. The issue will be, as it always is with audio, keeping your data stream in pace with the sound card. Every time WasapiOut asks for more samples you need to have them ready, otherwise the audio will be choppy.
I hope this points you in the right direction. Others with cscore experience may have code examples to assist you more directly.

How to capture audio using a dummy sound card driver?

I would like to know how to capture audio using a dummy sound card driver.
I'm thinking of implementing the steps below:
1. Audio is played in Ubuntu, but it only goes through a dummy sound card driver so that the audio stream can be captured.
2. The captured audio is sent to Windows over the network.
3. The audio is actually played on Windows.
What you need is to activate the ALSA snd-aloop module, which provides a full-duplex virtual loopback sound card. Please have a look at the following links for instructions on activation and example usage:
https://github.com/TheSalarKhan/Linux-Audio-Loopback-Device
https://sysplay.in/blog/linux/2019/06/playing-with-alsa-loopback-devices/
A couple of important points to consider:
The subdevices are linked in pairs; whatever you play on hw:n,0,m goes out on hw:n,1,m (see the example in the first link)
The first application opening one of the subdevices will force the second application to use the same set of parameters: sample rate, format, number of channels. For example, suppose the recording application opens a capture stream on hw:2,1,0 with a stereo/44100/S16_LE format; the playback application on the paired subdevice hw:2,0,0 will then be forced to use the same stereo/44100/S16_LE format
Hope this helps
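If it helps, below is a minimal, untested sketch in C of reading from the capture side of the loopback with the ALSA API. The device name hw:Loopback,1,0 and the stereo/44100/S16_LE parameters are only assumptions matching the example above.

/* Sketch: capture whatever another application plays into the loopback card.
 * Assumes snd-aloop is loaded and the player writes to hw:Loopback,0,0, so
 * the same samples come out on the paired subdevice hw:Loopback,1,0. */
#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm;
    short buf[2 * 441];               /* 10 ms of stereo frames at 44100 Hz */

    if (snd_pcm_open(&pcm, "hw:Loopback,1,0", SND_PCM_STREAM_CAPTURE, 0) < 0)
        return 1;

    /* stereo / 44100 Hz / S16_LE, interleaved, 0.5 s of internal buffering */
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, 44100, 1, 500000) < 0)
        return 1;

    for (;;) {
        snd_pcm_sframes_t n = snd_pcm_readi(pcm, buf, 441);
        if (n < 0)
            n = snd_pcm_recover(pcm, (int) n, 0);   /* handle xruns */
        if (n < 0)
            break;
        /* ... forward buf (n frames) to the network, a file, etc. ... */
    }

    snd_pcm_close(pcm);
    return 0;
}

Compile with -lasound.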

Programmatic ALSA loopback

I need some pointers on where to start with the following:
From any application that plays audio through ALSA to the connected speaker, I'd like to grab the samples and do some audio processing.
I am not in control of the player, and I'd like to be able to process the audio from any source. Basically it will be a VU meter, perhaps later with an FFT (all just on the command line). Additionally, I'd like my app to be self-contained.
In my research I've found:
There is a loopback kernel module.
You can do fancy stuff with the configuration file.
There is the ability to create plugins.
Using the kernel module and altering the configuration file introduces dependencies of my application on the configuration of the system.
And by creating a plugin I give up control over the app and cannot start/terminate it whenever I want.
This is not satisfactory to me, so I'd like to know if there is a way to either:
create a loopback device programmatically,
or read from the PCM playback device that other applications are writing to in some other way.
You can use PulseAudio on Linux, where you can very easily create a loopback device. There is a pactl command that will help you create a null sink, and you can loop back from it.
Something like this:
# this creates a null sink with the specified channel configuration
pactl load-module module-null-sink sink_name=sink6ch format=s16le rate=48000 channels=6 channel_map=front-left,front-right,front-center,lfe,rear-left,rear-right
# make it the default sink
pactl set-default-sink sink6ch
You can then use its monitor device to read the audio back; read about PulseAudio monitor devices for more details.
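Here is an untested sketch in C of reading from that monitor source with the PulseAudio simple API; the source name sink6ch.monitor follows the usual <sink_name>.monitor convention for the sink created above.

/* Sketch: record from the null sink's monitor source. The sample spec must
 * match what the sink was created with (s16le / 48000 / 6 channels). */
#include <pulse/simple.h>
#include <pulse/error.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    static const pa_sample_spec ss = {
        .format   = PA_SAMPLE_S16LE,
        .rate     = 48000,
        .channels = 6,
    };
    int error;
    pa_simple *s = pa_simple_new(NULL,              /* default server */
                                 "monitor-reader",  /* application name */
                                 PA_STREAM_RECORD,
                                 "sink6ch.monitor", /* the monitor source */
                                 "capture",         /* stream description */
                                 &ss, NULL, NULL, &error);
    if (!s) {
        fprintf(stderr, "pa_simple_new() failed: %s\n", pa_strerror(error));
        return 1;
    }

    uint8_t buf[4096];
    for (;;) {
        if (pa_simple_read(s, buf, sizeof(buf), &error) < 0)
            break;
        /* ... process the 6-channel S16LE frames in buf ... */
    }

    pa_simple_free(s);
    return 0;
}

Compile with -lpulse-simple.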

ALSA PCM device network streaming

I am writing an application in C++ intended to stream data from a PCM device (microphone) to a remote server. I have successfully been able to stream a recorded WAV file to the server, and I have been able to output the mic input to a file. The next step is to merge my two programs: open the PCM device, and then stream what gets put into the buffer to the server.
I have read that I will need to use PulseAudio to do this because ALSA does not have a server. Is this accurate? Does anyone have any examples or resources? I have had minimal luck trying to research this online.
Thanks in advance!
Thank you! That got me thinking down a different path, and I was able to just send the buffer using socket programming.
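For anyone else taking that route, below is a rough, untested sketch in C of the combined approach: read frames from an ALSA capture PCM and push the raw buffer to a server over a plain TCP socket. The device name, sample format and server address are placeholders, not details from the question.

/* Sketch: capture from ALSA and send the raw samples over TCP. */
#include <alsa/asoundlib.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* --- connect to the server (placeholder address/port) --- */
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port   = htons(5000) };
    inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);
    if (connect(sock, (struct sockaddr *) &addr, sizeof(addr)) < 0)
        return 1;

    /* --- open the capture device --- */
    snd_pcm_t *pcm;
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, 0) < 0)
        return 1;
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, 48000, 1, 500000) < 0)
        return 1;

    /* --- capture loop: whatever lands in the buffer goes to the socket --- */
    short buf[2 * 480];                        /* 10 ms of stereo frames */
    for (;;) {
        snd_pcm_sframes_t n = snd_pcm_readi(pcm, buf, 480);
        if (n < 0)
            n = snd_pcm_recover(pcm, (int) n, 0);
        if (n < 0)
            break;
        if (send(sock, buf, (size_t) n * 2 * sizeof(short), 0) < 0)
            break;
    }

    snd_pcm_close(pcm);
    close(sock);
    return 0;
}

In practice you will probably want to compress the audio and use something like RTP instead of raw TCP, but this is enough to get samples flowing.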

Register virtual sound device from within application

I want to be able to process the audio output of applications (VLC, Rhythmbox, ...) within my own application. Moreover, one should be able to select my application as the sink for the sound (e.g., in VLC or pavucontrol my application should appear as an output device).
How is this possible? Can it be done with ALSA, PulseAudio, ...? For now I am looking for the easiest solution, although more performant ones may become preferable later. It would be great if most of the configuration could be done via API calls.
Thank you for your support!
I ended up writing a PulseAudio module. There you can create your own sinks and get direct access to the audio stream. Have a look at my implementation here.
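For anyone considering the same route, here is a very rough skeleton of what a PulseAudio module looks like. The actual sink creation is only indicated in comments, and building it requires the PulseAudio source tree, since the pulsecore headers are not installed as a public API; see module-null-sink.c in the PulseAudio sources for a complete example.

/* Rough skeleton of a PulseAudio module along the lines of module-null-sink. */
#include <pulsecore/module.h>
#include <pulsecore/modargs.h>
#include <pulsecore/log.h>

PA_MODULE_AUTHOR("example");
PA_MODULE_DESCRIPTION("Sink that hands the audio stream to the application");
PA_MODULE_VERSION(PACKAGE_VERSION);
PA_MODULE_LOAD_ONCE(false);
PA_MODULE_USAGE("sink_name=<name of the sink>");

static const char * const valid_modargs[] = { "sink_name", NULL };

int pa__init(pa_module *m) {
    pa_modargs *ma = pa_modargs_new(m->argument, valid_modargs);
    if (!ma) {
        pa_log("Failed to parse module arguments.");
        return -1;
    }

    /* Here you would fill a pa_sink_new_data structure, call pa_sink_new(),
     * attach your render callbacks and start the sink's I/O thread. The
     * audio written to the sink then arrives in those callbacks, where the
     * application code can process it. */

    pa_modargs_free(ma);
    return 0;
}

void pa__done(pa_module *m) {
    /* Unlink and unref the sink created in pa__init() here. */
    (void) m;
}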
