Branch in a GStreamer pipeline, but without data copy - c

I have a simple pipeline in which the video feed from v4l2src must be fed into both an encoder and the display output. Using tee I can meet this requirement, but my concern with tee is that it appears to clone the incoming video frames for each branch of the pipeline.
My application does not modify the buffer data, so it should be safe to access it from multiple threads as read-only memory.
I want to know how to build such a pipeline without cloning buffers, sharing them instead between the threads/branches in GStreamer.
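For reference, a branched pipeline of this kind might look like the following gst-launch-1.0 sketch (the encoder, muxer, and sinks are illustrative placeholders). Note that tee does not deep-copy buffer memory: it pushes the same buffer, with an increased refcount, to every branch, so as long as downstream elements only read the data, the branches already share a single copy.

```shell
# Hypothetical branched pipeline (element choices are placeholders).
# tee pushes the *same* buffer to each branch with an increased refcount;
# the queue elements give each branch its own streaming thread.
gst-launch-1.0 v4l2src ! tee name=t \
    t. ! queue ! autovideosink \
    t. ! queue ! x264enc ! matroskamux ! filesink location=out.mkv
```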

Related

Mux telemetry data into an MPEG-TS file using GStreamer

I recently started using GStreamer and have succeeded in muxing an audio stream and two camera streams into an MPEG-TS file using mpegtsmux. Now I want to inject telemetry data from an accelerometer into the stream. I was thinking of using teletext for this, which mpegtsmux supports, and then using appsrc to inject the data into the pipeline. Has anyone succeeded in doing this before? I can't seem to find any examples of injecting teletext into a data stream.
How about using an audio channel without compression?
Use the application/x-teletext caps to create a pad to the mux.
I've yet to be able to do it successfully. If you have, I'd be interested in how you used appsrc.

Multiple processes read/write files. What API to use?

I have a situation where I need to spawn worker processes. On one side, worker processes should read evenly split parts of a file and pass the data over a socket connection. The other side should read that data and write it in parallel. I plan to split the source file into parts beforehand so that each process gets exactly one part of the file to read from or write to.
I'm already using sockets with read/write, so I think it's best for me to keep using this simple API. But I can't find any means of setting the file offset when using plain file descriptors. I obviously need that when reading from a file that is divided into parts for reading/writing.
I've heard that mmap can help me somehow. But to my understanding mmap needs a lot of RAM, and my app will run multiple of the transfers mentioned. The app is also quite limited in CPU usage.
The question is: what API should I use?
EDIT: I'm on Linux. The filesystem is ext4.
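On Linux, one POSIX answer is pread()/pwrite(), which take an explicit byte offset and leave the descriptor's file position untouched, so concurrent workers can address their own parts of one file. A minimal sketch (the helper name read_part is made up here):

```c
#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>

/* Read `len` bytes from `path` starting at byte offset `off`.
 * pread() does not move the descriptor's file offset, so several
 * threads/processes can read different parts of the same file safely.
 * Returns the number of bytes read, or -1 on error. */
ssize_t read_part(const char *path, off_t off, char *buf, size_t len)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    ssize_t n = pread(fd, buf, len, off);
    close(fd);
    return n;
}
```

pwrite() is the mirror image for the writing side, so each worker can write its own region without any seek/lock coordination.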

How does GNU Radio File Sink work?

I want to know how the File Sink in GNU Radio works. Does it receive a signal and then write it to the file, and is signal reception suspended while the write takes place?
I just want to be sure that no portion of the signal is lost, unwritten, because of the time taken for writing.
Any help or reading material regarding this would be very much appreciated.
Depending on the sampling rate of the device, writing samples to a file without discontinuities may be impossible.
Instead of writing to disk, you can write the samples to a ramdisk. A ramdisk is a file-storage abstraction that uses RAM as the storage medium. Its great advantage is very fast read/write transfers; however, the file size is limited by the amount of RAM the host has.
Here is a good article that will help you create a ramdisk under Linux. I'm sure you will easily find a guide for Windows too.
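For instance, on Linux a tmpfs-backed ramdisk can be created with a single mount command (the mount point and size below are placeholders):

```shell
# Create a mount point and mount a 512 MB tmpfs ramdisk on it (needs root).
mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=512M tmpfs /mnt/ramdisk
# Then point the GNU Radio File Sink at a path such as /mnt/ramdisk/capture.bin
```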
A file sink won't normally block your radio source as long as the average write speed exceeds the output rate of the radio blocks. There are internal buffers that can smooth things out a little, but if your disk fills up then the rest of your flowgraph will stall.
If you're not seeing "O" messages in the output console, you're not dropping samples.

Is there a Win32 API to copy a fragment of a file into another file?

I would like to programmatically copy a section of one file into another file. Is there any Win32 API I could use without moving the bytes through my program? Or should I just read from the source file and write to the target?
I know how to do this by reading and writing chunks of bytes, I just wanted to avoid doing it myself if the OS already offers that.
What you're asking for can be achieved, but not easily. Device drivers routinely transfer data without CPU involvement, but doing that requires kernel-mode code: you would basically have to write a device driver. The benefits would have to be huge to justify the difficulties of developing, testing, and distributing a kernel-mode driver. So unless you think there is a huge benefit at stake here, I'm afraid ReadFile/WriteFile are the best you can do.
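For what it's worth, the read/write loop is only a few lines. This sketch uses standard C I/O as a portable stand-in; a Win32 version would follow the same pattern with SetFilePointerEx, ReadFile, and WriteFile (the helper name copy_fragment is made up):

```c
#include <stdio.h>

/* Copy `len` bytes starting at `src_off` in `src` to the end of `dst`,
 * in fixed-size chunks. Standard C I/O is used as a portable stand-in
 * for the equivalent SetFilePointerEx/ReadFile/WriteFile loop.
 * Returns 0 on success, -1 on error. */
int copy_fragment(const char *src, long src_off, long len, const char *dst)
{
    char buf[4096];
    FILE *in = fopen(src, "rb");
    FILE *out = fopen(dst, "ab");
    int rc = -1;

    if (in && out && fseek(in, src_off, SEEK_SET) == 0) {
        while (len > 0) {
            size_t want = len < (long)sizeof(buf) ? (size_t)len : sizeof(buf);
            size_t got = fread(buf, 1, want, in);
            if (got == 0 || fwrite(buf, 1, got, out) != got)
                break;                 /* read or write error */
            len -= (long)got;
        }
        if (len == 0)
            rc = 0;
    }
    if (in) fclose(in);
    if (out) fclose(out);
    return rc;
}
```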

How does a mainstream web server implement this feature?

This means, for example, a module can start compressing the response from a backend server and stream it to the client before the module has received the entire response from the backend.
Nice!
I know it's some kind of asynchronous I/O, but an answer that simple isn't enough. Does anyone know?
Without looking at the source code of an actual implementation, I'm speculating here:
It's most likely some kind of stream (abstract buffered IO) that is passed from one module to the other ("chaining"). One module (maybe a servlet container) writes to a stream that is read by another module (the compression module in your example), which then writes its output to another stream. The contents of that stream may then be processed further or transmitted to the client.
The backend may need to wait on I/O before it can fully produce the page. Modules can begin compressing the start of the response before the backend has finished writing it.
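A rough sketch of that chaining idea in C, using an uppercasing transform as a stand-in for a real compressor (all names here are invented for illustration): each chunk is pushed to the next stage as soon as it arrives, so output can reach the client before the full response exists.

```c
#include <ctype.h>
#include <stddef.h>

/* A "module" consumes a chunk and pushes its output to the next stage. */
typedef void (*sink_fn)(const char *chunk, size_t len, void *ctx);

/* The transform module: process `chunk` in place (here: uppercase it,
 * standing in for compression) and immediately hand the result to the
 * next module in the chain, without buffering the whole response. */
void transform_module(char *chunk, size_t len, sink_fn next, void *ctx)
{
    for (size_t i = 0; i < len; i++)
        chunk[i] = (char)toupper((unsigned char)chunk[i]);
    next(chunk, len, ctx);  /* stream onward chunk by chunk */
}
```

A real server would hang a network-write module on the end of the chain, so bytes leave the machine while the backend is still producing the tail of the response.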
To understand why this is useful, you need to understand how nginx is structured. nginx is a server that relies on non-blocking input and output. Normally, a server uses blocking input and output: it listens on a connection, and when a connection arrives, it processes the page. To increase throughput, multiple threads, called 'workers', are spawned.
Contrast this with nginx: it continually asks the kernel, "Are any of my I/O requests ready?" This allows it to handle the same number of pages with (1) less overhead from all the different processes and (2) lower memory usage. It has some downsides, however. For extremely low-volume applications, nginx may use more CPU than a blocking server. Second, it's much less portable: Windows uses an entirely different model for non-blocking I/O.
Getting back to your original question: compressing the beginning of a page is useful because the compressed start can already be on its way to the client while the backend is still accessing a database, reading from disk, or whatever else it has to do.
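The "ask the kernel what's ready" loop described above can be sketched with Linux's epoll; this toy helper (the name fd_is_readable is made up) checks a single descriptor for readability, where a real server would register many connections and loop over epoll_wait.

```c
#include <sys/epoll.h>
#include <unistd.h>

/* Minimal readiness check in the style described above: register `fd`
 * with epoll and ask the kernel whether it is readable, waiting at most
 * `timeout_ms`. Returns 1 if ready, 0 if not, -1 on error. */
int fd_is_readable(int fd, int timeout_ms)
{
    int ep = epoll_create1(0);
    if (ep < 0)
        return -1;

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
    int ready = -1;
    if (epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev) == 0) {
        struct epoll_event out;
        ready = epoll_wait(ep, &out, 1, timeout_ms); /* 0 = timeout, 1 = ready */
    }
    close(ep);
    return ready;
}
```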
