I used the code snippet (using the V4L2 C API) from the link below to capture MJPEG and YUYV images from a USB camera.
https://gist.github.com/jayrambhia/5866483
When I use the same code with a MIPI CSI camera, the code freezes and does not return an image.
Does the V4L2 C API need any changes to work with a MIPI camera? Is there a sample code snippet available for this (without GStreamer)?
I would like to know how to capture audio using a dummy sound card driver.
I'm trying to implement the steps below:
1. Audio is played on Ubuntu, but routed through a dummy sound card driver so that the stream can be captured.
2. The captured audio is sent to Windows over the network.
3. The audio is actually played on Windows.
What you need is to activate the ALSA snd-aloop module, which provides a full-duplex virtual loopback sound card. Have a look at the following links for instructions on activation and example usage:
https://github.com/TheSalarKhan/Linux-Audio-Loopback-Device
https://sysplay.in/blog/linux/2019/06/playing-with-alsa-loopback-devices/
A couple of important points to consider (a minimal capture sketch in C follows the list):
The subdevices are linked in pairs; whatever you play on hw:n,0,m goes out on hw:n,1,m (see the example in the 1st link)
The first application opening one of the subdevices will force the second application to use the same set of parameters: sample rate, format, number of channels. For example, suppose the recording application opens a capture stream on hw:2,1,0 with stereo/44100/S16_LE format; the playback application on the paired hw:2,0,0 will then be forced to use the same stereo/44100/S16_LE format
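Here is a minimal sketch of the capture side, assuming the module is loaded and the card shows up under its default name "Loopback" (error handling kept short):

#include <alsa/asoundlib.h>
#include <stdio.h>

int main(void)
{
    snd_pcm_t *pcm;
    short buf[2 * 441];   /* 441 stereo S16_LE frames = 10 ms at 44100 Hz */
    int err;

    /* hw:Loopback,1,0 receives whatever is played on hw:Loopback,0,0 */
    if ((err = snd_pcm_open(&pcm, "hw:Loopback,1,0",
                            SND_PCM_STREAM_CAPTURE, 0)) < 0) {
        fprintf(stderr, "open: %s\n", snd_strerror(err));
        return 1;
    }
    /* must match the parameters the playback side negotiated first */
    if ((err = snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                                  SND_PCM_ACCESS_RW_INTERLEAVED,
                                  2, 44100, 1, 500000)) < 0) {
        fprintf(stderr, "params: %s\n", snd_strerror(err));
        return 1;
    }
    for (;;) {
        snd_pcm_sframes_t n = snd_pcm_readi(pcm, buf, 441);
        if (n < 0)
            n = snd_pcm_recover(pcm, n, 0);
        if (n < 0)
            break;
        /* ...send the n captured frames to Windows over the network... */
    }
    snd_pcm_close(pcm);
    return 0;
}

Compile with gcc capture_loop.c -o capture_loop -lasound; whatever another application plays on hw:Loopback,0,0 then shows up in buf, ready to be pushed over the network.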
Hope this helps
I am working on a small project where I play music over Bluetooth to my Raspberry Pi, and I would also like to analyse the audio.
How can I pipe the Raspberry Pi audio output into a GStreamer C program? I have a working solution for an MP3 file, but of course I would like to extend it to read the audio output of my RPi.
Here is the source element I used in my GStreamer program to read from a file:
data.source = gst_element_factory_make ("uridecodebin", "source");
g_object_set (data.source, "uri", "file:///home/pi/example.mp3", NULL);
Do I need to loop the audio back to the microphone jack and use that as a source, or is there a better way to do this?
I'm not familiar with the RPi, but assuming it runs a regular Linux, chances are high that sound output is done via PulseAudio. For each output device, PulseAudio also offers a "monitor" device, which can be used to capture the audio data.
These monitor devices may need to be made visible by some system configuration, but I'm not sure; maybe they exist by default.
From a GStreamer point of view, you would use the "pulsesrc" element to capture audio:
https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-good-plugins/html/gst-plugins-good-plugins-pulsesrc.html
From there, your regular GStreamer techniques apply, meaning you could write the data to disk, analyse it live, etc. Since this can be quite complex, it's hard to give general advice.
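As a minimal sketch, the program below records the monitored output to a WAV file with gst_parse_launch. The device string is only a placeholder; list the real monitor source names on your system with pactl list sources short:

#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    /* num-buffers makes pulsesrc send EOS after ~500 buffers, so the
       WAV header gets finalized and the pipeline shuts down cleanly */
    GError *error = NULL;
    GstElement *pipeline = gst_parse_launch(
        "pulsesrc device=alsa_output.analog-stereo.monitor num-buffers=500 ! "
        "audioconvert ! wavenc ! filesink location=capture.wav", &error);
    if (!pipeline) {
        g_printerr("parse error: %s\n", error->message);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* block until EOS or error */
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
            GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
    if (msg)
        gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}

For live analysis instead of recording, swap wavenc and filesink for an appsink and pull the samples from your own code.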
I have a third-party vision library that only works with a FireWire camera. Is it possible to somehow trick libdc1394 into thinking that there is a FireWire camera connected, but pass frames from a USB webcam via V4L?
Given that Stack Overflow is about programming...
You cannot trick libdc1394 itself; however, you can write a library that exposes exactly the same API as libdc1394 and use this as a shim over a USB webcam library. Since this is Linux, you can start from the source of libdc1394, which will be a lot easier than reverse engineering.
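To make the idea concrete, here is a toy sketch of one shimmed entry point. The dc1394_capture_dequeue signature and frame struct come from the libdc1394 v2 headers; v4l2_grab_frame is a hypothetical helper you would implement with the V4L2 capture code, and a complete shim would of course have to cover every entry point the vision library calls:

#include <dc1394/dc1394.h>

/* hypothetical helper, to be implemented with the V4L2 API */
extern int v4l2_grab_frame(unsigned char **data, uint32_t *width,
                           uint32_t *height, uint32_t *bytes);

dc1394error_t dc1394_capture_dequeue(dc1394camera_t *camera,
                                     dc1394capture_policy_t policy,
                                     dc1394video_frame_t **frame_out)
{
    static dc1394video_frame_t frame;  /* reused between calls */
    (void) policy;                     /* always grab blocking here */

    if (v4l2_grab_frame(&frame.image, &frame.size[0], &frame.size[1],
                        &frame.image_bytes) < 0)
        return DC1394_FAILURE;

    frame.camera = camera;
    *frame_out = &frame;
    return DC1394_SUCCESS;
}

Built into a shared library with the same soname as libdc1394 (or loaded via LD_PRELOAD), this gets picked up by the vision library in place of the real implementation.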
How can I connect an Arduino board and MATLAB for image processing?
I am making an autonomous robot which requires image processing in MATLAB.
You can use the MATLAB-to-Arduino package at the official MATLAB site, "MATLAB Interface to Arduino".
If you have serial communication on the Arduino, MATLAB has built-in tools for talking with the chip over USB or RS-232. It is fairly simple to set up, but if your images are high in resolution you may not get the necessary speed from standard RS-232.
Something along the lines of:
s = serial('COM1', 'BaudRate', 115200);
Then you can read from and write to the Arduino through MATLAB functions and scripts.
You can get connected from MATLAB by simply using the serial and fopen commands.
For example:
s = serial('COM2', 'BaudRate', 9600, 'DataBits', 8);
fopen(s);
count = 0;
while count < 50
    a = fscanf(s);        % read one line sent by the Arduino
    count = count + 1;
end
fclose(s);
On the Arduino side, use the Serial.print() function.
Simple data can be sent using this. I have never tried a camera with this technique, but mounting a camera shield on the Arduino, taking snapshots, and later sending the data through the Arduino to MATLAB as a matrix might work. Just an idea; it might be possible.
Edit 1:
I looked into this some more and found some potential hardware for it:
1. ArduCam Shield for Arduino
2. https://www.sparkfun.com/products/11418
I'd like to create an app that pulls multiple live video feeds, supplied by coax, HDMI, or some other standard, into WPF for manipulation (e.g. applying a few transforms or pixel shaders), which is then output to a monitor. What should I look at to get started with this app, and is there any hardware that would make things easier?
If you are pulling in standard broadcast via coax or over the air, a $100 ATSC HD TV tuner will do. I don't have any experience with HD capture cards (I think they run for about $1000), or more specifically, cards that take in a raw HD stream.
When you install a capture device (TV tuner, webcam, capture card) in Windows, it creates a DirectShow source filter wrapper for it. The kind of hardware you are targeting determines how you create the DirectShow graph. I have no reason to expect HD capture cards to be different from any other capture card or webcam (TV tuners are slightly different).
You can use my WPF MediaKit as a base. The web cam control may work out of the box or just require slight changes for an HD capture card. A TV tuner would require a lot more than just this.