Working with v4l2loopback, I can run these two virtual-device setups:
a) running the preview image from a Canon DSLR via USB through v4l2loopback into OBS:
modprobe v4l2loopback
gphoto2 --stdout --capture-movie | gst-launch-1.0 fdsrc fd=0 ! decodebin name=dec ! queue ! videoconvert ! tee ! v4l2sink device=/dev/video0
Found here, and it works.
b) Streaming the output of OBS into a browser based conferencing system, like this:
modprobe v4l2loopback devices=1 video_nr=10 card_label="OBS Cam" exclusive_caps=1
Found here, this also works.
However, I need to run both a) and b) at the same time, which isn't working as expected. They interfere with each other: it seems they are using the same buffer, and the video flips back and forth between the two producers.
What I learned and tried:
A kernel module can only be loaded once. The v4l2loopback module can be unloaded using the command modprobe -r v4l2loopback. I don't know whether loading it a second time is ignored or replaces the previous instance.
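As far as I understand it, the clean way to change the options is to unload and reload the module in one go; roughly like this (a sketch, assuming nothing still has the loopback devices open):
# check whether the module is currently loaded
lsmod | grep v4l2loopback
# unload it (this fails while a device is still in use), then reload it with the new options
sudo modprobe -r v4l2loopback
sudo modprobe v4l2loopback devices=2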
I've tried to load the module with devices=2 as an option as well as different video devices, but I can't find the right syntax.
As there is already an accepted answer, I assume your problem has been solved. Still, I was quite a newbie and couldn't get the syntax right even with the answer above (i.e. how to set up a second device such as /dev/video2).
After a bit more searching, I found a website that explains how to add multiple devices, with an example:
modprobe v4l2loopback video_nr=3,4,7 card_label="device number 3","the number four","the last one"
This will create three devices, with the card names passed via the card_label parameter:
/dev/video3 -> device number 3
/dev/video4 -> the number four
/dev/video7 -> the last one
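To double-check which label ended up on which device node, you can list the devices (assuming the v4l-utils package is installed):
v4l2-ctl --list-devices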
When I was trying to use my Nikon camera as a webcam and OBS as a virtual camera for streaming, having full control over the naming of my video devices was important. I hope this answer helps some others as well.
from your description ("the video flips back and forth between the two producers") it seems that both producers are writing to the same video-device.
to fix this, you need to do two things:
create 2 video-devices
tell each producer to use their own video device
creating multiple video-devices
as documented this can be accomplished by specifying devices=2 when loading the module.
taking your invocation of modprobe, this would mean:
modprobe v4l2loopback devices=2 video_nr=10 card_label="OBS Cam" exclusive_caps=1
this will create two new devices: the first one will be /dev/video10 (since you specified video_nr), and the second one will take the first free video-device number.
on my system (that has a webcam, which occupies both /dev/video0 and /dev/video1) this is /dev/video2
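if you'd rather not guess which number the second device gets, you can give both devices explicit numbers and labels; a sketch (picking 11 as an arbitrary second number):
modprobe v4l2loopback devices=2 video_nr=10,11 card_label="OBS Cam","DSLR Cam" exclusive_caps=1,1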
telling each producer to use their own device
well, tell one producer to use /dev/video10 and the other to use /dev/video2 (or whatever video-devices you got)
e.g.
gphoto2 --stdout --capture-movie | gst-launch-1.0 \
fdsrc fd=0 \
! decodebin name=dec \
! queue \
! videoconvert \
! tee \
! v4l2sink device=/dev/video10
and configure obs to use /dev/video2.
or the other way round.
just don't use the same video-device for both producers.
(also make sure that your consumers use the correct video-device)
Is there a way to write data in a GStreamer pipeline to a file based on an (external) condition?
I have an application/code, which streams/displays video to the screen and continuously writes it to a file (it works fine).
I would like the GStreamer pipeline to only write to a file if an external condition is true (at runtime; I don't know the condition in advance).
What I have done so far:
I carefully searched the official GStreamer documentation, where I found some information on appsink, but I don't really see how to apply it based on an (external) condition.
I also used 'dynamic pipelines' as a search term, which seems to describe the modification of GStreamer pipelines based on conditions.
I also searched the GStreamer mailing list and found this post, which uses the gst_element_set_locked_state() function.
I added
if (condition) {
  gst_element_set_locked_state (videosink, TRUE);
} else {
  gst_element_set_locked_state (videosink, FALSE);
}
to my code, but then the pipeline would not work at all (displaying a black image).
Another way is described on https://coaxion.net/blog/2014/01/gstreamer-dynamic-pipelines/ in Example 2 with the corresponding code being available on GitHub (https://github.com/sdroege/gst-snippets/blob/217ae015aaddfe3f7aa66ffc936ce93401fca04e/dynamic-tee-vsink.c).
It seems to use a callback and the gst_element_set_state (sink->sink, GST_STATE_NULL) function call to write to a file based on an (external) condition.
Applying this function in analogy to the one above makes the pipeline display fine, but it also results in continuous (and not conditional) output to a file:
if (condition) {
  gst_element_set_state (videosink, GST_STATE_PLAYING);
} else {
  gst_element_set_state (videosink, GST_STATE_NULL);
}
Also gst_pad_add_probe() could be a possibility to dynamically change the output to a file, but despite having looked in the GStreamer documentation, I don't know how to use this function correctly.
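From the documentation, my rough understanding is that a buffer probe on the file branch could drop buffers while the external condition is false. A sketch of what I mean (I am not sure this is the right approach; filesink and write_enabled are placeholder names from my code):
#include <gst/gst.h>

/* probe callback: drop buffers on the file branch while the external condition is false */
static GstPadProbeReturn
probe_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  gboolean *write_enabled = user_data;
  return *write_enabled ? GST_PAD_PROBE_OK : GST_PAD_PROBE_DROP;
}

/* attach the probe to the filesink's sink pad after building the pipeline */
static void
attach_file_probe (GstElement *filesink, gboolean *write_enabled)
{
  GstPad *sinkpad = gst_element_get_static_pad (filesink, "sink");
  gst_pad_add_probe (sinkpad, GST_PAD_PROBE_TYPE_BUFFER, probe_cb,
                     write_enabled, NULL);
  gst_object_unref (sinkpad);
}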
For your requirement you need tee and valve elements.
Tee will split the pipeline into one branch for displaying in a window and one for writing to a file. Valve is the condition you are looking for: its drop property drops the frames that pass through the valve.
Your pipeline will look like this:
gst-launch-1.0 -v --eos-on-shutdown ksvideosrc ! videoconvert ! tee name=t ! queue ! valve drop=false ! autovideosink t. ! queue ! valve drop=false ! openh264enc ! h264parse ! mp4mux ! filesink location="test.mp4"
When your condition occurs, set the drop property of the valve on the file branch to true to stop writing to the file.
In C/C++:
if (condition)
  g_object_set (videoValve, "drop", TRUE, NULL);
else
  g_object_set (videoValve, "drop", FALSE, NULL);
WARNING:
The valves' drop properties must stay false until data has passed through everything in the pipeline. In other words, only set a valve's drop property to true once the pipeline is in the PLAYING state. You can adjust your code accordingly, for example by triggering the mechanism from a bus callback, where you can check the pipeline state.
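As a rough sketch of what I mean by a bus callback (not my exact code; pipeline_playing is a placeholder flag that your condition handler can check before toggling the valve):
#include <gst/gst.h>

static gboolean pipeline_playing = FALSE;   /* check this before setting drop=true */

/* bus callback: flip the flag once the pipeline itself reaches PLAYING */
static gboolean
bus_cb (GstBus *bus, GstMessage *msg, gpointer user_data)
{
  GstElement *pipeline = GST_ELEMENT (user_data);

  if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_STATE_CHANGED &&
      GST_MESSAGE_SRC (msg) == GST_OBJECT (pipeline)) {
    GstState new_state;
    gst_message_parse_state_changed (msg, NULL, &new_state, NULL);
    if (new_state == GST_STATE_PLAYING)
      pipeline_playing = TRUE;              /* now it is safe to drop frames */
  }
  return TRUE;                              /* keep the bus watch installed */
}

/* install the watch once, right after creating the pipeline */
static void
install_bus_watch (GstElement *pipeline)
{
  GstBus *bus = gst_element_get_bus (pipeline);
  gst_bus_add_watch (bus, bus_cb, pipeline);
  gst_object_unref (bus);
}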
Note: ksvideosrc is for Windows; on Linux, use v4l2src instead.
If you build your application like this, it will work; I use a similar scenario.
I have a problem in my GStreamer pipeline that causes the sprop-parameter-sets to (I think) overflow its buffer. I am doing this on an iMX6 board, my pipeline is appsrc format=3 ! imxvpuenc_h264 ! rtph264pay, and I use an RTSP server for accessing the pipeline. The pipeline works if a static image is sent, but in the case of a video it stops working because it calculates the wrong pps.
I have tried using a static sprop-parameter-sets for rtph264pay by setting its property, but in this case the same thing happens in rtph264depay, which calculates a new sprop-parameter-sets. The output from the caps creation can be seen below:
0:01:15.970217009 578 0xa482ad50 INFO GST_EVENT gstevent.c:809:gst_event_new_caps: creating caps event application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, packetization-mode=(string)1, sprop-parameter-sets=(string)"Z0JAIKaAUAIGQAA\=\,aM48gP94AAIS4AAg2AACAudABxMbtz5ZqJ6U4vk7wAAQMgABAOgA5R6ZQkwQNaTPhfwAQAAgjAACD54YHcvx9FXG9ON62mcABAAFAAEAYbX2rm8Qe4mSKvXrwAAQBgACNJAZdcgDiEnNE5djN4GAAIJhoAKAEnAmvb0KVFQMwyGTwAAi4AIgBINIKIds1udUngAAgcAACAWS1IEgBehG7wDL75/W5JRBIi0WrX8gABAsAAEA0DVsAnpAKiCjVLNdK8AAEJ4AEAc/YVCfjDJO+t73KSd4AAII4AAgpAACAWwBo6CGMh3HueozX+Z4AAIJgAAgOgD2gYFqlGlGBjWn1MULXgAAg5AACAkEA8JLN5OJHLJcZmDo+eAACC8AAIDoAMAGGzM8zzGmJZwKeFL8AAQAAgKhbICDBChH5BKlw+PuMscAACACAAcACA3uGjeSK7gZZzT+NH/ewABDWAAEEQsALG1gYcE5FEbXp1hW8DAcAAQBQAnNfkbKQ/Pc/I9SGjgAwABAXAGdyJu7gpKxj9M5ERP/eAA6MAAIBgopwP8Sbdqzl4CjgAAQMwABAAAHALgpUcLtczR+Yjocj/eBgACC0YACtjKAXenmNmgRczT4AAIF4AAgDgAEASJqHnyzxQfCXUdO3gAAgoAACBgaSADVwoxVTFA7X0vaZsnexAU7CW/gAAgvAAQoABAXFGq3qUtmUv9VYp8AACCaEAIA7Bmj1M+lA7...
and this continues for about a hundred or more lines, and the device crashes if the pipeline isn't stopped. There should only be a few more characters after the first comma. Can someone tell me why this happens and suggest a solution?
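For reference, what I mean by a static sprop-parameter-sets is roughly this (a sketch of the launch string; the value is only a placeholder, the real one is taken from the encoder's SPS/PPS):
appsrc format=3 ! imxvpuenc_h264 ! rtph264pay sprop-parameter-sets="<base64-SPS>,<base64-PPS>"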
I've followed the Creating a Generic Kernel Extension with Xcode tutorial.
MyKext.c:
#include <sys/systm.h>
#include <mach/mach_types.h>
kern_return_t MyKext_start (kmod_info_t * ki, void * d)
{
printf("MyKext has started.\n");
return KERN_SUCCESS;
}
kern_return_t MyKext_stop (kmod_info_t * ki, void * d)
{
printf("MyKext has stopped.\n");
return KERN_SUCCESS;
}
I've also disabled SIP via csrutil, which allows me to load my own kext.
# csrutil disable
When I load my own kext into the kernel
$ sudo kextload -v /tmp/MyKext.kext
The output of printf() is not written to /var/log/system.log.
I've also set boot-args
$ sudo nvram boot-args="original_contents debug=0x4"
Can anyone help me out?
Apparently, since Sierra (10.12) at least, they reorganized the way the logs are written (iOS support?), so you cannot see it in system.log anymore. Still, in your Console application you have a Devices section in the sidebar, where you can select your device (usually your Mac system) and see the real-time log, limited to "kernel" via the search box. So I can see these when using kextload/kextunload:
default 11:58:27.608228 +0200 kernel MyKext has started.
default 11:58:34.446824 +0200 kernel MyKext has stopped.
default 11:58:44.803350 +0200 kernel MyKext has started.
There is no need for the csrutil and nvram changes.
Important: For some freaky reason I needed to restart Console for my message changes to show up; otherwise it kept showing the ones (start & stop) from the previous build. Very strange indeed!
Later: To recover old logs, try sudo log collect --last 1d and open the result with Console (more here).
Sorry to necro-post, but I found it useful to use log(1) with one of its many commands (as suggested by @pmdj in the comments above) rather than using Console. From the manual:
log -- Access system wide log messages created by os_log, os_trace and other logging systems.
For example, one can run:
log stream
to see real-time output of the system, including printf() output from a macOS kernel extension.
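If the full stream is too noisy, the output can be narrowed with a predicate, for example by filtering on the kernel's process ID (a sketch; the exact predicate may need adjusting):
sudo log stream --predicate 'processID == 0'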
My NodeMCU program has gone into an infinite reboot loop.
My code is functionally working, but on any action I try, e.g. file.remove("init.lua") or even just =node.heap(), it panics and reboots, saying: PANIC: unprotected error in call to Lua API (not enough memory).
Because of this, I'm not able to change any code or delete init.lua to stop automatic code execution.
How do I recover?
I tried re-flashing another version of NodeMCU, but it started emitting garbage on the serial port.
Then, I recalled that NodeMCU had two extra files: blank.bin and esp_init_data_default.bin.
I flashed them at 0x7E000 and 0x7C000 respectively.
They are also available as INTERNAL://BLANK and INTERNAL://DEFAULT in the NodeMCU flasher.
This booted the new NodeMCU firmware; all my files were gone, and I was out of the infinite reboot loop.
Flash the following files:
0x00000.bin to 0x00000
0x10000.bin to 0x10000
And, the address for esp_init_data_default.bin depends on the size of your module's flash.
0x7c000 for 512 kB, modules like ESP-01, -03, -07 etc.
0xfc000 for 1 MB, modules like ESP8285, PSF-A85
0x1fc000 for 2 MB
0x3fc000 for 4 MB, modules like ESP-12E, NodeMCU devkit 1.0, WeMos D1 mini
Then, after flashing those binaries, format the file system (run file.format() using ESPlorer) before flashing any other binaries.
Downloads Link
I've just finished working through a similar problem. In my case it was end-user error that caused a need to forcibly wipe init.lua, but I think both problems could be solved similarly. (For completeness, my problem was putting a far-too-short dsleep() call in init.lua, leaving the board resetting itself immediately upon starting init.lua.)
I tried flashing new NodeMCU firmware, writing blank.bin and esp_init_data_default.bin to 0x7E000 and 0x7C000, and also writing 0x00000.bin to 0x00000 and 0x10000.bin to 0x10000. None of these things helped in my case.
My hardware is an Adafruit Huzzah ESP8266 breakout (ESP-12), with 4MB of flash.
What worked for me was:
Download the NONOS SDK from Espressif (I used version 1.5.2 from http://bbs.espressif.com/viewtopic.php?f=46&t=1702).
Unzip it to get at boot_v1.2.bin, user1.1024.new.2.bin, blank.bin, and esp_init_data_default.bin (under bin/ and bin/at/).
Flash the following files to the specified memory locations:
boot_v1.2.bin to 0x00000
user1.1024.new.2.bin to 0x010000
esp_init_data_default.bin to 0xfc000
blank.bin to 0x7e000
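Put together, the flashing step looks roughly like this as a single esptool.py call (a sketch; adjust the serial port and the paths to wherever you unzipped the SDK, and see the notes below if separate invocations are needed):
esptool.py --port /dev/ttyUSB0 write_flash 0x00000 boot_v1.2.bin 0x10000 user1.1024.new.2.bin 0xfc000 esp_init_data_default.bin 0x7e000 blank.bin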
Note about flashing:
I used esptool.py 1.2.1.
Because of the nature of my problem, I was only able to write changes to the flash when in programming mode (i.e. after booting with GPIO0 held down to GND).
I found that I needed to reset the board between each step (else invocations of esptool.py after the first would fail).
I then erased the flash: esptool.py --port <your/port> erase_flash
Then I was able to write new firmware. I used a stock NodeMCU 0.9.5 just to isolate variables, but I strongly suspect any firmware would work at this point.
The only thing that worked for me was the Python flash tool esptool on Ubuntu; the Windows flash tool never deleted init.lua, and the reboot loop continued.
Commands (Ubuntu):
git clone https://github.com/themadinventor/esptool.git
cd esptool
python esptool.py -h
ls -l /dev/tty*
nodemcu_latest.bin can be downloaded from GitHub or elsewhere.
sudo python esptool.py -p /dev/ttyUSB0 --baud 460800 write_flash --flash_size=8m 0 nodemcu_latest.bin
I am using v4l2-ctl from the command line to change the exposure values of a USB camera, but I cannot change the device from the built-in webcam.
When I run v4l2-ctl d /dev/video1, it gives no error but it does nothing at all.
You might be using the wrong cmd.
First of all, you need to specify -d to select a different device (mind the "-" prefix; it is missing in the question).
but simply running v4l2-ctl -d /dev/video1 will not do anything with the device (you don't specify what to do)
So you also need to tell v4l2-ctl to change the exposure-time (or whatever you want to do) with the -c <ctrl>=<val> switch
So your command should look like:
v4l2-ctl -d /dev/video1 -c exposure_absolute=3000
but then, your device may simply not support setting the exposure time and just ignore any such requests (it should not announce support for setting the exposure if it cannot change it, but device drivers are often a bit easygoing)
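to see which controls (if any) your device actually exposes, and their valid ranges, you can query it first (assuming v4l-utils is installed; control names vary between drivers):
v4l2-ctl -d /dev/video1 --list-ctrls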