v4l2-ctl not changing from default webcam - v4l2

I am using v4l2-ctl from the command line to change the exposure values of a USB camera, but I cannot switch the device away from the built-in webcam.
When I run v4l2-ctl d /dev/video1, it gives no error but it does nothing at all.

You might be using the wrong command.
First of all, you need to specify -d to select a different device (mind the - prefix; it is missing in the question).
But simply running v4l2-ctl -d /dev/video1 will not do anything with the device, because you don't specify what it should do.
So you also need to tell v4l2-ctl to change the exposure time (or whatever you want to do) with the -c <ctrl>=<val> switch.
Your command should therefore look like:
v4l2-ctl -d /dev/video1 -c exposure_absolute=3000
Even then, your device may simply not support setting the exposure time and silently ignore the request (it should not announce support for setting the exposure if it cannot change it, but device drivers are often a bit easygoing).
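If you are unsure which controls your camera actually exposes, you can first ask v4l2-ctl to list them (device path taken from the question); the exposure control only shows up, and is only writable, if the driver supports it:
v4l2-ctl -d /dev/video1 --list-ctrls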


How to send Sysrq programmatically over serial and is CONFIG_MAGIC_SYSRQ_SERIAL required

I keep getting the SysRq HELP printout; it seems I can send the SysRq over serial, but it won't accept the next key (the command key, b, to reboot) within the 5-second window.
I need to send the command programmatically over a serial console connection to reboot the system.
I can reboot the system via echo b > /proc/sysrq-trigger, and cat /proc/sys/kernel/sysrq shows 1 (i.e. full SysRq is enabled).
But I notice that the kernel (2.6.32) image I'm booting only has CONFIG_MAGIC_SYSRQ=y, with no mention of CONFIG_MAGIC_SYSRQ_SERIAL. I'd like to know if that setting is required for 2.6.32, or if it was "assumed enabled" and is only required in newer kernels.
According to this, I don't need it in my kernel, since the option was apparently only added so that SysRq over serial can be disabled to prevent unwanted triggers.
Anyway, I don't really care whether Perl, Python, or C code with tcsendbreak is used, or any other programmatic method to send Alt-SysRq-b over /dev/ttyUSB0 to reboot Linux over serial. So far all I can do is send a break sequence and see the output:
SysRq : HELP : loglevel(0-9) reBoot Crash terminate-all-tasks(E) memory-full-oom
l-active-cpus(L) show-memory-usage(M) nice-all-RT-tasks(N) powerOff show-registe
-blocked-tasks(W)
But the command key sent afterward never does anything, so I'm not sure what's wrong. FYI, the system I'm trying to send SysRq to over serial is an embedded Linux system that boots via U-Boot with a uImage and a dtb file.
Instead of using a break signal, I would prefer a technique where the code actually sends the Alt-SysRq-b key sequence over the serial console connection.
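For reference, the usual programmatic approach is exactly what the question describes: send a break, then the command key within the 5-second window. A minimal C sketch using tcsendbreak (the device path and baud rate are assumptions) might look like this:

#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    /* Assumed serial console device and speed; adjust for your setup. */
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);
    cfsetispeed(&tio, B115200);
    cfsetospeed(&tio, B115200);
    tcsetattr(fd, TCSANOW, &tio);

    /* On a serial console, a break is the SysRq prefix; the next
     * character received is treated as the SysRq command key. */
    tcsendbreak(fd, 0);     /* 0 = default break duration */
    usleep(100000);         /* small pause before the command key */
    write(fd, "b", 1);      /* 'b' = reBoot */

    close(fd);
    return 0;
}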

Add TCP Options

Is there any easy and fast way to add TCP options, like the window scaling factor or timestamp, to an SKB packet in C with netfilter?
Or if anybody has an example, it would be perfect to see.
Thank you
One of the fastest ways is to use sysctl. There are various options, but since you are interested in window size and timestamps, the relevant ones are:
net.ipv4.tcp_slow_start_after_idle
net.ipv4.tcp_timestamps
The command to control them is:
sudo sysctl -w <option=123>
Otherwise you can control it programmatically, similar to the way the timestamp is controlled either in software or hardware in this guide.
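Concretely, for the two features mentioned in the question, the relevant sysctls are most likely these (note net.ipv4.tcp_window_scaling, which is not in the list above):
sudo sysctl -w net.ipv4.tcp_window_scaling=1
sudo sysctl -w net.ipv4.tcp_timestamps=1
Both are enabled by default on most modern kernels; setting them to 0 disables the corresponding TCP option.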

How to recover from infinite reboot loops in NodeMCU?

My NodeMCU program has gone into an infinite reboot loop.
My code is functionally working, but with any action I try, e.g. file.remove("init.lua") or even just =node.heap(), it panics and reboots, saying: PANIC: unprotected error in call to Lua API (not enough memory).
Because of this, I'm not able to change any code or delete init.lua to stop automatic code execution.
How do I recover?
I tried re-flashing another version of NodeMCU, but it started emitting garbage on the serial port.
Then, I recalled that NodeMCU had two extra files: blank.bin and esp_init_data_default.bin.
I flashed them at 0x7E000 and 0x7C000 respectively.
They are also available as INTERNAL://BLANK and INTERNAL://DEFAULT in the NodeMCU flasher.
This booted the new NodeMCU firmware; all my files were gone and I was out of the infinite reboot loop.
Flash the following files:
0x00000.bin to 0x00000
0x10000.bin to 0x10000
And, the address for esp_init_data_default.bin depends on the size of your module's flash.
0x7c000 for 512 kB, modules like ESP-01, -03, -07 etc.
0xfc000 for 1 MB, modules like ESP8285, PSF-A85
0x1fc000 for 2 MB
0x3fc000 for 4 MB, modules like ESP-12E, NodeMCU devkit 1.0, WeMos D1 mini
Then, after flashing those binaries, format the file system (run file.format() using ESPlorer) before flashing any other binaries.
Downloads Link
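If you use esptool.py instead of the Windows flasher, the equivalent write for a 4 MB module might look like this (the port and flash size are assumptions; pick the esp_init_data_default.bin address from the table above for your module):
esptool.py --port /dev/ttyUSB0 write_flash 0x00000 0x00000.bin 0x10000 0x10000.bin 0x3fc000 esp_init_data_default.bin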
I've just finished working through a similar problem. In my case it was end-user error that caused a need to forcibly wipe init.lua, but I think both problems could be solved similarly. (For completeness, my problem was putting a far-too-short dsleep() call in init.lua, leaving the board resetting itself immediately upon starting init.lua.)
I tried flashing new NodeMCU firmware, writing blank.bin and esp_init_data_default.bin to 0x7E000 and 0x7C000, and also writing 0x00000.bin to 0x00000 and 0x10000.bin to 0x10000. None of these things helped in my case.
My hardware is an Adafruit Huzzah ESP8266 breakout (ESP-12), with 4MB of flash.
What worked for me was:
Download the NONOS SDK from Espressif (I used version 1.5.2 from http://bbs.espressif.com/viewtopic.php?f=46&t=1702).
Unzip it to get at boot_v1.2.bin, user1.1024.new.2.bin, blank.bin, and esp_init_data_default.bin (under bin/ and bin/at/).
Flash the following files to the specified memory locations:
boot_v1.2.bin to 0x00000
user1.1024.new.2.bin to 0x010000
esp_init_data_default.bin to 0xfc000
blank.bin to 0x7e000
Note about flashing:
I used esptool.py 1.2.1.
Because of the nature of my problem, I was only able to write changes to the flash when in programming mode (i.e. after booting with GPIO0 held down to GND).
I found that I needed to reset the board between each step (else invocations of esptool.py after the first would fail).
Erased the flash: esptool.py --port <your/port> erase_flash
Then I was able to write a new firmware. I used stock NodeMCU 0.9.5 just to isolate variables, but I strongly suspect any firmware would work at this point.
The only thing that worked for me was the Python flash tool esptool on Ubuntu; the Windows flash tool never deleted init.lua and the reboot loop persisted.
Commands (Ubuntu):
git clone https://github.com/themadinventor/esptool.git
cd esptool
python esptool.py -h
ls -l /dev/tty*
nodemcu_latest.bin can be downloaded from github or anywhere.
sudo python esptool.py -p /dev/ttyUSB0 --baud 460800 write_flash --flash_size=8m 0 nodemcu_latest.bin

Running qemu on ARM with KVM acceleration

I'm trying to emulate an ARM VM on an ARM host, a Cubieboard2 embedded board, by means of QEMU. I've compiled QEMU from source and enabled KVM. Now the problem is that when launching qemu-system-arm as follows:
$ /usr/local/bin/qemu-system-arm -M accel=kvm -cpu host -kernel vmlinuz-3.2.0-4-vexpress -initrd initrd.img-3.2.0-4-vexpress -sd debian_wheezy-_armhf_standard.qcow2 -append "console=ttyAMA0 root=/dev/mmcblk0p2" -nographic
I have this error:
qemu-system-arm: -M accel=kvm: Unsupported machine type
Use -machine help to list supported machines!
What is wrong with the command I've typed? How do I enable KVM?
How about reading this:
https://groups.google.com/forum/#!topic/cubieboard/4EGONZMoIAU
And yes, you are right: since the Cubieboard2 has a Cortex-A15, HYP mode is implemented and KVM should run on it.
More about HYP mode is covered here:
http://lwn.net/Articles/557132/
There is another way to see the failing mode (why the qemu command failed): run your command under strace, and you will be able to see clearly when /dev/kvm is opened and whether it succeeds (a valid fd is returned from open("/dev/kvm")). Before all of this, lsmod should show a line indicating that the kvm kernel module is loaded, and if you can read your kernel's config file, CONFIG_KVM should be set in it.
-M takes a machine name (e.g. "vexpress-a15" or "virt"), not a set of suboption=value settings. You want -machine suboption=value,... for that.
("-M name" is a shortcut for "-machine type=name".)
You also need to specify a machine name, either via -machine type=name or -M name, otherwise QEMU will complain that you didn't specify one.
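Putting those two points together, a corrected invocation might look like this (vexpress-a15 is an assumption based on the Versatile Express kernel named in the question; run qemu-system-arm -machine help to see what your build supports):
$ qemu-system-arm -machine type=vexpress-a15,accel=kvm -cpu host -kernel vmlinuz-3.2.0-4-vexpress -initrd initrd.img-3.2.0-4-vexpress -sd debian_wheezy-_armhf_standard.qcow2 -append "console=ttyAMA0 root=/dev/mmcblk0p2" -nographic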

How to solve "ptrace operation not permitted" when trying to attach GDB to a process?

I'm trying to attach to a program with gdb, but it returns:
Attaching to process 29139
Could not attach to process. If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
gdb-debugger returns "Failed to attach to process, please check privileges and try again."
strace returns "attach: ptrace(PTRACE_ATTACH, ...): Operation not permitted"
I changed kernel.yama.ptrace_scope from 1 to 0, changed /proc/sys/kernel/yama/ptrace_scope from 1 to 0, and tried set environment LD_PRELOAD=./ptrace.so with this:
#include <stdio.h>
int ptrace(int i, int j, int k, int l) {
    printf(" ptrace(%i, %i, %i, %i), returning -1\n", i, j, k, l);
    return 0;
}
But it still returns the same error. How can I attach the debugger to it?
If you are using Docker, you will probably need these options:
docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined
If you are using Podman, you will probably need its --cap-add option too:
podman run --cap-add=SYS_PTRACE
This is due to kernel hardening in Linux; you can disable this behavior by echo 0 > /proc/sys/kernel/yama/ptrace_scope or by modifying it in /etc/sysctl.d/10-ptrace.conf
See also this article about it in Fedora 22 (with links to the documentation) and this comment thread about Ubuntu.
I would like to add that I needed --security-opt apparmor=unconfined along with the options that #wisbucky mentioned. This was on Ubuntu 18.04 (both Docker client and host). Therefore, the full invocation for enabling gdb debugging within a container is:
docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --security-opt apparmor=unconfined
Just want to emphasize a related answer. Let's say that you're root and you've done:
strace -p 700
and get:
strace: attach: ptrace(PTRACE_SEIZE, 700): Operation not permitted
Check:
grep TracerPid /proc/700/status
If you see something like TracerPid: 12, i.e. not 0, that's the PID of the program that is already using the ptrace system call. Both gdb and strace use it, and there can only be one active at a time.
Not really addressing the above use-case but I had this problem:
Problem: It happened that I started my program with sudo, so when launching gdb it was giving me ptrace: Operation not permitted.
Solution: sudo gdb ...
As most of us land here for Docker issues, I'll add the Kubernetes answer, as it might come in handy for someone...
You must add the SYS_PTRACE capability in your pod's security context
at spec.containers.securityContext:
securityContext:
  capabilities:
    add: [ "SYS_PTRACE" ]
There are two securityContext keys in two different places. If it tells you that the key is not recognized, then you misplaced it; try the other one (a combined example is sketched below).
You probably need to run as the root user too. So in the other securityContext (spec.securityContext) add:
securityContext:
  runAsUser: 0
  runAsGroup: 0
  fsGroup: 101
FYI: 0 is root. The fsGroup value is unknown to me; for what I'm doing I don't care, but you might.
Now you can do :
strace -s 100000 -e write=1 -e trace=write -p 16
You won't get the permission error anymore!
BEWARE: This is Pandora's box. Having this in production is NOT recommended.
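To make the two placements concrete, a minimal (hypothetical) pod manifest combining both securityContext blocks could look like this; the pod name and image are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: debug-pod
spec:
  securityContext:              # pod-level: spec.securityContext
    runAsUser: 0
    runAsGroup: 0
    fsGroup: 101
  containers:
    - name: app
      image: ubuntu
      securityContext:          # container-level: spec.containers[].securityContext
        capabilities:
          add: [ "SYS_PTRACE" ]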
I was running my code with higher privileges to deal with raw Ethernet sockets, using the setcap command on a Debian distribution. I tried the solution above (echo 0 > /proc/sys/kernel/yama/ptrace_scope, or modifying it in /etc/sysctl.d/10-ptrace.conf), but that did not work for me.
Additionally, I tried setting capabilities on gdb itself in its installed directory (/usr/bin/gdb), and that works: /sbin/setcap CAP_SYS_PTRACE=+eip /usr/bin/gdb.
Be sure to run this command with root privileges.
Jesup's answer is correct; it is due to Linux kernel hardening. In my case, I am using Docker Community for Mac, and in order to change the flag I had to enter the LinuxKit shell using Justin Cormack's nsenter (ref: https://www.bretfisher.com/docker-for-mac-commands-for-getting-into-local-docker-vm/).
docker run -it --rm --privileged --pid=host justincormack/nsenter1
/ # cat /etc/issue
Welcome to LinuxKit
(LinuxKit ASCII-art banner)
/ # cat /proc/sys/kernel/yama/ptrace_scope
1
/ # echo 0 > /proc/sys/kernel/yama/ptrace_scope
/ # exit
Maybe someone has already attached to this process with gdb:
ps -ef | grep gdb
gdb can't attach to the same process twice.
I was going to answer this old question since it is unaccepted and the other answers miss the point. The real answer may already be written in /etc/sysctl.d/10-ptrace.conf, as it was in my case under Ubuntu. This file says:
For applications launching crash handlers that need PTRACE, exceptions can
be registered by the debugee by declaring in the segfault handler
specifically which process will be using PTRACE on the debugee:
prctl(PR_SET_PTRACER, debugger_pid, 0, 0, 0);
So just do the same thing as described above: keep /proc/sys/kernel/yama/ptrace_scope at 1 and add prctl(PR_SET_PTRACER, debugger_pid, 0, 0, 0); in the debuggee. Then the debuggee will allow that debugger to debug it. This works without sudo and without a reboot.
Usually, the debuggee also needs to call waitpid to avoid exiting after the crash, so the debugger can find the pid of the debuggee.
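A minimal, hypothetical sketch of the debuggee side might look like this (the debugger PID value is a placeholder; you would pass the PID of the process that is actually allowed to attach, or PR_SET_PTRACER_ANY to allow any of your processes):

#include <stdio.h>
#include <sys/prctl.h>
#include <sys/types.h>
#include <unistd.h>

#ifndef PR_SET_PTRACER
#define PR_SET_PTRACER 0x59616d61   /* "Yama"; present since Linux 3.4 headers */
#endif

int main(void)
{
    pid_t debugger_pid = 12345;   /* placeholder: PID of the allowed debugger */

    /* Allow that one process to ptrace us even with ptrace_scope = 1. */
    if (prctl(PR_SET_PTRACER, debugger_pid, 0, 0, 0) == -1)
        perror("prctl(PR_SET_PTRACER)");

    printf("pid %d waiting to be attached...\n", getpid());
    pause();   /* keep the process alive so the debugger can attach */
    return 0;
}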
If permissions are a problem, you probably will want to use gdbserver. (I almost always use gdbserver when I gdb, docker or no, for numerous reasons.) You will need gdbserver (Deb) or gdb-gdbserver (RH) installed in the docker image. Run the program in docker with
$ sudo gdbserver :34567 myprogram arguments
(pick a port number, 1025-65535). Then, in gdb on the host, say
(gdb) target remote 172.17.0.4:34567
where 172.17.0.4 is the IP address of the docker image as reported by /sbin/ip addr list run in the docker image. This will attach at a point before main runs. You can tb main and c to stop at main, or wherever you like. Run gdb under cgdb, emacs, vim, or even in some IDE, or plain. You can run gdb in your source or build tree, so it knows where everything is. (If it can't find your sources, use the dir command.) This is usually much better than running it in the docker image.
gdbserver relies on ptrace, so you will also need to do the other things suggested above. --privileged --pid=host sufficed for me.
If you deploy to other OSes or embedded targets, you can run gdbserver or a gdb stub there, and run gdb the same way, connecting across a real network or even via a serial port (/dev/ttyS0).
I don't know what you are doing with LD_PRELOAD or your ptrace function.
Why don't you try attaching gdb to a very simple program? Make a program that simply repeatedly prints Hello or something and use gdb --pid [hello program PID] to attach to it.
If that does not work then you really do have a problem.
Another issue is the user ID. Is the program that you are tracing setting itself to another UID? If it is then you cannot ptrace it unless you are using the same user ID or are root.
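A trivial test program of the kind suggested (purely illustrative) could be:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Print forever so there is plenty of time to attach with
     * gdb --pid <PID> from another terminal. */
    for (;;) {
        puts("Hello");
        sleep(1);
    }
}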
I faced the same problem and tried a lot of solutions. Finally I found one, but I really don't know what the problem was. First I modified the ptrace_scope setting and logged into Ubuntu as root, but the problem still appeared. The strangest thing that happened is that gdb showed me a message saying:
Could not attach to process. If your uid matches the uid of the target process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try again as the root user.
For more details, see /etc/sysctl.d/10-ptrace.conf
warning: process 3767 is already traced by process 3755 ptrace: Operation not permitted.
The ps command did not list process 3755.
I found process 3755 under /proc, but I don't understand what it was!
Finally, I deleted the target file (foo.c) that I was trying to attach to via gdb and a tracer C program using the PTRACE_ATTACH syscall, and in another folder I created another C program and compiled it.
The problem was solved and I was able to attach to the new process either with gdb or with the PTRACE_ATTACH syscall.
(gdb) attach 4416
Attaching to process 4416
I sent a lot of signals to process 4416 and tested it with both gdb and ptrace; both ran correctly.
I really don't know what the problem was, but I don't think it is a bug in Ubuntu, even though a lot of sites refer to it as one, such as https://askubuntu.com/questions/143561/why-wont-strace-gdb-attach-to-a-process-even-though-im-root
Extra information:
If you want to make changes to the interfaces, such as adding an OVS bridge, you must use --privileged instead of --cap-add NET_ADMIN.
sudo docker run -itd --name=testliz --privileged --cap-add=SYS_PTRACE --security-opt seccomp=unconfined ubuntu
If you are using FreeBSD, edit /etc/sysctl.conf, change the line
security.bsd.unprivileged_proc_debug=0
to
security.bsd.unprivileged_proc_debug=1
Then reboot.
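If you would rather not reboot, the same knob can usually also be changed at runtime (assuming your FreeBSD version exposes it as a writable sysctl):
sysctl security.bsd.unprivileged_proc_debug=1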
