rc.local is not running on Raspberry Pi startup - C

I'm trying to run a simple C program when the Pi boots, so I followed the steps in the documentation (https://www.raspberrypi.org/documentation/linux/usage/rc-local.md), but when the Pi starts, it shows this error:
Failed to start etc/rc.local compatibility.
See 'systemctl status rc-local.service' for details.
I do as it says and I receive this:
rc-local.service - /etc/rc.local Compatibility
Loaded: loaded (/lib/systemd/system/rc-local.service; static)
Drop-In: /etc/systemd/system/rc-local.service.d
ttyoutput.conf
Active: failed (Result: exit-code) since Tue 2015-12-08 10:44:23 UTC; 2min 18s ago
Process: 451 ExecStart=/etc/rc.local start (code=exit, status=203/EXEC)
My rc.local file looks like this:
./home/pi/server-starter &
exit 0
Can anyone show me what I'm doing wrong?

You have to refer to your script using an absolute path.
/home/pi/server-starter &
Notice the absence of the leading . compared to your version.
Also, you may have to add a shebang line referencing the shell at the very beginning of your rc.local:
#!/bin/sh -e
/home/pi/server-starter &
exit 0
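The status=203/EXEC in your log means systemd could not execute /etc/rc.local at all, which usually comes down to a missing shebang or a missing execute bit. A quick sanity check (the server-starter path is taken from your question):
sudo chmod +x /etc/rc.local
chmod +x /home/pi/server-starter
sudo systemctl restart rc-local
systemctl status rc-local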

To run a shell script from within another shell script, use:
sh -c /absolute/path/to/script
To run that script in the background, use:
sh -c /absolute/path/to/script &
Don't forget the exit 0 at the end of the file.
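If the script still seems to do nothing at boot, redirecting its output makes failures visible (a sketch; the log path is just an example):
sh -c '/absolute/path/to/script > /tmp/script.log 2>&1' &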

Related

Compiling Perl >5.10 on AIX version 5.3

I've been compiling OpenSSL (and thus Perl >5.10, as it is a dependency) on multiple platforms. I've managed to get 1.1.0b compiled on every single platform except AIX, where I can't even compile Perl. I've tried several versions and looked at the documentation Perl provides online. From what I can tell, it suggests version 5.12.2.
When I attempt to compile version 5.12.2, I take the following from the documentation and fill in a few local system variables, such as using XLC rv7.
export OBJECT_MODE=64
./Configure \
-d \
-Dcc=/usr/vac/bin/xlc_r7 \
-Duseshrplib \
-Duse64bitall \
-Dprefix=`pwd`/../PERL
Then I attempt to make as prompted, and get the following error:
/usr/vac/bin/xlc_r7 -q64 -o miniperl -brtl -bdynamic -L/usr/local/lib -b64 gv.o toke.o perly.o pad.o regcomp.o dump.o util.o mg.o reentr.o mro.o hv.o av.o run.o pp_hot.o sv.o pp.o scope.o pp_ctl.o pp_sys.o doop.o doio.o regexec.o utf8.o taint.o deb.o universal.o globals.o perlio.o perlapi.o numeric.o mathoms.o locale.o pp_pack.o pp_sort.o miniperlmain.o opmini.o perlmini.o -lbind -lnsl -ldl -lld -lm -lcrypt -lc
LIBPATH=.../perl-5.12.2 ./miniperl -w -Ilib -MExporter -e '<?>' || make minitest
LIBPATH=.../perl-5.12.2 ./miniperl -Ilib autodoc.pl
/usr/bin/ln -s perl5122delta.pod pod/perldelta.pod
LIBPATH=.../perl-5.12.2 ./miniperl -Ilib -Icpan/Cwd -Icpan/Cwd/lib pod/perlmodlib.PL -q
readdir(./../../../../..): Bad file number at lib/FindBin.pm line 116
stat(/pod/): No such file or directory at lib/FindBin.pm line 197
stat(/pod/): No such file or directory at lib/FindBin.pm line 200
Use of chdir('') or chdir(undef) as chdir() is deprecated at pod/perlmodlib.PL line 9.
No such file or directory at pod/perlmodlib.PL line 19.
make: The error code from the last command is 2.
Taking a look at pod/perlmodlib.PL, we see the following:
# MANIFEST itself is Unix style filenames, so we have to assume that Unix style
# filenames will work.
open (MANIFEST, "../MANIFEST") or die $!;
In my desperation I tried to hack it up and avoid writing to the manifest, but then I get this issue:
Creating Makefile.PL in cpan/Archive-Extract for Archive::Extract
Running Makefile.PL in cpan/Archive-Extract
../../miniperl Makefile.PL INSTALLDIRS=perl INSTALLMAN1DIR=none INSTALLMAN3DIR=none PERL_CORE=1 LIBPERL_A=libperl.a
readdir(./../../../../../../..): No such file or directory at ../../lib/File/Find.pm line 610
Use of chdir('') or chdir(undef) as chdir() is deprecated at ../../lib/File/Find.pm line 773.
readdir(./../../..): No such file or directory at ../../cpan/Cwd/lib/File/Spec/Unix.pm line 483
Could not open 'lib/Archive/Extract.pm': No such file or directory at ../../cpan/ExtUtils-MakeMaker/lib/ExtUtils/MM_Unix.pm line 2588.
512 from cpan/Archive-Extract's Makefile.PL at make_ext.pl line 390.
Warning: No Makefile!
make: Cannot find a rule to create target config from dependencies.
Stop.
make config PERL_CORE=1 LIBPERL_A=libperl.a failed, continuing anyway...
Making all in cpan/Archive-Extract
make all PERL_CORE=1 LIBPERL_A=libperl.a
make: Cannot find a rule to create target all from dependencies.
Stop.
Unsuccessful make(cpan/Archive-Extract): code=512 at make_ext.pl line 449.
make: The error code from the last command is 2.
The frequency of errors makes me feel like perhaps I don't have something configured right...
My next thought was to try the most updated version, which at the time of writing is 5.24. Using the same configuration and attempting to make I get the following issue:
Can't locate strict.pm in @INC (you may need to install the strict module) (@INC contains: /cpan/AutoLoader/lib /dist/Carp/lib /dist/PathTools /dist/PathTools/lib /cpan/ExtUtils-Install/lib /cpan/ExtUtils-MakeMaker/lib /cpan/ExtUtils-Manifest/lib /cpan/File-Path/lib /ext/re /dist/Term-ReadLine/lib /dist/Exporter/lib /ext/File-Find/lib /cpan/Text-Tabs/lib /dist/constant/lib /cpan/version/lib /lib .) at autodoc.pl line 25.
BEGIN failed--compilation aborted at autodoc.pl line 25.
make: The error code from the last command is 2.
Which I know from other compilations means I need to edit the PERL5LIB variable.
If I keep adding all the modules to the path:
export PERL5LIB=`pwd`/dist/Exporter/lib:$PERL5LIB
export PERL5LIB=`pwd`/cpan/Text-Tabs/lib:$PERL5LIB
export PERL5LIB=`pwd`/ext/re:$PERL5LIB
export PERL5LIB=`pwd`/dist/constant/lib:$PERL5LIB
export PERL5LIB=`pwd`/cpan/ExtUtils-MakeMaker/lib:$PERL5LIB
export PERL5LIB=`pwd`/dist/Carp/lib:$PERL5LIB
export PERL5LIB=`pwd`/cpan/File-Path/lib:$PERL5LIB
export PERL5LIB=`pwd`/dist/PathTools:$PERL5LIB
export PERL5LIB=`pwd`/dist/PathTools/lib:$PERL5LIB
export PERL5LIB=`pwd`/ext/File-Find/lib:$PERL5LIB
Even so, I'll still get an error for a module that isn't even present in the 5.24.0 source!
Can't locate ExtUtils/MakeMaker/version/vpp.pm in @INC (you may need to install the ExtUtils::MakeMaker::version::vpp module) (@INC contains: <removed from post>) at (eval 2) line 2.
BEGIN failed--compilation aborted at (eval 2) line 2.
Compilation failed in require at .../perl-5.24.0/cpan/ExtUtils-MakeMaker/lib/ExtUtils/MakeMaker.pm line 10.
BEGIN failed--compilation aborted at .../perl-5.24.0/cpan/ExtUtils-MakeMaker/lib/ExtUtils/MakeMaker.pm line 10.
Compilation failed in require at Makefile.PL line 7.
BEGIN failed--compilation aborted at Makefile.PL line 7.
Unsuccessful Makefile.PL(cpan/Archive-Tar): code=512 at make_ext.pl line 517.
make: The error code from the last command is 2.
I have MakeMaker, but the 5.24 version does not contain vpp.pm! Again, in desperation I attempted to put it in there from a different source, and I get this error:
LIBPATH=.../perl-5.24.0 ./miniperl -Ilib make_ext.pl cpan/Archive-Tar/pm_to_blib MAKE="make" LIBPERL_A=libperl.a
readdir(./../../../../../../..): No such file or directory at .../perl-5.24.0/ext/File-Find/lib/File/Find.pm line 142.
Can't cd to : No such file or directory
Unsuccessful Makefile.PL(cpan/Archive-Tar): code=512 at make_ext.pl line 517.
make: The error code from the last command is 2.
All of this makes me feel like perhaps I'm not configuring something right... Can anyone with some experience help me out with some cut-and-dried installation instructions for Perl on AIX? I'd be super grateful. Thanks!
Below is some information about my system:
> prtconf
System Model: IBM,8231-E1D
Machine Serial Number: Not Available
Processor Type: Not Available
Processor Implementation Mode: POWER 7
Processor Version: PV_7_Compat
Number Of Processors: 0
Processor Clock Speed: Not Available
CPU Type: 64-bit
Kernel Type: 64-bit
Memory Size: 10240 MB
Good Memory Size: Not Available
Platform Firmware level: Not Available
Firmware Version: IBM,AL770_092
Console Login: disable
Auto Restart: true
Full Core: false
I pull down and compile openssh and openssl frequently. I use gcc and no real magic that I can recall.
Here is my script to build it. I don't know which Perl is native to AIX, but it seems to work for me.
https://github.com/pedz/aix-build-scripts/blob/master/build-scripts/build-openssl
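For reference, a gcc-based 64-bit OpenSSL build on AIX using the system perl looks roughly like this (a minimal sketch; the version, prefix, and the aix64-gcc Configure target are assumptions, not taken from the linked script):
export OBJECT_MODE=64
cd openssl-1.1.0b
./Configure aix64-gcc --prefix=/usr/local/openssl shared
make
make test
make install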

rpm and yum don't believe a package is installed after Chef installs it

Running chef-solo (Installing Chef Omnibus (12.3)) on CentOS 6.6
My recipe has the following simple code:
package 'cloud-init' do
  action :install
end

log 'rpm-qi' do
  message `rpm -qi cloud-init`
  level :warn
end

log 'yum list' do
  message `yum list cloud-init`
  level :warn
end
But it outputs the following:
- install version 0.7.5-10.el6.centos.2 of package cloud-init
* log[rpm-qi] action write[2015-07-16T16:46:35+00:00] WARN: package cloud-init is not installed
[2015-07-16T16:46:35+00:00] WARN: Loaded plugins: fastestmirror, presto
Available Packages
cloud-init.x86_64 0.7.5-10.el6.centos.2 extras
I am at a loss as to why rpm/yum and actually rpmquery don't see the package as installed.
EDIT: To clarify, I am specifically looking for the following string after the package install, so I can then apply a change to the file (I understand this is not a very Chef-like way to do things; I am happy to accept suggestions):
rpmquery -l cloud-init | grep 'distros/__init__.py$'
I have found that by using the following:
install_report = shell_out('yum install -y cloud-init').stdout
cloudinit_source = shell_out("rpmquery -l cloud-init | grep 'distros/__init__.py$'").stdout
I can then get the file I am looking for and perform:
Chef::Util::FileEdit.new(cloudinit_source.chomp(''))
The file's location varies by distribution, but I need to edit that specific file in place.
Untested code, just to give the idea:

package 'cloud-init' do
  action :install
  notifies :run, "ruby_block[update_cloud_init]"
end

ruby_block 'update_cloud_init' do
  block do
    cloudinit_source = shell_out("rpmquery -l cloud-init | grep 'distros/__init__.py$'").stdout
    rc = Chef::Util::FileEdit.new(cloudinit_source.chomp(''))
    rc.search_file_replace_line(/^what to find$/,
                                "replacement data for the line")
    rc.write_file
  end
end
ruby_block example taken and adapted from here
I would rather use a template to manage the whole file; what I don't understand is why you don't know where it will be in the first place...
Previous answer
I assume it's a compile vs. converge problem: at the time the message is stored (and so your command is executed), the package is not yet installed.
Chef runs in two phases, compile then converge.
At compile time it builds a collection of resources, and at converge time it executes code for each resource to get it into the described state.
When your log resource is compiled, the ugly backticks are evaluated; at that point there's a package resource in the collection, but that resource has not been executed yet, so the output is correct. (The usual way to defer such an evaluation to converge time is to wrap the value in lazy {}.)
I don't understand what you want to achieve with those log resources at all.
If you want to test your node's state after the chef run, use a handler, perhaps calling ServerSpec as in Test-Kitchen.

How to solve "ptrace operation not permitted" when trying to attach GDB to a process?

I'm trying to attach a program with gdb but it returns:
Attaching to process 29139
Could not attach to process. If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
gdb-debugger returns "Failed to attach to process, please check privileges and try again."
strace returns "attach: ptrace(PTRACE_ATTACH, ...): Operation not permitted"
I changed "kernel.yama.ptrace_scope" 1 to 0 and /proc/sys/kernel/yama/ptrace_scope 1 to 0 and tried set environment LD_PRELOAD=./ptrace.so with this:
#include <stdio.h>

int ptrace(int i, int j, int k, int l) {
    printf(" ptrace(%i, %i, %i, %i), returning -1\n", i, j, k, l);
    return 0;
}
But it still returns the same error. How can I attach it to debuggers?
If you are using Docker, you will probably need these options:
docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined
If you are using Podman, you will probably need its --cap-add option too:
podman run --cap-add=SYS_PTRACE
This is due to kernel hardening in Linux; you can disable this behavior by echo 0 > /proc/sys/kernel/yama/ptrace_scope or by modifying it in /etc/sysctl.d/10-ptrace.conf
See also this article about it in Fedora 22 (with links to the documentation) and this comment thread about Ubuntu.
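A sketch of both the temporary and the persistent change (run as root; this assumes the stock drop-in file spells the setting as kernel.yama.ptrace_scope = 1):
# temporary, until the next reboot
echo 0 > /proc/sys/kernel/yama/ptrace_scope
# persistent: flip the value in the drop-in file, then reload
sed -i 's/ptrace_scope = 1/ptrace_scope = 0/' /etc/sysctl.d/10-ptrace.conf
sysctl --system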
I would like to add that I needed --security-opt apparmor=unconfined along with the options that @wisbucky mentioned. This was on Ubuntu 18.04 (both Docker client and host). Therefore, the full invocation for enabling gdb debugging within a container is:
docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --security-opt apparmor=unconfined
Just want to emphasize a related answer. Let's say that you're root and you've done:
strace -p 700
and get:
strace: attach: ptrace(PTRACE_SEIZE, 700): Operation not permitted
Check:
grep TracerPid /proc/700/status
If you see something like TracerPid: 12, i.e. not 0, that's the PID of the program that is already using the ptrace system call. Both gdb and strace use it, and there can only be one active at a time.
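To see which process that is, something like this works (using the hypothetical PID 700 from above):
ps -o pid,cmd -p "$(awk '/^TracerPid/ {print $2}' /proc/700/status)"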
Not really addressing the above use case, but I had this problem:
Problem: I had started my program with sudo, so when launching gdb without it, I got ptrace: Operation not permitted.
Solution: sudo gdb ...
As most of us land here for Docker issues I'll add the Kubernetes answer as it might come in handy for someone...
You must add the SYS_PTRACE capability in your pod's security context
at spec.containers.securityContext:
securityContext:
  capabilities:
    add: [ "SYS_PTRACE" ]
There are two securityContext keys in two different places. If it tells you that the key is not recognized, then you misplaced it; try the other one.
You probably need to run as the root user too. So in the other security context (spec.securityContext) add:
securityContext:
  runAsUser: 0
  runAsGroup: 0
  fsGroup: 101
FYI: 0 is root. The fsGroup value is unknown to me; for what I'm doing I don't care, but you might.
Now you can do:
strace -s 100000 -e write=1 -e trace=write -p 16
You won't get the permission denied error anymore!
BEWARE: this is Pandora's box. Having this in production is NOT recommended.
I was running my code with higher privileges to deal with raw Ethernet sockets, using the setcap command on a Debian distribution. I tried the above solution, echo 0 > /proc/sys/kernel/yama/ptrace_scope,
and modifying /etc/sysctl.d/10-ptrace.conf, but that did not work for me.
Additionally, I tried setting capabilities on gdb itself in its installed location (/usr/bin/gdb), and that works: /sbin/setcap CAP_SYS_PTRACE=+eip /usr/bin/gdb.
Be sure to run this command with root privileges.
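You can verify that the capability actually took effect with getcap (just a quick check):
/sbin/getcap /usr/bin/gdb
# expected output along the lines of: /usr/bin/gdb = cap_sys_ptrace+eip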
Jesup's answer is correct; it is due to Linux kernel hardening. In my case, I am using Docker Community for Mac, and in order to change the flag I must enter the LinuxKit shell using Justin Cormack's nsenter (ref: https://www.bretfisher.com/docker-for-mac-commands-for-getting-into-local-docker-vm/ ).
docker run -it --rm --privileged --pid=host justincormack/nsenter1
/ # cat /etc/issue
Welcome to LinuxKit
(LinuxKit ASCII-art banner)
/ # cat /proc/sys/kernel/yama/ptrace_scope
1
/ # echo 0 > /proc/sys/kernel/yama/ptrace_scope
/ # exit
Maybe someone has already attached to this process with gdb:
ps -ef | grep gdb
gdb can't attach to the same process twice.
I'm answering this old question because it is unaccepted and the other answers miss the point. The real answer may already be written in /etc/sysctl.d/10-ptrace.conf, as is the case on my Ubuntu system. That file says:
For applications launching crash handlers that need PTRACE, exceptions can
be registered by the debugee by declaring in the segfault handler
specifically which process will be using PTRACE on the debugee:
prctl(PR_SET_PTRACER, debugger_pid, 0, 0, 0);
So just do the same thing as above: keep /proc/sys/kernel/yama/ptrace_scope as 1 and add prctl(PR_SET_PTRACER, debugger_pid, 0, 0, 0); in the debuggee. Then the debuggee will allow that debugger to debug it. This works without sudo and without a reboot.
Usually the debuggee also needs to call waitpid so it does not exit after the crash, allowing the debugger to find the debuggee's PID.
If permissions are a problem, you probably will want to use gdbserver. (I almost always use gdbserver when I gdb, docker or no, for numerous reasons.) You will need gdbserver (Deb) or gdb-gdbserver (RH) installed in the docker image. Run the program in docker with
$ sudo gdbserver :34567 myprogram arguments
(pick a port number, 1025-65535). Then, in gdb on the host, say
(gdb) target remote 172.17.0.4:34567
where 172.17.0.4 is the IP address of the docker container as reported by /sbin/ip addr list run inside the container. This will attach at a point before main runs. You can tb main and c to stop at main, or wherever you like. Run gdb under cgdb, emacs, vim, or even in some IDE, or plain. You can run gdb in your source or build tree, so it knows where everything is. (If it can't find your sources, use the dir command.) This is usually much better than running gdb inside the container.
gdbserver relies on ptrace, so you will also need to do the other things suggested above. --privileged --pid=host sufficed for me.
If you deploy to other OSes or embedded targets, you can run gdbserver or a gdb stub there, and run gdb the same way, connecting across a real network or even via a serial port (/dev/ttyS0).
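If looking up the container's IP address is inconvenient, publishing the gdbserver port also works (a sketch; the image name, program name, and port are placeholders):
# on the host: run the container with ptrace allowed and the port published
docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -p 34567:34567 myimage
# inside the container
gdbserver :34567 myprogram arguments
# on the host: connect through the published port
gdb myprogram -ex 'target remote localhost:34567'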
I don't know what you are doing with LD_PRELOAD or your ptrace function.
Why don't you try attaching gdb to a very simple program? Make a program that simply repeatedly prints Hello or something and use gdb --pid [hello program PID] to attach to it.
If that does not work then you really do have a problem.
Another issue is the user ID. Is the program that you are tracing setting itself to another UID? If it is then you cannot ptrace it unless you are using the same user ID or are root.
I faced the same problem and tried a lot of solutions; finally I found one, but I really don't know what the problem was. First I modified the ptrace_scope value and logged into Ubuntu as root, but the problem still appeared. The strangest thing was that gdb showed me a message saying:
Could not attach to process. If your uid matches the uid of the target process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try again as the root user.
For more details, see /etc/sysctl.d/10-ptrace.conf
warning: process 3767 is already traced by process 3755 ptrace: Operation not permitted.
With the ps command, process 3755 was not listed.
I found process 3755 under /proc, but I don't understand what it was!!
Finally, I deleted the target file (foo.c) that I was trying to attach to via gdb and via a tracer C program using the PTRACE_ATTACH syscall, and in another folder I created another C program and compiled it.
The problem was solved and I was able to attach to the new process either with gdb or with the PTRACE_ATTACH syscall.
(gdb) attach 4416
Attaching to process 4416
and I sent a lot of signals to process 4416. I tested it with both gdb and ptrace; both of them ran correctly.
I really don't know what the problem was, but I think it is not a bug in Ubuntu, as a lot of sites have suggested, such as https://askubuntu.com/questions/143561/why-wont-strace-gdb-attach-to-a-process-even-though-im-root
Extra information
If you want to make changes to the interfaces, such as adding an OVS bridge, you must use --privileged instead of --cap-add NET_ADMIN.
sudo docker run -itd --name=testliz --privileged --cap-add=SYS_PTRACE --security-opt seccomp=unconfined ubuntu
If you are using FreeBSD, edit /etc/sysctl.conf, change the line
security.bsd.unprivileged_proc_debug=0
to
security.bsd.unprivileged_proc_debug=1
Then reboot.
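The same setting can also be flipped at runtime with sysctl, which avoids the reboot (a sketch; run as root):
sysctl security.bsd.unprivileged_proc_debug=1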

Guest program exited with non-zero exit code: 1

I am working on the build/release process. We implement the build system using one host machine which has two virtual machines, one Windows and one Linux. During the build we invoke the Nightly.bat file on the Windows VM and Nightly.sh on the Linux VM. I am using the following commands:
start /b vmrun.exe -T ws -gu "End" -gp Password runProgramInGuest "D:\Windows VM\Windows 7 x64 Edition + Visual Studio 2008\Windows 7 x64 Edition.vmx" -activeWindow "C:\SPSBuild\Nightly.bat"
vmrun.exe -T ws -gu root -gp quasar runProgramInGuest "D:\Linux\RHEL 5.3 64-bit\RHEL 5.3 64-bit - Sreejith.vmx" "/home/quasar/workspace/SPSBuild/Nightlynew.sh"
But I get an error which shows "Guest program exited with non-zero exit code: 1".
The username, password, and path are correct.
Does anybody have any idea about this? Please give me an answer.
It seems like either "C:\SPSBuild\Nightly.bat" or "/home/quasar/workspace/SPSBuild/Nightlynew.sh" is failing and returning an error.
Can you run these scripts manually to see if they produce an error message? Can you read the scripts to determine why they return exit code 1?
The file must exist in the guest machine. If it does not exist, you need to use copyFileFromHostToGuest before runProgramInGuest.
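For the Linux VM from the question, the copy step would look roughly like this (a sketch; the host-side path is an assumption):
vmrun.exe -T ws -gu root -gp quasar copyFileFromHostToGuest "D:\Linux\RHEL 5.3 64-bit\RHEL 5.3 64-bit - Sreejith.vmx" "D:\SPSBuild\Nightlynew.sh" "/home/quasar/workspace/SPSBuild/Nightlynew.sh"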

Eclipse gdb problem debugging ffmpeg.c

The run configuration works for a given set of arguments while the debug configuration fails.
This is my build configuration for ffmpeg.c: http://pastebin.com/PFM4K4xF
As you can see from the generated build configuration, I have set POSIX-style paths for the ffmpeg source.
The debug/run arguments are: -i Debug/sample2.mpg -ab 56k -ar 22050 -b 512k -r 30 -s 320x240 Debug/out2.flv
This all works fine when I run the program; the output file is generated.
But when I try to debug the ffmpeg.c program,
it keeps stopping/hanging at certain instructions and the step-over option becomes disabled,
for example at show_banner() and parse_options (when I commented out show_banner() it stopped at parse_options).
More specifically, in show_banner() -> cmdutils it stops when trying to print swscale,
and in parse_options -> cmdutils.c it stops at the instruction po->u.func_arg(arg);
Upon further stepping through, I find this going into an infinite loop.
What is this error? How do I resume stepping over instructions?
Has anyone been able to debug ffmpeg.c completely from start to finish after giving it valid input? This is to observe the flow of execution based on the inputs given.
