Guest program exited with non-zero exit code: 1 - batch-file

I am working with the build/release process. We implement the build system using one host machine that runs two virtual machines: one Windows VM and one Linux VM. During the build we invoke Nightly.bat on the Windows VM and Nightly.sh on the Linux VM. I am using the following commands:
start /b vmrun.exe -T ws -gu "End" -gp Password runProgramInGuest "D:\Windows VM\Windows 7 x64 Edition + Visual Studio 2008\Windows 7 x64 Edition.vmx" -activeWindow "C:\SPSBuild\Nightly.bat"
vmrun.exe -T ws -gu root -gp quasar runProgramInGuest "D:\Linux\RHEL 5.3 64-bit\RHEL 5.3 64-bit - Sreejith.vmx" "/home/quasar/workspace/SPSBuild/Nightlynew.sh"
But I get an error that says "Guest program exited with non-zero exit code: 1".
The username, password, and paths are correct.
Does anybody have any idea what is going wrong? Please give me an answer.

It seems like either "C:\SPSBuild\Nightly.bat" or "/home/quasar/workspace/SPSBuild/Nightlynew.sh" is failing and returning an error.
Can you run these scripts manually to see if they produce an error message? Can you read the scripts to determine why they return exit code 1?
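For example, on the Linux guest you can run the script by hand and print its exit status (the same idea applies to Nightly.bat on the Windows guest, using %ERRORLEVEL%):
/home/quasar/workspace/SPSBuild/Nightlynew.sh; echo "exit status: $?"
Whatever non-zero value appears here is what vmrun reports back as the guest program's exit code.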

The file must exist in the guest machine. If it does not exist, you need to use copyFileFromHostToGuest before runProgramInGuest.
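For example (a sketch only; the host-side path D:\Builds\Nightlynew.sh is hypothetical), copy the script into the Linux guest and then run it:
vmrun -T ws -gu root -gp quasar copyFileFromHostToGuest "D:\Linux\RHEL 5.3 64-bit\RHEL 5.3 64-bit - Sreejith.vmx" "D:\Builds\Nightlynew.sh" "/home/quasar/workspace/SPSBuild/Nightlynew.sh"
vmrun -T ws -gu root -gp quasar runProgramInGuest "D:\Linux\RHEL 5.3 64-bit\RHEL 5.3 64-bit - Sreejith.vmx" "/home/quasar/workspace/SPSBuild/Nightlynew.sh"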

Related

MonetDB error when starting mclient, cannot set locale

I installed MonetDB on Ubuntu 16.04 using the instructions: https://www.monetdb.org/easy-setup/ubuntu-debian/
When trying to start the client: mclient -u monetdb -d testdb
I get back this error:
monetdbd: internal error while starting mserver 'database 'testdb' appears to shut itself down after starting, check monetdbd's logfile (merovingian.log) for possible hints'
and when I look inside the logfile I see that the problem is apparently related to the locale:
"2022-01-19 17:47:18 ERR testdb[15411]: cannot set locale"
Any hints?
The error message occurs only once in the code, so we can see exactly which call fails. The failing call is
setlocale(LC_CTYPE, "")
and the call is done by mserver5.
The call sets the locale for character types to whatever the environment specifies (i.e. a combination of the LC_ALL, LC_CTYPE, and LANG environment variables). It seems that they are set incorrectly in your environment.
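For example (a sketch, assuming an en_US.UTF-8 locale is acceptable; substitute whichever locale you actually want), you can check and repair the environment on Ubuntu like this:
locale
sudo locale-gen en_US.UTF-8
export LANG=en_US.UTF-8
export LC_CTYPE=en_US.UTF-8
Then restart monetdbd so that mserver5 is started with the corrected environment.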

OpenProcess returns ERROR_ACCESS_DENIED for 32bit elevated processes

I am looking for a specific elevated 32-bit process on a machine, which I would like to terminate. First, I need to be sure it is the right executable.
For that purpose I follow this Microsoft example of how to get all process file names using the OpenProcess(), EnumProcessModules(), and GetModuleFileNameEx() functions.
When executing from Visual Studio and from an elevated PowerShell (x86 or x64) I get an error code of 299 from OpenProcess(), but the process handle is OK and I can get the file name.
When I run the same binary in an elevated CMD shell (tested on Win10 x64 and Win7 x86), OpenProcess() fails with error 5, meaning ERROR_ACCESS_DENIED. This is a problem for me because, for specific reasons, the tool will eventually run from CMD.
I have already tried tweaking the desired access flags for OpenProcess(), but both versions give the same result as described above.
// dwDesiredAccess variants tried:
//   PROCESS_QUERY_INFORMATION | PROCESS_VM_READ | PROCESS_TERMINATE
//   PROCESS_QUERY_LIMITED_INFORMATION | PROCESS_VM_READ | PROCESS_TERMINATE
HANDLE hProcess = OpenProcess(dwDesiredAccess, FALSE, processID);
EnumProcessModules(hProcess, &hMod, sizeof(hMod), &cbNeeded);
GetModuleFileNameEx(....);
Thanks in advance for any hints and pointers!
Eventually, I gave up on this in C and implemented it in C# using the System.Diagnostics.Process methods, without any problems.
Thanks anyway for all your efforts!

rc.local is not running on raspberry pi's startup

I'm trying to run a simple C program when the Pi boots, so I followed the steps in the documentation (https://www.raspberrypi.org/documentation/linux/usage/rc-local.md), but when it starts, it shows this error:
Failed to start etc/rc.local compatibility.
See 'systemctl status rc-local.service' for details.
I do as it says and I receive this:
rc-local.service - /etc/rc.local Compatibility
Loaded: loaded (/lib/systemd/system/rc-local.service; static)
Drop-In: /etc/systemd/system/rc-local.service.d
ttyoutput.conf
Active: failed (Result: exit-code) since Tue 2015-12-08 10:44:23 UTC; 2min 18s ago
Process: 451 ExecStart=/etc/rc.local start (code=exit, status=203/EXEC)
My rc.local file looks like this:
./home/pi/server-starter &
exit 0
Can anyone show me what I'm doing wrong?
You have to refer to your script using an absolute path.
/home/pi/server-starter &
Notice the absence of the leading . compared to your version.
Also, you may have to add a reference to the shell right at the beginning of your rc.local.
#!/bin/sh -e
/home/pi/server-starter &
exit 0
To run a shell script from another shell script, use:
sh -c /absolute/path/to/script
To run that script in the background, use:
sh -c /absolute/path/to/script &
Don't forget exit 0 at the end of the file.

How to solve "ptrace operation not permitted" when trying to attach GDB to a process?

I'm trying to attach to a program with gdb but it returns:
Attaching to process 29139
Could not attach to process. If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
gdb-debugger returns "Failed to attach to process, please check privileges and try again."
strace returns "attach: ptrace(PTRACE_ATTACH, ...): Operation not permitted"
I changed "kernel.yama.ptrace_scope" 1 to 0 and /proc/sys/kernel/yama/ptrace_scope 1 to 0 and tried set environment LD_PRELOAD=./ptrace.so with this:
#include <stdio.h>
int ptrace(int i, int j, int k, int l) {
    printf(" ptrace(%i, %i, %i, %i), returning 0\n", i, j, k, l);
    return 0;
}
But it still returns the same error. How can I attach it to debuggers?
If you are using Docker, you will probably need these options:
docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined
If you are using Podman, you will probably need its --cap-add option too:
podman run --cap-add=SYS_PTRACE
This is due to kernel hardening in Linux; you can disable this behavior by echo 0 > /proc/sys/kernel/yama/ptrace_scope or by modifying it in /etc/sysctl.d/10-ptrace.conf
See also this article about it in Fedora 22 (with links to the documentation) and this comment thread about Ubuntu.
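In command form (a sketch; the sed pattern assumes the file still contains the default kernel.yama.ptrace_scope = 1 line):
sudo sysctl -w kernel.yama.ptrace_scope=0
sudo sed -i 's/ptrace_scope = 1/ptrace_scope = 0/' /etc/sysctl.d/10-ptrace.conf
The first command takes effect immediately; the second makes the change persist across reboots.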
I would like to add that I needed --security-opt apparmor=unconfined along with the options that @wisbucky mentioned. This was on Ubuntu 18.04 (both Docker client and host). Therefore, the full invocation for enabling gdb debugging within a container is:
docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --security-opt apparmor=unconfined
Just want to emphasize a related answer. Let's say that you're root and you've done:
strace -p 700
and get:
strace: attach: ptrace(PTRACE_SEIZE, 700): Operation not permitted
Check:
grep TracerPid /proc/700/status
If you see something like TracerPid: 12, i.e. not 0, that's the PID of the program that is already using the ptrace system call. Both gdb and strace use it, and there can only be one active at a time.
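If you want to see who that tracer is, something like this (a small sketch, reusing PID 700 from the example above) prints its process entry:
ps -fp "$(awk '/TracerPid/ {print $2}' /proc/700/status)"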
Not really addressing the above use-case but I had this problem:
Problem: It happened that I started my program with sudo, so when launching gdb it was giving me ptrace: Operation not permitted.
Solution: sudo gdb ...
As most of us land here for Docker issues I'll add the Kubernetes answer as it might come in handy for someone...
You must add the SYS_PTRACE capability in your pod's security context
at spec.containers.securityContext:
securityContext:
  capabilities:
    add: [ "SYS_PTRACE" ]
There are two securityContext keys in two different places. If it tells you that the key is not recognized, then you misplaced it; try the other one.
You probably also need to run as the root user by default. So in the other security context (spec.securityContext) add:
securityContext:
  runAsUser: 0
  runAsGroup: 0
  fsGroup: 101
FYI: 0 is root. The fsGroup value is unknown to me; for what I'm doing it doesn't matter, but it might for you.
Now you can do:
strace -s 100000 -e write=1 -e trace=write -p 16
You won't get the permission denied error anymore!
BEWARE: This is Pandora's box. Having this in production is NOT recommended.
I was running my code with higher privileges to deal with raw Ethernet sockets, by setting capabilities with the setcap command on a Debian distribution. I tried the solution above, echo 0 > /proc/sys/kernel/yama/ptrace_scope,
and also modifying it in /etc/sysctl.d/10-ptrace.conf, but that did not work for me.
Additionally, I tried setting capabilities on gdb in its installed location (/usr/bin/gdb), and that works: /sbin/setcap CAP_SYS_PTRACE=+eip /usr/bin/gdb.
Be sure to run this command with root privileges.
Jesup's answer is correct; it is due to Linux kernel hardening. In my case, I am using Docker Community for Mac, and in order to change the flag I must enter the LinuxKit shell using Justin Cormack's nsenter (ref: https://www.bretfisher.com/docker-for-mac-commands-for-getting-into-local-docker-vm/ ).
docker run -it --rm --privileged --pid=host justincormack/nsenter1
/ # cat /etc/issue
Welcome to LinuxKit
/ # cat /proc/sys/kernel/yama/ptrace_scope
1
/ # echo 0 > /proc/sys/kernel/yama/ptrace_scope
/ # exit
Maybe someone has already attached to this process with gdb:
ps -ef | grep gdb
gdb can't attach to the same process twice.
I was going to answer this old question because it is unaccepted and the other answers miss the point. The real answer may already be written in /etc/sysctl.d/10-ptrace.conf, as is the case for me under Ubuntu. This file says:
For applications launching crash handlers that need PTRACE, exceptions can
be registered by the debugee by declaring in the segfault handler
specifically which process will be using PTRACE on the debugee:
prctl(PR_SET_PTRACER, debugger_pid, 0, 0, 0);
So just do the same thing as described there: keep /proc/sys/kernel/yama/ptrace_scope at 1 and add prctl(PR_SET_PTRACER, debugger_pid, 0, 0, 0); to the debuggee. Then the debuggee will allow that debugger to attach to it. This works without sudo and without a reboot.
Usually, the debuggee also needs to call waitpid (or otherwise keep running) so that it doesn't exit after the crash and the debugger can find its PID.
If permissions are a problem, you probably will want to use gdbserver. (I almost always use gdbserver when I gdb, docker or no, for numerous reasons.) You will need gdbserver (Deb) or gdb-gdbserver (RH) installed in the docker image. Run the program in docker with
$ sudo gdbserver :34567 myprogram arguments
(pick a port number, 1025-65535). Then, in gdb on the host, say
(gdb) target remote 172.17.0.4:34567
where 172.17.0.4 is the IP address of the docker image as reported by /sbin/ip addr list run in the docker image. This will attach at a point before main runs. You can tb main and c to stop at main, or wherever you like. Run gdb under cgdb, emacs, vim, or even in some IDE, or plain. You can run gdb in your source or build tree, so it knows where everything is. (If it can't find your sources, use the dir command.) This is usually much better than running it in the docker image.
gdbserver relies on ptrace, so you will also need to do the other things suggested above. --privileged --pid=host sufficed for me.
If you deploy to other OSes or embedded targets, you can run gdbserver or a gdb stub there, and run gdb the same way, connecting across a real network or even via a serial port (/dev/ttyS0).
I don't know what you are doing with LD_PRELOAD or your ptrace function.
Why don't you try attaching gdb to a very simple program? Make a program that simply repeatedly prints Hello or something and use gdb --pid [hello program PID] to attach to it.
If that does not work then you really do have a problem.
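One quick way to try this without even compiling anything (a sketch; it attaches to a background shell loop, which exercises the same ptrace permission check):
while true; do echo Hello; sleep 1; done &
gdb --pid $!
If gdb cannot attach even to this trivial process that you own, the problem is with ptrace permissions rather than with your target program.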
Another issue is the user ID. Is the program that you are tracing setting itself to another UID? If it is then you cannot ptrace it unless you are using the same user ID or are root.
I faced the same problem and tried a lot of solutions. Finally I found one, but I really don't know what the problem was. First I modified the ptrace_scope value in the configuration and logged into Ubuntu as root, but the problem still appeared. The strangest thing is that gdb showed me a message saying:
Could not attach to process. If your uid matches the uid of the target process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try again as the root user.
For more details, see /etc/sysctl.d/10-ptrace.conf
warning: process 3767 is already traced by process 3755 ptrace: Operation not permitted.
With the ps command, process 3755 was not listed.
I found process 3755 in /proc/$pid, but I don't understand what it was!
Finally, I deleted the target file (foo.c) that I was trying to attach to via gdb and via a tracer C program using the PTRACE_ATTACH syscall, and in another folder I created another C program and compiled it.
The problem was solved, and I was able to attach to the other process either with gdb or with the PTRACE_ATTACH syscall.
(gdb) attach 4416
Attaching to process 4416
and I sent a lot of signals to process 4416. I tested it with both gdb and ptrace; both run correctly.
Really, I don't know what the problem was, but I think it is not a bug in Ubuntu, even though a lot of sites refer to it as one, such as https://askubuntu.com/questions/143561/why-wont-strace-gdb-attach-to-a-process-even-though-im-root
Extra information
If you want to make changes to the network interfaces, such as adding an OVS bridge, you must use --privileged instead of --cap-add NET_ADMIN.
sudo docker run -itd --name=testliz --privileged --cap-add=SYS_PTRACE --security-opt seccomp=unconfined ubuntu
If you are using FreeBSD, edit /etc/sysctl.conf, change the line
security.bsd.unprivileged_proc_debug=0
to
security.bsd.unprivileged_proc_debug=1
Then reboot.
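If you prefer not to reboot, the same sysctl can normally be changed at runtime as well (keep the line in /etc/sysctl.conf so it survives the next boot):
sysctl security.bsd.unprivileged_proc_debug=1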

Forcing program to create coredump on freebsd

In my project I added a new module and now my process is being terminated by signal 11.
I want to track down and understand the problem, but no coredump file is generated by FreeBSD.
I have set the sysctls as follows:
sysctl -a | grep core
kern.corefile: /usr/core
kern.nodump_coredump: 1
kern.coredump: 1
kern.sugid_coredump: 1
debug.elf64_legacy_coredump: 1
debug.elf32_legacy_coredump: 1
I also set ulimit -c unlimited
From my code I removed all signal-handling code such as "sigaction(SIGTERM, &signal, &signal_old);"
so as not to prevent the kernel from generating a coredump.
Why can't I see any coredump? What am I missing?
Also, is there any method to force a program running on FreeBSD to create a coredump, equivalent to do_coredump() in Linux?
The problem is in:
kern.corefile: /usr/core
Something like the following should help:
sysctl -w kern.corefile="%N.core"
If I recall correctly, kern.corefile is the complete name of the resulting corefile, not the directory in which it should be placed. It also needs to be writable by the user running the process. /usr/core looks like a directory and/or a location writable only by root.
kern.nodump_coredump: 1 also looks suspicious. I don't remember that sysctl existing in the last version of FreeBSD I used, but it looks like it's intended to disable core dumps. Try setting it to 0.
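Putting the suggestions together, a possible sequence to try (myprogram stands in for your crashing binary):
sysctl kern.corefile="%N.core"
sysctl kern.nodump_coredump=0
ulimit -c unlimited
./myprogram
ls -l *.core
With %N.core, the core file should be written as <program name>.core in the working directory of the crashing process, which avoids the permission problem with /usr/core.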
