How can I run dracut commands from non-root C code?

I'm developing a tool that modifies LUKS partitions and disks.
Everything is working very well. Until now...
To handle disks properly as a non-root user, I added some polkit rules to change passwords, open partitions, change crypttab, and so on.
But I'm seeing problems when I change crypttab and then need to run dracut to apply some dracut modules (dracut --force). Especially that last step.
My user is part of the admin group, and I added a rule to the sudoers file so that no sudo password is asked when my application runs.
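For reference, a sudoers rule of that kind looks roughly like this (the user name is a placeholder, and the rule would be edited with visudo):
myuser ALL=(root) NOPASSWD: /usr/bin/dracut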
So, I decided to use this code:
gchar *dracut[] = {"/usr/bin/sudo", "/usr/bin/dracut", "--force", NULL};
pid_t child;

if ((child = fork()) > 0) {
    waitpid(child, NULL, 0);          /* parent: wait for the command to finish */
} else if (!child) {
    execvp("/usr/bin/sudo", dracut);  /* child: run "sudo dracut --force" */
}
It is not working because SELinux prevents this command from running:
SELinux is preventing /usr/bin/sudo from getattr access on the chr_file /dev/hpet.
***** Plugin catchall (100. confidence) suggests **************************
If you believe that sudo should be allowed getattr access on the hpet chr_file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'sudo' --raw | audit2allow -M my-sudo
# semodule -X 300 -i my-sudo.pp
Additional Information:
Source Context system_u:system_r:xdm_t:s0-s0:c0.c1023
Target Context system_u:object_r:clock_device_t:s0
Target Objects /dev/hpet [ chr_file ]
Source sudo
Source Path /usr/bin/sudo
Port <Unknown>
Host <Unknown>
Source RPM Packages sudo-1.8.25p1-4.el8.x86_64
Target RPM Packages
Policy RPM selinux-policy-3.14.1-61.el8.noarch
Selinux Enabled True
Policy Type targeted
Enforcing Mode Enforcing
Host Name jcfaracco@hostname
Platform Linux jcfaracco@hostname 4.18.0-80.el8.x86_64 #1
SMP Wed Mar 13 12:02:46 UTC 2019 x86_64 x86_64
Alert Count 9
First Seen 2019-06-14 19:32:42 -03
Last Seen 2019-06-14 19:42:46 -03
Local ID 772b2c41-2302-4ee0-8886-52789eb63e22
Raw Audit Messages
type=AVC msg=audit(1560552166.658:199): avc: denied { getattr } for pid=2291 comm="sudo" path="/dev/hpet" dev="devtmpfs" ino=10776 scontext=system_u:system_r:xdm_t:s0-s0:c0.c1023 tcontext=system_u:object_r:clock_device_t:s0 tclass=chr_file permissive=0
type=SYSCALL msg=audit(1560552166.658:199): arch=x86_64 syscall=stat success=no exit=EACCES a0=7ffd4a6dffb0 a1=7ffd4a6def20 a2=7ffd4a6def20 a3=7fe845a73181 items=0 ppid=1756 pid=2291 auid=4294967295 uid=982 gid=980 euid=0 suid=0 fsuid=0 egid=980 sgid=980 fsgid=980 tty=tty1 ses=4294967295 comm=sudo exe=/usr/bin/sudo subj=system_u:system_r:xdm_t:s0-s0:c0.c1023 key=(null)ARCH=x86_64 SYSCALL=stat AUID=unset UID=gnome-initial-setup GID=gnome-initial-setup EUID=root SUID=root FSUID=root EGID=gnome-initial-setup SGID=gnome-initial-setup FSGID=gnome-initial-setup
Hash: sudo,xdm_t,clock_device_t,chr_file,getattr
Do you know how to fix this issue? Any other idea for calling dracut from C code is welcome too, in case there is a smarter way to do this.

Related

Asimbench benchmark running in gem5 fails with "fatal: Unable to find destination for [0x40008000:0x40008040] on system.iobus"

I have downloaded the asimbench files provided on the gem5.org website and I have modified configs/common/FSConfig.py with the following changes:
def makeArmSystem(..):
    ..................
    self.cf0 = CowIdeDisk(driveID='master')
    self.cf2 = CowIdeDisk(driveID='master')
    self.cf0.childImage(mdesc.disk())
    self.cf2.childImage(disk("sdcard-1g-mxplayer.img"))
    # Old platforms have a built-in IDE or CF controller. Default to
    # the IDE controller if both exist. New platforms expect the
    # storage controller to be added from the config script.
    if hasattr(self.realview, "ide"):
        # self.realview.ide.disks = [self.cf0]
        self.realview.ide.disks = [self.cf0, self.cf2]
    elif hasattr(self.realview, "cf_ctrl"):
        # self.realview.cf_ctrl.disks = [self.cf0]
        self.realview.cf_ctrl.disks = [self.cf0, self.cf2]
    else:
        self.pci_ide = IdeController(disks=[self.cf0])
        pci_devices.append(self.pci_ide)
I used this command:
./build/ARM/gem5.opt configs/example/fs.py --mem-size=8192MB
--disk-image=/home/yaz/gem5/full_system_images/disks/ARMv7a-ICS-Android.SMP.Asimbench-v3.img
--kernel=/home/yaz/gem5/full_system_images/binaries/vmlinux.smp.ics.arm.asimbench.2.6.35
--os-type=android-ics --cpu-type=MinorCPU --machine-type=VExpress_GEM5 --script=/home/yaz/gem5/full_system_images/boot/adobe.rcS
warn: CheckedInt already exists in allParams. This may be caused by the Python 2.7 compatibility layer.
warn: Enum already exists in allParams. This may be caused by the Python 2.7 compatibility layer.
warn: ScopedEnum already exists in allParams. This may be caused by the Python 2.7 compatibility layer.
gem5 Simulator System. http://gem5.org
gem5 is copyrighted software; use the --copyright option for details.
gem5 version 20.0.0.3
gem5 compiled Jul 7 2020 16:17:12
gem5 started Jul 16 2020 04:41:50
gem5 executing on yazeed-OptiPlex-9010, pid 3367
command line: ./build/ARM/gem5.opt configs/example/fs.py --mem-size=8192MB --disk-image=/home/yaz/gem5/full_system_images/disks/ARMv7a-ICS-Android.SMP.Asimbench-v3.img --kernel=/home/yaz/gem5/full_system_images/binaries/vmlinux.smp.ics.arm.asimbench.2.6.35 --os-type=android-ics --cpu-type=MinorCPU --machine-type=VExpress_GEM5 --script=/home/yaz/gem5/full_system_images/boot/adobe.rcS
Global frequency set at 1000000000000 ticks per second
warn: No dot file generated. Please install pydot to generate the dot file and pdf.
info: kernel located at: /home/yaz/gem5/full_system_images/binaries/vmlinux.smp.ics.arm.asimbench.2.6.35
system.vncserver: Listening for connections on port 5900
system.terminal: Listening for connections on port 3456
system.realview.uart1.device: Listening for connections on port 3457
system.realview.uart2.device: Listening for connections on port 3458
system.realview.uart3.device: Listening for connections on port 3459
0: system.remote_gdb: listening for remote gdb on port 7000
info: Using bootloader at address 0x80000000
info: Using kernel entry physical address at 0x140008000
warn: DTB file specified, but no device tree support in kernel
**** REAL SIMULATION ****
warn: Existing EnergyCtrl, but no enabled DVFSHandler found.
info: Entering event queue @ 0. Starting simulation...
fatal: Unable to find destination for [0x40008000:0x40008040] on system.iobus
Memory Usage: 8786764 KBytes
Thanks for helping

How do I use the PAM capabilities module to grant capabilities to a particular user and executable?

I'm attempting to make a program which uses raw sockets run correctly as non-root with Linux capabilities. The program is as follows:
#include <netinet/ip.h>

int main()
{
    int sd = socket(PF_INET, SOCK_RAW, IPPROTO_TCP);
    if (sd < 0)
    {
        perror("socket() error");
        return 1;
    }
    return 0;
}
If I compile it and run it as non-root, I get an error, as expected:
[user@localhost ~]$ make socket
cc socket.c -o socket
[user@localhost ~]$ ./socket
socket() error: Operation not permitted
If I add the cap_net_raw capability, as an effective and permitted capability, it works.
[user@localhost ~]$ sudo setcap cap_net_raw+ep socket
[sudo] password for user:
[user@localhost ~]$ ./socket
[user@localhost ~]$
Now, I want to use pam_cap.so to make it so that only a particular user can run this program with cap_net_raw, instead of everyone. My /etc/security/capability.conf is:
cap_net_raw user
My /etc/pam.d/login is (note that I also tried /etc/pam.d/sshd but that did not seem to work either):
#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth substack system-auth
auth include postlogin
#Added this line to use pam_cap
auth required pam_cap.so
account required pam_nologin.so
account include system-auth
password include system-auth
# pam_selinux.so close should be the first session rule
session required pam_selinux.so close
session required pam_loginuid.so
session optional pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session required pam_selinux.so open
session required pam_namespace.so
session optional pam_keyinit.so force revoke
session include system-auth
session include postlogin
-session optional pam_ck_connector.so
I was in an ssh session; I logged out and back in after that, and then executed the following commands:
[user@localhost ~]$ sudo setcap cap_net_raw+p socket
[sudo] password for user:
[user@localhost ~]$ getcap socket
socket = cap_net_raw+p
[user@localhost ~]$ ./socket
socket() error: Operation not permitted
[user@localhost ~]$
My question is: Why was I not able to execute the 'socket' program with cap_net_raw? I thought that when I logged in, my user would obtain it as a permitted capability, and it would allow 'user' to run 'socket' with the cap_net_raw.
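One way to check whether pam_cap actually added anything to the login shell's capability sets is to inspect them directly, for example:
grep Cap /proc/$$/status
and decode the reported hex masks with capsh --decode=<mask> (the mask value is whatever your shell shows).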
This is what I'm running on:
[user@localhost ~]$ uname -a
Linux localhost.localdomain 3.10.0-123.el7.x86_64 #1 SMP Mon Jun 30 12:09:22 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
[user@localhost ~]$ cat /etc/redhat-release
CentOS Linux release 7.0.1406 (Core)
I figured out that I had the wrong capabilities on the file. In order for the process to be able to obtain effective capabilities from the pam_cap module, the file needs to be configured with the "inherited" capability as well. So, setting caps on the file should be:
sudo setcap cap_net_raw+ip socket
However, I still could only get the program to work successfully from a normal tty login, and not an ssh login.
I came across this question when trying to use Google to jump to the pam_cap.so documentation.
The way to use setcap to set this binary up for use with pam_cap.so is:
sudo setcap cap_net_raw=ie socket
That is, the i instructs the binary to promote the process Inheritable capability flag into a process Permitted p capability, and the legacy e instructs the kernel to raise its value in the process' Effective flag when the program is invoked.
You can skip the e part if you want to use libcap's cap_set_proc() function to raise the Effective flag from inside the program. Something like:
#include <sys/capability.h>

cap_t c = cap_get_proc();                  /* current process capabilities */
cap_fill(c, CAP_EFFECTIVE, CAP_PERMITTED); /* copy Permitted bits into Effective */
cap_set_proc(c);
cap_free(c);
FWIW I've recently written an article on the various ways you can inherit capabilities in the modern kernel.

How to solve "ptrace operation not permitted" when trying to attach GDB to a process?

I'm trying to attach a program with gdb but it returns:
Attaching to process 29139
Could not attach to process. If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
gdb-debugger returns "Failed to attach to process, please check privileges and try again."
strace returns "attach: ptrace(PTRACE_ATTACH, ...): Operation not permitted"
I changed "kernel.yama.ptrace_scope" from 1 to 0 and /proc/sys/kernel/yama/ptrace_scope from 1 to 0, and tried set environment LD_PRELOAD=./ptrace.so with this:
#include <stdio.h>

int ptrace(int i, int j, int k, int l) {
    printf(" ptrace(%i, %i, %i, %i), returning -1\n", i, j, k, l);
    return 0;
}
But it still returns the same error. How can I attach it to debuggers?
If you are using Docker, you will probably need these options:
docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined
If you are using Podman, you will probably need its --cap-add option too:
podman run --cap-add=SYS_PTRACE
This is due to kernel hardening in Linux; you can disable this behavior by echo 0 > /proc/sys/kernel/yama/ptrace_scope or by modifying it in /etc/sysctl.d/10-ptrace.conf
See also this article about it in Fedora 22 (with links to the documentation) and this comment thread about Ubuntu.
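To make the change persist across reboots, the usual approach is to set it in that sysctl file and reload it, roughly:
# /etc/sysctl.d/10-ptrace.conf
kernel.yama.ptrace_scope = 0
and then run sudo sysctl --system (or reboot) to apply it.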
I would like to add that I needed --security-opt apparmor=unconfined along with the options that @wisbucky mentioned. This was on Ubuntu 18.04 (both Docker client and host). Therefore, the full invocation for enabling gdb debugging within a container is:
docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --security-opt apparmor=unconfined
Just want to emphasize a related answer. Let's say that you're root and you've done:
strace -p 700
and get:
strace: attach: ptrace(PTRACE_SEIZE, 700): Operation not permitted
Check:
grep TracerPid /proc/700/status
If you see something like TracerPid: 12, i.e. not 0, that's the PID of the program that is already using the ptrace system call. Both gdb and strace use it, and there can only be one active at a time.
Not really addressing the above use-case but I had this problem:
Problem: It happened that I started my program with sudo, so when launching gdb it was giving me ptrace: Operation not permitted.
Solution: sudo gdb ...
As most of us land here for Docker issues I'll add the Kubernetes answer as it might come in handy for someone...
You must add the SYS_PTRACE capability in your pod's security context
at spec.containers.securityContext:
securityContext:
capabilities:
add: [ "SYS_PTRACE" ]
There are 2 securityContext keys in 2 different places. If it tells you that the key is not recognized, then you misplaced it; try the other one.
You probably also need to run as the root user by default. So in the other security context (spec.securityContext) add:
securityContext:
runAsUser: 0
runAsGroup: 0
fsGroup: 101
FYI : 0 is root. But the fsGroup value is unknown to me. For what I'm doing I don't care but you might.
Now you can do :
strace -s 100000 -e write=1 -e trace=write -p 16
You won't get the permission denied anymore!
BEWARE: this is a Pandora's box. Having this in production is NOT recommended.
I was running my code with higher privileges to deal with raw Ethernet sockets, by using the setcap command on a Debian distribution. I tried the above solution, echo 0 > /proc/sys/kernel/yama/ptrace_scope,
or modifying it in /etc/sysctl.d/10-ptrace.conf, but that did not work for me.
Additionally, I tried the setcap command on the installed gdb binary (/usr/bin/gdb) and it works: /sbin/setcap CAP_SYS_PTRACE=+eip /usr/bin/gdb.
Be sure to run this command with root privileges.
Jesup's answer is correct; it is due to Linux kernel hardening. In my case, I am using Docker Community for Mac, and in order to change the flag I must enter the LinuxKit shell using Justin Cormack's nsenter (ref: https://www.bretfisher.com/docker-for-mac-commands-for-getting-into-local-docker-vm/ ).
docker run -it --rm --privileged --pid=host justincormack/nsenter1
/ # cat /etc/issue
Welcome to LinuxKit
## .
## ## ## ==
## ## ## ## ## ===
/"""""""""""""""""\___/ ===
{ / ===-
\______ O __/
\ \ __/
\____\_______/
/ # cat /proc/sys/kernel/yama/ptrace_scope
1
/ # echo 0 > /proc/sys/kernel/yama/ptrace_scope
/ # exit
Maybe someone has already attached to this process with gdb:
ps -ef | grep gdb
gdb can't attach to the same process twice.
I was going to answer this old question as it is unaccepted and the other answers miss the point. The real answer may already be written in /etc/sysctl.d/10-ptrace.conf, as was the case for me under Ubuntu. This file says:
For applications launching crash handlers that need PTRACE, exceptions can
be registered by the debugee by declaring in the segfault handler
specifically which process will be using PTRACE on the debugee:
prctl(PR_SET_PTRACER, debugger_pid, 0, 0, 0);
So just do the same thing as above: keep /proc/sys/kernel/yama/ptrace_scope at 1 and add prctl(PR_SET_PTRACER, debugger_pid, 0, 0, 0); in the debuggee. Then the debuggee will allow that debugger to debug it. This works without sudo and without a reboot.
Usually the debuggee also needs to wait rather than exit after the crash, so the debugger can find the debuggee's PID.
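A minimal sketch of that idea, assuming the debuggee somehow knows the debugger's PID in advance (a hard-coded placeholder here):
#include <sys/prctl.h>
#include <signal.h>
#include <unistd.h>

static pid_t debugger_pid = 12345;   /* placeholder: the PID that will attach */

static void on_segfault(int sig)
{
    (void)sig;
    /* Allow exactly this debugger to ptrace us, then wait for it to attach. */
    prctl(PR_SET_PTRACER, debugger_pid, 0, 0, 0);
    pause();
}

int main(void)
{
    signal(SIGSEGV, on_segfault);
    /* ... normal program logic ... */
    return 0;
}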
If permissions are a problem, you probably will want to use gdbserver. (I almost always use gdbserver when I gdb, docker or no, for numerous reasons.) You will need gdbserver (Deb) or gdb-gdbserver (RH) installed in the docker image. Run the program in docker with
$ sudo gdbserver :34567 myprogram arguments
(pick a port number, 1025-65535). Then, in gdb on the host, say
(gdb) target remote 172.17.0.4:34567
where 172.17.0.4 is the IP address of the docker image as reported by /sbin/ip addr list run in the docker image. This will attach at a point before main runs. You can tb main and c to stop at main, or wherever you like. Run gdb under cgdb, emacs, vim, or even in some IDE, or plain. You can run gdb in your source or build tree, so it knows where everything is. (If it can't find your sources, use the dir command.) This is usually much better than running it in the docker image.
gdbserver relies on ptrace, so you will also need to do the other things suggested above. --privileged --pid=host sufficed for me.
If you deploy to other OSes or embedded targets, you can run gdbserver or a gdb stub there, and run gdb the same way, connecting across a real network or even via a serial port (/dev/ttyS0).
I don't know what you are doing with LD_PRELOAD or your ptrace function.
Why don't you try attaching gdb to a very simple program? Make a program that simply repeatedly prints Hello or something and use gdb --pid [hello program PID] to attach to it.
If that does not work then you really do have a problem.
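For example, something as small as this throwaway test program is enough to verify that attaching works at all:
/* hello.c - loops forever so there is something to attach to */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    for (;;) {
        puts("Hello");
        sleep(1);
    }
}
Build it with cc hello.c -o hello, run it, and attach from another terminal with gdb --pid $(pidof hello).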
Another issue is the user ID. Is the program that you are tracing setting itself to another UID? If it is then you cannot ptrace it unless you are using the same user ID or are root.
I faced the same problem and tried a lot of solutions; finally I found one, but I really don't know what the problem was. First I modified the ptrace configuration value and logged into Ubuntu as root, but the problem still appeared. The strangest thing that happened is that gdb showed me a message saying:
Could not attach to process. If your uid matches the uid of the target process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try again as the root user.
For more details, see /etc/sysctl.d/10-ptrace.conf
warning: process 3767 is already traced by process 3755 ptrace: Operation not permitted.
With the ps command, process 3755 was not listed.
I found process 3755 under /proc/$pid, but I don't understand what it was!
Finally, I deleted the target file (foo.c) that I was trying to attach to via gdb and via a tracer C program using the PTRACE_ATTACH syscall, and in another folder I created another C program and compiled it.
The problem was solved and I was able to attach to the new process either with gdb or with the PTRACE_ATTACH syscall.
(gdb) attach 4416
Attaching to process 4416
I sent a lot of signals to process 4416 and tested it with both gdb and ptrace; both ran correctly.
I really don't know what the problem was, but I don't think it is a bug in Ubuntu, even though a lot of sites refer to it as such, e.g. https://askubuntu.com/questions/143561/why-wont-strace-gdb-attach-to-a-process-even-though-im-root
Extra information
If you want to make changes to the network interfaces, such as adding an OVS bridge, you must use --privileged instead of --cap-add NET_ADMIN.
sudo docker run -itd --name=testliz --privileged --cap-add=SYS_PTRACE --security-opt seccomp=unconfined ubuntu
If you are using FreeBSD, edit /etc/sysctl.conf, change the line
security.bsd.unprivileged_proc_debug=0
to
security.bsd.unprivileged_proc_debug=1
Then reboot.

Xdebug - command is not available

I'm remotely debugging my project in PhpStorm. The IDE shows 'Connected' for a moment and then immediately goes back to 'Waiting for incoming connection...'
Below is Xdebug log from this session
I: Connecting to configured address/port: X.x.x.x:9000.
I: Connected to client. :-)
> <init xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" fileuri="file:///xxx/info.php" language="PHP" protocol_version="1.0" appid="4365" idekey="10594"><engine version="2.2.2"><![CDATA[Xdebug]]></engine><author><![CDATA[Derick Rethans]]></author><url><![CDATA[http://xdebug.org]]></url><copyright><![CDATA[Copyright (c) 2002-2013 by Derick Rethans]]></copyright></init>
<- feature_set -i 0 -n show_hidden -v 1
> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="feature_set" transaction_id="0" feature="show_hidden" success="1"></response>
<- feature_set -i 1 -n max_depth -v 1
> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="feature_set" transaction_id="1" feature="max_depth" success="1"></response>
<- feature_set -i 2 -n max_children -v 100
> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="feature_set" transaction_id="2" feature="max_children" success="1"></response>
<- status -i 3
> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="status" transaction_id="3" status="starting" reason="ok"></response>
<- step_into -i 4
> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="step_into" transaction_id="4" status="stopping" reason="ok"></response>
<- breakpoint_set -i 5 -t line -f file://xxx/info.php -n 3
> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="breakpoint_set" transaction_id="5"><error code="5"><message><![CDATA[command is not available]]></message></error></response>
"
According to Xdebug documentation status "stopping" is
'State after completion of code execution. This typically happens at the end of code execution, allowing the IDE to further interact with the debugger engine (for example, to collect performance data, or use other extended commands).'
So my debugger stops before reaching the first breakpoint (set on the first line).
Could it be a question of server configuration?
You should go to php.ini and delete a line like this
extension=php_xdebug-...
How did this line get created?
You put the Xdebug file into the PHP extensions path, like this:
.../php5.X.XX/ext/
Then you may have turned this PHP extension on with one of the _AMP UI tools like WAMP, XAMPP, etc.
To prevent this painful misfortune you must put the Xdebug file into
.../php5.X.XX/zend_ext/
It'll make Xdebug hidden from any _AMP tool.
And correct your zend_extension parameter too.
zend_extension = .../php5.X.XX/ext/php_xdebug-...
to
zend_extension = .../php5.X.XX/zend_ext/php_xdebug-...
That is the common default path for it.
Please remember!
With PhpStorm, Eclipse, Zend, etc., you may need to correct two php.ini files.
The first one is for your web server, commonly under the Apache folder:
...\Apache2.X.XX\bin\
The second one is for the direct PHP-script debugging. It lies in the PHP hosting folder:
...\php\php5.X.XX\
In my case, the cause of the "breakpoint_set" / "command is not available" problem was a disabled xdebug.extended_info option (it is enabled by default, but I had disabled it for profiling).
Breakpoints do not work when xdebug.extended_info is disabled.
I got breakpoints working again after re-enabling xdebug.extended_info.
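In other words, the php.ini (or Xdebug ini) line should read roughly:
xdebug.extended_info = 1
or the setting can simply be left out, since it is enabled by default.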
I had the same problem under Windows with PhpStorm, and I was googling for a long time. Eventually, my solution was this:
in php.ini:
xdebug.remote_mode = "jit"
From phpstorm tutorial, JIT - "Just-In-Time" Mode
https://www.jetbrains.com/help/phpstorm/2016.2/configuring-xdebug.html#d43035e303
UPD
No, this option did not actually help me. But I resolved my issue in the end:
I use PhpStorm on Windows 7, and I had configured the path mapping this way:
d:\serverroot\vhost\www => d:\serverroot\vhost\www
but in my old config I spotted this mapping:
d:\serverroot\vhost\www => d:/serverroot/vhost/www
Finally
On Windows machines, in the path mappings in the server configuration, replace the \ with /
I think the only reason why this could happen is that your info.php has a syntax error. In that case, there is no code to execute and the script goes directly to "stopping" upon issue of the "step_into".
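A quick way to rule that out is to lint the file from the command line, e.g.:
php -l info.php
which reports any parse error without executing the script.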
Zend_Opcache / OPcache can cause this issue as well; if you have it enabled, try disabling it.
This error can be emitted when the XDebug extension is compiled into a non-debug build of the PHP runtime. The process will not fail (as it shouldn't), but the XDebug extension will stop doing anything for the duration of that process.

Clearcase: How to control whether SUID programs work in a view or not?

We have two machines (under discussion) running ClearCase - different versions of ClearCase. Otherwise, they are about as identical in setup as can be - same Linux x86/64 kernel etc.
On one machine, SUID root programs in the view work as SUID root programs.
On the other machine, SUID root programs in the view do not work with SUID privileges, leading to unexpected results.
The only difference we've spotted so far is:
Working view: CC 7.0.1
Non-working view: CC 7.1.1.1
I can give the full output of cleartool -version if it matters, but I suspect it won't. These are the first versions listed.
Questions
Is this a known difference between the versions of ClearCase, or is it a configuration item, or something else?
Is it possible to configure the newer version of ClearCase (MVFS) to allow SUID root programs to run 'properly'?
If it is configurable, how do we change the configuration make the new version allow SUID programs?
We have a myriad of machines running ClearCase, on a lot of different platforms. There have been rumours that on some machines, our SUID software has to be run 'out of view' to work. Now someone was reporting a bug, and it has taken most of the day to narrow down the differences. The issue addressed in the question seems a plausible explanation. If it is something else, so be it. I still need the hair I lost today back again!
Extra Information
All views are dynamic, not snapshot.
This is the output of cleartool lsview -l -full -pro -cview on the machine where SUID programs do work, running ClearCase 7.0.1:
Tag: idsdb00222108.jleffler.toru
Global path: /net/toru/work4/atria/idsdb00222108.jleffler.toru.vws
Server host: toru
Region: lenexa
Active: YES
View tag uuid:6dac5149.2d7511e0.8c62.00:14:5e:69:25:d0
View on host: toru
View server access path: /work4/atria/idsdb00222108.jleffler.toru.vws
View uuid: 6dac5149.2d7511e0.8c62.00:14:5e:69:25:d0
View owner: lenexa.pd/jleffler
Created 2011-01-31T11:58:11-08:00 by jleffler.rd@toru
Last modified 2011-02-26T22:32:49-08:00 by jleffler.rd@toru.lenexa.ibm.com
Last accessed 2011-02-26T22:44:55-08:00 by jleffler.rd@toru.lenexa.ibm.com
Last read of private data 2011-02-26T22:44:55-08:00 by jleffler.rd@toru.lenexa.ibm.com
Last config spec update 2011-02-26T01:10:36-08:00 by jleffler.rd@toru.lenexa.ibm.com
Last view private object update 2011-02-26T22:32:49-08:00 by jleffler.rd@toru.lenexa.ibm.com
Text mode: unix
Properties: dynamic readwrite shareable_dos
Owner: lenexa.pd/jleffler : rwx (all)
Group: lenexa.pd/rd : rwx (all)
Other: : rwx (all)
Additional groups: lenexa.pd/RAND lenexa.pd/ccusers lenexa.pd/ccids lenexa.pd/ccos
This is the output on the machine where SUID programs do not 'work', running ClearCase 7.1.1.1:
Tag: new.jleffler.zeetes
Global path: /tmp/jl/new.jleffler.zeetes.vws
Server host: zeetes
Region: lenexa
Active: YES
View tag uuid:f62b7c80.414111e0.9cec.00:14:5e:de:1b:44
View on host: zeetes
View server access path: /tmp/jl/new.jleffler.zeetes.vws
View uuid: f62b7c80.414111e0.9cec.00:14:5e:de:1b:44
View owner: lenexa.pd/informix
Created 2011-02-25T18:40:11-06:00 by informix.informix@zeetes
Last modified 2011-02-25T18:49:56-06:00 by informix.informix@zeetes
Last accessed 2011-02-25T18:50:31-06:00 by informix.informix@zeetes
Last read of private data 2011-02-25T18:50:31-06:00 by informix.informix@zeetes
Last config spec update 2011-02-25T18:49:37-06:00 by informix.informix@zeetes
Last view private object update 2011-02-25T18:49:56-06:00 by informix.informix@zeetes
Text mode: unix
Properties: dynamic readwrite shareable_dos
Owner: lenexa.pd/informix : rwx (all)
Group: lenexa.pd/informix : r-x (read)
Other: : r-x (read)
Additional groups: lenexa.pd/RAND lenexa.pd/ccids lenexa.pd/ccos
Detecting that SUID programs are not working
The problem is not that there is an error message from the operating system about running the SUID program. The problem is that even though the program appears to be setuid root, when run, the program is not actually setuid:
Zeetes IX: ls -l asroot
-r-sr-xr-x 1 root informix 24486 Feb 25 18:49 asroot
Zeetes IX: ./asroot id
asroot: not installed SUID root
Zeetes IX:
This is the output from asroot when it is not installed with SUID root privileges. On the other machine:
Toru JL: ls -l asroot
-r-sr-xr-x 1 root informix 26297 2011-02-27 00:11 asroot
Toru JL: ./asroot id
uid=0(root) gid=1240(rd) groups=1240(rd),1360(RAND),8714(ccusers),8803(ccids),8841(ccos)
Toru JL:
This is more or less the output I'd expect if the program is installed with SUID root privileges.
Mount information
The two main VOBs are tristarp and tristarm. On the machine where SUID is OK (wrapping done manually to avoid scrollbars):
aether:/vobs/tristarm.vbs on /vobs/tristarm.vbs type nfs \
(rw,hard,intr,bg,addr=9.25.149.151)
charon:/vobs/tristarp.vbs on /vobs/tristarp.vbs type nfs \
(rw,hard,intr,bg,addr=9.25.149.147)
charon:/vobs/tristarp.vbs on /vobs/tristarp type mvfs \
(uuid=684ef023.2dd111d0.b696.08:00:09:b1:a4:c5)
aether:/vobs/tristarm.vbs on /vobs/tristarm type mvfs \
(uuid=b74900ef.814511cf.afee.08:00:09:b1:54:d5)
On the machine where SUID is not OK:
aether:/vobs/tristarm.vbs on /vobs/tristarm type mvfs \
(uuid=b74900ef.814511cf.afee.08:00:09:b1:54:d5,nosuid)
aether:/vobs/tristarm.vbs on /vobs/tristarm.vbs type nfs \
(rw,hard,intr,bg,addr=9.25.149.151)
charon:/vobs/tristarp.vbs on /vobs/tristarp.vbs type nfs \
(rw,hard,intr,bg,addr=9.25.149.147)
charon:/vobs/tristarp.vbs on /vobs/tristarp type mvfs \
(uuid=684ef023.2dd111d0.b696.08:00:09:b1:a4:c5)
And there's the miscreant! (And I thought I'd looked at the mount information. Evidently I'd not looked accurately enough, or only on one machine - the working one - or something.) It is odd that only one of these two VOBs is mounted with nosuid; very odd.
We have an answer why!
Thanks, VonC.
Explorations
There is provision in the scripts /etc/init.d/clearcase and /etc/clearcase for the scripts and programs under /opt/rational/clearcase to use a file /var/adm/rational/clearcase/suid_mounts_allowed to control whether SUID is allowed or not; it exists on both machines, as an empty file with permissions root:root:000. But there may be some other difference that is crucial lurking here - I have asked the resident ClearCase Guru about this. However, it looks as though the difference is more likely in the configuration on the two machines than it is some version-specific change in functionality. Both versions superficially support the nosuid option, even though neither is self-evidently invoking that option - except that the 7.1.1.1 version is managing to invoke it where the 7.0.1 version is not.
It would be interesting to know:
if both kinds of views are snapshot or dynamic views. I suppose dynamic, with an issue related to MVFS.
what 'cleartool lsview -l -full -pro -cview' returns in both cases (when executed within each view, one where SUID works, one where it doesn't)
if the local path within each view is the same when trying the SUID bit (local path being the path within the view as in </path/toView>/vobs/MyVob/.../path/to/a/directory)
And mainly, do you have an exact error message, like in this thread:
We see that VOBs are mounted with different options on Linux and SunOS, especially, Linux adds a "nosuid" mount option, while on SunOS "setuid" is added.
This causes us trouble during distributed builds on the Linux machines, because the remote machine(s) gets an "Operation not permitted" error when trying to execute a suid root binary from one of the VOBs
See the cleartool mount options:
UNIX and Linux: nodev, nosuid, suid.
See also "Setting the sticky bit using the cleartool protect command"
Use the following syntax to properly set the "sticky bit" using the cleartool protect command:
cleartool protect -chmod u=rxs <file>
