For some reason one of my directories has started producing
executables. By this I mean that new files created in that directory
come out a+x (but not, for example, in the parent directory):
$ ls -ld .
drwxrwsr-x 2 me me 45 Dec 5 10:22 ./
drwxrwsr-x 10 me me 13 Dec 5 10:22 ../
$ rm -f test
$ touch test
$ ls -l test
-rwxrwxr-x 1 me me 0 Dec 5 10:25 test*
$ cd ..
$ rm -f test
$ touch test
$ ls -l test
-rw-rw-r--+ 1 me me 0 Dec 5 10:26 test
Also, notice the + at the end of the second permissions line; is it significant?
I know it cannot be a umask thing, but for the record it's set to 0002.
How can I turn off this behavior?
EDIT:
In response to an answer below I ran the following (in the parent dir):
$ touch test
$ getfacl test
# file: test
# owner: me
# group: me
user::rw-
group::rw-
mask::rwx
other::r--
Why do I have this mask? Is this the right value for it? How can I change it?
The + indicates the presence of one or more ACLs on the entry; getfacl test will show you more information. The apparent executability of new files may be related to ACLs on the parent directory, but we'd have to see what they are to know for sure...
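If the directory does carry a default ACL, it can be inspected and removed with the acl tools. A minimal sketch, assuming getfacl/setfacl are installed and that you run it inside the affected directory:

```shell
# Show all ACL entries on the directory, including any
# "default:" entries that newly created files inherit:
getfacl .

# Remove only the default (inherited) ACL entries:
setfacl -k .

# Or remove every extended ACL entry, leaving just the classic
# mode bits (this also makes the '+' in ls -l disappear):
setfacl -b .
```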
I am trying to create a shared library using gcc and make. I have the following section in the Makefile to compile the shared library object:
# The library build step.
lib : $(DHLLOBJS)
$(CC) $(XCFLAGS) $(SCFLAGS)$(SLIB).1 $(INCFLAGS) -o $(LIB_DIR)/$(SLIB).1.0 \
$(DLOPATHS) $(LNKFLAGS)
ln -sf $(LIB_DIR)/$(SLIB).1.0 $(LIB_DIR)/$(SLIB).1
ln -sf $(LIB_DIR)/$(SLIB).1.0 $(LIB_DIR)/$(SLIB)
The above doesn't produce any compilation or file-system errors, but the resulting symlinks are dangling, as a chmod command shows:
$ sudo chmod 0755 ../lib/*
chmod: cannot operate on dangling symlink '../lib/libdhlim.so'
chmod: cannot operate on dangling symlink '../lib/libdhlim.so.1'
and the ls output below shows the two symlink entries in red:
$ ls -la lib/
total 29
drwxrwxrwx 1 root root 376 Jul 5 21:13 .
drwxrwxrwx 1 root root 4096 Jul 5 21:13 ..
lrwxrwxrwx 1 root root 21 Jul 5 21:13 libdhlim.so -> ./lib/libdhlim.so.1.0
lrwxrwxrwx 1 root root 21 Jul 5 21:13 libdhlim.so.1 -> ./lib/libdhlim.so.1.0
-rwxrwxrwx 1 root root 23792 Jul 5 21:13 libdhlim.so.1.0
When I run the same set of commands manually, they work fine. Is there something I am doing wrong here?
The problem is that you use relative paths as link targets: a symlink's target string is resolved relative to the link's own location, not the directory where you ran ln. Create the links with ln's -r option instead, which rewrites the target accordingly.
Try these as the last two lines:
ln -sfr $(LIB_DIR)/$(SLIB).1.0 $(LIB_DIR)/$(SLIB).1
ln -sfr $(LIB_DIR)/$(SLIB).1.0 $(LIB_DIR)/$(SLIB)
-r, --relative
with -s, create links relative to link location
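A quick illustration of the difference, using throwaway paths:

```shell
mkdir -p demo/lib
touch demo/lib/libfoo.so.1.0

# Without -r, the target string is stored verbatim and resolved
# relative to the link's own directory, so this link dangles
# (it points at demo/lib/demo/lib/libfoo.so.1.0):
ln -sf demo/lib/libfoo.so.1.0 demo/lib/libfoo.so
readlink demo/lib/libfoo.so

# With -r, ln rewrites the target relative to the link location,
# so the link resolves:
ln -sfr demo/lib/libfoo.so.1.0 demo/lib/libfoo.so
readlink demo/lib/libfoo.so
```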
I have three directories:
/home/Desktop/1
/home/Desktop/2
/home/Desktop/3
In directories 1 and 2 are executable C programs, which can be run in the terminal as ./tst1 or ./tst2.
In directory 3 I have a bash script, which executes a compiled C program tst3 from the same directory.
I want to execute the C programs from directories 1 and 2 using my bash script in directory 3, like this:
#!/bin/bash
sudo ./tst3
sleep 1
sudo ./tst1 # from directory 1
sleep 2
sudo ./tst2 # from directory 2
Any ideas?
You have multiple options, including at least:
Set PATH to include the directories where your commands are found:
#!/bin/bash
export PATH="$PATH:/home/Desktop/1:/home/Desktop/2:/home/Desktop/3"
sudo tst3 # from directory 3
sleep 1
sudo tst1 # from directory 1
sleep 2
sudo tst2 # from directory 2
Use absolute paths to the commands:
#!/bin/bash
sudo /home/Desktop/3/tst3 # from directory 3
sleep 1
sudo /home/Desktop/1/tst1 # from directory 1
sleep 2
sudo /home/Desktop/2/tst2 # from directory 2
Use relative paths to the commands:
#!/bin/bash
sudo ../3/tst3 # from directory 3
sleep 1
sudo ../1/tst1 # from directory 1
sleep 2
sudo ../2/tst2 # from directory 2
These treat the directories symmetrically. An alternative is to place the commands in a directory already on your PATH (like $HOME/bin, perhaps), and then run them without any path. This is what I'd normally do: ensure the commands to be run are in a directory on my PATH.
If you are simply trying to locate the scripts:
#!/bin/bash
base_dir="$( dirname "$( readlink -e "$0" )" )"/..
sudo "$base_dir/3/tst3"
sleep 1
sudo "$base_dir/1/tst1"
sleep 2
sudo "$base_dir/2/tst2"
or
#!/bin/bash
cd "$( dirname "$( readlink -e "$0" )" )"/..
sudo 3/tst3
sleep 1
sudo 1/tst1
sleep 2
sudo 2/tst2
If you want the CWD to be changed to the directory of each executable before executing it:
#!/bin/bash
cd "$( dirname "$( readlink -e "$0" )" )"
sudo ./tst3
cd ../1
sleep 1
sudo ./tst1
cd ../2
sleep 2
sudo ./tst2
These scripts will work properly even when launched from a directory other than the one they are found in. They will even work if launched via a symlink!
I am using Ubuntu 16.10.
I was following along with a book, doing something like this:
#include <stdio.h>
#include <unistd.h>   /* getuid(), geteuid() */

int main(void) {
    printf("Real UID:\t%d\n", getuid());
    printf("Effective UID:\t%d\n", geteuid());
    return 0;
}
And to run this program with root privileges without using the sudo command, after compiling with gcc, I changed the owner and group to root and set the setuid bit:
$ gcc -o test test.c
$ sudo chown root:root ./test
$ chmod u+s ./test
$ ls -l
-rwsr-xr-x 1 root root 8512 Mar 9 test
Now, this is what I got when I executed the file. My UID is 1000.
$ ./test
Real UID: 1000
Effective UID: 1000
The book I was reading said the result should be like this:
$ ./test
Real UID: 1000
Effective UID: 0
The UID for root is 0, right? Does this mean that I am running an -rwsr-xr-x file, owned by root, with my own user privilege? I don't understand.
Is your book a little on the older side? It seems like modern *nix variants widely ignore the sticky bit on executable files:
[...] the Linux kernel ignores the sticky bit on files. [...] When the sticky bit is set on a directory, files in that directory may only be unlinked or renamed by root or the directory owner or the file owner.[4]
https://en.wikipedia.org/wiki/Sticky_bit
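For reference, the setuid bit (u+s) that the question sets and the sticky bit (+t) that the quote describes are distinct mode bits, shown in different positions of the ls -l string. A minimal sketch on a throwaway file:

```shell
f=$(mktemp)
chmod 0755 "$f"

# setuid: an 's' replaces the owner's 'x' column (mode 4755):
chmod u+s "$f"
ls -l "$f" | cut -c1-10    # -rwsr-xr-x
chmod u-s "$f"

# sticky: a 't' replaces the others' 'x' column (mode 1755):
chmod +t "$f"
ls -l "$f" | cut -c1-10    # -rwxr-xr-t

rm -f "$f"
```

A common reason a setuid binary still reports an effective UID of 1000 is that the filesystem it sits on is mounted with the nosuid option, so it's worth checking the mount flags too.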
I want to set up file permissions for files I add to a docker image. I have this simple Dockerfile:
FROM ubuntu:utopic
WORKDIR /app
RUN groupadd -g 1000 baz && useradd -u 1000 -g baz baz -d /app -s /bin/false
RUN chown baz:baz /app && chmod g+s /app
# want this to have group baz
ADD foo /app/
Building this with docker build -t abc . (where there is a foo file in .) creates an image. However, the permissions on /app/foo inside are not what I want:
docker run abc ls -la
total 12
drwxr-xr-x 2 baz baz 4096 Sep 2 23:21 .
drwxr-xr-x 37 root root 4096 Sep 3 07:27 ..
-rw-rw-r-- 1 root root 419 Sep 2 21:43 foo
Note that file foo doesn't belong to group baz despite the setgid bit being set on the /app dir. I could use RUN chown -R baz:baz /app after adding the files, but that causes a copy of the file to be created in a new layer (note the sizes of the two layers below):
docker history abc | head -n 3
IMAGE CREATED CREATED BY SIZE COMMENT
b95a3d798873 About a minute ago /bin/sh -c chown -R baz:baz /app 419 B
7e007196c116 About a minute ago /bin/sh -c #(nop) ADD file:2b91d9890a9c392390 419 B
Is there some way to get around this and have the ownership of files added be what I want?
Instead of adding foo directly, you could pack it as a tar archive with the owner and group forced to a specific UID/GID. Here is a post on how to do it:
https://unix.stackexchange.com/questions/61004/force-the-owner-and-group-for-the-contents-of-a-tar-file
After that you can just untar it within your Docker image (ADD untars automagically). You should see the permissions preserved without an additional layer.
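A sketch of the approach with GNU tar, assuming foo sits next to the Dockerfile and that UID/GID 1000 matches the baz user created above:

```shell
# Bake ownership 1000:1000 into the archive metadata:
tar --owner=1000 --group=1000 -cf app.tar foo

# Verify the stored ownership (numeric, so names aren't looked up):
tar --numeric-owner -tvf app.tar
```

In the Dockerfile, ADD app.tar /app/ then extracts the archive with those owners intact, without the extra chown layer.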
I executed the following commands on a Unix machine:
blr59-adm1:~ # ls -l / | grep back
d-w--w--w- 2 root root 4096 Jun 9 13:31 backupmnt
blr59-adm1:~ # [ -x /backupmnt ]
blr59-adm1:~ # echo $?
0
blr59-adm1:~ #
I don't understand why echo $? prints 0 even though the directory does not have execute permission.
My shell script is failing because of this behavior.
Please correct me if I am doing something wrong.
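For what it's worth, the # prompt suggests these commands were run as root, and root bypasses directory permission checks, so [ -x ] on a directory succeeds regardless of its mode bits. A minimal sketch showing the behavior depends on who runs the test:

```shell
mkdir -p /tmp/noexec && chmod 0222 /tmp/noexec

if [ "$(id -u)" -eq 0 ]; then
    # root: -x succeeds even though no execute bit is set
    [ -x /tmp/noexec ] && echo "root: -x succeeds"
else
    # unprivileged user: -x fails as expected
    [ -x /tmp/noexec ] || echo "non-root: -x fails"
fi

rmdir /tmp/noexec
```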