How to disable Linux address space randomization via Dockerfile?

I'm trying to disable randomization via Dockerfile:
RUN sudo echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
but I get
Step 9 : RUN sudo echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
---> Running in 0f69e9ac1b6e
tee: /proc/sys/kernel/randomize_va_space: Read-only file system
Is there any way to work around this read-only file system? If it's something the kernel does, that means it's outside my container's scope; in that case, how am I supposed to work with gdb inside my container? Please note that my goal is to work with gdb in a container because I'm experimenting with it, so I want a container that encapsulates gcc and gdb, which I'll use for experimentation.

Run this on the host, not in Docker:
sudo echo 0 | sudo tee /proc/sys/kernel/randomize_va_space

Docker has syntax for modifying some of the sysctls (not via dockerfile though) and kernel.randomize_va_space does not seem to be one of them.
Since you've said you're interested in running gcc/gdb, you could disable ASLR only for those binaries with:
setarch `uname -m` -R /path/to/gcc/gdb
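For example, a usage sketch (the binary path is illustrative):
# disable ASLR only for this gdb invocation and the processes it spawns
setarch $(uname -m) -R gdb ./myprogram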
Also see other answers in this question.

Sounds like you are building a container for development on your own computer. Unlike a production environment, you can (and probably should) opt for a privileged container. In a privileged container, sysfs is mounted read-write, so you can control kernel parameters as you would on the host. Here is an example with an Amazon Linux container I use for development on my Debian desktop, which shows the difference:
$ docker run --rm -it amazonlinux
bash-4.2# grep ^sysfs /etc/mtab
sysfs /sys sysfs ro,nosuid,nodev,noexec,relatime 0 0
bash-4.2# exit
$ docker run --rm -it --privileged amazonlinux
bash-4.2# grep ^sysfs /etc/mtab
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
bash-4.2# exit
$
Notice the ro mount in the unprivileged case and rw in the privileged one.
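As a hedged illustration, with --privileged you can write this sysctl from inside the container; note that kernel.randomize_va_space is not namespaced, so this changes the setting for the whole host, not just the container:
docker run --rm -it --privileged amazonlinux \
  sh -c 'echo 0 > /proc/sys/kernel/randomize_va_space && cat /proc/sys/kernel/randomize_va_space'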
Note that the Dockerfile command
RUN sudo echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
makes no sense: it will be executed (a) at container build time and (b) on the machine where you build the image. You want (a) to happen at the container's run time and (b) on the machine where you run the container. If you need to change sysctls on image start, write a script that does all the setup and then drops you into the interactive shell, e.g. place a script in /root and set it as the ENTRYPOINT:
#!/bin/sh
sudo sysctl kernel.randomize_va_space=0
exec /bin/bash -l
(This assumes you mount a host working directory into /home/jas; that's a good practice, as bash will read your startup files, etc.)
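A minimal sketch of the Dockerfile wiring, assuming the script above is saved as entrypoint.sh next to the Dockerfile (the container still needs to be started with --privileged for the sysctl call to succeed):
# copy the setup script into the image and make it the entry point
COPY entrypoint.sh /root/entrypoint.sh
RUN chmod +x /root/entrypoint.sh
ENTRYPOINT ["/root/entrypoint.sh"]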
You need to make sure you have the same UID and GID inside the container, and that you can do sudo. How you enable sudo depends on the distro: in Debian, members of the sudo group have unrestricted sudo access, while on Amazon Linux (and, IIRC, other RedHat-like systems) it's the wheel group. Usually this boils down to an unwieldy run command that you'd rather script than type, like:
docker run -it -v $HOME:$HOME -w $HOME -u $(id -u):$(id -g) --group-add wheel amazonlinux-devenv
Since your primary UID and GID match the host's, files in mounted host directories won't end up owned by root. An alternative is to create a bona fide user for yourself during image build (i.e., in the Dockerfile), but I find this more error-prone, because I can end up running this devenv image somewhere my username has a different UID, and that will cause problems. The use of id(1) in the startup command guarantees a UID match.

Related

How can I debug QEMU with one terminal?

I am working on a moon rover for Carnegie Mellon University which will be launching next year. Specifically, I am working on a flight computer called the ISIS OBC (On Board Computer) and I am trying to find out how to first run QEMU in a terminal in the background, and then run GDB to connect to the QEMU instance I just backgrounded. I have tried running QEMU in the background with & as well as using the flag -daemonize but this causes QEMU's GDB server to not work at all.
The overarching goal is to be able to debug our flight software in GDB in one terminal window so that I can run it from inside a Docker container mounted on the repository's root. It takes a bit of setup to be able to debug our code, with a couple of gotchas like incompatibility with newer versions of GCC, so being able to run the code and debug it from inside a Docker container (which has all our other development dependencies installed too) is a must.
My current solution was to just run QEMU in another gnome-terminal initialized in the startup script, completely outside of the Docker container, but this will not work in Docker for obvious reasons. Here is that code in case the additional context is helpful:
#!/bin/bash
#The goal of the below code is to get the stdout from QEMU piped into GDB.
#Unfortunately it appears that QEMU must be started as the FG in its own window so that it will
#start its GDB server, so an additional window is required.
my_tty=$(tty)
gnome-terminal -- bash -c './../obc-emulation-resources/obc-qemu/iobc-loader -f sdram build/app.isis-obc-rtos.bin -s sdram -o pmc-mclk -- -serial stdio -monitor none -s -S > /tmp/qemu-gdb; $SHELL' --name="QEMU-iOBC" --title="QEMU-iOBC" -p
tail -f /tmp/qemu-gdb > $my_tty&
./third_party/gcc-arm-none-eabi-10.3-2021.07/bin/arm-none-eabi-gdb -ex='target remote localhost:1234' -ex='symbol-file build/isis-obc-rtos.elf'
# Kill any leftover qemu debugging sessions
kill $(ps aux | grep '[i]obc-loader' | awk '{print $2}')
# Delete intermediate file
rm -f /tmp/qemu-gdb
# Gets rid of any extra text that may occur
echo ""
clear
I would much prefer to run something like this to achieve my goal:
./../obc-emulation-resources/obc-qemu/iobc-loader -f sdram build/app.isis-obc-rtos.bin -s sdram -o pmc-mclk -- -serial stdio -monitor none -s -S > /tmp/qemu-gdb
rather than what I am running now:
gnome-terminal -- bash -c './../obc-emulation-resources/obc-qemu/iobc-loader -f sdram build/app.isis-obc-rtos.bin -s sdram -o pmc-mclk -- -serial stdio -monitor none -s -S > /tmp/qemu-gdb; $SHELL' --name="QEMU-iOBC" --title="QEMU-iOBC" -p
"iobc-loader" is a wrapper used to run the QEMU command by the way."app.isis-obc-rtos.bin" is of course the binary I am trying to run and "isis-obc-rtos.elf" contains the symbols used to debug it. Apologies if the answer is obvious, I am a student!
You can try using a terminal multiplexer like screen or tmux, which allows you to run each command in the foreground in a separate virtual terminal.
You can also create panes, for example with tmux press Ctrl+b " to split the screen horizontally or Ctrl+b % to split it vertically, then Ctrl+b o to cycle between them.
Using tmux is definitely the easiest approach, especially with its built in CLI support.
You could write a script similar to this one:
tmux start-server
tmux new-session -d -s debug-session -n isis "<cmd1>"
tmux split-window -t debug-session "<cmd2>"
Where cmd1 is your QEMU execution script, and cmd2 is another script that runs the Docker container you want to use for debugging.
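For instance, a hedged sketch adapted to the QEMU + GDB case from the question (the session name is illustrative; the paths are taken from your script):
#!/bin/bash
# start QEMU (with its GDB stub listening on :1234) in a detached tmux session
tmux new-session -d -s qemu-debug './../obc-emulation-resources/obc-qemu/iobc-loader -f sdram build/app.isis-obc-rtos.bin -s sdram -o pmc-mclk -- -serial stdio -monitor none -s -S'
# open a second pane running GDB against it
tmux split-window -t qemu-debug './third_party/gcc-arm-none-eabi-10.3-2021.07/bin/arm-none-eabi-gdb -ex "target remote localhost:1234" -ex "symbol-file build/isis-obc-rtos.elf"'
# attach so both panes are visible in a single terminal
tmux attach -t qemu-debug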

I run npm start and it shows this error; how do I solve this problem? [duplicate]

I have set up a new blank React Native app.
After installing a few node modules I got this error.
Running application on PGN518.
internal/fs/watchers.js:173
throw error;
^
Error: ENOSPC: System limit for number of file watchers reached, watch '/home/badis/Desktop/react-native/albums/node_modules/.staging'
at FSWatcher.start (internal/fs/watchers.js:165:26)
at Object.watch (fs.js:1253:11)
at NodeWatcher.watchdir (/home/badis/Desktop/react-native/albums/node_modules/sane/src/node_watcher.js:175:20)
at NodeWatcher.<anonymous> (/home/badis/Desktop/react-native/albums/node_modules/sane/src/node_watcher.js:310:16)
at /home/badis/Desktop/react-native/albums/node_modules/graceful-fs/polyfills.js:285:20
at FSReqWrap.oncomplete (fs.js:154:5)
I know it's related to there not being enough space for watchman to watch all the file changes.
I want to know what's the best course of action to take here ?
Should I ignore node_modules folder by adding it to .watchmanconfig ?
Linux uses the inotify package to observe filesystem events on individual files or directories.
Since React / Angular hot-reloads and recompiles files on save, it needs to keep track of all of the project's files. Increasing the inotify watch limit should hide the warning messages.
You can raise the limit like this:
# insert the new value into the system config
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
# check that the new value was applied
cat /proc/sys/fs/inotify/max_user_watches
# the line this appends to /etc/sysctl.conf (a config entry, not a command)
fs.inotify.max_user_watches=524288
This error means that the number of files monitored by the system has reached the limit.
The result: the command you executed fails or throws a warning (such as executing react-native start in VS Code).
Solution:
Increase the number of files the system can monitor.
Ubuntu
sudo gedit /etc/sysctl.conf
Add a line at the bottom
fs.inotify.max_user_watches=524288
Then save and exit, and apply the change with:
sudo sysctl -p
Then it is solved!
You can fix it by increasing the number of inotify watchers.
If you are not interested in the technical details and only want to get Listen to work:
If you are running Debian, RedHat, or another similar Linux distribution, run the following in a terminal:
$ echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
If you are running ArchLinux, run the following command instead
$ echo fs.inotify.max_user_watches=524288 | sudo tee /etc/sysctl.d/40-max-user-watches.conf && sudo sysctl --system
Then paste it into your terminal and press Enter to run it.
The Technical Details
Listen uses inotify by default on Linux to monitor directories for changes. It's not uncommon to encounter a system limit on the number of files you can monitor. For example, Ubuntu Lucid's (64bit) inotify limit is set to 8192.
You can get your current inotify file watch limit by executing:
$ cat /proc/sys/fs/inotify/max_user_watches
When this limit is not enough to monitor all files inside a directory, the limit must be increased for Listen to work properly.
You can set a new limit temporarily with:
$ sudo sysctl fs.inotify.max_user_watches=524288
$ sudo sysctl -p
If you want to make your limit permanent, use:
$ echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p
You may also need to pay attention to the values of max_queued_events and max_user_instances if Listen keeps complaining.
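For reference, a quick way to inspect all three limits at once (the values shown will vary per system):
# print the current inotify limits
sysctl fs.inotify.max_user_watches fs.inotify.max_queued_events fs.inotify.max_user_instances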
From the official document:
"Visual Studio Code is unable to watch for file changes in this large workspace" (error ENOSPC)
When you see this notification, it indicates that the VS Code file watcher is running out of handles because the workspace is large and contains many files. The current limit can be viewed by running:
cat /proc/sys/fs/inotify/max_user_watches
The limit can be increased to its maximum by editing
/etc/sysctl.conf
and adding this line to the end of the file:
fs.inotify.max_user_watches=524288
The new value can then be loaded in by running
sudo sysctl -p
Note that Arch Linux works a little differently; see Increasing the amount of inotify watchers for details.
While 524,288 is the maximum number of files that can be watched, if you're in an environment that is particularly memory constrained, you may wish to lower the number. Each file watch takes up 540 bytes (32-bit) or ~1kB (64-bit), so assuming that all 524,288 watches are consumed, that results in an upper bound of around 256MB (32-bit) or 512MB (64-bit).
Another option
is to exclude specific workspace directories from the VS Code file watcher with the files.watcherExclude setting. The default for files.watcherExclude excludes node_modules and some folders under .git, but you can add other directories that you don't want VS Code to track.
"files.watcherExclude": {
"**/.git/objects/**": true,
"**/.git/subtree-cache/**": true,
"**/node_modules/*/**": true
}
Delete the React node_modules folder:
rm -r node_modules
then reinstall with yarn or npm install
and start again with yarn start or npm start.
If the error occurs again, repeat these steps.
Firstly, you can run it with root privileges every time:
sudo npm start
Or you can delete the node_modules folder and use npm install to install again.
Or you can apply the permanent solution:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
It happened to me with a Node app I was developing on a Debian-based distro. At first, a simple restart solved it, but then it happened again with another app.
Since it's related to the number of watchers inotify uses to monitor files and look for changes in a directory, you have to set a higher limit:
I was able to solve it from the answer posted here
(thanks to him!)
So, I ran:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
Read more about what’s happening at https://github.com/guard/listen/wiki/Increasing-the-amount-of-inotify-watchers#the-technical-details
Hope it helps!
Remember that this question is a duplicate: see this answer to the original question.
A simple fix that solved my problem was:
npm cache clear
though best practice today is
npm cache verify
npm, or a process controlled by it, is watching too many files. Updating max_user_watches on the build node can fix it for good. On Debian, run the following in a terminal:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
If you want to know how to increase the number of inotify watchers, just follow the link.
I use an Ubuntu 20 server, and I added the line below to the file /etc/sysctl.conf:
fs.inotify.max_user_watches=524288
Then I saved the file and ran sudo sysctl -p.
After that, everything works fine!
I solved this issue by using sudo, i.e.
sudo yarn start
or
sudo npm start
Using sudo to solve this issue forces the run to work without applying any modifications to system settings. Solving this kind of issue with sudo is never recommended, although it's a choice that has to be made by you; I hope you choose wisely.
Root cause
Most answers above talk about raising the limit, not about removing the root cause, which is typically just a matter of redundant watches, usually on files in node_modules.
Webpack
The answer is in the webpack 5 docs:
watchOptions: { ignored: /node_modules/ }
Simply read here: https://webpack.js.org/configuration/watch/#watchoptionsignored
The docs even mention this as a "tip", quote:
If watching does not work for you, try out this option. This may help
issues with NFS and machines in VirtualBox, WSL, Containers, or
Docker. In those cases, use a polling interval and ignore large
folders like /node_modules/ to keep CPU usage minimal.
VS Code
VS Code or any code editor creates lots of file watches too. By default many of them are completely redundant. Read more about it here: https://code.visualstudio.com/docs/setup/linux#_visual-studio-code-is-unable-to-watch-for-file-changes-in-this-large-workspace-error-enospc
Generally we don't need to increase the count of file watchers; in that case we would just have more watchers.
We need to remove the redundant watchers that have become zombies.
The issue is that we have many file watchers filling up our memory; we just need to remove these watchers (in the case of node):
killall node
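If you want to check which processes are actually holding inotify instances before killing anything, a rough sketch (this counts inotify instances per PID, not individual watches):
# each open inotify descriptor appears as a symlink to anon_inode:inotify
find /proc/*/fd -lname anon_inode:inotify 2>/dev/null \
  | cut -d/ -f3 | sort | uniq -c | sort -nr | head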
React showed me the same error; I fixed it this way. Hopefully it works in React Native too:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
Now you can run npm start again.
npm start
Using the sysctl -p approach after setting fs.inotify.max_user_watches did not work for me (by the way, this setting was already set to a high value, likely from me trying to fix this issue a while back, using the commonly recommended workarounds above).
The best solution to the problem I found is here; below I share the steps I performed to solve it. In my case the issue was spotted while running Visual Studio Code, but solving it should be the same in other instances, like yours:
Use this script to identify which processes are requiring the most file watchers in your session.
You can then query the current limits with sysctl fs.inotify.{max_queued_events,max_user_instances,max_user_watches} and set max_user_watches to a different value (a lower value may do it):
sudo sysctl -w fs.inotify.max_user_watches=16384
Or you can simply kill the process you found in (1) that consumes the most file watchers (in my case, baloo_file)
The above, however, will likely need to be done again after restarting the system: the process we identified as responsible for taking up most of the file watchers (in my case, baloo_file) will do the same again on the next boot. So, to permanently fix the issue, either disable or remove the service/package. I disabled it: balooctl disable.
Now run sudo code --user-data-dir and it should open VS Code with admin privileges this time. (By the way, when it does not, run sudo code --user-data-dir --verbose to see what the problem is; that's how I figured out it had to do with the file watchers limit.)
Update:
You may configure VS Code file watcher exclusion patterns as described here. This may prove to be the ultimate solution; I am just not sure you will always know beforehand which files you are NOT interested in watching.
Easy Solution
I found that a previous solution works well in my case: I removed node_modules and cleared the yarn / npm cache.
Long Tail Solution
If you want a long-tail solution, e.g. if you often get caught by this error, you can increase the number of allowed watchers (depending on your available memory).
To figure out the current number of watchers in use, instead of only guessing, you can use this handy bash script:
https://github.com/fatso83/dotfiles/blob/master/utils/scripts/inotify-consumers
I suggest temporarily setting max_user_watches to a high value:
sudo sysctl fs.inotify.max_user_watches=95524288
and then running the script.
How to calculate how much you can use
Each watcher needs
540 bytes (32-bit system), or
1 kB (double that, on a 64-bit OS).
So if you are willing to spend 512 MB (on 64-bit), set something like 524288 as the value.
The other way around: take the amount of memory in MB you want to budget and multiply it by 1024.
Examples:
512 * 1024 = 524288
1024 * 1024 = 1048576
The script shows you the exact number of inotify watchers currently in use, so you get a better idea of how much you should increase the limit.
If you are running your project in Docker, you should run echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf and all the other commands on the host machine, since the container inherits that setting automatically (doing it directly inside the container will not work).
Late answer, and there are many good answers already.
In case you want a simple script to check if the maximum file watches is big enough, and if not, increase the limit, here it is:
#!/usr/bin/env bash
let current_watches=`sysctl -n fs.inotify.max_user_watches`
if (( current_watches < 80000 ))
then
echo "Current max_user_watches ${current_watches} is less than 80000."
else
echo "Current max_user_watches ${current_watches} is already equal to or greater than 80000."
exit 0
fi
if sudo sysctl -w fs.inotify.max_user_watches=80000 && sudo sysctl -p && echo fs.inotify.max_user_watches=80000 | sudo tee /etc/sysctl.d/10-user-watches.conf
then
echo "max_user_watches changed to 80000."
else
echo "Could not change max_user_watches."
exit 1
fi
The script increases the limit to 80000, but feel free to set a limit that you want.
As already pointed out by @snishalaka, you can increase the number of inotify watchers.
However, I think the default number is high enough and is only reached when processes are not cleaned up properly. Hence, I simply restarted my computer, as proposed in a related GitHub issue, and the error message was gone.
Another simple and good solution is to add this to your Jest configuration:
watchPathIgnorePatterns: ["<rootDir>/node_modules/", "<rootDir>/.git/"]
This ignores the specified directories to reduce the number of files being scanned.
In my case, with Angular 13, I added this to tsconfig.spec.json:
"exclude": [
"node_modules/",
".git/"
]
Thanks @Antimatter, that gave me the trick.
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
Run this command in the project terminal after running npm run dev.
Please refer to this link [1]. Visual Studio Code has a brief explanation for this error message. I also encountered the same error; adding the parameter below to the relevant file fixes the issue.
fs.inotify.max_user_watches=524288
[1] https://code.visualstudio.com/docs/setup/linux#_visual-studio-code-is-unable-to-watch-for-file-changes-in-this-large-workspace-error-enospc
While almost everyone suggests increasing the number of watchers, I don't agree that it is a solution.
In my case I wanted to disable the watcher completely, because of the tests running on CI using the vue-cli plugin, which starts webpack-dev-server for each test.
The problem was: when a few builds run simultaneously, they fail because the watcher limit is reached.
First things first, I tried adding the following to vue.config.js:
module.exports = {
devServer: {
hot: false,
liveReload: false
}
}
Ref.: https://github.com/vuejs/vue-cli/issues/4368#issuecomment-515532738
And it worked locally but not on CI (and apparently it stopped working locally the next day as well, for some ambiguous reason).
After investigating the webpack-dev-server documentation I found this:
https://webpack.js.org/configuration/watch/#watch
And then this:
https://github.com/vuejs/vue-cli/issues/2725#issuecomment-646777425
Long story short, this is what eventually solved the problem:
vue.config.js
module.exports = {
publicPath: process.env.PUBLIC_PATH,
devServer: {
watchOptions: {
ignored: process.env.CI ? "./": null,
},
}
}
Vue version 2.6.14
If you are working with the VS Code editor (or any editor), this error can occur due to the large number of files in the project. node_modules and build are not required for editing, so remove them from the watch list.
You have to filter unnecessary folders out of the file sidebar:
Go to Code > Preferences > Settings
In the search settings, search for the keyword "files:exclude"
Add the patterns:
**/node_modules
**/build
That's it
Try this; I was facing it for a very long time, but in the end it was solved by this:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
The most important step after that is to restart your system.
Two fixes if you've already added fs.inotify.max_user_watches=524288:
Reboot the machine; things will work again.
Rename the folder that is causing the issue (for me, node_modules) to an arbitrary name (e.g. node_modulesa) and then rename it right back. This removes the watches Linux had put on that folder, allowing you to code as normal again.
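A one-liner sketch of that rename trick, run from the project root:
# renaming away and back drops any stale watches on the folder
mv node_modules node_modules.tmp && mv node_modules.tmp node_modules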
I encountered this issue on a Linux Mint distro. It appeared to happen after I added a large number of folders and subfolders/files to the /public folder in my app.
I applied this fix and it worked well...
$ echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
change directory into the /etc folder:
cd /etc
then run this:
sudo sysctl -p
You may have to close your terminal and run npm start again to get it to work.
If this fails, I recommend installing react-scripts globally and running your application directly with that:
$ npm i -g --save react-scripts
then, instead of npm start, run react-scripts start to run your application.
I tried increasing the number as suggested, but it didn't work.
I saw that when I logged in to my VM, it displayed "restart required".
I rebooted the VM and it worked:
sudo reboot
It is easy to fix this:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
and run your project.
If fs.inotify.max_user_watches=524288 is already in your /etc/sysctl.conf,
run the same command (echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf) and then run your project again.
For vs code, see detailed instructions here:
https://code.visualstudio.com/docs/setup/linux#_visual-studio-code-is-unable-to-watch-for-file-changes-in-this-large-workspace-error-enospc

Change ownership of dir to user when running program in sudo

I have a program that I need to run with sudo. It creates a directory using mkdir, but this directory has its owner and group set to root. That makes sense since I am using sudo. I would like to change the owner and group to the normal user, but I'm not sure how to do that. I thought running system("chown $USER:$USER /directory/") would work, but I suppose since I am under sudo it will just set them to root. I was looking into using chown, but I wasn't sure how I was supposed to get the owner and group IDs. It would also be good for this to be portable, so I don't want to just hardcode a user/group ID.
You're mostly on the right path already, chown is the command you're looking for here.
You can string the two commands to make and then own the directory together using a semicolon.
sudo mkdir test ; sudo chown $USER:$USER test
I've tested this on Ubuntu 18.04 and Ubuntu 20.04, as that's your tag. The $USER variable resolves to the user that you originally logged in as, not root, as long as you use it at the beginning of your command like the above (the shell expands it before sudo runs). Note that you need to call sudo again for the chown portion; the ; ends the sudo elevation.
The coreutils package includes a useful little command, install, which you can use instead of mkdir in a sudo context. For example,
sudo install -o USER -g GROUP -m MODE -d DIRECTORY
where USER is the user to own the directory DIRECTORY, GROUP is the group to own the directory, and MODE is the access mode (like chmod) to the directory.
Because system(COMMAND) and popen(COMMAND,...) actually run /bin/sh with -c and COMMAND as parameters, you can use the form
sudo install -o $(id -u) -g $(id -g) -m u=rwx,g=r-x,o=x DIRECTORY
where the shell replaces the user and group names (or rather, numbers, since I'm not using the -n option) before executing sudo. (The id command is also included in coreutils, so you can definitely expect both install and id to be available on all full-blown Linux machines; and even on most embedded systems. It is what all package managers et cetera use to install files, you see.)
Above, I used the mode u=rwx,g=r-x,o=x (equivalently, 0751) as an example; it sets the mode to rwxr-x--x, i.e. grants access to everybody, with owner user and group being able to list the directory contents, and only the owner user being able to create new files or directories in it.
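If you need to do this from inside a program that is already running under sudo (as in the question), note that sudo exports the invoking user's IDs in the SUDO_UID and SUDO_GID environment variables; a sketch of a command such a program could pass to system():
# $USER may be root here, but sudo records the original caller
mkdir -p /directory && chown "${SUDO_UID}:${SUDO_GID}" /directory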

Mounting a GCS bucket on AppEngine Flexible Environment

I am trying to mount a GCS bucket on AppEngine Flexible Environment app using gcsfuse.
My Dockerfile includes the following:
# gscfuse setup
RUN echo "deb http://packages.cloud.google.com/apt cloud-sdk-jessie main" | tee /etc/apt/sources.list.d/google-cloud.sdk.list
RUN echo "deb http://packages.cloud.google.com/apt gcsfuse-jessie main" | tee /etc/apt/sources.list.d/gcsfuse.list
RUN wget -qO- https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
RUN apt-get update && apt-get install -y --no-install-recommends google-cloud-sdk gcsfuse strace
RUN gcsfuse --implicit-dirs my_bucket my_dir
I took most of this from here. It's pretty much just the standard way to install gcsfuse, plus --no-install-recommends.
If I start an app this way, it does not mount the drive. This was not too surprising to me, since it didn't seem like a supported feature of the flexible environment.
Here is the confusing part. If I run gcloud app instances ssh "<instance>", then run container_exec gaeapp /bin/bash, then gcsfuse my_bucket my_dir works fine.
However, if I run gcloud app instances ssh "<instance>" --container gaeapp, then gcsfuse my_bucket my_dir fails with this error:
fusermount: failed to open /dev/fuse: Operation not permitted
This is the same error I get if I run gcsfuse as a subprocess in my main.py.
Based on this unresolved thread, I ran strace -f and saw the exact same problem as that user did, an EPERM issue.
[pid 59] open("/dev/fuse", O_RDWR) = -1 EPERM (Operation not permitted)
Whichever way I log into the container (or if I run a subprocess from main.py), I am the root user. If I run export, I do see different vars, so there is some difference in what's being run, but everything else looks the same to me.
Other suggestions I've seen include using the gcsfuse flags -o allow_other and -o allow_root. These did not work.
There may be a clue in the fact that if I try to run umount on a login that cannot run gcsfuse, it says "must be superuser to unmount", even though I am root.
It seems like there is probably some security setting that I do not understand. However, since I could in theory get main.py to trigger an external program to log in and run gcsfuse for me, it seems like there should be a way to get it to work without having to do that.
RUN commands create a new layer for your Dockerfile, so you're actually running that command during image creation, which the Flex build system doesn't like.
I'm not sure why shelling out in the application didn't work; you could try sudo'ing it in the Python subprocess, or possibly push it out of the application code by adding 'gcsfuse setup &&' to the ENTRYPOINT in the Dockerfile.
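A hedged sketch of that ENTRYPOINT approach: a small wrapper script (the name start.sh and the final app command are assumptions, not from the original post) mounts at run time and then hands off to the application:
#!/bin/sh
# mount the bucket when the container starts, not when the image is built
gcsfuse --implicit-dirs my_bucket my_dir
# then exec the real application process
exec python main.py
In the Dockerfile, replace the RUN gcsfuse line with:
COPY start.sh /start.sh
RUN chmod +x /start.sh
ENTRYPOINT ["/start.sh"]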

Editing .desktop file to run executable as root?

I have compiled a C program into an executable that I would now like to integrate into the applications menu in Debian 7.4 XFCE. To run the application under normal circumstances, I am required to type
sudo myprogram
Now I have created my .desktop file and placed it in /usr/share/applications
[Desktop Entry]
Type=Application
Encoding=UTF-8
Name=myprogram
Comment=configuration loader
Exec=sudo loader
Icon=/usr/share/icons/hicolor/48x48/apps/myprogram.png
Terminal=false
Categories=Development;IDE
The item is added to my applications menu as expected, and the icon shows up properly. The problem, however, is that double clicking the menu item to launch the application does nothing.
If I navigate to /usr/bin (where I have placed my executable) and type "sudo myprogram", the program launches as expected.
What can I do to fix this issue and get the program to launch from the menu? Perhaps /usr/bin is not the correct place to put it, or I have the incorrect Exec command. I greatly appreciate the help.
I ended up using (after installing gksu):
Exec=gksu myprogram
This launches a graphical sudo prompt, which is sufficient for my needs.
This is what the setuid bit in the permissions is for. It makes executables run with permissions of the file owner. This only works on actual executables, not on shell scripts!
sudo chown root myprogram
sudo chmod u+s myprogram # chown first: changing the owner clears the setuid bit
./myprogram # now runs as root
Please be careful when using this as it will always execute that program as root no matter who executes it. You can limit access by setting it to your usergroup and deny all execute.
chgrp "${USER}" myprogram # provided you have individual groups set up
chmod o-x myprogram # deny execute for others (group members can still run it)
This approach does not need additional installation of packages.
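To sanity-check the result, list the file and look for the s in the owner's execute slot (output is illustrative):
ls -l myprogram
# -rwsr-x--- 1 root yourgroup ... myprogram   <- the 's' means setuid is in effect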
Terminal=true opens a new terminal window, which runs
sudo -i to ask for the password.
Then, using sh to run the program, the terminal is closed and myprogram keeps running in the background because it has a & at the end.
[Desktop Entry]
Type=Application
Name=...
Exec=sudo -i sh -c "myprogram &"
Terminal=true
Request: Please report if it works under your OS.
Tested under:
Xubuntu
The pkexec solution from Ask Ubuntu:
Exec=pkexec env DISPLAY=$DISPLAY XAUTHORITY=$XAUTHORITY APP_COMMAND
Try adding this to the .desktop file:
Path=/path/to/myprogram
