How do I run only some Makefile commands as root?

I have an install target in my Makefile and wish to run some commands that install shared libraries (which requires root permissions) and some that install config files into $HOME/.config.
Usually I'd just tell the user to run sudo make install; however, that results in the config files being installed to /root/.config instead of the actual user's config directory.
How do I work around this issue?
Thanks a lot.

You can just change the owner and permissions of the config files, although a Makefile that installs per-user configuration files is not a good idea: ideally it would need to find out which users exist on the system and install the files for each of them.
If you use the install command, you could even do
install -v -m644 -o$(USERNAME) -g$(USERGROUP) $(FILE) $(USERHOME)/.config/$(FILE)
A better approach would be to let the program install the default config files from a system wide directory when it doesn't find them, for example
/usr/share/my-application/default-config/config.conf
The program would then search for the files in the appropriate directory and copy them to the $HOME directory of the user currently running it, assuming the files are meant to be modified by the user; otherwise, you just access them from their system-wide location.
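
One way to tackle the original question is to split the install target into a root-only part and a per-user part, so the user runs sudo only for the former. A minimal sketch, assuming GNU make; the file names libfoo.so and config.conf and the my-application directory are illustrative placeholders, not from the question:
PREFIX ?= /usr/local

install: install-libs install-config

# Needs root; run as: sudo make install-libs
install-libs:
	install -v -m755 libfoo.so $(PREFIX)/lib/

# Per-user; run as: make install-config (WITHOUT sudo,
# so that $(HOME) expands to the invoking user's home)
install-config:
	install -v -D -m644 config.conf $(HOME)/.config/my-application/config.conf
The -D flag of GNU install creates the missing leading directories of the destination; note that the recipe lines must be indented with tabs.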

Related

'mongo' is still not working on PowerShell after doing all recommended things

I installed MongoDB and tried to run it in a terminal. It just shows: 'mongo' is not recognized as an internal or external command, operable program or batch file.
I have set the path to the bin folder inside Environment Variables too. One thing I noticed is that I might be missing a file inside the bin folder, namely mongo, because I only have the mongod and mongos files there. I tried to uninstall and reinstall the program, and it was still not working.
I have no idea what it is that I'm missing. Please help out.
Finally, I have found the solution.
The mongo shell no longer ships with the server binaries. We can download it from MongoDB Shell Download.
Then we should extract the contents of the bin folder from the downloaded zip file into the bin folder of the MongoDB installation, and run mongosh instead of mongo in the terminal.
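To verify the shell is reachable afterwards, open a new terminal and run, for example (the connection string below is just the default local server; adjust as needed):
mongosh --version
mongosh "mongodb://localhost:27017"
If mongosh is still not recognized, the folder you extracted it into is most likely missing from PATH.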

SubGit installer for Windows

I am trying to migrate from SVN to Git with full history, and someone suggested that I use SubGit for this.
I downloaded the zip file and, after extracting it, found that there is a subgit.bat in the bin folder.
I don't know how to run subgit, or how to make sure it is installed on my Windows system, because it's throwing subgit: Command not found.
SubGit doesn't require any special installation; you can just unzip it and start using it. The bin/subgit.bat file is actually the SubGit start script. For convenience's sake, it may be worth adding the path of the SubGit 'bin' folder to the PATH environment variable, so that subgit can be started without typing the full path every time.
The 'Command not found' error means that Windows cannot find the 'subgit' command; it happens when there is no program called 'subgit' either in PATH or in the current directory. To resolve it, either use the full path to the 'subgit.bat' file to start the program (like c:\subgit\bin\subgit.bat) or add the SubGit bin directory path to PATH.
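For example, in a Command Prompt session, assuming SubGit was unzipped to c:\subgit (adjust to your actual location):
rem Add the SubGit bin folder to PATH for this session only
set PATH=%PATH%;c:\subgit\bin
rem Running the start script without arguments should print its usage text
subgit
To make the change permanent, add the folder to PATH in the system's Environment Variables dialog instead.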

node-gyp build from different directory

Whenever I do
node-gyp build
I need to be in the directory that contains my binding.gyp file and its 'build' directory.
I was wondering if there is a way to keep my current working directory somewhere else and specify the path to build at.
My use case is that I spend most of my time in my home directory ~, where I like to stop/start/restart node, and I don't really want to cd to api/v1/C (which is where I keep my .c files) every time I want to build them.
(I suppose I could just write a script that cds to api/v1/C, runs node-gyp build, then cds back to ~; however, I'd like to know if there is another way that doesn't involve a script.)
In the docs:
-C $dir, --directory=$dir Run command in different directory
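So, using the paths from the question, something like the following should build from ~ without a cd (a sketch, untested):
node-gyp build --directory=api/v1/C
node-gyp build -C api/v1/C
Both forms are equivalent; -C is the short option.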

How to install the check_inode plugin in Nagios

I have to install a plugin on a Red Hat server where Nagios is already configured.
The plugin to be installed is inode_checker, which I got from this link:
how to install inode checker in nagios
But when I opened this link, all I could find was a shell script.
Now I am not sure whether I have to place the shell script directly on the server in /usr/local/nagios/libexec/, or whether there is some other way to do it, since the other plugins available in this location seem to be different and I am not able to open them.
What am I doing wrong here? Please advise.
Yes, this is a bash script, so simply download it and place it in the folder where your other scripts sit. Make sure to make it executable, like:
chmod +x scriptname
Then you should be able to use it in Nagios by creating a Command object. You can find the location of the folder where your scripts live by looking at the resources.cfg file, which should hold something like the following:
$USER1$=/usr/lib64/nagios/plugins
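Once the script is in place and executable, the Command object could look something like the following sketch; the script file name and the $ARG1$ argument are assumptions here, so adjust them to whatever the script actually expects:
define command {
    command_name    check_inode
    command_line    $USER1$/check_inode.sh $ARG1$
}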
Hope this helps.

Jenkins deletes files from the workspace

I have a Jenkins job which copies a tar file from a Linux user's folder, then copies a binary (compiled) file from another job and makes a new tar file. A Jenkins user can then copy that new tar file from the job's workspace.
It doesn't build anything or take files from SCM. The problem is that after a while the tar file suddenly disappears from the workspace and I have to run the job again. How can I prevent that?
You really shouldn't rely on your workspace existing after a job has completed: the workspace can be overwritten when another build starts, deleted along with a build, lost when a slave goes offline, and so on.
Since you want to save the file for later use, you should use the "Archive the artifacts" option in your job's post-build configuration. If you enter **/*.tar, for example, Jenkins would save all TAR files at the end of the build.
Then you can use Jenkins' permalinks to access the artifacts, e.g.:
http://JENKINS/job/JOB_NAME/lastSuccessfulBuild/artifact/bin/my-app.tar
As the URL suggests, this would give you the file from the last successful build.
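That also makes the file easy to fetch from a script, for example with curl (using the example URL above; add credentials if your Jenkins requires authentication):
curl -O http://JENKINS/job/JOB_NAME/lastSuccessfulBuild/artifact/bin/my-app.tar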
As a side note, if you then want to copy archived files into another build, the best way to do this is with the Copy Artifact plugin.
That way Jenkins handles the file copying for you, even across multiple Jenkins slaves, and you don't have to do anything nasty like hard-coding paths to other Jenkins workspaces.
