xinetd does not load environment variables set in /etc/profile.d - nagios

I am using xinetd to serve the output of check_mk_agent. I have custom check_mk_agent scripts, some of which are configured with environment variables. These environment variables are set in /etc/profile.d/set_env.sh. When I run check_mk_agent manually, the environment variables are found and the custom checks succeed. When I do telnet myhost 6556, the environment variables are not found and the custom checks fail.
My question is, what is a good way to ensure that set_env.sh gets run in the xinetd context? I would rather not use env and passenv variables in xinetd configuration, because it would be annoying to unnecessarily maintain environment variables in multiple places on the same host.
Thanks!

Edit the check_mk_agent file and add the following line just after #!/bin/bash:
source /etc/profile.d/set_env.sh
Save this, and retry.


What is the equivalent of setenv in Windows?

I am trying to find an equivalent of setenv to use in a C program. What I am trying to do is modify the values of all the environment variables of the currently running process. I am trying to use putenv but it doesn't change the variables' values in any way. What could I do?
Those are the correct methods for setting environment variables. The issue you are hitting is that SetEnvironmentVariable, which is what the C runtime's setenv uses, does not change system-wide environment variables; it changes only the environment of the currently running process.
Changing the system-wide or per-user environment variables on Windows is normally done using scripts or the UI. To modify the system-wide environment variables from a C program, you need to (a) run it with administrator rights, (b) modify the System Registry, and (c) send a WM_SETTINGSCHANGE Win32 message so the changes get picked up by the Windows shell.

Teamcity not loading environment variables - Fortify automation

I have TeamCity currently set up to run a batch file, and this batch file executes a Fortify scan. The environment variable PATH loaded correctly on one attempt and the scan started. On the next build attempt the batch file couldn't locate one of the executables (sourceanalyzer.exe). Looking into the user-defined parameters, it seems different environment variables were loaded than on the previous build attempt: the system environment variables loaded on the successful attempt, and a user's environment variables loaded on the failed attempt. Is there a way to permanently set the environment variable PATH in the agent so that only the system environment variables load?
UPDATE:
I have tried several things, including passing in a Fortify environment variable, which does allow TeamCity to start running the scan. However, it looks like I hit another snag with Fortify's plugin for VS2015. The new error states it cannot find the plugin. I assume this is because paths are hard-coded, seeing as TeamCity doesn't use the system environment variables and I have to pass them in for TeamCity to find these directories. Is there an easier way to use the batch file to load the system environment variables and avoid hard-coding paths? Would setlocal in the batch file help load these system environment variables, so I can just call sourceanalyzer without creating environment variables or hard-coding paths?
IIRC, TeamCity will ask you whether you want to install the build agent under the System account or a user account. By default it selects the System account, and as long as you aren't running any GUI apps you won't notice the difference... until something like this happens. If Fortify is GUI based, then reinstall your build agent under the user account and ignore the following. Otherwise...
When you set your PATH variables using the System (Control Panel) advanced settings, there are two panes, one for user and one for system. Here you can inspect the System variables to make sure they are correct.
What I will generally do is create a new key, say FORTIFY_PATH, and prepend %FORTIFY_PATH% to the System PATH variable. THEN RESTART YOUR MACHINE. The path won't get updated correctly until you do.
Next, log in to the System account using PsExec (https://superuser.com/a/596395) and try to run your tools from that command prompt to verify that they work in the build agent's environment. I once had trouble getting an SVN script to upload until I logged into the System account and provided my SVN password. Some settings are stored in %APPDATA%, which differs between the System account and your user account.
If you can't get Fortify to run from the System command prompt, then you should probably reinstall your build agent to your user account. Or install Fortify to the System account (if possible).
When configuring TeamCity build agents, check the agent system and environment variables by going to Agents -> Agent -> Agent Parameters, or /agentDetails.html?id=1&tab=agentParameters&kind=envpath on your server.
After changing the parameters, restart the agent or restart the agent's machine.

Run u-boot command at startup

I have a custom board running Yocto (Jethro) and would like to run a single u-boot command, preboot. Obviously, breaking the boot sequence with space and running it manually works. How do I get it to run automatically? More specifically, where is the startup command sequence, by default?
Edit: Also, I am aware I can edit the environment at runtime. However, I am trying to build this change into the image so I can distribute it.
When you are in the U-Boot environment, enter printenv; it will list the environment variables that U-Boot uses.
There is a variable named bootcmd. Currently, mine contains a bunch of if/else commands. Similarly, add your preferred command there for boot.
After it is finished and tested, use saveenv to store the edit.
Here is a syntax reference for U-Boot.
Edit:
U-Boot allows you to store commands or command sequences in a plain text file. Using the mkimage tool you can then convert this file into a script image, which can be executed using U-Boot's autoscr command. See U-Boot Scripting Capabilities.
Typically, your U-Boot recipe will build U-Boot for a single machine; in that case, I'd normally just patch the compiled-in default U-Boot environment to do the right thing. This is achieved by
SRC_URI_machine += "file://mydefenv.patch"
Or (even better) use your own git tree. This would also have the additional benefit that your system might be able to boot up and do something useful even if the environment were totally corrupted.
Another possibility is to do it as Charles suggested in a comment to another answer: create an environment offline and have U-Boot load it; see denx.de/wiki/view/DULG/UBootScripts
A third possibility, which I've also used sometimes, is to construct the environment offline (possibly using the same or a similar mechanism as in the link above), and then flash the environment during the normal flash-programming process. Most of the time I've done this on AT91s, using a TCL script similar to the at91 SAM-BA TCL script.
No matter which method you chose, the bootcmd variable in U-Boot should hold your boot script.
The general answer is that bootcmd is run by default, and if there is persistent environment you can change the command and 'saveenv' so that it's kept.
It is easiest to modify the said bootcmd, which is executed anyway.
As an alternative to patching the U-Boot source, it is possible to override the command in U-Boot.
Create a file e.g. platform-top.h at the same place where you would place the patch file (it might already exist) and override the CONFIG_BOOTCOMMAND.
The result will look something like this:
/* ... */
/* replace the memory write with any other valid command */
#define CONFIG_BOOTCOMMAND "mw 0x1 0x1 && run default_bootcommand"
Don't forget to make the file known in your bbappend: SRC_URI += "file://platform-top.h"

ReactJS: Storing very simple settings/constants

I am very new to ReactJS and I might be thinking completely wrong. In our react app I make a lot of AJAX calls to the backend. For example, in dev I make calls to http://localhost:3000, in production I (naturally) use another host, which changes depending on the installation.
The hosts are static, set once and never change.
How do I make the host-information manageable in React?
I read about redux/flux etc to store global variable, but this is overkill for us. We just need to have one string (URL/host-info) that we can replace with another. We can store the info in a file, as a command-line param or whatever. We just need it to be simple.
UPDATE: Turns out that I did not fully understand the build system. As Dan pointed out, we use webpack to package the solution. Using a loader, we could swap out our configuration settings in the code. We ended up using a simple string-replacement loader (string-replace-webpack-plugin), since env variables are not suitable for this solution.
What you're describing are usually known as environment variables. You generally maintain a specific set of environment variables for each context your application is developed or run in.
For instance you might have an APP_HOST environment variable which would be set differently at your local machine, than it would at your server.
Most programs that run on the server can read the environment variables directly, but because React apps run in the client's browser, you'll have to make them aware of the appropriate environment variables before they are served.
The easiest way to do this is with envify.
Envify will allow you to reference environment variables from your frontend code.
// app.js
const host = process.env.APP_HOST;
fetch(host);
Envify is a Browserify transform, meaning you'd need to run your code through a command like this.
# define some environment variables (export them so browserify can see them)
export APP_HOST="localhost:3000"
# build the code
browserify app.js -t envify -o bundle.js
What comes out the other side would be:
// bundle.js
var host = "localhost:3000";
fetch(host);
If you use Webpack, there's a similar alternative in the form of envify-loader.

How can I call bash from C with a clean environment?

I'm attempting to set up my computer such that I can authenticate myself using an external device connected to a Python script. I started by replacing the login program in inittab with my own program, and I've been able to get into a bash shell. The problem is that it doesn't get a fresh environment like the one that is (I presume) given by login. I know there are ways for me to mess with the environment, but I haven't seen a way to give it a "default" configuration, if such a thing even makes sense.
Some ideas:
First of all, it would be better in most cases to use the pluggable login architecture PAM. This will ensure that all PAM-enabled applications and services can use the authentication method (ssh, for example) and that there is no way to bypass it using regular services.
If you really want to replace login, I'd suggest clearing the environment yourself by calling unsetenv for each environment variable that is set (you may use environ to determine which variables those are). After cleaning up the environment, you may use an exec-like call to replace your program with bash; the environment is preserved across exec, so bash starts clean. You may want to add the command-line argument -l to start bash as it would have been invoked by login.
Bash runs some init scripts on startup. You may check /etc/profile, /etc/bashrc and similar files for environment variables you don't want to be set.
If you don't mind depending on env (which is not so bad, since it should be present on every Linux system out there), you can use env -i bash to call bash in a clean environment.
When main(int argc, char *argv[], char *envp[]) is called by the operating system, the third parameter contains the environment. So just save a copy of it until you need to call bash.
