Checking for environment CRC in u-boot script

When I bring up my fresh-off-the-press system, U-Boot comes up with the default environment (which I fine-crafted when compiling U-Boot).
That is expected.
Loading Environment from MMC... *** Warning - bad CRC, using default environment
However, to automate deployment I would like to run saveenv to initialize the environment in MMC, but only when there is no valid environment in storage yet.
I am looking for a way to determine whether the environment has a bad CRC (e.g. because it is uninitialized) and, in that case, initialize it with the default using saveenv.
Once I initialize the environment I can further automate settings (such as ethaddr) from my deployment shell script using fw_setenv.
I didn't find a way to do this programmatically within a U-Boot script.

This might not be the most elegant solution but at least I have a working workaround:
I added a script that checks whether a specific environment variable, env_written, serving as a flag, is set. If not, the script sets the flag and performs saveenv.
The flag variable makes sure the script runs only once and not on future boots.
setenv init_env "if test -n '"$env_written"'; then ; else setenv env_written 1; echo '"... inaugural saveenv"'; saveenv; fi"
Lastly I added an invocation of this script in the bootcmd script that is always executed.
setenv bootcmd "run init_env; echo '"bootcmd: "'; run loadfpga; bridge enable; run distro_bootcmd"
It does the job :-)
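As an aside, if your U-Boot build includes the env exists command (part of the nvedit command set in reasonably recent releases; check with help env at the prompt, since availability depends on your configuration), the flag test can be written a bit more directly. A sketch of the same one-shot idea, under that assumption:

```
setenv init_env "if env exists env_written; then true; else setenv env_written 1; echo '... inaugural saveenv'; saveenv; fi"
```

This avoids relying on the quoting trick needed for test -n, but otherwise behaves the same as the variant above.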
For reference - below is the encoding suitable for copy-pasting to your config_distro_bootcmd.h if you want to get this behavior out of the box:
"init_env=" \
"if test -n \"$env_written\"; then ; " \
"else " \
"setenv env_written 1; " \
"echo \"... inaugural saveenv\"; " \
"saveenv; " \
"fi;\0"
and
#define CONFIG_BOOTCOMMAND "run init_env; echo \"bootcmd: \"; run loadfpga; bridge enable; run distro_bootcmd"
Enjoy better scripting possibilities!

How do you assign an Array inside a Dockerfile?

I have tried a number of different ways to assign an array inside a RUN command within a Dockerfile. None of them seem to work. I am running on Ubuntu-Slim, with bash as my default shell.
I've tried this (second line below)
RUN addgroup --gid 1000 node \
&& NODE_BUILD_PACKAGES=("binutils-gold" "g++" "gcc" "gnupg" "libgcc-7-dev" "linux-headers-generic" "make" "python3" ) \
...
But it fails with /bin/sh: 1: Syntax error: "(" unexpected.
I also tried assigning it as an ENV variable, as in:
ENV NODE_BUILD_PACKAGES=("binutils-gold" "g++" "gcc" "gnupg" "libgcc-7-dev" "linux-headers-generic" "make" "python3" )
but that fails as well.
Assigning and using arrays in Bash is fully supported. Yet it appears that I can't use that feature of Bash when running in a Dockerfile. Can someone confirm/deny that you can assign array variables inside of shell commands in Dockerfile RUN syntax (or ENV variable syntax)?
The POSIX shell specification does not have arrays. Even if you're using non-standard shells like GNU bash, environment variables are always simple strings and never hold arrays either.
The default shell in Docker is usually /bin/sh, which should conform to the POSIX spec, and not bash. Alpine-based images don't have bash at all unless you go out of your way to install it. I'd generally recommend trying to stick to the POSIX syntax whenever possible.
A typical Dockerfile is fairly straightforward: it doesn't have a lot of parts that get reused multiple times, and most of the things you can specify in a Dockerfile don't need to be user-configurable. So for a list of OS packages, for example, I'd just list them out in a RUN command and not bother trying to package them into a variable.
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --no-install-recommends --assume-yes \
binutils-gold \
g++ \
gcc \
...
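If you really do want the list defined in one place, the usual portable workaround in POSIX sh is a space-separated string rather than an array. A small sketch (package names are just examples):

```shell
# A space-separated string stands in for the array; an unquoted
# expansion word-splits it back into separate arguments.
PKGS="binutils-gold g++ gcc"
set -- $PKGS
echo "$# packages: $1 $2 $3"
```

Running this prints `3 packages: binutils-gold g++ gcc`; in a Dockerfile you would pass `$PKGS` unquoted to `apt-get install` the same way.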
Other things I see in Stack Overflow questions that do not need to be parameterized include the container path (set it once as the WORKDIR and refer to . thereafter), the process's port (needs to be a fixed number for the second docker run -p part), and user IDs (can be overridden with docker run -u, and you don't usually want to build an image that can only run on one system).
WORKDIR /app # not an ENV or ARG
COPY . . # into the WORKDIR, do not need to repeat
RUN adduser node # with no specific uid
EXPOSE 3000 # a fixed port number
RUN mkdir /data # also use a fixed path for potential mount points
You can have an array like NODE_BUILD_PACKAGES in RUN if you define SHELL:
SHELL ["/bin/bash", "-c"]
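With that directive in place, subsequent RUN lines are executed by bash, so array syntax works there (ENV still cannot hold an array, since environment variables are plain strings). A hedged sketch, with example package names:

```dockerfile
SHELL ["/bin/bash", "-c"]
RUN NODE_BUILD_PACKAGES=("binutils-gold" "g++" "gcc") \
 && apt-get update \
 && apt-get install --no-install-recommends --assume-yes "${NODE_BUILD_PACKAGES[@]}"
```

Note this only helps on images that actually ship bash; on Alpine you would have to install it first.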

Troubleshooting export with heredoc

INTRODUCTION:
I have been using this construct to set the current group after opening a terminal at a compute server:
newgrp project1_group << ANYCODE
cd ~/WORK/project1_rundir
bsub xterm &
ANYCODE
After executing this script, a new terminal is opened on the compute server, in the specified project rundir, and the primary group is set correctly.
It works just fine...
PROBLEM DESCRIPTION:
Now I would like to set an environment variable at a compute server using the same construct:
export POLICYFILE=~/WORK/project1_rundir/.policyfile << ANYCODE
cd ~/WORK/project1_rundir
bsub xterm &
ANYCODE
It doesn't do anything, not even a terminal is opened.
Does anyone have an explanation why newgrp works and export does not?
Is there a way to make this work (not necessarily using the heredoc)?
The problem is solved (even better, without heredoc)...
The final solution is implemented as follows:
cd ~/WORK/project1_rundir
bsub -I -env "all, POLICYFILE=~/WORK/project1_rundir/.policyfile" xterm &
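For completeness, the reason the two constructs behave differently: newgrp starts a new shell that reads commands from its standard input, so the heredoc becomes that shell's script. export is an ordinary builtin that never reads stdin, so the heredoc body is silently discarded and no shell ever runs those commands. A minimal demonstration in plain shell, no bsub needed:

```shell
# export ignores its stdin, so the heredoc body never executes
# and the command substitution captures nothing.
out=$(export FOO=bar << EOF
echo "this line is never run"
EOF
)
echo "captured: [$out]"
```

This prints `captured: []`. Moving the export line inside the heredoc (so the newgrp shell executes it) would also have worked in the original construct.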

Auto-Running a C Program on Raspberry PI

How can I make my C code auto-run on my Raspberry Pi? I have seen a tutorial on how to achieve that, but I do not really know what I am still missing. My initialization script is shown as follows:
#! /bin/sh
# /etc/init.d/my_settings
#
# Something that could run always can be written here
### BEGIN INIT INFO
# Provides: my_settings
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# X-Interactive: true
# Short-Description: Script to start C program at boot time
# Description: Enable service provided by my_settings
### END INIT INFO
# Carry out different functions when asked to by the system
case "$1" in
start)
echo "Starting RPi Data Collector Program"
# run application you want to start
sudo /home/pi/Documents/C_Projects/cfor_RPi/charlie &
;;
stop)
echo "Killing RPi Data Collector Program"
# kills the application you want to stop
sudo killall charlie
;;
*)
echo "Usage: /etc/init.d/my_settings {start | stop}"
exit 1
;;
esac
exit 0
The problem is that my program does not run at boot time and I do not really know why. What could I be missing? Is this "killall" statement "killing" some useful process during execution time? I am making this code run as a background application, but I know that a few seconds after the RPi initializes, it asks for a username and a password in order to start the session. Is it possible that my RPi is not executing this code because I am not providing the login information? I do not have a monitor, so my program has to run as soon as I plug my RPi in. Thanks a lot in advance!!
You'll have to create links to that init script in the proper /etc/rcX.d folders. On raspbian this is done by:
sudo update-rc.d YOUR_INIT_SCRIPT_NAME defaults
You can read this debian how-to for further information. Also you should read more about run levels in Debian.
How scripts/services are run at startup time generally depends on the type of init system used. Off the top of my head, I'd distinguish the following 4 types:
Embedded style: A single shell script has all the commands to start the system. Usually the script is at one of the paths the kernel tries to start as the init process.
BSD style
System V style: This uses /etc/inittab and later scripts in /etc/rc*.d/ to start services one by one
systemd
Raspbian derives from Debian, so I suppose System V style. You have to symlink your script into /etc/rc2.d like
ln -s /etc/init.d/your-script /etc/rc2.d/S08my-script
Note the structure of the link name: the 'S' says the script should be started when the run level is entered, and the '08' determines its position (do an ls /etc/rc2.d/ to see the other links).
More details: init(8).
update-rc.d(8) is the proper way to create the symlinks on Debian. See the manpage:
update-rc.d - install and remove System-V style init script links
I advise reading at least the man pages update-rc.d(8) and init(8).
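As an aside, recent Raspbian releases boot with systemd, where a unit file is often simpler than a System V init script. A hedged sketch, using the binary path from the question and a hypothetical unit name:

```ini
# /etc/systemd/system/charlie.service  (hypothetical unit name)
[Unit]
Description=RPi Data Collector Program
After=network.target

[Service]
ExecStart=/home/pi/Documents/C_Projects/cfor_RPi/charlie
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After creating the file, `sudo systemctl enable charlie.service` makes it start at boot; no sudo is needed inside the unit, since services already run as root unless told otherwise.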
http://www.akeric.com/blog/?p=1976
Here is a tutorial on how to auto-login and start a script at boot.
If it still doesn't work, there's either a problem in your script or in your C program.

How to run command within a prompt set by shell script

This is the process we perform manually.
$ sudo su - gvr
[gvr/DB:DEV3FXCU]/home/gvr>
$ ai_dev.env
Gateway DEV3 $
$ gw_report integrations long
report is ******
Now I am attempting to automate this process using a shell script:
#!/bin/ksh
sudo su - gvr
. ai_dev3.env
gw_report integrations long
but this is not working; it gets stuck after entering the env.
It is stuck at this prompt (Gateway DEV3 $)
You're not running the same commands in the two examples - gw_report long != gw_report integrations long. Maybe the latter takes much longer (or hangs).
Also, in the original code you run ai_dev.env and in the second you source it. Any variables set when running a script are gone when returning from that script, so I suspect this accounts for the different behavior.
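Another likely issue, assuming a standard su: in a script, sudo su - gvr starts an interactive shell and blocks there, so the lines after it only run (as the original user) once that shell exits. The usual fix is to pass the whole sequence to the target user's shell with -c, e.g. sudo su - gvr -c '. ./ai_dev3.env && gw_report integrations long' (file names as in the question). The principle can be demonstrated without sudo:

```shell
# Variables sourced in a child shell stay visible to commands
# run in that same child shell via -c.
echo 'GATEWAY=DEV3' > /tmp/demo.env     # hypothetical stand-in for ai_dev3.env
sh -c '. /tmp/demo.env && echo "gateway is $GATEWAY"'
```

This prints `gateway is DEV3`, whereas sourcing the file in one shell invocation and running the report in another would not.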

Keep user env variables executing gksu

I have a program in C/GTK which is opened with gksu. The problem is that when I get the environment variable $HOME with getenv("HOME") it returns root's home directory, obviously. I would like to know if there is a way to find out which user executed gksu, or a way to get his environment variables.
Thanks in advance!
See the man page. Use gksu -k command... to preserve the environment (in particular, PATH and HOME).
Or, like Lewis Richard Phillip C indicated, you can use gksu env PATH="$PATH" HOME="$HOME" command... to reset the environment variables for the command. (The logic is that the parent shell, the one run with user privileges, substitutes the variables, and env re-sets them when superuser privileges have been attained.)
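The env idiom can be demonstrated without gksu itself: env -i clears the environment much as the privilege switch does, and explicit VAR=value arguments re-inject the values the unprivileged parent shell already expanded. A small sketch with a hypothetical path:

```shell
# env -i wipes the environment (a stand-in for the reset gksu performs);
# HOME=... re-injects the caller's value, here a hypothetical path.
env -i HOME="/home/demo" sh -c 'echo "inner HOME=$HOME"'
```

This prints `inner HOME=/home/demo` even though the inner shell started with an otherwise empty environment.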
If your application should only be run with root privileges, you can write a launcher script -- just like many other applications do. The script itself is basically
#!/bin/sh
exec gksu -k /path/to/your/application "$@"
or
#!/bin/sh
exec gksu env PATH="$PATH" HOME="$HOME" /path/to/your/application "$@"
The script is installed in /usr/bin, and your application as /usr/bin/yourapp-bin or /usr/lib/yourapp/yourapp. The exec means that the command replaces the shell; i.e. nothing after the exec command will ever be executed (unless the application or command cannot be executed at all) -- and most importantly, there will not be an extra shell in memory while your application is being executed.
While Linux and other POSIX-like systems do have a notion of effective identity (defining the operations an application may do) and real identity (defining the user that is doing the operation), gksu modifies all identities. In particular, while getuid() returns the real user ID for example for set-UID binaries, it will return zero ("root") when gksu is used.
Therefore, the above launch script is the recommended method to solve your problem. It is also a common one; run
file -L /usr/bin/* /usr/sbin/* | sed -ne '/shell/ s|:.*$||p' | xargs -r grep -lie launcher -e '^exec /'
to see which commands are (or declare themselves to be) launcher scripts on your system, or
file -L /bin/* /sbin/* /usr/bin/* /usr/sbin/* | sed -ne '/shell/ s|:.*$||p' | xargs -r grep -lie gksu
to see which ones use gksu explicitly. There is no harm in adopting a known good approach. ;)
You could assign the values of these environment variables to ordinary variables, then re-export them after the gksu call, chaining the commands with && so that you essentially execute with cloned environment variables.
A better question is why do this at all? I realize you want to keep some folders, but I am not sure why, since any files created as root would have to be globally writable (probably via umask), or you would have to manually revise permissions or change ownership... This is such a bad idea!
Please check out https://superuser.com/questions/232231/how-do-i-make-sudo-preserve-my-environment-variables , http://www.cyberciti.biz/faq/linux-unix-shell-export-command/ & https://serverfault.com/questions/62178/how-to-specify-roots-environment-variable
