Why do environment variables (PATH) work in bash? - C

I'm trying to build a bash-like shell in C,
but I ran into this problem when I tried: env -i bash
This should pass an empty environment into bash, so all the environment variables should be unset.
Example:
➜ ~ env -i bash --norc
bash-3.2$ env
PWD=/Users/mbari
SHLVL=1
_=/usr/bin/env
bash-3.2$ ls
#*mail*#78979jxq# Documents Pictures result.log
Applications Downloads VirtualBox VMs tmux-client-73012.log
Cleaner.sh Library docker_start_up.bash tmux-client-73105.log
Cleaner_42.sh Movies file
Desktop Music goinfre
bash-3.2$

Bash has a default path that it uses when it doesn't inherit PATH and nothing else sets it. It's defined as DEFAULT_PATH_VALUE in the bash sources (config-top.h), and while there are some defaults in the source, distributions usually override this value in their build scripts. As you're building your own shell, you might find that config file interesting.
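To see what your build's default is, you can start bash with an empty environment and print PATH; the exact value varies by distribution, so treat this as a quick check rather than a fixed answer:
env -i bash --norc -c 'echo "$PATH"'   # this PATH comes from the compiled-in default (DEFAULT_PATH_VALUE)
getconf PATH                           # for comparison, the system's POSIX default search path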

Related

How do you assign an Array inside a Dockerfile?

I have tried a number of different ways to assign an array inside a RUN command within a Dockerfile. None of them seem to work. I am running on Ubuntu-Slim, with bash as my default shell.
I've tried this (second line below)
RUN addgroup --gid 1000 node \
&& NODE_BUILD_PACKAGES=("binutils-gold" "g++" "gcc" "gnupg" "libgcc-7-dev" "linux-headers-generic" "make" "python3" ) \
...
But it fails with /bin/sh: 1: Syntax error: "(" unexpected.
I also tried assigning it as an ENV variable, as in:
ENV NODE_BUILD_PACKAGES=("binutils-gold" "g++" "gcc" "gnupg" "libgcc-7-dev" "linux-headers-generic" "make" "python3" )
but that fails as well.
Assigning and using arrays in Bash is fully supported. Yet it appears that I can't use that feature of Bash when running in a Dockerfile. Can someone confirm/deny whether you can assign array variables inside shell commands in Dockerfile RUN syntax (or ENV variable syntax)?
The POSIX shell specification does not have arrays. Even if you're using non-standard shells like GNU bash, environment variables are always simple strings and never hold arrays either.
The default shell in Docker is usually /bin/sh, which should conform to the POSIX spec, and not bash. Alpine-based images don't have bash at all unless you go out of your way to install it. I'd generally recommend trying to stick to the POSIX syntax whenever possible.
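To see the difference outside Docker, you can compare a strictly POSIX shell with bash directly; a quick illustration, assuming dash (or another POSIX sh) is installed:
sh -c 'pkgs=("gcc" "make")'                        # with dash as sh: Syntax error: "(" unexpected
bash -c 'pkgs=("gcc" "make"); echo "${pkgs[0]}"'   # with bash: prints gcc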
A typical Dockerfile is fairly straightforward; it doesn't have a lot of parts that get reused multiple times, and most of the things you specify in a Dockerfile don't need to be user-configurable. So for a list of OS packages, for example, I'd just list them out in a RUN command and not bother trying to package them into a variable.
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --no-install-recommends --assume-yes \
      binutils-gold \
      g++ \
      gcc \
      ...
Other things I see in Stack Overflow questions that do not need to be parameterized include the container path (set it once as the WORKDIR and refer to . thereafter), the process's port (needs to be a fixed number for the second docker run -p part), and user IDs (can be overridden with docker run -u, and you don't usually want to build an image that can only run on one system).
# not an ENV or ARG
WORKDIR /app
# into the WORKDIR; no need to repeat the path
COPY . .
# with no specific uid
RUN adduser node
# a fixed port number
EXPOSE 3000
# also use a fixed path for potential mount points
RUN mkdir /data
You can use an array like NODE_BUILD_PACKAGES in RUN if you define SHELL:
SHELL ["/bin/bash", "-c"]
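With that directive in place, each RUN body is passed to bash -c, so array syntax works. Roughly, this is what such a RUN line then amounts to; the package list is just an example and the install command is left commented out:
bash -c '
  NODE_BUILD_PACKAGES=("binutils-gold" "g++" "gcc" "make" "python3")
  echo "would install: ${NODE_BUILD_PACKAGES[@]}"
  # apt-get install --no-install-recommends --assume-yes "${NODE_BUILD_PACKAGES[@]}"
'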

Optimal usage of codecov in a monorepo context with separate flags for each package

I was just wondering what's the best way to configure codecov for a monorepo setting. For example, let's say I have packages A and B under my monorepo. The way I'm currently using codecov is through the GitHub action codecov/codecov-action@v1, with multiple uses statements in my GitHub workflow YAML file like the following:
- uses: codecov/codecov-action@v1
  with:
    files: ./packages/A/coverage/lcov.info
    flags: flag_a
    name: A
- uses: codecov/codecov-action@v1
  with:
    files: ./packages/B/coverage/lcov.info
    flags: flag_b
    name: B
I know it's possible to use a comma-separated value to upload multiple files, but I have to set a separate flag for each package, and doing it that way doesn't seem to work.
Thank you.
If anyone wants to know my solution, here's what I came up with.
I ended up replacing the GitHub action with my own bash script.
Final code:
#!/usr/bin/env bash
codecov_file="${GITHUB_WORKSPACE}/scripts/codecov.sh"
curl -s https://codecov.io/bash > $codecov_file
chmod +x $codecov_file
cd "${GITHUB_WORKSPACE}/packages";
for dir in */
do
  package="${dir/\//}"
  if [ -d "$package/coverage" ]
  then
    file="$PWD/$package/coverage/lcov.info"
    flag="${package/-/_}"
    $codecov_file -f $file -F $flag -v -t $CODECOV_TOKEN
  fi
done
This is what the above bash script does:
Download the bash uploader script from codecov.
Move to the packages directory where all the packages are located, and go through all the first-level directories.
Derive the package name by removing the trailing slash.
Enter a package only if it contains a coverage directory, since only those packages have been tested.
Build the file and flag variables (replacing hyphens with underscores, as codecov doesn't support hyphens in flag names).
Execute the downloaded codecov script, passing the file and flag variables as arguments.
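For a local test run of the script, something along these lines should work; the filename is hypothetical, and in CI GITHUB_WORKSPACE is set by GitHub Actions automatically:
export GITHUB_WORKSPACE="$PWD"            # repo root (set for you in GitHub Actions)
export CODECOV_TOKEN="<your-codecov-token>"
mkdir -p "$GITHUB_WORKSPACE/scripts"      # the script downloads the uploader into scripts/
bash ./upload-coverage.sh                 # hypothetical name for the script above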

Helm and Kustomize - post renderer shell script for Windows

I need an equivalent of this Unix shell script:
#!/bin/bash
# save incoming YAML to file
cat > all.yaml
# modify the YAML with kustomize
kustomize build . && rm all.yaml
but for Windows. I end up with:
#echo off
type > all.yaml
kustomize build .
but it's not working: the generated file is empty. I kindly ask for help.
The command cat > all.yaml reads standard input and stores it in the file all.yaml. Helm talks to a post-renderer over stdin/stdout while Kustomize operates on files, so this script bridges the two tools. The script is passed to helm like this: helm upgrade <release_name> <chart> --post-renderer kustomize (the script is saved in a file named kustomize).
EDIT
Simplifying: I'm looking for the Windows equivalent of Unix's:
whoami | cat > file.txt
Finally I have solved it, and came up with:
@echo off
more > all.yaml
kustomize build . && del all.yaml

Where to store settings for C command line apps and bash scripts?

This may be an obvious question but I can't find a definitive answer.
When making a command line utility in C, or when writing bash scripts, where can I save values for later reference?
What I'm looking for is something similar to NSUserDefaults.
For the bash setup, the shell normally reads /etc/profile, and the per-user equivalent ~/.bash_profile or ~/.bashrc, at startup. So look at these files and make the appropriate modifications. If possible, I suggest making a backup of these files before making any changes.
Be aware that the /etc/profile file will generally provide global settings while, if an equivalent file exists in your home directory, that file may override the global settings.
If you wish to add or modify environment variables on the fly, try ...
a. adding the following code to the end of your ~/.bash_profile or ~/.bashrc file
if [ -e ./.bashadd ]
then
  source ./.bashadd
fi
b. append additions or modifications to the file ./.bashadd on the fly (NOTE: you'll have to handle this within your program)
echo export NAME=John >> ./.bashadd
c. at login, when you invoke bash or when you source your ~/.bash_profile or ~/.bashrc file, the environment variables will be available
Test:
[shell ~]$ echo export NAME=John >> ./.bashadd
[shell ~]$ source ./.bashrc
[shell ~]$ echo $NAME
John
[shell ~]$
Admittedly, this is not an ideal solution, and I would suggest doing it only in your local environment and not globally (i.e. not in /etc/profile).
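Another common pattern, closer in spirit to NSUserDefaults, is a per-application dotfile of KEY=value pairs that both a bash script and a C program can read and append to; a minimal sketch, using a hypothetical ~/.myapprc:
CONFIG_FILE="$HOME/.myapprc"              # hypothetical per-app settings file

# save (or update) a setting
echo 'username=John' >> "$CONFIG_FILE"

# read the most recent value back later
username=$(grep '^username=' "$CONFIG_FILE" | tail -n 1 | cut -d= -f2)
echo "stored username: $username"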

Mass rename objects on Google Cloud Storage

Is it possible to mass rename objects on Google Cloud Storage using gsutil (or some other tool)? I am trying to figure out a way to rename a bunch of images from *.JPG to *.jpg.
Here is a native way to do this in bash, with a line-by-line explanation of the code below:
gsutil ls gs://bucket_name/*.JPG > src-rename-list.txt
sed 's/\.JPG/\.jpg/g' src-rename-list.txt > dest-rename-list.txt
paste -d ' ' src-rename-list.txt dest-rename-list.txt | sed -e 's/^/gsutil\ mv\ /' | while read line; do bash -c "$line"; done
rm src-rename-list.txt; rm dest-rename-list.txt
The solution builds two lists, one for the source files and one for the destination files (to be used in the gsutil mv command):
gsutil ls gs://bucket_name/*.JPG > src-rename-list.txt
sed 's/\.JPG/\.jpg/g' src-rename-list.txt > dest-rename-list.txt
The line "gsutil mv " and the two files are concatenated line by line using the below code:
paste -d ' ' src-rename-list.txt dest-rename-list.txt | sed -e 's/^/gsutil\ mv\ /'
This then runs each line in a while loop:
while read line; do bash -c "$line"; done
Lastly, clean up and delete the files created:
rm src-rename-list.txt; rm dest-rename-list.txt
The above has been tested against a working Google Storage bucket.
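For comparison, the same rename can also be done in a single loop without the temporary files; a sketch using the same bucket placeholder:
gsutil ls 'gs://bucket_name/*.JPG' | while read -r src; do
  gsutil mv "$src" "${src%.JPG}.jpg"
done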
gsutil supports URI wildcards; see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames
EDIT
From the gsutil 3.0 release notes:
As part of the bucket sub-directory support we changed the * wildcard to match only up to directory boundaries, and introduced the new ** wildcard...
Do you have directories under the bucket? If so, you may need to go into each directory, or use **:
gsutil -m mv gs://my_bucket/**.JPG gs://my_bucket/**.jpg
or
gsutil -m mv gs://my_bucket/mydir/*.JPG gs://my_bucket/mydir/*.jpg
EDIT
gsutil doesn't support wildcards in the destination so far (as of 4/12/'14), and neither does the API.
So at the moment you need to retrieve the list of all JPG files and rename each file.
Python example:
import subprocess

# list all .JPG objects in the bucket (text=True returns str rather than bytes)
files = subprocess.check_output("gsutil ls gs://my_bucket/*.JPG", shell=True, text=True)
files = files.split("\n")[:-1]
for f in files:
    # swap the .JPG suffix for .jpg and move the object
    subprocess.call("gsutil mv %s %s" % (f, f[:-3] + "jpg"), shell=True)
Please note that this would take hours.
gsutil does not support parallelized mass copy/rename.
You have two options:
use a Dataflow process to do the operation, or
use GNU parallel to launch it using several processes.
If you use GNU Parallel, it is better to deploy a new instance to do the mass copy/rename operation:
First: Make a list of the files you want to copy/rename (a file with source and destination separated by a space or tab; see the sketch after these steps), like this:
gs://origin_bucket/path/file gs://dest_bucket/new_path/new_filename
Second: Launch a new compute instance
Third: Log in to that instance and install GNU parallel:
sudo apt install parallel
Fourth: Authorize yourself with Google (gcloud auth login), because the Compute Engine service account might not have permission to move/rename the files:
gcloud auth login
Fifth: Run the copy (gsutil cp) or move (gsutil mv) operation with parallel:
parallel -j 20 --colsep ' ' gsutil mv {1} {2} :::: file_with_source_destination_uris.txt
This will run 20 gsutil mv operations in parallel.
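For the first step above, the source/destination list could be generated along these lines; the bucket name is a placeholder, and the .JPG-to-.jpg rename (in place, within the same bucket) mirrors the original question:
gsutil ls 'gs://origin_bucket/**.JPG' | while read -r src; do
  printf '%s %s\n' "$src" "${src%.JPG}.jpg"
done > file_with_source_destination_uris.txt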
Yes, it is possible; see "Move/rename objects and/or subdirectories" in the gsutil documentation.
