Google Cloud Build: Moving files

I want to move the file index.js from the root of the project to dist/project_name. This is the step from cloudbuild.yaml:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: /bin/bash
  args: ['-c', 'mv', 'index.js', 'dist/project_name']
But the step is failing with the following error:
Already have image (with digest): gcr.io/cloud-builders/docker
mv: missing file operand
Try 'mv --help' for more information.
How can I fix this issue?

Because you're using bash -c, I think you need to encapsulate the entire "script" in a string:
args: ['-c', 'mv index.js dist/project_name']
My personal preference (and it's just that) is to not embed JSON ([...]) in YAML. In this case that makes the result slightly clearer, and it makes it easier to embed a multiline script:
args:
- -c
- |
  mv index.js dist/project_name
NOTE: tools like yamllint will do this conversion for you too.
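Putting it together, the full step would look something like this (keeping the /bin/bash entrypoint from the question):

- name: 'gcr.io/cloud-builders/docker'
  entrypoint: /bin/bash
  args:
  - -c
  - |
    mv index.js dist/project_name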

Related

How to inject pod environment variable values into a React app at runtime?

Running pods have some environment variables defined inside, for example:
/ # printenv
REACT_APP_ENV_VARIABLE=Variable from Kube!
REDIS_SERVICE_PORT=6379
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.38.0.1:443
REDIS_PORT=tcp://10.38.61.225:6379
REDIS_PORT_6379_TCP_ADDR=10.38.61.225
HOSTNAME=playground-pod
PLAYGROUND_SERVICE_SERVICE_HOST=10.38.0.53
REDIS_PORT_6379_TCP=tcp://10.38.61.225:6379
PLAYGROUND_SERVICE_SERVICE_PORT=80
PLAYGROUND_SERVICE_PORT=tcp://10.38.0.53:80
PLAYGROUND_SERVICE_PORT_80_TCP_ADDR=10.38.0.53
KUBERNETES_PORT_443_TCP_PROTO=tcp
PLAYGROUND_SERVICE_PORT_80_TCP_PORT=80
PLAYGROUND_SERVICE_PORT_80_TCP_PROTO=tcp
REACT_APP_ENV_VARIABLE_TWO=192.168.1.12
PLAYGROUND_SERVICE_PORT_80_TCP=tcp://10.38.0.53:80
How should I configure a React app like this one:
function App() {
  return (
    <div className="App">
      <header className="App-header">
        <p>
          <code>ENV. VARIABLE: </code> {x.REACT_APP_ENV_VARIABLE}
        </p>
      </header>
    </div>
  );
}

export default App;
to read and inject some of the variables present in the pod?
The main reason I want to know is the dynamic updating of e.g. backend or Redis URLs: they might change when the app is restarted, rescheduled, etc.
My first approach was using a config.json file imported into the app, but that way I can't import dynamic values generated by running pods.
You can use the react-dotenv library:
import React from "react";
import env from "react-dotenv";

export function MyComponent() {
  return <div>{env.REACT_APP}</div>;
}
while in the deployment you can pass values from a Secret or ConfigMap:
spec:
  containers:
  - name: example-site
    image: example/app:v1
    ports:
    - containerPort: 80
    env:
    - name: REACT_APP
      value: "123456"
The main reason I want to know it, is dynamic update of e.g. backend
or Redis URLs - they might change when the app is restarted,
rescheduled, etc.
The scenario above fits your requirement well, instead of using config.json.
You can pass multiple values to the deployment using ConfigMaps and Secrets.
As @Harsh Manvar suggested (thanks a lot!), the react-dotenv library can be used, but just adding it to the project is not enough.
First, you have to follow all the steps described in the react-dotenv documentation (adding a .env file to your project, editing the package.json file).
In my case, .env file looked like this:
REACT_APP_DEPLOY_SETUP='__dps__'
REACT_APP_PORT='__prt__'
REACT_APP_BACKEND_URL='__bur__'
These are just placeholders for real values that will be added during runtime.
Having the .env file ready, the npm scripts prepended with the react-dotenv command, and the variables whitelisted (as described in the library documentation), you can build your app image.
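For reference, the relevant package.json fragment could look roughly like this (a sketch based on the react-dotenv documentation; check the docs for the exact keys):

{
  "scripts": {
    "start": "react-dotenv && react-scripts start",
    "build": "react-dotenv && react-scripts build"
  },
  "react-dotenv": {
    "whitelist": ["REACT_APP_DEPLOY_SETUP", "REACT_APP_PORT", "REACT_APP_BACKEND_URL"]
  }
}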
When the image is ready, add it to the Kubernetes pod config file and replace your variables' placeholders with real values, like this:
[...]
spec:
  containers:
  - name: plg-frontend
    image: localhost:5000/frontend:1.22
    ports:
    - containerPort: 80
    command:
    - sh
    - -c
    args:
    - sed -i "s/__prt__/$REACT_APP_PORT/g" /usr/share/nginx/html/env.js;
      sed -i "s/__bur__/http:\/\/$BACKEND_SERVICE_HOST/g" /usr/share/nginx/html/env.js;
      sed -i "s/__dps__/$REACT_APP_DEPLOY_SETUP/g" /usr/share/nginx/html/env.js;
      nginx -g 'daemon off;'
    env:
    - name: REACT_APP_DEPLOY_SETUP
      value: "development"
    - name: REACT_APP_PORT
      value: "5089"
What happened up there was the replacement of the placeholders with actual values:
- $BACKEND_SERVICE_HOST is an environment variable that exists in the pod and can be read from the running container,
- $REACT_APP_DEPLOY_SETUP is a regular string defined by the user,
- $REACT_APP_PORT is an integer value (it has to be in quotes, like strings!).
The replacement is done with the sed command (or rather: sh -c "sed -i ..."). All of the commands are chained, so don't forget the semicolon at the end of each one.
All of the replacements were made in the /usr/share/nginx/html/env.js file, which is created by the react-dotenv library in the project root. The actual location depends on where you mounted your build image (it's defined in the Dockerfile).
Lastly, the nginx command is called, since this is the final command invoked in the image's Dockerfile. Without this explicit call, the command from the Dockerfile would be overridden by the pod container's command and, in this case, nginx wouldn't start your app.
After the pod is started, you can check whether the variables are present in the container:
kubectl exec <pod-name> -- printenv | grep REACT_APP
But it doesn't mean they were read by your app at runtime. To see whether they were changed to the values from the pod definition, you can either exec into the running container and preview the env.js file, or add some console logging in the app code.
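For example, to preview the generated file directly (using the nginx path from above):

kubectl exec <pod-name> -- cat /usr/share/nginx/html/env.js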

Optimal usage of codecov in a monorepo context with separate flags for each package

I was just wondering what's the best way to configure Codecov for a monorepo setting. For example, let's say I have packages A and B under my monorepo. The way I'm currently using Codecov is via the GitHub action codecov/codecov-action@v1, with multiple uses steps in my GitHub workflow YAML file, like the following:
- uses: codecov/codecov-action@v1
  with:
    files: ./packages/A/coverage/lcov.info
    flags: flag_a
    name: A
- uses: codecov/codecov-action@v1
  with:
    files: ./packages/B/coverage/lcov.info
    flags: flag_b
    name: B
I know it's possible to use a comma-separated value to upload multiple files, but I have to set a separate flag for each package, and doing it that way doesn't seem to work.
Thank you.
If anyone wants to know my solution, here's what I came up with.
I ended up replacing the GitHub action with my own bash script.
Final code:
#!/usr/bin/env bash

codecov_file="${GITHUB_WORKSPACE}/scripts/codecov.sh"
curl -s https://codecov.io/bash > "$codecov_file"
chmod +x "$codecov_file"

cd "${GITHUB_WORKSPACE}/packages"
for dir in */
do
  package="${dir/\//}"
  if [ -d "$package/coverage" ]
  then
    file="$PWD/$package/coverage/lcov.info"
    flag="${package/-/_}"
    "$codecov_file" -f "$file" -F "$flag" -v -t "$CODECOV_TOKEN"
  fi
done
This is what the above bash script does:
1. Download the bash uploader script from Codecov.
2. Move to the packages directory where the packages are located, and go through all the first-level directories.
3. Derive the package name by removing the trailing slash.
4. Enter a directory only if it contains a coverage directory, since only those packages have been tested.
5. Create the file and flag variables (replacing hyphens with underscores, as Codecov doesn't support hyphens in flag names).
6. Execute the downloaded Codecov script, passing the file and flag variables as arguments.
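For reference, a minimal workflow step to invoke the script could look like this (the script path and step name are assumptions; CODECOV_TOKEN comes from your repository secrets):

- name: Upload coverage
  run: bash ./scripts/upload_coverage.sh
  env:
    CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}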

How can I pass environment variables to mongo docker-entrypoint-initdb.d?

I am trying to do the following tutorial:
https://itnext.io/docker-mongodb-authentication-kubernetes-node-js-75ff995151b6
However, in there, they use raw values for the mongo init.js file that is placed within the docker-entrypoint-initdb.d folder.
I would like to use environment variables that come from my CI/CD system (GitLab). Does anyone know how to pass environment variables to the init.js file? I have tried several things, for example using an init.sh shell script instead, but without any success.
If I run the init shell version manually, I can get it working because I call mongo with --eval and pass the values. However, the docker-entrypoint script is called automatically, so I have no control over how it is invoked, and I do not know what I could do to achieve what I want.
Thank you in advance and regards.
You can make use of a shell script to retrieve the env variables and create the user.
initdb.d/init-mongo.sh
set -e

mongo <<EOF
use $MONGO_INITDB_DATABASE
db.createUser({
  user: '$MONGO_INITDB_USER',
  pwd: '$MONGO_INITDB_PWD',
  roles: [{
    role: 'readWrite',
    db: '$MONGO_INITDB_DATABASE'
  }]
})
EOF
docker-compose.yml
version: "3.7"
services:
mongodb:
container_name: "mongodb"
image: mongo:4.4
hostname: mongodb
restart: always
volumes:
- ./data/mongodb/mongod.conf:/etc/mongod.conf
- ./data/mongodb/initdb.d/:/docker-entrypoint-initdb.d/
- ./data/mongodb/data/db/:/data/db/
environment:
- MONGO_INITDB_ROOT_USERNAME=root
- MONGO_INITDB_ROOT_PASSWORD=root
- MONGO_INITDB_DATABASE=development
- MONGO_INITDB_USER=mongodb
- MONGO_INITDB_PWD=mongodb
ports:
- 27017:27017
command: [ "-f", "/etc/mongod.conf" ]
Now you can connect to the development database using mongodb as the username and password.
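For example, from the host (the connection string follows from the compose file above):

mongo "mongodb://mongodb:mongodb@localhost:27017/development"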
Use a shell script (e.g. mongo-init.sh) to access the variables. You can still run JavaScript code inside it, as below.
set -e

mongo <<EOF
use admin
db.createUser({
  user: '$MONGO_ADMIN_USER',
  pwd: '$MONGO_ADMIN_PASSWORD',
  roles: [{
    role: 'readWrite',
    db: 'dummydb'
  }]
})
EOF
A shebang line is not necessary at the beginning, as this file will be sourced.
Until recently, I simply used a .sh shell script in the docker-entrypoint-initdb.d directory to access ENV variables, much like @Lazaro's answer.
It is now possible to access environment variables from JavaScript files using process.env, provided the file is run with the newer mongosh instead of mongo, which is now deprecated.
However, according to the Docs (see 'Initializing a fresh instance'), mongosh is only used for .js files in docker-entrypoint-initdb.d if using version 6 or greater. I can confirm this is working using the mongo:6 image tag.
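For example, a .js init script along these lines should work with the mongo:6 image (a sketch; the variable names are borrowed from the compose file above):

// docker-entrypoint-initdb.d/init.js, executed by mongosh on mongo:6+
db = db.getSiblingDB(process.env.MONGO_INITDB_DATABASE);
db.createUser({
  user: process.env.MONGO_INITDB_USER,
  pwd: process.env.MONGO_INITDB_PWD,
  roles: [{ role: "readWrite", db: process.env.MONGO_INITDB_DATABASE }]
});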
You can use envsubst.
If the command is not found, install it: on your runner's host if you use shell runners; otherwise, in the Docker image used by the runner, or directly in your script.
(NB: Your link isn't free, so I can't adapt to your situation :p )
Example:
init.js.template:
console.log('$GREET $PEOPLE $PUNCTUATION')
console.log('Pipeline from $CI_COMMIT_BRANCH')
.gitlab-ci.yml:
variables:
  GREET: "hello"
  PEOPLE: "world"
  PUNCTUATION: "!"

# ...

script:
  - (envsubst < path/to/init.js.template) > path/to/init.js
  - cat path/to/init.js
Output:
$ (envsubst < init.js.template) > init.js
$ cat init.js
console.log('hello world !')
console.log('Pipeline from master')
In the end, the answer is that you can use a .sh file instead of a .js file within the docker-entrypoint-initdb.d folder. Within the sh script, you can use environment variables directly. However, I could not get that working at first because I had a typo and the environment variables were not created properly.
I prefer this method because it allows you to keep a normal .js file which you can lint, instead of embedding the .js file in a string.
Create a Dockerfile like so:
FROM mongo:5.0.9
USER mongodb
WORKDIR /docker-entrypoint-initdb.d
COPY env_init_mongo.sh env_init_mongo.sh
WORKDIR /writing
COPY mongo_init.js mongo_init.js
WORKDIR /db/data
At the top of your mongo_init.js file, you can just define the variables you need:
db_name = DB_NAME
schema_version = SCHEMA_VERSION
and then, in your env_init_mongo.sh file, you can replace those strings with environment variables, or add lines to the top of the file:
mongo_init="/writing/mongo_init.js"
sed "s/SCHEMA_VERSION/$SCHEMA_VERSION/g" -i $mongo_init
sed "s/DB_NAME/${MONGO_INITDB_DATABASE}/g" -i $mongo_init
sed "1s/^/use ${MONGO_INITDB_DATABASE}\n/" -i $mongo_init # add to top of file
mongo < $mongo_init
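A quick way to try it out (the image tag and example values are hypothetical):

docker build -t custom-mongo .
docker run -e MONGO_INITDB_DATABASE=development -e SCHEMA_VERSION=1 custom-mongo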

How to pass a date arg to my cloudbuild.yaml

My cloudbuild.yaml consists of the following:
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['-m', 'cp', '-r', '/workspace/api-testing/target/cucumber-html-reports', 'gs://testing-reports/$BUILD_ID']
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['-m', 'rm', '-r', 'gs://studio360-testing-reports/latest']
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['-m', 'cp', '-r', '/workspace/api-testing/target/cucumber-html-reports', 'gs://testing-reports/latest']
This way I always have my latest report separated from the older ones. But can I pass a {date} arg or something into my first step, so I have a visual ordering of all the older reports?
(Because there is no way to sort the files by last modified in the GCP Storage bucket.)
Thanks
Change the first block to this:
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['-m', 'cp', '-r', '/workspace/api-testing/target/cucumber-html-reports', 'gs://testing-reports/${_DATE}_$BUILD_ID']
Then run this:
gcloud builds submit . --substitutions _DATE=$(date +%F_%H:%M:%S)
Then you would have something like this in the bucket:
gs://testing-reports/2020-02-13_14:01:40_8a6a7ed0-62e0-43ed-8f97-aa6eca9c2834
EDIT:
For automatic builds started by Cloud Build triggers, use this cloudbuild.yaml:
steps:
- name: 'gcr.io/cloud-builders/gsutil'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    gsutil -m cp -r $FILENAME gs://$BUCKET/$FILENAME-$(date +%F_%H:%M:%S)-$BUILD_ID
This allows the builder to use bash to execute gsutil, so the bash command "date" can be used inside the command.
Pretty sure you should be able to bash out and do something like this:
- name: 'gcr.io/cloud-builders/gsutil'
  entrypoint: 'bash'
  args:
  - -c
  - |
    gsutil -m cp -r /workspace/api-testing/target/cucumber-html-reports gs://testing-reports/$BUILD_ID-$(date +%m-%d-%Y)
To my knowledge, you can't run system commands in substitution variables or environment variables (or at least I haven't been able to figure out how).

How to move/rename a file using an Ansible task on a remote system

How is it possible to move/rename a file/directory using an Ansible module on a remote system? I don't want to use the command/shell tasks and I don't want to copy the file from the local system to the remote system.
From version 2.0, you can use the remote_src parameter in the copy module.
If True it will go to the remote/target machine for the src.
- name: Copy files from foo to bar
  copy: remote_src=True src=/path/to/foo dest=/path/to/bar
If you want to move the file, you then need to delete the old file with the file module:
- name: Remove old files foo
  file: path=/path/to/foo state=absent
From version 2.8, the copy module's remote_src supports recursive copying.
The file module doesn't copy files on the remote system. The src parameter is only used by the file module when creating a symlink to a file.
If you want to move/rename a file entirely on a remote system then your best bet is to use the command module to just invoke the appropriate command:
- name: Move foo to bar
  command: mv /path/to/foo /path/to/bar
If you want to get fancy then you could first use the stat module to check that foo actually exists:
- name: stat foo
  stat: path=/path/to/foo
  register: foo_stat

- name: Move foo to bar
  command: mv /path/to/foo /path/to/bar
  when: foo_stat.stat.exists
I have found the creates option in the command module useful. How about this:
- name: Move foo to bar
  command: creates="path/to/bar" mv /path/to/foo /path/to/bar
I used to do a two-task approach using stat, like Bruce P suggests. Now I do this as one task with creates. I think this is a lot clearer:
- name: Move the src file to dest
  command: mv /path/to/src /path/to/dest
  args:
    removes: /path/to/src
    creates: /path/to/dest
This runs the mv command only when /path/to/src exists and /path/to/dest does not, so it runs once per host, moves the file, then doesn't run again.
I use this method when I need to move a file or directory on several hundred hosts, many of which may be powered off at any given time. It's idempotent and safe to leave in a playbook.
Another option that has worked well for me is using the synchronize module, then removing the original directory with the file module.
Here is an example from the docs:
- synchronize:
    src: /first/absolute/path
    dest: /second/absolute/path
    archive: yes
  delegate_to: "{{ inventory_hostname }}"
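The removal step mentioned above could then look like this:

- name: Remove the original directory
  file:
    path: /first/absolute/path
    state: absent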
I know it's a YEARS old topic, but I got frustrated and built a role for myself to do exactly this for an arbitrary list of files. Extend as you see fit:
main.yml
- name: created destination directory
  file:
    path: /path/to/directory
    state: directory
    mode: '0750'

- include_tasks: move.yml
  loop:
  - file1
  - file2
  - file3
move.yml
- name: stat the file
  stat:
    path: "{{ item }}"
  register: my_file

- name: hard link the file into directory
  file:
    src: /original/path/to/{{ item }}
    dest: /path/to/directory/{{ item }}
    state: hard
  when: my_file.stat.exists

- name: Delete the original file
  file:
    path: /original/path/to/{{ item }}
    state: absent
  when: my_file.stat.exists
Note that hard linking is preferable to copying here, because it inherently preserves ownership and permissions (in addition to not consuming more disk space for a second copy of the file).
This is the way I got it working for me:
Tasks:
- name: checking if the file 1 exists
  stat:
    path: "/path/to/foo abc.xts"
  register: stat_result

- name: moving file 1
  command: mv "/path/to/foo abc.xts" /tmp
  when: stat_result.stat.exists
The playbook above will check whether the file abc.xts exists before moving it to the /tmp folder.
Another way to achieve this is using the file module with state: hard.
This is an example I got to work:
- name: Link source file to another destination
  file:
    src: /path/to/source/file
    path: /target/path/of/file
    state: hard
I've only tested this on localhost (macOS), but it should work on Linux as well. I can't tell for Windows.
Note that absolute paths are needed; otherwise it wouldn't let me create the link. Also, you can't cross filesystems, so working with any mounted media might fail.
The hard link is very similar to moving, if you remove the source file afterwards:
- name: Remove old file
  file:
    path: /path/to/source/file
    state: absent
Another benefit is that changes are persisted when you're in the middle of a play. So if someone changes the source, any change is reflected in the target file.
You can verify the number of links to a file via ls -l. The number of hard links is shown next to the mode (e.g. rwxr-xr-x 2 when a file has two links).
Bruce wasn't attempting to stat the destination to check whether or not to move the file if it was already there; he was making sure the file to be moved actually existed before attempting the mv.
If your interest, like Tom's, is to only move if the file doesn't already exist, I think we should still integrate Bruce's check into the mix:
- name: stat foo
  stat: path=/path/to/foo
  register: foo_stat

- name: Move foo to bar
  command: creates="path/to/bar" mv /path/to/foo /path/to/bar
  when: foo_stat.stat.exists
This may seem like overkill, but if you want to avoid using the command module (which I do, because using command is not idempotent) you can use a combination of copy and unarchive.
Use tar to archive the file(s) you will need. If you think ahead, this actually makes sense: you may want a series of files in a given directory. Create that directory with all of the files and archive them in a tar.
Then use the unarchive module. With the dest: and remote_src: keywords, you can copy all of your files to a temporary folder to start with, and then unpack them exactly where you want to.
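A minimal sketch of that approach (the archive name and paths are hypothetical):

- name: Copy the archive to the remote host
  copy:
    src: files/bundle.tar
    dest: /tmp/bundle.tar

- name: Unpack it where the files should live
  unarchive:
    src: /tmp/bundle.tar
    dest: /path/to/target
    remote_src: yes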
On Windows:
- name: Move old folder to backup
  win_command: "cmd.exe /c move /Y {{ sourcePath }} {{ destinationFolderPath }}"
To rename, use the rename (or ren) command instead.
You can do it with an ad hoc command:
ansible all -m command -a "mv /path/to/foo /path/to/bar"
Or, if you want to do it using a playbook:
- name: Move File foo to destination bar
  command: mv /path/to/foo /path/to/bar
- name: Example
  hosts: localhost
  become: yes
  tasks:
  - name: checking if a file exists
    stat:
      path: "/projects/challenge/simplefile.txt"
    register: file_data
  - name: move the file if file exists
    copy:
      src: /projects/challenge/simplefile.txt
      dest: /home/user/test
    when: file_data.stat.exists
  - name: report a missing file
    debug:
      msg: "the file or directory doesn't exist"
    when: not file_data.stat.exists
