How to keep latest version of cloudbuild.yaml separate in the cloud storage - google-app-engine

My cloudbuild.yaml consists of:
steps:
- name: maven:3.6.0-jdk-8-slim
  entrypoint: 'mvn'
  args: ["clean", "install", "-PgenericApiSuite", "-pl", "api-testing", "-am", "-B"]
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['-m', 'cp', '-r', '/workspace/api-testing/target/cucumber-html-reports', 'gs://testing-reports/$BUILD_ID']
But every time it runs, my bucket stores the report under a new $BUILD_ID.
Is there a way I can keep the latest report separate from the rest?

Sadly, symbolic links don't exist in Cloud Storage. To achieve what you want, you have to handle this manually with these two steps at the end of your job:
# delete the previously existing latest directory
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['-m', 'rm', '-r', 'gs://testing-reports/latest']
# copy the most recent report into the latest directory
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['-m', 'cp', '-r', 'gs://testing-reports/$BUILD_ID', 'gs://testing-reports/latest']
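Note that the plain rm step will fail the whole build on the very first run, when gs://testing-reports/latest does not exist yet, because gsutil exits non-zero when no URLs match. A sketch of a workaround, wrapping the delete in a shell so a missing path is tolerated (the entrypoint override is an assumption, not part of the original answer):
# tolerate a missing "latest" directory on the first run
- name: 'gcr.io/cloud-builders/gsutil'
  entrypoint: 'bash'
  args: ['-c', 'gsutil -m rm -r gs://testing-reports/latest || true']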

Related

How to create a SHACL validation badge showing GitHub action results?

Using https://img.shields.io/static/v1?label=shacl&message=5&color=yellow I can create a static badge; however, I want to make it dynamic and show the output of PySHACL (version 0.17.2), which I run in a GitHub Action:
name: build
on:
  workflow_dispatch:
  push:
    branches:
      - master
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Install dependencies
        run: pip install pyshacl
      - name: Build and Validate
        run: pyshacl -s shacl.ttl -a -f human mydata.ttl
PySHACL returns a validation report such as this:
Validation Report
Conforms: False
Results (11):
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
    Severity: sh:Violation
    Source Shape: meta:ComputerBasedApplicationComponentDomainShape
    Focus Node: bb:IntegrationPlatform
    Value Node: bb:IntegrationPlatform
    Message: Value does not have class meta:ComputerBasedApplicationComponent
[...]
How do I get the number of errors (11 in this case) from the GitHub Action log into my badge?
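One way to get at that number (a sketch, not from the original thread): redirect the report to a file and parse the count out of the "Results (…)" line in the same step, exposing it as a step output. The step id, file name, and output name below are illustrative assumptions:
- name: Build and Validate
  id: shacl
  run: |
    # pyshacl exits non-zero when validation fails, so keep the step alive
    pyshacl -s shacl.ttl -a -f human mydata.ttl > report.txt || true
    # pull the number out of a line like "Results (11):"
    count=$(grep -oP 'Results \(\K[0-9]+' report.txt || echo 0)
    echo "count=$count" >> "$GITHUB_OUTPUT"
A later step could then publish ${{ steps.shacl.outputs.count }} wherever the badge reads it from, for example a JSON file consumed by a shields.io endpoint badge.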

kustomize patching a specific container other than by array (/containers/0)

I'm trying to see if there's a way to apply a kustomize patchTransformer to a specific container in a pod other than using its array index. For example, if I have 3 containers in a pod, (0, 1, 2) and I want to patch container "1" I would normally do something like this:
patch: |-
  - op: add
    path: /spec/containers/1/command
    value: ["sh", "-c", "tail -f /dev/null"]
That is heavily dependent on the container order remaining static. If container "1" is removed for whatever reason, the array is reshuffled and container "2" suddenly becomes container "1", making my patch no longer applicable.
Is there a way to patch by name, or target a label/annotation, or some other mechanism?
path: /spec/containers/${NAME_OF_CONTAINER}/command
Any insight is greatly appreciated.
For future readers: you may have seen JSONPath syntax like this floating around the internet, and hoped that you could select a list item and patch it using Kustomize.
/spec/containers[name=my-app]/command
As @Rico mentioned in his answer: this is a limitation of RFC 6902 (JSON Patch), which only accepts paths in JSON Pointer syntax, defined by RFC 6901.
So, no, you cannot currently address a list item using [key=value] syntax when using kustomize's patchesJson6902.
However, a solution to the problem that the original question highlights around potential reordering of list items does exist without moving to Strategic Merge Patch (which can depend on CRD authors correctly annotating how list-item merges should be applied).
Simply add another RFC 6902 operation to your patch to test that the item remains at the index you specified.
# First, test that the item is still at the list index you expect
- op: test
  path: /spec/containers/0/name
  value: my-app
# Now that you know your item is still at index 0, it's safe to patch its command
- op: replace
  path: /spec/containers/0/command
  value: ["sh", "-c", "tail -f /dev/null"]
The test operation will fail your patch if the value at the specified path does not match what is provided. This way, you can be sure that your other patch operation's dependency on the item's index is still valid!
I use this trick especially when dealing with custom resources, since I:
A) Don't have to give kustomize a whole new openAPI spec, and
B) Don't have to depend on the CRD authors having added the correct extension annotation (like: "x-kubernetes-patch-merge-key": "name") to make sure my strategic merge patches on list items work the way I need them to.
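For completeness, a sketch of how the test-then-replace operations above might be wired into a kustomization.yaml (the file and resource names here are assumptions):
resources:
- deployment.yaml
patchesJson6902:
- target:
    group: apps
    version: v1
    kind: Deployment
    name: my-app
  path: patch.yaml   # the test + replace operations shown above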
This is more of an RFC 6902 patch limitation, together with the fact that containers are defined in a K8s pod as an array and not a hash, where something like this would work:
path: /spec/containers/${NAME_OF_CONTAINER}/command
You could just try a StrategicMergePatch, which is essentially what kubectl apply does.
cat <<EOF > deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      run: my-app
  replicas: 2
  template:
    metadata:
      labels:
        run: my-app
    spec:
      containers:
      - name: my-container
        image: myimage
        ports:
        - containerPort: 80
EOF
cat <<EOF > set_command.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - name: my-container
        command: ["sh", "-c", "tail -f /dev/null"]
EOF
cat <<EOF >./kustomization.yaml
resources:
- deployment.yaml
patchesStrategicMerge:
- set_command.yaml
EOF
✌️
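A quick way to check that the patch landed (a sketch; kustomize renders the command as a block list):
kustomize build . | grep -A3 'command:'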

Access environment variables in webpack.js react in kubernetes environment

This is part of my webpack config:
output: {
  filename: "[name].[chunkhash:8].js",
  path: path.resolve(__dirname, "../../dist/static/"),
  publicPath: `${process.env.STATIC_URL}/xxxxxxx/static/`
}
I want to access environment variables set in configmap of kubernetes here. Is there a way to do that?
There is a dedicated envFrom: field in the PodSpec that allows injecting all of a ConfigMap's keys as environment variables (assuming they are "environment safe," i.e. no integer, boolean, or other non-string values):
containers:
- image: whatever
  envFrom:
  - configMapRef:
      name: your-configmap-name-goes-here
Or, if you just want that one key, then there is a similar valueFrom: field in the env: items themselves:
containers:
- image: whatever
  env:
  - name: STATIC_URL
    valueFrom:
      configMapKeyRef:
        name: your-configmap-name-goes-here
        key: whatever-key-holds-the-static-url-value
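For reference, a minimal sketch of the ConfigMap those snippets point at (the names and value are illustrative assumptions):
apiVersion: v1
kind: ConfigMap
metadata:
  name: your-configmap-name-goes-here
data:
  whatever-key-holds-the-static-url-value: "https://static.example.com"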

How to loop an array of packages in an Ansible role

I have made a role for installing php5-fpm (alongside other roles: nginx, wordpress, mysql). I want to install the php5 set of packages, but I have a problem looping over the array of packages. Any tips on how to solve this would be appreciated.
The php5-fpm role includes:
roles/php5-fpm/defaults/main.yml
roles/php5-fpm/tasks/install.yml
defaults/main.yml:
---
# defaults file for php-fpm
# filename: roles/php5-fpm/defaults/main.yml
#
php5:
  packages:
    - php5-fpm
    - php5-common
    - php5-curl
    - php5-mysql
    - php5-cli
    - php5-gd
    - php5-mcrypt
    - php5-suhosin
    - php5-memcache
  service:
    name: php5-fpm
tasks/install.yml:
# filename: roles/php5-fpm/tasks/install.yml
#
- name: install php5-fpm and family
  apt:
    name: "{{ item }}"
  with_items: php5.packages
  notify:
    - restart php5-fpm service
I want the "with_items" in install.yml to look into defaults/main.yml and pick up that array of packages.
Expand the variable.
Wrong:
with_items: php5.packages
Correct:
loop: "{{ php5.packages }}"
Quoting from Loops
We added loop in Ansible 2.5. It is not yet a full replacement for with_<lookup>, but we recommend it for most use cases.
We have not deprecated the use of with_<lookup> - that syntax will still be valid for the foreseeable future.
We are looking to improve loop syntax - watch this page and the changelog for updates.
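Putting it together, the corrected task might look like this (a sketch; state: present is an addition, and with a modern apt module you could also pass the whole list directly to name: instead of looping, which is faster):
# filename: roles/php5-fpm/tasks/install.yml
- name: install php5-fpm and family
  apt:
    name: "{{ item }}"
    state: present
  loop: "{{ php5.packages }}"
  notify:
    - restart php5-fpm service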

bosh deploy error Error 190014

My BOSH version is 1.3232.0.
My platform is vSphere. I searched Google and the BOSH site; it may relate to the cloud-config opt-in, but I have no more ideas.
I created my own MongoDB release; when I upload the manifest, it throws Error 190014:
Director task 163
Started preparing deployment > Preparing deployment. Failed: Deployment manifest should not contain cloud config properties: ["compilation", "networks", "resource_pools"] (00:00:00)
Error 190014: Deployment manifest should not contain cloud config properties: ["compilation", "networks", "resource_pools"]
My manifest is:
---
name: mongodb3
director_uuid: d3df0341-4aeb-4706-940b-6f4681090af8

releases:
- name: mongodb
  version: latest

compilation:
  workers: 1
  reuse_compilation_vms: false
  network: default
  cloud_properties:
    cpu: 4
    datacenters:
    - clusters:
      - cf_z2:
          resource_pool: mongodb
      name: cf_z2
    disk: 20480
    ram: 4096

update:
  canaries: 1
  canary_watch_time: 15000-30000
  update_watch_time: 15000-30000
  max_in_flight: 1

networks:
- name: default
  type: manual
  subnets:
  - cloud_properties:
      name: VM Network
    range: 10.62.90.133/25
    gateway: 10.62.90.129
    static:
    - 10.62.90.140
    reserved:
    - 10.62.90.130 - 10.62.90.139
    - 10.62.90.151 - 10.62.90.254
    dns:
    - 10.254.174.10
    - 10.104.128.235

resource_pools:
- cloud_properties:
    cpu: 2
    datacenters:
    - clusters:
      - cf_z2:
          resource_pool: mongodb
      name: cf
    disk: 10480
    ram: 4096
  name: mongodb3
  network: default
  stemcell:
    name: bosh-vsphere-esxi-ubuntu-trusty-go_agent
    version: latest

jobs:
- name: mongodb3
  instances: 1
  templates:
  - {name: mongodb3, release: mongodb3}
  persistent_disk: 10_240
  resource_pools: mongodb3
  networks:
  - name: default
Solved: the sections the error names (compilation, networks, resource_pools) should be pulled out into a single separate file (the cloud config) and deployed to BOSH on their own; the deployment manifest should no longer contain them.
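A sketch of the resulting workflow, assuming the CLI that matches director 1.3232.0 (the file names are illustrative):
# upload the extracted cloud config (compilation / networks / resource_pools)
bosh update cloud-config cloud-config.yml
# then target and deploy the trimmed manifest
bosh deployment mongodb3.yml
bosh deploy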
