How to create a SHACL validation badge showing GitHub action results?

Using https://img.shields.io/static/v1?label=shacl&message=5&color=yellow I can create a static badge. However, I want to make it dynamic and show the output of PySHACL (version 0.17.2), which I run in a GitHub Action:
name: build
on:
  workflow_dispatch:
  push:
    branches:
      - master
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Install dependencies
        run: pip install pyshacl
      - name: Build and Validate
        run: pyshacl -s shacl.ttl -a -f human mydata.ttl
PySHACL returns a validation report such as this:
Validation Report
Conforms: False
Results (11):
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
Severity: sh:Violation
Source Shape: meta:ComputerBasedApplicationComponentDomainShape
Focus Node: bb:IntegrationPlatform
Value Node: bb:IntegrationPlatform
Message: Value does not have class meta:ComputerBasedApplicationComponent
[...]
How do I get the number of errors (11 in this case) from the GitHub Action log into my badge?
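One way to do it (a sketch, not an official recipe from shields.io or PySHACL): capture the report, parse the violation count out of it, write the count into a small JSON file in shields.io's endpoint-badge format, and publish that file somewhere publicly reachable (a gist, a gh-pages branch, or similar). The badge then points at https://img.shields.io/endpoint?url=<public URL of badge.json>. Below, the Build and Validate step is adjusted so the job does not abort on violations; report.txt and badge.json are arbitrary names, and the publishing step is left out:

      - name: Build and Validate
        # pyshacl exits non-zero when the data does not conform, so don't fail the job here
        run: pyshacl -s shacl.ttl -a -f human mydata.ttl | tee report.txt || true
      - name: Write badge JSON
        run: |
          # Extract the violation count from a line like "Results (11):"; default to 0 when the data conforms
          COUNT=$(grep -oP 'Results \(\K[0-9]+' report.txt || echo 0)
          if [ "$COUNT" -eq 0 ]; then COLOR=brightgreen; else COLOR=red; fi
          echo "{\"schemaVersion\": 1, \"label\": \"shacl\", \"message\": \"$COUNT\", \"color\": \"$COLOR\"}" > badge.json

If you would rather not host the JSON yourself, there are third-party actions that push such a value to a gist for the endpoint badge, but the parsing step above stays the same.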

Related

kustomize patching a specific container other than by array (/containers/0)

I'm trying to see if there's a way to apply a kustomize patchTransformer to a specific container in a pod other than by using its array index. For example, if I have three containers in a pod (0, 1, 2) and I want to patch container "1", I would normally do something like this:
patch: |-
  - op: add
    path: /spec/containers/1/command
    value: ["sh", "-c", "tail -f /dev/null"]
That is heavily dependent on the container order remaining static. If container "1" is removed for whatever reason, the array is reshuffled and container "2" suddenly becomes container "1", making my patch no longer applicable.
Is there a way to patch by name, or target a label/annotation, or some other mechanism?
path: /spec/containers/${NAME_OF_CONTAINER}/command
Any insight is greatly appreciated.
For future readers: you may have seen JSONPath syntax like this floating around the internet, and hoped that you could select a list item and patch it using Kustomize.
/spec/containers[name=my-app]/command
As @Rico mentioned in his answer: this is a limitation of JSON 6902 patches; they only accept paths in JSON Pointer syntax, as defined by RFC 6901.
So, no, you cannot currently address a list item using [key=value] syntax when using kustomize's patchesJson6902.
However, a solution to the problem that the original question highlights around potential reordering of list items does exist without moving to Strategic Merge Patch (which can depend on CRD authors correctly annotating how list-item merges should be applied).
Simply add another JSON6902 operation to your patches to test that the item remains at the index you specified.
# First, test that the item is still at the list index you expect
- op: test
  path: /spec/containers/0/name
  value: my-app
# Now that you know your item is still at index 0, it's safe to patch its command
- op: replace
  path: /spec/containers/0/command
  value: ["sh", "-c", "tail -f /dev/null"]
The test operation will fail your patch if the value at the specified path does not match what is provided. This way, you can be sure that your other patch operation's dependency on the item's index is still valid!
I use this trick especially when dealing with custom resources, since I:
A) Don't have to give kustomize a whole new openAPI spec, and
B) Don't have to depend on the CRD authors having added the correct extension annotation (like: "x-kubernetes-patch-merge-key": "name") to make sure my strategic merge patches on list items work the way I need them to.
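For completeness, here is a minimal sketch of how the test + replace operations above could be wired up with patchesJson6902, assuming they are saved as patch-command.yaml and the target is a Pod named my-app (both file names are illustrative):

# kustomization.yaml
resources:
- pod.yaml

patchesJson6902:
- target:
    version: v1
    kind: Pod
    name: my-app
  path: patch-command.yaml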
This is more a limitation of JSON 6902 patches, combined with the fact that containers are defined in a Kubernetes pod as an array rather than a hash, where something like this would work:
path: /spec/containers/${NAME_OF_CONTAINER}/command
You could just try a strategic merge patch, which is essentially what kubectl apply does.
cat <<EOF > deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      run: my-app
  replicas: 2
  template:
    metadata:
      labels:
        run: my-app
    spec:
      containers:
      - name: my-app
        image: myimage
        ports:
        - containerPort: 80
EOF
cat <<EOF > set_command.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - name: my-app
        command: ["sh", "-c", "tail -f /dev/null"]
EOF
cat <<EOF >./kustomization.yaml
resources:
- deployment.yaml
patchesStrategicMerge:
- set_command.yaml
EOF
✌️
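Whichever approach you choose, rendering the output with kubectl kustomize . (or kustomize build .) before applying it is a quick way to confirm that the intended container was patched.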

Zeppelin k8s: change interpreter pod configuration

I've configured my zeppelin on kubernetes using:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zeppelin
  labels: [...]
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: zeppelin
      app.kubernetes.io/instance: zeppelin
  template:
    metadata:
      labels:
        app.kubernetes.io/name: zeppelin
        app.kubernetes.io/instance: zeppelin
    spec:
      serviceAccountName: zeppelin
      containers:
      - name: zeppelin
        image: "apache/zeppelin:0.9.0"
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
        [...]
        env:
        - name: ZEPPELIN_PORT
          value: "8080"
        - name: ZEPPELIN_K8S_CONTAINER_IMAGE
          value: apache/zeppelin:0.9.0
        - name: ZEPPELIN_RUN_MODE
          value: k8s
        - name: ZEPPELIN_K8S_SPARK_CONTAINER_IMAGE
          value: docker-registry.default.svc:5000/ra-iot-dev/spark:2.4.5
When a new paragraph job is performed, Zeppelin, since it is running in k8s mode, creates a pod:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
spark-ghbvld 0/1 Completed 0 9m -----<<<<<<<
spark-master-0 1/1 Running 0 38m
spark-worker-0 1/1 Running 0 38m
zeppelin-6cc658d59f-gk2lp 1/1 Running 0 24m
In short, this pod first copies the Spark home folder from ZEPPELIN_K8S_SPARK_CONTAINER_IMAGE into the main container and then executes the interpreter.
The problem arises here:
I'm getting this error message in the created pod:
Interpreter launch command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Dfile.encoding=UTF-8 -Dlog4j.configuration=file:///zeppelin/conf/log4j.properties -Dzeppelin.log.file='/zeppelin/logs/zeppelin-interpreter-spark-shared_process--spark-ghbvld.log' -Xms1024m -Xmx2048m -XX:MaxPermSize=512m -cp ":/zeppelin/interpreter/spark/dep/*:/zeppelin/interpreter/spark/*::/zeppelin/interpreter/zeppelin-interpreter-shaded-0.9.0-preview1.jar" org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer zeppelin-6cc658d59f-gk2lp.ra-iot-dev.svc 36161 "spark-shared_process" 12321:12321
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/zeppelin/interpreter/spark/dep/zeppelin-spark-dependencies-0.9.0-preview1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/zeppelin/interpreter/spark/spark-interpreter-0.9.0-preview1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
WARN [2020-06-05 06:35:05,694] ({main} ZeppelinConfiguration.java[create]:159) - Failed to load configuration, proceeding with a default
INFO [2020-06-05 06:35:05,745] ({main} ZeppelinConfiguration.java[create]:171) - Server Host: 0.0.0.0
Exception in thread "main" java.lang.NumberFormatException: For input string: "tcp://172.30.203.33:80"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:580)
at java.lang.Integer.parseInt(Integer.java:615)
at org.apache.zeppelin.conf.ZeppelinConfiguration.getInt(ZeppelinConfiguration.java:248)
at org.apache.zeppelin.conf.ZeppelinConfiguration.getInt(ZeppelinConfiguration.java:243)
at org.apache.zeppelin.conf.ZeppelinConfiguration.getServerPort(ZeppelinConfiguration.java:327)
at org.apache.zeppelin.conf.ZeppelinConfiguration.create(ZeppelinConfiguration.java:173)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer.<init>(RemoteInterpreterServer.java:144)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer.<init>(RemoteInterpreterServer.java:152)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer.main(RemoteInterpreterServer.java:321)
As you can see, the main problem is:
Exception in thread "main" java.lang.NumberFormatException: For input string: "tcp://172.30.203.33:80"
I've tried to add the zeppelin.server.port property to the interpreter configuration using the Zeppelin web frontend, navigating to Interpreters -> Spark Interpreter -> add property.
However, the problem persists.
Any ideas about how to override zeppelin.server.port, or ZEPPELIN_PORT on generated interpreter pod?
I also dumped the interpreter pod manifest created by Zeppelin:
$ kubectl get pods -o=yaml spark-ghbvld
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"app":"spark-ghbvld","interpreterGroupId":"spark-shared_process","interpreterSettingName":"spark"},"name":"spark-ghbvld","namespace":"ra-iot-dev"},"spec":{"automountServiceAccountToken":true,"containers":[{"command":["sh","-c","$(ZEPPELIN_HOME)/bin/interpreter.sh -d $(ZEPPELIN_HOME)/interpreter/spark -r 12321:12321 -c zeppelin-6cc658d59f-gk2lp.ra-iot-dev.svc -p 36161 -i spark-shared_process -l /tmp/local-repo -g spark"],"env":[{"name":"PYSPARK_PYTHON","value":"python"},{"name":"PYSPARK_DRIVER_PYTHON","value":"python"},{"name":"SERVICE_DOMAIN","value":null},{"name":"ZEPPELIN_HOME","value":"/zeppelin"},{"name":"INTERPRETER_GROUP_ID","value":"spark-shared_process"},{"name":"SPARK_HOME","value":null}],"image":"apache/zeppelin:0.9.0","lifecycle":{"preStop":{"exec":{"command":["sh","-c","ps -ef | grep org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer | grep -v grep | awk '{print $2}' | xargs kill"]}}},"name":"spark","volumeMounts":[{"mountPath":"/spark","name":"spark-home"}]}],"initContainers":[{"command":["sh","-c","cp -r /opt/spark/* /spark/"],"image":"docker-registry.default.svc:5000/ra-iot-dev/spark:2.4.5","name":"spark-home-init","volumeMounts":[{"mountPath":"/spark","name":"spark-home"}]}],"restartPolicy":"Never","terminationGracePeriodSeconds":30,"volumes":[{"emptyDir":{},"name":"spark-home"}]}}
    openshift.io/scc: anyuid
  creationTimestamp: "2020-06-05T06:34:36Z"
  labels:
    app: spark-ghbvld
    interpreterGroupId: spark-shared_process
    interpreterSettingName: spark
  name: spark-ghbvld
  namespace: ra-iot-dev
  resourceVersion: "224863130"
  selfLink: /api/v1/namespaces/ra-iot-dev/pods/spark-ghbvld
  uid: a04a0d70-a6f6-11ea-9e39-0050569f5f65
spec:
  automountServiceAccountToken: true
  containers:
  - command:
    - sh
    - -c
    - $(ZEPPELIN_HOME)/bin/interpreter.sh -d $(ZEPPELIN_HOME)/interpreter/spark -r
      12321:12321 -c zeppelin-6cc658d59f-gk2lp.ra-iot-dev.svc -p 36161 -i spark-shared_process
      -l /tmp/local-repo -g spark
    env:
    - name: PYSPARK_PYTHON
      value: python
    - name: PYSPARK_DRIVER_PYTHON
      value: python
    - name: SERVICE_DOMAIN
    - name: ZEPPELIN_HOME
      value: /zeppelin
    - name: INTERPRETER_GROUP_ID
      value: spark-shared_process
    - name: SPARK_HOME
    image: apache/zeppelin:0.9.0
    imagePullPolicy: IfNotPresent
    lifecycle:
      preStop:
        exec:
          command:
          - sh
          - -c
          - ps -ef | grep org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer
            | grep -v grep | awk '{print $2}' | xargs kill
    name: spark
    resources: {}
    securityContext:
      capabilities:
        drop:
        - MKNOD
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /spark
      name: spark-home
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-n4lpw
      readOnly: true
  dnsPolicy: ClusterFirst
  imagePullSecrets:
  - name: default-dockercfg-qs7sj
  initContainers:
  - command:
    - sh
    - -c
    - cp -r /opt/spark/* /spark/
    image: docker-registry.default.svc:5000/ra-iot-dev/spark:2.4.5
    imagePullPolicy: IfNotPresent
    name: spark-home-init
    resources: {}
    securityContext:
      capabilities:
        drop:
        - MKNOD
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /spark
      name: spark-home
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-n4lpw
      readOnly: true
  nodeName: node2.si-origin-cluster.t-systems.es
  nodeSelector:
    region: primary
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext:
    seLinuxOptions:
      level: s0:c30,c0
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - emptyDir: {}
    name: spark-home
  - name: default-token-n4lpw
    secret:
      defaultMode: 420
      secretName: default-token-n4lpw
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-06-05T06:35:03Z"
    reason: PodCompleted
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-06-05T06:35:07Z"
    reason: PodCompleted
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: null
    reason: PodCompleted
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-06-05T06:34:37Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://8c3977241a20be1600180525e4f8b737c8dc5954b6dc0826a7fc703ff6020a70
    image: docker.io/apache/zeppelin:0.9.0
    imageID: docker-pullable://docker.io/apache/zeppelin@sha256:0691909f6884319d366f5d3a5add8802738d6240a83b2e53e980caeb6c658092
    lastState: {}
    name: spark
    ready: false
    restartCount: 0
    state:
      terminated:
        containerID: docker://8c3977241a20be1600180525e4f8b737c8dc5954b6dc0826a7fc703ff6020a70
        exitCode: 0
        finishedAt: "2020-06-05T06:35:05Z"
        reason: Completed
        startedAt: "2020-06-05T06:35:05Z"
  hostIP: 10.49.160.21
  initContainerStatuses:
  - containerID: docker://34701d70eec47367a928dc382326014c76fc49c95be92562e68911f36b4c6242
    image: docker-registry.default.svc:5000/ra-iot-dev/spark:2.4.5
    imageID: docker-pullable://docker-registry.default.svc:5000/ra-iot-dev/spark@sha256:1cbcdacbcc55b2fc97795a4f051429f69ff3666abbd936e08e180af93a11ab65
    lastState: {}
    name: spark-home-init
    ready: true
    restartCount: 0
    state:
      terminated:
        containerID: docker://34701d70eec47367a928dc382326014c76fc49c95be92562e68911f36b4c6242
        exitCode: 0
        finishedAt: "2020-06-05T06:35:02Z"
        reason: Completed
        startedAt: "2020-06-05T06:35:02Z"
  phase: Succeeded
  podIP: 10.131.0.203
  qosClass: BestEffort
  startTime: "2020-06-05T06:34:37Z"
ENVIRONMENT VARIABLES:
PATH=/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=spark-xonray2
PYSPARK_PYTHON=python
PYSPARK_DRIVER_PYTHON=python
SERVICE_DOMAIN=
ZEPPELIN_HOME=/zeppelin
INTERPRETER_GROUP_ID=spark-shared_process
SPARK_HOME=
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
MONGODB_PORT_27017_TCP_PORT=27017
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.50.211
ZEPPELIN_PORT_80_TCP_ADDR=172.30.57.29
MONGODB_PORT=tcp://172.30.240.109:27017
MONGODB_PORT_27017_TCP=tcp://172.30.240.109:27017
SPARK_MASTER_SVC_PORT_7077_TCP_PROTO=tcp
SPARK_MASTER_SVC_PORT_7077_TCP_ADDR=172.30.88.254
SPARK_MASTER_SVC_PORT_80_TCP=tcp://172.30.88.254:80
MONGODB_PORT_27017_TCP_PROTO=tcp
KAFKA_0_EXTERNAL_PORT=tcp://172.30.235.145:9094
KAFKA_PORT_9092_TCP=tcp://172.30.164.40:9092
KUBERNETES_PORT_53_UDP_PROTO=udp
ZOOKEEPER_PORT_2888_TCP=tcp://172.30.222.17:2888
ZEPPELIN_PORT_80_TCP=tcp://172.30.57.29:80
ZEPPELIN_PORT_80_TCP_PROTO=tcp
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.133.154
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.245.33:1
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.117.125:1
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
ORION_PORT_80_TCP_ADDR=172.30.55.76
SPARK_MASTER_SVC_PORT_7077_TCP_PORT=7077
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.229.165
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
KAFKA_0_EXTERNAL_PORT_9094_TCP_ADDR=172.30.235.145
KAFKA_PORT_9092_TCP_PORT=9092
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.245.33
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
ZOOKEEPER_SERVICE_HOST=172.30.222.17
ZEPPELIN_SERVICE_PORT=80
KAFKA_0_EXTERNAL_SERVICE_PORT=9094
GREENPLUM_SERVICE_PORT_HTTP=5432
KAFKA_0_EXTERNAL_SERVICE_HOST=172.30.235.145
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_SERVICE_PORT=1
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
KUBERNETES_SERVICE_HOST=172.30.0.1
ZOOKEEPER_PORT_2181_TCP_PROTO=tcp
ZOOKEEPER_PORT_3888_TCP_PORT=3888
ORION_PORT_80_TCP_PORT=80
MONGODB_SERVICE_PORT_MONGODB=27017
KUBERNETES_PORT_443_TCP_ADDR=172.30.0.1
ZOOKEEPER_PORT_2888_TCP_ADDR=172.30.222.17
SPARK_MASTER_SVC_SERVICE_PORT_HTTP=80
GREENPLUM_SERVICE_PORT=5432
GREENPLUM_PORT_5432_TCP_PORT=5432
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_SERVICE_PORT=1
ZOOKEEPER_PORT_3888_TCP=tcp://172.30.222.17:3888
ZOOKEEPER_PORT_3888_TCP_PROTO=tcp
MONGODB_SERVICE_PORT=27017
KAFKA_SERVICE_PORT_TCP_CLIENT=9092
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.50.211
ZOOKEEPER_SERVICE_PORT_TCP_CLIENT=2181
ZOOKEEPER_SERVICE_PORT_FOLLOWER=2888
KAFKA_SERVICE_PORT=9092
SPARK_MASTER_SVC_PORT_80_TCP_PORT=80
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.50.211:1
ORION_SERVICE_HOST=172.30.55.76
KAFKA_PORT_9092_TCP_PROTO=tcp
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_53_UDP=udp://172.30.0.1:53
KUBERNETES_PORT_53_UDP_ADDR=172.30.0.1
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
ZOOKEEPER_PORT_3888_TCP_ADDR=172.30.222.17
ZEPPELIN_SERVICE_PORT_HTTP=80
ORION_PORT_80_TCP=tcp://172.30.55.76:80
GREENPLUM_PORT_5432_TCP_PROTO=tcp
SPARK_MASTER_SVC_PORT=tcp://172.30.88.254:7077
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.178.127
MONGODB_SERVICE_HOST=172.30.240.109
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_SERVICE_PORT=1
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.245.33
KUBERNETES_PORT_53_TCP_ADDR=172.30.0.1
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_PORT=tcp://172.30.178.127:1
ORION_PORT=tcp://172.30.55.76:80
GREENPLUM_PORT_5432_TCP_ADDR=172.30.0.147
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.229.165:1
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_PORT=tcp://172.30.50.211:1
ORION_SERVICE_PORT=80
ORION_PORT_80_TCP_PROTO=tcp
KAFKA_0_EXTERNAL_PORT_9094_TCP=tcp://172.30.235.145:9094
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_PORT=tcp://172.30.167.19:1
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_SERVICE_PORT=1
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.229.165
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
KAFKA_0_EXTERNAL_SERVICE_PORT_TCP_KAFKA=9094
KAFKA_0_EXTERNAL_PORT_9094_TCP_PROTO=tcp
SPARK_MASTER_SVC_SERVICE_HOST=172.30.88.254
KUBERNETES_SERVICE_PORT_DNS_TCP=53
KUBERNETES_PORT_53_UDP_PORT=53
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.178.127:1
ZEPPELIN_SERVICE_HOST=172.30.57.29
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_SERVICE_PORT=1
SPARK_MASTER_SVC_PORT_80_TCP_ADDR=172.30.88.254
KUBERNETES_PORT=tcp://172.30.0.1:443
ZOOKEEPER_PORT_2181_TCP_PORT=2181
ZOOKEEPER_PORT_2888_TCP_PROTO=tcp
SPARK_MASTER_SVC_SERVICE_PORT=7077
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_PORT=tcp://172.30.245.33:1
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_PORT=tcp://172.30.229.165:1
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
ZOOKEEPER_SERVICE_PORT_TCP_ELECTION=3888
ZOOKEEPER_PORT=tcp://172.30.222.17:2181
ZOOKEEPER_PORT_2181_TCP_ADDR=172.30.222.17
SPARK_MASTER_SVC_PORT_7077_TCP=tcp://172.30.88.254:7077
KUBERNETES_SERVICE_PORT_DNS=53
KUBERNETES_PORT_443_TCP=tcp://172.30.0.1:443
ZEPPELIN_PORT_80_TCP_PORT=80
KAFKA_0_EXTERNAL_PORT_9094_TCP_PORT=9094
GREENPLUM_SERVICE_HOST=172.30.0.147
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_PORT=tcp://172.30.117.125:1
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_PORT=tcp://172.30.133.154:1
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.133.154
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.167.19
KUBERNETES_PORT_53_TCP_PROTO=tcp
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_SERVICE_PORT=1
ORION_SERVICE_PORT_HTTP=80
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.167.19:1
SPARK_MASTER_SVC_SERVICE_PORT_CLUSTER=7077
KAFKA_SERVICE_HOST=172.30.164.40
GREENPLUM_PORT=tcp://172.30.0.147:5432
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.117.125
KUBERNETES_PORT_53_TCP=tcp://172.30.0.1:53
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.178.127
ZEPPELIN_PORT=tcp://172.30.57.29:80
KAFKA_PORT=tcp://172.30.164.40:9092
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_53_TCP_PORT=53
SPARK_MASTER_SVC_PORT_80_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT=443
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_SERVICE_PORT=1
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.117.125
MONGODB_PORT_27017_TCP_ADDR=172.30.240.109
GREENPLUM_PORT_5432_TCP=tcp://172.30.0.147:5432
KUBERNETES_PORT_443_TCP_PROTO=tcp
KAFKA_PORT_9092_TCP_ADDR=172.30.164.40
ZOOKEEPER_SERVICE_PORT=2181
ZOOKEEPER_PORT_2181_TCP=tcp://172.30.222.17:2181
ZOOKEEPER_PORT_2888_TCP_PORT=2888
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.133.154:1
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.167.19
Z_VERSION=0.9.0-preview1
LOG_TAG=[ZEPPELIN_0.9.0-preview1]:
Z_HOME=/zeppelin
LANG=en_US.UTF-8
LC_ALL=en_US.UTF-8
ZEPPELIN_ADDR=0.0.0.0
JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
HOME=/
Judging by the values of the environment variables, you also have a service with the metadata name 'spark-master'. Change this name, for example:
apiVersion: v1
kind: Service
metadata:
  name: "master-spark-service"
spec:
  ports:
  - name: spark
    port: 7077
    targetPort: 7077
  selector:
    component: "spark-master"
  type: ClusterIP
In this case, Kubernetes will not override the port value.
ZEPPELIN_PORT is set by k8s service discovery, because your pod/service name is zeppelin!
Just change the pod/service name to something else, or disable the service discovery environment variables (see https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#accessing-the-service); this is just enableServiceLinks: false in your Zeppelin pod template definition.
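For reference, a minimal sketch of where that flag sits in the Deployment from the question (only the relevant part of the pod template is shown):

spec:
  template:
    spec:
      enableServiceLinks: false  # stop Kubernetes from injecting *_SERVICE_* / *_PORT env vars for every service
      serviceAccountName: zeppelin
      containers:
      - name: zeppelin
        image: "apache/zeppelin:0.9.0"

Keep in mind that the NumberFormatException occurs in the interpreter pod that Zeppelin spawns, not in the Zeppelin pod itself; if renaming the service is not an option, the same flag would have to end up in the interpreter pod's spec as well (Zeppelin builds that pod from its own interpreter pod template), which is why renaming the zeppelin service is usually the simpler fix.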

How to keep latest version of cloudbuild.yaml separate in the cloud storage

My cloudbuild.yaml consists of:
steps:
- name: maven:3.6.0-jdk-8-slim
  entrypoint: 'mvn'
  args: ["clean", "install", "-PgenericApiSuite", "-pl", "api-testing", "-am", "-B"]
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['-m', 'cp', '-r', '/workspace/api-testing/target/cucumber-html-reports', 'gs://testing-reports/$BUILD_ID']
But every time it runs, my bucket stores the reports under their build ID.
Is there a way I can keep the latest report separate from the rest?
Sadly, symbolic links don't exist in Cloud Storage. To achieve what you want, you have to handle this manually with these two steps at the end of your job:
# delete the previously existing latest directory
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['-m', 'rm', '-r', 'gs://testing-reports/latest']
# copy the most recent report into the latest directory
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['-m', 'cp', '-r', 'gs://testing-reports/$BUILD_ID', 'gs://testing-reports/latest']
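One caveat: on the very first build the rm step fails because gs://testing-reports/latest does not exist yet, so that step needs to be allowed to fail. A variant that avoids the delete step entirely is to mirror the freshly built report from the workspace into latest with gsutil rsync (the paths below are the ones from the question):

# keep gs://testing-reports/latest in sync with the current report (-d removes stale files)
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['-m', 'rsync', '-r', '-d', '/workspace/api-testing/target/cucumber-html-reports', 'gs://testing-reports/latest']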

How to loop an array of packages in an Ansible role

I have made a role for installing php5-fpm (alongside other roles: nginx, wordpress, mysql). I want to install the php5 set of packages, but I have a problem with looping over the array of packages. Please give me some tips on how to solve this issue.
The php5-fpm role includes:
roles/php5-fpm/defaults/main.yml
roles/php5-fpm/tasks/install.yml
defaults/main.yml:
---
# defaults file for php-fpm
# filename: roles/php5-fpm/defaults/main.yml
#
php5:
  packages:
    - php5-fpm
    - php5-common
    - php5-curl
    - php5-mysql
    - php5-cli
    - php5-gd
    - php5-mcrypt
    - php5-suhosin
    - php5-memcache
  service:
    name: php5-fpm
tasks/install.yml:
# filename: roles/php5-fpm/tasks/install.yml
#
- name: install php5-fpm and family
  apt:
    name: "{{ item }}"
  with_items: php5.packages
  notify:
    - restart php5-fpm service
I want that "with_items" from install.yml look into defaults/main.yml and take that array of packages
Expand the variable
wrong
with_items: php5.packages
correct
loop: "{{ php5.packages }}"
Quoting from Loops
We added loop in Ansible 2.5. It is not yet a full replacement for with_, but we recommend it for most use cases.
We have not deprecated the use of with_ - that syntax will still be valid for the foreseeable future.
We are looking to improve loop syntax - watch this page and the changelog for updates.
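Putting it together, roles/php5-fpm/tasks/install.yml could look like the sketch below. Since the apt module also accepts a list directly, the per-item loop can be dropped entirely (the handler name is the one from the question); if you prefer the loop, keep name: "{{ item }}" together with loop: "{{ php5.packages }}":

# filename: roles/php5-fpm/tasks/install.yml
- name: install php5-fpm and family
  apt:
    name: "{{ php5.packages }}"  # apt installs the whole list in one transaction
    state: present
  notify:
    - restart php5-fpm service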

bosh deploy error Error 190014

My BOSH version is 1.3232.0.
My platform is vSphere. I searched Google and the BOSH site; it may be related to the cloud-config opt-in, but I have no more ideas.
I created my own mongodb release. When I upload the manifest, it throws Error 190014:
Director task 163
Started preparing deployment > Preparing deployment. Failed: Deployment manifest should not contain cloud config properties: ["compilation", "networks", "resource_pools"] (00:00:00)
Error 190014: Deployment manifest should not contain cloud config properties: ["compilation", "networks", "resource_pools"]
My manifest is:
---
name: mongodb3
director_uuid: d3df0341-4aeb-4706-940b-6f4681090af8

releases:
- name: mongodb
  version: latest

compilation:
  workers: 1
  reuse_compilation_vms: false
  network: default
  cloud_properties:
    cpu: 4
    datacenters:
    - clusters:
      - cf_z2:
          resource_pool: mongodb
      name: cf_z2
    disk: 20480
    ram: 4096

update:
  canaries: 1
  canary_watch_time: 15000-30000
  update_watch_time: 15000-30000
  max_in_flight: 1

networks:
- name: default
  type: manual
  subnets:
  - cloud_properties:
      name: VM Network
    range: 10.62.90.133/25
    gateway: 10.62.90.129
    static:
    - 10.62.90.140
    reserved:
    - 10.62.90.130 - 10.62.90.139
    - 10.62.90.151 - 10.62.90.254
    dns:
    - 10.254.174.10
    - 10.104.128.235

resource_pools:
- cloud_properties:
    cpu: 2
    datacenters:
    - clusters:
      - cf_z2:
          resource_pool: mongodb
      name: cf
    disk: 10480
    ram: 4096
  name: mongodb3
  network: default
  stemcell:
    name: bosh-vsphere-esxi-ubuntu-trusty-go_agent
    version: latest

jobs:
- name: mongodb3
  instances: 1
  templates:
  - {name: mongodb3, release: mongodb3}
  persistent_disk: 10_240
  resource_pools: mongodb3
  networks:
  - name: default
Solved: these parts should be put into a single separate file (the cloud config) and deployed to BOSH on their own.
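For readers hitting the same Error 190014, a rough, hedged sketch of that split is below. The compilation, networks, and resource_pools sections move out of the deployment manifest into a cloud config (where resource pools are expressed as vm_types and the vSphere placement goes into an azs section), and the deployment manifest keeps only the deployment-specific parts in the cloud-config style schema (instance_groups referencing vm_type, azs, and a stemcells section). The file name cloud-config.yml and the az name z1 are illustrative; with the old Ruby CLI in use here the upload is roughly bosh update cloud-config cloud-config.yml (the newer CLI spells it bosh update-cloud-config).

# cloud-config.yml -- illustrative sketch, values copied from the manifest above
azs:
- name: z1
  cloud_properties:
    datacenters:
    - name: cf_z2
      clusters:
      - cf_z2: {resource_pool: mongodb}

vm_types:
- name: mongodb3
  cloud_properties: {cpu: 2, ram: 4096, disk: 10480}

networks:
- name: default
  type: manual
  subnets:
  - range: 10.62.90.133/25
    gateway: 10.62.90.129
    az: z1
    dns: [10.254.174.10, 10.104.128.235]
    reserved:
    - 10.62.90.130 - 10.62.90.139
    - 10.62.90.151 - 10.62.90.254
    static:
    - 10.62.90.140
    cloud_properties:
      name: VM Network

compilation:
  workers: 1
  reuse_compilation_vms: false
  az: z1
  network: default
  cloud_properties: {cpu: 4, ram: 4096, disk: 20480}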
