Cannot deploy Theia plugin on Che with sidecar container - eclipse-che

I am trying to deploy a custom Theia plugin on a Che workspace. The plugin executes a task on the hosting system to generate some code: it asks the user for input parameters (a REST service specification file and a destination folder for the generated output), then creates a system task and passes it those parameters.
When tested on a local installation of Theia (without Che), the plugin works as expected. But when deployed in a Che environment, the task fails to execute due to a missing dependency:
(node:115) UnhandledPromiseRejectionWarning: Error: /bin/sh: java: not found
After much googling and digging into the Che documentation, I found that the most likely way forward is a sidecar container. But when I add the containers section to the YAML, the Che workspace loads the sidecar container and completely ignores the Theia plugin.
Can someone please give me some hints?
This is the devfile for the Che workspace (the failing one is perftestgen_plugin, second from the end):
apiVersion: 1.0.0
metadata:
  name: david-custom-theia-imagef6hsw
projects:
  - name: smartclide-devfiles
    source:
      location: 'https://github.com/eclipse-researchlabs/smartclide-devfiles.git'
      type: git
      branch: v0.0.9
components:
  - id: redhat/vscode-yaml/latest
    type: chePlugin
  - type: chePlugin
    reference: 'https://raw.githubusercontent.com/eclipse-researchlabs/smartclide-devfiles/test/plugins_meta.yaml'
  - type: chePlugin
    reference: 'https://raw.githubusercontent.com/eclipse-researchlabs/smartclide-devfiles/test/perfTestGen-plugin.yaml'
    alias: perftestgen_plugin
  - type: cheEditor
    reference: 'https://github.com/eclipse-researchlabs/smartclide-devfiles/raw/4bd4a0dc7a40665086c92ca5fcebd086f5e39009/editor_meta.yaml'
And my plugin's metadata (meta.yaml) is as follows:
apiVersion: v2
publisher: Kairos Digital Solutions
name: smartclide-perftest-plugin
version: 0.0.3-rc6
type: Theia plugin
displayName: SmartCLIDE Performance Tests Generator
title: SmartCLIDE Performance Tests Generator Plugin
description: Che Plug-in to generate performance tests for a given OpenAPI endpoint spec.
icon: https://www.eclipse.org/che/images/logo-eclipseche.svg
repository: https://github.com/eclipse-researchlabs/smartclide-perftestgen-theia
firstPublicationDate: "2021-04-27"
category: Other
spec:
  containers:
    - image: quay.io/eclipse/che-java11-maven:7.32.1
  extensions:
    - https://github.com/eclipse-researchlabs/smartclide-perftestgen-theia/releases/download/v0.0.3-rc6/smartclide_perftestgen_theia.theia
I found this issue on Stack Overflow that looked similar to my case, but it didn't help. Che loads either the Theia plugin or the sidecar container, but never both...

Related

Camel K With local Maven Repository

I am trying to set up Camel K to use my settings.xml. Following the documentation, I have done the following:
kubectl create configmap maven-settings --from-file=settings.xml
I have verified that this is in the dashboard.
My issue is that when I use a Modeline or -d mvn:package:name:ver, it can't find the JAR, and the logging of the camel-k-operator gives no obvious indication of which repository it is using.
Is there an additional setting I need so that the maven-settings configmap is used when running the following command?
kamel run -d mvn:org.project:fakeProject:1.0.0 TestFile.java --dev
The above just gets into an infinite loop of retrying to build/deploy.
You also need to configure Camel-K to use the configmap created (I don't see it in your steps).
After creating the config map, you either need to specify the name of the configmap when using kamel install:
kamel install --maven-settings=configmap:<configmap name>/<key in the configmap with settings>
for example:
kamel install --maven-settings=configmap:maven-settings/settings.xml
Or, if you have Camel K already installed, you need to reference the configmap in the IntegrationPlatform object (path: .spec.build.maven.settings.configMapKeyRef). This IntegrationPlatform object is created automatically when you run your first integration and no other integration platform exists in the namespace; if it doesn't exist in your namespace yet, you can create it yourself and the camel-k operator will pick it up, for example:
apiVersion: camel.apache.org/v1
kind: IntegrationPlatform
metadata:
  labels:
    app: camel-k
  name: camel-k
spec:
  build:
    maven:
      settings:
        configMapKeyRef:
          key: settings.xml      # key in your config map
          name: maven-settings   # name of your config map
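If the IntegrationPlatform already exists, the same reference can also be merged into it in place with a patch. This is a sketch, assuming the object name camel-k and the configmap name/key from the example above:

```shell
# Merge the Maven settings reference into the existing IntegrationPlatform
kubectl patch integrationplatform camel-k --type merge \
  -p '{"spec":{"build":{"maven":{"settings":{"configMapKeyRef":{"key":"settings.xml","name":"maven-settings"}}}}}}'
```

The operator watches the IntegrationPlatform, so subsequent integration builds should pick up the new Maven settings without reinstalling.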
A simple way to configure a Maven repository is to do it when installing the Camel K operator, e.g.:
kamel install --maven-repository http://my-repo
Please have a look at the kamel install --help options to see how to configure it further. The full list of possibilities is available in the official documentation at https://camel.apache.org/camel-k/next/configuration/maven.html

Accessing Cloud SQL from Cloud Build

I want to configure CI/CD from Cloud Repositories that builds my CMS (Directus) when I push to the main repository.
At build time, the project needs to access Cloud SQL, but I get this error:
I tried this database configuration with gcloud app deploy, and it connects to Cloud SQL and runs.
cloudbuild.yaml (it crashes at the second step, so I didn't include the other steps for simplicity):
steps:
  - name: node:16
    entrypoint: npm
    args: ['install']
    dir: cms
  - name: node:16
    entrypoint: npm
    args: ['run', 'start']
    dir: cms
    env:
      - 'NODE_ENV=PRODUCTION'
      - 'EXTENSIONS_PATH="./extensions"'
      - 'DB_CLIENT=pg'
      - 'DB_HOST=/cloudsql/XXX:europe-west1:XXX'
      - 'DB_PORT="5432"'
      - 'DB_DATABASE="XXXXX"'
      - 'DB_USER="postgres"'
      - 'DB_PASSWORD="XXXXXX"'
      - 'KEY="XXXXXXXX"'
      - 'SECRET="XXXXXXXXXXXX"'
node-pg (the Node.js Postgres library) automatically appends /.s.PGSQL.5432 to the socket path, which is why it is not written in DB_HOST.
IAM roles:
How can I solve this error? I have read many answers on Stack Overflow, but none of them helped. I found this article but didn't fully understand how to apply it to my case: https://cloud.google.com/sql/docs/postgres/connect-build.
Without your full Cloud Build YAML it's hard to say for sure, but it looks like you aren't following the steps in the documentation correctly.
Roughly, what you should be doing is:
Download the cloud_sql_proxy into your container space.
In a follow-up step, start the cloud_sql_proxy and then (in the same step) run your script, connecting to the proxy via either TCP or a unix socket.
I don't see your YAML describing the proxy at all.
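A minimal sketch of those two steps as Cloud Build steps, assuming a unix-socket connection and the instance name from the question; the proxy version and download URL are illustrative, and the env vars from the original config are omitted for brevity:

```yaml
steps:
  # Step 1: download the Cloud SQL Auth proxy into the shared /workspace volume
  - name: gcr.io/cloud-builders/wget
    args:
      - '-O'
      - '/workspace/cloud_sql_proxy'
      - 'https://storage.googleapis.com/cloudsql-proxy/v1.33.2/cloud_sql_proxy.linux.amd64'
  # Step 2: start the proxy in the background, then run the app in the same step
  - name: node:16
    entrypoint: bash
    dir: cms
    args:
      - '-c'
      - |
        chmod +x /workspace/cloud_sql_proxy
        /workspace/cloud_sql_proxy -dir=/cloudsql -instances=XXX:europe-west1:XXX &
        sleep 2  # give the proxy a moment to create the unix socket
        npm run start
```

The proxy must run in the same step as the app because each Cloud Build step is a separate container; only files under /workspace persist between steps, not running processes.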

Release fails when deploying React app to Azure Web App with Azure DevOps

I cannot get a Release pipeline in Azure DevOps to successfully deploy build files from a React app to an Azure App Service.
This is the YAML file for the app:
trigger:
  - main

variables:
  buildConfiguration: 'Release'

stages:
  - stage: Build
    displayName: 'Build my web application'
    jobs:
      - job: 'Build'
        displayName: 'Build job'
        pool:
          vmImage: ubuntu-latest
          demands:
            - npm
        steps:
          - task: NodeTool@0
            inputs:
              versionSpec: '16.x'
            displayName: 'Install Node.js'
          - script: |
              npm install
              npm run build
            displayName: 'npm install and build'
          - task: PublishBuildArtifacts@1
            inputs:
              PathtoPublish: 'build'
              ArtifactName: 'drop'
              publishLocation: 'Container'
            displayName: 'Build artifact'
As you'd expect, this puts the resultant build files in 'drop'. I can confirm this by inspecting the contents of 'drop' as it is a Published Artifact I can click on in the Summary tab for the Build process.
It's the Release that fails. This is the log for the release:
2022-03-28T11:29:39.9940600Z ##[section]Starting: Azure Web App Deploy: my-app-serv
2022-03-28T11:29:39.9952321Z ==============================================================================
2022-03-28T11:29:39.9952723Z Task : Azure Web App
2022-03-28T11:29:39.9953008Z Description : Deploy an Azure Web App for Linux or Windows
2022-03-28T11:29:39.9953295Z Version : 1.200.0
2022-03-28T11:29:39.9953540Z Author : Microsoft Corporation
2022-03-28T11:29:39.9953833Z Help : https://aka.ms/azurewebapptroubleshooting
2022-03-28T11:29:39.9954210Z ==============================================================================
2022-03-28T11:29:40.3697650Z Got service connection details for Azure App Service:'my-app-serv'
2022-03-28T11:29:42.3999385Z Package deployment using ZIP Deploy initiated.
2022-03-28T11:30:18.0663125Z Updating submodules.
2022-03-28T11:30:18.0670674Z Preparing deployment for commit id 'dc023bbe-d'.
2022-03-28T11:30:18.0672154Z Repository path is /tmp/zipdeploy/extracted
2022-03-28T11:30:18.0673178Z Running oryx build...
2022-03-28T11:30:19.1423345Z Command: oryx build /tmp/zipdeploy/extracted -o /home/site/wwwroot --platform nodejs --platform-version 16 -i /tmp/8da10ae4b1f9200 -p compress_node_modules=tar-gz --log-file /tmp/build-debug.log
2022-03-28T11:30:19.1431972Z Operation performed by Microsoft Oryx, https://github.com/Microsoft/Oryx
2022-03-28T11:30:19.1453191Z You can report issues at https://github.com/Microsoft/Oryx/issues
2022-03-28T11:30:19.1453685Z
2022-03-28T11:30:19.1454256Z Oryx Version: 0.2.20211207.1, Commit: 46633df49cc8fbe9718772a3c894df221273b2af, ReleaseTagName: 20211207.1
2022-03-28T11:30:19.1457307Z
2022-03-28T11:30:19.1463475Z Build Operation ID: |DTbD+7CrQyM=.49dfa157_
2022-03-28T11:30:19.1465355Z Repository Commit : dc023bbe-d46e-46f2-9d49-6e8157706c19
2022-03-28T11:30:19.1465695Z
2022-03-28T11:30:19.1466122Z Detecting platforms...
2022-03-28T11:30:19.1466558Z Could not detect any platform in the source directory.
2022-03-28T11:30:19.1467416Z Error: Couldn't detect a version for the platform 'nodejs' in the repo.
2022-03-28T11:30:19.1469069Z Error: Couldn't detect a version for the platform 'nodejs' in the repo.\n/opt/Kudu/Scripts/starter.sh oryx build /tmp/zipdeploy/extracted -o /home/site/wwwroot --platform nodejs --platform-version 16 -i /tmp/8da10ae4b1f9200 -p compress_node_modules=tar-gz --log-file /tmp/build-debug.log
2022-03-28T11:30:19.1469950Z Deployment Failed.
2022-03-28T11:30:19.1510175Z ##[error]Failed to deploy web package to App Service.
2022-03-28T11:30:19.1525344Z ##[error]To debug further please check Kudu stack trace URL : https://$my-app-serv:***#my-app-serv.scm.azurewebsites.net/api/vfs/LogFiles/kudu/trace
2022-03-28T11:30:19.1527823Z ##[error]Error: Package deployment using ZIP Deploy failed. Refer logs for more details.
2022-03-28T11:30:30.1233247Z Successfully added release annotation to the Application Insight : my-app-serv
2022-03-28T11:30:32.2997996Z Successfully updated deployment History at (CUT)
2022-03-28T11:30:34.0322983Z App Service Application URL: http://my-app-serv.azurewebsites.net
2022-03-28T11:30:34.0390276Z ##[section]Finishing: Azure Web App Deploy: my-app-serv
The Release uses Azure Web App Deploy. App Type is 'Web App on Linux'. 'Package or Folder' is the 'drop' folder. Runtime stack is '16 LTS (NODE|16-lts)' (but it also doesn't work if that's empty).
The drop folder does not contain zipped output, so I don't understand why the Release operation is referred to as a Zip Deploy. Am I missing something to avoid the error "Error: Couldn't detect a version for the platform 'nodejs' in the repo."?
I'm just expecting the contents of the 'drop' folder to be copied to the App Service and the web app run, so I can test it (and, in the long term, set up automated tests).
I've tried a number of different things with the Build, including zipping the build artifacts, with no luck. I don't think the build is the problem though, as the files in the 'drop' folder are the files I want copied.
So I think it's the Release that's the problem. But that looks so simple.
I start with an Agent and add an Azure Web App deployment task. It seems to successfully pick up the drop folder, as I've tried other values that show an obvious error when they are wrong. The target App Service is Linux, so the Web App Deploy app type is set to 'Web App on Linux'.
I've seen a few different approaches on Stack Overflow, but no answers to this approach. Maybe I'm going about this the wrong way, but on the surface it looks right; if I get this working, I can easily manage manual deployments, authorisations, etc. as supported by Releases.
Thanks in advance
One possible workaround is to set the app setting SCM_DO_BUILD_DURING_DEPLOYMENT to false.
After setting this to false, you should be able to deploy the app.
Please also refer to these links describing similar issues for more information:
Reference 1
Reference 2
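For reference, that app setting can be applied with the Azure CLI. A sketch, assuming a resource group named my-rg (the app name my-app-serv comes from the question):

```shell
# Turn off Oryx's build-during-deployment so the pre-built files in 'drop' are deployed as-is
az webapp config appsettings set \
  --resource-group my-rg \
  --name my-app-serv \
  --settings SCM_DO_BUILD_DURING_DEPLOYMENT=false
```

This skips the oryx build step that was failing with "Couldn't detect a version for the platform 'nodejs'", since the already-built static files don't need a server-side build.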

Gcloud cloud build local component failing with error "Error loading config file: unknown field "availableSecrets" in cloudbuild.Build"

Greetings stackoverflow community! First time asker, long time user.
I am testing out my cloudbuild.yaml file locally using Cloud Build Local component and Secret Manager and it is failing on "availableSecrets".
Error message: Error loading config file: unknown field "availableSecrets" in cloudbuild.Build
OS Platform: Windows 10/WSL2/Ubuntu 18.04
cloud-build-local: v0.5.2
Docker engine: v20.10.2
Nodejs version: v14.15.3
NPM version: 6.14.9
gcloud version: 326.0.0
Installed components: [BigQuery Command Line Tool, Cloud Datastore Emulator, Cloud SDK Core Libraries, Cloud Storage Command Line Tool, Google Cloud Build Local Builder, gcloud Beta Commands]
Documentation on Cloud Build build file: https://cloud.google.com/cloud-build/docs/build-config
Documentation to configure secrets with cloud build: https://cloud.google.com/cloud-build/docs/securing-builds/use-secrets
Documentation for cloud build local: https://cloud.google.com/cloud-build/docs/build-debug-locally
Steps performed:
Added secrets to Secret Manager
Enabled API between Cloud Build and Secrets Manager
Added cloudbuild service account as member of each secret password.
Added IAM permission Secret Manager Secrets Accessor to cloudbuild user. I don't know where I got this info from but it is residual at this point from other attempts to use Secret Manager with cloudbuild. I am not sure of the difference between applying access here vs applying to the Secret Manager secret.
Command: cloud-build-local --config=cloudbuild.staging.yaml --dryrun=false .
cloudbuild.staging.yaml:
steps:
  - name: gcr.io/cloud-builders/npm
    entrypoint: 'npm'
    args: ['install']
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ["app", "deploy"]
    env:
      - 'DAO_FACTORY=datastore'
      - 'POLL_INTERVAL=15'
      - 'PROMPT=staging>'
      - 'ENVIRONMENT=staging'
      - 'NAMESPACE=staging'
      - 'RESET_DATASTORE=false'
    secretEnv: ['ADMIN_USER', 'SUPER_ADMINS', 'BOT_TOKEN']
availableSecrets:
  secretManager:
    - versionName: projects/{project token}/secrets/SYSTEM_USER/versions/1
      env: 'ADMIN_USER'
    - versionName: projects/{project token}/secrets/SUPER_ADMINS/versions/1
      env: 'SUPER_ADMINS'
    - versionName: projects/{project token}/secrets/BOT_TOKEN/versions/2
      env: 'BOT_TOKEN'
Tag: cloud-build-local. I guess without reputation a meaningful tag cannot be created. Maybe an esteemed community member will create this as this may be specific to cloud-build-local only.
Support for Google Secret Manager in the Google Cloud Build descriptor file is apparently very new and does not appear to be supported by the cloud-build-local component at this time; see the comment from Guillaume about the feature being about a week old. When the Cloud Build descriptor is run in Cloud Build itself, it works fine.
I fixed a similar issue by upgrading the gcloud tool.
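If you want to try that route, updating the SDK is a one-liner; it updates all installed components, including the Google Cloud Build Local Builder listed in the question:

```shell
# Update the Cloud SDK and every installed component, including cloud-build-local
gcloud components update
```

After updating, rerunning cloud-build-local against the same config will show whether the newer builder recognizes the availableSecrets field.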

Using NFS with Ververica for artifact storage not working, throwing Error: No suitable artifact fetcher found for scheme file

I'm trying to set up Ververica Community Edition to use NFS for artifact storage with the following values.yaml:
vvp:
  blobStorage:
    baseUri: file:///var/nfs/export

volumes:
  - name: nfs-volume
    nfs:
      server: "host.docker.internal"
      path: "/MOUNT_POINT"

volumeMounts:
  - name: nfs-volume
    mountPath: /var/nfs
When deploying the Flink job, I use the jarUri below:
jarUri: file:///var/nfs/artifacts/namespaces/default/flink-job.jar
I am able to see my artifacts in the Ververica UI, however when I try to deploy the flink job it fails with the following exception:
Error: No suitable artifact fetcher found for scheme file
Full error:
Some pod containers have been restarted unexpectedly. Init containers reported the following reasons: [Error: No suitable artifact fetcher found for scheme file]. Please check the Kubernetes pod logs if your application does not reach its desired state.
If I remove the file:// scheme from the jarUri, leaving just the path below, the job containers keep restarting without giving an error.
jarUri: /var/nfs/artifacts/namespaces/default/flink-job.jar
As a side note, I also added the following to the deployment.yaml. If I set the artifact to pull from an HTTP endpoint, the checkpoints are saved correctly to the NFS, so the only problem seems to be loading artifacts from the NFS using the file:// scheme.
kubernetes:
  pods:
    volumeMounts:
      - name: my-volume
        volume:
          name: my-volume
          nfs:
            path: /MOUNT_POINT
            server: host.docker.internal
        volumeMount:
          mountPath: /var/nfs
          name: my-volume
Ververica Platform does not currently support NFS drives for Universal Blob Storage.
However, you can emulate this behavior on version >= 2.3.2 by mounting the NFS drive to your Flink pods, as you did in the deployment spec for checkpoints. This works because 2.3.2 added support for self-contained jars and fetching local files. You can find more information in the documentation here.
