How to create YAML manifests from Apache Camel K?

I am playing around with Camel K to build some integrations. The dev-mode experience is really great. My command is kamel run --config file:demo-template.json demo-driver.groovy --dev
But when I am finished, I would like to do more than just remove the --dev switch: I would like to have some YAML files to check into Git and then deploy with ArgoCD or Flux.
Is there something like kamel build ... --dry-run=client -o yaml or similar?

You can use kamel run ... -o yaml, for example:
kamel run --config file:demo-template.json demo-driver.groovy -o yaml > integration.yaml
Despite the run in the name, when the output flag is specified the command does not actually run the integration; it only prints the generated manifest.
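For reference, the generated manifest is an Integration custom resource, roughly of this shape (a sketch; the metadata name and source content are illustrative, derived from the command above):

```yaml
apiVersion: camel.apache.org/v1
kind: Integration
metadata:
  name: demo-driver
spec:
  sources:
    - name: demo-driver.groovy
      content: |
        // the Groovy route definition goes here
```

This file can be committed to Git and reconciled by ArgoCD or Flux like any other Kubernetes resource.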

Related

Docker container builds locally but not on GitLab

The dockerfile builds locally but the GitLab pipeline fails, saying:
Step 3/8 : RUN gem install bundler rake
ERROR: While executing gem ... (Gem::FilePermissionError)
You don't have write permissions for the /usr/local/bundle directory.
The project structure is a Ruby Sinatra backend and a React frontend.
The Dockerfile looks like this
FROM ruby:3.0-alpine
# Install Dependencies
RUN apk update && apk add --no-cache build-base mysql-dev rrdtool
RUN gem install bundler rake
# Copy the files and install gems
WORKDIR /usr/server
COPY . .
RUN bundle install
# Expose the port and start the server
EXPOSE 443
CMD ["bundle", "exec", "puma"]
I thought Docker was meant to solve the problem of "it runs on my machine"...
What I've tried
As per this post, I tried adding -n /usr/local/bundle but it did not fix the issue.
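One common workaround for this class of error (a hedged sketch, not from the original thread; the directory name is illustrative) is to point RubyGems at a location the build user can definitely write to, instead of the default /usr/local/bundle:

```dockerfile
FROM ruby:3.0-alpine
RUN apk update && apk add --no-cache build-base mysql-dev rrdtool
# Install gems into a user-writable location instead of /usr/local/bundle
ENV GEM_HOME=/usr/server/.gems
ENV PATH=$GEM_HOME/bin:$PATH
RUN gem install bundler rake
```

Whether this applies depends on what user the GitLab runner executes the build as, which is the likely difference from the local machine.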

How to hide variables in package.json

I have the following scripts in my package.json file
"docker-build": "docker build -t mouchin/my-image-name .",
"docker-push": "docker push mouchin/my-image-name:latest",
"deploy-server": "ssh root#myserverip 'docker pull mouchin/my-image-name:latest'",
"deploy": "npm run docker-build && npm run docker-push && npm run deploy-server"
The problem is that I want to hide
mouchin/my-image-name and root@myserverip
using some sort of env mechanism, perhaps saving my variables in .env.prod, but I don't know whether variables saved there can be read directly from package.json.
You can use environment variables in your npm scripts just as you would if you executed the command on the command line (for example $SSH_HOST). However, those variables need to be set in the shell that executes the npm script.
Now in order to get the environment variables from an env file loaded, you have to do so manually. For example using a snippet like this:
if [ -f .env ]
then
export $(cat .env | xargs)
fi
To execute this before any other script, you could use npm's built-in lifecycle scripts.
You may also want to change the snippet to load one .env file or the other, in case you have one for production and one for development. The environment variable NODE_ENV is commonly used for this, as it appears in most setups, but this last step really depends on your build setup.
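A minimal way to wire this together from the shell (a sketch; the IMAGE_NAME variable and the .env.prod file name are illustrative, not from the original post):

```shell
# Load variables from .env.prod and export them, then run the npm scripts
set -a              # auto-export every variable assigned from here on
. ./.env.prod       # e.g. contains: IMAGE_NAME=mouchin/my-image-name
set +a
npm run deploy      # scripts can now reference $IMAGE_NAME
```

With this in place, the package.json scripts can use $IMAGE_NAME and $SSH_HOST instead of the hardcoded values.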

Issue dockerizing a React + Node + nginx app

I'm trying to build an image for my React app. It's a pretty simple create-react-app setup. I'm aware that there are many questions on this topic, but the distinction here is that I am deploying to Heroku, and because Heroku does not support EXPOSE, the setup is a little different.
I've managed to get my frontend up and running, but I'm having issues with my Express portion. Here is my Dockerfile.
FROM node:14.1-alpine AS builder
WORKDIR /opt/web
COPY package.json ./
RUN npm install
ENV PATH="./node_modules/.bin:$PATH"
COPY . ./
RUN npm run build
FROM nginx:1.17-alpine
RUN apk --no-cache add curl
RUN curl -L https://github.com/a8m/envsubst/releases/download/v1.1.0/envsubst-`uname -s`-`uname -m` -o envsubst && \
chmod +x envsubst && \
mv envsubst /usr/local/bin
COPY ./nginx/nginx.conf /etc/nginx/nginx.template
CMD ["/bin/sh", "-c", "envsubst < /etc/nginx/nginx.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"]
COPY --from=builder /opt/web/build /usr/share/nginx/html
It's pretty straightforward, but I'm not sure how to serve my server.js file up as an API.
I've tried many online tutorials to get nginx up and running with React and Express, but it either doesn't work with my current setup (locally) or it fails building on Heroku.
I've created a reproducible repo here. Not sure where to go from here.
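The usual pattern for this kind of setup (a sketch, not verified against the linked repo; the /api/ prefix and the Express port 3000 are assumptions) is to let nginx serve the static build and reverse-proxy API calls to the Express process. One caveat: the envsubst step in the Dockerfile replaces every $variable in the template, so nginx's own variables like $uri have to be avoided or the substitution restricted.

```nginx
server {
    listen $PORT;                  # Heroku injects PORT; envsubst fills it in
    root   /usr/share/nginx/html;  # the React build from the builder stage

    # Assumed: the Express server listens on port 3000 in the same dyno
    location /api/ {
        proxy_pass http://127.0.0.1:3000;
    }

    # SPA fallback without $uri, since envsubst would rewrite it to an empty string
    error_page 404 /index.html;
}
```

The container would then also need to start the Express process alongside nginx, for example from the same CMD script.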

Reusable docker image for AngularJS

We have an AngularJS application. We wrote a Dockerfile for it so it is reusable on every system. The Dockerfile isn't best practice, and building and hosting in the same image may look odd to some, but it was created just to run our AngularJS app locally on each developer's PC.
Dockerfile:
FROM nginx:1.10
... Steps to install nodejs-legacy + npm
RUN npm install -g gulp
RUN npm install
RUN gulp build
.. steps to move dist folder
We build our image with docker build -t myapp:latest .
Every developer is able to run our app with docker run -d -p 80:80 myapp:latest
But now we're developing other backends. So we have a backend in DEV, a backend in UAT, ...
So there are different URLS which we need to use in /config/xx.json
{
...
"service_base": "https://backend.test.xxx/",
...
}
We don't want to change that URL every time, rebuild the image, and restart it. We also don't want to hardcode a set of URLs (dev, uat, prod, ...) to choose from. We want to perform our gulp build with an environment variable instead of a hardcoded URL, so we can start our container like this:
docker run -d -p 80:80 --env URL=https://mybackendurl.com app:latest
Is there someone who has experience with this kind of issues? So we'll need an env variable in our json and building it and add the URL later on if that's possible.
EDIT: A better option is to use build args.
Instead of passing the URL to the docker run command, you can use Docker build args. It is better to have build-related commands executed during docker build than during docker run.
In your Dockerfile,
ARG URL
And then run
docker build --build-arg URL=<my-url> .
See this stackoverflow question for details
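Put together, the build-arg flow could look like this (a sketch; how URL actually feeds into the gulp build depends on your gulpfile):

```dockerfile
# Declare the build-time argument and expose it to the build step
ARG URL
ENV URL=$URL
RUN gulp build
```

Built with docker build --build-arg URL=https://mybackendurl.com -t app:latest . — note that the argument is baked into the image, so this produces one image per environment rather than one image configured at run time.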
This was my 'solution'. I know it isn't the best docker approach but just for our developers it was a big help.
My dockerfile looks like this:
FROM nginx:1.10
RUN apt-get update && \
apt-get install -y curl
RUN sed -i "s/httpredir.debian.org/`curl -s -D - http://httpredir.debian.org/demo/debian/ | awk '/^Link:/ { print $2 }' | sed -e 's#<http://\(.*\)/debian/>;#\1#g'`/" /etc/apt/sources.list
RUN \
apt-get clean && \
apt-get update && \
apt-get install -y nodejs-legacy && \
apt-get install -y npm
WORKDIR /home/app
COPY . /home/app
RUN npm install -g gulp
RUN npm install
COPY start.sh /
CMD ["./start.sh"]
So after copying the app and running the npm installation inside my nginx image, I start my container with the start.sh script.
The content of start.sh:
#!/bin/bash
sed -i 's#my-url#'"$DATA_ACCESS_URL"'#' configs/config.json
gulp build
rm -r /usr/share/nginx/html/
# copy the folders created by gulp build to /usr/share/nginx/html
...
# start nginx in the foreground
/usr/sbin/nginx -g "daemon off;"
So the build will happen when my container starts. Not the best way of course but it's all for the needs of the developers. Have an easy local frontend.
The sed command will perform a replace on the config file which contains something like:
{
"service_base": "my-url",
}
So my-url will be replaced by the content of the environment variable that I define in my docker run command.
Then I'm able to run:
docker run -d -p 80:80 -e DATA_ACCESS_URL=https://mybackendurl.com app:latest
And every developer can use the frontend locally and connect with their own backend URL.
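The sed replacement in start.sh can be exercised on its own (the file name is illustrative):

```shell
# Create a config with the placeholder, then substitute the real URL.
# '#' is used as the sed delimiter so the slashes in the URL need no escaping.
printf '{ "service_base": "my-url" }\n' > config.json
DATA_ACCESS_URL=https://mybackendurl.com
sed -i 's#my-url#'"$DATA_ACCESS_URL"'#' config.json
cat config.json   # → { "service_base": "https://mybackendurl.com" }
```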

AngularDart build with Docker

I try to deploy my angular-dart app with docker but can't get it to work.
Everything works on OS X but fails inside the container.
my pubspec.yaml:
name: myapp
dependencies:
browser: any
angular: 1.0.0
transformers:
- angular
my Dockerfile:
FROM stackbrew/ubuntu:13.10
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y apt-transport-https curl git
RUN sh -c 'curl https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -'
RUN sh -c 'curl https://storage.googleapis.com/download.dartlang.org/linux/debian/dart_stable.list > /etc/apt/sources.list.d/dart_stable.list'
RUN apt-get update
RUN apt-get install dart/stable
ENV PATH $PATH:/usr/lib/dart/bin
ADD frontend/pubspec.yaml /container/pubspec.yaml
ADD frontend/web /container/web
WORKDIR /container
RUN pub build
Dart gets installed as expected (Dart VM version: 1.7.2)
But it fails at pub build with:
Error on line 6, column 5 of pubspec.yaml: Error loading transformer: Illegal argument(s): sdkDirectory must be provided.
- angular
^^^^^^^
I found https://github.com/angular/angular.dart/issues/1270, which suggests adding the Dart SDK path to pubspec.yaml. That can't be the right solution: the app should be runnable on every machine, not only on those where the SDK path matches the hardcoded path in pubspec.yaml.
Is there another way to fix this? or a workaround?
Update
Should be fixed in code_transformers 0.2.3+2 (see http://dartbug.com/21225)
Old
I don't know yet why this is necessary on some systems and not on others but this should fix it.
transformers:
- angular:
sdkDirectory: "/usr/lib/dart"
See also https://github.com/angular/angular.dart/issues/1270#issuecomment-64967674 for an alternative approach using symlinks.
