I can perform file uploads and sync commands locally with stuff like this:
"deploy": "npm run build && gsutil cp src/app.yaml gs://quantumjs-site && gsutil -m cp -r dist gs://quantumjs-site && npm run remote sync",
"sync": "gsutil rsync -r gs://quantumjs-site gs://staging.fluid-griffin-211109.appspot.com/test-app",
But I have to log on to the console (via the website) to run the deploy itself. Can I do this locally?
Yes. You can run the commands from your local command line (Terminal on macOS, Command Prompt on Windows), provided the Google Cloud SDK is installed.
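For example (a minimal sketch; the project ID is a placeholder, assuming the gcloud CLI is installed and authenticated):
gcloud auth login                         # authenticate once
gcloud config set project my-project-id   # hypothetical project ID
gcloud app deploy app.yaml                # deploy without touching the web console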
I've switched my deployment config (app.yaml) from the standard environment to flex with a Dockerfile.
runtime: custom
env: flex
service: my-backend-service
When GitLab CI executes gcloud app deploy app.yaml, I see a successful build and deployment log. A new service and version are created in the App Engine dashboard, but there are no logs/files in Debug mode and the instance is not responding. The debugger gives the error "The debugger could not find a debug target for the application".
Final part of the CI/CD log:
You can stream logs from the command line by running:
$ gcloud app logs tail -s my-backend-service
To view your application in the web browser run:
$ gcloud app browse -s my-backend-service --project=some-project
Job succeeded
Dockerfile (works well with Cloud Run deployment):
FROM python:3.10-slim-bullseye
ENV APP_HOME /app
ENV PYTHONPATH=${PYTHONPATH}:${PWD}
RUN apt-get update \
&& apt-get -y install libpq-dev gcc \
&& pip install psycopg2
RUN mkdir /app
COPY pyproject.toml /app
WORKDIR $APP_HOME
# set env variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
COPY . ./
RUN pip3 install poetry
RUN poetry config virtualenvs.create false
RUN poetry install --no-dev
CMD exec gunicorn main:app -c gunicorn_config.py
There were no such issues with the standard (Python) app.yaml environment. What am I missing?
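One thing worth checking (an assumption on my part, not something the log confirms): the App Engine flexible environment routes requests to port 8080, so gunicorn must bind 0.0.0.0:8080. If gunicorn_config.py doesn't set that, binding explicitly in the CMD would rule it out:
CMD exec gunicorn main:app -c gunicorn_config.py --bind 0.0.0.0:8080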
Backstory: I have copied the files to a Google Compute Engine VM and am trying to run docker-compose up --build, which in theory should start 3 different instances: rasa-ui, rasa-api and rasa-actions. However, it does not start the one with the React app on it (rasa-ui). I figured the Dockerfile might be the problem.
I tried running only the Docker image for the React app (rasa-ui) and it doesn't start the app at all.
On my local machine code runs and starts the app fine.
Commands I used (and that worked) for docker images locally were:
docker build -t rasa-ui -f Dockerfile .
docker run -p 3000:3000 rasa-ui
When I use the same commands on the VM, it builds the image but doesn't run it (and doesn't show any errors when I run it).
Dockerfile:
FROM node:alpine
WORKDIR /app
# copy the manifests first so the npm install layer is cached when only source changes
COPY package.json package-lock.json ./
RUN npm i
COPY ./ ./
CMD ["npm", "run", "start"]
Any suggestions?
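Not from the thread, but a sketch of the usual first checks when an image builds yet the container won't stay up (the container name is a placeholder):
docker run -d -p 3000:3000 --name rasa-ui-test rasa-ui
docker ps -a                # did the container exit, and with what code?
docker logs rasa-ui-test    # stdout/stderr from the exited container
One known culprit with create-react-app: react-scripts start exits immediately when it has no interactive stdin, so running with docker run -it (or setting stdin_open: true and tty: true in docker-compose) may keep it alive.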
I'm trying to build an image for my React app. It's a pretty simple create-react-app setup. I'm aware that there are many questions regarding this topic, but the distinction here is that I am trying to deploy to Heroku and, because Heroku doesn't support EXPOSE, the setup is a little different.
I've managed to get my frontend up and running, but I'm having issues with my Express portion. Here is my Dockerfile.
FROM node:14.1-alpine AS builder
WORKDIR /opt/web
COPY package.json ./
RUN npm install
ENV PATH="./node_modules/.bin:$PATH"
COPY . ./
RUN npm run build
FROM nginx:1.17-alpine
RUN apk --no-cache add curl
RUN curl -L https://github.com/a8m/envsubst/releases/download/v1.1.0/envsubst-`uname -s`-`uname -m` -o envsubst && \
chmod +x envsubst && \
mv envsubst /usr/local/bin
COPY ./nginx/nginx.conf /etc/nginx/nginx.template
CMD ["/bin/sh", "-c", "envsubst < /etc/nginx/nginx.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"]
COPY --from=builder /opt/web/build /usr/share/nginx/html
It's pretty straightforward, but I'm not sure how to serve my server.js file up as an API.
I've tried many online tutorials to get nginx up and running with React and Express, but it either doesn't work with my current setup (locally) or it fails building on Heroku.
I've created a reproducible repo here. Not sure where to go from here.
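Not an answer from the thread, but for orientation: a sketch of what the nginx.template might contain so that Heroku's runtime PORT is injected by envsubst and /api traffic is proxied to the Express process (the API port is an assumption):
server {
    listen ${PORT};                        # Heroku assigns PORT at runtime; envsubst fills it in
    root /usr/share/nginx/html;
    location / {
        try_files $uri /index.html;        # SPA fallback for client-side routing
    }
    location /api {
        proxy_pass http://127.0.0.1:3001;  # hypothetical port the Express server listens on
    }
}
If a plain GNU envsubst is used instead of the a8m build, restrict it to the intended variables (e.g. envsubst '${PORT}'), otherwise nginx variables such as $uri get substituted away too.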
I'm trying to enable GitHub OAuth in Eclipse Che. The documentation calls for modification of che.env.
Further, the docs say:
Configuration is handled by modifying che.env placed in the host folder volume mounted to :/data. This configuration file is generated during the che init phase.
I run Eclipse Che in a docker container as follows:
mkdir /home/<USERNAME>/che
docker run -p 8080:8080 \
--name che \
--rm \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /home/<USERNAME>/che:/data \
eclipse/che-server:5.0.0-latest
(Ref: http://www.eclipse.org/che/docs/setup/docker/index.html)
I enter the container and search for che.env:
docker exec -it <CONTAINER ID> bash
find /data -name 'che.env'
Nothing is returned, thus the file che.env doesn't exist in /data. Why?
As per your docker run command, the host folder mounted to the :/data volume is /home/<USERNAME>/che, so your che.env file should exist at:
/home/<USERNAME>/che/che.env
Update: the image used to run Eclipse Che is different in your docker run command. The eclipse/che image is required for running Eclipse Che. Complete command:
docker run -it --rm -v /che-data:/data -v /var/run/docker.sock:/var/run/docker.sock eclipse/che:5.17.0 start
It's in the /home/<USERNAME>/che folder. Make sure you restart Che after making changes to the file.
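Putting the answers together, a sketch of the full sequence (using the question's host folder; the version tag follows the answer above):
docker run -it --rm -v /home/<USERNAME>/che:/data \
    -v /var/run/docker.sock:/var/run/docker.sock \
    eclipse/che:5.17.0 start
# after the init phase the config file should appear on the host:
ls /home/<USERNAME>/che/che.env
# locate the GitHub OAuth settings to edit:
grep -i github /home/<USERNAME>/che/che.env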
We have an AngularJS application. We wrote a Dockerfile for it so it's reusable on every system. The Dockerfile isn't best practice, and to some it may look like a weird setup (build and hosting in the same file), but it was created just to run our AngularJS app locally on every developer's PC.
Dockerfile:
FROM nginx:1.10
# ... steps to install nodejs-legacy + npm
RUN npm install -g gulp
RUN npm install
RUN gulp build
# ... steps to move the dist folder
We build our image with docker build -t myapp:latest .
Every developer is able to run our app with docker run -d -p 80:80 myapp:latest
But now we're developing other backends. So we have a backend in DEV, a backend in UAT, ...
So there are different URLs which we need to use in /config/xx.json:
{
...
"service_base": "https://backend.test.xxx/",
...
}
We don't want to change that URL every time, rebuild the image, and start it again. We also don't want to hardcode a list of URLs (dev, uat, prod, ...) to choose from. We want our gulp build process to use an environment variable instead of a hardcoded URL.
So we can start our container like this:
docker run -d -p 80:80 --env URL=https://mybackendurl.com app:latest
Does anyone have experience with this kind of issue? We'd need an environment variable in our JSON, build with it, and inject the URL later on, if that's possible.
EDIT: A better option is to use build args.
Instead of passing the URL with the docker run command, you can use Docker build args. It is better to have build-related commands executed during docker build than docker run.
In your Dockerfile,
ARG URL
And then run
docker build --build-arg URL=<my-url> .
See this stackoverflow question for details
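A sketch of how the build arg reaches the gulp build (the variable name URL is taken from the question; ARG values are exposed to RUN steps as environment variables during the build):
ARG URL
# the gulp task can read process.env.URL here when generating /config/xx.json
RUN gulp build
Built with: docker build --build-arg URL=https://mybackendurl.com -t myapp:latest .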
This was my 'solution'. I know it isn't the best Docker approach, but it was a big help for our developers.
My dockerfile looks like this:
FROM nginx:1.10
RUN apt-get update && \
apt-get install -y curl
RUN sed -i "s/httpredir.debian.org/`curl -s -D - http://httpredir.debian.org/demo/debian/ | awk '/^Link:/ { print $2 }' | sed -e 's#<http://\(.*\)/debian/>;#\1#g'`/" /etc/apt/sources.list
RUN \
apt-get clean && \
apt-get update && \
apt-get install -y nodejs-legacy && \
apt-get install -y npm
WORKDIR /home/app
COPY . /home/app
RUN npm install -g gulp
RUN npm install
COPY start.sh /
CMD ["./start.sh"]
So after copying the whole app and installing the npm packages inside my nginx image, I start my container with the start.sh script.
The content of start.sh:
#!/bin/bash
sed -i 's#my-url#'"$DATA_ACCESS_URL"'#' configs/config.json
gulp build
rm -r /usr/share/nginx/html/
# cp the folders created by gulp build to /usr/share/nginx/html
...
# start nginx in the foreground
/usr/sbin/nginx -g "daemon off;"
So the build happens when my container starts. Not the best way, of course, but it's all for the developers' needs: an easy local frontend.
The sed command performs a replace on the config file, which contains something like:
{
"service_base": "my-url",
}
So my-url will be replaced by the content of the environment variable that I define in my docker run command. Then I'm able to run:
docker run -d -p 80:80 -e DATA_ACCESS_URL=https://mybackendurl.com app:latest
And every developer can use the frontend locally and connect with their own backend URL.