How to deploy multiple containers on Heroku?

I have created an API (Flask-RESTful) service, a UI (in ReactJS), and a proxy service, each with its own Dockerfile in its respective folder.
There is also a docker-compose.yaml file in the main repository. It works locally when I run docker-compose -f docker-compose.prod.yaml up, but I am unable to find a way to deploy multiple containers on Heroku.
Here is my github repo: https://github.com/Darpan313/Flask-React-nginx-Docker-Compose

Related

How to deploy app with Docker Compose + React + Django + Nginx?

I'm building an app using Docker Compose, React, Django and Nginx. After struggling for a few days I managed to set up a docker-compose file that successfully connected all these services, from collecting the React static files and having Nginx serve them, to having Nginx point to the Django static files instead of Django serving them, to adding other services like Celery to the Docker Compose config.
However, it seems like there's no easy place to publish and deploy this setup (the Docker registry doesn't accept Compose stacks, I think?). All I could find was Azure and AWS integrations, which are definitely a step up in complexity from the Heroku deployment I was doing before. My Heroku deployment no longer works, since it needs the React and Django code to be at the same folder depth, or it won't let me use the 'heroku/nodejs' buildpack. Is there a deployment option that lets me keep the separate folder structure and ease of development of Docker Compose, without being as complex as Azure or AWS? Thanks in advance!
You can push your containers to the Heroku Container Registry:
https://devcenter.heroku.com/categories/deploying-with-docker
Add a heroku.yml file:
build:
  docker:
    web: Dockerfile
run:
  web: bundle exec puma -C config/puma.rb
Then, with the Heroku CLI:
heroku create
heroku container:push web
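For the original multi-container setup (Flask API, React UI, nginx proxy), heroku.yml can point each process type at its own Dockerfile. A minimal sketch, assuming folder names like api/ and nginx/ (the paths are my assumptions, not taken from the repo):

build:
  docker:
    web: nginx/Dockerfile  # only the process type named "web" receives HTTP traffic
    api: api/Dockerfile    # runs in its own dyno, using the Dockerfile's CMD

Note that, unlike docker-compose, each process type runs in its own isolated dyno, so the containers don't share a private network; the proxy would have to reach the API over a public URL. Deploying via heroku.yml also requires switching the app to the container stack first (heroku stack:set container).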

Do you need a web server to run a production build of a react app on a VPS?

I have built a React app using a Truffle box that uses Create React App. I can get the app running on my localhost, but I can't see it on my VPS when I go to its IP address, even though I run exactly the same commands and get the same output in the terminal. I go into my client dir and run npm start. I have also tried making a production build and serving it through an HTTP server, both from the client dir and from the root folder of the VPS.
I run
serve -s build
All I can see in the browser is the index of the build when I try to serve the build through a web server. When I run npm start on my localhost I can view my app, but it doesn't work on my VPS. Please help me; I've been struggling with this for days and it's the last part of my project.
You need a web server in any case.
When you do local development, you are using the webpack dev server (which is built into Create React App).
For production, you need to make a production build and serve it, for example with nginx. Here are some details on how to create a production build with CRA: https://create-react-app.dev/docs/production-build
In your screenshot, you don't see your site because there is no entry point in your folder; by default it should be index.html.
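For reference, a minimal nginx server block for serving a CRA build could look like this; the root path is an assumption and has to match wherever the build folder lives on the VPS:

server {
    listen 80;
    root /var/www/myapp/build;  # assumed path to the CRA build output
    index index.html;

    location / {
        # fall back to index.html so client-side routes resolve
        try_files $uri /index.html;
    }
}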

Setup react app build folder onto Google Kubernetes

Currently, I have a repo that contains both a Node.js Express backend and a React frontend. The repo's image is in Google Container Registry and is used on a Google Kubernetes cluster. A load balancer provides a URL, which is the backend URL serving the static build. In the future, I want to separate the backend and frontend into two different repos (one for the backend and one for the frontend).
I believe making changes for the backend in the cluster won't be difficult, but I am having trouble figuring out how to add the React frontend, since the build folder will be in a different repo than the backend. I read online that to serve a React app on GCP, you would upload the build folder to a bucket and have that bucket served on App Engine, which provides a URL to access it on the web.
I'm wondering if this is how it would be done on a Kubernetes cluster, or if there is a different approach, since it is not using App Engine but rather Google Kubernetes Engine.
I hope this makes sense (I am still fairly new to Google Cloud) and any feedback/tips will be appreciated!
Thanks!
There are different approaches to this.
Approach 1: Serve your frontend via Google Cloud Storage.
There is a guide in the GCP documentation, Hosting a static website, that describes how to set this up. After the build, copy all the files to Cloud Storage and you are done.
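A short sketch with gsutil; the bucket name and build directory are assumptions:

# create a bucket and sync the build output into it
gsutil mb gs://my-frontend-bucket
gsutil -m rsync -r ./build gs://my-frontend-bucket

# serve index.html as the main (and error) page, so SPA routes resolve
gsutil web set -m index.html -e index.html gs://my-frontend-bucket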
Approach 2: Add your frontend to your backend while building the Docker image
Build your frontend and pack it into a Docker image with something like this:
# stage 1: build the frontend
FROM node AS build
WORKDIR /app
COPY . .
RUN npm ci && npm run build

# stage 2: an image containing only the built static files
FROM scratch
COPY --from=build /app/dist /app
Build your backend and copy the frontend:
FROM myapp/frontend AS frontend

FROM node
# ... build the backend here ...
COPY --from=frontend /app /path/where/frontend/belongs
This decouples both builds but you will always have to deploy the backend for a frontend change.
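The build order matters here: the frontend image has to exist locally (or in a registry) before the backend build can copy from it. A sketch, assuming the myapp/frontend name from above and separate frontend/ and backend/ directories (my assumption):

docker build -t myapp/frontend ./frontend
docker build -t myapp/backend ./backend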
Approach 3: Serve your frontend with nginx (or another web server)
# stage 1: build the frontend
FROM node AS build
WORKDIR /app
COPY . .
RUN npm ci && npm run build

# stage 2: serve the build output with nginx
FROM nginx
COPY --from=build /app/dist /usr/share/nginx/html
You might also adapt the nginx.conf to enable routing without hash paths. See this article by codecentric for more information on that.
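A sketch of such a config; it overrides the image's default site when copied into the image (add COPY default.conf /etc/nginx/conf.d/default.conf to the Dockerfile above):

server {
    listen 80;
    root /usr/share/nginx/html;

    location / {
        try_files $uri $uri/ /index.html;  # deep links fall back to index.html
    }
}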

Multiple services with different dockerfiles on GAE Flexible

I'm using Google App Engine Flexible with the Python environment. Right now I have two services, default and worker, that share the same codebase and are configured by app.yaml and worker.yaml. Now I need to install a native C++ library, so I had to switch to a custom runtime and added a Dockerfile.
Here is the Dockerfile generated by the gcloud beta app gen-config --custom command:
FROM gcr.io/google-appengine/python
LABEL python_version=python3.6
RUN virtualenv --no-download /env -p python3.6
# Set virtualenv environment variables. This is equivalent to running
# source /env/bin/activate
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ADD requirements.txt /app/
RUN pip install -r requirements.txt
ADD . /app/
CMD exec gunicorn --workers=3 --threads=3 --bind=:$PORT aces.wsgi
Previously, my app.yaml and worker.yaml each had its own entrypoint: config that specified the command needed to start the service.
So, my question is: how can I use two different commands to start the services?
EDIT 1
So far I have been able to solve this by rewriting the CMD line in the Dockerfile for each deploy of each service. However, I'm not quite satisfied with this solution.
The gcloud app deploy command has an --image-url flag that allows setting an image URL from GCR. I haven't researched that yet, but it seems that I could just upload images to GCR and use the URLs, since they don't change that often.
Yes, as you mentioned, I think using the --image-url flag is a good option here:
Specify a custom runtime.
Build the image locally, tag it, and push it to Google Container Registry (GCR).
Then deploy your service, specifying a custom service file and the remote image on GCR using the --image-url option.
Here's an example that accomplishes different entrypoints in 2 services that share the same code:
...this is assuming that the "flex" and not "standard" app engine offering is being used.
Let's say you have a project called my-proj,
with a default service that is not important,
and a second service called queue-processor, which uses much of the same code from the same directory.
Create a separate Dockerfile for it called QueueProcessorDockerfile,
and a separate app.yaml called queue-processor-app.yaml to tell Google App Engine what you want to happen.
QueueProcessorDockerfile
FROM node:10
# Create app directory
WORKDIR /usr/src/app
COPY package.json ./
COPY yarn.lock ./
# yarn already ships with the official node image, so no global install is needed
RUN yarn install --frozen-lockfile
# Bundle app source
COPY . .
CMD [ "yarn", "process-queue" ]
(Of course, I have a "process-queue" script in my package.json.)
queue-processor-app.yaml
runtime: custom
env: flex
... other stuff...
...
Build and tag the Docker image. Check out Google's guide here: https://cloud.google.com/container-registry/docs/pushing-and-pulling
docker build -t eu.gcr.io/my-proj/queue-processor -f QueueProcessorDockerfile .
Push it to GCR:
docker push eu.gcr.io/my-proj/queue-processor
Deploy the service, specifying which YAML config file Google should use, as well as the image URL you have pushed:
gcloud app deploy queue-processor-app.yaml --image-url eu.gcr.io/my-proj/queue-processor
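The default service would be deployed the same way with its own config and image; a sketch, assuming its files are simply named Dockerfile and app.yaml (my assumption, not from the answer):

docker build -t eu.gcr.io/my-proj/default -f Dockerfile .
docker push eu.gcr.io/my-proj/default
gcloud app deploy app.yaml --image-url eu.gcr.io/my-proj/default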
Since the Dockerfile name cannot be changed, the only way to avoid modifying the Dockerfile is to store each service in its own separate directory. Clean separation: each service has its own Dockerfile and/or startup configuration.
But this raises a question: how to deal with the code shared by multiple services? Using symlinks (which works great for sharing code across standard env services) doesn't work for the flexible env services, see Sharing code between flexible environment modules in a GAE project.
I see a few possible approaches, none really ideal, but maybe more appealing than what you currently have:
hard-link each and every shared source code file (since hard-linking directories is not possible). A bit tedious and error-prone, but you only have to do it once per file
package and publish your shared code as an external library, added to the requirements.txt file of each service using it
split the shared code into a separate repository and have a copy of that repository in each service using it (maybe as a git submodule, if using git? see the sketch after this list). You just need to ensure at service deployment time that the shared repository is pulled at the proper version - this can be done quite reliably through automation. It is a bit more complicated if you have uncommitted changes in this repo - you'd have to patch the same changes in all services.
have multiple copies of the Dockerfile with different names, which you simply copy over instead of always editing the same file. Symlinking instead of copying might work as well: the symlink doesn't need to be followed outside of the service directory, so as long as it is replicated as a symlink it will work.
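A sketch of the submodule variant; the repository URL and version tag are placeholders:

# add the shared code as a submodule inside the service directory
git submodule add https://github.com/example/shared-code.git shared
# pin it to a known version before deploying
git -C shared checkout v1.2.0
git commit -am "Pin shared code to v1.2.0"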
So I had a very similar issue with my Java applications. We were looking to migrate from Heroku to GAE and were attempting to simulate the Heroku Procfile with GAE services. Effectively, what we did was create separate directories in our application, src/main/appengine/web and src/main/appengine/worker, where each directory contained the app.yaml and Dockerfile specific to the process. Then, using the mvn appengine:deploy capabilities, we specified -Dapp.stage.dockerDirectory and -Dapp.stage.appEngineDirectory respectively for each service we wanted to deploy. Then, using just some parameters, we were able to basically script out parallel deployments of each service from the same code base. Not sure if this works in your situation, but it was very useful for us. Here are the two example commands in their entirety:
Web Process:
mvn appengine:deploy -Dapp.stage.dockerDirectory=src/main/appengine/web -Dapp.stage.appEngineDirectory=src/main/appengine/web -Dapp.stage.stagingDirectory=target/appengine-web -Dapp.deploy.projectId=${project-id} -Dapp.deploy.version=${project-version}
Worker Process:
mvn appengine:deploy -Dapp.stage.dockerDirectory=src/main/appengine/worker -Dapp.stage.appEngineDirectory=src/main/appengine/worker -Dapp.stage.stagingDirectory=target/appengine-worker -Dapp.deploy.projectId=${project-id} -Dapp.deploy.version=${project-version}

Structure of Angular / NodeJS repository which will run in docker

We have a repository with an application written in Angular.
To be hosted, it needs a Docker container with nginx.
The Node.js backend needs its own Node.js container, so our app will be split into 2 containers, which will be linked.
So to write 2 Dockerfiles (one for each image), we have to split up the folders in our repo like:
root
  Angular : contains the Dockerfile for nginx
  NodeJS  : contains the Dockerfile for Node.js
But the problem is that they both need the package.json (Angular for the devDependencies and NodeJS for the dependencies).
Which is the best structure in the repo for your application?
The Nginx and Node.js containers can share a volume in Docker Compose if you like. You can use the volumes_from parameter: it mounts all of the volumes from another service or container, optionally specifying read-only (ro) or read-write (rw) access. Note that volumes_from only exists in Compose file format versions 1 and 2; it was removed in version 3. https://docs.docker.com/compose/compose-file/
volumes_from:
  - service_name
  - container_name
  - service_name:rw
In your case, package.json can live in your Node.js container while also being accessible to the Nginx container through this parameter.
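A sketch of what that could look like in a version 2 compose file; the service names and paths mirror the folder layout above but are assumptions:

version: "2"
services:
  nodejs:
    build: ./NodeJS
    volumes:
      - ./package.json:/app/package.json  # the shared file, mounted into the service
  nginx:
    build: ./Angular
    volumes_from:
      - nodejs  # inherits all volumes from nodejs, including package.json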
