I'm totally new to Docker. I have a Flask API service and a React app. Can I use just one Docker container to run both python app.py and npm start together? Thanks.
The better approach is to separate them into two different containers and run them together with docker-compose. That way you get all the advantages of container isolation, and at the same time (by default) your apps are on the same network, so they can talk to each other by container name.
But of course you can also create a shell script that starts the two apps in the background and use it as the CMD in a single Dockerfile.
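For example, a minimal docker-compose.yml along those lines might look like this (the ./api and ./client paths, ports, and service names are placeholders for your Flask and React projects, each with its own Dockerfile):

version: "3"
services:
  api:
    build: ./api            # Dockerfile whose CMD runs `python app.py`
    ports:
      - "5000:5000"
  client:
    build: ./client         # Dockerfile whose CMD runs `npm start`
    ports:
      - "3000:3000"

With this, docker-compose up starts both containers, and they can reach each other by service name (e.g. http://api:5000 from inside the client container).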
Related
I'm trying to dockerize a React app I made by creating a container for development, one for production, and a last one for hosting the app. In my production Dockerfile I use the command RUN npm run build
so that I can get the build files I need to host the app. But I can't copy them into the hosting container using docker-compose. Is there a way to do it with docker-compose, or do I need to build the app in the hosting container (and erase the production container, since it won't have any purpose anymore)?
I have an AngularJS front-end project that runs on nginx and communicates with a back-end Java server (separate from this codebase). I find myself running the following commands to install dependencies and run the project:
# make sure node, npm, and gulp are installed
npm install
gulp watch
Should the above be dockerized, or is it preferable to run these projects via the commands? The code will be modified locally as we develop (so we'd probably need to configure a volume that maps to the project's directory).
What would be the advantages or disadvantages of dockerizing the above vs. just running the above commands to get the project started? The main goal here is to reduce the time it takes for a new developer to get started/comfortable with the project.
The main benefit I can think of right now for dockerizing this application is that someone else can deploy it a little more easily, with the only dependencies being Docker and access to a registry where the built images are stored. They could simply issue a docker run command referencing the application image and tag, and they'd have a running containerized application.
The other possible benefit is portability across the systems that are your target environments. Again, the only dependency is Docker.
Then you have the added benefits of automated image builds and built-in versioning, to name a few.
Also note, you could set up a remote SCM to store code / Dockerfiles to automate build / deploys, if you would like to move away from local host development.
If your main goal is to reduce the time it takes for a new developer to get started/comfortable with the project, then the biggest issue you will face is the OS (Windows vs. Linux use). An alternative solution to Docker would be to use Vagrant.
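If you do decide to dockerize the front-end workflow, a rough sketch could look like the following (base image, port, and paths are assumptions, not taken from your project):

# Dockerfile for the front-end dev environment
FROM node:8
RUN npm install -g gulp
WORKDIR /app
COPY package.json .
RUN npm install
CMD ["gulp", "watch"]

Build and run it with the project directory mounted so local edits are picked up:

docker build -t frontend-dev .
docker run -it -p 8080:8080 -v $(pwd):/app frontend-dev

One caveat: mounting the whole project over /app hides the node_modules installed in the image, so you would either run npm install once on the host as well or mount only the source sub-directories.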
I'm using create-react-app for my projects, with Docker as my dev environment.
Now I would like to know the best practice for deploying my project to AWS (I'll deploy the Docker image).
Maybe it's a dumb question, but I'm really stuck on it.
My Dockerfile has the command yarn start. For dev that's enough: I don't need to build anything, my bundle runs in memory. But for QA or PROD I would like to build with npm run build, which as far as I know creates a new folder with the files that should be used in the prod environment.
That said, my question is: what is the best practice for this kind of situation?
Thanks.
This is what I did:
Use npm run build to build all static files.
Use the official nginx image (_/nginx) to build a customized HTTP server that serves those static files (see the Dockerfile sketch after this list).
Upload the customized image to a repository in Amazon EC2 Container Registry (ECR).
Reference the image in an ECS task definition. Then use ELBv2 to start a load balancer that forwards all outside requests to ECS.
(Optional) Enable HTTPS in ELBv2.
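For reference, the nginx Dockerfile mentioned above can be as small as this (a sketch, assuming npm run build put the static files in ./build):

FROM nginx:alpine
# replace the default site content with the static build
COPY ./build /usr/share/nginx/html
# optionally add a custom server config:
# COPY nginx.conf /etc/nginx/conf.d/default.conf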
One time things:
Figure out the mechanism of ECS. You need to create at least one host server for ECS. I used the Amazon ECS-Optimized AMI.
Create a Docker repository (ECR) so you can upload your customized Docker image (see the push commands after this list).
Create ECS task definition(s) for your service.
Create ECS cluster(s) and add task(s).
Configure ELBv2 so it can forward the traffic to your internal ECS dynamic port.
(Optional) Write script to automate everyday deployment.
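For reference, pushing the customized image to that repository typically looks something like this (the account ID, region, and repository name below are placeholders):

# authenticate the local Docker client against the registry (AWS CLI v1 syntax)
$(aws ecr get-login --no-include-email --region us-east-1)
# tag and push the image built from the Dockerfile above
docker tag my-react-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-react-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-react-app:latest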
I would get paid if someone wants me to do those things for her/him. Or you can figure it out by yourself following those clues.
However, if your website is a simple static site, I recommend using GitHub Pages: it's free and simple. My solution is for multiple static + dynamic applications which may involve other services (e.g. Redis, Elasticsearch) and require daily/hourly deployments.
You would have to run npm run build and then copy the resulting files into your container. You could use a separate Dockerfile.build to build the files, extract them and add them to your final container. Your final container should be able to serve the files. You can base it on nginx or another server. You can also use it as a data volume container in your existing server container.
Recent versions of Docker (17.05 and later) make this process easier with multi-stage builds: the build container and the final container can both be defined in the same Dockerfile.
Here's a simple example for your use case:
# build stage: node:onbuild copies the app into /usr/src/app and runs npm install automatically
FROM node:onbuild AS builder
RUN npm run build

# final stage: nginx serves the static files produced by the build stage
FROM nginx:latest
COPY --from=builder /usr/src/app/build /usr/share/nginx/html
You'd probably want to include your own nginx configuration file.
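For a single-page React app, a minimal config might look like this (just a sketch: the try_files fallback keeps client-side routes working on refresh), copied into the image with something like COPY nginx.conf /etc/nginx/conf.d/default.conf:

server {
    listen 80;
    root /usr/share/nginx/html;
    location / {
        # serve the requested file if it exists, otherwise fall back to the SPA entry point
        try_files $uri $uri/ /index.html;
    }
}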
More on multistage builds here:
https://docs.docker.com/engine/userguide/eng-image/multistage-build/
I have a simple AngularJS application. The backend can be treated as a service (external API), so no server side is needed at all. I would like to run it in Docker; however, I'm not sure what the best practice is here.
What I'm expecting to achieve is the following:
The Docker setup should be able to run everything I was doing locally with Node.js (webpack/grunt/gulp) without the need to install anything on my local machine, and it should make sure every team member is working with the same version of basically everything.
The Docker image should be easy to deploy to production and should run as lightly as possible (it's just static content!).
The real issue is that, as far as I understand, the dev image should be based on Node.js with a mounted volume and everything, while the production image should be a super simple nginx server that serves static content. So I might end up with two separate images that use the same code base; not sure if this is the right way to go.
Can anyone shed some light on this topic? Thanks.
Your idea seems OK. I generally create a bash script (for me it's flexible enough) to deploy to different environments according to the requirement (dev & prod).
Assume you have created a bash script deployApp.sh:
sh deployApp.sh `{dev or prod}`
So you can create (or switch) the Dockerfile on the fly according to your environment and build your app with that Dockerfile. That way you can manage your prod environment requirements (e.g. deploy only to nginx with the bundles webpack created) as needed.
An example of deployApp.sh:
#!/bin/bash
# deployApp.sh: build the bundle, then build the matching Docker image
TARGET_ENV=${1:-dev}   # first parameter: dev or prod
webpack                # creates bundle.js etc. (add any required parameters here)
# choose ./prod/Dockerfile or ./dev/Dockerfile according to the first parameter
if [ "$TARGET_ENV" = "prod" ]; then
  docker build -f ./prod/Dockerfile -t myapp:prod .   # nginx-based image with the built bundles (tag name is arbitrary)
else
  docker build -f ./dev/Dockerfile -t myapp:dev .
fi
That is just one approach along the lines of your idea, and I use a similar approach myself. You only create that setup once, and you can apply it to other projects if it's suitable.
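For completeness, the two Dockerfiles the script switches between could look roughly like this (a sketch; image tags, paths, and the dist output folder are assumptions):

# ./dev/Dockerfile: node-based image for development
FROM node:8
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "start"]

# ./prod/Dockerfile: plain nginx serving the bundles webpack already produced
FROM nginx:alpine
COPY ./dist /usr/share/nginx/html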
Is it possible to pass build arguments (the 'ARG' Docker instruction) when building a managed VM?
In the Dockerfile I set a default value...
ARG env="dev"
When building the Docker container I can change this value...
docker build -t test/app --build-arg env=pr .
I have two environments and I want to deploy the managed VM with a different configuration file chosen during the Dockerfile build process.
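Inside the Dockerfile, the build argument would typically be used to pick the matching file, for example (the config/ paths here are hypothetical):

ARG env=dev
# copy the configuration that matches the requested environment
COPY config/config.${env}.json /app/config.json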
Thanks.
Sadly, this isn't currently supported. All of our docker build stuff right now is kind of magically integrated. There are a few options here, though none of them are quite what you're looking for.
You can build your docker container locally, push it to gcr.io, and then use the --image-url flag on gcloud app deploy.
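That workflow would look roughly like this (project and image names are placeholders; this uses the gcloud docker wrapper current at the time):

docker build -t gcr.io/my-project/my-app --build-arg env=pr .
gcloud docker -- push gcr.io/my-project/my-app
gcloud app deploy app.yaml --image-url=gcr.io/my-project/my-app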
In the next few weeks, we're going to start using our container builder service by default for docker builds with Managed VMs. While we don't have a plan right now to expose the setting, there's a config setting that allows you to define environment variables via the container builder API. It's going to be easier to support something like this in the future.
Hope this helps!