I have a simple AngularJS application. The backend can be treated as a service (external API), so no server side is needed at all. I would like to run it in Docker, but I'm not sure what the best practice here is.
What I'm expecting to achieve is the following:
The Docker setup should be able to run everything I was doing locally with Node.js, using webpack/grunt/gulp, without the need to install anything on my local machine, while making sure every team member is working on the same version of basically everything.
The Docker setup should be easy to deploy to production and run as lightly as possible (it's just static content!).
The real issue is that, as far as I understand, the dev image should be based on Node.js with a mounted volume and everything, whereas the production image should be a super simple nginx server that serves static content. So I might end up with two separate Docker images that share the same code base. Not sure if this is the right way to go.
Can anyone shed some light on this topic? Thanks.
Your idea seems OK. I generally create a bash script (for me it's flexible enough) to deploy to different environments (dev & prod) according to the requirement.
Assume you've created a bash script called deployApp.sh, invoked as:
sh deployApp.sh `{dev or prod}`
So you can create (or switch) the Dockerfile on the fly according to your environment and build your app with it. This lets you manage exactly what your prod environment requires (e.g. deploying only webpack's generated bundles to nginx).
An example of deployApp.sh:
#!/bin/bash
ENV=${1:-dev}   # check whether the first parameter is prod or dev
webpack         # add other required parameters here; creates bundle.js etc.
# After the webpack build, choose the Dockerfile for prod or dev:
# ./prod/Dockerfile or ./dev/Dockerfile
docker build -t myapp:"$ENV" -f "./$ENV/Dockerfile" .   # image name "myapp" is a placeholder
# For prod this builds an nginx-based image and copies in the needed files & folders
This is just one approach based on your idea; I use an approach like this myself. You only create the setup once, and you can apply it to other projects too if it's suitable.
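For reference, the prod Dockerfile selected above could be as small as this (a sketch; it assumes webpack writes its bundles to ./dist):

# ./prod/Dockerfile: serve only the webpack output with nginx
FROM nginx:alpine
COPY dist/ /usr/share/nginx/html/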
I have an AngularJS front-end project that runs on nginx and communicates with a back-end Java server (separate from this codebase). I find myself running the following commands to install the package:
# make sure node, npm, and gulp are installed
npm install
gulp watch
Should the above be dockerized, or is it preferable to run these projects via the commands directly? The code will be modified locally as we develop (so we'd probably need to configure a volume that maps to the project's directory).
What would be the advantages or disadvantages of dockerizing the above vs. just running the above commands to get the project started? The main goal here is to reduce the time it takes for a new developer to get started/comfortable with the project.
Well, the only benefit I can think of right now for dockerizing this application is that it lets someone else deploy the application a little more easily, with the only dependencies being Docker and access to a repository where the built containers are stored. I.e., they could simply issue a docker run command referencing the application image/tag, and they'd have a running containerized application.
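For example (the registry, image name, and ports here are placeholders):

# Pull and run a prebuilt image of the app in one command
docker run -d -p 8080:80 myregistry.example.com/angular-frontend:1.0.0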
The other possible benefit I can foresee is portability across target systems; the only dependency, again, is Docker.
Then you have the added benefits that come with automatic container builds and built-in versioning, to name a few.
Also note, you could set up a remote SCM to store code and Dockerfiles and automate builds/deploys, if you would like to move away from local-host development.
If your main goal is to reduce the time it takes for a new developer to get started and comfortable with the project, then the biggest issue you will face is OS differences (Windows vs. Linux use). An alternative solution to Docker would be Vagrant.
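If you do dockerize the dev workflow from the question, a minimal sketch of the volume mapping could look like this (the image tag and ports are assumptions, and gulp is assumed to be a local devDependency):

# Run npm install and the gulp watcher inside a node container,
# mounting the project directory so local edits are picked up live
docker run --rm -it \
  -v "$(pwd)":/usr/src/app \
  -w /usr/src/app \
  -p 3000:3000 \
  node:6 sh -c "npm install && ./node_modules/.bin/gulp watch"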
I am using create-react-app, Travis CI, and Netlify. I have a config file that looks like this:
require('dotenv').load();

module.exports = {
  API_BASE_URL: process.env.REACT_APP_DATABASE_URL || 'http://localhost:8080'
};
When I try to deploy to Netlify, I get this error in Travis:
Creating an optimized production build...
Failed to compile.
Failed to minify the code from this file:
./node_modules/dotenv/lib/main.js:23
If I remove the require('dotenv').load(); part, it loads, but then the app tries to go to localhost, which is obviously not what I want.
This GitHub issue (https://github.com/motdotla/dotenv/issues/261) brings up the same problem, but doesn't offer a solution. I'm stuck. Help!
Disclaimer: I work for Netlify.
TL;DR: Unix shells complicate things.
The preferred way to use environment variables on Netlify is to set them in our environment, be that in the Build Environment Variables configuration widget (second on the "Build & Deploy settings" page, under the repo + build command section) or via netlify.toml (https://www.netlify.com/docs/continuous-deployment/#deploy-contexts). The latter is a bit more flexible, as you can set different values for different contexts, e.g. staging uses a staging DATABASE_URL and production uses a production one.
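For instance, the per-context section of a netlify.toml could look like this (the variable values are placeholders):

# netlify.toml: different values per deploy context
[context.production.environment]
  REACT_APP_DATABASE_URL = "https://prod-db.example.com"

[context.deploy-preview.environment]
  REACT_APP_DATABASE_URL = "https://staging-db.example.com"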
So, those variables are "available" in the build environment: if your build command were env, you'd see them, in addition to $PATH and $NODE_VERSION and some other stuff Netlify sets automatically. However, depending on how your build pipeline works, they may or may not be available inside of it. If your build command is node -p "process.env", that will show you what Node sees for environment variables, and it should show the same thing as env shows (which is what the shell run by the build script sees).
Unfortunately, many of the build pipelines that folks use DON'T automatically import/inherit variables from the parent shell. This thread shows such an example: https://github.com/theintern/intern/issues/136#issue-26148596 . So the best practice is not necessarily to use something like dotenv (though that has worked for folks who aren't trying to minify it :) ), but instead to use a build process that appropriately passes the environment variables we expose in the shell into the build environment. How you do that is kind of up to you and your tools.
A further PS: unless your build pipeline DOES something with the environment variable, it's not going to be much use in the code that gets published and served to the browser, which doesn't understand $REACT_APP_DATABASE_URL; that's just a string to the browser. I know you're not trying to do that, but I wanted to point it out for folks who might see this answer later; it's a common misunderstanding among newer developers of static sites.
I'm using create-react-app for my projects, with Docker as my dev environment.
Now I would like to know the best practice for deploying my project to AWS (I'll deploy the Docker image).
Maybe my question is a dumb one, but I'm really stuck on it.
My Dockerfile has the command yarn start. For dev that is enough: I don't need to build anything, and my bundle runs in memory. For QA or PROD, though, I would build using npm run build, which as I understand it creates a new folder with the files that should be used in the prod environment.
That said, my question is: what is the best practice for this kind of situation?
Thanks.
This is what I did:
Use npm run build to build all static files.
Use the _/nginx image to customize an HTTP server which serves those static files (Dockerfile; a minimal sketch follows this list).
Upload the customized image to Amazon EC2 Container Registry (ECR).
Load the image in an ECS task, then use ELBv2 to start a load balancer that forwards all outside requests to ECS.
(Optional) Enable HTTPS in ELBv2.
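A minimal version of the customized nginx Dockerfile mentioned above might look like this (a sketch; it assumes npm run build has already produced the ./build folder):

# Serve the create-react-app build output with the official nginx image
FROM nginx:alpine
COPY build/ /usr/share/nginx/html/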
One-time things:
Figure out the mechanism of ECS. You need to create at least one host server for ECS. I used the Amazon ECS-Optimized AMI.
Create a Docker repository in ECR so you can upload your customized Docker image (see the push example after this list).
Create ECS task definition(s) for your service.
Create ECS cluster(s) and add task(s).
Configure ELBv2 so it can forward the traffic to your internal ECS dynamic port.
(Optional) Write script to automate everyday deployment.
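For the repository/upload step, the push sequence looks roughly like this with a recent AWS CLI (account ID, region, and repository name are placeholders):

# Authenticate docker with ECR, then build, tag, and push the image
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker build -t myapp .
docker tag myapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest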
I would get paid if someone wanted me to do those things for them. Or you can figure it out by yourself following those clues.
However, if your website is a simple static site, I recommend using GitHub Pages: it's free and simple. My solution is for multiple static + dynamic applications which may involve other services (e.g. Redis, ElasticSearch) and require daily/hourly deployments.
You would have to run npm run build and then copy the resulting files into your container. You could use a separate Dockerfile.build to build the files, extract them, and add them to your final container. Your final container should be able to serve the files; you can base it on nginx or another server. You could also use it as a data-volume container alongside your existing server container.
Recent versions of Docker make this process easier by allowing you to combine the two Dockerfiles. You can have a build container and then the final container both be defined in the same file.
Here's a simple example for your use case:
# Stage 1: node:onbuild copies the app into /usr/src/app and runs
# npm install automatically via its ONBUILD triggers
FROM node:onbuild AS builder
RUN npm run build

# Stage 2: copy only the built static files into a clean nginx image
FROM nginx:latest
COPY --from=builder /usr/src/app/build /usr/share/nginx/html
You'd probably want to include your own nginx configuration file.
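Building and running it is then a single-image workflow (the tag is a placeholder):

# The multistage build produces only the final nginx image
docker build -t myapp .
docker run -d -p 80:80 myapp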
More on multistage builds here:
https://docs.docker.com/engine/userguide/eng-image/multistage-build/
We are building an application that so far has a simple user management implementation. This question relates to the built-in password-resetting functionality of LoopBack v3. User management is implemented in a model derived from the built-in User, called MyCustomUser.
Each time code changes are pushed to a GitHub repo, we have Jenkins build a Docker container and, inside of it, run npm install, then lb-sdk (with suitable parameters), then ng build --env=prod, and finally node . After this happens, the application runs normally, BUT:
When performing the same deployment commands locally (on my own Linux laptop), the API endpoints /MyCustomUsers/reset and /MyCustomUsers/reset-password are created (i.e. they are visible and manipulable via the StrongLoop Explorer).
When the deployment is run by Jenkins in the Docker container, only one of the two API endpoints is created, /MyCustomUsers/reset. God only knows where the other endpoint, /MyCustomUsers/reset-password, ends up.
Obviously, all deployments are run against the same codebase (i.e. the same commit ID in the GitHub repo). It is bewildering that the service behaves perfectly on localhost but not in the cloud-based Docker container.
Sounds like you are running two different versions of the LoopBack Angular 2 SDK. From what I've understood, the SDK for Angular 2 is still in heavy beta and not yet ready for production. However, this doesn't excuse the difference; it really does sound like two different versions.
We are using the same build flow as you. Are your package.json files identical when it comes to @mean-expert/loopback-sdk-builder?
The people working on the SDK generator are really good at responding in their issue section; I'd recommend asking there otherwise.
It turns out that the remote Docker container was running Node 6.9.2 and npm 3.10.9, whereas I was running Node 6.10.3 and npm 3.10.10 locally. After making the Docker instance run the same versions as I had locally, and deploying the package.json along with its npm-shrinkwrap.json, the endpoint was correctly generated.
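In Dockerfile terms, pinning the base image to the exact local version avoids this class of drift (a sketch; the tag and paths are assumptions):

# Pin the exact Node (and therefore npm) version used locally
FROM node:6.10.3
WORKDIR /usr/src/app
# npm-shrinkwrap.json makes npm install reproducible across machines
COPY package.json npm-shrinkwrap.json ./
RUN npm install
COPY . .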
Is it possible to pass arguments when building a managed VM, using Docker's 'ARG' command?
In the Dockerfile I set a default value...
ARG env="dev"
When building the Docker container I can change this value...
docker build -t test/app --build-arg env=pr .
I have two environments, and I want to deploy the managed VM with different configuration files chosen during the Dockerfile build process.
Thanks.
Sadly, this isn't currently supported. All of our Docker build stuff right now is kind of magically integrated. There are a few options here, though none of them are quite what you're looking for.
You can build your docker container locally, push it to gcr.io, and then use the --image-url flag on gcloud app deploy.
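In practice that looks something like this (the project and image names are placeholders):

# Build locally with the desired ARG, push to gcr.io, then deploy the prebuilt image
docker build -t gcr.io/my-project/my-app --build-arg env=pr .
gcloud docker -- push gcr.io/my-project/my-app
gcloud app deploy --image-url=gcr.io/my-project/my-app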
In the next few weeks, we're going to start using our container builder service by default for docker builds with Managed VMs. While we don't have a plan right now to expose the setting, there's a config setting that allows you to define environment variables via the container builder API. It's going to be easier to support something like this in the future.
Hope this helps!