Heroku Pipeline with Angular 2 Environments - angularjs

I've built my Angular 2 project using the Angular 2 CLI and was able to deploy the application to Heroku using this tutorial.
Now, I'd like to create a pipeline for the different environments of the application (dev, staging, production, etc.).
In my package.json I have "postinstall": "ng build -prod", which creates a production build of my code that my application runs off of. Is there a way that I could have -prod change based on CONFIG_VARS that I would have in my Heroku setup? For example, it would say "postinstall": "ng build -dev" for a dev environment or "postinstall": "ng build -staging" for a staging environment. Or would I need to set up my project differently?

The short answer is: No, you will need to do something different.
Explanation:
The npm postinstall script runs when the Heroku slug is built, when you do a git push to the first Heroku app in your pipeline. Subsequently, when you "promote" releases through your Heroku pipeline (e.g. from "development" to "staging" to "production"), the pre-built Heroku slug is promoted "as is", and is NOT rebuilt.
Hence, let's say you have a config var set up in your "development" app that would set the argument that you pass to "ng build" to "dev". That means that when you git push to your "development" app, the slug will get built with the "dev" options. That is fine for the "development" app. HOWEVER, when you subsequently promote to "staging" and "production", you will be promoting the pre-built slug that was built with the "dev" options, which is NOT what you want.
So, to get the functionality you want you will need to consider a different approach.
One way to do it would be to run your "ng build" script in the "npm prestart" phase. That should work, and would enable you to use Heroku config vars to modify your Angular2 app depending on the Heroku pipeline phase it is deployed on. However, I typically would NOT recommend this approach. It will cause your "ng build" to run every time "npm start" is run, which is quite often on Heroku (i.e. at least once every 24 hours or so, plus every time your Heroku dynos restart for whatever reason). This would cause your app to experience longer downtime than necessary EVERY time your dynos restart. Not a good idea, typically.
Instead, a better option may be to have your Angular2 app query your server when it initializes, and have the server return whatever pipeline-phase specific values your Angular2 app needs, based on regular Heroku config vars.
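As a minimal sketch of that last approach, assuming an Express-based server.js (APP_ENV and API_URL here are hypothetical config vars you would set on each pipeline app):
var express = require('express');
var app = express();
// Return pipeline-phase-specific values at runtime, read from regular Heroku config vars.
// APP_ENV and API_URL are illustrative names; expose whatever your Angular2 app needs.
app.get('/config', function (req, res) {
  res.json({
    env: process.env.APP_ENV || 'development',
    apiUrl: process.env.API_URL || 'http://localhost:5000/api/'
  });
});
app.listen(process.env.PORT || 5000);
The Angular2 app would then request /config when it initializes, instead of having the values baked in at build time.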

If you are running your application in a Heroku node environment, you can take a look at this solution to avoid having your environment variables and security keys hardcoded in the repository: https://medium.com/@natchiketa/angular-cli-and-os-environment-variables-4cfa3b849659
I also strongly suggest taking a look at its first response, which introduces a dynamic solution to create the environment file at build time: https://medium.com/@h_martos/amazing-job-sara-you-save-me-a-lot-of-time-thank-you-8703b628e3eb
Maybe they are not the best solutions, but they are a trick to avoid having to set up projects differently each time.

I found your question when I was researching the same issue: how to use environment vars from Heroku inside of Angular.
After not finding any satisfying answer, I came up with this (cookie) approach for using different API endpoints across my pipeline.
Set a cookie in your server response (server.js) with the content of your var:
app.use(function(req, res, next) {
  res.cookie('API_URL', process.env.API_URL || 'http://127.0.0.1:8500/api/');
  next();
});
read the value inside of your module:
this.apiUrl = cookieService.get('API_URL');
It's a workaround that will, of course, only work when the user accepts cookies...

Related

How to inject environment variables in Vercel Non-Next.js apps?

I deployed a regular React app to Vercel and would like to have an API_BASE_URL environment setting with different values for the development, preview and production environments.
Typically, I'd use the dotenv npm package with my webpack config, setting up the variables for local or production depending on the build by looking into .env.local and .env.production respectively.
The .env.production would look like this:
API_BASE_URL=#{API_BASE_URL}
Then, I'd have my deployment pipeline replace all instances of #{___} with the respective values available in the pipeline.
However, the regular way in Vercel seems to be to make a Next.js app and the environment variables in the project will be automatically available in the process.env variable in the backend code.
Is there any way for us to have environment variables in a non-Next.js app in Vercel?
I'm thinking I have 2 ways forward:
Create my own build and deployment pipeline to replace the #{___} placeholders and deploy using the vercel deploy command.
There is actually an easy way 🍀
Had a similar issue on my SPA (React + Webpack) app deployed on Vercel. This is what worked for me.
https://github.com/mrsteele/dotenv-webpack#properties
webpack.config.js
const Dotenv = require('dotenv-webpack');
module.exports = {
  // OTHER CONFIGS
  plugins: [
    new Dotenv({
      systemvars: true
    })
  ]
};
Doing it like this I was able to read env vars from:
Vercel build runtime (ex: VERCEL_ENV)
.env file in my src folder
Env variables defined in Vercel console
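With systemvars: true the values are inlined at build time, so something like this works in the app code (a sketch; VERCEL_ENV is one of the system variables Vercel exposes during builds):
// dotenv-webpack replaces these expressions with literal strings at build time
const isPreview = process.env.VERCEL_ENV === 'preview';
const apiBaseUrl = process.env.API_BASE_URL; // defined in the Vercel console or a .env file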
Initially, I was overlooking an important fact: Vercel builds and deploys changes in the same step.
2 commits in a preview branch which are then merged to main in 1 single commit: that merge is a totally new thing (a new build is created for every single commit in every environment).
This goes against the build once, deploy to multiple environments principle, but I suppose Vercel's team considered that in their quest to make the development process as simple as possible.
Anyway, I worked out 2 solutions:
In the build command, insert the environment variables e.g. in the package.json:
"scripts": {
  "build": "webpack --env production --env VERCEL_ENV=$VERCEL_ENV"
}
Create my own build and deployment pipeline and then deploy to Vercel using the vercel deploy command. It also works, but I think I'm going to stick to the first option for simplicity, at least in the meantime.
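For completeness, on the webpack side of the first option those --env flags arrive on the env argument of a function-style config and can be inlined with DefinePlugin (a sketch, not the actual project config):
const webpack = require('webpack');
module.exports = (env) => ({
  // ...the rest of the config
  plugins: [
    new webpack.DefinePlugin({
      // makes the value passed on the CLI available as process.env.VERCEL_ENV in the bundle
      'process.env.VERCEL_ENV': JSON.stringify(env.VERCEL_ENV)
    })
  ]
});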

process.env.NODE_ENV === "development" even in production

I want to deploy a React app (made with Create React App) plus a Node server to Heroku.
I did it, but my app can't fetch data from the server.
In production, my process.env.NODE_ENV is equal to "development", which causes a lot of wrong stuff in my code.
Do you know what could keep process.env.NODE_ENV at "development"? At build time, this environment variable is supposed to switch to "production", no?
In your package.json, add this:
"scripts": {
  "start": "export NODE_ENV=development; {your start code}"
}
Your env variables can be set per environment, in this case in Heroku: https://devcenter.heroku.com/articles/config-vars#using-the-heroku-dashboard
If you want to make sure the build always runs with the same NODE_ENV, you can follow @seunggabi's answer. I'd also use cross-env to make it work cross-platform in that case. A per-process variable can be forced in the heroku-postbuild task (after &&).
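A sketch of what that could look like in package.json (the build script here assumes Create React App; adjust to your own build command):
"scripts": {
  "build": "react-scripts build",
  "heroku-postbuild": "cross-env NODE_ENV=production npm run build"
}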
You can take control of your environment with env-cmd. It makes it easy to switch between your local development, testing, staging, UAT or production environments.
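A rough sketch of how env-cmd is usually wired up, assuming one .env file per environment:
"scripts": {
  "build:staging": "env-cmd -f .env.staging npm run build",
  "build:production": "env-cmd -f .env.production npm run build"
}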
You can refer to this article. It was very helpful for me.

How should I integrate Storybooks with React, Webpack, Docker, lighttpd?

I'm working on a Dockerized react app. It is built with Webpack 4. As part of the CI process on this project, the app is deployed to both staging and production environments, where it's served with lighttpd. What I would like to do is have a Storybooks page that is accessible on the staging site, but not on prod.
There are a few issues here that are making it hard for me to figure out the best approach.
As part of the CI process, the app is built and automatically deployed to the staging environment when commits are pushed to any git branch with a name of the form staging/...
The switches that configure the app for staging vs. prod are primarily in the project's webpack config.
That build process is managed by a dockerfile which I can edit. I can run yarn commands (to build a static storybooks bundle, e.g.) in that dockerfile. But the same dockerfile is used for both staging and prod and I'd prefer not to have to create a separate one.
I would prefer not to have to change the react-router routes at all, because the routes file should (ideally) be the same for staging and prod.
I'm generally a little fuzzy on how/whether a user could enter a URL on the staging environment that bypasses react-router somehow and shows the static storybooks page. The networking aspects of web dev are not my strong suit. I'm usually just building and styling components.
My approach thus far has been to simply build a static storybooks bundle by adding a RUN yarn storybooks line to the dockerfile, but this fails in a couple of ways. 1) It runs for both staging and prod.
2) It seems to build the static storybooks bundle (i.e. the dockerfile output shows the storybooks building successfully via its own, internal webpack process), but I'm not sure how to actually view it in the browser.
I hope that all makes some sense. What I'm looking for is general "best practices"-style guidance. Please let me know if I can provide any additional info and thanks for the time.
An easy way would be to RUN yarn storybooks as a separate process using concurrently in a package.json script named prod (see the sketch at the end of this answer), and to run the right command by using a --build-arg in the Dockerfile.
in your Dockerfile
# EXAMPLE: docker build --build-arg buildenv=preprod .
ARG buildenv
RUN yarn run $buildenv
Do not forget to expose the right ports ... my script in package.json looks like this :
"storybook": "start-storybook -c storybook/ --ci -p 3007"
And as for the routing, you could add a link and redirect to the storybooks page. I am currently figuring this out too.
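The prod script mentioned above could look roughly like this (a sketch; the start script stands in for whatever serves your app):
"scripts": {
  "start": "node server.js",
  "storybook": "start-storybook -c storybook/ --ci -p 3007",
  "prod": "concurrently \"yarn start\" \"yarn storybook\""
}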

Angular app on Heroku built using Webpack - Environment variables?

I am attempting to figure out the best way to load environment variables into my AngularJS app. I am currently using constants, which take their values from those defined in the Webpack DefinePlugin. However, this causes an issue with Heroku: the code is built when pushed to staging, and when it is promoted to production it is not rebuilt, so the webpack DefinePlugin constants still hold the staging environment variables.
I have looked at requesting the environment variables from my API at run time and then setting them as constants to be used in my front-end, but I can't figure out how to set constants programmatically outside of the initial .constant(..) opportunity.
If anyone knows any other better practices around loading environment variables into a front-end when using Webpack (and not Grunt) please do let me know.
If you are using node.js (and npm) in your server, you could consider running webpack in the "npm prestart" script, instead of in "npm postinstall".
That way webpack would run every time your heroku dynos start or recycle, and so would pick up your env var definitions from the appropriate Heroku pipeline phase. So, when your staging dynos start, webpack would pick up your staging env var definitions, and when your production dynos start, webpack would pick up your production env var definitions.
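A sketch of what that could look like in package.json (assuming webpack is available as a regular dependency and a Node server.js serves the built files):
"scripts": {
  "prestart": "webpack --config webpack.config.js",
  "start": "node server.js"
}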
The disadvantage of this approach, however, is that it increases the time your dynos are out of service while recycling, since they now need to run webpack before starting.

AngularJS Continuous Deployment Tools

I have been trying out Codeship and Heroku for continuous deployment of an AngularJS application I am writing at the moment. The app was created using Yeoman and uses bower and grunt. Initially I thought this seemed like a really good setup as Codeship was free to use and I was quickly able to configure it to build my AngularJS project, and it offered the ability to add a deployment step after the build. There were even many PaaS providers to choose from (Heroku, S3, Google App Engine etc). However I seem to have become a bit stuck with getting the app to run on Heroku.
The problem started from the fact that all the documentation suggested that I remove the /dist path from my .gitignore so that this directory is published to Heroku post build. This was mainly from documentation that talked about publishing to Heroku from a local machine, but I figure this is all Codeship is doing under the hood anyway. I didn't want to do this as I don't believe I should be checking build output into source control. The /dist folder was added to .gitignore for a good reason. Furthermore, this kind of defeats the point of having a CI server somewhat, as I might as well just push the latest build from my machine.
After some more digging around I found out that I could add a postinstall step to my package.json file, such as bower install && grunt build, which would re-run the build on Heroku and hence repopulate all the bower dependencies (something else they wanted me to check in to source control!) and the dist directory.
After giving this a try it became apparent that I would need to add bower and grunt as dependencies in package.json, which meant moving them out of devDependencies, which is where they belong!
So I now seem to be stuck. All I want to do is publish my build artefacts (/dist), the dependencies (/bower_components) and the server.js file that will run the site. Does anyone know how to achieve this with Heroku and Codeship? Alternatively, has anyone had any success with this using different tools? I am looking for something that is free, and I am willing to accept that it will not be production stable (won't scale to multiple servers etc), but this is fine for now as all I want to do is continuously deploy the app for internal testing and to be able to share the output with non-technical members of my team so we can discuss features we'd like to prioritise etc.
Any advice would be greatly appreciated.
Thanks
Ahoy, Marko from the Codeship crew here. Did you already send us an in app message about this? I'm sure we can get your application building on Codeship and deploying to Heroku successfully.
As a very short answer, the easiest way to get this running would be to add both bower and grunt to your dependencies in the package.json. Another possibility would be to look for a custom buildpack with both tools already installed.
And finally you could also run the tools on Codeship, add the newly installed files to the repository, commit the changes and push this new commit to Heroku. If you want to use this, you'd very probably need to force push the changes though.
Feel free to reach out to me via the in app messenger (lower right corner of the site) and I'd be happy to help you get this working!
I found two ways to get this to work.
Heroku Node Custom Buildpack
Use the mbuchetics Heroku build pack. This works by basically re-building the app once it has been pushed to Heroku.
There were still a few tricks I had to employ to make this work. In Gruntfile.js two new tasks needed to be configured, called heroku:production and heroku:development. These are what the buildpack executes to build the app. I initially just aliased the main build task, but found that either the buildpack or Heroku had a problem with running jshint, so in the end I copied the build task and took out the parts that I didn't need.
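The registrations looked roughly like this (a sketch; the task lists are illustrative and should mirror your own build task, minus the jshint step):
// Gruntfile.js
grunt.registerTask('heroku:production', ['clean:dist', 'copy:dist', 'cssmin', 'uglify']);
grunt.registerTask('heroku:development', ['clean:dist', 'copy:dist']);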
Also in package.json I had to add this:
"scripts": {
  "postinstall": "bower cache clean && bower install"
}
This made sure the bower_components were available in Heroku.
Pros
This allowed me to keep the .gitignore file intact so that the 'binaries' in the dist directory and the dependencies in the bower_components directory were not committed into source control.
Cons
This is basically re-building the app once it is on Heroku and I generally prefer to use the same 'binaries' throughout the entire build and deployment pipeline. That way I know that the same code that was built, is the same code that was tested and is the same code that was deployed.
It also slows down the deployment as you have to wait for the app to build twice.
CodeShip Custom Script Deployment
Not being satisfied with the fact I was building my app twice, I tried using a Custom Script pipeline in CodeShip instead of the pre-existing Heroku one. The script basically modified the .gitignore file to allow the dist folder to be committed and then pushed to the Heroku remote (which leaves the code on the origin remote unaffected by the change).
I ended up with the following bash script:
#!/bin/bash
gitRemoteName="heroku_$APP_NAME"
gitRemoteUrl="git@heroku.com:$APP_NAME.git"
# Configure git remote
git config --global user.email "your-email@example.com"
git config --global user.name "Build"
git remote add $gitRemoteName $gitRemoteUrl
# Allow dist to be pushed to heroku remote repo
echo '!dist' >> .gitignore
# Also make sure any other exclusions don't apply to that directory
echo '!dist/*' >> .gitignore
# Commit build output
git add -A .
herokuCommitMessage="Build $CI_BUILD_NUMBER for branch $CI_BRANCH. Commited by $CI_COMMITTER_NAME. Commit hash $CI_COMMIT_ID"
echo $herokuCommitMessage
git commit -m "$herokuCommitMessage"
# Must merge the last build in the Heroku remote, but always choose new files in the merge
git fetch $gitRemoteName
git merge "$gitRemoteName/master" -X ours -m "Merge last build and overwrite with new build"
# Branch is in detached mode so must reference the commit hash to push
git push $gitRemoteName $(git rev-parse HEAD):refs/heads/master
Pros
This only requires a single build of the app and deploys the same binaries that were tested during the test phase.
Cons
I've used this script quite a few times now and it seems relatively stable. However, one issue I know of is that when a new pipeline is created there will be no code on the master branch, so this script fails when it tries to do the merge from the heroku remote. At the moment I get around this by doing an initial push of the master branch to Heroku before kicking off a build, but I imagine there is probably a better Git command I could run, along the lines of 'only merge this branch if it already exists'.
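One possible guard, as a sketch, is to fetch and merge only when the remote master ref already exists:
# Only merge if the heroku remote already has a master branch
if git ls-remote --exit-code --heads $gitRemoteName master; then
  git fetch $gitRemoteName
  git merge "$gitRemoteName/master" -X ours -m "Merge last build and overwrite with new build"
fi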
