I'm working on a Dockerized React app built with Webpack 4. As part of the CI process on this project, the app is deployed to both staging and production environments, where it's served with lighttpd. What I would like to do is have a Storybook page that is accessible on the staging site, but not on prod.
There are a few issues here that are making it hard for me to figure out the best approach.
As part of the CI process, the app is built and automatically deployed to the staging environment when commits are pushed to any git branch with a name of the form staging/...
The switches that configure the app for staging vs. prod are primarily in the project's webpack config.
That build process is managed by a Dockerfile which I can edit. I can run yarn commands in that Dockerfile (to build a static Storybook bundle, for example). But the same Dockerfile is used for both staging and prod, and I'd prefer not to have to create a separate one.
I would prefer not to have to change the react-router routes at all, because the routes file should (ideally) be the same for staging and prod.
I'm generally a little fuzzy on how (or whether) a user could enter a URL on the staging environment that bypasses react-router somehow and shows the static Storybook page. The networking aspects of web dev are not my strong suit; I'm usually just building and styling components.
My approach thus far has been to simply build a static Storybook bundle by adding a RUN yarn storybooks line to the Dockerfile, but this fails in a couple of ways. 1) It runs for both staging and prod.
2) It seems to build the static Storybook bundle (i.e. the Dockerfile output shows Storybook building successfully via its own, internal webpack process), but I'm not sure how to actually view it in the browser.
I hope that all makes some sense. What I'm looking for is general "best practices"-style guidance. Please let me know if I can provide any additional info and thanks for the time.
An easy way would be to RUN yarn storybooks as a separate process (using concurrently in a package.json script named prod, for example) and run the right command by passing a --build-arg to the Dockerfile.
In your Dockerfile:
# EXAMPLE: docker build --build-arg buildenv=staging .
ARG buildenv
RUN yarn run $buildenv
Do not forget to expose the right ports. My script in package.json looks like this:
"storybook": "start-storybook -c storybook/ --ci -p 3007"
And as for the routing, you could add a link and a redirect to the Storybook page. I am currently figuring this out too.
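To tie this back to the original question, here is a hedged sketch, not a drop-in solution: the buildenv values, paths, and base images below are my assumptions, and I'm assuming the stock build-storybook CLI rather than the custom yarn storybooks script. The idea is to keep a single Dockerfile, build the static Storybook bundle only when the staging build-arg is passed, and copy it into a subdirectory of the web root. Because lighttpd resolves real files and directories on disk before any SPA fallback applies, a URL like /storybook/ is served directly and never reaches react-router:
FROM node:12 AS builder
ARG buildenv=production
WORKDIR /app
COPY . .
RUN yarn install --frozen-lockfile
RUN yarn build
# Build the static Storybook bundle only for staging; otherwise create an
# empty directory so the COPY below succeeds in both environments
RUN if [ "$buildenv" = "staging" ]; then yarn build-storybook -o storybook-static; else mkdir -p storybook-static; fi
# Second stage: whatever image you already use to serve with lighttpd;
# the docroot path below is a placeholder
FROM alpine:latest
RUN apk add --no-cache lighttpd
COPY --from=builder /app/build /var/www/html
COPY --from=builder /app/storybook-static /var/www/html/storybook
CMD ["lighttpd", "-D", "-f", "/etc/lighttpd/lighttpd.conf"]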
I have a React app (TypeScript template, created using create-react-app) which serves all changes at localhost:3000 when I execute yarn start. All local changes are immediately served with hot reloading.
I have another local dev server running which consumes this app's React files in /build (the output from yarn build).
I would like all the compiled changes emitted by yarn start to be consumed by that other server running locally. In other words, I want yarn start to emit the changes to my file system so they can be served.
I tried ejecting the project and changing the configuration files to emit to the build directory with yarn start, but that does not work.
I have also tried switching my project to https://neutrinojs.org but that may not be an option for me at the moment.
Could someone suggest what approach I should be taking to achieve this?
I found the solution.
I used https://www.npmjs.com/package/cra-build-watch, which serves the purpose.
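For reference, the setup is tiny (the script name "watch" is my own choice; check the package README for its current flags): install it as a dev dependency, add a script, and run it instead of yarn start when you need the output on disk.
yarn add -D cra-build-watch
"scripts": {
  "watch": "cra-build-watch"
}
It runs the CRA development build in watch mode but writes the artifacts to the file system so another local server can serve them.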
I deployed a regular React app to Vercel and would like to have an API_BASE_URL environment variable with different values for the development, preview and production environments.
Typically, I'd use the dotenv npm package with my webpack config, setting up the variables for local or production depending on the build by looking into .env.local and .env.production respectively.
The .env.production would look like this:
API_BASE_URL=#{API_BASE_URL}
Then, I'd have my deployment pipeline replace all instances of #{___} with the respective values available in the pipeline.
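That replacement step can be as simple as a sed pass in the pipeline (a sketch; API_BASE_URL here stands for whichever pipeline variable holds the real value):
sed -i "s|#{API_BASE_URL}|$API_BASE_URL|g" .env.production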
However, the regular way in Vercel seems to be to create a Next.js app, where the environment variables defined in the project are automatically available via process.env in the backend code.
Is there any way to have environment variables in a non-Next.js app on Vercel?
I'm thinking I have 2 ways forward:
Create my own build and deployment pipeline to replace the #{___} placeholders and deploy using the vercel deploy command.
There is actually an easy way 🍀
I had a similar issue on my SPA (React + Webpack) app deployed on Vercel. This is what worked for me:
https://github.com/mrsteele/dotenv-webpack#properties
webpack.config.js
const Dotenv = require('dotenv-webpack');
module.exports = {
  // OTHER CONFIGS
  plugins: [
    new Dotenv({
      // systemvars: also load variables from the host environment
      // (e.g. the Vercel build runtime and dashboard), not just .env files
      systemvars: true
    })
  ]
};
Doing it like this I was able to read env vars from:
Vercel build runtime (ex: VERCEL_ENV)
.env file in my src folder
Env variables defined in Vercel console
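To illustrate the mechanics (my own sketch; API_BASE_URL is the variable from the question): dotenv-webpack replaces literal process.env.X references in the bundle at build time, so client code reads the value like any other constant:
// Replaced with the literal value at build time by dotenv-webpack
const apiBaseUrl = process.env.API_BASE_URL;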
Initially, I was overlooking an important fact: Vercel builds and deploys changes in the same step.
Even 2 commits in a preview branch that are then merged to main in 1 single commit result in a totally new build: a new build is created for every single commit in every environment.
This goes against the "build once, deploy to multiple environments" principle, but I suppose Vercel's team considered that in their quest to make the development process as simple as possible.
Anyway, I worked out 2 solutions:
In the build command, insert the environment variables, e.g. in the package.json (see the sketch after this list for how webpack receives them):
"scripts": {
  "build": "webpack --env production --env VERCEL_ENV=$VERCEL_ENV"
}
Create my own build and deployment pipeline and then deploy to Vercel using the vercel deploy command. It also works, but I think I'm going to stick to the first option for simplicity, at least in the meantime.
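For the first option, a sketch of how the webpack config might consume those flags (my own illustration, not part of the original answer): when webpack is invoked with --env values and the config exports a function, the values arrive as properties of the env argument and can be forwarded to client code with DefinePlugin:
const webpack = require('webpack');
module.exports = (env) => ({
  // env is { production: true, VERCEL_ENV: '...' } for the build script above
  plugins: [
    new webpack.DefinePlugin({
      'process.env.VERCEL_ENV': JSON.stringify(env.VERCEL_ENV)
    })
  ]
});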
I'm new to yarn, Node.js and React apps. I've tried running the Sizzy app, and it works on my local XAMPP server if I first run yarn start in the cmd terminal and then access it via http://localhost:3033, but after a while I have to rerun the same command. I've tried yarn build and then navigating to the build directory, but that just loaded a page with a header; it didn't have the same functionality. And the contents of the build directory look very similar to the contents of the public directory anyway.
I've had a look at this SO post and this one, but I'm still unsure why I need to run yarn start every time.
UPDATE:
I'm still not sure how Node, React, and Electron fit together and why each is required; much research and learning still to do! Rather than a 'React app', I believe I'm looking at an 'Electron app'. If I run the command npm run package-win, I think I should get an exe file and some DLLs. But how would I instead set this up to run on an Apache web server without having to start it from the command line, or would you just have to build it with a different architecture?
Starting to get a vague understanding from reading this.
If you used npx create-react-app <app-name>, then you just have to change the "start" script in the package.json file to "serve -s build". Run yarn add serve to add the serve package. For deployment, Heroku is a good choice: create a Heroku app and connect your git repo to the newly created app, then go to the deploy tab and deploy your branch.
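A minimal sketch of that setup, assuming the default create-react-app layout:
yarn add serve
yarn build
and in package.json:
"scripts": {
  "start": "serve -s build"
}
The -s flag serves the build folder in single-page-app mode, falling back to index.html for unknown paths, so client-side routing keeps working.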
I'm using create-react-app for my projects, with Docker as my dev environment.
Now I would like to know the best practice for deploying my project to AWS (I'll deploy the Docker image).
Maybe my question is a dumb one, but I'm really stuck on it.
My Dockerfile has a command yarn start... for dev that is enough; I don't need to build anything, since my bundle runs in memory. But for QA or PROD I would like to build using npm run build, which as I understand it creates a new folder with the files that should be used in the prod environment.
That said, my question is: what is the best practice for this kind of situation?
Thanks.
This is what I did:
Use npm run build to build all static files.
Use _/nginx image to customize an HTTP server which serves those static files. (Dockerfile)
Upload the customized image to Amazon EC2 Container Service (ECS).
Load the image in an ECS task. Then use ELBv2 to start a load balancer that forwards all outside requests to ECS.
(Optional) Enable HTTPS in ELBv2.
One time things:
Figure out the mechanics of ECS. You need to create at least one host server for ECS; I used the Amazon ECS-Optimized AMI.
Create a Docker repository on ECS so you can upload your customized Docker image.
Create ECS task definition(s) for your service.
Create ECS cluster(s) and add task(s).
Configure ELBv2 so it can forward the traffic to your internal ECS dynamic port.
(Optional) Write a script to automate everyday deployment (a rough sketch follows below).
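A hedged sketch of such a script (the account ID, region, and cluster/service/repository names below are placeholders; also note that aws ecr get-login-password is the current login command, while older CLI versions used aws ecr get-login):
# Build the image, push it to the ECR repository, then force a new deployment
docker build -t my-app .
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
aws ecs update-service --cluster my-cluster --service my-service --force-new-deployment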
I would get paid if someone wants me to do those things for her/him. Or you can figure it out by yourself following those clues.
However, if your website is a simple static site, I recommend using GitHub Pages: it's free and simple. My solution is for multiple static + dynamic applications which may involve other services (e.g. Redis, ElasticSearch) and require daily/hourly deployments.
You would have to run npm run build and then copy the resulting files into your container. You could use a separate Dockerfile.build to build the files, extract them, and add them to your final container. Your final container should be able to serve the files; you can base it on nginx or another server. You can also use it as a data volume container for your existing server container.
Recent versions of Docker make this process easier by allowing you to combine the two Dockerfiles. You can have a build container and then the final container both be defined in the same file.
Here's a simple example for your use case:
FROM node:latest AS builder
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx:latest
COPY --from=builder /usr/src/app/build /usr/share/nginx/html
You'd probably want to include your own nginx configuration file.
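For a single-page app, the key part of that configuration is an index.html fallback so deep links handled by react-router don't 404 on refresh. A minimal sketch (the file name and paths are my assumptions), copied into the image with COPY nginx.conf /etc/nginx/conf.d/default.conf:
server {
  listen 80;
  root /usr/share/nginx/html;
  # Serve real files when they exist; otherwise fall back to index.html
  # so client-side routes resolve on refresh and deep links.
  location / {
    try_files $uri $uri/ /index.html;
  }
}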
More on multistage builds here:
https://docs.docker.com/engine/userguide/eng-image/multistage-build/
I have been trying out Codeship and Heroku for continuous deployment of an AngularJS application I am writing at the moment. The app was created using Yeoman and uses bower and grunt. Initially this seemed like a really good setup: Codeship was free to use, I was quickly able to configure it to build my AngularJS project, and it offered the ability to add a deployment step after the build. There were even many PaaS providers to choose from (Heroku, S3, Google App Engine etc.). However, I seem to have become a bit stuck with getting the app to run on Heroku.
The problem started from the fact that all the documentation suggested I remove the /dist path from my .gitignore so that this directory is published to Heroku post-build. This was mainly documentation about publishing to Heroku from a local machine, but I figure this is all Codeship is doing under the hood anyway. I didn't want to do this, as I don't believe I should be checking build output into source control; the /dist folder was added to .gitignore for a good reason. Furthermore, this somewhat defeats the point of having a CI server, as I might as well just push the latest build from my machine.
After some more digging around I found out that I could add a postinstall step to my package.json file, such as bower install && grunt build, which would re-run the build on Heroku and hence repopulate all the bower dependencies (something else they wanted me to check into source control!) and the dist directory.
After giving this a try it became apparent that I would need to add bower and grunt as dependencies in package.json, which meant moving them out of devDependencies, which is where they belong!
So I now seem to be stuck. All I want to do is publish my build artefacts (/dist), the dependencies (/bower_components), and the server.js file that will run the site. Does anyone know how to achieve this with Heroku and Codeship? Alternatively, has anyone had any success with this using different tools? I am looking for something that is free, and I am willing to accept that it will not be production-stable (won't scale to multiple servers etc.). This is fine for now, as all I want to do is continuously deploy the app for internal testing and share the output with non-technical members of my team so we can discuss features we'd like to prioritise.
Any advice would be greatly appreciated.
Thanks
Ahoy, Marko from the Codeship crew here. Did you already send us an in-app message about this? I'm sure we can get your application building on Codeship and deploying to Heroku successfully.
As a very short answer, the easiest way to get this running would be to add both bower and grunt to your dependencies in the package.json. Another possibility would be to look for a custom buildpack with both tools already installed.
And finally, you could also run the tools on Codeship, add the newly installed files to the repository, commit the changes, and push this new commit to Heroku. If you want to use this approach, you'd very probably need to force-push the changes though.
Feel free to reach out to me via the in-app messenger (lower right corner of the site) and I'd be happy to help you get this working!
I found two ways to get this to work.
Heroku Node Custom Buildpack
Use the mbuchetics Heroku buildpack. It works by basically re-building the app once it has been pushed to Heroku.
There were still a few tricks I had to employ to make this work. In Gruntfile.js, two new tasks needed to be configured, called heroku:production and heroku:development; this is what the buildpack executes to build the app. I initially just aliased the main build task, but found that either the buildpack or Heroku had a problem running jshint, so in the end I copied the build task and took out the parts that I didn't need.
Also, in package.json I had to add this:
"scripts": {
"postinstall": "bower cache clean && bower install"
}
This made sure the bower_components were available in Heroku.
Pros
This allowed me to keep the .gitignore file intact, so that the 'binaries' in the dist directory and the dependencies in the bower_components directory were not committed into source control.
Cons
This basically re-builds the app once it is on Heroku, and I generally prefer to use the same 'binaries' throughout the entire build and deployment pipeline. That way I know that the code that was built is the same code that was tested and the same code that was deployed.
It also slows down the deployment, as you have to wait for the app to build twice.
CodeShip Custom Script Deployment
Not being satisfied with building my app twice, I tried using a Custom Script pipeline in CodeShip instead of the pre-existing Heroku one. The script basically modifies the .gitignore file to allow the dist folder to be committed and then pushes to the Heroku remote (which leaves the code on the origin remote unaffected by the change).
I ended up with the following bash script:
#!/bin/bash
gitRemoteName="heroku_$APP_NAME"
gitRemoteUrl="git@heroku.com:$APP_NAME.git"
# Configure git remote
git config --global user.email "your-email@example.com"
git config --global user.name "Build"
git remote add $gitRemoteName $gitRemoteUrl
# Allow dist to be pushed to heroku remote repo
echo '!dist' >> .gitignore
# Also make sure any other exclusions don't apply to that directory
echo '!dist/*' >> .gitignore
# Commit build output
git add -A .
herokuCommitMessage="Build $CI_BUILD_NUMBER for branch $CI_BRANCH. Committed by $CI_COMMITTER_NAME. Commit hash $CI_COMMIT_ID"
echo "$herokuCommitMessage"
git commit -m "$herokuCommitMessage"
# Must merge the last build on the Heroku remote, but always choose our new files in the merge
git fetch $gitRemoteName
git merge "$gitRemoteName/master" -X ours -m "Merge last build and overwrite with new build"
# Branch is in detached mode so must reference the commit hash to push
git push $gitRemoteName $(git rev-parse HEAD):refs/heads/master
Pros
This only requires a single build of the app and deploys the same binaries that were tested during the test phase.
Cons
I've used this script quite a few times now and it seems relatively stable. However, one issue I know of is that when a new pipeline is created there is no code on the master branch yet, so the script fails when it tries to merge from the Heroku remote. At the moment I get around this by doing an initial push of the master branch to Heroku before kicking off a build, but I imagine there is probably a better Git command I could run along the lines of 'only merge this branch if it already exists'.
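That check can be expressed with git ls-remote, which exits non-zero when the ref doesn't exist. A hedged sketch of how the merge step above might be guarded (same variable names as the script, untested on Codeship):
# Only merge if the remote master branch already exists
if git ls-remote --exit-code --heads "$gitRemoteName" master; then
  git fetch $gitRemoteName
  git merge "$gitRemoteName/master" -X ours -m "Merge last build and overwrite with new build"
fi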