AWS CodePipeline: rebuild only the affected monorepo apps (ReactJS)

I have a NX Monorepo with 2 react applications and a shared library between them:
apps/
  app1/
  app2/
libs/
  global files shared by both apps
I have both of them deployed with AWS CodePipeline to an S3 bucket, and they share one monorepo repository. The main issue is that whenever I push changes to the repo, whether they are in libs (shared) or in one of the apps, the pipeline rebuilds all of my applications. My expected result is: if I change something in libs, rebuild all projects, because it affects them all; but if I make a change in app1 that doesn't affect app2, AWS should rebuild only app1.
I read a lot of posts and landed on Lambdas and Lerna, but everything looks pretty complicated since I am new to AWS.
This is an image I landed on; it shows that I need to use Lambda functions to check which part of the repo changed and determine which pipeline to rebuild. I would be really glad if someone simplified things for me so I can find an easier solution, or if someone who has dealt with this problem could help me solve it.

If you use CodePipeline/CodeBuild with a self-created build-server container image that includes Nx, you don't need that logic. In that scenario, Nx inside the build server detects what changed and builds only what is needed. Obviously you have to use EFS or similar for persistence of the Nx cache.
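If you instead keep one pipeline per app, the change-detection step that the diagram delegates to a Lambda is only a few lines. A sketch in shell, assuming the apps/app1, apps/app2, libs layout from the question (the app names are illustrative):

```shell
#!/bin/sh
# Map changed file paths (one per line on stdin) to the apps whose
# pipelines need a rebuild. Layout and app names follow the question.
affected_apps() {
  while IFS= read -r path; do
    case "$path" in
      libs/*)      echo app1; echo app2 ;;   # shared code affects every app
      apps/app1/*) echo app1 ;;
      apps/app2/*) echo app2 ;;
    esac
  done | sort -u
}

# A change only under apps/app1 should rebuild app1 alone:
printf 'apps/app1/src/index.tsx\n' | affected_apps   # prints: app1
```

In CodeBuild you would feed it the commit range, e.g. `git diff --name-only "$CODEBUILD_RESOLVED_SOURCE_VERSION^" "$CODEBUILD_RESOLVED_SOURCE_VERSION" | affected_apps`, and start only the matching pipelines with `aws codepipeline start-pipeline-execution`. Nx can compute the same set for you with `npx nx affected --target=build --base=<last built commit>`, which also understands dependencies you add later without editing the script.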

Related

Is more granular versioning in a monorepo with a container possible?

My team has a monorepo written with React, built with Webpack, and managed with Lerna.
Currently, our monorepo contains a package for each screen in the app, plus a "container" package that is basically a router that lazily serves each screen. The container package has all the screens' packages as dependencies.
The problem we keep running into is that, by Lerna's convention, that container package always contains the latest version of each screen. However, we aren't always ready to take the latest version of each screen to production.
I think we need more granular control over the versions of each screen/dependency.
What would be a better way to handle this? Module Federation? peerDependencies? Are there other alternatives?
I don't know if this is right for your use case, as you may need to stick with a monorepo for some reason, but we have a similar situation where our frontend needs to pull in different screens from different custom packages. The way we handle it is to structure each screen or set of screens as its own npm package in its own directory (this can be as simple as creating a corresponding package.json), publish it to its own private Git repository, and then install it in the container package via npm as you would any other module (you will need to create a Git token or set up SSH access if you use a private repo).
The added benefit of this is that you can use Git release tags to mark commits with versions (we wrote our own utility that manages this process for us automatically using Git hooks to make this easier), and then you can use the same semver ranges that you would with a regular npm package to control which version you install.
For example, one of the dependencies in your container's package.json could look something like this: "my-package": "git+ssh://git@github.<company>.com:<org or user>/<repo>#semver:^1.0.0", and on the GitHub side you would mark your commit with the tag v1.0.0. Now just import your components and render as needed.
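In package.json form (host, org, and repo are placeholders, exactly as above):

```json
{
  "dependencies": {
    "my-package": "git+ssh://git@github.<company>.com:<org or user>/<repo>#semver:^1.0.0"
  }
}
```

With this in place, bumping the tag in the screen's repo and re-running npm install is all it takes to pick up a new version, while the `^1.0.0` range shields you from breaking major releases.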
However, we aren't always ready to take the latest version of each screen to production.
If this is a situation that occurs very often, then maybe a monorepo is not the best way to structure the project code? A monorepo is ideal to accommodate the opposite case, where you want to be sure that everything integrates together before merging.
This is often the fastest way to develop, as you don't end up pushing an indeterminate amount of integration work into the future. If you (or someone else) have to come back to it later you'll lose time context switching. Getting it out of the way is more efficient and a monorepo makes that as easy as it can be.
If you need to support older versions of code for some time because it's depended on by code you can't change (yet), you can sometimes just store multiple versions on your main branch. React is a great example, take a look at all the .new.js and .old.js files:
https://github.com/facebook/react/tree/e099e1dc0eeaef7ef1cb2b6430533b2e962f80a9/packages/react-reconciler/src (the last commit on main at the time of writing)
Sure, it creates some "duplication", but if you need to have both versions working and maintained for the foreseeable future, it's much easier if they're both there all the time. Everybody gets to pull it, nobody can miss it because they didn't check out some tag/branch.
You shouldn't overdo this either, of course. Ideally it's a temporary measure you can remove once no code depends on the old version anymore.

Cache busting a Reactjs web application

I'm developing an application in ReactJS where I quite often push new changes to the application.
When the users load up the application they do not always get the newest version, causing breaking changes and errors against the Express backend I have.
From what I have researched you can invalidate the cache using "cache busting" or a similar method, although among the questions I have seen on Stack Overflow there is no clear consensus on how to do it, and the latest update was sometime in 2017.
How would one in a modern day ReactJS application invalidate the browsers cache in an efficient and automatic way when deploying?
If it's relevant, I'm using docker and docker-compose to deploy my application
There's no one-size-fits-all solution. A pretty common approach is adding a content hash to the bundle file, which causes the browser to fetch the file from the server again.
Something like app.js?v=435893452 instead of app.js. Most modern bundlers like Webpack can do all of that automatically, but it's hard to give you direction without knowing your setup.
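To make the idea concrete, here is a hand-rolled sketch of what a bundler does for you (Webpack's equivalent is setting `output.filename` to `'[name].[contenthash].js'`); the dist/ files are stand-ins created just for the demo:

```shell
#!/bin/sh
# Demo setup: stand-ins for a real build output.
cd "$(mktemp -d)"
mkdir -p dist
echo 'console.log("v2")' > dist/app.js
echo '<script src="app.js"></script>' > dist/index.html

# Fingerprint the bundle with a content hash and rewrite the HTML
# reference. A changed bundle gets a new URL, so browsers can cache
# aggressively yet never serve a stale app.js.
hash=$(sha256sum dist/app.js | cut -c1-8)
mv dist/app.js "dist/app.$hash.js"
sed -i "s/app\.js/app.$hash.js/" dist/index.html
```

Note that index.html itself must not be cached long-term: serve it with Cache-Control: no-cache (or a short max-age), while giving the hashed bundles a long max-age.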

How to get from zero to Mobile Web App with Data in 60 seconds

I know that all these components exist, however I really am trying to figure out if someone has brought all these together.
Here is what I need:
JavaScript/NodeJS core application boilerplate/framework
With client flexibility: a website, an HTML app (a la PhoneGap, or even better Ionic), and ideally the option to add something like a desktop app (e.g. Electron)
All with possibility of different/specialized frontend code so all assets and HTML could be packaged into the app
Ideally kept in one GIT repo
With shared code amongst all UIs
Ability to use Angular 2 in all environments (or something similar)
Realtime, standardized data connection with the data source (like Meteor's DDP); I really dislike polling and I don't want to have to write my own data protocol
Have some kind of authentication capacity
Already exists in some way
What I have been eyeballing thus far is Ionic2 on top of Meteor, however it is remarkably difficult to find an actually working example of them playing together and I have not found any with separate codebases between the two interfaces.
To clarify, below is sort-of what I envision for a folder structure:
public/
common/
models/
business-logic/
server/
web/
desktop/
mobile/
And in that, all UIs and server can import from the common folder.
The end goal is to have something like Slack where they have 3 different ways of accessing the same data using the same rules but can really specialize in each interface type.
Does this exist?
I am really looking to have something that can be started with:
git clone http://github.com/a/bc
npm install
# do some other things that are documented
meteor run ios
Or am I not gonna have my cake and be able to eat it too?
I know I am shooting for the moon, but I know I can't be the first person looking to do this
For the backend I think that LoopBack may be a good bet if you want fast development.
They have some examples for iOS, Android and Angular apps on their website:
You may get some ideas from their documentation or several example projects on GitHub.
LoopBack is currently backed by IBM.

Nice Git architecture for server / client?

I'm about to start a pretty huge project.
This project is a website.
The backend will be made with Node
The frontend will be made mostly with Angular
Backend is going to be an API (which is cool with Angular) but also (later) for an Android app.
Frontend is going to be a fork of this repo : https://github.com/maxime1992/webTemplate and I want to be able to pull from upstream to keep the fork up to date.
I am wondering. How should I manage it?
Should I create only one repo containing backend and frontend, using Git submodules or subtrees?
Or should I create two separate repos, one for the frontend and one for the backend, and then use symlinks to have them together? But if someone wants to run it on Windows ... too bad.
I want this project to be open source on GitHub so I would like to have something clear and easy for everyone :)
Tell me how you would do it, what's good, what's wrong ... I'm really curious!
As indicated in the comments, Git submodules (or Git subtrees) are not the right solution for this. Use a dependency management tool for this, which will work cross-platform (Linux, Mac, Windows), and is the standard way of doing this.
Separating your backend and frontend into separate projects is a good idea, as it will allow you to manage projects independently and add functionality or additional client applications later without bloating your application.
Since you're already using Angular for the frontend, I suggest you take a look at Bower, which is the de-facto standard dependency management tool for frontend projects. It allows you to define a bower.json file to define your dependencies, e.g. Angular and other frontend libraries, allowing you to assemble your frontend project without having to download and store libraries manually.
In your backend project, you would then also add a bower.json file which declares your frontend project as a dependency by pointing to its Git location and branch. Bower will then take care of downloading your frontend project and adding it into your backend project.
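For example, the backend's bower.json might declare the frontend as a Git dependency (names, URL, and branch are placeholders):

```json
{
  "name": "my-backend",
  "dependencies": {
    "my-frontend": "https://github.com/<user>/my-frontend.git#develop"
  }
}
```

Running bower install in the backend then pulls the frontend at that branch (or a tag/semver range) into the components directory, so neither repo ever commits the other's files.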
Check out some of the popular Bower tutorials for more info on this...
You can use bower link to automatically create symlinks between your projects - this will work across operating systems as well.
Some other tools that you might want to check out:
Yeoman for scaffolding a base project. There are some nice generators for scaffolding Angular projects, including things like LESS/SASS and Bootstrap (https://github.com/yeoman/generator-angular)
Wiredep for automatically wiring your Bower dependencies into your index.html file.
Getting your initial project setup right will be important. You can start small and grow things to a more advanced configuration later.

Can I have multiple versions deployed on openshift?

For a research project I am comparing PaaS providers. I'm however not sure about the following. On App Engine I can have multiple live versions of my application. If I have a new version and I deploy it I can reach it on a non-default url like: versionX.myapp.appspot.com. I can use that url to test it while running on the PaaS. Once I'm happy with the result I will change the default version and my visitors will also see the changes.
I am wondering if OpenShift has something similar? The only thing I found so far is that it deploys on git push, and if the build fails it will leave the old version live. This of course still leaves a risk of functional errors. If I then still have to install a test server locally I am still doing system administration, and it would be nice if this can be prevented.
How is this best resolved when using OpenShift?
Edit: I did find this article: https://www.openshift.com/blogs/release-management-in-the-cloud
Is that the way to go, or are there other common ways to do this?
The best way to re-create the Google functionality would be to run a dev/QA instance on a separate gear and add those Git repositories as remotes to your local working copy; then you can git push to any environment for testing before you deploy to production.
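The flow looks like this; the bare repositories below are local stand-ins for the Git URLs that each OpenShift gear exposes:

```shell
#!/bin/sh
# Local bare repos stand in for the QA and production gears' git URLs.
cd "$(mktemp -d)"
git init -q --bare qa.git
git init -q --bare prod.git

git init -q app && cd app
git checkout -q -b deploy
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "feature"

# One remote per environment, exactly as you would with real gear URLs:
git remote add qa ../qa.git
git remote add prod ../prod.git

git push -q qa deploy      # deploy to the QA gear and test it there
git push -q prod deploy    # promote the same commit to production
```

With real gears the remote URLs come from the OpenShift console (or the rhc CLI), and each git push triggers that gear's build-and-deploy hook, so "promote to production" is just a second push of an already tested commit.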