Is more granular versioning in a monorepo with a container possible?

My team has a monorepo written with React, built with Webpack, and managed with Lerna.
Currently, our monorepo contains a package for each screen in the app, plus a "container" package that is basically a router that lazily serves each screen. The container package has all the screens' packages as dependencies.
The problem we keep running into is that, by Lerna's convention, that container package always contains the latest version of each screen. However, we aren't always ready to take the latest version of each screen to production.
I think we need more granular control over the versions of each screen/dependency.
What would be a better way to handle this? Module Federation? peerDependencies? Are there other alternatives?

I don't know if this is right for your use case, as you may need to stick with a monorepo for some reason, but we have a similar situation where our frontend needs to pull in different screens from different custom packages. The way we handle this is to structure each screen (or set of screens) as its own npm package in its own directory (this can be as simple as creating a corresponding package.json), publish it to its own private Git repository, and then install it in the container package via npm as you would any other module (you will need to create a Git token or set up SSH access if you use a private repo).
The added benefit of this is that you can use Git release tags to mark commits with versions (we wrote our own utility that manages this process for us automatically using Git hooks to make this easier), and then you can use the same semver ranges that you would with a regular npm package to control which version you install.
For example, one of the dependencies in your container's package.json could look something like this: "my-package": "git+ssh://git@github.<company>.com:<org or user>/<repo>#semver:^1.0.0", and on the GitHub side you would mark your commit with the tag v1.0.0. Now just import your components and render as needed.
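To make the flow concrete, here is a minimal sketch; the package names, host, and versions below are placeholders, not from the original setup:

    # container package.json: pin each screen with a semver range resolved from Git tags
    "dependencies": {
      "screen-home": "git+ssh://git@github.example.com:my-org/screen-home#semver:^1.0.0",
      "screen-checkout": "git+ssh://git@github.example.com:my-org/screen-checkout#semver:~2.1.0"
    }

    # in each screen's repository: tag the release so the range above can resolve to it
    git tag v1.0.0
    git push origin v1.0.0

With ranges like these, the container only picks up a screen's new major version when you deliberately widen the range, which is exactly the granular control the question asks for.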

"However, we aren't always ready to take the latest version of each screen to production."
If this is a situation that occurs very often, then maybe a monorepo is not the best way to structure the project code? A monorepo is ideal to accommodate the opposite case, where you want to be sure that everything integrates together before merging.
This is often the fastest way to develop, as you don't end up pushing an indeterminate amount of integration work into the future. If you (or someone else) have to come back to it later, you'll lose time context switching. Getting it out of the way is more efficient, and a monorepo makes that as easy as it can be.
If you need to support older versions of code for some time because it's depended on by code you can't change (yet), you can sometimes just store multiple versions on your main branch. React is a great example; take a look at all the .new.js and .old.js files:
https://github.com/facebook/react/tree/e099e1dc0eeaef7ef1cb2b6430533b2e962f80a9/packages/react-reconciler/src (the last commit on main at the time of writing)
Sure, it creates some "duplication", but if you need to have both versions working and maintained for the foreseeable future, it's much easier if they're both there all the time. Everybody gets to pull it, nobody can miss it because they didn't check out some tag/branch.
You shouldn't overdo this either, of course. Ideally it's a temporary measure you can remove once no code depends on the old version anymore.
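As a rough sketch of what such a fork point can look like (the file names and the feature flag here are illustrative, not React's exact setup):

    // Scheduler.js – the only module consumers import from
    import {enableNewImplementation} from './featureFlags';
    import * as Old from './Scheduler.old';
    import * as New from './Scheduler.new';

    // Both versions live on main; a build-time flag picks which one actually runs.
    export const schedule = enableNewImplementation ? New.schedule : Old.schedule;

Everyone always has both implementations checked out, and flipping the flag is a one-line change rather than a branch switch.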

Related

What is the problem of incompatibility of library versions, and how does the monorepo style solve it?

I have started taking an interest in the monorepo approach, and Nx in particular. Almost all articles say that a monorepo solves the problem of incompatible library versions, and I don't quite understand how. I have a few questions:
If I understood right, the idea of a monorepo (in terms of shared code) is that all shared code is always at the same version, and all changes happen in one atomic commit (as the monorepo advertising states). So let's imagine a monorepo with 100 projects, all of which depend on libA in the same repo. If I change something in libA, then I have to check the change against every dependent project. Moreover, I have to wait for all the codeowners to review my changes. So what are the pros?
Let's imagine I have a monorepo with the following projects: appA, libC, libD, plus some third-party library, let's call it third-party-lib. appA depends on libC and libD. At some point appA needs third-party-lib-v3, BUT libC depends on third-party-lib-v1. https://monorepo.tools/#code-generation states: "One version of everything. No need to worry about incompatibilities because of projects depending on conflicting versions of third party libraries." But that is not the case: in the JavaScript world, it results in 2 different versions of third-party-lib in different node_modules. Again, what are the pros?
I may be very naive in my questions, because I have never run into problems with libraries and I have only just started learning about monorepos, so I would be glad if someone could help me sort this out.
Having worked with shared code in a non-monorepo environment, I can say that managing internal packages without a monorepo tool like NX requires discipline and can be more time-consuming.
In your example of 100 projects using 1 library, all 100 projects should be tested and deployed with the new version of the code. The difference is when.
In separate repos, you would publish the new version of your package, with all the code reviews and unit testing that go along with it. Next you would update the package version in all your 100 apps, probably one by one. You would test them, get code reviews, and then deploy them.
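Concretely, the separate-repo flow is something like this (the package name and versions are placeholders):

    # in the library's repo
    npm version minor          # e.g. bumps 1.4.0 -> 1.5.0
    npm publish

    # then, in each of the 100 app repos, one by one
    npm install my-lib@^1.5.0
    # run tests, get a code review, deploy, repeat for the next app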
Now, what if you found an issue with your new changes in one of the apps? Would you roll back to the previous version? If it was in the app then you could fix it in that one app, but if it was in the library, would you roll back the version number in all your apps? What if another change was needed in your library?
You could find yourself in a situation where your apps are using different versions of your library, and you can't push out new versions because you can't get some of your apps working with the previous version. Multiply that across many shared libraries and you have an administrative nightmare.
In a monorepo, the pain is the same, but it requires less administrative work. With NX, you know which apps your change affects, can test all of those apps before you deploy your change, and can deploy them all at once. You don't block other changes going into your library, because changes aren't committed until they are tested everywhere they are used.
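For instance, Nx can compute the set of affected projects from the Git diff; a typical invocation looks like this (the targets depend on how your workspace is configured):

    # run tests/builds only for projects affected by the current change
    npx nx affected --target=test
    npx nx affected --target=build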
It is the same with third party libraries. When you update the version of a library, you test it in all applications that use it before your change is committed. If it doesn't work in one application, you have a choice.
Fix the issue preventing that application from working OR
Don't update the package to the new version
It means that you don't have applications that are 'left behind', and you are forced to keep everything up to date. It does mean that updates can sometimes take so much time that they are difficult to prioritise, but that is the same for multi-repo development.
Finally, I would add that when starting to work with NX you may find yourself creating large, frequently changing libraries that are used by all apps, or perhaps putting large amounts of code in the apps themselves. This leads to pain, where changes frequently result in deployments of the whole monorepo. I have found that it is better to create app-specific folders containing libraries that are only used by that app, and to create shared libraries only when it makes business sense to do so (an illustrative layout follows the examples below). Examples are:
Services that call APIs and return business domain objects that should not really change (changes to these APIs and responses generally result in a V2 of the API, and a new NX library can be created to serve that V2 API, leaving the V1 unchanged).
Core, stable atomic UI libraries for each component (again, try not to change the component itself, but create a V2 if it needs to change)
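As an illustration, such a workspace layout might look like this (all names are hypothetical):

    apps/
      app1/
    libs/
      app1/              # libraries used only by app1
        feature-orders/
      shared/            # created only when it makes business sense
        ui-button/       # stable atomic UI component (add a V2 alongside if it must change)
        api-v1/          # services for the V1 API, left unchanged once V2 exists
        api-v2/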
More information on this can be found in the NX documentation on applications and libraries.

Optimizing build time for multiple Ext JS applications

We have a modular monolith application, each module being an Ext JS app. The modules share a lot of features and functionality, so most of the code sits in a common Ext JS package that gets imported into each module; the modules themselves are relatively thin. We also provide an accessibility build, i.e., everything is built at least twice (once with the normal theme, once with high contrast), and for some apps more often (some logic is managed through Ext JS macros to exclude/include different regions at build time).
The end result is an agonizing build time: ~10 apps, each built at least twice, each build lasting just under 2 minutes. It's all because each app is built from scratch. Is there a straightforward way to build them together, so that instead of rebuilding the Ext JS code / common package code / themes 10 times, they would be built once and reused in the build process of all apps?
The "saving and restoring sets" feature looks very relevant, but it seems to be a lower-level feature which, as far as I understand it, would come in useful if we were reimplementing the build process from scratch and tossing out app.json. Is there a clear way to incorporate it into existing higher-level features like sencha app build?
You could build the packages (if any) separately from the application, then drop the packages into the owning directories of the build. However, Sencha Cmd and the way the class system and dependency resolution work make it really hard to untangle the build process, so it's hard to give general advice here.
You might want to look into the package loader of Ext JS and the "uses" configuration option in app.json.
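As a rough sketch of the direction (the package names are placeholders; check the Sencha Cmd docs for your version, since exact behaviour differs between releases):

    // app.json (excerpt)
    {
        "requires": [
            "my-theme"       // compiled into each app as before
        ],
        "uses": [
            "my-common"      // built once as its own package, loaded at runtime by the package loader
        ]
    }

The idea is that packages under "uses" are built separately from the apps, so the common code and themes stop being rebuilt per app.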

AWS CodePipeline: rebuild only the affected monorepo apps

I have an NX monorepo with 2 React applications and a shared library between them:
    apps/
      app1/
      app2/
    libs/
      (global files shared by both apps)
Both are deployed with AWS CodePipeline to an S3 bucket, and they share one monorepo repository. The main issue is that whenever I push changes to the repo, whether they are in libs (shared) or in one of the apps, the pipeline rebuilds all of my applications. My expected result: if I change something in libs, rebuild all projects, because it affects them all; but if I make a change in app1 that doesn't affect app2, AWS should rebuild only app1.
I read a lot of posts and landed on Lambda functions and Lerna, but everything looks pretty complicated since I am new to AWS.
I found a diagram showing that I need to use Lambda functions to check which part of the repo has changed and determine which pipeline to rebuild. I would be really glad if someone could simplify things for me so I can find an easier solution, or if someone who has dealt with this problem could help me solve it.
If you use CodePipeline/CodeBuild with a self-created build-server container image that includes Nx, you don't need that logic. In that scenario, Nx inside the build server detects what changed and builds only what is needed. Obviously you have to use EFS or similar for persistence between builds.
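A minimal sketch of what the build step could run (the base ref is a placeholder; you would have to track the last successfully built commit yourself, e.g. in S3 or SSM):

    # inside the CodeBuild container (Nx preinstalled in the image)
    npm ci
    npx nx affected --target=build --base=$LAST_BUILT_SHA --head=HEAD
    # $LAST_BUILT_SHA is hypothetical: the commit your previous successful build ran against

Because nx affected diffs the project graph between the two refs, a change in libs rebuilds both apps, while a change confined to app1 rebuilds only app1, which matches the expected behaviour in the question.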

Does it make sense to eject a project from create react app and "take back control"?

Create React App is an awesome way to set up a new React project. However, it forces certain baked-in decisions onto you, e.g. using Jest rather than other test runners such as Karma/Mocha. As I am setting up a new greenfield project with React, I am trying to work out whether best practice is to stay with it and accept certain constraints, or whether most teams end up ejecting and, in the parlance of Brexit, "taking back control", and what the reasoning is.
create-react-app actually has a lot of sensible defaults that make it an ideal starting point. The maintainers also regularly update things to stay in sync with where the industry is going, so that's great. And it is maintained by some of the same people responsible for React.
The biggest drawback (and strength) is that it doesn't include many other libraries. You have to add those yourself.
But if you do that you occasionally find that you need to add or tweak a small thing in the Babel/Webpack config.
Luckily there is a middle ground. Libraries like react-app-rewired (https://github.com/timarney/react-app-rewired) allow you to make small changes to the Webpack config without ejecting just yet.
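For example, a config-overrides.js for react-app-rewired might look like this (the alias is just an illustration of a small tweak):

    // config-overrides.js – react-app-rewired picks this file up automatically
    const path = require('path');

    module.exports = function override(config, env) {
      // add a path alias to the CRA Webpack config without ejecting
      config.resolve.alias = {
        ...config.resolve.alias,
        '@components': path.resolve(__dirname, 'src/components'),
      };
      return config;
    };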
Once you do that, you will want to be very careful when upgrading react-scripts, because every time you do, it might break the changes you layered on top of their Webpack config.
But only once that pain is too much would I consider ejecting.

How to organize related applications into git repos?

What is the decision tree for knowing when to split a suite of related and/or cohesive applications into git repos and/or branches? Should I keep each app in its own repo? Or all apps & dependencies in a single repo? Or something in between?
The answer to "How should I organize multiple related applications using git?" claims that a repository per project is appropriate, but gives no clues as to what a project would be.
And then there's the question of dev, test, integration-test, and production checkouts when the git repos are split. The answer to "how do you organize your programming work" lists some branch/tag options, but ignores the multi-app details.
There's also the DB schema! An incremental definition of the schema helps, but again, where would one keep this definition if the DB spans back-end and front-end apps?
Some examples I've been pondering:
a front-end web app and its back-end CGI/DB: one repo or two?
a set of web back-ends that use features from other back-ends
a set of front-end apps that share CSS and jQuery plug-ins
Selenium scripts that test front-end features across dependent code: in the front-end app repo or the dependent code repo?
If I want to work on a single app, it's hard (well, tedious and error-prone) to check out one directory of a repo, so I have to check out the entire git tree (or at least clone the whole tree); that implies that git is not really built for keeping all the apps & dependencies in a single tree.
But if I want to keep each of the projects (apps, frameworks, dependencies, doc trees, CSS) in its own repo, then I end up chasing my tail on dependency resolution, that is, I don't know which versions of each app are compatible. I think git tags are a good way to go, if only I could move them to newer versions that maintain compatibility.
When apps split or merge -- as happens often when refactoring models down into base models -- can I move the git history of just those files to another repository? I don't see how to do this, so that leans towards a single repo for it all.
If I develop a new feature across apps, it would be nice for branches to represent features.
I think I want a repo of repos -- does that exist?
This is about using a component approach: a component being a coherent set of files which has its own history (its own set of branches, tags, and merges).
It should include only what cannot be generated (although the db schema can sometimes be added to the repo, as seen in "What is the right approach to deal with Rails db/schema.rb file in GIT?"; you can still generate it, though, as shown in "What is the preferred way to manage schema.rb in git?", to avoid needless conflicts).
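As a concrete example, in a Rails project the schema file can be regenerated from the live database rather than resolved by hand:

    # regenerate db/schema.rb from the current database
    bin/rails db:schema:dump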
A component can evolve without another one having to evolve. See "Structuring related components in git".
That is the main criterion which allows you to answer: "X and Y: one or two repos?".
You can split a repo into two later, but be aware that this will rewrite their history: other contributors will need to reset their own repos to that new history.
You can group those different component repos into one with submodules (that is the "repo of repos"), or, if you want to have only one repo, with subtrees.
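For instance (the URLs and paths are placeholders):

    # repo of repos: each component stays its own repository
    git submodule add git@example.com:org/frontend.git frontend
    git submodule add git@example.com:org/backend.git backend
    git commit -m "Add components as submodules"

    # alternative: fold a component into a single repo as a subtree
    git subtree add --prefix=frontend git@example.com:org/frontend.git main --squash

Submodules pin each component to an exact commit, which also addresses the "which versions are compatible" worry above: the super-repo records a known-good combination.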
