Maintaining a monorepo for a React project

What I am looking for is a master package/SDK that defines my features. I have various build versions of this SDK with customized features, and whenever I add a new feature to the master, it should automatically be reflected in the other build versions. I might have a lot of versions in the near future, so the idea of keeping different repos/branches and merging them later can turn into a nightmare (which is what I do right now).
I have thought of creating a configuration-based system, but as the SDK grows, maintaining those configuration files will be a bigger challenge than merging projects.
Android does something like this. For example:
I have my MainActivity (the top-level file),
and I have a build version which has its own customized MainActivity.
Now when I update my top-level file, I see those changes in my build file too.
Note: newly added features will not raise merge conflicts (as long as I am not updating old features). Any suggestion or Stack Overflow reading recommendation will help.
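One pattern that roughly matches the Android behaviour described above is a workspace-style monorepo in which every build version is a thin package that re-exports the master SDK and overrides only the features it customizes. A minimal structural sketch (package names, feature names and paths are all hypothetical):

```typescript
// packages/master-sdk/index.ts -- the top-level feature set
export { login } from "./features/login";
export { payments } from "./features/payments";
// A feature added here is automatically visible to every variant that re-exports the master.

// packages/variant-acme/index.ts -- one customized build version
export * from "master-sdk";                    // inherit everything from the master package
export { payments } from "./custom/payments";  // an explicit export overrides the wildcard re-export
```

Because the variants only re-export, adding a new feature to the master requires no change (and no merge) in any variant; a variant's own source tree contains nothing but the features it actually overrides.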

Related

Optimizing build time for multiple Ext JS applications

We have a modular monolith application, each module being an Ext JS app. The modules share a lot of features and functionality, so most of the code sits in a common Ext JS package that gets imported into each module; the modules themselves are relatively thin. We also provide an accessibility build, i.e., everything is built at least twice (once with the normal theme, once with high contrast), and for some apps more (some logic is managed through Ext JS macros to include or exclude different regions at build time).
The end result is an agonizing build time: ~10 apps, each built at least twice, each build lasting just under 2 minutes. It's all because each app is built from scratch. Is there a straightforward way to build them together, so that instead of rebuilding the Ext JS code, common package code and themes 10 times, they are built once and reused in the build process of all apps?
"Saving and restoring sets" looks very relevant, but it seems to be a lower-level feature which, as far as I understand it, would come in useful if we were reimplementing the build process from scratch and tossing out app.json. Is there a clear way to incorporate it into existing higher-level features like sencha app build?
You could build the packages (if any) separately from the application, then drop the packages into the owning directories of the build. However, Sencha Cmd and the way the class system and dependency resolution work make it really hard to untangle the build process, so it's hard to give general advice here.
You might want to look into the package loader of Ext JS and the "uses" configuration option for the app.json.
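As a rough illustration of that suggestion (package names here are placeholders, and the exact keys depend on the Ext JS / Cmd version), the shared code could be declared in each app's app.json as a used, dynamically loaded package rather than being compiled into every build:

```json
{
    "name": "ModuleA",
    "requires": [
        "font-awesome"
    ],
    "uses": [
        "common"
    ]
}
```

The idea is that a package listed under "uses" is built as its own artifact and fetched on demand at runtime through the package loader, instead of being recompiled into each application's bundle. Whether that actually removes the repeated framework/theme compilation depends on how the workspace build is set up, so treat this as a starting point rather than a guaranteed win.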

Setting up development environment

I'm a recent CS grad working for a start-up. I've been tasked with adding new features to some software they bought. I've downloaded the software from AWS and have begun trying to get started, but I'm having trouble with the front-end, which is built with React and bundled by Webpack.
The download left me with an old version of the front-end and minified files of the new version. I was able to use their source maps to unpack them and get the unbundled files, but now I can't figure out how to rebuild them.
The old version is set up to have its dependencies installed and minified by npm. I would like to set up something similar for the new version. I've been able to add all its dependencies to the package.json file, but I keep running into errors, mostly stemming from versioning conflicts. What should I do in this situation? Am I just going about this the wrong way?
Software might be a vague term to describe what you have in your hands.
It seems from your wording that you have the source code of an older version, but only the production build (bundle.js, main.chunk.js, etc.) of some newer version, for which you don't have (and potentially don't own) the source code.
React is complex enough on its own; the app was probably built using Create React App, dealing with the bundled and ejected files is even worse, and adding new features without the source code sounds like a nightmare for a CS grad at a startup. It might even be illegal.

How Does a Codename One Update "Work"

A recent thread in the Codename One discussion forum raised a question I often face when I'm waiting for a fix.
Sometimes the Codename One team indicates a fix will be coming in a couple of weeks, and other times they indicate that it's already fixed. Some of that opacity obviously relates to the update of the cloud servers, but it's unclear to me whether it's just the cloud servers and the plugin, or whether there is something I'm missing. Why isn't there a single update process?
I'd really like a more definitive answer, along the lines of "How does Codename One work?", for this.
Codename One is composed of several different pieces, and an update usually means we update only one of them. At a high level there are really just two major types of updates: libraries and servers.
We update libraries once every 3-5 weeks; we update servers all the time (sometimes more than once per day, sometimes every 3-4 days).
Here is a slightly more accurate overview of what it means to update Codename One:
Plugin & related tools - the plugin itself provides the project properties, server connectivity and designer/gui builder tools. It updates as part of the native IDE update process once every 3-5 weeks. You need to explicitly accept an update prompt from the IDE in order to get this update. Bugs in the plugin itself or features for the designer/GUI builder need to go thru that process...
Build.xml - this is technically a part of the plugin update but you need to actually accept changes that we make to the build.xml to get some functionality. On occasion a new feature (e.g. the new GUI builder) needs to update the build.xml code but this will only happen if you go into project properties, click OK and accept the prompt to update the build.xml (if such an update exists).
Client libraries - these are the APIs you use when writing Codename One code (typically CodenameOne.jar and related ports). We usually issue an update to those once every 3-5 weeks, together with the plugin update. The plugin ships with these, but they are only applied to new projects... When you send a build we implicitly update your libraries to the latest version using a separate update process; you can also use "Update Client Libs" within the Codename One preferences to update these manually without sending a build.
Device libraries - when you send a build to the servers, we use the latest version of the client libraries, which might be newer than what you see in the client libs but might not be the latest git master. This allows us to rapidly deploy and test device fixes. It also allows you to work with the code and use newer features that weren't pushed to the client libraries yet. The process of updating the servers is a bit ad hoc, so there is some opacity around that; we are looking at making this more transparent.
VM & builders - the builder code and VM relate to the server-side scripts that generate the code. When you have a compilation error on the servers or need an enhancement there, we need to deploy it in a process similar to the device libraries deployment.
Certificate wizard update - this tool is updated in a completely separate update process despite shipping in the plugin. We had a lot of concerns about Apple changing things suddenly when we initially created it, so we decided to allow it to update instantly.

How to organize related applications into git repos?

What is the decision tree for knowing when to split a suite of related and/or cohesive applications into git repos and/or branches? Should I keep each app in its own repo? Or all apps and dependencies in a single repo? Or something in between?
The answer to "How should I organize multiple related applications using git?" claims that a repository per project is appropriate, but does not give clues as to what a project would be.
And then there's the question of dev, test, integration-test, and production checkouts when the git repos are split. The answer to "How do you organize your programming work?" lists some branch/tag options, but ignores the multi-app details.
There's also the DB schema! Incremental definition of the schema helps, but again, where would one keep this definition if the DB spans back-end and front-end apps?
Some examples I've been pondering:
a front-end web app and its back-end CGI/DB: one repo or two?
a set of web back-ends that use features from other back-ends
a set of front-end apps that share CSS and jQuery plug-ins
Selenium scripts that test front-end features across dependent code - in the front-end app repo or the dependent code repo?
If I want to work on a single app, it's hard (well, tedious and error-prone) to check out just one directory of a repo, so I have to check out the entire git tree (or at least clone the whole tree), which implies that git is not really built for keeping all the apps and dependencies in a single tree.
But if I want to keep each of the projects (apps, frameworks, dependencies, doc trees, CSS) in its own repo, then I end up chasing my tail on dependency resolution; that is, I don't know which versions of each app are compatible. I think git tags are a good way to go, if only I could move them to newer versions that maintain compatibility.
When apps split or merge -- as happens often when refactoring models down to base models -- can I move the git history of just those files to another repo? I don't see how to do this, so that leans towards a single repo for everything.
If I develop a new feature across apps, it would be nice for branches to represent features.
I think I want a repo of repos -- does that exist?
This is about using a component approach: a component being a coherent set of files which has its own history (its own set of branches, tags and merges).
It should include only what cannot be generated (although the DB schema can sometimes be added to the repo, as seen in "What is the right approach to deal with the Rails db/schema.rb file in git?"; you can still generate it though, as shown in "What is the preferred way to manage schema.rb in git?", to avoid needless conflicts).
A component can evolve without another one having to evolve. See "Structuring related components in git".
That is the main criterion which allows you to answer: "X and Y: one or two repos?".
You can split a repo into two later, but be aware that this will change their history: other contributors will need to reset their own repos to that new history.
You can group those different component repos into one with submodules, as explained here (that is the "repo of repos"), or, if you want to keep only one repo, with subtree, as illustrated here.
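For reference, the two grouping approaches mentioned above look roughly like this on the command line (URLs, paths and branch names are placeholders):

```bash
# "Repo of repos": add each component repo as a submodule, pinned to a specific commit
git submodule add https://example.com/frontend.git components/frontend
git submodule add https://example.com/backend.git components/backend
git commit -m "Add frontend and backend components as submodules"

# Later, clone the umbrella repo together with all of its components
git clone --recurse-submodules https://example.com/umbrella.git

# Alternative: keep a single repo and merge a component in as a subtree
git subtree add --prefix=components/frontend https://example.com/frontend.git main --squash
# Pull in later changes from that component's repo
git subtree pull --prefix=components/frontend https://example.com/frontend.git main --squash
```

Submodules keep each component's history in its own repo and record only a pointer commit in the umbrella repo; subtree copies the content (optionally squashing its history) into the single repo.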

VS2010 Different publishing locations based on configuration

I'm trying to divide my solution into three configurations:
Development
Testing
Release
All of the above will have different publishing locations, so users can work with the release version, do their tests in the testing version, and see what is new in the development version. All three versions will be built with different name postfixes and icons and installed on each user's workstation.
For now I get:
"Unable to install this application because an application with the same identity is already installed. To install this application, either modify the manifest version for this application or uninstall the preexisting application."
I can't even install this more than once on one workstation.
So what can I do to achieve this?
You cannot install the same application multiple times unless you change the deployment identity. The easiest way to do this is by changing the assembly name. This article explains how.
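As a sketch of that assembly-name approach (configuration and assembly names are placeholders, and for ClickOnce you may also want distinct product names and publish URLs per configuration), the .csproj can set a different assembly name per build configuration with conditional property groups:

```xml
<!-- In the .csproj: give each build configuration its own assembly name,
     so each one gets a distinct deployment identity -->
<PropertyGroup Condition=" '$(Configuration)' == 'Development' ">
  <AssemblyName>MyApp.Dev</AssemblyName>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)' == 'Testing' ">
  <AssemblyName>MyApp.Test</AssemblyName>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)' == 'Release' ">
  <AssemblyName>MyApp</AssemblyName>
</PropertyGroup>
```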
As time has passed, I can now see that the solution was quite close; it just required me to be able to specify my requirements first.
So now I can tell that it mostly depends on the number of such configurations:
If the number is limited and low, e.g. live/test/dev, you can have each as a separate project in the solution, like AppLive, AppTest, AppDev. This requires refactoring to move everything that is common into separate projects, but it makes the code and releases clearer and easier to manage.
If those configurations are unlimited, or the number is high, then the way to go is to load configurations from a file and pick one from the pool based on custom logic.
Currently I'm using a mix of both, as I want to be able to release test versions earlier than live, but my application is also used by multiple branches, each of which has some unique styling, logos and such; these are applied from an embedded XML file, and the proper set is identified based on Active Directory entries.
