I'm new to Single Page Applications (SPA).
I created 3 separate applications, where one is the root-config app and the other two act as child applications.
We need to run one of these child applications separately on its own.
But the deployed application gives the following error:
Your Microfrontend is not here
How can I get this done?
Locally, it runs successfully when I start it with the
npm run start:standalone command
First: The "Error" you are getting is not an error. A microfrontend (mfe) is a microfrontend and a standalone application is a standalone application. There is a description on the Page of what you have to do to get it running.
I would not recommend running your single-spa MFE in standalone mode, for multiple Reasons which are also stated in the Readme of the standalone Plugin. It also basically just creates a standard small root-config for you to use.
But the biggest reason is that the standard yarn run serve:standalone doesn't create optimized builds, which leads to an increase in bundlesize.
E.G. one of my MFEs is 1mb big if build correctly but is around 13mb big if it's unoptimized. Which is quite noticeable.
If you want to run your microfrontend without the other one, you have two options in my opinion:
Create another root-config. Nothing is stopping you from creating another
root-config that only displays a single component (see the sketch after these options). You can even deploy the build of that root-config to the same environment as the MFE it displays (I don't know your deployment process, though).
Create a route on your root-config which only displays a single component.
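For the first option, a minimal sketch of what such a root-config could look like (the package name and import-map entry are illustrative assumptions, not from your setup):

// root-config.js - registers only the one microfrontend and nothing else.
import { registerApplication, start } from 'single-spa';

registerApplication({
  name: '@org/child-app',                     // illustrative name
  app: () => System.import('@org/child-app'), // must match an entry in your import map
  activeWhen: ['/'],                          // always active, since it's the only MFE here
});

start();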
If you still opt for standalone mode, you will need a server or Docker container with Node and run it in there. I'm not aware of any build outputs of the standalone plugin.
Are there any examples out there of injecting new ReactJS components at runtime? E.g.:
A build is deployed to production and is stable and running.
We need to add a component or a new route without running through an entire deploy process.
An additional use case: the application ships with all the components (e.g. a CMS module library), but only certain components were enabled in the layout at build time and more need to be added later via a config.
Approaches I have considered:
Using Next.js getStaticPaths and then using an override in the front end to inject client-side components. This will most probably be noticeable at runtime.
Use a faster deploy system. This is the more obvious option, but imagine lots of changes within a day and multiple deploys.
Any similar problems or approaches people have tried would be great to hear about.
Update Nov 2022
If you are searching the internet and this comes up: Zack Jackson's Module Federation supposedly achieves this; it's called live code sharing via Module Federation - https://module-federation.github.io/. There is a paid Next.js plugin at https://app.privjs.com/buy/packageDetail?pkg=#module-federation/nextjs-mf (it currently supports only CSR).
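For anyone evaluating this, a rough sketch of the underlying webpack 5 configuration on the host side (the remote name, URL, and imported module are illustrative; the Next.js plugin wraps this setup for you):

// webpack.config.js of the host application (sketch).
// remoteEntry.js is fetched at runtime, so the remote can ship new components
// without the host being rebuilt or redeployed.
const { ModuleFederationPlugin } = require('webpack').container;

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: 'host',
      remotes: {
        cms: 'cms@https://cdn.example.com/cms/remoteEntry.js',
      },
      shared: { react: { singleton: true }, 'react-dom': { singleton: true } },
    }),
  ],
};

// In the host, a remote component is then loaded lazily at runtime:
// const Widget = React.lazy(() => import('cms/Widget'));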
I think you would lose out on a lot of built-in build optimizations from Next by trying to circumvent the standard build process, e.g. automatic code-splitting as described here.
However, you might find that the fallback feature solves your problem entirely - it was meant for large e-commerce sites like the one it sounds like you're working on. As stated in the fallback: true docs:
useful if your app has a very large number of static pages that depend on data (think: a very large e-commerce site). You want to pre-render all product pages, but then your builds would take forever.
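A sketch of what that looks like for a dynamic product page (the route, API URL, and revalidate value are illustrative assumptions):

// pages/products/[id].js
export async function getStaticPaths() {
  return {
    paths: [{ params: { id: '1' } }], // pre-render only a small set at build time
    fallback: true,                   // every other product page is generated on its first request
  };
}

export async function getStaticProps({ params }) {
  const res = await fetch(`https://example.com/api/products/${params.id}`);
  const product = await res.json();
  return { props: { product }, revalidate: 60 }; // optional ISR: regenerate at most once a minute
}

export default function Product({ product }) {
  // While the fallback version is being generated, props are still empty.
  return <h1>{product ? product.name : 'Loading...'}</h1>;
}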
I have built an Angular webpack application and want to theme the application completely based on a REST API response (theme data: colors).
I am using a global Less file that is shared across the application's pages.
I'm using
less.modifyVars({"variableName": "Value"});
less.refresh();
in the controller, and the call runs, but the Less variables are not changing or being reflected in the pages.
I observed that this is because the Less files are already compiled to CSS by the time the controller is loaded, so these Less variables are not accessible dynamically.
Can anyone suggest another approach, or tell me what I'm missing here? How can I change the Less variables before webpack compiles them into CSS, or rather recompile the Less to pick up the latest values from the API without reloading the page?
Please note: I'm using npm to install all dependencies, Less is loaded via package.json, and console.log(less) in the controller shows that less is undefined on the client.
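For context, this is roughly how the browser build of Less expects modifyVars to be wired up; with webpack/less-loader the .less is compiled to CSS at build time, so window.less never exists at runtime (a sketch with illustrative file paths):

// In index.html, the raw .less file and the browser build of less.js are loaded directly:
//   <link rel="stylesheet/less" type="text/css" href="styles/theme.less" />
//   <script src="node_modules/less/dist/less.min.js"></script>
//
// Then, in the controller, after fetching the theme from the REST API:
less.modifyVars({ '@primary-color': themeFromApi.primaryColor }); // recompiles the linked .less in the browser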
Thanks,
Suppose I have 10 different Camel routes in my application. Is it possible to stop one particular route during an issue, make changes to it (in one of the Java processors), and deploy it again without affecting the other routes?
Also, can I create and deploy a new route on the fly while the other routes keep running?
If these are not the default behaviour, what are the options available to achieve this?
Karaf (and therefore Apache ServiceMix / JBoss Fuse) has hot deployment (nowadays this might be supported in JBoss AS / WildFly as well). That means you can create your routes as independent Blueprint XML files in the deploy folder (just XML files). You can have an XML file for every route, and whenever you change an XML file the route is redeployed automatically.
This approach has a few drawbacks: it gets complex if you have to deal with JPA or if your route relies on custom processors / classes.
Check out the examples in the Apache ServiceMix / JBoss Fuse projects.
I would especially recommend this approach if you want to go the microcontainer route: something like a lightweight Apache Karaf + Camel route XML files + Docker.
I did this a few years back; it may be achievable in other containers as well, but I am not sure.
You can stop a route via org.apache.camel.CamelContext.stopRoute(id), and you can modify it by building a new route and adding it to the context. This lets you change the logic of a route at runtime.
It wouldn't automatically let you hot deploy a new Java processor, though. I think that aspect of your question isn't Camel specific; there seem to be a few options for it, including the OSGi/Karaf approach mentioned by @gnanaguru.
Perhaps moving the logic that you think might change from a Java processor to somewhere more dynamic (like some JavaScript in an external file, or in the route itself) would be a simpler solution to your problem.
I have built a full Single Page Application (SPA) using AngularJS.
So far so good.
As everyone knows, all the JavaScript files are loaded on first access, or some files are lazy-loaded when needed.
So far so good...
The situation is: the server updates all the files (HTML partials, JavaScript, CSS) and the client is left with a lot of outdated files.
This would be easily solved by refreshing the browser (hitting F5, Ctrl+F5, or the refresh button), but that concept doesn't really exist when working with a SPA.
I'm not sure how to solve this problem.
I could somehow detect the update (with a ping, maybe) and just reload that specific file, using a document.write strategy. But then another problem arises: I have a single JavaScript file with all the JavaScript minified.
I could try to force a full reload in the browser, or force a re-login (and a reload, because the login is part of the SPA).
But reloading is an ugly solution; imagine the client losing all the data in a form just because they were unlucky enough that the server had just updated. Worse, I would now have to build some "auto-save" feature just because of this.
I'm not sure how to handle this, ideally in an "Angular way".
I wonder how Gmail handles this, because I stay logged in for many hours without logging off.
As others have already suggested, keep the logged-in user on the old version of your webapp.
Not only is what you ask difficult to do in Angular, but it can also lead to a bad user experience and surprising behaviour, since there may not be a mapping between what the user is doing with the old version and what the new version provides. Views may be removed, renamed, split, or merged. The behaviour of the same view may have changed, and doing so without notice may cause the user to make mistakes.
You gave Gmail as an example, but you may have noticed that changes to its UI always happen after you log out, never while you're using it.
First of all, if your app is an intranet website used during office hours, just update it while nobody is using it. This is a much simpler solution.
Otherwise, if you need to provide round-the-clock availability, my suggestion is:
When you deploy the new version of your SPA, keep the old version running in parallel, keep the current users on the old version, and log new users into the new version. This can be done in a number of ways depending on your setup, but it's not difficult.
Keep the old version around until you're confident that nobody is still using it, or until you're sure the new version is OK and you won't need to roll back to the old one.
The backend services should be backward-compatible with the old version of the frontend. If that's not possible, you should keep multiple versions of the backend services too.
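One way to sketch the "pin current users to the old version" part, assuming an Express static server, a spa_version cookie, and one build directory per version (all of these names are illustrative):

const express = require('express');
const cookieParser = require('cookie-parser');

const CURRENT_VERSION = '2.0.0';
const app = express();
app.use(cookieParser());

// New sessions get pinned to the current version; existing sessions keep theirs.
app.use((req, res, next) => {
  if (!req.cookies.spa_version) {
    res.cookie('spa_version', CURRENT_VERSION);
    req.cookies.spa_version = CURRENT_VERSION;
  }
  next();
});

// Serve each frontend build from its own directory, e.g. ./builds/1.9.0 and ./builds/2.0.0.
// (Creating the static middleware per request is fine for a sketch, not for production.)
app.use((req, res, next) => {
  express.static(`${__dirname}/builds/${req.cookies.spa_version}`)(req, res, next);
});

app.listen(8080);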
As the others have said, one solution is to version your files. Every time the browser checks those files, it notices they differ from the ones on the server, so it fetches and caches the new files.
I suggest using a build tool like gulp, Grunt, or webpack; the last one is becoming more popular.
At the moment I use gulp for my projects, though I'm moving to webpack.
If you are interested in gulp, you can have a look at the gulp-rev and gulp-rev-replace plugins.
What do they do?
Let's say you have a file app.js in your project. What you get after applying gulp-rev is something like app-4j8888dp.js, but the HTML file where app.js is injected still points to app.js, so you need to replace that reference. To do that you can use the gulp-rev-replace plugin.
E.g. a gulp task:
var gulp = require('gulp');
var rev = require('gulp-rev');
var revReplace = require('gulp-rev-replace');
var useref = require('gulp-useref');
var filter = require('gulp-filter');
var uglify = require('gulp-uglify');
var csso = require('gulp-csso');
gulp.task("index", function() {
var jsFilter = filter("**/*.js", { restore: true });
var cssFilter = filter("**/*.css", { restore: true });
var indexHtmlFilter = filter(['**/*', '!**/index.html'], { restore: true });
return gulp.src("src/index.html")
.pipe(useref()) // Concatenate with gulp-useref
.pipe(jsFilter)
.pipe(uglify()) // Minify any javascript sources
.pipe(jsFilter.restore)
.pipe(cssFilter)
.pipe(csso()) // Minify any CSS sources
.pipe(cssFilter.restore)
.pipe(indexHtmlFilter)
.pipe(rev()) // Rename the concatenated files (but not index.html)
.pipe(indexHtmlFilter.restore)
.pipe(revReplace()) // Substitute in new filenames
.pipe(gulp.dest('public'));
});
If you want to know more details, see the links below.
https://github.com/sindresorhus/gulp-rev
https://github.com/jamesknelson/gulp-rev-replace
A single-page application is just that: a single stack that controls the client logic of your application. Thus, any navigation done through the application should be handled by the client, not by the server. The goal is to have one single "fat" HTTP request that loads everything you need, and then perform only small HTTP requests.
That's why you can only have one ng-app in your app. You are not supposed to have multiple ng-apps and just load the modules you need (although the AngularJS team wants to move in that direction). In all cases, you should serve the same minified file and handle everything from your application.
It seems to me that you are more worried about the state of your application. As Tom Dale (EmberJS) described in the last Cage Match, we should aim for applications that can reflect the same data between server and client at any point in time. There are many ways to do so: cookies, sessions, or local storage.
Usually a SPA communicates with a REST-based server and hence performs idempotent operations on your data.
tl;dr You are not supposed to refresh anything from the server (styles or scripts, for instance), just the data that your application is handling. A single initial load is what a SPA is all about.
Separate your data and logic, and reload the data using AJAX whenever you want; for that, I suggest using a REST API to get the data from the server.
A SPA saves you from making the same HTTP requests again and again, but it still needs some HTTP requests to bring new data into the view.
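A minimal AngularJS sketch of that idea (the /api/items endpoint and controller name are illustrative, and an existing 'app' module is assumed):

// The app shell stays cached; only the data is refreshed through the REST API.
angular.module('app').controller('ItemsCtrl', ['$scope', '$http', function ($scope, $http) {
  $scope.reload = function () {
    $http.get('/api/items').then(function (response) {
      $scope.items = response.data; // the view re-renders with fresh data, no page reload
    });
  };
  $scope.reload();
}]);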
Well, you would have to unload the old existing code (i.e. the old AngularJS app, modules, controllers, services, and so on). Theoretically, you could create a custom (randomized) app name (with all modules carrying this prefix for each unique build!) and then rebuild your app in the browser. But seriously, that's a) very complex and b) will probably fail due to memory leaks.
So, my answer is: Don't.
Caching issues
I would personally recommend naming/prefixing all resources a build depends on with a unique id: the build id, an SCM hash, a timestamp, or something like that. So the URL to the JavaScript is not domain.tld/resources/scripts.js but domain.tld/resources-1234567890/scripts.js, which ensures that this path's content will never conflict with a (newer) version. You can choose your path/URL however you want (depending on the underlying structure: it is all virtual, you can remap URLs, etc.). It is not even required that each version exists forever (i.e. you could map all resources-(\d+)/ to resources/), although that would not be nice for the concept of URLs.
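If the build goes through webpack, its content hashing gives the same guarantee without hand-rolling the prefix (a sketch with illustrative entry and output paths; this is an alternative to the build-id scheme above, not part of the original suggestion):

// webpack.config.js (sketch): every build emits uniquely named bundles,
// so a cached older version can never be served in place of a newer one.
const path = require('path');

module.exports = {
  entry: './src/main.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'scripts.[contenthash].js',
    publicPath: '/resources/',
  },
};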
Application state
Well, the question is how often the application will change in ways that make such reloads important. How long does the SPA stay open in a browser? Is it really impossible to support two versions at the same time? The client (the app in the browser) could even send its own version with its HTTP requests.
At the beginning of a new application there are a lot of changes that would require a reload, but soon after your application reaches a solid state the changes will be smaller and won't require one. Users themselves will refresh more often than we ever expect :/
As with what everyone else is saying...
Don't. And while socket.io could work, it's asking for trouble unless you are VERY careful.
You have two options. The first: upon a server update, invalidate any previous session (I would also give users half an hour's notice, or 5 minutes depending on the application, before the maintenance happens).
The second option is versioning. If they are on version 10, they communicate with backend 10. If you release version 11 and they are still on 10, they can still communicate with backend 10.
Remember Google Wave? It failed for a reason: too many people writing to one source at the same time causes more problems than it solves.
Use the $state service. Create the state while loading the page using its constructor; after the specified time, recreate the state and load the page.
function (state) {
  state.stateCtor(action);
  $state.transitionTo(action + '.detail', {}, {
    notify: true
  });
}
Version your files: on every update, increment the version number and the browser will pick up the new files automatically.
My solution consists of several points.
While this is not essential, I send a single JavaScript file to the client side and use Grunt to prepare my release. The same Grunt file adds a query string with the version number to the JavaScript script tag. Regardless of whether you have one file or many, you need to add a version number to them; you may need to do this for other resources as well (see the example at the end).
In all my responses from the server (I use Node), I send back the version number of the app.
In Angular, when I receive any response, I check its version number against the loaded version number. If it has changed (meaning the server code has been updated), I alert the user that the app is going to reload, and I use location.reload(true).
Once reloaded, the browser fetches all the new files because the version number in the script tag is now different, so it won't serve them from the cache.
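A sketch of points 2 and 3 as an AngularJS response interceptor, assuming the version comes back in a hypothetical X-App-Version header, that the loaded version is available as an APP_VERSION constant, and that an 'app' module already exists (all of these names are illustrative):

angular.module('app').constant('APP_VERSION', '0.1.1-1011');

angular.module('app').factory('versionInterceptor', ['$window', 'APP_VERSION',
  function ($window, APP_VERSION) {
    return {
      response: function (response) {
        var serverVersion = response.headers('X-App-Version');
        if (serverVersion && serverVersion !== APP_VERSION) {
          $window.alert('A new version is available; the app is going to reload.');
          $window.location.reload(true); // bypass the cache, as described above
        }
        return response;
      }
    };
  }
]);

angular.module('app').config(['$httpProvider', function ($httpProvider) {
  $httpProvider.interceptors.push('versionInterceptor');
}]);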
Hope this helps.
<script src="/scripts/spa.min.js?v=0.1.1-1011"></script>