How do you share state in a micro-frontend scenario?

A first idea would be cookies, yet you can run out of space really fast.

There are several ways to enable communication between micro frontends.
As already noted, the different micro frontends should be loosely coupled, so you should never talk directly from one to another.
The key question is: Is your microfrontend solution composed on the server-side or client-side?
For client-side composition, I've written an article on communication between micro frontends.
If you are on the server side (the question seems to go in that direction due to the mention of cookies), then I would suggest using the standard microservice patterns for communication and state exchange. Of course, a centralized system such as a Redis cache would help there.
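For illustration, here is a minimal sketch of server-side state sharing through a Redis cache, assuming Node.js with the node-redis client; the session key and its contents are made up for the example.
import { createClient } from 'redis';

// Assumes a Redis instance reachable at the default localhost:6379.
const client = createClient();
await client.connect();

// One server-side fragment writes shared state under an agreed-upon key...
await client.set('session:abc123:user', JSON.stringify({ id: 3, name: 'Ada' }));

// ...and another fragment reads it back when rendering.
const user = JSON.parse(await client.get('session:abc123:user'));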
In general, the different micro frontends should have their own state and be as independent as possible.
Usually what you want to share is not the state/data itself, but rather the state together with a UI representation. The reason is simple: that way you don't have to deal with the representation and its edge cases (what if the data is not available?). One framework that demonstrates this is Piral.
Hope that helps!

There's no shared state; that'd break the concept of the segregation that's supposed to take place. This pattern is present among all microservices architectures, as it's supposed to eliminate single points of failure and other complications of maintaining a bigger store. The common approach is for each "micro frontend" to have its own store (e.g., Redux). The Redux docs have a topic on this.

First, you should avoid sharing state between micro frontends (MFEs) as much as possible. This is a best practice that avoids coupling, reduces bugs, etc.
A lot of the time you don't need it; for example, any information/state that comes from the server (e.g., the "user" information) can be requested individually by each MFE when it is needed.
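As a minimal sketch of that, each MFE can simply fetch the data it needs on its own (the /api/user endpoint here is hypothetical):
// Each MFE requests the user on its own; no shared state required.
async function loadCurrentUser() {
  const response = await fetch('/api/user'); // hypothetical endpoint
  if (!response.ok) throw new Error(`Failed to load user: ${response.status}`);
  return response.json();
}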
However, if you really need a shared state, there are a few solutions, like:
- Implement the pub/sub pattern on the window object.
There are a few libraries that already provide this.
// MFE 1
import { Observable } from 'windowed-observable';

const observable = new Observable('messages');
observable.publish(input.value); // the input markup is not shown in this example, for simplicity

// MFE 2
import { Observable } from 'windowed-observable';

const observable = new Observable('messages');
const handleNewMessage = (newMessage) => {
  setMessages((currentMessages) => currentMessages.concat(newMessage)); // custom logic to handle the message
};
observable.subscribe(handleNewMessage);
reference: https://dev.to/luistak/cross-micro-frontends-communication-30m3#windowed-observable
- Dispatch/capture custom browser events.
Remember that custom events can carry a 'detail' property, which allows passing information along.
// MFE 1
const customEvent = new CustomEvent('message', { detail: input.value });
window.dispatchEvent(customEvent);

// MFE 2
const handleNewMessage = (event) => {
  setMessages((currentMessages) => currentMessages.concat(event.detail));
};
window.addEventListener('message', handleNewMessage);
This approach has one important limitation: it only works for newly dispatched events, so you cannot read the current state if you did not capture the event at the moment it was dispatched.
reference: https://dev.to/luistak/cross-micro-frontends-communication-30m3#custom-events
In both implementations, a good naming convention will help a lot to keep things in order.
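For instance, a minimal sketch of such a convention (the topic names and module layout here are made up) centralizes the channel names in one shared module:
// topics.js — hypothetical shared module; namespace every channel as '<domain>:<event>'
export const TOPICS = {
  CART_ITEM_ADDED: 'cart:item-added',
  AUTH_USER_LOGGED_IN: 'auth:user-logged-in',
};

// In any MFE: publisher and subscriber import the same constant,
// so a channel name is never duplicated as a raw string.
import { Observable } from 'windowed-observable';
import { TOPICS } from './topics';

const cartChannel = new Observable(TOPICS.CART_ITEM_ADDED);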

Related

How to properly deal with global states in ReactJS?

Let's say I am creating a bookstore and I want to add, delete, update, etc., the books in my list (basically, manipulate the books list).
I am creating a global state with the context API and functions like below:
// defining my global state
const [books, setBooks] = useState([]);

// defining a function to manipulate the global state
const addBook = (newBook) => {
  let newBooksList = [newBook, ...books];
  setBooks(newBooksList);
};
And then I pass those functions to the components that need them.
Is this the right approach? I mean, I know that it works, but is this considered good practice, or are there better ways to do these kinds of things?
I'm not sure whether it is a good functional approach or whether I should do it in a more OOP style; I would appreciate it if someone could point me in the right direction.
The way you're doing it is fine if you want to manage the state with Context.
About the OOP approach: that is, of course, just a preference, but you don't see many examples using OOP in the React world. So, if you are seeking a definitive answer, just stick with the method you are using.
Optionally, you can use a custom hook for your Context, as sketched below. The same logic works for any kind of state, e.g. auth state.
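A minimal sketch of such a hook, reusing the bookstore state from the question (names like BooksProvider and useBooks are illustrative):
import React, { createContext, useContext, useState } from 'react';

const BooksContext = createContext(undefined);

export function BooksProvider({ children }) {
  const [books, setBooks] = useState([]);
  const addBook = (newBook) => setBooks((current) => [newBook, ...current]);
  return (
    <BooksContext.Provider value={{ books, addBook }}>
      {children}
    </BooksContext.Provider>
  );
}

// Consumers call the hook instead of touching the Context object directly.
export function useBooks() {
  const context = useContext(BooksContext);
  if (context === undefined) {
    throw new Error('useBooks must be used within a BooksProvider');
  }
  return context;
}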
One final note: though you are using Context there and it seems fine, do not forget that Context is not a state manager. There are disadvantages to using it as one; for example, all the components wrapped by the Context rerender when something in the state changes, whether or not the change is relevant to them.

Should elements ever be made available outside of a page object?

This is a question I cannot find a definitive source on, and I am hoping to get some answers based on users' previous experience, mainly with explanations as to why a certain approach DID NOT work out.
I am using WebDriver for automation via Protractor, and I am having a debate about whether page elements should ever be made available outside of the page object itself. After researching, it appears that there are a few different approaches people take, and I cannot fully grasp the long-term implications of each.
I've seen the following different implementations of the page object model:
Locators are declared in the page object and exported
This is my least favorite approach, as it means elements are actually being identified in the test. This seems like a bad standard to set, as it could encourage automators to use new locators, not from the page object, directly in tests.
Also, any locators that require dynamic information cannot be fully set when the page object is initialized and require further editing.
pageobject.js
export default class HomePage {
  constructor() {
    this.passwordField = '#password';
    this.usernameField = '#user';
  }
}
test.js
const homePage = new HomePage();
$(homePage.usernameField).sendKeys('admin');
$(homePage.passwordField).sendKeys('password');
Elements declared in page object and exported, locators not
pageobject.js
export default class HomePage {
  constructor() {
    this.passwordField = $('#password');
    this.usernameField = $('#user');
  }
}
test.js
const homePage = new HomePage();
homePage.usernameField.sendKeys('admin');
homePage.passwordField.sendKeys('password');
Elements declared in Page Object and only used directly within the page object, only methods exported
This is the approach I have used in the past, and we ended up with many, many functions. For instance, we had setUsername(), getCurrentUsername(), getUsernameAttribute(), verifyUsernameExists(), and the same for the password element and many other elements. Our page object became huge, so I no longer feel this is the best approach.
One of the advantages, however, is that our tests look very clean and are very readable.
pageobject.js
export default class HomePage {
  constructor() {
    this.passwordField = $('#password');
    this.usernameField = $('#user');
  }

  setUsername(name) {
    this.usernameField.sendKeys(name);
  }

  setPassword(password) {
    this.passwordField.sendKeys(password);
  }
}
test.js
const homePage = new HomePage();
homePage.setUsername('admin');
homePage.setPassword('123');
I'm very interested in getting some feedback on this, so hopefully you can take the time to read through.
I prefer and believe that the last approach is the best.
Keeping aside the fact that we are talking about automation, any good/great software has the following traits:
It is composed of individual modules/pieces/components.
Each individual module/piece/component keeps the data/information specific to it (selectors and WebDriver API calls, in the case of automation) cohesive with the API/methods that interact with that data.
The last approach provides just that with the added benefit of test cleanliness that you pointed out.
However, most of the time, for whatever reason, we tend to ignore modularity and make the existing POs bloated. I have seen this and been part of it. So, in a way, POs getting bloated is not a consequence of the approach but of how conscious automators/testers/developers are about keeping POs modular, composed, and simple. This is true whether we are talking about POs or about application code.
Bloated POs:
Specific to the problem of bloated POs, check whether you can separate the common elements out of the POs. For example, the header, footer, left nav, and right nav are common across pages. They can be separated out, and the POs can be composed of those individual pieces/sections.
Even within the main content, separate common content (if it is shared across two or more pages, if not all) into its own component API, if that is logical and it can be reused across pages.
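For illustration, a minimal sketch of such composition, with a hypothetical HeaderSection shared across page objects (the locators are made up):
// headerSection.js — a shared section, reused by every page object
export default class HeaderSection {
  constructor() {
    this.logoutLink = $('#logout'); // hypothetical locator
  }

  logout() {
    this.logoutLink.click();
  }
}

// homepage.js — the page object is composed of shared sections
import HeaderSection from './headerSection';

export default class HomePage {
  constructor() {
    this.header = new HeaderSection();
  }
}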
It is normal for automation specs to perform extensive regression checks, for example that an element is a password field, that the length of a text box is such-and-such, etc. In such cases it doesn't make sense to add a method for every single spec use case (or expect). The good news is that here, too, commonality takes center stage: basically, provide API that is used across specs, not API that is literally used in just one spec.
Take, for example, "a password field should be masked". It is unlikely that you would want to test that in multiple spec files. Here we can either add a method for it to the LoginPO, like isPasswordMasked(), or make the password field accessible from the LoginPO and let the spec do the actual check on the password field's type. With the latter, we are still letting the LoginPO control the password-field information that matters to the other API (login(), logout(), etc.), i.e., only the PO knows how and where to get that password element, with the added advantage of pushing spec testing into the spec file.
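For reference, a sketch of the isPasswordMasked() option, assuming the Protractor-style element API used elsewhere in this question:
export default class LoginPage {
  constructor() {
    this.passwordField = $('#password');
  }

  // Exposes a question rather than the element; only the PO knows the locator.
  async isPasswordMasked() {
    return (await this.passwordField.getAttribute('type')) === 'password';
  }
}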
POs that expect/assert
At no point is it a good idea to make any testing (or expects) part of the PO API. Reason(s):
The PO and its API are reusable across suites and should be easy for anyone to look at and understand. Their primary responsibility is to provide a common API.
They should be as lean as possible (to prevent bloating up).
More importantly, browser automation is inherently slow. If we add test logic to the POs and their API methods, we only make them slower.
FWIW, I have not come across any web page that mandates a bloated API.
Exposing Elements in POs:
It depends on the use case, I believe. Perhaps an element that is used in only one spec is a fair case for exposing it. That said, in general, the idea is that the specs should be readable for the tester/developer AND for those who look at them later. Whether that is achieved with a meaningful element variable name or a method name is largely a preference at best. On the other hand, if an element is meant to have somewhat involved interaction (for example, hovering to open a menu link), it is definitely a candidate for exposing through API alone.
Hope that adds some clarification!
The last way is the correct way to implement page objects. The idea behind the page object is that it hides the internals of the page and provides a clean API for a script to call to perform actions on the page. Locators and elements should not be exposed. Anything you need to do to the page should be exposed by a public method.
One way you can avoid a getter and setter for each field on the page is to consolidate methods. Think about actions that the user would take on the page. Rather than having .setUsername(), .setPassword(), and .clickLoginButton() methods, you should just have a login() method that takes the username and password as parameters and does all the work to log in for you.
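A minimal sketch of that consolidation (the locators and method name are illustrative):
export default class LoginPage {
  constructor() {
    this.usernameField = $('#user');
    this.passwordField = $('#password');
    this.loginButton = $('#login');
  }

  // One user-level action instead of three element-level methods.
  login(username, password) {
    this.usernameField.sendKeys(username);
    this.passwordField.sendKeys(password);
    this.loginButton.click();
  }
}
A spec then just calls new LoginPage().login('admin', '123');.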
References
Martin Fowler is generally considered the inventor of the "page object" concept, though others coined the name "page object". See his description of the page object.
Selenium's documentation on page objects.

React Simple Global Entity Cache instead of Flux/React/etc

I am writing a little "fun" Scala/Scala.js project.
On my server I have Entities which are referenced by uuid-s (inside Ref-s).
For the sake of "fun", I don't want to use flux/redux architecture but still use React on the client (with ScalaJS-React).
What I am trying to do instead is to have a simple cache, for example:
when a React UserDisplayComponent wants to display the Entity User with uuid=0003
then the render() method calls the Cache (which is passed in as a prop)
let's assume that this is the first time that the UserDisplayComponent asks for this particular User (with uuid=0003) and the Cache does not have it yet
then the Cache makes an AjaxCall to fetch the User from the server
when the AjaxCall returns, the Cache triggers a re-render
BUT! Now, when the component asks the Cache for the User, it gets the User Entity immediately and does not trigger an AjaxCall.
The way I would like to implement this is the following:
I start a render()
"stuff" inside render() asks the Cache for all sorts of Entities
Cache returns either Loading or the Entity itself.
at the end of render the Cache sends all the AjaxRequest-s to the server and waits for all of them to return
once all AjaxRequests have returned (let's assume that they do - for the sake of simplicity) the Cache triggers a "re-render()" and now all entities that have been requested before are provided by the Cache right away.
of course it can happen that the newly arrived Entity-s trigger the render() to fetch more Entity-s, for example if I load an Entity of a type like case class UserList(ul: List[Ref[User]]). But let's not worry about this now.
QUESTIONS:
1) Am I doing something really wrong if I am handling state this way?
2) Is there an already existing solution for this?
I looked around, but everything was FLUX/REDUX etc., along these lines, which I want to AVOID for the sake of:
"fun"
curiosity
exploration
playing around
I think this simple cache will be simpler for my use-case because I want to take the "REF" based "domain model" over to the client in a simple way: as if the client was on the server and the network would be infinitely fast and zero latency (this is what the cache would simulate).
Consider what issues you need to address to build a rich dynamic web UI, and what libraries / layers typically handle those issues for you.
1. DOM Events (clicks etc.) need to trigger changes in State
This is relatively easy. DOM nodes expose a callback-based listener API that is straightforward to adapt to any architecture.
2. Changes in State need to trigger updates to DOM nodes
This is trickier because it needs to be done efficiently and in a maintainable manner. You don't want to re-render your whole component from scratch whenever its state changes, and you don't want to write tons of jquery-style spaghetti code to manually update the DOM as that would be too error prone even if efficient at runtime.
This problem is mainly why libraries like React exist, they abstract this away behind virtual DOM. But you can also abstract this away without virtual DOM, like my own Laminar library does.
Forgoing a library solution to this problem is only workable for simpler apps.
3. Components should be able to read / write Global State
This is the part that flux / redux solve. Specifically, these are issues #1 and #2 all over again, except as applied to global state as opposed to component state.
4. Caching
Caching is hard because the cache needs to be invalidated at some point, on top of everything else above.
Flux / redux do not help with this at all. One of the libraries that does help is Relay, which works much like your proposed solution, except way more elaborate, and on top of React and GraphQL. Reading its documentation will help you with your problem. You can definitely implement a small subset of relay's functionality in plain Scala.js if you don't need the whole React / GraphQL baggage, but you need to know the prior art.
5. Serialization and type safety
This is the only issue on this list that relates to Scala.js as opposed to Javascript and SPAs in general.
Scala objects need to be serialized to travel over the network. Into JSON, protobufs, or whatever else, but you need a system for this that will not involve error-prone manual work. There are many Scala.js libraries that address this issue such as upickle, Autowire, endpoints, sloth, etc. Key words: "Scala JSON library", or "Scala type-safe RPC", depending on what kind of solution you want.
I hope these principles suffice as an answer. When you understand these issues, it should be obvious whether your solution will work for a given use case or not. As it is, you didn't describe how your solution addresses issues 2, 4, and 5. You can use some of the libraries I mentioned or implement your own solutions with similar ideas / algorithms.
On a minor technical note, consider implementing an async, Future-based API for your cache layer, so that it returns Future[Entity] instead of Loading | Entity.
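To illustrate that last suggestion in JavaScript terms (a Promise playing the role of a Future; all names here are made up), the cache can memoize one Promise per entity, so callers never see a Loading sentinel:
// A naive entity cache that always hands back a Promise.
class EntityCache {
  constructor(fetchEntity) {
    this.fetchEntity = fetchEntity; // e.g. (uuid) => fetch(`/api/entity/${uuid}`).then((r) => r.json())
    this.entries = new Map();       // uuid -> Promise of the entity
  }

  get(uuid) {
    if (!this.entries.has(uuid)) {
      // Fetch once and memoize the Promise itself, so concurrent
      // render passes share the same in-flight request.
      this.entries.set(uuid, this.fetchEntity(uuid));
    }
    return this.entries.get(uuid);
  }
}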

What is the difference between this flux action and this function call?

I could have a flux action like this:
{type: 'KILL', payload: {target: 'ogre'}}
But I am not seeing what the difference is between that and having a method on a class People (wrapping the store) like this,
People.kill('ogre')
IF People is the only receiver of the action?
I see that the flux dispatcher gives me two advantages (possibly)
The "kill" method can be broadcast to multiple unknown receivers (good!)
The dispatcher gives me a handy place to log all action traffic (also good!)
These might be good things sure, but is there any other reasons that I am missing?
What I don't see is how putting the actions in the form of JSON objects suddenly enforces or helps with "1-way" communication flow, which is what I read everywhere is the big advantage of having actions, and of flux.
It looks to me like I am still effectively sending a message back to the store, no matter how I perfume the pig. Sure, the action now goes through a couple of layers of indirection (action creator, dispatcher) before it gets to the store, but unless I am missing something, the component that sends the action is, for all practical purposes, updating whatever stores are listening for the kill message.
What I am missing here?
Again, I know that on Stack Overflow we can't ask too general a question, so I want to keep this very specific. The two snippets of code, while having different syntax, appear to be semantically (except for the possibility of broadcasting to multiple stores) exactly the same.
And again, if the only reasons are that it enables broadcasting and gives a single point of flow for debugging purposes, I am fine with that, but I would like to know if there is some other thing about flux/the dispatcher that I am missing.
The major features of the flux-style architecture are roughly the following:
the store is the single source of truth for application state
only actions can trigger mutation of the store's state
store state should not be mutated directly, i.e., by assigning object values, but by creating new objects via cloning/destructuring instead, as sketched below
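For instance, a minimal reducer sketch of that third rule, reusing the KILL action from the question (the state shape is illustrative):
// The reducer never mutates `state`; it returns a new object built by spreading.
function creaturesReducer(state = { creatures: [] }, action) {
  switch (action.type) {
    case 'KILL':
      return {
        ...state,
        creatures: state.creatures.filter((c) => c !== action.payload.target),
      };
    default:
      return state;
  }
}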
Like a diet, using this type of architecture really doesn't work if you slip and go back to the old ways intermittently.
Returning to your example: the benefit of using the action here is not the broadcasting or logging aspects, but simply the fact that the People class should only be able to consume data from a store and express its wish to mutate the state of said store through actions. Imagine, for example, that the Elves want to sing to the ogre and thus are interested in knowing whether said ogre is still alive. At the same time, the People want to be polite and do not wish to kill the ogre while it is being serenaded. The benefits of the flux-style architecture become clear:
class People {
  kill(creature) {
    if (creatureStore.getSerenadedCreature() !== creature)
      return store.dispatch({ type: 'KILL', payload: { target: creature } });
    return `The ${creature} is being serenaded by those damn elves, let's wait until they've finished.`;
  }
}

class Elves {
  singTo(creature) {
    if (creatureStore.getCreatures().includes(creature))
      return store.dispatch({ type: 'SING_TO', payload: { target: creature } });
    return `Oh no, the ${creature} has been killed... I guess there will be no serenading tonight..`;
  }
}
If the class People were to wrap the store, you'd need the Elves class to wrap the same store as well, creating two places where the same state would be mutated one way or the other. Now imagine there were 10 other classes that needed access to that store and wanted to change it: adding new features becomes a pain, because all those classes are now at the mercy of the other classes mutating the state from underneath them, forcing you to handle tons of edge cases not even related to their business logic.
With the flux style architecture, all those classes will only consume data from the creatureStore and dispatch actions based on that state. The store handles reconciling the different actions with the state so that all of its subscribers have the right data at the right times.
The benefits of this pattern may not be evident when you only have a couple of stores that are consumed by one or two entities each. When you have tens (or hundreds) of stores with tens (or hundreds) of components consuming data from several stores each, this architecture saves you time and money by making it easier to develop new features without breaking existing ones.
Hope this wall-o-text helped to clarify!
Regarding how putting the actions in the form of JSON objects enforces "1-way" communication flow, and what you are missing:
Facebook Flux took the idea from event-driven GUI systems. There, you get messages even if you just move your mouse. This was called the message loop then; now we have action dispatching.
Also, we have lists of subscribers inside the stores.
And it is really the same principle in Redux, where you have one store, while in Flux you may have multiple stores.
Now a little mathematics. With just 2 components, A and B, there are only a few possible update chains: A updates B, B updates A, or a component updates itself (not counting updates from outside the app).
With just three components, there are already many more possible chains.
And with even more components it gets complicated. So, to suppress the exponential complexity of possible component interactions, we have this Flux pattern, which is nothing more than IDispatch and IObservable if you have worked with those interfaces in other programming languages: one for dispatching the actions, the other for entering the listeners' chain that lives inside the store.
With this pattern, your React code will be organized differently from the common React approach: you will not have to use React.Component state anymore. Instead, the store(s) will hold the application state.
Your component can only express the desire to mutate the application state by dispatching an action. For instance, onClick may dispatch an action to increment a counter. Actions are objects with a type property, usually an upper-case string, but the action object may have many other props, such as an id or a value.
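For instance, a minimal sketch of that counter wiring, assuming a Redux-style store passed in as a prop (all names are made up):
import React from 'react';

// An action is just a plain object describing the desired mutation.
const increment = () => ({ type: 'INCREMENT' });

// The component never mutates state itself; it only dispatches.
function CounterButton({ store }) {
  return <button onClick={() => store.dispatch(increment())}>+1</button>;
}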
Since the components are responsible for rendering based on the application state, we need to deliver that state to them somehow. It may be via props (e.g., props = store.getState()), or we may use the context.
Finally, it is not even forbidden for a component to use internal state (this.state), in cases where this has no impact on the application state. You should recognize these cases.

How do you clean up Redux' state?

As a Redux beginner, given (the idea of) a somewhat larger application I imagine a root reducer similar to:
const rootReducer = combineReducers({ accounting, crm, sales })
Application state in this case would contain accounting, crm, and sales even if the user is only using one part of the application. This may be advantageous, for example as a cache when switching back and forth between accounting and CRM, but you probably do not want to retain all the data of every view that has ever been opened, that is, the complete possible state tree without any pruning, within the application forever, or even to initialize the whole tree to its initial state on load.
Are there idioms, patterns or libraries which solve this or am I missing something?
As a partial solution, which would at least solve retaining all the data, I imagine something like resetting parts of the state to their initial values when navigating away from certain parts of the application, given some declarative rules such as:
- set accounting = {} (indirectly, through an action such as ACCOUNTING_LEAVING) when <Accounting/> receives componentWillUnmount
- delete or set crm.mail = {} when <MailEditor/> receives componentWillUnmount
I have not seen examples which clean up the state in any way. Many "list + detail view" examples store state like { list: [...], detail: {...} }, but when switching to the detail view the list is neither emptied, nor nulled, nor deleted. This is nice when I might return to the list view a couple of moments later, but not when using an application 9 to 5 without ever releasing data.
A few related thoughts:
- Unless you're expecting dozens or hundreds of megabytes of data to be cached in your store, you're probably worrying about this too soon. Write your app, benchmark, and then optimize.
- Dispatching actions to clear out portions of the store is entirely valid and reasonable (a sketch follows after this list).
- My Redux addons ecosystem catalog may have some libraries listed that would be useful. In particular, the Component State and Reducer pages point to some libraries that do things like dynamically adding and removing reducers from your store, and resetting state.
- You may also be interested in my collection of high-quality React and Redux tutorials.
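A minimal sketch of that second point, reusing the ACCOUNTING_LEAVING action from the question (the slice shape is illustrative):
const initialAccountingState = {};

// When the user navigates away, e.g. <Accounting/> dispatches
// ACCOUNTING_LEAVING from componentWillUnmount, the slice collapses
// back to its initial state and releases whatever it cached.
function accounting(state = initialAccountingState, action) {
  switch (action.type) {
    case 'ACCOUNTING_LEAVING':
      return initialAccountingState;
    default:
      return state;
  }
}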
Overall, how you organize your state, when you update it, and what you update it with is up to you. Redux simply provides a pattern and rules for the process of doing the updates.
