Say I have two components on a page, where the user may interact with one and the other one should update, say something like below:
component A        | component B
- option 1 link    | "stuff related to option 1"
- option 2 link    |
Now when we click option 2 link we want component B to update.
MarionetteJS uses (or will do so in the next major release) Backbone.Radio.
I would like to know the best way of doing this. Two things come to mind:
1: Use the same channel in both components
// Component A
// ...
onOptionClick: function (evt) {
    Backbone.Radio.channel('AvsB').request('update:b', {id: this.model.get('id')});
}

// Component B
// ...
initialize: function () {
    Backbone.Radio.channel('AvsB').reply('update:b', function () {
        // update contents...
    });
}
2: Decouple even more by using a mediator, e.g. components should only use their "own" channel.
// Component A
// ...
onOptionClick: function (evt) {
    Backbone.Radio.channel('compA').request('option:isUpdated', {id: this.model.get('id')});
}

// Mediator (e.g. main.js, a controller, or whatever high-level object)
var channelCompA = Backbone.Radio.channel('compA');
var channelCompB = Backbone.Radio.channel('compB');
channelCompA.reply('option:isUpdated', function (options) {
    return channelCompB.request('content:shouldUpdate', options);
});

// Component B
// ...
initialize: function () {
    Backbone.Radio.channel('compB').reply('content:shouldUpdate', function () {
        // update contents...
    });
}
Option 2 is more work and seems a bit unnecessary. But I can't really shake the feeling that option 1 is still too specific. After all, component A shouldn't care that component B exists.
I think that option 2 is needlessly complicated.
Basically, you're asking whether the Event Aggregator pattern or the Mediator pattern is more appropriate here. The main thing to keep in mind is that they are both solutions to tight coupling. That's obscured in your example because you're naming the requests after the component ("update:b"). That's the source of the coupling, not the fact that you're using the same channel.
More specifically, if component A and component B really don't need to know about each other then their names shouldn't be part of the API that is the request name. The request should be named after the actual work that should be done (performUpdate, perhaps?), not after who is doing the work.
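As a sketch of that renaming (the 'content' channel name and the callback parameter are mine; everything else mirrors the code from the question):

// Component A
onOptionClick: function (evt) {
    Backbone.Radio.channel('content').request('performUpdate', {id: this.model.get('id')});
}

// Component B
initialize: function () {
    Backbone.Radio.channel('content').reply('performUpdate', function (options) {
        // update contents for options.id...
    });
}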
Of course, it still might make sense to use different channels for organizational or namespacing purposes. And the Mediator pattern definitely has its uses (for example, if you need it to intercept the requests and alter them in some way). But there's no point in using it just to blindly route requests in an attempt to avoid coupling.
You might find this article by the creator of Marionette interesting, as it discusses the same issue.
I've been reading about the advantages of using Context in React and I am unconvinced. I'm wondering if there's something I've missed.
Context provides a way to pass data through the component tree without having to pass props down manually at every level.
What's the hassle in creating a props object in the main component and just passing it around among the underlings? Something like:
// do this once at top level (I'm assuming [foo, foo_set] and [bar, bar_set] are state variables):
const props = {foo, foo_set, bar, bar_set, thisAndThat, theOther, whatever, etcAndEtc}
// including one component
<MyComponent1 {...props } />
// including another
<MyComponent2 {...props } />
(Maybe better to use another name than props for this object, as the components can have other properties. Anyway.)
Then in MyComponent1 you can access all the props you want, or not access them. Either:
...
const MyComponent1 = (props) => {
    ...
    // here we can use any props we need, props.bar, props.bar_set, props.theOther for example
    const localVar = props.bar * 2;
    props.bar_set(localVar);
    // this changes the value of bar throughout the app
    ...
}
The advantage of the above, as I see it, is that you can pass the props object around to other sub-sub-components and not worry about whether anything is missing.
Or:
...
const MyComponent1 = ({bar, bar_set, theOther}) => {
    ...
    // here we can use bar, bar_set, theOther in the same example
    const localVar = bar * 2;
    bar_set(localVar);
    ...
}
The advantage of this option is that the syntax is shorter.
So my point is: why not just use standard JavaScript syntax? Why introduce new concepts when there are already plenty to assimilate for all sorts of other things?
Consider a fairly common case for most applications: You have authentication information (eg, the current user), a routing library (eg, react-router), and a theme object (what colors to use). These are needed in components scattered throughout the app.
You want to render a button somewhere down at the tip of the component tree. It's going to show the user's avatar, so it needs the authentication data. It's going to navigate when clicked, so it needs the navigate function from the routing library. And it needs to style itself according to the theme.
This certainly can be done through props, but in order for the button to get the props, every component in the chain above it must get and forward those props too. This could be many components deep, like page component -> section component -> table -> row -> widget -> button, and most of them don't need that information for themselves, so they're just taking the props in order to forward it along.
And you can easily imagine cases where there are more than 3 pieces of data that are needed across the app.
What's the hassle
Most people find this "prop drilling" to be a hassle, but let's assume you don't. You still have the problem that it has bad performance. If every component must receive the full set of "global" values that the app might need, then any time anything changes, the entire app must rerender. Optimizations like react.memo become effectively useless. You will get much better performance if you only pass the props you need.
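For contrast, here is a minimal sketch of the context version (the ThemeContext name and the components are illustrative, not from the question); the intermediate components never receive or forward the theme:

import { createContext, useContext } from 'react';

const ThemeContext = createContext('light');

function Page() {
    // receives no theme prop and forwards none
    return <Section />;
}

function Section() {
    return <ThemedButton />;
}

function ThemedButton() {
    const theme = useContext(ThemeContext);   // read straight from context
    return <button className={`btn-${theme}`}>Navigate</button>;
}

function App() {
    return (
        <ThemeContext.Provider value="dark">
            <Page />
        </ThemeContext.Provider>
    );
}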
Easier to edit code (you don't have to delete unused variables, for example)
Better readability (you don't see unnecessary variables, and you can see which component uses which variables)
Less performance waste (components don't consume variables they don't need)
Suppose you have 10 levels of descendants - you would have to pass one variable through 10 components.
What if some of them used the same variable name? You would have to rename your passed variable for a while, then rename it back later.
To sum up:
Using Context is more efficient than stuffing everything into a single object variable, because it avoids re-rendering the whole app when anything changes.
Most people find passing a single props object around to be more of a hassle than learning the dedicated Context syntax.
Context also allows you to have different values for the same variable in different parts of the app. This is shown here (the best explanation, IMHO): https://beta.reactjs.org/learn/passing-data-deeply-with-context
The above article also specifies that sometimes passing props is the best solution. It gives a list of use cases for context, and the advantages provided in each case.
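For example (again with an illustrative ThemeContext, nothing from the question), nesting providers gives different subtrees different values for the same context:

import { createContext, useContext } from 'react';

const ThemeContext = createContext('light');

function ThemedLabel() {
    return <span>Current theme: {useContext(ThemeContext)}</span>;
}

function App() {
    return (
        <ThemeContext.Provider value="dark">
            <ThemedLabel />   {/* renders "dark" */}
            <ThemeContext.Provider value="light">
                <ThemedLabel />   {/* same component, renders "light" */}
            </ThemeContext.Provider>
        </ThemeContext.Provider>
    );
}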
There is an interesting article which describes the 4 main classes exposed in Flux Utils:
Store
ReduceStore
MapStore (removed from 3.0.0)
Container
But it's not super clear what should be used in which situations. There are only 2 examples, for ReduceStore and Container, but unfortunately no samples for the others.
Could you please explain basic usage for these 4 components: when and where they should be used in real life?
Extended answers and code examples would be really appreciated!
UPDATE:
MapStore has been removed starting from 3.0.0
By poking through the code and reading through the method documentation, here's what I can work out (I have not used these classes myself, as I use other Flux frameworks).
It's actually useful to go in almost reverse order for these.
Container
This is not a subclass of FluxStore because it is, unsurprisingly, not a store. The Container is a wrapper class for your React UI components that automatically pulls state from specified stores.
For example, if I have a React-driven chat app with a component that lists all my logged-in friends, I probably want to have it pull state from a LoggedInUsersStore, which would hypothetically be an array of these users.
My component would look something like this (derived from the code example they provide):
import {Component} from 'react';
import {Container} from 'flux/utils';
import {LoggedInUsersStore} from /* somewhere */;
import {UserListUI} from /* somewhere */;

class UserListContainer extends Component {
    static getStores() {
        return [LoggedInUsersStore];
    }

    static calculateState(prevState) {
        return {
            loggedInUsers: LoggedInUsersStore.getState(),
        };
    }

    render() {
        return <UserListUI users={this.state.loggedInUsers} />;
    }
}
const container = Container.create(UserListContainer);
This wrapper automatically updates the component's state if its registered stores change state, and it does so efficiently by ignoring any other changes (i.e. it assumes that the component does not depend on other parts of the application state).
I believe this is a fairly direct extension of Facebook's React coding principles, in which every bit of UI lives in a high-level "Container." Hence the name.
When to use
If a given React component is entirely dependent on the state of a few explicit stores.
If it does not depend on props from above. Containers cannot accept props.
ReduceStore
A ReduceStore is a store based entirely on pure functions: functions that are deterministic in their inputs (the same function always returns the same thing for the same input) and produce no observable side effects (they don't affect other parts of the code).
For example, the lambda (a) => { return a * a; } is pure: it is deterministic and has no side effects. (a) => { console.log(a); return a; } is impure: it has a side effect (printing a). (a) => { return Math.random(); } is impure: it is nondeterministic.
The goal with a ReduceStore is simplification: by making your store pure, you can make certain assumptions. Because the reductions are deterministic, anyone can perform the reductions at any time and get the same result, so sending a stream of actions is all but identical to sending raw data. Likewise, sending the raw data is perfectly reasonable because you are guaranteed no side effects: if my entire program is made of ReduceStores, and I overwrite the state of one client with the state of another (triggering the required redraws), I am guaranteed perfect functionality. Nothing in my program can change because of the actions rather than the data.
Anyway, a ReduceStore should only implement the methods explicitly listed in its documentation. getInitialState() should determine the initial state, reduce(state, action) should transform state given action (and not use this at all: that would be non-deterministic/have side effects), and getState() & areEqual(one,two) should handle separating the raw state from the returned state (so that the user can't accidentally modify it).
For example, a counter would be a sensible ReduceStore:
import {ReduceStore} from 'flux/utils';

class CounterStore extends ReduceStore {
    getInitialState() {
        return 0;
    }

    reduce(state, action) {
        switch (action.type) {
            case 'increment':
                return state + 1;
            case 'decrement':
                return state - 1;
            case 'reset':
                return 0;
            default:
                return state;
        }
    }

    getState() {
        // return `this._state`, which is that one number; since a number is a
        // primitive, the caller can't mutate the store's state through the
        // returned value (there's no way to do something like `store.getState() = 5`)
        return this._state;
    }
}
Notice that none of the transforms explicitly depended on the current state of the object: they only operated on the state variable they were passed. This means that an instance of store can calculate state for another instance of the same store. Not so useful in the current implementation of FB Flux, but still.
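A quick usage sketch (the dispatcher wiring is assumed, as in the flux/utils docs; the action types match the reduce above):

import {Dispatcher} from 'flux';

const dispatcher = new Dispatcher();
const counterStore = new CounterStore(dispatcher);   // the store registers itself with the dispatcher

dispatcher.dispatch({type: 'increment'});
dispatcher.dispatch({type: 'increment'});
dispatcher.dispatch({type: 'decrement'});

console.log(counterStore.getState());   // 1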
When to use
If you like pure-functional programming (yay!)
and if you don't like it enough to use a framework explicitly built with that assumption (redux, NuclearJS)
and you can sensibly write a store that is purely-functional (most stores can, and if they can't it might make sense to think about architecture a little more)
Note: this class does not ensure that your code is purely-functional. My guess is that it will break if you don't check that yourself.
I would always use this store. Unless I could use a...
FluxMapStore [DEPRECATED]
This class is no longer part of Flux!
This is a subclass of ReduceStore. It is for pure-functional stores whose internal state happens to be a Map; specifically, an Immutable.js Map (another FB thing!).
It has convenience methods for getting keys and values from the state:
WarrantiesStore.at('extended') rather than WarrantiesStore.getState().get('extended').
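I haven't used it, but based on the above, a MapStore subclass presumably looked roughly like this before 3.0.0 (the import name, store name and action types are my assumptions):

import {MapStore} from 'flux/utils';

class WarrantiesStore extends MapStore {
    // state starts out as an Immutable.Map; reduce must keep returning a Map
    reduce(state, action) {
        switch (action.type) {
            case 'warranty/added':
                return state.set(action.id, action.warranty);
            case 'warranty/removed':
                return state.delete(action.id);
            default:
                return state;
        }
    }
}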
When to use
As above, but also
if I can represent this store using a Map.
FluxStore
This brings us to FluxStore: the catch-all Store class and generic implementation of the Flux Store concept.
The other two stores are its descendants.
The documentation seems to me to be fairly clear on its usage, so I'll just sketch the general shape.
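A rough sketch (the dispatcher import, store name and action types are placeholders of my own; the __onDispatch/__emitChange pattern is the documented one):

import {Store} from 'flux/utils';
import {dispatcher} from /* somewhere */;

class UnreadCountStore extends Store {
    constructor() {
        super(dispatcher);
        this._count = 0;
    }

    getCount() {
        return this._count;
    }

    __onDispatch(action) {
        switch (action.type) {
            case 'message/received':
                this._count++;
                this.__emitChange();
                break;
            case 'messages/read':
                this._count = 0;
                this.__emitChange();
                break;
        }
    }
}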
When to use
If you cannot use the other two Store util classes to hold your data
and you don't want to roll your own store
In my case, that would be never: I prefer immutable frameworks like redux and NuclearJS because they are easier for me to reason about. I take care to structure my stores in a purely functional way. But if you don't, this class is good.
I have read in several places that calling the Backbone.history.navigate function is considered bad practice.
For example, Addy Osmani says in his book "Developing Backbone.js Applications":
It is also possible for Router.navigate() to trigger the route along
with updating the URL fragment by passing the trigger:true option.
Note: This usage is discouraged...
http://addyosmani.github.io/backbone-fundamentals/#backbone.history
Or Derick Bailey, in his blog post, even says:
You shouldn’t be executing the route’s handler from within your application, most of the time.
But I don't really understand the reasoning behind it and what would be a better solution.
In my opinion it is not really bad to call the navigate function with the trigger:true option. The route function could upon calling always check if the considered data is already loaded and show this loaded data instead of doing the whole work all over again...
There seems to be some confusion about what Router#navigate does exactly, I think.
Without any options set it will update the URL to the fragment provided.
E.g. router.navigate('todo/4/edit') will update the URL to #todo/4/edit AND will create a browser history entry for that URL. No route handlers are run.
However, setting trigger:true will update the URL, but it will also run the handler that was specified for that route (in Addy's example it will call the router's editTodo function) and create a browser history entry.
When passing replace:true the URL will be updated and no handler will be called, but it will NOT create a new browser history entry; it replaces the current one.
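Side by side, with the same fragment as above:

router.navigate('todo/4/edit');                    // URL + history entry, no handler runs
router.navigate('todo/4/edit', {trigger: true});   // URL + history entry + the route handler runs
router.navigate('todo/4/edit', {replace: true});   // URL updated, no handler, current history entry replaced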
Then, on to what I think the answer is:
The reason the usage of trigger:true is discouraged is simple: navigating from application state to application state usually requires different code to run than navigating to a specific application state directly.
Let's say you have states A, B and C in your application. But state B builds upon state A and state C builds upon B.
In that case, when you navigate from B to C, only a specific part of the code needs to be executed, while hitting state C directly will probably require some state checking and preparation (sketched in code after the list below):
has that data been loaded? If not, load it.
is the user logged in? If not redirect.
etc.
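Sketched as code, a direct-entry route handler for state C ends up doing something like this (the collection, session and view names are mine, purely for illustration):

editList: function() {
    var self = this;
    // has the data been loaded? If not, load it.
    var songsReady = this.songs.length ? $.Deferred().resolve() : this.songs.fetch();

    songsReady.done(function() {
        // is the user logged in? If not, show the login screen instead.
        if (!self.session.isLoggedIn()) {
            new LoginView().render();
            return;
        }
        new EditListView({collection: self.songs}).render();
    });
}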
Let's take an example: State A (#list) shows a list of songs. State B (#login) is about user authentication and state C (#list/edit) allows for editing of the list of songs.
So, when the user lands on state A the list of songs is loaded and stored in a collection. He clicks on a login-button and is redirected to a login form. He successfully authenticates and is redirected back to the song list, but this time with delete-buttons next to the songs.
He bookmarks the last state (#list/edit).
Now, what needs to happen when the user clicks on the bookmark a few days later?
The application needs to load the songs, needs to verify the user is (still) logged in and react accordingly, stuff that in the state transition flow had already been done in the other states.
Now for a note of my own:
I'd never recommend the approach from the example above in a real application. You should check whether the collection is loaded when going from B to C, and not just assume it already is. Likewise, you should check whether the user really is logged in. It's just an example.
IMO the router really is a special kind of view (think about it, it displays application state and translates user input into application state/events) and should always be treated as such. You should never ever rely on the router to transition between states, but rather let the router reflect the state transitions.
I have to disagree with Stephen's answer here, and the main reason is that using router.navigate({trigger: true}) gives the router the responsibility of handling the application's state. It should only reflect application state, not control it.
Also, it is not a View's responsibility to change the hash of the window; that is the router's only job, so don't take it away from it! Good modularity and separation of concerns make for a scalable and maintainable application.
Forwarding a person to a new section within your application
Backbone is an event-driven framework, so use events to communicate. There is absolutely no need to call router.navigate({trigger: true}), since that functionality should not live in the router. Here is an example of how I use the router, which I think promotes good modularity and separation of concerns.
var Router = Backbone.Router.extend({
    initialize: function(app) {
        this.app = app;
    },
    routes: {
        'videoLibrary': function() { this.app.videoLibrary(); }
    }
});

var Application = _.extend({}, Backbone.Events, {
    initialize: function() {
        this.router = new Router(this);
        this.listenTo(Backbone, 'video:uploaded', function() {
            this.router.navigate('/videoLibrary');
            this.videoLibrary();
        });
    },
    videoLibrary: function() {
        // do useful stuff
    }
});

var uploadView = Backbone.View.extend({
    //...
    uploadVideo: function() {
        $.ajax({
            //...
            success: function() { Backbone.trigger('video:uploaded'); }
        });
    }
});
Your view does not need or want to know what to do when the user is done uploading; that is somebody else's responsibility. In this example, the router is just an entry point for the application's functionality, and an event generated by the uploadView is another. The router always reflects the application state through hash changes and history but does not implement any functionality.
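The only wiring left is a small bootstrap somewhere, a sketch assuming the definitions above:

$(function() {
    Application.initialize();   // creates the router and subscribes to application events
    Backbone.history.start();   // start routing so #videoLibrary reaches the route table
});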
Testability
By separating concerns, you are enhancing the testability of your application. It's easy to have a spy on Backbone.trigger and make sure the view is working properly. It's less easy to mock a router.
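For instance, with Sinon (a sketch; stubbing $.ajax to fire its success callback is my assumption about how the upload would be faked):

it('announces the finished upload instead of touching a router', function() {
    sinon.stub($, 'ajax').yieldsTo('success');        // make the upload "succeed" immediately
    var triggerSpy = sinon.spy(Backbone, 'trigger');

    new uploadView().uploadVideo();

    sinon.assert.calledWith(triggerSpy, 'video:uploaded');

    Backbone.trigger.restore();
    $.ajax.restore();
});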
Module management
Also, if you use module management like AMD or CommonJS, you would have to pass the router's instance around everywhere in the application in order to call it. That creates tight coupling in your application, and that is not something you want.
In my opinion it's considered bad practice because you should think of a Backbone application not as a Ruby on Rails application but rather as a desktop application.
When I say RoR, I just mean a framework that supports routing in the sense that a route brings you to a specific controller call that runs a specific action (imagine a CRUD operation).
Backbone.history is intended just as a bookmark for the user so that he can, for example, save a specific URL and visit it again later. In that case he will find the same situation he left before.
When you say:
In my opinion it is not really bad to call the navigate function with
the trigger:true option. The route function could upon calling always
check if the considered data is already loaded and show this loaded
data instead of doing the whole work all over again...
That sounds smelly to me. If you are triggering a route and then checking whether you already have the data, it means you actually already had it, so you should update your view accordingly instead of loading the entire DOM again with the same data.
That said, trigger:true is there, so do we ever have a reason to use it? In my opinion it is possible to use it if you are completely swapping a view.
Let's say I have an application with two tabs: one allows me to create a single resource, the other lets me see the list of created resources. In the second tab you are actually loading a Collection, so the data is different between the two. In this case I would use trigger:true.
That said, I've been using Backbone for 2 weeks, so I'm pretty new to this world, but to me it sounds reasonable to discourage the use of this option.
It depends on your context.
If you have done something in your current view that might affect the view you are about to navigate to, for example creating or deleting a customer record, then setting trigger to true is the right thing to do.
Think about it: if you delete a customer record, don't you want the list of customers to refresh to reflect that deletion?
Since Backbone provides two ways of responding to certain events, I was wondering what the general consensus is. This is a very common situation - I have a link on a page; I can set up the href on the page so that it routes and the router calls a function to handle it, like so:
HTML
<a href='#posts/2' class='handleInView'>Item 2</a>
JS
var AppRouter = Backbone.Router.extend({
    routes: {
        "posts/:id": "getPost"
    }
});
or I can respond to the event in the View like so:
var MyView = Backbone.View.extend({
    ...
    events: {
        "click .handleInView": "open"
    },
    ...
    open: function() {
        ...
    }
});
I know routes provide the added benefit of history and direct links, but from a performance and code layout perspective, which is the better approach if I don't care about history?
My routes could be a single place where I can see all of the interactions, but they could also get cluttered very quickly.
If you don't care about history or bookmarks, events have fewer side effects (people won't try to bookmark them and they won't interfere with your history) and they're simpler / faster to implement and handle.
Performance-wise, they're slightly faster as well (but really neither method is slow enough to matter at all).
I agree with the comments. Anything that requires deep linking, bookmarks, etc. should be handled using routes. However, if you have something like a TabView, or another view that should be inaccessible from a URL and is nested in another view, then it might make more sense to deal with that inside your view code. As for the clutter, you might want to think about reorganizing your routes into separate files. Here are some examples:
Backbone general router vs. separate routing files?
Multiple routers vs single router in BackboneJs
In general, routes are used when you are triggering a drastic change in the state of your application, or when you would like to maintain browsing history (via Backbone.history) so the user can navigate back and forth between states via the browser buttons.
Ideally, you would use both in different circumstances.
I like to think of it in terms of what is changing on my page. If the general page state is the same, but certain elements are changing or updating, I will use events. If the general page state is changing or if I'm loading a different UI Screen, I will use routes.
I have a question about routing. The thing is that I have trouble figuring out how to properly instantiate and render views when my app starts for the first time on some route.
For example:
If a user accesses the app via the route /?#a/b/c
My app consists of different sections with subsections. This means that for the above route to work I have to render the view for section A before rendering and displaying the view for subsection B, and finally render and display the view for subsubsection C.
The thing is that this results in a bunch of ugly code in the various routing handlers:
a: function() {
    A.render();
},

b: function() {
    A.render();
    B.render();
},

c: function() {
    A.render();
    B.render();
    C.render();
}
So I'm thinking that I'm approaching the problem the wrong way.
When introducing other routers as the app grows, this becomes even harder to maintain, I would imagine.
A solution would be if there were an event triggered before the callback for a route is called, but I can't find such a thing in the docs.
So my question is: how are these situations handled properly? My solution doesn't scale. Is there a way for a's handler to always fire when visiting a "subroute"?
I haven't found a good way to do this, but I'll share what I've done and it may or may not apply to your situation. What I wanted to do was have separate routers that respond to portions of the route, but Backbone doesn't work that way: it stops at the first route that matches.
What I've done to handle it is to set up a router like this (notice the custom regex used to define the route; hopefully it won't make your eyes bleed):
initialize: function()
{
    this.route(/([^\/]*)[\/]?([^\/]*)?[\/]?([^\/]*)?/, "renderViews", this.renderViews);
},

renderViews: function(mainView, subView, subSubView)
{
    // here you can do something clever--mainView, subView and subSubView may or may not
    // have values, but they are the names of the views. A route of "a/b" will pass
    // ["a", "b", undefined] as your arguments
    if (mainView)
        (new App.Views[mainView]()).render();
    if (subView)
        (new App.Views[subView]()).render();
    if (subSubView)
        (new App.Views[subSubView]()).render();
}
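For this to work, the handler assumes a registry of view constructors keyed by the route segments, something along these lines (purely illustrative):

App.Views = {
    a: Backbone.View.extend({ render: function() { /* render section A */ return this; } }),
    b: Backbone.View.extend({ render: function() { /* render subsection B */ return this; } }),
    c: Backbone.View.extend({ render: function() { /* render subsubsection C */ return this; } })
};

// Navigating to "#a/b/c" then calls renderViews('a', 'b', 'c'), which renders A, then B, then C in order.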
I realize this isn't exactly what you're probably hoping for, but it worked well for me in a project.
Good luck!