Nesting Flux Data - reactjs

In my app, I make two ajax calls for one piece of data. First I make a call to get a list of ecommerceIntegrations. Once I have those, I can then grab each of their respective orders.
My current code looks something like this:
componentDidMount: function() {
  EcommerceIntegrationStore.addChangeListener(this._onIntegrationStoreChange);
  OrderStore.addChangeListener(this._onOrderStoreChange);
  WebshipEcommerceIntegrationActionCreators.getEcommerceIntegrations();
},

_onIntegrationStoreChange: function() {
  var ecommerceIntegrations = EcommerceIntegrationStore.getEcommerceIntegrations();
  this.setState({ecommerceIntegrations: ecommerceIntegrations});
  ecommerceIntegrations.forEach(function(integration) {
    WebshipOrderActionCreators.getPendingOrdersOfIntegration(integration.id);
  });
},

_onOrderStoreChange: function() {
  this.setState({
    pendingOrders: OrderStore.getAllPendingOrders(),
    pendingOrdersByIntegration: OrderStore.getPendingOrdersByIntegration()
  });
}
I'm trying to follow Facebook's Flux pattern, and I'm pretty sure this doesn't follow it. I saw some other SO posts about nesting data with Flux, but I still don't understand. Any pointers are appreciated.

Everything here looks good except this:
ecommerceIntegrations.forEach(function(integration) {
  WebshipOrderActionCreators.getPendingOrdersOfIntegration(integration.id);
});
Instead of trying to fire off an action in response to another action (or worse yet, an action for every item in ecommerceIntegrations), back up and respond to the original action. If you don't yet have a complete set of data, and you need to make two calls to the server, wait to fire the action until you have all the data you need to make a complete update to the system. Fire off the second call in the XHR success handler, not in the view component. This way your XHR calls are independent of the dispatch cycle and you have moved application logic out of the view and into an area where it's more appropriately encapsulated.
If you really want to update the app after the first call, then you can dispatch an action in the XHR success handler before making the second call.
Ideally, you would handle all of this in a single call, but I understand that sometimes that's not possible if the web API is not under your control.
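When you do need two calls, a rough sketch of chaining them in an API utils module could look like this (the module name, endpoints, and the server action creator below are assumptions, not taken from your code):

var WebshipWebAPIUtils = {
  getEcommerceIntegrationsAndOrders: function() {
    fetch('/api/ecommerce_integrations')                    // endpoint assumed
      .then(function(res) { return res.json(); })
      .then(function(integrations) {
        // fire the second round of calls here, in the success handler
        var orderRequests = integrations.map(function(integration) {
          return fetch('/api/ecommerce_integrations/' + integration.id + '/pending_orders')
            .then(function(res) { return res.json(); });
        });
        return Promise.all(orderRequests).then(function(ordersByIntegration) {
          // one action, one complete update to the system
          WebshipServerActionCreators.receiveIntegrationsAndOrders(
            integrations,
            ordersByIntegration
          );
        });
      });
  }
};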
In a Flux app, one should not think of Actions as being things that can be chained together as a strict sequence of events. They should live independently of each other, and if you have the impulse to chain them, you probably need to back up and redesign how the app is responding to the original action.

Related

React - Old promise overwrites new result

I have a problem and I'm pretty sure I'm not the only one who ever had it... Although I tried to find a solution, I didn't really find something that fits my purpose.
I won't post much code, since it's not really a code problem, but more a logic problem.
Imagine I have the following hook:
useEffect(() => {
  fetchFromApi(props.match.params.id);
}, [props.match.params.id]);
Imagine the result of fetchFromApi is displayed in a simple table in the UI.
Now let's say the user clicks on an entity in the navigation, so the ID prop in the browser URL changes and the effect triggers, leading to an API call. Let's say the call with this specific ID takes 5 seconds.
During these 5 seconds, the user again clicks on an element in the navigation, so the hook triggers again. This time, the API call only takes 0.1 seconds. The result is immediately displayed.
But the first call is still running. Once it's finished, it overwrites the current result, which leads to wrong data being displayed in the wrong navigation section.
Is there an easy way to solve this? I know I can't cancel promises by default, but I also know that there are ways to achieve it...
Also, it could be possible that fetchFromApi is not a single API call, but instead multiple calls to multiple endpoints, so the whole thing could become really tricky...
Thanks for any help.
The solution to this is extremely simple: you just have to determine whether the response you got came from the latest API call, and only accept it if it did. You can do that by storing a trigger time in a ref. If the API call has been triggered again in the meantime, the ref will hold a newer value, while the closure variable still holds the previously set value; the mismatch tells you that another API call was started after this one, so you don't accept the current result.
const timer = useRef(null);

useEffect(() => {
  fetchFromApi(props.match.params.id, timer);
}, [props.match.params.id]);

function fetchFromApi(id, timer) {
  timer.current = Date.now();
  const triggerTime = timer.current;
  fetch('path').then(() => {
    if (timer.current === triggerTime) {
      // process result here
      // accept response and update state
    }
  });
}
Another way to handle such scenarios is to cancel the previously pending API requests. If you use Axios, it provides a cancelToken that you can use, and similarly you can abort XMLHttpRequests too.
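For completeness, with the native fetch API you can get a similar effect by aborting the stale request from the effect's cleanup function using an AbortController (the endpoint and setTableData below are placeholders, not from your code):

useEffect(() => {
  const controller = new AbortController();
  fetch(`/api/entities/${props.match.params.id}`, { signal: controller.signal })
    .then(res => res.json())
    .then(data => setTableData(data))      // hypothetical state setter
    .catch(err => {
      if (err.name !== 'AbortError') {
        console.error(err);                // aborts are expected; only log real errors
      }
    });
  // abort the previous request when the id changes or the component unmounts
  return () => controller.abort();
}, [props.match.params.id]);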

Which is more preferable? Dispatch the response of a promise or dispatch an action stating a promise needs to be triggered?

Here are two code samples:
onClick() { // click-handler of a button
  axios.get(someUrl)
    .then(response => {
      // setData is a fn dispatching an action-creator passed through react-redux's connect()
      setData(response.data);
    });
}
or
// buttonClicked is also a fn dispatching an action-creator
// Difference being the middle-ware handles the entire async process
<button onClick={this.buttonClicked}>Click me</button>
The latter method will use Axios in some middleware, and then dispatch another action which will set the response data in the store.
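Concretely, with something like redux-thunk (just one possible middleware; the action type names are only illustrative), the middleware-driven variant could look roughly like this:

// buttonClicked() returns a thunk; the middleware runs it, performs the
// request, and then dispatches the action that stores the response data.
function buttonClicked() {
  return function(dispatch) {
    dispatch({ type: 'FETCH_DATA_REQUESTED' });
    return axios.get(someUrl).then(response => {
      dispatch({ type: 'DATA_RECEIVED', payload: response.data });
    });
  };
}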
So this means that the first approach will only dispatch one action, while the second approach will dispatch two actions.
Both ways obviously seem to work, but I would like to know what the best way would be. Is there a downside to either approach?
Disclaimer: This is an opinionated answer, and somewhat rambly.
The thing about promises is that they work the way a human being would think of a promise. So use them like that in your program. Typically IMO you should only use Promises when you know that an event would occur in the normal course of your program workflow, or when you are promising a result.
So, for example, if you ask for a socket connection: I promise to give you one whenever I am able to. You don't have to wait for me; just go on and do your thing, and as soon as I have done everything needed to get it back to you, I will hand it over, and you can move on in your workflow from the point that needs it. For example (pseudo code):
var Socket = new Promise(function(resolve, reject) {
  resolve(do_something_to_get_a_socket());
});

Socket.then(authenticate).then(sendData);
etc.
Sticking a promise to an event handler like onClick should be a promise to do something for the user — use it in your code to create threads that will do the heavy lifting of complex processing, while the user is still able to interact with the interface.
For example, in a game a click could fire a dart and you promise that it will animate on the screen (and even if that glitches) you still promise that it will hit the target etc, but the user doesn't have to wait for the promise to be fulfilled to fire another dart.
So use Promises to make your program more readable by you and other coders, and use it to make workflow of your program more realistic to your usecase.
I heavily recommend something like: https://www.npmjs.com/package/redux-api-middleware
This middleware (or others like it) contain quite a few features you would most likely have to write yourself if you were to implement this with just axios in a callback. For example, it will automatically dispatch request, success, and failure actions based on the AJAX call's result.
When dispatching this action using the middleware, many things are taken care of for you.
{
  [CALL_API]: {
    endpoint: "http://example-api.com/endpoint",
    method: "GET",
    headers: { ... },
    types: [
      "GET_X_REQUEST", "GET_X_SUCCESS", "GET_X_FAILURE"
    ]
  }
}
Something like this will automatically fire a "GET_X_REQUEST" action when it begins to load. Then a success or failure action (with appropriate data or error objects attached as a payload) when the AJAX call completes or fails.
Or any similar middleware where Redux ends up handling the entire async process.
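To illustrate how Redux then handles the whole cycle, here is a sketch of a reducer consuming those three action types (the state shape and payload fields are assumptions and depend on how the middleware is configured):

function xReducer(state = { loading: false, data: null, error: null }, action) {
  switch (action.type) {
    case "GET_X_REQUEST":
      return { ...state, loading: true, error: null };
    case "GET_X_SUCCESS":
      return { ...state, loading: false, data: action.payload };
    case "GET_X_FAILURE":
      return { ...state, loading: false, error: action.payload };
    default:
      return state;
  }
}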

ReactJS fetching new data on prop

As a preface, I'm still new to React, so I'm still fumbling my way through things.
What I have is a component that fetches data to render an HTML table. So I call my Actions' fetchData() (which uses the browser's fetch() API) from within componentWillMount(), which also has a listener for a Store change. This all works well and good, and I'm able to retrieve and render data.
Now the next step. I want to be able to fetch new data when the component's props are updated. But I'm not exactly sure what the proper way to do so is. So I have a three-part question:
Would the proper place to do my fetchData() on new props be in componentWillReceiveProps(), after validating that the props did change, of course?
My API is rather slow, so it's entirely possible a new prop comes in while a fetch is still running. Is it possible to cancel the old fetch and start a new one, or at least implement logic to ignore the original result and wait for the results from the newer fetch?
Related to the above question, is there a way to ensure only one fetch is running at any time besides having something like an isLoading boolean in my Action's state (or elsewhere)?
Yes, componentWillReceiveProps is the proper place to do that.
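For example, a minimal sketch (recordId and MyActionCreators are placeholders for your own prop and action module):

componentWillReceiveProps: function(nextProps) {
  // only refetch when the relevant prop actually changed
  if (nextProps.recordId !== this.props.recordId) {
    MyActionCreators.fetchData(nextProps.recordId);
  }
},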
Regarding points 2 and 3:
The idea of cancelling the task and maintaining 'one fetch running' seems inadequate. I don't think this kind of solution should be used in any system, because such an implementation would limit the efficiency of your app by design.
Is it possible to cancel the old fetch and start a new one, or at least implement logic to ignore the original result and wait for the results from the newer fetch?
Why don't you let a 'newer fetch' response override an 'old fetch' response?
If you really want to avoid displaying the old response, you can implement it simply using a counter of in-flight fetchData calls. You can implement it in this way:
var ApiClient = {
  processing: 0,
  fetchData: function() {
    var self = this;
    self.processing++;
    return yourLibForHTTPCall.get('http://endpoint').then(function(response) {
      self.processing--;
      return response;
    });
  },
  isIdle: function() {
    return this.processing === 0;
  }
}
and the place where you actually make a call:
ApiClient.fetchData().then(function(response) {
  if (ApiClient.isIdle()) {
    this.setState({
      // ...use the response here
    });
  }
}.bind(this));
I hope yourLibForHTTPCall.get returns a Promise in your case.

Where should HTTP requests be initiated in Flux?

There is plenty of discussion on how to communicate with external services in Flux.
It is pretty clear that the basic workflow is to fire an HTTP request, which will eventually dispatch a success or failure action based on the response. You can also optionally dispatch an "in progress" action before making the request.
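For instance, the usual shape is roughly this (the names here are just illustrative):

function loadItems() {
  dispatch({ type: 'ITEMS_REQUEST' });      // the optional "in progress" action
  fetch('/api/items')
    .then(function(res) { return res.json(); })
    .then(function(items) { dispatch({ type: 'ITEMS_SUCCESS', items: items }); })
    .catch(function(err) { dispatch({ type: 'ITEMS_FAILURE', error: err }); });
}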
But what if the request's parameters depend on the store's state? Nobody seems to mention it.
So essentially, based on user interaction with the view, an ACTION is dispatched. The store owns the logic for transitioning from the current state0 to the next state1 given the ACTION. Data from state1 is needed to properly form the new HTTP request.
For example, the user chooses a new filter on the page, and the store decides to also reset pagination. This should lead to a new HTTP request with (new filter value, first page), not (new filter value, current page from state0).
The view cannot make the HTTP call itself right at the point of the user's interaction, because it would then have to duplicate the store's logic for transitioning to the next state.
The view cannot make the HTTP call in its store's onChange handler either, because at that point the origin of the state change is no longer known.
It looks like a viable option to have the store fire the HTTP request in the action handler, after it has transitioned to the next state. But this makes the action implicitly initiate an HTTP call, which takes away the neat possibility of having a replayable log of dispatched actions for debugging.
Where should HTTP requests be initiated in Flux?
Let's start at the bottom:
It looks like a viable option to have the store fire the HTTP request in the action handler, after it has transitioned to the next state. But this makes the action implicitly initiate an HTTP call, which takes away the neat possibility of having a replayable log of dispatched actions for debugging.
This can be mitigated by not initiating HTTP requests if you're in debugging/replay mode. This works great as long as the only thing you do in your HTTP request handlers is fire actions (e.g. SUCCESS and FAILURE actions). You could implement this with a simple global boolean (if (!debug) { httpReq(...) }), but you could also make the pattern a bit more formal.
In Event Sourcing parlance, you use Gateways for such purposes. In normal operation, the Gateway makes your HTTP requests, and in debugging, you turn the Gateway off (so it doesn't make any HTTP requests).
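A rough sketch of such a Gateway, assuming all HTTP goes through one module that can be switched off (the fallback behavior shown is only indicative):

var HttpGateway = {
  enabled: true,           // set to false in debugging/replay mode
  get: function(url) {
    if (!this.enabled) {
      // never hit the network during replay; a real implementation might
      // return canned fixture data instead of a forever-pending promise
      return new Promise(function() {});
    }
    return fetch(url).then(function(res) { return res.json(); });
  }
};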
That said, I think the problem can actually be solved by rethinking where your HTTP requests are made.
So essentially, based on user interaction with the view, an ACTION is dispatched. The store owns the logic for transitioning from the current state0 to the next state1 given the ACTION. Data from state1 is needed to properly form the new HTTP request.
In the second link in your question (Where should ajax request be made in Flux app?), I recommend doing your writes in action creators but reads in the stores. If you extrapolate that pattern into your use case, you might end up with something like this (pseudocode and long variable names for clarity):
class DataTable extends React.Component {
  render() {
    // Assuming that the store for the data table contains two sets of data:
    // one for the filter selection and one for the pagination.
    // I'll assume they're passed as props here; this also assumes that
    // this component is somehow re-rendered when the store changes.
    var filter = this.props.filter;
    var start = this.props.start;
    var end = this.props.end;
    var data = this.props.dataTableStore.getDataForPageAndFilter(
      start, end, filter
    );
    // the store will either give us the LOADING_TOKEN,
    // which indicates that the data is still loading,
    // or it will give us the loaded data
    if (data === DataTableStore.LOADING_TOKEN) {
      return this.renderLoading();
    } else {
      return this.renderData(data);
    }
  }
}
class DataTableStore {
  constructor() {
    this.cache = {};
    this.filter = null;
    this.start = 0;
    this.end = 10;
  }

  getDataForPageAndFilter(start, end, filter) {
    var url = HttpApiGateway.urlForPageAndFilter(start, end, filter);
    // in a better implementation, the HttpApiGateway
    // might do the caching automatically, rather than
    // making the store keep the cache
    if (!this.cache[url]) {
      this.cache[url] = DataTableStore.LOADING_TOKEN;
      HttpApiGateway.query(url)
        .then((response) => {
          // success
          var payload = {
            url: url,
            data: response.body
          };
          dispatch(DATA_FETCH_SUCCESS, payload);
        }, (error) => {
          // error
          dispatch(DATA_FETCH_FAIL, { ... });
        });
    }
    return this.cache[url];
  }

  handleChangeFilterAction(action) {
    this.filter = action.payload.filter;
    // the store also decides to reset pagination
    this.start = 0;
    this.end = 10;
    this.emit("change");
  }

  handleDataFetchSuccessAction(action) {
    this.cache[action.payload.url] = action.payload.data;
    this.emit("change");
  }

  handleDataFetchFailAction(action) {
    // ...
  }
}
DataTableStore.LOADING_TOKEN = "LOADING"; // some unique value; Symbols work well
You can see that the store is responsible for deciding how to update the pagination and the filter variables, but is not responsible for deciding when HTTP requests should be made. Instead, the view simply requests some data, and if the store doesn't have it in the cache, it will then make the HTTP request.
This also allows the view to pass in any additional local state into the getter (in case the HTTP requests also depends on local state).
I'm not sure I understand all the parts of the question, but I'll try to answer with some useful insights.
Flux is a bit like a not-yet-mature version of Event Sourcing / CQRS / Domain-Driven Design for frontend developers.
We have used something akin to Flux for years on the backend, with different terminology. We can compare Flux ActionCreators to DDD Commands, and Flux Actions to DDD Events.
A command represents the user's intent (LOAD_TIMELINE(filters)). It can be accepted or rejected by a command handler that will eventually publish some events. In a UI it does not make much sense to reject commands, as you don't want to display buttons that the user should not click...
An event represents something that has happened (always in the past).
The React app state that drives the UI is then, in a sense, a projection of the event log onto a JSON state. Nothing can be displayed in the UI without an event being fired first.
Answering your questions
But what if the request's parameters depend on the store's state? Nobody seems to mention it.
In DDD, command handlers can actually be stateful. They can use the app state to know how to handle the command appropriately. Somehow this means that your Flux ActionCreators can be stateful too (maybe they can use some store data while creating actions).
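A rough sketch of what such a stateful ActionCreator could look like (the store accessor, endpoint, and action names are illustrative, not prescribed by Flux):

var TimelineActionCreators = {
  loadTimeline: function(filters) {
    // read store state to shape the request (hypothetical accessor)
    var page = PaginationStore.getResetPage();
    Dispatcher.dispatch({ type: 'TIMELINE_LOAD_REQUESTED', filters: filters, page: page });
    fetch('/api/timeline?page=' + page + '&filter=' + encodeURIComponent(filters))
      .then(function(res) { return res.json(); })
      .then(function(data) {
        Dispatcher.dispatch({ type: 'TIMELINE_LOADED', payload: data });
      });
  }
};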
So essentially, based on user interaction with the view, an ACTION is dispatched. The store owns the logic for transitioning from the current state0 to the next state1 given the ACTION. Data from state1 is needed to properly form the new HTTP request.
In DDD there is a concept called Saga (or Process Manager).
To make it simple, it receives the event stream and can produce new commands.
So basically you can express your requirement through a Saga: when there's a "FILTERS_UPDATED" event, fire a "RELOAD_LIST" command with the new filters. I'm pretty sure you can implement something similar with any Flux implementation.
Sagas should be disabled when you replay the event log, as replaying it should not have side effects like triggering new events.
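With a plain Flux dispatcher, a saga-like listener could be sketched roughly like this (replayMode, the action names, and TimelineActionCreators are illustrative):

Dispatcher.register(function(action) {
  if (action.type === 'FILTERS_UPDATED' && !replayMode) {
    // defer, so we don't dispatch in the middle of the current dispatch
    setTimeout(function() {
      TimelineActionCreators.loadTimeline(action.filters);
    }, 0);
  }
});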
These kinds of semantics are supported in my Atom-React framework, where stores can act as stateful command handlers or sagas.

How can I update the view after each API call

I am new to Angular and I am building an app where I want to make multiple API calls and update the view as the data from them comes in. I do not want to wait for all the API calls to be completed to update my view, and my API calls are not dependent on each other. Some of the API calls take more than a minute to return the data.
I was thinking of using $q.all since I can start multiple asynchronous tasks, but I can't update the view after each one is completed. Could someone please point out how I can build this?
Should I just use $scope.$apply in the success block of my $http call ?
My progress so far LINK (this was a different issue I had, but the code is the same)
It's a bit hard to understand your model and what you're trying to achieve from your question, but you might want to use something like $broadcast() and $on().
So you'd broadcast an event when your API call has finished downloading:
$scope.$broadcast('API-download', data);
and then listen for it elsewhere and update your view
$scope.$on('API-download', function(event, data) {
  processData(data);
});
That syntax might not be perfect, and as you have multiple API calls you'll need to broadcast different events like 'API-product-download' and 'API-catalogue-download'
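If the calls really are independent, you may not even need events: each $http call can update the scope on its own as soon as it resolves, and since $http runs inside Angular's digest cycle, no manual $scope.$apply is needed (the endpoints and scope properties below are placeholders):

$http.get('/api/products').then(function(response) {
  $scope.products = response.data;
});
$http.get('/api/catalogue').then(function(response) {
  $scope.catalogue = response.data;
});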
