Here is my data flow:
1: 500+ objects come through the socket in 1-2s bursts
2: Objects are added directly to redux store
3: A React table container is connected to the redux store (using it as the data source) and re-renders for every object.
That many re-renders pretty much kill the browser.
What options do I have to buffer incoming objects (events) and send batches to the reducer every, say, 1s? An even better solution would be to somehow time-limit React rendering (shouldComponentUpdate...), but I doubt that's possible?
I would advise looking into RxJS and redux-observable specifically. What you're looking for is the debounce operator, if I'm not mistaken.
RxJS has a pretty steep learning curve, but I have managed to set up a basic working solution pretty quickly.
I have been inspired by this talk and I encourage you to take a listen.
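In case it helps, here is a minimal redux-observable sketch of that idea. One caveat: plain debounce would discard the intermediate objects, so for collecting bursts into batches RxJS's bufferTime is the closer fit. The action types and payload shape below are placeholders, not anything from your setup:

```javascript
// Hypothetical epic: assumes each socket object is dispatched as an
// OBJECT_RECEIVED action by some socket handler elsewhere in the app.
import { ofType } from 'redux-observable';
import { bufferTime, filter, map } from 'rxjs/operators';

const batchObjectsEpic = (action$) =>
  action$.pipe(
    ofType('OBJECT_RECEIVED'),
    bufferTime(1000),                    // collect everything seen in 1s
    filter((batch) => batch.length > 0), // skip empty windows
    map((batch) => ({
      type: 'OBJECTS_BATCH_RECEIVED',    // reducer handles the whole batch
      payload: batch.map((action) => action.payload),
    }))
  );
```

Assuming the reducers ignore the raw OBJECT_RECEIVED actions and only handle OBJECTS_BATCH_RECEIVED, your connected table re-renders at most once per second no matter how fast the socket bursts arrive.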
You could use a temporary cache in the reducer as local data, maybe with a timeout before adding the data to the store object.
You can implement your own throttle behavior using setTimeout in shouldComponentUpdate, or use something ready-made like https://github.com/ryo33/react-throttle-render
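For illustration, a hand-rolled version of that setTimeout-in-shouldComponentUpdate idea could look roughly like this (ThrottledTable, Table, and the rows prop are made-up names):

```javascript
import React from 'react';

// Re-renders at most once per second, no matter how often props change.
class ThrottledTable extends React.Component {
  constructor(props) {
    super(props);
    this.throttled = false;
    this.timeoutId = null;
  }

  shouldComponentUpdate() {
    if (this.throttled) return false; // swallow updates inside the window
    this.throttled = true;
    this.timeoutId = setTimeout(() => {
      this.throttled = false;
      this.forceUpdate(); // forceUpdate skips shouldComponentUpdate,
    }, 1000);             // so the latest props get rendered once
    return true;
  }

  componentWillUnmount() {
    clearTimeout(this.timeoutId); // avoid updating an unmounted component
  }

  render() {
    // this.props stays up to date even when rendering was skipped
    return <Table rows={this.props.rows} />;
  }
}
```

The trailing forceUpdate matters: without it, the last burst of data would stay invisible until the next unthrottled update came along.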
I'm looking for a good way to track which props received by a component are not being used and can be safely removed.
In a system I maintain, our client single-page app fetches a large amount of data from some private endpoints in our backend services via redux-saga. For most endpoints called, all data received is passed directly to our React components, with no filtering applied. We are working to improve the overall system performance, and part of that process involves reducing the amount of data returned by our backend-for-frontend services, given that those themselves call a large number of services to compose the returned JSON data, which adds to the overall response time.
Ideally, we want to make sure we only fetch the data we absolutely need and save the server from doing unnecessary calls and data normalization. So far, we've been trimming the backend services' data by doing a code inspection: we inspect the returned data for each endpoint, then inspect the front-end code, and finally remove the data we identified (as a best guess) as unused. That's proven to be risky and inefficient; frequently we assume some data is unused, then months later find a corner case in which it was actually needed, and have to reverse the work. I'm looking for a smart, automated way to identify unused props in my app. Has anyone else had to work on something like that before? Ideas?
There's an existing library called redux-usage-report (https://github.com/aholachek/redux-usage-report), which wraps the Redux state in a proxy to identify which pieces of state are actually being used.
That may be sufficiently similar to what you're doing to be helpful, or at least give you some ideas that you can take inspiration from.
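If you end up rolling your own, the core trick behind that library is small enough to sketch (trackUsage and accessedKeys are hypothetical names, not its real API):

```javascript
// Record which top-level state keys the UI actually reads.
const accessedKeys = new Set();

function trackUsage(state) {
  return new Proxy(state, {
    get(target, key) {
      accessedKeys.add(key); // note every property the app touches
      return target[key];
    },
  });
}

// Hand components the proxied state, exercise the app, then diff
// accessedKeys against Object.keys(state) to find candidates for removal.
const proxiedState = trackUsage({ user: { id: 1 }, legacyFlags: {} });
console.log(proxiedState.user);  // marks 'user' as used
console.log([...accessedKeys]);  // ['user'] - 'legacyFlags' was never read
```

A real version would recurse into nested objects, but even this flat check turns the guesswork into observed data.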
Suppose there is a complex Redux store that determines the state of many components throughout the app.
What is the best pattern for when to save things to the DB? I see pros and cons to different approaches, but I am wondering what is standard for applications with a complex UI?
Save the store to the DB every time a change is made. (Makes it difficult to chase lots of instant vs. async processes... Either lots of loading states and waiting, or juggling the store and the DB separately.)
Autosaving every now and then... (Allows the store to instantly determine the UI, faster... With occasional loading states.)
Manual saving... Ya, no thanks...
I recommend saving automatically every time a change is made, but use a "debounce" function so that you only save at most every X milliseconds (or whatever interval is appropriate for your situation).
Here is an example of a "debounce" function from lodash: https://lodash.com/docs/#debounce
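A minimal sketch of the wiring, assuming a saveToDb function and a Redux store already exist (both names are placeholders):

```javascript
import debounce from 'lodash/debounce';

// Waits for a 1s pause in changes before saving; maxWait guarantees a
// save at least once per second even while changes keep streaming in.
const debouncedSave = debounce(
  (state) => saveToDb(state),
  1000,
  { maxWait: 1000 }
);

store.subscribe(() => {
  debouncedSave(store.getState());
});
```

The maxWait option is the piece that turns "wait for quiet" into "at most every X milliseconds", which is usually what you want for autosave.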
Scenario:
I am building a realtime IoT dashboard that will update sensor readings at millisecond intervals (100ms). The readings are streamed over websockets. There is a central redux store which dispatches actions to update the state.
Problem:
With a handful of sensors the UI is freezing; possibly the browser re-painting is getting blocked (not sure).
I did a bit of research on browser fundamentals and came across requestAnimationFrame (rAF) and an excellent talk by Jake Archibald at JSConf. There is a related GitHub issue, "Should React use requestAnimationFrame by default".
So my question is: should I use rAF for millisecond DOM updates, or simply rely on React internals to update the DOM? For me the millisecond precision is crucial, and I can't afford any batching of changes happening with the sensor data.
What's the way forward?
According to this GitHub issue, for non-interactive events React will process setState calls asynchronously. This means the latest state will be rendered as soon as possible. This is as close as you can get to realtime in a browser.
If you attempt to draw every single websocket state update synchronously, you run the risk of back pressure. Generally speaking, back pressure occurs when the consumer dequeues items from a queue more slowly than the producer enqueues them. The buffer builds up over time, causing both a memory leak and an increasingly large delay between an item's original enqueue time and its dequeue time.
The ReactiveX site has a good example of back pressure - the web socket being the Observable, and React's async rendering being akin to the Sample operator.
React's asynchronous rendering handles back pressure by sampling the latest provided state at the time of rendering, meaning updates to the DOM are made as quickly as possible. React will probably be faster than trying to handle DOM updates on your own, because it only updates according to state changes - it doesn't update every element every "frame".
You should probably trust React to render ASAP with the latest data, and if the websocket messages are occurring too quickly for the UI thread to handle, then handle them in a web worker instead.
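If you do want explicit control over the sampling, a small hand-rolled version of that idea (keep only the newest reading, flush once per frame) might look like this; socket, store, and the action type are assumptions:

```javascript
let latestReading = null;
let frameScheduled = false;

socket.onmessage = (event) => {
  latestReading = JSON.parse(event.data); // overwrite: only newest matters
  if (!frameScheduled) {
    frameScheduled = true;
    requestAnimationFrame(() => {
      frameScheduled = false;
      // one dispatch per paintable frame, however many messages arrived
      store.dispatch({ type: 'READING_UPDATED', payload: latestReading });
    });
  }
};
```

This is effectively the Sample operator by hand: back pressure can't build up, because intermediate readings are overwritten instead of queued.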
I am implementing/evaluating a "real-time" web app using React, Redux, and Websocket. On the server, I have changes occurring to my data set at a rate of about 32 changes per second.
Each change causes an async message to the app using Websocket. The async message initiates a RECEIVE action in my redux state. State changes lead to component rendering.
My concern is that the frequency of state changes will lead to unacceptable load on the client, but I'm not sure how to characterize load against number of messages, number of components, etc.
When will this become a problem or what tools would I use to figure out if it is a problem?
Does the "shape" of my state make a difference to the rendering performance? Should I consider placing high change objects in one entity while low change objects are in another entity?
Should I focus my efforts on batching the change events so that the app can respond to a list of changes rather than each individual change (effectively reducing the rate of change on state)?
I appreciate any suggestions.
Those are actually pretty reasonable questions to be asking, and yes, those do all sound like good approaches to be looking at.
As a thought - you said your server-side data changes are occurring 32 times a second. Can that information itself be batched at all? Do you literally need to display every single update?
You may be interested in the "Performance" section of the Redux FAQ, which includes answers on "scaling" and reducing the number of store subscription updates.
Grouping your state partially based on update frequency sounds like a good idea. Components that aren't subscribed to that chunk should be able to skip updates based on React Redux's built-in shallow equality checks.
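As a sketch of what that grouping could look like (reducer names and action types are illustrative):

```javascript
import { combineReducers, createStore } from 'redux';

// Hot slice: replaced dozens of times per second.
const readings = (state = [], action) =>
  action.type === 'READINGS_RECEIVED' ? action.payload : state;

// Cold slice: changes only on explicit user interaction.
const settings = (state = { theme: 'dark' }, action) =>
  action.type === 'SETTINGS_CHANGED' ? action.payload : state;

const store = createStore(combineReducers({ readings, settings }));

// A connect() whose mapStateToProps reads only state.settings keeps
// getting the same object reference while readings churn, so react-redux's
// shallow equality check lets that component skip re-rendering entirely.
```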
I'll toss in several additional useful links for performance-related information and libraries. My React/Redux links repo has a section on React performance, and my Redux library links repo has relevant sections on store change subscriptions and component update monitoring.
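And since batching came up: the "respond to a list of changes" idea can be a tiny higher-order reducer, in the spirit of the redux-batched-actions library (the BATCH action type here is illustrative):

```javascript
// One BATCH action applies many changes but triggers a single store
// notification, so subscribers re-render once per batch.
const enableBatching = (reducer) => (state, action) =>
  action.type === 'BATCH'
    ? action.payload.reduce(reducer, state) // fold every change in one pass
    : reducer(state, action);

// Usage sketch: buffer websocket messages for, say, 250ms, then:
//   const store = createStore(enableBatching(rootReducer));
//   store.dispatch({ type: 'BATCH', payload: bufferedActions });
```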
React 0.14.0, Vanilla Flux Pattern
Question:
What if a triggered event needs to update 2 data structures that live in 2 separate stores?
This is a very fundamental pain that I feel with Flux.
You logically decompose your stores, and then one day you find yourself creating an event, but unfortunately it needs to simultaneously update 2 separate data structures that live in 2 different stores. (Crap.)
Why that confuses me during development:
Please Correct Me If This Logic Is Wrong
As far as my understanding of Flux goes, we should not dispatch an action until the re-rendering caused by the previous action is complete. Therefore, multiple synchronous actions to update a store (or multiple stores) are a Flux no-no.
My Solutions:
Crunch Stores Together -
I can move the data that needs to be updated into the same store in order to keep it to one action. (That sounds like overcomplicating a store.)
Move State Server-Side -
I can keep track of some of the state server-side, and then use asynchronous action calls that update the server first; the server then pushes the update back to the store. (Sounds unRESTful and slow.)
I'm a dedicated React developer, and I'm open to any advice and/or correction to improve my understanding and build great React applications. Thx
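For reference, vanilla Flux does allow a single dispatched action to reach every registered store, so one event can update two stores without a second dispatch; a rough sketch (store shapes and the action type are hypothetical):

```javascript
import { Dispatcher } from 'flux';

const dispatcher = new Dispatcher();

// Every registered callback sees every action, so both stores can
// respond to the same event.
const userStore = { user: null };
dispatcher.register((action) => {
  if (action.type === 'LOGIN_SUCCEEDED') {
    userStore.user = action.user;
  }
});

const notificationStore = { items: [] };
dispatcher.register((action) => {
  if (action.type === 'LOGIN_SUCCEEDED') {
    notificationStore.items.push('Welcome back!');
  }
});

// One dispatch, two store updates, still a single action:
dispatcher.dispatch({ type: 'LOGIN_SUCCEEDED', user: { id: 1 } });
```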