Suppose there is a complex Redux store that determines the state of many components throughout the app.
What is the best pattern for deciding when to save things to the DB? I see pros and cons to different approaches, but I am wondering what is standard for applications with a complex UI?
Save the store to the DB every time a change is made. (This makes it difficult to chase lots of instant vs. async processes: either lots of loading states and waiting, or juggling the store and the DB separately.)
Autosave every now and then. (This lets the store instantly determine the UI, which is faster, with only occasional loading states.)
Manual saving... Yeah, no thanks...
I recommend saving automatically every time a change is made, but use a "debounce" function so that you only save at most every X milliseconds (or whatever interval is appropriate for your situation).
Here is an example of a "debounce" function from lodash: https://lodash.com/docs/#debounce
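For illustration, here is a minimal sketch of that approach, assuming a standard Redux setup; the `/api/save` endpoint, the trivial reducer, and the 2-second wait are all placeholders for your own situation:

```ts
import { debounce } from "lodash";
import { createStore } from "redux";

// Trivial reducer just to make the sketch self-contained.
const rootReducer = (state = { value: 0 }, action: { type: string }) =>
  action.type === "INCREMENT" ? { value: state.value + 1 } : state;

const store = createStore(rootReducer);

// Hypothetical persistence call -- swap in your real API or DB layer.
async function saveToDb(state: unknown): Promise<void> {
  await fetch("/api/save", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(state),
  });
}

// Wait until changes pause for 2 seconds before writing; see the lodash
// docs for the maxWait option if you need a guaranteed upper bound.
const debouncedSave = debounce((state: unknown) => {
  saveToDb(state).catch((err) => console.error("Save failed:", err));
}, 2000);

// Every dispatch notifies the subscriber; the debounce decides when to save.
store.subscribe(() => {
  debouncedSave(store.getState());
});
```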
The application I'm building uses data that changes once a day at a predefined time that is common to all data sources.
The app does not, however, show the collected data itself, but derived information that is computed from it.
The computation is a bit heavy and doing it on the fly causes the page to take a few seconds to load. To solve this, I have two options:
Add a cron job that pre-computes the desired information and saves it to the database;
Use caching to avoid hitting the database every time someone loads a page.
My question is this: which method is more appropriate to solve the issue without leading to more headaches down the road?
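To make option 1 concrete, here is a rough sketch of the pre-compute job, assuming the node-cron scheduler; the fetchRawData/computeDerivatives/saveDerivatives helpers and the 00:15 schedule are hypothetical:

```ts
import cron from "node-cron";

// Hypothetical helpers -- swap in your real data access and computation.
async function fetchRawData(): Promise<number[]> {
  return [];
}
function computeDerivatives(raw: number[]): number {
  return raw.reduce((sum, x) => sum + x, 0); // stand-in for the heavy work
}
async function saveDerivatives(result: number): Promise<void> {
  /* ...write the precomputed row to the database... */
}

// Run shortly after the (assumed) daily data refresh at midnight: 00:15.
cron.schedule("15 0 * * *", async () => {
  const raw = await fetchRawData();
  const result = computeDerivatives(raw);
  await saveDerivatives(result); // page loads then read this precomputed row
});
```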
I am implementing/evaluating a "real-time" web app using React, Redux, and WebSocket. On the server, changes occur to my data set at a rate of about 32 changes per second.
Each change sends an async message to the app over the WebSocket. Each message dispatches a RECEIVE action into my Redux store, and state changes lead to component rendering.
My concern is that the frequency of state changes will lead to unacceptable load on the client, but I'm not sure how to characterize load against number of messages, number of components, etc.
When will this become a problem or what tools would I use to figure out if it is a problem?
Does the "shape" of my state make a difference to the rendering performance? Should I consider placing high change objects in one entity while low change objects are in another entity?
Should I focus my efforts on batching the change events so that the app can respond to a list of changes rather than each individual change (effectively reducing the rate of change on state)?
I appreciate any suggestions.
Those are actually pretty reasonable questions to be asking, and yes, those do all sound like good approaches to be looking at.
As a thought - you said your server-side data changes are occurring 32 times a second. Can that information itself be batched at all? Do you literally need to display every single update?
You may be interested in the "Performance" section of the Redux FAQ, which includes answers on "scaling" and reducing the number of store subscription updates.
Grouping your state partially based on update frequency sounds like a good idea. Components that aren't subscribed to that chunk should be able to skip updates based on React Redux's built-in shallow equality checks.
I'll toss in several additional useful links for performance-related information and libraries. My React/Redux links repo has a section on React performance, and my Redux library links repo has relevant sections on store change subscriptions and component update monitoring.
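To illustrate the batching idea from the question, here is a minimal client-side sketch; the wss://example.com/feed endpoint, the Change shape, and the 250 ms flush interval are all assumptions:

```ts
import { createStore } from "redux";

interface Change {
  id: string;
  value: number;
}

// Reducer that applies a whole batch of changes in one state update.
const reducer = (
  state: Record<string, number> = {},
  action: { type: string; changes?: Change[] }
) => {
  if (action.type === "RECEIVE_BATCH" && action.changes) {
    const next = { ...state };
    for (const c of action.changes) next[c.id] = c.value;
    return next;
  }
  return state;
};

const store = createStore(reducer);

// Buffer incoming websocket messages and dispatch at most ~4 times/sec,
// so 32 messages/sec become a handful of store updates and render passes.
let buffer: Change[] = [];
const socket = new WebSocket("wss://example.com/feed"); // hypothetical endpoint

socket.onmessage = (event) => {
  buffer.push(JSON.parse(event.data) as Change);
};

setInterval(() => {
  if (buffer.length > 0) {
    store.dispatch({ type: "RECEIVE_BATCH", changes: buffer });
    buffer = [];
  }
}, 250);
```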
React 0.14.0, Vanilla Flux Pattern
Question:
What if a triggered event needs to update 2 data structures that live in 2 separate stores?
This is a very fundamental pain that I feel with Flux.
You logically decompose your stores, and then one day you find yourself creating an event that, unfortunately, needs to simultaneously update 2 separate data structures that live in 2 different stores. (Crap.)
Why this confuses me during development:
Please Correct Me If This Logic Is Wrong
As far as my understanding of Flux goes, we should not dispatch an action until the re-rendering caused by the previous action is complete. Therefore, dispatching multiple synchronous actions to update a store (or multiple stores) is a Flux no-no.
My Solutions:
Crunch Stores Together -
I can move the data that needs to be updated into the same store in order to keep it to one action. (That sounds like overcomplicating a store.)
Move State Server-Side -
I can keep track of some of the state server-side, then use asynchronous action calls that update the server first, after which the server pushes the update back to the store. (Sounds un-RESTful and slow.)
I'm a dedicated React developer, and am open to any advice and/or correction to help my understanding to build great React applications. Thx
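For what it's worth, one premise here can be relaxed: in vanilla Flux, a single dispatched action can be handled by multiple stores, because each store registers its own callback with the shared dispatcher, and waitFor() handles ordering between them. Here is a minimal sketch using Facebook's flux package; the store shapes are hypothetical:

```ts
import { Dispatcher } from "flux"; // Facebook's vanilla Flux dispatcher

interface Payload {
  type: string;
  itemId?: string;
}

const dispatcher = new Dispatcher<Payload>();

// Two stores, each registering its own callback with the same dispatcher.
const cartStore = { items: [] as string[] };
const cartToken = dispatcher.register((payload) => {
  if (payload.type === "ADD_ITEM" && payload.itemId) {
    cartStore.items.push(payload.itemId);
  }
});

const statsStore = { totalAdds: 0 };
dispatcher.register((payload) => {
  if (payload.type === "ADD_ITEM") {
    dispatcher.waitFor([cartToken]); // ensure cartStore updates first
    statsStore.totalAdds += 1;
  }
});

// A single dispatched action updates both stores in one dispatch cycle,
// so no second synchronous dispatch is needed.
dispatcher.dispatch({ type: "ADD_ITEM", itemId: "sku-42" });
```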
I am looking to implement the CQRS pattern. For updating the read database, is it best to use a Windows service, or to update the view at the time a new record is created in the write database? Is it best to use triggers, or some other process? I've seen a couple of approaches and haven't made up my mind which is the best way to achieve this.
Thanks.
Personally, I love to use messaging to solve these kinds of problems.
Your commands result in events when they are processed, and if you use messaging to publish those events, one or more downstream read services can subscribe to them and update the read models.
The reason why messaging is nice in this case is that it allows you to decouple the write and read side from each other. Also, it allows you to easily have several subscribers if you find a need for it. Additionally, messaging using a persistent queuing system like MSMQ enables retrying of failed messages. It also means that you can take a read model offline (for updates etc) and when it comes back up it can then process all the events in the queue.
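As a rough sketch of that flow, here is an in-process stand-in for the broker; a real system would use a persistent queue such as MSMQ or RabbitMQ, and the OrderPlaced event is purely hypothetical:

```ts
// Minimal in-process stand-in for a message broker.
type Handler<E> = (event: E) => Promise<void>;

class EventBus {
  private handlers = new Map<string, Handler<any>[]>();

  subscribe<E>(eventType: string, handler: Handler<E>): void {
    const list = this.handlers.get(eventType) ?? [];
    list.push(handler);
    this.handlers.set(eventType, list);
  }

  async publish<E>(eventType: string, event: E): Promise<void> {
    for (const handler of this.handlers.get(eventType) ?? []) {
      await handler(event);
    }
  }
}

interface OrderPlaced {
  orderId: string;
  total: number;
}

const bus = new EventBus();
const readModel = new Map<string, number>(); // denormalized view

// Downstream read service: subscribes to events and updates the read model.
bus.subscribe<OrderPlaced>("OrderPlaced", async (e) => {
  readModel.set(e.orderId, e.total);
});

// Write side: processing a command emits an event; it never touches the
// read model directly, so the two sides stay decoupled.
async function placeOrder(orderId: string, total: number): Promise<void> {
  // ...persist to the write database here...
  await bus.publish<OrderPlaced>("OrderPlaced", { orderId, total });
}
```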
I'm no friend of triggers in relational databases, and I imagine they must be pretty hard to test. Triggers would also introduce routing logic where it doesn't belong. Isn't it also the case that if the trigger action fails, the entire write transaction rolls back? Triggers are probably the least beneficial solution.
It depends on how tolerant your application must be with regards to eventual consistency.
If your app has no problem with read data being 5 minutes old, there's no need to denormalize upon every write data change. In that case, a background service that kicks in every n minutes or that kicks in only when the CPU consumption is below a certain threshold, for instance, can be a good solution.
If, on the other hand, your app is time-sensitive, such as in the case of frequently changing statuses, machine monitoring, stock exchange data etc., then you will want to keep the lag as low as possible and denormalize on the spot -- that is, in-process or at least in real-time. So in this case you may choose to run the denormalizers in a constantly-running process or to add them to the chain of event handlers straight in your code.
Your call.
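To make the in-process variant concrete, here is a minimal sketch; the StatusChanged event and the read-model shape are assumptions:

```ts
// In-process denormalization: the write path updates the read model
// synchronously, keeping lag near zero at the cost of coupling.
interface StatusChanged {
  machineId: string;
  status: string;
}

const readModel = new Map<string, string>(); // denormalized "current status" view

// Denormalizer registered straight in the chain of event handlers.
function denormalizeStatus(event: StatusChanged): void {
  readModel.set(event.machineId, event.status);
}

function handleStatusChange(event: StatusChanged): void {
  // ...persist the event to the write store...
  denormalizeStatus(event); // runs in-process, before the request returns
}

handleStatusChange({ machineId: "m-7", status: "overheating" });
console.log(readModel.get("m-7")); // "overheating"
```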
I am facing two options for how to update the database and do not know which one is better for my situation. There are three tables in the database, which are used to read/store a user's information, such as their URL history or some inputs.
With real-time updating, the database is updated as the user acts, so changes can be seen by that user immediately.
With batch processing, the update is hidden from the user: the database is updated by parsing log files, and that process runs every X hours, so the user only sees their changes after up to X hours.
Apart from those user-visible trade-offs between synchronous and asynchronous updates, what are the other benefits of choosing real-time or batch processing for updating the database?
Thanks
It all depends on the amount of traffic you expect. If you want to scale your application, asynchronous processing is always recommended. But that does not mean that your users have to wait for X hours. You can have the process run every 5 minutes or even every minute.
This way you will reduce concurrency issues and at the same time users will be able to see their updated history with a little bit of delay.
See the best practices for scalability in the book Scalability Rules.
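As a sketch of what "every minute instead of every X hours" could look like, assuming a hypothetical newline-delimited JSON log format and file path:

```ts
import { promises as fs } from "fs";

// Hypothetical log format: one JSON record per line.
interface LogEntry {
  userId: string;
  url: string;
  ts: number;
}

async function processLogBatch(path: string): Promise<void> {
  const lines = (await fs.readFile(path, "utf8")).split("\n").filter(Boolean);
  const entries = lines.map((l) => JSON.parse(l) as LogEntry);
  // ...bulk-insert entries into the history tables in one transaction...
}

// Running every minute instead of every X hours keeps the visible lag small
// while still batching writes and avoiding per-request contention.
setInterval(() => {
  processLogBatch("/var/log/app/history.log")
    .catch((err) => console.error("Batch update failed:", err));
}, 60 * 1000);
```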
I would suggest you use EDA (Event-Driven Architecture), which uses middleware to "glue" all of this together.
http://searchsoa.techtarget.com/definition/event-driven-architecture
One piece of advice: keep away from batch processes.
Today, everything tends to be more and more real-time. Imagine if you received my answer in X hours... would you be satisfied? :)
If you give us more info, we can help you more.
I see that your input comes from log files; can this be changed?
You could also implement the observer pattern.
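A minimal sketch of that observer pattern; the HistoryWriter and its subscriber are hypothetical:

```ts
// Classic observer: the write path notifies subscribers as soon as a
// change lands, instead of waiting for a batch job to pick it up.
interface Observer<T> {
  update(change: T): void;
}

class HistoryWriter<T> {
  private observers: Observer<T>[] = [];

  attach(observer: Observer<T>): void {
    this.observers.push(observer);
  }

  write(change: T): void {
    // ...persist the change to the database here...
    for (const o of this.observers) o.update(change); // notify immediately
  }
}

// Example subscriber that could push the change out to the user's view.
const uiNotifier: Observer<string> = {
  update: (change) => console.log("History changed:", change),
};

const writer = new HistoryWriter<string>();
writer.attach(uiNotifier);
writer.write("visited https://example.com");
```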