Apologies for the vague title; as usual, I can't think of an appropriate name.
I'm building an app that I hope will be dynamic enough to adapt to a variety of scenarios, but, generally speaking, its goal is to present a UI for a state machine. In generic terms:
There will be a list of objects, and each object will have the same collection of states.
Each state will have any number of predecessor states that must be completed before it can be started.
Each state will have data associated with it, presented to the app as key/value pairs (KVPs).
The keys for each individual state will be the same for every object.
The keys for one state may or may not match the keys for a different state.
The values for each state will be derived from information stored on the object.
The values for each state may change based on changes to the information stored on the object.
As an example of two states:
Review general information. An example of data could be { "name": "Kiran Ramaswamy", "address": "Somewhere in Canada" }
Review references. An example of data could be { "name": "John Doe", "relationship": "Guy on the street", "contactInfo": "john.doe at gmail dot com" }
My question is: what is the best way to model this data? I see three options:
Store KVPs as part of state data for object
Pro: Changes to KVP structure can be rendered automatically, without rebuild and redeploy of app
Pro: Data presentation logic is very simple
Con: Need to generate data for all states at object creation
Con: Need to potentially update data for each state any time there are updates to the object
Calculate KVPs dynamically from the object (see the sketch after this list of options)
Pro: Changes to KVP structure can be rendered automatically, without rebuild and redeploy of app
Pro: No need to store any data anywhere
Con: Data presentation logic will be very complex; it will require either a very complex data-retrieval construct or a very complex data-presentation construct
Abandon the idea of trying to present data as KVPs
Pro: No need to have complex logic for data presentation, as each state will have its own dedicated set of logic
Con: Need to manage quite a bit more code
Con: Changes to KVP structure may require rebuild and redeploy
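For what it's worth, option 2 need not make the presentation logic complex if each state declares its own derivation function. A minimal sketch of that idea, where the state names, object fields, and helper are all hypothetical:

// Hypothetical sketch: each state declares its predecessors and how to
// derive its KVPs from the underlying object.
const stateDefinitions = {
  reviewGeneralInfo: {
    predecessors: [],
    deriveData: (obj) => ({
      name: obj.fullName,
      address: obj.address,
    }),
  },
  reviewReferences: {
    predecessors: ["reviewGeneralInfo"],
    deriveData: (obj) => ({
      name: obj.reference.name,
      relationship: obj.reference.relationship,
      contactInfo: obj.reference.contactInfo,
    }),
  },
};

// Values are computed on demand, so they always reflect the current object,
// and nothing has to be generated or kept in sync at object creation.
function getStateData(stateKey, obj) {
  return stateDefinitions[stateKey].deriveData(obj);
}

The UI then only ever renders generic KVPs, while the per-state complexity lives in one declarative map.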
Related
I am fairly new to redux and redux-toolkit, but I am fairly familiar with the whole concept of stores (coming from a vuex background).
I have an issue organizing a large state with lots of dependencies between the objects. My current approach is to use a structure similar to an RDBMS, where each object has its own ID and other objects reference it through that ID. This way, there are no deeply nested objects.
My current problem is how to organize a large state with lots of dependencies and lots of types of objects (type of object = "table"). I could put all of the objects into a single slice, but this would make that slice incredibly large.
Another approach is to separate the object types into different slices, with a single slice per object type. The problem here is that accessing the state of another slice is not easily done, yet it is required in order to keep the state consistent when updating (i.e. to avoid broken references).
Is there a good, common approach on how to structure a large state with a lot of dependencies?
Where I work we decided to go with RTK and to have one slice per entity/object related to a table in the backend (users, customers, products, etc). This is working fine.
You can access the state of another slice by using the thunkAPI object, the second parameter of the payloadCreator callback of the createAsyncThunk function. That object exposes several methods; one of them is getState :)
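A minimal sketch of that; the slice names, state shape, and endpoint are assumptions, not your actual code:

import { createAsyncThunk } from '@reduxjs/toolkit';

// Assumed state shape: a 'users' slice holding entities keyed by id.
export const updateCustomer = createAsyncThunk(
  'customers/update',
  async (customer, thunkAPI) => {
    // getState() returns the whole root state, not just one slice.
    const owner = thunkAPI.getState().users.entities[customer.ownerId];
    if (!owner) {
      // Keep references consistent across slices before hitting the API.
      return thunkAPI.rejectWithValue('unknown owner: broken reference');
    }
    const response = await fetch(`/api/customers/${customer.id}`, {
      method: 'PUT',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(customer),
    });
    return response.json();
  }
);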
I'm pretty new to GraphQL and Apollo, but I've been using Redux along with React for the last 3 years. The Apollo documentation encourages developers to use it as the single source of truth:
We want to be able to access boolean flags and device API results from multiple components in our app, but don't want to maintain a separate Redux or MobX store. Ideally, we would like the Apollo cache to be the single source of truth for all data in our client application.
I'm trying to figure out how to replicate with Apollo what Redux allows me to do. In my application, I have "Tags", an array of objects with close to 15 different fields each. They are used in 3 different sections of my app, and each section shows specific "Tags" as well as specific fields from those "Tags". Given that, the way I approach this with Redux is to fetch the "Tags" from my API and, in the reducer, create different arrays containing the IDs of the specific "Tags" I need for each section, along with a Map (id, value) holding the original data. It would be something like:
const tags = new Map(); //(tagId, tag) containing all the tags
const sectionATags = []; // array of ids for section A tags
const sectionBTags = []; // array of ids for section B tags
const sectionCTags = []; // array of ids for section C tags
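Spelled out as a (simplified) reducer - the action type and the per-section predicates here are made up:

const initialState = {
  tags: new Map(),
  sectionATags: [],
  sectionBTags: [],
  sectionCTags: [],
};

function tagsReducer(state = initialState, action) {
  switch (action.type) {
    case 'TAGS_FETCHED': {
      const tags = new Map();
      const sectionATags = [];
      const sectionBTags = [];
      const sectionCTags = [];
      for (const tag of action.payload) {
        tags.set(tag.id, tag);
        // Made-up predicates deciding which sections show which tags.
        if (tag.visibleInA) sectionATags.push(tag.id);
        if (tag.visibleInB) sectionBTags.push(tag.id);
        if (tag.visibleInC) sectionCTags.push(tag.id);
      }
      return { ...state, tags, sectionATags, sectionBTags, sectionCTags };
    }
    default:
      return state;
  }
}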
My goal is to replicate the same behavior, but, even though they encourage you to manage your local state with Apollo, I'm not sure whether what I want to achieve is possible in a simple way, or whether it is actually good practice to do so with Apollo. I've been following this example from the docs, and what they mainly do is add or remove extra fields on the data received from the server, by extending the query or mutating the cached data with the @client directive.
At the same time, I understand GraphQL is meant to let you query exactly the data you need, instead of the typical REST request where you get a big JSON with all the data whether you need it or not. But it feels like it wouldn't be efficient to make 3 different queries in this case, each with the specific data I need for one section.
I'm not sure if I'm missing something, or if maybe Apollo is intended for simpler local state management. Am I on the right path, or should I keep using Redux with another GraphQL library that just lets me fetch data without a management layer?
Apollo has a normalized cache: you can query all of the fields required anywhere once, and subsequent/additional queries can be "cache-only" (sketched below). No additional requests will be made; this is safe as long as all the required fields already exist in the cache.
Converting the data structure is usually not needed; we work with the data (and shape) we need ("not our problem but the backend's"). You can:
change the data shape in a resolver (in a GraphQL API wrapping REST),
reshape data in a REST link (direct REST API access from Apollo),
read data once at startup and save the converted result locally (all following queries read the local copy).
Common app state can be left in Redux; data fetching should live in Apollo.
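A sketch of the "cache-only" approach; the tags query, its fields, and the TagList component are assumptions:

import { gql, useQuery } from '@apollo/client';

// Fetched once (e.g. at startup): every field any section will need.
const ALL_TAGS = gql`
  query AllTags {
    tags { id name color description }
  }
`;

// A section then selects only the fields it renders.
const SECTION_A_TAGS = gql`
  query SectionATags {
    tags { id name }
  }
`;

function SectionA() {
  // "cache-only" never hits the network; it is safe here because
  // AllTags already put every selected field into the normalized cache.
  const { data } = useQuery(SECTION_A_TAGS, { fetchPolicy: 'cache-only' });
  return <TagList tags={data ? data.tags : []} />;
}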
Background: I want to implement a react-redux app with an immutable collection as the store, but I am wondering how best to transfer the immutable data between server and client.
The only way I can see to solve the problem is JSON.stringify(ImmutableObject.toJS()), but toJS() is absolutely unusable here because it is extremely slow. However, I cannot simply do JSON.stringify(ImmutableObject), because that would 100% break the structure. Does anyone have a nice way to do this?
It really depends on whether you're using only data structures that map directly to JSON constructs (Map, List) or not.
You say that JSON.stringify(ImmutableObject) would 100% break the structure - but it wouldn't have any different impact on the JSON string's structure than JSON.stringify(ImmutableObject.toJS()), as all immutable.js collections implement the toJSON method. The output of both should be identical (apart from, perhaps, the order of keys).
If your store is an immutable.js object - or contains some - you have no option other than serializing it to JSON (using one of the two methods above) and, client-side, instantiating immutable.js objects before rehydrating the store.
If you use only Maps and Lists, that's fairly straightforward: .toJSON server-side, and fromJS client-side. If you use data structures such as OrderedMap, Set, Range, Record, etc... your only option is to have logic client-side that knows how to revive each piece of the state before feeding it to the redux store.
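A minimal round trip with only Maps and Lists, for illustration (the sample state is made up):

import { Map, fromJS } from 'immutable';

// Server side: JSON.stringify calls the collection's toJSON implicitly,
// so this is equivalent to stringifying state.toJS().
const state = Map({ user: Map({ name: 'Ada' }), items: fromJS([1, 2, 3]) });
const json = JSON.stringify(state);

// Client side: revive plain objects/arrays into Maps/Lists before
// creating the redux store.
const revived = fromJS(JSON.parse(json));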
You're right that the penalty of toJS can be huge, but in my experience that is only the case for really large collections.
Finally, there is of course the option to write your own to/from-JSON serializer/deserializer - something that would output a string like {"type": "OrderedMap", "value": [{"key": "a", "value": 1}]} - but I'm not sure that would be any better perf-wise, and having store-reviving logic client-side is not that bad IMO.
I've had success using the transit-immutable-js lib.
From the docs:
"Transit is a serialisation format which builds on top of JSON to provide a richer set of types. It is extensible, which makes it a good choice for easily providing serialisation and deserialisation capabilities for Immutable's types."
I'm working on a project which has only been going a very short time. There are a few Flux stores in place already, which manage different aspects of the application state and are relatively independent.
I have 2 questions:
Some of the stores that exist are emitting more than one type of change event. Is this indicative of the stores handling too much unrelated data that should be in separate stores, or is this a usual situation?
We need to write a React component that depends on more than one of the existing stores, and that also needs to query the server for some specific information to render on the page, which it will then allow the user to modify. So, before this component can render, it needs to ensure all the stores contain what they need and issue actions to populate anything that is missing. My question is how to handle this: would it be better to create a new store that fetches the specific data required by the component and depends on the other stores (using the usual Flux store dependency rules), or to have the component know directly which specific stores it depends on?
For the first part of your question: it depends. Flux doesn't force you to follow a strict set of rules. Redux, for example, uses only one store for everything. I worked on a project where almost all components had their own store, and another where we had a single store per view that handled the data model and all additional state. Without knowing the specifics of your project (size, complexity, etc.) I can't recommend one over the other. I would probably go with the minimum number of stores that makes sense to you and your team, and refactor as needed (i.e. when you feel a store handles too much or a single file contains too much unrelated code). Whatever works best for your situation and makes you most comfortable.
For the other part: since you want the component to render only after the data for all stores is populated, I would introduce a new store to handle the server data and use the Dispatcher's waitFor method to define dependencies (a sketch follows). If you opt to use the stores directly, you could instead render the component in some kind of initial state, with a loading spinner over the parts that are missing or with user editing disabled, and, once the data is fetched, update the state to display the rest of the data/enable editing. This requires more code but can result in better UX. Again, it all depends on your needs.
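A sketch of that waitFor dependency - the store names and action type are made up:

import { Dispatcher } from 'flux';

const dispatcher = new Dispatcher();

// register() returns a token that other stores can wait on.
const userStoreToken = dispatcher.register((action) => {
  /* ...update user data, emit change... */
});

const pageDataStoreToken = dispatcher.register((action) => {
  if (action.type === 'PAGE_DATA_RECEIVED') {
    // Don't touch our state until the stores we depend on have
    // finished processing this action.
    dispatcher.waitFor([userStoreToken]);
    /* ...merge server data with the user store's state, emit change... */
  }
});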
What's the best way to handle a case where multiple components use one store (populated by API calls in an action creator), but each component may need to access different sets of data without affecting the others? E.g.: two table components display data from the WidgetStore; one table wants all widgets, the other only displays widgets whose name contains "foo" (based on user input). The table being queried via the API has tens of thousands of widgets, so loading them all into the store and filtering from the store isn't practical. Is there a Flux architecture (like Redux) that already has a way of handling this kind of thing?
The simplest way is to just create a parent component and selectively hand off data, using a pluck or selector function, to each of the children (see the sketch below).
For the more general answer going forward: if you follow something along the lines of Redux, there are already proven patterns which will help you understand passing complex data down.
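Concretely, something like this - the component and field names are illustrative:

// The parent gets the widgets (however they are fetched/paged) and
// plucks a different subset for each child table.
function WidgetTables({ widgets }) {
  const fooWidgets = widgets.filter((w) => w.name.includes('foo'));
  return (
    <div>
      <WidgetTable rows={widgets} />
      <WidgetTable rows={fooWidgets} />
    </div>
  );
}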