Implementing undo/redo on an array of objects using Vuex

I am trying to implement undo/redo in a complex application.
This is how my history array looks:
[
  {
    action: "added",
    payload: {
      id: "32132132",
      data: "JSON.stringify(JSON.parse(data)), the initial data from the initial app setup"
    }
  },
  {
    action: "updated",
    payload: {
      id: "32132132",
      data: "getting the diff"
    }
  },
  {
    action: "deleted",
    payload: {
      id: "32132132",
      data: "data"
    }
  }
]
As far as I understand, undo/redo works on a history index,
which is changed on undo (increment the pointer) and redo (decrement the pointer).
Based on this approach I am calling a mutation, roughly (pseudocode):
apply(state, actionType) {
  if (actionType === 'undo') {
    // remove the respective objects
    data.pop(); // pops the data out of other places in the application, not the history
    historyIndex++;
  } else if (actionType === 'redo') {
    // get the data back from history
    data.unshift(historyEntry);
    historyIndex--;
  }
}
I feel this is not the most efficient way to perform undo/redo operations, as it involves cloning objects and has to handle huge sets of data, which might lead to the application freezing. I am still a newbie in Vue.js, so please do correct me if I am wrong: is there a more efficient way to perform undo/redo operations, or a right way to implement undo/redo in Vue.js?
Any suggestions would be helpful

You should consider using some kind of data compression (like Git does); with this you can keep using only Vuex. Otherwise, consider using a local database, as already recommended ;D
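For illustration, a minimal sketch of the diff idea, assuming a Vuex-like state where each history entry stores only the changed record's before/after values instead of a full clone (all names here are illustrative, not a drop-in implementation):

const state = {
  items: {},        // live data, keyed by id
  history: [],      // entries of the form { id, before, after }
  historyIndex: -1  // index of the last applied entry
};

function commitChange(state, id, after) {
  // discard any redo entries beyond the pointer
  state.history.splice(state.historyIndex + 1);
  state.history.push({ id, before: state.items[id], after });
  state.historyIndex++;
  state.items[id] = after;
}

function undo(state) {
  if (state.historyIndex < 0) return;
  const { id, before } = state.history[state.historyIndex--];
  state.items[id] = before; // restores a single record, no full-state clone
}

function redo(state) {
  if (state.historyIndex >= state.history.length - 1) return;
  const { id, after } = state.history[++state.historyIndex];
  state.items[id] = after;
}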

You may want to connect your app to a simple database that stores your previous objects. Try Firebase if your app has no backend, and have Vuex keep just the current value, with older ones saved to the Firebase database by the mutation handling in Vuex.
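A minimal sketch of that idea, assuming the Firebase Realtime Database (v8-style API) and a Vuex plugin that mirrors every committed mutation into Firebase; the 'history' ref name is illustrative:

import firebase from 'firebase/app';
import 'firebase/database';

firebase.initializeApp({ /* your config */ });

// Vuex plugin: runs after every committed mutation
const persistHistory = (store) => {
  store.subscribe((mutation) => {
    // push each mutation (type + payload) to Firebase,
    // so Vuex itself only has to keep the current value
    firebase.database().ref('history').push({
      type: mutation.type,
      payload: mutation.payload ?? null,
      at: Date.now()
    });
  });
};

// new Vuex.Store({ state, mutations, plugins: [persistHistory] })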

Related

How do you share state in a micro-frontend scenario?

A first idea would be cookies, yet you can run out of space really fast.
There are several ways to get communication in microfrontends.
As already noted the different microfrontends should be loosely coupled, so you'd never directly talk from one to another.
The key question is: Is your microfrontend solution composed on the server-side or client-side?
For the client side I've written an article on communication.
If you are on the server side (question seems to go in that direction due to the mention of cookies) then I would suggest using the standard microservice patterns for communication and exchanging state. Of course, using centralized systems such as Redis cache there would help.
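As a server-side illustration, a minimal sketch assuming Node.js and the redis v4 client; the session key is hypothetical:

import { createClient } from 'redis';

const client = createClient({ url: 'redis://localhost:6379' });
await client.connect();

// one microfrontend's server writes shared state under a session key
await client.set('session:123:user', JSON.stringify({ id: 42, name: 'Ada' }));

// another microfrontend's server reads it while composing its fragment
const user = JSON.parse(await client.get('session:123:user'));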
In general the different microfrontends should have their own state and be as independent as possible.
Usually what you want to share is not the state/data, but rather the state with a UI representation. The reason is simple: that way you don't have to deal with the representation and edge cases (what if the data is not available?). One framework to show this is Piral.
Hope that helps!
There's no shared state, that'd break the concept of the segregation that's supposed to take place. This pattern is present among all microservices architectures as it's supposed to eliminate single points of failure and other complications in maintaining a bigger store. The common approach is for each "micro frontend" to have its own store (i.e. Redux). The Redux docs have a topic on this.
First, you should avoid having shared state between MicroFrontends (MFEs) as much as possible. This is a best practice to avoid coupling, reduce bugs, etc.
A lot of the time you don't need it; for example, any information/state that comes from the server (e.g. the "user" information) can be requested individually by each MFE when it needs it.
However, in case you really need a shared state there are a few solutions like:
- Implement the pub/sub pattern in the Window object.
There are a few libraries that already provide this.
//MFE 1
import { Observable } from 'windowed-observable';

const observable = new Observable('messages');
observable.publish(input.value); // input.value markup is not present in this example for simplicity

//MFE 2
import { Observable } from 'windowed-observable';

const observable = new Observable('messages');
const handleNewMessage = (newMessage) => {
  setMessages((currentMessages) => currentMessages.concat(newMessage)); // custom logic to handle the message
};
observable.subscribe(handleNewMessage);
reference: https://dev.to/luistak/cross-micro-frontends-communication-30m3#windowed-observable
- Dispatch/Capture Custom browser Events
Remember that custom events can carry a 'detail' field that allows passing information.
//MF1
const customEvent = new CustomEvent('message', { detail: input.value });
window.dispatchEvent(customEvent)
//MF2
const handleNewMessage = (event) => {
  setMessages((currentMessages) => currentMessages.concat(event.detail));
};
window.addEventListener('message', handleNewMessage);
This approach has the important caveat that it only works for newly dispatched events: you cannot read the state if you did not capture the event at the moment it was dispatched.
reference: https://dev.to/luistak/cross-micro-frontends-communication-30m3#custom-events
In both implementations, using a good naming convention will help a lot to keep order.
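For instance, a hypothetical convention that namespaces event names by app and domain so different MFEs cannot collide:

// illustrative only: 'myapp:chat:new-message' is app:domain:event
const CHANNEL = 'myapp:chat:new-message';
window.dispatchEvent(new CustomEvent(CHANNEL, { detail: input.value }));
window.addEventListener(CHANNEL, (event) => console.log(event.detail));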

How will the state store the data in Redux in React Native?

I would like to know:
if we are storing data in state in Redux,
and the data in Redux is suddenly updated,
will the previous state be overridden,
or will it maintain a copy, like versions (v1, v2)?
Can anyone please guide me on this?
The previous states will not be stored; once you close the app the "session" is cleared, while minimizing the app will see the state persist. There are a number of options for permanent data storage. redux-persist is one of them, and probably the easiest to implement. There is also a built-in option with React Native (this is however the worst option, as I find it often causes my apps to hang if a process takes too long; I suspect it's blocking the JS thread). The best option, but slightly more difficult to set up, is https://realm.io/, which I use in conjunction with Redux and Saga. This is great for an offline-first approach to building your app, as you can check the user's connectivity and either make calls to your API or to your Realm storage.
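For reference, a minimal redux-persist setup sketch for React Native, assuming the community AsyncStorage package; rootReducer stands in for your own combined reducer:

import { createStore } from 'redux';
import { persistStore, persistReducer } from 'redux-persist';
import AsyncStorage from '@react-native-async-storage/async-storage';
import rootReducer from './reducers'; // your own reducer

const persistConfig = { key: 'root', storage: AsyncStorage };

// wrap the reducer so its state is saved to and rehydrated from AsyncStorage
const store = createStore(persistReducer(persistConfig, rootReducer));
const persistor = persistStore(store);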
See the code example below for appending data in a reducer; a contrasting example of replacing data follows after it.
case NEXT_PAGE_SUCCESS:
  return {
    ...state,
    busy: false,
    msg: action.payload.statusText,
    status: true,
    data: [...state.data, ...action.payload.data]
  }
In the above reducer case, ...state will drop in the previous state; afterwards we also assign busy: false, which overwrites the old busy value.
Now if you look at the data field, we have:
[...state.data,...action.payload.data]
This will combine the previous state.data with the new data, appending to the list instead of overwriting it; the contrasting replacing version is shown below.
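For contrast, a sketch of the replacing case (the REFRESH_SUCCESS action type is illustrative): here the previous list is discarded instead of appended to.

case REFRESH_SUCCESS:
  return {
    ...state,
    data: action.payload.data // previous state.data is dropped, not appended to
  }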
I hope this answers your question?
Lloyd

Handling sorted and filtered collections coming from APIs in a redux store

Redux guidelines suggest to
think of an app's state as a database
and to prefer key-based objects over arrays when it comes to storing resources. This makes total sense, since it simplifies 99% of the most common use cases when dealing with collections: search, find, add more, remove, read...
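For example, a quick sketch of why the key-based shape is convenient (the data is illustrative):

const usersById = {
  '1': { id: '1', name: 'Ann' },
  '2': { id: '2', name: 'Bo' }
};
const user = usersById['2'];              // read: O(1) lookup
usersById['3'] = { id: '3', name: 'Cy' }; // add
delete usersById['1'];                    // remove
// with an array you would scan each time: users.find(u => u.id === '2')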
Unfortunately the downsides show up when it comes to keeping a filterable and sortable collection of resources in sync with API responses. For example a typical request:
GET /users?status=active&orderBy=name&orderDir=asc&lastID=1234&limit=10
will return a filtered, sorted and paged list (array) of users. Typically the reducer will then take this array and do something like:
users: {...state.users, ...keyBy(action.payload, 'id')}
This will merge the new data with previously fetched data, breaking the computation done by the API.
The app must then perform a second, client-side computation on the collection to reconstruct the expected list. This results in:
- redundant computation (redoing something that has already been done by the server)
- duplication of logic (the same filtering and sorting code deployed both client-side and server-side)
- maintenance cost (client app developers must take on the extra burden of keeping filter and sort logic in sync with the backend every time it changes, to guarantee consistency)
Another downside, if you are implementing a sort of infinite loading, is that you must keep track of the lastID, since there is no way to deduce the last loaded id after results have been merged.
So the question:
What's the best approach to design stores and reducers that must deal with sorted/filtered/paged data fetched via APIs?
One of the common approaches is to keep an object-index map and a sorted object list in separate structures.
Example reducer (using Ramda):
import * as R from 'ramda';

function reducer(state, action) {
  if (action.type === 'USERS_LOADED') {
    return R.merge(state, {
      // id -> user map, for O(1) lookups
      userIndex: R.reduce(
        (acc, user) => R.assoc(user.id, user, acc),
        {},
        action.payload
      ),
      // ids kept in the server-provided order
      userList: R.map(R.prop('id'), action.payload)
    });
  }
  return state;
}
Example connect selector:
connect(
  state => ({
    users: state.userList.map(id => state.userIndex[id]) // reconstructs users in the original order
  })
)(...)
You can also use the open-source project DataScript.
An immutable in-memory database and Datalog query engine.
DataScript is meant to run inside the browser. It is cheap to create,
quick to query and ephemeral. You create a database on page load, put
some data in it, track changes, do queries and forget about it when
the user closes the page.
DataScript databases are immutable and based on persistent data
structures. In fact, they’re more like data structures than databases
(think Hashmap). Unlike querying a real SQL DB, when you query
DataScript, it all comes down to a Hashmap lookup. Or series of
lookups. Or array iteration. There’s no particular overhead to it. You
put a little data in it, it’s fast. You put in a lot of data, well, at
least it has indexes. That should do better than you filtering an
array by hand anyway. The thing is really lightweight.
It has a nice JavaScript API. Usage example (a bit outdated). Discussion by the Redux core team.

Storing multi-dimensional data in Redux

First off, I'm still a relative newbie to the world of React & Redux. I'm reading about Normalizing State Shape, and their examples are about storing data by ID. But what if I have data that is keyed on multiple dimensions?
For example, my app is displaying cost data for a given Service ID, which is retrieved from an API. However, the user is able to select a time range. The start and end timestamps are passed to the API, and the API returns the aggregated data over that time range. I want to be able to store all the different time period data in Redux so if the user goes back to a previous time period, that data is already there (our API is slow, so having already loaded data available is critical to user experience).
So not only do I have data keyed by Service ID, but also by Start Time / End Time. Since Redux recommends flat data structures, I wonder how flat I should make it. Normally, I would store the data like this:
{
  costData: {
    [service_id]: {
      [start_time]: {
        [end_time]: {
          /* data */
        }
      }
    }
  }
}
But that seems to go against the idea of flattening the data. One of my ideas was to generate an ID based on Service ID, Start Time & End Time, of the form:
<ServiceID>::<StartTime>::<EndTime>
eg.
00123::1505423419::1505785502
So the data is fairly flat:
{
  costData: {
    '00123::1505423419::1505785502': {
      /* data */
    }
  }
}
The component can generate this ID and pass it to the fetchCostData() action, which can dispatch, fetch the data, and store it under that generated ID. But I don't know if that's the best approach. Are there any guidelines or recommendations on how to approach this? A quick sketch of the composite-key scheme I have in mind (all names are illustrative):
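const costKey = (serviceId, startTime, endTime) =>
  `${serviceId}::${startTime}::${endTime}`;

function costDataReducer(state = {}, action) {
  switch (action.type) {
    case 'FETCH_COST_DATA_SUCCESS':
      return {
        ...state,
        // one flat entry per (service, time range) combination
        [costKey(action.serviceId, action.startTime, action.endTime)]: action.payload
      };
    default:
      return state;
  }
}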
I recommend using selectors (Reselect) for this nested data, if you're not comfortable with modifying your API.
- Selectors are the best approach for computing derived data, allowing Redux to store the minimal possible state.
- Selectors are efficient: a selector is not recomputed unless one of its arguments changes.
- Selectors are composable: they can be used as input to other selectors.
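A minimal sketch with Reselect, reusing the hypothetical composite key from the question:

import { createSelector } from 'reselect';

const selectCostData = (state) => state.costData;
const selectKey = (_, props) =>
  `${props.serviceId}::${props.startTime}::${props.endTime}`;

// memoized: only recomputed when costData or the key changes
const selectCostForRange = createSelector(
  [selectCostData, selectKey],
  (costData, key) => costData[key]
);

// usage: selectCostForRange(state, { serviceId: '00123', startTime: 1505423419, endTime: 1505785502 })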
In addition to the other answer, you may want to read the article Advanced Redux Entity Normalization, which describes ways to track additional lookup descriptions of normalized data. I also have some other articles on normalization in the Redux Techniques#Selectors and Normalization section of my React/Redux links list.

Best Practice: When to throw away unwanted data from asynchronous call

I have some asynchronous middleware that accesses a remote API and dispatches the result to Redux.
I am accessing an existing API which returns a large chunk of data, much of which I do not need. Is there any established best practice for when to discard unwanted elements from the data? As far as I can see I could:
1 - Filter it out when received and only pass what I need to the store.
2 - Store everything in the store when it is received and use a selector or mapStateToProps to extract only what I need to render.
3 - Store and extract everything and filter out what I need within the component.
What do others think?
In case you can't change the API to use something like streams or at least pagination, go with option no. 1.
- Try to work with the least amount of data necessary to do the job. This is a general rule and doesn't apply only to Redux!
- Try to keep your store as flat as possible.
- Try to minimize the data involved in any actions that lead to a change in the store.
With that said, filter out all unused data right when the API response comes in.
If you're getting a large amount of data from the API, something like this:
API
"payload": {
  "info": [ ... large ... ],
  "meta": [ ... small ... ]
}
Then in your Redux handling make sure to use only the small data. For instance, in your reducer:
Reducer
function reducer(store, action) {
  switch (action.type) {
    case 'GET_API':
      // keep only the small `meta` slice; the large `info` is dropped here
      return { ...store, meta: action.payload.meta };
    default:
      return store;
  }
}
So now you won't have that large data anymore; once the API call is finished, the store will only contain the small data.
