Does this look like a reasonable, if verbose, way to make sure that each click does not stack requests to saveOperation?
I'm using RxJS and storing the instance of the subscription in a mutable ref so it persists between renders. Then, if one exists, I cancel it before starting a new one.
const saveSubscription = useRef<Subscription>(); // RxJS Subscription (cancellable fetch)
const handleChange = () => {
  saveSubscription.current?.unsubscribe();
  saveSubscription.current = saveOperation({ ...data }).subscribe();
};
...
<input type="text" onClick={() => handleChange()} ref={fileInput} />
A more reactive way of solving your issue would be to always keep your subscription open and let your pipe control the data flow. One way is to use switchMap.
One asynchronous value that changes over time is your text input. That can be the outer observable which unsubscribes from your inner http request and starts a new one:
import { Subject, of } from 'rxjs';
import { delay, switchMap } from 'rxjs/operators';

// Outer text observable that changes via your input
const text$ = new Subject<string>();

// A fake http function to show async http responses
const fakeHttp$ = (text: string) => of('Http answer: ' + text).pipe(delay(1000));

// The http response that can be subscribed to; cancels the fakeHttp request
// if text$ emits again while the old request is still open
const source$ = text$.pipe(
  switchMap(fakeHttp$)
);

// Your handleChange function that can be called from the JSX input
const handleChange = (text: string) => text$.next(text);
Running stackblitz
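If you want to wire that into a React function component, a minimal sketch could look like this (assuming React's useState/useEffect are imported; the component and element names are illustrative, not from the answer above):

// Sketch only: one long-lived subscription; switchMap inside source$ cancels stale inner requests.
const SearchBox = () => {
  const [answer, setAnswer] = useState('');

  useEffect(() => {
    const sub = source$.subscribe(setAnswer); // stays open for the component's lifetime
    return () => sub.unsubscribe();           // clean up on unmount
  }, []);

  return (
    <>
      <input type="text" onChange={(e) => handleChange(e.target.value)} />
      <div>{answer}</div>
    </>
  );
};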
Problem explanation:
I have a function in which I make a REST delete request.
After this delete I fetch the new data.
But when I then want to work with the new data in the same function, right after fetching it, I still have the old data, including the object I just deleted.
Parent pseudo code:
const Parent = () => {
  const [listOfMessages, setListOfMessages] = useState();

  async function fetchMessages() {
    let response = await // Make REST call to get messages
    setListOfMessages(response);
  }

  async function deleteMessage(messageId) {
    await // Make REST call to delete message with id
    fetchMessages();
  }

  return (
    <Child deleteMessage={deleteMessage} fetchMessages={fetchMessages} listOfMessages={listOfMessages} />
  );
}
Child pseudo code:
const Child = (props) => {
  async function handleDeleteButtonClick() {
    // Delete newest message
    await props.deleteMessage(0)
    // Fetch messages
    await props.fetchMessages()
    // Display all messages; here it still contains message 0, which I just deleted
    console.log(props.listOfMessages)
  }

  return (
    <Button onClick={handleDeleteButtonClick}/>
  )
}
I put the await keyword everywhere, and everything gets executed in the right order: first the message gets deleted, when that is finished the new messages get fetched, and after that the messages get printed to the console. I verified this with console.log at the end of every function. My current explanation is that the component needs to re-render in order to get the new props with the new data. How would I achieve this, if my assumption is correct?
I have a workaround, but maybe there is a better solution.
Workaround:
Delete the Message myself with setListOfMessages() in the function and then resume as normal.
Thanks for your help.
Inside your deleteMessage function, are you using useState() to update the contents of listOfMessages? You need to use React methods such as useState in order for your child component to get the new props automatically and re-render.
You should give us more details about how exactly deleteMessage is implemented.
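For example, a minimal sketch of a deleteMessage that updates the state itself could look like this (the api.deleteMessage helper and the id field on messages are assumptions, not taken from your code):

// Sketch only: assumes a hypothetical REST helper and that messages have an `id` field.
async function deleteMessage(messageId) {
  await api.deleteMessage(messageId); // hypothetical REST delete call
  // Updating state here makes React re-render <Child /> with fresh props...
  setListOfMessages((messages) => messages.filter((m) => m.id !== messageId));
  // ...or re-fetch from the server and let setListOfMessages inside fetchMessages do it.
}

Note that even with this in place, the props.listOfMessages captured inside the same click handler will still hold the old value; the updated list only shows up on the next render.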
Currently, I have an element that, when clicked, sets up a global cooldown timer that affects all clients, using Django websockets. My issue is that while the websocket value is initially converted to state in my React client via componentDidMount, the websocket doesn't send again when the value changes in real time.
Here's how it works in detail.
The timer is updated via a Django model, which I broadcast via my websocket to my React front-end with:
consumer.py
class TestConsumer(AsyncConsumer):
    async def websocket_connect(self, event):
        print("connected", event)
        await self.send({
            "type": "websocket.accept",
        })

        # correct way to grab the value btw, just work on outputting it so it's streaming
        @database_sync_to_async
        def get_timer_val():
            val = Timer.objects.order_by('-pk')[0]
            return val.time

        await self.send({
            "type": "websocket.send",
            "text": json.dumps({
                'timer': await get_timer_val(),
            })
        })

    async def websocket_receive(self, event):
        print("received", event)

    async def websocket_disconnect(self, event):
        print("disconnected", event)
This works initially, as my React client boots up and converts the value to state, with:
component.jsx
// handles connecting to the webclient
componentDidMount() {
  client.onopen = () => {
    console.log("WebSocket Client Connected");
  };
  client.onmessage = (message) => {
    const myObj = JSON.parse(message.data);
    console.log(myObj.timer);
    this.setState({ timestamp: myObj.timer });
  };
}

// handles submitting the new timer upon clicking on the element
handleTimer = () => {
  // makes the PUT request with the updated cooldown timer upon submission
  const timestamp = moment().add(30, "minutes");
  const curr_time = { time: timestamp };
  axios
    .put(URL, curr_time, {
      auth: {
        username: USR,
        password: PWD,
      },
    })
    .then((res) => {
      console.log(res);
    });
};

// button that prompts the PUT request
<button
  type="submit"
  onClick={(e) => {
    this.handleTimer();
    // unrelated submit function
    this.handleSubmit(e);
  }}
>
  Button
</button>
However, when a user clicks the rigged element and the database model changes, the websocket value doesn't update until I refresh the page. I think the issue is that I'm only sending the websocket data during connection, but I don't know how to keep that "connection" open so any changes automatically get sent to the client. I've looked through tons of links to find the best way to implement real time, but most of them are either about socket.io or about implementing a chat app. All I want to do is stream a Django model's value to the front-end in real time.
When you want to send updates triggered by some other code to the websocket connection, the channels part of django-channels comes into play. It works like this:
1. On connection, you add the websocket to some named group.
2. When the value of Timer changes, you send an event (via the channel layer) with a certain type to this group, from the code that triggered the change.
3. Django-channels then invokes the method of the Consumer named after the type of the event, for each websocket in the group.
4. And finally, in this method, your code sends the message to the client.
You need to configure the channel layer with Redis: https://channels.readthedocs.io/en/stable/topics/channel_layers.html
Now, step by step. I'll omit irrelevant parts.
1
async def websocket_connect(self, event):
    await self.send({
        "type": "websocket.accept"
    })
    await self.channel_layer.group_add('timer_observers', self.channel_name)
2 Here I am sending the event inside the model, but you can do this in the view, or via Django signals, however you want it. Also I am not checking whether the value actually changed, and I am assuming there is only one instance of Timer in the DB.
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer

class Timer(models.Model):
    def save(self, *args, **kwargs):
        super().save(*args, **kwargs)
        # notify every websocket in the 'timer_observers' group
        async_to_sync(get_channel_layer().group_send)(
            'timer_observers', {"type": "timer.changed"}
        )
3+4
I have extracted the time-sending code to reuse it
class TestConsumer(AsyncConsumer):
    async def websocket_connect(self, event):
        print("connected", event)
        await self.send({
            "type": "websocket.accept",
        })
        await self.channel_layer.group_add('timer_observers', self.channel_name)
        await self.send_current_timer()

    async def timer_changed(self, event):
        await self.send_current_timer()

    async def send_current_timer(self):
        @database_sync_to_async
        def get_timer_val():
            val = Timer.objects.order_by('-pk')[0]
            return val.time

        await self.send({
            "type": "websocket.send",
            "text": json.dumps({
                'timer': await get_timer_val(),
            })
        })
The idea here is that you handle internal events generated by your application the same way as external events from the client, i.e. websocket.connect -> async def websocket_connect. So the channels layer kinda "sends" you a "websocket message", and you respond (but to the actual client).
I hope that helps to understand the concepts. Probably what you are doing is overkill, but I assume that's just a learning exercise =)
I am not 100% sure this will work, so don't hesitate to ask additional questions.
I have a pretty weird case to handle.
We have a few boxes, and we can call some action on every box. When we click the button inside a box, we call an endpoint on the server (using axios). The response from the server returns new, updated information (about all boxes, not only the one on which we called the action).
Issue:
If the user clicks the submit button on many boxes really fast, the requests hit the endpoint one by one. This sometimes causes errors, because things get calculated on the server in the wrong order (the status of a group of boxes depends on a single box's status). I know it's maybe more of a backend issue, but I have to try to fix this on the frontend.
Proposed fix:
In my opinion the easiest fix in this case is to disable every submit button while any request is in progress. Unfortunately this solution feels very slow, and the head of the project rejected the proposal.
What we want to achieve:
In some way we want to queue the requests without disabling every button. The perfect solution for me at this moment:
click the first button - call the endpoint, request pending on the server.
click the second button - the button shows a spinner/loading indicator without calling the endpoint.
the server returns the response for the first click; only then do we really send the second request.
I think something like this is a huge antipattern, but I don't set the rules. ;)
I was reading about e.g. redux-observable, but if I don't have to, I don't want to add another middleware for redux (we use redux-thunk now). Redux-saga would be ok, but unfortunately I don't know this tool. I prepared a simple codesandbox example (I added timeouts in the redux actions for easier testing).
I have only one (admittedly clumsy) proposal: create an array of the data needed to send the correct request, and inside useEffect check if the array length is equal to 1. Something like this:
const App = ({ boxActions, inProgress, ended }) => {
  const [queue, setQueue] = useState([]);

  // This code does not work correctly; it only shows what I was thinking about
  const handleSubmit = async () => {
    if (queue.length === 1) {
      const [data] = queue;
      await boxActions.submit(data.id, data.timeout);
      setQueue(queue.filter((item) => item.id !== data.id));
    }
  };

  useEffect(() => {
    handleSubmit();
  }, [queue]);

  return (
    <>
      <div>
        {config.map((item) => (
          <Box
            key={item.id}
            id={item.id}
            timeout={item.timeout}
            handleSubmit={(id, timeout) => setQueue([...queue, { id, timeout }])}
            inProgress={inProgress.includes(item.id)}
            ended={ended.includes(item.id)}
          />
        ))}
      </div>
    </>
  );
};
Any ideas?
I agree with your assessment that you ultimately need to make changes on the backend. Any user can mess with the frontend and submit requests in any order they want, regardless of how you organize it.
I get it though, you're looking to design the happy path on the frontend such that it works with the backend as it is currently.
It's hard to tell without knowing the use-case exactly, but there may generally be some improvements we can make from a UX perspective that will apply whether we make fixes on the backend or not.
Is there an endpoint to send multiple updates to? If so, we could debounce our network call to submit only when there is a delay in user activity.
Does the user need to be aware of order of selection and the impacts thereof? If so, it sounds like we'll need to update frontend to convey this information, which may then expose a natural solution to the situation.
It's fairly simple to create a request queue and execute them serially, but it seems potentially fraught with new challenges.
E.g. If a user clicks 5 checkboxes, and order matters, a failed execution of the second update would mean we would need to stop any further execution of boxes 3 through 5 until update 2 could be completed. We'll also need to figure out how we'll handle timeouts, retries, and backoff. There is some complexity as to how we want to convey all this to the end user.
Let's say we're completely set on going that route, however. In that case, your use of Redux for state management isn't terribly important, nor is the library you use for sending your requests.
As you suggested, we'll just create an in-memory queue of updates to be made and dequeue serially. Each time a user makes an update to a box, we'll push to that queue and attempt to send updates. Our processEvents function will retain state as to whether a request is in motion or not, which it will use to decide whether to take action or not.
Each time a user clicks a box, the event is added to the queue, and we attempt processing. If processing is already ongoing or we have no events to process, we don't take any action. Each time a processing round finishes, we check for further events to process. You'll likely want to hook into this cycle with Redux and fire new actions to indicate event success and update the state and UI for each event processed and so on. It's possible one of the libraries you use offer some feature like this as well.
// Get a better Queue implementation if queue size may get high.
class Queue {
  _store = [];
  enqueue = (task) => this._store.push(task);
  dequeue = () => this._store.shift();
  length = () => this._store.length;
}

export const createSerialProcessor = (asyncProcessingCallback) => {
  const updateQueue = new Queue();

  const addEvent = (params, callback) => {
    updateQueue.enqueue([params, callback]);
  };

  const processEvents = (() => {
    let isReady = true;
    return async () => {
      if (isReady && updateQueue.length() > 0) {
        const [params, callback] = updateQueue.dequeue();
        isReady = false;
        await asyncProcessingCallback(params, callback); // retries and all that included
        isReady = true;
        processEvents();
      }
    };
  })();

  return {
    process: (params, callback) => {
      addEvent(params, callback);
      processEvents();
    }
  };
};
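For illustration, a hypothetical way to hook this up to the submit action from your sandbox (everything except boxActions.submit is an assumed name):

// Sketch only: boxActions.submit(id, timeout) is the thunk from the question's sandbox.
const submitProcessor = createSerialProcessor(async ({ id, timeout }) => {
  await boxActions.submit(id, timeout); // one request at a time, in click order
});

// In the Box click handler, enqueue instead of dispatching directly:
const handleSubmit = (id, timeout) => submitProcessor.process({ id, timeout });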
Hope this helps.
Edit: I just noticed you included a codesandbox, which is very helpful. I've created a copy of your sandbox with updates made to achieve your end and integrate it with your Redux setup. There are some obvious shortcuts still being taken, like the Queue class, but it should be about what you're looking for: https://codesandbox.io/s/dank-feather-hqtf7?file=/src/lib/createSerialProcessor.js
In case you would like to use redux-saga, you can use the actionChannel effect in combination with the blocking call effect to achieve your goal:
Working fork:
https://codesandbox.io/s/hoh8n
Here is the code for boxSagas.js:
import { actionChannel, call, delay, put, take } from 'redux-saga/effects';
// import axios from 'axios';
import { submitSuccess, submitFailure } from '../actions/boxActions';
import { SUBMIT_REQUEST } from '../types/boxTypes';

function* requestSaga(action) {
  try {
    // const result = yield axios.get(`https://jsonplaceholder.typicode.com/todos`);
    yield delay(action.payload.timeout);
    yield put(submitSuccess(action.payload.id));
  } catch (error) {
    yield put(submitFailure());
  }
}

export default function* boxSaga() {
  const requestChannel = yield actionChannel(SUBMIT_REQUEST); // buffers incoming requests
  while (true) {
    const action = yield take(requestChannel); // takes a request from the queue or waits for one to be added
    yield call(requestSaga, action); // starts the request saga and _waits_ until it is done
  }
}
I am using the fact that the box reducer handles the SUBMIT_REQUEST actions immediately (and sets given id as pending), while the actionChannel+call handle them sequentially and so the actions trigger only one http request at a time.
More on action channels here:
https://redux-saga.js.org/docs/advanced/Channels/#using-the-actionchannel-effect
Just store the promise from the previous request and wait for it to resolve before initiating the next request. The example below uses a global variable for simplicity, but you can use something else to preserve state across requests (e.g. the extraArgument from the thunk middleware).
// boxActions.ts
let submitCall = Promise.resolve();

export const submit = (id, timeout) => async (dispatch) => {
  dispatch(submitRequest(id));
  submitCall = submitCall.then(() => axios.get(`https://jsonplaceholder.typicode.com/todos`));
  try {
    await submitCall;
    setTimeout(() => {
      return dispatch(submitSuccess(id));
    }, timeout);
  } catch (error) {
    return dispatch(submitFailure());
  }
};
I have an application that triggers many updates, and I would like to know the best way to update the app properly.
In my app, I have 5 slots to fill with books (which can be managed by drag and drop). When the app launches, the user's saved books are loaded and stored in the state.
Problem: when I update a book, for example if I switch the position of 2 books in my list, I must do some operations to say "this book belongs here now and the other one belongs there now, switch!"
I feel like I'm doing some tedious work, because if I just returned the whole data from my API call (a GET after updating) and called the "load" function (as I do when I launch the app), I would not have to handle the update operation at all.
Plus, it could create bugs if I load correctly but don't update correctly (if I miss the position of a book, for example).
The benefit I see in a functional update is that I only update the 2 books I need, instead of reloading all of them again and again.
Which way would be better? Should I get rid of those update functions and just reload the data entirely? I think there could also be libraries that cache the data so only modified books re-render.
Thank you
Without code it is difficult to fully understand the problem, but getting the data from the server has 2 advantages:
You are sure the UI shows the data as it is on the server.
Your client code does not need to contain the logic of what needs to happen; the server has this logic. When the logic is refactored in some way, the two don't go out of sync.
Because of this I usually choose to get the data as is on the server.
One problem with fetching data based on user interaction is that fetching is async, so the following can happen:
The user does action A, a request is made for A, the user does action B, a request is made for B, the B request resolves and the UI is set to the result of B, then the request for A resolves and the UI is set to the result of A.
So the order in which the user does the actions does not guarantee the order in which the requests resolve.
To solve this you can use a helper that resolves only if it was the last request; in the example above, when the A request resolves the UI does not need to be set to anything, because A has already been replaced by another request.
In the example below you can type a search value; when the value is 1 character long it will take 2 seconds to resolve, so when you type ab the ab request will resolve before the a request. But because the function making the request is wrapped with the last helper, when a resolves it will be rejected, because it has been replaced by the newer request ab.
// constant to reject with when a request is replaced by a
//   more recent request
const REPLACED = {
  message: 'replaced by more recent request',
};

// helper to resolve only the last requested promise
const last = (fn) => {
  const check = {};
  return (...args) => {
    const current = {};
    check.current = current;
    return Promise.resolve()
      .then(() => fn(...args))
      .then((result) => {
        // see if current request is last request
        if (check.current === current) {
          return result;
        }
        // was not last request so reject
        return Promise.reject(REPLACED);
      });
  };
};

const later = (howLong, value) =>
  new Promise((resolve) =>
    setTimeout(() => resolve(value), howLong)
  );

const request = (value) =>
  later(value.length === 1 ? 2000 : 10, value).then(
    (result) => {
      console.log('request resolved:', result);
      return result;
    }
  );

const lastRequest = last(request);

const App = () => {
  const [search, setSearch] = React.useState('');
  const [result, setResult] = React.useState('');

  React.useEffect(() => {
    // if you use request instead of lastRequest here
    //   you will see it break: the UI is updated as requests
    //   resolve without checking if it was the last request
    lastRequest(search)
      .then((result) => setResult(`result:${result}`))
      .catch((err) => {
        console.log(
          'rejected with:',
          err,
          'for search:',
          search
        );
        if (err !== REPLACED) {
          // if the reject reason is not that the request was
          //   replaced by a newer one, then reject this promise
          return Promise.reject(err);
        }
      });
  }, [search]);

  return (
    <div>
      <label>
        search
        <input
          type="text"
          value={search}
          onChange={(e) => setSearch(e.target.value)}
        ></input>
      </label>
      <div>{result}</div>
    </div>
  );
};

ReactDOM.render(<App />, document.getElementById('root'));
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.8.4/umd/react.production.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.8.4/umd/react-dom.production.min.js"></script>
<div id="root"></div>
I have some code here: https://codesandbox.io/embed/cranky-glade-82leu
In there, I have a 'verifyUserNameAvailable' function under the 'api/' folder. In this function, I added a 10 second delay to test the extreme case when our API server is slow.
When the user enters a valid email address (valid email format like abc@test.net), I see the email address in the helperText line 10 seconds later. (This is because I added the 10 second delay in 'verifyUserNameAvailable'.)
When the user enters the 3 emails 'abc@test.net', 'abc2@test.net' and 'abc3@test.net' one by one without waiting 10 seconds, I see all three emails displayed in sequential order on the helperText line. (I guess this is expected given what I have in the code now.)
My question: I want the helperText line to show only the latest one. When the user enters the above 3 email addresses and 'abc3@test.net' is the last one in the text field, I want to show only 'abc3@test.net' on the helperText line, not the others.
Is there any way to check what's actually in the text field and discard or abort all irrelevant async requests?
There's a way to do this without cancelling the verifyUserNameAvailable promise: Keep track of the number of requests in flight, and ignore any responses other than the latest one.
Here's what that could look like in your example; this uses useRef to keep track of the latest request value (see the React docs for useRef to see how this works: https://reactjs.org/docs/hooks-reference.html#useref):
export const EmailTextField = props => {
  const { onStateChange } = props;
  const [state, setState] = useState({
    errors: [],
    onChange: false,
    pristine: true,
    touched: false,
    value: null
  });

  // `useRef` holds a mutable value that lasts as long
  // as the component, staying "more up to date" than the
  // current closure.
  const requestsInFlight = useRef(0);

  const helperText = "Email address will be used as your username.";

  const handleBlur = async event => {
    // Email validation logic
    const emailAddress = event.target.value;
    const matches = event.target.value.match(
      `[a-z0-9._%+-]+@[a-z0-9.-]+.[a-z]{2,3}`
    );
    if (matches) {
      // If there's a match, increment the ref for in-flight requests
      requestsInFlight.current += 1;
      await verifyUserNameAvailable(emailAddress);
      // After the response comes back, decrement the ref, and if
      // there are any requests still in flight, ignore that response.
      requestsInFlight.current -= 1;
      if (requestsInFlight.current > 0) {
        return;
      }
      const updatedState = {
        ...state,
        touched: true,
        value: emailAddress,
        errors: [emailAddress]
      };
      setState(updatedState);
      onStateChange(updatedState);
    } else {
      ...
    }
  };
Let's think about possible solutions, starting from the deepest level.
Internally you are calling setTimeout. A timeout can be cancelled with clearTimeout. I don't think this is the way you're looking for, so I will not describe how to implement it.
setTimeout is wrapped with a Promise. Unfortunately, Promises are not cancellable in the current version of JS. There is a proposal for it.
I suspect that you'll use this code with some API to fetch data from the backend. If you use axios, it provides cancellation of pending requests using a cancellation token (and that cancellation is based on the cancellation proposal for JS).
And if you use Redux in your app, you may consider redux-saga for backend requests. It also supports cancellation.
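For illustration, a minimal sketch of axios cancellation with a cancel token might look like this (the verifyUserNameAvailable name comes from the question; the URL is a placeholder, not a real endpoint):

// Sketch only: cancel the previous username check before starting a new one.
import axios from 'axios';

let cancelPrevious; // cancel function of the in-flight request, if any

export const verifyUserNameAvailable = async (email) => {
  if (cancelPrevious) {
    cancelPrevious('replaced by a newer request'); // abort the stale request
  }
  const source = axios.CancelToken.source();
  cancelPrevious = source.cancel;
  try {
    const res = await axios.get('/api/verify-username', { // placeholder URL
      params: { email },
      cancelToken: source.token,
    });
    return res.data;
  } catch (err) {
    if (axios.isCancel(err)) {
      return null; // the caller can ignore checks that were superseded
    }
    throw err;
  }
};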