Which param triggered React.useMemo recalculation?

Here is my React hooks code:
function calc_c({a, b}) {
  // some long calculation that is based on a and b
}

function MyComponent(params) {
  const a = calc_a(params);
  const b = calc_b(params);
  const c = React.useMemo(() => calc_c({a, b}), [a, b]);
}
My question: how do I find out which of the values in [a, b] changed and caused the call to calc_c?
Edit: I ended up using a generic version of skyboyer's excellent answer:
export function useChanged(name, value) {
  function print_it() {
    console.log('changed', name);
  }
  React.useMemo(print_it, [value]);
}
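For reference, a minimal usage sketch of this hook inside the component from the question (assuming useChanged is imported; calc_a, calc_b, and calc_c are the hypothetical helpers above):
function MyComponent(params) {
  const a = calc_a(params);
  const b = calc_b(params);
  // Logs "changed a" / "changed b" on the first render and whenever the value changes
  useChanged('a', a);
  useChanged('b', b);
  const c = React.useMemo(() => calc_c({a, b}), [a, b]);
}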

It depends on whether you are asking for debugging purposes or you'd like to rely on this in your code (e.g. "if A changed then return B, otherwise C").
In both cases there is no simple built-in way to achieve this, but the workarounds differ.
Assume you just want to figure out why this is recalculated. Then just put in a bazillion effects:
useEffect(() => {
  console.log("a is changed");
}, [a]);
One per dependency. Yes, boring and repetitive. But the simpler the approach is, the less you have to worry about on top of it. Or take a look at whether useWhatChanged works for you (if there are literally a dozen variables in the dependency list).
Another thing: if you would like to make this check in your regular code rather than in temporary debugging code (but why?), then you might use usePrevious or write something similar, as sketched below.
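For completeness, here is the widely used usePrevious recipe (a minimal sketch of the common useRef pattern; not part of the original answer):
import { useEffect, useRef } from 'react';

function usePrevious(value) {
  const ref = useRef();
  // After each render, remember the value so the next render can compare against it
  useEffect(() => {
    ref.current = value;
  }, [value]);
  return ref.current;
}
Inside a component you could then write const prevA = usePrevious(a) and check prevA !== a to see whether a changed since the last render.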

Is it a good practice to use optional chaining inside reacts useMemo/useEffect dependencies?

I'm working on a large React app where performance matters.
At first I avoided using object properties inside useMemo dependencies (I was avoiding dot notation inside dependencies). But I have seen this in React's documentation, so I think it is safe.
Now my linter sometimes complains when I don't include optional chaining inside the dependencies, and I end up doing this:
const artworks = useMemo(() => {
  let list = availableArtworks.artworks;
  if (route.params?.artworkId) {
    // Some Stuff here
  }
  return list;
}, [availableArtworks, route.params?.artworkId]);
It looked dirty to me at first, but now I'm considering starting to use it. If route has no params property, then the whole route.params?.artworkId expression should be falsy, right? Then if the object changes and we suddenly have params and artworkId, the useMemo should re-execute and take that value into account?
So my question is: is it safe to use it like this, or is it dirty in any way?
Syntax-wise
Looks safe. The one line I worry about is:
let list = availableArtworks.artworks;
If the key "artworks" isn't a primitive, you might modify it without intending to.
Also, your dependencies array should look like:
[availableArtworks?.artworks, route.params?.artworkId]
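Putting both points together, here is a sketch of the memo that copies the array before touching it (the filter call is a hypothetical stand-in for the original "Some Stuff here"):
const artworks = useMemo(() => {
  // Spread into a new array so availableArtworks.artworks itself is never mutated
  let list = [...(availableArtworks?.artworks ?? [])];
  if (route.params?.artworkId) {
    // Hypothetical stand-in for "Some Stuff here"
    list = list.filter((artwork) => artwork.id === route.params.artworkId);
  }
  return list;
}, [availableArtworks?.artworks, route.params?.artworkId]);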
Do you really need useMemo?
Lastly, if performance is super important, I would definitely reconsider (and measure) whether you need the useMemo at all. useMemo is for heavy computational lifting. Do you have some recursion happening? Some intensive calculation? Factorials? Long loops? If the answer is no, bringing in useMemo will in fact decrease performance.
I recommend reading this article from Kent C. Dodds (author of react-testing-library and Remix) about performance.

Understanding react function notation

Learning React here. Can someone walk me through how to interpret the function below:
const onElementsRemove = (elementsToRemove) => setElements((els) => removeElements(elementsToRemove, els));
As far as I understand it, this is the same as calling:
onElementsRemove(setElements(elementsToRemove(els)))?
Is that correct? Is there a benefit to the first notation? Perhaps I am biased coming from the Python side of the world, but the second one feels more compact? Can someone help me understand the reasoning? Thanks!
No, those are not the same. Let's start with the inner part, which needs to be the way it is:
setElements((els) => removeElements(elementsToRemove, els))
When setting state in react, there are two options. You can either directly pass in what you want the new state to be, or you can pass in a function. If you pass in a function, then react will look up what the latest value of the state is, and call your function. Then you return what the new state will be.
So the purpose of doing it this way is to find out what the latest value in the state is. There isn't another way to do this.
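To make the two options concrete, here is a small sketch (assuming elements state created with useState; the sample updater is illustrative):
const [elements, setElements] = useState([]);

// Option 1: pass the next state directly (fine when it doesn't depend on the old state)
setElements([]);

// Option 2: pass an updater function; React calls it with the latest state value
setElements((els) => els.slice(1)); // e.g. drop the first element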
Next, the outer part, which has more flexibility:
const onElementsRemove = (elementsToRemove) => /* the stuff we looked at earlier */
This is defining a function called onElementsRemove. From the name, I assume that it is going to be called at some arbitrary point in time in the future. So it's just defining the functionality, and later on you can call it, once you know which elements you want to remove. It will then turn around and set the state. For example, you would do:
onElementsRemove([1, 2, 3]); // I don't actually know what will be in the array
Maybe having this outer function is useful, maybe not. If you find yourself doing this fairly often, it could make sense. In other cases, you could directly call setElements, as in:
setElements((els) => removeElements([1, 2, 3], els));

Should const be outside function components in React.js?

Some of my code got a review comment saying something like "move const outside the function to avoid redeclaration". This was a normal function component, something like this:
export default function someComponent() {
  const someString = 'A string';
  // ...
}
I was confused by the idea of this causing a redeclaration, because it does not. I know that the record that holds the variables and constants belongs to the scope, so it's not exactly that.
But then I remembered that TypeScript does not allow you to have a const inside a class; I'm not sure of the reason, or whether this is related. But then TS added the readonly modifier in TS v2, so the confusion remains.
Should const be outside function components, or not?
I would love to know more opinions.
There are two sides to the coin. Firstly, in terms of clean code and readability, I strongly prefer local declarations like in your example. I would love to use more nested functions as well.
However, in JavaScript, every time the function executes, the local definitions will be redeclared, even if they are constants or functions. So it's a trade-off. If the function is called many times, this becomes an overhead.
I think it would not be hard for a compiler like TypeScript's tsc, or some other pre-processor, to extract those definitions at compile time to get the best of both worlds. But it's likely that they do not do this in order to remain fully compatible. I am not aware of such tools, but I would be interested if there are some.
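For illustration, here is the kind of hoisting the review comment presumably asked for (the uppercase name and the JSX return value are made-up placeholders):
// Declared once when the module loads; shared by every render
const SOME_STRING = 'A string';

export default function someComponent() {
  // No per-render redeclaration of the constant
  return <p>{SOME_STRING}</p>;
}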

setState of parent from child component with key value pairs in React? [duplicate]

When creating a JavaScript function with multiple arguments, I am always confronted with this choice: pass a list of arguments vs. pass an options object.
For example I am writing a function to map a nodeList to an array:
function map(nodeList, callback, thisObject, fromIndex, toIndex) {
  ...
}
I could instead use this:
function map(options) {
  ...
}
where options is an object:
options = {
  nodeList: ...,
  callback: ...,
  thisObject: ...,
  fromIndex: ...,
  toIndex: ...
}
Which one is the recommended way? Are there guidelines for when to use one vs. the other?
[Update] There seems to be a consensus in favor of the options object, so I'd like to add a comment: one reason why I was tempted to use the list of arguments in my case was to have behavior consistent with JavaScript's built-in array.map method.
Like many of the others, I often prefer passing an options object to a function instead of passing a long list of parameters, but it really depends on the exact context.
I use code readability as the litmus test.
For instance, if I have this function call:
checkStringLength(inputStr, 10);
I think that code is quite readable the way it is and passing individual parameters is just fine.
On the other hand, there are functions with calls like this:
initiateTransferProtocol("http", false, 150, 90, null, true, 18);
Completely unreadable unless you do some research. On the other hand, this code reads well:
initiateTransferProtocol({
  "protocol": "http",
  "sync": false,
  "delayBetweenRetries": 150,
  "randomVarianceBetweenRetries": 90,
  "retryCallback": null,
  "log": true,
  "maxRetries": 18
});
It is more of an art than a science, but if I had to name rules of thumb:
Use an options parameter if:
You have more than four parameters
Any of the parameters are optional
You've ever had to look up the function to figure out what parameters it takes
If someone ever tries to strangle you while screaming "ARRRRRG!"
Multiple arguments are mostly for obligatory parameters. There's nothing wrong with them.
If you have optional parameters, it gets complicated. If one of them relies on the others, so that they have a certain order (e.g. the fourth one needs the third one), you still should use multiple arguments. Nearly all native ECMAScript and DOM methods work like this. A good example is the open method of XMLHttpRequest, where the last 3 arguments are optional; the rule is like "no password without a user" (see also the MDN docs).
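For instance (a sketch using the real XMLHttpRequest API; the URL is a placeholder):
const xhr = new XMLHttpRequest();
// open(method, url, async, user, password) - the last three arguments are optional
xhr.open("GET", "https://example.com/api", true);
xhr.send();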
Option objects come in handy in two cases:
You've got so many parameters that it gets confusing: The "naming" will help you, you don't have to worry about the order of them (especially if they may change)
You've got optional parameters. The objects are very flexible, and without any ordering you just pass the things you need and nothing else (or undefineds).
In your case, I'd recommend map(nodeList, callback, options). nodeList and callback are required; the other three arguments come in only occasionally and have reasonable defaults.
Another example is JSON.stringify. You might want to use the space parameter without passing a replacer function - then you have to call JSON.stringify(…, null, 4). An options object might have been better, although it's not really reasonable for only 2 parameters.
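To spell that out, here is the real call next to a hypothetical options-style signature for comparison:
// Real API: null must be passed for the replacer just to reach the space parameter
JSON.stringify({ a: 1 }, null, 4);

// Hypothetical options-style equivalent (not a real API), shown for comparison:
// stringifyWithOptions({ a: 1 }, { space: 4 });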
Using the 'options as an object' approach is going to be best. You don't have to worry about the order of the properties, and there's more flexibility in what data gets passed (optional parameters, for example).
Creating an object also means the options could be easily used on multiple functions:
options = {
  nodeList: ...,
  callback: ...,
  thisObject: ...,
  fromIndex: ...,
  toIndex: ...
}

function function1(options) {
  alert(options.nodeList);
}

function function2(options) {
  alert(options.fromIndex);
}
It can be good to use both. If your function has one or two required parameters and a bunch of optional ones, make the first two parameters required and the third an optional options hash.
In your example, I'd do map(nodeList, callback, options). nodeList and callback are required, it's fairly easy to tell what's happening just by reading a call to it, and it's like existing map functions. Any other options can be passed as an optional third parameter.
I may be a little late to the party with this response, but I was searching for other developers' opinions on this very topic and came across this thread.
I very much disagree with most of the responders, and side with the 'multiple arguments' approach. My main argument is that it discourages other anti-patterns like "mutating and returning the param object" or "passing the same param object on to other functions". I've worked in codebases which have extensively abused this anti-pattern, and debugging code that does this quickly becomes impossible. I think this is a very JavaScript-specific rule of thumb, since JavaScript is not strongly typed and allows for such arbitrarily structured objects.
My personal opinion is that developers should be explicit when calling functions, avoid passing around redundant data and avoid modify-by-reference. It's not that this pattern precludes writing concise, correct code. I just feel it makes it much easier for your project to fall into bad development practices.
Consider the following terrible code:
function main() {
  const x = foo({
    param1: "something",
    param2: "something else",
    param3: "more variables"
  });
  return x;
}

function foo(params) {
  params.param1 = "Something new";
  bar(params);
  return params;
}

function bar(params) {
  params.param2 = "Something else entirely";
  const y = baz(params);
  return params.param2;
}

function baz(params) {
  params.param3 = "Changed my mind";
  return params;
}
Not only does this kind of code require more explicit documentation to specify intent, it also leaves room for vague errors.
What if a developer modifies param1 in bar()? How long do you think it would take, looking through a codebase of sufficient size, to catch this?
Admittedly, this example is slightly disingenuous because it assumes developers have already committed several anti-patterns by this point. But it shows how passing objects containing parameters allows greater room for error and ambiguity, requiring a greater degree of conscientiousness and observance of const correctness.
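For contrast, here is one possible explicit-arguments rewrite (a sketch; the original's intent is ambiguous, so the return values are illustrative):
// Each function takes only what it needs and returns new values instead of mutating
function main() {
  return foo("something", "something else", "more variables");
}

function foo(param1, param2, param3) {
  const updated1 = "Something new";
  const updated2 = bar(param2);
  return { param1: updated1, param2: updated2, param3 };
}

function bar(param2) {
  // No shared object to mutate; the caller decides what to do with the result
  return "Something else entirely";
}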
Just my two-cents on the issue!
Your comment on the question:
in my example the last three are optional.
So why not do this? (Note: this is fairly raw JavaScript. Normally I'd use a default hash and update it with the options passed in by using Object.extend, jQuery.extend, or similar.)
function map(nodeList, callback, options) {
  options = options || {};
  var thisObject = options.thisObject || {};
  var fromIndex = options.fromIndex || 0;
  var toIndex = options.toIndex || 0;
}
So, since it's now much more obvious what's optional and what's not, all of these are valid uses of the function:
map(nodeList, callback);
map(nodeList, callback, {});
map(nodeList, callback, null);
map(nodeList, callback, {
  thisObject: {some: 'object'}
});
map(nodeList, callback, {
  toIndex: 100
});
map(nodeList, callback, {
  thisObject: {some: 'object'},
  fromIndex: 0,
  toIndex: 100
});
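As a side note, in modern JavaScript the same defaulting can be written with destructuring in the parameter list (a sketch, not part of the original answer):
function map(nodeList, callback, { thisObject = {}, fromIndex = 0, toIndex = 0 } = {}) {
  // The options arrive as plain local variables with their defaults already applied
}

map(nodeList, callback);                   // all defaults
map(nodeList, callback, { toIndex: 100 }); // override just one option
Unlike the options || {} version above, passing null here would throw, because default values only kick in for undefined.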
It depends.
Based on my observation of popular library designs, here are the scenarios where we should use an options object:
The parameter list is long (> 4).
Some or all parameters are optional and they don't rely on a certain order.
The parameter list might grow in a future API update.
The API will be called from other code, and the API name is not clear enough to convey the parameters' meaning, so strong parameter names help readability.
And the scenarios for using a parameter list:
The parameter list is short (<= 4).
Most or all of the parameters are required.
Optional parameters are in a certain order. (e.g. $.get)
It is easy to tell the parameters' meaning from the API name.
An object is preferable, because if you pass an object it's easy to extend the number of properties in that object, and you don't have to watch the order in which your arguments are passed.
For a function that usually uses some predefined arguments, you are better off using an options object. The opposite example would be something like a function that takes an unlimited number of arguments, like setCSS({height: 100}, {width: 200}, {background: "#000"}).
I would look at large JavaScript projects.
In things like Google Maps you will frequently see that instantiated objects require an options object, but functions require parameters. I would think this has to do with optional arguments.
If you need default arguments or optional arguments, an object would probably be better because it is more flexible. But if you don't, normal function arguments are more explicit.
JavaScript also has an arguments object:
https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Functions_and_function_scope/arguments
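For reference, a small sketch of the arguments object next to its modern rest-parameter equivalent:
function joinOldStyle() {
  // 'arguments' is an array-like object holding every argument passed in
  var parts = [];
  for (var i = 0; i < arguments.length; i++) {
    parts.push(arguments[i]);
  }
  return parts.join(', ');
}

// Modern equivalent using rest parameters
function joinRest(...args) {
  return args.join(', ');
}

console.log(joinOldStyle(1, 2, 3)); // "1, 2, 3"
console.log(joinRest('a', 'b'));    // "a, b"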

Which of these functions is more testable in C?

I write code in C. I have been striving to write more testable code, but I am a little confused about choosing between writing pure functions (which are really good for testing but require smaller functions and, in my opinion, hurt readability) and writing functions that modify some internal state.
For example (all state variables are declared static and hence are "private" to my module), which of these is more testable in your opinion:
int outer_API_bar()
{
    // Modify internal state
    internal_foo();
}

int internal_foo()
{
    // Do stuff
    if (internal_state_variable)
    {
        // Do some more stuff
        internal_state_variable = false;
    }
}
OR
int outer_API_bar()
{
    // Modify internal state
    internal_foo(internal_state_variable);
    // This could be another function if repeated many
    // times in the module
    if (internal_state_variable)
    {
        internal_state_variable = false;
    }
}

int internal_foo(bool arg)
{
    // Do stuff
    if (arg)
    {
        // Do some more stuff
    }
}
Although the second implementation is more testable with respect to internal_foo, as it has no side effects, it makes bar uglier and requires smaller functions, which make it hard for the reader to follow even small snippets, since they have to constantly shift attention between different functions.
Which one do you think is better? Compare this to writing OOP code, where the private functions most of the time use internal state and are not pure; testing is done by setting up internal state on a mock object instance and testing the private function. I am getting a little confused about whether to use internal state directly or to pass it into private functions for the sake of "testability".
Whenever writing automated tests, ideally we want to focus on testing the specification of that unit of code, not the implementation (otherwise we create fragile tests that will break whenever we modify the implementation). Therefore, what happens internally in the object should not be of concern to the test.
For this example, I would look to build a test that:
Executes the test by calling outer_API_bar.
Asserts the correct behavior of the call using other publicly accessible functions and/or state (there must be some way of doing this; if the only side effect of calling outer_API_bar were internal to this unit of code, then calling the function could not impact your wider application in any way and would essentially be useless).
This way, you are able to keep functions like internal_foo and variables like internal_state_variable as implementation details, which you can freely change when refactoring your code (e.g. to make it more readable) without having to change your tests.
NOTE: This suggestion is based on my own personal preference for only testing public functions, and not private ones. You will find much debate on this topic where some people pose good arguments for testing private functions being a valid thing to do.
To answer your question very specifically: pure functions are waaaaay more 'testable' than any other kind of abstraction. The more pure functions you can include, the more testable your code will be. As you rightly mention, this can come at the cost of readability, and I am sure there are other trade-offs to consider. My suggestion would be to aim for more pure functions and look for other techniques that let you compensate on the readability side of things.
Both snippets are testable via mocks. The second one, however, has the advantage that you can also check the argument of internal_foo(bool arg) for an expected value of true or false when the mock for internal_foo() is invoked. In my opinion, that would make for a more meaningful test.
Depending on the rest of the code, which we don't know, testing without mocks may be more difficult.
