Solving Memory Leaks in Large React Application

I've been scouring the internet for the last week trying to figure out how to spot and solve memory leaks in my React application, because, well, I think I have one. Our application has been crashing more and more frequently lately, and I keep getting the same error from Node.js: API fatal error handler returned after process out of memory. I knew that the application I'm working on, which was developed by others before me, had some serious flaws, but I never knew they were this bad, so I decided to turn to the internet to try and solve this issue.
I've looked at Chrome DevTools and taken heap snapshots to see if there is an increase in memory, and there clearly is: I've watched the memory shoot from 123 MB to 200+ MB after a few actions within the application. Heap snapshots are a good tool for determining whether there is a possible memory leak, but they are very hard to read and understand, which doesn't help me determine where the issues lie.
Our AWS instance only has 1 GB of memory, and a lot of the answers I see for this sort of issue suggest simply increasing the max heap size of Node.js. That doesn't solve anything; it just throws a band-aid on the problem until it occurs again, which is not good practice in my opinion. I'm coming here in hopes that someone can help me understand what in the world I'm looking at when using Chrome DevTools, and/or knows a better way of finding out where the issues are in my code. Thanks in advance.
EDIT
Of all the most common causes of memory leaks in JavaScript that I have read about online, none stick out to me within our application, so I'm very confused about where the possible leak is coming from.
Another thing is that the application is grabbing a lot of data from our backend and keeping it in memory. Could minimizing the amount of data that is retrieved help, or would that only slow down the issue instead of fixing it?

I had the same issue with a React application in my organization. The application received huge amounts of data from the API and stored it in state variables, and depending on the type of operation, it would send that huge data back to the API.
Because of this, the application would break with an out-of-memory error.
There is no direct solution to this issue; it requires a lot of analysis of the code.
See whether the size of the data can be reduced.
Dig in to see whether you could use useMemo (or React.memo) in child components that get re-rendered every time a parent component re-renders.
If you only need to modify a small part of the state, try a library such as Immutability Helper or Immer (see the sketch below).
In my application, I reduced the size of the response wherever it was applicable and used Immer to modify the state. I no longer see the out-of-memory issue in the application after those changes.
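To illustrate both suggestions, here is a minimal sketch; the ItemList child, the Dashboard component, the items shape, and renameItem are all hypothetical stand-ins, not code from the original application:

    import React, { useState, useMemo } from "react";
    import { produce } from "immer";

    // Memoized child: skips re-rendering when its props are unchanged.
    const ItemList = React.memo(function ItemList({ items }) {
      return (
        <ul>
          {items.map((item) => (
            <li key={item.id}>{item.name}</li>
          ))}
        </ul>
      );
    });

    export function Dashboard({ initialItems }) {
      const [items, setItems] = useState(initialItems);

      // useMemo: derive the filtered list once per change to `items`,
      // instead of rebuilding it on every parent re-render.
      const visibleItems = useMemo(
        () => items.filter((item) => !item.archived),
        [items]
      );

      // Immer: "mutate" a draft to produce the next immutable state,
      // instead of deep-copying the whole tree by hand.
      const renameItem = (id, name) =>
        setItems(
          produce((draft) => {
            const item = draft.find((i) => i.id === id);
            if (item) item.name = name;
          })
        );

      return <ItemList items={visibleItems} />;
    }

Note that React.memo only skips re-renders when props are referentially stable, which is exactly what the useMemo above provides.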

Related

Consequences of Space Complexity in React Apps

I've recently been taking a class on Big O notation for space and time complexity, and I'm trying to apply the concepts to their real-world consequences in my day-to-day job as a React developer. I understand the consequences of time complexity: if a function has a higher complexity, it takes longer to compute as the data set grows, ergo longer load times for the user, which no one likes. What I don't understand are the consequences of space complexity in a React app. What are the consequences, to either the user or the application, if you have a really large data set and end up using a huge amount of space in the browser?
Actually, this is not specific to React or any other framework. Time complexity and space complexity don't depend much on frameworks or languages; that is the whole point of using those notations.
Coming to your question: the answer really depends on the browser you use to run your React app. Every browser allocates a maximum amount of memory to each tab, and if the page uses more than the allocated memory, the browser kills it.
Here is a question about memory limits on Stack Overflow: Javascript memory limit
In Chrome and Chromium OS, the memory limit is defined by the browser, and you can inspect the limit with the following command in the Developer Tools command line (hit F12 to open it):

    window.performance.memory.jsHeapSizeLimit
    // 1090519040

On my Windows 10 OS, it is about 1 GB.

On Chrom(e/ium), you can get around the heap size limit by allocating native arrays:

    var target = [];
    while (true) {
        target.push(new Uint8Array(1024 * 1024)); // 1 MB native arrays
    }

This crashes the tab at around 2 GB, which happens very rapidly. After that, Chrom(e/ium) goes haywire, and repeating the test is not possible without restarting the browser.
The same link also mentions React and Angular:
Here are some examples of frameworks with well-known performance degradation:
...
React: Again, immutable collections of JS objects only scale so far. create-react-app internally uses Immutable.JS, and Immutable.JS can only create about 500k immutable collections before it dies.
One more side effect the user can experience is overall system lag. The lag may be due to low available RAM, with the OS actively swapping to give your app more memory until it reaches the limit and the browser kills the tab.

React/Redux - memory footprint not proportionate to state

So my whole Redux state is perhaps around 3-4 MB, but Chrome is reporting my tab's memory usage at around 400-500 MB, and it climbs the longer you use the app.
I understand there are other things it needs the memory for (so I shouldn't expect a 1:1 relationship), but does anyone know how I'd attempt to reduce memory consumption?
On a fresh session (or Incognito tab), my app is running very smoothly. If it's open for an afternoon or so, performance suffers greatly.
My Redux store isn't overly large.
The page/DOM nodes etc. are the same between the normal and Incognito tabs.
Everything else is seemingly identical.
I get that this is fairly vague, but I'm not sure what else to include. Anyone have any pointers?
Please use the Google Chrome Performance Analysis Tools to analyse the performance of your app and see where savings can be made.
That said, Google Chrome can be a memory-hungry application in general, so consider whether this is actually a problem. If the Chrome session is not consuming enough RAM to hurt the overall computer's performance, then it is not a problem. Try running the application on a computer with very little RAM; as long as it stops consuming memory before performance is impacted, it is a non-issue.
If instead it keeps consuming more and more memory, you likely have a memory leak and should look to resolve it with the tools linked above.
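One common source of this "memory climbs the longer the tab is open" pattern, offered here as an illustration rather than a diagnosis of this particular app, is a subscription or timer that is never cleaned up when a component unmounts. A minimal sketch of the fix, with a hypothetical Redux-like store prop:

    import React, { useEffect, useState } from "react";

    export function StoreValue({ store }) {
      const [value, setValue] = useState(store.getState());

      useEffect(() => {
        // Subscribe to a (hypothetical) store. Without the cleanup
        // below, every mount adds another listener that keeps this
        // component's closures alive for the lifetime of the tab.
        const unsubscribe = store.subscribe(() => setValue(store.getState()));
        const id = setInterval(() => setValue(store.getState()), 1000);

        // The cleanup function releases both when the component unmounts.
        return () => {
          unsubscribe();
          clearInterval(id);
        };
      }, [store]);

      return <span>{String(value)}</span>;
    }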

Tracking down memory leak in Google App Engine Golang application?

I saw this Python question: App Engine Deferred: Tracking Down Memory Leaks
... Similarly, I've run into this dreaded error:
Exceeded soft private memory limit of 128 MB with 128 MB after servicing 384 requests total
...
After handling this request, the process that handled this request was found to be using too much memory and was terminated. This is likely to cause a new process to be used for the next request to your application. If you see this message frequently, you may have a memory leak in your application.
According to that other question, it could be that the "instance class" is too small to run this application, but before increasing it I want to be sure.
After checking through the application, I can't see anything obvious as to where a leak might be (for example, unclosed buffers), so whatever it is, it must be a very small but perhaps common mistake.
Because this is running on GAE, I can't easily profile it locally as far as I know, since GAE is the runtime environment. Might anyone have a suggestion as to how to proceed and ensure that memory is being recycled properly? I'm sort of new to Go, but I've enjoyed working with it so far.
As a starting point, you could try pprof.WriteHeapProfile. It writes to any Writer, including an http.ResponseWriter, so you can write a view that checks for some auth and gives you a heap profile. An annoying thing about it is that it tracks allocations, not what remains allocated after GC; so in a sense it tells you what's RAM-hungry, but it doesn't target leaks specifically.
The standard expvar package can expose some JSON including memstats, which tells you about GCs and the number of allocs and frees for particular allocation sizes (example). If there's a leak, you could use allocs minus frees to get a sense of whether the growth over time is in large or small allocations, but that's not very fine-grained.
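A minimal sketch of such an auth-guarded heap-profile view, in the style of the classic App Engine Go runtime; the /debug/heap route and the token check are placeholders, not anything from the original answer:

    package app

    import (
        _ "expvar" // registers /debug/vars, which serves memstats as JSON
        "net/http"
        "runtime/pprof"
    )

    func init() {
        http.HandleFunc("/debug/heap", heapHandler)
    }

    // heapHandler writes a pprof heap profile to the response after a
    // trivial placeholder auth check (replace with real auth).
    func heapHandler(w http.ResponseWriter, r *http.Request) {
        if r.URL.Query().Get("token") != "some-secret" { // hypothetical token
            http.Error(w, "forbidden", http.StatusForbidden)
            return
        }
        w.Header().Set("Content-Type", "application/octet-stream")
        if err := pprof.WriteHeapProfile(w); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
        }
    }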
Finally, there's a function to dump the current state of the heap, but I'm not sure it works in GAE and it seems to be kind of rarely used.
Note that, to keep GC work down, Go processes grow to be about twice as large as their actual live data as part of normal steady-state operation. (The exact % it grows before GC depends on runtime.GOGC, which people sometimes increase to save collector work in exchange for using more memory.) A (very old) thread suggests App Engine processes regulate GC like any other, though they could have tweaked it since 2011. Anyhow, if you're allocating slowly (good for you!) you should expect slow process growth; it's just that usage should drop back down again after each collection cycle.
A possible approach to check whether your app really does have a memory leak is to temporarily upgrade the instance class and watch the memory usage pattern (in the developer console, on the Instances page, select the Memory Usage view for the respective module version).
If the pattern eventually levels out and the instance no longer restarts, then your instance class was indeed too low. Done :)
If the usage pattern keeps growing (at a rate proportional to the app's activity), then you do have a memory leak. During this exercise you may also be able to narrow the search area, if you manage to correlate the growth areas of the graph with certain activities of the app.
Even if there is a leak, using a higher instance class should increase the time between instance restarts, maybe even making them tolerable (comparable with the automatic shutdown of dynamically managed instances, for example). That would let you put the memory leak investigation on the back burner and focus on more pressing matters, if that's of interest to you. One could look at such restarts as an instance refresh/self-cleaning "feature" :)

High unmanaged Memory - WPF Application

I'm about to deploy my new WPF application, and I've just noticed in the Task Manager that it was consuming a lot of memory. So I downloaded a trial of Red Gate's ANTS profiler to try to find out what was causing this, and I was shocked to see about 90 MB of unmanaged memory usage. Because ANTS does not support unmanaged memory, I then tried WinDbg, which did not point to high usage itself. This leads me to believe it must be one of the DLLs I'm loading; I'm using the DevExpress controls in my application.
Interestingly, when I minimize my application, the memory usage drops right down from, say, 110 MB to about 6-10 MB.
Should I be concerned/worried?
This is my first WPF application and I'm not totally sure what to expect in terms of memory usage. Is the fact that this memory is given back when the app is minimized a sign that everything is OK?
Any thoughts or ideas on what could be causing this would be most helpful.
I've had good luck with SciTech's .NET Memory Profiler (memprofiler.com) if you want to know specifically what's causing it.
Given the nature of the .NET runtime, if you're running on a machine that has plenty of memory available, it will generally try to use it. You should worry if you start seeing performance problems related to it, and in general it's good to be aware of what is using resources regardless. A probable reason for the drop in memory is that one of the DLLs hooks your main window's events and invokes a garbage collection on minimize.
If you're concerned about the perception of high memory usage, there are tricks you can play to massage the numbers that show up in Task Manager (like P/Invoking SetProcessWorkingSetSize), but that doesn't seem to be what you're really asking about.
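For reference, a minimal sketch of that working-set trick; the WorkingSetTrimmer class name is made up, and this only changes what Task Manager reports rather than fixing any actual leak:

    using System;
    using System.Diagnostics;
    using System.Runtime.InteropServices;

    static class WorkingSetTrimmer
    {
        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool SetProcessWorkingSetSize(
            IntPtr hProcess, IntPtr dwMinimumWorkingSetSize, IntPtr dwMaximumWorkingSetSize);

        // Passing -1 for both sizes asks Windows to trim the process's
        // working set, which is what also happens when a window is
        // minimized; it does not free managed memory.
        public static void Trim()
        {
            using var process = Process.GetCurrentProcess();
            SetProcessWorkingSetSize(process.Handle, (IntPtr)(-1), (IntPtr)(-1));
        }
    }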

Should you test an external system prior to using it?

Note: This is not for unit testing or integration testing. This is for when the application is running.
I am working on a system which communicates with multiple back-end systems, which can be grouped into three types:
Relational database
SOAP or WCF service
File system (network share)
Due to the environment this will run in, there are no guarantees that any of those will be available at run time. In fact some of them seem pretty brittle and go down multiple times a day :(
The thinking is to have a small bit of test code which runs before the actual code. If there is a problem, persist the request and poll the target system until it is available. The tests could also be rerun at logical points within the code to check that the system is still available. The ultimate goal is a very stable system, regardless of the stability (or lack thereof) of the systems it communicates with.
My questions around this design are:
Are there major issues with it? (Small things, like the possibility of failure between the test completing and the code running, are understandable.)
Are there better ways to implement this sort of design?
Would using traditional exception handling and/or transactions be better?
Updates
The system needs to talk to the back end systems in a coordinated way.
The system is very async in nature so using things like queuing technologies is fine.
The system must run even if one or more backend systems are down as others may be up and processing of some information is possible.
You will need that traditional exception handling no matter what, since, as you point out, there's always a chance that things will fail between your last check and the actual request. So any solution you find should interact smoothly with it.
You don't state whether these flaky resources need to interact in some kind of coordinated manner; if they do, you should probably be using a transaction manager of some sort. For most needs, I don't believe you want to get into the footwork of transaction management in application code.
Sometimes I have also seen people use AOP to encapsulate retry logic for back-end systems that fail (for instance due to time-outs). Used sparingly, this may be a decent solution.
In some cases you can also use message queuing technology to alleviate unstable back-ends. You could, for instance, commit to a message queue as part of a transaction and only pop off the queue when successful (see the sketch below). But this design is normally only possible when you're able to live with an asynchronous process.
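As an illustration of the queue-based approach, here is a minimal sketch; the RetryingDispatcher class is invented, and the in-memory queue stands in for a durable queue (MSMQ or similar) that real persistence would require:

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    // Requests are queued and a background worker retries them until
    // the back-end call succeeds, instead of failing outright while
    // the back-end is down.
    class RetryingDispatcher
    {
        readonly ConcurrentQueue<Action> pending = new ConcurrentQueue<Action>();

        public void Enqueue(Action request) => pending.Enqueue(request);

        public void Run(CancellationToken token)
        {
            while (!token.IsCancellationRequested)
            {
                if (pending.TryDequeue(out var request))
                {
                    try
                    {
                        request(); // attempt the real back-end call
                    }
                    catch (Exception)
                    {
                        pending.Enqueue(request);                 // back-end down: requeue
                        Thread.Sleep(TimeSpan.FromSeconds(30));   // poll interval
                    }
                }
                else
                {
                    Thread.Sleep(TimeSpan.FromSeconds(1)); // nothing to do
                }
            }
        }
    }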
And as always, real stability can only be achieved by attacking the root cause of the problem. I once had a 25-year-old bug in a mainframe TCP/IP stack fixed because we were overrunning it, so it is possible.
The Microsoft Smart Client framework provides a ConnectionMonitor class. It should be easy to use or duplicate.
Our approach to this kind of issue was to run a really basic "sanity tester" before bringing up the main application. Ours was a thick client, so we could run the test every time the app started. The sanity test would go out and check things like database availability and external network (extranet) access, and it could have been extended to cover web services as well.
If there was a failure, the user was informed, and crucially an email was also sent to the support/dev team. These emails soon became unwieldy because so many were being created, but we then set up filters so we knew when something really bad was happening. Overall the approach worked pretty well; our biggest win was being able to tell users that the system was down before they had entered data and got partway through a long-winded process. They absolutely loved it.
At a technical level, the sanity tester was written in C#; it used exception handling in a conventional way to find the problems it was looking for. The sanity program became a mini app in its own right, standalone from the main app. If I were doing it again, I'd use a logging framework to capture issues, which is more flexible than our hard-coded approach.
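A minimal sketch of such a startup sanity tester; the SanityTester class, connection strings, and endpoints are hypothetical stand-ins for real configuration:

    using System;
    using System.Data.SqlClient;
    using System.IO;
    using System.Net.Http;

    static class SanityTester
    {
        // Each check returns null on success or a human-readable failure
        // message to show the user and email to the support team.
        public static string CheckDatabase(string connectionString)
        {
            try
            {
                using var conn = new SqlConnection(connectionString);
                conn.Open(); // throws if the database is unreachable
                return null;
            }
            catch (Exception ex) { return "Database check failed: " + ex.Message; }
        }

        public static string CheckNetworkShare(string path)
        {
            return Directory.Exists(path) ? null : "Network share unreachable: " + path;
        }

        public static string CheckService(string url)
        {
            try
            {
                using var client = new HttpClient { Timeout = TimeSpan.FromSeconds(5) };
                client.GetAsync(url).GetAwaiter().GetResult().EnsureSuccessStatusCode();
                return null;
            }
            catch (Exception ex) { return "Service check failed: " + ex.Message; }
        }
    }

On startup, run each check, report any failures to the user, and send them to the support inbox or a logging framework before loading the main application.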
