So my whole Redux state is perhaps around 3-4 MB, but Chrome is reporting my tab's memory usage at around 400-500 MB, which climbs the longer you use it.
I understand there are other things it needs the memory for (so I shouldn't expect a 1:1 relationship), but does anyone know how I'd attempt to reduce memory consumption?
On a fresh session (or Incognito tab), my app is running very smoothly. If it's open for an afternoon or so, performance suffers greatly.
- My Redux store isn't overly large
- The page, DOM nodes, etc. are the same between the two (normal and Incognito) tabs
- Everything else is seemingly identical
I get that this is fairly vague, but I'm not sure what else to include. Anyone have any pointers?
Please use the Google Chrome performance analysis tools (DevTools) to analyse the performance of your app and see where savings can be made.
That said, Google Chrome can be a memory-hungry application in general, so first consider whether this is actually a problem: if the Chrome session does not consume enough RAM to hurt the overall computer's performance, then it is not a problem. Try running the application on a computer with very little RAM; as long as it stops consuming memory before performance is impacted, it is a non-issue.
If it does not do this and it begins to consume more and more memory, you likely have a memory leak and should look to resolve this with the tools linked above.
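If it does turn out to be a leak, one of the most common culprits in a long-lived React tab is a subscription or timer that is registered but never cleaned up. Here is a minimal sketch of the pattern (the component and prop names are made up for illustration):

import { useEffect } from "react";

// Leaky version: the interval and the window listener outlive the
// component, so every mount pins another closure (and everything it
// references) in memory for the life of the tab.
function LeakyTicker({ onTick }) {
  useEffect(() => {
    setInterval(onTick, 1000);                 // never cleared
    window.addEventListener("resize", onTick); // never removed
  }, [onTick]);
  return null;
}

// Fixed version: the effect returns a cleanup function, so unmounting
// the component releases both references.
function Ticker({ onTick }) {
  useEffect(() => {
    const id = setInterval(onTick, 1000);
    window.addEventListener("resize", onTick);
    return () => {
      clearInterval(id);
      window.removeEventListener("resize", onTick);
    };
  }, [onTick]);
  return null;
}

In a heap-snapshot comparison, leaks like this tend to show up as a steadily growing number of retained closures or detached DOM nodes between snapshots.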
Related
I've recently been taking a class on Big O notation for space and time complexity, and I'm trying to apply the concepts to their real-world consequences in my day-to-day job as a React developer. I understand the consequences of time complexity: if a function has higher complexity, then the larger the data set grows, the longer it takes to compute, ergo longer load times for the user, which no one likes. What I don't understand is the consequences of space complexity in a React app. What are the consequences, to either the user or the application, if you have a really large data set and end up using a huge amount of space in the browser?
Actually, this is not specific to React or any other framework. The concepts of time complexity and space complexity don't depend much on frameworks or languages; that is the whole point of using those notations.
Now, coming to your question: the answer really depends on the browser you are using to run your React app. Every browser allocates a maximum amount of memory to each tab. If the website uses more than the allocated memory, the browser kills that page.
Here is a question about memory limits on Stack Overflow:
Javascript memory limit
In Chrome and Chromium OS, the memory limit is defined by the browser, and you can inspect the limit with the following command in the Developer Tools command line (hit F12 to open it):

window.performance.memory.jsHeapSizeLimit
1090519040

On my Windows 10 OS, it is about 1 GB.
On Chrom(e/ium), you can get around the heap size limit by allocating
native arrays:
var target = [];
while (true) {
    target.push(new Uint8Array(1024 * 1024)); // 1 MB native arrays
}

This crashes the tab at around 2 GB, which happens very rapidly. After that, Chrom(e/ium) goes haywire, and repeating the test is not possible without restarting the browser.
In the above link, there is a mention of React and Angular:
Here are some examples of frameworks with well-known performance
degradation:
...

React: Again, immutable collections of JS objects only scale so far.
create-react-app internally uses Immutable.JS, and Immutable.JS can
only create about 500k immutable collections before it dies.
One more side effect the user can experience is overall system lag. The lag may be due to low available RAM and the OS actively swapping to allocate more memory to your app, until you reach the maximum limit and the browser kills the page.
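To make the cost of a large data set concrete, you can watch the heap while you build one. Here is a rough sketch using Chrome's non-standard performance.memory API (Chromium-only, and the exact numbers will vary by machine):

// Chrome-only: performance.memory is non-standard.
function usedHeapMB() {
  return (performance.memory.usedJSHeapSize / (1024 * 1024)).toFixed(1);
}

console.log("before:", usedHeapMB(), "MB");

// Hold one million small objects in memory, much like a large API
// response kept in application state.
const rows = [];
for (let i = 0; i < 1000000; i++) {
  rows.push({ id: i, label: "row " + i });
}

console.log("after:", usedHeapMB(), "MB");
// The difference is the ongoing cost: as long as `rows` stays
// reachable (e.g. held in a store), the GC can never reclaim it.

That is the practical consequence of O(n) space: the cost is paid for as long as the data stays reachable, and the failure modes described above (swapping, then the tab being killed) kick in once n gets large enough.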
I've been scouring the internet for the last week trying to figure out how to spot and solve memory leaks within my React application, because, well, I think I have one. I've noticed that our application crashes more and more frequently lately, and I keep getting the same error from Node.js: API fatal error handler returned after process out of memory. I knew that the application I'm working on, developed by others before me, had some serious flaws, but never knew that they were this bad, so I decided to turn to the internet to try and solve this issue.
I looked at the Chrome DevTools and took heap snapshots to see if there is an increase in memory, and it is apparent that there is when I see the memory shoot from 123 MB to 200+ MB after a few actions within the application. Now, this is a good tool for determining whether there is a possible memory leak or not, but the snapshots are absolutely hard to read and understand, which doesn't help me determine where the issues lie.
Now, our AWS instance is only 1 GB in size, and a lot of the answers I see about this sort of issue say to just increase the maximum memory available to Node.js, but that doesn't solve anything; it just throws a band-aid on the problem until it occurs again, which is not good practice in my opinion. I'm coming here now in hopes that someone can help me understand what in the world I'm looking at when using the Chrome DevTools, and/or if someone knows a better way of finding out where the issues are in my code, that would be very helpful as well. Thanks in advance.
EDIT
Out of all of the most common causes of memory leaks in JavaScript that I have read about online, none stick out to me within our application, so I'm very confused about where the possible leak is coming from.
Another thing is that the application grabs a lot of data from our backend and keeps it in memory. Could minimizing the amount of data retrieved help, or would that only slow the issue down rather than fix it?
I had the same issue with a React application in my organization. The application received huge amounts of data from the API and stored it in state variables, and based on the type of operation, it would send that huge data back to the API.
Because of this, the application would break with an Out of Memory error.
There is no direct solution to the issue; it involves a lot of analysis of the code:
1. See whether the data size can be reduced.
2. Deep dive to see if you could use useMemo in the child components that get re-rendered every time a parent component re-renders.
3. If you need to modify only a small part of the state, try using Immutability Helper or Immer (see the sketch at the end of this answer).
In my application, I reduced the size of the response wherever applicable and used Immer to modify the state. After these changes, I no longer see the Out of Memory issue.
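As an illustration of points 2 and 3, here is a small sketch; the state shape and component names are invented. Immer's produce shallow-copies only the path that changed, reusing the large array by reference, and useMemo keeps a derived value from being recomputed on every parent re-render:

import { useMemo, useState } from "react";
import { produce } from "immer";

function OrderSummary({ orders }) {
  // Recomputed only when `orders` changes, not on every parent render.
  const total = useMemo(
    () => orders.reduce((sum, o) => sum + o.amount, 0),
    [orders]
  );
  return <div>{orders.length} orders, total: {total}</div>;
}

function App() {
  const [state, setState] = useState({
    orders: [],         // potentially huge
    ui: { filter: "" }, // small slice that changes on every keystroke
  });

  // produce() clones only state -> ui; the large `orders` array is
  // reused by reference, not copied.
  const setFilter = (filter) =>
    setState((prev) =>
      produce(prev, (draft) => {
        draft.ui.filter = filter;
      })
    );

  return (
    <>
      <input
        value={state.ui.filter}
        onChange={(e) => setFilter(e.target.value)}
      />
      <OrderSummary orders={state.orders} />
    </>
  );
}

Because produce reuses the untouched orders array, the useMemo dependency check ([orders]) also passes between filter keystrokes, so the reduce never re-runs.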
I saw this Python question: App Engine Deferred: Tracking Down Memory Leaks
... Similarly, I've run into this dreaded error:
Exceeded soft private memory limit of 128 MB with 128 MB after servicing 384 requests total
...
After handling this request, the process that handled this request was found to be using too much memory and was terminated. This is likely to cause a new process to be used for the next request to your application. If you see this message frequently, you may have a memory leak in your application.
According to that other question, it could be that the "instance class" is too small to run this application, but before increasing it I want to be sure.
After checking through the application I can't see anything obvious as to where a leak might be (for example, unclosed buffers, etc.) ... and so whatever it is it's got to be a very small but perhaps common mistake.
Because this is running on GAE, I can't really profile it locally very easily, as far as I know, since GAE is the runtime environment. Might anyone have a suggestion as to how to proceed and ensure that memory is being recycled properly? I'm sort of new to Go, but I've enjoyed working with it so far.
For a starting point, you might be able to try pprof.WriteHeapProfile. It'll write to any Writer, including an http.ResponseWriter, so you can write a view that checks for some auth and gives you a heap profile. An annoying thing about that is that it's really tracking allocations, not what remains allocated after GC. So in a sense it's telling you what's RAM-hungry, but doesn't target leaks specifically.
The standard expvar package can expose some JSON including memstats, which tells you about GCs and the number of allocs and frees for particular sizes of allocation (example). If there's a leak, you could use allocs minus frees to get a sense of whether it's large allocs or small that are growing over time, but that's not very fine-grained.
Finally, there's a function to dump the current state of the heap, but I'm not sure it works in GAE and it seems to be kind of rarely used.
Note that, to keep GC work down, Go processes grow to be about twice as large as their actual live data as part of normal steady-state operation. (The exact % it grows before GC depends on runtime.GOGC, which people sometimes increase to save collector work in exchange for using more memory.) A (very old) thread suggests App Engine processes regulate GC like any other, though they could have tweaked it since 2011. Anyhow, if you're allocating slowly (good for you!) you should expect slow process growth; it's just that usage should drop back down again after each collection cycle.
A possible approach to check whether your app indeed has a memory leak is to temporarily upgrade the instance class and check the memory usage pattern (in the developer console, on the Instances page, select the Memory Usage view for the respective module version).
If the pattern eventually levels out and the instance no longer restarts then indeed your instance class was too low. Done :)
If the usage pattern keeps growing (at a rate proportional to the app's activity), then you indeed have a memory leak. During this exercise you might also be able to narrow the search area, if you manage to correlate the growth areas of the graph with certain activities of the app.
Even if there is a leak, using a higher instance class should increase the time between the instance restarts, maybe even making them tolerable (comparable with the automatic shutdown of dynamically managed instances, for example). Which would allow putting the memory leak investigation on the back burner and focusing on more pressing matters, if that's of interest to you. One could look at such restarts as an instance refresh/self-cleaning "feature" :)
My instances all now start at 140 MB and average just under 200 MB. If left long enough they start hitting 240 MB. However, my question is more about the memory being used right after a fresh instance boots up. I store nothing on the instances; every request fetches what it needs from memcache and the datastore, and I don't use singletons.
All I have are classes and a lot of static resources that deploy with the instances. I use JSPs extensively (if that makes a difference).
Thanks for any assistance!
I'm going from memory here, since I have used Java on App Engine for a few years. This may be stale.
The JVM doesn't like to release memory. If an instance gets created and services a request, the memory watermark goes up. Garbage collection may 'free' part of that memory in the sense of making it available for reuse, but the high watermark on process memory doesn't necessarily go down. A subsequent request may need allocations that aren't available as free chunks, so the watermark on memory goes up again. If the app isn't configured to serve multiple requests simultaneously, memory use follows something like a sigmoid curve. If multiple requests are being processed simultaneously, the watermark is raised further.
That said, a common cause of unexpected memory growth is queries that retrieve more rows than are necessary, with filtering happening in the app.
But without more information, your specific case is impossible to diagnose.
I believe I figured out why my project was taking up an ever-increasing amount of RAM. I happen to have a lot of static resources in my project, and it would appear that these static resources all get loaded into the frontend instance's memory (probably for speed). I managed to free up huge amounts of memory by moving my static resources off of my primary application servers.
I'm about to deploy my new WPF application, and I've just noticed in Task Manager that it was consuming a lot of memory. So I downloaded a trial of Red Gate ANTS to try to find out what was causing this issue, and I was shocked to see about 90 MB of unmanaged memory usage. Because ANTS does not support unmanaged memory, I then tried WinDbg, which did not point to high usage itself. This leads me to believe it must be one of the DLLs I'm loading; I'm using the DevExpress controls in my application.
An interesting feature is that when I minimize my application, the memory drops right down from, say, 110 MB to about 6-10 MB.
Should I be concerned / worried?
This is my first WPF application, and I'm not totally sure what to expect in terms of memory usage. Is the fact that this memory is regained/given up when minimized a sign that everything is OK?
Any thoughts or ideas on what could be causing this would be most helpful.
I've had good luck with SciTech's .NET Memory Profiler (memprofiler.com) if you want to know specifically what's causing it.
By the nature of the .NET runtime, if you're running on a machine that has plenty of memory available, it will generally try to use it. You should worry if you start seeing performance problems related to memory, but it's generally good to be aware of what is using resources regardless. A probable reason for the drop in memory is that one of the DLLs may hook your main Window's events and invoke a garbage collection on minimize.
If you're concerned about the perception of high memory usage there are tricks you can play to massage the numbers that show up in TaskManager (like p/invoking SetProcessWorkingSetSize), but that doesn't seem to be really what you're asking about.