memory increment of the application in the browser while running it for several hours - reactjs

I am stuck with the memory increment of my application, and as it is a single-page app I can't even reload it. After running my application for around 5-6 hours, memory reaches around 600 MB from the initial load of about 120 MB. We made some fixes for this, like setting refs to null in componentWillUnmount(), and memory came down to 400 MB over the same test duration, but in the heap snapshots taken with Chrome's built-in tooling I can still see a lot of detached elements, which must be caused by other parts of the code. So is there any way to remove all the detached elements when leaving a certain page? And why doesn't the browser remove them from memory, given that each detached element retains some of it?

A DOM node can only be garbage collected when there are no references to it from either the page's DOM tree or JavaScript code.
I suggest you take a look at your code and check whether any functions are running when you don't intend them to. If you use React or similar frameworks, you have to be careful with their lifecycle (important!).
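For example, a common source of detached elements is a component that subscribes to something outside itself (window events, timers) and never unsubscribes. A minimal sketch, with hypothetical component and handler names:

class Dashboard extends React.Component {
  componentDidMount() {
    this.onResize = () => this.forceUpdate();
    window.addEventListener('resize', this.onResize);
    this.pollTimer = setInterval(() => this.refreshData(), 5000);
  }

  componentWillUnmount() {
    // Without these two lines, window and the interval closure keep
    // referencing the unmounted component, and through it the detached
    // DOM subtree it rendered.
    window.removeEventListener('resize', this.onResize);
    clearInterval(this.pollTimer);
  }

  refreshData() { /* hypothetical data refresh */ }

  render() {
    return <div>dashboard</div>;
  }
}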
Also see https://developers.google.com/web/tools/chrome-devtools/memory-problems/
There is a lot of useful information there, such as:
- Investigate memory allocation by function
- Spot frequent garbage collections
"So is there any way that I can remove all the detached elements while leaving a certain page, or why doesn't the browser remove them from memory?"
I can't offer a more accurate assumption or suggestion when all we have to go on is "I use JavaScript". Countless consequences from countless combinations of libraries, stacks and techniques make this impossible to guess.

Related

React error in render/flush: RangeError in flush RangeError: Maximum call stack size exceeded

I've been in the process of rewriting an old AngularJS app in React (actually it's using preact, chosen by the developer who started this project initially).
This app handles large, deeply nested objects that get displayed via Material UI accordions and tables. The data is more WIDE than deep, but at any rate, React has trouble rendering it all without this RangeError.
I've been dancing with this issue for a while now and have avoided it by strategically managing accordions and not rendering data for accordions that are not open.
I've commonly seen this reported as a recursion issue, and I've carefully reviewed the code to confirm there is no recursion involved. Plenty of iteration, but no recursion.
Please note the stack trace: it's hitting this in the flush() function, which is not in our application code but in the Chrome debugger VM. I've set breakpoints, and it appears to be something related to DOM operations, as the objects being flushed are React elements. Here's a code snippet from the point where the error is hit:
function flush(commit) {
  const {
    rootId,
    unmountIds,
    operations,
    strings,
    stats
  } = commit;
  if (unmountIds.length === 0 && operations.length === 0) return;
  const msg = [rootId, ...flushTable(strings)];
  if (unmountIds.length > 0) {
    msg.push(MsgTypes.REMOVE_VNODE, unmountIds.length, ...unmountIds);
  }
  msg.push(...operations); // <--- error occurs here when operations.length is too large
And the stack trace logged when the error occurs:
VM12639:1240 Uncaught (in promise) RangeError: Maximum call stack size exceeded
at flush (<anonymous>:1240:8)
at Object.onCommit (<anonymous>:3409:19)
at o._commit.o.__c (<anonymous>:3678:15)
at QRet.Y.options.__c (index.js:76:17)
at Y (index.js:265:23)
at component.js:141:3
at Array.some (<anonymous>)
at m (component.js:220:9)
The error occurs when operations is too large. Normally it will be anywhere from a dozen or so entries up to maybe 3000, depending on what's going on, but when I try to load our page displaying the wide/deeply nested object, this number is more like 150000, which apparently is choking the spread operator.
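For what it's worth, msg.push(...operations) passes every element of operations as a separate function argument, and engines cap how many arguments a single call can take, which is why a very large array blows up. A small illustration of the failure mode and a chunked workaround (the exact limit is engine-dependent):

const operations = new Array(150000).fill(0);
const msg = [];

// msg.push(...operations); // may throw: RangeError: Maximum call stack size exceeded

// Pushing in bounded chunks keeps each call's argument count small:
const CHUNK = 10000; // arbitrary size, well under typical argument limits
for (let i = 0; i < operations.length; i += CHUNK) {
  msg.push(...operations.slice(i, i + CHUNK));
}

// Or avoid per-element arguments entirely:
// const combined = msg.concat(operations);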
My sense is that this type of app is a challenge for React. I cannot think of another example of a React app that displays data the way we do with this. If anyone here has experience with this sort of dataset and can offer suggestions as to how to make this work, please share.
My guess is I'm going to need to somehow break this object up into smaller chunks that represent smaller updates, but I'm posting here in case there's something I can learn.
It looks similar to this open issue on the React repo, only it happens in a different place (also in the dev tools). It might be worth reporting your issue there too. So React is probably otherwise "fine" rendering this number of elements, though you'll inevitably get slow performance.
Likely the app is just displaying too much data, or doing it inefficiently.
"but when I try to load our page displaying the wide/deep nested object this number is more like 150000, ..."
150000 DOM operations is a really high number. Either your app really does display a whole lot of elements, or the old AngularJS app had too many wrapper elements and these were preserved. Since you mention it concerns data tables, it's probably the former. In any case, complex applications always need some platform-specific optimization.
If you can give an idea about the intended use case, or even better, share (parts of) the code, that would help others to give more targeted advice. Are the 150k operations close to what would happen in real world usage, or is it just a very inflated number for stress testing? Do you see any other performance regressions, compared to the Angular app, with very complex objects? How many tables are on the screen at a time?
A few hundred visible elements on the screen already gets quite cramped. So where would all these extra operations be coming from? Either you're loading a super long page of which a user can only see a few percent at a time, or the HTML structure is unnecessarily deeply nested.
Suggested performance improvements
I wouldn't say React isn't suitable for really large amounts of data, but you do need to watch out for some things yourself. React is only your vehicle to apply changes to the DOM. Putting a large number of elements in the DOM is always going to lead to decreased performance, and is something you usually want to avoid.
In this case you could consider whether it's necessary to display all the table's data, which is probably the bulk of the operations. Using pagination would resolve the problem, and might even make it more user friendly.
If that's not an option, you could use a library like react-lazyload to show/hide the items as they enter/exit the visible part of the table. To achieve this, use its unmountIfInvisible prop. You can then replace a complex data row with a single element that has the same height. The latter is important to preserve the scroll height.
<LazyLoad
  height={100}
  offset={100}
  unmountIfInvisible
  placeholder={<tr height={100} />}
>
  <MyComplexDataRow />
</LazyLoad>
This way your data table never contains many more complex elements than can be seen in the viewport. You probably need to tune the offset a bit so that a row is always ready in time as the table is being scrolled.

angularjs memory consumption issue

Recently I had the opportunity of building a new web application, and thought of trying out Angular to get a good understanding of it. So yeah, I'm fairly new to this framework.
After understanding the nuances of the framework, I found it surprisingly easy to work with. Everything about my experience had been just great, until users started reporting the utterly laggy performance of the application.
The application is fairly simple: it's got 2 screens. One shows a list of deals, and the other is where users can add/edit deal information. This second page is a simple form expecting the user to enter deal-related information. It looks like this:
The outlined sections are rendered using ng-repeat. The retailers list has some 530 entries whereas the brand list has about 400 entries.
After a bit of profiling, I figured out that visiting this second form screen would keep on increasing the memory consumption of the browser. The first screen doesn't have any such effect. I simply toggled between the first screen and this second form screen, and found that every time this screen would get loaded, the memory consumption would spike by 50-75 MB. Eventually, the browser would just freeze up. Here's how the memory profile looks:
As you can see, the consumption keeps going up, and there's no sign of any GC! Every spike in the node count and memory trace corresponds to a visit to the second form-based screen.
Now I have already checked out a whole lot of issues around angular and memory consumption, but each of them mentions that the $scope for any of the views will get removed when a new view loads. The DOM node count certainly doesn't indicate such a thing for me :/
I also came across 2 important points related to the usage of ng-repeat:
Avoid invocation of any function within the ng-repeat directive.
Don't have a two-way binding using ng-model within an ng-repeat directive.
Both of these I've avoided in the second screen, and yet, the memory consumption is going through the roof.
My question might seem like yet another memory-related question w.r.t. Angular, but I've really tried to get some sort of closure on this and haven't found one.
Would really appreciate any assistance on this, as my decision to progress with the usage of angular for the rest of the portal hinges on solving this issue.
Thanks for reading!
Update 1
Based on Ilan's suggestion, let me add that I make use of 2 plugins: one for rendering the dropdown and one for implementing the date-picker.
For the dropdown I'm using Bootstrap-select, and for the date-picker I'm using Bootstrap-datepicker.
For Bootstrap-select to work, I had to write a custom directive which emits an event on the $last iteration of ng-repeat. It looks something like this:
.directive('onFinishRender', function($timeout) {
  return {
    restrict: 'A',
    link: function(scope, element, attr) {
      if (scope.$last === true) {
        $timeout(function() {
          scope.$emit('ngRepeatFinished');
        });
      }
    }
  };
});
Then in the controller, I rely on this event to invoke the render for the dropdown plugin:
$scope.$on('ngRepeatFinished', function(ngRepeatFinishedEvent) {
  $('#retailer').selectpicker('render');
});
For Bootstrap-datepicker I don't have to do anything so elaborate, as I only need to wrap the date input field using JS.
Update 2
After turning off the plugins, the memory consumption is drastically reduced. However, the leak still persists. Earlier, whenever the form view loaded, memory would spike by 50-60 MB. After turning off the plugins, it spikes by 25-35 MB. But as you can see below, the memory consumption keeps piling up.
I recently spent nights and days tracking down memory leaks similar to yours. There is no direct answer to your question; you will have to do the research, but I can give you some pointers for finding the leak.
Don't use any plugins in your Chrome browser except the developer toolbar when debugging the leak.
The Timeline is good for figuring out that there is a leak, but to actually see the leak, use the Profiles tab. It runs a GC every time you take a heap snapshot, and gives you a clue as to whether you have improved things or not.
If you are seeing a memory leak in the tens of MBs, then it is coming from DOM elements. With the size of the leaks you mention, I can tell that your whole document is being retained as a detached DOM tree, because one or two components on your page are not getting released and are still referenced from the attached DOM.
Remove all the elements from your second page and do the switch to see if memory still increases. If it does, the first page has the leak; otherwise, do the same with the second page.
Once you have located which page has the leak, remove all components from that page and add them back one by one to see when the leak returns.
I hope these steps help you in some way. Also, in case it helps: I have found that using $timeout in a directive can cause leaks.
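On that last point, here is one way the onFinishRender directive from Update 1 could clean up after itself, so a pending $timeout (and everything its closure references) is released when the scope is destroyed. A hedged sketch, not a guaranteed fix for this particular leak:

.directive('onFinishRender', function($timeout) {
  return {
    restrict: 'A',
    link: function(scope, element, attr) {
      var pending;
      if (scope.$last === true) {
        pending = $timeout(function() {
          scope.$emit('ngRepeatFinished');
        });
      }
      // Cancel the timeout when the scope goes away, so its closure
      // (and any DOM it references) can be garbage collected.
      scope.$on('$destroy', function() {
        if (pending) $timeout.cancel(pending);
      });
    }
  };
});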

Performance of web_reg_find vs strstr()

Which operation is faster and creates less load: LoadRunner's web_reg_find() or C's strstr()? Which is preferable for a very heavy load test?
And if somebody knows how web_reg_find() works, please tell me.
With strstr() you would have to pull each and every component on a page and, after the download, explicitly search for a string in a buffer. With web_reg_find() you are setting a filtering condition through which every response component on a page passes.
If you choose the strstr() route, you will still have to download the page components and then run the check against each component. You will use more memory, and unless you are very good at your memory management you will likely miss a free() on occasion and introduce a memory leak condition, which, when you are pressed for time to get a script out the door, becomes a common side effect. With web_reg_find() the check can operate concurrently with the page download, with no slowdown of the download itself.
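The difference can be illustrated outside LoadRunner. A conceptual sketch in JavaScript (not LoadRunner C): the first version checks the response while it streams in, roughly the web_reg_find() idea; the second buffers the whole body first, which is the strstr() situation:

async function checkWhileStreaming(url, needle) {
  const reader = (await fetch(url)).body.getReader();
  const decoder = new TextDecoder();
  let tail = ''; // keep overlap so a match split across chunks isn't missed
  for (;;) {
    const { done, value } = await reader.read();
    if (done) return false;
    const text = tail + decoder.decode(value, { stream: true });
    if (text.includes(needle)) return true; // found mid-download
    tail = needle.length > 1 ? text.slice(1 - needle.length) : '';
  }
}

// The strstr()-style route: the entire body must sit in memory first.
async function checkAfterDownload(url, needle) {
  const body = await (await fetch(url)).text();
  return body.includes(needle);
}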
I am not sure where Adriano got the research on the raw performance of one versus the other, since the operations of the two are so different: a web_reg_find() would be complete before a strstr() could even be initiated, because you have to download and populate a buffer before you can search it.

Are there any good software development "patterns" for memory intensive .net programs?

Basically I'm working on a program that processes a lot of large video and image files, and I'm struggling with the memory management side of it because I've never dealt with anything quite like this before.
For instance, it stores all these images in a database and loads a list of videos; you can then switch between the videos and view images from each video. Right now, it's keeping all of those images in memory all the time, which eats up a lot of space. I know I can lazy load the images, but once you've switched back and forth, you get all of them stuck in memory.
I want to take advantage of the WPF databinding functionality and MVVM as much as possible, but if I need to look at a different architecture I will.
I'm just looking for general advice, tips, links to articles, or anything that could help.
One of the things you could look at is data virtualization, which is not provided in WPF by default (it provides UI virtualization instead). Data virtualization means 'load and bind the data for an item / range of items while visible, then unload it when not visible'.
Here's a great article that describes a concrete implementation that you may be able to use as-is or adapt:
http://www.codeproject.com/KB/WPF/WpfDataVirtualization.aspx
It sounds like the main problem you're having is not so much the performance-intensiveness of the application (which things like fixed-size buffers and static allocation will help with) but its overall memory footprint. The way to control that is with virtualization.
Lazy loading gets you halfway there: you don't actually create the object until something needs it. That's fine, but the longer the user works with the application and the more objects he visits in the UI, the more objects get created, and eventually the application runs out of memory.
So you want to throw away objects that the user doesn't need anymore. Figuring out which objects the user doesn't need can be a hard problem, but it can also be as easy as assuming that the user doesn't need the object that he used least recently. You use a least-recently-used (LRU) cache to do this.
This is totally consistent with the MVVM pattern. In your view class, you make your property getter for the object use this pseudocode:
if object hasn't been loaded
    load object
add object to the LRU cache (whether you loaded it or not)
return object
The LRU cache I wrote keeps a simple queue of the objects it contains. When you add an object to the cache, if it's not already in the queue it gets added to the back, and if it is already in the queue it gets moved to the back.
If the queue's at its capacity when you add an object, it pops off whatever is at the front of the queue (which is the one that was used least recently) and raises the DiscardingOldestItem event.
This event is the object's chance to tell anything that holds a reference to it (i.e. the view object that it's a property of) that it needs to be discarded (probably by raising an event of its own). The view object's event handler should first raise the PropertyChanged event. If the property getter gets called when it does this, there's a binding somewhere that's still looking at the property, so it shouldn't be discarded yet. (Also, since the getter was called, the object just got moved to the back of the queue.) Otherwise, it can be thrown away.
(Note that if you have more objects visible in the UI than the cache can hold, this little dance becomes an infinite loop and you'll get a stack overflow.)
A more sophisticated approach would have the LRU cache start discarding old items when the application started running low on memory (it uses a fixed capacity right now). That's a straightforward change, but if you make that change, the scenario described in the previous paragraph is something you need to give more thought to; one very large object could result in the whole UI going kablooey.
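To make the queue mechanics concrete, here is a minimal sketch of such an LRU cache, written in JavaScript for brevity rather than C# (a Map iterates in insertion order, so re-inserting a key moves it to the back); the onDiscard callback stands in for the DiscardingOldestItem event:

class LruCache {
  constructor(capacity, onDiscard) {
    this.capacity = capacity;
    this.onDiscard = onDiscard; // called with the evicted key and value
    this.map = new Map();
  }

  touch(key, value) {
    // Re-inserting moves an existing key to the back of the queue.
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);

    if (this.map.size > this.capacity) {
      // The first key in iteration order is the least recently used.
      const oldestKey = this.map.keys().next().value;
      const oldestValue = this.map.get(oldestKey);
      this.map.delete(oldestKey);
      this.onDiscard(oldestKey, oldestValue); // holders may re-touch to keep it
    }
  }
}

A property getter would then call touch(id, object) on every access, matching the pseudocode above.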
It seems that to increase raw performance you would actually want to avoid patterns. They have their uses, don't get me wrong, but if you're trying to blast video at the highest performance possible, the last thing you need to do is introduce abstraction layers that are designed to produce higher-quality code, not increase application performance.
This article on InformIT has a lot of good info on the subject, although it is more C and C++.
It suggests:
- Static Allocation Pattern: Allocates memory up front
- Pool Allocation Pattern: Preallocates pools of needed objects
- Fixed Sized Buffer Pattern: Allocates memory in same-sized blocks
- Smart Pointer Pattern: Makes pointers reliable
- Garbage Collection Pattern: Automatically reclaims lost memory
- Garbage Compactor Pattern: Automatically defragments and reclaims memory
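As a rough taste of the pool idea, in JavaScript terms rather than the article's C/C++ (a hypothetical sketch, not taken from the article): objects are recycled through a free list instead of being reallocated each time:

class Pool {
  constructor(factory, size) {
    this.factory = factory;
    // Preallocate everything up front, as in the static/pool allocation patterns:
    this.free = Array.from({ length: size }, factory);
  }
  acquire() {
    // Reuse a free object if one is available; otherwise allocate a new one.
    return this.free.length > 0 ? this.free.pop() : this.factory();
  }
  release(obj) {
    this.free.push(obj); // hand the object back for reuse instead of dropping it
  }
}

// Usage: const buffers = new Pool(() => new Uint8Array(4096), 32);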
"I know I can lazy load the images,
but once you've switched back and
forth you get all of them stuck in
memory."
This is not true to my understanding. The images can get garbage collected just like anything else, by removing all references to them. Are you sure you don't have a reference to them somewhere? Try a memory profiler like memprofiler or ANTS to see what's happening.
To those who have found this question looking for general (not WPF-specific) patterns to reduce memory, the famous one (which I have never seen used!) is the Flyweight pattern.
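For reference, a minimal sketch of the Flyweight idea, again in JavaScript for brevity: many lightweight objects share one immutable "intrinsic" object instead of each carrying its own copy. The names here (glyphs, loadOutline) are hypothetical:

const glyphCache = new Map();

function loadOutline(char, font) {
  return { vectorData: '...' }; // stand-in for expensive-to-build data
}

// Flyweight factory: returns the single shared glyph per (char, font) pair.
function getGlyph(char, font) {
  const key = char + '|' + font;
  if (!glyphCache.has(key)) {
    glyphCache.set(key, { char, font, outline: loadOutline(char, font) });
  }
  return glyphCache.get(key);
}

// Each character on screen stores only its extrinsic state (position)
// plus a reference to the shared glyph:
const cell = { x: 10, y: 20, glyph: getGlyph('a', 'serif') };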

Trying to track down a Silverlight memory leak that only happens in browsers

This is an odd one. I am making an app that is kind of a game, and I wanted a shooting starburst effect. I made it one evening and it all worked well, until I noticed that my browser was eating over 300 MB of RAM, consuming another 1 MB every 5 seconds, mainly when the starburst happened.
Here is an example stripped down to just the starburst:
http://www.sizzln.com/example.htm
My first thought: I am not removing the objects, or I still have references somewhere. I am placing each generated star into a Canvas, but I am removing old stars every 3 seconds. I do have a lot of DoubleAnimations as well, but I even have a callback that sets everything to null.
Here is the weird part: if I convert it to WPF, it doesn't happen; if I run it inside Silverlight Spy 3, it doesn't happen. If I take a heap dump using WinDbg and SOS.dll, it reports that the app should only be using between 1.8 and 3 MB of RAM.
I have the GC running every 3 seconds to clean up, but it never has any effect. I can see in the heap dump that many objects have been deleted, and I always get back to 1.8 MB or so after a GC, but the memory shown in Task Manager just keeps going up.
I don't know what to do. I think I am carefully removing the objects, unless my heap dump is not being honest.
Are you running Vista or Win7? It sounds like the OS is not reclaiming memory, as it shouldn't unless it needs to.
It may also be that the Silverlight GC doesn't free its buffers, on the assumption that the memory may need to be reallocated soon.
In either case, it doesn't sound like anything to worry about, as long as the profiler says your program only uses 1.8MB after the GC runs.
I just briefly looked over your code. You have a lot of places where you hook into events (+=) but never unhook (-=). These are hard references, and the subscribers therefore won't ever be collected as long as they are ultimately connected to a root object.
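The same hazard exists outside Silverlight. A sketch of the symmetric hook/unhook discipline in JavaScript terms (names hypothetical):

// Hooking an event creates a hard reference from the event source to the
// handler and everything it closes over; without a matching unhook, those
// objects stay reachable for as long as the source is rooted.
function attachCounter(button, label) {
  let count = 0;
  const onClick = () => { label.textContent = String(++count); };
  button.addEventListener('click', onClick);
  // Return a disposer so the caller can break the reference when done:
  return () => button.removeEventListener('click', onClick);
}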
OK, I am going to sort of answer my own question. Silverlight doesn't have the handy BeginAnimation method, so I found a quick way online to add an extension that does basically the same thing; it works by creating a Storyboard and starting it.
However, the Storyboard just stayed there, and I don't exactly know what it remained connected to either. Calling Stop() on it after it finishes fixed my memory issue.
One odd side effect is that I have to be careful about when I call the Stop() method; when creating so many storyboards, it seemed to get a bit confused and would cause some of the objects to reappear, even after they were removed from the control.
