I am new to Angular, so maybe I'm going about this completely wrong. I am attempting to make a tree view with AngularJS directives. The code that I have so far accomplishes this, except that there appears to be a memory leak: each time the tree view reloads it slows down, and eventually it crashes the browser.
I have created the following two directives to accomplish my task: jscTreeView and jscTreeNode.
This fiddle has my source. It builds a random tree and lets you choose the number of nodes in the tree. If you bump that number up and reload several times, you will notice that it progressively slows down each time.
Any ideas on how to clean up after myself would be greatly appreciated, thanks.
Edit:
This fiddle is a second attempt; with this one I went in a completely different direction. It is much more efficient, and the code is cleaner in my opinion. However, this one has an issue too: periodically, and seemingly at random, refreshing the tree throws an infinite digest exception.
Note: not all of the functionality that was in the former tree is in the current tree yet. That is just because I have not programmed it yet.
As the discussion in the comments indicates, I was creating, but never properly releasing, scopes in my tree view. While the solution was not easy to figure out, it is actually quite simple, and it clarified things a lot for me.
What I needed to do was make a clone of the root scope of my tree with var newScope = scope.$new();. I then built all of the rest of the subtrees, as well as their associated nodes, and compiled them against the cloned scope newScope. After the compile, the cloned scope is stored in a variable private to the directive: lastScope = newScope;. When the watched property is updated and calls back into my directive, the last cloned scope is destroyed with lastScope.$destroy();. Destroying that cloned scope automatically destroys every child scope created beneath it (the nodes as well as the subtrees). A new clone of the scope is then created, and the process repeats. Thanks to Jorg's tool for counting scopes, I can see that everything is being cleaned up nicely on each iteration. Here is a fiddle with the solution.
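In code, the pattern boils down to something like this rough sketch (the watched property name and the markup-building helper are hypothetical; the real implementation is in the fiddle):

    app.directive('jscTreeView', function ($compile) {
        return {
            restrict: 'E',
            link: function (scope, element) {
                var lastScope = null;   // clone used for the previous render

                scope.$watch('treeData', function (treeData) {
                    // destroying the previous clone also destroys every child
                    // scope created beneath it (the nodes and the subtrees)
                    if (lastScope) {
                        lastScope.$destroy();
                    }
                    element.empty();

                    // build the new tree against a fresh clone of the scope
                    var newScope = scope.$new();
                    var markup = buildTreeMarkup(treeData);   // hypothetical helper
                    element.append($compile(markup)(newScope));
                    lastScope = newScope;
                });
            }
        };
    });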
I have read through all the articles I could find on the internet, but I still can't understand it, and my mind keeps puzzling over the same questions:
At the end of it all, the virtual DOM is going to call the browser API to update the real DOM, so how can it be faster?
Does React's virtual DOM have special access to the browser's core APIs to modify the DOM?
I can't understand this. Can anyone resolve these questions? Thanks in advance.
Here is a talk given circa 2013 (v0.4.0) by the two guys behind React. They describe exactly how it works. Unlike data binding and dirty checking (Angular, etc.), React uses one render method that's called recursively. It then generates a long string that is a representation of the DOM. The concept is actually really simple.
https://www.youtube.com/watch?v=XxVg_s8xAms
Yes, you are right: ultimately the task is to update the real DOM, but the virtual DOM comes into the picture before the real DOM is updated. How?
Suppose you want to update one or many elements in the DOM tree. There has to be a mechanism to find which elements need to be updated in the real DOM, i.e. the browser screen we see.
This finding algorithm is executed against the virtual DOM, i.e. a JavaScript copy of the real DOM (an HTML DOM tree).
React keeps two virtual DOM trees: one reflecting the existing real DOM and one built from the changes that were made. Comparing these two virtual trees is what saves time; the difference resulting from that comparison is then used to update the real DOM.
At the end of it all, the virtual DOM is going to call the browser API to update the real DOM, so how can it be faster?
Any speed benefits come from minimizing the number of DOM manipulations that are needed and doing them all at once. The virtual DOM is React's way of calculating the minimum set of changes.
Here's what I mean by calculating the minimum set of changes: the page starts off looking one way, and then you want to make it look some other way. To get there, you're going to need to make one or more changes to the DOM, but there are many different ways you could do it.
A really bad way to update the page would be to wipe the entire document and then rebuild every single element from scratch. Most likely, though, you can reuse most of the page and just make a few updates to select parts of it: add a div here, update a property there, add an event listener there. That's what you want: a small number of steps that take the old page and turn it into the new page.
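To make "minimum set of changes" concrete, here is a toy, framework-free sketch of diffing two lists and patching a <ul> with only the operations that are needed (an illustration of the idea, not how React is implemented internally):

    // Compare old and new arrays of strings and patch an existing <ul>
    // with the minimum number of DOM operations.
    function patchList(ul, oldItems, newItems) {
        for (var i = 0; i < newItems.length; i++) {
            if (i >= oldItems.length) {
                var li = document.createElement('li');
                li.textContent = newItems[i];
                ul.appendChild(li);                         // item was added
            } else if (oldItems[i] !== newItems[i]) {
                ul.children[i].textContent = newItems[i];   // item changed in place
            }                                               // otherwise: node is reused untouched
        }
        // remove any leftover nodes from the end
        while (ul.children.length > newItems.length) {
            ul.removeChild(ul.lastElementChild);
        }
    }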
I'm working on a single-page app where some parts are really slow. They're slow because I'm displaying 400 complex things in a repeater for the user to scroll through. Each thing is generated by a fairly complex directive that does a ton of data binding and expression evaluation, adds one or two click handlers, and displays a couple of images. In some cases, I also need a grayscale CSS filter on those images, but that really seems way too slow.
I have of course already turned most of my data binding into one-time data binding, but simply generating the 400 things for the first time is still slow. It's initially hidden through ng-if, which speeds it up when I'm not showing it, but once I do need to show it, everything waits 10 seconds for that to happen. I would like to load it in advance, but using ng-show instead of ng-if means the loading of the entire app has to wait for this.
What I would like, is to load the rest of the app, and then, while we wait for user input, start creating these 400 things so they're ready once I need to show them. I don't want the user to notice how slow this is.
Problem is, I have no idea how to do this. Any ideas?
Edit: Having thought about this (and discussed this with my wife), I'm seeing two options (that I conceptually understand, at least):
The easy, quick, boring and cowardly solution is to simply not show the 400 things at the same time, but cut them into pieces and show 40 at a time. That should make it a lot quicker, but also less nice, as the user needs to click around to access all the data.
The interesting solution is to write a new ng-repeat that generates the 400 transcluded copies of the template each in their own asynchronous event, so they don't block user interaction. I really like this idea, but there's one big downside: it's an ambitious idea with deep Angular magic, and I don't have much time available.
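For what it's worth, a middle ground between the two (revealing the things in batches of 40, each batch in its own digest cycle) could look roughly like this sketch using $interval and limitTo; all names here are made up and it is untested:

    // Controller sketch: the markup would use
    //   ng-repeat="thing in things | limitTo: renderLimit"
    app.controller('ThingListCtrl', function ($scope, $interval) {
        $scope.things = loadAllThings();   // hypothetical: the 400 complex things
        $scope.renderLimit = 0;

        var batch = $interval(function () {
            $scope.renderLimit += 40;      // each batch renders in its own digest
            if ($scope.renderLimit >= $scope.things.length) {
                $interval.cancel(batch);
            }
        }, 100);

        // don't leave the interval running if the view goes away early
        $scope.$on('$destroy', function () { $interval.cancel(batch); });
    });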
OK, without seeing your code structure, I'm trying to get clarification through Q&A. If I understand you correctly, I believe the solution is to process your images asynchronously and remove the reliance on creating/processing them on the fly when the view is visible (i.e. via clicking on a button/tab to 'show' the array 'view' and thus triggering the ng-repeat). BTW, this solution assumes the delays are because the images are being processed rather than because they are being shown.
METHOD 1 (less preferred)
To do this, it's best to create an 'ImageDataService' service, which gets kicked off at application start and proceeds with building this array, taking whatever time it needs asynchronously without caring which view is showing. This service will be injected into the proper view or directive controller--perhaps where the current ng-repeat scope is.
Meanwhile, you also need to change the directives inside your ng-repeat to get the pre-built data from the array provided by ImageDataService rather than actually doing the calculation at view time. If the data for that directive is not ready, the directive will show something like 'loading...'; otherwise, the directive will simply show the prebuilt images. This way, it doesn't matter when ng-repeat is triggered (i.e. its view is showing), because it will just show whatever is processed and ready at that point and display the rest as 'loading...'.
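A rough sketch of METHOD 1 (the service, the worker function and the data source are all hypothetical names):

    // Started once at application start; builds the image data in the
    // background regardless of which view is currently visible.
    app.service('ImageDataService', function () {
        var self = this;
        self.images = [];   // pre-built results, indexed like the repeater

        self.buildAll = function (rawItems) {
            rawItems.forEach(function (item, i) {
                processImageAsync(item).then(function (result) {   // hypothetical async worker
                    self.images[i] = result;
                });
            });
        };
    });

    app.run(function (ImageDataService) {
        ImageDataService.buildAll(getRawItems());   // hypothetical data source
    });

The directive inside the ng-repeat then shows ImageDataService.images[i] when it exists, and 'loading...' otherwise.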
METHOD 2 (I prefer this)
Alternatively, and maybe better, you can forego creating a pre-processed array in a service as in METHOD 1 and instead modify your directives to process their data asynchronously by invoking a service method that returns an asynchronous promise like this:
(code inside a controller)
    var promise = ImageDataService.processImage(directiveData);
    promise.then(function (image) {
        // ...set the directive image attributes from the processed result...
    });
Needless to say, the ImageDataService.processImage() method needs to return a promise when it is done processing the data. The directive, as usual, will show 'loading...' until then. Also, although ImageDataService no longer needs to pre-populate its array as in METHOD 1, it might be a good idea to save the processed images to a similar array anyway, to serve as a cache and avoid reprocessing them needlessly. NOTE: in this case you don't need to have processImage() inside a service, but it is really good 'separation of concerns' practice to keep asynchronous data processing (ajax, etc.) in a service, which can be injected app-wide, rather than in a controller.
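A sketch of what such a service might look like, using $q/$timeout and a simple cache (the cache key and the worker function are made-up names):

    app.service('ImageDataService', function ($q, $timeout) {
        var cache = {};   // keep processed results so they are not rebuilt needlessly

        this.processImage = function (directiveData) {
            var key = directiveData.id;            // hypothetical cache key
            if (cache[key]) {
                return $q.when(cache[key]);        // already processed: resolve immediately
            }
            // defer the expensive work so it doesn't block the current digest;
            // $timeout returns a promise that resolves to the callback's return value
            return $timeout(function () {
                cache[key] = expensiveImageWork(directiveData);   // hypothetical worker
                return cache[key];
            }, 0);
        };
    });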
Either of these 2 general tacks should work.
I was wondering whether I could see any significant performance difference in my Angular app (specifically in view processing) if I changed the ng-app and ng-controller attribute placement from, say, body to some inner-page block element with a much smaller DOM subtree.
Suppose I have a very long page with a huge amount of content, but only part of it is Angular-powered. Everything else is server generated, which means it's essentially static from the client-side point of view.
Would it be better to put ng-app/ng-controller on only that subnode where Angular actually executes or would it be the same if I put them on body element of this very long page?
Does Angular process the view only of the sub-DOM where ng-app/ng-controller are defined or does it process the whole DOM anyway?
Is there any proof of this, or even anything about it in the Angular documentation?
Theoretically/Potentially? Yes, as New Dev mentions, the bootstrap function will have a larger DOM to run compile on, which would take longer than compiling a smaller tree.
Practically? Probably not. But your best bet is to benchmark your own page.
As an experiment, you can try the following. I generated a simple random DOM and injected it into a JSFiddle with console.time starting at script load, and ending when the controller is ready. There's a small sub-tree beside (as a sibling) a much larger, 5000-node tree.
Here's the fiddle wrapping the entire body: http://jsfiddle.net/gruagq8d/
And here's the fiddle where only the small subset is used: http://jsfiddle.net/h0hod2j8/
For me, running either of those fiddles repeatedly converges on about 260ms.
I've also tried running similar code on real webpage sources, such as StackOverflow itself, and found the same results (however I did not publish any of these because publishing other people's real pages to JSFiddle does not feel proper without permission) - you can try this for yourself pretty trivially.
Admittedly, this doesn't follow great benchmarking methodology, but if wrapping lots of additional DOM nodes caused significantly different performance, I'd still expect at least some difference in these.
I don't think that the additional compile time is an issue; for small DOMs it's obviously fast, and for a large DOM it is small compared to the time it takes your browser to build the DOM in the first place.
All said, as mentioned before, your best option is to try to run similar benchmarks with your own page.
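If you want to run a similar benchmark on your own page, the timing harness is roughly of this shape (module and controller names here are illustrative, not the exact fiddle code):

    // as early as possible, before the rest of the scripts run
    console.time('bootstrap');

    angular.module('benchApp', [])
        .controller('BenchCtrl', function () {
            // the controller is constructed while Angular compiles/links the page,
            // so this roughly marks "app is ready"
            console.timeEnd('bootstrap');
        });

The markup then carries ng-app="benchApp" and ng-controller="BenchCtrl" on whichever element you want to measure.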
EDIT: Modifying the same benchmark to test digest cycles shows that there's no significant difference in those as well:
Wrapping minimal: http://jsfiddle.net/fsyfn1ee/
Wrapping whole DOM: http://jsfiddle.net/04tdwxhp/
ng-app really just specifies the root element on which Angular calls angular.bootstrap. bootstrap starts a "compilation" process, which is to say that it goes over each DOM element in the subtree of the root and collects directives from it and links them.
Right there, you can see the benefit of restricting the app to a smaller subtree of the DOM:
The compilation process is slightly faster
Fewer directives are compiled (e.g. <input> elements that should not be part of the app are not compiled/linked).
The thing to remember is that it is the $digest cycle that is the significant source of performance problems and opportunities for optimization - that is, the number of $watchers and how fast those $watchers are.
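If you want to see how many $watchers a page has, a rough console snippet like this works (it assumes the app is bootstrapped on or above document.body, and it pokes at $$-prefixed private properties, so treat it as a debugging aid only):

    (function countWatchers() {
        var root = angular.element(document.body).injector().get('$rootScope');
        var total = 0;
        (function visit(scope) {
            total += (scope.$$watchers || []).length;
            for (var child = scope.$$childHead; child; child = child.$$nextSibling) {
                visit(child);
            }
        })(root);
        console.log('approximate watcher count:', total);
    })();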
The official docs say that only the portion that is wrapped in the ng-app directive is compiled.
If the ng-app directive is found then Angular will:
load the module associated with the directive.
create the application injector
compile the DOM treating the ng-app directive as the root of the compilation. This allows you to tell it to treat only a portion of the DOM as an Angular application.
This is pretty much expected, as Angular allows you to have several Angular modules controlling independent parts of one page (this requires manual bootstrapping, as pointed out by Josiah Keller in the comments), and their scopes will not interfere.
However, adding extra static HTML (not Angular-bound) affects performance only at bootstrap. Yes, Angular has to compile all those elements at bootstrap to learn whether it should deal with them later. But run-time performance is mostly affected by $watches. The implicit way of creating them is through bindings, so the more bindings you have overall, the longer each $digest cycle takes, and that gives a general feeling of a slow app. I've found roughly 2k watches to be a sensible threshold for modern browsers/CPUs.
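For example, two independent Angular islands on an otherwise static page would be wired up roughly like this (the element ids and module names are made up):

    <!-- two small Angular-powered regions inside a mostly static page -->
    <div id="cartApp">...</div>
    <div id="chatApp">...</div>

    <script>
        // no ng-app attribute anywhere: bootstrap each module manually
        // on its own subtree, so the rest of the page is never compiled
        angular.element(document).ready(function () {
            angular.bootstrap(document.getElementById('cartApp'), ['cart']);
            angular.bootstrap(document.getElementById('chatApp'), ['chat']);
        });
    </script>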
I was wondering why, even in a simple SPA application with AngularJS, there seems to be DOM leakage. I may be misinterpreting this, but the way I look at it, allocated DOM elements are not being released properly.
The procedure to reproduce is as follows:
navigate to the page on the screenshot with simple AngularJS application
turn on timeline recording in developer tools
force garbage collection
add an item, and then remove it
force garbage collection
repeat the last two steps at least 3 times
On the screenshot you can see that after you add an item and remove it, there seem to be two more DOM elements left after garbage collection (a jump from 502 to 504 DOM elements).
I was hoping that someone could shed some light on this before I dig deeper into investigating what is happening. The reason for this test is a more complicated AngularJS SPA that I am working on, which also seems to leak memory.
I'm doing a similar thing now. What I've noticed is a couple of things:
1) Look at any usage of $(window).fn() -- where fn() is any call that attaches a handler to the window object; if you're doing that more than once, you're adding multiple event handlers to the global object, which causes a memory leak.
2) $rootScope.$watch() -- similarly, if you're doing this more than once (say, every time a directive loads) then you're adding multiple handlers to the global object.
3) In my tests (where I go back and forth between two pages) it seems that Chrome consumes a large amount of memory (in my case, almost 1GB) before garbage collection kicks in. Maybe when you click "collect garbage" Chrome is not actually doing a GC? Or it's a GC for JavaScript but not for DOM elements?
4) If you add an event handler to a DOM element and then remove the element from the DOM, the handler never gets GC'ed.
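Points 1, 2 and 4 usually have the same cure: register the handler once per instance and tear it down in the scope's $destroy listener. A rough sketch of that pattern in a directive (the directive name and the watched expression are made up):

    app.directive('resizeAware', function ($rootScope) {
        return {
            link: function (scope, element) {
                function onResize() { /* ... */ }

                // global handler: added once per directive instance...
                angular.element(window).on('resize', onResize);

                // $rootScope.$watch returns a deregistration function
                var unwatch = $rootScope.$watch('someGlobalValue', onResize);

                scope.$on('$destroy', function () {
                    // ...and removed again when the element/scope goes away
                    angular.element(window).off('resize', onResize);
                    unwatch();
                });
            }
        };
    });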
Recently I had the opportunity of building a new web application, and thought of trying out Angular to get a good understanding of it. So yeah, I'm fairly new to this framework.
After understanding the nuances of the framework, I found it surprisingly easy to work with. Everything about my experience had been just great, until users started reporting the utterly laggy performance of the application.
The application is fairly simple—it's got 2 screens. One which shows a list of deals, and another where users can add/edit deal information—this second page is a simple form expecting the user to enter deal related information. It looks like this:
The outlined sections are rendered using ng-repeat. The retailers list has some 530 entries whereas the brand list has about 400 entries.
After a bit of profiling, I figured out that visiting this second form screen would keep on increasing the memory consumption of the browser. The first screen doesn't have any such effect. I simply toggled between the first screen and this second form screen, and found that every time this screen would get loaded, the memory consumption would spike by 50-75 MB. Eventually, the browser would just freeze up. Here's how the memory profile looks:
As you can see, the consumption keeps going up, and there's no sign of any GC! Every spike in the node count and memory trace correspond to a visit to the second form-based screen.
Now I have already checked out a whole lot of issues around angular and memory consumption, but each of them mentions that the $scope for any of the views will get removed when a new view loads. The DOM node count certainly doesn't indicate such a thing for me :/
I also came across 2 important points related to the usage of ng-repeat:
Avoid invocation of any function within the ng-repeat directive.
Don't have a two-way binding using ng-model within an ng-repeat directive.
Both of these I've avoided in the second screen, and yet, the memory consumption is going through the roof.
My question might seem to be yet another memory related question w.r.t angular, but I've really tried to get some sort of closure on this and haven't found one.
Would really appreciate any assistance on this, as my decision to progress with the usage of angular for the rest of the portal hinges on solving this issue.
Thanks for reading!
Update 1
Based on Ilan's suggestion, let me add that I make use of 2 plugins, one for rendering the dropdown and one for implementing the date-picker.
For the dropdown I'm using Bootstrap-select, and for the date-picker I'm using Bootstrap-datepicker.
For bootstrap-select to work, I had to write a custom directive which emits an event once ng-repeat has rendered its $last element. It looks something like this:
.directive('onFinishRender', function($timeout) {
    return {
        restrict : 'A',
        link : function(scope, element, attr) {
            // only the last ng-repeat iteration has $last === true
            if (scope.$last === true) {
                // wait a tick so the last element is actually in the DOM
                $timeout(function() {
                    scope.$emit('ngRepeatFinished');
                });
            }
        }
    };
});
Then in the controller, I rely on this event to invoke the render for the dropdown plugin:
$scope.$on('ngRepeatFinished', function(ngRepeatFinishedEvent) {
    $('#retailer').selectpicker('render');
});
For the bootstrap-datepicker, I do not have to do such an elaborate thing, as I only need to wrap the date input field using JS.
Update 2
After turning off the plugins, the memory consumption reduces drastically. However, the problem of a leak still persists. Earlier, whenever the form view was getting loaded, the memory would spike by 50-60 MB. After turning off the plugins, it spikes by 25-35 MB. But as you can see below, the memory consumption keeps on piling up.
I recently spent nights and days finding memory leaks similar to yours. There is no direct answer to your question; you will have to do the research, but I can give you some pointers for finding the leak.
Don't use any other plugins in your Chrome browser except the developer tools when debugging the leak.
The Timeline is good for figuring out that there is a leak, but to actually see the leak, use the Profiler tab. It runs a GC every time you take a heap snapshot and gives you a clue as to whether you have improved or not.
If you are seeing a memory leak in MBs, then it is coming from DOM elements. With the size of the leaks you mention, I can tell that your whole document is hanging around as a detached DOM tree because one or two components on your page are not getting released and are still hanging around as attached DOM.
Remove all the elements from your second page and do the switch to see if memory is increasing. If it is, the first page has the leak; otherwise do the same with the second page.
Once you have located which page has the leak, remove all components from that page and add them back one by one to see when the leak returns.
Hope these steps help you in some way. Also, in case it helps, I have found that using $timeout in a directive can cause leaks.
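On that last point, the usual safeguard is to cancel the $timeout when the scope is destroyed, e.g. inside a directive's link function:

    link: function (scope, element, attr) {
        var pending = $timeout(function () {
            scope.$emit('ngRepeatFinished');
        });

        scope.$on('$destroy', function () {
            $timeout.cancel(pending);   // don't keep the callback (and its scope) alive
        });
    }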