Limit memory usage when using iSAM2 - SLAM

I'm investigating the usage of the incremental smoothing and mapping class (iSAM2) in the GTSAM package for a real-time image navigation problem. The issue is that while the API has methods for adding factors to the factor graph via ISAM2::update(...), there is no clear method for removing these factors. Therefore, in a real-time (not batch) application, the memory required grows without bound.
So the questions are:
Is there an API for removing factors while still maintaining a consistent SLAM solution?
How does the ISAM2 class avoid this memory creep?
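For reference, the kind of incremental update I'm doing looks roughly like this. This is only a minimal sketch: the keys, poses and noise values are made up, and header locations can differ slightly between GTSAM versions.
// Minimal iSAM2 usage sketch (GTSAM 4.x style, illustrative values only)
#include <gtsam/geometry/Pose2.h>
#include <gtsam/nonlinear/ISAM2.h>
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/slam/BetweenFactor.h>
#include <gtsam/slam/PriorFactor.h>

using namespace gtsam;

int main() {
  ISAM2 isam;
  auto noise = noiseModel::Diagonal::Sigmas(Vector3(0.1, 0.1, 0.05));

  // First update: a prior anchoring pose 0.
  NonlinearFactorGraph graph;
  Values values;
  graph.emplace_shared<PriorFactor<Pose2>>(0, Pose2(0, 0, 0), noise);
  values.insert(0, Pose2(0, 0, 0));
  isam.update(graph, values);

  // Later updates: only the new odometry factors and initial guesses.
  graph = NonlinearFactorGraph();
  values = Values();
  graph.emplace_shared<BetweenFactor<Pose2>>(0, 1, Pose2(1, 0, 0), noise);
  values.insert(1, Pose2(1, 0, 0));
  isam.update(graph, values);   // factors keep accumulating across calls

  Values estimate = isam.calculateEstimate();
  return 0;
}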

What are the alternatives to Floodlight Counter tags (in Google Tag Manager)?

Here is the story of the current implementation:
I have over 50 ad campaigns. To track user behavior, I have implemented a Floodlight Counter tag for each of them. However, this eats up a lot of the container size. Therefore, I am looking for a solution with which I can fire Floodlights dynamically, or get a similar result without implementing Floodlights at all.
I have already implemented this solution. However, it increases the load time of the webpage because it relies on a RegEx table (and my RegEx table has over 50 entries).
I am looking for a solution that involves minimal use of custom variables, mostly relying on what is available by default in GTM.
What's available by default in GTM (not counting custom variables) wouldn't be sufficient to cover 50 ad campaigns effectively. You could always create 50 triggers and 50 tags, hardcoding your IDs and maintaining them separately in different tags, but that is quite far from best practice. It may actually be the worst practice, especially given that the container size is hard-limited in GTM.
Even a large rLUT should have a negligible cost, given that the variable it's executed against is reasonably small and the URL is most definitely a tiny string. Have you actually measured the page load speed with and without the rLUT? Have you debugged the load-speed issues and narrowed them down to the rLUT? It's actually quite an achievement to make GTM significantly affect page load speed, given that it's all async and non-blocking.
You could reimplement the rLUT as a Custom JavaScript variable, which may be preferable if you're good at JS, but it won't improve performance. Just as the rLUT won't hinder it.
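If you do go the Custom JavaScript route, a rough sketch of what such a variable could look like (the patterns and activity names below are invented, and it assumes the built-in {{Page Path}} variable is enabled):
// Hypothetical Custom JavaScript variable: map the page path to a Floodlight activity string.
function() {
  var path = {{Page Path}};   // built-in GTM variable
  var campaigns = [
    { pattern: /^\/campaign-a\//, activity: 'activity_a' },
    { pattern: /^\/campaign-b\//, activity: 'activity_b' }
    // ...one entry per campaign
  ];
  for (var i = 0; i < campaigns.length; i++) {
    if (campaigns[i].pattern.test(path)) {
      return campaigns[i].activity;
    }
  }
  return undefined;   // no campaign matched
}
Functionally this is the same lookup the rLUT already does, which is why it won't be faster; it's just easier to keep in one place if you prefer code over table rows.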

React error in render/flush: RangeError: Maximum call stack size exceeded

I've been in the process of rewriting an old AngularJS app in React (it's actually using Preact, chosen by the developer who originally started this project).
This app handles large, deeply nested objects that get displayed via Material UI accordions and tables. The data is more WIDE than deep, but at any rate, React has trouble rendering it all without hitting this RangeError.
I've been dancing around this issue for a while now and have avoided it by strategically managing accordions and not rendering data for accordions that are not open.
I've commonly seen this reported as a recursion issue, and I've carefully reviewed the code to confirm there is no recursion involved. Plenty of iteration, but no recursion.
Please note the stack trace: it's hitting this in the flush() function, which is not in our application code but in the Chrome debugger VM. I've set breakpoints, and it appears to be something related to DOM operations, as the objects being flushed are React elements. Here's a code snippet from the point where the error is hit:
function flush(commit) {
  const {
    rootId,
    unmountIds,
    operations,
    strings,
    stats
  } = commit;
  if (unmountIds.length === 0 && operations.length === 0) return;
  const msg = [rootId, ...flushTable(strings)];
  if (unmountIds.length > 0) {
    msg.push(MsgTypes.REMOVE_VNODE, unmountIds.length, ...unmountIds);
  }
  msg.push(...operations); // <--- error occurs here when operations.length is too long
And the stack trace logged when the error occurs:
VM12639:1240 Uncaught (in promise) RangeError: Maximum call stack size exceeded
at flush (<anonymous>:1240:8)
at Object.onCommit (<anonymous>:3409:19)
at o._commit.o.__c (<anonymous>:3678:15)
at QRet.Y.options.__c (index.js:76:17)
at Y (index.js:265:23)
at component.js:141:3
at Array.some (<anonymous>)
at m (component.js:220:9)
The error occurs when operations is too large. Normally it will be anywhere from a dozen or so in length up to maybe 3000, depending on what's going on, but when I try to load our page displaying the wide/deep nested object this number is more like 150000, which apparently chokes the spread operator.
My sense is that this type of app is a challenge for React. I cannot think of another example of a React app that displays data the way we do here. If anyone here has experience with this sort of dataset and can offer suggestions as to how to make this work, please share.
My guess is I'm going to need to somehow break this object up into smaller chunks that represent smaller updates, but I'm posting here in case there's something I can learn.
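For reference, the limit itself is easy to reproduce outside the app; this is only an illustration and the exact threshold depends on the JS engine:
var operations = new Array(150000).fill(0);
var msg = [];
// msg.push(...operations);   // spreads 150k call arguments at once -> RangeError in Chrome
for (var i = 0; i < operations.length; i++) {
  msg.push(operations[i]);    // a plain loop (or chunked pushes) stays within the limit
}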
It looks similar to this open issue on the React repo, only it happens in a different place (also in dev tools). Might be worth reporting your issue there too. So probably React is otherwise "fine" rendering this amount of elements, though you'll inevitably get slow performance.
Likely the app is just displaying too much data, or doing it inefficiently.
but when I try to load our page displaying the wide/deep nested object this number is more like 150000, ...
150000 DOM operations is a really high number. Either your app really does display a whole lot of elements, or the old AngularJS app had too many wrapper elements and these were preserved. Since you mention it concerns data tables, it's probably the former. In any case, complex applications always need some platform-specific optimization.
If you can give an idea about the intended use case, or even better, share (parts of) the code, that would help others to give more targeted advice. Are the 150k operations close to what would happen in real world usage, or is it just a very inflated number for stress testing? Do you see any other performance regressions, compared to the Angular app, with very complex objects? How many tables are on the screen at a time?
A few hundred visible elements on the screen already gets quite cramped. So where would all these extra operations be coming from? Either you're loading a super long page of which a user can only see a few percent at a time, or the HTML structure is unnecessarily deeply nested.
Suggested performance improvements
I wouldn't say React isn't suitable for really large amounts of data, but you do need to watch out for some things yourself. React is only your vehicle for applying changes to the DOM. Putting a large number of elements in the DOM is always going to lead to decreased performance and is something you usually want to avoid.
In this case you could consider whether it's necessary to display all of the table's data, which probably accounts for the bulk of the operations. Using pagination would resolve the problem, and might even make the table more user friendly.
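A rough sketch of what that could look like (names like MyComplexDataRow are placeholders; with Preact you'd import useState from preact/hooks instead):
import { useState } from 'react';

function PagedTable({ rows, pageSize = 50 }) {
  const [page, setPage] = useState(0);
  const lastPage = Math.max(0, Math.ceil(rows.length / pageSize) - 1);
  // Only the current page's rows ever reach the DOM.
  const visible = rows.slice(page * pageSize, (page + 1) * pageSize);
  return (
    <>
      <table>
        <tbody>
          {visible.map(row => <MyComplexDataRow key={row.id} row={row} />)}
        </tbody>
      </table>
      <button onClick={() => setPage(p => Math.max(0, p - 1))}>Previous</button>
      <button onClick={() => setPage(p => Math.min(lastPage, p + 1))}>Next</button>
    </>
  );
}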
If that's not an option, you could use a library like react-lazyload to show/hide the items as they enter/exit the visible part of the table. To achieve this, use its unmountIfInvisible prop. You can then replace a complex data row with a single element that has the same height. The latter is important to preserve the scroll height.
<LazyLoad
  height={100}
  offset={100}
  unmountIfInvisible
  placeholder={<tr height={100}/>}
>
  <MyComplexDataRow />
</LazyLoad>
This way your data table never consists of many more complex elements than can be seen in the viewport. You'll probably need to tune the offset a bit so that content is always ready in time as it's being scrolled.

Memory growth of an application in the browser while running it for several hours

I am stuck with the memory growth of my application, and since it is a single-page app I can't even reload it. After running the application for around 5-6 hours, memory usage reaches about 600 MB from an initial 120 MB. We made some fixes for this, such as setting refs to null in componentWillUnmount(), and after the same test over the same period memory dropped to 400 MB, but in the heap snapshots taken with Chrome's built-in tooling I can still see a lot of detached elements, which must be caused by other parts of the code. So is there any way to remove all the detached elements when leaving a certain page, and why doesn't the browser remove them from memory, given that detached elements retain part of it?
A DOM node can only be garbage collected when there are no references to it from either the page's DOM tree or JavaScript code.
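A minimal, made-up example of what keeps a detached node alive:
let detached = document.createElement('div');
document.body.appendChild(detached);
document.body.removeChild(detached); // gone from the DOM tree...
// ...but still reachable through the `detached` variable, so it cannot be collected
detached = null;                     // dropping the last reference makes it collectable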
I suggest you take a look at your code and see whether there are functions running when you don't intend them to. If you use React or similar frameworks, you have to be careful with their lifecycle (important!).
Also see https://developers.google.com/web/tools/chrome-devtools/memory-problems/
There is a lot of useful information there, such as:
- Investigate memory allocation by function
- Spot frequent garbage collections
Is there any way to remove all the detached elements when leaving a certain page, and why doesn't the browser remove them from memory?
I can't offer any more accurate assumptions or suggestions when all the information we have is "I use JavaScript". The countless consequences of countless combinations of libraries, stacks and techniques make this impossible to guess.

How to model nested lists with many items using the Google Drive Realtime API?

I'd like to model ordered nested lists of uniform items (like what you would see in a standard tree widget) using the Google Drive realtime API. These trees could get quite large, ideally working well with many thousands of items.
One approach would be:
Item:
  title: CollaborativeString
  attributes: CollaborativeMap
  children: CollaborativeList // recursively holds other Items
But I'm unsure whether this is feasible when dealing with a large number of items.
An alternative might be to store all items in tree order in a single CollaborativeList and add an additional "level" attribute, then reconstruct the tree structure from those levels on the client. That would mean maintaining a single big CollaborativeList instead of thousands of them. There are probably lots of other alternatives that I don't know about.
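For that second approach, the client-side reconstruction could be something like this sketch (plain JS, item fields as in the model above, untested):
// Rebuild a tree from a flat list in document order, where each item carries a `level` (depth).
function buildTree(flatItems) {
  const root = { children: [] };
  const stack = [{ node: root, level: -1 }];
  for (const item of flatItems) {
    const node = { title: item.title, attributes: item.attributes, children: [] };
    // Pop until the top of the stack is this item's parent.
    while (stack[stack.length - 1].level >= item.level) {
      stack.pop();
    }
    stack[stack.length - 1].node.children.push(node);
    stack.push({ node: node, level: item.level });
  }
  return root.children;
}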
Thanks for any pointers on the best way to model this in the Google Drive Realtime API.
So long as the total size of the document is within the size limits, there shouldn't be a significant performance difference between the approaches from a framework perspective. (One caveat, using ObjectChangedListeners with a highly connected graph may slow things down. Prefer registering listeners on the specific objects instead.)
Modeling it as a real tree makes sense, since that will be the easiest to work with, and you can use the new move operation to atomically rearrange items in the lists.

Can I get the instances of live objects of a certain type in C#?

This is a C# 3.0 question. Can I use reflection or the memory management classes provided by the .NET Framework to count the total number of live instances of a certain type in memory?
I could do the same thing using a memory profiler, but that requires extra time to dump the memory and involves third-party software. What I want is only to monitor a certain type, and I want a lightweight method that fits easily into unit tests. The purpose of counting live instances is to ensure I don't have any unexpected live instances causing a "memory leak".
Thanks.
To do it entirely within the application you could use an instance counter, but it would need to be explicitly coded and managed inside each class; there's no silver bullet that I'm aware of that lets you query the framework from within the executing code to see how many instances are alive.
What you're asking for is really the domain of a profiler. You can purchase one or build your own, but it requires your application to run as a child process of the profiler. Rolling your own isn't an easy undertaking, by the way.
If you want to consider the instance counter it would have to be something like:
public class MyClass : IDisposable
{
    public MyClass()
    {
        ++ClassInstances;
    }

    public void Dispose()
    {
        --ClassInstances;
    }

    private static readonly object _ClassInstancesLock = new object();
    private static int _ClassInstances;

    private static int ClassInstances
    {
        get { lock (_ClassInstancesLock) { return _ClassInstances; } }
        set { lock (_ClassInstancesLock) { _ClassInstances = value; } }
    }
}
This is just a really rough sample: no tests for compilation, 0% guarantee of thread-safety (critical for this type of approach), and it leaves the door wide open for Dispose to be called and the instance counter to decrement while the object never actually gets GC'd. To diagnose that bundle of joy you'll need, you guessed it, a professional profiler, or at least WinDbg.
Edit: I just noticed the very last line of your question and need to say that my approach above, as shoddy and failure-prone as it is, is almost guaranteed to deceive you about the true number of instances if you're experiencing a leak. The best tool, IMO, for attacking these problems is ANTS Memory Profiler. Version 5 is a double-edged sword in that they split the Performance and Memory profilers into two separate SKUs (they used to be bundled together), but Memory Profiler 5.0 is absolutely lightning fast. Profiling these problems used to be slow as molasses, but they've gotten around that somehow.
Unless this is a personal project with zero intent of redistribution, you should invest the few hundred dollars needed for ANTS, but by all means use its trial period first. It's a great tool for exactly this kind of analysis.
The only way I see to do this without any form of instrumentation is to use the CLR Profiling API to track object lifetimes. I'm not aware of any APIs available to managed code that do the same thing, and, as far as I know, the CLR doesn't keep a list of live objects anywhere (so even with the Profiling API you have to build the data structures for that yourself).
VB.NET has a feature that lets you track objects in the debugger, but it actually emits additional code specifically for that (which basically registers all created objects in an internal list of weak references). You could do the same, using e.g. PostSharp to post-process your assemblies.
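Without PostSharp, the same idea can be sketched by hand; this is only an illustration (the tracked type has to call Register itself, and the names are made up):
using System;
using System.Collections.Generic;

// Tracks instances via weak references, so the tracker never keeps them alive itself.
public static class InstanceTracker<T> where T : class
{
    private static readonly object _lock = new object();
    private static readonly List<WeakReference> _instances = new List<WeakReference>();

    // Call from the tracked type's constructor.
    public static void Register(T instance)
    {
        lock (_lock) { _instances.Add(new WeakReference(instance)); }
    }

    // Number of registered instances that have not yet been collected.
    public static int AliveCount()
    {
        GC.Collect();                    // force a collection so the count is meaningful
        GC.WaitForPendingFinalizers();
        lock (_lock)
        {
            _instances.RemoveAll(wr => !wr.IsAlive);
            return _instances.Count;
        }
    }
}
In a unit test you would then assert that AliveCount() drops back to zero after the code under test releases its objects.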
