ExtJS 3.* chart uses too much memory

I am using an Ext JS line chart with over 5,000 data points. It uses too much memory, especially in IE. How can I fix this leak, and what causes it?

Both displaying 5k+ data points and processing them on the client side are bad design decisions and should be avoided. Nobody can possibly comprehend that much data in one chart; it should be 10-12 points max or it becomes meaningless white noise. Client-side processing in JavaScript is expensive, especially in older IEs; not only that, but you are also wasting time and resources transferring data that is never going to be used.
The best solution is to modify your server-side method to filter or aggregate the data, and to provide UI access to these features.
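For illustration, a minimal sketch of the kind of aggregation the server could apply before the data ever reaches the chart; the function and parameter names here are hypothetical, and the same bucket-averaging idea works in whatever language your backend uses:

    // Reduce a series to at most maxPoints by averaging fixed-size buckets.
    function downsample(points, maxPoints) {
        if (points.length <= maxPoints) {
            return points;
        }
        var bucketSize = Math.ceil(points.length / maxPoints);
        var result = [];
        for (var i = 0; i < points.length; i += bucketSize) {
            var bucket = points.slice(i, i + bucketSize);
            var sum = 0;
            for (var j = 0; j < bucket.length; j++) {
                sum += bucket[j].y;
            }
            result.push({ x: bucket[0].x, y: sum / bucket.length });
        }
        return result;
    }

The chart's store then only ever receives the reduced series, e.g. store.load({ params: { maxPoints: 500 } }).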

Actually I have the same question. I do agree it's not worth loading that much data, but there are cases where I need to show drilling data in my ExtJS window,
whose image is displayed below:
Here the points may go up to 15k. The user actually wants to see the variations rather than the exact values, but he may still need mouse-over / zoom.
I achieved this with an HTML5 plugin, added as an iframe.
Any suggestions on how to achieve this in ExtJS, or on the best way to approach it?

Related

What is the best approach to reduce the Redux state size?

I have a new project: a mobile app built with React Native.
I am thinking about using Redux to manage all the data from the remote server API; our product has a lot of business data to display in the mobile app.
So my question is: the Redux store will hold our business data and take up memory on the mobile device, for example when backing a ListView component. How can I reduce that memory usage?
Based on your background description of what you're trying to do, I am choosing to address the underlying concern about the size of your Redux store generally and the approach of storing everything on the client, and will not address how to actually reduce the size of your data store here (the only answer to that is simply "don't store so much").
That having been said, if you need some sort of gut check on whether memory/storage will be an issue, take a representative chunk of the record data served by your API, serialize it as a JSON string, and figure out how big it is.
For example, this example Twitter response is roughly 8.5 KB with whitespace removed. Let's say 10 KB for each individual record, for simplicity.
Now, how many records do you plan on bringing down? 10? 100? 1000? Let's say 1000 records of this type. That's 10,000 KB, or roughly 10 MB.
This is just a total swag: it ignores things like compression, data duplication, and the difference between storing something in AsyncStorage versus simply holding it in memory.
With those caveats, 10 MB may or may not be a trivial amount of memory/storage for your application to use, depending on the specific constraint you're concerned about.
You need to apply this same process to your particular use case and see whether the amount of data you wish to store will be a problem for the devices you have to support.
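As a rough illustration of that measurement (sampleRecord is a hypothetical variable holding one representative record from your API):

    // Gut check: serialize one representative record and measure it.
    const json = JSON.stringify(sampleRecord);
    const bytes = json.length; // UTF-16 code units; close enough to bytes for ASCII-heavy JSON
    const totalMB = (bytes * 1000) / (1024 * 1024); // projected size of 1000 records
    console.log(bytes + ' B per record, ~' + totalMB.toFixed(1) + ' MB for 1000 records');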
A more relevant thing to consider is the performance impact of churning through large quantities of data on a single thread (data manipulation, joining/merging, and so on), if that will be a need.
Redux is a tiny library that doesn't actually do that much for you by itself. This performance consideration is entirely specific to your own application and cannot be concretely answered here.

Server side responsiveness good practices

I have the following scenario: I work with CakePHP and Twitter Bootstrap.
I'm using a lot of responsive features and writing lots of HTML that changes for each screen size.
I was thinking about detecting the screen size and saving it on the server side, so I can render only the HTML that will really be useful on that page. Since the user rarely changes the window size, it wouldn't hurt to require an F5 when they do.
Is it a good practice? What do you suggest?
That job is clearly supposed to be done by the browser, and your task is to make that happen by using the right CSS for the right device. As somebody already mentioned in the comments, the device could rotate, and devices have different DPIs. I'm pretty sure you can't pass the DPI to the server without explicitly reading it via JS (if that's possible at all) and passing it to the server in an AJAX call.
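For what it's worth, that JS-plus-AJAX route would look roughly like this (the /screen-info endpoint is hypothetical), which shows how clumsy it is compared to letting CSS do the job:

    // Read what the browser exposes and ship it to the server ourselves.
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/screen-info'); // hypothetical endpoint in your CakePHP app
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.send(JSON.stringify({
        dpr: window.devicePixelRatio || 1, // missing in older browsers
        width: screen.width,
        height: screen.height
    }));

And this still goes stale the moment the device rotates or the window is resized.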
You can't rely on server-side device detection alone, nor is it good to render markup conditionally for that purpose; it just multiplies the amount of maintenance you have to do for each device.
I recommend you read a little more about responsive web design; there are plenty of articles and books about it these days.
Not really a good practice, mate. You'll increase your application load and waste unnecessary space just to save a few lines of code. Imagine you're building a website like Facebook: how much server space would it take just to store that piece of information for millions of users? Responsive design is a must these days, and in CSS you should just give general values for ranges of resolutions. There's a good paid tutorial on the Treehouse website, I think, and others are freely available on YouTube etc.
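By "general values for ranges of resolutions" I mean ordinary CSS media queries; a minimal sketch (the breakpoints and class name are illustrative, roughly matching common Bootstrap conventions):

    /* Phones: hide the secondary column */
    @media (max-width: 767px) {
        .sidebar { display: none; }
    }
    /* Tablets and up: show it alongside the content */
    @media (min-width: 768px) {
        .sidebar { display: block; width: 25%; }
    }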

Architectural Design for a Data-Driven Silverlight WP7 app

I have a Silverlight Windows Phone 7 app that pulls data from a public API. I find myself doing much of the same thing over and over again:
In the UI, set a loading message or loading progress bar in place of where the content is
Get the content, which may be already in memory, cached in isolated file storage, or require an HTTP request
If the content cannot be acquired (no network connection, etc.), display an error message
If the content is acquired, display it in the UI
Keep the content in main memory for subsequent queries
The content that is displayed to the user can be taken directly from a data source, such as an ObservableCollection, or it may be a query on a data source.
I would like to factor out this repetitive process into a framework where ideally only the following needs to be specified:
Where to display the content in the UI
The UI elements to show while loading, on failure, and on success
The URI of the HTTP request
How to parse the HTTP response into the data structure that will be kept in memory
The location of the file in isolated storage, if it exists
How to parse the file contents into the data structure that will be kept in memory
It may sound like a lot, but two strings, three FrameworkElements, and two methods is less overhead than what I currently have.
Also, this needs to work for however the data is maintained in memory, and needs to work for direct collections and queries on those collections.
My questions are:
Has something like this already been implemented?
Are my thoughts about the topic above fundamentally wrong in some way?
Here is a design I'm thinking of:
There are two components, a View and a Model.
The View is given the FrameworkElements for loading, failure, and success. It is also given a reference to the corresponding Model. The View is a UserControl that is placed somewhere in the UI.
The Model is a class that is given the URI for the data, a method for parsing the data, and optionally a filename and a method for parsing the file. It is responsible for retrieving the data and notifying the View whenever the current status (loading/fail/success) changes. If the data downloaded from the network differs from the cache, the network data takes precedence. When the app closes or is tombstoned, the Model writes the data to the cache.
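To make this concrete, here is a rough sketch of the Model I have in mind (all names are illustrative, not an existing API; the isolated-storage read/write is elided):

    using System;
    using System.Net;

    public enum LoadStatus { Loading, Failed, Loaded }

    public class CachedResource<T>
    {
        private readonly Uri uri;
        private readonly string cacheFileName;            // optional isolated-storage location
        private readonly Func<string, T> parseResponse;   // HTTP body -> in-memory data
        private readonly Func<string, T> parseCacheFile;  // cached file -> in-memory data

        public T Data { get; private set; }
        public LoadStatus Status { get; private set; }
        public event EventHandler StatusChanged;          // the View subscribes to this

        public CachedResource(Uri uri, Func<string, T> parseResponse,
                              string cacheFileName, Func<string, T> parseCacheFile)
        {
            this.uri = uri;
            this.parseResponse = parseResponse;
            this.cacheFileName = cacheFileName;
            this.parseCacheFile = parseCacheFile ?? parseResponse;
        }

        public void Load()
        {
            Status = LoadStatus.Loading;
            RaiseStatusChanged();

            // (Memory / isolated-storage lookup would happen here first.)
            var client = new WebClient();
            client.DownloadStringCompleted += (s, e) =>
            {
                if (e.Error == null)
                {
                    Data = parseResponse(e.Result); // network data wins over the cache
                    Status = LoadStatus.Loaded;
                }
                else
                {
                    Status = LoadStatus.Failed;     // no network connection, etc.
                }
                RaiseStatusChanged();
            };
            client.DownloadStringAsync(uri);
        }

        private void RaiseStatusChanged()
        {
            var handler = StatusChanged;
            if (handler != null) handler(this, EventArgs.Empty);
        }
    }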
How does that sound?
I took some time to have a good read of your requirements and noted some thoughts to offer as a sounding board.
Firstly, for repetitive tasks with common behaviour, this is definitely the way to approach it. You are not alone in thinking about this problem.
People doing a bunch of this sort of thing may have created similar abstractions; however, to my knowledge none have been publicly released.
How far you go with it may depend on whether you intend it to be just for your own use and for those with very similar requirements, or whether you want to handle more general cases and make a product that is usable by a very wide audience.
I'm going to assume the former, but that does not preclude the possibility of releasing it as an open source project that can be developed further and/or forked.
By not trying to cater for all possibilities you can make certain assumptions about the nature of the consuming implementation, and in particular about its UI design choices.
I think overall your thinking is in the right direction. While reading some of your high-level thoughts I considered ways a few things could be simplified (a good thing) while still delivering a compelling UI.
On your initial points.
You could just assume a performant indeterminate progress bar is being passed in.
Do this if it's important to you, but you could be buying yourself into some complexity here handling different caching requirements (variance in duration, dirty handling). It may be sufficient to lean on the platform's inbuilt caching of URLs (which some people have found gets in their way).
Handling network connectivity - yep, this is repetitive and somewhat intricate; a perfect candidate for a general solution.
Updating the UI... arguably it's better to just return data and defer decisions regarding the presentation and format of the data to your individual clients.
Content in main memory - see above on caching.
On your potential inputs.
Where to display content - see above: return data and defer presentation choices to the client.
I would go with a UI element for the progress indicator, again a performant progress bar. Regarding communication of failure, I would consider implementing this as a Completed event which you publish (see the sketch after this list). Then through the event arguments you can communicate the result and defer handling to the client, which can place that result in some presentation control/log/whatever. This is consistent with patterns used by the .NET Framework.
URI - yes, this gets passed in.
How to parse - passing in a delegate that converts a stream or string into an object whose type is decided by the client makes sense.
Location of the cache - you could pass this in if generalising matters to you, or hardcode its path. It would be more useful to others if passed in (consider whether you handle folders/creation).
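To make the Completed-event suggestion concrete, a hedged sketch following the usual .NET event pattern (all names here are illustrative):

    using System;

    // The generic loader publishes a single Completed event; the client
    // decides how to present success or failure.
    public class LoadCompletedEventArgs<T> : EventArgs
    {
        public T Result { get; set; }
        public Exception Error { get; set; } // null on success
    }

    // Hypothetical client usage:
    // loader.Completed += (s, e) =>
    // {
    //     if (e.Error != null) statusText.Text = e.Error.Message;
    //     else itemsList.ItemsSource = e.Result;
    // };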
On the implementation.
You could go with a UserControl if it works for you to be bound by that assumption. It would be more flexible, though, and arguably equally simple/elegant, to push presentation back to the client for both the data display and the status messages, and to control hide/display of the progress bar that was passed in.
Perhaps you would go so far as to assume the status messages will always be displayed in a TextBlock (if passed) and shift that housekeeping from each of your clients into your generic class.
I suspect you will still benefit from not coupling the data format and the presentation.
Tombstone handling: I would recommend some testing of the platform's inbuilt caching of URLs here, to see whether its durations/dirty conditions work for your general cases.
Hopefully this gives you some things to think about and some reassurance that you're heading down the right path. There are many ways you could go about this; which path is best will ultimately be driven by your goals.
I'm developing a WP7 application which is basically a client of an existing REST API. The server returns data in JSON. With the help of the JSON.NET library (http://json.codeplex.com/) I was able to deserialize it directly into my .NET C# classes.
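For illustration, that deserialization boils down to something like this (Item stands in for one of my data classes):

    using System.Collections.Generic;
    using Newtonsoft.Json; // JSON.NET

    public class Item // placeholder for one of your own classes
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public static class ResponseParser
    {
        // json is the response body, e.g. e.Result from DownloadStringCompleted
        public static List<Item> Parse(string json)
        {
            return JsonConvert.DeserializeObject<List<Item>>(json);
        }
    }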
I store the data locally to handle the offline scenario and to avoid a call to the server every time the user launches the application. I provide two ways to refresh the data: manually and/or after a period of time. To store the data I use Sterling (http://sterling.codeplex.com/), a simple but easy-to-use local database for Silverlight/WP7.
The biggest challenge is handling the asynchronous communication with the server. I provide clear UI feedback (progress bar and/or loading wheel) to let the user know what's going on.
On a side note, I'm using the MVVM Light toolkit and Silverlight unit testing to run integration tests: ViewModel => my local client code => server. (http://code.google.com/p/nunit-silverlight/wiki/NunitTestsWp7)

3rd Party Silverlight Grid Control

We are going through the process of selecting a 3rd-party suite of controls for Silverlight 4.0. We're mostly interested in a feature-rich grid control. I'm surprised to find that most of the products out there focus on client-side paging, filtering, sorting, and grouping. But if the dataset is large enough to benefit from these functions, isn't it also too big to bring to the client in one call? And doesn't this make most of the advertised fancy grid features useless? In my opinion, 200 rows is an ideal upper limit on how much I'd request from the server in one call. Yet the sites for Telerik, DevExpress, ComponentOne, Xceed, and others all have fancy demos that bring 10,000+ rows of data to the client and show off the ability to page, filter, group, and sort them. Who brings 10,000+ rows to the client? What if you have thousands of concurrent users? What if that data is volatile? What use case does this really address?
Can you share your experiences with any of these control suites, and whether you've implemented paging? Also, are you using RIA Services?
Thanks.
You don't need a third-party grid control to achieve server-side paging. You can use the grid control and ObjectDataSource provided by the Silverlight Toolkit: http://silverlight.codeplex.com/
http://borrell.parivedasolutions.com/2008/01/objectdatasource-linq-paging-sorting.html
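The server side of that paging (as in the linked post) essentially comes down to a stable ordering plus Skip/Take. A minimal sketch, where Order and MyDataContext are hypothetical stand-ins for your own types:

    using System.Collections.Generic;
    using System.Linq;

    public class OrderService
    {
        // Hypothetical LINQ to SQL / Entity Framework context.
        private readonly MyDataContext dataContext = new MyDataContext();

        // pageIndex is zero-based; pageSize is the client's page length.
        public IEnumerable<Order> GetOrders(int pageIndex, int pageSize)
        {
            return dataContext.Orders
                .OrderBy(o => o.Id)         // a stable order is required before Skip/Take
                .Skip(pageIndex * pageSize)
                .Take(pageSize)
                .ToList();
        }
    }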
I agree with you; it can be crazy for a client to want to view an entire year's worth of data all at the same time, but sometimes the client (and the product managers) don't see things the way you do and insist upon doing stupid things...
In any case, just because a demo is paging through 1 million records doesn't mean they are bringing them all to the client. You also have to consider the scenario where you have 200 rows' worth of data but can only show 10 rows at a time because of the data templates you are using (you may only fit 10 rows to a page); you can still retrieve all 200 rows, because it is simply your presentation that is using up the physical room. You can also implement paging and retrieve the next page's worth of data when it is requested (which introduces a small delay, but can be well worth it). Possibly the best way to deal with this is to not give the user the ability to retrieve squillions of records at once; if you give them that feature they will use it, and then they will also complain about its performance.
As for fast client-side sorting/grouping/filtering, that is a real-world necessity. It is common for our users to fetch many thousands of records from the server, then use the filters (which I have extended) to view a handful of records at a time, operate on those records, then modify the filters to view a different bunch. It is important to have these functions working fast because it makes a huge difference to the user experience. I trialled several different component sets earlier this year and found a vast difference in performance between them when it came to these functions, so choose wisely :)
I'd like to see a control suite that boasts handling concurrency issues in order fulfillment and that uses queues or stacks to resolve data conflicts. I see too often that these grid and list controls are really nice and pretty and show you all the data, but they don't solve basic concurrency problems when you have more than one person working on the same set of data. If one automated locking a row edited by one user against another, prevented duplication of work, and automatically logged error messages, then I could see purchasing the control suite.
You don't need to load all your data at once; you can specify a maximum load size in the XAML of your ObjectDataSource. This will load your data in blocks of the specified size.
Take a look at the two RIA Services videos here:
https://www.silverlight.net/getstarted/riaservices/
There are segments on paging which may also be useful to you.
Note: some of the assembly references and syntax have changed slightly since these videos were made, but the core functionality is still the same.

Is smartclient suitable?

How does the waiting time for SmartClient scale across thousands of users editing grids?
I have received warnings before that ExtJS would not be suitable.
SmartClient has a single grid component that does both horizontal and vertical incremental rendering, so it handles a very very large number of both rows (several million) and columns (several hundred) without degradation in performance.
All of the grid features supported by SmartClient - inline editing, grouping, filtering, dynamic frozen columns, sorting, reordering fields, drag and drop .. (too long to list) are supported by this single, high data volume grid component.
A number of users have run into scalability issues with the Ext grid component and discussed it here on the SmartClient forums:
http://forums.smartclient.com/showthread.php?t=2678
As far as server scalability goes, the grid component in fact contributes hugely to server-side scalability. Consider the adaptive filtering mechanism of the SmartClient grid:
http://www.smartclient.com/index.jsp#adaptiveFilter
This feature and the related "Adaptive Sort" feature cut down on 60-90% of the most expensive types of server hits (that is, those that access and filter/sort a large dataset).
SmartClient pervasively takes this approach of intelligently re-using data in order to avoid expensive server-side operations. A good overview is available in the ResultSet class documentation; the ResultSet is used as a cache management object by all components that work with datasets in SmartClient:
http://www.smartclient.com/docs/9.0/a/b/c/go.html#class..ResultSet
The number of users editing grids is not really relevant -- that's more of a question of how your application is designed to support load. If you are asking about performance relative to the grid component itself, the most relevant questions are about the grid's capabilities and how much data it can handle, not how many users will be using it over time.
I'm not familiar with SmartClient, but in the case of Ext, the grid performs very well for small to medium sized grid data (very approximately, up to ~50 rows per page, up to ~10 columns of data). Obviously this all depends on a lot of variables, but it is true that Ext's grid rendering time increases in direct proportion to the amount of data rendered at one time. This is because it uses a fairly heavy DOM under the covers, the trade-off being the rich feature set out of the box and the flexibility that is provided for creating customized nested row layouts. It does support paging to mitigate performance issues, and there is also a very popular extension that provides on-demand row loading (virtual scrolling) that enables higher-performance loading of large data sets. There's also an example of a lighter-weight and simpler version of buffered loading in the Ext examples that shows excellent performance with a lot of data.
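For example, server-backed paging in Ext 3 looks roughly like this (the endpoint, fields, and page size are illustrative; the server must honour the start/limit params and return a total row count):

    var store = new Ext.data.JsonStore({
        url: '/data/rows',
        root: 'rows',
        totalProperty: 'total',
        fields: ['id', 'name']
    });

    var grid = new Ext.grid.GridPanel({
        store: store,
        columns: [
            { header: 'ID', dataIndex: 'id' },
            { header: 'Name', dataIndex: 'name' }
        ],
        // The paging toolbar sends start/limit on every page change.
        bbar: new Ext.PagingToolbar({
            store: store,
            pageSize: 50,
            displayInfo: true
        }),
        width: 400,
        height: 300,
        renderTo: Ext.getBody()
    });

    store.load({ params: { start: 0, limit: 50 } });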
Also, depending on your needs, there is a new lightweight ListView component in 3.0. It does not support all of the GridView's features, but if you primarily need a display-only grid, it might be a great alternative.
All of this is not to say that SmartClient is not good -- I have no idea. I just want anyone looking at this thread to be able to make an informed decision on the Ext side of the equation, since it sounds like you have received one-sided opinions about it.
Thank you for your answer. I had been under the impression that the average Joe visiting a website built with Ext would be put off by a long loading time, which would only grow if many people were using the site.
This was the reason why I thought SmartClient would be better, but I haven't found any comparison between them. Maybe I was too hasty in disregarding ExtJS.
I hope to get in contact with someone who has experience with SmartClient to assist in developing my future site.
Thank you
Jez
