Table pagination in qooxdoo

Best wishes to all helpers!
I have searched hard, but nobody could give me an answer about pagination for tables in qooxdoo. It is a fantastic framework, but why did the developers (and architects) leave out such a seemingly simple feature instead of building it in?
Thanks to anyone who answers!
Oleg

The qooxdoo table (let's hope you mean the qx.Desktop layer) is a virtual widget and can handle a huge data set without any noticeable performance decrease. Given that, there has been no need to add pagination to the table. If you don't want to load all the data at once, you can use the Remote Table Model to fetch only the visible rows. You can also check out the demo of this model to get an idea of how it is used.
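As a rough illustration, a remote model subclasses qx.ui.table.model.Remote and overrides two methods; everything below (class name, URLs, JSON responses) is hypothetical:

qx.Class.define("myapp.RemoteDataModel", {
  extend: qx.ui.table.model.Remote,

  members: {
    // Called once by the table to learn the total number of rows.
    _loadRowCount: function() {
      var req = new qx.io.request.Xhr("/rows/count");
      req.addListener("success", function() {
        this._onRowCountLoaded(req.getResponse());
      }, this);
      req.send();
    },

    // Called lazily whenever scrolling exposes rows not loaded yet.
    _loadRowData: function(firstRow, lastRow) {
      var req = new qx.io.request.Xhr(
        "/rows?first=" + firstRow + "&last=" + lastRow);
      req.addListener("success", function() {
        this._onRowDataLoaded(req.getResponse());
      }, this);
      req.send();
    }
  }
});

The model is then handed to the table as usual, e.g. new qx.ui.table.Table(model).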

Related

CakePHP speed optimization

My app is working very slowly. I am using CakePHP version 2.3.1. Would it be beneficial to load models, components, and helpers only in the functions where they are needed? Right now I am declaring them at class level, e.g.:
class PanelsController extends AppController {
    public $name = 'Panel';
    public $uses = array(/* list of models goes here */);
    public $components = array(/* list of components goes here */);
    // ...
}
What other techniques do you suggest? Thanks.
Here are some things I would check if the site is performing slowly:
Speed Optimization
Enable caching
Compress JS and CSS. (A nice plugin that does this)
A good speed optimization checklist
Cake Practices
Cake conventions are your best guidelines; the framework has been designed to scale with its conventions.
Recursion and Containable: by default, Cake fetches all related data when a query is fired. Both the recursive level and the Containable behavior can limit the amount of data retrieved. Just because Cake fetches all related data by default doesn't mean you have to keep it that way (see the sketch after this list).
Keep your DB normalized. This will allow you to defer many processes. For example, when retrieving posts, Cake automatically fetches all of their related data (tags, comments). But with a well-normalized DB, you can defer loading comments to an XHR/AJAX request. This also lets you serve comment-related logic from the Comment's own model, controller, and view. Even when you do bring in related model data, set limits on it.
You can also drop the need for count queries on related data by using a counter cache. More details here.
Cache your view
You can also cache query results manually, e.g.:
Cache::write('all_posts', $this->Post->find('all'));
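Pulling the Containable and counter cache suggestions together, a rough sketch (model, field, and cache-key names are hypothetical):

// Counter cache: Cake maintains a Post.comment_count field automatically
// whenever comments are added or deleted, so post listings need no extra
// COUNT(*) queries. Assumes the posts table has a comment_count column.
class Comment extends AppModel {
    public $belongsTo = array(
        'Post' => array('counterCache' => true),
    );
}

// In a controller action: contain only what the view needs, then cache it.
// Assumes the Containable behavior is attached to the model.
$posts = Cache::read('recent_posts');
if ($posts === false) {
    $posts = $this->Post->find('all', array(
        'contain' => array('Tag'), // defer Comments to an XHR request
        'limit' => 20,
    ));
    Cache::write('recent_posts', $posts);
}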
Try them out and you should be able to experience amazing speed improvements.
Lastly, I do believe that the architecture of an application plays a big role in performance. Sometimes we have to separate certain logic from the request life cycle to boost performance.
public $uses does not matter; you can add as many models as you want. Cake will only lazy-load them if they are needed somewhere.
Just make sure you set recursive = -1 by default in your AppModel, and only raise it or contain the data you really need, as in the sketch below.
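A minimal version of that default (a sketch, not the poster's exact code):

// app/Model/AppModel.php
class AppModel extends Model {
    // Never fetch associated data implicitly; queries must opt in.
    public $recursive = -1;
    // Attach Containable so individual finds can request exactly
    // the associations they need via the 'contain' option.
    public $actsAs = array('Containable');
}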
But your components will all be loaded and initialized right away.
You might want to reduce those.
Those two attributes cannot be your bottleneck, though. You must have some other serious issues.
Also, don't make assumptions in debug mode. The real speed is measured/observed with debug 0, where no additional debug info is gathered and the cache is not constantly rebuilt.
EDIT: Please note that my words above are meant only from a speed point of view. For speed it does not matter, but it is still not recommended to add models to $uses if you can reach them via relations and the relation chain.
So let's say you want to build a dashboard. In most cases you only need to add the "User" model, since Profile, Images, and other models are usually directly accessible via $this->User->Profile->foo($bar) etc., as the sketch below shows.
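For illustration (the controller and finder names are hypothetical; assumes AuthComponent is loaded):

class DashboardController extends AppController {
    // Only User is declared; related models are reached via the chain.
    public $uses = array('User');

    public function index() {
        // Profile is available through the association, not through $uses.
        $profile = $this->User->Profile->findByUserId($this->Auth->user('id'));
        $this->set(compact('profile'));
    }
}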
You can use caching techniques in CakePHP to reduce time. For documentation, see here: http://book.cakephp.org/2.0/en/core-libraries/caching.html
Do not use loadModel(); with large data sets it will create problems.
Here is an article for your reference (tips to speed up a CakePHP app):
http://www.dereuromark.de/2012/02/13/what-really-speeds-up-your-cakephp-app/
CakePHP app slowness can be caused by a lot of reasons; ones I have experienced so far:
a MySQL server that was trying to do a DNS lookup
rendering a page with lots of links assembled via reverse routing
memory issues
The best way to find out seems to be to install Xdebug
and examine the profiling information

3rd Party Silverlight Grid Control

We are going through a process of selecting a third-party suite of controls for Silverlight 4.0. We're mostly interested in a feature-rich grid control. I'm surprised to find that most of the products out there focus on client-side paging, filtering, sorting, and grouping. But if the dataset is large enough to benefit from these functions, isn't it also too big to bring to the client in one call? And doesn't this make most of the advertised fancy grid features useless? In my opinion, 200 rows of data is an ideal upper limit on how much I'd request from the server in one call. Yet the sites for Telerik, DevExpress, ComponentOne, Xceed, and others all have fancy demos that bring 10,000+ rows of data to the client and show off the ability to page, filter, group, and sort it. Who brings 10,000+ rows of data to the client? What if you have thousands of concurrent users? What if the data is volatile? What use case does this really address?
Can you share your experiences with any of these control suites and whether you've implemented paging? Also, are you using RIA?
Thanks.
You don't need a third-party grid control to achieve server-side paging. You can use the grid control and ObjectDataSource provided by the Silverlight Toolkit: http://silverlight.codeplex.com/
http://borrell.parivedasolutions.com/2008/01/objectdatasource-linq-paging-sorting.html
I agree with you; it can be crazy for a client to want to view their entire year's worth of data all at the same time, but sometimes the client (and product managers) don't see things the same way you do and insist upon doing stupid things....
In any case, just because the demo is paging through a million records doesn't mean they are bringing them all to the client. You also have to consider the scenario where you have 200 rows of data but can only show 10 rows at a time due to the data templates you are using (you may only fit 10 rows to a page); you can still retrieve all 200 rows, because it is simply your presentation that is using up the physical room. You can also implement paging and retrieve the next page's worth of data when it is requested (which will introduce a small delay, but could be well worth it). Possibly the best way to deal with this is to not give the user the ability to retrieve squillions of records at once; if you give them that feature, they will use it, and then they will also complain about its performance.
As for fast client-side sorting/grouping/filtering, that is a real-world necessity. It is common for our users to fetch many thousands of records from the server, then use the filters (which I have extended) to view a handful of records at a time, operate on those records, then modify the filters to view a different bunch. It is important to have these functions working fast, because it makes a huge difference to the user experience. I trialled several different component sets earlier this year and found there was a vast difference in performance between them when it came to these functions, so choose wisely :)
I'd like to see a control suite that boasts handling concurrency issues on order fulfillment and uses queues or stacks to resolve data conflicts. I see too often that these grid and list controls are really nice, pretty, and show you all the data, but they don't solve basic concurrency problems when more than one person is working on the same set of data. If a suite automated locking a row edited by one user against another, prevented duplication of work, and automatically logged error messages, then I could see purchasing it.
You don't need to load all your data at once: you can specify a maximum load in the XAML of your ObjectDataSource, which will load your data in blocks of the specified size.
Take a look at the two RIA Services videos here:
https://www.silverlight.net/getstarted/riaservices/
There are segments on paging which may also be useful to you.
Note: some of the assembly references and syntax have changed slightly since these videos were made, but the core functionality is still the same.

Database Performance Solution - "View Caching" - Is this a good idea?

A little context first:
I would say I have good SQL Server experience, but I am a developer, not a DBA. My current task is to improve the performance of a database that was built with a very heavy reliance on views. The application is peppered with inline SQL, and running the Tuning Advisor only suggests a couple of missing indexes, which look reasonable.
I feel that reworking the database design so that the data created by these particular views (lots of CASE WHENS) is persisted as hard data is too much work given the budget and time scales here. A full rewrite of these enormous views and all of the code that relies on them also appears to be out of the question.
My proposed solution:
Testing has shown that if I do a SELECT INTO with the view data to persist it in a permanent table, and then replace references to the view with references to this table, query times drop to 44% of what they were when using the view.
Since the data is refreshed by a spider process overnight, I think I can just drop and recreate this table on a daily basis, and then make minor modifications to the queries to use the table, as sketched below.
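The nightly refresh could look something like this (all object names are hypothetical; note that SELECT INTO copies no indexes, so any index the queries rely on must be re-created afterwards):

-- Drop yesterday's snapshot, if present
IF OBJECT_ID('dbo.ReportData', 'U') IS NOT NULL
    DROP TABLE dbo.ReportData;

-- Persist the view's output as a real table
SELECT *
INTO dbo.ReportData
FROM dbo.vReportView;

-- Re-create the supporting index(es)
CREATE INDEX IX_ReportData_CustomerId ON dbo.ReportData (CustomerId);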
Can someone with good DBA experience give me an opinion on whether that is a good / *&?!! awful idea. If the latter, is there a better way to approach this?
Thanks.
You say: "...reworking the database design so that the data created by these particular views ... is persisted as hard data is too much work given the budget and time scales here." and yet this is exactly what you are proposing to do. It's just that you use the views themselves instead of making the code a function or a stored procedure.
Anyway, my opinion is that if you invest a bit of effort in making this robust (i.e., you ensure that whenever the spider runs the persisted data gets refreshed, and you never run the SELECT INTO before the end of the spidering), your solution is OK.
What would concern me is that this is a hack, however brilliant, so whoever inherits your solution may find it difficult to understand the why and how. See if you can provide a good explanation, either in comments or in a separate document.

What is best practice for working with DB in Wordpress?

I'm developing a plugin for WordPress, and I need to store some information from a form in a table in the DB.
There are probably lots of pages explaining how to do this, this being one of them:
http://codex.wordpress.org/Function_Reference/wpdb_Class
But are there any other pages discussing best practices for interacting with the WP DB?
UPDATE
Found a few more pages which could be useful:
http://wpengineer.com/wordpress-database-functions
http://blue-anvil.com/archives/wordpress-development-techniques-1-running-custom-queries-using-the-wpdb-class
Unless you need to create your own complex table structure, I'd suggest using the existing tables for your needs. There are the options table, user meta table, and post meta table to work with. These all have built-in APIs for quick access.
Options: add_option(), get_option(), update_option(), delete_option()
Usermeta: add_user_meta(), get_user_meta(), update_user_meta(), delete_user_meta()
Postmeta: add_post_meta(), get_post_meta(), update_post_meta(), delete_post_meta()
I've not found much real need to go outside these tables (yes, there are exceptions; I've done it myself when the data needs are complex), but these meta options all use a text field in the DB to store the data, so there's a lot you can store here if it's simple data.
If your data is simple, consider storing your information in one of these places as individual options or even as serialized arrays, as in the sketch below.
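A minimal sketch of that pattern (the option name and defaults are hypothetical):

// Keep all plugin settings in one serialized option.
$defaults = array('enabled' => true, 'items_per_page' => 20);
add_option('myplugin_settings', $defaults); // no-op if it already exists

// Read with a fallback, change a value, and write it back;
// WordPress serializes and unserializes the array transparently.
$settings = wp_parse_args(get_option('myplugin_settings', array()), $defaults);
$settings['items_per_page'] = 50;
update_option('myplugin_settings', $settings);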
One of my BIGGEST pet peeves with plugin developers who leverage the WP database is that if/when a given database-driven plugin isn't used anymore, the developer doesn't think to remove the footprint it made in the database.
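One way to avoid that footprint, as a sketch (the option and table names are hypothetical):

// In the main plugin file: register a cleanup routine that runs
// when the user deletes the plugin from the admin screen.
register_uninstall_hook(__FILE__, 'myplugin_uninstall');

function myplugin_uninstall() {
    delete_option('myplugin_settings');
    // Drop any custom table the plugin created.
    global $wpdb;
    $wpdb->query("DROP TABLE IF EXISTS {$wpdb->prefix}myplugin_data");
}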

Is there an ORM that can dynamically generate a DAL from within a .Net WinForms app?

I'd greatly appreciate your advice on a strange specification.
We have a requirement to create an application where users can drag and drop field types onto a form so that they can create their own "app". I have the front end set up, but the back end is a big problem.
There are forward-mapping ORMs and reverse-mapping ORMs, yet I've not found one that can be embedded within the application and generate the tables, relationships, etc. when the user starts up the app. Of course, if a table, field, or other entity already exists, it would not overwrite it (or the underlying data).
ActiveRecord is the closest I've found, but it is web-based and does not extend to a WinForms environment. I would SO prefer not to have our crew write our own DAL, debug it, etc. when there might be an ORM out there that can do this.
Does anyone know of an ORM that can do this? If not, how would you go about solving this nightmare in the making?
Thank you so much for your help.
Although you could imagine a solution to this problem built with an ORM, I do not think it is a good idea: these tools are designed to solve a different class of problems. The only real option is to build that part of the application yourself.
That's an unfortunate app; if your users wanted to do that, they'd just buy Visual Studio!
This isn't a good position to be in, because no, I'm not aware of any suitable way to do this with an ORM. Sadly, if you're looking for this because of schedule pressure, your project may be in trouble.
In theory you can use any ORM that can automatically generate a database schema. For example, see DataObjects.Net: it typically generates and upgrades the database schema from a persistent model based on persistent classes and additional custom definitions. But I can hardly imagine how your whole application would work in this case... there are so many potential issues.
