High-performance pivot grid for pre-aggregated data - WPF

I have been tasked with coming up with a high performance front-end for a live ActivePivot back-end. I already have a client-side service layer that provides a continuous stream (IObservable<T>) of pre-aggregated, pre-formatted data, as well as metadata detailing the dimensions and what-not in the report. My requirements can be summarized as:
Dynamically set up row and column headers based on metadata in the stream.
Dynamically pass live data through to the appropriate row/column of the control.
Highlight changes to data, e.g. increased values might flash green temporarily and decreased values red.
Intercept user actions on row/column headers (i.e. drill-downs) so that I can instigate a change in the underlying MDX query.
Intercept user actions (probably double-click) on data values so that I can instigate a drill-through query (the results of which would be displayed in a separate data grid).
All the third party components appear to be geared around slicing and dicing disconnected (or rarely updated) data sets. They sacrifice performance to achieve a higher degree of flexibility that I simply don't need, and performance is paramount for my scenario.
Does anyone know of a WPF control that is performance-focussed and geared more towards the viewing of pre-aggregated, pre-formatted data?

PivotTable-like frontends that allow slice and dice data exploration are in general associated with OLAP technology. Some of those frontends target one specific server, using a proprietary data model, and some others implement the standard: MDX queries over XMLA transport.
But when OLAP technology was designed 20 years ago, doing it in real-time seemed unthinkable. One consequence is that the XMLA standard has no support for updates in a cell set. Actually it practically forbids them because of the static representation of cell sets and cell set axes.
ActivePivot can push real-time updates into an OLAP result set, and it exposes a (proprietary) streaming API to subscribe to those updates. The ActivePivot Live frontend was originally written to leverage those real-time updates, presenting them in familiar pivot table controls. But in 2013 ActivePivot is still the only OLAP server with real-time support. That explains why there isn't yet a standard for subscribing to OLAP real-time updates. And it also means that, as of 2013 and outside of ActivePivot Live, you will not find a toolkit (WPF or not) that has done the whole job of enriching its pivot table controls with real-time updates. The libraries we know of have essentially transposed the static data representation of XMLA into their pivot table designs, making it cumbersome or impossible to update the cells (think of the Microsoft Excel Pivot Table, for instance).
Under the constraint of a particular technology like WPF, I would select a general-purpose UI toolkit that makes it easy to arrange and compose tables. From there it's a D.I.Y. job.
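To make the D.I.Y. suggestion concrete, here is a minimal hedged sketch (not the ActivePivot Live approach; the CellUpdate and CellViewModel types are invented for illustration) of how a hand-rolled WPF cell could consume a real-time update stream and flash green/red on change:
// Sketch only: CellUpdate and CellViewModel are hypothetical types, not part of
// any toolkit mentioned above; bind a cell's Text/Background to Value/Highlight.
using System;
using System.ComponentModel;
using System.Windows.Media;
using System.Windows.Threading;

public sealed class CellUpdate
{
    public int Row { get; set; }
    public int Column { get; set; }
    public string FormattedValue { get; set; }
    public int Direction { get; set; }   // +1 increased, -1 decreased, 0 unchanged
}

public sealed class CellViewModel : INotifyPropertyChanged
{
    private string _value;
    private Brush _highlight = Brushes.Transparent;

    public event PropertyChangedEventHandler PropertyChanged;

    public string Value
    {
        get { return _value; }
        set { _value = value; Raise("Value"); }
    }

    public Brush Highlight
    {
        get { return _highlight; }
        set { _highlight = value; Raise("Highlight"); }
    }

    // Marshal the update onto the UI thread, then flash green/red for a second.
    public void Apply(CellUpdate update, Dispatcher dispatcher)
    {
        dispatcher.BeginInvoke(new Action(() =>
        {
            Value = update.FormattedValue;
            Highlight = update.Direction > 0 ? Brushes.LightGreen
                      : update.Direction < 0 ? Brushes.LightCoral
                      : Brushes.Transparent;

            var timer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(1) };
            timer.Tick += (s, e) => { Highlight = Brushes.Transparent; timer.Stop(); };
            timer.Start();
        }));
    }

    private void Raise(string name)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(name));
    }
}
Subscribing the grid is then a matter of routing each update from the IObservable<T> to the right cell view model; at the volumes described in the question you would also need UI virtualization and per-cell conflation/throttling of updates, which a sketch this small does not cover.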

Just in case anyone was wondering, I ended up writing my own WPF PivotGrid control specifically designed for high performance. It handles tens of millions of cells with hundreds of thousands of updates per second. Why anyone would want that much data in a single grid I don't know, but there you go.
It handles all the requirements I laid out in my question, and more. I can't share any more than this, however, as it's proprietary.

Related

What is the right database technology for this simple outlined BI tool use case?

Reaching out to the community to pressure-test our internal thinking.
We are building a simplified business intelligence platform that will aggregate metrics (e.g. traffic, backlinks) and text lists (e.g. search keywords, technologies used) from several data providers.
The data will be somewhat loosely structured and may change over time with vendors potentially changing their response formats.
Long-term, data volume may reach roughly 100,000 rows x 25 input vectors.
Data would be updated and read continuously but not at massive concurrent volume.
We'd expect to need to do some ETL transformations on the data gathered from partners along the way to the UI (e.g. show trending information over the past five captured data points).
We'd want to archive every single data snapshot (i.e. version it) vs just storing the most current data point.
The persistence technology should be readily available through AWS.
Our assumption is our requirements lend themselves best towards DynamoDB (vs Amazon Neptune or Redshift or Aurora).
Is that fair to assume? Are there any other questions / information I can provide to elicit input from this community?
Because of your requirement to have a schema-less structure, and to version each item, DynamoDB is a great choice. You will likely want to build the table as a composite Partition/Sort key structure, with the Sort key being the Version, and there are several techniques you can use to help you locate the 'latest' version etc. This is a very common pattern, and with DDB Autoscaling you can ensure that you only provision the amount of capacity that you actually need.
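As a hedged illustration of that composite-key pattern (the table name "Metrics", partition key "CompanyId" and sort key "Version" are invented here, not anything DynamoDB prescribes), fetching the latest version is a Query with the sort key descending and a limit of 1, e.g. with the AWS SDK for .NET:
// Sketch only: table and attribute names are assumptions for illustration.
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public static class MetricsRepository
{
    public static async Task<Dictionary<string, AttributeValue>> GetLatestSnapshotAsync(
        IAmazonDynamoDB client, string companyId)
    {
        var request = new QueryRequest
        {
            TableName = "Metrics",
            KeyConditionExpression = "CompanyId = :id",
            ExpressionAttributeValues = new Dictionary<string, AttributeValue>
            {
                { ":id", new AttributeValue { S = companyId } }
            },
            ScanIndexForward = false, // newest Version first
            Limit = 1                 // only the latest snapshot
        };

        var response = await client.QueryAsync(request);
        return response.Items.Count > 0 ? response.Items[0] : null;
    }
}
Each new snapshot is simply written as a new item with a higher Version (or a timestamp sort key), so the full history is preserved without any extra machinery.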

Fetching a large block of data using NHibernate

I have some 3 million rows of data to show in a data grid using C#.
I'm currently using NHibernate to fetch the data from a SQL Server 2005 database.
NHibernate takes a lot of time to get the data. Is there any way to retrieve data from the database more efficiently using NHibernate?
---Edit----
As the application operates on a huge amount of data, loading all rows is just a worst-case scenario. In normal use a user will load around 10k rows. The number of displayed rows can be minimised with paging, but as some rows depend on others I need to load all the data while initializing the app.
NHibernate gets slow even with 1000 rows. Any suggestions to improve the performance?
Thanks.
You could use Stateless Session to get the data.
http://ayende.com/blog/4137/nhibernate-perf-tricks
http://darioquintana.com.ar/blogging/2007/10/08/statelesssession-nhibernate-without-first-level-cache/
But first ask yourself: do you really need to display millions of rows? What value does that give your users? Can they easily locate the data they want?
Also, the DataGrid itself will take a large amount of memory (regardless of whether you are using Windows Forms, WPF, ASP.NET...): memory to store the data itself, plus memory for the additional DataGrid column / cell metadata.
Consider working with only a subset of the data instead. You could allow the user to filter through the data and/or add paging. Filtering and paging can be translated to HQL / Criteria / LINQ / QueryOver queries, and eventually to SQL queries.
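A hedged sketch combining both suggestions (the Order entity and its mapping are placeholders; IStatelessSession and the Skip/Take paging methods are standard NHibernate API):
// Sketch only: Order is a placeholder entity; its NHibernate mapping is omitted.
using System.Collections.Generic;
using NHibernate;

public class Order
{
    public virtual int Id { get; set; }
    public virtual string Customer { get; set; }
}

public static class OrderReader
{
    public static IList<Order> LoadPage(ISessionFactory sessionFactory, int pageIndex, int pageSize)
    {
        // IStatelessSession bypasses the first-level cache, dirty checking and
        // event listeners, which makes large read-only fetches much cheaper.
        using (IStatelessSession session = sessionFactory.OpenStatelessSession())
        {
            return session.QueryOver<Order>()
                          .OrderBy(o => o.Id).Asc
                          .Skip(pageIndex * pageSize)   // page instead of loading everything
                          .Take(pageSize)
                          .List();
        }
    }
}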
My recommendation here is not to use an ORM to fetch this much data. But my second point is: why would you want to fetch 3 million rows of data and show them in a grid?
No user can possibly want to scroll through 3 million lines of a table.
You could use a paged data system to request only the page being viewed at any one time. Or you could filter the data down to a smaller subset that the user is interested in.
If you have 3 million records, maybe what is really needed is an analysis of those records.
I would take a look at some of these resources:
http://msdn.microsoft.com/en-us/magazine/cc164022.aspx
http://weblogs.asp.net/rajbk/archive/2010/05/08/asp-net-mvc-paging-sorting-filtering-using-the-mvccontrib-grid-and-pager.aspx
As an application grows in complexity and the data it operates on grows with it, the generality of such OR-mapping engines can lead to various performance and scalability bottlenecks, and the application cannot scale linearly with increasing transactional requirements. Caching is one technique that can enhance the performance of an NHibernate app. In fact, NHibernate provides a basic, not-so-sophisticated in-process L1 cache out of the box. However, it doesn't provide the features that a caching solution must have to make a notable impact on application performance, so it's better to use a second-level cache; it may help you increase the performance of your NHibernate app. There are many third-party NHibernate second-level cache providers available, but I'd recommend NCache. Here is a good read about it:
http://www.alachisoft.com/ncache/nhibernate-l2cache-index.html
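For reference, a hedged sketch of how a second-level cache is switched on in NHibernate configuration; the provider class name below is a placeholder to be replaced with the exact type documented by whichever provider you choose (NCache, SysCache, etc.):
// Sketch only: the provider type name is a placeholder, not a real class name.
using NHibernate.Cfg;

public static class NhCacheSetup
{
    public static Configuration EnableSecondLevelCache(Configuration cfg)
    {
        cfg.SetProperty(Environment.UseSecondLevelCache, "true");
        cfg.SetProperty(Environment.UseQueryCache, "true");
        cfg.SetProperty(Environment.CacheProvider,
            "Your.Cache.Provider.ClassName, Your.Cache.Assembly"); // take this from your provider's docs
        // Entities also need a cache usage declared in their mappings,
        // e.g. <cache usage="read-write" /> inside the class mapping.
        return cfg;
    }
}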

Heavy-data application: Silverlight vs MVC vs ASP.NET

We've been using a custom-made ERP application for more than 4 years; there were some bad design decisions back then which are hurting the application's performance now.
In fact, the problem is that this application can be classified as a heavy-data application, a.k.a. a business application.
The biggest design issue is that we fetch all the data for a particular business unit, assign it to controls or keep it in memory (in ArrayLists, generic Lists), and give the user full navigation flexibility and fast application response.
In the first 2 years of usage we didn't notice the problem, but by the end of that period we were getting more and more complaints about the application's performance.
We were, and still are, using ADO.NET with stored procedures generated for CRUD operations (Get, List, Delete, Insert, Update SPs). So if we want to separate out each year's data, we have to redesign those auto-generated SPs and make the corresponding modifications in the data access layer; and since there is no real separation of layers, we cannot make that change easily. The first solution we reached for was some database enhancements to cope with this huge data traffic, but the problem has been back for about 6 months now.
So we think this is the time to correct our mistakes (a very late decision), but how should we do it? We've ended up with these options:
Use a web application rather than a Windows application, so that we can use our server's full power and minimize connection traffic.
Redesign the application so that it takes care of year endings, doesn't fetch all the data, and uses a better ORM rather than old-fashioned ADO.NET, but stick with Windows Forms and the same application.
Given the time frame (somewhat limited) and the team (2 members, both with good experience in Windows and web development), I'm not able to make the decision right now; I'm afraid the Windows application modifications could take longer than expected.
Can you advise me?
I'm using C# 4.0 and ASP.NET Web Forms (I'm an MVC newbie).
If the database itself is quick, as you've indicated in your response to my comment, then, as you have suggested, you must reduce the amount of data populating the DataSet.
Although ADO.NET uses a "disconnected" model, the developer still has control over the select-query that is used to populate each table/relation in the DataSet. A hypothetical example: suppose you had 5000 customers in your CUSTOMERS table in the database. You might populate the CUSTOMERS table in your DataSet like this:
select * from CUSTOMERS order by customername
or like this:
select id, customername from CUSTOMERS order by customername
The order-headers table could be populated like this:
create proc getOrderHeaders4Customer
@customerid int
as
select * from ORDERHEADER where customerid = @customerid
that is, you would need to have a current customer in the GUI before you'd populate the OrderHeaders relation, and you would fetch the order-detail data only when the user clicks on a particular order-header in the GUI. The point is that ADO.NET can be used to do this -- you don't have to abandon everything in the app in order to be more parsimonious with the disconnected dataset.
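As a hedged sketch of that on-demand pattern (connection-string handling is simplified, and the proc and table names are the hypothetical ones above), the order headers can be pulled into the DataSet only when a customer is actually selected:
// Sketch: fills only the ORDERHEADER rows for the currently selected customer.
using System.Data;
using System.Data.SqlClient;

public static class OrderHeaderLoader
{
    public static void LoadOrderHeaders(DataSet dataSet, string connectionString, int customerId)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("getOrderHeaders4Customer", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.Add("@customerid", SqlDbType.Int).Value = customerId;

            var adapter = new SqlDataAdapter(command);
            if (dataSet.Tables.Contains("ORDERHEADER"))
                dataSet.Tables["ORDERHEADER"].Clear();   // drop the previous customer's rows
            adapter.Fill(dataSet, "ORDERHEADER");        // fetch just this customer's headers
        }
    }
}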
How much data would be fetched by the client, and more importantly, when the data are fetched by the client, is completely under developer control regardless of whether it's Silverlight, WinForms, or ASP.NET. Now, if your ERP application was not efficient, retrofitting efficiency may be difficult. But you don't have to abandon ADO.NET to achieve efficiency, and simply choosing another data-layer technology will not itself bring you efficiency.

3rd Party Silverlight Grid Control

We are going through a process of selecting a 3rd-party suite of controls for Silverlight 4.0. We're mostly interested in a feature-rich grid control. I'm surprised to find that most of the products out there focus on client-side paging, filtering, sorting, and grouping. But if the dataset is large enough to benefit from these functions, isn't it also too big to bring to the client in one call? And doesn't this make most of the advertised fancy grid features useless? In my opinion 200 rows of data is an ideal upper limit on how much I'd request from the server in one request. Yet the sites for Telerik, DevExpress, ComponentOne, Xceed, and others all have fancy demos that bring 10,000+ rows of data to the client and show off the ability to page, filter, group, and sort it. Who brings 10,000+ rows of data to the client? What if you have 1,000s of concurrent users? What if that data is volatile? What use case does this really address?
Can you share your experiences with any of these control suites and whether you've implemented paging? Also whether you are using RIA?
Thanks.
You don't need a third-party grid control to achieve server-side paging. You can use the grid control and ObjectDataSource provided by the Silverlight Toolkit: http://silverlight.codeplex.com/
http://borrell.parivedasolutions.com/2008/01/objectdatasource-linq-paging-sorting.html
I agree with you, it can be crazy for a client to want to view an entire year's worth of data all at the same time, but sometimes the client (and product managers) don't see things the same way you do and insist upon doing stupid things....
In any case, just because the demo is paging through 1 million records, that doesn't mean they are bringing them all to the client. You also have to consider the scenario where you have 200 rows' worth of data but can only show 10 rows at a time due to the data templates you are using (you may only fit 10 rows to a page) - you can still retrieve all 200 rows, because it is simply your presentation that is using up the physical room. You can also implement paging and retrieve the next page's worth of data when it is requested (which introduces a small delay, but can be well worth it). Possibly the best way to deal with this is to not give the user the ability to retrieve squillions of records at once - if you give them that feature they will use it, and then they will also complain about its performance.
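For what it's worth, the server-side half of that paging usually boils down to something like the following hedged sketch (PagedResult and Order are invented types; the Skip/Take pattern is standard LINQ regardless of which service layer -- RIA Services, plain WCF, etc. -- exposes it):
// Sketch only: the server returns one page plus a total count, never the full set.
using System.Collections.Generic;
using System.Linq;

public class Order
{
    public int Id { get; set; }
    public string Customer { get; set; }
}

public class PagedResult<T>
{
    public IList<T> Items { get; set; }
    public int TotalCount { get; set; }
}

public static class OrderPagingService
{
    public static PagedResult<Order> GetPage(IQueryable<Order> orders, int pageIndex, int pageSize)
    {
        return new PagedResult<Order>
        {
            TotalCount = orders.Count(),              // lets the grid show "page x of y"
            Items = orders.OrderBy(o => o.Id)         // a stable sort is required before Skip/Take
                          .Skip(pageIndex * pageSize)
                          .Take(pageSize)
                          .ToList()
        };
    }
}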
As for fast client-side sorting/grouping/filtering, that is a real-world necessity. It is common for our users to fetch many thousands of records from the server, then use the filters (which I have extended) to view a handful of records at a time, operate on those records, then modify the filters to view a different bunch. It is important to have these functions working fast because it makes a huge difference to the user experience. I trialled several different component sets earlier this year and found there was a vast difference in performance between them when it came to these functions, so choose wisely :)
I'd like to see a control suite that boasts handling concurrency issues on order fulfillment and also uses queues or stacks to resolve data conflicts. I see too often that these grid and list controls are really nice, pretty, and show you all the data, but they don't solve basic concurrency problems when you have more than one person working on the same set of data. If a suite automates locking a row edited by one user against another, prevents duplication of work, and automatically logs error messages, then I can see purchasing it.
You don't need to load all your data at once; you can specify a maximum load in the XAML of your ObjectDataSource. This will load your data in blocks of the specified size.
Take a look at the 2 RIA services videos here:
https://www.silverlight.net/getstarted/riaservices/
There are segments on paging which may also be useful to you.
Note: some of the assembly references and syntax have changed slightly since these videos were made, but the core functionality is still the same.

What should I have in mind when building an OLAP solution from scratch?

I'm working for a company running a software product based on a MS SQL database server, and through the years I have developed 20-30 quite advanced reports in PHP, taking data directly from the database. This has been very successful, and people are happy with it.
But it has some drawbacks:
For new changes, it can be quite development intensive
The user can't experiment much with the data - it is locked to a hard-coded view
It can be slow for big reports
I am considering gradually moving to an OLAP-based approach, which can be queried from Excel or some web-based service. But I would like to do this in a way that introduces the least amount of new complexity in the IT environment - the fewest different services, synchronization jobs, etc.!
I have some questions in this regard:
1) Workflow-related:
What is a good development route from "black box SQL server" to "OLAP ready to use"?
Which servers and services should be set up, and which scripts should be written?
Which are the hardest/most critical/most time-intensive parts?
2) ETL:
I suppose it is best to have separate servers for the data warehouse and the production SQL Server?
How are these kept in sync (push/pull)? Using which technologies/languages?
To me SSIS looks overly complicated, and the graphical workflow doesn't appeal much to me -- I would rather have a text-based script that does the job. Is this feasible?
Or is it advantageous to use the graphical client with only one source and one destination?
3) Development:
How much of this (data integration, analysis services) can be efficiently maintained from a CLI-tool?
Can the setup be transferred back and forth between production and development easily?
I'm happy with any answer that covers just some of this - and even though it is a MS environment, I'm also interested to hear about advantages in other technologies.
I only have experience with Microsoft OLAP, so here are my two cents regarding what I know:
If you are implementing cubes, then separate the production SQL Server from the source for the cubes. Cubes require a lot of SELECT DISTINCT column_name FROM source.table. You don't want cube processing to block your mission critical production system.
Although you can implement OLAP cubes over standard relational tables, you will quickly find that unless your data is a ledger-style system you will probably need to fully reprocess your fact and dimension tables, and this will require requerying the source database over and over again. That's a strong argument for building a separate data warehouse that uses ledger-style transactions for the fact tables. For instance, if a customer orders something and then cancels it, your source system may track this as a status change. In your fact table, you probably need to show this as a row for the order that has a positive quantity and revenue stream, and a row for the cancellation that has a negative quantity and revenue stream.
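A tiny hedged illustration of that ledger idea (the fact structure and values are invented): the cancellation is recorded as a second, negating row rather than as an update to the original, so the cube can always re-aggregate simply by summing.
// Hypothetical fact rows: an order followed by its cancellation two days later.
public class SalesFact
{
    public int OrderId { get; set; }
    public int DateKey { get; set; }
    public int Quantity { get; set; }
    public decimal Revenue { get; set; }
}

public static class LedgerExample
{
    public static SalesFact[] Rows()
    {
        return new[]
        {
            new SalesFact { OrderId = 42, DateKey = 20130105, Quantity =  1, Revenue =  100m }, // order placed
            new SalesFact { OrderId = 42, DateKey = 20130107, Quantity = -1, Revenue = -100m }  // order cancelled
        };
    }
}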
OLAP may be overkill for your environment. The main issue you appeared to raise was that your reports are static and users want access to the data directly. You could build a data model and give users Report Builder access in SSRS, or report-writing access in some other BI suite like Cognos, Business Objects, etc. I don't generally recommend this approach since it is way beyond what most users should have to know to get data, but in a small shop this may be sufficient and it is easy to implement. Let's face it -- users generally just want to get the data into Excel to manipulate it further. So if you don't want to give them a web front-end and you just want them to get to the data from Excel, you could give them direct database access to a copy of the production data. The downside of this approach is that users don't generally understand SQL or database relationships. OLAP helps you avoid forcing users to learn SQL or relationships, but it isn't easy to implement on your end. If you only have a couple of power users who need this kind of access, it could be easy enough to teach those few power users how to do basic queries in Excel against the database, and they will be happy to get this tomorrow. OLAP won't be ready by tomorrow.
If you only have a few kinds of source data systems, you could get away with building a super-dynamic static report. For instance, I have a report that was written in C# that basically allows users to select as many columns as they want from a list of 30 columns and filter the data on a few date range fields and field filter lists. This simple report covers about 40% of all ad hoc report requests from end-users since it covers all the basic, core customer metrics and fields. We recently moved this report to SSRS and that allowed us to up the number of fields to about 100 and improved the overall user experience. Regardless of the reporting platform, it is possible to give users some dynamic flexibility even in the confines of a static reporting system.
If you only have a couple of databases, you can probably back up and restore the databases as your ETL. However, if you want to do anything beyond that, then you might as well bite the bullet and use SSIS (or some other ETL tool). Once you get into ETL for data warehousing, you are going to use a graphically oriented design tool. Coding works well for applications, but ETL is more about workflows, and that's why the tools tend to converge on a graphical UI. You can work around this and try to code a data warehouse from a text editor, but in the end you are going to lose out on a lot. See this post for more details on the differences between loading data from code and loading data from SSIS.
FEEDBACK ON HOW TO USE CUBES WITH A RELATIONAL DATA STORE
It is possible to implement a cube over a relational data store, but there are some major problems with this approach. The main reason it is technically feasible has to do with how you configure your DSV (Data Source View). The DSV is essentially a logical layer between the physical database and the cube/dimension definitions. Instead of importing the relational tables into the DSV, you could define Named Queries or create views in the database that flatten the data.
The advantages of this approach are as follows:
It is relatively easy to implement since you don't have to build an entire ETL subsystem to get started with OLAP.
This approach works well for prototyping how you want to build a more long-term solution. You can prototype it in 1-2 days and show some of the benefits of OLAP today.
Some very, very large tables don't have to be completely duplicated just to support an OLAP cube. I have several multi-billion row tables that are almost completely standardized fact tables. The only columns they don't have are date keys and they also contain some NULL values on fields that shouldn't have nulls at all. Instead of duplicating these very massive tables, you can create the surrogate date keys and set values for the nulls in the view or named query. If you aren't going to see a huge performance boon for duplicating the table, then this may be a candidate for leaving in a more raw format in the database itself.
The disadvantages of this approach are as follows:
If you haven't built a true Kimball method data warehouse, then you probably aren't tracking transactions in a ledger-style. Kimball method fact tables (at least as I understand them) always change values by adding and subtracting rows. If someone cancels part of an order, you can't update the value in the cube for the single transaction. Instead, you have to balance out the transaction with a negative value. If you have to update the transaction, then you will have to fully reprocess the partition of the cube to replace the value which can be a very expensive operation. Unless your source system is a ledger-style transaction system, you will probably have to build a ledger-style copy in your ETL subsystem.
If you don't build a Kimball method data warehouse, then you are probably using unobscured and possibly non-integer primary keys in your database. This directly impacts query performance inside the cube. It also sets you up for having a theoretically inflexible data warehouse. For instance, if you have a product ordering system that uses an integer key and you start using a second product ordering system, either as a replacement for the legacy system or in tandem with it, you may struggle to combine the data merely through the DSV, since each system has different data points, metrics, workflows, data types, etc. Worse, if they have the same data types for the order id and the order id values overlap between systems, then you must declare a surrogate key that you can use across both systems. This can be difficult, but not impossible, to implement without using a flattened data warehouse.
You may have to build the system twice if you start with the relational data store and then move to a flattened database. Frankly, I think the amount of duplicated work is trivial. Most of what you learn building the cube off a relational data store will translate to setting up the new OLAP cube. The main problem, though, is that you will probably create a new cube altogether, and then any users of the old cube will have to migrate to the new cube. Any reports built in SSRS or Excel will probably break at that point and need to be rewritten from the ground up. So the main cost of rebuilding the cube is really in rebuilding dependent objects -- not in rebuilding the cube itself.
Let me know if you want me to expand on any of the above points. Good luck.
You're basically asking the million-dollar question of "How do I build a DWH?". This is not really a question that can be answered decisively.
Nevertheless, here is a kickstart:
If you are looking for a minimum viable product, be aware that you are in a data environment, and not a pure software one. In data-heavy environments, it is much harder to incrementally build a product, because the amount of effort to introduce changes in the system is much greater. Think about it as if every change you make in a piece of software has to be somehow backwards-compatible with anything you've ever done. Now you understand the hell Microsoft are in :-).
Also, data systems involve many third-party tools such as DBs, ETL tools and reporting platforms. The choices you make should be viable for the expected development of your system, else you might have to completely replace these tools down the road.
While you could start with DB cloning based on simple copy SQL statements and then aggregate the data or push it into an OLAP cube, I would recommend getting your hands dirty with a real ETL tool from the start. This is especially true if you foresee the need to grow; 9 times out of 10, the need will grow.
MS-SQL is a good choice for a DB if you don't mind the cost. The natural ETL tool would be SSIS, and it's a solid tool as well.
Even if your first transformations are merely "take this table and dump it in there", you still gain a lot in terms of process management (Has the job run? What happens if it fails? etc.) and debugging. Also, it is easier to grow organically as requirements and/or special cases have to be dealt with.
