Best Practices for Managing/Loading Data in Silverlight Client from Server

I am developing a business application that handles a large amount of data. For each piece of functionality I have one ViewModel, and for each ViewModel I create a separate DbContext object. To be more concrete:
There are roughly 5 to 8 features that need the customer list, and each of their ViewModels creates its own DbContext and loads its own list from the server. That means a lot of redundant data downloads and multiple database trips, which takes up too much RAM and slows things down; it can hurt performance in several different ways.
I was wondering: what are the best practices for handling this much data and optimizing the performance of the application?
One solution I can think of is to maintain a common data pool across the whole application, but I am a little confused about how to design it properly so that it doesn't create some other bottleneck for the application. And there must be a standard solution for this too.
Thank you so much for your time and help.

One option is to create a SharedViewModel as a singleton and inject it into ViewModels that need the shared data. I do this and it works well.
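Something along these lines works for me. This is only a rough sketch, with the Customer type and the loadFromServer delegate standing in for whatever async proxy or DomainContext you actually use:

using System;
using System.Collections.Generic;

// Rough sketch of a shared, app-wide customer cache (names are illustrative).
public class Customer { public int Id; public string Name; }

public class SharedCustomerCache
{
    private readonly Action<Action<IList<Customer>>> loadFromServer;
    private readonly object sync = new object();
    private IList<Customer> cached;
    private bool loading;
    private readonly List<Action<IList<Customer>>> waiting = new List<Action<IList<Customer>>>();

    public SharedCustomerCache(Action<Action<IList<Customer>>> loadFromServer)
    {
        this.loadFromServer = loadFromServer;  // e.g. wraps your async WCF/RIA call
    }

    // Every ViewModel asks the same instance; the server is hit at most once.
    public void GetCustomers(Action<IList<Customer>> callback)
    {
        lock (sync)
        {
            if (cached != null) { callback(cached); return; }  // cache hit
            waiting.Add(callback);
            if (loading) return;                               // a load is already in flight
            loading = true;
        }
        loadFromServer(result =>
        {
            List<Action<IList<Customer>>> toNotify;
            lock (sync)
            {
                cached = result;
                toNotify = new List<Action<IList<Customer>>>(waiting);
                waiting.Clear();
                loading = false;
            }
            foreach (var cb in toNotify) cb(result);
        });
    }

    public void Invalidate() { lock (sync) cached = null; }  // force a reload next time
}

Register one instance of this (via your IoC container or a simple static) and have every ViewModel ask it for the list; the server is hit once and later requests are served from memory, which removes the redundant downloads and the duplicate contexts.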
Another option is to use something like SterlingDB, a local document database for SL/WP7, and store the data in isolated storage.

Using sql queries in Business-Data Layered application

I'm sorry if exactly the same question is already somewhere in the haystack of Stack Overflow questions, but I've been searching for answers for two days and at last I'm here.
Please feel free to point out duplicates, but make sure they address the issue.
I have a very old application where the code is not object oriented and SQL queries are everywhere; each of them uses SqlCommand/SqlDataReader to get data.
Now I'm trying to move to the WCF Web API framework and to separate the business and data layers. Most of the SQL queries return different sets of columns.
I believe the SQL queries must be executed in the data layer.
At the moment I'm building those queries (or taking the existing ones) in the business layer and passing them down to the data layer.
My plan is to return DataSets from the data layer, manipulate them and map them into domain classes in the business layer, and return those to the controller.
However, I feel this is not a good approach and it is tightly coupled. I cannot find a way to make it loosely coupled.
If I were to use EF or another ORM, I could use a DbContext to get data in the business layer.
I cannot map each database table to a POCO class in my data layer (because most queries are complex and return different sets of columns).
Question
How should I deal with the queries to make this architecture a better one in terms of making it loosely coupled?
Fixing up a brownfield project like the one you are dealing with is tricky. It sounds like you have the right idea in trying to get all SQL queries into a data layer.
My suggestion is to find a thin slice, a small part of the application (e.g. the part that lets you add a user), and refactor that code into the structure you want. Then find the next thin slice...
Another tip: do one small refactoring at a time. As soon as you let your refactorings get too big, you increase the risk of introducing bugs.
A tool like ReSharper is a great help when it comes to safely refactoring your code. It has a bit of a learning curve, but it's worth the effort.
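To make the first thin slice concrete, here is a minimal sketch of what the refactored structure could look like; OrderSummary, IOrderData and the query are illustrative, not taken from your code base. The point is that the SQL and the SqlCommand/SqlDataReader plumbing stay in the data layer, while the business layer only sees a small interface and a DTO shaped for that query:

using System.Collections.Generic;
using System.Data.SqlClient;

// One DTO per query shape, so different column sets don't force one POCO per table.
public class OrderSummary
{
    public int OrderId { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

// The business layer depends only on this interface.
public interface IOrderData
{
    IList<OrderSummary> GetOpenOrders();
}

// Data layer implementation: the only place that knows SQL and ADO.NET.
public class SqlOrderData : IOrderData
{
    private readonly string connectionString;
    public SqlOrderData(string connectionString) { this.connectionString = connectionString; }

    public IList<OrderSummary> GetOpenOrders()
    {
        var result = new List<OrderSummary>();
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT o.OrderId, c.Name, o.Total FROM Orders o " +
            "JOIN Customers c ON c.CustomerId = o.CustomerId WHERE o.Closed = 0", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    result.Add(new OrderSummary
                    {
                        OrderId = reader.GetInt32(0),
                        CustomerName = reader.GetString(1),
                        Total = reader.GetDecimal(2)
                    });
                }
            }
        }
        return result;
    }
}

Because each query gets its own small DTO, you don't have to map every table to a POCO, and the business layer can be tested against a fake IOrderData without a database.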

Data layer architecture for WPF Rich-Client?

Background
I need to build a rich-client application using .NET. The app needs to handle TreeView and TableView controls with about 100,000 entities. The GUI is built with WPF, very likely using Telerik controls. My question is about the general architecture of the data layer. I have some rough ideas about the concepts, but would highly appreciate your comments, thoughts and hints on which technology I should dig into. Here are my thoughts:
Conceptual Layers
Presentation Layer
Just the WPF controls. I'd need performant synchronization of different controls on property changes, but I don't anticipate major problems here.
Business Layer
Creating views (object selections to be displayed in the controls), CRUD operations (modifications done directly on the POCOs), and searching (global search, but also searches limited to a view).
Repository
Holds the POCOs in an entity map and decides whether to load them from the persistence store.
Persistence-Manager
I'm thinking of using a LocalDB or a simple key-value store as a (persistent) client cache. The Persistence-Manager would try to get an object from the local store and otherwise fetch the data from the server, persisting it to the client cache afterwards (a rough sketch of this lookup follows the outline below). The data would be available via a web service; I'm happy to give WCF Data Services a try.
Persistence-Layers
There would be two parts:
- Local DB connection using an ORM like EF or OpenAccess; or a simple key-value store
- HTTP connection to consume the Web-Service
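To make the layering above a bit more concrete, here is a rough sketch of how I picture the Repository (entity map) and the Persistence-Manager (local cache first, then server) fitting together; ILocalStore, IRemoteService and Project are placeholder names, not a concrete API:

using System.Collections.Generic;

// Placeholder entity and abstractions for the local cache and the web service.
public class Project { public int Id; public string Name; }

public interface ILocalStore    { Project TryGet(int id); void Put(Project p); }
public interface IRemoteService { Project Load(int id); }   // e.g. a WCF Data Services call

public class PersistenceManager
{
    private readonly ILocalStore local;
    private readonly IRemoteService remote;

    public PersistenceManager(ILocalStore local, IRemoteService remote)
    {
        this.local = local;
        this.remote = remote;
    }

    public Project Get(int id)
    {
        var cached = local.TryGet(id);                  // 1. try the client-side cache
        if (cached != null) return cached;
        var fromServer = remote.Load(id);               // 2. otherwise ask the server
        if (fromServer != null) local.Put(fromServer);  // 3. and persist it locally
        return fromServer;
    }
}

public class ProjectRepository
{
    private readonly PersistenceManager persistence;
    private readonly Dictionary<int, Project> entityMap = new Dictionary<int, Project>();  // identity map

    public ProjectRepository(PersistenceManager persistence) { this.persistence = persistence; }

    public Project GetById(int id)
    {
        Project entity;
        if (entityMap.TryGetValue(id, out entity)) return entity;  // same instance for the same id
        entity = persistence.Get(id);
        if (entity != null) entityMap[id] = entity;
        return entity;
    }
}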
Questions
In a layering like this, what about lazy loading referenced objects? I know EF and other ORMs take care of a lot of the issues I have here too, but I don't yet see how to plug these frameworks into the above layering. Also, where should changes be tracked? And where should consistency be secured when deleting objects (e.g. deleting references to those objects as well)?
I would eager-load whole views (hierarchical structures) and run LINQ to Objects over those collections of POCOs, maybe implementing a simple inverted index if LINQ performance became an issue (a rough sketch follows this paragraph). But how should I best implement global searches on the server? Are there libraries ("LINQ to OData") available?
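For illustration, a minimal version of such an inverted index could look like this; Item and its Text property are placeholders, and a real index would need proper tokenization and updating:

using System;
using System.Collections.Generic;
using System.Linq;

// Maps each token to the ids of the POCOs containing it, so a search touches a
// dictionary instead of scanning 100,000 objects with LINQ to Objects.
public class Item { public int Id; public string Text; }

public class InvertedIndex
{
    private readonly Dictionary<string, HashSet<int>> index =
        new Dictionary<string, HashSet<int>>(StringComparer.OrdinalIgnoreCase);

    public void Add(Item item)
    {
        foreach (var token in Tokenize(item.Text))
        {
            HashSet<int> ids;
            if (!index.TryGetValue(token, out ids))
                index[token] = ids = new HashSet<int>();
            ids.Add(item.Id);
        }
    }

    // Returns the ids of items containing all search terms.
    public IEnumerable<int> Search(string query)
    {
        IEnumerable<int> result = null;
        foreach (var token in Tokenize(query))
        {
            HashSet<int> ids;
            if (!index.TryGetValue(token, out ids)) return Enumerable.Empty<int>();
            result = result == null ? (IEnumerable<int>)ids : result.Intersect(ids);
        }
        return result ?? Enumerable.Empty<int>();
    }

    private static IEnumerable<string> Tokenize(string text)
    {
        return (text ?? string.Empty)
            .Split(new[] { ' ', ',', ';', '.' }, StringSplitOptions.RemoveEmptyEntries);
    }
}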
What do you think about a fully "disconnected" scenario: holding all the data a user needs in a local database, syncing on start/stop and when the user triggers it? I could use an ORM directly on the local DB, with a good chance of saving a lot of headaches from trying to implement many consistency features by hand (using the above layering).
Or, in contrast, forget about the local database and eager-load most of the needed data in batches. Here I'm concerned about the performance of the web services (without having experience with OData or WCF). I've built an app using Redis and Python that loads about 200,000 business objects quite fast (< 1 min) to the client (the objects are already serialized and cached in Redis).
I'll certainly do some prototyping and benchmarking, but to get a good start, any thoughts and recommendations are highly appreciated.
Cheers,
Jan

How to abstract my business model using dbExpress and Delphi (maybe DataSnap as well)?

If my question isn't clear, please help me improve it with comments.
I'm new to Delphi and dbExpress and I'm just getting acquainted with the TSQLDataSet, TDataSetProvider, TClientDataSet and TDataSource classes.
The project that I'm working on uses these components in a way that feels strange to me. There is a huge data module unit that contains lots and lots of the quartet of classes described above. I'm guessing there are better (and more modular) ways of doing this. DataSnap is used only to place this data module in a server application, so the clients access the data through it.
So, let me try to explain some of my doubts:
What is the role of each of these classes? I read the documentation but I can't get a practical insight into this subject (especially about TDataSetProvider).
Which classes should be in the data module and which should be in my forms?
Is it possible to create an intermediate layer to abstract my business model from my database setup (maybe by creating functions that return immutable datasets)?
If so, is it wise to use DataSnap to do so?
I'm sorry if I'm not clear enough. Thanks in advance.
Components
TDataSource is the bridge between data-aware controls and the dataset (TDataSet descendant) from which they are to get their values.
TClientDataSet is one such dataset. TClientDataSet can be used in isolation, for example to access data contained in xml files, but can also be hooked up to a TDataSetProvider.
TDataSetProvider is the bridge between the in-memory TClientDataSet and the actual dataset that takes its data from a database through some kind of driver. In Client Server development you will usually see a TRemoteDataSetProvider (name may be different, I don't work with these components that often), which bridges the gap between the client and server.
TSQLDataSet is the actual dataset getting its data from some database.
It would feel strange to me to see this quartet all in one executable. I would expect the TSQLDataSet on the server side, hooked up to the TRemoteDataSetProvider's counterpart. However, I guess that with an embedded database it could be a way to support the briefcase model, which is where TClientDataSet is really helpful (TClientDataSet is very powerful; that is just one of its strong points).
Single datamodule
Ouch. A single huge datamodule is lazy programming or the result of misconceptions about how to use datamodules. It is perfectly normal to have a single datamodule that "hosts" the database connection which is then used by various other datamodules that are more tightly focused on aspects of the application.
Domain abstraction
With regard to abstracting your business model, dbExpress and DataSnap should really not be anywhere in your business model. They should be part of your data layer.
TDataSource, TClientDataSet and custom TDataSetProvider descendant(s) can be used to leverage the power of the data-aware controls in the UI while still keeping the UI separate from the business model. In this case the custom TDataSetProvider would be the bridge between the client dataset and the collections and instances in the domain layer.
Even so, I would still expect to see a separate data layer, using TRemoteDataSetProviders or straight TDataSet descendants (like TSQLDataSet) to provide the domain layer with its data.
The single huge data module you mentioned could be part of that data layer, with the client datasets providing the business layer with its data. As you also mention TDataSource as part of a common quartet, the application was probably developed in a RAD-data-aware fashion where UI controls are basically hooked up straight to database columns/tables.
If you want to transform this application to have a more layered architecture, tread carefully and slowly. Get to know the current architecture first and get to know it well enough to see the impact that this kind of transformation would have. The links provided by Serg will certainly help you there. Pawel Glowacki has written extensively about DataSnap.
The project you are working on is a Delphi Multitier Database Application
Your question is too wide for SO and can hardly be answered in its current form - you should learn to understand the underlying architecture first.
You can start from video tutorial by Pawel Glowacki:
http://www.youtube.com/watch?v=B4uxLLIUddg
http://cc.embarcadero.com/item/28188

Delphi DataModule Usage - Single or Multiple?

I am writing an application with various forms and their corresponding datamodules.
I wrote it so that they use each other by referencing each other's unit in the uses clause (one in the implementation section and the other in the interface section, to avoid a circular reference).
Is this approach wrong? Why should or shouldn't I use it this way?
I have to agree with Ldsandon; IMHO it's way better to have more than one datamodule in your project. If you look at it as a Model-View-Controller thingie, your DB is the model, your forms would be the views and your datamodules would be the controllers.
Personally I always have AT LEAST 2 datamodules in my project. One datamodule is used to share Actions, ImageLists, DBConnection, ... and other stuff throughout the project. Most of the time this is my main datamodule.
From there on I create a new datamodule for each 'entity' in my application. For example, if my application needs to process or display orders, customers and products, then I will have a datamodule for each and every one of those.
This way I can clearly separate functionality and easily reuse bits and pieces without having to pull in everything. If I need something related to customers, I simply use the Customers datamodule and just that.
Regards,
Stefaan
It is OK, especially if you're going to create more than one instance of the same form, each using a different instance of the related datamodule.
Just be aware of a little issue in the VCL design: if you create two instances of the same form and their datamodules, both forms will point to the same datamodule (due to the way the VCL resolves links) unless you use a little trick when creating the datamodule instance:
if FDataModule = nil then
begin
  FDataModule := TMyDataModule.Create(Self);
  FDataModule.Name := ''; // That will avoid pointing to the same datamodule
end;
A data module just for your table objects, if you only have a few DB tables, would be a good first one.
And a data module just for your actions is another.
A data module just for image lists is another good third data module. This data module only contains image lists, and then your forms that need access to image lists can use them all from this shared location.
If you have 200 table objects, you may want more than one data module just for DB tables. Imagine an application that has 20 tables to do with invoicing and another 20 tables to do with HR. I would think an InvoicingDataModule and an HRDataModule would be nice to keep separate if the tables inside them, and the code that works against them, don't need to know anything about each other, or even when one module has a "uses" dependency on the other in one direction only, as long as that relationship is not circular. Even then, finer-grained data module modularity can be beneficial.
Beyond the looking glass: I always use Forms instead of DataModules. I know it's not common ground. I call them DataMovules.
I use one such DataMovule for each group of logically related tables.
I use Forms instead of DataModules because both DataModules and Forms are component containers, and both can effectively be used as containers for data-related components.
During development:
My application is easier to develop. Forms give you the opportunity to view the data components, making it easy to develop those components.
My application is easier to debug, as you can put some viewer components on the form right away so you can actually see the data. I usually create a tab rack, with one page per table and a data grid on every page for browsing the data in that table.
My application is easier to test, as you may eventually manipulate the data, for example experimenting with extreme values for stress testing.
After development:
I do turn the form invisible. Practically speaking that makes it a DataModule, and I enjoy the very same container features that a datamodule has.
But with a bonus: the Form is still there, so I can eventually turn it visible for problem determination. I use the About box for this.
And no, I have not experienced any noticeable penalty in application size or performance.
I don't try to break the MVC paradigm; I try to adhere to it. I don't mix the Forms that constitute my View with the DataMovules that constitute my Controllers, and I don't consider the latter part of my View. They will never be used as the user interface of my application. DataMovules just happen to be Forms; they are just convenient engineering artifacts.

In Memory Database

I'm using SQL Server to drive a WPF application. I'm currently using NHibernate and pre-reading all the data so it's cached for performance reasons. That works for a single-client app, but I was wondering if there's an in-memory database that I could use to share the information across multiple apps on the same machine. Ideally this would sit below my NHibernate stack, so my code wouldn't have to change. Effectively I'm looking to move my DB from its traditional format on the server to an in-memory DB on the client.
Note I only need select functionality.
I would be incredibly surprised if you even need to load all your information into memory. I say this because, just as one example, I'm working on a web app at the moment that (for various reasons) loads thousands of records on many pages. This is PHP + MySQL, and even so it can do that and render a page in well under 100 ms.
Before you go down this route, make sure that you have to. First make your database as performant as possible. Obviously this includes things like having appropriate indexes and tuning your database, but even those are putting the cart before the horse.
First and foremost you need to make sure you have a good relational data model: one that lends itself to performant queries. This is as much art as it is science.
Also, you may like NHibernate, but ORMs are not always the best choice. There are some corner cases, for example, in which hand-coded SQL will be vastly superior.
Now, assuming you have a good data model, and assuming you've then optimized your indexes and database parameters and properly configured NHibernate, then and only then should you consider storing data in memory, and only if performance is still an issue.
To put this in perspective, the only times I've needed to do this are on systems that need to perform millions of transactions per day.
One reason to avoid in-memory caching is because it adds a lot of complexity. You have to deal with issues like cache expiry, independent updates to the underlying data store, whether you use synchronous or asynchronous updates, how you give the client a consistent (if not up-to-date) view of your data, how you deal with failover and replication and so on. There is a huge complexity cost to be paid.
Assuming you've done all the above and you still need it, it sounds to me like what you need is a cache or grid solution. Here is an overview of Java grid/cluster solutions, but many of them (e.g. Coherence, memcached) apply to .NET as well. Another choice for .NET is Velocity.
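Whatever product you pick, the pattern underneath is read-through caching with some expiry policy. A minimal single-process sketch, just to illustrate the idea (the key type, loader delegate and TTL are illustrative choices, not any product's API):

using System;
using System.Collections.Generic;

// Read-through cache: check the cache, fall back to the database, expire entries after a while.
public class ReadThroughCache<TKey, TValue>
{
    private class Entry { public TValue Value; public DateTime LoadedAtUtc; }

    private readonly Dictionary<TKey, Entry> entries = new Dictionary<TKey, Entry>();
    private readonly Func<TKey, TValue> loadFromDb;   // e.g. an NHibernate or raw SQL query
    private readonly TimeSpan timeToLive;
    private readonly object sync = new object();

    public ReadThroughCache(Func<TKey, TValue> loadFromDb, TimeSpan timeToLive)
    {
        this.loadFromDb = loadFromDb;
        this.timeToLive = timeToLive;
    }

    public TValue Get(TKey key)
    {
        lock (sync)
        {
            Entry entry;
            if (entries.TryGetValue(key, out entry) &&
                DateTime.UtcNow - entry.LoadedAtUtc < timeToLive)
            {
                return entry.Value;                   // fresh cache hit
            }
        }
        var value = loadFromDb(key);                  // miss or expired: hit the database
        lock (sync)
        {
            entries[key] = new Entry { Value = value, LoadedAtUtc = DateTime.UtcNow };
        }
        return value;
    }
}

Note that this only helps within one process; sharing the cached data across several apps on the same machine is exactly the part that products like Velocity or memcached add, along with the expiry and consistency concerns mentioned above.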
It needs to be pointed out and stressed that something like NHibernate's caching is only consistent so long as nothing external updates the database and there is exactly one NHibernate-enabled process (barring clustered solutions). If two desktop apps on two different PCs are both updating the same database with NHibernate, the caching simply won't work, because the persistence units won't be aware of the changes the other is making.
http://www.db4o.com/ can be your friend!
Velocity is an out-of-process object caching server designed by Microsoft to do pretty much what you want, although it's only in CTP form at the moment.
I believe there are also wrappers for memcached, which can also be used to cache objects.
You can use HANA, express edition. You can download it for free; it's in-memory, columnar, and allows for further analytics capabilities such as text analytics, geospatial or predictive. You can also access it with ODBC, JDBC, the node.js hdb library, and REST APIs, among others.
