WPF/MVVM - keep ViewModels in application component or separate?

I have a WPF application where I'm using the MVVM pattern.
The VM gets activated for actions that require user input, so I need to bring up views from the VM.
I started out putting the VMs in a separate component/assembly, partly because I see them as the unit-testable part, partly because views should depend on VMs, not the other way round. But when I then need to bring up a window, the window is not known to the VM.
All introductions I find place the VM in the WPF/App component, thus eliminating the problem.
This article recommends keeping them in separate layers: http://waf.codeplex.com/wikipage?title=Architecture%20-%20Get%20The%20Big%20Picture&referringTitle=Home
As I see it, I have the following choices:
Move VMs to the WPF/App assembly to allow VMs to access the windows directly.
Place interfaces of views in VM-assembly, implement views in WPF/App assembly and register the connection through IOC or alternative ways.
File a 'request' from the VM into some mechanism/bus that routes the request (but which mechanism!? e.g., something in Prism?!)
What's the recommendation?
Thanks for any comments,
Anders, Denmark

Don't pick option 1. You'll be adding an unwanted dependency from the VM to the View.
Options 2 and 3 are both valid and in use. Picking between them is partly a matter of taste. I think IoC makes mocking easier, whereas a message bus works fine for small applications.
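For option 2, a rough sketch of what the interface-plus-IoC wiring could look like. IDialogService, EditItemViewModel, EditItemWindow and the Unity registration are all names invented here for illustration, not something from the question:

using System.Windows;  // only needed by the App-side implementation below

// In the VM assembly: the ViewModel only knows this abstraction.
public interface IDialogService
{
    bool? ShowEditDialog(EditItemViewModel viewModel);
}

public class EditItemViewModel
{
    // State for the item being edited would live here.
}

public class ItemListViewModel
{
    private readonly IDialogService _dialogService;

    public ItemListViewModel(IDialogService dialogService)
    {
        _dialogService = dialogService;
    }

    public void EditSelectedItem(EditItemViewModel itemViewModel)
    {
        // The VM requests a dialog without knowing which window implements it.
        _dialogService.ShowEditDialog(itemViewModel);
    }
}

// In the WPF/App assembly: the implementation creates the actual window.
public class DialogService : IDialogService
{
    public bool? ShowEditDialog(EditItemViewModel viewModel)
    {
        // EditItemWindow is a hypothetical Window defined in XAML in the App assembly.
        var window = new EditItemWindow { DataContext = viewModel };
        return window.ShowDialog();
    }
}

// At startup, e.g. with Unity: container.RegisterType<IDialogService, DialogService>();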

Keep your ViewModels in a separate assembly from the Views.
If you look into Cinch and MEFedMVVM you'll see very powerful mechanisms for connecting views and viewmodels using MEF. Keeping them separate also facilitates running your application headless (no UI), which is great for testing and for exposing an API.
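As an illustration of the headless-testing benefit, a ViewModel that depends only on abstractions can be exercised without any WPF at all. A minimal sketch, reusing the hypothetical IDialogService from the earlier example and assuming NUnit:

using NUnit.Framework;

[TestFixture]
public class ItemListViewModelTests
{
    private class FakeDialogService : IDialogService
    {
        public bool WasShown;

        public bool? ShowEditDialog(EditItemViewModel viewModel)
        {
            WasShown = true;
            return true;
        }
    }

    [Test]
    public void EditSelectedItem_ShowsEditDialog()
    {
        var fakeDialogs = new FakeDialogService();
        var viewModel = new ItemListViewModel(fakeDialogs);

        viewModel.EditSelectedItem(new EditItemViewModel());

        // No window was created; the fake just records the request.
        Assert.IsTrue(fakeDialogs.WasShown);
    }
}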

Related

JSF application : should I use micro-services and how?

I have a web application developed with JSF 2 and PrimeFaces. The project has been frozen for months, but it's quite advanced; the whole application runs inside the same container under GlassFish, so it's a monolith.
My application has a user interface, and its purpose is to let users organize URLs to tutorials (of any kind) as cards, with tags for classification, into folders. So every user has their own tree; they can search inside other users' trees, create a link to a file in their own tree, copy an entire folder, reorganize it, etc.
Nowadays we hear a lot about microservices, Spring Boot, AngularJS, React, etc. I like developing with JSF, it's a great framework, but I'm asking myself about refactoring my application, at least the necessary parts, into microservices, and whether JSF is appropriate for that or whether I should use other tools.
What I like with JSF, for example, is how easy it is to create views, its component approach, and how it handles the full cycle of a request.
For example with a simple folder creation form :
I have to choose the parent folder, so I can bind a search component to a backing bean that indirectly searches my DB through a DAO (in my app, an EJB using JPA). That happens at the "invoke application" phase and refreshes my form's list with Ajax at the end. When I submit the form, I can also bind a converter to the search component to retrieve a Folder object directly; the converter also uses a DAO to retrieve the object I need at the "invoke application" phase to finish the job.
I also use validators to check different attributes of a new folder; usually I declare them inside my entity classes (Folder, User, ...) with annotations like @NotNull. Before I save the folder to my DB, I also check the user's rights to see whether they can write inside the parent folder, and so on. I do that inside the backing bean, so at the "invoke application" phase, and return a faces message if anything goes wrong.
When I read about microservices, I see that you can call them directly from a form using JSON for communication, so it seems quite different. For example, if I have a microservice for the CRUD operations on my folders, are the validators and converters part of the service, or are they standalone services? And what about the security checks? That kind of architecture is quite mysterious to me.
ps : English is not my mother tongue so be indulgent please :)
AngularJs is pretty ancient man :)
You have to look at the pain points to identify ways to tear down your monolith. Monolith pains are usually a slow and painful dev cycle and difficult manual test phases. If you did the entire Arquillian thing and have full continuous integration with single-button deployments, you've slain the beast the hard way; not many have braved this route. But if you're looking at mounting feature creep with code freezes and manual test cycles, then yeah, you kind of want to try to pull some of those features out into a service you can redeploy very quickly.

Prism 4 WPF App - Solution Architecture with MVVM, Repositories and Unit of Work patterns implemented

I'm new to .Net and trying to learn things. I'm trying to develop a Prism4 WPF app with
Visual Studio CSharp 2010 Express Edition,
Prism v4,
Unity as IoC,
SQL Server CE as data store.
I've studied a lot(?) and been influenced by this and this among others, and decided to implement the MVVM, Repository and Unit of Work patterns. This application will be a desktop application with a single user (me :-)
So, I've created a solution with the following projects:
Shell (Application Layout and startup logic)
Common (Application Infrastructure and Workflow Logic)
BusinessModuleA (Views and ViewModels)
BusinessModuleA.Model (Business Entities - POCO)
BusinessModuleA.Data (Repositories, Data Access (EF?) )
BusinessModuleB (Views and ViewModels)
BusinessModuleB.Model (Business Entities - POCO)
BusinessModuleB.Data (Repositories, Data Access (EF?) )
My questions are:
Which projects should reference which projects ?
If I implement Repositories in 'BusinessModuleX.Data', which is obvious, where should I define the IRepositories?
Where should I define IUnitOfWork and where should I implement UnitOfWork?
Is it OK to consume UnitOfWork and Repositories in my ViewModels? Instinct says it is bad design.
If (4) above is bad, then the ViewModel should get data via a Service Layer (another project?). Then how can we track changes to the entities so as to call the relevant CRUD methods on those objects at the Service Layer?
Is any of this making any sense, or am I missing the big picture?
OK, maybe I've not made myself clear on what I wanted exactly in my first post; there are not many answers coming up. I'm still looking for answers because, while what @Rachel suggested may be effective for the immediate requirements, I want to be careful not to paint myself into a corner. I have an Access DB that I developed for my personal use at the office, which became kind of a success and is now being used by 50+ users and growing. Maintaining and modifying the Access code base was fairly simple at the beginning, but as the app evolved it began to fall apart. That's why I have chosen to re-write everything in .NET/WPF/Prism and want to make sure that I get the basic design right.
Please discuss.
Meanwhile, I came up with this...
First off, I would simplify your project list a bit to just Shell, Common, ModuleA, and ModuleB. Inside each Project I'd have sub-folders to specify where everything is. For example, ModuleA might be separated into folders for Views, ViewModels, and Models
I would put all interfaces and global shared objects such as IUnitOfWork in your Common project, since it will be used by all modules.
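As a rough illustration, the shared abstractions in Common might look something like this; the exact shape of IRepository<T> is an assumption for the sketch, not something Prism dictates:

using System;
using System.Collections.Generic;

// Common project: abstractions only, no EF or SQL Server CE references here.
public interface IRepository<T> where T : class
{
    T GetById(object id);
    IEnumerable<T> GetAll();
    void Add(T entity);
    void Remove(T entity);
}

public interface IUnitOfWork : IDisposable
{
    // Commits all pending changes made through the repositories.
    void Commit();
}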
How you implement IUnitOfWork and your Repositories probably depends on what your Modules are.
If your entire application links to one database, or shares database objects, then I would probably create two more projects for the DataAccessLayer. One would contain public interfaces/classes that can be used by your modules, and the other would contain the actual implementation of the Data Access Layer, such as Entity Framework.
If each Module has its own database, or its own set of objects in the database (i.e. Customer objects don't exist unless you have the Customer Module installed), then I would implement IUnitOfWork in the modules and have them handle their own data access. I would probably still have some generic interfaces in the Common library for the modules to build on, though.
Ideally, all your modules and your Shell can access the Common library. Modules should not access each other unless they build on them. For example, a Customer Statistics module that builds on the base Customer module should access the Customer module.
As for whether your ViewModels should consume a UnitOfWork or a Repository, I would have them use a Repository only. Ideally your Repository should be like a black box: ViewModels can get/save data through the Repository but should have no idea how it's implemented. Repositories can get the data from a service, Entity Framework, direct data access, or wherever, and the ViewModel won't care.
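A minimal sketch of that shape, assuming the IRepository<T> abstraction sketched above and an invented Customer entity:

using System.Collections.ObjectModel;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class CustomerListViewModel
{
    private readonly IRepository<Customer> _customers;

    // The ViewModel sees only the abstraction; Entity Framework, SQL Server CE
    // or a web service could sit behind it without this class changing.
    public CustomerListViewModel(IRepository<Customer> customers)
    {
        _customers = customers;
        Items = new ObservableCollection<Customer>(_customers.GetAll());
    }

    public ObservableCollection<Customer> Items { get; private set; }
}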
I'm no expert on design architecture, however that's how I'd build it :)
I would highly recommend that you get the Introduction to PRISM and the Repository Pattern courses inside the Design Patterns Library training videos. They are great ones. Hope it helps.

How to implement DI using DbContext in Windows Form

I have a class running in a winforms app which uses EF Code First. The DbContext is created via DI through the class constructor. All works well.
The problem is the data being referenced is also being modified via a web site, using the same DI pattern with EF Code First, and the data changes are not being reflected in the context instance in the winforms app.
I can solve this by recreating the DbContext object in WinForms every time I access it, but that seems more like a service location pattern to me?
Is there a true DI technique to achieve this?
Or should I remove the context from the DI and use service location?
Were you not happy with the answer to your other question (http://stackoverflow.com/questions/7657643/how-to-force-ef-code-first-to-query-the-database) which suggested using Detach, AsNoTracking or Overwrite Changes?
1) Maybe you could pass in an interface that has the ability to create a DbContext, instead of the context itself:
using (var context = _contextFactory.Create())
{
    // Blah stands in for a real entity set on the context.
    var entities = from item in context.Blah select item;
}
The Create method could either create the concrete class itself (defeating the DI pattern a bit), or use service location to have one created for it. Not that nice, but it's better than embedding service location calls everywhere and still means you're controlling the lifecycle yourself.
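A minimal sketch of what such a factory might look like; IDbContextFactory, DbContextFactory and MyDbContext are names made up here for illustration:

using System.Data.Entity;

// Your Code First context; DbSet properties for your entities would go here.
public class MyDbContext : DbContext
{
}

public interface IDbContextFactory
{
    MyDbContext Create();
}

public class DbContextFactory : IDbContextFactory
{
    // A fresh context per call means each query sees the latest committed data;
    // the caller disposes it (e.g. with a using block as above).
    public MyDbContext Create()
    {
        return new MyDbContext();
    }
}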
2) Change the WinForm to read from a webservice run by the website, effectively similar to disabling caching.
3) Deep in the heart of MVC (well, not really that deep) it references the DI container directly and uses it as a service locator to pass arguments to newly created objects. Technically you could do something similar in WinForms, but it would need you to split your application up into little chunks (controllers) that don't have a very long lifetime. Maybe it's worth looking at some MVC/MVP frameworks for WinForms, although I found myself cringing at most of the ones I saw after a quick google.
The problem is the data being referenced is also being modified via a web site, using the same DI pattern with EF Code First, and the data changes are not being reflected in the context instance in the winforms app.
This is a problem with your expectations.
If your web service and Windows Forms app are in separate processes, they won't share in-memory data.
If you want to sync their in-memory data, simply re-query in one context after committing to the database in the other. This is the same as trying to share data between different SQL connections.
I can solve this by recreating the DbContext object in WinForms every time I access it, but that seems more like a service location pattern to me?
If you want to recreate the DbContext repeatedly, you could use an abstract factory to allow manual re-creation of the object, yet allow you to inject the specific implementation into the factory.
This is not (necessarily) the Service Locator pattern, and you would have to ensure that you manually dispose your DbContext instances. I'd give you some example code, but different DI containers have totally different ways of accomplishing a factory pattern.
Or you could simply make sure that you commit your data on the web service side, and re-query the data on the WinForms app side.

Using a Reference Table Service in a WPF Application

For past projects (the last few have been web apps using ASP.NET MVC) we created a service that caches our reference tables (as required), used primarily for dropdown lists.
Now I'm working on a desktop application, an upgrade from VB6/Sybase to VB.NET/SQL Server.
I'm trying out WPF.
I started down the same path, building up my DAL with one entity for each reference table.
I'm at the stage now where I want to set up the business layer (some reference tables can be edited).
And I'm not sure if I should follow the same process, which is to use a ReferenceTableService to "manage" the reference tables (it interacts with the DAL and the controller).
This will be an application that sits on a share and that multiple users run.
What's the best way to deal with the reference tables? Caching them doesn't seem to be an option. Should I simply load them as each user opens up a new form in the application? Perhaps using a "ReferenceTableService"?
In this case, the Reference Table Service is a thin layer in the application, not a process running elsewhere.
I haven't done much WPF (it would be interesting to see what the WPF gurus think), but I think your existing approach is sound and I don't see why you should deviate from it.
Loading up on app start sounds reasonable; you just have to think about the expected lifetime of a user session vs the expected frequency of changes to the reference data.
Caching: if the data comes from a central service you could always introduce caching there.
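If you do introduce caching in that thin service layer, a very rough sketch of the idea might look like this; IReferenceTableDal, ReferenceItem and the invalidation approach are all assumptions for illustration, not part of the original design:

using System.Collections.Generic;

public class ReferenceItem
{
    public int Id { get; set; }
    public string Description { get; set; }
}

public interface IReferenceTableDal
{
    IList<ReferenceItem> Load(string tableName);
}

public class ReferenceTableService
{
    private readonly IReferenceTableDal _dal;
    private readonly Dictionary<string, IList<ReferenceItem>> _cache =
        new Dictionary<string, IList<ReferenceItem>>();

    public ReferenceTableService(IReferenceTableDal dal)
    {
        _dal = dal;
    }

    // Returns the cached list for a table, loading it on first use.
    public IList<ReferenceItem> Get(string tableName)
    {
        IList<ReferenceItem> items;
        if (!_cache.TryGetValue(tableName, out items))
        {
            items = _dal.Load(tableName);
            _cache[tableName] = items;
        }
        return items;
    }

    // Call after an edit so the next Get re-reads from the database.
    public void Invalidate(string tableName)
    {
        _cache.Remove(tableName);
    }
}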

How to avoid coupling when using regions in Composite WPF

I have an application designed using Microsoft's Composite Application Library. My shell has several regions defined so that I can inject content from separate modules. I'm looking for a design pattern that will reduce the coupling that these regions introduce.
In all the examples I have seen, regions are defined and accessed using a string in a static class in the infrastructure project:
<ItemsControl cal:RegionManager.RegionName="{x:Static inf:RegionNames.TabRegion}">

public static class RegionNames
{
    public const string TabRegion = "TabRegion";
}
This introduces a dependency on the shell from the infrastructure project, because part of the infrastructure project must now match the shell. The CAL RegionManager throws an exception if you attempt to access a region which is not defined, so I must ensure that the infrastructure and shell projects are kept in sync.
Is there a way to isolate the shell's regions so that they are defined only within the shell (no region names in the infrastructure project)?
Is there a way to make regions optional, so that shells can be swapped out even if they don't have all the same regions? (An example: One shell has menu and toolbar regions, another only has the menu... modules should be able to inject into the toolbar if it's available, without failing when it's not)
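To make that second scenario concrete, one way a module could probe for an optional region is sketched below; ToolBarModule and MyToolBarView are invented names, and the namespaces shown are the Prism 4 ones (the original CAL uses Microsoft.Practices.Composite.* but exposes the same members):

using Microsoft.Practices.Prism.Modularity;
using Microsoft.Practices.Prism.Regions;

public class ToolBarModule : IModule
{
    private readonly IRegionManager _regionManager;

    public ToolBarModule(IRegionManager regionManager)
    {
        _regionManager = regionManager;
    }

    public void Initialize()
    {
        // Only inject into the toolbar if this particular shell defines that region.
        if (_regionManager.Regions.ContainsRegionWithName("ToolBarRegion"))
        {
            _regionManager.Regions["ToolBarRegion"].Add(new MyToolBarView());
        }
    }
}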
Update - More details on my architecture
In response to depictureboy's answer below, I wanted to describe the way my system is set up... perhaps there will be more good feedback on it.
I am treating the Infrastructure and Shell projects as generic libraries, and I have several applications which use them. The Infrastructure project provides "framework" code and resources (like MVVM stuff, reflection, icons), and my Shell is a generic host window, with the basic window layout (menus, toolbars, status bar, main content area). The applications all share a common look and behave similarly because they share the shell.
My applications get their individual functionality from the modules which get loaded, so I have a bootstrapper project per application which pulls everything together (infra, shell, modules).
I imagine if I ever need to develop a brand new application that is very different from the current ones, I will be able to re-use the infrastructure project, but not the shell. That is why I am curious about decoupling the infrastructure project and the shell.
I think you have your logic backwards. Your shell is the glue that binds everything together; in my mind you want the infrastructure and shell tightly coupled because, together, they are the application. Your modules are the parts of the application that will be changing and switching around dynamically. You want your shell regions to be static so that, for example, another developer could write a module for your application knowing where his different views are going to be placed and how the application should behave with his module attached. The Infrastructure project is there to be the go-between between your shell and its modules... that's just a fact of life, at least in my book. One of the WPF gurus may come up with something that absolutely blows that out of the water tomorrow...
