I am wondering what the best approach would be in a WPF (possibly MVVM) application where the data exchange with remote devices is done through Protocol Buffers (where that conveniently applies).
WPF leans heavily on observability, as well as on the mutability of the underlying model/viewmodel, via dependency properties and the INotify* interfaces. Does that fight against the Protocol Buffers approach to creating/mutating POCOs?
The typical context is a WPF client application connected via TCP/IP to an embedded device running Linux. Basically, I'm evaluating the pros and cons of several solutions in order to find the best one.
Thank you in advance.
WPF should have zero bearing on this because your data exchange should be separated into a separate, UI-agnostic layer. Your service layer can return non-GPB objects if necessary (or returns interfaces that your GPB objects implement via partial classes), and your view model layer provides yet another layer of insulation.
Your key points seem to be about mutability and observability.
The Google protobuf API is indeed largely immutable and won't love WPF very much; however, you also mention protobuf-net, which does not follow that pattern and instead adopts standard .NET idioms.
A protobuf-net model can be any standard model you want. If you want it to have notification events... have notification events. It won't mind. I can't remember 100%, but if you are working from a .proto file, I believe there is a switch to have the codegen add notification events automatically; a .proto file is entirely optional with protobuf-net, though.
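For illustration, here is a minimal sketch of such a model (the type and property names are made up):

    // A protobuf-net contract that also raises WPF-friendly change
    // notifications; all names here are illustrative.
    using System.ComponentModel;
    using ProtoBuf;

    [ProtoContract]
    public class DeviceReading : INotifyPropertyChanged
    {
        private double temperature;

        [ProtoMember(1)]
        public double Temperature
        {
            get { return temperature; }
            set { temperature = value; OnPropertyChanged("Temperature"); }
        }

        public event PropertyChangedEventHandler PropertyChanged;

        private void OnPropertyChanged(string name)
        {
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs(name));
        }
    }

Serializing is then just Serializer.Serialize(stream, reading) on the protobuf-net Serializer class, and WPF bindings keep working as usual because the model is an ordinary mutable .NET class.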
The output from protobuf-net should be entirely interchangeable with any other implementation for your Linux device. One option there would be Mono/protobuf-net, but you could use the "standard" implementations too.
I have a WinForms application that I am hopefully going to refactor to a DDD architecture. First, I am trying to really wrap my head around the architecture itself; I have Evans' book and Vernon's book, and I find myself struggling with three scenarios I would face rather immediately in my application. I am afraid I might be overthinking or being too strict in my conceptual design process.
1.) In an example provided in a Pluralsight tutorial on DDD, the speaker made the point that different bounded contexts should be represented by their own solutions. However, if I have a WinForms app that is not service oriented (this will eventually change, and a lot of this question becomes moot), this doesn't seem feasible. I am therefore operating under the assumption that I'll be separating these into different projects/namespaces while staying vigilant that there are no interdependencies. Is this the right way to think about it, or am I missing something obvious?
2.) I have a navigation UI that launches other modules/windows that would belong to the presentation layers of different bounded contexts. Think of the first window that would open when you launched an ERP application. Since this doesn't fit cleanly within any particular BC, how would something like this be properly implemented? Should this fall within a shared kernel?
3.) I have a Job Management bounded context and a Rating/Costing bounded context. It is part of the business process that when a Job is created, its details are then rated. This has its own UI, etc., which I feel still adequately falls inside the Job Management context. However, the actual rating process of these details definitely should not. I am not entirely sure how to communicate with the Rating/Costing context, since BCs are to be kept separate from one another. I realize I could do messaging, but that seems to be overkill for a non-distributed app. Each BC could feasibly self-host some kind of API, but again this seems overkill, although it would set the team up nicely for migrating to a distributed architecture later on. Finally, my last idea is having some kind of shared dependency that is an event store of sorts. I don't know if this is the same as Domain Events, as those seem to be a separate concern in and of themselves. So, does that mean this would fall under a shared kernel as well, or some other type of solution?
Thank you in advance.
1) Guidance about BCs corresponding to solutions is only guidance, not a hard rule. However, it does provide much-needed isolation. You can still have this with a WinForms project. For example, suppose you have a BC called Customers. Create a solution for it and, within it, create an additional project called Customers.Contracts. This project effectively houses the public contract of the BC, which consists of DTOs, commands and events (see the sketch below). External BCs should be able to communicate with the Customers BC using only the messages defined in this contracts project. Have the WinForms solution reference Customers.Contracts and not the Customers project.
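As a rough sketch (all names here are hypothetical), Customers.Contracts might contain nothing but message types like these:

    namespace Customers.Contracts
    {
        using System;

        // Command: an instruction another BC sends to the Customers BC.
        public class RegisterCustomer
        {
            public Guid CustomerId { get; set; }
            public string Name { get; set; }
        }

        // Event: a fact the Customers BC publishes to the outside world.
        public class CustomerRegistered
        {
            public Guid CustomerId { get; set; }
            public string Name { get; set; }
        }

        // DTO: read-only data exposed to other BCs and the UI.
        public class CustomerDto
        {
            public Guid CustomerId { get; set; }
            public string Name { get; set; }
        }
    }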
2) A UI often serves a composing role, orchestrating many BCs - a composite UI. A stereotypical example is the Amazon product page. Hundreds of services from different BCs are required to render the page.
3) Again this seems like a scenario calling for a composite UI. The presentation layer can mediate between different BCs. BCs are loosely coupled, but there still are relationships between BCs. Some are downstream from others, some are upstream, or even both. Each has an anti-corruption layer, a port, to integrate with related BCs.
The feeling I get from these questions can be summarized as: "What is a sane approach to BC boundaries from a code artifact perspective?" and "How do I build a UI that both queries and commands several BCs?" It depends ...
Another, not yet mentioned, approach could be to regard the UI as a separate context. I doubt it's a very popular POV, but it can be useful at times. The UI could dictate what it needs using e.g. its own interfaces and data structures, and have each BC implement the appropriate interfaces (doing an internal translation). The downside is the extra translation going on, but then again it only makes sense when there is sufficient value to be reaped. The value is in keeping things simple on the UI side and not having to worry how and where the data is coming from or how changes affect each BC. That can all be handled behind a simple facade. There are several places this facade could sit (on the client or on the server). Don't be fooled, though: the "complexity" has just moved behind yet another layer. Coordination and hard work still need to be done.
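A tiny illustration of the idea (all names invented): the UI owns an interface shaped the way the UI wants, and each BC provides an implementation that translates internally.

    // Defined in the UI's own assembly; the relevant BC implements it.
    public interface ICustomerSummaryQuery
    {
        IEnumerable<CustomerSummary> TopCustomers(int count);
    }

    // The shape the UI wants, independent of any BC's internal model.
    public class CustomerSummary
    {
        public string Name { get; set; }
        public decimal TotalRevenue { get; set; }
    }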
Alternatively you may also want to look into what I call "alignment" of a UI with use cases exposed by a BC. As Tom mentioned, workflow or saga implementations might come in handy to pull this off when coordination is required. Questioning the consistency requirements (when does this other BC need to know about given information?) might bring new insight into how BCs interoperate. You see, a UI is a very useful feedback loop. When it's not aligned with a BC's use case maybe there is something wrong with the use case or maybe there is something wrong with how it was designed in the UI or maybe we just uncovered a different use case. That is why UI mockups make such a great tool for having discussions. They offer an EXTRA view on the same problem/solution. Extra as in "this is not the only visualization you should use in conversations with a domain expert". UX requirements are requirements too. They should be catered for.
Personally, I find that when I'm discussing UI I'm wearing a different hat than when I'm discussing pure functionality (you know, things that don't require a UI to explain what the application is doing/should do). I might switch hats during the same conversation just to uncover misalignment.
First things first, as I saw you talking about a message bus, I think we need to talk about BC integration first.
You do not need a message bus to communicate between BCs; here is an explanation of how I integrate different BCs:
I expose some public interfaces on each BC (similar to domain commands, queries and events), and have an intermediate layer in my infrastructure that translates those calls to the other BC.
Here is an example interface for exposed commands in a BC:
public interface IHandleCommands
{
    void DoSomething(Guid someId, string someName);
}
I also have a similar one for exposed events
public interface IPublishEvents
{
    void SomethingHappened(Guid someId, string someName);
}
Finally, for my exposed data (i.e. the queries in CQ(R)S) I have another interface; note that this allows you to decouple your domain model from your query code at any given time.
public interface IQueryState
{
    // A DateTime parameter cannot default to DateTime.MinValue (it is not
    // a compile-time constant), so a nullable parameter is used instead.
    IEnumerable<SomeData> ActiveData(DateTime? from = null /* , ... */);
}
And my implementation looks like this:
public class SomeAR : IHandleCommands
{
    private readonly IPublishEvents bus;

    public SomeAR(IPublishEvents bus)
    {
        this.bus = bus;
    }

    public void DoSomething(Guid x, string y)
    {
        bus.SomethingHappened(someId: x, someName: y);
    }
}
After all, when you think about it, things like domain events can be done without the messaging as well: just replace the message classes with interface members, and replace the handlers with interface implementations that get injected into your BC.
These handlers then invoke commands on other BCs; they are the glue that binds different BCs together (think workflows/stateless sagas etc.).
This could be an example handler:
public class WorkFlow : IPublishEvents
{
    private readonly IHandleCommands anotherBc; // the other BC's exposed commands
    public WorkFlow(IHandleCommands anotherBc) { this.anotherBc = anotherBc; }

    public void SomethingHappened(Guid someId, string someName)
    {
        anotherBc.DoSomething(someId, someName);
    }
}
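Wiring the pieces together might then look something like this (the composition root and the OtherBcCommands implementation below are hypothetical):

    // OtherBcCommands is an assumed IHandleCommands implementation
    // exposed by the downstream BC.
    IHandleCommands otherBc = new OtherBcCommands();
    IPublishEvents workflow = new WorkFlow(otherBc);
    var aggregate = new SomeAR(workflow);
    aggregate.DoSomething(Guid.NewGuid(), "example");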
This is an easy approach that does not require a lot of effort, and I have used it with great success. If you want to switch to full-blown messaging later on, it should be easy to do.
To answer your questions about the UI:
I think you are being too rigid about this.
As long as your domain is (or can easily be) decoupled from your UI, you can easily start with a single UI project and then split it up the minute you start experiencing pain somewhere. However, if you do split up the code, you should split it per BC, so the project structures match.
I find building a UI this way to be the most efficient way for me...
If my question isn't clear, please help me improve it with comments.
I'm new to Delphi and dbExpress and I'm just getting acquainted with the TSQLDataSet, TDataSetProvider, TClientDataSet and TDataSource classes.
The project that I'm working on uses these components in a way that feels strange to me. There is a huge data module unit that contains lots and lots of the quartet of classes described above. I'm guessing that there are better (and more modular) ways of doing this. DataSnap is used only to place this data module in a server application, so the clients access the data through it.
So, let me try to explain some of my doubts:
What is the role of each one of these classes? I read the documentation but I can't get a practical insight on this subject (especially about TDataSetProvider).
Which classes should be in the data module and which should be in my forms?
Is it possible to create an intermediate layer to abstract my business model from my database setup (maybe creating functions that return immutable datasets)?
If so, is it wise to use DataSnap to do so?
I'm sorry if I'm not clear enough. Thanks in advance.
Components
TDataSource is the bridge between data-aware controls and the dataset (TDataSet descendant) from which they are to get their values.
TClientDataSet is one such dataset. TClientDataSet can be used in isolation, for example to access data contained in XML files, but can also be hooked up to a TDataSetProvider.
TDataSetProvider is the bridge between the in-memory TClientDataSet and the actual dataset that takes its data from a database through some kind of driver. In client/server development you will usually see a TRemoteDataSetProvider (the name may be different; I don't work with these components that often), which bridges the gap between the client and the server.
TSQLDataSet is the actual dataset getting its data from some database.
It would feel strange to me to see this quartet all in one executable. I would expect the TSQLDataSet on the server side, hooked up to the TRemoteDataSetProvider's counterpart. However, I guess that with an embedded database it could be a way to support the briefcase model, which is where TClientDataSet is really helpful (TClientDataSet is very powerful; that is just one of its strong points).
Single datamodule
Ouch. A single huge datamodule is lazy programming or the result of misconceptions about how to use datamodules. It is perfectly normal to have a single datamodule that "hosts" the database connection which is then used by various other datamodules that are more tightly focused on aspects of the application.
Domain abstraction
With regard to abstracting your business model, dbExpress and DataSnap should really not be anywhere in your business model. They should be part of your data layer.
TDataSource, TClientDataSet and custom TDataSetProvider descendant(s) can be used to leverage the power of the data-aware controls in the UI while still keeping the UI separate from the business model. In this case the custom TDataSetProvider would be the bridge between the client dataset and the collections and instances in the domain layer.
Even so, I would still expect to see a separate data layer, using TRemoteDataSetProviders or straight TDataSet descendants (like TSQLDataSet) to provide the domain layer with its data.
The single huge data module you mentioned could be part of that data layer, with the client datasets providing the business layer with its data. As you also mention TDataSource as part of a common quartet, the application was probably developed in a RAD, data-aware fashion where UI controls are basically hooked up straight to database columns/tables.
If you want to transform this application to have a more layered architecture, tread carefully and slowly. Get to know the current architecture first and get to know it well enough to see the impact that this kind of transformation would have. The links provided by Serg will certainly help you there. Pawel Glowacki has written extensively about DataSnap.
The project you are working on is a Delphi multitier database application.
Your question is too wide for SO and can hardly be answered in its current form - you should first learn to understand the underlying architecture.
You can start with the video tutorial by Pawel Glowacki:
http://www.youtube.com/watch?v=B4uxLLIUddg
http://cc.embarcadero.com/item/28188
I'm writing a WPF application using a MVVM pattern and using Prism in selected places for loose coupling, and I'd like to have logging messages shown in a window and written to a file. The subset of messages going each way may not be the same.
I think I should publish a message through the EventAggregator (MS-Prism implementation of observer pattern) and have two objects subscribe: one that updates the LogWindowViewModel and one that logs using the Enterprise Library logger. Is this a good idea or am I duplicating something that's already implemented?
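To make the question concrete, here is a minimal sketch of what I have in mind, assuming an injected IEventAggregator and Prism 4's CompositePresentationEvent (the event and member names are mine):

    // The shared event type.
    public class LogMessageEvent : CompositePresentationEvent<string> { }

    // Publishing a message from anywhere in the app:
    eventAggregator.GetEvent<LogMessageEvent>().Publish("Something happened");

    // Subscriber 1: the log window's view model, on the UI thread.
    eventAggregator.GetEvent<LogMessageEvent>()
        .Subscribe(msg => logWindowViewModel.Messages.Add(msg), ThreadOption.UIThread);

    // Subscriber 2: the Enterprise Library logger, off the UI thread.
    eventAggregator.GetEvent<LogMessageEvent>()
        .Subscribe(msg => Logger.Write(msg), ThreadOption.BackgroundThread);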
The fact that the log message will be different in each output is the limiting factor.
Extending the block may suffice: defining a CustomTraceListener or ILogFilter may work out for you. This would avoid needing to use the EventAggregator.
It boils down to who has the knowledge of what and where to log. Are the differences driven off values within the logging engine such as severity? Are they instead driven by the consumer of the logging engine and therefore tightly coupled to the class itself? These types of questions will dictate your choice.
Leveraging the extension points in the logging block would be my first choice before having to rely on using the EventAggregator.
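If you went that route, a skeleton of an Enterprise Library CustomTraceListener might look like this (where the message ends up, LogWindowSink here, is an assumption about your setup):

    [ConfigurationElementType(typeof(CustomTraceListenerData))]
    public class LogWindowTraceListener : CustomTraceListener
    {
        public override void Write(string message)
        {
            // Forward to the log window however your app exposes it;
            // this target is hypothetical.
            LogWindowSink.Append(message);
        }

        public override void WriteLine(string message)
        {
            Write(message + Environment.NewLine);
        }
    }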
I think the idea is fine; there does not seem to be much functionality to duplicate.
I used Common.Logging as the data collector, filter and distributor for something comparable, and wrote a custom appender for my own processing and UI output.
I have a Silverlight Windows Phone 7 app that pulls data from a public API. I find myself doing much of the same thing over and over again:
In the UI, show a loading message or loading progress bar in place of the content
Get the content, which may be already in memory, cached in isolated file storage, or require an HTTP request
If the content cannot be acquired (no network connection, etc.), display an error message
If the content is acquired, display it in the UI
Keep the content in main memory for subsequent queries
The content that is displayed to the user can be taken directly from a data source, such as an ObservableCollection, or it may be a query on a data source.
I would like to factor out this repetitive process into a framework where ideally only the following needs to be specified:
Where to display the content in the UI
The UI elements to show while loading, on failure, and on success
The URI of the HTTP request
How to parse the HTTP response into the data structure that will be kept in memory
The location of the file in isolated storage, if it exists
How to parse the file contents into the data structure that will be kept in memory
It may sound like a lot, but two strings, three FrameworkElements, and two methods is less than the overhead that I currently have.
Also, this needs to work for however the data is maintained in memory, and needs to work for direct collections and queries on those collections.
My questions are:
Has something like this already been implemented?
Are my thoughts about the topic above fundamentally wrong in some way?
Here is a design I'm thinking of:
There are two components, a View and a Model.
The View is given the FrameworkElements for loading, failure, and success. It is also given a reference to the corresponding Model. The View is a UserControl that is placed somewhere in the UI.
The Model is a class that is given the URI for the data, a method describing how to parse the data, and optionally a filename and how to parse the file. It is responsible for retrieving the data and notifying the View whenever the current status (loading/fail/success) changes. If the data downloaded from the network is different from the cache, the network data takes precedence. When the app closes or is tombstoned, the model writes the data to the cache.
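In code, the Model might look roughly like this (everything here is a sketch; all names are placeholders):

    public enum LoadStatus { Loading, Failed, Succeeded }

    public class CachedModel<T>
    {
        private readonly Uri uri;
        private readonly Func<string, T> parseResponse;
        private readonly string cacheFileName;        // optional
        private readonly Func<string, T> parseCache;  // optional

        public LoadStatus Status { get; private set; }
        public T Data { get; private set; }

        // Raised whenever Status changes so the View can react.
        public event EventHandler StatusChanged;

        public CachedModel(Uri uri, Func<string, T> parseResponse,
                           string cacheFileName = null, Func<string, T> parseCache = null)
        {
            this.uri = uri;
            this.parseResponse = parseResponse;
            this.cacheFileName = cacheFileName;
            this.parseCache = parseCache;
        }

        // Check memory, then isolated storage, then the network;
        // fresh network data wins over the cache.
        public void Load() { /* omitted */ }
    }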
How does that sound?
I took some time to have a good read of your requirements and noted some thoughts to offer as a sounding board.
Firstly, for repetitive tasks with common behaviour, this is definitely the way to approach it. You are not alone in thinking about this problem.
People doing a bunch of this sort of thing may have created similar abstractions; however, to my knowledge none have been publicly released.
How far you go with it may depend if you intend it to be just for your own use and for those with very similar requirements or whether you want to handle more general cases and make a product that is usable by a very wide audience.
I'm going to assume the former, but that does not preclude the possibility of releasing it as an open source project that can be developed further and/or forked.
By not trying to cater for all possibilities you can make certain assumptions about the nature of the using implementation and in particular UI design choices.
I think overall your thinking is in the right direction. While reading some of your high-level thoughts I considered that some things could be simplified (a good thing) while still delivering a compelling UI.
On your initial points.
You could just assume a performant, indeterminate progress bar is being passed in.
Do this if it's important to you, but you could be buying into some complexity here handling different caching requirements - variance in duration or dirty handling. It is perhaps sufficient to lean on the platform's inbuilt caching of URLs (which some people have found gets in their way).
Handling network connectivity - yep, this is repetitive and somewhat intricate. A perfect candidate for a general solution.
Updating the UI... arguably it's better to just return data and defer decisions about its presentation and format to your individual clients.
Content in main memory - see above on caching.
On your potential inputs.
Where to display content - see above re data and defer presentation choices to client.
I would go with a UI element for the progress indicator, again a performant progress bar. Regarding communication of failure, I would consider implementing this in a Completed event which you publish (see the sketch after this list). Then through parameters you can communicate the result, and defer handling to the client to place that result in some presentation control/log/whatever. This is consistent with patterns used by the .NET Framework.
URI - yes, this gets passed in.
How to parse - passing in a delegate to convert a stream or string into an object, whose type can be decided by the client, makes sense.
Location of the cache - you could pass this in if generalising matters, or hardcode its path. It would be more useful to others if passed in (consider whether you handle folders/creation).
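Regarding the Completed event suggested above, a sketch of that pattern (the names are mine) could be:

    // Mirrors the .NET event-based async pattern: success/failure and the
    // result travel in the event args; the client decides how to present them.
    public class ContentLoadedEventArgs<T> : EventArgs
    {
        public bool Success { get; set; }
        public string ErrorMessage { get; set; }
        public T Content { get; set; }
    }

    public class ContentLoader<T>
    {
        public event EventHandler<ContentLoadedEventArgs<T>> Completed;

        protected void OnCompleted(ContentLoadedEventArgs<T> args)
        {
            var handler = Completed;
            if (handler != null) handler(this, args);
        }
    }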
On the implementation.
You could go with a UserControl, if it works for you to be bound by that assumption. It would be more flexible, though, and arguably equally simple/elegant, to push presentation back to the client for both the data display and the status messages, and to control hide/display of the progress bar as passed in.
Perhaps you would go so far as to assume the status messages will always be displayed in a TextBlock (if passed) and shift that housekeeping from each of your clients into your generic class.
I suspect you will benefit from keeping the data format and the presentation decoupled, though.
Tombstone handling: I would recommend some testing of the platform's inbuilt caching of URLs here, to see whether its duration/dirty conditions work for your general cases.
Hopefully this gives you some things to think about and some reassurance that you're heading down the right path. There are many ways you could go about this; which is the best path will ultimately be driven by your goals.
I'm developing a WP7 application which is basically a client of an existing REST API. The server returns data in JSON. With the help of the JSON.NET library (http://json.codeplex.com/) I was able to deserialize it directly to my .NET C# classes.
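For example (the ApiResult type here is just a placeholder for my own classes):

    public class ApiResult
    {
        public string Title { get; set; }
        public int Count { get; set; }
    }

    // JSON.NET maps the server's JSON payload straight onto the class.
    ApiResult result = JsonConvert.DeserializeObject<ApiResult>(jsonString);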
I store the data locally to handle the offline scenario and also to prevent a call to the server each time the user launches the application. I provide two ways to refresh the data: manually and/or after a period of time. To store the data I use Sterling (http://sterling.codeplex.com/); it's a simple but easy-to-use local database for Silverlight/WP7.
The biggest challenge is handling the asynchronous communication with the server. I provide clear UI feedback (progress bar and/or loading wheel) to let the user know what's going on.
On a side note, I'm using the MVVM Light toolkit and Silverlight Unit Testing to run integration tests: View Model => my local client code => server. (http://code.google.com/p/nunit-silverlight/wiki/NunitTestsWp7)
I had a client ask for advice on building a simple WPF LOB application the other day. They basically want a CRUD application for a database, with the main purpose being a WPF training project.
I also showed them Linq To SQL and they were impressed.
I then explained that it's probably not a good idea to use the L2S entities directly from their BLL or UI code. They should consider something like a Repository pattern instead.
At which point I could already feel the over-engineering alarm bells going off in their heads (and also in mine, to some extent). Do they really need all that complexity for a simple CRUD application? (OK, it's effectively functioning as a WPF training project for them, but let's pretend it turns into a "real" app.)
Do you ever think it's acceptable to use L2S entities all through your application?
How difficult is it (from experience) to refactor to use another persistence framework later?
The way I see it, if the UI layer uses the L2S entities as simple POCOs (without touching any L2S-specific methods), then that should be really easy to refactor later on if need be.
They do need a way to centralize the L2S queries, so some way to manage that is needed even if they do use L2S entities directly. So in a way we are already pushing toward some aspects of DAL/DAO/Repository.
The main problem I can see with the Repository approach is the pain of mapping between L2S entities and some domain model. And is it really worth it? You get quite a lot "for free" with L2S entities, which I think would be hard to give up once you map to another model.
Thoughts?
The only major drawback to using L2S entities all the way through is that your UI needs to know about and bind to the concrete entities. That means your UI knows about your data layer. Not usually a good idea. You really want a layered approach for anything that has the potential to be serious.
That said, it's perfectly possible to use the LINQ-to-SQL entities themselves in a layered architecture without knowledge of the data layer: extract interfaces for the entities and bind to them instead.
Keep in mind that all L2S entities are partial classes. Create interfaces that reflect your entities (Refactor => Extract Interface is your friend here) and create partial class definitions of your entities that implement the interfaces. Put the interfaces themselves (and only the interfaces) in a separate .DLL that your UI and Business Layer reference. Have your Business Layer and Data Layer accept and emit these interfaces rather than the concrete versions of them. You can even have the interfaces include INotifyPropertyChanged and INotifyPropertyChanging, since the L2S entities themselves implement those interfaces. And it all works peachy.
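For example, a minimal sketch of that arrangement (the entity and interface names are illustrative):

    // In the interfaces-only assembly referenced by the UI and BL.
    public interface ICustomer : INotifyPropertyChanged, INotifyPropertyChanging
    {
        int Id { get; set; }
        string Name { get; set; }
    }

    // In the data layer: the designer-generated Customer class already has
    // matching Id/Name properties and the INotify* plumbing, so the partial
    // class only has to declare the interface.
    public partial class Customer : ICustomer { }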
Then, when/if you need a different persistence framework, you have no pain at all in the BL or UI, only in the data layer -- which is where you want it.
Repositories are not a big deal. See here for a pretty good treatment of their use in ASP.NET MVC:
http://nerddinnerbook.s3.amazonaws.com/Part3.htm
Basically, what we did for a project was have a business layer that did all of the "LINQing" to L2S objects - in essence centralizing all querying in one layer via "manager objects" (I guess these are somewhat akin to repositories).
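A rough sketch of such a manager object (the entity, the IsActive column and the DataContext name are all hypothetical):

    public class CustomerManager
    {
        // All L2S querying for customers lives here, so the UI and BL
        // never compose queries themselves.
        public List<Customer> GetActiveCustomers()
        {
            using (var db = new ShopDataContext())
            {
                return db.Customers.Where(c => c.IsActive).ToList();
            }
        }
    }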
We did not use DTOs to map to L2S, as we didn't feel it was worth the effort in this project. Part of our logic was that as more and more ORMs support IQueryable and syntax similar to L2S (e.g. Entity Framework), it probably wouldn't be THAT bad to switch to a different ORM later, so it's not that bad of a sin, IMO.
Do you ever think it's acceptable to use L2S entities all through your application?
That depends. If you are building a short-lived product, you can get things wired up in quick succession with L2S. But if you are building a long-lived product that you might have to maintain for a long duration, then you had better think about a proper persistence layer.
How difficult is it (from experience) to refactor to use another persistence framework later?
If you use L2S in all your layers, then you can pretty much forget about refactoring them to use another persistence framework. That is exactly the advantage of confining a framework like NHibernate or Entity Framework to your persistence layer: though it requires a bit of extra effort upfront, it will be easy to maintain in the long run.
It sounds like you should be considering the ViewModel pattern for WPF:
http://msdn.microsoft.com/en-us/magazine/dd419663.aspx