I'm writing a WPF application using the MVVM pattern, with Prism in selected places for loose coupling, and I'd like logging messages to be shown in a window and written to a file. The subset of messages going each way may not be the same.
I think I should publish a message through the EventAggregator (MS-Prism implementation of observer pattern) and have two objects subscribe: one that updates the LogWindowViewModel and one that logs using the Enterprise Library logger. Is this a good idea or am I duplicating something that's already implemented?
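Sketched out, the wiring I have in mind is only a few lines with Prism's EventAggregator. This assumes Prism 4's CompositePresentationEvent; the event class, the LoggingWiring helper, and the view model's Append method are my own placeholder names:

using System;
using Microsoft.Practices.EnterpriseLibrary.Logging;
using Microsoft.Practices.Prism.Events;

public class LogMessageEvent : CompositePresentationEvent<string> { }

public static class LoggingWiring
{
    public static void Wire(IEventAggregator aggregator, LogWindowViewModel viewModel)
    {
        // Subscriber 1: append to the log window on the UI thread.
        // keepSubscriberReferenceAlive stops the lambda being garbage-collected.
        aggregator.GetEvent<LogMessageEvent>()
                  .Subscribe(msg => viewModel.Append(msg),
                             ThreadOption.UIThread, true);

        // Subscriber 2: forward to the Enterprise Library logger.
        aggregator.GetEvent<LogMessageEvent>()
                  .Subscribe(msg => Logger.Write(msg),
                             ThreadOption.BackgroundThread, true);
    }
}

// Publishing from anywhere with access to the aggregator:
// aggregator.GetEvent<LogMessageEvent>().Publish("Something happened");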
The fact that the log message will be different in each output is the limiting factor.
Extending the block may suffice: defining a CustomTraceListener or an ILogFilter may work out for you, and would avoid needing to use the EventAggregator.
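If you go that route, a custom trace listener is a small amount of code. Here is a rough sketch against the Logging Application Block's CustomTraceListener base class; the forwarding to LogWindowViewModel (and its Instance/AppendLine members) is my assumption, not part of the block:

using System;
using System.Diagnostics;
using System.Windows;
using Microsoft.Practices.EnterpriseLibrary.Logging;
using Microsoft.Practices.EnterpriseLibrary.Logging.Configuration;
using Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners;

[ConfigurationElementType(typeof(CustomTraceListenerData))]
public class LogWindowTraceListener : CustomTraceListener
{
    public override void TraceData(TraceEventCache eventCache, string source,
                                   TraceEventType eventType, int id, object data)
    {
        var entry = data as LogEntry;
        if (entry != null && Formatter != null)
            WriteLine(Formatter.Format(entry));   // use the configured formatter
        else
            WriteLine(data.ToString());
    }

    public override void Write(string message) { WriteLine(message); }

    public override void WriteLine(string message)
    {
        // Marshal to the UI thread before touching the view model.
        Application.Current.Dispatcher.BeginInvoke(
            new Action(() => LogWindowViewModel.Instance.AppendLine(message)));
    }
}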
It boils down to who has the knowledge of what and where to log. Are the differences driven off values within the logging engine such as severity? Are they instead driven by the consumer of the logging engine and therefore tightly coupled to the class itself? These types of questions will dictate your choice.
Leveraging the extension points in the logging block would be my first choice before having to rely on using the EventAggregator.
I think the idea is fine. There doesn't seem to be much functionality to duplicate.
I used Common.Logging as the data collector, filter, and distributor for something comparable, and wrote a custom appender for my own processing and UI output.
I'm trying to decide whether to build a Logic App or a Web App.
It has to do things I'm quite comfortable doing in C#: receive messages in various formats (a few thousand per day), translate them, make API calls and forward them. None of the endpoints are widely used, so the out-of-the-box connectors won't be a benefit. Some require custom headers, the contents of which are calculated using a hashing algorithm. Some of the work involves converting Json into XML and vice-versa.
From what I've read, one of the key selling points of Logic Apps is that you don't have to write any code. Since our organisation is actually quite comfortable with code, that doesn't feel like it will actually be a benefit.
Am I missing something? Are there any compelling reasons why a Logic App would be better than a Web App in this instance?
Using Logic Apps has a few additional benefits over just writing code, including:
Out-of-the-box monitoring. For every execution you get to see exactly what happened in each step of the process, with a monitoring view that replicates your Logic App design view.
Built-in failure handling. Logic Apps automatically retries calls on failure and also lets you either customize the retry policy or build your own with a do-until pattern.
Out-of-the-box alerting. You can configure alerts to inform you of failures.
Serverless. You don't have to worry about sizing or scaling, and you pay by consumption.
Faster development. Logic Apps lets you build out the solution faster, especially considering that you don't have to code the monitoring views, alerting, and error handling that come out of the box.
Easy to extend. If you are already using a Logic App, access to over 125 connectors to various services makes it easy to add business value or make the workflow smarter, for example by adding cognitive services with very little extra effort.
I've decided to keep away from Logic Apps for these reasons:
It is not supported outside Azure. We aren't currently tied to any provider, and using Logic Apps would break that independence.
I don't know how much of the problem is readily solvable using Logic Apps. (It seems I would be solving all sorts of problems that wouldn't be problems in C#. This article details some issues encountered while developing a simple process using an earlier version of Logic Apps.)
Nobody has come up with an argument more compelling than the reasons I've given above (especially the first one) why we should use it, so it would be a gamble with little to gain and plenty to lose.
You can think of Logic Apps as an orchestrator - something that takes external pieces of functionality, and weaves a workflow together.
It has nothing to do with your requirement of "writing code" - your code can be external functions on any platform - on-prem, AWS, Azure, Zendesk, and all of your code can be connected together using Logic Apps.
Regardless of which platform you choose, you will still have cross-cutting concerns such as monitoring, logging, alerting, and deployments, and Logic Apps addresses all of those requirements very robustly.
I have a WinForms application that I am hoping to refactor to a DDD architecture. First, I am trying to really wrap my head around the architecture itself. I have Evans' book and Vernon's book, and I find myself struggling with three scenarios I would face rather immediately in my application. I am afraid I might be overthinking it or being too strict in my conceptual design process.
1.) In an example from a Pluralsight tutorial on DDD, the speaker made the point that different bounded contexts should be represented by their own solutions. However, if I have a WinForms app that is not service oriented (this will eventually change, and a lot of this question becomes moot), this doesn't seem feasible. I am therefore operating under the assumption that I'll be separating these into different projects/namespaces while being vigilant that there are no interdependencies. Is this the right way to think about it, or am I missing something obvious?
2.) I have a navigation UI that launches other modules/windows that would belong in the separate presentation layers of different bounded contexts. Think of the first window that opens when you launch an ERP application. Since this doesn't fit cleanly within any particular BC, how would something like this be properly implemented? Should this fall within a shared kernel?
3.) I have a Job Management bounded context and a Rating/Costing bounded context. It is part of the business process that when a Job is created, its details are then rated. This has its own UI, etc., and I feel fairly confident that this presentation still falls inside the Job Management context. However, the actual rating process of these details definitely should not. I am not entirely sure how to communicate with the Rating/Costing context, since BCs are to be kept separate from one another. I realize I could use messaging, but that seems overkill for a non-distributed app. Each BC could feasibly self-host some kind of API, but again this seems overkill, although it would set the team up nicely for migrating to a distributed architecture later on. Finally, my last idea is having some kind of shared dependency that is an event store of sorts. I don't know if this is the same as Domain Events, as those seem to be a separate concern in and of themselves. So, does that mean this would fall under a shared kernel as well, or some other type of solution?
Thank you in advance.
1) Guidance about BCs corresponding to solutions is only guidance, not a hard rule. However, it does provide much-needed isolation. You can still have this with a WinForms project. For example, suppose you have a BC called Customers. Create a solution for it and, within it, create an additional project called Customers.Contracts. This project effectively houses the public contract of the BC, which consists of DTOs, commands, and events. External BCs should be able to communicate with the Customers BC using only the messages defined in this contracts project. Have the WinForms solution reference Customers.Contracts, not the Customers project.
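To make that concrete, here is an illustrative sketch of what Customers.Contracts might hold; every type name here is made up for the example:

using System;

namespace Customers.Contracts
{
    // Command sent into the BC
    public class RegisterCustomer
    {
        public Guid CustomerId { get; set; }
        public string Name { get; set; }
    }

    // Event published by the BC
    public class CustomerRegistered
    {
        public Guid CustomerId { get; set; }
        public DateTime RegisteredOn { get; set; }
    }

    // DTO returned from queries
    public class CustomerDto
    {
        public Guid CustomerId { get; set; }
        public string Name { get; set; }
    }
}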
2) A UI often serves a composing role, orchestrating many BCs - a composite UI. A stereotypical example is the Amazon product page. Hundreds of services from different BCs are required to render the page.
3) Again this seems like a scenario calling for a composite UI. The presentation layer can mediate between different BCs. BCs are loosely coupled, but there still are relationships between BCs. Some are downstream from others, some are upstream, or even both. Each has an anti-corruption layer, a port, to integrate with related BCs.
The feeling I get from these questions can be summarized as: "What is a sane approach to BC boundaries from a code artifact perspective?" and "How do I build a UI that both queries and commands several BCs?" It depends ...
Another, not-yet-mentioned approach could be to regard the UI as a separate context. I doubt it's a very popular POV, but it can be useful at times. The UI could dictate what it needs using, e.g., its own interfaces and data structures, and have each BC implement the appropriate interfaces (doing an internal translation). The downside is the extra translation going on, but then again it only makes sense when there is sufficient value to be reaped. The value is in keeping things simple on the UI side and not having to worry how and where the data is coming from or how changes affect each BC. That can all be handled behind a simple facade. There are several places this facade could sit (on the client or on the server). Don't be fooled though: the "complexity" has just moved behind yet another layer. Coordination and hard work still need to be done.
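To illustrate, here is a minimal sketch of the UI dictating its needs; the interface, DTO, and domain stub names are all invented for the example:

using System;
using System.Collections.Generic;
using System.Linq;

// Owned by the UI: the shape the screen wants, not the domain's shape.
public class JobSummary
{
    public Guid JobId { get; set; }
    public string Title { get; set; }
    public decimal EstimatedCost { get; set; }
}

public interface IJobSummaryProvider
{
    IEnumerable<JobSummary> GetOpenJobs();
}

// BC-internal domain stubs, just enough to make the example self-contained:
public class Job
{
    public Guid Id { get; set; }
    public string Title { get; set; }
    public decimal EstimatedTotal { get; set; }
}

public interface IJobRepository
{
    IEnumerable<Job> FindOpen();
}

// Inside the Job Management BC: implement the UI's interface, translating
// internally from domain objects.
public class JobManagementSummaryProvider : IJobSummaryProvider
{
    private readonly IJobRepository repository;

    public JobManagementSummaryProvider(IJobRepository repository)
    {
        this.repository = repository;
    }

    public IEnumerable<JobSummary> GetOpenJobs()
    {
        return repository.FindOpen().Select(j => new JobSummary
        {
            JobId = j.Id,
            Title = j.Title,
            EstimatedCost = j.EstimatedTotal
        });
    }
}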
Alternatively you may also want to look into what I call "alignment" of a UI with use cases exposed by a BC. As Tom mentioned, workflow or saga implementations might come in handy to pull this off when coordination is required. Questioning the consistency requirements (when does this other BC need to know about given information?) might bring new insight into how BCs interoperate. You see, a UI is a very useful feedback loop. When it's not aligned with a BC's use case maybe there is something wrong with the use case or maybe there is something wrong with how it was designed in the UI or maybe we just uncovered a different use case. That is why UI mockups make such a great tool for having discussions. They offer an EXTRA view on the same problem/solution. Extra as in "this is not the only visualization you should use in conversations with a domain expert". UX requirements are requirements too. They should be catered for.
Personally I find that when I'm discussing UI I'm wearing a different hat than when I'm discussing pure functionality (you know, things that don't require a UI to explain what the application is doing/should do). I might switch hats during the same conversation just to uncover misalignment.
First things first, as I saw you talking about a message bus, I think we need to talk about BC integration first.
You do not need a message bus to communicate between BCs; here is an explanation of how I integrate different BCs:
I expose some public interfaces on each BC (similar to domain commands, queries, and events), and have an intermediate layer in my infrastructure that translates these calls to the other BC.
Here is an example interface for exposed commands in a BC:
public interface IHandleCommands
{
    void DoSomething(Guid someId, string someName);
}
I also have a similar one for exposed events
public interface IPublishEvents
{
    void SomethingHappened(Guid someId, string someName);
}
Finally, for my exposed data (i.e. the queries in CQ(R)S) I have another interface; please note that this allows you to remove the coupling between your domain model and query code at any given time.
public interface IQueryState
{
    // DateTime.MinValue is not a compile-time constant, so it cannot be a
    // default parameter value; default(DateTime) is equivalent.
    IEnumerable<SomeData> ActiveData(DateTime from = default(DateTime), ... );
}
And my implementation looks like this:
public class SomeAR : IHandleCommands
{
    private readonly IPublishEvents bus;

    public SomeAR(IPublishEvents bus)
    {
        this.bus = bus;
    }

    public void DoSomething(Guid x, string y)
    {
        bus.SomethingHappened(someId: x, someName: y);
    }
}
After all, when you think about it, things like domain events can be done without the messaging as well: just replace the message classes with interface members, and replace the handlers with interface implementations that get injected into your BC.
These handlers then invoke commands on other BCs; they are the glue that binds different BCs together (think workflows/stateless sagas etc.).
This could be an example handler:
public class WorkFlow : IPublishEvents
{
    private readonly IHandleCommands anotherBC;   // the downstream BC's exposed commands

    public WorkFlow(IHandleCommands anotherBC)
    {
        this.anotherBC = anotherBC;
    }

    public void SomethingHappened(Guid someId, string someName)
    {
        anotherBC.DoSomething(someId, someName);
    }
}
This is an easy approach that does not require a lot of effort, and I have used this with great success. If you want to switch to full-blown messaging later on, it should be easy to do.
To answer your questions about the UI:
I think you are being too rigid about this.
As long as your domain is (or can easily be) decoupled from your UI, you can easily start with a single UI project and then split it up the minute you start experiencing pain somewhere. However, if you do split up the code, you should split it per BC, so the project structures match.
I find building a UI this way to be the most efficient way for me...
I am wondering what would be the best approach in a WPF (possibly MVVM) based application where data exchange with remote devices is done through protocol buffers (if that conveniently applies).
WPF is strongly based on observability, as well as on the mutability of the underlying model/viewmodel, with dependency properties and the INotify* interfaces. Does this fight against the protocol-buffers approach of creating/mutating POCOs?
The typical context is having a WPF client application, connected via TCP/IP to an embedded device running Linux. Basically, I'm evaluating pros/cons of several solutions in order to find out the best one.
Thank you in advance.
WPF should have zero bearing on this, because your data exchange should be separated into its own UI-agnostic layer. Your service layer can return non-GPB objects if necessary (or return interfaces that your GPB objects implement via partial classes), and your view model layer provides yet another layer of insulation.
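On the partial-class point, here is a sketch of how a generated GPB type can pick up a UI-facing interface without touching the generated code; both type names are stand-ins:

// UI-facing contract, owned by the service layer.
public interface IDeviceInfo
{
    string DisplayName { get; }
}

// Stand-in for a file emitted by the protobuf code generator.
public partial class DeviceMessage
{
    public string Name { get; set; }
}

// Hand-written half of the partial class: adds the interface; no generated
// code is modified.
public partial class DeviceMessage : IDeviceInfo
{
    public string DisplayName
    {
        get { return Name; }
    }
}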
Your key points seem to be about mutability and observability.
The Google protobuf API is indeed largely immutable and won't love WPF very much; however, you also mention protobuf-net, which does not follow that pattern, instead adopting standard .NET idioms.
A protobuf-net model can be any standard model you want. If you want it to have notification events, have notification events; it won't mind. I can't remember 100%, but if you are working from a .proto file, I believe there is a switch to have the codegen add notification events automatically; a .proto file is entirely optional with protobuf-net, though.
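For instance, a hand-written protobuf-net model that raises change notifications for WPF binding might look like this (the attribute usage is protobuf-net's public API; the Device type itself is invented):

using System.ComponentModel;
using ProtoBuf;

[ProtoContract]
public class Device : INotifyPropertyChanged
{
    private string name;

    [ProtoMember(1)]
    public string Name
    {
        get { return name; }
        set { name = value; OnPropertyChanged("Name"); }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}

// Serialization is unchanged: Serializer.Serialize(stream, device);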
The output from protobuf-net should be entirely interchangeable with any other implementation for your Linux device. One option there would be Mono with protobuf-net, but you could use the "standard" implementations too.
I have a Silverlight Windows Phone 7 app that pulls data from a public API. I find myself doing much of the same thing over and over again:
In the UI, set a loading message or loading progress bar in place of where the content is
Get the content, which may be already in memory, cached in isolated file storage, or require an HTTP request
If the content cannot be acquired (no network connection, etc.), display an error message
If the content is acquired, display it in the UI
Keep the content in main memory for subsequent queries
The content that is displayed to the user can be taken directly from a data source, such as an ObservableCollection, or it may be a query on a data source.
I would like to factor out this repetitive process into a framework where ideally only the following needs to be specified:
Where to display the content in the UI
The UI elements to show while loading, on failure, and on success
The URI of the HTTP request
How to parse the HTTP response into the data structure that will be kept in memory
The location of the file in isolated storage, if it exists
How to parse the file contents into the data structure that will be kept in memory
It may sound like a lot, but two strings, three FrameworkElements, and two methods is less than the overhead I currently have.
Also, this needs to work for however the data is maintained in memory, and needs to work for direct collections and queries on those collections.
My questions are:
Has something like this already been implemented?
Are my thoughts about the topic above fundamentally wrong in some way?
Here is a design I'm thinking of:
There are two components, a View and a Model.
The View is given the FrameworkElements for loading, failure, and success. It is also given a reference to the corresponding Model. The View is a UserControl that is placed somewhere in the UI.
The Model is a class that is given the URI for the data, a method for parsing the data, and optionally a filename and a method for parsing the file. It is responsible for retrieving the data and notifying the View whenever the current status (loading/fail/success) changes. If the data downloaded from the network differs from the cache, the network data takes precedence. When the app closes or is tombstoned, the model writes the data to the cache.
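Sketched in code, the Model might look something like this; all names are placeholders, and the isolated-storage handling is left out for brevity:

using System;
using System.Net;

public enum LoadStatus { Loading, Failed, Loaded }

public class CachedResource<T>
{
    private readonly Uri uri;
    private readonly Func<string, T> parseResponse;

    public LoadStatus Status { get; private set; }
    public T Data { get; private set; }
    public event EventHandler StatusChanged;   // the View listens to this

    public CachedResource(Uri uri, Func<string, T> parseResponse)
    {
        this.uri = uri;
        this.parseResponse = parseResponse;
    }

    // Memory first, then network; cache lookup/writing would slot in here.
    public void Load()
    {
        if (Status == LoadStatus.Loaded) return;   // already in memory
        SetStatus(LoadStatus.Loading);

        var client = new WebClient();
        client.DownloadStringCompleted += (s, e) =>
        {
            if (e.Error != null) { SetStatus(LoadStatus.Failed); return; }
            Data = parseResponse(e.Result);
            SetStatus(LoadStatus.Loaded);
        };
        client.DownloadStringAsync(uri);
    }

    private void SetStatus(LoadStatus status)
    {
        Status = status;
        var handler = StatusChanged;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}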
How does that sound?
I took some time to have a good read of your requirements and noted some thoughts to offer as a sounding board.
Firstly, for repetitive tasks with common behaviour, this is definitely the way to approach it. You are not alone in thinking about this problem.
People doing a lot of this sort of thing may have created similar abstractions; however, to my knowledge none have been publicly released.
How far you go with it may depend on whether you intend it to be just for your own use, and for those with very similar requirements, or whether you want to handle more general cases and make a product that is usable by a very wide audience.
I'm going to assume the former, but that does not preclude the possibility of releasing it as an open source project that can be developed further and/or forked.
By not trying to cater for all possibilities you can make certain assumptions about the nature of the using implementation and in particular UI design choices.
I think, overall, your thinking is in the right direction. While reading some of your high-level thoughts, I considered that some things could be simplified (a good thing) while still delivering a compelling UI.
On your initial points.
You could just assume a performant indeterminate progress bar is being passed in.
Do this if it's important to you, but you could be buying yourself into some complexity here handling different caching requirements - variance in duration or dirty handling. It may be sufficient to lean on the platform's inbuilt caching of URLs (which some people have found gets in their way).
Handle network connectivity, yep this is repetitive and somewhat intricate. A perfect candidate for a general solution.
Update UI... arguably better to just return data and defer decisions regarding presentation and format of data to your individual clients.
Content in main memory - see above on caching.
On your potential inputs.
Where to display content - see above re data and defer presentation choices to client.
I would go with a UI element for the progress indicator, again a performant progress bar. Regarding communication of failure, I would consider implementing this in a Completed event which you publish. Then, through parameters, you can communicate the result and defer handling to the client, which can place that result in some presentation control/log/whatever. This is consistent with patterns used by the .NET Framework.
URI - yes, this gets passed in.
How to parse - passing in a delegate to convert a stream or string into an object whose type can be decided by the client makes sense.
Loc of cache - you could pass this in if generalising matters, or hardcode its path. It would be more useful to others if passed in (consider whether you handle folders/creation).
On the implementation.
You could go with a UserControl, if it works for you to be bound by that assumption. It would be more flexible, though, and arguably equally simple/elegant, to push presentation back on the client for both the data display and status messages, and to control hide/display of the progress bar as passed in.
Perhaps you would go so far as to assume the status messages would always be displayed in a textblock (if passed) and shift that housekeeping from each of your clients into your generic class.
I suspect you will still benefit from not coupling the data format and the presentation.
Tombstone handling: I would recommend some testing of the platform's inbuilt caching of URLs here, to see whether you can identify if its durations/dirty conditions work for your general cases.
Hopefully this gives you some things to think about and some reassurance you're heading down the right path. There are many ways you could go about this. Which is the best path ultimately will be driven by your goals.
I'm developing a WP7 application which is basically a client of an existing REST API. The server returns data in JSON. With the help of the JSON.NET library (http://json.codeplex.com/) I was able to deserialize it directly into my .NET C# classes.
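For illustration, the deserialization itself is a one-liner; the Item class here is just a stand-in for my actual model classes:

using System.Collections.Generic;
using Newtonsoft.Json;

public class Item
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class ApiParser
{
    // JSON.NET maps the JSON array straight onto the C# classes.
    public static List<Item> Parse(string json)
    {
        return JsonConvert.DeserializeObject<List<Item>>(json);
    }
}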
I store the data locally to handle the offline scenario and also to avoid calling the server each time the user launches the application. I provide two ways to refresh the data: manually and/or after a period of time. To store the data I use Sterling (http://sterling.codeplex.com/); it's a simple but easy-to-use local database for Silverlight/WP7.
The biggest challenge is handling the asynchronous communication with the server. I provide clear UI feedback (progress bar and/or loading wheel) to let the user know what's going on.
On a side note, I'm using the MVVM Light toolkit and Silverlight Unit Testing to run integration tests covering View Model => my local client code => server. (http://code.google.com/p/nunit-silverlight/wiki/NunitTestsWp7)
Personally, I think inheritance is a great tool that, when applied reasonably, can greatly simplify code.
However, it seems to me that many modern tools dislike inheritance. Let's take a simple example: serializing a class to XML. As soon as inheritance is involved, this can easily turn into a mess, especially if you're trying to serialize a derived class using the base class serializer.
Sure, we can work around that, with something like a KnownType attribute and such. Besides being an itch in your code that you have to remember to update every time you add a derived class, it fails, too, if you receive a class from outside your scope that was not known at compile time. (Okay, in some cases you can still work around that, for instance using the NetDataContractSerializer in .NET. Surely a certain advancement.)
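To show the kind of bookkeeping I mean: with DataContractSerializer, the base contract has to enumerate every derived type it might meet (Shape/Circle/Square are just an example):

using System.Runtime.Serialization;

[DataContract]
[KnownType(typeof(Circle))]   // must be extended for every new derived class
[KnownType(typeof(Square))]
public class Shape { }

[DataContract]
public class Circle : Shape
{
    [DataMember]
    public double Radius { get; set; }
}

[DataContract]
public class Square : Shape
{
    [DataMember]
    public double Side { get; set; }
}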
In any case, the basic principle still exists: Serialization and inheritance don't mix well. Considering the huge list of programming strategies that became possible and even common in the past decade, I feel tempted to say that inheritance should be avoided in areas that relate to serialization (in particular remoting and databases).
Does that make sense? Or am I messing things up? How do you handle inheritance and serialization?
There are indeed a few gotchas with inheritance and serialization. One is that it leads to an asymmetry between serialization and deserialization. If a class is subclassed, this will work transparently during serialization, but will fail during deserialization unless the deserializer is made aware of the new class. That's why we have annotations such as @XmlSeeAlso to mark up data for XML serialization.
These problems are, however, not new with inheritance. They are frequently discussed under the open-world/closed-world terminology. Either you consider that you know the whole world and all its classes, or you might be in a case where new classes are added by third parties. Under a closed-world assumption, serialization isn't much of a problem. It's more problematic under an open-world assumption.
But inheritance under the open-world assumption has other gotchas anyway. E.g. if you remove a protected method from your classes and refactor accordingly, how can you ensure that there isn't a third-party class that was using it? In an open world, the public and internal API of your classes must be considered frozen once made available to others, and you must take great care to evolve the system.
There are other, more technical, internal details of how serialization works that can be surprising. That's for Java, but I'm pretty sure .NET has similarities. See, e.g., "Serialization Killer" by Gilad Bracha, or the serialization and security manager bug exploit.
I ran into this on my current project, and while this might not be the best way, I created a service layer of sorts for it, with its own classes. I think it came out being named an ObjectToSerialized translator, plus a couple of interfaces. Typically this was one-to-one (the "object" and the "serialized" class had the exact same properties), so adding something to the interface would let you know: "hey, add this over here too".
I want to say I had an IToSerialized interface with a simple method on it for generic purposes, and used AutoMapper for most of the conversions. Sure, it's a bit more code, but hey, whatever: it worked and doesn't gum up other things.
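I don't have the original code to hand, but the shape was roughly this; all names are illustrative, and it assumes the classic static AutoMapper API with Mapper.CreateMap<Order, SerializedOrder>() called once at startup:

using AutoMapper;

public class Order              // domain class; may use inheritance internally
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class SerializedOrder    // flat, serializer-friendly mirror of Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public interface IToSerialized<TSerialized>
{
    TSerialized ToSerialized();
}

public class OrderTranslator : IToSerialized<SerializedOrder>
{
    private readonly Order order;

    public OrderTranslator(Order order) { this.order = order; }

    public SerializedOrder ToSerialized()
    {
        // One-to-one property copy by naming convention.
        return Mapper.Map<SerializedOrder>(order);
    }
}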