Domain Logic in an Entity Component System

Where should the domain logic live in an Entity Component System?
It certainly does not belong on the Entity itself (which is just a number), but it could be implemented as a method on a Component or as a method on a System. If it lives on a Component, there is the problem of validating across several Components; if it lives in a System, there is the risk of duplication, because multiple Systems may address the same set of Entities.
This leads me to think that some kind of domain logic handler is required, but who calls it? It must be guaranteed to be called. Are there any architectural proposals or patterns for this?
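For illustration, one option that keeps such a cross-component rule in a single place that is guaranteed to run is a dedicated rule system invoked from the main loop. A minimal sketch, with invented component and type names (nothing here comes from a specific ECS framework):

using System;
using System.Collections.Generic;

// Illustrative component types.
public sealed class Capacity { public double MaxWeight; }
public sealed class Cargo { public double Weight; }

// A dedicated rule system owns the domain logic that spans several components.
// The main loop calls Update() every tick, so the rule is guaranteed to run.
public sealed class OverloadRuleSystem
{
    public void Update(IEnumerable<(int Entity, Capacity Capacity, Cargo Cargo)> view,
                       Action<int> onViolation)
    {
        foreach (var (entity, capacity, cargo) in view)
        {
            // The rule reads two components but lives in one place.
            if (cargo.Weight > capacity.MaxWeight)
                onViolation(entity);   // e.g. raise a domain event or flag the entity
        }
    }
}

Here the ECS world (however it is implemented) supplies the view of entities that carry both components, so the rule can neither be skipped nor duplicated across systems.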

Related

WPF App User Control

I have a WPF application where I register many users in the app's internal database. Many classes need to query which user is currently logged in.
I wonder which is the best architecture for this kind of scenario.
So far I can think of:
A Singleton object storing the current logged in User.
Use Dependency Injection and pass an object containing all the login information to every class that needs it (and to some classes that won't need it directly, but might have to pass it along to another class later).
The Singleton has its disadvantages, and DI could lead to many dependencies being shared all over the place and could become a maintenance nightmare down the road. I wonder if there is a better solution for this issue.
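A minimal sketch of the DI option, assuming a hypothetical ICurrentUserService and User type: classes depend on a narrow interface instead of a global singleton or a login object threaded through every constructor, and a single CurrentUserService instance is registered in whatever container the application uses.

public sealed class User
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ICurrentUserService
{
    User CurrentUser { get; }
}

public sealed class CurrentUserService : ICurrentUserService
{
    public User CurrentUser { get; private set; }

    // Called once by the login flow after a successful authentication.
    public void SignIn(User user) => CurrentUser = user;
}

// A consumer only declares the dependency it actually uses.
public sealed class OrderViewModel
{
    private readonly ICurrentUserService _currentUser;

    public OrderViewModel(ICurrentUserService currentUser) => _currentUser = currentUser;

    public string Greeting => $"Logged in as {_currentUser.CurrentUser?.Name}";
}

Classes that merely forward the information no longer need to know about it at all; they just take their own dependencies from the container.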

How to allow multiple users to perform Db updates in Entity Framework in distributed application

Apologies if this is a dumb fool question, but I'm tired today and google isn't helping. This must be a common problem, but nothing I've found answers the question.
Whenever I've made use of Entity Framework in clickonce desktop applications, I've suffered a bit with the "entity object cannot be referenced by multiple instances of IEntityChangeTracker" error. This arises when multiple versions of the same database object are updated by different versions of the context.
On a web project, it's easy to bypass this issue by sending all the Db calls through some sort of singleton pattern on the server. If you're farming out a lot of Db update/insert calls to a service, you can do the same.
But what if you had a distributed application (e.g. WPF via ClickOnce) which potentially had a lot of users connecting to the database? Each user has their own install of the code, so even if you separate the business logic from the Db logic, each user will still have their own copy of the context.
Is there any sort of pattern or technique that would allow inserts/updates direct from the application while avoiding the "multiple contexts" problem? Or is using something like a Windows service the only way to achieve this?
Can you reproduce this error even with a single user running your app?
Getting the "entity object cannot be referenced by multiple instances of IEntityChangeTracker" error means that you are creating two DbContexts in your code and trying to manage an entity (created or read) from one context through the other. It has nothing to do with distributed concurrency, and nothing to do with multiple users.
Check this SO question and try to identify the error pattern in your code. In that example the two contexts are both in memory, but since every entity keeps a reference to its IEntityChangeTracker, you can get this error even if the original DbContext was disposed and you try to manage the entity with another context.
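For reference, a minimal sketch of the single-context shape this answer recommends; ShopContext, Customer and the property names are hypothetical, and the error pattern is only described in the comment because whether it throws depends on how your entities were generated (EntityObject-style entities keep a reference to their change tracker, plain POCOs may not).

using System.Data.Entity;   // EF DbContext API

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}

public static class CustomerUpdater
{
    // Error pattern (described, not shown): load an entity through context A, then
    // Add/Attach it to context B while it still references A's change tracker.
    // Fix: read and write the entity through one context per unit of work.
    public static void Rename(int customerId, string newName)
    {
        using (var context = new ShopContext())
        {
            var customer = context.Customers.Find(customerId);
            if (customer == null) return;

            customer.Name = newName;
            context.SaveChanges();
        }
    }
}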

Communication between Bounded Contexts

I have a WinForms application that I am hopefully going to be refactoring to use a DDD architecture. First, I am trying to really wrap my head around the architecture itself; I have Evans' book and Vernon's book, and I find myself struggling with three scenarios I would face rather immediately in my application. I am afraid I might be overthinking this or being too strict in my conceptual design process.
1.) Utilizing an example provided in a Pluralsight tutorial on DDD, the speaker made the point that different bounded contexts should be represented by their own solutions. However, if I have a WinForms app that is not service oriented (this will eventually change, at which point a lot of this question becomes moot), this doesn't seem feasible. I am therefore operating under the assumption that I'll be separating these into different projects/namespaces while staying vigilant that there are no interdependencies. Is this the right way to think about it, or am I missing something obvious?
2.) I have a navigation UI that launches other modules/windows that would belong in separate presentation layers of different bounded contexts. Think of the first window that would be open when you launched an ERP application. Since this doesn't fit cleanly within any particular BC, how would something like this be properly implemented? Should this fall within a shared kernel?
3.) I have a Job Management bounded context and a Rating/Costing bounded context. It is part of the business process that when a Job is created, its details are then rated. This has its own UI, etc., and I feel fairly confident that this presentation still falls inside the Job Management context. However, the actual rating process of these details definitely should not. I am not entirely sure how to communicate with the Rating/Costing context, since BCs are to be kept separate from one another. I realize I could use messaging, but that seems to be overkill for a non-distributed app. Each BC could feasibly self-host some kind of API, but again this seems overkill, although it would set the team up nicely for migrating to a distributed architecture later on. Finally, my last idea is having some kind of shared dependency that is an event store of sorts. I don't know if this is the same as Domain Events, as those seem to be a separate concern in and of themselves. So does that mean this would fall under a shared kernel as well, or some other type of solution?
Thank you in advance.
1) Guidance about BCs corresponding to solutions is only guidance, not a hard rule. However, it does provide much-needed isolation, and you can still have that with a WinForms project. For example, suppose you have a BC called Customers. Create a solution for it and, within it, create an additional project called Customers.Contracts. This project effectively houses the public contract of the BC, which consists of DTOs, commands and events. External BCs should be able to communicate with the Customers BC using only the messages defined in this contracts project. Have the WinForms solution reference Customers.Contracts and not the Customers project. (A sketch of such a contracts project follows after point 3.)
2) A UI often serves a composing role, orchestrating many BCs - a composite UI. A stereotypical example is the Amazon product page. Hundreds of services from different BCs are required to render the page.
3) Again this seems like a scenario calling for a composite UI. The presentation layer can mediate between different BCs. BCs are loosely coupled, but there still are relationships between BCs. Some are downstream from others, some are upstream, or even both. Each has an anti-corruption layer, a port, to integrate with related BCs.
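Going back to point 1, here is a sketch of what such a Customers.Contracts project might contain; the type names are invented for illustration.

using System;

namespace Customers.Contracts
{
    // Only DTOs, commands and events live here. The WinForms solution references
    // this assembly, never the internals of the Customers BC.
    public sealed class CustomerDto
    {
        public Guid Id { get; set; }
        public string Name { get; set; }
    }

    public sealed class RenameCustomer        // a command sent to the BC
    {
        public Guid CustomerId { get; set; }
        public string NewName { get; set; }
    }

    public sealed class CustomerRenamed       // an event published by the BC
    {
        public Guid CustomerId { get; set; }
        public string NewName { get; set; }
    }
}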
The feeling I get from these questions can be summarized as: "What is a sane approach to BC boundaries from a code artifact perspective?" and "How do I build a UI that both queries and commands several BCs?". It depends ...
Another approach, not yet mentioned, could be to regard the UI as a separate context. I doubt it's a very popular POV, but it can be useful at times. The UI could dictate what it needs using, e.g., its own interfaces and data structures, and have each BC implement the appropriate interfaces (doing an internal translation). The downside is the extra translation going on, but then again it only makes sense when there is sufficient value to be reaped. The value is in keeping things simple on the UI side and not having to worry how and where the data is coming from or how changes affect each BC. That can all be handled behind a simple facade. There are several places this facade could sit (on the client or on the server). Don't be fooled, though: the "complexity" has just moved behind yet another layer. Coordination and hard work still need to be done.
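A rough sketch of that idea, with invented names: the UI project owns the interface and the data shape it needs, and an adapter next to the Job Management BC translates into it.

using System;
using System.Collections.Generic;
using System.Linq;

// Defined in the UI/presentation project: exactly the shape the screen needs.
public interface IJobOverviewQuery
{
    IReadOnlyList<JobRow> ActiveJobs();
}

public sealed class JobRow
{
    public Guid JobId { get; set; }
    public string Description { get; set; }
}

// A stand-in for the Job Management BC's internal model.
public sealed class Job
{
    public Guid Id { get; set; }
    public string Title { get; set; }
    public bool IsActive { get; set; }
}

// Implemented next to the BC: translates the internal model into UI rows, so the
// screen never needs to know how or where the data comes from.
public sealed class JobOverviewAdapter : IJobOverviewQuery
{
    private readonly IEnumerable<Job> _jobs;

    public JobOverviewAdapter(IEnumerable<Job> jobs) => _jobs = jobs;

    public IReadOnlyList<JobRow> ActiveJobs() =>
        _jobs.Where(j => j.IsActive)
             .Select(j => new JobRow { JobId = j.Id, Description = j.Title })
             .ToList();
}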
Alternatively you may also want to look into what I call "alignment" of a UI with use cases exposed by a BC. As Tom mentioned, workflow or saga implementations might come in handy to pull this off when coordination is required. Questioning the consistency requirements (when does this other BC need to know about given information?) might bring new insight into how BCs interoperate. You see, a UI is a very useful feedback loop. When it's not aligned with a BC's use case maybe there is something wrong with the use case or maybe there is something wrong with how it was designed in the UI or maybe we just uncovered a different use case. That is why UI mockups make such a great tool for having discussions. They offer an EXTRA view on the same problem/solution. Extra as in "this is not the only visualization you should use in conversations with a domain expert". UX requirements are requirements too. They should be catered for.
Personally, I find that when I'm discussing UI I'm wearing a different hat than when I'm discussing pure functionality (you know, things that don't require a UI to explain what the application is doing/should do). I might switch hats during the same conversation just to uncover misalignment.
First things first: since I saw you talking about a message bus, I think we need to talk about BC integration first.
You do not need a message bus to communicate between BCs; here is an explanation of how I integrate different BCs:
I expose some public interfaces on each BC (similar to domain commands, queries and events), and have an intermediate layer in my infrastructure that translates these calls to the other BC.
Here is an example interface for exposed commands in a BC:
public interface IHandleCommands
{
    void DoSomething(Guid SomeId, string SomeName);
}
I also have a similar one for exposed events
public interface IPublishEvents
{
    void SomethingHappened(Guid SomeId, string SomeName);
}
Finally, for my exposed data (i.e. the queries in CQ(R)S) I have another interface. Note that this allows you to remove the coupling between your domain model and your query code at any time.
public interface IQueryState
{
    // DateTime.MinValue is not a valid compile-time default; default(DateTime) has the same value.
    IEnumerable<SomeData> ActiveData(DateTime From = default(DateTime), ... );
}
And my implementation looks like this:
public class SomeAR : IHandleCommands
{
    private readonly IPublishEvents Bus;

    public SomeAR(IPublishEvents Bus)   // constructor name must match the class name
    {
        this.Bus = Bus;
    }

    public void DoSomething(Guid x, string y)
    {
        Bus.SomethingHappened(SomeId: x, SomeName: y);
    }
}
After all, when you think about it, things like domain events can be done without the messaging as well; just replace the message classes with interface members, and replace the handlers with interface implementations that get injected into your BC.
These handlers then invoke commands on other BCs; they are the glue that binds the different BCs together (think workflows/stateless sagas, etc.).
This could be an example handler:
public class WorkFlow : IPublishEvents
{
    private readonly IHandleCommands AnotherBC;   // the other BC's exposed commands, injected

    public WorkFlow(IHandleCommands AnotherBC)
    {
        this.AnotherBC = AnotherBC;
    }

    public void SomethingHappened(Guid SomeId, string SomeName)
    {
        AnotherBC.DoSomething(SomeId, SomeName);
    }
}
This is an easy approach that does not require a lot of effort, and I have used this with great success. If you want to switch to full-blown messaging later on, it should be easy to do.
To answer your questions about the UI:
I think you are being too rigid about this.
As long as the domain is (or can easily be) decoupled from the UI, you can start with a single UI project and then split it up the minute you start experiencing pain somewhere. If you do split up the code, split it per BC so that the project structures match.
I find building a UI this way to be the most efficient approach for me...

Long living Entity Framework context - best practice to avoid data integrity issues

As a very rudimentary scenario, let's take these two operations:
UserManager.UpdateFirstName(int userId, string firstName)
{
    User user = userRepository.GetById(userId);
    user.FirstName = firstName;
    userRepository.SaveChanges();
}

InventoryManager.InsertOrder(Order newOrder)
{
    orderRepository.Add(newOrder);
    orderRepository.SaveChanges();
}
I have only used EF in web projects and relied heavily on the stateless nature of the web. With every request I would get a fresh copy of the context injected into my business layer facade objects (services, managers, whatever you want to call them), all of the business managers sharing the same instance of the EF context. Currently I am working on a WPF project and I'm injecting the business managers and subsequently the repositories that they use directly into the View Model.
Let's assume a user is on a complicated screen and their first button click calls the UpdateFirstName() method, and let's assume SaveChanges() fails for whatever reason. Their second button click will invoke the InsertOrder() method.
On the web this is not an issue, as the context for operation #2 is not related (new HTTP request) to the context used by operation #1. On the desktop, however, it's the same context across both actions. The issue is that the user's first name has been modified and as such is tracked by the context. Even though the original SaveChanges() didn't take effect (say the database was unavailable at the time), the second operation's SaveChanges() will not only insert the new order, it will also update the user's first name. In just about every case this is not desirable, as the user has long forgotten about their first action, which failed anyway.
This is obviously a silly example, but I always tend to bump into these sorts of scenarios where I wish I could just start fresh with a new context for every user action, as opposed to having a longer-lived context (for the life of the WPF window, for example).
How do you guys handle these situations?
A very real question when coming from ad hoc desktop apps that hit the database directly.
The answer that seems to fit the philosophy of ORMs is to have a context with the same lifespan as your screen.
The data context is, in essence, a Unit of Work implementation. For a web app the unit of work is obviously the request, and that works well.
For a desktop app, you could have a context tied to the screen, or possibly tied to the edit operation (between loading and pressing "save"). Read only operations could have a throwaway context (using IDs to reload the object when necessary, for example when pressing a "delete button" in a grid).
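A minimal sketch of the per-edit-operation variant, assuming a hypothetical DesktopDbContext with a Users set; the context lives only as long as the edit, so a failed save can never leak into a later, unrelated action.

using System.Data.Entity;

public class AppUser
{
    public int Id { get; set; }
    public string FirstName { get; set; }
}

public class DesktopDbContext : DbContext
{
    public DbSet<AppUser> Users { get; set; }
}

public class UserManager
{
    public void UpdateFirstName(int userId, string firstName)
    {
        // One context per edit operation: reload by ID, change, save, dispose.
        using (var context = new DesktopDbContext())
        {
            var user = context.Users.Find(userId);
            if (user == null) return;

            user.FirstName = firstName;
            context.SaveChanges();   // if this fails, nothing stays tracked afterwards
        }
    }
}

Read-only screens can work the same way: keep only the IDs around and open a throwaway context when a row actually has to be reloaded or deleted.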
Forget about a context that spans the entire application's life if you want to stay aware of modifications from other users (relationship collections are cached when first loaded).
Also forget about directly sharing EF entities between different windows, because that effectively makes it an app-wide and thus long lived context.
It seems to me ORMs tend to enforce a web-like design on desktop apps because of this. No real way around this I'm afraid.
Not that it is necessarily a bad thing in itself. You should normally not be hitting the database directly from the desktop if it is shared between multiple instances; in that case you would be using a service layer, EF would be in its element, and your problem would shift to the "service context"...
We have a large WPF application that calls WCF services to perform CRUD operations like ‘UpdateFirstName()’ or ‘InsertOrder()’. Every call into the service creates a new ObjectContext, therefore we don’t need to worry about inconsistent ObjectContexts hanging around.
Throw away the ObjectContext as soon as you are done with it. Nothing’s wrong with generating a new ObjectContext on the fly. There’s unnoticeable overhead in creating a new ObjectContext and you’ll save yourself many future bugs and headaches.
Suppose you do create a new ObjectContext each time a method is called on your repository; the snippet below would still give you transaction support by using the TransactionScope class. For example:
using (TransactionScope scope = new TransactionScope())
{
    UserManager.UpdateFirstName(userId, firstName);
    InventoryManager.InsertOrder(newOrder);
    scope.Complete();
}
A completely different way is to write all changes to the database directly (with short-lived database contexts) and then add some kind of draft/versioning on top of that. That way you can still offer OK/undo/cancel functionality.

How can I do error handling in an MVC/MVVM Windows Forms app

I'm building an application using a pattern similar to MVC. The situation is this: the model makes changes to its associated repository. If the change throws an exception, what is the correct way to present the information about the error to the user?
In the previous version of my program, which had a spaghetti code organization (model, view, and controller overlapped), launching a message box telling the user about the error wasn't odd because I was doing almost everything from the views. In this new version I want to do things correctly, so I assume it is bad to do anything that has a visual representation in the model layer.
Some time ago I asked what the correct way to capture exceptions is. The specific point I was referring to was escalating exceptions from inner code to an upper layer versus capturing them in the uppermost layer. Almost all the responses said that escalating exceptions (capturing and rethrowing them to be caught by a responsible entity) isn't a good approach, and that it is better to capture them in the uppermost layer.
So I have this conflict in my head. I was thinking that escalating is inevitable in order to maintain the separation of concerns, but that conflicts with the previous advice.
How should I proceed?
A common pattern is to have a generic place to put errors in your existing model.
One easy way to do this is to have your model classes all inherit from a base model class that has a property of type IEnumerable<ErrorBase>, or some other type of your choosing.
Then, in your presenter, you can check the error collection and display as necessary.
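A minimal sketch of that shape, with ErrorBase and the base class invented for illustration:

using System;
using System.Collections.Generic;

public sealed class ErrorBase
{
    public string Message { get; set; }
}

public abstract class ModelBase
{
    private readonly List<ErrorBase> _errors = new List<ErrorBase>();

    public IEnumerable<ErrorBase> Errors => _errors;
    public bool HasErrors => _errors.Count > 0;

    protected void AddError(string message) =>
        _errors.Add(new ErrorBase { Message = message });
}

public sealed class CustomerModel : ModelBase
{
    public void Rename(string newName)
    {
        try
        {
            // The repository call would go here.
        }
        catch (Exception ex)
        {
            AddError(ex.Message);   // the repository failure becomes a model error
        }
    }
}

// In the presenter, after calling into the model:
//     if (model.HasErrors) { /* show the messages in whatever way the view supports */ }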
As far as having exceptions bubble up, the approach I use (almost regardless of what type of application I'm building) is to only handle exceptions at lower levels if you are going to do some special logging (like logging important local variables), or if you can do something intelligent with that exception. Otherwise, let it bubble.
At the layer right before your presenter (or web service class, or whatever), that's when you can capture your exceptions, do general logging, and wrap them in (or replace them with) a "sanitized" exception. In the case of the UI, you just make sure you don't reveal sensitive data, and perhaps display a friendly message if possible. For web services, you turn these into some kind of fault. Etc.
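A small sketch of that boundary layer, with all names invented: the repository lets exceptions bubble, and the service sitting just below the presenter logs them and rethrows a sanitized exception the UI can show safely.

using System;

public sealed class SanitizedException : Exception
{
    public SanitizedException(string friendlyMessage, Exception inner)
        : base(friendlyMessage, inner) { }
}

public class CustomerRepository
{
    public void Rename(int customerId, string newName)
    {
        // Data access goes here; any exception is simply allowed to bubble up.
    }
}

// The layer right before the presenter: log the details, hand the UI something safe.
public sealed class CustomerService
{
    private readonly CustomerRepository _repository;

    public CustomerService(CustomerRepository repository) => _repository = repository;

    public void Rename(int customerId, string newName)
    {
        try
        {
            _repository.Rename(customerId, newName);
        }
        catch (Exception ex)
        {
            Log(ex);   // full details stay in the log, including anything sensitive
            throw new SanitizedException("The customer could not be renamed.", ex);
        }
    }

    private static void Log(Exception ex)
    {
        // Write to whatever logging framework the application uses.
    }
}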
The uppermost layers aren't "responsible" for the bubbled exceptions; they're simply responsible for making sure those don't show up to the end users (or web service client, or whatever) in a form you don't want them to... in other words, you're "presenting" them appropriately.
Remember, separation of concerns is a paradigm that you should follow as a rule of thumb, not an edict that owns all. Just like leaky abstractions, there are leaky paradigms. Do what makes sense and don't worry about it too much. :)
