Domain Driven Design (DDD) and database generated reports

I'm still investigating DDD, but I'm curious to know about one potential pitfall.
According to DDD an aggregate-root shouldn't know about persistence, but doesn't that mean the entire aggregate-root ends up being instantiated in memory?
How could the aggregate-root, for instance, ask the database to group and sum a lot of data if it's not supposed to know about persistence?

According to DDD an aggregate-root shouldn't know about persistence, but doesn't that mean the entire aggregate-root ends up being instantiated in memory?
Oh no, it's worse than that; the entire aggregate (the root and all of its subordinate entities) gets instantiated in memory. Essentially by definition, you need all of that state loaded in order to validate any change.
How could the aggregate-root, for instance, ask the database to group and sum a lot of data if it's not supposed to know about persistence?
You don't need the aggregate-root to do that.
The primary role of the domain model is to ensure the integrity of the book of record by ensuring that all writes respect your business invariant. A read, like a database report, isn't going to change the book of record, so you don't need to load the domain model.
If the domain model itself needs the report, it typically defines a service provider interface describing the report it needs, and your persistence component is responsible for figuring out how to implement that interface.
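As a rough sketch (every name here is made up: IOrderTotalsReport, SqlOrderTotalsReport, an Orders table), the domain only declares the figure it needs, and the persistence implementation lets the database do the grouping and summing:
using System;
using System.Data.SqlClient;

// Declared in the domain layer: "this is the figure I need", nothing about how it is computed.
public interface IOrderTotalsReport
{
    decimal TotalFor(Guid customerId);
}

// Implemented in the persistence component, which is free to push the work down to SQL.
public class SqlOrderTotalsReport : IOrderTotalsReport
{
    private readonly string _connectionString;

    public SqlOrderTotalsReport(string connectionString)
    {
        _connectionString = connectionString;
    }

    public decimal TotalFor(Guid customerId)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "SELECT COALESCE(SUM(Total), 0) FROM Orders WHERE CustomerId = @customerId", connection))
        {
            command.Parameters.AddWithValue("@customerId", customerId);
            connection.Open();
            return (decimal)command.ExecuteScalar();
        }
    }
}
The aggregate (or an application service acting on its behalf) only ever sees IOrderTotalsReport; it never learns that a database is involved.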

According to DDD an aggregate-root shouldn't know about persistence, but doesn't that mean the entire aggregate-root ends up being instantiated in memory?
Aggregate roots are consistency boundaries, so yes you would typically load the whole aggregate into memory in order to enforce invariants. If this sounds like a problem it is probably a hint that your aggregate is too big and possibly in need of refactoring.
How could the aggregate-root, for instance, ask the database to group and sum a lot of data if it's not supposed to know about persistence?
The aggregate wouldn't ask the database to group and sum data - typically you would load the aggregate in an application service / command handler. For example:
public class SomeUseCaseHandler : IHandle<SomeCommand>
{
    private readonly ISomeRepository _someRepository;

    public SomeUseCaseHandler(ISomeRepository someRepository)
    {
        _someRepository = someRepository;
    }

    public void When(SomeCommand command)
    {
        var someAggregate = _someRepository.Load(command.AggregateId);
        someAggregate.DoSomething();
        _someRepository.Save(someAggregate);
    }
}
So your aggregate remains ignorant of how it is persisted. However, your implementation of ISomeRepository is not ignorant, so it can do whatever is necessary to fully load the aggregate. You could have your persistence implementation group/sum when loading the aggregate, but more often you would probably query a read model:
public class SomeUseCaseHandler : IHandle<SomeCommand>
{
    private readonly ISomeRepository _someRepository;
    private readonly ISomeReadModel _someReadModel;

    public SomeUseCaseHandler(ISomeRepository someRepository, ISomeReadModel someReadModel)
    {
        _someRepository = someRepository;
        _someReadModel = someReadModel;
    }

    public void When(SomeCommand command)
    {
        var someAggregate = _someRepository.Load(command.AggregateId);
        someAggregate.DoSomethingThatRequiresTheReadModel(_someReadModel);
        _someRepository.Save(someAggregate);
    }
}
You haven't actually said what your use case is though. :)
[Update]
Just noticed the title refers to database generated reports - these would not go through your domain model at all; they would be served by a completely separate read model. CQRS applies here.
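A bare-bones sketch of what that separate read side might look like (the names are invented); the report query talks to the database directly and never loads an aggregate:
using System.Collections.Generic;

// A flat row shaped for the report, not a domain entity.
public class SalesByRegionRow
{
    public string Region { get; set; }
    public decimal TotalSales { get; set; }
}

// The read-side query interface; an implementation can use raw SQL, a database view,
// Dapper, or whatever is convenient - the domain model and repositories are not involved at all.
public interface ISalesReportQueries
{
    IReadOnlyList<SalesByRegionRow> TotalsByRegion(int year);
}
A controller or reporting screen consumes ISalesReportQueries directly; only commands go through the aggregates.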

Related

OOP composition and orm

I am building a simple rate limiter to train my OOP skills and I am having some doubts regarding composition and ORM.
I have the following code:
interface RateLimiterService {
    void hit(String userid, long timestamp) throws UserDoesNotExistEx, TooManyHitsEx; // Option A
    SingleRateLimiter getUser(String userid) throws UserDoesNotExistEx;               // Option B
    Optional<SingleRateLimiter> getUser(String userid);                               // Option C
}

class LocalRateLimiterService implements RateLimiterService {
    // Uses a hash table userid -> SingleRateLimiter
}

interface SingleRateLimiter {
    void hit(long timestamp) throws TooManyHitsEx;
}

class TimestampListSRL implements SingleRateLimiter {
    // Uses a list to store the timestamps and purges the expired ones at each call
}

class TokenBucketSRL implements SingleRateLimiter {
    // Uses the token bucket approach
}
My doubts are:
Which option should I use for the RateLimiterService interface?
Option A is usually called "method forwarding" or "delegation", and it follows the Law of Demeter. It protects the composed object by only exposing the intended methods and/or by possibly adding some extra validation logic before forwarding the call. Therefore, it seems like a good solution when that is needed. However, when this is not the case (as in my example), this option creates a lot of redundant repetition which adds nothing useful.
Option B breaks encapsulation in a way, but it avoids the method repetition (the DRY principle). By picking A or B you always end up breaking some well-known principles/good practices. Is there another option?
Option C is the same as B but returns an Optional instead of throwing an exception. Which approach is considered better?
If the classes that implement the RateLimiterService had a single composing SingleRateLimiter instead of a collection of SingleRateLimiters (this doesn't make much sense in this case, but I am trying to stay generic for situations where the composed object is not a collection), would the best option change from the one in 1.?
If I wanted to add a database to this system, what would be the best approach to "talk to" the database?
Creating a class DBRateLimiterService that implements RateLimiterService and has a private connection object to the database (basically a DAO)? In this case, this class does not know anything besides the userid of the inner SingleRateLimiters, since there are multiple implementations available/possible. So how can I take this approach without changing the current OOP architecture?
In addition, I would need to create a DAO for each SingleRateLimiter implementation too, right? In this case, the SingleRateLimiter is not a simple model object that has only getters and setters, so it should also be a DAO, right? Its hit method must be implemented as a transaction in most cases (if not all). If this is the right approach, how can the two DAOs operate together and map to the same database table?
What other options could serve for this?

How to handle multiple entity update in the same transaction in Spring Data REST

Does anyone have an idea of how to handle multiple entity updates within the same transaction in Spring Data REST? The same thing can be handled within Spring controller methods using the @Transactional annotation. If I am correct, Spring Data REST executes every event within a separate transaction, so multiple entity updates cannot be handled in a proper way.
I am having issues updating two entities (ABC and PQR) within the same transaction and rolling back the ABC entity when the PQR update fails.
// ABC repository
@RepositoryRestResource
public interface ABCEntityRepository extends MongoRepository<ABC, String> {
}

// PQR repository
@RepositoryRestResource
public interface PQREntityRepository extends MongoRepository<PQR, String> {
}

// ABC repository event handler
@RepositoryEventHandler
public class ABCEventHandler {

    @Autowired
    private PQREntityRepository pqrEntityRepository;

    @HandleBeforeSave
    public void handleABCBeforeSave(ABC abc) {
        log.debug("before saving ABC...");
    }

    @HandleAfterSave
    public void handleABCAfterSave(ABC abc) {
        List<PQR> pqrList = pqrEntityRepository.findById(abc.getPqrId());
        if (pqrList != null && !pqrList.isEmpty()) {
            pqrList.forEach(pqr -> {
                // update PQR objects
            });
        }
        // expect to fail this transaction
        pqrEntityRepository.saveAll(pqrList);
    }
}
Since the @HandleAfterSave method is executed in a separate transaction, by the time it is called the ABC entity update has already completed and therefore cannot be rolled back. Any suggestions on how to handle this?
Spring Data REST does not think in entities, it thinks in aggregates. Aggregate is a term coming from Domain-Driven Design that describes a group of entities for which certain business rules apply. Take an order alongside its line items, for example, and a business rule that defines a minimum order value that needs to be reached.
The responsibility to govern constraints aligns with another aspect of aggregates in DDD: strong consistency should/can only be assumed for changes to a single aggregate itself. Changes to multiple (different) aggregates should be expected to be eventually consistent. If you transfer that into technology, it's advisable to apply the means of strong consistency – read: transactions – to single aggregates only.
So there is no short answer to your question. The repository structure you show here virtually turns both ABCEntity and PQREntity into aggregates (as repositories only exist for aggregate roots). That means, OOTB Spring Data REST does not support updating them in a single transactional HTTP call.
That said, Spring Data REST allows the declaration of custom resources that can take responsibility for doing that. Similarly to what is shown here, you can simply add resources on additional routes to implement exactly what you have in mind.
Spring Data REST is not designed to produce a full HTTP API out of the box. It's designed to implement certain REST API patterns that are commonly found in HTTP APIs and will very likely be part of your API. It's built to save you from spending time on the straightforward cases so that you only have to plug in custom code for scenarios like the one you describe, assuming what you plan to do here is a good idea in the first place. Very often, requests like these lead to the conclusion that the aggregate design needs a bit of rework.
PS: I saw you tagged that question with spring-data-mongodb. By default, Spring Data REST does not support MongoDB transactions because it doesn't need them. MongoDB document boundaries usually align with aggregate boundaries and updates to a single document are atomic within MongoDB anyway.
I'm not sure I understood your question correctly, but I'll give it a try.
I'd suggest having a service with both repositories autowired in, and a method annotated with @Transactional that updates everything you want.
This way, if the transaction fails anywhere inside the method, it will all rollback.
If this does not answer your question, please clarify and I'll try to help.

is it a bad practice to have a static field?

I use a static field in this situation because I think it is time consuming to recreate the object at each request.
private static AnalysedCompanies db = new AnalysedCompanies();

public class AnalysedCompanies : DbContext
{
    ...
}
I use Entity Framework Code First.
Then I have methods for saving and loading data from the database through the db object.
Is the static db object going to cause a bottleneck? Is this the right thing to do?
In an ASP.NET web application, statics are shared by all users, so yes, that's pretty bad: it means that User A can possibly see/modify data that User B sees, which leads to all sorts of headaches.
Static fields are fine for static data, that is, data that a) is shared by everyone and b) isn't modified by users (as changes are global to all other users). I do use statics for stuff like system configuration or objects that can be safely shared.
I think the main problem is this: "I think it is time consuming" - don't guess, measure. There are many profilers available for .NET. If you have performance issues, measure to see if it really is a problem and then act.
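For what it's worth, EF builds and caches its model the first time a context type is used, so constructing a new DbContext per request is normally cheap. A rough sketch of the per-request style (Company and the Companies DbSet are made-up names; adjust to your model):
public Company LoadCompany(int id)
{
    // A new, short-lived context per request/unit of work; nothing is shared between users.
    using (var db = new AnalysedCompanies())
    {
        return db.Companies.Find(id);   // assumes: public DbSet<Company> Companies { get; set; }
    }
}
The using block also returns the underlying connection to the pool instead of holding it for the lifetime of the application.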

Do I call my Services from the ViewModel OR Model in MVVM design pattern?

I was asking another question here on SO and one user really confused me by suggesting to do the following. I have read it a thousand times on SO that an entity should never make a save/add/delete call via a service to the database. That's the task of the ViewModel!
What do you say?
public class School
{
    private ISchoolRepository _repository;

    public string Name { get; set; }

    public School()
    {
        this._repository = IoC.Resolve<ISchoolRepository>();
    }

    public bool IsValid()
    {
        // Some kind of business logic?
        if (this.Name != null)
        {
            return true;
        }
        return false;
    }

    public void Save()
    {
        if (this.IsValid())
        {
            this._repository.Save(this);
        }
    }
}
I would not mix the repository with the entities either because I like the entities to be free of any contextual or environmental state. I think managing the storage of the entities is the sole responsibility of the repository.
What would happen if you have dependencies between entities? For example, a school has students. You can’t save students until you save the school. You would have to build this logic into your student entities. Would your students save the school also? Would they refuse to save? Do they need to check the database for the school? They will at least need to know something about the school so you then create a dependency between school and students that is pretty hard wired.
Then you add teachers and you need to add similar logic for them. Your code to represent these relationships and dependencies is then spread across many entities. Think about transactions too. Then add in multiple tiers. Do you see how complicated this could become? Pretty soon you have spaghetti with meatballs and cheese!
It's the repositories’ responsibility to know this stuff.
HTH
Cheers
Calling the service from your entity violates the Single Responsibility Principle. If, in the future, you need to have your entities hydrated from a different backing store than the service, you will have to change all your entities. Even though you are injecting the repository, it is still violating SRP.
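For illustration, a rough sketch of the alternative (SchoolEditorViewModel is a made-up name; ISchoolRepository is the interface from the question): the entity stays persistence-free and the caller owns the save.
public class School
{
    public string Name { get; set; }

    public bool IsValid()
    {
        return !string.IsNullOrEmpty(this.Name);
    }
}

public interface ISchoolRepository
{
    void Save(School school);
}

// The ViewModel (or an application service) coordinates persistence.
public class SchoolEditorViewModel
{
    private readonly ISchoolRepository _repository;

    public SchoolEditorViewModel(ISchoolRepository repository)
    {
        _repository = repository;
    }

    public School CurrentSchool { get; set; }

    public void Save()
    {
        if (CurrentSchool != null && CurrentSchool.IsValid())
        {
            _repository.Save(CurrentSchool);
        }
    }
}
Swapping SQL Server for Oracle then only means registering a different ISchoolRepository implementation; the School class itself never changes.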
I don't understand what is wrong with this approach. If I wanted to swap out from, say, SQL Server to Oracle, I would simply register a new repository called "OracleSchoolRepository" and ensure it satisfies the ISchoolRepository interface.
I don't see any issues with that. Can you highlight a scenario where the above would become a problem?
Thanks!
Ben

Model-View-ViewModel pattern violation of DRY?

I read this article today http://dotnetslackers.com/articles/silverlight/Silverlight-3-and-the-Data-Form-Control-part-I.aspx about the use of the MVVM pattern within a Silverlight app where you have your domain entities and view-specific entities, which are basically a subset of the real entity objects. Isn't this a clear violation of the DRY principle? And if so, how can you deal with it in a nice way?
Personally, I don't like what Dino's doing there and I wouldn't approach the problem the same way. I usually think of a VM as filtered, grouped, and sorted collections of Model classes. A VM to me is a direct mapping to the View, so I might create a NewOrderViewModel class that has multiple CollectionViews used by the View (maybe one CV for Customers and another CV for Products, probably both filtered). Creating an entirely new VM class for every class in the Model does violate DRY in my opinion. I would rather use derivation or partial classes to augment the Model where necessary, adding in View-specific (often calculated) properties. IMO .NET RIA Services is an excellent implementation of combining M and VM data, with the added bonus that it's usable on both the client and the server. Dino's a brilliant guy, but I have to call him out on this one.
DRY is a principle, not a hard rule. You are a human and can differentiate.
E.g. if DRY really were a hard rule, you would never assign the same value to two different variables. I guess any non-trivial program has more than one variable containing the value 0.
Generally speaking: DRY does usually not apply to data. Those view specific entities would probably only be data transfer objects without any noteworthy logic. Data may be duplicated for all kinds of reasons.
I think the answer really depends on what you feel should be in the ViewModel. For me the ViewModel represents the model of the screen currently being displayed.
So for something like a ViewCategoryViewModel, I don't have a duplication of the fields in Category. I expose a Category object as a property on the ViewModel (under say "SelectedCategory"), any other data the view needs to display and the Commands that screen can take.
There will always be some similarity between the domain model and the view model, but it all comes down to how you choose to create the ViewModel.
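A small sketch of that shape (the Category properties are invented): the ViewModel exposes the model instead of copying its fields.
public class Category
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ViewCategoryViewModel
{
    // The domain object itself is exposed; its fields are not duplicated onto the ViewModel.
    public Category SelectedCategory { get; set; }

    // View-specific, calculated state lives here...
    public string Title
    {
        get { return SelectedCategory == null ? "Category" : "Category: " + SelectedCategory.Name; }
    }

    // ...along with the ICommand properties for what the screen can do.
}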
It's the same as with Data Transfer Objects (DTO).
The domain for those two object types is different, so it's not a violation of DRY.
Consider the following example:
class Customer
{
    public int Age;
}
And a corresponding view model:
class CustomerViewModel
{
    public string Age;

    // WPF validation code is going to be a bit more complicated:
    public bool IsValid()
    {
        return string.IsNullOrEmpty(Age) == false;
    }
}
Different domains - different property types - different objects.
