When submitting data to the data layer, where userID is not a field on the object being passed but I still need to cross-reference tables by userID when saving, should I call the membership class to get the UserID in the data layer, or should I pass UserID from layer to layer as a parameter (i.e. from the business layer to the data layer)?
(Or does it not matter either way?)
Controller or Business Layer:
MembershipUser user = Membership.GetUser();
Guid userID = (Guid)user.ProviderUserKey;
DL.SaveObject(Object, userID);
OR
Do it in DataLayer:
SaveObject(Object)
{
    MembershipUser user = Membership.GetUser();
    Guid userID = (Guid)user.ProviderUserKey;
    ...
}
Whilst this is indeed a cross-cutting concern, my personal preference is that code such as:
MembershipUser user = Membership.GetUser();
Guid userID = (Guid)user.ProviderUserKey;
should NOT be in the data layer.
I like to see this kind of code in a layer higher than the data layer, usually the business layer, simply because I want my data layer to be entirely agnostic as to where the data it will read/write/process has come from. I want my data gathering to be done primarily in the UI layer (in the case of user-supplied data/input), with perhaps a little more data gathering within the business layer (such as gathering a UserID or retrieving a user's roles/authorisation).
Putting code such as this in the business layer can potentially lead to duplication of this code across many different domain objects; however, this can be alleviated by abstracting it away into its own object and using object composition to allow other domain objects to access it.
Passing input to the data access layer should be done by the controller or business layer. I prefer not to include anything other than data reads/writes in the DAL. (In the second option above, the DAL is additionally given the task of determining the currently logged-in user.)
In general I'd prefer to see GetUser() in SaveObject(), since that would allow the business layer to be abstracted away from it and should reduce the amount of code that is calling GetUser(). But it depends on the requirements. If you needed to apply business rules based on who the user is, then (also) putting it in the business layer might make more sense.
Authorization/authentication is one of those cross-cutting concerns that AOP (aspect oriented programming) is best suited to handle.
Update:
CraigTP makes a valid point regarding keeping the data layer agnostic about where its data comes from. In general I would agree. In this scenario, though, there is a requirement that the data persistence mechanism know the user's identity, probably for security and/or auditing purposes. So in this case I'd prefer to put the user-identity lookup under the control of the layer that needs it, and abstract away the details of the GetUser() implementation behind another call so that the data layer doesn't take a dependency on System.Web.Security.
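As a rough sketch of that abstraction, with invented names like ICurrentUserProvider (none of these types come from the original posts):

// Hypothetical abstraction: the data layer depends only on this interface,
// never on System.Web.Security.
public interface ICurrentUserProvider
{
    Guid GetCurrentUserId();
}

// Lives in the business (or web) layer, where Membership is fair game.
public class MembershipCurrentUserProvider : ICurrentUserProvider
{
    public Guid GetCurrentUserId()
    {
        MembershipUser user = Membership.GetUser();
        return (Guid)user.ProviderUserKey;
    }
}

// Data layer: receives the abstraction, e.g. via constructor injection.
public class Repository
{
    private readonly ICurrentUserProvider _currentUser;

    public Repository(ICurrentUserProvider currentUser)
    {
        _currentUser = currentUser;
    }

    public void SaveObject(object entity)
    {
        Guid userID = _currentUser.GetCurrentUserId();
        // ... persist entity, cross-referencing tables with userID ...
    }
}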
I'm creating a design for a Twitter application to practice DDD. My domain model looks like this:
The user and the tweet are marked blue to indicate that they are aggregate roots. Between the user and the tweet I want a bounded-context boundary; each will run in its respective microservice (auth and tweet).
To reference which user has created a tweet, without running into a self-referencing loop, I have created the UserInfo object. The UserInfo object is created via events when a new user is created. It stores only the information about the user that the Tweet microservice will need.
When I create a tweet I only provide the user id and the relevant fields to the tweet. With that user id I want to be able to retrieve the UserInfo object, via id reference, and use it in the various child objects, such as Mentions and Poster.
The issue I run into is persistence. At first glance I thought "just provide the UserInfo object in the tweet constructor and it's done; all the child aggregates have access to it". But it's a bit harder for the Mention class, since the Mention will contain a dynamic username like "#anyuser". To validate whether anyuser exists as a UserInfo object I need to query the database. However, I don't know who is mentioned before the tweet's content has been parsed, and that logic resides in the domain model itself and is called from the tweet's constructor. Without this logic, no mentions are extracted, so nothing can yet be validated.
If I cannot validate it before creating the tweet, because I need the extraction logic, and I cannot use the database repository inside the domain model layer, how can I validate the mentions properly?
Whenever an AR needs to reach outside its own boundary to gather data, there are two main solutions:
You pass in a service to the AR's method which allows it to perform the resolution. The service interface is defined in the domain, but most likely implemented in the infrastructure layer.
e.g. someAr.someMethod(args, someServiceImpl)
Note that if the data is required at construction time you may want to introduce a factory that takes a dependency on the service interface, performs the validation and returns an instance of the AR.
e.g.
tweetFactory = new TweetFactory(new SqlUserInfoLookupService(...));
tweet = tweetFactory.create(...);
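A minimal C# sketch of that factory approach, with invented names (IUserInfoLookupService, MentionParser, UserInfo) standing in for whatever the real domain provides:

// Service interface defined in the domain, implemented in infrastructure.
public interface IUserInfoLookupService
{
    UserInfo FindByUsername(string username); // returns null if unknown
}

public class TweetFactory
{
    private readonly IUserInfoLookupService _userLookup;

    public TweetFactory(IUserInfoLookupService userLookup)
    {
        _userLookup = userLookup;
    }

    public Tweet Create(Guid authorId, string content)
    {
        // Domain logic extracts the mentions; the factory resolves them
        // against the lookup service before the Tweet is ever constructed.
        var mentionedUsers = MentionParser.Extract(content)
            .Select(name => _userLookup.FindByUsername(name))
            .Where(info => info != null)
            .ToList();

        return new Tweet(authorId, content, mentionedUsers);
    }
}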
You resolve the dependencies in the application layer first, then pass the required data in. Note that the application layer could take a dependency on a domain service in order to perform some reverse resolutions first.
e.g.
If the application layer would like to resolve the UserInfo for all mentions, but can't because it doesn't know how to parse mentions out of the text, it can always rely on a domain service or value object to perform that task first, then resolve the UserInfo dependencies and provide them to the Tweet AR. Be cautious not to leak too much logic into the application layer, though: if the orchestration logic becomes intertwined with business logic, you may want to extract such use-case processing logic into a domain service.
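Sketched in C# (the handler, repository, and parser names are again assumptions, not from the original):

// Application layer: resolve everything up front, then hand plain data to the AR.
public class PostTweetHandler
{
    private readonly IMentionParser _mentionParser;     // domain service
    private readonly IUserInfoRepository _userInfoRepo;
    private readonly ITweetRepository _tweetRepo;

    public PostTweetHandler(IMentionParser mentionParser,
                            IUserInfoRepository userInfoRepo,
                            ITweetRepository tweetRepo)
    {
        _mentionParser = mentionParser;
        _userInfoRepo = userInfoRepo;
        _tweetRepo = tweetRepo;
    }

    public void Handle(Guid authorId, string content)
    {
        // The domain service knows how to parse mentions out of the text;
        // the application layer only orchestrates.
        var mentionedUsers = _mentionParser.Extract(content)
            .Select(name => _userInfoRepo.FindByUsername(name))
            .Where(info => info != null)
            .ToList();

        _tweetRepo.Save(new Tweet(authorId, content, mentionedUsers));
    }
}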
Finally, note that any data validated outside the boundary of an AR is always considered stale. The #xyz user could currently exist, but not exist anymore (e.g. deactivated) 1ms after the tweet was sent.
We have a WPF desktop application that uses the MVVM pattern and DDD (well, at least my model classes that store data are named after entities taken from the real world). The app uses several microservices through a REST API. And it worked perfectly, until we decided it was time to put a facade in front of the back end to unite all those microservices and fetch only the data we need for a particular screen.
But the question is: how do we make them live together?
On the one hand, we have dynamically shaped data returned from GraphQL. That means, for example, that for a list of people on one screen we will request the id, name, surname and role of each person, while on a different screen, for a dropdown of people, we will request the same data but without the role.
On the other hand, we have a class Person with a static set of fields Id, Name, Surname and Role, which a person has in "real life".
If we use the same Person class with GraphQL, converting the JSON data into the Person model, both screens will work fine, but behind the scenes the screen that doesn't need Role won't request it from GraphQL. We then have a situation where the model class Person has a Role field that is simply empty, which I believe is a code smell. At least I don't feel it would be easy to maintain such code: a developer needs to add some information to the screen, opens the model, sees that Role is there, binds the field to the screen and goes to drink coffee. And then, oops, the field is there but no data was ever assigned to it.
Two variants I have in mind are:
either to not use models and DDD at all and map the data directly to the ViewModel (which personally feels like ruining everything we had before),
or to map that dynamic data to our existing models, accepting that different fields (of the same Person class, for example) will be empty on different screens (because they were not requested).
Maybe somebody has already used such a combination. How do you use it, and what are the pros and cons?
It's a fairly common situation: you have a data layer that returns many columns, but only some of them are used in a given view.
There is no absolute "best" solution independent of how much impact the full set of columns will have on performance, which might in turn be linked to things like caching.
You could write services that return subsets of data and then you only use the necessary bandwidth. Sort of a CQRS pattern but with maybe more models than just read + write.
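For example (all names here are hypothetical), the read side could expose purpose-built shapes per screen:

// Two query-specific read models; each service method returns only what
// its screen actually consumes.
public record PersonListItem(Guid Id, string Name, string Surname, string Role);
public record PersonDropdownItem(Guid Id, string Name, string Surname);

public interface IPersonQueries
{
    IReadOnlyList<PersonListItem> GetPeopleForList();         // needs Role
    IReadOnlyList<PersonDropdownItem> GetPeopleForDropdown(); // doesn't
}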
Often this is unnecessary and the complications introduced do not compensate for the increased cost of maintenance.
What is often done is just to map from model to viewmodel (and back). The viewmodel that needs just 4 columns just has 4 properties and any more returned by the model are not copied. The viewmodel that needs 5 has 5 properties and they are copied from the model.
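A hedged sketch of that mapping in C# (the viewmodel names are invented for illustration):

// One full model; two viewmodels that copy only the properties they need.
public class Person
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string Surname { get; set; }
    public string Role { get; set; }
}

// List screen: needs Role, so it has a Role property.
public class PersonRowViewModel
{
    public string Name { get; set; }
    public string Surname { get; set; }
    public string Role { get; set; }

    public static PersonRowViewModel From(Person p) =>
        new PersonRowViewModel { Name = p.Name, Surname = p.Surname, Role = p.Role };
}

// Dropdown: no Role property at all, so nothing can dangle unbound.
public class PersonPickViewModel
{
    public string Name { get; set; }
    public string Surname { get; set; }

    public static PersonPickViewModel From(Person p) =>
        new PersonPickViewModel { Name = p.Name, Surname = p.Surname };
}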
For example, if I have a microservice with this API:
service User {
  rpc GetUser(GetUserRequest) returns (GetUserResponse) {}
}

message GetUserRequest {
  int32 user_id = 1;
}

message GetUserResponse {
  int32 user_id = 1;
  string first_name = 2;
  string last_name = 3;
}
I figured that for other services that require users, I'm going to have to store this user_id in every row that has data associated with that user. For example, if I have a separate Posts service, I would store the user_id for every post author. And then whenever I want to return that user's information from an endpoint, I would need to make a network call to the User service.
Would I always want to do that? Or are there certain times that I want to just copy over information from the User service into my current service (excluding saving into in-memory databases like Redis)?
Copying complete data is generally never required. Most of the time, for purposes of scale or to make microservices more independent, people copy some of the information that is more or less static in nature.
For example, in the Posts service I might copy basic author information, like the name, into the posts microservice, because when somebody requests a list of posts matching some filter, I do not want to fetch the author's name separately for each post.
The side effect of copying data is that you must maintain its consistency, so make sure your business really demands it.
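A hedged sketch of that denormalization in the Posts service (the type and event names are assumptions):

// The Posts service stores a reference plus a denormalized, mostly static copy.
public class Post
{
    public Guid Id { get; set; }
    public int AuthorUserId { get; set; }   // id owned by the User service
    public string AuthorName { get; set; }  // copied so list queries need no extra call
    public string Content { get; set; }
}

// The consistency cost: the Posts service must react when the copy's source
// changes, e.g. by handling a UserRenamed event published by the User service.
public class UserRenamedHandler
{
    private readonly IPostStore _posts; // hypothetical persistence abstraction

    public UserRenamedHandler(IPostStore posts)
    {
        _posts = posts;
    }

    public void Handle(int userId, string newName)
    {
        _posts.UpdateAuthorName(userId, newName);
    }
}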
You'll definitely want to avoid sharing database schema/tables. See this blog for an explanation. Use a purpose built interface for dependency between the services.
Any decision to "copy" data into your other service should be made by that service's team, but they had better have a really good reason for it to make sense. Most designs won't require duplicated data, because service boundaries should be domain-specific and non-overlapping. User ids, in particular, can often be treated as contextual references without any attached logic about users.
One pattern observed is: if you have auth-protected endpoints, you will need to make a call to your auth service anyway, for security, and that same call should allow you to acquire whatever user-id information is necessary.
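Sketched as an interface (the names here are hypothetical, not from any particular auth library):

// One round trip does double duty: the token is verified and the caller's
// user id comes back with the result.
public record AuthResult(bool IsValid, int UserId);

public interface IAuthClient
{
    AuthResult Validate(string bearerToken);
}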
All the regular best practices for API dependencies apply, e.g. regarding stability, versioning, deprecation, etc.
I have the following n-tier design.
In descending order of what can see what:
View >
ViewModel >
Business Logic Layer >
Data Access Layer (repositories)
Currently, the view model uses a business object to do some high-level action (add something to the database), and the BLL uses the DAL to do the low-level operations. The DAL only has simple atomic operations (CRUD), whereas the BLL uses a unit-of-work pattern to accomplish a higher-level business operation that may require access to several repositories, etc. So the business logic for these operations lives in the BLL.
My issue is that right now I am having trouble deciding what my models should be. Since I am using Entity Framework, my business models are basically my EF entities. I have been told by everyone that the business logic should go in the model. If that is true, what is the point of the business logic layer, if each model contains its own business logic? I feel like I would have two areas where I have "business logic".
And how would I add business logic to my EF entities, given that I am not using code-first and the entities get re-created whenever I change my .edmx?
Thank you.
To answer your second question: EF classes are marked as "partial", which means you can create a new file (in the same assembly), put another "partial" class with the same name as the generated class in it, and the compiler will act as if the code in your new file were present in the generated code.
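A short sketch, assuming a generated entity named Order (the properties and the rule are invented for illustration):

// Generated by the designer (regenerated whenever the .edmx changes):
public partial class Order
{
    public int OrderId { get; set; }
    public decimal Total { get; set; }
}

// Your own file, same assembly and namespace; survives regeneration:
public partial class Order
{
    // Hypothetical business rule, purely for illustration.
    public bool IsLargeOrder()
    {
        return Total > 1000m;
    }
}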
The first question is more complicated. I would put the business logic where it makes the most sense. If it pertains to a specific entity, make it part of the partial class mentioned above. If it has to interact with multiple entities, deal with other sources, etc., it should probably go in its own class. Duplicating code is almost never the right answer.
Note that in my opinion, the "Model" is the underlying business logic and data in MVVM. It doesn't necessarily mean that it is just in the data objects, or all in one object. It is simply separated from the View and ViewModel objects.
Suppose my WinForms application has a business entity Order, and the entity is used in multiple views, each handling a different domain or use case in the application. As an example, one view manages orders while another digs into a single order and displays additional data.
If I used NHibernate (or any other ORM) with one session/data context per view (or per db action), I'd end up with two different instances for the same Order (let's say orderId = 1). Although functionally the same entity, they are technically two different instances. Yes, I could implement Equals/GetHashCode to make them "seem" the same.
Why would you go for a single instance per entity vs private instances per view or per use-case?
Having a single instance has the advantage of sharing INotifyPropertyChanged events and sharing additional (non-persistent) data.
Having a private instance in each view would give you the flexibility of undo functionality at the view level. In the example above, I'd allow the user to change order details and give them the option not to save the change. Here, synchronisation between views/use cases happens at the data-persistence level.
What would your argument be?
You should implement Equals/GetHashCode methods. This is a recommended practice when using ORMs.
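A common identity-based sketch (assuming entities expose an Id; you would need to decide how to treat transient, unsaved instances):

public class Order
{
    public virtual Guid Id { get; protected set; }

    public override bool Equals(object obj)
    {
        if (ReferenceEquals(this, obj)) return true;
        var other = obj as Order;
        if (other == null) return false;
        // Two instances stand for the same row when their identifiers match;
        // transient instances with empty ids are never considered equal.
        return Id != Guid.Empty && Id == other.Id;
    }

    public override int GetHashCode()
    {
        return Id.GetHashCode();
    }
}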
In addition, you should typically stick with the "One View, One Session" mantra. Persist all of your objects when your view changes or loses focus. If you really need to share entities across views... well do it! Sometimes you must.
And once again: when we look at business objects from an entity-and-row perspective, we should not be concerned with "object"-level equality.
I can't speak for ORMs, but I think you answered your own question, to an extent. You've provided pros and cons for both options: neither is right or wrong in absolute terms.
The options are only right or wrong depending on your situation. If sharing info makes sense, use a single shared instance; if the ability to undo is more important, use multiple/private instances.
You might have other concerns that drive the decision too: think about the NFRs (or "ilities") and the context of the system. For example, if performance is a key concern and you know you're going to have a large user base, that might suggest one option over the other, or force you to re-think it again from scratch.
Finally, you have "Order", but what about other entities? How are they being handled? Or, if you don't have any yet, what will happen when/if you do? Would that have any impact on your architecture?