I'm a little confused here.
I have my POCO classes created with the Entity Framework, modeled from the database.
Obviously, I'd like to use these classes in the client too (and any bookkeeping on them would be nice if I'd like to send them back and re-attach).
I looked through the classes generated for the WCF service reference and it seems a bit verbose to be sending over the internet, but it doesn't look like there's anything risky in there security-wise.
And yet, I can't find anything online about doing this. Am I going down a completely awful path?
Help?
EDIT: I suppose technically they're not POCO classes if I had them generated by the Entity Framework from the database; just to clear up any possible confusion.
This is a difficult question to answer without knowing more details about your system, but ultimately whether exposing your EF entities in the WCF service contract is the right path or not is influenced by the scope and requirements of the application you are developing.
Perhaps ask yourself the following questions which will hopefully guide your decision:
Is it likely your relational model and object model will need to diverge? This can be driven by a number of factors, but most commonly reporting requirements may enforce a certain design on your database schema (for performance) that you do not want to reflect in your application object model. Using the DB-generated EF entities throughout the application layers can bind you to this database design.
Are you concerned that changes to your database schema may require your clients to regenerate their service references? Again, using the EF entities throughout your application tiers means any changes implemented in your DB schema (whether of concern to the client or not) may bubble up to the service interface, potentially breaking client compatibility with that interface.
Is performance a concern? As you mentioned, the generated classes are verbose. You are likely to be transporting unnecessary baggage across the wire, which could be optimized.
Are you concerned about exposing the implementation details of your database schema and persistence mechanism on the wire and to your clients? Given you have generated the model from the database there are likely to be properties that expose information about your schema and persistence mechanism that are redundant from a client's perspective.
In summary, there may be a limited number of cases where exposing the EF entities is acceptable, but typically I would design for change and implement some sort of pattern where you map your EF entities to lightweight "persistence-ignorant" POCOs at your repository layer. EF 4.0 does provide the ability to code up a context that returns POCOs, but on my current project we use the codegen'd context and then use AutoMapper to map the EF entities to our data contracts. Outside of the repository layer nothing is aware of the EF entities, and I feel this allows for a more maintainable and robust design.
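For illustration, here is a minimal sketch of that repository-layer mapping using AutoMapper's classic static API. The Customer entity, CustomerDto contract and MyEntities context are hypothetical names, not types from the question:

```csharp
// Minimal sketch: only the repository layer ever sees the EF entity;
// everything above it works with the lightweight data contract.
using System.Linq;
using System.Runtime.Serialization;
using AutoMapper;

[DataContract]
public class CustomerDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

public class CustomerRepository
{
    private readonly MyEntities _context;   // codegen'd EF context (hypothetical name)

    public CustomerRepository(MyEntities context)
    {
        _context = context;
        Mapper.CreateMap<Customer, CustomerDto>();   // classic AutoMapper static configuration
    }

    public CustomerDto GetById(int id)
    {
        var entity = _context.Customers.Single(c => c.Id == id);
        return Mapper.Map<CustomerDto>(entity);      // the EF entity never leaves this layer
    }
}
```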
We have a requirement of building stateless micro services which rely on a database cluster to persist data.
What is the recommended approach for redundant stateless microservices (for high availability and scalability) using the database cluster? For example: running multiple copies of version 1.0 of the Payment service.
Should all the redundant microservices use a common shared DB schema, or should each have its own schema? With independent DB schemas, inconsistency among the redundant services may exist.
Also, how can a schema upgrade be handled in the case of a common DB schema?
This is a super broad topic, and rather hard to answer in general terms.
However...
A key requirement for a micro service architecture is that each service should be independent from the others. You should be able to deploy, modify, improve, scale your micro service independently from the others.
This means you do not want to share anything other than API definitions. You certainly don't want to share a schema; each service should be able to define its own schema, release new versions, change data types etc. without having to check with the other services. That's almost impossible with a shared schema.
You may not want to share a physical server. Sharing a server means you cannot make independent promises on scalability and uptime; a big part of the microservice approach is that the team that builds a service is also responsible for running it. You really want to avoid the "well, it worked in dev, so if it doesn't scale in production, it's the operations team's problem" attitude. Databases - especially clustered, redundant databases - can be expensive, so you might compromise on this point if you really need to.
As most microservice solutions use containerization and cloud hosting, it's quite unlikely that you'd have the "one database server to rule them all" sitting around. You may find it much better to have each micro service run its own persistence service, rather than sharing.
The common approach to dealing with inconsistencies is to accept them - but to use CQRS to distribute data between microservices, and make sure the micro services deal with their internal consistency requirements.
This also deals with the "should I upgrade my database when I release a new version?" question. If your observers understand the version for each message, they can make decisions on how to store them. For instance, if version 1.0 uses a different set of attributes to version 1.1, the listener can do the mapping.
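As a rough, hedged sketch of that idea (the message fields, versions and class names here are invented, not from the question):

```csharp
// A listener that tolerates two message versions by mapping the old shape to the new one
// before writing to this service's own schema. All names and fields are illustrative.
using System;
using System.Collections.Generic;
using System.Globalization;

public class PaymentCreatedListener
{
    public void Handle(string version, IDictionary<string, string> attributes)
    {
        // Invented rule: v1.0 messages carried only "amount"; v1.1 added an explicit "currency" field.
        var amount = decimal.Parse(attributes["amount"], CultureInfo.InvariantCulture);
        var currency = version == "1.0" ? "USD" : attributes["currency"];

        Store(amount, currency);
    }

    private void Store(decimal amount, string currency)
    {
        // Persist to this microservice's own store (omitted).
    }
}
```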
In the comments, you ask about consistency. This is a super complex topic - especially in micro service architectures.
If you have, for instance, a "customers" service and an "orders" service, you must make sure that all orders have a valid customer. In a monolithic application, with a single database, and exclusively synchronous interactions, that's easy to enforce at the database level.
In a micro service architecture, where you might have lots of data stores, with no dependencies on each other, and a combination of synchronous and asynchronous calls, it's really hard. This is an inevitable side effect of reducing dependencies between micro services.
The most common approach is "eventual consistency". This typically requires a slightly different application design. For instance, on the "orders" screen, you would first invoke the "customers" microservice (to get customer data), and then the "orders" service (to get order details), rather than having a single (large) service call retrieve everything.
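A sketch of what that composition might look like on the consuming side; the service proxies and the screen model type are hypothetical names used only for illustration:

```csharp
// The "orders" screen composes its data from two independent service calls instead of
// one large joined call. ICustomerClient, IOrderClient and OrdersScreenModel are invented.
using System.Threading.Tasks;

public class OrdersScreenLoader
{
    private readonly ICustomerClient _customerClient;   // proxy for the "customers" service
    private readonly IOrderClient _orderClient;         // proxy for the "orders" service

    public OrdersScreenLoader(ICustomerClient customerClient, IOrderClient orderClient)
    {
        _customerClient = customerClient;
        _orderClient = orderClient;
    }

    public async Task<OrdersScreenModel> LoadAsync(int customerId)
    {
        var customer = await _customerClient.GetCustomerAsync(customerId);
        var orders = await _orderClient.GetOrdersAsync(customerId);

        // The two reads are independent and may be momentarily out of sync -
        // that is the eventual-consistency trade-off described above.
        return new OrdersScreenModel { Customer = customer, Orders = orders };
    }
}
```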
I am learning some patterns: Unit of Work / Repository... There are some examples on the web, but none of them connects to more than one database.
In my applications I almost always need to get some objects from one database (for example users) and some other objects from another. How can I use the patterns? (Since I am a novice on this subject, an explicit example is a must.)
Thank you!
As a general reference, I advise you and anyone interested to visit http://martinfowler.com/eaaCatalog/index.html,
which has a collection of UML diagrams and explanations of the most common design patterns for enterprise software.
In your specific case, Unit of Work is particularly suited to working along with Data Mapper and Identity Map. I guess that to understand Unit of Work 100%, one must also master the other two patterns.
To answer your question, I think you can create a Unit of Work and save it in a registry, so it is available all over the application. The unit of work must be a singleton, since you need to ensure a central gateway to communicate with the database. Inside your unit of work you will have an Identity Map, which is a collection of in-memory objects representing your model, responsible for maintaining the object states during all the application's operations. The unit of work will be used by your service layer to perform CRUD operations over the model and commit these changes.
To work with more databases, I guess you need to leverage some sort of namespaced access to the objects stored in the Identity Map. You have a choice of where to namespace: the Unit of Work or the Identity Map. The decision is really up to your application and use cases. You might need to connect to different DBs to split responsibilities between reads and writes, or to integrate heterogeneous data sources.
An alternative would be to inject the DB object into the Unit of Work methods; in this case the application has 100% control over which database is used.
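A very rough sketch of that injection idea, with the database connection supplied by the caller at commit time. All names are illustrative and this is nowhere near a full Unit of Work implementation:

```csharp
// The unit of work queues up pending changes; the caller decides which database they are
// committed against by passing in the connection. Purely illustrative.
using System;
using System.Collections.Generic;
using System.Data;

public class UnitOfWork
{
    private readonly List<Action<IDbConnection>> _pending = new List<Action<IDbConnection>>();

    public void RegisterInsert(object entity)
    {
        _pending.Add(connection =>
        {
            // Build and execute the INSERT for this entity (details omitted).
        });
    }

    // The connection - and therefore the database - is chosen by the application at commit time.
    public void Commit(IDbConnection connection)   // assumes an already-open connection
    {
        using (var tx = connection.BeginTransaction())
        {
            foreach (var work in _pending)
                work(connection);
            tx.Commit();
        }
        _pending.Clear();
    }
}
```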
The Repository pattern, as far as I understand, helps you abstract away the storage of your model, and it is particularly useful when you are working with heterogeneous data sources or must provide such flexibility to your application. Therefore I guess it is quite different from the Unit of Work and Data Mapper layers.
I am doing something I consider to be pretty normal (although I personally haven't had to do it before), and I assumed there'd be a 'no-brainer' way forward, but I'm yet to find it - which is really frustrating.
I will be creating a WPF application, which is a data-oriented business application. My data will come from a remote IIS server (that I control) that has a standard SQL Server 2008 database, so web services/WCF seem to be the way forward. The remote service needs to be (reasonably) secure via a username/password login for the user of the WPF client.
I don't want to use 3rd-party ORM products, but I expect the data layer (between the service and the database) to be able to cope with very simple ORM-type functionality (I really don't want to hand-craft a data retrieval and persistence layer). I'm not worried about concurrency very much, as this will be a fairly simple app.
My options seem to be one of the following:
ADO.NET Entity Framework over WCF
Linq2Sql over WCF
WCF Data Services
On further investigation, none of the above seems to be the 'no-brainer' I'm after:
1) ADO.NET Entity Framework - I've had a play with this and I'm getting all sorts of issues serializing objects over WCF. Even when I try to generate POCO entities and use them, I'm having to decorate service contracts with custom attributes just to get it to not error all the time, and I seem to have to hand-crank anything more than a flat object graph. It seems to me that EF simply isn't designed to be exposed via a service.
2) Linq2Sql - This doesn't seem much better than EF. I seem to have to hand-crank too much stuff. I've tried the designer and SQLMetal, but nothing seems to 'just work' - it all needs fiddling with.
3) WCF Data Services - this seems like a good option on the face of it, but essentially it seems like I'm just exposing my SQL database tables 'in the raw' over the service layer. I'm not an expert in this technology by any means, but it seems like a potentially dangerous approach, and on top of that it doesn't seem to support any kind of access security as standard (it seems you have to hack it to require authentication).
As I said, this scenario feels like it should have a no-brainer solution, but I'm still scratching my head. I've done lots of things with .NET technologies, but to be honest this area represents a bit of a hole in my understanding, so I apologize if any of my comments or assumptions are naive.
Of course, it may well be that the 'hacky' long way round on EF or Linq2Sql may be all I can do, in which case I can roll up my sleeves and accept the fact that I haven't missed a more elegant solution.
Any help/advice will be much appreciated.
This is a tad subjective, but I'll offer my opinion.
First of all, forget L2SQL - it's basically obsolete and doesn't have the full POCO support of EF4 (it can be done, but needs XML tinkering or SQLMetal generation), which means serializing your entities will be a left-to-right entity-cloning nightmare.
I would go with ADO.NET Entity Framework over WCF, Entity Framework 4.0 specifically. You will have a wealth of flexibility in your model (including the ability to apply OO principles such as inheritance).
Use Self-Tracking Entities. Yes, you have to decorate service contracts - this is by design, and there are many reasons for this.
You could always use DTOs, as opposed to serializing the actual EF entities.
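For example, a trimmed-down data contract like the sketch below could travel over the wire instead of the EF entity; the type and member names are invented for illustration:

```csharp
// A hand-rolled DTO carries only what the client needs; the EF entity and its
// change-tracking baggage stay on the server. Names are illustrative only.
using System;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class OrderDto
{
    [DataMember] public int OrderId { get; set; }
    [DataMember] public DateTime OrderDate { get; set; }
    [DataMember] public decimal Total { get; set; }
}

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    OrderDto GetOrder(int orderId);   // the service returns the DTO, never the EF entity
}
```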
OData is really good as well in its flexibility and simplicity. But if you're only consuming your model via a single client application, a specialized service layer (WCF) is a better approach IMO.
3) WCF Data Services - this seems like a good option on the face of it, but essentially it seems like I'm just exposing my SQL database tables 'in the raw' over the service layer.
That might be a first impression - but it's fundamentally wrong. What you're exposing over the web is a model - and you have full control over what gets into that model, and how consumers of your WCF Data Services might be able to see and/or even update entities in that model.
That's where Entity Framework comes in and shines (and where Linq-to-SQL miserably fails): you can grab your database (or at least parts of it) into an Entity Data Model, and then modify it. You can tweak your entity names to be totally different from your table names, you can add computed attributes, you can remove certain attributes and much more.
If you're talking about a fairly simple app, that's definitely the way I'd go:
grab your database and turn it into an Entity Data Model using EF
expose that EDM over WCF Data Services and define what can be seen read-only, and what might even be updated over the wire (see the sketch below)
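As a sketch of how that read-only/updatable split might look with WCF Data Services; the context and entity set names are placeholders for whatever your own EDM contains:

```csharp
// The service only exposes the entity sets listed here, each with explicit rights.
using System.Data.Services;
using System.Data.Services.Common;

public class OrdersDataService : DataService<MyEntities>   // MyEntities = your EF context (placeholder)
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Orders are visible but read-only over the wire.
        config.SetEntitySetAccessRule("Orders", EntitySetRights.AllRead);

        // Customers can be read and updated.
        config.SetEntitySetAccessRule("Customers", EntitySetRights.AllRead | EntitySetRights.WriteMerge);

        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}
```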
I had a client ask for advice building a simple WPF LOB application the other day. They basically want a CRUD application for a database, with main purpose being as a WPF training project.
I also showed them Linq To SQL and they were impressed.
I then explained it's probably not a good idea to use the L2S entities directly from their BLL or UI code. They should consider something like a Repository pattern instead.
At which point I could already feel the over-engineering alarm bells going off in their heads (and also in mine, to some extent). Do they really need all that complexity for a simple CRUD application? (OK, it's effectively functioning as a WPF training project for them, but let's pretend it turns into a "real" app.)
Do you ever think it's acceptable to use L2S entities all through your application?
How difficult is it (from experience) to refactor to use another persistence framework later?
The way I see it, if the UI layer uses the L2S entities as simple POCOs (without touching any L2S-specific methods), then it should be really easy to refactor later on if need be.
They do need a way to centralize the L2S queries, so some sort of way to manage that is needed, even if they do use L2S entities directly. So in a way we are already pushing toward some aspects of DAL/DAO/Repository.
The main problem I can see with the Repository approach is the pain of mapping between L2S entities and some domain model. And is it really worth it? You get quite a lot "for free" on L2S entities, which I think would be hard to make use of once you map to another model.
Thoughts?
The only major drawback to using L2S entities all the way through is that your UI needs to know about and bind to the concrete entities. That means your UI knows your data layer. Not usually a good idea. You really want a layered approach for anything that has the potential to be serious.
That said, it's perfectly possible to use the LINQ-to-SQL entities themselves in a layered architecture without knowledge of the data layer: extract interfaces for the entities and bind to them instead.
Keep in mind that all L2S entities are partial classes. Create interfaces that reflect your entities (Refactor => Extract Interface is your friend here) and create partial class definitions of your entities that implement the interfaces. Put the interfaces themselves (and only the interfaces) in a separate DLL that your UI and Business Layer reference. Have your Business Layer and Data Layer accept and emit these interfaces rather than the concrete versions of them. You can even have the interfaces extend INotifyPropertyChanging and INotifyPropertyChanged, since the L2S entities themselves implement those interfaces. And it all works peachy.
Then, when/if you need a different persistence framework, you have no pain at all in the BL or UI, only in the data layer -- which is where you want it.
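A rough sketch of that extract-interface approach; the entity and its properties are hypothetical, and the L2S designer generates the other half of the partial class:

```csharp
// The interface lives in a small contracts assembly referenced by the UI and business layers.
using System.ComponentModel;

public interface ICustomer : INotifyPropertyChanging, INotifyPropertyChanged
{
    int CustomerId { get; set; }
    string Name { get; set; }
}

// In the data layer: the generated Customer partial class already has these members,
// so this second partial declaration only marries it to the interface.
public partial class Customer : ICustomer { }
```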
Repositories are not a big deal. See here for a pretty good treatment of their use in ASP.NET MVC:
http://nerddinnerbook.s3.amazonaws.com/Part3.htm
Basically what we did for a project was that we had a Business layer that did all of the "LINQing" to L2S objects... in essence centralizing all querying to one layer via "Manager Objects" (I guess these are somewhat akin to repositories).
We did not use DTOs to map to L2S, as we didn't feel it was worth the effort in this project. Part of our logic was that as more and more ORMs support IQueryable and syntax similar to L2S (e.g. Entity Framework), it probably wouldn't be THAT bad to switch to a different ORM, so it's not that bad of a sin, IMO.
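A sketch of what such a "manager object" might look like; the data context and entity names are illustrative, not from the project described:

```csharp
// All LINQ querying lives in the manager; callers never write queries themselves.
using System.Linq;

public class CustomerManager
{
    private readonly MyDataContext _db;   // the L2S-generated data context (hypothetical name)

    public CustomerManager(MyDataContext db)
    {
        _db = db;
    }

    public IQueryable<Customer> GetActiveCustomers()
    {
        return _db.Customers.Where(c => c.IsActive);
    }
}
```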
Do you ever think it's acceptable to use L2S entities all through your application?
That depends. If you are doing a short-lived product, you can get things wired up easily and quickly with L2S. But if you are doing a long-lived product that you might have to maintain for a long duration, then you had better think about a proper persistence layer.
How difficult is it (from experience) to refactor to use another persistence framework later?
If you use L2S in all your layers, then you can pretty much forget about refactoring them to use another persistence framework later. This is exactly the advantage of using a framework like NHibernate or Entity Framework in your persistence layer: though it requires a bit of extra effort upfront, it will be easier to maintain in the long run.
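The kind of seam that answer has in mind might look something like the sketch below; the interface and the Customer type are hypothetical, but the point is that only its implementation knows which ORM sits underneath:

```csharp
// The rest of the application codes against this interface, so swapping L2S for
// NHibernate or Entity Framework only touches the implementation behind it.
using System.Collections.Generic;

public interface ICustomerRepository
{
    Customer GetById(int id);
    IEnumerable<Customer> GetAll();
    void Add(Customer customer);
    void Save();
}
```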
It sounds like you should be considering the ViewModel Pattern for WPF
http://msdn.microsoft.com/en-us/magazine/dd419663.aspx
I've not yet found a clear answer to this and to clarify:
With NHibernate and SQL Server, are you expected to disregard your business logic stored in stored procedures, views and triggers, or migrate it into HQL or application code?
NHibernate is an O/R mapper which is very well suited to applications that are built using a 'domain-driven' methodology.
In such applications, the domain model is an expressive object oriented model of the business. This means that the 'model' contains all (or most of) the business logic.
In such cases, I see very little (if any) situations where you would put business logic into stored procedures.
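To illustrate where such logic would live instead, here is an invented example of a rule enforced on the domain entity itself; NHibernate simply persists the resulting state. The entities and the discount rule are made up for the sketch:

```csharp
// The business rule lives on the entity, not in a trigger or stored procedure.
// Members are virtual so NHibernate can proxy the classes.
using System.Collections.Generic;
using System.Linq;

public class OrderLine
{
    public virtual int Id { get; protected set; }
    public virtual decimal Price { get; set; }
}

public class Order
{
    public virtual int Id { get; protected set; }
    public virtual IList<OrderLine> Lines { get; protected set; } = new List<OrderLine>();
    public virtual decimal Total { get; protected set; }

    public virtual void AddLine(OrderLine line)
    {
        Lines.Add(line);
        Total = Lines.Sum(l => l.Price);
        if (Total > 1000m)
            Total *= 0.9m;   // invented rule: 10% discount on large orders
    }
}
```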
Well, leaving aside all details: yes.
NH is an Object-relational mapper, and is intended to be used with an architectural style called 'Domain-Driven Design'. An important aspect of it is that it completely disregards the database for anything other than saving and loading data - this concept is called Persistence Ignorance, and its motto is: There is no database.
From this point of view, having business logic living in stored procedures or some other db object is not only discouraged, but it would clearly be a severe code smell.
If you follow the preferred Domain-Driven Design methodology, there will be just no opportunity to put business logic into the db - simply because there isn't any db around at the time of constructing your business layer...