I have come to a crossroads and can't figure out the proper way to get a large amount of data for a form into a Silverlight / WCF RIA Services application. Imagine an order form where you can update fields about the order (billing information, etc.) and also see other, read-only information: payments on the order, order items, and so on.
The database is, roughly: Orders have Order Items and Order Payments, and Order Payments have Payment Methods. There is a lot of other data associated with the Orders table, but this gives you an idea.
With EF4, I can use Include statements to bring in child properties of the Order object, like OrderPayments and OrderItems, and get them all in one shot. But I haven't found a way to get child properties that point to other objects (OrderPayments -> PaymentMethod).
So would it be better to have lots of queries (declared explicitly in the XAML) calling each section of data individually (using domain data contexts), or is it better to build one massive view object that gets populated and sent to the client in one shot?
The biggest advantage of RIA Services with EF4 is that the queries are executed lazily on the server. For example, if you use paging on long lists of data, only page-sized chunks are transferred. That is definitely the way to go, not massive views with multiple sets of data.
When you need specific items not covered by the automated relational links, add query methods to your RIA domain service and call those explicitly on your domain context.
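A minimal sketch of such a query method, assuming an EF4-backed domain service; the OrderEntities context and entity names are hypothetical, and note that RIA Services also needs [Include] metadata on the associations for the loaded children to be serialized to the client:

```csharp
using System.Linq;
using System.ServiceModel.DomainServices.EntityFramework;
using System.ServiceModel.DomainServices.Server;

public class OrderDomainService : LinqToEntitiesDomainService<OrderEntities>
{
    // Exposed to the Silverlight client as GetOrderWithDetailsQuery(orderId).
    [Query]
    public IQueryable<Order> GetOrderWithDetails(int orderId)
    {
        // The dotted path reaches the grandchild reference
        // (OrderPayments -> PaymentMethod) that plain child Includes miss.
        return this.ObjectContext.Orders
            .Include("OrderItems")
            .Include("OrderPayments.PaymentMethod")
            .Where(o => o.OrderId == orderId);
    }
}
```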
The more I use RIA the more I like it. You just have to play nice with it :)
Currently in our frontend project (AngularJS), we need to consume different endpoints built in a microservices architecture and show the data in a list view. Then we need to allow users to sort the data based on the columns they select. For example, we list 10 columns, of which 6 are rendered from Service A and the other 4 are pulled from Service B. The two services have no direct relational mapping; Service B returns its data based on the object ID.
Now we have consolidated the list, shown the columns, and allowed users to choose the columns of their choice. As a next step, we need to allow users to sort any column's data seamlessly. Is there any best practice in the microservices paradigm for retrieving the data from both services, sorting it, and showing the result?
We have a few options:

1. Fetch all the data at once from both services and sort it in the frontend. The problem with this approach is that with a larger dataset the user may notice slowness, and at times the browser can hang. We are using AngularJS in our project and already face slowness as the data set grows.

2. Introduce an intermediate API service (a lightweight Node.js server) that coordinates the requests: it internally fetches data from the different services and sends the combined result back.

3. Create an intermediate API service that caches the data, orchestrates the requests, and responds with the combined data from the multiple services.
Can anyone share any other practices that can be followed for the above use case? In the current microservices trend, every API is exposed as a separate service, and that makes the frontend world a bit complex: it has to coordinate calls across the different APIs and show the combined data to users in the UI.
Any suggestions or approaches or hint will be helpful.
Thanks in advance.
Srini
Like you said, there are a few ways to handle your scenario. In my opinion the best approach would be option 2. It is similar to the Gateway Aggregation pattern, where you introduce a gateway layer to handle the aggregation of your service APIs. The added benefit is that you may be able to park some common functionality in this gateway layer if required.
Of course, the obvious drawback is that you now have another layer that needs to be highly available and managed, so do consider the pros and cons carefully before deciding on your approach. For example, if this is the only aggregation you will ever need, then option 3 may be the better choice.
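To make option 2 concrete, here is a minimal sketch of the Gateway Aggregation pattern. The question proposes a Node.js server; to keep one language across this page the sketch uses an ASP.NET Core minimal API instead, and the service URLs, the "id" key, and the assumption that rows arrive as flat string-valued JSON objects are all hypothetical:

```csharp
// Gateway endpoint: fetch rows from Service A and Service B, merge them
// by id, sort on the column the user picked, and return a single page.
using System.Net.Http.Json;

var app = WebApplication.CreateBuilder(args).Build();
var http = new HttpClient();

app.MapGet("/orders", async (string sortBy, int page, int pageSize) =>
{
    var rowsA = await http.GetFromJsonAsync<List<Dictionary<string, string>>>(
        "http://service-a/api/rows");   // 6 of the columns
    var rowsB = await http.GetFromJsonAsync<List<Dictionary<string, string>>>(
        "http://service-b/api/rows");   // the other 4, keyed by the same id

    var extrasById = rowsB!.ToDictionary(r => r["id"]);

    var merged = rowsA!.Select(row =>
    {
        if (extrasById.TryGetValue(row["id"], out var extras))
            foreach (var kv in extras) row[kv.Key] = kv.Value;
        return row;
    });

    // Sort and page on the gateway, so the browser never sees the full set.
    var result = merged
        .OrderBy(r => r.TryGetValue(sortBy, out var v) ? v : null)
        .Skip((page - 1) * pageSize)
        .Take(pageSize)
        .ToList();

    return Results.Ok(result);
});

app.Run();
```

The same shape translates directly to an Express handler in Node; the essential point is that merging, sorting, and paging happen behind a single endpoint, so the browser only ever receives one page of rows.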
We have a distributed system with three sites. Each site has its own services that encapsulate both logic and data. All services use a MySQL database for persistence and are exposed as SOAP services. But we run into trouble with database reports, since maintaining service encapsulation prevents accessing the database directly. So how can we get reports from the web services without breaking the encapsulation they provide, while at the same time maintaining efficiency?
Share a common data structure known to both the services and the clients.
I'd implement a very simple serializable data structure and have these entities be the interchange format between the client and the server(s). And of course all services would output the same data structures.
If you already have a persistence layer (if not, build one) with DAO/DAL entities, make them responsible for querying the data and transforming the original data into these new common data structures. A helper class could do that automatically.
What I think this data structure could be is an entity based on a set of rows and columns (an array of object instances), plus an array of column identifiers known by both the client and the server, so that your model knows which columns are being requested by the client.
This way you could have one client requesting 3 columns of a report, and a different client requesting many other columns of the same report.
Additionally, I would of course not include any HTML in the data, just the raw data, leaving the clients responsible for how to present it.
The above is a little abstract, but I hope it helps you anyway.
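To make it concrete, here is a minimal sketch of such a structure in C#. All names are hypothetical; the reflection-based helper stands in for the DAO/DAL transformation described above:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// The common structure known to both clients and services: an array of
// column identifiers plus the matching rows of raw (non-HTML) values.
[Serializable]
public class ReportTable
{
    public string[] Columns { get; set; }
    public object[][] Rows { get; set; }
}

public static class ReportMapper
{
    // Projects any entity list onto just the columns the client requested.
    public static ReportTable Project<T>(IEnumerable<T> entities,
                                         string[] requestedColumns)
    {
        var props = requestedColumns
            .Select(name => typeof(T).GetProperty(name)
                ?? throw new ArgumentException("Unknown column: " + name))
            .ToArray();

        return new ReportTable
        {
            Columns = requestedColumns,
            Rows = entities
                .Select(e => props.Select(p => p.GetValue(e)).ToArray())
                .ToArray()
        };
    }
}
```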
I don't understand SOA (service-oriented architecture) and databases. While I'm attracted to the SOA concept (encapsulating reusable business logic into services), I can't figure out how it's supposed to work if data tables encapsulated in one service are required by other services/systems. Or is SOA simply not suitable in this scenario?
To be more concrete, suppose I have two services:
CustomerService: contains my Customers database table and associated business logic.
OrderService: contains my Orders table and logic.
Now what if I need to JOIN the Customers and Orders tables with an SQL statement? If the tables contain millions of entries, performance would be unacceptable if I had to send the data over the network using SOAP/XML. And how would I perform the JOIN?
Doing a little research, I have found some proposed solutions:
1. Use replication to make a local copy of the required data where needed. But then there's no encapsulation, so what's the point of using SOA? This is discussed on StackOverflow but there's no clear consensus.

2. Set up a Master Data Service which encapsulates all database data. I guess it would get monster-sized (with essentially one API call for each stored procedure) and require constant updates. To me this seems related to the enterprise data bus concept.
If you have any input on this, please let me know.
One of the defining principles of a "service" in this context is that it owns, absolutely, the data in the area it is responsible for, as well as the operations on that data.
Copying data, through replication or any other mechanism, ditches that responsibility. Either you replicate the business rules too, or you will eventually need the other service updated whenever your internal rules change.
Using a single data service is just "don't do SOA": if you have one single place that manages all data, you don't have independent services; you just have one service.
I would suggest, instead, a third option: use composition to put that data together, avoiding the database-level JOIN operation entirely.
Instead of thinking about needing to join those two values together in the database, think about how to compose them together at the edges:
When you render an HTML page for a customer, you can take HTML supplied by multiple services and compose it side by side visually: the customer details come from the customer service, and the order details from the order service.
Likewise for an invoice email: compose data supplied by multiple services visually, without needing the in-database join.
This has two advantages. One, you do away with the need to join in the database, and even with the need to store the data in the same type of database; each service can now use whatever data store is most appropriate for its needs.
Two, you can more easily change the outside of your application. If you have small, composable parts, you can easily rearrange them in new ways.
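Here is a hedged sketch of that composition in C#; the service interfaces and record shapes are hypothetical. Each service is asked only for the slice of data it owns, and the edge stitches the results into one view, with no database JOIN anywhere:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public record CustomerDetails(string CustomerId, string Name, string Address);
public record OrderSummary(string OrderId, decimal Total);
public record InvoiceView(CustomerDetails Customer,
                          IReadOnlyList<OrderSummary> Orders);

public interface ICustomerService
{
    Task<CustomerDetails> GetCustomerAsync(string customerId);
}

public interface IOrderService
{
    Task<IReadOnlyList<OrderSummary>> GetOrdersAsync(string customerId);
}

public class InvoiceComposer
{
    private readonly ICustomerService _customers;
    private readonly IOrderService _orders;

    public InvoiceComposer(ICustomerService customers, IOrderService orders)
    {
        _customers = customers;
        _orders = orders;
    }

    public async Task<InvoiceView> ComposeAsync(string customerId)
    {
        // Call both services in parallel; each owns its own data store.
        var customerTask = _customers.GetCustomerAsync(customerId);
        var ordersTask = _orders.GetOrdersAsync(customerId);
        await Task.WhenAll(customerTask, ordersTask);

        return new InvoiceView(customerTask.Result, ordersTask.Result);
    }
}
```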
The guiding principle is that it is OK to cache immutable data.
This means that simple immutable data from the Customer entity can live in the Order service, and there's no need to call the customer service every time you need the info. Breaking everything into isolated services and then always making remote procedure calls ignores the fallacies of distributed computing.
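For example (a hypothetical shape), the order service can keep an immutable snapshot of the customer fields it needs, captured when the order was placed:

```csharp
// The order carries its own copy of the customer data that can never
// change for this order (who ordered, where it was billed), so rendering
// an order never requires a call to the customer service.
public class CustomerSnapshot
{
    public int CustomerId { get; }
    public string Name { get; }
    public string BillingAddress { get; }

    public CustomerSnapshot(int customerId, string name, string billingAddress)
    {
        CustomerId = customerId;
        Name = name;
        BillingAddress = billingAddress;
    }
}

public class Order
{
    public int Id { get; set; }
    public CustomerSnapshot Customer { get; set; } // cached immutable copy
    public decimal Total { get; set; }
}
```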
If you have extensive reporting needs, you should create an additional service for that. I call it an Aggregated Reporting service, which, again, gets read-only data for reporting purposes. You can see an article I wrote about that for InfoQ a few years ago.
In the SO question you quoted, various people state that it is OK for a service to access another service's data, so the Order service could have a GetAllWithCustomer operation which returns all the orders along with the customer details for each order.
Also, this question of mine may be helpful:
https://softwareengineering.stackexchange.com/questions/115958/is-it-bad-practice-for-services-to-share-a-database-in-soa
Let me set up my LOB scenario.
I am rewriting our core business app. The requirements are that I create an internal app (I'd like to use Silverlight) that our employees will use on a daily basis. I also need to provide a SOAP service that can be used to input orders, get invoices, etc.
I will also be doing this in pieces, so when I update a record in the new SQL Server database, I need to make sure our legacy SQL Server is updated as well.
So it certainly makes sense to create a DAL that pulls data from the new SQL Server and writes back to both data stores.
It would also make sense to create a BLL that can be used by both Silverlight/RIA and the WCF web services.
I have created an entity model of the new database in its own project, and it is used in all the other projects. The problem here is that RIA Services seems to require that I create it right inside the ASP.NET project in order to get the metadata for Silverlight. Without this, I have to manually re-create the metadata for Silverlight to access it correctly.
My question, then: should I create duplicates of the entity model, one for RIA Services and one for everything else? Is there a better way to do this? Should I forgo RIA Services entirely and have Silverlight access plain WCF services? Or should I just continue to duplicate the metadata in RIA?
We use entities for direct reference to storage, and Data Transfer Objects (DTOs), which are almost identical, for passing back and forth between the BLL and WCF/GUI/etc. We map between the two using AutoMapper, which means there's very little additional work, and we don't have to worry about whether a given entity is attached to the context, tracking state changes, etc.
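A minimal sketch of that arrangement, assuming a hypothetical Order entity and its DTO twin (this uses AutoMapper's MapperConfiguration API):

```csharp
using AutoMapper;

// Storage entity (attached to the ORM context) and its detached twin.
public class Order
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

public class OrderDto
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

public static class Mapping
{
    // Because the two shapes match property-for-property,
    // no per-member configuration is needed.
    public static readonly IMapper Mapper = new MapperConfiguration(cfg =>
    {
        cfg.CreateMap<Order, OrderDto>();
        cfg.CreateMap<OrderDto, Order>();
    }).CreateMapper();
}

// Usage: hand the GUI/WCF layer a DTO, never the tracked entity.
// OrderDto dto = Mapping.Mapper.Map<OrderDto>(orderEntity);
```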
Edit: You definitely want to keep your code as DRY as possible. Personally, I'd look at using DTOs above the BLL, and either having two sets of repositories coordinated in the DAL (one read-write, one write-only), or even meta-repositories which handle the data sets on the two stores themselves.
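One hedged way to coordinate the two stores in the DAL, reusing the hypothetical Order entity from the sketch above: a composite repository that reads from the new store and writes to both. Registered through Unity, it is also trivial to swap out once the legacy store is retired:

```csharp
public interface IOrderRepository
{
    Order GetById(int id);
    void Save(Order order);
}

// Reads come from the new SQL Server; every write is applied to both
// the new store and the legacy store until the legacy one is retired.
public class DualWriteOrderRepository : IOrderRepository
{
    private readonly IOrderRepository _newStore;    // read-write
    private readonly IOrderRepository _legacyStore; // write-only

    public DualWriteOrderRepository(IOrderRepository newStore,
                                    IOrderRepository legacyStore)
    {
        _newStore = newStore;
        _legacyStore = legacyStore;
    }

    public Order GetById(int id)
    {
        return _newStore.GetById(id);
    }

    public void Save(Order order)
    {
        _newStore.Save(order);
        _legacyStore.Save(order); // drop this line when the old store goes
    }
}
```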
If you're not already using them, Unity and IoC would be of real benefit to you here. You might also want to use one of the modular code patterns to allow you to register [n] data stores in different modes, so that when you finally retire the old store, you won't need to do much work.
I'd also question whether your entities need to be defined in ASP.NET - you may simply be able to reference the appropriate DLLs from your entity/DTO project and add the appropriate markup/config.
I’m using NHibernate with RIA Services and Silverlight 4. I create DTOs for transferring the data via RIA Services rather than distributing my domain layer objects (as per Martin Fowler’s First Law of Distributed Object Design: “Don’t distribute your objects!”). The DTO objects are flattened down to two layers from five corresponding layers in the domain layer.
Here’s my problem. After making changes in Silverlight 4, RIA Services knows which DTO objects have been modified, but in the server-side update code I need to transfer the changes back to the “real” domain layer objects so that NHibernate can apply these changes back to the database. What’s the best way to do this?
Since the DTOs are intended to be lightweight, containing only the information that is needed on the client side, I obviously would not want to embed the corresponding domain objects inside the DTOs.
Here are a few possibilities that I’ve considered:
1) Hold references to the domain objects within the DTO objects. As long as only the references get serialized and sent across the wire, not the entire referenced objects, this might be a reasonable approach. Of course, the references wouldn’t be valid on the client side, because they would point to non-existent memory locations, but at the end of the round trip they could be used server-side. (?)
2) Same as above but only save a reference to the domain aggregate root in the DTO object. Then use object relationship traversal to get to the other related domain objects.
3) Store the IDs of the domain objects in the DTOs and use NHibernate’s “Get” by ID or “Load” by ID functionality to retrieve the correct domain objects so that the updates can be applied.
4) Same as above but only use the “Get” or “Load” for the aggregate root and then use traversal for all related objects.
Perhaps none of the above is ideal and there is a better approach…
Whenever I build an access layer on top of an ORM, I typically put whatever the unique key for the entity is in the DTO, so that identity is tracked, with support for default(T) in the case of an add.
Then, when the object comes back to the server side, I can easily do a Load, marshal the changed values over from the DTO, and then either let the session save it or perform an explicit save.
This would be your option 3/4.
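A hedged sketch of that round trip with NHibernate; OrderDto, Order, and the changed properties are hypothetical:

```csharp
using NHibernate;

// NHibernate entities use virtual members so the proxy can intercept them.
public class Order
{
    public virtual int Id { get; set; }
    public virtual string BillingAddress { get; set; }
    public virtual string Notes { get; set; }
}

public class OrderDto
{
    public int Id { get; set; }
    public string BillingAddress { get; set; }
    public string Notes { get; set; }
}

public class OrderUpdateHandler
{
    private readonly ISessionFactory _sessionFactory;

    public OrderUpdateHandler(ISessionFactory sessionFactory)
    {
        _sessionFactory = sessionFactory;
    }

    public void Update(OrderDto dto)
    {
        using (ISession session = _sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            // Load returns a proxy for the id without hitting the database
            // until a member is touched; Get would fetch immediately.
            Order order = session.Load<Order>(dto.Id);

            // Marshal the changed values over from the DTO.
            order.BillingAddress = dto.BillingAddress;
            order.Notes = dto.Notes;

            // Dirty checking issues the UPDATE on commit.
            tx.Commit();
        }
    }
}
```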
To answer your question at a basic level: you may want to look into the presentation model. Deepesh from the RIA Services team has a good introductory blog post about it.
Also, you could use an ID instead of a reference (i.e., an intrinsic, serializable value instead of an app-domain-scoped object reference) and use [Association].
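A small sketch of that idea (the DTO shapes are hypothetical): only the serializable foreign-key value crosses the wire, and [Association] tells RIA Services how the two DTOs relate:

```csharp
using System.ComponentModel.DataAnnotations;

public class OrderDto
{
    [Key]
    public int Id { get; set; }

    // The intrinsic, serializable value that crosses the wire.
    public int CustomerId { get; set; }

    // RIA Services re-wires the object reference on each side
    // from the key pair (CustomerId here -> Id on CustomerDto).
    [Association("OrderDto_Customer", "CustomerId", "Id")]
    public CustomerDto Customer { get; set; }
}

public class CustomerDto
{
    [Key]
    public int Id { get; set; }

    public string Name { get; set; }
}
```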
To answer at the next level: presentation model usage still involves work and additional types. It makes the most sense when the shape of the model you want to see is substantially different from that on the server (whether a rich domain model or just a DTO-based model). The increase in the number of types and the need to map between them is the cost you pay for the flexibility. There are cheaper options that do less - e.g., non-public members, the serialization directive [Exclude], etc. - that let you shape the code-gen'd and serialized model. They may be worth considering. After all, the types on the two sides of the trust boundary are very different by default (e.g., your types on the server vs. the code-gen'd types on the client).
HTH
Dinesh