Silverlight Asynchronous Column Binding - database

I have two distinct databases as a source for a Silverlight RIA application. They are exposed through separate RIA services.
There is one relationship between the two databases: a foreign key (with no constraint) from one to the other. My entities currently load it as an Int32. How would I go about mapping this to an actual end-user display value from the other database?
It appears that Value Converters require synchronous operations. Also, any asynchronous call in the DomainDataSource.LoadedData will cause the data source to remain busy indefinitely.

You could also consider using the ExternalReference attribute.
For example,
public partial class SalesOrderHeader
{
    [ExternalReference]
    [Association("My_Custom_FK", "CustomerID", "CustomerID")]
    public Customer Customer { get; set; }
}
In this way you can build a connection between your RIA domain contexts. A helpful example is Nikhil's BookClub solution, where he projects domain entity types into objects that he returns to his view models. You could do the same, except you would be bridging the gap between domain contexts.
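Roughly, the bridging might look like this on the client (a sketch only; OtherDomainContext, GetCustomersQuery, CustomerName, and OrderViewModels are hypothetical names): load the lookup entities from the second domain context, then resolve the Int32 key into a display value when shaping your view-model objects.
var lookupContext = new OtherDomainContext();
lookupContext.Load(lookupContext.GetCustomersQuery(), loadOp =>
{
    if (loadOp.HasError)
        return;
    // Build an id -> display-name map once the lookup entities arrive
    // (ToDictionary needs a using directive for System.Linq).
    var namesById = loadOp.Entities.ToDictionary(c => c.CustomerID, c => c.CustomerName);
    foreach (var order in OrderViewModels)
        order.CustomerName = namesById[order.CustomerID];
}, null);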

Depending on the details of your scenario you could create either a view or a stored procedure in one database that will run your query across both databases and return a single result set.
You can then get RIA Services to return the results of the view or stored procedure, so that you are only making a single call from Silverlight.
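For example (a sketch; OrderWithCustomerName is an assumed name for the entity generated from the imported view), the domain service then exposes the view like any other entity set:
public IQueryable<OrderWithCustomerName> GetOrdersWithCustomerNames()
{
    // OrderWithCustomerName is the read-only entity imported from the
    // cross-database view, so RIA Services serves the joined rows in one call.
    return this.ObjectContext.OrderWithCustomerNames;
}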

Related

Multimapping in Dapper Without Custom SQL

Is there a way to use multimapping in Dapper in a generic way, without using custom SQL embedded in C# code?
See for example
Correct use of Multimapping in Dapper
Is there a generic way to query the data from two related entities, where the common fields used for the join are determined automatically?
Don't do this. Don't even think this way! Databases are long-lasting and normalized. Objects are perishable and frequently denormalized, and transitioning between the two is something to do thoughtfully, when you're writing your SQL. This is really not a step to automate. Long, painful experience has convinced many of us that database abstractions (tables and joins) should not simply be sucked into (or generated out of) code. If you're not yet convinced, then use an established ORM.
If, on the other hand, you absolutely want to be in control of your SQL, but it's the "embedding" in string literals in C# that bugs you, then I couldn't agree more. Can I suggest QueryFirst, a Visual Studio extension that generates the C# wrapper for your queries. Your SQL stays in a real SQL file, syntax-validated, DB references checked, and at each save QueryFirst generates a wrapper class with Execute() methods and a POCO for the results.
By multimapping, I presume you want to fill a graph of nested objects. A nice way to do this is to use one QueryFirst .sql file per class in your graph, then, in the partial class of the parent, add a List of children. (QueryFirst-generated POCOs are split across two partial classes: you control one of them, and the tool generates the other.)
So, for a graph of Customers and their orders...
In the parent .sql file:
select * from customers where name like #custName
In the child .sql file:
select * from orders where customerId = #customerId
In the parent partial class, for eager loading...
public List<Orders> orders;
public void OnLoad()
{
    // customerId is a property of the parent POCO
    orders = new GetOrders().Execute(customerId);
}
or for lazy loading...
private List<Orders> _orders;
public List<Orders> orders
{
    get
    {
        // Parentheses are required: assignment binds more loosely than ??
        return _orders ?? (_orders = new GetOrders().Execute(customerId));
    }
}
Five lines of code, not counting brackets, and you have a nested graph, lazy-loaded or eager-loaded as you prefer, with the interface discoverable in code (IntelliSense for the input parameter and the result). There might be hundreds of columns in those tables whose names you will never need to re-type, and whose data types will flow transparently into your C#.
Clean separation of responsibilities. Total control. Disclaimer: I wrote QueryFirst. :-)
Multimapping with Dapper is a way of mapping a single row from one SQL query onto multiple objects, e.g. splitting the columns of a joined result set into an Order and its Customer.
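For reference, a minimal multimapping call looks something like this (a sketch; Order and Customer are assumed classes, and connection is an open IDbConnection with Dapper in scope):
var sql = @"select o.*, c.*
            from Orders o
            join Customers c on c.CustomerId = o.CustomerId";
var orders = connection.Query<Order, Customer, Order>(
    sql,
    (order, customer) => { order.Customer = customer; return order; },
    splitOn: "CustomerId"); // the column at which each row is split between the two types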
In the context of this question, multimapping is not even relevant: you're asking for a way to automatically generate a SQL query from the given objects, with the correct joins, resulting in a single SQL query. That is not what multimapping does.
I suspect what you're looking for is something along the lines of the Entity Framework. There are a couple of Dapper extension projects you may want to look into which will generate some of your SQL for you. See: Dapper.Rainbow vs. Dapper.Contrib.
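As a taste of what those extensions do, Dapper.Contrib can generate the CRUD SQL from the type alone (a sketch, assuming an Order class whose key property is Id):
var order = connection.Get<Order>(42);           // SELECT generated from the type
var allOrders = connection.GetAll<Order>();      // SELECT over the mapped table
connection.Insert(new Order { CustomerId = 7 }); // INSERT generated from the properties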

RIA Services - call a stored procedure

I am using RIA Services with Silverlight and Entity Framework. I want to call a stored procedure and map the results to a datagrid. What is the best way to do this? The output of the stored procedure doesn't map to any table design.
I found the following article -
http://blogs.msdn.com/b/tom/archive/2009/05/07/silverlight-ria-calling-stored-procedures-that-don-t-return-tables.aspx
However, it doesn't work for me: I get an error saying that the complex result type does not have a primary key defined, and I can't see how to define one in code.
Anyway, I'm open to any and all solutions.
I found the following excellent step-by-step guide at this site -
http://betaforums.silverlight.net/forums/p/218383/521023.aspx
1) Add an ADO.NET Entity Data Model to your Web project; select the generate-from-database option; select your database instance to connect to.
2) Choose the DB objects to import into the model. You can expand the Tables node to select any table you want to import, and expand the Stored Procedures node to select your stored procedure as well. Click Finish to finish the import.
3) Right-click the DB model designer and select Add/Function Import. Give the function a name (the same name as your SP is fine) and select the stored procedure you want to map. If your SP returns only one field, you can map the result to a collection of scalars. If your SP returns more than one field, you can map the result either to a collection of an entity (if all the fields come from a single table) or to a collection of a complex type.
If you want to use a complex type, click the Get Column Information button to retrieve the columns your SP returns, then click the Create New Complex Type button to create the type.
4) Add a Domain Service class to the Web project. Select the data model you just created as the DataContext of this service and select all the entities you want to expose to the client. The service functions should be generated for those entities.
5) You may not see the complex type in the entity list; you have to manually add a query function for your SP in your service:
Say your SP is called SP1 and the complex type you generated is called SP1_Result. Add the following code to your Domain Service class:
public IQueryable<SP1_Result> SP1()
{
    return this.ObjectContext.SP1().AsQueryable();
}
Now you can compile your project. You might get an error like "SP1_Result does not have a Key" (if you are not on the RIA Services SP1 beta). If you do, add an SP1_Result metadata class in the service metadata file and tag the key field:
[MetadataTypeAttribute(typeof(SP1_Result.SP1_ResultMetadata))]
public partial class SP1_Result
{
    internal sealed class SP1_ResultMetadata
    {
        [Key]
        public int MyId; // change MyId to the ID field of your SP1_Result
    }
}
6) Compile your solution. Now you have SP1_Result exposed to the client. Check the generated file and you should see that SP1_Result is generated as an entity class. You can now access DomainContext.SP1Query and DomainContext.SP1_Results in your Silverlight code, and treat it as you would any other entity class (one mapped to a table).
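For example, loading the results into a grid from the Silverlight side might look like this (a sketch; MyDomainContext and myDataGrid are placeholder names):
var ctx = new MyDomainContext();
ctx.Load(ctx.SP1Query(), loadOp =>
{
    if (!loadOp.HasError)
        myDataGrid.ItemsSource = loadOp.Entities; // the SP1_Result entities
}, null);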
Well, I figured out how to do it, though it's a bit messy. You need to create a metadata class for the result set in the domain metadata file; after that, RIA treats it essentially like an entity.
Full details can be found here - http://leeontech.wordpress.com/2010/05/24/ria-services-and-storedprocedures/

ADO.NET Data Services on Silverlight: Using the generated key within the same transaction

We have a Silverlight application that uses WCF Data Services. We want to add logging functionality: when a new row is generated, the primary key for this new row is also recorded in the logging table. The row generation and the logging should occur within the same transaction. The primary keys are generated via the database (using the IDENTITY keyword).
This might best be illustrated with an example. Here, I create a new Customer row, and in the same transaction I write the Customer's primary key to an AuditLog row. This example uses a thick client and the Entity Framework:
using (var ts = new TransactionScope())
{
    AuditTestEntities entities = new AuditTestEntities();

    Customer c = new Customer();
    c.CustomerName = "Acme Pty Ltd";
    entities.AddToCustomer(c);
    Debug.Assert(c.CustomerID == 0);
    entities.SaveChanges();

    // The Entity Framework automatically updated the customer object
    // with the newly generated key.
    Debug.Assert(c.CustomerID != 0);

    AuditLog al = new AuditLog();
    al.EntryDateTime = DateTime.Now;
    al.Description = string.Format("Created customer with customer id {0}", c.CustomerID);
    entities.AddToAuditLog(al);
    entities.SaveChanges();

    ts.Complete();
}
It's a trivial problem when developing a thick client using Entity Framework.
However, using Silverlight and ADO.NET Data Services:
- SaveChanges can only be invoked asynchronously
- I'm not sure TransactionScope is available
- I'm not sure whether generated keys are reflected in the client (Edit: according to Alex James, they are indeed reflected in the client)
So, will this even be possible?
Short answer: no, this is not even possible.
Okay... so:
- Generated keys are reflected in the client.
- You can transact one SaveChanges operation by using DataServiceContext.SaveChanges(SaveChangesOptions.Batch).
- But unfortunately you can't do anything to tie one request to the response of another and wrap them both in one transaction.
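From Silverlight, where saving is asynchronous, the batched call looks roughly like this sketch:
entities.BeginSaveChanges(SaveChangesOptions.Batch,
    asyncResult => entities.EndSaveChanges(asyncResult), // all pending changes go out as one batch request
    null);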
However...
If you change the model by adding a CustomerAuditLog type that derives from AuditLog:
// Create and insert the customer ...
// Then create the audit log entry and relate it to the not-yet-inserted customer.
CustomerAuditLog al = new CustomerAuditLog();
al.EntryDateTime = DateTime.Now;
al.Description = "Created customer with {Customer.ID}"; // placeholder token, replaced server-side

// Assuming your entities implement INotifyPropertyChanging and you are using
// the Data Services Update for .NET 3.5 SP1 with DataServiceCollection,
// setting the navigation property notifies the DataServiceContext that a
// relationship has been formed. If not, you will need to tell Astoria about
// the relationship manually.
al.Customer = c;
entities.AddToAuditLog(al);
entities.SaveChanges();
You would then need some logic deep in your underlying data source, or maybe even in the database, to replace {Customer.ID} with the appropriate value.
You might be able to get this to work, because if two inserts happen in the same transaction and one (the CustomerAuditLog) depends on the other (the Customer), they should be ordered appropriately by the underlying data source.
But as you can see, this approach is kind of hacky; you don't want a type for each possible audit message, do you? And it might not even work.
Hope this helps
Alex
Data Services Team, Microsoft

Is there any overhead with LINQ or the Entity Framework when getting large columns as part of an entity?

Let's say you have a table containing articles and you want to display a list of them, excluding the actual article text. When you get a list of the article objects using LINQ or the Entity Framework, is there a lot of overhead associated with getting that text column too? I assume that when you start enumerating the list, the article text will be held in memory until the objects are disposed of.
So would it make sense to create an intermediary object that doesn't contain the text column? If so, how would you do this? Make a class inside your DAL, allow the ORM to automatically create one by setting up a stored procedure, or some other process?
The overhead isn't huge (just the cost of sending the data over the wire), but if you don't need the data, sure, don't return it. I find the easiest way is to use anonymous types:
var summaries = from a in Context.Articles
                select new { Name = a.Name, Author = a.Author };
Since you're not actually materializing any Article instances, the Entity Framework won't need to fill out all the properties of an instance.
If you don't need the data, you should definitely create a different type. By convention I typically name this sort of class "nnnInfo" or "nnnListItem". To create ArticleListItem in L2S, simply drag the table onto your DataContext designer a second time, then rename it from 'Article1' to 'ArticleListItem' and remove the unneeded properties (right-click, Delete). In EF, the process is similar. As Craig notes, you could use anonymous types, but by creating a concrete type you can reuse it throughout your app, expose it via services, etc.
A second way to do this would be to create the class manually and write an extension method to return ArticleListItem:
public static IQueryable<ArticleListItem> ToListItems(this IQueryable<Article> articles)
{
    return from a in articles
           select new ArticleListItem { Title = a.Title, ... };
}
This would allow you to "cast" any queries against Article as ArticleListItem...
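Usage then reads naturally, e.g. (a sketch; IsPublished is an assumed property):
var items = Context.Articles
                   .Where(a => a.IsPublished)
                   .ToListItems();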

Adaptive Database

Are there any rapid database prototyping tools that don't require me to declare a database schema, but rather create it based on the way I'm using my entities?
For example, assuming an empty database (pseudo code):
user1 = new User()         // creates the user table with a single id column
user1.firstName = "Allain" // alters the table to add a firstName column as varchar(255)
user2 = new User()         // reuses the table
user2.firstName = "Bob"
user2.lastName = "Loblaw"  // alters the table to add a lastName column
There are logical assumptions that can be made when dynamically creating the schema, and you could always override its choices by using your DB tools to tweak it later.
You could also generate your schema by unit testing against it in this way.
And obviously this is only for prototyping.
Is there anything like this out there?
Google's App Engine works like this. When you download the toolkit you get a local copy of the database engine for testing.
Grails uses Hibernate to persist domain objects and produces behavior similar to what you describe. To alter the schema you simply modify the domain class; in this simple case the file is named User.groovy.
class User {
    String userName
    String firstName
    String lastName
    Date dateCreated
    Date lastUpdated

    static constraints = {
        userName(blank: false, unique: true)
        firstName(blank: false)
        lastName(blank: false)
    }

    String toString() { "$lastName, $firstName" }
}
Saving the file alters the schema automatically; likewise, if you are using scaffolding, it is updated. The prototyping process becomes: run the application, view the page in your browser, modify the domain, refresh the browser, and see the changes.
I agree with the NHibernate approach and auto-database-generation. But if you want to avoid writing a configuration file and stay close to the code, use Castle's ActiveRecord. You declare the 'schema' directly on the class via attributes.
[ActiveRecord]
public class User : ActiveRecordBase<User>
{
    [PrimaryKey]
    public Int32 UserId { get; set; }

    [Property]
    public String FirstName { get; set; }
}
There are a variety of constraints you can apply (validation, bounds, etc.) and you can declare relationships between different data model classes. Most of these options are parameters added to the attributes. It's rather simple.
So you're working with code, declaring usage in code, and when you're done, you let ActiveRecord create the database:
ActiveRecordStarter.Initialize();
ActiveRecordStarter.CreateSchema();
Maybe not exactly responding to your general question, but if you use (N)Hibernate you can automatically generate the database schema from your hbm mapping files.
It's not done directly from your code as you seem to want, but Hibernate's schema generation works well for us.
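Concretely, the NHibernate tool is SchemaExport; a minimal sketch, assuming your mappings are already configured in hibernate.cfg.xml:
// Build the configuration from the config file and mapped assemblies,
// then let NHibernate emit and execute the DDL.
var cfg = new NHibernate.Cfg.Configuration().Configure();
new NHibernate.Tool.hbm2ddl.SchemaExport(cfg).Create(false, true); // don't echo the script, do run it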
Do you want a schema but have it generated, or do you actually want NO schema?
For the former I'd go with NHibernate, as @tom-carter said. Have it generate your schema for you and you are all good (at least until you roll your app out; then look at something like Tarantino and RedGate SQL Diff, or whatever it's called, to generate update scripts).
If you want the latter... Google App Engine does this, as I discovered this afternoon, and it's very nice. If you want to stick with code under your control, I'd suggest looking at CouchDB, though it's a bit of upfront work to get it set up. Once you have it, though, it's a totally, 100% schema-free database. Well, you have an ID and a version, but that's it; the rest is up to you. http://incubator.apache.org/couchdb/
But by the sounds of it, (N)Hibernate would suit you best, though I could be wrong.
You could use an object database.
