Merging two database tables into a single Vaadin TreeTable - sql-server

TL;DR: How do I combine info from two database tables into a Vaadin TreeTable (or, when Vaadin 7.5 is released, a hierarchical Grid)?
I have a Java Swing desktop application that does this currently, albeit probably very inefficiently, with ArrayLists of Java Beans that update from SQL Server every 30 seconds. Well, I'm now attempting to port this desktop app over to a Vaadin web app. The desktop app has login capabilities and I'll eventually worry about doing the same for the web app, but for now, I just want to try and get the most basic part of this web app working: the TreeTable. Or, hopefully soon, a hierarchical Grid.
To help illustrate what I'm aiming for, I'll try to post an image I created that should show how the data from the two tables needs to merge into the TreeTable (using a partial screenshot of my existing desktop app):
I am well aware of how to use the JOIN command in SQL and I've briefly read about Referencing Another SQLContainer, but I'm still in the early stages of learning Vaadin and still trying to wrap my head around SQLContainer, FreeformQuery, and how I need to implement FreeformStatementDelegate for my project. Not to mention that I'll need to implement checkboxes for each row, as you can see in that photo, so that it updates the database when they are clicked. And a semi-checked state for the checkbox would be necessary for Jobs that have more than one OrderDetail item wherein only some of those OrderDetail items are completed. To get that working for my Java Swing program, I had to lean on an expert Java developer who already had most of the code ready, and boy, is it super-complicated!
If anyone can give me a high-level view of how to accomplish this task along with some examples, I would be indebted. I totally understand that I'm asking for a great deal here, and I'm willing to take it slow, step-by-step, as long as you are. I really want to fully understand this so I'm not just copy-pasting code without thinking.

I have never used SQLContainer, so this might not be the answer you want. I just had a quick look at SQLContainer and I'm not sure it will serve your purpose. For a TreeTable you will need a container implementing the Container.Hierarchical interface, or the table will put a wrapper around it and you will have to set the parent-child relations manually. You could probably extend SQLContainer and implement the methods from Container.Hierarchical in that class, but this might get complicated.
In your situation I think I'd go with implementing my own container, probably extending AbstractContainer to get the listener code for free, and implementing Hierarchical. There are quite a few methods to implement, I know, so this will take some time, but most methods are quickly implemented and you can start with the basic methods and add more interfaces (Ordered, Sortable, Indexed, Filterable, Collapsible, ...) later.
If done properly you'll end up with easily readable code that can be extended in the future without too much trouble, and you will not depend on future versions of SQLContainer.
Another good thing is that you'll learn a lot about the data structures (Container, Item, Property) used in Vaadin. But as I said, I don't really know SQLContainer, so maybe there will be a better answer telling you that it is easy with SQLContainer.
For the checkbox feature you could display the name/product property as a CheckBox. With an icon and a caption it looks almost like what you want. See http://demo.vaadin.com/sampler/#ui/data-input/other/check-box and set an icon. The semi-checked state could be done with CSS.
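To make that concrete, here is a minimal sketch of the idea in Vaadin 7 (it needs com.vaadin.ui.CheckBox, com.vaadin.server.ThemeResource and com.vaadin.data.Property; the caption, icon path and style name are made-up placeholders, not anything from your project):

// Render a row's name as a CheckBox with an icon; caption and icon path are examples only.
CheckBox rowCheckBox = new CheckBox("Job 12345");
rowCheckBox.setIcon(new ThemeResource("icons/job16.png"));
rowCheckBox.addValueChangeListener(new Property.ValueChangeListener() {
    @Override
    public void valueChange(Property.ValueChangeEvent event) {
        boolean checked = (Boolean) event.getProperty().getValue();
        // write the new state back to the container / database here
    }
});
// A "semi-checked" look could be approximated with a CSS style name:
rowCheckBox.addStyleName("semi-checked");

The "semi-checked" appearance itself would then be defined against that style name in your theme's CSS, as suggested above.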
Hope this helps you find the right solution for your task.

I'll admit that I'm a beginner with Vaadin myself and there may be much better ways of doing this, but here's something I've mocked up which seems to work. It doesn't do everything you need, but it might be a base to start from. Most importantly, for changes to be saved back into the database you'll need to update the SQLContainers when something in the container is changed.
import com.vaadin.data.Item;
import com.vaadin.data.Property;
import com.vaadin.data.util.HierarchicalContainer;
import com.vaadin.data.util.sqlcontainer.SQLContainer;

@SuppressWarnings("serial")
public class TwoTableHierarchicalContainer extends HierarchicalContainer {

    private SQLContainer parentContainer;
    private SQLContainer childContainer;
    private String parentPrimaryKey;
    private String childForeignKey;

    public TwoTableHierarchicalContainer(SQLContainer parentContainer, SQLContainer childContainer,
            String parentPrimaryKey, String childForeignKey) {
        this.parentContainer = parentContainer;
        this.childContainer = childContainer;
        this.parentPrimaryKey = parentPrimaryKey;
        this.childForeignKey = childForeignKey;
        init();
    }

    private void init() {
        // Combine the property (column) ids of both source containers.
        for (Object containerPropertyIds : parentContainer.getContainerPropertyIds()) {
            addContainerProperty(containerPropertyIds, Object.class, "");
        }
        for (Object containerPropertyIds : childContainer.getContainerPropertyIds()) {
            addContainerProperty(containerPropertyIds, Object.class, "");
        }
        // Add one root item per parent row, using the primary key value as the item id.
        for (Object itemId : parentContainer.getItemIds()) {
            Item parent = parentContainer.getItem(itemId);
            Object newParentId = parent.getItemProperty(parentPrimaryKey).getValue();
            Item newParent = addItem(newParentId);
            setChildrenAllowed(newParentId, false);
            for (Object propertyId : parent.getItemPropertyIds()) {
                @SuppressWarnings("unchecked")
                Property<Object> newProperty = newParent.getItemProperty(propertyId);
                newProperty.setValue(parent.getItemProperty(propertyId).getValue());
            }
        }
        // Add each child row under the parent item referenced by its foreign key.
        for (Object itemId : childContainer.getItemIds()) {
            Item child = childContainer.getItem(itemId);
            Object newParentId = child.getItemProperty(childForeignKey).getValue();
            Object newChildId = addItem();
            Item newChild = getItem(newChildId);
            setChildrenAllowed(newParentId, true);
            setParent(newChildId, newParentId);
            setChildrenAllowed(newChildId, false);
            for (Object propertyId : child.getItemPropertyIds()) {
                @SuppressWarnings("unchecked")
                Property<Object> newProperty = newChild.getItemProperty(propertyId);
                newProperty.setValue(child.getItemProperty(propertyId).getValue());
            }
        }
    }
}
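To give an idea of how this container might be wired up to a TreeTable, here is a rough usage sketch. The table names (JOBS, ORDERDETAILS), the key column (JOBID) and the JDBC settings are invented placeholders for your own SQL Server schema, so treat it as an outline rather than a drop-in configuration:

import com.vaadin.data.util.sqlcontainer.SQLContainer;
import com.vaadin.data.util.sqlcontainer.connection.JDBCConnectionPool;
import com.vaadin.data.util.sqlcontainer.connection.SimpleJDBCConnectionPool;
import com.vaadin.data.util.sqlcontainer.query.TableQuery;
import com.vaadin.ui.TreeTable;

// ... inside your UI's init(), for example.
// SimpleJDBCConnectionPool and SQLContainer both throw SQLException, so wrap this in a try/catch.
JDBCConnectionPool pool = new SimpleJDBCConnectionPool(
        "com.microsoft.sqlserver.jdbc.SQLServerDriver",                       // driver class (example)
        "jdbc:sqlserver://localhost;databaseName=MyDb", "user", "password");  // connection details (example)

SQLContainer jobs = new SQLContainer(new TableQuery("JOBS", pool));                 // parent table (example name)
SQLContainer orderDetails = new SQLContainer(new TableQuery("ORDERDETAILS", pool)); // child table (example name)

TwoTableHierarchicalContainer container = new TwoTableHierarchicalContainer(
        jobs, orderDetails, "JOBID", "JOBID"); // parent PK / child FK columns (example names)

TreeTable treeTable = new TreeTable("Jobs", container);
treeTable.setSizeFull();
setContent(treeTable);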

Related

How to save and retrieve a view when it's needed

My goal is to keep the session size as small as possible. (Why? That's another topic.)
What I have is a PhaseListener declared in faces-config.xml:
<lifecycle>
    <phase-listener>mypackage.listener.PhaseListener</phase-listener>
</lifecycle>
I want to save all other views, except the last one (maximum two), in some memcache. Getting the session map:
Map<String, Object> sessionMap = event.getFacesContext().getExternalContext().getSessionMap();
in the beforePhase(PhaseEvent event) method gives me access to all views. So here I could save all views to the memcache and delete them from the session. The question is: where in JSF are these views, which are still loaded in the browser, requested again, so that I can put the saved view back when it's needed? Is it possible at all? Thank you.
To address the core of your question, implement a ViewHandler, within which you can take control of the RESTORE_VIEW and RENDER_RESPONSE phases/processes. You'll save the view during RENDER_RESPONSE and selectively restore it during the RESTORE_VIEW phase. Your view handler could look something like the following:
import java.io.IOException;
import javax.faces.FacesException;
import javax.faces.application.ViewHandler;
import javax.faces.application.ViewHandlerWrapper;
import javax.faces.component.UIViewRoot;
import javax.faces.context.FacesContext;
import javax.inject.Inject;

public class CustomViewHandlerImpl extends ViewHandlerWrapper {

    @Inject ViewStore viewStore; // hypothetical storage for the views. Could be anything, like a ConcurrentHashMap
    private final ViewHandler wrapped;

    public CustomViewHandlerImpl(ViewHandler toWrap) {
        this.wrapped = toWrap;
    }

    public ViewHandler getWrapped() {
        return wrapped;
    }

    public UIViewRoot restoreView(FacesContext context, String viewId) {
        // this assumes you've previously saved the view, using the viewId
        UIViewRoot theView = viewStore.get(viewId);
        if (theView == null) {
            theView = getWrapped().restoreView(context, viewId);
        }
        return theView;
    }

    public void renderView(FacesContext context, UIViewRoot viewToRender) throws IOException, FacesException {
        viewStore.put(viewToRender.getId(), viewToRender);
        getWrapped().renderView(context, viewToRender);
    }
}
Simply plug in your custom view handler, using
<view-handler>com.you.customs.CustomViewHandlerImpl</view-handler>
Of course, you probably don't want to give this treatment to all your views; you're free to add any conditions to the logic above, to implement conditional view-saving and restoration.
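As for the ViewStore injected above: it is not a JSF class, just a stand-in for whatever storage you pick. A minimal sketch, assuming a plain CDI bean backed by a ConcurrentHashMap (the name, scope and eviction strategy are entirely up to you):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.enterprise.context.ApplicationScoped;
import javax.faces.component.UIViewRoot;

// Hypothetical view storage, keyed by view id. In a real application you would
// probably scope this per user/session and limit its size, since UIViewRoots
// are not small objects.
@ApplicationScoped
public class ViewStore {

    private final Map<String, UIViewRoot> views = new ConcurrentHashMap<>();

    public UIViewRoot get(String viewId) {
        return views.get(viewId);
    }

    public void put(String viewId, UIViewRoot view) {
        views.put(viewId, view);
    }
}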
You should also consider other options. It appears that you're conflating issues here. If your true concern is limiting the overhead associated with view processing, you should consider:
Stateless views, new with JSF 2.2. The stateless view option allows you to exclude specific pages from the JSF view-saving mechanism, simply by specifying transient="true" on the f:view. Much cleaner than mangling the UIViewRoot by hand. The caveat here is that a stateless view cannot be backed by scopes that depend on state saving, i.e. @ViewScoped. In a stateless view, a @ViewScoped bean is going to be recreated for every postback. Ajax functionality also suffers in this scenario, because state saving is the backbone of ajax operations.
Selectively mark components as transient. The transient property is available on all UIComponents, which means that, on a per-view basis, you can mark specific components with transient="true", effectively giving you the same benefits as option 1 but at a much smaller scope, and without the downside of losing @ViewScoped.
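As a small illustration of the second option, the transient flag can also be set from code; the component id used here is only an example:

// Exclude a single heavy component from state saving
// (the programmatic equivalent of transient="true" in the view).
UIViewRoot viewRoot = FacesContext.getCurrentInstance().getViewRoot();
UIComponent resultsTable = viewRoot.findComponent("form:resultsTable"); // example client id
if (resultsTable != null) {
    resultsTable.setTransient(true); // its state is no longer saved with the view
}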
EDIT: For some reason, UIViewRoot#getViewId() is not returning the name of the current view (this might be a bug). Alternatively, you can use
ExternalContext extCtxt = FacesContext.getCurrentInstance().getExternalContext();
String viewName = ((HttpServletRequest)extCtxt.getRequest()).getRequestURI(); //use this id as the key to store your views instead

Binding an MVVM IEnumerable<POCO> to MyGeneration BLL entities (real-time)

I am working on my first WPF project using MVVM. I have successfully managed to abstract away my service layer so that I could use (for instance) XML files to store the data. Using IEnumerable collections of my POCOs, any changes to the table in the GUI automatically propagate to the repository.
Now I'm trying to switch over to using our company DB2 database instead. We use MyGeneration dOOdads to generate DALs and BLLs for our DB2 database. The DALs have built-in CRUD and other utility methods.
One of my colleagues has managed to successfully bind the MyGeneration BLL DataView to his WPF application (he did not use MVVM) so that it too could make real-time changes to the DataView (only requiring a call to the BLL's SaveChanges method).
My problem is that in the translation between the MyGeneration DataView and my collection of POCOs, I would need to explicitly update any changes at this layer.
Am I approaching this the wrong way? Would something like AutoMapper be an answer to my problem, or would I still not have real-time mapping?
public override IEnumerable<PromotionPlanHeader> ReadAll()
{
    foreach (DataRow row in bll_PROMPLANH.DefaultView.Table.Rows)
    {
        yield return new PromotionPlanHeader
        {
            PlanNumber = Convert.ToInt32(row["PLANNUMBER"]),
            Active = (row["ACTIVE"].ToString() == "1"),
            Capturer = row["CAPTURER"].ToString(),
            Region = row["REGION"].ToString(),
            Cycle = row["CYCLE"].ToString(),
            Channel = row["CHANNEL"].ToString(),
            StartDate = ConvertDb2Date(row["STARTDATE"].ToString()),
            EndDate = ConvertDb2Date(row["ENDDATE"].ToString()),
            AdvertStartDate = ConvertDb2Date(row["ADVERTSTARTDATE"].ToString()),
            AdvertEndDate = ConvertDb2Date(row["ADVERTENDDATE"].ToString()),
            BpcsDealNumber = Convert.ToInt32(row["BPCSDEALNUMBER"]),
            Description = row["DESCRIPTION"].ToString(),
            DeactivationReason = row["DEACTIVATIONREASON"].ToString(),
            LastSavedUsername = row["LASTUSER"].ToString(),
            LastSavedDateTime = ConvertDb2DateTime(row["LASTDATE"].ToString(), row["LASTDATE"].ToString().PadLeft(6, '0'))
        };
    }
}
The "WPF DataGrid Practical Examples" walkthrough has really cleared up some things for me. Particularly, the Binding in a Layered Application chapter, which demonstrates how you handle Updates and Inserts with an IEditableObject interface.
Although that makes me wonder if a BindingList is not better(?)

Entity Framework and WPF best practices

Is it ever a good idea to work directly with the context? For example, say I have a database of customers and a user can search them by name, display a list, choose one, then edit that customer's properties.
It seems I should use the context to get a list of customers (mapped to POCOs or CustomerViewModels) and then immediately close the context. Then, when the user selects one of the CustomerViewModels in the list the customer properties section of the UI populates.
Next they can change the name, type, website address, company size, etc. Upon hitting a save button, I then open a new context, use the ID from the CustomerViewModel to retrieve that customer record, and update each of its properties. Finally, I call SaveChanges() and close the context. This is a LOT OF WORK.
My question is: why not just work directly with the context, leaving it open throughout? I have read that using the same context with a long lifetime is very bad and will inevitably cause problems. My assumption is that if the application will only be used by ONE person, I can leave the context open and do everything. However, if there will be many users, I want to maintain a concise unit of work and thus open and close the context on a per-request basis.
Any suggestions? Thanks.
@PGallagher - Thanks for the thorough answer.
@Brice - your input is helpful as well.
However, @Manos D., the 'epitome of redundant code' comment concerns me a bit. Let me go through an example. Let's say I'm storing customers in a database and one of my customer properties is CommunicationMethod.
[Flags]
public enum CommunicationMethod
{
    None = 0,
    Print = 1,
    Email = 2,
    Fax = 4
}
The UI for my manage-customers page in WPF will contain three check boxes under the customer communication method (Print, Email, Fax). I can't bind each checkbox to that enum directly; it doesn't make sense. Also, what if the user clicks that customer, gets up and goes to lunch... the context sits there for hours, which is bad. Instead, this is my thought process.
The end user chooses a customer from the list. I new up a context, find that customer and return a CustomerViewModel, then the context is closed (I've left repositories out for simplicity here).
using (MyContext ctx = new MyContext())
{
    CurrentCustomerVM = new CustomerViewModel(ctx.Customers.Find(customerId));
}
Now the user can check/uncheck the Print, Email, Fax buttons as they are bound to three bool properties in the CustomerViewModel, which also has a Save() method. Here goes.
public class CustomerViewModel : ViewModelBase
{
    Customer _customer;

    public CustomerViewModel(Customer customer)
    {
        _customer = customer;
    }

    public bool CommunicateViaEmail
    {
        get { return _customer.CommunicationMethod.HasFlag(CommunicationMethod.Email); }
        set
        {
            if (value == _customer.CommunicationMethod.HasFlag(CommunicationMethod.Email)) return;
            if (value)
                _customer.CommunicationMethod |= CommunicationMethod.Email;
            else
                _customer.CommunicationMethod &= ~CommunicationMethod.Email;
        }
    }

    public bool CommunicateViaFax
    {
        get { return _customer.CommunicationMethod.HasFlag(CommunicationMethod.Fax); }
        set
        {
            if (value == _customer.CommunicationMethod.HasFlag(CommunicationMethod.Fax)) return;
            if (value)
                _customer.CommunicationMethod |= CommunicationMethod.Fax;
            else
                _customer.CommunicationMethod &= ~CommunicationMethod.Fax;
        }
    }

    public bool CommunicateViaPrint
    {
        get { return _customer.CommunicationMethod.HasFlag(CommunicationMethod.Print); }
        set
        {
            if (value == _customer.CommunicationMethod.HasFlag(CommunicationMethod.Print)) return;
            if (value)
                _customer.CommunicationMethod |= CommunicationMethod.Print;
            else
                _customer.CommunicationMethod &= ~CommunicationMethod.Print;
        }
    }

    public void Save()
    {
        using (MyContext ctx = new MyContext())
        {
            var toUpdate = ctx.Customers.Find(_customer.Id);
            toUpdate.CommunicationMethod = _customer.CommunicationMethod;
            ctx.SaveChanges();
        }
    }
}
Do you see anything wrong with this?
It is OK to use a long-running context; you just need to be aware of the implications.
A context represents a unit of work. Whenever you call SaveChanges, all the pending changes to the entities being tracked will be saved to the database. Because of this, you'll need to scope each context to what makes sense. For example, if you have a tab to manage customers and another to manage products, you might use one context for each so that when a users clicks save on the customer tab, all of the changes they made to products are not also saved.
Having a lot of entities tracked by a context could also slow down DetectChanges. One way to mitigate this is by using change tracking proxies.
Since the time between loading an entity and saving that entity could be quite long, the chance of hitting an optimistic concurrency exception is greater than with short-lived contexts. These exceptions occur when an entity is changed externally between loading and saving it. Handling these exceptions is pretty straightforward, but it's still something to be aware of.
One cool thing you can do with long-lived contexts in WPF is bind to the DbSet.Local property (e.g. context.Customers.Local). This is an ObservableCollection that contains all of the tracked entities that are not marked for deletion.
Hopefully this gives you a bit more information to help you decide which approach to take.
Microsoft Reference:
http://msdn.microsoft.com/en-gb/library/cc853327.aspx
They say:

Limit the scope of the ObjectContext

In most cases, you should create an ObjectContext instance within a using statement (Using…End Using in Visual Basic). This can increase performance by ensuring that the resources associated with the object context are disposed automatically when the code exits the statement block. However, when controls are bound to objects managed by the object context, the ObjectContext instance should be maintained as long as the binding is needed and disposed of manually.

For more information, see Managing Resources in Object Services (Entity Framework). http://msdn.microsoft.com/en-gb/library/bb896325.aspx
Which says:

In a long-running object context, you must ensure that the context is disposed when it is no longer required.
StackOverflow Reference:
This StackOverflow question also has some useful answers...
Entity Framework Best Practices In Business Logic?
Where a few have suggested that you promote your context to a higher level and reference it from there, thus keeping only one single context.
My ten pence worth:
Wrapping the context in a using statement allows the garbage collector to clean up the resources and prevents memory leaks.
Obviously in simple apps this isn't much of a problem; however, if you have multiple screens, all using a lot of data, you could end up in trouble unless you are certain to dispose of your context correctly.
Hence I have employed a similar method to the one you have mentioned: I've added an AddOrUpdate method to each of my repositories, where I pass in my new or modified entity and update or add it depending upon whether it exists.
Updating Entity Properties:
Regarding updating properties, however, I've used a simple function which uses reflection to copy all the properties from one entity to another:
Public Shared Function CopyProperties(Of sourceType As {Class, New}, targetType As {Class, New})(ByVal source As sourceType, ByVal target As targetType) As targetType
    Dim sourceProperties() As PropertyInfo = source.GetType().GetProperties()
    Dim targetProperties() As PropertyInfo = GetType(targetType).GetProperties()
    For Each sourceProp As PropertyInfo In sourceProperties
        For Each targetProp As PropertyInfo In targetProperties
            If sourceProp.Name <> targetProp.Name Then Continue For
            ' Only try to set the property when able to read the source and write the target
            '
            ' *** Note: We are checking for Entity Types by checking whether the PropertyType starts with either a Collection or a member of the Context namespace!
            '
            If sourceProp.CanRead And _
               targetProp.CanWrite Then
                ' We want to leave System types alone
                If sourceProp.PropertyType.FullName.StartsWith("System.Collections") Or (sourceProp.PropertyType.IsClass And _
                   sourceProp.PropertyType.FullName.StartsWith("System.Collections")) Or sourceProp.PropertyType.FullName.StartsWith("MyContextNameSpace.") Then
                    '
                    ' Do Not Store
                    '
                Else
                    Try
                        targetProp.SetValue(target, sourceProp.GetValue(source, Nothing), Nothing)
                    Catch ex As Exception
                    End Try
                End If
            End If
            Exit For
        Next
    Next
    Return target
End Function
Where I do something like:
dbColour = Classes.clsHelpers.CopyProperties(Of Colour, Colour)(RecordToSave, dbColour)
This reduces the amount of code I need to write for each Repository of course!
The context is not permanently connected to the database. It is essentially an in-memory cache of records you have loaded from disk. It will only request records from the database when you request a record it has not previously loaded, if you force it to refresh or when you're saving your changes back to disk.
Opening a context, grabbing a record, closing the context and then copying modified properties to an object from a brand-new context is the epitome of redundant code. You are supposed to leave the original context alone and use that to do SaveChanges().
If you're looking to deal with concurrency issues you should do a Google search for "handling concurrency" for your version of Entity Framework.
As an example I have found this.
Edit in response to comment:
So from what I understand you need a subset of the columns of a record to be overridden with new values while the rest is unaffected? If so, yes, you'll need to manually update these few columns on a "new" object.
I was under the impression that you were talking about a form that reflects all the fields of the customer object and is meant to provide edit access to the entire customer record. In this case there's no point to using a new context and painstakingly copying all properties one by one, because the end result (all data overridden with form values regardless of age) will be the same.

What ORM can I use for Access 2007 - 2010? I'm after WPF binding to the tables etc

I've a legacy database that all sites have; it describes specific content in a category/subcategory/child-item format. Until now, adding/editing the content has been either manual work in the tables OR a raw SQL Windows Forms tool (which I built when I started out in the job!).
I would like Entity Framework style drag, drop, bind and run coding ability with WPF 4.5 and .NET 4.5.
I hesitate to use NHibernate, as EF5 is very simple to get going with; I understand NHibernate is more work (albeit a faster ORM). Are there alternatives that work well? I'm trying to avoid too much manual setup, if possible. The editor isn't a mandatory project and I can't justify lots of extra work on it - but it would make my job easier for the next 2 years if a nice version of it was put together.
All the argument against Access I know really well :) - swapping this isn't an option for at least a year.
Having searched the StackOverflow site, I don't see too many questions asking for this, but apologies if I've missed a good one!
Update: I think I should refine my question slightly, as what I really needed to get at was code generation, so that I don't need to hand-build all the classes for the Access database. From what I can see, Dapper's work is around efficiency but is distinct from generating code. Coming from an Entity Framework mindset, I can see where I've conjoined the tasks somewhat in my thinking :). So apart from rolling my own - does anyone know a good code generator for use with Access? This I can marry to Dapper :).
You can't use Entity Framework, because it doesn't work with Access databases.
It's possible to use NHibernate with MS Access, although NH doesn't support Access out of the box.
You need NHibernate.JetDriver from NHContrib and here are example settings for the NH config file.
If I recall correctly, NHContrib needs to be compiled against the exact NH version you're using, so you probably need to download the source code and compile it yourself.
As an alternative, you can use one of the many micro-ORMs, for example Stack Overflow's own Dapper.
Dapper is DB agnostic, so it can connect to everything including Access. Quote from the official site:
Will dapper work with my db provider?
Dapper has no DB specific implementation details, it works across all .net ado providers
including sqlite, sqlce, firebird, oracle, MySQL and SQL Server
The disadvantage is that because Dapper is DB agnostic, you have to implement some advanced stuff yourself, like paging.
EDIT:
IMO Dapper is in the "fairly easy to get running quickly" category.
Take a look at this:
(complete demo project here)
using System;
using System.Data.OleDb;
using Dapper;

namespace DapperExample
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var con = new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=test.mdb"))
            {
                var list = con.Query<Product>("select * from products");
                Console.WriteLine("map to a strongly typed list:");
                foreach (var item in list)
                {
                    Console.WriteLine(item.ProductNumber + " : " + item.Description);
                }
                Console.WriteLine();

                var list2 = con.Query("select * from products");
                Console.WriteLine("map to a list of dynamic objects:");
                foreach (var item in list2)
                {
                    Console.WriteLine(item.ProductNumber + " : " + item.Description);
                }
                Console.ReadLine();
            }
        }
    }

    public class Product
    {
        public string ProductNumber { get; set; }
        public string Description { get; set; }
    }
}
There are two different queries in this example code.
The first one maps to a strongly typed list, i.e. the result is an IEnumerable<Product>. Of course it needs a Product class that it can map to.
The second query returns an IEnumerable<dynamic> (>= .NET 4.0), which means that the properties are evaluated on the fly and you don't need to define a class beforehand, but the disadvantage is that you lose type safety (and IntelliSense).
My personal opinion is that the missing type safety is a deal breaker for me (I prefer the first query syntax), but maybe this is something for you.
Hate to resurrect an old thread but I recently did a WPF project using PetaPoco, a micro-ORM, with MS Access so I thought I'd share my implementation.
To add MS Access support to PetaPoco, you just need to add a couple of bits of code:
First, add an AccessDatabaseType class. All of the DatabaseType classes are at the end of the PetaPoco.cs file. Just add the new class after SqlServerDatabaseType.
class AccessDatabaseType : DatabaseType
{
    public override object ExecuteInsert(Database db, IDbCommand cmd, string PrimaryKeyName)
    {
        db.ExecuteNonQueryHelper(cmd);
        return db.ExecuteScalar<object>("SELECT @@IDENTITY AS NewID;");
    }
}
Next, modify PetaPoco.Internal.DatabaseType.Resolve() to support the AccessDatabaseType. (This code assumes you are using the Jet OLEDB provider)
public static DatabaseType Resolve(string TypeName, string ProviderName)
{
    //...
    if (ProviderName.IndexOf("Oledb", StringComparison.InvariantCultureIgnoreCase) >= 0)
        return Singleton<AccessDatabaseType>.Instance;

    // Assume SQL Server
    return Singleton<SqlServerDatabaseType>.Instance;
}
Finally, to instantiate PetaPoco use this:
Db = New PetaPoco.Database("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\db.mdb", "System.Data.Oledb")
Limitations:
PetaPoco assumes your primary keys are autonumber/identity fields. If you have a PK that's not an autonumber or you have a composite PK, you'll need to implement your own insert & save logic.
I didn't need paging in my application so I didn't implement it.
We are using the Jet Entity Framework Provider. That way we can easily port to another database later.
It does not have all the limitations mentioned above and works great.
Tortuga Chain fully supports Access.
https://docevaad.github.io/Chain/Introduction.htm

How to get CanAddNew to be true for a collection returned by RIA Services

RIA Services is returning a list of Entities that won't allow me to add new items. Here are what I believe to be the pertinent details:
I'm using the released versions of Silverlight 4 and RIA Services 1.0 from mid-April of 2010.
I have a DomainService with a query method that returns List<ParentObject>.
ParentObject includes a property called "Children" that is defined as List<ChildObject>.
In the DomainService I have defined CRUD methods for ParentObject with appropriate attributes for the Query, Delete, Insert, and Update functions.
The ParentObject class has an Id property marked with the [Key] attribute. It also has the "Children" property marked with the attributes [Include], [Composition], and [Association("Parent_Child", "Id", "ParentId")].
The ChildObject class has an Id marked with the [Key] attribute as well as a foreign key, "ParentId", that contains the Id of the parent.
On the client side, data is successfully returned and I assign the results of the query to a PagedCollectionView like this:
_pagedCollectionView = new PagedCollectionView(loadOperation.Entities);
When I try to add a new ParentObject to the PagedCollectionView like this:
ParentObject newParentObject = (ParentObject)_pagedCollectionView.AddNew();
I get the following error:
" 'Add New' is not allowed for this view."
On further investigation, I found that _pagedCollectionView.CanAddNew is "false" and cannot be changed because the property is read-only.
I need to be able to add and edit ParentObjects (with their related children, of course) to the PagedCollectionView. What do I need to do?
I was just playing around with a solution yesterday and feel pretty good about how it works. The reason you can't add is that the source collection (op.Entities) is read-only. However, even if you could add to the collection, you'd still want to be adding to the EntitySet as well. I created an intermediate collection that takes care of both these things for me.
public class EntityList<T> : ObservableCollection<T> where T : Entity
{
    private EntitySet<T> _entitySet;

    public EntityList(IEnumerable<T> source, EntitySet<T> entitySet)
        : base(source)
    {
        if (entitySet == null)
        {
            throw new ArgumentNullException("entitySet");
        }
        this._entitySet = entitySet;
    }

    protected override void InsertItem(int index, T item)
    {
        base.InsertItem(index, item);
        if (!this._entitySet.Contains(item))
        {
            this._entitySet.Add(item);
        }
    }

    protected override void RemoveItem(int index)
    {
        T item = this[index];
        base.RemoveItem(index);
        if (this._entitySet.Contains(item))
        {
            this._entitySet.Remove(item);
        }
    }
}
Then, I use it in code like this.
dataGrid.ItemsSource = new EntityList<Entity1>(op.Entities, context.Entity1s);
The only caveat is this collection does not actively update off the EntitySet. If you were binding to op.Entities, though, I assume that's what you'd expect.
[Edit]
A second caveat is that this type is designed for binding. For full use of the available list operations (Clear, etc.), you'd need to override a few of the other methods to write through as well.
I'm planning to put together a post that explains this a little more in-depth, but for now, I hope this is enough.
Kyle
Here's a workaround which I am using:
Instead of using AddNew, on your DomainContext you can retrieve an EntitySet<T> by saying Context.EntityNamePlural (i.e. Context.Users = EntitySet<User>).
You can add a new entity to that EntitySet by calling Add() and then Context.SubmitChanges() to send it to the DB. To reflect the changes on the client you will need to reload (Context.Load()).
I just made this work about 15mins ago after having no luck with the PCV so I am sure it could be made to work better, but hopefully this will get you moving forward.
For my particular situation, I believe the best fit is this (Your Mileage May Vary):
Use a PagedCollectionView (PCV) as a wrapper around the context.EntityNamePlural (in my case, context.ParentObjects) which is an EntitySet. (Using loadOperation.Entities doesn't work for me because it is always read-only.)
_pagedCollectionView = new PagedCollectionView(context.ParentObjects);
Then bind to the PCV, but perform add/delete directly against the context.EntityNamePlural EntitySet. The PCV automatically syncs to the changes done to the underlying EntitySet so this approach means I don't need to worry about sync issues.
context.ParentObjects.Add();
(The reason for performing add/delete directly against the EntitySet instead of using the PCV is that PCV's implementation of IEditableCollectionView is incompatible with EntitySet causing IEditableCollectionView.CanAddNew to be "false" even though the underlying EntitySet supports this function.)
I think Kyle McClellan's approach (see his answer) may be preferred by some because it encapsulates the changes to the EntitySet, but I found that for my purposes it was unnecessary to add the ObservableCollection wrapper around loadOperation.Entities.
Many thanks to Dallas Kinzel for his tips along the way!
