Any use in hacking Silverlight NotifyCollectionChangedEventArgs to support multiple items?

The Silverlight version of NotifyCollectionChangedEventArgs differs from the full framework version, in that it does not accept multiple (added, changed, removed) items. The constructors that take lists are in fact missing, so it appears Microsoft intended to block this usage. Indeed, if you pass in a collection of items, they're just nested as the first item of an internal collection.
However! Since the NewItems and OldItems members are of type IList, they are not immutable and can be grown. I made the following helper to test this idea:
private NotifyCollectionChangedEventArgs CreateEventArgsWithMultiple(NotifyCollectionChangedAction action, IEnumerable items, int newStartingIndex)
{
    NotifyCollectionChangedEventArgs eventArgs = null;
    foreach (var item in items)
    {
        if (eventArgs == null)
        {
            // The first item goes through the single-item constructor.
            eventArgs = new NotifyCollectionChangedEventArgs(action, item, newStartingIndex);
        }
        else
        {
            // Subsequent items are appended to the supposedly single-item NewItems list.
            eventArgs.NewItems.Add(item);
        }
    }
    return eventArgs;
}
I haven't seen any problems yet, but I'm looking for experience and input with this particular corner of Silverlight. Should I bother to batch Adds like this, or just use a Reset?
This is on Windows Phone 7.1 (Mango), by the way.
Edit: To follow up on the comment by Erno. Microsoft says in this (poorly worded) Silverlight documentation page on MSDN that it can be "generally" assumed that NewItems only has one element, and even suggests the shortcut of using NewItems[0] to access it. So they retain the IList signature for "compatibility", but then go on to butcher the meaning of the type. Disappointing.

I haven't run into any issues, but the answer is "Don't do it!" (unless you're only passing the args to code that you've written).
The reason (as has been said in the comments) is that there may be code in Silverlight which assumes there's only one item. Even if there isn't today, there may be tomorrow, and you definitely don't want your app to break when some new version of Silverlight comes out that relies more heavily on this assumption.
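If you do need to batch, here is a minimal sketch of the two listener-safe alternatives the question mentions: one well-formed event per item, or a single Reset. BatchObservableCollection and AddRange are illustrative names of mine, not framework types.
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Collections.Specialized;

public class BatchObservableCollection<T> : ObservableCollection<T>
{
    public void AddRange(IEnumerable<T> items)
    {
        // Add silently via the protected Items list, then raise one Reset so
        // every listener re-reads the collection. The alternative -- calling
        // Add() per item -- raises one well-formed Add event per item instead.
        foreach (var item in items)
            Items.Add(item);
        OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset));
    }
}
Either way, every consumer sees only event shapes it already understands.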

Related

Immutable data model for a WPF application with MVVM implementation

I have an application which has a data tree as its data backend. To implement the MVVM pattern I have a logic layer of classes which encapsulate the data tree, so the logic is also arranged in a tree. If the input is in a valid state, the data should be copied to a second thread which works as a double buffer of the last valid state. One way to do that would be cloning.
Another approach would be to make the complete data backend immutable. This would imply rebuilding the whole data tree whenever something new is entered. My question is: is there a practical way to do this? I'm stuck at the point where I have to reassign the data tree efficiently to the logic layer.
UPDATE - Some Code
What we are doing is to abstract hardware devices which we use to run our experiments. Therefore we defined classes like "chassis, sequence, card, channel, step". Those build a tree structure like this:
                 Chassis
                /       \
       Sequence1         Sequence2
      /    |    \
  Card1  Card2  Card3
         /    \
  Channel1    Channel2
      /    \
  Step1    Step2
In code it looks like this:
public class Chassis
{
    readonly List<Sequence> Sequences = new List<Sequence>();
}
public class Sequence
{
    readonly List<Card> Cards = new List<Card>();
}
and so on. Of course each class has some more properties, but those are easy to handle. My problem now is that List is a mutable object: I can call List.Add() and it changes. OK, there is ReadOnlyCollection<T>, but I'm not sure it implements immutability the right way: right as in copied by value rather than by reference, and not just blocking writes by hiding the set methods.
The next problem is that the amount of sequences and step can vary. For this reason I need an atomic exchange of list elements.
At the moment I don't have any more code as I'm still thinking if this way would help me and if it is possible at all to implement it in a reasonable amount of time.
Note that there are new immutable collections for .NET that could help you achieve your goal.
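For instance, here is a rough sketch of how those collections give you the atomic exchange the question asks about. It assumes the System.Collections.Immutable NuGet package on .NET 4.5; the Chassis/Sequence names just mirror the question's types.
using System.Collections.Immutable;
using System.Threading;

public class Sequence { }

public class Chassis
{
    private ImmutableList<Sequence> _sequences = ImmutableList<Sequence>.Empty;

    // Readers always get a consistent snapshot; the list itself never mutates.
    public ImmutableList<Sequence> Sequences { get { return _sequences; } }

    // Every "mutation" builds a new list and publishes it atomically.
    public void AddSequence(Sequence s)
    {
        ImmutableList<Sequence> before, after;
        do
        {
            before = _sequences;
            after = before.Add(s);
        } while (Interlocked.CompareExchange(ref _sequences, after, before) != before);
    }
}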
Be very cautious about Dave Turvey's statement (I would downvote/comment if I could):
If you are looking to implement an immutable list you could try storing the list as a private member but exposing a public IEnumerable<>
This is incorrect. The private member could still be changed by its container class. The public member could be cast to List<T>, IList<T>, or ICollection<T> without throwing an exception. Thus anything that depends on the immutability of the public IEnumerable<T> could break.
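A short self-contained demo of the problem, using the question's Chassis with the suggested IEnumerable<> property:
using System;
using System.Collections.Generic;

public class Sequence { }

public class Chassis
{
    private readonly List<Sequence> _sequences = new List<Sequence>();
    public IEnumerable<Sequence> Sequences { get { return _sequences; } }
}

class CastDemo
{
    static void Main()
    {
        var chassis = new Chassis();
        // The cast succeeds because the IEnumerable<T> is still a List<T> underneath.
        var sneaky = (List<Sequence>)chassis.Sequences;
        sneaky.Add(new Sequence());
        Console.WriteLine(((List<Sequence>)chassis.Sequences).Count); // prints 1
    }
}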
I'm not sure if I understand 100% what you're asking. It sounds like you have a tree of objects in a particular state and you want to perform some processing on a copy of that without modifying the original object state. You should look into cloning via a "Deep Copy". This question should get you started.
If you are looking to implement an immutable list you could try storing the list as a private member but exposing a public IEnumerable<>
public class Chassis
{
    List<Sequence> _sequences = new List<Sequence>();
    public IEnumerable<Sequence> Sequences { get { return _sequences; } }
}
18/04/13 Update in response to Brandon Bonds' comments
The library linked in Brandon Bonds' answer is certainly interesting and offers advantages over IEnumerable<>. In many cases it is probably a better solution. However, there are a couple of caveats that you should be aware of if you use this library.
As of 18/04/2013 this is a beta library. It is obviously still in development and may not be ready for production use. For example, the code sample for list creation in the linked article doesn't work in the current NuGet package.
This is a .NET 4.5 library, so it will not be suitable for programs targeting an older framework.
It does not guarantee immutability of objects contained in the collections, only of the collection itself. It is possible to modify objects in an immutable list. You will still need to consider a deep copy for copying collections.
This is addressed in the FAQ at the end of the article
Q: Can I only store immutable data in these immutable collections?
A: You can store all types of data in these collections. The only immutable aspect is the collections themselves, not the items they contain.
In addition the following code sample illustrates this point (using version 1.0.8-beta)
using System;
using System.Collections.Immutable;

class Data
{
    public int Value { get; set; }
}

class Program
{
    static void Main(string[] args)
    {
        var test = ImmutableList.Create<Data>();
        test = test.Add(new Data { Value = 1 });
        Console.WriteLine(test[0].Value);

        // The list is immutable, but the Data object it holds is not.
        test[0].Value = 2;
        Console.WriteLine(test[0].Value);
        Console.ReadKey();
    }
}
This code will allow modification of the Data object and output
1
2
Here are a couple of articles for further reading on this topic
Read only, frozen, and immutable collections
Immutability in C# Part One: Kinds of Immutability

What will the difference be between instantiating a form and assigning to a variable vs. simply instantiating?

I have a windows form that doesn't have any events or properties I wish to access from the owner. There are two ways I can open the form:
frmExample ex = new frmExample();
ex.ShowDialog(this);
and
(new frmExample()).ShowDialog(this);
Will there be differences in terms of memory allocation and such? Are there any implications, pros and cons? Personally, possibly naively, I prefer the second approach.
Thanks
One big difference is that you won't be able to Dispose() the form instance. You should: disposal is not automatic when you call ShowDialog(), only when you call Show(). Boilerplate code is:
using (var dlg = new frmExample())
{
    if (dlg.ShowDialog() == DialogResult.OK)
    {
        // Access dlg properties
        //...
    }
}
You can perhaps see from this snippet why the form doesn't get disposed automatically. It would risk generating ObjectDisposedException when you access the properties. You have to dispose it yourself after you're done accessing the properties. The using statement makes it automatic and exception-safe.

How can I set the StreamingContext in Silverlight DataContractSerializer?

I need to do a deep copy in Silverlight, which I can do with the tried and tested serialize/deserialize approach. The copied objects aren't exact clones - they need to have some of their properties modified on the copy.
I should be able to do something like this:
[OnDeserialized()]
public void OnDeserializedMethod(StreamingContext context)
{
if (context.State == StreamingContextStates.Clone)
{
//stuff
}
}
where the StreamingContext is set up using a NetDataContractSerializer:
NetDataContractSerializer ds = new NetDataContractSerializer(new StreamingContext(StreamingContextStates.Clone));
Silverlight doesn't have a NetDataContractSerializer though :-(.
So is there any way I can set the StreamingContext on the DataContractSerializer to give me something to work with? I can't just blindly apply my changes to every serialize operation, it's only on the specific case of a copy.
Or, alternatively, is there another method that gives me similar hooks into the (de)serialization process so I can play with the data?
(I've looked into implementing IDataContractSurrogate but a) it was painful and b) Silverlight doesn't have one of those either...)
I've come to the conclusion that you can't do it, so I guess an alternative approach is in order.
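For reference, here is a minimal sketch of the plain serialize/deserialize deep copy that does work in Silverlight. Since there is no StreamingContext to branch on, the clone-only fix-ups are applied explicitly after copying. CloneHelper is an illustrative helper of mine, not a framework API.
using System.IO;
using System.Runtime.Serialization;

public static class CloneHelper
{
    public static T DeepCopy<T>(T source)
    {
        var serializer = new DataContractSerializer(typeof(T));
        using (var stream = new MemoryStream())
        {
            serializer.WriteObject(stream, source);
            stream.Position = 0;
            return (T)serializer.ReadObject(stream);
        }
    }
}

// Usage: apply the copy-specific changes explicitly after cloning,
// instead of inside an [OnDeserialized] callback.
// var copy = CloneHelper.DeepCopy(original);
// copy.SomeProperty = ...; // the modifications you'd have made in the callback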

How to generate and print large XPS documents in WPF?

I would like to generate (and then print or save) big XPS documents (>400 pages) from my WPF application. We have some large amount of in-memory data that needs to be written to XPS.
How can this be done without getting an OutOfMemoryException? Is there a way I can write the document in chunks? How is this usually done? Should I not be using XPS for large files in the first place?
The root cause of the OutOfMemoryException seems to be the creation of the huge FlowDocument. I am creating the full FlowDocument and then sending it to the XPS document writer. Is this the wrong approach?
How do you do it? You didn't show any code.
I use an XpsDocumentWriter to write in chunks, like this:
FlowDocument flowDocument = ...;
// write the XPS document
using (XpsDocument doc = new XpsDocument(fileName, FileAccess.ReadWrite))
{
    XpsDocumentWriter writer = XpsDocument.CreateXpsDocumentWriter(doc);
    DocumentPaginator paginator = ((IDocumentPaginatorSource)flowDocument).DocumentPaginator;
    // Change the PageSize and PagePadding for the document
    // to match the CanvasSize for the printer device.
    paginator.PageSize = new Size(816, 1056);
    flowDocument.PagePadding = new Thickness(72);
    flowDocument.ColumnWidth = double.PositiveInfinity;
    writer.Write(paginator);
}
Does this not work for you?
Speaking from perfect ignorance of the specific system involved, might I suggest using the Wolf Fence in Alaska debugging technique to identify the source of the problem? I'm suggesting this because other responders are not reporting the same problem you are experiencing. When working with easy-to-reproduce bugs Wolf Fence is dead simple to do (it doesn't work so well with race conditions and the like).
Pick a midpoint in your input data and try to generate your output document from only that data, no more.
If it succeeds, pick a point about 75% into the input and try again, otherwise pick a point at about 25% of the way into the input and try again.
Lather, rinse, repeat, each time narrowing the window to where the works/fails line is.
You may find that you fairly quickly identify one specific page -- or perhaps one specific object on that page -- that is "funny." Note: you only have to do this log2(N) times, or in this case 9 times given the 400 pages you mention.
Now you probably have something you can attack directly. Good luck.
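If it helps, the bisection itself is trivial to automate. A sketch: tryGenerate stands in for "generate the document from items lo..mid-1 and report success", and it assumes exactly one offending item.
using System;

static class WolfFence
{
    // Bisect the input range [lo, hi) until the single offending index is isolated.
    public static int FindFirstFailure(int lo, int hi, Func<int, int, bool> tryGenerate)
    {
        while (hi - lo > 1)
        {
            int mid = lo + (hi - lo) / 2;
            if (tryGenerate(lo, mid))
                lo = mid;   // lower half is fine; the wolf is in the upper half
            else
                hi = mid;   // failure reproduced; narrow to the lower half
        }
        return lo; // index of the offending item/page
    }
}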
You cannot use a single FlowDocument to generate large documents, because you will run out of memory. However, if you can generate your output as a sequence of FlowDocuments, or as an extremely tall ItemsControl, it can be done.
I've found the easiest way to do this is to subclass DocumentPaginator and pass an instance of my subclass to XpsDocumentWriter.Write:
var xpsDocument = new XpsDocument(...);
var writer = XpsDocument.CreateXpsDocumentWriter(xpsDocument);
writer.Write(new WidgetPaginator { Widget = widgetBeingPrinted, PageSize = ... });
The WidgetPaginator class itself is quite simple:
class WidgetPaginator : DocumentPaginator, IDocumentPaginatorSource
{
    Size _pageSize;

    public Widget Widget { get; set; }

    public override Size PageSize
    {
        get { return _pageSize; }
        set { _pageSize = value; }
    }

    public override bool IsPageCountValid
    {
        get { return true; }
    }

    public override IDocumentPaginatorSource Source
    {
        get { return this; }
    }

    // IDocumentPaginatorSource member (not an override)
    public DocumentPaginator DocumentPaginator
    {
        get { return this; }
    }

    public override int PageCount
    {
        get
        {
            return ...; // Compute page count
        }
    }

    public override DocumentPage GetPage(int pageNumber)
    {
        var visual = ...; // Compute page visual
        Rect box = new Rect(0, 0, _pageSize.Width, _pageSize.Height);
        return new DocumentPage(visual, _pageSize, box, box);
    }
}
Of course you still have to write the code that actually creates the pages.
If you want to use a series of FlowDocuments to create your document
If you're using a sequence of FlowDocuments to lay out your document one section at a time instead of all at once, your custom paginator can work in two passes:
The first pass occurs as the paginator is constructed. It creates a FlowDocument for each section, then gets a DocumentPaginator to retrieve the number of pages. Each section's FlowDocument is discarded after the pages are counted.
The second pass occurs during actual document output: If the number passed to GetPage() is in the most recent FlowDocument created, GetPage() simply calls that document's paginator to get the appropriate page. Otherwise it discards that FlowDocument and creates a FlowDocument for the new section, gets its paginator, then calls GetPage() on the paginator.
This strategy allows you to continue to use FlowDocuments as you have been, as long as you can break the data into "sections" each with its own document. Your custom paginator then effectively treats all the individual FlowDocuments as one big document. This is similar to Word's "Master Document" feature.
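A rough sketch of such a two-pass paginator follows. BuildSection is the hook where your code creates each section's FlowDocument; everything else is illustrative scaffolding, not a definitive implementation.
using System.Collections.Generic;
using System.Windows;
using System.Windows.Documents;

abstract class SectionedPaginator : DocumentPaginator
{
    readonly List<int> _pageCounts = new List<int>();
    FlowDocument _current;          // only one section is alive at a time
    int _currentIndex = -1;
    Size _pageSize;

    protected SectionedPaginator(int sectionCount, Size pageSize)
    {
        _pageSize = pageSize;
        // Pass 1: build each section once just to count its pages, then discard it.
        for (int i = 0; i < sectionCount; i++)
        {
            var paginator = PaginatorFor(BuildSection(i));
            paginator.ComputePageCount();
            _pageCounts.Add(paginator.PageCount);
        }
    }

    // Your code: create the FlowDocument for one section of the data.
    protected abstract FlowDocument BuildSection(int sectionIndex);

    DocumentPaginator PaginatorFor(FlowDocument doc)
    {
        var p = ((IDocumentPaginatorSource)doc).DocumentPaginator;
        p.PageSize = _pageSize;
        return p;
    }

    // Pass 2: map the global page number to (section, local page) and delegate.
    public override DocumentPage GetPage(int pageNumber)
    {
        int section = 0, firstPage = 0;
        while (pageNumber >= firstPage + _pageCounts[section])
            firstPage += _pageCounts[section++];

        if (section != _currentIndex)
        {
            _current = BuildSection(section); // the previous section becomes garbage
            _currentIndex = section;
        }
        return PaginatorFor(_current).GetPage(pageNumber - firstPage);
    }

    public override bool IsPageCountValid { get { return true; } }
    public override int PageCount
    {
        get { int n = 0; foreach (var c in _pageCounts) n += c; return n; }
    }
    public override Size PageSize
    {
        get { return _pageSize; }
        set { _pageSize = value; }
    }
    public override IDocumentPaginatorSource Source { get { return null; } } // sketch only
}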
If you can render your data as a sequence of vertically-stacked visuals
In this case, the same technique can be used. During the first pass, all visuals are generated in order and measured to see how many will fit on a page. A data structure is built to indicate which range of visuals (by index or whatever) are found on a given page. During this process each time a page fills up, the next visual is placed on a new page. Headers and footers would be handled in the obvious way.
During the actual document generation, the GetPage() method is implemented to regenerate the visuals previously decided to be on a given page and combine them using a vertical DockPanel or other panel of your choice.
I've found this technique more flexible in the long run because you don't have to deal with the limitations of FlowDocument.
I can confirm that XPS does not throw out-of-memory on long documents, both in theory (because operations on XPS are page-based, it doesn't try to load the whole document into memory) and in practice (I use XPS-based reporting, and have seen runaway error reports add up to many thousands of pages).
Could it be that the problem is in a single particularly large page? A huge image, for example? Large page with high DPI resolution? If single object in document is too big to be allocated at once, it will lead to out-of-memory exception.
Have you used sos to find out what is using up all the memory?
It could be that managed or unmanaged objects are being created during the production of your document, and they're not being released until the document is finished (or not at all).
Tracking down managed memory leaks by Rico Mariani could be of help.
Like you say: probably the in-memory FixedDocument is consuming too much memory.
Maybe an approach in which you write out the XPS pages each individually (and make sure the FixedDocument gets released each time), and then use a merger afterwards could prove fruitful.
Are you able to write each page separately?
Nick.
PS: Feel free to contact me directly (info#nixps.com); we do a lot of XPS stuff at NiXPS, and I'm very interested in helping you get this issue resolved.

Strongly typed databinding in WPF/Silverlight/XAML?

One of my biggest pet peeves with how databinding works with XAML is that there's no option to strongly type your databindings. In other words, in C#, if you want to access a property on an object that doesn't exist, you won't get any help from Intellisense, and if you insist on ignoring Intellisense, the compiler will gripe at you and won't let you proceed -- and I suspect that lots of folks here would agree that this is a Very Good Thing. But in XAML databinding, you're operating without a net. You can bind to anything, even if it doesn't exist. Indeed, given the bizarre syntax of XAML databinding, and given my own experience, it's a great deal more complicated to bind to something that does exist than to something that doesn't. I'm much more likely to get my databinding syntax wrong than to get it right; and the comparative time I spend troubleshooting XAML databindings easily dwarfs the time I spend with any other portion of Microsoft's stack (including the awkward and annoying WCF, if you can believe it). And most of that (not all of it) goes back to the fact that without strongly-typed databindings, I can't get any help from either Intellisense or the compiler.
So what I want to know is: why doesn't MS at least give us an option to have strongly-typed databindings: kind of like how in VB6, we could make any object a variant if we were really masochistic, but most of the time it made sense to use normal, typed variables. Is there any reason why MS couldn't do that?
Here's an example of what I mean. In C#, if the property "UsrID" doesn't exist, you'll get a warning from Intellisense and an error from the compiler if you try this:
string userID = myUser.UsrID;
However, in XAML, you can do this all you want:
<TextBlock Text="{Binding UsrID}" />
And neither Intellisense, the compiler, or (most astonishingly) the application itself at runtime will give you any hint that you've done something wrong. Now, this is a simplistic example, but any real-world application that deals with complex object graphs and complex UI's is going to have plenty of equivalent scenarios that aren't simple at all, nor simple to troubleshoot. And even after you've gotten it working correctly the first time, you're SOL if you refactor your code and change your C# property names. Everything will compile, and it'll run without an error, but nothing will work, leaving you to hunt and peck your way through the entire application, trying to figure out what's broken.
One possible suggestion (off the top of my head, and which I haven't thought through) would maybe be something like this:
For any portion of the logical tree, you could specify in XAML the DataType of the object that it's expecting, like so:
<Grid x:Name="personGrid" BindingDataType="{x:Type collections:ObservableCollection x:TypeArgument={data:Person}}">
This would perhaps generate a strongly-typed ObservableCollection<Person> TypedDataContext property in the .g.cs file. So in your code:
// This would work
personGrid.TypedDataContext = new ObservableCollection<Person>();
// This would trigger a design-time and compile-time error
personGrid.TypedDataContext = new ObservableCollection<Order>();
And if you then accessed that TypedDataContext through a control on the grid, it would know what sort of an object you were trying to access.
<!-- It knows that individual items resolve to a data:Person -->
<ListBox ItemsSource="{TypedBinding}">
    <ListBox.ItemTemplate>
        <DataTemplate>
            <!-- This would work -->
            <TextBlock Text="{TypedBinding Path=Address.City}" />
            <!-- This would trigger a design-time warning and compile-time error, since it has the path wrong -->
            <TextBlock Text="{TypedBinding Path=Person.Address.City}" />
        </DataTemplate>
    </ListBox.ItemTemplate>
</ListBox>
I've made a blog posting here that explains more about my frustrations with WPF/XAML databinding, and what I think would be a significantly better approach. Is there any reason why this couldn't work? And does anyone know if MS is planning to fix this problem (along the lines of my proposal, or hopefully, a better one)?
There will be IntelliSense support for data-binding in Visual Studio 2010. That seems to be what your complaint really boils down to, since data-binding is strongly-typed. You just don't find out until run-time whether or not the binding succeeded, and more often than not it fails quietly rather than with a noisy exception. When a binding fails, WPF dumps explanatory text via debug traces, which you can see in the Visual Studio output window.
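As an aside, you can make an individual binding trace much more loudly. A sketch, assuming .NET 3.5 SP1 or later; BindWithTracing is just an illustrative helper name.
using System.Diagnostics;
using System.Windows.Controls;
using System.Windows.Data;

static class BindingDebug
{
    // Attach a binding with WPF's trace output turned all the way up,
    // so any failure shows full diagnostics in the output window.
    public static void BindWithTracing(TextBlock target, string path)
    {
        var binding = new Binding(path);
        PresentationTraceSources.SetTraceLevel(binding, PresentationTraceLevel.High);
        target.SetBinding(TextBlock.TextProperty, binding);
    }
}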
Besides the lack of IntelliSense support and a few weird syntax issues, data-binding is quite well done (at least in my opinion). For some more assistance debugging data-bindings, I would check out Bea's lovely article here.
This is my biggest gripe with XAML! Not having the compiler enforce valid databindings is a BIG issue. I don't really care about intellisense, but I DO care about the lack of refactoring support.
Changing property names or types is dangerous in a WPF application - using the built in refactoring support won't update data bindings in XAML. Doing a search and replace on the name is dangerous as it may change code that you didn't intend to change. Going through a list of find results is a pain in the arse and time consuming.
MVC has had strongly typed views for some time - the MVC contrib project provided them for MVC1 and MVC2 provides them natively. XAML must support this in future, particularly if it is used in "agile" projects where an application's design evolves over time. I haven't looked at .NET 4.0 / VS2010, but I hope the experience is far better than it is!
I feel that XAML is something like old-days HTML. I can't imagine that after 10+ years I am still programming this way: typing open-close tags manually because there is no mature GUI for defining styles, bindings, and templates. I strongly support your view, Ken. I find it very strange that so many people support MVVM without a single complaint about the pain of debugging XAML. Command binding and data binding are very good concepts, and I have been designing my WinForms apps that way. However, the XAML binding solution, together with some other XAML issues (or missing features), is really a big failure in VS. It makes development and debugging very difficult, and makes the code very unreadable.
Ken, C# would benefit from a concise syntactical element for referencing a PropertyInfo class. PropertyInfo structures are static objects defined at compile time and as such provide a unique key for every Property on an object. Properties could then be validated at compile time.
The only problem with this is the weirdness of treating an instance of a object as a data type given that strong typing is enforced on types, not the values of a type. Traditionally, compilers don't enforce data values, and instead rely on runtime code to check its data. Most cases its not even possible to validate data at compile time, but reflection is one of those edge cases where it is at least possible.
Alternatively, the compiler could create a new data type for every property. I could imagine a lot of types being created, but that would enable compile time enforcement of property binding.
One way to think about it is that the CLR introduced a level of reflection that was another order of magnitude beyond the systems that preceded it. It's now being used to do some rather impressive stuff like data binding. But its implementation is still at a metadata level, a kind of report from the compiler for every data type it generates. I suppose one way to grow C# would be to promote the metadata to compile-time checking.
It seems to me that someone could develop a compilation tool that adds that reflection-level validation. The new IntelliSense is sorta like that. It would be tricky to generically discover string parameters that are destined to be compared to PropertyInfos, but it's not impossible. A new data type like "PropertyString" could be defined that clearly identifies parameters that will be compared to PropertyInfos in the future.
Anyway, I feel your pain. I've chased down a lot of misspelled property name references. Honestly, there are a lot of irritations in WPF related to reflection. A useful utility would be a WPF enforcement checker that makes sure all your static control constructors are in place, your attributes properly defined, bindings are accurate, correct keys, etc. There is a long list of validations that could be performed.
If I were still working for Microsoft, I'd probably try doing it.
It sounds like what you are asking for would not require any framework changes, and possibly is already fixed in VS 2010 (I don't have it installed).
The XAML designer needs IntelliSense for bindings within a DataTemplate when you specify a DataType, which is already possible like so:
<DataTemplate DataType="{x:Type data:Person}">
...
</DataTemplate>
I agree that having IntelliSense in this case would be a helpful change, but your other suggestions seem to miss the point of being able to change DataContexts at runtime and use DataTemplates for different types to render them uniquely.
This is truly a solution to what you are wanting!
But it isn't a built-in, native framework solution. Yes, I think that's what we all really want here. Maybe we'll get that later.
In the interim, if you are dead set on restricting the type, this solves that!
Using this converter:
public class RequireTypeConverter : System.Windows.Data.IValueConverter
{
    public object Convert(object value, Type targetType,
        object parameter, System.Globalization.CultureInfo culture)
    {
        if (value == null)
            return value;

        // user needs to pass a valid type
        if (parameter == null)
            System.Diagnostics.Debugger.Break();

        // parameter must parse to some type
        Type _Type = null;
        try
        {
            var _TypeName = parameter.ToString();
            if (string.IsNullOrWhiteSpace(_TypeName))
                System.Diagnostics.Debugger.Break();
            _Type = Type.GetType(_TypeName);
            if (_Type == null)
                System.Diagnostics.Debugger.Break();
        }
        catch { System.Diagnostics.Debugger.Break(); }

        // value needs to be specified type
        if (value.GetType() != _Type)
            System.Diagnostics.Debugger.Break();

        // don't mess with it, just send it back
        return value;
    }

    public object ConvertBack(object value, Type targetType,
        object parameter, System.Globalization.CultureInfo culture)
    {
        throw new NotImplementedException();
    }
}
Then, restrict your DataType like this:
<phone:PhoneApplicationPage.Resources>
    <!-- let's pretend this is your data source -->
    <CollectionViewSource x:Key="MyViewSource" Source="{Binding}"/>
    <!-- validate data type - start -->
    <converters:RequireTypeConverter x:Key="MyConverter" />
    <TextBlock x:Key="DataTypeTestTextBlock"
               DataContext="{Binding Path=.,
                   Source={StaticResource MyViewSource},
                   Converter={StaticResource MyConverter},
                   ConverterParameter=System.Int16}" />
    <!-- validate data type - end -->
</phone:PhoneApplicationPage.Resources>
See how I am requiring the CollectionViewSource to have System.Int16? Of course, you can't even set the Source of a CVS to an Integer, so this will always fail. But it proves the point for sure. It's just too bad that Silverlight does not support {x:Type} or I could have done something like ConverterParameter={x:Type sys:Int16} which would have been nice.
Plus, that XAML should be non-intrusive, so you should be able to implement this without any risk. If the data type is ever what you do not want it to be then the debugger will break and you can kick yourself for breaking your own rule. :)
Again, I know this is a little funky, but it does what you want. In fact I have been playing with it while coding it, and maybe I even have a use for it. I like that it's design-time/debugging only. But, listen, I am not trying to sell it to you.
I am just having fun; if this is too much syntax, just enjoy my effort ;)
PS: you could also create an attached property of type Type that you could use the same way. You could attach it to your CVS or anything else, like x:RequiredType="System.Int16" and in the behavior of the property you could just repeat the converter logic. Same effect, probably the same amount of code - but another viable option if you are serious.
Okay I could not resist, here's the Attached Property approach:
Here's the XAML:
<phone:PhoneApplicationPage.Resources>
    <!-- let's pretend this is your data source -->
    <CollectionViewSource
        x:Key="MyViewSource" Source="{Binding}"
        converters:RestrictType.Property="Source"
        converters:RestrictType.Type="System.Int16" />
</phone:PhoneApplicationPage.Resources>
Here's the property code:
public class RestrictType
{
    // type
    public static String GetType(DependencyObject obj)
    {
        return (String)obj.GetValue(TypeProperty);
    }
    public static void SetType(DependencyObject obj, String value)
    {
        obj.SetValue(TypeProperty, value);
        Watch(obj);
    }
    public static readonly DependencyProperty TypeProperty =
        DependencyProperty.RegisterAttached("Type",
            typeof(String), typeof(RestrictType), null);

    // property
    public static String GetProperty(DependencyObject obj)
    {
        return (String)obj.GetValue(PropertyProperty);
    }
    public static void SetProperty(DependencyObject obj, String value)
    {
        obj.SetValue(PropertyProperty, value);
        Watch(obj);
    }
    public static readonly DependencyProperty PropertyProperty =
        DependencyProperty.RegisterAttached("Property",
            typeof(String), typeof(RestrictType), null);

    private static bool m_Watching = false;
    private static void Watch(DependencyObject element)
    {
        // element must be a FrameworkElement
        if (element == null)
            System.Diagnostics.Debugger.Break();

        // let's not start watching until each is set
        var _PropName = GetProperty(element);
        var _PropTypeName = GetType(element);
        if (_PropName == null || _PropTypeName == null)
            return;

        // we will not be setting this up twice
        if (m_Watching)
            return;
        m_Watching = true;

        // listen with a dp so it is a weak reference
        var _Binding = new Binding(_PropName) { Source = element };
        var _Prop = System.Windows.DependencyProperty.RegisterAttached(
            "ListenToProp" + _PropName,
            typeof(object), element.GetType(),
            new PropertyMetadata((s, e) => { Test(s); }));
        BindingOperations.SetBinding(element, _Prop, _Binding);

        // run now in case it is already set
        Test(element);
    }

    // test property value type
    static void Test(object sender)
    {
        // ensure element type (again)
        var _Element = sender as DependencyObject;
        if (_Element == null)
            System.Diagnostics.Debugger.Break();

        // the type must be provided
        var _TypeName = GetType(_Element);
        if (_TypeName == null)
            System.Diagnostics.Debugger.Break();

        // convert type string to type
        Type _Type = null;
        try
        {
            _Type = Type.GetType(_TypeName);
            if (_Type == null)
                System.Diagnostics.Debugger.Break();
        }
        catch { System.Diagnostics.Debugger.Break(); }

        // the property name must be provided
        var _PropName = GetProperty(_Element);
        if (string.IsNullOrWhiteSpace(_PropName))
            System.Diagnostics.Debugger.Break();

        // the element must have the specified property
        var _PropInfo = _Element.GetType().GetProperty(_PropName);
        if (_PropInfo == null)
            System.Diagnostics.Debugger.Break();

        // the property's value's Type must match
        var _PropValue = _PropInfo.GetValue(_Element, null);
        if (_PropValue != null)
            if (_PropValue.GetType() != _Type)
                System.Diagnostics.Debugger.Break();
    }
}
Best of luck! Just having fun.
Also, if you have your output window open when you're debugging your project, VS will inform you of any databinding errors, i.e. that the property a Control is bound to doesn't exist.
Guys, what Ken and Grant are trying to say is: how about a XAML binding syntax where you could write a lambda like (p => p.UserId), where p is the DataContext of type Customer?
You can now!
http://msdn.microsoft.com/en-us/library/dd490796(VS.100).aspx
