How can I set the StreamingContext in Silverlight DataContractSerializer?

I need to do a deep copy in Silverlight, which I can do with the tried and tested serialize/deserialize approach. The copied objects aren't exact clones - they need to have some of their properties modified on the copy.
I should be able to do something like this:
[OnDeserialized()]
public void OnDeserializedMethod(StreamingContext context)
{
    if (context.State == StreamingContextStates.Clone)
    {
        //stuff
    }
}
where the StreamingContext is set up using a NetDataContractSerializer:
NetDataContractSerializer ds = new NetDataContractSerializer(new StreamingContext(StreamingContextStates.Clone));
Silverlight doesn't have a NetDataContractSerializer though :-(.
So is there any way I can set the StreamingContext on the DataContractSerializer to give me something to work with? I can't just blindly apply my changes to every serialize operation; they apply only in the specific case of a copy.
Or, alternatively, is there another method that gives me similar hooks into the (de)serialization process so I can play with the data?
(I've looked into implementing IDataContractSurrogate but a) it was painful and b) Silverlight doesn't have one of those either...)

I've come to the conclusion that you can't do it, so I guess an alternative approach is in order.
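For example, a minimal sketch of that alternative approach: do the deep copy with the DataContractSerializer that Silverlight does have, and apply the clone-specific changes explicitly on the returned copy rather than inside an [OnDeserialized] callback (FixUpAfterClone below is a hypothetical post-copy method, used only for illustration):
using System.IO;
using System.Runtime.Serialization;

public static class DeepCopyHelper
{
    // Round-trips the object graph through DataContractSerializer to clone it.
    public static T Clone<T>(T source)
    {
        var serializer = new DataContractSerializer(typeof(T));
        using (var stream = new MemoryStream())
        {
            serializer.WriteObject(stream, source);
            stream.Position = 0;
            return (T)serializer.ReadObject(stream);
        }
    }
}

// Usage: the "is this a clone?" logic lives at the call site
// instead of in a StreamingContext-aware callback.
// var copy = DeepCopyHelper.Clone(original);
// copy.FixUpAfterClone();   // hypothetical post-copy adjustment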

Related

Immutable State - Propagating Changes to the GUI Efficiently

In a previous question I asked how to idiomatically implement an observer pattern for an F# application. My application now uses a MailboxProcessor as recommended, and I've created some helper functions to create sub-MailboxProcessors etc. However, I'm at a mental block when it comes to specific case scenarios w.r.t. GUI binding.
Let's say I have a model as such:
type Document = {
    Contents : seq<DocumentObject>
}
And the GUI (WPF, XAML) requires binding like so:
interface IMainWindowViewModel
{
    IEnumerable<Control> ContentViews { get; }
}
Each ViewModel for each Control will require a DocumentObject (its underlying model) and a way of knowing if it has changed. I supply this as a sub-MailboxProcessor<DocumentObject> so that changes may be propagated correctly; I'm moderately confident this pattern works. Essentially, it maps the service outputs and wraps modification requests (outer interface example below):
let subSvc = generateSubSvc svc (fun doc -> doc.Contents[0]) (fun f -> fun oldDoc -> { oldDoc with Contents[0] = f Contents[0] })
let viewModel = new SomeDocObjViewModel(docObjSvc)
new DocObjView(viewModel)
Now, imagine a modification command deletes a DocumentObject from MyDocument. The top-level MailboxProcessor now echoes the change to IMainWindowViewModel using its IEvent<MyDocument>. And here's where my problems begin.
My IMainWindowViewModel doesn't really know which DocumentObject has been deleted, only that there's a new Document and it has to deal with it. There may be ways of figuring it out, but it never really knows directly. This can force me down the path of re-creating all the Controls for all DocumentObjects just to be safe (inefficient). There are additional problems (such as dangling subSvcs) which I haven't mentioned here for brevity.
Normally, this kind of dynamic change would be dealt with using something like an ObservableCollection<DocumentObject> which is then mapped into an ObservableCollection<Control>. This comes with all the caveats of shared mutable state and is a little 'hackish'; however, it does do the job.
Ideally, I'd like a 'pure' model, free from the trappings of PropertyChanged and ObservableCollections, what kind of patterns in F# would satisfy this need? Where is it appropriate to draw the line between idiomatic and realistic?
Have you considered using the Reactive Extensions (and Reactive UI further down the road) for the purpose of modelling mutable state (read: your model properties over time) in a functional way?
I don't see anything wrong technically to use an ObservableCollection in your model. After all, you need to track collection changes. You could do it on your own, but it looks like you can save yourself a lot of trouble reinventing the observable collection, unless you have a very specific reason to avoid the ObservableCollection class.
Also, using MailboxProcessor seems a bit overkill, since you could just use a Subject (from Rx) to publish and expose it as an IObservable to subscribe to 'messages':
type TheModel() =
    let charactersCountSubject = new Subject<int>()
    let downloadDocument (* ... *) = async {
        let! text = // ...
        charactersCountSubject.OnNext(text.Length)
    }
    member val CharactersCount = charactersCountSubject.AsObservable() with get

type TheViewModel(model : TheModel) =
    // ...
    member val IsTooManyCharacters = model.CharactersCount.Select(fun count -> count > 42)
Of course, since we're talking about WPF, the view-model should implement INPC. There are different approaches, but whichever one you take, ReactiveUI has a lot of convenient tools.
For example the CreateDerivedCollection extension method that solves one of the problems you've mentioned:
documents.CreateDerivedCollection(fun x -> (* ... map Document to Control ... *))
This will take your documents observable collection, and make another observable collection out of it (actually a ReactiveCollection) that will have documents mapped to controls.

Immutable data model for a WPF application with MVVM implementation

I have an application which has a data tree as a data backend. To implement the MVVM pattern I have a logic layer of classes which encapsulate the data tree; therefore the logic is also arranged in a tree. If the input is in a valid state, the data should be copied to a second thread which works as a double buffer of the last valid state. One way to do that would be cloning.
Another approach would be to make the complete data backend immutable. This would imply rebuilding the whole data tree whenever something new is entered. My question is, is there a practical way to do this? I'm stuck at the point where I have to reassign the data tree efficiently to the logic layer.
UPDATE - Some Code
What we are doing is to abstract hardware devices which we use to run our experiments. Therefore we defined classes like "chassis, sequence, card, channel, step". Those build a tree structure like this:
Chassis
├─ Sequence1
│  ├─ Card1
│  │  ├─ Channel1
│  │  │  ├─ Step1
│  │  │  └─ Step2
│  │  └─ Channel2
│  ├─ Card2
│  └─ Card3
└─ Sequence2
In code it looks like this:
public class Chassis {
    public readonly List<Sequence> Sequences = new List<Sequence>();
}
public class Sequence {
    public readonly List<Card> Cards = new List<Card>();
}
and so on. Of course each class has some more properties, but those are easy to handle. My problem now is that List is a mutable object: I can call List.Add() and it changes. OK, there is a ReadOnlyList, but I'm not sure whether it implements immutability the right way, right as in copying by value rather than by reference, and not just preventing writes by blocking the set methods.
The next problem is that the amount of sequences and step can vary. For this reason I need an atomic exchange of list elements.
At the moment I don't have any more code as I'm still thinking if this way would help me and if it is possible at all to implement it in a reasonable amount of time.
Note that there are new immutable collections for .NET that could help you achieve your goal.
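As a rough sketch (mine, using the Chassis/Sequence classes from the question, not code from the library's documentation), the tree nodes could hold immutable lists and expose non-destructive updates that return a new node:
using System.Collections.Immutable;

public class Card { }

public class Sequence
{
    public ImmutableList<Card> Cards { get; private set; }

    public Sequence(ImmutableList<Card> cards)
    {
        Cards = cards;
    }
}

public class Chassis
{
    public ImmutableList<Sequence> Sequences { get; private set; }

    public Chassis(ImmutableList<Sequence> sequences)
    {
        Sequences = sequences;
    }

    // "Adding" a sequence builds a new Chassis; the unchanged nodes are shared.
    public Chassis AddSequence(Sequence sequence)
    {
        return new Chassis(Sequences.Add(sequence));
    }
}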
Be very cautious about Dave Turvey's statement (I would downvote/comment if I could):
If you are looking to implement an immutable list you could try storing the list as a private member but exposing a public IEnumerable<>
This is incorrect. The private member could still be changed by its container class. The public member could be cast to List<T>, IList<T>, or ICollection<T> without throwing an exception. Thus anything that depends on the immutability of the public IEnumerable<T> could break.
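To make that concrete, here is a small illustration (mine, not the answerer's), assuming the Chassis class that exposes a List-backed IEnumerable<Sequence> as shown in the answer quoted below:
// The IEnumerable<Sequence> is really the private List<Sequence>,
// so a cast quietly restores full write access.
var chassis = new Chassis();
((List<Sequence>)chassis.Sequences).Add(new Sequence());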
I'm not sure if I understand 100% what you're asking. It sounds like you have a tree of objects in a particular state and you want to perform some processing on a copy of that without modifying the original object state. You should look into cloning via a "Deep Copy". This question should get you started.
If you are looking to implement an immutable list you could try storing the list as a private member but exposing a public IEnumerable<>
public class Chassis
{
    List<Sequence> _sequences = new List<Sequence>();
    public IEnumerable<Sequence> Sequences { get { return _sequences; } }
}
18/04/13 Update in response to Brandon Bonds' comments
The library linked in Brandon Bonds' answer is certainly interesting and offers advantages over IEnumerable<>. In many cases it is probably a better solution. However, there are a couple of caveats you should be aware of if you use this library.
As of 18/04/2013 this is a beta library. It is obviously still in development and may not be ready for production use. For example, the code sample for list creation in the linked article doesn't work with the current NuGet package.
This is a .NET 4.5 library, so it will not be suitable for programs targeting an older framework.
It does not guarantee immutability of objects contained in the collections, only of the collection itself. It is possible to modify objects in an immutable list, so you will still need to consider a deep copy when copying collections.
This is addressed in the FAQ at the end of the article
Q: Can I only store immutable data in these immutable collections?
A: You can store all types of data in these collections. The only immutable aspect is the collections themselves, not the items they contain.
In addition, the following code sample illustrates this point (using version 1.0.8-beta):
using System;
using System.Collections.Immutable;

class Data
{
    public int Value { get; set; }
}

class Program
{
    static void Main(string[] args)
    {
        var test = ImmutableList.Create<Data>();
        test = test.Add(new Data { Value = 1 });
        Console.WriteLine(test[0].Value);
        test[0].Value = 2;
        Console.WriteLine(test[0].Value);
        Console.ReadKey();
    }
}
This code will allow modification of the Data object and output
1
2
Here are a couple of articles for further reading on this topic
Read only, frozen, and immutable collections
Immutability in C# Part One: Kinds of Immutability

Supporting multiple instances of a plugin DLL with global data

Context: I converted a legacy standalone engine into a plugin component for a composition tool. Technically, this means that I compiled the engine code base to a C DLL which I invoke from a .NET wrapper using P/Invoke; the wrapper implements an interface defined by the composition tool. This works quite well, but now I receive the request to load multiple instances of the engine, for different projects. Since the engine keeps the project data in a set of global variables, and since the DLL with the engine code base is loaded only once, loading multiple projects means that the project data is overwritten.
I can see a number of solutions, but they all have some disadvantages:
You can create multiple DLLs with the same code, which are seen as different DLLs by Windows, so their code is not shared. Probably this already works if you have multiple copies of the engine DLL with different names. However, the engine is invoked from the wrapper using DllImport attributes and I think the name of the engine DLL needs to be known when compiling the wrapper. Obviously, if I have to compile different versions of the wrapper for each project, this is quite cumbersome.
The engine could run as a separate process. This means that the wrapper would launch a separate process for the engine when it loads a project, and it would use some form of IPC to communicate with this process. While this is a relatively clean solution, it requires some effort to get working, and I don't know which IPC technology would be best to set up this kind of construction. There may also be a significant overhead in the communication: the engine needs to frequently exchange arrays of floating-point numbers.
The engine could be adapted to support multiple projects. This means that the global variables should be put into a project structure, and every reference to the globals should be converted to a corresponding reference that is relative to a particular project. There are about 20-30 global variables, but as you can imagine, these global variables are referenced from all over the code base, so this conversion would need to be done in some automatic manner. A related problem is that you should be able to reference the "current" project structure in all places, but passing this along as an extra argument in each and every function signature is also cumbersome. Does there exist a technique (in C) to consider the current call stack and find the nearest enclosing instance of a relevant data value there?
Can the stackoverflow community give some advice on these (or other) solutions?
Put the whole darn thing inside a C++ class, then references to variables will automatically find the instance variable.
You can make a global pointer to the active instance. This should probably be thread-local (see __declspec(thread)).
Add extern "C" wrapper functions that delegate to the corresponding member function on the active instance. Provide functions to create new instance, teardown existing instance, and set the active instance.
OpenGL uses this paradigm to great effect (see wglMakeCurrent), finding its state data without actually having to pass a state pointer to every function.
Although I received a lot of answers that suggested to go for solution 3, and although I agree it's a better solution conceptually, I think there was no way to realize that solution practically and reliably under my constraints.
Instead, what I actually implemented was a variation of solution #1. Although the DLL name in DllImport needs to be a compile-time constant, this question explains how to do it dynamically.
If my code before looked like this:
using System.Runtime.InteropServices;
class DotNetAccess {
    [DllImport("mylib.dll", EntryPoint="GetVersion")]
    private static extern int _getVersion();

    public int GetVersion()
    {
        return _getVersion();
        //May include error handling
    }
}
It now looks like this:
using System;
using System.IO;
using System.ComponentModel;
using System.Runtime.InteropServices;
using Assembly = System.Reflection.Assembly;
class DotNetAccess : IDisposable {
    [DllImport("kernel32.dll", EntryPoint="LoadLibrary", SetLastError=true)]
    private static extern IntPtr _loadLibrary(string name);

    [DllImport("kernel32.dll", EntryPoint="FreeLibrary", SetLastError=true)]
    private static extern bool _freeLibrary(IntPtr hModule);

    [DllImport("kernel32.dll", EntryPoint="GetProcAddress", CharSet=CharSet.Ansi, ExactSpelling=true, SetLastError=true)]
    private static extern IntPtr _getProcAddress(IntPtr hModule, string name);

    private static IntPtr LoadLibrary(string name)
    {
        IntPtr dllHandle = _loadLibrary(name);
        if (dllHandle == IntPtr.Zero)
            throw new Win32Exception();
        return dllHandle;
    }

    private static void FreeLibrary(IntPtr hModule)
    {
        if (!_freeLibrary(hModule))
            throw new Win32Exception();
    }

    private static D GetProcEntryDelegate<D>(IntPtr hModule, string name)
        where D : class
    {
        IntPtr addr = _getProcAddress(hModule, name);
        if (addr == IntPtr.Zero)
            throw new Win32Exception();
        return Marshal.GetDelegateForFunctionPointer(addr, typeof(D)) as D;
    }

    private string dllPath;
    private IntPtr dllHandle;

    public DotNetAccess()
    {
        string dllDir = Path.GetDirectoryName(Assembly.GetCallingAssembly().Location);
        string origDllPath = Path.Combine(dllDir, "mylib.dll");
        if (!File.Exists(origDllPath))
            throw new Exception("MyLib DLL not found");
        string myDllPath = Path.Combine(dllDir, String.Format("mylib-{0}.dll", GetHashCode()));
        File.Copy(origDllPath, myDllPath);
        dllPath = myDllPath;
        dllHandle = LoadLibrary(dllPath);
        _getVersion = GetProcEntryDelegate<_getVersionDelegate>(dllHandle, "GetVersion");
    }

    public void Dispose()
    {
        if (dllHandle != IntPtr.Zero)
        {
            FreeLibrary(dllHandle);
            dllHandle = IntPtr.Zero;
        }
        if (dllPath != null)
        {
            File.Delete(dllPath);
            dllPath = null;
        }
    }

    private delegate int _getVersionDelegate();
    private readonly _getVersionDelegate _getVersion;

    public int GetVersion()
    {
        return _getVersion();
        //May include error handling
    }
}
Phew.
This may seem extremely complex if you see the two versions next to each other, but once you've set up the infrastructure, it is a very systematic change. And more importantly, it localizes the modification in my DotNetAccess layer, which means that I don't have to do modifications scattered all over a very large code base that is not my own.
In my opinion, solution 3 is the way to go.
The disadvantage that you have to touch every call to the DLL applies to the other solutions too, without the poor scalability and ugliness of the multiple-DLL approach or the unnecessary overhead of IPC.
Solution 3 IS the way to go.
Imagine that current object-oriented programming languages work similarly to your 3rd solution; they just implicitly pass the pointer to the structure that contains the data of "this".
Passing around "some kind of context thing" is not cumbersome, it is just how things work! ;-)
Use approach #3. Since the code is in C, an easy way to deal with the globals that are spread everywhere is to define a macro for each global that has the same name as the global variable, but expands to something like getSession()->theGlobal, where getSession() returns a pointer to a "session-specific" structure that holds all the data for your globals. getSession() would fish the right data structure out of a global map of data structures somehow, perhaps using thread-local storage, or based on process ID, etc.
Actually solution 3 is easier than it sounds. All the other solutions are a kind of patch and will break over time.
Create a .net class which will encapsulate all access to the legacy code. Make it IDisposable.
Change all the global variables to reside in a class named 'Context'
Have all the C++ interfaces take the context object and pass it around as the first argument. This is probably the longest stage, and you can avoid it using the "thread-local-storage" method suggested by someone else, but I would vote against that solution: if your library runs any worker threads, the "thread-local-storage" solution will break. Just add the context object where it is needed.
Use the context object to access all global data.
Have the context object created from the .NET constructor (by P/Invoking a new create_context function) and deleted by the .NET Dispose() method; a sketch of such a wrapper follows below.
Enjoy.
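To make steps 1, 5 and 6 concrete, a minimal sketch of the wrapper might look like the following; create_context, destroy_context and engine_get_version are hypothetical exports used only for illustration, and every export is assumed to take the context handle as its first argument (step 3):
using System;
using System.Runtime.InteropServices;

public sealed class EngineSession : IDisposable
{
    // Hypothetical exports; the real engine would expose its own names.
    [DllImport("engine.dll")]
    private static extern IntPtr create_context();

    [DllImport("engine.dll")]
    private static extern void destroy_context(IntPtr context);

    [DllImport("engine.dll")]
    private static extern int engine_get_version(IntPtr context);

    private IntPtr context;

    public EngineSession()
    {
        context = create_context();
        if (context == IntPtr.Zero)
            throw new InvalidOperationException("Failed to create engine context");
    }

    public int GetVersion()
    {
        // The context handle is passed explicitly on every call.
        return engine_get_version(context);
    }

    public void Dispose()
    {
        if (context != IntPtr.Zero)
        {
            destroy_context(context);
            context = IntPtr.Zero;
        }
    }
}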
Some thoughts on suggested solution #2 (and a bit on #1 and #3).
Some sort of IPC layer might introduce lag; how bad that is depends on the actual engine. If the engine is a rendering engine and it is called, say, 60 times a second, the overhead might be too much. But if not, a named pipe might be quick enough and easy to create using WCF.
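As a rough illustration of that suggestion (my sketch, with a hypothetical IEngine contract, not code from any of the answers), NetNamedPipeBinding keeps the WCF plumbing fairly small; how the samples are forwarded to the native engine is left out:
using System.ServiceModel;

// Hypothetical contract between the wrapper and the out-of-process engine.
[ServiceContract]
public interface IEngine
{
    [OperationContract]
    double[] Process(double[] samples);
}

// Engine-side implementation; a real one would P/Invoke into the native DLL.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class EngineService : IEngine
{
    public double[] Process(double[] samples)
    {
        return samples; // placeholder
    }
}

public static class EngineIpc
{
    // Host side: run once in each engine process, with a per-project pipe name.
    public static ServiceHost StartHost(string pipeName)
    {
        var host = new ServiceHost(new EngineService());
        host.AddServiceEndpoint(typeof(IEngine),
            new NetNamedPipeBinding(),
            "net.pipe://localhost/" + pipeName);
        host.Open();
        return host;
    }

    // Wrapper side: connect to a running engine process.
    public static IEngine Connect(string pipeName)
    {
        var factory = new ChannelFactory<IEngine>(
            new NetNamedPipeBinding(),
            new EndpointAddress("net.pipe://localhost/" + pipeName));
        return factory.CreateChannel();
    }
}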
Are you entirely sure you will need EXACTLY the same engine multiple times, or are you in danger of changing requirements that could force you to load multiple versions at the same time? If so, option #2 might be a better way than option #3, as it would allow this more easily.
If the IPC layer is not slowing things down too much, this architecture might allow you to distribute the engines over other PC's. This might enable you to use more hardware than you previously planned for. You could even think about hosting the engine in the Azure cloud.

How can I edit immutable objects in WPF without duplicating code?

We have lots of immutable value objects in our domain model, one example of this is a position, defined by a latitude, longitude & height.
/// <remarks>When I grow up I want to be an F# record.</remarks>
public class Position
{
    public double Latitude
    {
        get;
        private set;
    }

    // snip

    public Position(double latitude, double longitude, double height)
    {
        Latitude = latitude;
        // snip
    }
}
The obvious way to allow editing of a position is to build a ViewModel which has getters and setters, as well as a ToPosition() method to extract the validated immutable position instance. While this solution would be ok, it would result in a lot of duplicated code, especially XAML.
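For reference, the hand-written ViewModel this question is trying to avoid would look roughly like this (a sketch only; it assumes Longitude and Height properties corresponding to the snipped members, and omits INotifyPropertyChanged):
public class PositionViewModel
{
    public double Latitude { get; set; }
    public double Longitude { get; set; }
    public double Height { get; set; }

    public PositionViewModel(Position position)
    {
        Latitude = position.Latitude;
        Longitude = position.Longitude;
        Height = position.Height;
    }

    // Validation could run here before the immutable instance is rebuilt.
    public Position ToPosition()
    {
        return new Position(Latitude, Longitude, Height);
    }
}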
The value objects in question consist of between three and five properties which are usually some variant of X, Y, Z & some auxiliary stuff. Given this, I had considered creating three ViewModels to handle the various possibilities, where each ViewModel would need to expose properties for the value of each property as well as a description to display for each label (eg. "Latitude").
Going further, it seems like I could simplify it to one general ViewModel that can deal with N properties and hook everything up using reflection. Something like a property grid, but for immutable objects. One issue with a property grid is that I want to be able to change the look so I can have labels and textboxes such as:
Latitude: [ 32 ] <- TextBox
Longitude: [ 115 ]
Height: [ 12 ]
Or put it in a DataGrid such as:
Latitude | Longitude | Height
      32 |       115 |     12
So my question is:
Can you think of an elegant way to solve this problem? Are there any libraries that do this or articles about something similar?
I'm mainly looking for:
Code duplication to be minimized
Easy to add new value object types
Possible to extend with some kind of validation
Custom Type Descriptors could be used to solve this problem. Before you bind to a Position, your type descriptor could kick in, and provide get and set methods to temporarily build the values. When the changes are committed, it could build the immutable object.
It might look something like this:
DataContext = new Mutable(position,
    dictionary => new Position(dictionary["latitude"], ...)
);
Your bindings can still look like this:
<TextBox Text="{Binding Path=Latitude}" />
Because the Mutable object will 'pretend' to have properties like Latitude thanks to its TypeDescriptor.
Alternatively you might use a converter in your bindings and come up with some kind of convention.
Your Mutable class would take the current immutable object, and a Func<IDictionary, object> that allows you to create the new immutable object once editing completes. Your Mutable class would make use of the type descriptor, which would create PropertyDescriptors that create the new immutable object upon being set.
For an example of how to use type descriptors, see here:
http://www.paulstovell.com/editable-object-adapter
Edit: if you want to limit how often your immutable objects are created, you might also look at BindingGroups and IEditableObject, which your Mutable can also implement.
I found this old question while researching my possible options in the same situation. I figured I should update it in case anyone else stumbles on to it:
Another option (not available when Paul offered his solution since .Net 4 wasn't out yet) is to use the same strategy, but instead of implementing it using CustomTypeDescriptors, use a combination of generics, dynamic objects and reflection to achieve the same effect.
In this case, you define a class
class Mutable<ImmutableType> : DynamicObject
{
    //...
}
Its constructor takes an instance of the immutable type and a delegate that constructs a new instance of it out of a dictionary, just like in Paul's answer. The difference here, however, is that you override TryGetMember and TrySetMember to populate an internal dictionary that you eventually use as the argument for the constructor delegate. You use reflection to verify that the only properties you accept are those that are actually implemented on ImmutableType.
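A rough sketch of that idea (my reconstruction, not the code from the linked example) might look like this:
using System;
using System.Collections.Generic;
using System.Dynamic;
using System.Linq;

// Exposes settable "properties" for an immutable type; the accumulated values
// are handed to a factory delegate to build the new immutable instance.
public class Mutable<ImmutableType> : DynamicObject
{
    private readonly Dictionary<string, object> values;
    private readonly Func<IDictionary<string, object>, ImmutableType> factory;
    private readonly HashSet<string> knownProperties;

    public Mutable(ImmutableType original,
                   Func<IDictionary<string, object>, ImmutableType> factory)
    {
        this.factory = factory;
        knownProperties = new HashSet<string>(
            typeof(ImmutableType).GetProperties().Select(p => p.Name));
        // Seed the dictionary with the current property values.
        values = typeof(ImmutableType).GetProperties()
            .ToDictionary(p => p.Name, p => p.GetValue(original, null));
    }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        return values.TryGetValue(binder.Name, out result);
    }

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        // Only accept names that actually exist on the immutable type.
        if (!knownProperties.Contains(binder.Name))
            return false;
        values[binder.Name] = value;
        return true;
    }

    // Builds the new immutable instance once editing is finished.
    public ImmutableType ToImmutable()
    {
        return factory(values);
    }
}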
Performance-wise, I wager that Paul's answer is faster, and it doesn't involve dynamic objects, which are known to put C# developers into fits. But the implementation of this solution is also a little simpler, because type descriptors are a bit arcane.
Here's the requested proof-of-concept / example implementation:
https://bitbucket.org/jwrush/mutable-generic-example
Can you think of an elegant way to solve this problem?
Honestly, you just dance around the problem, but don't mention the problem itself ;).
If I correctly guess your problem, then the combination of MultiBinding and IMultiValueConverter should do the trick.
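A bare-bones illustration of that combination (my sketch, not the answerer's; it assumes the Longitude and Height properties snipped from the question's Position class, and how you wire it into a MultiBinding depends on your view):
using System;
using System.Globalization;
using System.Windows.Data;

// Combines the three edited values back into one immutable Position
// when used as the converter of a MultiBinding.
public class PositionConverter : IMultiValueConverter
{
    public object Convert(object[] values, Type targetType,
                          object parameter, CultureInfo culture)
    {
        return new Position(
            System.Convert.ToDouble(values[0], culture),
            System.Convert.ToDouble(values[1], culture),
            System.Convert.ToDouble(values[2], culture));
    }

    public object[] ConvertBack(object value, Type[] targetTypes,
                                object parameter, CultureInfo culture)
    {
        var position = (Position)value;
        return new object[] { position.Latitude, position.Longitude, position.Height };
    }
}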
HTH.
P.S. BTW, you have immutable class instances, not value objects. With value objects (which are declared with the struct keyword) you would have to dance much more, no matter whether there were setters or not :).

How do I avoid a memory leak with LINQ-To-SQL?

I have been having some issues with LINQ-To-SQL around memory usage. I'm using it in a Windows Service to do some processing, and I'm looping through a large amount of data that I'm pulling back from the context. Yes - I know I could do this with a stored procedure but there are reasons why that would be a less than ideal solution.
Anyway, what I see basically is that memory is not being released even after I call context.SubmitChanges(). So I end up having to do all sorts of weird things like only pull back 100 records at a time, or create several contexts and have them all do separate tasks. If I keep the same DataContext and use it later for other calls, it just eats up more and more memory. Even if I call Clear() on the "var tableRows" array that the query returns to me, set it to null, and call System.GC.Collect(), it still doesn't release the memory.
Now I've read a bit about how you should use DataContexts quickly and dispose of them quickly, but it seems like there ought to be a way to force the context to dump all its data (or all its tracking data for a particular table) at a certain point to guarantee the memory is freed.
Anyone know what steps guarantee that the memory is released?
A DataContext tracks all the objects it ever fetched. It won't release this until it is garbage collected. Also, as it implements IDisposable, you must call Dispose or use the using statement.
This is the right way to go:
using (DataContext myDC = new DataContext())
{
    // Do stuff
} //DataContext is disposed
If you don't need object tracking, set DataContext.ObjectTrackingEnabled to false. If you do need it, you can use reflection to call the internal DataContext.ClearCache(), although you have to be aware that since it's internal, it's subject to disappear in a future version of the framework. And as far as I can tell, the framework itself doesn't use it, but it does clear the object cache.
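For illustration, the reflection call might be wrapped up like this (a sketch; as noted, ClearCache is internal and may change or disappear between framework versions):
using System.Data.Linq;
using System.Reflection;

public static class DataContextExtensions
{
    public static void ClearCache(this DataContext context)
    {
        // ClearCache is internal, so it has to be located through reflection.
        var method = typeof(DataContext).GetMethod(
            "ClearCache",
            BindingFlags.Instance | BindingFlags.NonPublic);
        method.Invoke(context, null);
    }
}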
As Amy points out, you should dispose of the DataContext using a using block.
It seems that your primary concern is about creating and disposing a bunch of DataContext objects. This is how linq2sql is designed. The DataContext is meant to have short lifetime. Since you are pulling a lot of data from the database, it makes sense that there will be a lot of memory usage. You are on the right track, by processing your data in chunks.
Don't be afraid of creating a ton of DataContexts. They are designed to be used that way.
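A sketch of that per-chunk pattern (illustrative only; MyDataContext, Orders, Dept and Id are placeholders for your own context, table and columns):
int skipAmount = 0;
while (true)
{
    // A fresh, short-lived context per batch, so objects tracked for earlier
    // batches become collectable once the context is disposed.
    using (var context = new MyDataContext())
    {
        var batch = context.Orders
            .Where(o => o.Dept == "Dept")
            .OrderBy(o => o.Id)
            .Skip(skipAmount)
            .Take(100)
            .ToList();

        if (batch.Count == 0)
            break;

        foreach (var order in batch)
        {
            // make changes to order
        }

        context.SubmitChanges();
        skipAmount += batch.Count;
    }
}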
Thanks guys - I will check out the ClearCache method. Just for clarification (for future readers), the situation in which I was getting the memory usage was something like this:
using (DataContext context = new DataContext())
{
    int skipAmount = 0;
    while (true)
    {
        var rows = context.tables.Where(x => x.Dept == "Dept").Skip(skipAmount).Take(100).ToList();
        if (rows.Count == 0)
            break; //break out of loop when out of rows
        foreach (table t in rows)
        {
            //make changes to t
        }
        context.SubmitChanges();
        skipAmount += rows.Count;
        rows.Clear();
        rows = null;
        //at this point, even though the rows have been cleared and changes have been
        //submitted, the context is still holding onto a reference somewhere to the
        //removed rows. So unless you create a new context, memory usage keeps on growing
    }
}
I just ran into a similar problem. In my case, setting DataContext.ObjectTrackingEnabled to false helped.
But it only works when iterating through the rows as follows:
using (var db = new DataContext())
{
    db.ObjectTrackingEnabled = false;
    var documents = from d in db.GetTable<T>()
                    select d;
    foreach (var doc in documents)
    {
        ...
    }
}
If, for example, you use ToArray() or ToList() in the query, it has no effect.
