Problems using protobuf-net RuntimeTypeModel and precompile with WPF client - wpf

Can anybody shed any light on how I can use the precompiled protobuf-net serializer assembly with WCF and a client (not to serialize/deserialize in code) to speed up first use of a DTO type?
I have managed to gain a lot of performance improvement in my large WCF/WPF application by using protobuf-net vs. DataContractSerializer. However, even though I can precompile a serialization assembly from my DTOs, I cannot make WCF or its WPF client use it. The web service process always takes a long time for any first call from that process involving a new DTO, presumably to generate a serialization assembly on the fly.
How can I instruct the WCF server and/or the WPF client to use my generated assembly?
On a related issue, I have properties of type SolidColorBrush in some DTOs and this makes the precompiler fall over with "No serializer defined for type: System.Windows.Media.SolidColorBrush".
I have some code to add this support to the protobuf-net model, but I cannot understand how to apply it (to the precompiler or my code) when the rest of the DTOs are decorated with attributes, e.g. ProtoContractAttribute.
Any help much appreciated

At the moment, the only way to make WCF use a precompiled model would be to configure WCF manually in code, in particular adding a ProtoOperationBehavior manually and specifying the model:
var behavior = new ProtoOperationBehavior();
behavior.Model = new MyPrecompiledSerializer();
I confess I don't have a full end-to-end WCF example of doing that. I suspect it may be easier for me, in a new release, to tweak ProtoBehaviorExtension and/or ProtoBehaviorAttribute to allow you to specify the custom serializer-type via configuration - but that code does not exist today.
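For illustration, a rough client-side sketch of that wiring (untested; IMyService, "myEndpointConfig" and MyPrecompiledSerializer are placeholders, and the exact ProtoOperationBehavior constructor and Model members may differ slightly between protobuf-net builds):
// requires: using System.ServiceModel; using System.ServiceModel.Description; using ProtoBuf.ServiceModel;
var factory = new ChannelFactory<IMyService>("myEndpointConfig");
foreach (OperationDescription operation in factory.Endpoint.Contract.Operations)
{
    // swap the default DataContractSerializer behavior for protobuf-net,
    // pointing it at the precompiled model
    var dcs = operation.Behaviors.Find<DataContractSerializerOperationBehavior>();
    if (dcs != null) operation.Behaviors.Remove(dcs);
    operation.Behaviors.Add(new ProtoOperationBehavior(operation) { Model = new MyPrecompiledSerializer() });
}
var client = factory.CreateChannel();
The same swap can be applied on the server side to each endpoint of the ServiceHost before calling Open().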
In the interim, if the issue is a slight delay on the first operation, then you can also add a few of the types you need explicitly to the default model, and compile it:
RuntimeTypeModel.Default.Add(typeof(Foo), true);
RuntimeTypeModel.Default.Add(typeof(Bar), true);
RuntimeTypeModel.Default.CompileInPlace();
that said: the compilation isn't horrendously slow - I'd be a little surprised if it is causing noticeable delay, unless your model is really complex (hundreds of types). Is it possible the delay is just WCF, network, TCP, etc overheads?
Regarding SolidColorBrush, and by implication Color: it is possible to configure them at runtime:
RuntimeTypeModel.Default.Add(typeof(System.Windows.Media.Color), false)
.Add("R", "G", "B", "A");
RuntimeTypeModel.Default.Add(typeof(System.Windows.Media.SolidColorBrush), false)
.Add("Color");
However, I have not yet added a mechanism to do this when using "precompile" - it is much trickier at the technical level: I can't just use an executable method on (say) an attribute, because the assembly being inspected by "precompile" could be for any CLI (Silverlight, WinRT, .NET 1.1, CF, etc) - and as such, it is loaded by very different mechanisms.
My preferred approach would be: don't expose it as System.Windows.Media.Color - write your own DTO class that represents the data (rather than the final implementation), and map between them. Alternatively, it is also possible to write your own utility console exe that acts like "precompile", by configuring the model then calling RuntimeTypeModel.Default.Compile(string,string) or RuntimeTypeModel.Default.Compile(CompilerOptions).
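For example, a small console tool along these lines can stand in for "precompile" while still letting you add the Color/SolidColorBrush mappings (a sketch; Foo/Bar and the output names are placeholders, and you need references to WindowsBase/PresentationCore for the WPF types):
// requires a reference to the full-framework protobuf-net build
using ProtoBuf.Meta;

static class PrecompileTool
{
    static void Main()
    {
        var model = TypeModel.Create();

        // DTOs decorated with [ProtoContract] pick up their layout from attributes
        model.Add(typeof(Foo), true);
        model.Add(typeof(Bar), true);

        // runtime-only configuration for the WPF types
        model.Add(typeof(System.Windows.Media.Color), false)
             .Add("R", "G", "B", "A");
        model.Add(typeof(System.Windows.Media.SolidColorBrush), false)
             .Add("Color");

        // writes MyPrecompiledSerializer.dll next to the exe
        model.Compile("MyPrecompiledSerializer", "MyPrecompiledSerializer.dll");
    }
}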

Related

Microsoft Script Control - Blocking scripts' access to the system?

I am developing a commercial VB.net WPF application that needs user generated scripts for controlling the application to be shared between users. The best way that I have come across of accomplishing this so far without writing my own parser is using the Microsoft Script Control.
It would appear that both VBScript and JScript code run through this control have access to WScript and, as a result, are too powerful to be shared between programmers and non-technical users, for obvious security reasons.
I have considered trying to filter out dangerous scripts with some kind of regex parsing or something but that just seems far too risky and easy to circumvent.
So, is there some way of using this control but blocking its access to the system so that it could be used for controlling only the objects that I give it? If not, could someone recommend a better way of doing this?
I do not particularly mind what language the script would be in at this stage, although having multiple options would be nice.
EDIT: I am basing my conclusion that the control is too powerful for this on the fact that the following JScript code successfully launches notepad when called using the .AddCode and .Run methods of the control.
function test() {
    var shell = new ActiveXObject("WScript.shell");
    shell.run("notepad.exe", 1);
}
Thanks for all the help,
Sam.
If you just need to kill the ActiveXObject feature, which is the entry point to the system, you can silently add a line to the code you give to the Script Control, like this for example:
ActiveXObject = null; // add this silently

function test() {
    var shell = new ActiveXObject("WScript.shell"); // this will now fail
    shell.run("notepad.exe", 1);
}
Of course, if you still need to give some functionality to your users, you will then need to expose some sort of API, using the AddObject function (see How To Use the AddObject Method of the Script Control), and the user would use it like this:
ActiveXObject = null; // add this silently

function test() {
    // this is a controlled method, because I have added a MyAPI named object
    // using AddObject, and this object has an OpenNotepad method.
    MyAPI.OpenNotepad();
}
PS: WScript is an object provided by the Windows Script Host, so it's not accessible from the Script Control.
PS2: This hack does not work in every language the Script Control supports. It works in JScript, but not in VBScript, for example.
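For completeness, the host side in C# looks roughly like this (a hedged sketch via the COM interop for "Microsoft Script Control 1.0"; MyApi and userCode are placeholders, and the interop class name can differ):
// add a COM reference to "Microsoft Script Control 1.0" (MSScriptControl)
var script = new MSScriptControl.ScriptControl(); // ScriptControlClass in some interop versions
script.Language = "JScript";
script.AllowUI = false;

// expose a controlled API object; its public members become callable as MyAPI.*
script.AddObject("MyAPI", new MyApi(), true);

// neutralize ActiveXObject before adding the user's code
script.AddCode("ActiveXObject = null;");
script.AddCode(userCode);

// invoke the user's entry point
script.ExecuteStatement("test()");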
From your question, it doesn't look like you gave any consideration to any of the open-source script engines out there. I would suggest that those open-source libraries could solve your sandbox problem much more easily than the Microsoft Script Control.
The way I understand it, your requirements are:
Must be able to pass objects from your program into the scripting environment, so the script can use those objects to interact with the host app.
Must not be allowed to do unsafe things like access files, launch applications, create COM objects, etc.
Given those requirements, you have quite a few options.
The first thing that comes to mind is Jint, an open-source JavaScript interpreter written in .NET that you can embed in your app. You can pass .NET objects to the script. By default, the script can actually access any class in the .NET Framework, but the script is sandboxed by default (using .NET's built-in Code Access Security), so the script can't do anything unsafe. (For example, it could use things like StringBuilder or Regex, but it would get an exception if it tried to use FileStream.) If you want, you can disable the .NET access entirely, but even with it enabled, the default sandbox would probably suit your needs.
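To give a feel for it, embedding Jint looks roughly like this (a sketch against the current Jint API with Engine/SetValue/Execute; older 1.x releases used a JintEngine class with different method names, and MyApi is an illustrative host object):
using Jint;

var engine = new Engine();

// expose only the objects you want scripts to touch
engine.SetValue("MyAPI", new MyApi());

// the script can call MyAPI.* but nothing you haven't handed it
engine.Execute("MyAPI.OpenNotepad();");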
If for some reason Jint doesn't meet your requirements, some of the other open-source JavaScript-for-.NET engines I can think of off the top of my head are Jurassic, IronJS, and JavaScript .NET.
If it matters to you, Jint, Jurassic, and IronJS are all available through NuGet.

How does the Composite C1 architecture work?

Can anyone give a high level description of what is going on in the Composite C1 core? In particular I am interested in knowing how the plugin architecture works and what the core components of the system are, i.e. when a request arrives, what happens in the architecture. The description doesn't have to be too verbose; just a list of steps and the classes involved.
Hopefully one of the core development team would enlighten me... and maybe publish some more API (hint hint more class documentation please).
From request to rendered page
The concrete path a request takes depends on the version of C1 you're using, since it was changed to use Routing in version 2.1.2. So let's see.
< 2.1.2
Composite.Core.WebClient.Renderings.RequestInterceptorHttpModule intercepts all incoming requests and figures out if the requested path corresponds to a valid C1 page. If it does, the URL is rewritten to the C1 page handler, ~/Renderers/Page.aspx.
>= 2.1.2
Composite.Core.Routing.Routes.Register() adds a C1 page route (Composite.Core.Routing.Pages.C1PageRoute) to the Routes collection that looks at the incoming path and figures out if it's a valid C1 page. If it is, it returns an instance of ~/Renderers/Page.aspx ready to be executed.
Okay, so now we have an instance of an IHttpHandler ready to make up the page to be returned to the client. The actual code for the IHttpHandler is easy to follow, since it's located in ~/Renderers/Page.aspx.cs.
OnPreInit
Here we're figuring out which page id and which language were requested, whether we're in preview mode or not, which data scope we're in, etc.
OnInit
Now we're fetching the content from each Content Placeholder of our page and executing any functions it may contain. It's done by calling Composite.Core.WebClient.Renderings.Page.PageRenderer.Render, passing the current page and our placeholders. Internally it will call the method ExecuteFunctions, which runs through the content and recursively resolves C1 function elements (<f:function />), executes them and replaces each element with the function's output. This is repeated until there are no more function elements in the content, in case the functions themselves output other functions.
Now the whole content is wrapped in an ASP.NET WebForms control and inserted into our WebForms page. Since C1 functions can return WebForms controls like UserControl etc., this is necessary for them to work correctly and trigger the WebForms event lifecycle.
And that's basically it. Rendering of a requested page is very simple and very extensible. For instance, there is an extension that enables the usage of MasterPages and simply hooks into this rendering flow very elegantly. And because we're using Routing to map which handler to use, it's also possible to forget about ~/Renderers/Page.aspx and just return a MvcHandler if you're an MVC fanatic.
API
Now, when it comes to the more core APIs, there are many, depending on what you want to do. But you can be pretty sure that, no matter what, the necessary ones are there to get the job done.
At the deep end we have the Data Layer, which most other APIs and facades are centered around. This means you can do most things working with the raw data instead of going through facades all the time. This is possible since most configuration of C1 is done by using its own data layer to store configuration.
The Composite C1 core group has yet to validate/refactor and document all the APIs in the system, and hence operates with the concept of 'a public API' and what can become an API when the demand is there. The latter is a pretty darn stable API, but without guarantees.
The public API documentation is online at http://api.composite.net/
Functions
Functions are a fundamental part of C1 and a technique to abstract logic from execution. Basically, everything that either performs an action or returns some data/string/values can be a candidate for a function. At the lowest level a function is a .NET class implementing the IFunction interface, but luckily there are many easier ways to work with it. Out of the box, C1 supports functions defined as XSLT templates, C# methods or SQL. There is also community support for writing functions using Razor or having ASP.NET UserControls (.ascx files) act as functions.
Since all functions are registered in C1 during system startup, we use Composite.Functions.FunctionFacade to execute whatever function we know the name of. Use GetFunction to get a reference to a function, and then Execute to execute it and get a return value. Functions can take parameters, which are passed as real .NET objects when executing a function. There is also full support for calling functions with XML markup using the <f:function /> element, meaning that editors, designers, template makers etc. can easily access a wealth of functionality without having to know how to write .NET code.
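In code that could look roughly like this (a sketch; "MySite.Demo.HelloWorld" is an illustrative function name, and the exact Execute overloads vary between C1 versions, so check the API reference for your build):
using Composite.Functions;

// look up a registered function by its full name (illustrative name)
IFunction function = FunctionFacade.GetFunction("MySite.Demo.HelloWorld");

// Execute<T> also has overloads that accept parameter name/value pairs
string result = FunctionFacade.Execute<string>(function);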
Read more about functions here: http://users.composite.net/C1/Functions.aspx, and about using e.g. Razor to make functions here: http://docs.composite.net/C1/ASP-NET/Razor-Functions.aspx
Globalization and Localization
C1 has full multi-language support in the core. Composite.Core.Localization.LocalizationFacade is used for managing the installed locales in the system: querying, adding and removing them. Locales can be any CultureInfo known to your system.
Composite.Core.ResourceSystem.StringResourceSystemFacade is used for getting strings at runtime that match the CultureInfo your request is running in. Use this instead of hardcoding strings on your pages or in your templates.
Read more about Localization here http://docs.composite.net/C1/HTML/C1-Localization.aspx
Global events
Composite.C1Console.Events.GlobalEventSystemFacade is important to know about if you need to keep track of when the system is shutting down, so you can make last-minute changes. Since C1 is highly multithreaded, it's easy to write extensions and modules for C1 that are multithreaded as well, taking advantage of multi-core systems and parallelization, and therefore it's also crucial to shut down your own threads in a proper manner. The GlobalEventSystemFacade helps you do that.
Startup events
If you write plug-ins, these can have a custom factory. Other code can use the ApplicationStartupAttribute attribute to get called by the Composite C1 core when the web app starts up.
Data events
You can subscribe to data add, edit and delete events (pre and post) using the static methods on Composite.Data.DataEvents<T>. To attach to these events when the system starts up, use the ApplicationStartupAttribute attribute.
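Putting the two together, a startup handler that hooks a data event could look like this (a sketch; the attribute and class/method pattern follow the documented convention, but the exact handler signature and args members are assumptions to verify against your C1 version):
using Composite.Core.Application;
using Composite.Data;
using Composite.Data.Types;

[ApplicationStartup]
public class MyStartupHandler
{
    // called by the C1 core before the system initializes
    public static void OnBeforeInitialize() { }

    // called once the system is up; a good place to attach data events
    public static void OnInitialized()
    {
        DataEvents<IPage>.OnBeforeAdd += (sender, args) =>
        {
            var page = (IPage)args.Data; // args.Data is assumed to expose the affected item
            // inspect or tweak the page before it is persisted
        };
    }
}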
Data
Composite.Core.Threading.ThreadDataManager is important if you're accessing the Data Layer outside of a corresponding C1 page request. This could be a custom handler that just has to feed the newest news as an RSS feed, or maybe you're writing a console application. In these cases, always remember to wrap your code that accesses the data like this:
using (Composite.Core.Threading.ThreadDataManager.EnsureInitialize())
{
    // Code that works with C1 data layer goes here
}
For accessing and manipulating data, it's recommended NOT to use the DataFacade class, but to wrap all code that gets, updates, deletes or adds data like this:
using (var data = new DataConnection())
{
    // Do things with data
}
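For example, a simple query against the built-in IPage type looks roughly like this (a sketch; Get<T> returns an IQueryable you can use with LINQ):
using System.Linq;
using Composite.Data;
using Composite.Data.Types;

using (var connection = new DataConnection())
{
    // fetch the titles of all pages visible in the current data scope
    var titles = connection.Get<IPage>()
                           .Select(p => p.Title)
                           .ToList();
}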
IO
When working with files and directories, it's important to use the C1 equivalents of .NET's File and Directory: Composite.Core.IO.C1File and Composite.Core.IO.C1Directory. This is because C1 can be hosted on Azure, where you might not have access to the filesystem in the same way as you have on a normal Windows Server. By using C1's File and Directory wrappers you can be sure that the code you write will be able to run on Azure as well.
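A quick example, assuming C1File/C1Directory mirror the static File/Directory APIs as described (the folder location is illustrative):
using System.Web.Hosting;
using Composite.Core.IO;

// resolve a folder under the website (illustrative location)
string folder = HostingEnvironment.MapPath("~/App_Data/MyModule");

// use the C1 wrappers instead of System.IO.File / System.IO.Directory
if (!C1Directory.Exists(folder))
{
    C1Directory.CreateDirectory(folder);
}

C1File.WriteAllText(System.IO.Path.Combine(folder, "log.txt"), "hello");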
C1 Console
The console is a whole subject in itself and has many, many APIs.
You can create your own trees using Composite.C1Console.Trees.TreeFacade or Composite.C1Console.Elements.ElementFacade and implementing a Composite.C1Console.Elements.Plugins.ElementProvider.IElementProvider.
You can use Composite.C1Console.Events.ConsoleMessageQueueFacade to send messages from the server to the client to make it do things like open a message box, refresh a tree, set focus on a specific element, open a new tab, etc.
Composite.C1Console.Workflow.WorkflowFacade is used for getting instances of specific workflows and interacting with them. Workflows are a fundamental part of C1 and are the way multi-step operations are defined and executed. This makes it possible to save the state of an operation, so that e.g. a 10-step wizard is persisted even if the server restarts or anything else unexpected happens. Workflows are built using Windows Workflow Foundation, so if you're familiar with that, you should feel right at home.
There is also a wealth of JavaScript facades and methods you can hook into when writing extensions for the Console. Much more than I could ever cover here, so I will refrain from even getting started on that subject.
composite.config
A fundamental part of C1 is providers; almost everything is made up of providers, even much of the core functionality. Everything in the console, from perspectives to trees, elements and actions, is fed into C1 with providers. All the standard functions, the data layer and all the widgets for use with the Function Call editor are fed into C1 with providers. All the localisation strings for use with the Resources, users and permissions, URL formatters etc. are providers as well.
Composite.Data.Plugins.DataProviderConfiguration
Here all providers that can respond to the methods on DataFacade (Get, Update, Delete, Add etc.) are registered. Every provider informs the system which interfaces it can interact with, and C1 makes sure to route all requests for specific interfaces to their respective data providers.
Composite.C1Console.Elements.Plugins.ElementProviderConfiguration
Here we're defining the perspectives and the trees inside the Console. All the standard perspectives you see when you start the Console for the first time are configured here; no magic or black box involved.
Composite.C1Console.Elements.Plugins.ElementActionProviderConfiguration
Action providers are able to add new menu items to all elements in the system, based on their EntityToken. This is very powerful when you want to add new functionality to existing content, like versioning, extranet security, custom cut/paste, and the list goes on.
Composite.C1Console.Security.Plugins.LoginProviderConfiguration
A LoginProvider is what the C1 console will use to authenticate a user and let you log in or not. Unfortunately this isn't very open, but with some reflection you should be all set.
Composite.Functions.Plugins.FunctionProviderConfiguration
Composite C1 will use all the registered FunctionProviders to populate its internal list of functions on system startup.
Composite.Functions.Plugins.WidgetFunctionProviderConfiguration
WidgetProviders are used in things like the Function Call Editor or in Forms Markup to render custom UI for selecting data.
Composite.Functions.Plugins.XslExtensionsProviderConfiguration
Custom extensions for use in XSLT templates are registered here.
And then we have a few sections for pure configuration, like caching or what to parallelize, but they're not as interesting as the providers.
Defining and using sections
Sections in composite.config and other related .config files are completely standard .NET configuration and obey the rules thereof. That means that to be able to use a custom element, like e.g. Composite.Functions.Plugins.WidgetFunctionProviderConfiguration, it has to be defined as a section. A section has a name and refers to a type that inherits from System.Configuration.ConfigurationSection. Composite uses the Microsoft Enterprise Library for handling most of these common things like configuration, logging and validation, and therefore all Composite's sections inherit from Microsoft.Practices.EnterpriseLibrary.Common.Configuration.SerializableConfigurationSection. Now, this type just has to have properties for all the elements we want to be able to define in the .config file, and .NET will automatically make sure to wire things up for us.
If you want to access the configuration for a particular section, you would call Composite.Core.Configuration.ConfigurationServices.ConfigurationSource.GetSection(".. section name"), cast it to your specific type, and you're good to go.
Adding extra properties to already defined sections
Normally .NET would complain if you write elements or attributes in the .config files that aren't recognized by the type responsible for the section or for the element. This makes it hard to write a truly flexible module system where external authors can add specific configuration options to their providers, and therefore we have the notion of an Assembler. It's a ConfigurationElement class with a Microsoft.Practices.EnterpriseLibrary.Common.Configuration.ObjectBuilder.AssemblerAttribute attribute assigned to it, which in turn takes a Microsoft.Practices.EnterpriseLibrary.Common.Configuration.ObjectBuilder.IAssembler implementation as its argument; that implementation is responsible for reading these custom attributes and values from the element in the .config file and emitting a usable object from them. This way .NET won't complain about an invalid .config file, since we inject a ConfigurationElement object that has properties for all our custom attributes, and we can get hold of them when reading the configuration through the IAssembler.
Slides
Some overview slides can be found at these links:
Overview
Extensibility points
Page request handling
Function system
Data system
Data type system
Inspiration and examples
The C1Contrib project on GitHub is a very good introduction to how to interact with the different parts of C1. It's a collection of small packages that can be used as they are, or for inspiration. There are packages that manipulate dynamic types to enable interface inheritance. Other packages use the JavaScript API in the console, while others show how to make Function Providers and Trees, and how to hook commands onto existing elements. There are even examples of how to manipulate the SOAP web service communication going on between client and server so you can make it do things the way you want. And the list goes on.

WCF: TCP binding to HTTP binding? (for Silverlight)

I programmed a network application with C#/WPF and used WCF with a TCP binding.
I used this Tutorial: http://www.codeproject.com/KB/IP/WCFWPFChatRoot.aspx.
Now I want a web client version. I tried to make the web client with Silverlight, but if I add the service reference, the compiler tells me that Silverlight does not support TCP bindings.
Is it possible to change the service to an HTTP binding without writing a completely new service?
EDIT:
Maybe I can keep the TCP binding. Silverlight 4 supports TCP binding (without security and sessions):
NetTcpBinding tcpBinding = new NetTcpBinding(SecurityMode.None, true);
I already use SecurityMode.None, but when I set the session flag from true to false I still get warnings in VS...
Would my service work in no-session mode? I use a callback interface. Is that session handling in WCF?
Absolutely!
One of the advantages of WCF is that the different components of your service are (mostly) independent. You can change the binding without changing the implementation, or vice versa, and in most cases be just fine. Issues may arise if you are using features that are specific to a binding, but in most cases there will not be any problems.
In this case, update the configuration and you should be fine.
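For a plain request/reply contract, that can be as simple as exposing an additional HTTP endpoint next to the existing TCP one (a sketch; ChatService/IChatService and the addresses are placeholders, and note that if your contract uses a callback interface, BasicHttpBinding won't support duplex, so Silverlight would need its polling duplex binding instead):
using System.ServiceModel;

var host = new ServiceHost(typeof(ChatService));

// existing endpoint for the WPF client
host.AddServiceEndpoint(typeof(IChatService),
    new NetTcpBinding(SecurityMode.None),
    "net.tcp://localhost:8000/chat");

// additional endpoint that a Silverlight client can consume
host.AddServiceEndpoint(typeof(IChatService),
    new BasicHttpBinding(),
    "http://localhost:8001/chat");

host.Open();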

WCF from Silverlight without using Add Service Reference

David Betz describes in his article how to create a reference to a WCF service without using the "Add Service Reference" option:
http://www.netfxharmonics.com/2008/11/Understanding-WCF-Services-in-Silverlight-2
Once the WCF service is created, these are the statements within the Silverlight client:
BasicHttpBinding basicHttpBinding = new BasicHttpBinding();
EndpointAddress endpointAddress = new EndpointAddress("http://localhost:1003/Person.svc");
IPersonService personService = new ChannelFactory<IPersonService>(basicHttpBinding, endpointAddress).CreateChannel();
...
How does one reference the types (such as the IPersonService interface) created in WCF from Silverlight when I do not use "Add Service Reference" to build proxies?
The idea is to reference the assemblies that contain the WCF data contracts in the Silverlight application, and to do that you need to fool VS into thinking the assembly is a Silverlight assembly. He describes this in detail here:
http://www.netfxharmonics.com/2008/12/Reusing-NET-Assemblies-in-Silverlight
and it's not so easy; here is what needs to be done:
Just use the same ILDasm/Edit/ILAsm procedure already mentioned to tell the assembly to use the appropriate Silverlight assemblies instead of the .NET assemblies. This is an extremely simple procedure consisting of nothing more than a replace, a procedure that could easily be automated with very minimal effort. It shouldn't take you much time at all to write a simple .NET application to do this for you. It would just be a simple .NET to Silverlight converter and validator (to test for assemblies not supported in Silverlight). Put that application in your Post Build Events (one of the top 5 greatest features of Visual Studio!) and you're done. No special binary hex value searching necessary. All you're doing is changing two well documented settings (the public key token and version).
The second solution is a file-level one: you use the "Add as Link" option on the files that contain your required data contract implementations in the Silverlight project, and make sure they only contain types that compile under Silverlight and don't reference a lot of external assemblies; those conditions are usually met for WCF service and data contracts.
I could write more, but it would just be copy-paste from that link.
You also have to split every method declaration in your IPersonService according to the async pattern (BeginXXX/EndXXX), since Silverlight supports only asynchronous WCF calls (even on background threads).
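The split looks roughly like this (a sketch; GetPerson and Person are illustrative names following the article's Person.svc example):
using System;
using System.ServiceModel;

[ServiceContract]
public interface IPersonService
{
    // each operation becomes a Begin/End pair; together they form one operation
    [OperationContract(AsyncPattern = true)]
    IAsyncResult BeginGetPerson(int personId, AsyncCallback callback, object state);

    Person EndGetPerson(IAsyncResult result);
}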
To help with this, you can add a service reference, copy the generated IPersonService (all methods will already be split) from Reference.cs, and then remove the reference.
However, if your service contract changes often, you have to repeat that procedure every time, at which point I would say it's easier to just use the Add Service Reference feature rather than sharing the contract with your app server.
One more thing worth pointing out: a WCF service often needs namespaces and richer .NET support than the Silverlight subset offers, so you should have real reasons before referencing the Silverlight subset from your WCF service (or service library). There are also the traditional approaches based on Add Service Reference, which are covered well in other articles.

Gratuitous use of System.Runtime.Serialization attributes?

Is there any cost/drawback (apart from typing too much) to adorning a class with System.Runtime.Serialization attributes (like [DataContract]) such that it can be used locally as a direct reference to a desktop Client project or as a type for a WCF service? The goal here is to write a data-tier class that can be used in both rich client (WPF) and Web scenarios. My data classes will be in a project that is separate from Client and WCF (*.svc code-behind) code. Is this a valid attempt to reuse code?
Adorning a class, property, or method does not incur any cost, except for the time it takes to write the attribute. The attribute will be compiled into the metadata of the type and is then used by another component to implement additional functionality.
The only drawback I can see is the cost of including attributes in the assembly. They have very little performance impact unless they are used.
I would say none, but it does make the DLL larger and spreads the RTTI in the assembly out a bit, which, especially if you push a class over a read block boundary, could slow down assembly load (causing a couple of extra blocks to be read that wouldn't have been otherwise). These differences are usually only noticeable in cold-start testing, however.
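For reference, the kind of shared class being discussed is simply this (illustrative names); the attributes sit inert until a DataContractSerializer, e.g. under WCF, actually serializes the type:
using System.Runtime.Serialization;

// lives in the shared data-tier assembly, referenced by both the WPF client
// project and the WCF service project
[DataContract]
public class CustomerDto
{
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public string Name { get; set; }
}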
