I have a project that loads third-party modules (in the form of DLLs) and allows them to execute arbitrary code. The application loading the modules requires elevated privileges, and so too will the modules.
The modules are all made in-house for this project, so the risk is relatively low at the moment. However, in the future there might be outside modules that need to be loaded.
The modules have no need to modify, access, or do anything else with any of the drives, so I would like to be able to disable any form of I/O in the modules. I haven't figured out any way to do this, or even where to start.
The dependency injection is done with MEF, specifically using the Prism design patterns.
You should have a look at these questions:
How can I use CAS in .NET 4 to lock down my MEF extensions?
Looking for a practical approach to sandboxing .NET plugins
As well as the linked http://msdn.microsoft.com/en-us/library/bb763046.aspx
The short story is that if your application is running in full trust, then code access security attributes won't prevent add-ins from doing anything they like. You'd need to load the add-ins into a security-limited (sandboxed) AppDomain and access them via cross-AppDomain remoting. To do that, see "Sandboxing" here: http://msdn.microsoft.com/en-us/magazine/ee677170.aspx
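For illustration, here's a minimal sketch of that sandboxing approach, assuming a hypothetical IAddin interface and add-in assembly name. The key point is that the grant set contains only Execution permission, so no FileIOPermission is available to the add-in:

    using System;
    using System.Security;
    using System.Security.Permissions;

    // Shared contract; in practice this lives in an assembly
    // referenced by both the host and the add-ins.
    public interface IAddin { void Execute(); }

    class SandboxHost
    {
        static void Main()
        {
            // Grant only execution permission -- no FileIOPermission,
            // so the add-in cannot touch the file system.
            var permissions = new PermissionSet(PermissionState.None);
            permissions.AddPermission(
                new SecurityPermission(SecurityPermissionFlag.Execution));

            var setup = new AppDomainSetup { ApplicationBase = @"C:\Addins" };
            AppDomain sandbox = AppDomain.CreateDomain(
                "AddinSandbox", null, setup, permissions);

            // The add-in type must derive from MarshalByRefObject so that
            // calls cross the AppDomain boundary via remoting rather than
            // loading the assembly into the full-trust domain.
            var addin = (IAddin)sandbox.CreateInstanceAndUnwrap(
                "MyAddinAssembly", "MyAddinAssembly.AddinImpl");
            addin.Execute();

            AppDomain.Unload(sandbox);
        }
    }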
Consider the case where you have a basic GUI that must be extensible by plugins that are not known when the generate run of the main GUI is performed. Contributable plugins may consist of a manifest, resources, localization, and some code that is executable in the GUI environment and can provide custom widgets.
From what I can see at the moment, it could be done by:
Letting a plugin developer build against the ordinary source, generating a part for the plugin, and then manually registering a qx.io.part.Part with the generated parts in the GUI running on the non-developer side.
Just loading a combined source JS for that plugin, containing the resources, and loading them manually via eval.
I'd personally prefer the first one, as it already includes everything that might be used by a plugin. But it uses a method that is marked as internal.
Are there any experiences with that? Are there other, more elegant ways to achieve that?
Usually I work with Zend Framework, and now I'm developing an application on CakePHP and trying to understand the framework, in particular how to modularize an application.
In ZF there are modules. Every logical subdivision of an application can (and should) be packed to a separate module. It allows to keep the application structure clear.
There are no modules in CakePHP -- instead, the framework provides plugins, and at first I thought plugins were the "modules" of CakePHP. But a plugin in CakePHP seems to be something more than a ZF module -- "behaving much like it would if it were an application on its own". Plugins should apparently be used for bigger things like a blog or a forum, which have the characteristics of an independent application. So logical units like User, Order, Payment, or CustomerFeedback, which only make sense within the application, are probably not suitable as plugins.
What is the recommended way in CakePHP to separate an application into small, well-manageable logical parts and so build a modularized application?
CakePHP plugins and ZF modules have quite a bit in common: both can contain MVC logic, library code, configuration, assets, tests, etc., and both can pretty much behave like applications of their own.
However, plugins don't need that "whole application on its own" level of complexity to be the recommended way of organizing logic. They can just as well be used for less complex things, like splitting your application into logical (and reusable) units.
A payment plugin, a user-management plugin, an order-processing plugin, a feedback plugin: that all makes sense, and it's neither wrong nor discouraged to use plugins in this manner.
If you need some inspiration, check out Croogo for example, it's a CakePHP based CMS that manages its different parts using plugins.
Just an R&D question. We need to develop an application, runnable in a browser, that can perform some system checks to gather support information to be emailed to us. These checks will include basic system information, but will also need to scan the filesystem and pull out version information about various DLLs, executables, and .NET assemblies that might be installed. The idea is that we can direct a client to a page and have the application gather the relevant information needed for support, and potentially even populate some database fields. We need it to have as small a footprint as possible.
I've worked with ActiveX before and know it is capable of these things, but particularly on modern systems, security is a nightmare to get around, with a lot of people blocking ActiveX altogether. Is Silverlight easier to deliver to clients? Does it have a lighter footprint? Is it even capable of doing these things?
Silverlight has access to isolated storage, but I don't think it can do what you are looking for (I may be wrong). As for footprint, if I remember correctly, the runtime is reasonably small, and .xap packages are limited to 4 MB.
Silverlight out-of-browser has access to the file system. http://msdn.microsoft.com/en-us/library/dd550721(v=vs.95).aspx#special_features_for_outofbrowser_applications
If you intend to run your app in the browser, you will still have to configure the trust as if it were out-of-browser (OOB). http://msdn.microsoft.com/en-us/library/gg192793(v=vs.95).aspx
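To make that concrete, here's a hedged sketch of what an elevated-trust out-of-browser app can do; the folder and search pattern are illustrative, and elevated trust only grants access to the user's My* folders:

    using System;
    using System.IO;
    using System.Windows;

    public static class SupportScan
    {
        public static void ScanMyDocuments()
        {
            // File-system access requires running out-of-browser with
            // elevated permissions (configured in the OOB settings).
            if (!Application.Current.IsRunningOutOfBrowser ||
                !Application.Current.HasElevatedPermissions)
                return;

            string docs = Environment.GetFolderPath(
                Environment.SpecialFolder.MyDocuments);
            foreach (string file in Directory.GetFiles(docs, "*.dll"))
            {
                // ...collect whatever details the support report needs...
            }
        }
    }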
However, iTunes has a neat way of doing something somewhat related. It has a custom protocol (itms://) that allows the browser to invoke a client-side program (iTunes), and you can embed HTML in a web page that passes parameters as command-line arguments to that app. The website also knows whether iTunes is installed via a cookie. With this in mind, you might be able to encourage your users to install some small app that sets up a custom protocol on install. You could pass command-line parameters to it from the web, and the app would push information from the client back to the server.
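A hedged sketch of registering such a custom URI scheme (the hypothetical myapp://) on Windows; the scheme name and paths are illustrative, and HKEY_CURRENT_USER\Software\Classes is used so no admin rights are needed:

    using Microsoft.Win32;

    public static class ProtocolInstaller
    {
        public static void Register(string exePath)
        {
            using (RegistryKey key = Registry.CurrentUser.CreateSubKey(
                @"Software\Classes\myapp"))
            {
                key.SetValue("", "URL:MyApp Protocol");
                key.SetValue("URL Protocol", "");
                using (RegistryKey cmd = key.CreateSubKey(@"shell\open\command"))
                {
                    // "%1" passes the full myapp://... URL to the app
                    // as a command-line argument.
                    cmd.SetValue("", "\"" + exePath + "\" \"%1\"");
                }
            }
        }
    }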
To create a real-time experience, you could use sockets + more JavaScript to update the page with the info you just got off the machine.
HTH,
Silverlight runs in a pretty restricted silo and can't do a lot of low level things - such as checking the file system. So I would say it does not fit your use case, unfortunately.
I would like to know what are the industry standards or suggestion on how are you doing at your end for following situation.
I am creating multiple Silverlight projects which get published on different dates. All these projects use various pieces of shared code (common DLLs), used on either the client side or the server side. My question is: if the shared code changes, would you recompile all the affected projects and release them, or recompile only when you are making a change to the actual code which uses the shared component?
For now, on the client side, we create an assembly-reference folder in each Silverlight project and put the latest required DLLs in it. That way the XAP itself contains all the required files, it does not conflict with other projects, and it works fine. With this approach I do not rebuild any other client-side code just because a common DLL changed. If a common-DLL change is required for multiple projects, we drop the latest copy into all affected projects, build them, and distribute them.
On the other hand, on the server side (Domain Services using EF), all the service code sits under the bin folder of the web site. So if I make a change to a common DLL, then not only do I need to publish the latest common DLL for the current project to work, I also need to recompile all the other services to use the new DLL.
Would like to know your opinions and suggestions.
Thanks
There are two approaches possible:
Add Common Code to the solution and have a project reference
Get the build process to build to a folder and reference from there
I prefer the first option. I always build and debug using the latest code and do not have to worry about stale references. I have used the second approach in the past; it is messy and can waste your team's time chasing bugs that do not exist (an old version being referenced). In fact, I remember Visual Studio sometimes would not pick up a later version even when one was available.
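For reference, the first option is just a normal project reference in the .csproj (paths, name, and GUID illustrative):

    <ItemGroup>
      <!-- Referencing the common code as a project means every build
           compiles against the latest sources, never a stale DLL. -->
      <ProjectReference Include="..\Common\Common.csproj">
        <Project>{11111111-2222-3333-4444-555555555555}</Project>
        <Name>Common</Name>
      </ProjectReference>
    </ItemGroup>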
Another alternative for your Silverlight projects would be to use MEF to dynamically download a XAP file containing the common libraries. Then if the common libraries change, you could publish an updated "CommonLibraries.xap", and your Silverlight clients can pick up the refresh independently of the rest of the Silverlight application.
You could follow the same approach with other projects that use these common libraries: the applications load them dynamically, so the libraries can be refreshed independently.
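A sketch of that approach using MEF's DeploymentCatalog in Silverlight (the XAP name is an assumption):

    using System.ComponentModel.Composition.Hosting;

    public class CommonLibraryLoader
    {
        private readonly AggregateCatalog _catalog = new AggregateCatalog();

        public void LoadCommonLibraries()
        {
            // Points at a separately published XAP containing the
            // shared assemblies.
            var deploymentCatalog = new DeploymentCatalog("CommonLibraries.xap");
            _catalog.Catalogs.Add(deploymentCatalog);
            CompositionHost.Initialize(_catalog);

            // Downloads asynchronously; recomposition picks up the new
            // parts when the download completes.
            deploymentCatalog.DownloadAsync();
        }
    }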
If possible, consider consuming the "common library" code via WCF services.
Currently I have a working Silverlight application that uses .Net RIA Services.
Its structure:
Client-side
Application.Client.UI.dll (XAMLs and basic UI stuff)
Application.Client.BL.dll (Contains the Link to RIA and most of the business logic)
Server-side
Application.Server.Data.dll (server-side DLL that holds the entity model and its generated domain service)
Application.Server.Web.dll (only the ASP.NET hosting container, which references Application.Server.Data.dll)
I placed most of the business logic on the client side (Application.Client.BL.dll) for a better user experience (fast reactions) and to free up server resources. My challenge now is to re-use this client-side DLL, including its RIA data-access capabilities, in a server-side Windows service. I'm wondering, is that possible at all? Is Application.Client.BL.dll still able to consume the existing RIA service, or does that DLL require the Silverlight runtime to identify/locate its service target, and will it therefore not work anywhere else?
Curious to hear your answers.
You really shouldn't put any business logic on the client; the guys in security and/or architecture will hate you for it ;-). Furthermore, you can't use Silverlight assemblies in ASP.NET or desktop projects and vice versa. If memory serves correctly, Silverlight uses an entirely different CLR altogether.
I encountered similar needs when working with Compact Framework assemblies that I also wanted to compile for the full framework. I'll describe how I would work around this scenario.
If there are any issues referencing the Silverlight assembly, consider building two projects as follows:
Project #1 would be your Silverlight library, and should contain all the source files you want to use on the client.
Project #2 would be your Windows Service. Instead of including source files directly, use "Add Existing Item", find the original source file in project #1, and then (and this is the magic) drop down the Add button and choose "Add as Link".
By including the source file as a link, you retain the ability to maintain your source code in one location, but gain the ability to compile your code for multiple frameworks. As long as the code relies only on assemblies available in both the Silverlight framework and the full .NET Framework, then you're money.
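For what it's worth, "Add as Link" just produces an entry like this in the consuming project's .csproj (file name and path illustrative):

    <ItemGroup>
      <!-- The file stays in project #1; project #2 compiles it in place. -->
      <Compile Include="..\Application.Client.BL\BusinessRules.cs">
        <Link>BusinessRules.cs</Link>
      </Compile>
    </ItemGroup>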
Now, regardless of whether you choose a multi-project approach, know that domain context classes have additional constructors that allow you to specify contextual information, such as the URL, for the corresponding domain service. I use the following code in one application to construct a domain context for a domain service that provides personnel data:
var context = new PersonnelDomainContext(
    new Uri(ConfigurationManager.AppSettings["PersonnelServiceUrl"]));
In this case, the URL looks something like:
http://website-url/Services/Hyphenated-Namespace-PersonnelDomainService.svc
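For completeness, the AppSettings entry read above would live in the Windows service's app.config, along these lines:

    <appSettings>
      <add key="PersonnelServiceUrl"
           value="http://website-url/Services/Hyphenated-Namespace-PersonnelDomainService.svc" />
    </appSettings>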
Of course, when writing a Windows service, nothing is stopping you from referencing the server-side domain service (not context) assembly directly. With the domain service in hand, you can instantiate a service instance without all the additional configuration and without the additional XML network payload. There are trade-offs to this approach, such as forfeiting centralized configuration management (connection strings, for example), but depending on your circumstances, you may find the trade-offs to be worth it.
Happy coding!
Have you considered using fork-reuse? Take a look at:
http://sharednow.blogspot.com/2011/05/its-not-just-reuse.html