Where have my config settings gone? - wpf

After a long and complicated history I arrived at code very similar to this, and it has worked fine for months. Yesterday I changed a great many things, and now this code no longer works.
public void OpenConfig(string configDir)
{
    if (string.IsNullOrWhiteSpace(configDir))
    {
        configDir = AppDomain.CurrentDomain.BaseDirectory;
    }
    var sharedConfigPath = Path.Combine(configDir, ConfigFileName);
    var map = new ExeConfigurationFileMap { ExeConfigFilename = sharedConfigPath };
    sharedConfig = ConfigurationManager.OpenMappedExeConfiguration(map, ConfigurationUserLevel.None);
    logger.Trace("'{0}' set 'SharedConfigPath' to '{1}'.", GetType().Name, sharedConfig.FilePath);
}
Then, I get config values like this:
ServicePollingInterval = GetIntegerAppSetting("ServicePollingInterval"),
where
private static int GetIntegerAppSetting(string settingName, bool throwOnMissing = false)
{
    string setting = null;
    var settingTag = sharedConfig.AppSettings.Settings[settingName];
    if (settingTag != null)
    {
        setting = settingTag.Value;
    }
    if (setting == null)
    {
        if (throwOnMissing)
        {
            logger.Trace("App Setting '{0}' not configured. Throwing an exception.", settingName);
            throw new EalsConfigurationException(settingName, "App Setting not configured.");
        }
        logger.Trace("App Setting '{0}' not configured. Defaulting to -1.", settingName);
        return -1;
    }
    int i;
    if (!int.TryParse(setting, out i))
    {
        logger.Trace("App Setting '{0}' not valid as Integer. Throwing an exception.", settingName);
        throw new EalsConfigurationException(setting, "App Setting not valid as Integer.");
    }
    return i;
}
But now, basically overnight, the call var settingTag = sharedConfig.AppSettings.Settings[settingName]; returns null, because there are no appSettings items in that collection.
I have been working on this project for months, and I put this config code in near the beginning because I had several executables running in the same folder and wanted them all to use the same config. I am really stumped (though not surprised), since I made no memorable changes to the config code.
Can anyone see what I might have screwed up, guess at what external influences I may have changed, or think of anything that could suddenly cause all the sections of this Configuration object to be empty?
One suspicion I have, but cannot trace, is a change in which user the app runs as. It's complicated: I have a WCF service hosted in a Windows Service, consumed by a WPF application.

Trial and error has brought some sort of answer to light. When coding and debugging with separate WCF library, host, and client projects, you need the same config settings in all three projects. Sometimes VS2013 (I don't know about other versions) auto-hosts the WCF library in some unknown process somewhere, and for this reason you need a copy of app.config in the library project, even though it is not an executable.
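A quick sanity check for this kind of problem is to log whether the mapped file actually exists from the point of view of whichever process is running the code; a minimal sketch reusing the sharedConfig field and logger from above (the helper name is just illustrative):
// Diagnostic sketch: confirm which file the Configuration object resolved to
// and whether it actually exists on disk for the current process.
private void DumpConfigDiagnostics()
{
    logger.Trace("BaseDirectory: '{0}'", AppDomain.CurrentDomain.BaseDirectory);
    logger.Trace("Mapped config file: '{0}', HasFile: {1}", sharedConfig.FilePath, sharedConfig.HasFile);
    logger.Trace("AppSettings count: {0}", sharedConfig.AppSettings.Settings.Count);
}
If HasFile comes back false, the path is being resolved against a different BaseDirectory than expected, which is exactly what happens when something else (like the auto-host) is the hosting process.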

Authenticate user in WinForms (Nothing to do with ASP.Net)

Note: Cross-posted to ServerFault, based on comments.
Intro
I need to password protect some actions in my application, such as loading/saving files, clicking check-boxes, etc. This is a standard C# .Net 4.0, WinForms application which will run on Windows 7 in a corporate network.
I was about to roll my own very basic system (read obfuscation with wide open backdoors) with a text file of users/passwords/permissions (hashed and salted) until after some searching I found what looks like a tantalizingly simple approach, but I'm having trouble finding a good tutorial on Roles that isn't about ASP.NET.
Question
So does anyone know of one or more tutorials that show me how to:
Create a Windows User/Group and give that User/Group a Role or Permission.
Note that I'm testing this from my company's networked laptop, but will deploy it on the customer's corporate network (Not sure if this is an issue, or how tricky this will get).
Create a WinForms/console app sample with even just a single method that prints "Hello World" if I'm authenticated, or throws an exception if I'm not?
I've never done Network Admin or anything related and I keep reading about Active Directory and Local Users Vs Networked Users... I was hoping for an approach where I could build to an Interface and just ask Windows if the current user has permission ABC and not care too much about how Windows figured that out. Then I can make a concrete implementation for each Local/Network/ActiveDirectory/etc. use case as required (or if required... as I don't even know that right now).
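In other words, I'm picturing something like this hypothetical interface (the names are mine, purely illustrative), with one concrete implementation per environment:
// Hypothetical abstraction: the app only ever asks "does the current user
// have permission X?" and doesn't care how Windows answers that.
public interface IPermissionChecker
{
    bool HasPermission(string permissionName);
}

// One possible concrete implementation backed by Windows group membership.
public class WindowsGroupPermissionChecker : IPermissionChecker
{
    public bool HasPermission(string permissionName)
    {
        // Convention (an assumption for this sketch): permission names map 1:1 to Windows groups.
        var principal = new System.Security.Principal.WindowsPrincipal(
            System.Security.Principal.WindowsIdentity.GetCurrent());
        return principal.IsInRole(permissionName);
    }
}
The WinForms code would then only ever call HasPermission and never care whether the answer came from local groups, AD, or something else.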
Background - read if interested, but not required to answer question
Just to make sure I'm going in the right direction here, basically I need/want to test this on my development PC to make sure it's going to have a good end-user experience for my customer. The problem is that currently they run an Auto-login script for each computer that runs my application and there are several different operators that use my application throughout the day. The customer wants password protection on certain features of my app and only provide that to certain operators. I have no problem fitting this in, as I've expected the request for a while, I just haven't ever programmed authentication before.
I think it's worthwhile to convince my customer to give each operator their own network account and assign whatever permissions they want to that operator or group, in case they need to fire somebody, change permissions, etc. It also means I just open up several options for them and they can group those permissions however they see fit based on internal corporate policies, which I really shouldn't have to worry about (but will be if I have to roll my own, as their IT department knows almost nothing of my application).
From what I can tell it also makes my life a lot easier by not having to deal with hashing passwords and encryption, etc. and just handle which Role is required to click this or that button.
First of all, you'd have to determine whether you really want simple role-based authentication (you may want to read: http://lostechies.com/derickbailey/2011/05/24/dont-do-role-based-authorization-checks-do-activity-based-checks/).
If you're sure it's absolutely sufficient, you're already on the right track with the SO link you provided in your question. It's kind of confusing that there is no support for 'roles' by default in Windows, but there are groups. Groups can be local or remote (e.g. ActiveDirectory), so an admin could assign users to certain groups that are specific to your application (for an example look here: http://msdn.microsoft.com/en-us/library/ms731200(v=vs.110).aspx).
One key point: you have to prepare your application's central principal, i.e. fill it with the roles supported for the current user.
At the very startup of your application, you check the current Windows user and set the application-wide principal and role(s). This may look like this (just a very simple example):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security;
using System.Security.Principal;
using System.Text;
using System.Threading;

namespace WindowsPrincipalTrial
{
    public class Program
    {
        // you could also move these definitions to a config file
        private static IDictionary<string, string> _groupRoleMappings = new Dictionary<string, string>()
        {
            { "MYAPPUSERGRP", MyRoles.Standard },
            { "MYAPPSUPPORTGRP", MyRoles.Extended },
            { "MYAPPADMINGRP", MyRoles.Admin },
        };

        private static void Main(string[] args)
        {
            var windowsId = WindowsIdentity.GetCurrent();
            if (windowsId != null)
            {
                var allRoleNames = getGroupCorrespondingRoles(windowsId);
                var newPrincipal = new GenericPrincipal(windowsId, allRoleNames);
                Thread.CurrentPrincipal = newPrincipal;
            }
            else
            {
                throw new NotSupportedException("There must be a logged on Windows User.");
            }
        }

        private static string[] getGroupCorrespondingRoles(WindowsIdentity id)
        {
            // you could also do this more elegantly with LINQ
            var allMappedRoleNames = new List<string>();
            string roleName;
            foreach (var grp in id.Groups)
            {
                var groupName = grp.Translate(typeof(NTAccount)).Value.ToUpper();
                if (_groupRoleMappings.TryGetValue(groupName, out roleName))
                {
                    allMappedRoleNames.Add(roleName);
                }
            }
            return allMappedRoleNames.ToArray();
        }
    }

    public static class MyRoles
    {
        public const string Standard = "standard_role";
        public const string Extended = "extended_role";
        public const string Admin = "admin_role";
    }
}
Then your Application-Principal is set up.
Now you could check access in your code like this:
public void DoSomethingSpecial()
{
    if (Thread.CurrentPrincipal.IsInRole(MyRoles.Extended))
    {
        // do your stuff
    }
    else
    {
        // maybe display an error
    }
}
Or more drastically:
public void DoSomethingCritical()
{
    var adminPermission = new PrincipalPermission(null, MyRoles.Admin);
    adminPermission.Demand();
    // do stuff
}
which is even possible declaratively, as known from ASP.NET:
[PrincipalPermission(SecurityAction.Demand, Role = MyRoles.Admin)]
public void DoSomethingMoreCritical()
{
    // do stuff
}
The ugly thing about the latter two examples is that they throw exceptions when the required role isn't present.
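If you'd rather get a boolean back than catch an exception at those call sites, a small helper along these lines would do (just a sketch reusing the PrincipalPermission shown above):
// Sketch: convert the exception-based demand into a boolean check.
private static bool CurrentUserHasRole(string role)
{
    try
    {
        new PrincipalPermission(null, role).Demand();
        return true;
    }
    catch (SecurityException)
    {
        return false;
    }
}
In practice, Thread.CurrentPrincipal.IsInRole, as in DoSomethingSpecial above, is the more direct route; the helper is only worth it if you want to keep the permission-demand style everywhere.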
The mapping between roles and groups has to be done quite early in your app's startup, according to the systems you want to use (local groups, AD groups, LDAP groups, etc.).
If, however, you prefer authorization based on actions rather than roles after all, have a look at Windows Identity Foundation and claims-based authorization! There are already some ready-to-use frameworks out there (e.g. https://github.com/thinktecture/Thinktecture.IdentityModel).
UPDATE:
When it comes to activity-based, and thereby claims-based, authorization, I'll sketch briefly how you could achieve it using Thinktecture's IdentityModel.
Generally, that approach still uses roles internally, but puts a kind of translation layer in between. Thinktecture already encapsulates many of the things needed. Authorization checks in code are then done via claim permissions, which are technically a kind of request for access to a certain resource. For the sake of simplicity I limit my example to actions only, using one single default resource (since ClaimPermission doesn't accept an empty resource).
If you want to use action#resource pairs, you'd have to modify the code accordingly.
First you need a ClaimsAuthorizationManager:
public class MyClaimsAuthorizationManager : ClaimsAuthorizationManager
{
    private IActivityRoleMapper _actionToRolesMapper;

    public MyClaimsAuthorizationManager(IActivityRoleMapper mapper)
    {
        _actionToRolesMapper = mapper;
    }

    public override bool CheckAccess(AuthorizationContext context)
    {
        if (context == null)
        {
            throw new ArgumentNullException("context");
        }
        try
        {
            var action = getActionNameFromAuthorizationContext(context);
            var sufficientRoles = _actionToRolesMapper.GetRolesForAction(action)
                .Select(roleName => roleName.ToUpper());
            var principal = context.Principal;
            return CheckAccessInternal(sufficientRoles, principal);
        }
        catch (Exception ex)
        {
            return false;
        }
    }

    protected virtual bool CheckAccessInternal(IEnumerable<string> roleNamesInUpperCase, IClaimsPrincipal principal)
    {
        var result = principal.Identities.Any(identity =>
            identity.Claims
                .Where(claim => claim.ClaimType.Equals(identity.RoleClaimType))
                .Select(roleClaim => roleClaim.Value.ToUpper())
                .Any(roleName => roleNamesInUpperCase.Contains(roleName)));
        return result;
    }

    // I'm ignoring resources here; modify this if you need 'em
    private string getActionNameFromAuthorizationContext(AuthorizationContext context)
    {
        return context.Action
            .Where(claim => claim.ClaimType.Equals(ClaimPermission.ActionType))
            .Select(claim => claim.Value)
            .FirstOrDefault();
    }
}
As you may have guessed, IActivityRoleMapper is an interface for a class that returns the names of all roles that include permission for a given action.
This class is very individual, and I guess you'll find your own way of implementing it, because it's not the point here. You could do it by hardcoding, by loading from XML, or from a database. You would also have to change/extend it if you wanted to use action#resource pairs for permission requests.
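For completeness, a minimal hardcoded sketch of what that could look like (the interface shape is inferred from its usage in CheckAccess above, and the action/role pairs are placeholders):
// Inferred interface: returns the roles that are allowed to perform an action.
public interface IActivityRoleMapper
{
    IEnumerable<string> GetRolesForAction(string action);
}

// Hardcoded placeholder implementation; swap for XML/database loading as needed.
public class ActivityRoleMapper : IActivityRoleMapper
{
    private static readonly IDictionary<string, string[]> _map =
        new Dictionary<string, string[]>
        {
            { "something_restricted", new[] { MyRoles.Extended, MyRoles.Admin } },
        };

    public IEnumerable<string> GetRolesForAction(string action)
    {
        string[] roles;
        return _map.TryGetValue(action, out roles)
            ? (IEnumerable<string>)roles
            : Enumerable.Empty<string>();
    }
}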
Then you'd have to change the code in the Main() method to:
using Thinktecture.IdentityModel;
using Thinktecture.IdentityModel.Claims;
using Microsoft.IdentityModel.Web;

private static void Main(string[] args)
{
    var windowsId = WindowsIdentity.GetCurrent();
    if (windowsId != null)
    {
        var rolesAsClaims = getGroupCorrespondingRoles(windowsId)
            .Select(role => new Claim(ClaimTypes.Role, role))
            .ToList();
        // just if you want, remember the username
        rolesAsClaims.Add(new Claim(ClaimTypes.Name, windowsId.Name));
        var newId = new ClaimsIdentity(rolesAsClaims, null, ClaimTypes.Name, ClaimTypes.Role);
        var newPrincipal = new ClaimsPrincipal(new ClaimsIdentity[] { newId });
        AppDomain.CurrentDomain.SetThreadPrincipal(newPrincipal);
        var roleMapper = new ActivityRoleMapper(); // you have to implement this
        // register your own authorization manager, so IdentityModel will use it by default
        FederatedAuthentication.ServiceConfiguration.ClaimsAuthorizationManager = new MyClaimsAuthorizationManager(roleMapper);
    }
    else
    {
        throw new NotSupportedException("There must be a logged on Windows User.");
    }
}
Finally you can check access this way:
public const string EmptyResource = "myapplication";

public void DoSomethingRestricted()
{
    if (!ClaimPermission.CheckAccess("something_restricted", EmptyResource))
    {
        // error here
    }
    else
    {
        // do your really phat stuff here
    }
}
Or again, with exceptions:
private static ClaimPermission RestrictedActionPermission = new ClaimPermission(EmptyResource, "something_restricted");

public void DoSomethingRestrictedDemand()
{
    RestrictedActionPermission.Demand();
    // play up, from here!
}
Declarative:
[ClaimPermission(SecurityAction.Demand, Operation = "something_restricted", Resource = EmptyResource)]
public void DoSomethingRestrictedDemand2()
{
    // do stuff
}
Hope this helps.

Why is the Entity Framework inserting when it should update?

I use the following RIA Services call to register and return a Project entity.
// On Server; inside RIA Domain Service
[Invoke]
public Project CreateNewProject(String a_strKioskNumber)
{
    Decimal dProjectID = ObjectContext.RegisterProjectNumber(a_strKioskNumber)
                                      .FirstOrDefault() ?? -1m;

    // Tried this but it returned zero (0)
    //int nChanged = ObjectContext.SaveChanges();

    var project = (from qProject in ObjectContext.Projects.Include("ProjectItems")
                   where qProject.ID == dProjectID
                   select qProject)
                  .FirstOrDefault();

    if (project == null)
        return null;

    return project;
}
As you can see, it calls a stored procedure that returns a project ID. It uses this ID to look up the Project entity itself and return it. When the Project entity is returned to the client it is detached. I attach it to the DomainContext and modify it.
// At Client
_activeProject = a_invokeOperation.Value; // <-- Detached
_context.Projects.Attach(_activeProject); // <-- Unmodified
if (_activeProject != null)
{
    _activeProject.AuthenticationType = "strong"; // <-- Modified
    _activeProject.OwnerID = customer.ID;
    _projectItems.Do(pi => _activeProject.ProjectItems.Add(pi));
    _activeProject.Status = "calculationrequired";
}
At this point it has an entity state of Modified. When I submit changes it gives me an exception regarding a UNIQUE KEY violation as if it is trying to insert it rather than update it.
// At Client
_context.SubmitChanges(OnProjectSaved, a_callback);
I'm using the same DomainContext instance for all operations. Why should this not work?
What's going wrong? This is rather frustrating.
Edits:
I tried this (as suggested by Jeff):
[Invoke]
public void SaveProject(Project a_project)
{
    var project = (from qProject in ObjectContext.Projects
                   where qProject.ID == a_project.ID
                   select qProject)
                  .FirstOrDefault();

    project.SubmitDate = a_project.SubmitDate;
    project.PurchaseDate = a_project.PurchaseDate;
    project.MachineDate = a_project.MachineDate;
    project.Status = a_project.Status;
    project.AuthenticationType = a_project.AuthenticationType;
    project.OwnerID = a_project.OwnerID;
    project.ProjectName = a_project.ProjectName;
    project.OwnerEmail = a_project.OwnerEmail;
    project.PricePerPart = a_project.PricePerPart;
    project.SheetQuantity = a_project.SheetQuantity;
    project.EdgeLength = a_project.EdgeLength;
    project.Price = a_project.Price;
    project.ShipToStoreID = a_project.ShipToStoreID;
    project.MachiningTime = a_project.MachiningTime;

    int nChangedItems = ObjectContext.SaveChanges();
}
It did absolutely nothing. It didn't save the project.
What happens if you add a SaveProject method on the server side and send the object back to the server for saving?
I've not done EF with RIA Services, but I've always sent my objects back to the server for saving. I'm assuming the SubmitChanges call you are making wires everything up properly for sending it back to the server, but perhaps it is doing something wrong, and handling it manually will fix it.
I don't have the source at the moment, but I have seen it recommended that you use a new context for each operation in Silverlight. I ran into a similar problem today, and it was because I was using a service-level context that was remembering previous values that I didn't want. I changed to creating a new context for each service call and the behavior became what I expected.
public void SaveResponses(ICollection<Responses> items, Action<SubmitOperation> callback)
{
    try
    {
        SurveysDomainContext _context = new SurveysDomainContext();
        foreach (Responses item in items)
        {
            _context.Responses.Add(item);
        }
        _context.SubmitChanges(callback, null);
    }
    catch (Exception)
    {
        throw;
    }
}
As for the notion that one can't use a singleton global DomainContext, this is actually debatable. In my project I use a singleton DomainContext with no issues. In other projects, we have created a new DomainContext for different modules in the app where the entities are reused. There are definitely pros and cons. See:
Strategies for Handling Your DomainContext (external blog)
It seems that the problem is that when you attach your project to the DomainContext, it checks the _context.Projects entity set, doesn't find an entity with that primary key, and then assumes the newly attached entity doesn't exist server-side yet, so submitting changes should insert it. A possible workaround might be to explicitly load the newly created Project into the DomainContext. That would ensure the correct state is set on the entity: the project already exists on the server, so it's an update rather than an insert.
So maybe something like:
// after your Project has already been created server-side with the invoke
_context.Load(_context.SomeQueryThatLoadsYourNewlyCreatedProject(), LoadBehavior.RefreshCurrent, (LoadOperation<Project> lo) =>
{
    Project project = lo.Entities.FirstOrDefault(); // is attached and has correct state
    if (project != null)
    {
        project.AuthenticationType = "strong";
        project.OwnerID = customer.ID;
        _projectItems.Do(pi => project.ProjectItems.Add(pi));
        project.Status = "calculationrequired";
        _context.SubmitChanges(); // hopefully will trigger an update, rather than an insert
    }
}, null);

Silverlight4 calling ASMX web service

I have a Visual Studio solution with a Silverlight project, and a web project which hosts the Silverlight app. The web project also contains an ASMX web service which is called by the Silverlight app.
As described below, certain calls to the web service work fine, and yet others cause a CommunicationException to be thrown, wrapping a WebException - both with the message "The server returned the following error: 'not found'".
Firstly, here's my original method, which failed as described above (entity names changed for simplicity):
[WebMethod]
public Customer GetCustomer(int id)
{
    CustomerDataContext dc = new CustomerDataContext();
    return dc.Customers.SingleOrDefault(x => x.Id == id);
}
Secondly, to debug the problem I took Linq to SQL and the database out of the picture, and the below code worked fine:
[WebMethod]
public Customer GetCustomer(int id)
{
    Customer c = new Customer() { ID = 1, Name = "Bob", History = new EntitySet<CustomerHistory>() };
    return c;
}
Third, thinking about this, one difference between the two methods is that the first one would include values in the customer history. I extended the second method to include this, and it started failing again:
[WebMethod]
public Customer GetCustomer(int id)
{
    Customer c = new Customer() { ID = 1, Name = "Bob", History = new EntitySet<CustomerHistory>() };
    c.History.Add(new CustomerHistory() { Id = 1, CustomerId = 1, Text = "bla" });
    return c;
}
I'm stuck as to how to progress. My current thinking is that this could be a deserialization issue on the Silverlight side when the object graph is deeper. That doesn't rationally make sense, but I can't think of anything else. I've confirmed that the transfer size and buffer size are big enough (2 GB by default).
Any pointers would be appreciated.
Ahhhh, the famous "Not Found" error. Try to get details from that error using the tag in your web.config; that will create a log file providing details of the error.
The following link explains exactly how to do it: http://blogs.runatserver.com/lppinson/post/2010/04/15/Debugging-WCF-Web-Services.aspx

Consuming a WCF Data Services service operation (WebGet) asynchronously from Silverlight

Having a lot of problems trying to consume a simple service operation in a WCF Data Service from Silverlight. I've verified the following service operation is working by testing it in the browser:
[WebGet]
public IQueryable<SecurityRole> GetSecurityRolesForUser(string userName)
{
    string currentUsername = HttpContext.Current.User.Identity.Name;

    // if username passed in, verify current user is admin and is getting someone else's permissions
    if (!string.IsNullOrEmpty(userName))
    {
        if (!SecurityHelper.IsUserAdministrator(currentUsername))
            throw new DataServiceException(401, Properties.Resources.RequiestDeniedInsufficientPermissions);
    }
    else // else nothing passed in, so get the current user's permissions
    {
        userName = currentUsername;
    }

    return SecurityHelper.GetUserRoles(userName).AsQueryable<SecurityRole>();
}
However, no matter what I try using the different methods I've found in various online resources, I've been unable to consume the data. I've tried using the BeginExecute() method on both the DataServiceContext and the DataServiceQuery, but I keep getting errors or no data returned in the EndExecute method. I've got to be doing something simple wrong... here's my SL code:
private void InitUserSecurityRoles()
{
    MyEntities context = new MyEntities(new Uri("http://localhost:9999/MyService.svc"));
    context.BeginExecute<SecurityRole>(new Uri("http://localhost:9999/MyService.svc/GetSecurityRolesForUser"), OnComplete, context);

    DataServiceQuery<SecurityRole> query = context.CreateQuery<SecurityRole>("GetSecurityRolesForUser");
    query.BeginExecute(OnComplete, query);
}

private void OnComplete(IAsyncResult result)
{
    OnDemandEntities context = result.AsyncState as OnDemandEntities;
    var x = context.EndExecute<SecurityRole>(result);
}
Any tips? I'm at a loss right now on how to properly consume a custom service operation from an OData service in Silverlight (or even synchronously from my unit test project). I've also verified via Fiddler that I'm passing along the correct authentication stuff, even going so far as to explicitly set the credentials. Just to be safe, I even removed the logic from the service operation that does the security trimming.
Got it working thanks to #kaevans (http://blogs.msdn.com/b/kaevans):
private void InitUserSecurityRoles()
{
    DataServiceContext context = new DataServiceContext(new Uri("http://localhost:9999/MyService.svc"));
    context.BeginExecute<SecurityRole>(new Uri("/GetSecurityRolesForUser", UriKind.Relative),
        (result) =>
        {
            SmartDispatcher.BeginInvoke(
                () =>
                {
                    var roles = context.EndExecute<SecurityRole>(result);
                    UserSecurityRoles = new List<SecurityRole>();
                    foreach (var item in roles)
                    {
                        UserSecurityRoles.Add(item);
                    }
                });
        }, null);
}
I had to create the SmartDispatcher because this is happening in a ViewModel; otherwise I could have just used the static Dispatcher.BeginInvoke(). I couldn't get the roles variable to insert into my UserSecurityRoles (type List) directly for some reason using various techniques, so I just dropped down to adding the items manually (the code isn't called often, nor does the collection ever exceed 10 items... most are <5).
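For reference, the SmartDispatcher is just a small helper; a minimal sketch of the common pattern (assuming it simply checks thread access before marshalling to the UI thread, which is what mine does) looks like this:
// Minimal SmartDispatcher-style helper: run the action directly if we're
// already on the UI thread, otherwise marshal it via the Deployment dispatcher.
public static class SmartDispatcher
{
    public static void BeginInvoke(Action action)
    {
        var dispatcher = System.Windows.Deployment.Current.Dispatcher;
        if (dispatcher.CheckAccess())
        {
            action();
        }
        else
        {
            dispatcher.BeginInvoke(action);
        }
    }
}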

Will Prism OnDemand module loading work in an OOB scenario?

Should the loading of OnDemand Prism modules work in an OOB scenario? If so, I cannot seem to make it work. Everything is currently working in-browser without any problems. Specifically, I:
register my modules in code:
protected override IModuleCatalog GetModuleCatalog() {
    var catalog = new ModuleCatalog();
    Uri source;

    if( Application.Current.IsRunningOutOfBrowser ) {
        source = IsolatedStorageSettings.ApplicationSettings[SOURCEURI] as Uri;
    }
    else {
        var src = Application.Current.Host.Source.ToString();
        src = src.Substring( 0, src.LastIndexOf( '/' ) + 1 );
        source = new Uri( src );
        IsolatedStorageSettings.ApplicationSettings[SOURCEURI] = source;
        IsolatedStorageSettings.ApplicationSettings.Save();
    }

    if( source != null ) {
        var mod2 = new ModuleInfo {
            InitializationMode = InitializationMode.OnDemand,
            ModuleName = ModuleNames.mod2,
            ModuleType = "mod2.Module, mod2.Directory, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null",
            Ref = ( new Uri( source, "mod2.xap" )).AbsoluteUri
        };
        catalog.AddModule( mod2 );
    }

    // per Jeremy Likeness - did not help.
    Application.Current.RootVisual = new Grid();
    return ( catalog );
}
later, request that the module be loaded:
mModuleManager.LoadModule( ModuleNames.mod2 );
and wait for a response to an event published during the initialization of that loaded module.
The module appears never to be loaded, and when the application runs under the debugger there is a message box stating that the web server returned a 'not found' error. I can take the requesting URL for the module, enter it into Firefox, and download the module with no problem.
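One diagnostic worth adding here (this assumes the Prism drop in use exposes a LoadModuleCompleted event on IModuleManager; if it doesn't, this sketch won't compile) is to hook the completion callback and inspect the error instead of relying on the debugger message box:
// Diagnostic sketch: surface the module load failure.
// Assumes IModuleManager exposes LoadModuleCompleted in the Prism build in use.
mModuleManager.LoadModuleCompleted += ( s, e ) =>
{
    if( e.Error != null )
    {
        MessageBox.Show( e.Error.ToString() );
    }
};
mModuleManager.LoadModule( ModuleNames.mod2 );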
I have not been able to find any reference to this actually being workable, but it seems as though it should be. The most I have found on the subject is a blog entry by Jeremy Likeness, which covers loading modules in MEF, but applying his approach here did not help.
The server is localhost (I have heard it mentioned that this might cause problems). The server has a clientaccesspolicy.xml file - although I don't expect that is needed.
I am using the client stack and register it during app construction:
WebRequest.RegisterPrefix( Current.Host.Source.GetComponents( UriComponents.SchemeAndServer, UriFormat.UriEscaped ), WebRequestCreator.ClientHttp );
Followup questions:
Can all of the .xaps be installed to the client desktop in some manner, or only the main application .xap? Specify them in appmanifest.xml somehow?
Is it worth making this work if only the application .xap is installed and the rest of the .xaps must be downloaded anyway?
Once I worked on a similar scenario. The trick is having the modules stored in isolated storage and using a module loader that reads from isolated storage when working offline.
Otherwise, you can't download the modules that are in a different .xap file than the Shell.
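The caching part of that idea looks roughly like this (file and method names are purely illustrative; the Prism-side loader still has to be wired up, for example as described in the other answer below):
// Rough sketch of caching a module .xap in isolated storage while online,
// so an offline-capable loader can read it back later.
private void CacheModuleXap(Uri moduleUri, string fileName)
{
    var client = new WebClient();
    client.OpenReadCompleted += (s, e) =>
    {
        if (e.Error != null)
            return; // offline or unreachable: keep whatever copy is already cached

        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        using (var file = store.CreateFile(fileName))
        {
            var buffer = new byte[4096];
            int read;
            while ((read = e.Result.Read(buffer, 0, buffer.Length)) > 0)
            {
                file.Write(buffer, 0, read);
            }
        }
    };
    client.OpenReadAsync(moduleUri);
}

private Stream OpenCachedModuleXap(string fileName)
{
    var store = IsolatedStorageFile.GetUserStoreForApplication();
    return store.FileExists(fileName)
        ? store.OpenFile(fileName, FileMode.Open, FileAccess.Read)
        : null;
}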
Thanks,
Damian
It is possible to hook custom module loaders into Prism if you're willing to tweak the Prism source and build it yourself. I was actually able to get this to work pretty easily - in our app, I look on disk first for the module, and if it's not found, I fall back to loading it from the server via a third-party commercial HTTP stack that supports client certificates.
To do this, download the Prism source code, and locate the Microsoft.Practices.Composite.Modularity.XapModuleTypeLoader class. This class uses another Prism class, Microsoft.Practices.Composite.Modularity.FileDownloader, to download the .xap content; but it instantiates it directly, giving you no chance to inject your own or whatever.
So - in XapModuleTypeLoader, I added a static property to set the type of the downloader:
public static Type DownloaderType { get; set; }
Then I modified the CreateDownloader() method to use the type specified above in preference to the default one:
protected virtual IFileDownloader CreateDownloader() {
    if (_downloader == null) {
        if (DownloaderType == null) {
            _downloader = new FileDownloader();
        } else {
            _downloader = (IFileDownloader)Activator.CreateInstance(DownloaderType);
        }
    }
    return _downloader;
}
When my app starts up, I set the property to my own downloader type:
XapModuleTypeLoader.DownloaderType = typeof(LocalFileDownloader);
Voila - now Prism calls your code to load its modules.
I can send you my LocalFileDownloader class, as well as the class it falls back to for loading the .xap from the web, if you're interested... I suspect, though, that if you look at Prism's FileDownloader class you'll see it's simple enough.
With regard to your other questions, the clientaccesspolicy.xml file is probably not needed if the URL the app is installed under is the same one you're talking to, or if you're in elevated trust.
The .xaps can definitely be pre-installed on the client, but it's a bit of work. What we did was write a launcher app that is a standalone .NET 2.0 desktop app. It downloads the main .xap plus certain modules* (checking for updates and downloading only as needed), then uninstalls/reinstalls the app if necessary, then launches the app. The last two are done via sllauncher.exe, which is installed as part of Silverlight. Here's a good intro to that: http://timheuer.com/blog/archive/2010/03/25/using-sllauncher-for-silent-install-silverlight-application.aspx.
Assuming you're running under elevated trust, it should also be possible to pre-fetch the module .xaps from within the SL client, but before they're actually requested due to user action. You'd just need to put them in a folder under My Documents somewhere, and then use the custom module loading approach described above to pull them from there.
*In our case, our main .xap is 2/3 of the application. The rest of our .xaps are small, so we download them on-the-fly, with the exception of some .xaps we created as containers for third-party components. We don't expect to update those very often, so we pre-install them.
