I am using HermiT v1.3.8.4 with OWLAPI v3.5.6 and run into an issue where reasoner.isSatisfiable(clazz) runs forever.
Is there a way to inspect what HermiT is doing, i.e., a way to get debug information?
My current setup looks roughly like this:
OWLReasonerFactory reasonerFactory = new Reasoner.ReasonerFactory();
OWLReasonerConfiguration config;
if (this.verbose_output) {
ConsoleProgressMonitor progressMonitor = new ConsoleProgressMonitor();
config = new SimpleConfiguration(
progressMonitor
);
} else {
config = new SimpleConfiguration();
}
OWLReasoner reasoner = reasonerFactory.createReasoner(this.ontology, config);
...
for (OWLClass c: this.ontology.getClassesInSignature(this.include_import_closure)) {
if (!reasoner.isSatisfiable(c)) { // This step takes forever
continue;
}
...
}
Not sure if this helps, but there are some classes related to debugging, although I have never used them. You could try the following config option:
Configuration config=new Configuration();
// Let's make HermiT open a debugger window from which we can control the
// further actions that HermiT performs.
// DEBUGGER_HISTORY_ON will cause HermiT to save the derivation tree for
// each derived conclusion.
// DEBUGGER_NO_HISTORY will not save the derivation tree, so no causes for a
// clash can be given, but the memory requirement is smaller.
config.tableauMonitorType=TableauMonitorType.DEBUGGER_HISTORY_ON;
// Now we can create the reasoner with the configuration created above.
Reasoner hermit = new Reasoner(config,ontology);
// This will open the debugger window at runtime and it should say:
// Good morning Dr. Chandra. This is HAL. I'm ready for my first lesson.
// Derivation history is on.
// Reasoning task started: ABox satisfiability
// >
// you can press 'c' to make HermiT continue with checking whether the ontology
// is consistent
hermit.isSatisfiable(c); // for a class 'c'
// HermiT should now have said 'Reasoning task finished: true' in the debugger window.
// Now, you can type 'showModel' to see all the assertions in the ABox that HermiT generated.
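If you do not need the interactive debugger and just want to see where the time goes, a lighter-weight option is the timing monitor (a sketch, assuming the same Configuration/TableauMonitorType classes as above and the same ontology and class c):
Configuration config = new Configuration();
// TIMING prints timing/progress output for the reasoning tasks to the console
// instead of opening the interactive debugger window.
config.tableauMonitorType = TableauMonitorType.TIMING;
Reasoner hermit = new Reasoner(config, ontology);
hermit.isSatisfiable(c); // for a class 'c'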
Otherwise, maybe the log level could help.
Another way to get feedback is to use a profiler. jvisualvm is included in recent Oracle JDKs, and its sampler mode will give you a good clue about what HermiT is doing. It's not a progress monitor, but it helps you understand whether what's slowing things down is the size of the ontology or a particular construct.
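If attaching a profiler is not convenient, a rough alternative is a small watchdog thread that periodically dumps the stack of the thread running the reasoner (plain JDK, nothing HermiT-specific; just a sketch). If repeated dumps keep showing the same HermiT frames, that tells you where the time is going:
final Thread reasonerThread = Thread.currentThread(); // the thread that will call isSatisfiable()
Thread watchdog = new Thread(new Runnable() {
    public void run() {
        try {
            while (true) {
                Thread.sleep(30000); // sample every 30 seconds
                System.err.println("--- reasoner thread stack ---");
                for (StackTraceElement frame : reasonerThread.getStackTrace()) {
                    System.err.println("    at " + frame);
                }
            }
        } catch (InterruptedException e) {
            // watchdog stopped
        }
    }
});
watchdog.setDaemon(true);
watchdog.start();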
Windows 10 Pro
Latest Simulator
Java Swing Project
I would like to execute "Vector a1 = (Vector) Storage.getInstance().readObject(filePath);"
In a Java Swing application running on the Windows 10 platform, I tried importing CodenameOne.jar into the Swing project; however, when executing the above code, I get a NullPointerException in Storage.getInstance().
Is there a way to execute this in Swing?
Thoughts?
Best Regards.
Thanks, I did not initialize the Display; however, "Display.init(Object m)" requires an Object argument and the init method is deprecated.
Can you please provide the Codename One Display dependencies?
And perhaps a Java Swing code snippet that initializes Display so I can execute Storage.getInstance().readObject(filePath)?
Thoughts?
Best Regards
Thanks, passing init(working directory) solved the exception that was thrown.
Here is the code snippet that allowed me to execute Storage.getInstance().readObject(filePath):
String filePath = incSrv.Pwd();// gets working directory
try {
javax.swing.SwingUtilities.invokeLater(new Runnable() {
public void run() {
Display.init(filePath);
String fileName = "A1-MMA.properties";
Vector a1 = (Vector) Storage.getInstance().readObject(filePath);
}
});
} catch (Exception e) {
// exception swallowed here; at least log it in real code
}
And it does appear to work.
However, I am left with a blacked-out form that appears modal.
How can I avoid this or dispose of it?
FYI: What I am creating here is a workaround for serializing a Vector in Codename One. I save the Vector to a file using "Storage.getInstance().writeObject(Path, Vector)".
I convert the file to bytes and write it to the Swing server via a socket.
Using Storage.getInstance().readObject(file) on the Swing server, I deserialize the object back into the Vector from my app.
This appears to work well and is more efficient than the current method I use to deliver complex Vectors from the app to the Swing server.
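Roughly, the two ends of that round trip look like this (just a sketch based on what is described above; the storage key and the working-directory handling are assumptions, and the byte transfer over the socket is left out):
import java.util.Vector;

import com.codename1.io.Storage;
import com.codename1.ui.Display;

public class VectorTransferSketch {

    // Codename One app side: persist the Vector under a storage key, then send
    // the resulting storage file to the server as raw bytes over the socket.
    public static void saveOnDevice(Vector data) {
        Storage.getInstance().writeObject("A1-MMA.properties", data);
    }

    // Swing server side: initialize Display with the directory that now holds
    // the transferred file, then deserialize it back into a Vector.
    public static Vector loadOnServer(String workingDirectory) {
        Display.init(workingDirectory);
        return (Vector) Storage.getInstance().readObject("A1-MMA.properties");
    }
}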
Can you please let me know if you see a red flag with this workaround?
For example, could the ability to call Storage.getInstance().readObject(file) on the Swing server go away?
This method will save a lot of time in moving Vector data to and from the app/server.
Thoughts? Best Regards
Storage.getInstance().readObject(file) // (A1ServiceSrv.java:571)
Caused this Exception:
java.lang.NullPointerException
at com.codename1.io.Storage.init(Storage.java:89)
at com.codename1.io.Storage.getInstance(Storage.java:112)
at Main.A1ServiceSrv.loadVectorFromFile(A1ServiceSrv.java:571)
Regards
12/11/2021:
Thanks Shai,
I am including CodenameOne.jar (with an update date of 12/11/2021, after the CN1 refresh) in my classpath.
I am getting the same null pointer exception.
I am passing in the path "C:\Src1\A1-Arms\A1-Server\A1-MMA.properties" (absolute path).
I also tried "A1-MMA.properties"; however, I don't think Codename One knows where my home path is, since we are not initializing it as we did with
Display.init("Current Working Directory where files reside");
This is the fresh stack trace without calling Display.init (12-20-2021):
java.lang.NullPointerException at
com.codename1.ui.Display.getResourceAsStream(Display.java:3086)
at com.codename1.io.Log.print(Log.java:327)
at com.codename1.io.Log.logThrowable(Log.java:299)
at com.codename1.io.Log.e(Log.java:285)
at com.codename1.io.Storage.readObject(Storage.java:271)
at Main.A1ServiceSrv.loadVectorFromFile(A1ServiceSrv.java:596)
vector = (Vector) Storage.getInstance().readObject(filePath); // (A1ServiceSrv.java:596)
This is hopefully fixed by this commit: https://github.com/codenameone/CodenameOne/commit/72bf283bdaaefe5207bb9fd6787578e3ef61522c
If not, let me know with a fresh stack trace.
I am creating a MEAN stack application and I want to clarify the point below.
From a coding-standards perspective, I know that validations should be executed on both the client side and the server side. What I would like to achieve is to execute the exact same validation code so that I do not repeat it. This would be shared code for the client and the server side.
So how can I have AngularJS and Express.js invoke the same .js file for performing validations? Is it even possible?
Thanks!
You sure can do this. This approach is used by RemObjects DataAbstract (http://old.wiki.remobjects.com/wiki/Business_Rules_Scripting_API). The principle here is to define business rules that apply either on both the client and the server, or on the server only. You will almost never have to check business rules ONLY on the client, because you can never "trust" the client to check your business rules.
CQRS and DDD are two architectural principles that could help you here. Domain-Driven Design will kind of "clean" or "refine" your code, pushing the infrastructure away from the core "domain" logic. And business rules apply only in the domain, so it's a good idea to keep the domain isolated from the rest.
Command-Query-Responsibility-Segregation. I like this one a lot. Basically, you define a set of commands that will be validated before they are applied. There's no more machine-like code that looks like Model.Set('a', 2). Your code, using this principle, will look like MyUnderstandableBusinessObject.UnderstandableCommand(aFriendlyArgument). When it comes to applying business rules, it is very handy that your actual commands reflect the use cases of your domain.
I also always encounter this problem when I work on Node.js / JavaScript projects. The problem is that you do not have a standardized ORM that can be understood by both the client AND the server. This is paradoxical, as Node.js and the browser run the same language. When I was drawn towards Node.js, I told myself, man, both client and server are running the same language, that's going to save sooo much time. But that was kind of false, as there are not that many mature and professional tools out there, even if npm is really active.
I wanted to build an ORM too that could be understood by both the client and the server, and add a relational aspect to it (so that it was compatible with SQL), but I kind of abandoned the project. https://github.com/ludydoo/affinity
But, there are a couple of other solutions. Backbone is one, and it's lightweight.
The actual implementation of your business-rule checking here is what you are going to have to work on. You'll want to extract the "validation" part out of your model into another object that can be shared. Something to get you started:
https://jsfiddle.net/ludydoo/y0otcvrf/
BusinessRuleRepository = function() {
this.rules = [];
}
BusinessRuleRepository.prototype.addRule = function(aModelClass, operation, callback) {
this.rules.push({
class: aModelClass,
operation: operation,
callback: callback
})
}
BusinessRuleRepository.prototype.validate = function(object, operation, args) {
// _.forIn comes from lodash/underscore, which must be loaded for this sketch
_.forIn(this.rules, function(rule) {
if (object.constructor == rule.class && operation == rule.operation) {
rule.callback(object, args)
}
})
}
MyObject = function() {
this.a = 2;
}
MyObject.prototype.setA = function(value) {
aBusinessRuleRepo.validate(this, 'setA', arguments);
this.a = value;
}
// Creating the repository
var aBusinessRuleRepo = new BusinessRuleRepository();
//-------------------------------
// shared.js
var shared = function(aRepository) {
aRepository.addRule(MyObject, 'setA', function(object, args) {
if (args[0] < 0) {
throw 'Invalid value A';
}
})
}
if (aBusinessRuleRepo != undefined) {
shared(aBusinessRuleRepo);
}
//-------------------------------
// Creating the object
var aObject = new MyObject();
try {
aObject.setA(-1); // throws
} catch (err) {
alert('Shared Error : ' + err);
}
aObject.setA(2);
//-------------------------------
// server.js
var server = function(aRepository) {
aRepository.addRule(MyObject, 'setA', function(object, args) {
if (args[0] > 100) {
throw 'Invalid value A';
}
})
}
if (aBusinessRuleRepo != undefined) {
server(aBusinessRuleRepo);
}
//-------------------------------
// on server
try {
aObject.setA(200); // throws
} catch (err) {
alert('Server Error :' + err);
}
The first thing of all is to model and define your domain. You'll have a set of classes that represent your business objects, as well as methods that correspond to business operations. (I would really go with CQRS in your case.)
The model definition would be shared between the client and the server.
You would have to define two files, or two objects. Separated. ServerRules and SharedRules. Those will be a set of Repository.addRule() calls that register your business rules in the repository. Your client will get the Shared.js business rules, and the server the Shared.js + Server.js business rules. Those business rules will always be applied on your objects this way.
The little code example I showed you is very simple and checks business rules only before the command is applied. Maybe you could add 'beforeCommand' and 'afterCommand' parameters to check business rules before and after changes are made. Then, if you add the possibility of checking business rules after a command is applied, you must be able to roll back the changes (Backbone has this functionality, I think).
Good luck
You could automate this a little by automatically getting the name of the method you are in (Can I get the name of the currently running function in JavaScript?)
function checkBusinessRules(model, args) {
// Function.caller is non-standard and only works outside strict mode; it gives
// us the calling command function, whose name we pass on to the repository.
businessRuleRepo.validate(model, checkBusinessRules.caller.name, args);
}
// Note: the function expression needs a name ("command") for .name to be useful.
Model.prototype.command = function command(arg) {
checkBusinessRules(this, arguments);
// perform logic
}
EDIT 2
A small detail I would like to correct in my first answer. Do not implement your business rules on property setters! Use business operation names instead:
You must make sure that you always set your model properties through methods. If you set your model properties directly by assigning a value, you're bypassing the whole business rule processor thing.
The cheap way is to do this through standard setters such as
SetMyProperty(value);
SetAnotherProperty(value);
This is kind of the low-level business rule logic (based on getters and setters). Then, your business rules will also be low-level. Which is kind of bad.
Better, you should do this through business understandable high-level method names such as
RegisterClient(client);
InvalidateMandate(mandate);
Then, your business rules become way more understandable and you'll almost have a good time implementing them.
aBusinessRuleRepo.addRule(ModelClass, "RegisterClient", function(object, args) {
if (!Session.can('RegisterClient')) { throw 'Unauthorized'; }
})
I have heard rumours that performing a catalog update correctly in FDT 1.x is quite complex. There seem to be more steps involved than the obvious ones, which in pseudo code are:
foreach (progid in Registry having component category "FDT DTM")
{
dtm = CoCreateInstance(progid);
StartDTMAccordingStateMachine(dtm);
info = dtm.GetInformation("FDT");
catalog.Add(info);
ShutdownDTMAccordingStateMachine(dtm);
Release(dtm);
}
I could not find any hints in the FDT specification that would require a more complex catalog update procedure, so are the rumours true? What makes a correct catalog update procedure so complex?
Basically the idea for the catalog update is correct. Unfortunately the rumours are also true: doing a catalog update involves quite a few more considerations, namely:
Frame application interface considerations
During the catalog update, the DTM is not part of a project yet. Therefore the frame application could be implemented without project specific interfaces such as IFdtTopology or IFdtBulkData. However, many DTMs will query for those interfaces immediately and throw an exception if the frame application does not support those interfaces.
Also, during the catalog update, the frame application could expect that the DTM works without user interface, because this is a batch operation which should not require user interaction. This means the frame application could be implemented without the IFdtActiveX and IFdtDialog interfaces. Unfortunately there are also DTMs that use those interfaces during catalog update time.
.NET considerations
Doing a catalog update on a system with many DTMs installed could require a lot of memory. Therefore some frame applications do the catalog update in an external process. While this is a good idea, you need to consider the FDT .NET specifications and best practice documents.
The bottom line here is: the external process must be a .NET 2.0 process, independent of the actual implementation technology of your frame application. If you have a C++ implementation, you'll need a very small .NET 2.0 object to be loaded before any DTM is started.
Memory considerations
Since FDT 1.x is a conglomerate of COM and .NET, there will be pinned objects. This makes it likely that your application suffers from small object heap fragmentation. In addition FDT passes XMLs as strings which makes it more likely that your application suffers from large object heap fragmentation. The overall combination is very dangerous.
One solution might be to start a limited number of DTMs in the same process and then restart the process, e.g. like this:
updateprocess = StartProcess();
dtmCount = 0;
foreach (progid in Registry having component category "FDT DTM")
{
dtmCount++;
if (dtmCount % 10 == 0)
{
// restart process to avoid out of memory situation
updateProcess.SignalShutdown();
updateProcess.WaitForExit();
updateProcess = StartProcess();
}
updateProcess.StartDTM(progid);
info = updateProcess.GetDtmInformation();
catalog.Add(info);
updateProcess.ShutdownDTM();
}
In the update process you'll need to create the COM object and follow the state machine etc.
FDT 1.2.1 scanning information
In FDT 1.2.1, additional information was introduced to better recognize devices during a hardware scan. Although there is no fully FDT 1.2.1 compliant DTM at the time of writing, many FDT 1.2.0 DTMs implement the additional interface IDtmInformation2 to support device detection.
For you as the frame application developer, this means that you have to extend the GetDtmInformation() method in the update process:
T GetDtmInformation()
{
var result = new T(); // a type defined by you
result.info = dtm.GetInformation();
foreach (deviceType in result.info)
{
foreach (protocol in deviceType)
{
deviceInfo = dtm.GetDeviceIdentificationInformation(deviceType, protocol);
result.deviceinfo.Add(deviceInfo);
}
}
return result;
}
Schema path updates
FDT 1.2.0 had the problem that the user needed to install XDR schema definitions manually, which was very uncomfortable. FDT 1.2.1 solves this problem in the way that the DTM can now bring XDR schemas with it. The definition is in the XML from GetInformation() at the XML elements <FDT>, <DtmInfo>, <DtmSchemaPaths>. The DTM will publish a directory name there. In theory, this is an easy task: to install the XDR schemas, we need to update the GetDtmInformation() a little bit:
T GetDtmInformation()
{
var result = new T(); // a type defined by you
result.info = dtm.GetInformation();
schemaPaths = result.info.SelectNodes("/FDT/DtmInfo/DtmSchemaPaths/DtmSchemaPath");
foreach (dtmSchemaPath in schemaPaths)
{
CopyFiles(from dtmSchemaPath to frameSchemaPath);
}
// *) read on, more code needed here
foreach (deviceType in result.info)
{
foreach (protocol in deviceType)
{
deviceInfo = dtm.GetDeviceIdentificationInformation(deviceType, protocol);
result.deviceinfo.Add(deviceInfo);
}
}
return result;
}
Unfortunately there is a logical bug in the sequence now. Since the DTM was already started, it has already asked the frame application for the schema path (using IFdtContainer::GetXmlSchemaPath()) and it has already set up the schema cache to validate XMLs. The DTM cannot be notified about updates in the schema path.
Therefore you need to restart the DTM in order to be sure that it gets the latest version of XDR schemas. In code, this means you have to update the whole code to:
T GetDtmInformation()
{
var result = new T(); // a type defined by you
result.info = dtm.GetInformation();
schemaPaths = result.info.SelectNodes("/FDT/DtmInfo/DtmSchemaPaths/DtmSchemaPath");
schemasUpdated = false;
foreach (dtmSchemaPath in schemaPaths)
{
schemasUpdated |= CopyFiles(from dtmSchemaPath to frameSchemaPath);
}
if (schemasUpdated)
{
// restart the DTM to make sure it uses the latest versions of the schemas
ShutdownDTMAccordingStateMachine(dtm);
dtm = CoCreateInstance(progid);
StartDTMAccordingStateMachine(dtm);
result.info = dtm.GetInformation();
}
foreach (deviceType in result.info)
{
foreach (protocol in deviceType)
{
deviceInfo = dtm.GetDeviceIdentificationInformation(deviceType, protocol);
result.deviceinfo.Add(deviceInfo);
}
}
return result;
}
XDR schema version information issue
In the previous chapter I used a simple CopyFiles() operation to update the XDR schema files. This method is not as simple as it seems, because it needs to perform a version number check.
The version is given in the XDR schema like this:
<AttributeType name="schemaVersion" dt:type="number" default="1.0"/>
The attribute #default defines the version number of the schema. #schemaVersion itself is not used anywhere else.
Version numbers that are used at the time of writing:
1.0 // e.g. FDTCIPCommunicationSchema CIP version 1.1-02
1.1 // e.g. FDTCIPChannelParameterSchema CIP version 1.1-02
1.00 // e.g. DTMIOLinkDeviceSchema IO Link version 1.0-1
1.21 // e.g. FDTIOLinkChannelParameterSchema IO Link version 1.0-1
1.22 // e.g. FDTHART_ExtendedCommunicationSchema
Version 1.21 strongly suggests that it correlates to FDT version 1.2.1, which raises the question of how to interpret the version number. There are three possible ways of interpreting it:
a) as a simple float number as defined in the datatype of XDR (dt:type="number")
b) as a version number in format major.minor
c) as a version number in format major.minorbuild where minor and build are simply concatenated
Ok, I'll leave that puzzle up to the reader. I have suggested a document clarifying this version number issue.
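For what it's worth, here is what a comparison under interpretation (b) would look like (a Java sketch; the choice of interpretation and the parsing of the schemaVersion attribute are assumptions, not something the specification settles):
// Compares two schemaVersion strings under interpretation (b): "major.minor",
// with both parts compared as integers (so "1.21" is newer than "1.3" here,
// which would not be the case under interpretation (a)).
static int compareSchemaVersions(String a, String b) {
    String[] pa = a.split("\\.");
    String[] pb = b.split("\\.");
    int major = Integer.compare(Integer.parseInt(pa[0]), Integer.parseInt(pb[0]));
    if (major != 0) {
        return major;
    }
    int minorA = pa.length > 1 ? Integer.parseInt(pa[1]) : 0;
    int minorB = pb.length > 1 ? Integer.parseInt(pb[1]) : 0;
    return Integer.compare(minorA, minorB);
}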
Anyway, this is our CopyFiles() method:
bool CopyFiles(sourceDir, destinationDir)
{
filesCopied = false;
foreach(filename in sourceDir)
{
existingVersion = ExtractVersion(destinationDir + filename);
newVersion = ExtractVersion(sourceDir + filename);
if (newVersion > existingVersion)
{
File.Copy(sourceDir + filename, destinationDir + filename);
filesCopied = true;
}
}
return filesCopied;
}
XDR schema update impact on other DTMs
In the last chapter we return a flag from CopyFiles() in order to determine whether or not the DTM needs to be restarted in GetDtmInformation(). However, this update may not only affect the current DTM, it may also affect other DTMs of the same protocol which have been added to the catalog before.
While you could simply restart the whole catalog update from scratch, that would imply a huge performance impact. The better way seems to be to do it selectively.
To apply a selective approach, you need to maintain a list of protocols that were updated (in T GetDtmInformation()):
foreach (dtmSchemaPath in schemaPaths)
{
schemasUpdated = CopyFiles(from dtmSchemaPath to frameSchemaPath);
if (schemasUpdated)
{
listOfChangedProtocols.Add(ExtractProtocolId(destinationDir));
}
}
And of course, don't forget to re-update the catalog for affected DTMs:
affectedDtms = catalog.GetDtmsForProtocols(listOfChangedProtocols);
// TODO: perform catalog update again
// NOTE: remember that this might apply recursively
Getting protocol groups right
Next, you need to know the concept of protocol groups. A protocol group shares XDR schema files across different protocols, where each protocol is identified by a protocol ID. A good example is the CIP protocol family, which consists of the single protocols DeviceNet, CompoNet and Ethernet/IP.
These protocols share a common set of XDR schema files, so you'll find the same file three times on your hard disk. This duplication also has some impact on the catalog update since you need to update all copies even if the DTM comes for a single protocol only.
The reason is in the way a schema cache is constructed: when adding XDR schemas to the schema cache, the first file will win. Other files with the same name will not be added any more. Therefore it is important to ensure that the first file added to the cache is the one with the highest version number. This can only be achieved by updating all copies to the latest version.
This results in an update of the CopyFiles() method:
List<protocolID> CopyFiles(sourceDir, destinationDir)
{
protocolsChanged = new List<protocolID>();
foreach(filename in sourceDir)
{
foreach (subdirectory in destinationDir)
{
files = GetFiles(subdirectory, pattern = filename);
if (files.Count == 1)
{
if (UpdateXDRConsideringVersionNumber(sourceDir, subdirectory, filename))
{
protocolsChanged.Add(ExtractProtocolId(subdirectory));
}
}
}
}
return protocolsChanged;
}
bool UpdateXDRConsideringVersionNumber(sourceDir, destinationDir, filename)
{
existingVersion = ExtractVersion(destinationDir + filename);
newVersion = ExtractVersion(sourceDir + filename);
if (newVersion > existingVersion)
{
File.Copy(sourceDir + filename, destinationDir + filename);
return true;
}
return false;
}
Device DTMs and schema paths
For whatever reason, it is defined that only communication DTMs and gateway DTMs need to bring XDR schemas with them. The rationale behind that probably was that you cannot use a device DTM without a communication or gateway DTM.
Unfortunately, when querying the Windows Registry for DTMs, you cannot predict the order in which you get DTMs. This may lead to the case that you get a device DTM first. Starting this DTM and getting information from it may result in errors or invalid XML if there is no XDR schema for the protocol of the DTM yet.
So you need to continue the catalog update, hopefully find a communication DTM or gateway DTM of the same protocol which brings the XDR schemas. Then you start the device DTM again and it will deliver useful information.
This does not need an update to any code. It should already work if you followed all the steps described before. The only thing to consider here is good error handling (but I'll not do that in pseudo code here).
Conclusion
Hopefully I have covered all the topics that are important to know in conjunction with the FDT 1.x catalog update. As you can see, this is not just a rumour.
I'm currently working through the Angular tutorial using the Wisdom framework as the back end. As a consequence, I run end-to-end tests with FluentLenium, as the Wisdom framework documentation states.
My test for step 3, although dead simple, doesn't pass.
The full test can be found on GitHub: Step03IsImplementedIT
However, here is the offending extract (around line 30):
@Test
public void canTestPageCorrectly() {
if (getDriver() instanceof HtmlUnitDriver) {
HtmlUnitDriver driver = (HtmlUnitDriver) getDriver();
if(!driver.isJavascriptEnabled()) {
driver.setJavascriptEnabled(true);
}
Assert.assertTrue("Javascript should be enabled for Angular to work !", driver.isJavascriptEnabled());
}
goTo(GoogleShopController.LIST);
// And load the list of phones
FluentWebElement phones = findFirst(".phones");
assertThat(phones).isDisplayed();
FluentList<FluentWebElement> items = find(".phone");
assertThat(items).hasSize(3); // <-- this is the assert that fails
}
Failure message :
canTestPageCorrectly(org.ndx.wisdom.tutorial.angular.Step03IsImplementedIT) Time elapsed: 2.924 sec <<< FAILURE!
java.lang.AssertionError: Expected size: 3. Actual size: 1.
at org.fluentlenium.assertj.custom.FluentListAssert.hasSize(FluentListAssert.java:60)
at org.ndx.wisdom.tutorial.angular.Step03IsImplementedIT.canTestPageCorrectly(Step03IsImplementedIT.java:33)
From that failure, I guess the Angular controllers weren't loaded.
How can I make sure they are? And how can I have a working test?
It turned out the error wasn't the expected one... Well, it was, but in a hidden fashion.
HtmlUnitDriver, as one may be aware, is a pure Java implementation of a browser and, as such, has some limitations.
One of its limitations is JavaScript interpretation, which seems to go awfully badly with Angular.
To make a long story short, the simplest way to fix that is to replace the default driver with the Firefox one, which implies:
setting fluentlenium.browser to firefox
making sure the driver loads correctly (firefox.exe must be on the PATH when using its driver) by adding a small assert at the beginning of the test
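Alternatively, if your test base class lets you override the driver in code (plain FluentLenium exposes a getDefaultDriver() hook in its 0.x releases, if I remember correctly; the Wisdom base class may wire this differently, so treat the following as an assumption), something like this forces Firefox:
import org.fluentlenium.adapter.FluentTest;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class FirefoxBackedTest extends FluentTest {

    @Override
    public WebDriver getDefaultDriver() {
        // HtmlUnit's JavaScript engine struggles with Angular, so use a real browser.
        return new FirefoxDriver();
    }
}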
The final test is then:
assertThat(getDriver()).isInstanceOf(FirefoxDriver.class);
goTo(GoogleShopController.LIST);
FluentList<FluentWebElement> items = find("li");
FluentLeniumAssertions.assertThat(items).hasSize(3);
fill("input").with("nexus");
await();
items = find(".phone");
FluentLeniumAssertions.assertThat(items).hasSize(1);
fill("input").with("motorola");
await();
items = find(".phone");
FluentLeniumAssertions.assertThat(items).hasSize(2);
Is there an easy way to dynamically discover all the XAML files within all the currently loaded modules (specifically of a Silverlight Prism application)? I am sure this is possible, but not sure where to start.
This has to occur on the Silverlight client: We could of course parse the projects on the dev machine, but that would reduce the flexibility and would include unused files in the search.
Basically we want to be able to parse all XAML files in a very large Prism project (independent of loading them) to identify all localisation strings. This will let us build up an initial localisation database that includes all our resource-binding strings and also create a lookup of which XAML files they occur in (to make editing easy for translators).
Why do this?: The worst thing for translators is to change a string in one context only to find it was used elsewhere with slightly different meaning. We are enabling in-context editing of translations from within the application itself.
Update (14 Sep):
The standard method for iterating assemblies is not available to Silverlight due to security restrictions. This means the only improvement to the solution below would be to cooperate with the Prism module management if possible. If anyone wants to provide a code solution for that last part of this problem there are points available to share with you!
Follow-up:
Iterating the content of XAP files in a module-based project seems like a really handy thing to be able to do for various reasons, so I'm putting up another 100 rep to get a real answer (preferably working example code). Cheers and good luck!
Partial solution below (working but not optimal):
Below is the code I have come up with, which is a paste-together of techniques from this link on embedded resources (as suggested by Otaku) and my own iteration over the Prism module catalogue.
Problem 1 - all the modules are already loaded, so this is basically having to download them all a second time, as I can't work out how to iterate all currently loaded Prism modules. If anyone wants to share the bounty on this one, you still can help make this a complete solution!
Problem 2 - There is apparently a bug in the ResourceManager that requires you to get the stream of a known resource before it will let you iterate all resource items (see the note in the code below). This means I have to have a dummy resource file in every module. It would be nice to know why that initial GetStream call is required (or how to avoid it).
private void ParseAllXamlInAllModules()
{
IModuleCatalog mm = this.UnityContainer.Resolve<IModuleCatalog>();
foreach (var module in mm.Modules)
{
string xap = module.Ref;
WebClient wc = new WebClient();
wc.OpenReadCompleted += (s, args) =>
{
if (args.Error == null)
{
var resourceInfo = new StreamResourceInfo(args.Result, null);
var file = new Uri("AppManifest.xaml", UriKind.Relative);
var stream = System.Windows.Application.GetResourceStream(resourceInfo, file);
XmlReader reader = XmlReader.Create(stream.Stream);
var parts = new AssemblyPartCollection();
if (reader.Read())
{
reader.ReadStartElement();
if (reader.ReadToNextSibling("Deployment.Parts"))
{
while (reader.ReadToFollowing("AssemblyPart"))
{
parts.Add(new AssemblyPart() { Source = reader.GetAttribute("Source") });
}
}
}
foreach (var part in parts)
{
var info = new StreamResourceInfo(args.Result, null);
Assembly assy = part.Load(System.Windows.Application.GetResourceStream(info, new Uri(part.Source, UriKind.Relative)).Stream);
// Get embedded resource names
string[] resources = assy.GetManifestResourceNames();
foreach (var resource in resources)
{
if (!resource.Contains("DummyResource.xaml"))
{
// to get the actual values - create the table
var table = new Dictionary<string, Stream>();
// All resources have “.resources” in the name – so remove it
var rm = new ResourceManager(resource.Replace(".resources", String.Empty), assy);
// Seems like some issue here, but without getting any real stream next statement doesn't work....
var dummy = rm.GetStream("DummyResource.xaml");
var rs = rm.GetResourceSet(Thread.CurrentThread.CurrentUICulture, false, true);
IDictionaryEnumerator enumerator = rs.GetEnumerator();
while (enumerator.MoveNext())
{
if (enumerator.Key.ToString().EndsWith(".xaml"))
{
table.Add(enumerator.Key.ToString(), enumerator.Value as Stream);
}
}
foreach (var xaml in table)
{
TextReader xamlreader = new StreamReader(xaml.Value);
string content = xamlreader.ReadToEnd();
{
// This is where I do the actual work on the XAML content
}
}
}
}
}
}
};
// Do the actual read to trigger the above callback code
wc.OpenReadAsync(new Uri(xap, UriKind.RelativeOrAbsolute));
}
}
Use GetManifestResourceNames reflection and parse from there to get only those ending with .xaml. Here's an example of using GetManifestResourceNames: Enumerating embedded resources. Although the sample shows how to do this with a separate .xap, you can do this with the loaded one.
I've seen people complain about some pretty gross bugs in Prism.
Dissecting your problems:
Problem 1: I am not familiar with Prism but from an object-oriented perspective your Module Manager class should keep track of whether a Module has been loaded and if not already loaded allow you to recursively load other Modules using a map function on the List<Module> or whatever type Prism uses to represent assemblies abstractly. In short, have your Module Manager implement a hidden state that represents the List of Modules loaded. Your Map function should then take that List of Modules already loaded as a seed value, and give back the List of Modules that haven't been loaded. You can then either internalize the logic for a public LoadAllModules method or allow someone to iterate a public List<UnloadedModule> where UnloadedModule : Module and let them choose what to load. I would not recommend exposing both methods simultaneously due to concurrency concerns when the Module Manager is accessed via multiple threads.
Problem 2: The initial GetStream call is required because ResourceManager lazily evaluates the resources. Intuitively, my guess is that the reason for this is that satellite assemblies can contain multiple locale-specific modules; if all of these modules were loaded into memory at once it could exhaust the heap, plus these are unmanaged resources. You can look at the code using Red Gate's .NET Reflector to determine the details. There might be a cheaper method you can call than GetStream. You might also be able to trigger it to load the assembly by tricking it into loading a resource that is in every Silverlight assembly. Try ResourceManager.GetObject("TOOLBAR_ICON") or maybe ResourceManager.GetStream("TOOLBAR_ICON") -- note that I have not tried this and am typing this suggestion as I am about to leave for the day. My rationale for it being consistently faster than your SomeDummy.Xaml
approach is that I believe TOOLBAR_ICON is hardwired to be the zeroth resource in every assembly. Thus it will be read very early in the Stream. Faaaaaast. So it is not just avoiding needing SomeDummy.Xaml in every assembly of your project that I am suggesting; I am also recommending micro-optimizations.
If these tricks work, you should be able to significantly improve performance.
Additional thoughts:
I think you can clean up your code further.
IModuleCatalog mm = this.UnityContainer.Resolve<IModuleCatalog>();
foreach (var module in mm.Modules)
{
could be refactored to remove the reference to UnityContainer. In addition, IModuleCatalog would be instantiated via a wrapper around the List<Module> I mentioned in my reply to Problem 1. In other words, the IModuleCatalog would be a dynamic view of all loaded modules. I am assuming there is still more performance that can be pulled out of this design, but at least you are no longer dependent on Unity. That will help you better refactor your code later on for more performance gains.