Best/standard method for slowing down Silverlight Prism module loading (for testing)

During localhost testing of modular Prism-based Silverlight applications, the XAP modules download too fast to get a feel for the final result. This makes it difficult to see where progress indicators, splash screens, or other visual states need to be shown.
What is the best (or most standard) method for intentionally slowing down the loading of XAP modules and other content in a local development set-up?
I've been adding the occasional timer delay (via a code-based storyboard), but I would prefer something I can place under the hood (say, in the Unity loader?) to add a substantial delay to all module loads, in debug builds only.
Suggestions welcomed*
*Note: I have investigated the "large file" option and it is unworkable for large projects (XAP creation fails with an out-of-memory error when the files are really large). The solution needs to be code-based and should preferably hook in behind the scenes to slow down module loading in a localhost environment.
Note: To clarify, we are specifically seeking an answer compatible with the Microsoft Prism pattern and the Prism/CAL libraries.
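For reference, the kind of code-based storyboard delay mentioned above might look roughly like the sketch below (the ShowModuleContent callback is a made-up name, and this is an illustration rather than the asker's actual code):

// Crude DEBUG-only delay built from an empty Storyboard with an explicit
// Duration (requires System.Windows.Media.Animation). The Completed event
// fires after the duration elapses, at which point the real work runs.
#if DEBUG
var delay = new Storyboard { Duration = new Duration(TimeSpan.FromSeconds(3)) };
delay.Completed += (s, e) => ShowModuleContent();
delay.Begin();
#else
ShowModuleContent();
#endif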

Do not add any files to your module projects. This adds unnecessary regression testing to your module, since you are changing the layout of the module by extending its non-executable portion. Chances are you won't do this regression testing, and who knows whether it will cause a problem. Best to be paranoid.
Instead, write a Delay(int milliseconds) procedure and compose it into the callback you use to retrieve the remote assembly.
In other words, decouple assembly resource acquisition from assembly resource usage, and between these two phases insert an arbitrary, random amount of wait time. I would also recommend logging the actual time it takes remote users to get the assembly, and using that for future test points so that your UI designers and QA team have valuable information on how long users are waiting. This lets you cheaply mock up the end-user's experience in your QA environment. Just make sure your log includes relevant details such as the size of the assembly requested.
I posed a question on StackOverflow a few weeks ago about something related to this, and had to deal with the question you posed, so I am confident this is the right answer, born from experience, not cleverness.
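A minimal sketch of that idea, assuming the loading code ends up with an Action<Stream> callback that receives the downloaded XAP (the ModuleLoadDelay class and its method are made-up names for illustration, not part of Prism):

using System;
using System.IO;
using System.Threading;
using System.Windows;

// Hypothetical helper: decouples "XAP acquired" from "XAP used" by wrapping
// the completion callback and inserting a random wait, in DEBUG builds only.
public static class ModuleLoadDelay
{
    private static readonly Random Rng = new Random();

    public static Action<Stream> WithDelay(Action<Stream> onXapReady, int minMs, int maxMs)
    {
#if DEBUG
        return xapStream =>
        {
            int delayMs = Rng.Next(minMs, maxMs);
            // Sleep off the UI thread, then marshal back to the UI thread
            // before invoking the real callback that consumes the XAP.
            ThreadPool.QueueUserWorkItem(_ =>
            {
                Thread.Sleep(delayMs);
                Deployment.Current.Dispatcher.BeginInvoke(() => onXapReady(xapStream));
            });
        };
#else
        return onXapReady;
#endif
    }
}

Wherever the loading code currently passes its completion callback, pass ModuleLoadDelay.WithDelay(callback, 500, 3000) instead; in release builds the wrapper hands back the original callback unchanged, and the random range can be tuned to match the download times you log from real users.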

You could simply add huge files (such as videos) to your module projects. It'll take longer to build such projects, but they'll also be bigger and therefore take longer to download locally. When you move to production, simply remove the huge files.

Related

How to set up obfuscation + error reporting?

Good day!
I want to distribute a C# application and want to protect it.
I need:
obfuscation - protection of the source code and text resource files.
error reporting - a report on unhandled errors.
a clear view of obfuscated stack traces.
assurance that there are no changes to the source code.
What problems can arise due to obfuscation (e.g. serialization/deserialization, reflection, globalization)? How complex are the solutions to these problems?
What methods/tools/approaches do you recommend?
Thanks for your help!
Disclaimer: I work for Red Gate.
SmartAssembly does what you're after. For your points in turn:
1) It does control flow obfuscation, method / field renaming, compression / encryption of resources and embedded strings, and separation of methods from their containing classes.
2) Automated error reporting automatically detects and reports unhandled exceptions (it also grabs and sends the stack trace, the values of all local variables, and some general system info).
3) The obfuscated stack trace gets decoded again on your machine so you can see it in clear view.
4) Not 100% sure that I know what you mean by this, but tamper protection prevents the app from running at all if any modifications are made to it. If you mean you don't want to make changes to your own source code, it is run as a post-build process so doesn't need any changes to be made to the source.
Regarding problems you might get with obfuscation: by far the most common are caused by reflection (as a result, WPF often causes problems), and data binding causes lots of issues too. Most obfuscators should let you exclude individual types and methods which have problems with reflection, though obviously that leaves those types and methods unprotected.
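As an aside (not something mentioned in the original answer), the standard .NET way to mark such exclusions in code is System.Reflection.ObfuscationAttribute; most obfuscators honour it, but check your tool's documentation. A small illustrative example:

using System.Reflection;

// Keep this type's names intact because it is located via reflection and
// data binding; whether and how the attribute is honoured depends on the
// obfuscator in use.
[Obfuscation(Exclude = true, ApplyToMembers = true)]
public class CustomerViewModel
{
    public string DisplayName { get; set; }
}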
There are other obfuscators too - I know a couple of people who use one from PreEmptive called Dotfuscator.
Crypto Obfuscator supports all the features you are looking for including obfuscation, code-protection as well as Exception Reporting (with automatic de-obfuscation as well as full values of all method parameters and local variables).
Another unique feature of Crypto Obfuscator is the Warnings tab shown after obfuscation. This lists all lines of code in your assemblies which can potentially cause the obfuscated assembly to fail, so you don't have to shoot in the dark trying to figure out why obfuscated assemblies are not working.
DISCLAIMER: I work for LogicNP Software, the developer of Crypto Obfuscator.

Usage of static analysis tools with ClearCase/ClearQuest

We are in the process of defining our software development process and wanted to get some feedback from the group about this topic.
Our team is spread out - US, Canada and India - and I would like to put in place some simple standard rules that all teams will apply to their code.
We make use of ClearCase/ClearQuest and RAD.
I have been looking at PMD, CPP, Checkstyle and FindBugs as a start.
My thought is to just put these into Ant and have the developers run them manually. I realize that doing this requires some trust that each developer will actually do so.
The other thought is to add some builders to the IDE which would run a subset of the rules (to keep the build process light) and then run another, heavier set when they check in the code.
Another idea is to make use of something like CruiseControl and have it set up to run these static analysis tools, along with the unit tests, whenever ClearCase/ClearQuest is idle.
I'm wondering if others have done this, whether it was successful, and what lessons were learned.
We have:
ClearCase used with Hudson for any "heavy" static analysis step
Eclipse IDE with the tools you mentioned integrated with a smaller set of rules
Note: we haven't really managed to make replicas work with our different user bases (US, Europe, Hong Kong), so we are using CCRC instead of MultiSite.
ClearCase being mainly used in Europe, the analysis step takes place during the night there (GMT), and uses snapshot views to make sure it goes as quickly as possible (a dynamic view involves too much network traffic when accessing large files).
I'd use Hudson to run static analysis on SCM changes if your code base is not too large, or on periodic builds if it is.
OK, I can't resist... If your team is spread out, why in the world would you use ClearCase? As someone who had to use it, when our company switched to Mercurial the team velocity improved immensely. That MultiSite junk is just awful.

How to profile a Silverlight MVVM application with a lot of custom controls

There is a quite big LOB Silverlight application, and we wrote a lot of custom controls which are rather heavy to draw.
All data is loaded by a RIA service, processed and bound (using the INotifyPropertyChanged interface) to the view.
The problem is that the first drawing takes a long time. Subsequent calls to the service (server) and redrawing are quite fast.
I used the EQATEC profiler to track down the problem. I saw that the processing takes only a couple of milliseconds, so my guess is that the drawing by the SL engine is slow.
I'm wondering if it is possible to somehow profile the processes inside SL to check which drawing operations are taking too much time. Are there any guidelines on how to implement faster drawing of complex custom controls?
Short answer is - No, there's no super easy way of figuring out why your application is slow.
Long Answer:
I have never used the EQATEC profiler for Silverlight, but it seems similar to dotTrace. Either way, they both end up showing the same information as xPerf.
Basically the information you should have in front of you is saying which methods and classes took up the most time to execute.
If that information points back to Silverlight framework graphics engine (agcore.dll and npctrl.dll), you'll have to start a slow process of figuring out what you did wrong.
At this point I strongly recommend that you watch every single talk Seema Ramchandani gave about Silverlight performance, specifically PDC08, MIX09 and MIX10.
Step #1 of perf optimization: Measure. Measure. Measure.
Have a clear baseline of what you're trying to improve, and set a numeric expectation to when performance is good enough.
That way you can verify that your changes are having a positive impact on performance.
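(A tiny, hypothetical example of establishing such a baseline, not taken from this answer: time a custom control from construction to its first Loaded event and log the result.)

using System;
using System.Diagnostics;
using System.Windows.Controls;

// Hypothetical heavy custom control: records how long it takes from
// construction until its first Loaded event, giving a number you can
// compare before and after each change.
public class HeavyChartControl : UserControl
{
    private readonly DateTime _constructedAt = DateTime.Now;
    private bool _logged;

    public HeavyChartControl()
    {
        Loaded += (sender, args) =>
        {
            if (_logged) return;   // Loaded can fire again when re-added to the tree
            _logged = true;
            double ms = (DateTime.Now - _constructedAt).TotalMilliseconds;
            Debug.WriteLine("HeavyChartControl first load: " + ms + " ms");
        };
    }
}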
Step #2 of perf optimization: Start removing stuff.
In your case, I'd start commenting controls out of the form. When perf massively improves, you've found your culprit.
Step #3 of perf optimization: Try to fix the weak link.
That's how I would go about solving this issue.
Sincerely,
-- Justin Angel
Try profiling with the Visual Studio profiler in order to get a good measure of your managed code and the native code executing within Silverlight. The profiler will help point you to where you're spending most of your time (what the hot paths are) and whether you're spending it in framework (SL) code or your own code.
The basics of profiling are:
Open a Visual Studio Command Prompt (as admin), 'cd' to the directory where your DLL and PDB files are (typically your "Debug" folder)
VSPerfClrEnv /sampleon
VSPerfCmd -start:sample -output:somefile.vsp
VSPerfCmd -globalon
VSPerfCmd -launch:"c:\Program Files (x86)\Internet Explorer\iexplore.exe" -args:""
VSPerfCmd -shutdown
VSPerfClrEnv /off
You can find detailed instructions on using the profiler on my blog: http://www.nachmore.com/2010/profiling-silverlight-4-with-visual-studio-2010/
If you find that you are spending time within Silverlight, track down the code path to see where your code is triggering the expensive calls, so that you can then investigate specific solutions based on the cause of the slow down.
Hope that helps,
Oren

A step-up from TiddlyWiki that is still 100% portable?

TiddlyWiki is a great idea, brilliantly implemented. I'm using it as a portable personal "knowledge manager," and these are the prize virtues:
It travels on my USB flash memory stick and runs on any computer, regardless of operating system
No software installation is needed on the computer (TiddlyWiki merely uses the Internet browser)
No Internet connection is needed
In terms of data retrieval functionality, it mimics a relational database (use of tags and internal links)
Set up and configuration are so simple as to be almost zero. This would also mean dependencies are so minimal as to be transparent, or nearly so.
Let's say I've got a million words of prose in 4,000 tiddlers (posts). I'm still testing, but it looks like TiddlyWiki gets very slow.
Is there an app like TiddlyWiki that keeps all the virtues I listed above, and allows more storage? (or rather, retrieval!)
NOTE: Separation of content and presentation would be ideal. It's nifty that TiddlyWiki has everything in a single HTML document, but it's unhelpful in many ways. I don't care if a directory of assorted docs is needed (SQLite, XML?), as long as it's functionally self-contained.
After some time and serious consideration, I will post my own answer.
There is nothing that matches TiddlyWiki.
As for voluminous information, TW can pretty much handle it. (My early discouragements were due to malformed code.) Difficulty accessing information through the interface becomes an issue before any speed problems. This isn't to fault the interface -- it could be more powerful, but that would sacrifice lightness.
Indeed, TiddlyWiki can work with VERY large tiddler stores; they don't need to be in the current TiddlyWiki document, either.
See "import tiddler" and friends over at http://tiddlytools.com
Before creating Rails, David Heinemeier Hansson wrote a wiki app called Instiki. Like TiddlyWiki, you don't run it from a separately running server*, so it's easy to run locally and move around on a USB drive (exporting the entire content to a zip file with all the html files or all the files in Textile markup). The entire Instiki tgz download is less than 5mb and the app has only one external dependency: Ruby.
So you can run Instiki anywhere you can run Ruby (for instance, on a Nokia N900 phone).
I never built any Instiki sites as large as you describe, but it ought to handle 1 million words in 4,000 pages a lot easier than TiddlyWiki handles 4,000 tiddlers.
Roger_S
* Oh, not to confuse anyone: Instiki uses the embedded webserver WEBrick
You could try installing Portable Apps on your USB drive and adding the XAMPP Package which has Apache, PHP, MySQL all installed and running MediaWiki or other Wiki software on top of it.
http://tiddlyweb.peermore.com/wiki/ may be exactly what you are looking for.
You can use any TiddlyWiki variant and the data can be delivered via a server and on-demand.
I have recently discovered DokuWikiStick which runs a version of MicroApache. Recommended by LifeHacker... Starting size is about 10MB.
You probably already know this, but there's a new version of TiddlyWiki out that is still in beta but has been rewritten to provide a more robust environment for the future.
http://tiddlywiki.com/
2020 answer, from 2017
Check out liddly, a local TiddlyWiki server written in Go that fits all your requirements and can run off a USB drive. It stores tiddlers in a SQLite database, albeit without relational links, keeping the TiddlyWiki interface (presentation) separate from your data (content). It was last updated in 2017, but it still works with the latest TiddlyWiki5; you will just have to compile it yourself.

How to merge Drupal database changes

We currently use an SVN repository to ensure everyone's local environments are kept up-to-date. However, Drupal website development is somewhat trickier in that any custom code you write (for instance, PHP code written for a node body) is stored in the DB and the changes aren't recognized by the SVN working copy.
There are a couple of developers presently working on the same area of a Drupal site, but we're uncertain about how best to merge our local Drupal database changes together. Committing patches of database dumps seems clumsy at best and is most likely inefficient and error-prone for this purpose.
Any suggestions about how to approach this issue are appreciated!
Unfortunately, database deployment/update is one of Drupal's weak spots. See this question & answers as well as this one for some suggestions on how to deal with it.
As for CCK, you could find some hints here.
As for PHP code in content, I agree with googletorp that you should avoid doing this. However, if for some reason you absolutely have to do it, you could try to reduce the code to a simple function call. That way you'd have the function itself in a module (and it would be tracked via SVN). But then you are only a small step from removing the need for the inline code anyway...
If you are putting PHP code into your database then you are doing it wrong. Some things do live inside the database, like Views and CCK fields plus some settings, but if you put PHP code inside the node body you are creating a big code-maintenance problem. You should really use the API and hooks instead. Create modules instead of ugly hacks with eval etc.
All that has been said above is true and good advice. To answer your practical question, there are a number of recent modules that you could use to transport the changes made by the various developers.
The "Features" modules is a cure the the described issue of Drupal often providing nice features, albeit storing lots of configs and structure in the DB. This module enables you to capture a feature and output it as a pseudo-module (qualifies as a module with .info and code-files and all). Here is how it works:
Select functionality/feature to export
The module analyses the modules, files and DB content required to rebuild that feature elsewhere
The module creates a pseudo-module that contains the instructions from #2 and outputs everything (even SQL to rebuild the stuff in the DB) into a module package (as well as setting dependencies on other required modules)
Install the pseudo-module on your new site and enable it
The pseudo-module replicates the feature you exported rebuilding DB data and all
And you can tell your boss you did it all manually with razor focus to avoid even 1 error ;)
I hope this helps - http://drupal.org/project/features
By committing patches of database dumps, do you mean taking an entire extract of the db and committing it after each change?
How about a master copy of the database? Extract all tables, views, stored procedures, etc. into individual files, put them into SVN, and do your merge edits on the individual objects?
