Is there consensus on a view engine for Nancy?

I was a little startled to learn that Nancy has its own Razor implementation, which may or may not behave like the real Razor. In practice, does this cause issues? What are "most people" using for a Nancy view engine? Why was the real Razor not used?

First, the easy answer: the Razor engine is, by far, the most downloaded view engine available for Nancy (https://www.nuget.org/packages?q=nancy.viewengines).
Now for the more complicated questions.
Why was the real Razor not used?
Because the "real" (and by real I'm going to assume that you mean the one used by the ASP.NET stack) Razor engine is tied to the HTTP abstractions that are built into the ASP.NET stack (HttpContext and all its friends) so there is no straight forward way to use it with Nancy.
The slightly longer answer is that you have to understand that Razor is really a parser, and a Razor view engine is what sits between the consumer and the parser.
Nancy uses the Razor parser, but we have to have our own view engine because that is what enables Nancy to parse and execute Razor templates.
Now, it does get even more complicated. Many of the features you see in the ASP.NET Razor view engine, such as master pages, partials, various helpers, _ViewStart and so on, are not features of Razor (the parser) but an additional feature set that has been built into the view engine (you can almost think of it as middleware).
This means that for our engine we have had to re-implement many of these features, because that is what has come to be expected from a Razor view engine.
I would like to point out that, if it were possible, we would love to ditch our own implementation and use the one built by Microsoft (less code for us to maintain, and it would mean we'd support 100% the same feature set), but unfortunately that's not our decision to make; we can't take a dependency on their abstractions, I'm afraid.
Hope this clears things up
/A

We have been using the Razor implementation from Nancy for a while now. We have run into a couple of issues that are making us either switch to SSVE (the Super Simple View Engine) or abandon Nancy (and we really do love Nancy).
The first issue with Razor is that you cannot precompile views like you can in MVC, which leads to much longer startup times. We have had many complaints about this.
The second issue is what appears to be a long-standing bug in Nancy's Razor implementation that leads to a situation which can only be resolved by recycling the application pool. I'm not an expert, but it seems that when the project is loaded a temporary DLL is compiled and generated (this explains the slower load times), and sometimes something goes wrong that leaves the view instance unable to be created. It appears to happen at this location: https://github.com/NancyFx/Nancy/blob/master/src/Nancy.ViewEngines.Razor/RazorViewEngine.cs#L238. Basically, viewAssembly.GetType("RazorOutput.RazorView") is null at various times, which causes an error message to be displayed on every page, for every user, at all times, and the only way to fix it is to reload the application (recycle the application pool).
Just my two cents; I know this post is older, but maybe others will run into some of the same problems we have. I've opened a GitHub issue, but the bug is hard for us to reproduce and it hasn't gone anywhere.

Related

AngularJS 1.4 full scale upgrade to Angular 8. Should I migrate to 1.5 then upgrade or just rewrite? [duplicate]

This question already has answers here:
AngularJS 1.4 --> Angular 9 migration vs bigbang rewrite [duplicate]
(2 answers)
Closed 2 years ago.
I have done a ton of research and haven't found anything that has helped me decide what the best route will be (vertical slicing, horizontal slicing, or a complete rewrite). I am working on a very LARGE program that is very ugly, with no comments, and I need to migrate it to Angular 8 if possible, or at least up to Angular 7. A lot of people recommend https://angular.io/guide/upgrade; however, it doesn't help much with migrating to 1.5 first. Does anybody have experience with a large-scale migration? Currently the program is not being used, so downtime is no issue.
It doesn't seem like it at first, but a re-write is usually a more cost-effective solution than an upgrade. An upgrade seems like the fastest route to re-deployment, but in my experience, if you did the two side by side you might find the times similar, except that the migration deployment has to be all or nothing, whereas a re-write means you can deploy with reduced functionality and build the feature set back up.
More importantly, the ongoing maintenance of an upgraded site becomes exponentially harder and more time consuming. You're really applying band-aids over the top of previous patches and fixes.
There are new concepts, and better native support for directives and controls that we used to rely on third parties for or roll our own; it's also an entirely new language to understand. Take this opportunity to wipe the slate clean of your solution's technical debt.
Rewrite - Hybrid
Do you need to deploy everything in one go? Are you interested in re-branding?
One thing Microsoft has done well in the past is the hybrid roll-out of their preview portals.
The best case study IMO is the Azure Portal.
A few years ago we had a pretty fully featured portal interface for managing Azure assets. This would later become known as the Classic Portal when they started work on an entirely new user experience.
At first release the menu system was largely complete; we could navigate most assets in the new portal, but when you came to a feature that had not yet been redesigned, the link took you back to the Classic Portal.
So you could do the same: have the two user interfaces deployed to different URLs, and start by making sure that authentication and navigation are largely complete but have all links take the user back to the original interface. Then, feature by feature, implement the new interface; and because you can't control everything, keep a button or link on each page that takes the user back to the original implementation until your regression testing confirms that you have reached feature parity.
That is another key take-away from the MS hybrid approach: significant change like this WILL annoy your users. So while you are in transition, allow users to choose when they themselves migrate over. Initially MS achieved this at login: the user could log in at either of the main URLs, and based on their profile they would be redirected to the portal of their choice.
The last step is to restrict access to features in the old interface by making the navigation and links in the old portal navigate directly into the new interface, or, less intrusively, add an 'end of life' banner on each page of the old site for which you have completed the re-write.
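To make that fallback concrete, here is a minimal, framework-agnostic sketch of the per-feature redirect described above. The feature names, URLs and the resolveFeatureUrl helper are all invented for illustration; in an Angular hybrid you would express the same decision in your router or a route guard.

```javascript
// Hypothetical sketch: names and URLs are placeholders, not a real deployment.
const LEGACY_BASE_URL = 'https://classic.example.com';

// Features that have already been re-written in the new interface.
const migratedFeatures = new Set(['login', 'dashboard', 'billing']);

// Decide where a navigation request should go while both UIs are deployed.
function resolveFeatureUrl(feature) {
  if (migratedFeatures.has(feature)) {
    return '/app/' + feature;             // handled by the new interface
  }
  return LEGACY_BASE_URL + '/' + feature; // fall back to the classic site
}

// Links to un-migrated features keep sending users to the old portal.
console.log(resolveFeatureUrl('dashboard')); // "/app/dashboard"
console.log(resolveFeatureUrl('reports'));   // "https://classic.example.com/reports"
```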
Do not confuse this with the Preview Mode in Office 365; the Azure Preview Portal was a ground-up re-write and is still in progress. There are many licensing operations for which I still use the Classic Portal, as I still manage some Classic-only Azure assets that have not yet been re-deployed.
I would consider the following issues when choosing between a migration/upgrade and a straight-up re-write:
Migrate to AngularJS 1.5 first
This operation is only marginally easier than the upgrade to Angular 2+. All of the arguments below apply to this process just as they do to the next upgrade.
One of the reasons to go to 1.5 first is to escape the legacy dependencies that do not have a simple, direct upgrade path to 2+.
During the upgrade to 1.5 you should consider implementing a component-based architecture (if the current code does not already do so).
A key element of components is less configuration and a simplified design, so read this as less to upgrade and less that can go wrong.
Components are, of course, more closely aligned with current Angular implementations; if you do not yet use AngularJS components, they might be a good interim step to understand before learning Angular 2+ (see the sketch below).
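As a quick illustration, here is a minimal AngularJS 1.5 component; the module name, component name and bindings are invented for the example. Notice how little configuration there is compared with a classic directive (no scope, link or restrict options), which is what makes components easier to carry forward to Angular 2+.

```javascript
// Minimal AngularJS 1.5 component; "app" and "userCard" are example names.
angular.module('app', [])
  .component('userCard', {
    // One-way binding keeps the data flow explicit instead of sharing scope.
    bindings: {
      user: '<',
      onRemove: '&'
    },
    controller: function UserCardController() {
      var $ctrl = this;
      // Delegate the action back to the parent rather than mutating shared state.
      $ctrl.remove = function () {
        $ctrl.onRemove({ user: $ctrl.user });
      };
    },
    template:
      '<div class="user-card">' +
      '  <span>{{$ctrl.user.name}}</span>' +
      '  <button ng-click="$ctrl.remove()">Remove</button>' +
      '</div>'
  });
```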
No Comments
This is a bigger red flag than you might think. If the code base is not documented, then any sort of maintenance becomes incrementally harder, because each time the code must be re-read and re-interpreted before you can effect change.
So if a migration is on the cards, we're talking about every line of code needing, at some point, to be re-read and understood to make sure that it still works correctly in the upgraded framework.
AngularJS 1+ => Angular2+
While some of the core frameworks and third-party libraries can be migrated, most controller JavaScript files cannot simply be converted to TypeScript without a fair amount of effort. It's pretty common in JavaScript to cut a lot of corners in terms of type definitions and where things are defined, which means that after migration you will spend a lot of time going back through most JavaScript files one method at a time.
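As a small, invented illustration of the corner-cutting that makes this painful, consider a helper whose argument changes shape depending on the caller:

```javascript
// Invented example: the same parameter is treated as a string in one branch
// and as an object in another, which a TypeScript migration forces you to untangle.
function formatUser(user) {
  if (typeof user === 'string') {
    return user.toUpperCase();                   // some callers pass just a name
  }
  return user.firstName + ' ' + user.lastName;   // ...others pass a full record
}

// Under TypeScript this needs an explicit union type
// (string | { firstName: string; lastName: string }),
// or, more likely, the call sites themselves have to be cleaned up one by one.
```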
Very large
This makes it a strong candidate for automation or migration, but ultimately it means that the total surface area to test, debug and re-design is also very large. If the initial migration does not compile, it could be a long road of tweaking before you can get the user interface up so you can start interface testing.

Is the Meanstack suitable for production?

I have been looking at the various MEAN stack frameworks out on the net, and whilst I am impressed with what they achieve, I have one serious concern: the number of files used in a typical stack. meanstack.js uses over 15,000 files, whilst the bmean example has a modest 1,900 in comparison.
The question I am asking myself is whether I would be happy to put my trust in such a system from a production viewpoint: when something goes wrong, how easy is it going to be to find the answer? You can almost bet that when your most important customer logs on, it is going to go haywire. Also, what happens when Angular version 2 comes along? It could require a complete rewrite, but by then the stack you're using has been customised and is difficult to change.
Am I getting over-concerned about the technology? My intended approach is to strip the client-side code out of the bmean example and rewrite it with my own; at least that way I know (and control) what goes on in the client. Do you think this is the correct way to proceed?
With most systems there is a bit of preparation required before going to production. The same is true with mean.io (using multiple CPUs, improved aggregation, caching, etc.).
The large number of files is essentially a product of the way npm handles dependencies. Each module is able to define independent versions of the same dependencies, which creates a bit of bloat but at the same time allows a lot of flexibility in Node.js code.
We currently have a number of mean.io projects in production phase and have been very happy with performance and the overall experience.
New releases of the project are scheduled every couple of months, upgrading should not be too much of a problem if you use the package system correctly.
Issues with the project are handled and managed through GitHub issues; additional support can be found on our IRC channel (freenode #mean_io) as well as on Facebook.
For commercial support, have a look at the support page.

Is Mapstraction still the way to go

I was starting to write a multi-map JS library, but I see that Mapstraction does exactly that.
I really would like to use Mapstraction, but it looks a little old (judging by the commits on GitHub), which is not an issue if it is still "supported". Also, on the tutorial page the maps do not show up in my browsers.
Any input is highly appreciated.
Thanks
Kim
It depends what your use case is: if you don't want to be tied to any one map provider and only need standard features, then yes.
Version 2.1 of Mapstraction is currently being prepared (on the release-2.1 branch) which brings some improvements in behavioural consistency across the providers.
Because of the nature of the library, development isn't continuous and tends to be reactive to changes in the underlying providers or to issues being raised. That said, the community is fairly active and you can sign up to the mailing list via the site.

Using Backbone.js offline

I'm evaluating Backbone.js for keeping data and UI synchronized in my web app. However, much of Backbone's value seems to lie in its use of RESTful interfaces. Though I may add server-side backup in the future, my primary use case involves storing all data offline using HTML5 local storage.
Is Backbone overkill for such a use case? If so, is there a better solution, focused solely on updating the UI when data changes, and vice versa? (I'm also looking into Knockout and JavaScript MVC.)
EDIT: I'm also now looking into Angular.js and jQuery Data Link.
Backbone.js works just as well with local storage as it does with RESTful queries.
I'm a learn-by-example kind of guy so here are some links to get you started:
Todos, a todo application that uses local storage and Backbone.js; check out the annotated source to see how it works.
The localStorage adapter is all you need to get started; take a look at the annotated source of that too.
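For reference, here is a minimal sketch of what that looks like, assuming Backbone, Underscore and the backbone.localStorage adapter are loaded on the page; the model, collection and storage key names are invented for the example. The only real difference from a RESTful setup is that the collection declares a localStorage store instead of a url.

```javascript
// Minimal sketch assuming the backbone.localStorage adapter (Backbone.LocalStorage)
// is loaded alongside Backbone; fetch()/save() then use window.localStorage, no server.
var Todo = Backbone.Model.extend({
  defaults: { title: '', done: false }
});

var TodoList = Backbone.Collection.extend({
  model: Todo,
  // "todos-example" is just the key prefix used in local storage.
  localStorage: new Backbone.LocalStorage('todos-example')
});

var todos = new TodoList();
todos.fetch();                                   // load anything saved previously
todos.create({ title: 'Try Backbone offline' }); // persisted to local storage
```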
In the past few weeks I have evaluated different solutions for a scenario close to yours; since it was a project done in my personal free time and I am not a good JavaScript programmer, all I needed was something easy to learn, to avoid starting from scratch.
Not surprisingly, I had the same candidates: Backbone.js, JavaScript MVC and Knockout.js.
Backbone.js won:
I wasn't required to follow conventions or replace what was already in place
I easily hacked into its codebase to understand what wasn't clear from the documentation
I successfully ignored the large number of its features that weren't interesting to me
It gave acceptable performance on busy pages
It works
Backbone.js is lightweight and relatively magic-free; you will probably use a small subset of its features, but it provides a solid base on which to develop your solution.
I know it's been a while, but you may want to check out the backbone-offline project on GitHub: https://github.com/Ask11/backbone.offline
You can also take a look at AFrameJS. I have created a bare-bones proof-of-concept note-taking app that works offline using the HTML5 WebSQL spec, but I also want to create an adapter that uses localStorage. My personal opinion (and I am biased) is that using an MVC library of any sort is going to help you in the long run; the value of libraries such as Backbone, Knockout, and AFrame lies in their ability to reduce the developer's cognitive load by enforcing a good separation of concerns. Data-related functionality resides in models, displaying that data resides in views, and the glue is kept in controllers. Separating these three concepts might seem pedantic at first, but the end result is code that is easier to develop, test, maintain, and reuse. A basic tutorial on using AFrameJS can be found on my site at http://www.shanetomlinson.com/2011/aframejs-tutorial-for-noobs/

WPF - Does anyone actually use XBAPs, and is there a good reason why they are used?

Apart from the fact that people get to view the app in a browser, which may be familiar, is there any actually compelling reason to use the XBAP model in WPF rather than a straightforward, standalone WPF app?
All I can see are potential security issues and restrictions but no benefits. Am I missing something?
I have used an XBAP, once.
We needed full-trust, and we needed the application to act as if it were browser hosted. XBAP was the only real option we had, and I'm glad it was there.
Outside of this tiny niche, Silverlight and ClickOnce are better all-around options.
In practice, "no" and "no" would be the answers to your questions. I have never actually seen them used in production, nor is there ever really a justified reason to use them.
As Kent mentioned, Silverlight or ClickOnce is almost always a better option.
One could argue that, in a full-trust, Windows-only environment, XBAPs give you the ability to leverage the full WPF framework with the flexibility of web deployment. Of course, that is what ClickOnce is for. However, in my experience ClickOnce is a nightmare for anything more than a simple, single-application install, so you might argue in favour of XBAP to avoid ClickOnce headaches.
But again, my response would be, Silverlight is likely a better choice.
We use it to have a single-sourced solution for an application that can run in a browser but also as a desktop application. Both are full trust.
Actually, a modularly designed app consisting of XBAPs communicating via web services is very efficient. This type of scenario allows the modular pieces to run concurrently and in separate memory spaces, which benefits both the user and the application's developers.
The app would not run in IE but rather in a custom browser shell that controls the flow and execution of the application itself. It does seem like a lot of work when everything could simply run within a single project or multiple projects, but this type of solution is pertinent in large enterprise apps. The application programmers are able to work on segments or distinct parts of the app, each containing distinct functions, utilities, and capabilities. The user never knows or realises that each part is actually running independently, because it appears seamless. The partial-trust issue is eliminated because the shell is not an XBAP and has full permissions. Now to the good stuff: if there happens to be a fault (that never happens, right?), other parts of the application continue to execute without failure. Try-catch-finally works great, until you miss one. Last but not least, there is no more complicated background-thread processing; it's in a browser and is async by default. Most systems will have multiple windows open at a time, and each window simply contains a browser running an XBAP. Unique? Yes. Useful? Yes. It is a different approach, but it is clean and simple.
Life is a race ... When you reach the finish line who will be there cheering for you and will you be proud of the race that was run?
XBAPs using partial trust are useful if you have a requirement that the WPF client should run without admin privileges and without installing anything on the client's machine (disregarding the user's profile cache, that is).
I was thinking the same thing, here is my takeaway.
The main reason is the user experience: WPF apps are more powerful and easier to write than Silverlight apps. People will click on a web site, but will think twice about installing an application. An XBAP is very close to a website experience, and can outperform ClickOnce and Silverlight.
However, since it only works for a very narrow user base, it would probably be best suited to intranet applications.
WPF, XBAP, Silverlight - What do I use?
