I was starting to write a multi-map JS library, but I see that Mapstraction does exactly that.
I would really like to use Mapstraction, but it looks a little old (judging by the commits on GitHub), which isn't an issue if it is still "supported". Also, on the tutorial page the maps do not show up in my browsers.
Any input is highly appreciated.
Thanks
Kim
It depends what your use case is - if you don't want to be tied to any one map provider and only need standard features then, yes.
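To give a feel for what "standard features" means here: the same few calls work against whichever provider you name in the constructor. A minimal sketch (the element id, provider name and coordinates are placeholders, and it assumes the mxn core script plus the chosen provider's own scripts are already on the page):

```javascript
// Provider-agnostic map setup with Mapstraction - 'map-div' and the
// coordinates are illustrative only.
var map = new mxn.Mapstraction('map-div', 'openlayers'); // swap 'openlayers' for another supported provider
map.setCenterAndZoom(new mxn.LatLonPoint(51.5, -0.12), 10);
map.addMarker(new mxn.Marker(new mxn.LatLonPoint(51.5, -0.12)));
```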
Version 2.1 of Mapstraction is currently being prepared (on the release-2.1 branch) which brings some improvements in behavioural consistency across the providers.
Because of the nature of the library, development isn't continuous and tends to be reactive to changes in the underlying providers or to issues being raised. That said, the community is fairly active and you can sign up to the mailing list via the site.
I have done a ton of research and haven't found anything that has helped me decide which will be the best route (vertical slicing, horizontal slicing, or just a complete rewrite). I am working on a very large program that is very ugly, with no comments, and I need to migrate it over to Angular 8 if possible, or at least up to Angular 7. A lot of people seem to recommend https://angular.io/guide/upgrade; however, it doesn't help much with migrating to 1.5 first. Does anybody have experience with a large-scale migration? Currently the program is not being used, so downtime is no issue.
It doesn't seem like it at first, but a rewrite is usually a more cost-effective solution than an upgrade. An upgrade seems like the fastest route back to deployment, but in my experience, if you ran the two side by side you might find the timelines similar, except that the migration will have to be deployed all or nothing, whereas a rewrite can be deployed with reduced functionality and the feature set built up over time.
More importantly, the ongoing maintenance of an upgraded site becomes exponentially harder and more time-consuming. You're really applying band-aids on top of previous patches and fixes.
There are new concepts, better native support for directives and controls that we used to rely on third parties for or roll our own, and it's an entirely new language to learn. Take this opportunity to wipe the slate clean of your solution's technical debt.
Rewrite - Hybrid
Do you need to deploy everything in one go? Are you interested in re-branding?
One thing Microsoft has done well in the past is the hybrid roll-out of their preview portals.
The best case study IMO is the Azure Portal.
A few years ago we had a pretty fully featured portal interface for managing Azure assets. This would later become known as the Classic Portal when they started work on an entirely new user experience.
At first release the menu system was largely complete and we could navigate most assets in the new portal, but when you came to a feature that had not yet been redesigned, the link took you back to the Classic Portal.
So you could do the same: have the two user interfaces deployed to different URLs, and start by making sure that authentication and navigation are largely complete but that all links take the user back to the original interface. Then, feature by feature, implement the new interface, but because you can't control everything, keep a button or link on each page that takes the user back to the original implementation until your regression testing confirms that you have reached feature parity.
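As a very rough, framework-agnostic sketch of that idea (the legacy URL and the list of ported features are made up for illustration):

```javascript
// Route each feature to the new UI if it has been rebuilt, otherwise fall
// back to the legacy site - LEGACY_BASE_URL and the feature list are placeholders.
var LEGACY_BASE_URL = 'https://classic.example.com';
var portedFeatures = ['dashboard', 'profile']; // features already rebuilt in the new UI

function navigateTo(feature) {
  if (portedFeatures.indexOf(feature) !== -1) {
    window.location.href = '/app/' + feature;                 // new interface
  } else {
    window.location.href = LEGACY_BASE_URL + '/' + feature;   // old interface
  }
}
```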
That is another key take-away from the MS hybrid approach: significant change like this WILL annoy your users. So while you are in transition, allow the users to choose when they themselves migrate over. Initially MS achieved this at login: the user could log in at either of the main URLs, and based on your profile you would be redirected to the portal of your choice.
The last step is to restrict access to features in the old interface by making the navigation and links in the old portal navigate directly into the new interface - or, less intrusively, add an 'end of life' banner to each page in the old site for which you have completed the rewrite.
Do not confuse this with the Preview Mode in Office 365; the Azure Preview Portal was a ground-up rewrite and is still in progress. There are many licensing operations for which I still use the Classic Portal, as I still manage some classic-only Azure assets that have not yet been redeployed.
I would consider the following issues when choosing between a migration/upgrade and a straight-up rewrite:
Migrate to AngularJS 1.5 first
This step is only marginally easier than the upgrade to Angular 2+. All of the arguments below apply as much to this step as they do to the subsequent upgrade.
One of the reasons to go to 1.5 first is to escape the legacy dependencies that do not have a simple, direct upgrade path to 2+.
During the upgrade to 1.5 you should consider moving to a component-based architecture (if the current code does not already use one).
A key element of components is less configuration and simplified design, so read this as less to upgrade and less that can go wrong.
Components are of course more closely aligned with current Angular implementations; if you do not yet use AngularJS components, they might be a good interim step to understand before learning Angular 2+.
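For reference, a minimal AngularJS 1.5 component looks something like this (the module and component names are purely illustrative):

```javascript
// A small AngularJS 1.5 component: one-way input, output callback, inline template.
angular.module('appModule', [])
  .component('userCard', {
    bindings: {
      user: '<',     // one-way input binding
      onSelect: '&'  // output callback to the parent
    },
    controller: function () {
      var ctrl = this;
      ctrl.select = function () {
        ctrl.onSelect({ user: ctrl.user });
      };
    },
    template:
      '<div class="user-card" ng-click="$ctrl.select()">' +
      '{{ $ctrl.user.name }}' +
      '</div>'
  });
```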
No Comments
This is a bigger red flag than you might think. If the code base is not documented, then any sort of maintenance becomes incrementally harder, because each time the code must be re-read and re-interpreted before you can effect change.
So if a migration is on the cards, we're talking about every line of code needing, at some point, to be re-read and understood to make sure that it still works correctly in the new framework.
AngularJS 1+ => Angular2+
While some of the core frameworks and third-party libraries can be migrated, most controller JavaScript files cannot simply be migrated to TypeScript without a fair amount of effort. It's pretty common in JavaScript to cut a lot of corners in terms of type definitions and where things are defined, which means that after migration you will spend a lot of time going back through most JavaScript files one method at a time.
Very large
This is a strong candidate for automation or migration, but ultimately it means that the total surface area to test, debug and redesign is also very large. If the initial migration does not compile, it could be a long road of tweaking before you can get the user interface up and start interface testing.
I'm working on an app written in Codename One together with the parse4cn1 library, the combination of which is a real pleasure to use. However, I need support for a few things in parse4cn1 that are not implemented, most importantly ACLs, and I was wondering if Chidiebere has any hints on how to do this (e.g. how did you implement parse4cn1 yourself - from scratch, or by copying the open-source Parse SDK for Android)? If I manage to do something of decent quality I will try to share it back. Thanks in advance
I never got around to implementing ACLs (it's still on the TODO list). parse4cn1's interface closely resembles the Parse Android SDK interface and I'd like it to stay that way for convenience. In this case, the interface of interest would be ParseACL, which is documented here.
The actual implementation will need to be done via REST API calls.
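To illustrate the shape of such a call: in the REST API the ACL is simply another JSON field on the object. The sketch below uses JavaScript fetch purely to show the payload (parse4cn1 itself would go through its own HTTP layer); the server URL, keys and user id are placeholders:

```javascript
// Creating an object with an ACL via the Parse REST API - all identifiers here
// ('YOUR_PARSE_SERVER', 'APP_ID', 'REST_API_KEY', 'someUserId') are placeholders.
fetch('https://YOUR_PARSE_SERVER/classes/Note', {
  method: 'POST',
  headers: {
    'X-Parse-Application-Id': 'APP_ID',
    'X-Parse-REST-API-Key': 'REST_API_KEY',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    title: 'Private note',
    ACL: {
      someUserId: { read: true, write: true }, // the owner gets full access
      '*': { read: true }                      // everyone else is read-only
    }
  })
});
```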
Things to bear in mind:
We use the Android SDK API simply for defining the methods and signatures of the corresponding class, in this case ParseACL, but do not use the SDKs for anything that can be done via REST.
By design, any calls requiring the master key will not be supported in parse4cn1 due to security considerations. If really needed, the functionality should be exposed via server-side cloud code.
Pull requests without unit tests for the added functionality, or that break existing tests, will be rejected.
See also the Contributions section of the parse4cn1 GitHub repo.
Good luck with your implementation and I hope to see a PR from you soon ;)
It was implemented as a Java port on top of the REST APIs here, but was later modified to use the SDKs to allow things like push (which is now no longer relevant).
In the past I just contributed a pull request to the project to get the fixes/features I needed. It was really easy to work with and compile.
I have been looking at the various MEAN-stack frameworks out on the net, and whilst I'm impressed with what they achieve, I have one serious concern: the number of files used in a typical stack. meanstack.js uses over 15,000 files, whilst the bmean example has a modest 1,900 in comparison.
The question I am asking myself is whether I would be happy to put my trust in such a system from a production viewpoint - what happens when something goes wrong, and how easy is it going to be to find the answer? You can almost bet that when your most important customer logs on it is going to go haywire. Also, what happens when Angular version 2 comes along? It could require a complete rewrite, but by then the stack you're using has been customised and is difficult to change.
Am I getting over-concerned about the technology? My intended approach is to strip the client-side code out of the bmean example and rewrite it with my own - at least that way I know (and control) what goes on in the client. Do you think this is the correct way to proceed?
With most systems there is a bit of preparation required before going to production. The same is true with mean.io (using multiple CPUs, improved aggregation, caching, etc.).
The large number of files is essentially a product of the way npm handles dependencies. Each module is able to define independent versions of the same dependencies, which creates a bit of bloat but at the same time allows a lot of flexibility in Node.js code.
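As a rough illustration of why the file count balloons (module names and versions here are entirely hypothetical), the same library can end up installed once per module that depends on it:

```
node_modules/
  express/
    node_modules/
      lodash/        <- the version express asked for
  mongoose/
    node_modules/
      lodash/        <- a different version, installed again for mongoose
```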
We currently have a number of mean.io projects in production phase and have been very happy with performance and the overall experience.
New releases of the project are scheduled every couple of months, upgrading should not be too much of a problem if you use the package system correctly.
Issues with the project are handled and managed through GitHub issues; additional support can be found on our IRC channel (freenode #mean_io) as well as on Facebook.
For commercial support have a look at the support page
Is there a way to have one product definition and publish it to multiple sites? I am looking for this ability specifically in DNN or Umbraco, either with free or paid extensions. I did install both platforms, played with the free extensions, and looked for any extension offering such functionality, but did not find one. Any links or pointers are highly appreciated!
I had looked for this info in many places before reaching out to the expert pool here, hoping to get some hints.
In Umbraco there is the built-in /base extension (http://our.umbraco.org/wiki/reference/umbraco-base), which enables you to access product data maintained in Umbraco from other websites. /base is REST-ish, so the implementation is well documented - you can access the data as XML or JSON (see "Returning Json instead of XML with Umbraco Base").
Also, as the implementation is REST-ish, the other websites that consume the content maintained in the core site could be written in anything that can consume a REST feed, e.g. HTML and JavaScript.
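For example, a consuming site could pull the data with nothing more than a little client-side JavaScript. In the sketch below, 'ProductApi' and 'GetProduct' stand in for whatever /base extension alias and method you define yourself, and the host name is a placeholder:

```javascript
// Fetch product data from a hypothetical Umbraco /base endpoint
// (the typical URL pattern is /base/{extensionAlias}/{method}/{parameter}.aspx).
fetch('http://products.example.com/base/ProductApi/GetProduct/123.aspx')
  .then(function (response) { return response.json(); }) // assumes the method returns JSON
  .then(function (product) {
    document.getElementById('price').textContent = product.name + ': ' + product.price;
  });
```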
It's not 100% clear to me what setup you're after, but if you're looking to set up a traditional authoring/delivery configuration, one of the few paid offerings Umbraco has is called Courier. It's a very reasonably priced (~$135 USD / 99 EUR) deployment manager that handles syncing content between two sites, i.e. an authoring server and a delivery server.
It's a very smart tool that manages content, configuration, and dependencies. It's neat and also supports a great open-source project!
If you're looking to set up something more like a centralized product database that is used by many sites, amelvin's pointer to /base is a good one. It has a nice API, and you can also set up your own web service (beyond the built-in web service functionality!).
If you need this centralized product data to notify the other sites to update their caches, I'd encourage you to look into the 'distributedCall' functionality.
There's a bit of documentation on distributed calls in this load-balancing tutorial that may help you understand the concept a bit better.
...Hope this helps get you pointed in the right direction.
I'm evaluating Backbone.js for keeping data and UI synchronized in my web app. However, much of Backbone's value seems to lie in its use of RESTful interfaces. Though I may add server-side backup in the future, my primary use case involves storing all data offline using HTML5 local storage.
Is Backbone overkill for such a use case? If so, is there a better solution, focused solely on updating UI when data changes, and vice versa? (I'm also looking into Knockout and Javascript MVC.)
EDIT: I'm also now looking into Angular.js and jQuery Data Link.
Backbone.js works just as well with local storage as it does with RESTful queries.
I'm a learn-by-example kind of guy so here are some links to get you started:
Todos, a todo application that uses local storage and backbone.js - check out the annotated source to see how it works.
The localStorage adapter is all you need to get started; take a look at the annotated source of that too.
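Roughly, using the adapter boils down to pointing a collection at a named localStorage store instead of a URL. A minimal sketch, assuming a recent version of the adapter ('notes' and the model fields are just illustrative):

```javascript
// Persist a Backbone collection to window.localStorage instead of a REST backend.
var Note = Backbone.Model.extend({ defaults: { text: '' } });

var Notes = Backbone.Collection.extend({
  model: Note,
  localStorage: new Backbone.LocalStorage('notes') // named store in localStorage
});

var notes = new Notes();
notes.fetch();                         // loads anything previously saved
notes.create({ text: 'hello world' }); // save() writes to localStorage, no HTTP call
```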
In the past few weeks I have evaluated different solutions for a scenario close to yours; it being a project done in my personal free time, and not being a good JavaScript programmer, all I needed was something easy to learn so I could avoid starting from scratch.
Not surprisingly, I had the same candidates: Backbone.js, Javascript MVC and Knockout.js.
Backbone.js won:
I wasn't required to follow conventions or replace what was already in place
I easily hacked into its codebase to understand what wasn't clear from the documentation
I successfully ignored the large number of its features that weren't interesting to me
It gave acceptable performance on busy pages
It works
Backbone.js is lightweight and relatively magic-free; you will probably use only a small subset of its features, but it provides a solid base on which to develop your solution.
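To show what that small subset can look like in practice, here is a minimal sketch of the part that keeps data and UI in sync: a model, a view, and one change listener (all names are illustrative, and it assumes jQuery is loaded since Backbone views use it for $el):

```javascript
// A model change automatically re-renders the view - the core data/UI sync loop.
var Counter = Backbone.Model.extend({ defaults: { count: 0 } });

var CounterView = Backbone.View.extend({
  initialize: function () {
    this.listenTo(this.model, 'change', this.render); // re-render on any model change
  },
  render: function () {
    this.$el.text('Count: ' + this.model.get('count'));
    return this;
  }
});

// assumes an element with id="counter" exists in the page
var model = new Counter();
var view = new CounterView({ model: model, el: '#counter' });
view.render();
model.set('count', 1); // the view updates itself
```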
I know it's been a while but you may want to check out backbone-offline project on github: https://github.com/Ask11/backbone.offline
You can also take a look at AFrameJS. I have created a bare-bones proof-of-concept note-taking app that works offline using the HTML5 WebSQL spec, but I also want to create an adapter that uses localStorage. My personal opinion (and I am biased) is that using an MVC library of any sort is going to help you in the long run - the value of libraries such as Backbone, Knockout, and AFrame lies in their ability to reduce the cognitive load of the developer by enforcing a good separation of concerns. Data-related functionality resides in models, displaying that data resides in views, and the glue is kept in controllers. Separating these three concepts might seem pedantic at first, but the end result is code that is easier to develop, easier to test, easier to maintain, and easier to reuse. A basic tutorial on using AFrameJS can be found on my site at: http://www.shanetomlinson.com/2011/aframejs-tutorial-for-noobs/