IdentityServer3 vs Azure Active Directory vs AWS Directory Services

I'm evaluating the three identity management technologies above and trying to work out their advantages and disadvantages, and when I should use IdentityServer3 over the others. I have three scenarios:
Internal MVC Client to Web API
External Phone Client to Web API
Internal Web API to Web API

Brock Allen's Comments:
According to Brock Allen, the creator of IdentityServer:
Well, the main thing that differentiates IdentityServer is the ability
to customize the entire token service and have control of the user
data. SaaS products are very limited in customization because for the
most part they don't let you upload arbitrary code to alter or change
behavior and they often encapsulate the database of users. On the
other hand, this means you have to host IdSvr (which can be cloud
hosted) and you need to build a database for your users. So if you
need the control, IdSvr is a good choice.
Also, I should note that very often IdSvr is used in conjunction with
other identity providers (like ADFS or AAD). IdSvr is deployed in
between the apps and the ultimate IdPs, again, usually to allow the
customization that the apps need, yet still centralized and
consolidated.
Source
My Own Findings
Disclaimer: I looked into this for use by the company I work for, who had existing infrastructure I had to cater to, so the solution I chose is skewed in that direction. Even so I've tried to give an impartial summary of my own thoughts during my research.
Azure Active Directory
Azure Active Directory is a hosted identity solution, so there is far less setup (especially if, like me, you discover that you are already using it for Office 365). Out of the box, it provides some very nice features that can get you started very quickly.
The premium version has monitoring and reporting capabilities (Connect Health) so you can see who is logging into your system; it also has two-factor authentication and an identity management website, and Microsoft monitors logins (a bit like Cloudflare for identity), so in theory it should provide some added security. However, customization of the UI is very basic, you have to pay for the premium features, and doing identity management through the Azure Portal (if you go with the free version) is kind of a pain.
The documentation is pretty good, and there are samples on GitHub with Microsoft devs actively monitoring the issues, which was helpful. Some links I found useful:
Documentation Home Page
Documentation for each flow
Samples covering every flow
Introduction Video 1 and Video 2.
Build Videos 1, 2, 3.
IdentityServer
IdentityServer is the Swiss Army knife of identity management. It can do almost anything, but it does require a small amount of setup and a little more knowledge of the identity space. It can do most of the things I listed above and a lot more besides.
It has to be noted that even if you are using Azure Active Directory, there may still be reasons for choosing IdentityServer which I had not initially considered. For example, if you have more than one source of user data (e.g. you are using AD and also a SQL database of users), IdentityServer can be pointed at both sources of user information. In theory it should also make it easier to switch from AD to something else entirely, as it decouples things.
The project is actively developed, has code samples for all the authentication flows and you can get answers from the community. Some links I found useful:
IdentityServer4 GitHub
Samples covering every flow
IdentityManager (A separate application for handling users, groups and roles).
Introduction Video
Authentication Flows
Fact: security is hard. There are lots of different ways of doing authentication, called flows. I put this link here because I found it very useful for understanding them.
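To make this a little more concrete, here is a minimal sketch of one such flow, the client credentials grant (the "Internal Web API to Web API" scenario above), written in Python with the requests library. The token endpoint, client ID, secret and scope are all placeholders; both IdentityServer and Azure AD support this grant, but the exact endpoint URL and scope naming differ.

```python
# A minimal sketch of the client credentials grant ("Internal Web API to Web
# API"), assuming a hypothetical token endpoint, client ID, secret and scope.
import requests

TOKEN_ENDPOINT = "https://idsvr.example.com/connect/token"  # hypothetical

def get_access_token() -> str:
    # The calling service authenticates with its own credentials; no user
    # is involved in this flow.
    resp = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "client_credentials",
            "client_id": "internal-api",       # hypothetical client
            "client_secret": "secret",         # store securely in practice
            "scope": "orders.read",            # hypothetical scope
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

# The token is then sent as a bearer credential on calls to the protected API:
# requests.get("https://api.example.com/orders",
#              headers={"Authorization": f"Bearer {get_access_token()}"})
```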
Summary
I discounted AWS Directory Services as it's very young, even though the company I work for uses AWS. We also use Office 365, so I discovered that we already had an Azure Active Directory linked to an on-premises Active Directory server. Even so, IdentityServer is still a valid contender for the reasons I explained above. We are still trialing both solutions...
What you decide to choose depends entirely on the problem you have. Which should you choose? Well, it depends on the number of developers, time, money and effort you can expend setting this up. There is no one-size-fits-all solution. Really, the differences between the two products above are the differences between a SaaS and a PaaS solution.

Related

Ad Blocker w/ Segment.io

I'm considering using segment.io for several of my client-side 3rd party API needs, but I'm a little concerned about ad-blockers.
My app has no ads, but I do a lot of event-tracking for product analytics, as well as error tracking.
Segment.io offers a nice all-in-one solution, but if it's blocked, and all my eggs are in that basket, then, well, I won't have any eggs left, or however that idiom ends.
So my question is: is there a way to integrate multi-purpose event tracking (segment.io, keen.io, etc.) that isn't as susceptible to ad-blocking?
My app is mostly serverless, using a Firebase+AWS Lambda setup, so I've tried to think of some kind of back-end solution, but no big ideas so far.
ETA: I'm not looking to track adblocking users or violate anyone's trust. My question is about event tracking unrelated to a user's identity, and whether or not that's possible with an all-in-one event tracking library that might be ad-blocked.
First, I'd typically consider such blocking to be "privacy" blocking rather than ad blocking. So instead of Adblock it's more likely to be Ghostery or uBlock Origin.
Although most website uses of analytics are benign (improving conversion rates, catching browser exceptions, etc.), the concern many have is that it allows the third-party analytics sites (including Segment, etc.) to track users across multiple websites. Now, most of these analytics sites are also not interested in that, but better safe than sorry?
The ethics of wanting analytics about all your web app usage are far more nuanced than "privacy good, tracking bad", and I don't think this is the forum for it, so I'll give you a technical answer. Just note that your disclaimer about not wanting to "track adblocking users" is not really valid: if your aim is to gather analytics about them, that's still essentially tracking. Otherwise, just use a hosted solution and accept that maybe 10-20% of users won't provide you with analytics.
The bad news: basically every "hosted" analytics solution is or will be in the block lists. Not only are their API hosts directly blocked, but there are also blocks in place based on the names of JS files you may try to include.
The good news: you can work around it if you relay events through your own API, and AWS API Gateway which you may already be using is perfect for this.
There are multiple steps to this.
Step 1: The analytics provider needs to provide the option of a fully bundled/built JS file. If they require you to pull the script dynamically from their own servers, then it will be blocked there before it even downloads.
Step 2: Rename the bundled script so that it doesn't trigger any filename-based blocks, e.g. rename from mixpanel.umd.js to mp.js, and add it to your server.
Step 3: Create an API gateway to relay events back to the "correct" API (e.g. to api.analyticshost.com). You can actually do this with AWS API gateway only (no lambda required) if you pass through the right headers and URL params.
Step 4: Initialise the library to use your API host rather than the default one.
The result of this is that (a) the browser no longer needs to dynamically pull the analytics script from the analytics provider's CDN, and instead gets it from your server, and (b) the browser sends events to your API, which then relays them through to the analytics provider's.
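As an illustration of steps 3 and 4, here is a minimal sketch of such a relay written as a Python Lambda handler behind an API Gateway HTTP API. The upstream host is the placeholder from step 3, the event shape assumes the v2 payload format, and the requests library would need to be packaged with the function; as noted above, a plain API Gateway proxy integration can do the same job with no code at all.

```python
# A minimal sketch of relaying analytics calls from your own domain to the
# analytics provider's API. Upstream host and paths are placeholders.
import requests

UPSTREAM = "https://api.analyticshost.com"  # placeholder analytics API host

def handler(event, context):
    # Forward whatever tracking call the browser sent to our domain on to the
    # real analytics API, preserving path and body.
    path = event.get("rawPath", "/")
    body = event.get("body") or ""
    headers = event.get("headers") or {}
    resp = requests.post(
        UPSTREAM + path,
        data=body,
        headers={"Content-Type": headers.get("content-type", "application/json")},
        timeout=5,
    )
    return {
        "statusCode": resp.status_code,
        "headers": {"Content-Type": "application/json"},
        "body": resp.text,
    }
```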
When gathering analytics, Segment also provides server-side tracking libraries. These can be quite useful when you want to gather metrics for certain types of events that might be blocked by users on the client. At its simplest, Segment has an HTTP Source, but a number of popular languages are supported as well.
https://segment.com/docs/connections/sources/catalog/#server
The classic example is the order-complete event; this would typically happen on your server once that transaction has been committed to a database. Regardless of browser configuration, you could send this event from the server and track it.
Be sure you respect the users' consent management settings here, though.
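For example, a minimal sketch of sending that order-complete event server-side through Segment's HTTP tracking API from Python; the write key, user ID and property names here are placeholders.

```python
# A minimal sketch of a server-side "Order Completed" event via Segment's
# HTTP tracking API. Write key, user ID and properties are placeholders.
import requests

SEGMENT_WRITE_KEY = "YOUR_WRITE_KEY"  # placeholder

def track_order_completed(user_id: str, order_id: str, total: float) -> None:
    requests.post(
        "https://api.segment.io/v1/track",
        auth=(SEGMENT_WRITE_KEY, ""),  # write key as HTTP Basic auth username
        json={
            "userId": user_id,
            "event": "Order Completed",
            "properties": {"order_id": order_id, "total": total},
        },
        timeout=5,
    )

# Call it once the transaction has been committed, e.g.:
# track_order_completed("user_123", "order_456", 42.50)
```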
A lot of valid points are already mentioned in the accepted answer; I would add a few technical considerations to minimize ad blockers' impact on tracking tools (Segment, Google Tag Manager, etc.):
Develop for server-side tracking. Whatever runs on the server cannot be blocked by ad blockers. However, this is usually tricky and very custom; Segment gives some examples of it as well.
Use managed client-side proxy solutions like DataUnlocker. This is great and does not require many code changes.
Use self-hosted open-source solutions for proxying Google Analytics and Google Tag Manager like this or this. I believe these solutions can be extended to support Segment as well.

Where can I get a one-off server of Active Directory for Developing against?

We're not a Windows shop, but one of our products is going to need to optionally integrate with Active Directory - things like SSO etc.
I'd really rather not go through the rigamarole of setting up a whole server just to develop against it and then leave it hanging around for testing purposes.
Is there a simple cloud-based service where I can purchase a server running active directory for a month or two just for development purposes? I looked into Amazon EC2 but it looks like you may still need to go through a significant set up (I may be wrong on this).
Even if you find a provider that can do hosted AD, I don't know if you'll be able to avoid the setup and configuration that goes along with it. Active Directory can be configured in so many different ways that adequately testing against it really demands more than just a default, vanilla AD domain. (I've had to deal with far too many applications that made unwarranted assumptions about how Active Directory is structured, and it's infuriating. Accounts aren't always in the default "Users" container! You can have multiple domains in a forest! Sometimes the CN isn't the userid! Aargh!)
Anyway... if you really do want to host AD on a cloud service, it can be done, but it's rare, and it sounds like it's fragile. Here's a link to a discussion on the Amazon Web Services developer forum about using AD on EC2:
http://developer.amazonwebservices.com/connect/message.jspa?messageID=150845
The document provided by garysu22 looks particularly useful, but it's also 25 pages of tweaks and workarounds... so again, lots of setup and configuration.
By the way, I'm concerned that Amazon's whitepaper on hosting AD on EC2, which used to be here...
http://developer.amazonwebservices.com/connect/entry.jspa?externalID=2435
...seems to have gone missing. I'm not sure what that means, but it would make me nervous.
(EDIT: I'm not the only one: http://justinbrodley.com/?p=60)
Now for an answer to a question you didn't ask...
I've developed against Active Directory very successfully using a local virtual machine running Windows Server and AD. I highly recommend it. You'll need a reasonably powerful machine with plenty of memory and storage, of course, but any modern development box should handle it without breaking a sweat.
With this sort of setup, you get all the niceties of a VM environment, like snapshot and rollback (so you can break stuff, even deliberately, and fix it quickly) and easy network isolation (you can make the VM visible to just the host dev box, for example)... and you can make the entire thing go away when you don't need it by just suspending the VM.
Of course, you'll still have to go through the initial AD setup and configuration, but for the kind of AD setup(s) you'll need, that's pretty easy. If you're going to be doing any serious development against AD, that's valuable experience you'll want to have anyway. Active Directory is its own sort of beast, with more than its fair share of idiosyncrasies; the better you understand it, the happier your customers will be.
Good luck!
I think you want AD Lightweight Directory Services (AD LDS). You can run it on any server without going through the whole AD setup/hardening process. You won't be able to use all of the AD tools against it (the Users and Computers, and Trusts MMC plugins), but it will behave like AD for prototyping and development. If you see posts about ADAM (Active Directory Application Mode), AD LDS is just the latest name for the same idea.
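To give a feel for developing against such an instance, here is a minimal sketch using Python's ldap3 library; the host, port, bind DN, password and directory layout are placeholders, and the default schema and naming layout differ between a full AD domain and AD LDS.

```python
# A minimal sketch of querying a local AD LDS (or full AD) dev instance with
# ldap3. Host, port, bind account and base DN are placeholders.
from ldap3 import Server, Connection, ALL

server = Server("localhost", port=50000, get_info=ALL)   # placeholder host/port
conn = Connection(
    server,
    user="CN=svc-app,CN=Roles,DC=dev,DC=local",          # placeholder bind account
    password="dev-password",
    auto_bind=True,
)

# Look up a test user roughly the way the product would in production.
conn.search(
    search_base="DC=dev,DC=local",
    search_filter="(&(objectClass=user)(cn=jsmith))",
    attributes=["displayName", "mail"],
)
for entry in conn.entries:
    print(entry.displayName, entry.mail)
```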

Single Sign On for a Web App

I have been trying to understand how this problem is solved for over a month now. I really need to come up with a general approach that work. I have a theory, but I'm just not sure it's the easiest (or correct) approach and I haven't been able to find any information to support my ideas.
Here's the scenario:
1) You have a complex web application that offers secure content on a subscription basis.
2) Users are required to log in to your application with user name and password.
3) You sell to large corporations, which already have a corporate authentication technology (for example, Active Directory).
4) You would like to integrate with the corporate authentication mechanism to allow their users to log onto your Web App without having to enter their user name and password.
Now, any solution you come up with will have to provide a mechanism for:
adding new users
removing users
changing user information
allowing users to log in
Ideally, all these would happen "automagically" when the corporate customer made the corresponding changes to their own authentication.
Now, I have a theory that the way to do this (at least for Active Directory) would be for me to write a client-side app that integrates with the customer's Active Directory to track the targeted changes, and then communicate those changes to my Web App. I think that if this communication were done via Web Services offered by my web app, then it would maintain an unhackable level of security, which would obviously be a requirement for these corporate customers.
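Purely to illustrate the kind of client-side sync agent described here (the answers below lean towards federation standards instead), a minimal sketch in Python using the ldap3 library; the domain controller, service account, base DN and the web app's /api/users/sync endpoint are all hypothetical.

```python
# A minimal sketch of the sync-client idea: poll the customer's AD for
# recently changed accounts over LDAP and push them to the web app's API.
from datetime import datetime, timedelta, timezone

import requests
from ldap3 import Server, Connection

conn = Connection(
    Server("dc01.customer.local"),               # hypothetical domain controller
    user="CUSTOMER\\svc-sync",                   # hypothetical service account
    password="secret",
    auto_bind=True,
)

# AD stores whenChanged in generalized-time format (YYYYMMDDHHMMSS.0Z).
since = (datetime.now(timezone.utc) - timedelta(minutes=15)).strftime("%Y%m%d%H%M%S.0Z")
conn.search(
    "DC=customer,DC=local",
    f"(&(objectClass=user)(whenChanged>={since}))",
    attributes=["sAMAccountName", "mail", "displayName", "userAccountControl"],
)

for entry in conn.entries:
    requests.post(
        "https://webapp.example.com/api/users/sync",   # hypothetical endpoint
        json={
            "username": str(entry.sAMAccountName),
            "email": str(entry.mail),
            "name": str(entry.displayName),
            # bit 0x2 of userAccountControl means the account is disabled
            "disabled": bool(int(str(entry.userAccountControl)) & 0x2),
        },
        timeout=10,
    )
```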
I've found some information about a Microsoft product called Active Directory Federation Service (ADFS) which may or may not be the right approach for me. It seems to be a bit bulky and have some requirements that might not work for all customers.
For other existing ID scenarios (like Athens and Shibboleth), I don't think a client application is necessary. It's probably just a matter of tying into the existing ID services.
I would appreciate any advice anyone has on anything I've mentioned here. In particular, if you can tell me if my theory is correct about providing a client-side app that communicates with server-side Web Services, or if I'm totally going in the wrong direction. Also, if you could point me at any web sites or articles that explain how to do this, I'd really appreciate it. My research has not turned up much so far.
Finally, if you could let me know of any Web applications that currently offer this service (particularly as tied to a corporate Active Directory), I would be very grateful. I am wondering if other B2B Web app's like salesforce.com, or hoovers.com offer a similar service for their corporate customers.
I hate being in the dark and would greatly appreciate any light you can shed ...
Jeremy
Shibboleth is designed to support exactly this scenario. However it will rely on your customers' companies implementing the identity provider mechanisms. At the moment, that's only really common in universities. Further, if you want user information (any more than just a pseudonymous identifier), you'd need the company to agree to release those attributes to you.
I find it hard to believe that many companies would open their corporate authentication system to you, just to provide SSO.
You might find it better to rely on OpenID or similar, and using a "remember me" cookie to reduce the need for people to enter passwords.
One basic problem with your approach is that you're considering your web app in isolation. Employees at your client's company won't just require SSO to your web app but also to some/few/many others, and extending your approach would require a bespoke implementation for each of those to enable access.
Hence the widespread adoption of OpenAthens and Shibboleth in the academic library community to leverage the use of locally-issued credentials. A typical medium/large university can subscribe to various products/services from more than fifty different publishers, and by deploying OpenAthens/Shibboleth they can take advantage of the SAML open standard (SAML is the protocol that Shibboleth uses) that is seeing increased take-up not only in the academic sector, but also in the commercial sector.
John's answer above points to another issue: a number of open standards have recently emerged, SAML and OpenID among them. So content providers are having to decide whether they want to implement some or all of these natively, but these use separate technology stacks, and so the implementation and support costs can be duplicated.
Quite a few major publishers have implemented OpenAthens, as this supports Athens, SAML/Shibboleth and OpenID in a single platform, with options to plug in other technologies too, or to write a custom module to allow an internal app to connect, e.g. an invoicing or entitlements system recording which clients' users are logging in.
This sector of access management is definitely moving towards open standards, so building your own method would deprive a large number of users of access to your app.

Access database sharing strategies

What are the strategies you employ to let multiple people work on an access database?
Is it possible to host it online and have its features still functional without having to develop a custom frontend?
MS Access as a piece of software has a few nice features that don't require any programming to configure:
Drop down lists - choose one
Multi Checkbox lists - choose multiple
Is it possible to get all of these features available even when hosted online? I'm basically thinking of an alternate way to quickly get people to work with data using GUI features like the above without going the webapp<>MySQL way.
You have some good comments here. Keep in mind that things have changed quite a bit for Access 2010.
Access 2010 allows you to build web applications. The development process is very much the same as it's been for years, but you can't use VBA in forms for these web applications (you use a new macro language). This new feature set allows you to publish the applications you build to a website. Here is a video of an application of mine running in Access 2010; at the halfway point in the video I switch to running the Access application 100% in a web browser:
http://www.youtube.com/watch?v=AU4mH0jPntI
The above is for Access 2010, due out this year. It will require you to be running SharePoint services, or to use a hosting service that supports "Access web" services.
For previous versions of Access, for all intents and purposes, it's not a web-based system at all. Now, when you say multiple users, you have to clarify what kind of users and where they plan to be. If your users are on a local office network, then MS Access can be used as a multi-user system right out of the box with no additional coding or programming required. It is recommended, however, that you split your application into a front-end part that's deployed on each user's computer. This concept is outlined in the following article of mine.
http://www.members.shaw.ca/AlbertKallal/Articles/split/index.htm
Now, perhaps the users are going to be on notebooks and in different locations all over the country? In that case you are attempting to connect over a wide area network, or have users connect to the application over the Internet. This is a different problem. In this type of scenario, a good solution is to use something like SQL Server for the back end, and you continue to deploy the Access front ends to each user's computer. This approach also tends to be about the most cost-effective. And using SQL Server + MS Access means you get to continue developing in Access for the most part like you always have. Another way to accomplish wide area use without resorting to SQL Server is to use something called Terminal Services. I outline these possibilities in the following article:
http://www.members.shaw.ca/AlbertKallal//Wan/Wans.html
As mentioned, a few others here posted links to some of the new SharePoint features that you can consider using, but they're not out until later this year.
Multi-user Access apps are pretty easy to do for small workgroup user populations in the 15-25 range or smaller. Above that, a developer should consider upsizing to a server back end, with the trade-off being greater administrative overhead for the server vs. having to program the app more carefully if you retain the Jet/ACE back end.
As to online access, this isn't possible over HTTP, but if you have a Windows Terminal Server available, you can host your app there and give users access to that. This is actually an extremely easy, efficient and inexpensive way to support remote users of an app, though the larger the user population, the more problematic it becomes. But by the time an Access app has a user population that would strain a Windows Terminal Server setup, you're no longer going to be using a Jet/ACE back end.
And with a server back end, you could give access to a SQL Server on a VPN over the Internet, and if you write your Access app really efficiently, even over a standard broadband connection, your users could still work productively.
Then there's the future of Access: in Access 2010, a great deal of work has been done to integrate with a host of new features in Sharepoint 2010. If you create your A2010 app using the new type of Access web forms and reports, your app can be uploaded to a Sharepoint server running the new Access Services, and it can then be used running in a web browser (not limited to IE and not dependent on any plugins or web controls, as was the case in the past with the completely worthless Access Data Access Pages). The data store can either be a SQL Server, or you could keep it Jet/ACE for users not accessing it via the web browser, and have the data stored in Sharepoint for the online users. Also, you can have an app integrated with Sharepoint running locally in Access that uses Sharepoint when connected to the Internet, and still be able to work offline when disconnected. When connected again, you synch your local changes with the Sharepoint server, resolve any differences and continue working.
The features are really quite remarkable, and according to what I've heard and seen, if the Access app is built entirely of web forms and reports, it will look and function identically when run in Access and when run in the web browser via Sharepoint. And if you need to have client-side features that you don't expose to the users running the app in the browser, you can still use traditional Access objects!
The Access development team's blog has a number of posts on what's coming in A2010, and there's a good video posted there demonstrating how A2010 integrates with Sharepoint 2010's new Access Services.
This constitutes a quantum leap in Access's web capabilities, which were previously almost non-existent, and I'm quite excited about this. I was formerly quite wary of the changes being made to Access that seemed entirely to make it a servant of Sharepoint, but now I can see that the benefit to Access users and Access developers will be huge.
One way I've heard of is to import the Access database into a SQL Server database (almost any version will do).
Then link to the SQL Server database with Access and let users use it as they did before.
Look at this link: http://office.microsoft.com/en-us/access/HA010345991033.aspx
If you want an online solution, I'd recommend going with a normal web application architecture (talking to a proper database).
I have never needed to support it myself, but from what I heard so far, performance dramatically breaks down as soon as you need to support multiple users writing simultaneously. I think this is because Access uses simple file locking to implement isolation, and this just is not the right technique for a concurrent database system.
Hosted online? Do you mean on the network? Technically it will work on a network, but there is a reason MS Access is not in Visual Studio - it is not considered a development platform; it is a desktop application. When MS Access first hit the scene, many people built applications using it. The multi-user functionality just is not there. Up to four or five users is OK, but I would not go for more.

When should one use the following: Amazon EC2, Google App Engine, Microsoft Azure and Salesforce.com?

I am asking this in a very general sense, both from the cloud provider's and the cloud consumer's perspective. Also, the question is not about any specific kind of application (in fact, the intention is to know which types of applications/domains fit into which cloud layer: SaaS, PaaS or IaaS).
My understanding so far is:
IaaS: Raw Hardware (Processors, Networks, Storage).
PaaS: OS, System Software, Development Framework, Virtual Machines.
SaaS: Software Applications.
It would be great if Stack Overflowers could share their understanding and experiences of the cloud computing concept.
EDIT: Ok, I will put it in more specific way -
Amazon EC2: You don't have control over the hardware layer. But you can take your choice of OS image, dev framework (.NET, J2EE, LAMP) and application and put it on EC2 hardware. Can you deploy an application built with Google App Engine or Azure on EC2?
Google App Engine: You don't have control over hardware and OS and you get a specific Dev Framework to build your application. Can you take any existing Java or Python application and port it to GAE? Or vice versa, can applications that were built on GAE be taken out of GAE and ported to any Application Server like Websphere or Weblogic?
Azure: You don't have control over hardware and OS and you get a specific Dev Framework to build your application. Can you take any existing .NET application and port it to Azure? Or vice versa, can applications that were built on Azure be taken out of Azure and ported to any Application Server like Biztalk?
Good question! As you point out, the different offerings fit into different categories:
EC2 is Infrastructure as a Service; you get VM instances, and do with them as you wish. Rackspace Cloud Servers are more or less the same.
Azure, App Engine, and Salesforce are all Platform as a Service; they offer different levels of integration, though: Azure pretty much lets you run arbitrary background services, while App Engine is oriented around short lived request handler tasks (though it also supports a task queue and scheduled tasks). I'm not terribly familiar with Salesforce's offering, but my understanding is that it's similar to App Engine in some respects, though more specialized for its particular niche.
Cloud offerings that fall under Software as a Service are everything from infrastructure pieces like Amazon's Simple Storage Service and SimpleDB through to complete applications like Fog Creek's hosted FogBugz and, of course, StackExchange.
A good general rule is that the higher level the offering, the less work you'll have to do, but the more specific it is. If you want a bug tracker, using FogBugz is obviously going to be the least work; building one on top of App Engine or Azure is more work, but provides for more versatility, while building one on top of raw VMs like EC2 is even more work (quite a lot more, in fact), but provides for even more versatility. My general advice is to pick the highest level platform that still meets your requirements, and build from there.
This is an excellent question. Full disclosure: I am partial to Azure but have experience with the others.
Where I think Azure stands out from the others is the quick transition from on-prem to the cloud. For example -
SQL Azure - change connection string, upload DB, go! (see the connection sketch after this list)
Queues work a lot like MSMQ.
Blobs are pretty much blobs any way you shake them but they scale like crazy.
The table storage component is good because it provides incredible scalability for name/value pairs - but takes some getting used to.
Service Bus is my favorite of the services because it allows for a variety of communications paradigms. Two SB endpoints first try to connect to each other, if they cannot, then they route through the cloud - makes for very secure and scalable processing when firewalls tend to get in the way.
Access Control Service - typically paired with the Service Bus to make sure the right people access the right things - think SAML in the cloud.
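As a small illustration of the SQL Azure point at the top of this list, a sketch of connecting from Python with pyodbc; the driver version, server, database and credentials are placeholders, and the same connection code works against an on-premises SQL Server once the connection string is swapped.

```python
# A minimal sketch of "change connection string, upload DB, go". Server,
# database and credentials below are placeholders.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"  # placeholder server
    "Database=mydb;Uid=appuser;Pwd=app-password;"     # placeholder credentials
    "Encrypt=yes;TrustServerCertificate=no;"
)
for row in conn.cursor().execute("SELECT TOP 5 name FROM sys.tables"):
    print(row.name)
```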
I hope that helps!
My cloud experience is currently limited to Salesforce.com
For standard business operations and automation it provides a significant number of features that allow us to get apps up and running very quickly. We are particularly benefitting from the following:
Security (Administrators can control access to objects and fields)
Workflow & Approvals
Automatic UI generation
Built in reporting and dashboards
Entire system (including our custom changes) is accessible via web services
Ability to make the data in the system available through public sites (e.g. eCommerce)
Large library of third party apps to solve standard problems
The platform does NOT solve every problem.
I would not use the platform to model a nuclear power station or build the next twitter.
The major points of cloud computing are to save on costs by paying for usage and to enable immediate deployment of computing resources.
The costs are not purely x cents per instance per hour. The costs include maintenance, development, administration, etc. The huge benefit of the cloud, in my mind, is to liberate customers from having to manage anything that is not within the realm of their core business competency. If I am an insurance business, I want my developers to concentrate on the insurance problems around my claims, rates, etc. I would rather avoid dealing with problems of email servers, file servers, document repositories, and administering OS patches, service packs, etc.
Thus, in my opinion, the biggest benefits are derived from the SaaS and PaaS cloud offerings. One should go to IaaS only when PaaS or SaaS have serious restrictions for specific needs (e.g. I need to install a set of proprietary COM components and Azure does not support them).
SaaS is good for commodity types of applications that are not the core line of business for the client, but are more of a utility. These are your typical messaging systems, portals, document repositories, email systems, CRMs, ERPs, accounting, etc. Why reinvent the wheel by writing your own when you can customize a well-supported third-party product?
PaaS is great for core line-of-business software that supports the company's main business offering. It abstracts clients from having to deal with OS management and lets them concentrate on business system development - something that no one else can do for the client.
One can also take advantage of the benefits of PaaS (let's say, Google App Engine) and extend it, at times and if necessary, by pulling out some virtual machines from IaaS providers (e.g. Amazon) to do some number crunching then just send back the output to Google App Engine.
This way, you get the best of both worlds -- you can rapidly develop scalable apps in GAE, then you can always augment it by running any program you want from Amazon virtual machines.
This keeps changing; Windows Azure now also supports VMs, so it is an IaaS provider as well.
Now how about free Amazon EC2 for a year to do a better comparison? Check this out.
http://www.buzzingup.com/2010/10/amazon-announces-free-cloud-services-for-new-developers/
