Handling DataAccessException in Spring Security

Right now I've got Spring Security protecting an application using basic authentication. The user details are coming from a JDBC source. If the database goes down, the internals of the user loading mechanism will throw a DataAccessException. The default authentication provider class, DaoAuthenticationProvider, catches the exception and maps it back to an AuthenticationServiceException. The end result of such a mapping is that the browser/client receives HTTP code 401.
What I want to do is handle database unavailability in a different way. At the very least, I want this to be handled by responding with HTTP 503, but I would prefer it to redirect to an error page. How can I achieve this?
EDIT: Ritesh's solution was partially correct. The missing step, apart from implementing your own Basic entry point, is to also use v3.0.3 of the security schema so that the <http-basic/> element has the entry-point-ref attribute. If you don't use this attribute, the default Basic filter will always use its own Basic entry point implementation.

The BasicAuthenticationEntryPoint sends 401 for any AuthenticationException. You can create your own custom entry point that handles AuthenticationServiceException and sends 503 instead.
Another option is to do nothing special in the entry point and instead use SimpleMappingExceptionResolver and/or implement your own HandlerExceptionResolver.
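For reference, a minimal sketch of such an entry point (the class name and message below are illustrative, not from the original answer): it keeps the default 401 challenge for ordinary failures and returns 503 when the wrapped exception is an AuthenticationServiceException, i.e. when the user store could not be reached.

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.security.authentication.AuthenticationServiceException;
import org.springframework.security.core.AuthenticationException;
import org.springframework.security.web.authentication.www.BasicAuthenticationEntryPoint;

// Illustrative custom entry point: 503 for service failures, default 401 otherwise.
public class ServiceAwareBasicEntryPoint extends BasicAuthenticationEntryPoint {

    @Override
    public void commence(HttpServletRequest request, HttpServletResponse response,
            AuthenticationException authException) throws IOException, ServletException {
        if (authException instanceof AuthenticationServiceException) {
            // The DAO threw DataAccessException and it was wrapped: report 503.
            response.sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE,
                    "Authentication service is temporarily unavailable");
        } else {
            // Ordinary bad credentials: keep the standard Basic challenge (401).
            super.commence(request, response, authException);
        }
    }
}

Remember to set the realmName property on the bean (BasicAuthenticationEntryPoint requires it) and to reference the bean from the entry-point-ref attribute of <http-basic/>, as described in the question's edit.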

Related

Azure AD B2C Custom Policy Localized REST API Conflict Response

This is sort of an extension of this question here. I have a policy that calls a REST API. The API returns an error message and this message needs to be localized.
One way is of course to get the API to return a localized message, but is there a way for the CustomPolicy itself to localize the error code? According to the CustomPolicy docs, a REST API can return an error code along with the Conflict response. Our thinking was to use this error code as a key and select a localized message (from the messageValue enum mentioned in the answer in the link).
However, we can't seem to capture/handle the error data returned by the API. The Policy seems to handle error codes by itself and we would like to know if it is possible to inject localized exception/error messages from the policy itself.
Thanks in advance!
Edit: A little more information about the setup. We have a TechnicalProfile that has a DisplayWidget and a ValidationTechnicalProfile. The DisplayWidget is used for entering & verifying the user's phone/email and the ValidationTechnicalProfile makes the final call to the RestAPI with all the user's information to register him/her. This RestAPI call output is what we want to localize.
The suggestion in the linked SO question, from what I understand, is that we integrate another DisplayClaim (that references an enum) in the DisplayWidget, and depending on the ErrorCode returned by the call, change it to display the appropriate code. However, as per my understanding, this would also require editing the API to return only 200 along with a code. This code would indicate the true nature of the result - success or a code for one of the enums to be displayed.
Our aim therefore is to check if there is a way to follow the Policy's flow (disrupt the SignUp/SignIn process) but at the same time localize the API's displayed response.
We managed to find a workaround to this, so I'm posting this here for anyone else who might be interested in this.
Our restriction for localization was the fact that we used Phrase to manage our translations and wanted the CustomPolicy-specific translations all in one place. Our CD workflow was as follows:
PolicyCommit -> Build Variable Replacement through PS -> Release Variable Replacement and localized strings replacement through PS & Policy Uploads
Since the policy could not localize the API's response itself, we had the following options to achieve this:
Sending the language to the API and having the API return the error message in the appropriate language. We were reluctant to follow this for a multitude of reasons, but mostly because we would also have to handle different regions, etc. in the API - something the policy already does by itself.
Localizing the errors in the policy and passing them along to the API. We actually had only one API that we called, and only two error messages in use, so we created an enum with the two error messages to be localized. We then used a chain of InputClaimsTransformations that did the following (repeating steps 1 through 3 for each of the errors):
1. CreateStringClaim (Create ClaimTypes for each of the error codes, holding the index of the error code in the enum)
2. GetMappedValueFromLocalizedCollection (Make the localized enum choose and hold the value of the required error code)
3. AddItemToStringCollection (Add the localized error from the enum to a StringCollection)
4. GenerateJson (Add the error codes StringCollection to the JSON payload to be sent to the API)
This way, the policy performed the localization for all the errors and we sent them along with the request to the API. When an error occurred, the API picked one of the error messages it had received from the policy and sent it back. Because of our CD structure and Phrase integration, this was much easier for us than keeping the translations in a file hosted in the cloud for the API to access.
Hope this helps someone; I can also add code in case someone needs it :)

IdentityServer4 IDX20108 invalid as per HTTPS scheme

I'm new to IdentityServer4 (2.5) and certificate setup so please bear with me. I think that I've chased down everything I could. I am using it with ASP.Net Core 2.2.0 in a proof of concept app. I have OpenIdConnect with an authority app and a client using cookies with X509Certificate2. Works great on my local machine; however, when I deploy to IIS I get this error:
System.InvalidOperationException: IDX20803: Unable to obtain configuration from: 'https://my.com/mpauth/.well-known/openid-configuration'. ---> System.ArgumentException: IDX20108: The address specified 'http://my.com/mpauth/.well-known/openid-configuration/jwks' is not valid as per HTTPS scheme. Please specify an https address for security reasons. If you want to test with http address, set the RequireHttps property on IDocumentRetriever to false.
The problem is here - http://my.com/mpauth/.well-known/openid-configuration/jwks. If I put that in the browser I get an error; however, if I change http to https I get the data. What setting controls this?
TL;DR
In most cases IdentityServer derives the base hostname/URI from the incoming request, but there might be deployment scenarios which require enforcing it via the IssuerUri and/or PublicOrigin options as documented here.
More Info
The URL you are getting in your exception is part of the discovery lookup. It is necessary for validating tokens (e.g. in an application's auth middleware).
There should be a first request to .../.well-known/openid-configuration (the main discovery document) that refers to several other URIs and one of them should be the jwks (signing key sets). In most cases the other URIs in openid-configuration will point to the same primary hostname and protocol scheme your identity server is using. In your case it looks like the scheme changes to HTTP which might be unwanted in this day and age.
Is it possible that the deployed IdentityServer lives behind a load balancer/SSL termination appliance? This could cause that behavior.
I am not sure about IIS details but there might also be some kind of default hostname/URI thing at play.

Best practices for model validation using a REST API and a javascript front-end such as Angular

I'm transitioning towards more responsive front-end web apps and I have a question about model validation. Here's the set-up: the server has a standard REST API for inserting, updating, retrieving, etc. This could be written in Node or Java Spring, it doesn't matter. The front-end is written with something like Angular (or similar).
What I need is to figure out where to put the validation code. Here are the requirements:
All validation code should be written in one place only, not both client and server. This implies that it should reside on the server, inside the REST API, when persisting.
The front-end should be capable of understanding validation errors from the server and associating them to the particular field that caused the error. So if the field "username" is mandatory, the client can place an error next to that field saying "Username is mandatory".
It should be possible to validate correct variable types. So if we were expecting a number or a date and got a string instead, the error would be something like "'Yo' is not a correct date."
The error messages should be localized to the user's language.
Can anyone help me out? I need something simple and robust.
Thanks
When input validation fails, you can return a response in an appropriate format (I'm guessing you use JSON) containing the error messages, along with a proper HTTP error code.
I'm currently working on a project with a Symfony backend, using FOSRestBundle to provide a proper REST API. Using Symfony's form component, whenever there is a problem with the input, a well-structured JSON response is generated with error messages mapped to the fields (or to the top level if, for example, there is unexpected input).
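Since the question also mentions Java Spring, here is a rough sketch of the same idea there, assuming Bean Validation annotations on the DTO; the class names, message key, and error payload shape are illustrative, not prescribed by either framework.

import java.util.Map;
import java.util.stream.Collectors;

import javax.validation.Valid;
import javax.validation.constraints.NotBlank;

import org.springframework.context.MessageSource;
import org.springframework.context.i18n.LocaleContextHolder;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.MethodArgumentNotValidException;
import org.springframework.web.bind.annotation.*;

@RestController
class UserController {

    static class UserDto {
        // Message key resolved against the user's locale.
        @NotBlank(message = "{user.username.mandatory}")
        public String username;
    }

    @PostMapping("/api/users")
    @ResponseStatus(HttpStatus.CREATED)
    public void create(@Valid @RequestBody UserDto user) {
        // persist ...
    }
}

@RestControllerAdvice
class ValidationErrorHandler {

    private final MessageSource messages;

    ValidationErrorHandler(MessageSource messages) {
        this.messages = messages;
    }

    // Turns validation failures into e.g. { "errors": { "username": "Username is mandatory" } },
    // so the front-end can attach each message to its field.
    @ExceptionHandler(MethodArgumentNotValidException.class)
    @ResponseStatus(HttpStatus.BAD_REQUEST)
    public Map<String, Map<String, String>> onInvalid(MethodArgumentNotValidException ex) {
        Map<String, String> fieldErrors = ex.getBindingResult().getFieldErrors().stream()
                .collect(Collectors.toMap(
                        fe -> fe.getField(),
                        fe -> messages.getMessage(fe, LocaleContextHolder.getLocale()),
                        (first, second) -> first));
        return Map.of("errors", fieldErrors);
    }
}

The same JSON shape works regardless of the backend; the client only needs to know that "errors" maps field names to localized messages.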
After much research I found a solution using the Meteor.js platform. Since it's a pure javascript solution running on both the server and the client, you can define scripts once and have them run on both the client and the server.
From the official Meteor documentation:
Files outside the client, server and tests subdirectories are loaded on both the client and the server! That's the place for model definitions and other functions.
Wow. Defining models and validation scripts only once is pretty darn cool if you ask me. Also, there's no need to map between JSON and whatever server-side technology. Plus, no ORM mapping to get it in the DB. Nice!
Again, from the docs:
In Meteor, the client and server share the same database API. The same exact application code — like validators and computed properties — can often run in both places. But while code running on the server has direct access to the database, code running on the client does not. This distinction is the basis for Meteor's data security model.
Sounds good to me. Here's the last little gem:
Input validation: Meteor allows your methods and publish functions to take arguments of any JSON type. (In fact, Meteor's wire protocol supports EJSON, an extension of JSON which also supports other common types like dates and binary buffers.) JavaScript's dynamic typing means you don't need to declare precise types of every variable in your app, but it's usually helpful to ensure that the arguments that clients are passing to your methods and publish functions are of the type that you expect.
Anyway, it sounds like I've found a solution to the problem. If anyone else knows of a way to define validation once and have it run on both client and server, please post an answer below, I'd love to hear it.
Thanks all.
To be strict, your last gatekeeper of validation for any CRUD operation is of course on the server side. I'm not sure why your concern is to handle validation on one end only (either server or client); usually doing it on both sides is better for both user experience and performance.
Say your username field is a mandatory field. This can easily be handled on the front-end side, before the user clicks submit, instead of sending the request to the server only to get the error code back. You can save that round trip with a one-liner in the front-end.
Of course, one may argue that the bad guys may manipulate the data on the client side and thus bypass the front-end validation. That goes back to my first point: your final gatekeeper of validation should be on the server side. That's why data integrity is still the server's job. Make sure whatever goes into your database is clean, dry and valid.
To answer your question (biased opinion though), AngularJS is still a pretty awesome framework for doing front-end validation, and it also provides a good way to handle server-side errors.

Best practices for authentication and authorization in Angular without breaking RESTful principles?

I've read quite a few SO threads about authentication and authorization with REST and Angular, but I'm still not feeling like I have a great solution for what I'm hoping to do. For some background, I'm planning to build an app in AngularJS where I want to support:
Limited guest access
Role-based access to the application once authenticated
Authentication via APIs
All of the calls to the REST API will be required to occur over SSL. I'd like to build the app without breaking RESTful principles, namely not keeping session state stored on the server. Of course, whatever is done vis-a-vis authorization on the client side has to be reinforced on the server side. Since we need to pass the entire state with each request, I know I need to pass some sort of token so that the backend server receiving the REST request can both authenticate and authorize the call.
With that said, my main question is around authentication - what are the best practices here? It seems there are lots of different approaches discussed, here's just a few that I've found:
http://broadcast.oreilly.com/2009/12/principles-for-standardized-rest-authentication.html
http://frederiknakstad.com/2013/01/21/authentication-in-single-page-applications-with-angular-js/
http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html
There was a similar question asked (AngularJS best practice application authentication), but unless I'm misunderstanding the answer, it seems to imply that a server session should be used, which is breaking RESTful principles.
My main concern with the Amazon AWS approach and the George Reese article is that they seem to assume the consumer is a program rather than an end user. A shared secret can be issued to a programmer in advance, who can then use it to encode calls. This isn't the case here - I need to call the REST API from the app on behalf of the user.
Would this approach be enough? Let's say I have a session resource:
POST /api/session
Create a new session for a user
To create a session, you need to POST a JSON object containing the "username" and "password".
{
  "username" : "austen#example.com",
  "password" : "password"
}
Curl Example
curl -v -X POST --data '{"username":"austen#example.com","password":"password"}' "https://app.example.com/api/session" --header "Content-Type:application/json"
Response
HTTP/1.1 201 Created
{
  "session": {
    "id": "520138ccfa4634be08000000",
    "expires": "2014-03-20T17:56:28+0000"
  }
}
Status Codes
201 - Created, new session established
400 - Bad Request, the JSON object is not valid or required information is missing
401 - Unauthorized, Check email/password combo
403 - Access Denied, disabled account or license invalid
I'm leaving out the HATEOAS details for clarity. On the backend, there would be a new, limited duration session key created and associated with the user. On subsequent requests, I could pass this as part of the HTTP headers:
Authorization: MyScheme 520138ccfa4634be08000000
Then the backend servers would be responsible for digesting this out of the request, finding the associated user and enforcing authorization rules for the request. It should probably update the expiration for the session as well.
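As a rough sketch of that backend step, a servlet filter could digest the header, reject unknown or expired keys, and slide the expiration window. The SessionStore/Session types and the 30-minute window below are hypothetical stand-ins, not part of any framework.

import java.io.IOException;
import java.time.Duration;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.web.filter.OncePerRequestFilter;

public class SessionTokenFilter extends OncePerRequestFilter {

    private static final String SCHEME = "MyScheme ";
    private final SessionStore sessions;   // hypothetical lookup/renewal component

    public SessionTokenFilter(SessionStore sessions) {
        this.sessions = sessions;
    }

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
            FilterChain chain) throws ServletException, IOException {
        String header = request.getHeader("Authorization");
        if (header == null || !header.startsWith(SCHEME)) {
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }
        String sessionKey = header.substring(SCHEME.length()).trim();
        Session session = sessions.find(sessionKey);   // null if unknown or expired
        if (session == null) {
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }
        // Slide the expiration window and expose the user to downstream handlers.
        sessions.renew(session, Duration.ofMinutes(30));
        request.setAttribute("currentUser", session.getUserId());
        chain.doFilter(request, response);
    }

    /** Hypothetical session abstractions, stand-ins for whatever store you use. */
    public interface SessionStore {
        Session find(String key);
        void renew(Session session, Duration ttl);
    }

    public interface Session {
        String getUserId();
    }
}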
If all this is happening over SSL, am I leaving the door open to any kind of attacks that I should be protecting against? You could try to guess session keys and place them in the header, so I suppose I could additionally append a user GUID to the session key to further prevent brute force attacks.
It's been a few years since I've actively programmed and I'm just getting back into the swing here. Apologies if I'm being obtuse or unnecessarily reinventing the wheel, just hoping to run my ideas by the community here based on my reading thus far and see if they pass the litmus test.
When someone asks about REST authentication, I defer to Amazon Web Services and basically suggest "do that". Why? Because, from a "wisdom of the crowds" point of view, AWS solves the problem, is heavily used, heavily analyzed, and vetted by people that know and care far more than most about what makes a secure request. And security is a good place to "not reinvent the wheel". In terms of "shoulders to stand on", you can do worse than AWS.
Now, AWS does not use a token technique, rather it uses a secure hash based on shared secrets and the payload. It is arguably a more complicated implementation (with all of its normalization processes, etc.).
But it works.
The downside is that it requires your application to retain the person's shared secret (i.e. the password), and it also requires the server to have access to a plain-text version of the password. That typically means the password is stored encrypted and then decrypted as appropriate, and that invites yet more complexity around key management and other things on the server side compared to a secure hashing technique.
The biggest issue, of course, with any token passing technique is Man in the Middle attacks, and replay attacks. SSL mitigates these mostly, naturally.
Of course, you should also consider the OAuth family, which have their own issues, notably with interoperability, but if that's not a primary goal, then the techniques are certainly valid.
For your application, the token lease is not a big deal. Your application will still need to operate within the time frame of the lease, or be able to renew it. In order to do that it will need to either retain the user credential or re-prompt the user for it. Just treat the token as a first-class resource, like anything else. If practical, try to associate some other information with the request and bundle it into the token (browser signature, IP address), just to enforce some locality.
You are still open to (potential) replay problems, where the same request can be sent twice. With a typical hash implementation, a timestamp is part of the signature which can bracket the life span of the request. That's solved differently in this case. For example, each request can be sent with a serial ID or a GUID and you can record that the request has already been played to prevent it from happening again. Different techniques for that.
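A minimal sketch of that nonce bookkeeping (an in-process map here purely for illustration; a real deployment would want a shared cache with a TTL rather than one per-server map):

import java.time.Duration;
import java.time.Instant;
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ReplayGuard {

    private final Map<String, Instant> seen = new ConcurrentHashMap<>();
    private final Duration window = Duration.ofMinutes(5);

    /** Returns true only the first time a request ID is presented inside the window. */
    public boolean accept(String requestId) {
        evictExpired();
        return seen.putIfAbsent(requestId, Instant.now()) == null;
    }

    private void evictExpired() {
        Instant cutoff = Instant.now().minus(window);
        for (Iterator<Map.Entry<String, Instant>> it = seen.entrySet().iterator(); it.hasNext(); ) {
            if (it.next().getValue().isBefore(cutoff)) {
                it.remove();
            }
        }
    }
}

Each incoming request would carry a unique ID (a GUID, say), and the server rejects any request whose ID has already been accepted.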
Here is an incredible article about authentication and login services built with angular.
https://medium.com/opinionated-angularjs/7bbf0346acec
This SO question does a good job of summing up my understanding of REST:
Do sessions really violate RESTfulness?
If you store a token in a session you are still creating state on the server side (this is an issue since that session is typically only stored on the one server, this can be mitigated with sticky sessions or other solutions).
I'd like to know what your reasoning is for creating a RESTful service though because perhaps this isn't really a large concern.
If you send a token in the body along with every request (since everything is encrypted with SSL this is okay), then you can have any number of (load-balanced) servers servicing the request without any previous knowledge of state.
Long story short I think aiming for RESTful implementations is a good goal but being purely stateless certainly creates an extra layer of complexity when it comes to authentication and verifying authorization.
Thus far I've started building my back-ends with REST in mind, making URIs that make sense and using the correct HTTP verbs, but still use a token in a session for the simplicity of authentication (when not using multiple servers).
I read through the links you posted. The AngularJS one seems to focus just on the client and doesn't explicitly address the server in that article. He does link to another one (I'm not a Node user, so forgive me if my interpretation is wrong here), but it appears the server is relying on the client to tell it what level of authorization it has, which is clearly not a good idea.

Validation on route change in Backbone.js

I have the following tiny dilemma: I have a Backbone app which is almost entirely route based, i.e. if I go to nameoftheapp/photos/1/edit I should go to the edit page for a given photo. The problem is, since my view logic happens almost 100% on the client side (I use a thin service-based server for storage and validation), how do I avoid issues such as an unauthorized user reaching that page? Of course, I can make the router check whether the user is authorized, but this already leads to duplication of effort in terms of validation. And of course I cannot leave the server side without validation, because then the API would be exposed to access of any sort.
I don't see any other way for now. Unless someone comes up with a clever idea, I guess I will have to duplicate validation both client and server-side.
The fundamental rule should be "never trust the client". Never deliver to the client what they're not allowed to have.
So, if the user goes to nameoftheapp/photos/1/edit, presumably you try to fetch the image from the server.
The server should respond with an HTTP 401 response (Unauthorized).
Your view should have an error handler for this and inform the user they're not authorized for that - in whatever way you're interested in - an error message on the edit view, or a "history.back()" to return to the previous "page".
So, you don't really have to duplicate the validation logic - you simply need your views to be able to respond meaningfully to the validation responses from the server.
You might say, "That isn't efficient - you end up making more API calls", but those unauthorized calls are not going to be a normal occurrence of a user using the app in any regular fashion; they're going to be the result of probing, and I can find out all the API calls anyway by watching the network tab and hitting the API directly with whatever tools I want. So there really will be no more API traffic than if you DID have validation in the client.
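To make the server half of this concrete, here is a rough Spring-flavoured sketch (PhotoRepository, Photo and the "currentUser" request attribute are hypothetical stand-ins set by whatever authentication layer is in place): the server simply refuses to deliver the photo, and the Backbone view only needs an error handler for the resulting 401/403.

import java.io.IOException;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class PhotoController {

    private final PhotoRepository photos;   // hypothetical storage interface

    public PhotoController(PhotoRepository photos) {
        this.photos = photos;
    }

    @GetMapping("/api/photos/{id}")
    public Photo get(@PathVariable String id, HttpServletRequest request,
            HttpServletResponse response) throws IOException {
        String userId = (String) request.getAttribute("currentUser");
        if (userId == null) {
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED);   // not logged in
            return null;
        }
        Photo photo = photos.find(id);
        if (photo == null || !photo.getOwnerId().equals(userId)) {
            response.sendError(HttpServletResponse.SC_FORBIDDEN);      // not the owner
            return null;
        }
        return photo;                                                  // safe to deliver
    }

    /** Hypothetical types, stand-ins for whatever model layer is used. */
    public interface PhotoRepository { Photo find(String id); }
    public interface Photo { String getOwnerId(); }
}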
I encountered the same issue a while ago, and it seems the best practice is to use server-side validation. My suggestion: use a templating engine like Underscore (which is a dependency of Backbone) and design the templates. For those routes that only authenticated users, or those with the rights to do so, can access, ask the server for the missing data (usually small pieces of JSON data) based on a CSRF token, or session_id, or both (or any other server-side validation method you choose), and render the template; otherwise render a predefined error with the same template. The logic is simple enough...
