Best practices for authentication and authorization in Angular without breaking RESTful principles?

I've read quite a few SO threads about authentication and authorization with REST and Angular, but I'm still not feeling like I have a great solution for what I'm hoping to do. For some background, I'm planning to build an app in AngularJS where I want to support:
Limited guest access
Role-based access to the application once authenticated
Authentication via APIs
All of the calls to the REST API will be required to occur over SSL. I'd like to build the app without breaking RESTful principles, namely not keeping session state stored on the server. Of course, whatever is done vis-a-vis authorization on the client side has to be reinforced on the server side. Since we need to pass the entire state with each request, I know I need to pass some sort of token so that the backend server receiving the REST request can both authenticate and authorize the call.
With that said, my main question is around authentication - what are the best practices here? It seems there are lots of different approaches discussed, here's just a few that I've found:
http://broadcast.oreilly.com/2009/12/principles-for-standardized-rest-authentication.html
http://frederiknakstad.com/2013/01/21/authentication-in-single-page-applications-with-angular-js/
http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html
There was a similar question asked (AngularJS best practice application authentication), but unless I'm misunderstanding the answer, it seems to imply that a server session should be used, which would break RESTful principles.
My main concern with the Amazon AWS approach and the George Reese article is that they seem to assume the consumer is a program rather than an end user. A shared secret can be issued to a programmer in advance, who can then use it to encode calls. That isn't the case here - I need to call the REST API from the app on behalf of the user.
Would this approach be enough? Let's say I have a session resource:
POST /api/session
Create a new session for a user
To create a session, you need to POST a JSON object containing the user's "email" and "password".
{
  "email" : "austen@example.com",
  "password" : "password"
}
Curl Example
curl -v -X POST --data '{"email":"austen@example.com","password":"password"}' "https://app.example.com/api/session" --header "Content-Type:application/json"
Response
HTTP/1.1 201 Created
{
  "session": {
    "id": "520138ccfa4634be08000000",
    "expires": "2014-03-20T17:56:28+0000"
  }
}
Status Codes
201 - Created, new session established
400 - Bad Request, the JSON object is not valid or required information is missing
401 - Unauthorized, Check email/password combo
403 - Access Denied, disabled account or license invalid
I'm leaving out the HATEOAS details for clarity. On the backend, there would be a new, limited duration session key created and associated with the user. On subsequent requests, I could pass this as part of the HTTP headers:
Authorization: MyScheme 520138ccfa4634be08000000
Then the backend servers would be responsible for digesting this out of the request, finding the associated user and enforcing authorization rules for the request. It should probably update the expiration for the session as well.
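On the Angular side, I'm imagining something like an $http interceptor to attach that header to every outgoing call - a rough sketch (the module name, storage key, and "MyScheme" label are just placeholders):

// Attach the session token to every outgoing request.
// 'app', 'sessionToken' and 'MyScheme' are illustrative names only.
angular.module('app').factory('authInterceptor', function ($window) {
  return {
    request: function (config) {
      var token = $window.localStorage.getItem('sessionToken');
      if (token) {
        config.headers = config.headers || {};
        config.headers.Authorization = 'MyScheme ' + token;
      }
      return config;
    }
  };
});

angular.module('app').config(function ($httpProvider) {
  $httpProvider.interceptors.push('authInterceptor');
});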
If all this is happening over SSL, am I leaving the door open to any kinds of attacks that I should be protecting against? An attacker could try to guess session keys and place them in the header, so I suppose I could additionally append a user GUID to the session key to make brute-force attacks harder.
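For what it's worth, I assume that if the session id comes from a cryptographically secure random source with enough bits, guessing it is already infeasible. Something like this on a Node backend, purely as an illustration:

// 128 bits from a CSPRNG; brute-forcing an id of this size is not practical.
const crypto = require('crypto');

function newSessionId() {
  return crypto.randomBytes(16).toString('hex');
}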
It's been a few years since I've actively programmed and I'm just getting back into the swing here. Apologies if I'm being obtuse or unnecessarily reinventing the wheel, just hoping to run my ideas by the community here based on my reading thus far and see if they pass the litmus test.

When someone asks about REST authentication, I defer to the Amazon Web Services approach and basically suggest "do that". Why? Because, from a "wisdom of the crowds" point of view, AWS solves the problem, is heavily used, heavily analyzed, and vetted by people who know and care far more than most about what makes a request secure. And security is a good place to "not reinvent the wheel". In terms of "shoulders to stand on", you can do worse than AWS.
Now, AWS does not use a token technique; rather, it uses a secure hash based on a shared secret and the payload. It is arguably a more complicated implementation (with all of its normalization processes, etc.).
But it works.
The downside is that it requires your application to retain the person's shared secret (i.e. the password), and it also requires the server to have access to a plain-text version of that password. That typically means the password is stored encrypted and then decrypted as appropriate, which invites yet more complexity around key management and other things on the server side compared with a token technique.
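To make the hashing approach concrete, here is a rough sketch of signing a request with a shared secret, in the spirit of the AWS scheme (the real AWS signature process also canonicalizes headers, query strings, and so on; the names below are purely illustrative):

// Sign the request details with the shared secret (HMAC-SHA256).
const crypto = require('crypto');

function signRequest(secret, method, path, timestamp, body) {
  const stringToSign = [method, path, timestamp, body].join('\n');
  return crypto.createHmac('sha256', secret).update(stringToSign).digest('hex');
}

// The client sends the timestamp and signature as headers; the server, which
// knows the same secret, recomputes the signature and compares the two.
const ts = new Date().toISOString();
const sig = signRequest('the-shared-secret', 'POST', '/api/things', ts,
                        JSON.stringify({ name: 'example' }));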
The biggest issues, of course, with any token-passing technique are man-in-the-middle attacks and replay attacks. SSL largely mitigates these, naturally.
Of course, you should also consider the OAuth family, which has its own issues, notably with interoperability, but if that's not a primary goal, then the techniques are certainly valid.
For your application, the token lease is not a big deal. Your application will still need to operate within the time frame of the lease, or be able to renew it. In order to do that it will need to either retain the user's credentials or re-prompt for them. Just treat the token as a first-class resource, like anything else. If practical, try to associate some other information with the request and bundle it into the token (browser signature, IP address), just to enforce some locality.
You are still open to (potential) replay problems, where the same request can be sent twice. With a typical hash implementation, a timestamp is part of the signature, which can bracket the life span of the request. That's solved differently in this case: for example, each request can be sent with a serial ID or a GUID, and you can record that the request has already been played to prevent it from happening again. There are different techniques for that.
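As one sketch of such a technique - a nonce plus a short timestamp window, with the caveat that the seen-nonce store is a small piece of server-side state (names are illustrative):

// Reject requests whose timestamp falls outside a short window, and remember
// the nonces seen within that window so the same request can't be replayed.
const WINDOW_MS = 30 * 1000;
const seenNonces = new Map();               // nonce -> timestamp (could be Redis, etc.)

function isReplay(nonce, timestamp, now = Date.now()) {
  if (Math.abs(now - timestamp) > WINDOW_MS) return true;   // stale or future-dated
  if (seenNonces.has(nonce)) return true;                   // already played
  seenNonces.set(nonce, timestamp);
  for (const [n, t] of seenNonces) {                        // prune expired entries
    if (now - t > WINDOW_MS) seenNonces.delete(n);
  }
  return false;
}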

Here is an incredible article about authentication and login services built with Angular.
https://medium.com/opinionated-angularjs/7bbf0346acec

This SO question does a good job of summing up my understanding of REST:
Do sessions really violate RESTfulness?
If you store a token in a session you are still creating state on the server side (this is an issue since that session is typically stored on only one server, though it can be mitigated with sticky sessions or other solutions).
I'd like to know what your reasoning is for creating a RESTful service, though, because perhaps this isn't really a large concern.
If you send a token in the body along with every request (since everything is encrypted with SSL this is okay) then you can have any number of load-balanced servers servicing the request without any previous knowledge of state.
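As a sketch of that idea: if the token carries its own HMAC signature, any server that knows the signing key can verify it without a shared session store - this is essentially what JWT formalizes. The key handling and field names below are made up for illustration:

// Self-contained token: base64(payload) + '.' + HMAC signature.
const crypto = require('crypto');
const SIGNING_KEY = process.env.TOKEN_KEY;   // the same key on every server

function issueToken(payload) {
  const body = Buffer.from(JSON.stringify(payload)).toString('base64');
  const sig = crypto.createHmac('sha256', SIGNING_KEY).update(body).digest('base64');
  return body + '.' + sig;
}

function verifyToken(token) {
  const parts = token.split('.');
  const expected = crypto.createHmac('sha256', SIGNING_KEY).update(parts[0]).digest('base64');
  if (parts[1] !== expected) return null;    // tampered (use a constant-time compare in practice)
  const payload = JSON.parse(Buffer.from(parts[0], 'base64').toString());
  if (payload.exp && payload.exp < Date.now()) return null;   // expired
  return payload;
}

// e.g. issueToken({ userId: 42, role: 'editor', exp: Date.now() + 30 * 60 * 1000 })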
Long story short I think aiming for RESTful implementations is a good goal but being purely stateless certainly creates an extra layer of complexity when it comes to authentication and verifying authorization.
Thus far I've started building my back-ends with REST in mind, making URIs that make sense and using the correct HTTP verbs, but I still use a token in a session for the simplicity of authentication (when not using multiple servers).
I read through the links you posted. The AngularJS one seems to focus just on the client and doesn't explicitly address the server. It does link to another article (I'm not a Node user, so forgive me if my interpretation is wrong here), but there it appears the server is relying on the client to tell it what level of authorization it has, which is clearly not a good idea.

Related

Authorizing requests to get webpack code chunks

When chunking client code with webpack, i.e. using React lazy loading, is there any way to restrict access to a particular chunk by gating it behind some sort of authorization credentials?
All approaches I've seen so far have involved client-side code that only renders a given component when certain conditions are met on the client side (e.g. something along the lines of this as a simple example). This does not strike me as particularly secure, more "security by obscurity" than anything: a motivated attacker could modify either the client code they do have, or perhaps their own memory, to force the browser to request the additional chunks - or just send requests to the server using some other agent.
Is there any way to enforce this on the server side, in the CDN that serves the client code? Perhaps some way to attach credentials to the requests to obtain additional chunks such that the CDN could verify that the client is indeed authorized to be sent the code? (The server side authorization would be trivial enough to implement myself, but I don't know of a way to have the requests actually attach the credentials - that whole mechanism is opaque and I can't find any documentation.)
Alternatively, is there some justification for the client-side-only approach actually being secure?

Is it possible to invalidate a token with stateless logic (no database)?

I need stateless communication between a (serverless) backend and a client. The client asks for a token. When the client makes a request with this token, the backend generates a new token and sends it back to the client. When the client tries to make a request with a previous token, the backend rejects it. I don't want the backend to keep track, either in RAM or in a database, of a whitelist or blacklist of valid/invalid tokens. The backend is allowed to have a static lookup table or/and a static rule/algorithm to perform this logic if needed (using the information inside the token's payload).
So, is it possible to achieve something like this? Is there a way to embed some kind of information inside each token so you know whether you have accepted it once or not?
In your scenario the server is stateless (at least regarding authentication), so you cannot use the state of the server to determine whether a received token has already been used.
Also, the moment you generate the token is before its first usage, so you cannot inject into it anything that tells whether it has been used or not: you simply don't have that information at that time, of course.
So basically, if your only information containers are these two (the stateless server and the self-generated token), the answer is no, no matter how it is done; there is simply no place where this bit of information (is this token the latest one or not?) can be stored at the moment it is generated (i.e. at first-usage time).
Theoretically speaking, you could send this information to a third-party entity and ask for it back when you need it... but this is just cheating: if you are not accepting a DB, RAM, or filesystem storage, I suppose that sending this information somewhere else through an API is just as excluded an option as the others.
Maybe you can try TOTP, https://en.wikipedia.org/wiki/Time-based_One-time_Password_algorithm. This is the same algorithm used for MFA.
Here is the Python implementation; you can find implementations in other languages as well.
https://pypi.org/project/pyotp/2.0.1/
Backend:
Create a random key and save it in a static database.
When the client requests a token, create a token (plus a base64 conversion) using the random key and send it back to the client.
When your server gets the token back (minus the base64 conversion), verify the received token using the same random key.
The TOTP algorithm will ensure that old tokens are not valid.
A TOTP token is usually valid for about 30 seconds, so you may want to decide how to manage the validity window.
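For illustration, here is a minimal sketch of the TOTP algorithm itself (RFC 6238) using Node's built-in crypto - pyotp wraps the same logic in Python. The secret handling below is simplified:

// Both sides derive a short code from a shared secret and the current
// 30-second time step, so no per-token state needs to be stored.
const crypto = require('crypto');

function totp(secret, timeStep = 30, digits = 6, now = Date.now()) {
  const counter = Math.floor(now / 1000 / timeStep);
  const buf = Buffer.alloc(8);                          // 8-byte big-endian counter
  buf.writeUInt32BE(Math.floor(counter / 0x100000000), 0);
  buf.writeUInt32BE(counter >>> 0, 4);

  const hmac = crypto.createHmac('sha1', secret).update(buf).digest();
  const offset = hmac[hmac.length - 1] & 0x0f;          // dynamic truncation (RFC 4226)
  const code = ((hmac[offset] & 0x7f) << 24) |
               (hmac[offset + 1] << 16) |
               (hmac[offset + 2] << 8) |
               hmac[offset + 3];
  return String(code % 10 ** digits).padStart(digits, '0');
}

// The server stores only the shared secret and recomputes the code to verify.
const secret = Buffer.from('static-random-key-from-the-database');
const token = totp(secret);               // what the client would send
const valid = token === totp(secret);     // server recomputes and compares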

sanitizing data before API call in web app

I have a React application which has a form input. The user will fill out this form and once finished this data will be sent via POST request to a service (Spring Boot app) which will persist the data. The web app also has a search function and will send query params via GET request to the same Spring Boot app.
I am sanitizing the data when it is received in the Spring Boot application using a Filter.
My question is: since the server side is validating the data and stripping out possible XSS attack code, is it necessary to sanitize the data entered into the form on the React side too? If so, would I do this just before the API call is made, i.e. have code to strip out dangerous characters before the data is added to the POST payload, or as soon as the data is read in from the input text fields?
I have read numerous posts online and answers here on SO. I understand that it seems most important to validate on the server side since client code can't be trusted. The thing I am not clear on is: since the client code is accessible to any possible attacker, can't they just bypass any validation mechanism on the client side, making it pointless to add it on the client in the first place? The only advantage I can see right now is detecting dangerous input as early as possible.
Thanks
It is useless to do client-side sanitisation - it is a waste of time and gives developers a false feeling of security.
If you want to do sanitisation of input (arguably it is not necessary if your clients encode output), you have to do it on the server anyway.
“The only advantage I can see right now is detecting dangerous input as early as possible.”
An experienced hacker will bypass client validation anyway.
You should not put effort into giving a naive hacker feedback as early as possible :)
If your backend uses .NET, see AntiXSS in ASP.NET Core
It's not pointless to do client-side validation - for one, it's good UX: you should never allow a user to enter invalid data into input fields, otherwise they will be presented with a litany of server-side error messages after submission.
Secondly, it can deter casual attackers who may just want to see what happens if they enter ' into the username field (a SQL injection attempt), but who otherwise may not be bothered to get out a web proxy and start a full-on attack.
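For example, a simple client-side check used purely for UX, with the server-side filter remaining the real gatekeeper (the rules below are made-up examples):

// Run before building the POST payload; shows an inline error immediately
// instead of waiting for the server to reject the submission.
function validateUsername(value) {
  if (!value || value.length < 3) {
    return 'Username must be at least 3 characters';
  }
  if (!/^[A-Za-z0-9_.-]+$/.test(value)) {
    return 'Only letters, digits, ".", "-" and "_" are allowed';
  }
  return null;   // no client-side complaints; the server still validates
}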

How to protect RESTful service?

Contemplating building an Angular 2 front-end to my website. My question is not necessarily related to Angular but I want to provide full context.
Application logic that displays content to the user would shift to the client. So on the server side, I would need to expose data via a RESTful JSON feed. What worries me is that someone could completely bypass my front-end and execute requests to the service with various parameters, effectively scraping my database. I realize some of this is possible by scraping HTML, but exposing a service with nicely formatted data is just a no-brainer.
Is there a way to protect the RESTful service from this? In other words, is there a way to ensure such service would only respond to my Angular 2 application call? Authentication certainly isn't a solution here - I don't want to force visitors to authenticate and the scraper could very well authenticate and get access, anyway.
I would recommend JWT authorization. One such implementation is OAuth. Basically you get a JSON Web Token (JWT) that has been signed by an authority you trust and that describes the user and what resources they can access on your API.
If the request doesn't include an Authorization token - your API rejects it.
If the token has been tampered with by someone trying to grant themselves privileges after the token is signed by the authorization authority - your API rejects it.
It is a pretty cool piece of kit.
This site has information about OAuth implementations in different languages, hopefully your favorite is listed.
Some light bed time reading.
There is no obvious way to do it that I know of, but a lot of people seem to be looking at Amazon S3 as a model. If you put credentials in your client code, then anyone getting the client code can see them. I might suggest that you could write the server to pass a time-limited token back to the browser with the client code. The client code would be required to pass it back to the server for access. This would prevent anyone from writing their own client code, as only client code sent by the server would work, though only for some period of time. The user might occasionally get timeouts, but that depends on how strict you want to make the token timeouts.
Of course, even this kind of thing could be hacked by someone making a client request to get a copy of the token to use with their own client API, but at that point you should be proud that someone is trying so hard to use your API! I have not tried to write such a thing, so I don't have any practical experience with the issue. I myself have wondered about it, but also don't have enough experience with this architecture to see what, if anything, others have been doing. What do AngularJS forums suggest?
Additional References: Best Practices for securing a REST API / web service
I believe the answer is "No".
You could do some security-by-obscurity type stuff. Your REST API could expose garbled data, and you could have some function "hidden" in your code that un-garbles it. Obviously this isn't foolproof, but if you expose data on a public site, it's out there regardless of server or client rendering.

Best authentication solution for RESTful Database Server

I'm writing a RESTful Database Server called Phoenix. Being an easy interface into an entire application's data, security is quite an important issue, and I'm interested in what authentication solutions any of you could suggest.
It needs to be:
Secure - it's got to be very hard to break. Signing requests could be a good way of doing this, but considering it's REST there aren't many parameters sent, so I don't know how much good signing would do.
Minimal - I'd rather it didn't take four requests to compare six tokens in HMAC-signed requests - the USP of the server is its simplicity, so authentication from clients has got to be easy.
Implementable - it has to fit the system, which is a database server. So, for instance, OAuth wouldn't work here.
I'd love to hear your suggestions - thank you!
Jamie
Not much information here about what your security or implementation needs are. The quick answers are Basic or Digest over SSL, or signed requests. Are there reasons not to use these?
Signing requests typically adds a timestamp and/or a nonce, so any request can be authenticated. See the Amazon AWS authentication documentation for a description and libraries.
I have a similar server. I chose to use OAuth signing for its simplicity:
http://oauth.net/core/1.0#signing_process
We don't enforce the nonce, just limit the timestamp to a short window (30 seconds) to thwart replay.
OAuth libraries are available on many platforms, so you don't have to write much code to implement it. I don't know why you think OAuth is not implementable.
Each client allowed to access the data is assigned a consumer_key and a consumer_secret. All requests are signed with the consumer_secret, so only a client knowing the secret can get access.
We also considered other options. HTTP Basic Auth over SSL is too expensive. HTTP Digest Auth is too slow because it needs to wait for a challenge.
