I need serverless communication between a (serverless) backend and a client. The client asks for a token. When the client makes a request with this token, the backend generates a new token and sends it back to the client. When the client tries to make a request with a previous token, the backend rejects it. I don't want the backend to keep track of a whitelist or blacklist of valid/invalid tokens, either in RAM or in a database. The backend is allowed to have a static lookup table and/or a static rule/algorithm to perform this logic if needed (i.e. to use the information inside the token's payload).
So, is it possible to achieve something like this? Is there a way to embed some kind of information inside each token so you know whether you have accepted it once or not?
In your scenario the server is stateless (at least regarding authentication), so you cannot use server state to determine whether a received token has already been used.
Also, a token is generated before its first use, so you cannot embed anything in it that says whether it has been used: you simply don't have that information at generation time.
So basically, if your only information containers are these two (a stateless server and a self-generated token), the answer is no, no matter how it is done; there is simply no place where this bit of information ("is this the latest token or not") can live at the moment it would have to be written (i.e. at first-use time).
Theoretically you could send this information to a third-party entity and ask for it back when you need it, but that is just cheating: if you are not accepting a DB, RAM or filesystem storage, I assume that sending this information somewhere else through an API is just as excluded as the other options.
Maybe you can try TOTP: https://en.wikipedia.org/wiki/Time-based_One-time_Password_algorithm. This is the same algorithm used for MFA.
Here is the Python implementation; you can find implementations in other languages as well:
https://pypi.org/project/pyotp/2.0.1/
Backend:
Create a random key and save it in a static database.
When the client requests a token, create one (plus Base64 encoding) using the random key and send it back to the client.
When your server receives a token (after Base64 decoding), verify it against the same random key.
The TOTP algorithm will ensure that old tokens are no longer valid.
A token is usually valid for 30 seconds, so you may want to decide how to manage token validity. A sketch of this flow is shown below.
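For illustration, here is a minimal sketch of that flow with pyotp; the secret handling and token transport are placeholder assumptions, not a full implementation:

import pyotp

# One-time setup: generate a random Base32 secret and keep it in static
# configuration (file, environment variable, etc.).
SECRET = pyotp.random_base32()
totp = pyotp.TOTP(SECRET)

# Backend, when the client asks for a token:
token = totp.now()            # e.g. "492039", tied to the current 30-second window

# Backend, when a request arrives carrying a token:
is_valid = totp.verify(token) # False once the time window has passed
print(is_valid)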
I use the libcurl share+easy interface and I need to "fix up" some cookie info that is set by a web server.
In my case I use multiple threads, and I would like to know at what point a received cookie is "shared" with all other curl handles, and when is the right time to fix the received cookie data:
right when I receive it from the remote server (but at this point I'm not sure whether the corrupt cookie data might be picked up by some other thread making a new HTTP request at the same time), or
when making new requests, to ensure that I don't end up using a corrupt cookie in new HTTP requests.
Here's my code flow. I call curl_easy_perform. When a response containing Set-Cookie comes in, libcurl first parses that cookie and stores it in its internal store (which gets shared when using the curl share interface).
Then curl_easy_perform returns, and I try to check whether the server sent the specific cookie that I need to "fix up". The only way to check that cookie is to use CURLINFO_COOKIELIST.
My question is: between the time curl parses the incoming Set-Cookie header (with invalid cookie data) and the time I inspect cookies using CURLINFO_COOKIELIST, the invalid cookie might be picked up by another thread. To avoid that, the only option I see is to inspect cookies on each new request, in case another thread has updated the cookies with invalid data.
Even in that case I may still end up using invalid cookie data. In other words, there seems to be no proper solution to this problem.
What's the right approach?
Typically when using libcurl in multiple threads, you use one handle in each thread and they don't share anything. Then it doesn't matter when you modify cookies since each handle (and thus thread) operates independently.
If you make the threads share cookie state, as with the share interface, then you have locking mutexes set up that protect the data objects from being accessed by more than one thread at a time anyway, so you can just go ahead and update the cookies using the correct API whenever you like.
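The question is about the C API, but purely as an illustration of that setup, here is a rough sketch using pycurl (the Python libcurl binding), which installs the share lock callbacks for you; the URL and cookie values are placeholders:

import io
import pycurl

# Share cookie state between easy handles; pycurl provides the lock callbacks.
share = pycurl.CurlShare()
share.setopt(pycurl.SH_SHARE, pycurl.LOCK_DATA_COOKIE)

buf = io.BytesIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, "https://example.com/")
c.setopt(pycurl.SHARE, share)        # this handle now uses the shared cookie store
c.setopt(pycurl.COOKIEFILE, "")      # enable the cookie engine without a cookie file
c.setopt(pycurl.WRITEFUNCTION, buf.write)
c.perform()

# Inspect the shared cookie store (Netscape-format lines) and "fix up" an entry;
# the share lock serializes this against other handles using the same share object.
for line in c.getinfo(pycurl.INFO_COOKIELIST):
    print(line)
c.setopt(pycurl.COOKIELIST, "example.com\tFALSE\t/\tFALSE\t0\tsession\tfixed-value")
c.close()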
If you're using the multi interface, it does parallel transfers in the same thread, so you can update cookies whenever you like without risking any parallelism problems.
I have an application written in AngularJS with a Dropwizard backend. All API calls are AJAX, with the exception of file downloads, which are done by redirecting to a standard GET request.
All API calls are secured through a token which is passed as a Token header. We use SSL for all APIs.
The download GET request works, but I'm having a hard time figuring out how to secure it. I have no way of setting a custom header, which is required to pass the token. So theoretically I'm left with two options, neither of them acceptable: 1. pass the token as one of the GET parameters, or 2. leave the download unsecured.
Any ideas how to secure file download?
Putting a secret token in a URL query parameter isn't great because URLs tend to leak, for example through history, logging, or Referer headers. There are ways to mitigate this: for example, the server side could issue a download token that is only good for one use or for a limited amount of time. Or the client could pass a time-limited token, created by signing the secret token, that the server side can verify.
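As a sketch of the limited-time idea (the key, URL shape and TTL below are assumptions, not part of your stack):

import hashlib
import hmac
import time

SECRET_KEY = b"server-side-secret"   # hypothetical key known only to the backend

def make_download_token(file_id, ttl_seconds=60):
    expires = str(int(time.time()) + ttl_seconds)
    sig = hmac.new(SECRET_KEY, ("%s:%s" % (file_id, expires)).encode(), hashlib.sha256).hexdigest()
    return "%s.%s" % (expires, sig)

def verify_download_token(file_id, token):
    try:
        expires, sig = token.split(".")
    except ValueError:
        return False
    if int(expires) < time.time():
        return False                  # token has expired
    expected = hmac.new(SECRET_KEY, ("%s:%s" % (file_id, expires)).encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

# The client would then request e.g. GET /files/<file_id>?dl_token=<token>,
# and the GET handler streams the file only if verify_download_token() passes.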
Alternatively you could, just for this one interface (e.g. path-limited, quickly expiring), put the token in a cookie.
Another approach is to download the whole file through AJAX, which allows you to set the header as normal. You then have to present the content as a downloadable local resource, which requires a cocktail of browser-specific hacks (e.g. using data: or filesystem: URLs, and potentially links with the download attribute). Given the complication, this isn't usually worth bothering with, especially if the file is very large, which may present further storage constraints.
I am using the Ionic framework, so naturally I use Angular.js for the front end. On the back end I use Spring Boot for data handling and API management.
I have used a single session and a CSRF token exchange between client and server.
However, I have been asked to add an extra security control in some sections. For example, one section of the application can stay valid as long as the server is alive, another section can stay alive for a couple of weeks, and another section should require authentication on every single request.
How can I handle this design problem?
Modern web apps use JSON Web Tokens (JWT).
There is also an Angular package you can use.
These tokens are sent with every request and contain arbitrary information about the user or other data. They are issued by your API on successful login and stored in your frontend. The issued token is then attached to every request header when calling your API. In the backend you can then decode the token and determine whether the user has all the required rights to continue, and whether the token is still valid or outdated for your different use cases.
I am not familiar with your backend solution, but I am sure you can find some JWT packages for it or implement an easy solution yourself (googling for "spring jwt" gives quite a few results).
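Your backend is Spring, but the idea is language-agnostic; here is a minimal sketch using Python's PyJWT just to show the shape (the claim names and lifetimes are assumptions):

import datetime
import jwt  # pip install pyjwt

SECRET = "change-me"

def issue_token(user_id, roles, lifetime_minutes):
    payload = {
        "sub": user_id,
        "roles": roles,
        "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=lifetime_minutes),
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")

def check_token(token):
    try:
        # Raises if the signature is wrong or the "exp" claim has passed.
        return jwt.decode(token, SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return None

# Different lifetimes for the different sections of the app:
short_lived = issue_token("alice", ["reports"], lifetime_minutes=5)
long_lived = issue_token("alice", ["profile"], lifetime_minutes=60 * 24 * 14)  # ~2 weeks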
I've read quite a few SO threads about authentication and authorization with REST and Angular, but I'm still not feeling like I have a great solution for what I'm hoping to do. For some background, I'm planning to build an app in AngularJS where I want to support:
Limited guest access
Role-based access to the application once authenticated
Authentication via APIs
All of the calls to the REST API will be required to occur over SSL. I'd like to build the app without breaking RESTful principles, namely not keeping session state stored on the server. Of course, whatever is done vis-a-vis authorization on the client side has to be reinforced on the server side. Since we need to pass the entire state with each request, I know I need to pass some sort of token so that the backend server receiving the REST request can both authenticate and authorize the call.
With that said, my main question is around authentication - what are the best practices here? It seems there are lots of different approaches discussed, here's just a few that I've found:
http://broadcast.oreilly.com/2009/12/principles-for-standardized-rest-authentication.html
http://frederiknakstad.com/2013/01/21/authentication-in-single-page-applications-with-angular-js/
http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html
There was a similar question asked (AngularJS best practice application authentication), but unless I'm misunderstanding the answer, it seems to imply that a server session should be used, which is breaking RESTful principles.
My main concern with the Amazon AWS and the George Reese articles is that they seem to assume the consumer is a program rather than an end user. A shared secret can be issued to a programmer in advance, who can then use it to encode calls. That isn't the case here - I need to call the REST API from the app on behalf of the user.
Would this approach be enough? Let's say I have a session resource:
POST /api/session
Create a new session for a user
To create a session, you need to POST a JSON object containing the "username" and "password".
{
"username" : "austen@example.com",
"password" : "password"
}
Curl Example
curl -v -X POST --data '{"username":"austen@example.com","password":"password"}' "https://app.example.com/api/session" --header "Content-Type:application/json"
Response
HTTP/1.1 201 Created
{
  "session": {
    "id": "520138ccfa4634be08000000",
    "expires": "2014-03-20T17:56:28+0000"
  }
}
Status Codes
201 - Created, new session established
400 - Bad Request, the JSON object is not valid or required information is missing
401 - Unauthorized, Check email/password combo
403 - Access Denied, disabled account or license invalid
I'm leaving out the HATEOAS details for clarity. On the backend, there would be a new, limited duration session key created and associated with the user. On subsequent requests, I could pass this as part of the HTTP headers:
Authorization: MyScheme 520138ccfa4634be08000000
Then the backend servers would be responsible for digesting this out of the request, finding the associated user and enforcing authorization rules for the request. It should probably update the expiration for the session as well.
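To make the idea concrete, here is a hypothetical sketch of that server-side check (the session store API and lifetime are made up for illustration):

import datetime

SESSION_LIFETIME = datetime.timedelta(minutes=30)

def authenticate(request_headers, session_store):
    auth = request_headers.get("Authorization", "")
    scheme, _, session_id = auth.partition(" ")
    if scheme != "MyScheme" or not session_id:
        return None
    session = session_store.find(session_id)   # hypothetical lookup by session key
    if session is None or session.expires < datetime.datetime.utcnow():
        return None
    # Sliding expiration: each authenticated request extends the session.
    session.expires = datetime.datetime.utcnow() + SESSION_LIFETIME
    session_store.save(session)
    return session.user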
If all this is happening over SSL, am I leaving the door open to any kind of attacks that I should be protecting against? An attacker could try to guess session keys and place them in the header, so I suppose I could additionally append a user GUID to the session key to make brute-force attacks harder.
It's been a few years since I've actively programmed and I'm just getting back into the swing here. Apologies if I'm being obtuse or unnecessarily reinventing the wheel, just hoping to run my ideas by the community here based on my reading thus far and see if they pass the litmus test.
When someone asks about REST authentication, I defer to Amazon Web Services and basically suggest "do that". Why? Because, from a "wisdom of the crowds" point of view, AWS solves the problem, is heavily used, and is heavily analyzed and vetted by people who know and care far more than most about what makes a secure request. And security is a good place to "not reinvent the wheel". In terms of "shoulders to stand on", you can do worse than AWS.
Now, AWS does not use a token technique; rather it uses a secure hash based on shared secrets and the payload. It is arguably a more complicated implementation (with all of its normalization processes, etc.).
But it works.
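As a greatly simplified sketch of that signing idea (this is not the real AWS canonicalization, just the shape of it; the secret and request fields are placeholders):

import hashlib
import hmac

SHARED_SECRET = b"per-user-shared-secret"   # provisioned out of band

def sign_request(method, path, timestamp, body):
    string_to_sign = "\n".join([method, path, timestamp, hashlib.sha256(body).hexdigest()])
    return hmac.new(SHARED_SECRET, string_to_sign.encode(), hashlib.sha256).hexdigest()

# The client sends the signature plus the timestamp with the request; the server
# recomputes the same signature from what it received and compares the two with
# hmac.compare_digest().
signature = sign_request("POST", "/api/orders", "2014-03-20T17:56:28Z", b'{"qty": 1}')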
The downside is that it requires your application to retain the person's shared secret (i.e. the password), and it also requires the server to have access to a plain-text version of that password. That typically means the password is stored encrypted and then decrypted as appropriate, which invites yet more complexity of key management and other things on the server side compared with a secure hashing technique.
The biggest issue, of course, with any token passing technique is Man in the Middle attacks, and replay attacks. SSL mitigates these mostly, naturally.
Of course, you should also consider the OAuth family, which has its own issues, notably with interoperability; but if that's not a primary goal, then the techniques are certainly valid.
For your application, the token lease is not a big deal. Your application will still need to operate within the time frame of the lease, or be able to renew it. To do that it will need to either retain the user's credentials or re-prompt for them. Just treat the token as a first-class resource, like anything else. If practical, try to associate some other information with the request and bundle it into the token (browser signature, IP address), just to enforce some locality.
You are still open to (potential) replay problems, where the same request can be sent twice. With a typical hash implementation, a timestamp is part of the signature, which brackets the lifespan of the request. That is solved differently in this case: for example, each request can be sent with a serial ID or a GUID, and you can record that the request has already been played to prevent it from happening again. There are different techniques for that.
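A sketch of the timestamp-bracketing part (the allowed skew is an arbitrary choice):

import datetime

MAX_SKEW = datetime.timedelta(minutes=5)

def is_fresh(request_timestamp):
    # Reject requests whose signed timestamp is too far from the server clock;
    # combined with remembering request IDs seen inside that window, the same
    # signed request cannot be replayed later.
    return abs(datetime.datetime.utcnow() - request_timestamp) <= MAX_SKEW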
Here is an incredible article about authentication and login services built with angular.
https://medium.com/opinionated-angularjs/7bbf0346acec
This SO question does a good job of summing up my understanding of REST:
Do sessions really violate RESTfulness?
If you store a token in a session you are still creating state on the server side (this is an issue because that session is typically stored on only one server; it can be mitigated with sticky sessions or other solutions).
I'd like to know what your reasoning is for creating a RESTful service though because perhaps this isn't really a large concern.
If you send a token in the body along with every request (since everything is encrypted with SSL this is okay), then you can have any number of (load-balanced) servers servicing the request without any previous knowledge of state.
Long story short I think aiming for RESTful implementations is a good goal but being purely stateless certainly creates an extra layer of complexity when it comes to authentication and verifying authorization.
Thus far I've started building my back-ends with REST in mind, making URIs that make sense and using the correct HTTP verbs, but I still use a token in a session for the simplicity of authentication (when not using multiple servers).
I read through the links you posted. The AngularJS one seems to focus just on the client and doesn't explicitly address the server in that article; he does link to another one (I'm not a Node user, so forgive me if my interpretation is wrong here), but it appears the server is relying on the client to tell it what level of authorization it has, which is clearly not a good idea.