I am developing an Alexa skill that requires account linking. Account linking succeeds the first two times (enable the skill, disable it, then re-enable it). It fails only when I re-enable the skill immediately after disabling it. I use the Code Grant auth type. The data in the query string (state/code/etc.) is successfully redirected back to Amazon's redirect/return URL, but Amazon ends the account linking process with a message stating that account linking failed at this time. Does anyone have any idea? Your help is much appreciated.
Answer: I finally figured out the issue. The authorization server runs on two machines (instances) and uses a concurrent dictionary, backed by local (in-process) memory, to store the access codes. During authentication, Amazon connected to one of the auth servers, which stored the access code in its own in-memory store. When Amazon then tried to exchange the code it had been given for an access token, the request hit the second auth server. The second server had no entry for the code Amazon provided, so it rejected the request. The solution is to use a shared, out-of-process store (such as a Redis cache) for the access codes, so that both authorization servers serve requests against the same store.
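The failure mode and the fix can be sketched in a few lines. This is a toy Python model: `AuthServer` and the plain dictionaries are stand-ins for the real instances and the Redis store.

```python
import secrets

class AuthServer:
    """One auth-server instance. `store` maps authorization codes to tokens."""
    def __init__(self, store):
        self.store = store

    def issue_code(self, access_token):
        code = secrets.token_hex(8)
        self.store[code] = access_token
        return code

    def redeem_code(self, code):
        # Codes are single-use; returns None when the code is unknown.
        return self.store.pop(code, None)

# Local (in-process) stores: each instance has its own dictionary,
# so a code issued by one instance is invisible to the other.
a, b = AuthServer({}), AuthServer({})
code = a.issue_code("token-123")
assert b.redeem_code(code) is None      # the failing case

# Shared store (stand-in for Redis): both instances see the same data.
shared = {}
a, b = AuthServer(shared), AuthServer(shared)
code = a.issue_code("token-123")
assert b.redeem_code(code) == "token-123"
```

With a real Redis backend the dictionary operations become `SET` (with a short TTL, since authorization codes should expire quickly) and `GETDEL`.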
Related
I need serverless communication between a (serverless) backend and a client. The client asks for a token. When the client makes a request with this token, the backend generates a new token and sends it back to the client. When the client tries to make a request with a previous token, the backend rejects it. I don't want the backend to keep track of valid/invalid tokens, whether in RAM or in a database, as a whitelist or a blacklist. The backend is allowed to have a static lookup table and/or a static rule/algorithm to perform this logic if needed (using the information inside the token's payload).
So, is it possible to achieve something like this? Is there a way to embed certain information inside each token so you know whether you have accepted it once or not?
In your scenario the server is stateless (at least regarding authentication), so you cannot use server state to determine whether a received token has already been used.
Also, a token is generated before its first use, so you cannot embed in it anything that tells whether it has been used: you simply don't have that information at generation time.
So if your only information containers are these two (a stateless server and a self-generated token), the answer is no, no matter how it is done: there is simply no place where this bit of information ("has this token been used or not") can be stored at the moment it would need to be recorded (at first-use time).
Theoretically you could send this information to a third-party entity and ask for it back when you need it... but that is just cheating: if a DB, RAM, or filesystem storage is not acceptable, I assume that sending this information somewhere through an API is just as excluded as the other options.
You could try TOTP (the Time-based One-Time Password algorithm, https://en.wikipedia.org/wiki/Time-based_One-time_Password_algorithm). This is the same algorithm used for MFA.
Here is a Python implementation; you can find implementations in other languages as well.
https://pypi.org/project/pyotp/2.0.1/
Backend:
Create a random key and save it in a static database.
When the client requests a token, create a token (plus base64 conversion) using the random key and send it back to the client.
When your server receives the token (minus base64 conversion), verify it using the same random key.
The TOTP algorithm ensures that old tokens are not valid.
A token is usually valid for 30 seconds, so you may want to decide how to manage the validity window for your tokens.
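pyotp wraps the standard TOTP algorithm (RFC 6238). For reference, here is a stdlib-only sketch of what it computes, checked against the RFC's own test vector:

```python
import hashlib
import hmac
import struct
import time

def hotp(key, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, then dynamic truncation."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key, at=None, step=30, digits=6):
    """RFC 6238 TOTP: HOTP keyed on the current 30-second time step."""
    now = time.time() if at is None else at
    return hotp(key, int(now // step), digits)

# RFC 6238 Appendix B test vector (SHA-1, 8 digits, time = 59 seconds).
assert totp(b"12345678901234567890", at=59, digits=8) == "94287082"
```

Because the code depends only on the shared key and the current time step, the server can verify a token without storing any per-token state, which is what makes it fit the stateless requirement above (within the 30-second window).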
I'm pretty new to Kerberos. I'm testing the Single Sign-On feature using Kerberos. The environment: Windows clients (with Active Directory authentication) connecting to an Apache server running on a Linux machine. The called CGI script (in Perl) connects to a DB server using the forwarded user TGT. Everything works fine (I have the principals, the keytab files, the config files, and the result from the DB server :) ). So, if as win_usr_a on the Windows side I launch my CGI request, the CGI script connects to the remote DB, queries select user from dual, and gets back win_usr_a@EXAMPLE.COM.
I have only one issue I'd like to solve. Currently the credential cache is stored as FILE:.... On the intermediate Apache server, the user running Apache gets the forwarded TGTs of all authenticated users (since it can see all the credential caches), and while those TGTs' lifetimes have not expired it can request any service principals on behalf of those users.
I know that hosts are considered trusted in Kerberos by definition, but I would be happy if I could limit the usability of the forwarded TGTs. For example, can I configure Active Directory so that a forwarded TGT is valid only for requesting a given service principal? And/or is there a way to make the forwarded TGT usable only once, i.e. after requesting any service principal it becomes invalid? Or is there a way the CGI script could detect whether the forwarded TGT was used by someone else (maybe by checking a usage counter)?
Right now I have only one solution: I can set the lifetime of the forwarded TGT to 2 seconds and run kdestroy in the CGI script after the DB connection is established (the CGI script can be executed by the apache user, but that user cannot modify the code). Can I do a bit more?
The credential caches should be hidden somehow. I think defining the credential cache as API: would be nice, but that is only defined for Windows. On Linux, maybe KEYRING:process:name or MEMORY: would be a better solution, as these are local to the current process and destroyed when the process exits. As far as I know, Apache creates a new process for each new connection, so this may work. Maybe KEYRING:thread:name is the solution? But, according to the thread-keyring(7) man page, it is not inherited by clone and is cleared by the execve syscall. So if, e.g., Perl is invoked via execve, it will not get the credential cache. Maybe mod_perl + KEYRING:thread:name?
Any idea would be appreciated! Thanks in advance!
The short answer is that Kerberos itself does not provide any mechanism to limit the scope of who can use it if the client happens to have all the necessary bits at a given point in time. Once you have a usable TGT, you have a usable TGT, and can do with it what you like. This is a fundamentally flawed design as far as security concerns go.
Windows refers to this as unconstrained delegation, and specifically has a solution for this through a Kerberos extension called [MS-SFU] which is more broadly referred to as Constrained Delegation.
The gist of the protocol is that you send a regular service ticket (without an attached TGT) to the server (Apache), and the server is enlightened enough to know that it can exchange that service ticket for a service ticket to a delegated server (the DB) from Active Directory. The server then uses the new service ticket to authenticate to the DB, and the DB sees a service ticket for win_usr_a despite it being sent by Apache.
The trick of course is that enlightenment bit. Without knowing more about the specifics of how the authentication is happening in your CGI, it's impossible to say whether whatever you're doing supports [MS-SFU].
Quoting a previous answer of mine (to a different question, focused on "race conditions" when updating the cache):
If multiple processes create tickets independently, then they have no reason to use the same credentials cache. In the worst case they would even use different principals, and the side effects would be... interesting.
Solution: change the environment of each process so that KRB5CCNAME points to a specific file -- and preferably, in an application-specific directory.
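A sketch of that per-process setup (Python; the echo command is just a placeholder for the real kinit/CGI client, and the directory layout is an assumption):

```python
import os
import subprocess
import tempfile

def run_with_private_ccache(cmd):
    """Run `cmd` with its own KRB5CCNAME, so it cannot see (or clobber)
    the credential caches of other processes."""
    cache_dir = tempfile.mkdtemp(prefix="krb5cc_")   # application-specific dir
    env = dict(os.environ)
    env["KRB5CCNAME"] = "FILE:" + os.path.join(cache_dir, "ccache")
    return subprocess.run(cmd, env=env, capture_output=True, text=True)

# Demo: the child sees only its private cache path. In practice the command
# would be kinit or the CGI client itself.
result = run_with_private_ccache(["/bin/sh", "-c", "echo $KRB5CCNAME"])
print(result.stdout.strip())
```

The same idea applies to SetEnv/mod_env in the Apache configuration, as long as each worker ends up with its own cache path.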
If your focus is on securing the credentials, then go one step further and don't use a cache: modify your client app so that it creates the TGT and service tickets on the fly and keeps them private.
Note that Java never publishes anything to the Kerberos cache; it may either read from the cache or bypass it altogether, depending on the JAAS config. Too bad the Java implementation of Kerberos is limited and rather brittle, cf. https://steveloughran.gitbooks.io/kerberos_and_hadoop/content/sections/jdk_versions.html and https://steveloughran.gitbooks.io/kerberos_and_hadoop/content/sections/jaas.html
I have been reading about IdentityServer4, and my understanding is that (at a high level), once IdentityServer4 is set up, a registered client can make API calls to the defined API resources, provided the client has been granted that access.
Using C#, I can:
1. Make a request for an access token from IdentityServer4, and then,
2. Pass this token along with my request to an API.
My question is: since the token has a defined lifetime, say 3600 seconds, is it correct to say that the client needs to store this token locally and use it for all its API calls within those 3600 seconds? If so, the client would need to somehow know when the token has expired. How would this be achieved?
Another question I have is how 'refresh' tokens work. When do they kick in in this whole process?
Thanks
Long story short, it's up to the client to renew the tokens it uses. Renewal can be based on the known expiry time (with a bit of a buffer), but OAuth also defines standard error responses from API endpoints that indicate to a client that a new token is required. Clients should respect these and act accordingly.
It also depends on the grant type being used. E.g. with client credentials, although it may not be the most efficient approach, it can be desirable to get a new token for every call or "session" (i.e. multiple calls related to processing a given task) to avoid this complexity.
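To make the expiry-with-buffer approach concrete, here is a minimal client-side token cache (a Python sketch; `fetch_token` is a hypothetical stand-in for the real token request to IdentityServer4, and on a 401 you would simply discard the cached token and call `get_token` again):

```python
import time

class TokenClient:
    """Caches an access token and renews it shortly before its known expiry."""

    def __init__(self, fetch_token, buffer_seconds=60):
        # fetch_token() must return (token, expires_in_seconds).
        self.fetch_token = fetch_token
        self.buffer = buffer_seconds
        self._token = None
        self._expires_at = 0.0

    def get_token(self):
        # Renew when missing or within the buffer window before expiry.
        if self._token is None or time.time() >= self._expires_at - self.buffer:
            self._token, expires_in = self.fetch_token()
            self._expires_at = time.time() + expires_in
        return self._token

# Usage with a fake fetcher: the first call fetches, later calls reuse
# the cached token until it nears expiry.
calls = []
client = TokenClient(lambda: (calls.append(1) or "tok-%d" % len(calls), 3600))
assert client.get_token() == "tok-1"
assert client.get_token() == "tok-1"   # still valid, no second fetch
```

The `expires_in` value comes from the token endpoint's response; the buffer guards against clock drift and in-flight requests outliving the token.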
I am getting an error on the test and development servers when uploading a file with Azure Blob Storage. It uploads locally without any problem. We are using NuGet for file handling. On debugging, we get the error on container.CreateIfNotExist().
Could anybody help me solve this error?
Thanks in advance!
Based on your description, I assume that you are using the Azure Storage client library WindowsAzure.Storage to upload files to your blob storage.
On debugging we are getting error on container.CreateIfNotExist()
If you construct the CloudStorageAccount with the AccountName and AccountKey, please make sure your AccountKey is correct; you can log into the Azure portal and check it. If you construct the CloudStorageAccount via an account-level SAS token, please make sure the SAS token is valid and contains the related permissions. Moreover, you could regenerate your account key or a new SAS token to narrow down this issue. You could also use Fiddler to capture the network traces when executing the operations.
Additionally, you need to check your server time, as Authentication for the Azure Storage Services states:
The storage services ensure that a request is no older than 15 minutes by the time it reaches the service. This guards against certain security attacks, including replay attacks. When this check fails, the server returns response code 403 (Forbidden).
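As a quick sanity check for that 15-minute window, you can compare your local clock against the Date header of any response from the service (a Python sketch; the header value below is fabricated for illustration):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def clock_skew_seconds(date_header, now=None):
    """Absolute skew between the local clock and an HTTP Date header."""
    server_time = parsedate_to_datetime(date_header)
    local_time = now or datetime.now(timezone.utc)
    return abs((local_time - server_time).total_seconds())

# Fabricated example: a local clock 20 minutes ahead of the server.
skew = clock_skew_seconds("Tue, 01 Jan 2019 12:00:00 GMT",
                          now=datetime(2019, 1, 1, 12, 20, tzinfo=timezone.utc))
assert skew > 15 * 60   # such a request would be rejected with 403
```

If the skew exceeds a few minutes, syncing the server with NTP usually resolves this class of 403 errors.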
Also, you could enable Storage Logging and access the log data to retrieve the detailed error message.
Problem was Microsoft.ApplicationInsights 2.4.0.
Solved by downgrading to 2.3.0.
It is really strange, but the links below really helped me solve the issue:
Azure Storage Emulator 403 Forbidden
Azure CloudBlobContainer.CreateIfNotExists() throws Forbidden (403) on Local Development
Thanks!
OK, so recently I have needed to create an application with WebRTC for video, voice, etc.
After looking into some libraries, I found SimpleWebRTC to be pretty handy:
https://github.com/andyet/SimpleWebRTC
So what I am interested in is: how do I implement a STUN/TURN server? (It would be great if someone could explain the difference in plain English!) And is there an authentication mechanism? At the moment my app contacts my database and logs in the user, etc., but the STUN and TURN servers would be private and not in any way involved in the authentication procedure.
So basically:
What is the best way to implement STUN/TURN
Is there any authentication mechanism?
Note: this is for a hybrid app, so I will be using JavaScript/AngularJS. That's the main reason I chose SimpleWebRTC.
Thank you!
I suggest you use an existing STUN or TURN server like coturn.
STUN servers are very lightweight and often left without authentication. A STUN server basically tells a client what its IP address appears to be, which is necessary to make peer connections across NAT (network address translation) boundaries.
TURN servers are very resource-intensive because they relay media: all of the media for a call can go through the TURN server, so it's important to secure TURN. You use TURN servers in situations where UDP may be blocked, or for particular kinds of NATs that cause problems.
The authentication for coturn's TURN server can take one of two forms:
A simple (username, password) pair
The TURN REST API. This uses a secret shared between the TURN server and another entity. The entity issues tokens with expiration times, and the TURN server verifies that a token has not expired and was issued with knowledge of the shared secret. The token is passed by the TURN client as a (username, password) pair in a format described in the documentation.
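For reference, those ephemeral credentials can be generated with a few lines of standard-library code (a sketch of the TURN REST API scheme as coturn documents it; the shared secret and user id below are placeholders, and coturn must be configured with use-auth-secret and the same static-auth-secret):

```python
import base64
import hashlib
import hmac
import time

def turn_rest_credentials(shared_secret, user_id, ttl=3600):
    """Ephemeral TURN credentials: the username is "<expiry-unix-time>:<user-id>"
    and the password is base64(HMAC-SHA1(shared_secret, username))."""
    username = "%d:%s" % (int(time.time()) + ttl, user_id)
    digest = hmac.new(shared_secret.encode(), username.encode(),
                      hashlib.sha1).digest()
    return username, base64.b64encode(digest).decode()

# The client passes these as the TURN username/password; the server
# recomputes the HMAC with its copy of the secret and checks the expiry,
# so no per-user state is needed on the TURN server.
user, pwd = turn_rest_credentials("replace-with-shared-secret", "alice")
```

Your app server (the entity that already authenticates users against your database) would call this after login and hand the pair to the WebRTC client as its ICE server credentials.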