Limit the usage of Kerberos TGTs - active-directory

I'm pretty new to Kerberos. I'm testing the Single Sign-On feature using Kerberos. The environment: Windows clients (with Active Directory authentication) connect to an Apache server running on a Linux machine. The CGI script that gets called (written in Perl) connects to a DB server using the forwarded user TGT. Everything works fine (I have the principals, the keytab files, the config files, and the result from the DB server :) ). So, if as win_usr_a on the Windows side I launch my CGI request, the CGI script connects to the remote DB, queries select user from dual, and gets back win_usr_a@EXAMPLE.COM.
I have only one issue I'd like to solve. Currently the credential cache is stored as FILE:.... On the intermediate Apache server, the user running Apache receives the forwarded TGTs of all authenticated users (it can see all the credential caches), and while those TGTs are unexpired it can request tickets for any service principal as those users.
I know that hosts are considered trusted in Kerberos by definition, but I would be happy if I could limit the usability of the forwarded TGTs. For example, can I configure Active Directory so that a forwarded TGT is only valid for requesting a given service principal? And/or is there a way to make the forwarded TGT usable only once, i.e. it becomes invalid after the first service-ticket request? Or is there a way for the CGI script to detect whether the forwarded TGT was used by someone else (maybe check a usage counter)?
For now I have only one mitigation: I can set the lifetime of the forwarded TGT to 2 seconds and run kdestroy in the CGI script once the DB connection is established (I have arranged that the apache user can execute the CGI script but cannot modify its code). Can I do a bit more?
The credential caches should also be hidden somehow. Defining the credential cache as API: would be nice, but that is only defined for Windows. On Linux, KEYRING:process:name or MEMORY: might be a better solution, as those are local to the current process and destroyed when the process exits. As far as I know, Apache creates a new process for each connection, so this might work. Maybe KEYRING:thread:name is the solution? But, according to the thread-keyring(7) man page, it is not inherited across clone and is cleared by the execve system call. So if, for example, Perl is started via execve, it will not get the credential cache. Maybe mod_perl plus KEYRING:thread:name?
Any idea would be appreciated! Thanks in advance!

The short answer is that Kerberos itself does not provide any mechanism to limit who can use a credential once they happen to have all the necessary bits at a given point in time. Once you have a usable TGT, you have a usable TGT, and can do with it what you like. As security designs go, this is fundamentally flawed.
Windows refers to this as unconstrained delegation, and specifically has a solution for it through a Kerberos extension called [MS-SFU], more broadly referred to as Constrained Delegation.
The gist of the protocol is that you send a regular service ticket (without an attached TGT) to the server (Apache), and the server is enlightened enough to know that it can exchange that service ticket, via Active Directory, for a service ticket to a delegated server (the DB). The server then uses the new service ticket to authenticate to the DB, and the DB sees a service ticket for win_usr_a despite it being sent by Apache.
The trick, of course, is that enlightenment bit. Without knowing more about the specifics of how the authentication happens in your CGI, it's impossible to say whether whatever you're doing supports [MS-SFU].

Quoting a previous answer of mine (to a different question, focused on "race conditions" when updating the cache):
If multiple processes create tickets independently, then they have no reason to use the same credentials cache. In the worst case they would even use different principals, and the side effects would be... interesting.
Solution: change the environment of each process so that KRB5CCNAME points to a specific file -- and preferably, in an application-specific directory.
If your focus is on securing the credentials, then go one step further and don't use a cache at all. Modify your client app so that it creates the TGT and service tickets on the fly and keeps them private.
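As a minimal sketch of that per-process isolation (written in TypeScript for illustration, though the same idea ports to a wrapper around the Perl CGI; it assumes the MIT Kerberos kinit/kdestroy tools are installed, and the principal and keytab path are invented):

```typescript
import { execFileSync } from "node:child_process";
import { mkdtempSync, rmSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Hypothetical helper: give each unit of work its own credential cache via
// KRB5CCNAME, then destroy the cache the moment the work is done, so no
// long-lived FILE: cache is left lying around for other processes to read.
function withPrivateCcache(
  principal: string,
  keytab: string,
  work: (env: NodeJS.ProcessEnv) => void,
) {
  const dir = mkdtempSync(join(tmpdir(), "krb5-")); // created mode 0700, owned by us
  const ccache = `FILE:${join(dir, "ccache")}`;
  const env = { ...process.env, KRB5CCNAME: ccache };
  try {
    // kinit -kt <keytab> <principal> writes a fresh TGT into the private cache.
    execFileSync("kinit", ["-kt", keytab, principal], { env });
    work(env); // spawn the DB client / CGI child with this env
  } finally {
    execFileSync("kdestroy", ["-c", ccache], { env }); // wipe the tickets
    rmSync(dir, { recursive: true, force: true });
  }
}
```

The window in which the tickets are readable shrinks to the lifetime of one request, which is about as close to "use once" semantics as you can get without constrained delegation.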
Note that Java never publishes anything to the Kerberos cache; it may either read from the cache or bypass it altogether, depending on the JAAS config. Too bad the Java implementation of Kerberos is limited and rather brittle, cf. https://steveloughran.gitbooks.io/kerberos_and_hadoop/content/sections/jdk_versions.html and https://steveloughran.gitbooks.io/kerberos_and_hadoop/content/sections/jaas.html

Related

Regression on SQL Server Connection from Standard Logic App

I have been developing Standard Logic Apps with SQL Server successfully for some time, but suddenly can no longer connect. I'm using Azure AD Integrated as my Authentication Type, which I know is OK as I use the same credentials in SSMS. If I try to create a new credential, it is apparently successful but on save the Logic App says "The API connection reference XXX is missing or not valid". Something has changed, but I don't know what ... help!
Per the above, this was submitted to Microsoft and has been resolved as follows: the root cause is that a Logic App Parameter name containing an embedded space triggers the problem with SQL connections. This is a pernicious problem, as the error message is quite unrelated to the root cause. Further, since embedded spaces are supported elsewhere in Logic Apps, e.g. in Step Names, it is easy to assume the same applies across the board.

How to make MongoDB (local database) also accessible via the internet at the same time

My planned setup is to make the local database accessible to the local computers, because the heavy manipulation of data needs a fast response, but at the same time I want to access it via the internet when I'm away from the local network. Is this possible?
Currently using the MERN stack.
I tried mLab but the data response is pretty slow.
Thank you in advance
It looks like you're going down a dangerous path. MongoDB ships bound to localhost, listening on the standard port 27017. To make it available online you need to:
remove authentication (which is not on by default anyway)
change or remove the bindIp option
ensure the port is not blocked by the firewall
This can be done in the config file.
https://docs.mongodb.com/manual/reference/configuration-options/
However, what you really want to do, probably, is create routes within Express so that users can communicate with your Mongo in a structured and safe manner. More information here: http://mean.io/2017/10/31/getting-started-mean-io/
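A minimal sketch of that pattern (the route path, database, and collection names are made up for the example; mongod keeps its default localhost binding and only Express faces the internet):

```typescript
import express from "express";
import { MongoClient } from "mongodb";

// mongod stays bound to 127.0.0.1; only this app can reach it.
const client = new MongoClient("mongodb://127.0.0.1:27017");
await client.connect(); // top-level await: assumes an ES module

const app = express();

app.get("/api/items", async (_req, res) => {
  // The server decides which query runs; internet clients never talk to mongod.
  const items = await client.db("mydb").collection("items").find({}).limit(50).toArray();
  res.json(items);
});

app.listen(3000);
```

For the remote-access case, you would put authentication middleware in front of these routes rather than exposing port 27017 itself.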

WebRTC and authentication implementations

OK, so recently I have needed to create an application with WebRTC for video, voice, etc.
So after looking into some libraries I found SimpleWebRTC, which looks pretty handy:
https://github.com/andyet/SimpleWebRTC
What I am interested in is: how do I implement a STUN/TURN server? (It would be great if someone could explain the difference in plain English!) And is there an authentication mechanism? At the moment my app contacts my database and logs in the user, etc., but the STUN and TURN servers would be private and not in any way involved in the authentication procedure.
So basically:
What is the best way to implement STUN/TURN?
Is there any authentication mechanism?
Note: this is for a hybrid app, so I will be using JavaScript/AngularJS, which is the main reason I chose SimpleWebRTC.
Thank you!
I suggest you use an existing STUN or TURN server like coturn.
STUN servers are very lightweight and often left without authentication. A STUN server basically tells a client what its IP address appears to be, which is necessary to make peer connections across NAT (network address translation) boundaries.
TURN servers are very resource intensive because they relay media; all of the media for a call can go through the TURN server, so it's important to secure TURN. You use TURN servers in situations where UDP may be blocked, or for particular kinds of NATs that cause problems.
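For context, here is where those servers plug in on the client side; the URLs and credentials below are placeholders:

```typescript
// Browser-side sketch: ICE servers are just configuration on the peer
// connection. The STUN entry needs no credentials; the TURN entry does.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: "stun:stun.example.com:3478" },
    { urls: "turn:turn.example.com:3478", username: "user", credential: "pass" },
  ],
});
```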
The authentication for coturn's TURN server can take one of two forms:
Simple (username, password) pair
TURN REST API. This uses a secret between the TURN server and another entity. The entity issues tokens with expiration times, and the TURN server verifies the token has not expired and was issued with knowledge of the shared secret. This is passed by the TURN client as a username, password pair in a format described in the documentation.
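A sketch of that token scheme, assuming coturn runs with use-auth-secret and a static-auth-secret shared with your app server (the function and variable names are invented):

```typescript
import { createHmac } from "node:crypto";

// TURN REST API credentials: the "username" is an expiry timestamp joined to
// an arbitrary user id, and the "password" is base64(HMAC-SHA1(secret, username)).
// coturn recomputes the HMAC with its static-auth-secret and rejects the pair
// once the timestamp has passed.
function turnCredentials(userId: string, sharedSecret: string, ttlSeconds = 3600) {
  const expiry = Math.floor(Date.now() / 1000) + ttlSeconds;
  const username = `${expiry}:${userId}`;
  const password = createHmac("sha1", sharedSecret).update(username).digest("base64");
  return { username, password };
}
```

Your authenticated web app hands this pair to the client, which passes it as the TURN username/credential; the TURN server never needs to see your user database.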

SOLR/Lucene Server Location

I am pondering the question of the proper location of the Solr server.
This is usually what we have today:
Server Side:
Node or RoR or IIS
Client:
Single Page App, or rendered by the server.
DB:
RDBMS - MS SQL, PostgreSQL, MySQL, or some other relational database.
I am thinking about where to put the Solr server, and I am positive it should not be accessible from the internet, let alone from the client. I think it should sit behind the main server: the main server sends queries to Solr and returns the results to the client. Additionally, Solr should be placed behind the firewall, with the main server whitelisted.
Is this good thinking, or is there something else entirely that I am not seeing?
As the docs say:
First and foremost, Solr does not concern itself with security either at the document level or the communication level.
You are right: you should never have a publicly visible Solr server, for this reason. In our setup at work we have it firewalled so that only our main webserver can access it (i.e. whitelisting). Requests for data therefore must go through the webserver as part of our API, which lets us authenticate users and avoid giving them free rein to execute whatever they want.
If you want to use the web client, you can always temporarily whitelist your IP and remove it afterwards. While it is possible for an attacker to spoof your IP and thus gain access, they would have to be very determined and explicitly targeting your application, would have to know both the whitelisted IP and the Solr IP, and would have to know all this during the short window the IP is whitelisted. Such a setup is therefore secure enough for your needs.
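A sketch of the go-through-the-webserver pattern (the internal Solr address, core name, and route are invented for the example; the global fetch assumes Node 18+):

```typescript
import express from "express";

const app = express();
// Internal-only address: the firewall allows this host, and nothing else,
// to reach Solr.
const SOLR_SELECT = "http://10.0.0.5:8983/solr/mycore/select";

app.get("/search", async (req, res) => {
  // Authenticate the user here, then forward only a fixed, safe parameter set,
  // so clients can't run arbitrary Solr commands.
  const params = new URLSearchParams({
    q: String(req.query.q ?? "*:*"),
    wt: "json",
    rows: "10",
  });
  const solrRes = await fetch(`${SOLR_SELECT}?${params}`);
  res.json(await solrRes.json());
});

app.listen(3000);
```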

WCF security between WinForms client and Shared Host webserver

Ok,
I have developed this WinForms client, which interacts with a server (ASPX Application) by means of WCF calls. I would now like to deploy the server to my shared webhost, but I'm kinda new to WCF and especially the security possibilities behind it.
The goal is to secure the WCF service, at least somewhat, so that not everybody who knows or finds out the endpoint address can call it. Rather, only my WinForms client should be able to call the WCF service.
I do not need authentication on a user basis, so no authentication is required from the user of the client. But I want only instances of this WinForms client to be able to interact with the service. The information passed between server and client is not very sensitive, so it's not really required to secure it, but it's a plus if it can easily be done.
Is this possible in a Shared Host (IIS) environment (no HTTPS at my disposal)? What bindings and options should I use? I suppose wsHttpBinding, but how would I set up the security options?
Using .NET 4.0
Thanks
From what I understand, you have an internet-facing service that you want only your client app to be able to call - correct? Or do you envision other clients (like PHP, Ruby, etc.) also wanting to call into your service at some point?
To secure your message, you have two options in WCF - message or transport security. Over the internet, with an unknown number of hops between your client and your service, transport security doesn't work - you're left with message security (encrypting the message as it travels across the 'net). For this to work, you typically add a digital certificate to your service (only server-side) that the client can discover and use to encrypt the messages with. Only your service will be able to decrypt them - so you're safe on that end.
The next point is: who can call your service? If you want to be totally open to anyone, then yes, you need wsHttpBinding (or the RESTful variant - webHttpBinding). If you want to allow non-.NET clients, you're typically limited to no authentication (anyone can call), or username/password schemes which you will validate on the server side against a database of valid users.
If you only want to allow your own .NET client in, then you can do several things:
disable metadata on your service; with this, you would "hide" your endpoints and the services they provide - someone using a "metadata scanner" (if that exists) wouldn't be able to just stumble across your service and find out what methods it provides etc. This however also makes it impossible for another developer outside your organization to do an Add Service Reference to your service.
you could define and use a custom binary http binding - only other clients with this setup could even call your service. The binary http binding would bring some speed improvements, too. See this blog post on how to do this.
you need to somehow identify those callers that are allowed in - one possible method would be to put an extra header into your WCF messages that you then check for on the server side (see the sketch after this list). This would simply make sure that a casual hacker who discovers your service and figures out the binary http binding would still be rejected (at least for some time). See this blog post on how to implement such a message inspector extension for WCF.
the ultimate step would be to install a digital certificate on the client machine along with your client app. You would then set up your client-side proxy to authenticate to the service using that certificate. Only client machines that have that certificate could then call into your service.
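The linked posts cover the WCF message-inspector mechanics; as a language-neutral sketch of the extra-header idea (the header name and token value are invented), an HTTP middleware version looks like this:

```typescript
import express from "express";

// Both sides share a value that casual callers won't know. This is obscurity,
// not strong authentication, exactly as the caveat above says.
const SHARED_HEADER = "x-app-token";
const SHARED_VALUE = process.env.APP_TOKEN ?? "change-me";

const app = express();

// Reject any request that doesn't carry the agreed header before it ever
// reaches a service method.
app.use((req, res, next) => {
  if (req.get(SHARED_HEADER) !== SHARED_VALUE) {
    res.status(403).end();
    return;
  }
  next();
});
```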
It really depends on how far you want to go - WCF gives you a lot of options, but you need to decide how much effort you want to put into it.
The first thing you need to ask yourself is: "What could someone do to your WCF service if they connected with their own customized client?" Look at all of the functionality that is exposed via WCF and assume that it could be accessed at will. You have absolutely no control over the client, and you never will.
HTTPS is beautiful; it's a damn shame that you're forced to be vulnerable to OWASP A9: Insufficient Transport Layer Protection. If it were up to me, I would move to a different host that cared about security. If you are throwing usernames and passwords over the network, then you're putting your users in danger.
One of the biggest problems I have seen with a WCF service was an exposed "executeQuery()" function: the developer let the client build queries to be executed by the server. This approach is fundamentally flawed, as you are just handing your database over to an attacker. This type of vulnerability isn't SQL Injection; it falls under CWE-602: Client-Side Enforcement of Server-Side Security.
Along the same lines as CWE-602 is OWASP A4: Insecure Direct Object References. Could an attacker fool your WCF service into thinking it's another user by providing a different user id? Are you trusting the client to tell the truth?
The next classification of vulnerabilities you must take into consideration is OWASP A1: Injection, otherwise known as "taint and sink". For instance, suppose you expose a function where one of its parameters ends up in a CreateProcess() call that invokes cmd.exe. That parameter can be controlled by the attacker, so the variable is "tainted"; the call to CreateProcess() is a "sink". There are many types of vulnerabilities along these lines, including but not limited to SQL Injection, LDAP Injection, and XPath Injection. These types of vulnerabilities affect all web applications.
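The WCF specifics aside, the taint-and-sink pattern is language-neutral; a minimal illustration (in TypeScript, with invented function names):

```typescript
import { exec, execFile } from "node:child_process";

// Tainted: "filename" arrives from the client. Sink: exec() hands the string
// to a shell, so a value like "x.png; rm -rf /" injects a second command.
function convertUnsafe(filename: string) {
  exec(`convert ${filename} out.png`);
}

// Safer: execFile() passes arguments out-of-band, so the tainted value is
// never parsed as shell syntax. (Validating the value is still a good idea.)
function convertSafe(filename: string) {
  execFile("convert", [filename, "out.png"]);
}
```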
