Good morning,
I was going through the PostgreSQL configuration files and recently noticed that there is an ssl option. I was wondering when this is required.
Say you have an app server and a database server that are not running inside a private network. If a user tries to log in and SSL is not enabled, will the app server transmit the user's password in cleartext to the database when checking whether it is a valid username/password?
What is standard practice here? Should I be setting up my DB to use SSL?
If that is the case, is there any difference in the connection settings in config/database.yml in my Rails app?
Thanks!
Like for other protocols, using SSL/TLS for PostgreSQL allows you to secure the connection between the client and the server. Whether you need it depends on your network environment.
Without SSL/TLS, the traffic between the client and the server is visible to an eavesdropper: all the queries and responses, and possibly the password, depending on how you've configured your pg_hba.conf (whether the client is using md5 or a plaintext password).
As far as I'm aware, it's the server that requests MD5 or plaintext password authentication, so when SSL/TLS is not used, an active man-in-the-middle attacker could certainly downgrade that and get your password anyway.
A well-configured SSL/TLS connection should allow you to prevent eavesdropping and MITM attacks, against both passwords and data.
You can require SSL on the server side using hostssl entries in pg_hba.conf, but that's only part of the problem. Ultimately, just like for web servers, it's up to the client to verify that SSL is used at all, and that it's used with the right server.
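For example, entries along these lines in pg_hba.conf require SSL with md5 authentication for remote connections and reject non-SSL attempts (the addresses are placeholders):

    # TYPE     DATABASE  USER  ADDRESS    METHOD
    hostssl    all       all   0.0.0.0/0  md5
    hostnossl  all       all   0.0.0.0/0  reject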
Table 31-1 in the libpq documentation summarises the levels of protection you get.
Essentially:
if you think you have a reason to use SSL, disable, allow and prefer are useless (don't take "No" or "Maybe" if you want security).
require is barely useful, since it doesn't verify the identity of the remote server at all.
verify-ca doesn't verify the host name, which makes it vulnerable to MITM attacks.
The one you'll want if security matters to you is verify-full.
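For example, with a libpq-based client, a connection string like this (host and path are placeholders) enables full verification; ~/.postgresql/root.crt is also the default location where libpq looks for the CA certificate:

    psql "host=db.example.com dbname=myapp user=myuser sslmode=verify-full sslrootcert=/path/to/root.crt"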
These SSL mode names are defined by libpq. Other clients might not use the same names (e.g. a pure-Ruby implementation, or JDBC).
As far as I can see, ruby-pg relies on libpq. Unfortunately, it only lists "disable|allow|prefer|require" for its sslmode. Perhaps verify-full might work too if it's passed directly to libpq. However, there would also need to be a way to configure the CA certificates.
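If verify-full does get passed through, a database.yml along these lines might work (an untested sketch; the sslrootcert key in particular depends on your adapter version):

    production:
      adapter: postgresql
      host: db.example.com
      database: myapp_production
      username: myapp
      sslmode: verify-full
      sslrootcert: /path/to/root.crt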
Considering data other than the password: whether or not you use SSL is pretty much a security posture issue. How safe do you need your system to be? If the connection just goes over your private network, then anyone on that network can listen in. If that is acceptable, don't use SSL (I would not enable it). If the connection goes over the internet, SSL should be enabled.
As @Wooble says, you should never send the password as cleartext in the first place; if you do, you have a problem. The standard solution in this case is to store a hash in the database and only send the hash for validation.
I am implementing a Web proxy (in C), with the end goal of implementing some simple caching and adblocking. Currently, the proxy supports normal HTTP sites, and also supports HTTPS sites by implementing tunneling with HTTP CONNECT. The proxy works great running from localhost and configured with my browser.
Despite all of this, I'll never be able to implement my desired features as long as the proxy can not decrypt HTTPS traffic. The essence of my question is: what general steps do I need to take to be able to decrypt this traffic and implement what I would like? I've been researching this, and there seems to be a good amount of information on existing proxies that are capable of this, such as Squid.
Currently, my server uses select() and keeps all client fds in an fd_set. When a CONNECT request is made, it makes a TCP connection to the specified host and places the file descriptors of both the client and the host into the fd_set. It also places the pair of fds into a list, and the list is scanned whenever select() reports data ready, to see if the data is coming from an existing tunnel. The data is then read and forwarded blindly. I am struggling to see how to intercept this data at all, since the nature of the CONNECT verb only requires opening a simple TCP socket to the desired host and then "staying out of it" while the client and host set up their own SSL session. I am simply asking for the right direction for how I can go about using the proxy as a MITM attacker in order to read and manipulate the data coming in.
As a brief aside, this project is solely for my own use, so no security or advanced functionality is needed. I just need it to work for one browser, and I am happy to get any warnings from the browser if certificate-spoofing is the best approach.
proxy can not decrypt HTTPS traffic
You are trying to mount a man-in-the-middle attack, and SSL is designed to prevent exactly that. But there is a weak point: the list of trusted certificate authorities.
I am simply asking for the right direction for how I can go about using the proxy as a MITM attacker in order to read and manipulate the data coming in.
You can get inspiration from Fiddler. Fiddler has its own CA (certificate authority) certificate, and once you add this CA certificate as trusted, Fiddler generates server certificates on the fly for each host you connect to.
This comes with serious security considerations: your browser will trust any site the proxy signs for. I've even seen the Fiddler core used inside malware, so be careful.
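For a home-grown proxy the approach is the same: create your own CA, trust it in the browser, and mint a leaf certificate for each host you intercept. A rough sketch with the openssl command line (file names and subjects are placeholders; modern browsers will also expect a subjectAltName on the leaf certificate):

    # one-time: create the proxy's CA key and self-signed certificate
    openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
        -days 365 -subj "/CN=My Proxy CA"
    # per intercepted host: create a key and CSR, then sign with the proxy CA
    openssl req -newkey rsa:2048 -nodes -keyout host.key -out host.csr \
        -subj "/CN=example.com"
    openssl x509 -req -in host.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
        -out host.crt -days 365

The proxy then terminates the browser's TLS connection with host.crt/host.key and opens a separate TLS connection to the real server, so the plaintext is visible in the middle.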
I'm pretty new to Kerberos. I'm testing the single sign-on feature using Kerberos. The environment: Windows clients (with Active Directory authentication) connecting to an Apache server running on a Linux machine. The CGI script being called (written in Perl) connects to a DB server using the forwarded user TGT. Everything works fine (I have the principals, the keytab files, the config files, and the result from the DB server :) ). So, if I launch my CGI request as win_usr_a on the Windows side, the CGI script connects to the remote DB, runs select user from dual, and gets back win_usr_a@EXAMPLE.COM.
I have only one issue I'd like to solve. Currently the credential cache is stored as FILE:.... On the intermediate Apache server, the user running the Apache server gets the forwarded TGTs of all authenticated users (as it can see all the credential caches), and while the TGTs' lifetimes have not expired, it can request tickets for any service principal on behalf of those users.
I know that hosts are considered trusted in Kerberos by definition, but I would be happy if I could limit the usability of the forwarded TGTs. For example, can I configure Active Directory to limit a forwarded TGT to be valid only for requesting a given service principal? And/or is there a way to make a forwarded TGT usable only once, i.e. after requesting any service ticket it becomes invalid? Or is there a way the CGI script could detect whether the forwarded TGT was used by someone else (maybe by checking a usage counter)?
For now I have only one solution: I can set the lifetime of the forwarded TGT to 2 seconds and run kdestroy in the CGI script after the DB connection is established (I have arranged that the CGI script can be executed by the apache user, but that user cannot modify the code). Can I do a bit more?
The credential caches should be hidden somehow. I think defining the credential cache as API: would be nice, but this is only defined for Windows. On Linux, maybe KEYRING:process:name or MEMORY: could be a better solution, as these are local to the current process and destroyed when the process exits. As far as I know, Apache creates a new process for each connection, so this may work. Maybe KEYRING:thread:name is the solution? But - according to the thread-keyring(7) man page - it is not inherited by clone and is cleared by the execve syscall. So if, e.g., Perl is started via execve, it will not get the credential cache. Maybe mod_perl + KEYRING:thread:name?
Any idea would be appreciated! Thanks in advance!
The short answer is that Kerberos itself does not provide any mechanism to limit the scope of who can use it if the client happens to have all the necessary bits at a given point in time. Once you have a usable TGT, you have a usable TGT, and can do with it what you like. This is a fundamentally flawed design as far as security concerns go.
Windows refers to this as unconstrained delegation, and specifically has a solution for this through a Kerberos extension called [MS-SFU] which is more broadly referred to as Constrained Delegation.
The gist of the protocol is that you send a regular service ticket (without an attached TGT) to the server (Apache), and the server is enlightened enough to know that it can exchange that service ticket for a service ticket to a delegated service (the DB) from Active Directory. The server then uses the new service ticket to authenticate to the DB, and the DB sees a service ticket for win_usr_a despite it being sent by Apache.
The trick of course is that enlightenment bit. Without knowing more about the specifics of how the authentication is happening in your CGI, it's impossible to say whether whatever you're doing supports [MS-SFU].
Quoting a previous answer of mine (to a different question, focused on "race conditions" when updating the cache):
If multiple processes create tickets independently, then they have no reason to use the same credentials cache. In the worst case they would even use different principals, and the side effects would be... interesting.
Solution: change the environment of each process so that KRB5CCNAME points to a specific file -- and preferably, in an application-specific directory.
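For an Apache CGI setup, a hypothetical wrapper along these lines would give each request's process its own cache (paths are illustrative; the cache directory should be mode 0700 and owned by the apache user):

    #!/bin/sh
    # point this process at its own private credential cache ($$ = PID)
    export KRB5CCNAME="FILE:/run/krb5-caches/cc_$$"
    exec /var/www/cgi-bin/real-script.pl "$@"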
If your focus is on securing the credentials, then go one step further and don't use a cache. Modify your client app so that it creates the TGT and service tickets on the fly and keeps them private.
Note that Java never publishes anything to the Kerberos cache; it may either read from the cache or bypass it altogether, depending on the JAAS config. Too bad the Java implementation of Kerberos is limited and rather brittle, cf. https://steveloughran.gitbooks.io/kerberos_and_hadoop/content/sections/jdk_versions.html and https://steveloughran.gitbooks.io/kerberos_and_hadoop/content/sections/jaas.html
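For illustration, a JAAS entry that only reads an existing cache and never prompts might look like this (the entry name and cache path are placeholders):

    MyDbClient {
        com.sun.security.auth.module.Krb5LoginModule required
            useTicketCache=true
            ticketCache="/run/krb5-caches/cc_1234"
            doNotPrompt=true;
    };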
Assume the following:
I have a WPF application which reads text from a file and sends the text to my server's REST API via HTTPS, and the server sends a response which depends on the text that was sent in the request.
The WPF application should be the only one that gets a useful response to this request, so the WPF application has to somehow show the server that the request was sent from the application itself.
The user of the WPF application should not be asked to enter any login credentials.
What are the best practices here?
My thoughts:
the WPF application could send a hard-coded password along with the request, which is checked on the server side - but that does not sound like a good solution to me, because the security depends on nobody being able to sniff the HTTPS request.
Is it possible to sniff the HTTPS request and get the password easily?
Thanks in advance
If your server already supports HTTPS, the client knows the server can be trusted based on the certificate it presents, so that side is handled (client trusts server).
To complete the trust relationship, the server needs to do the same (server trusts client). The client should hold a certificate it can pass to the server so the server can verify the client's identity.
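For illustration, this is what presenting a client certificate looks like with curl (the URL and file names are placeholders); your WPF application would do the equivalent through its HTTP stack:

    curl --cert client.crt --key client.key https://api.example.com/resource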
As always, this brings up the problem of how to hide the key in the client. There are various schemes for this, but since the client needs to use the key eventually, you cannot prevent a dedicated attacker from finding it; you can only make it harder for them (obfuscation etc.).
Depending on your application, a simple white-list of clients allowed to connect can be the best option. Some apps can do this, but many cannot since they don't know the users' IPs, etc.; still, it's something else to keep in mind if it fits your use-case.
You can send a password to the server like you suggest. As long as the message is encrypted (HTTPS), you're probably fine. Nothing is 100% secure; it can be intercepted via a man-in-the-middle style attack, but these are fairly rare, or at least very targeted, so it would depend on what your software does etc.
OK, so recently I have needed to create an application with WebRTC for video, voice, etc.
So after looking into some libraries, I found SimpleWebRTC, which looks pretty handy:
https://github.com/andyet/SimpleWebRTC
So what I am interested in is: how do I implement a STUN/TURN server? (It would be great if someone could explain the difference between them in plain English!) Also, is there an authentication mechanism? At the moment my app contacts my database and logs users in etc., but the STUN and TURN server would be private and not in any way involved in the authentication procedure.
So basically:
What is the best way to implement STUN/TURN?
Is there any authentication mechanism?
Note: this is for a hybrid app, so I will be using JavaScript/AngularJS, which is the main reason I chose SimpleWebRTC.
Thank you!
I suggest you use an existing STUN or TURN server like coturn.
STUN servers are very lightweight and often left without authentication. A STUN server basically tells a client what its IP address appears to be, which is necessary to make peer connections across NAT (network address translation) boundaries.
TURN servers are very resource intensive because they relay media; all of the media for a call can go through the TURN server, so it's important to secure TURN. You use TURN servers in situations where UDP may be blocked, or for particular kinds of NATs that cause problems.
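For example, a minimal coturn configuration might start out like this (the realm is a placeholder; see the coturn documentation for the full set of options):

    # turnserver.conf
    listening-port=3478
    realm=example.com
    fingerprint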
The authentication for coturn's TURN server can take one of two forms:
Simple (username, password) pair
TURN REST API. This uses a secret between the TURN server and another entity. The entity issues tokens with expiration times, and the TURN server verifies the token has not expired and was issued with knowledge of the shared secret. This is passed by the TURN client as a username, password pair in a format described in the documentation.
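With coturn, the second mode is enabled via use-auth-secret and static-auth-secret in turnserver.conf. A minimal sketch of issuing such ephemeral credentials from your app server, in TypeScript with Node's crypto module (the function and variable names are my own):

    import { createHmac } from "crypto";

    // TURN REST API style credentials: the username is "expiry:userId" and the
    // password is base64(HMAC-SHA1(secret, username)).
    function turnCredentials(userId: string, secret: string, ttlSeconds = 3600) {
      const expiry = Math.floor(Date.now() / 1000) + ttlSeconds;
      const username = `${expiry}:${userId}`;
      const password = createHmac("sha1", secret).update(username).digest("base64");
      return { username, password };
    }

The TURN server recomputes the HMAC with its own copy of the secret and rejects the credentials once the expiry timestamp has passed, so leaked credentials age out on their own.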
I am using the variables to configure the same connection string in two applications, since the two access the same database of users.
Can I point more than one application at the same SQL Server (Nano 10GB) add-on, using web.config transformations?
This is not currently possible, since there is no way to have the connection string injected into applications other than the one that has the add-on provisioned. Feel free to add this as a feedback suggestion.
It is possible, but requires some legwork. Basically, you need one app at a known location (a URL is fine) that the others can ask for the connection string. The hard part is doing it securely enough. I'm partway there...
I've rigged up a system where both of your apps know a password stored in AppSettings; the secondary website sends a public key to the primary website along with the password, and the primary then encrypts the connection string with that key and sends it back.
The password CAN be injected by AppHarbor when it does a deploy, and the connection string is also set up on deploy. Ideally you'd use SSL, but I don't have that set up, and it makes life hard when working locally.
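A minimal sketch of that exchange in TypeScript with Node's crypto module (all names are illustrative; the real proof of concept is .NET, this just shows the shape of the scheme):

    import { generateKeyPairSync, publicEncrypt, privateDecrypt } from "crypto";

    // Secondary app: generate a keypair; the public key travels with the password.
    const { publicKey, privateKey } = generateKeyPairSync("rsa", { modulusLength: 2048 });

    // Primary app: verify the shared password, then encrypt the connection string.
    function encryptForSecondary(sharedPassword: string, sentPassword: string,
                                 connectionString: string, clientKey: typeof publicKey) {
      if (sentPassword !== sharedPassword) throw new Error("bad password");
      return publicEncrypt(clientKey, Buffer.from(connectionString));
    }

    // Secondary app: decrypt the response with the private key it kept.
    const blob = encryptForSecondary("s3cret", "s3cret", "Server=db;Database=users", publicKey);
    const connectionString = privateDecrypt(privateKey, blob).toString();

Without SSL, the shared password itself still crosses the wire in the clear, which is the weakness the EDIT below is about.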
Proof Of Concept: https://bitbucket.org/Rangoric/database-coordination/overview
It does work: just start both of the website projects in there, go to http://localhost:4002/Database, and you will see what is in the connection string of the primary website.
EDIT: I just realized that since you can piggyback on AppHarbor's SSL cert with the free subdomain they give you, you can use that URL for added security if you don't have your own SSL cert.