Connecting to an LDAP server via a Corporate Proxy - c

I'm using the OpenLDAP API in C to connect to an external LDAP server and retrieve certain information. However, the software needs to run behind an HTTP CONNECT corporate proxy.
OpenLDAP doesn't expose the underlying socket calls, so is there a way to use the OpenLDAP API to specify a proxy to go through?
LDAP *lp;
int res = ldap_initialize(&lp, "ldap://some-server.com:389");
... /* Can I specify a proxy server somehow here? */
struct berval cred = { 0, NULL };  /* empty credentials for an anonymous simple bind */
ldap_sasl_bind_s(lp, "", LDAP_SASL_SIMPLE, &cred, NULL, NULL, NULL);
I looked through the manual and did some Googling and found LDAP_OPT_URI, an option code that can be passed to ldap_set_option along with a URI. The manual describes the purpose of this option as:
"Sets/gets a comma- or space-separated list of URIs to be contacted by
the library when trying to establish a connection."
That description seems a bit vague to me, but it sounded like it might allow me to set a proxy URL. However, I tried it and it had no effect.
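For reference, this is roughly the call I made (a sketch; the URI value is just an example, and as said it did not change how the connection is established):

#include <stdio.h>
#include <ldap.h>

/* Sketch: set LDAP_OPT_URI as a global default before ldap_initialize().
 * The URI below is illustrative only. */
const char *uris = "ldap://other-server.example.com:389";
int rc = ldap_set_option(NULL, LDAP_OPT_URI, uris);
if (rc != LDAP_OPT_SUCCESS)
    fprintf(stderr, "ldap_set_option(LDAP_OPT_URI) failed (%d)\n", rc);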
So, does OpenLDAP provide some way to connect via a proxy?

Related

Decrypting HTTPS traffic with a proxy

I am implementing a Web proxy (in C), with the end goal of implementing some simple caching and adblocking. Currently, the proxy supports normal HTTP sites, and also supports HTTPS sites by implementing tunneling with HTTP CONNECT. The proxy works great running from localhost and configured with my browser.
Despite all of this, I'll never be able to implement my desired features as long as the proxy can not decrypt HTTPS traffic. The essence of my question is: what general steps do I need to take to be able to decrypt this traffic and implement what I would like? I've been researching this, and there seems to be a good amount of information on existing proxies that are capable of this, such as Squid.
Currently, my server uses select() and keeps all client file descriptors in an fd_set. When a CONNECT request is made, it makes a TCP connection to the specified host and places the file descriptors of both the client and the host into the fd_set. It also places the pair of fds into a list, and that list is scanned whenever select() reports data, to see whether the data belongs to an existing tunnel. The data is then read and forwarded blindly (see the sketch below). I am struggling to see how to intercept this data at all, because the CONNECT verb only requires opening a plain TCP socket to the desired host and then "staying out of it" while the client and host set up their own SSL session. I am simply asking for the right direction for how I can go about using the proxy as a MITM attacker in order to read and manipulate the data coming in.
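For concreteness, the blind-forwarding step amounts to something like this (a sketch only, not my exact code; relay_once and the fd names are illustrative):

#include <sys/types.h>
#include <sys/socket.h>

/* Copy bytes from one end of the CONNECT tunnel to the other without
 * looking at them. Returns 0 on success, -1 once either side is done. */
static int relay_once(int from_fd, int to_fd)
{
    char buf[4096];
    ssize_t n = recv(from_fd, buf, sizeof buf, 0);
    if (n <= 0)
        return -1;  /* peer closed or error: tear the tunnel down */
    /* These bytes are opaque TLS records; without terminating TLS in the
     * proxy itself, forwarding them verbatim is all the proxy can do. */
    return send(to_fd, buf, (size_t)n, 0) == n ? 0 : -1;
}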
As a brief aside, this project is solely for my own use, so no security or advanced functionality is needed. I just need it to work for one browser, and I am happy to get any warnings from the browser if certificate-spoofing is the best approach.
proxy can not decrypt HTTPS traffic
You are trying to mount a man-in-the-middle attack. SSL is designed to prevent exactly that. But there is a weak point: the list of trusted certificate authorities.
I am simply asking for the right direction for how I can go about using the proxy as a MITM attacker in order to read and manipulate the data coming in.
You can get inspiration from Fiddler. Fiddler has its own CA (certification authority) certificate, and once you add this CA certificate as trusted, Fiddler generates server certificates on the fly for each host you connect to.
This comes with serious security considerations: your browser will then trust any site the proxy presents. I've even seen the Fiddler core used inside malware, so be careful.
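To sketch the idea in C with OpenSSL (an illustration of the approach, not Fiddler's actual implementation; every name here is a placeholder, and modern browsers additionally require a matching subjectAltName extension, which is omitted):

#include <openssl/x509.h>
#include <openssl/evp.h>

/* Mint a leaf certificate for the host the client asked for in CONNECT,
 * signed by the proxy's own CA, which the browser has been told to trust. */
static X509 *mint_leaf_cert(const char *hostname,
                            EVP_PKEY *leaf_key, /* key pair for the fake server cert */
                            X509 *ca_cert,      /* the proxy's CA certificate */
                            EVP_PKEY *ca_key)   /* the proxy's CA private key */
{
    X509 *crt = X509_new();
    if (!crt)
        return NULL;

    X509_set_version(crt, 2);                               /* X509v3 */
    ASN1_INTEGER_set(X509_get_serialNumber(crt), 1);        /* toy serial number */
    X509_gmtime_adj(X509_get_notBefore(crt), 0);
    X509_gmtime_adj(X509_get_notAfter(crt), 60L * 60 * 24); /* valid for one day */
    X509_set_pubkey(crt, leaf_key);

    /* Subject CN = the requested host; issuer = the proxy's own CA. */
    X509_NAME_add_entry_by_txt(X509_get_subject_name(crt), "CN", MBSTRING_ASC,
                               (const unsigned char *)hostname, -1, -1, 0);
    X509_set_issuer_name(crt, X509_get_subject_name(ca_cert));

    if (!X509_sign(crt, ca_key, EVP_sha256())) {
        X509_free(crt);
        return NULL;
    }
    return crt;
}

The proxy then completes the TLS handshake with the browser using this certificate, and opens its own separate TLS connection to the real server, so it sees the plaintext in the middle.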

Sharing Application.conf between backend and frontend

I am working on a web app whose backend is in Scala and whose frontend is in AngularJS. The backend configuration is driven by application.conf, which contains all the service, host, and port information.
The current implementation of the frontend takes the config from application.conf like this:
echo "xstream {
service {
host = 0.0.0.0
port = 9090
SSL = false
yarnPort = 8088
metricsPort = 8082
}
" > assets/json/application.conf
The frontend then fetches this file via a network call, which exposes application.conf on the wire.
I am looking for a solution where a single application.conf can be shared between the frontend and backend without application.conf being exposed in a network call, as that would risk leaking sensitive information.
From your description it seems that you are sending data from the server to the web application on an unencrypted channel. This is a bad idea for all sorts of reasons, so you should really consider fixing that first. Worrying about the security of the Application.conf seems a minor issue compared to all the other data you are going to be exposing on the wire.
If you absolutely have to use an insecure channel, then there are two options open:
Implement your own encryption within the data on that channel
Create a second secure channel for passing the sensitive data
For the first option there are a number of Scala encryption libraries to choose from.
For the second option you can (theoretically) create a separate TLS connection using an SSL library without server certificate checking (which is, I presume, the reason for not using HTTPS in the first place).
Stack Overflow is not the place to ask for library recommendations, so you need to do your own research to find suitable libraries for whichever option you choose.

SOAP UI not able to talk to Salesforce whereas browser can

I am not able to connect to https://test.salesforce.com/services/oauth2/token from SoapUI (version 5.2.1). I have tried the PRO version and other older versions (4.6.xx) as well.
I can access the website from the web browser. A GET to this URL gives me the response, whereas SoapUI says HttpHostConnectException: connection to https://test.salesforce.com/ refused.
I have checked that there is a direct connection available from my PC to this address. I have tried adding https.proxyHost and https.proxyPort settings in soapui.vmoptions and soapui.bat, but to no avail.
I have also tried playing around with the Preemptive Authentication settings in SoapUI, without success.
My organization has a firewall which has whitelisted this address. I have also confirmed that the firewall settings do allow connections through non-standard clients (such as Apache HttpClient).
If I use a Java program with URLConnection going through the proxy, it works.
At this point it seems to me that SoapUI is not honoring the proxy settings.
Please share if anyone has had a similar experience and how they resolved it.
Regards
Ash

TCP Connections to Postgres Secure? SSL Required?

Good morning,
I was going through the Postgresql configuration files, and recently noticed that there is an ssl option. I was wondering when this is required.
Say you have an app server and a database server that are not running inside a private network. If a user tries to log in and SSL is not enabled, will the app server transmit the user's password in cleartext to the database when checking whether the username/password is valid?
What is standard practice here? Should I be setting up my DB to use SSL?
If that is the case, is there any difference in the connection settings in config/database.yml in my Rails app?
Thanks!
Like for other protocols, using SSL/TLS for PostgreSQL allows you to secure the connection between the client and the server. Whether you need it depends on your network environment.
Without SSL/TLS the traffic between the client and the server will be visible by an eavesdropper: all the queries and responses, and possibly the password depending on how you've configured your pg_hba.conf (whether the client is using md5 or a plaintext password).
As far as I'm aware, it's the server that requests MD5 or plaintext password authentication, so an active Man-In-The-Middle attacker could certainly downgrade that and get your password anyway, when not using SSL/TLS.
A well-configured SSL/TLS connection should allow you to prevent eavesdropping and MITM attacks, against both passwords and data.
You can require SSL to be used on the server side using hostssl entries in pg_hba.conf, but that's only part of the problem. Ultimately, just like for web servers, it's up to the client to verify that SSL is used at all, and that it's used with the right server.
Table 31-1 in the libpq documentation summarises the levels of protection you get.
Essentially:
if you think you have a reason to use SSL, disable, allow and prefer are useless (don't take "No" or "Maybe" if you want security).
require is barely useful, since it doesn't verify the identity of the remote server at all.
verify-ca doesn't verify the host name, which makes it vulnerable to MITM attacks.
The one you'll want if security matters to you is verify-full.
These SSL mode names are set by libpq. Other clients might not use the same names (e.g. a pure-Ruby implementation or JDBC).
As far as I can see, ruby-pg relies on libpq. Unfortunately, it only lists "disable|allow|prefer|require" for its sslmode. Perhaps verify-full might work too if it's passed through directly, but there would also need to be a way to configure the CA certificates.
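For illustration, with libpq from C a fully verified connection looks roughly like this (host, database, user and CA path are placeholders):

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* verify-full checks the certificate chain against sslrootcert and
     * checks that the certificate matches the host name we asked for. */
    const char *conninfo =
        "host=db.example.com dbname=mydb user=myuser "
        "sslmode=verify-full sslrootcert=/etc/ssl/certs/my-ca.pem";

    PGconn *conn = PQconnectdb(conninfo);
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }
    printf("connected, server certificate verified\n");
    PQfinish(conn);
    return 0;
}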
Considering data other than the password: whether or not you use SSL is pretty much a security posture issue. How safe do you need your system to be? If the connection is just over your private network, then anyone on that network can listen in. If that is acceptable to you, don't use SSL and don't enable it. If the connection goes over the internet, SSL should be enabled.
As #Wooble says, you should never send the password as cleartext in the first place; if you do, you have a problem. The standard solution in this case is to store a hash in the database and only send the hash for validation.

silverlight accept invalid certificate

I'm doing https web requests in silverlight using "WebRequest"/"WebResponse" framework classes.
Problem is: I make a request to a URL like https://12.34.56.78.
I receive back a VeriSign-signed certificate whose subject is a domain name like www.mydomain.com.
Hence this results in a remote certificate mismatch error.
First question: can I somehow accept the invalid certificate and get the WebResponse content? (Even if it involves using other libraries, I'm open to it.)
Additional details: (for those interested on why I need this scenario)
I'm trying to give a client access to a silverlight app deployed on a test server.
Client accesses the silverlight app at: www.mydomain.com/app
Then I do some rest requests to: https://xx.mydomain.com
Problem is, I don't want to make requests to https://xx.mydomain.com, since that is our production server. For this reason I use https://12.34.56.78 instead of https://xx.mydomain.com.
Client has some firewalls/proxies and if I simply change his hosts file and map https://xx.mydomain.com to 12.34.56.78, web requests don't resolve to the mapped IP.
I mention this because on his network web requests fail if I try that, while on my network I can use the hosts-file change without problems.
UPDATE: Fixed the problem by deploying test releases to an alternative host, https://yy.domain.com, and allowing the user to configure (for test purposes) the base URL to which I make requests to be https://yy.domain.com.
Using a certificate that contained the IP in the subject or in a subject alternative name would probably have worked too, but it would have cost some money to be issued by a certified provider, and would not be ideal because IPs might change.
After doing more research, it looks like Microsoft won't add this feature any time soon, unless there's a scenario beyond testing/debugging uses.
See: http://connect.microsoft.com/VisualStudio/feedback/details/368047/add-system-net-servicepointmanager-servercertificatevalidationcallback-property
