I am using the configuration variables to share the same connection string between two applications, since both access the same database of users.
Can I attach the same SQL Server (Nano 10GB) add-on to more than one application, so the web.config transformation works for both?
This is not currently possible, since there is no way to have the connection string injected into applications other than the one that has the add-on provisioned. Feel free to add this as a feedback suggestion.
It is possible, but requires some legwork. Basically you need one app at a known location (a URL is fine) that the others can ask for the connection string. The hard part is doing it securely enough. I'm partway there...
I've rigged up a system where both of your apps know a password stored in AppSettings; the secondary website sends a public key, along with the password, to the primary website, which then encrypts the connection string and sends it back.
The password CAN be injected by AppHarbor when it deploys, and the connection string is also set up on deploy. Ideally you'd use SSL, but I don't have that set up, and it makes life hard when working locally.
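As a rough illustration of that exchange (sketched in Python with the cryptography package, not the actual proof-of-concept code; every name here is made up):

    # Illustrative sketch only; the real proof of concept is linked below.
    # Assumes the 'cryptography' package; names and transport are made up.
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    SHARED_PASSWORD = "injected-into-both-apps-appsettings"

    OAEP = padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    )

    # Secondary website: generate a keypair and send the public half,
    # along with the shared password, to the primary website's known URL.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_pem = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )

    # Primary website: check the password, then encrypt the connection
    # string under the caller's public key and send the result back.
    def handle_request(password, public_pem, connection_string):
        if password != SHARED_PASSWORD:
            raise PermissionError("shared password mismatch")
        public_key = serialization.load_pem_public_key(public_pem)
        return public_key.encrypt(connection_string.encode(), OAEP)

    # Secondary website: only the holder of the private key can decrypt.
    ciphertext = handle_request(SHARED_PASSWORD, public_pem, "Server=...;Database=...")
    print(private_key.decrypt(ciphertext, OAEP).decode())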
Proof Of Concept: https://bitbucket.org/Rangoric/database-coordination/overview
It does work: just start both of the website projects in there and go to http://localhost:4002/Database, and you will see what is in the connection string of the primary website.
EDIT: I just realized that since you can piggyback on AppHarbor's SSL cert with the free subdomain they give you, you can use that URL for added security if you don't have your own SSL cert.
I have been developing Standard Logic Apps with SQL Server successfully for some time, but suddenly can no longer connect. I'm using Azure AD Integrated as my Authentication Type, which I know is OK as I use the same credentials in SSMS. If I try to create a new credential, it is apparently successful but on save the Logic App says "The API connection reference XXX is missing or not valid". Something has changed, but I don't know what ... help!
Per the above, this was submitted to Microsoft and has been resolved as follows: the root cause is that a Logic App parameter name containing an embedded space triggers the problem with SQL connections. This is a pernicious problem, as the error message is quite unrelated to the root cause. Further, since embedded spaces are supported elsewhere in Logic Apps, e.g. in step names, it is easy to assume the same applies across the board.
I'm pretty new to Kerberos. I'm testing the single sign-on feature using Kerberos. The environment: Windows clients (with Active Directory authentication) connecting to an Apache server running on a Linux machine. The called CGI script (in Perl) connects to a DB server using the forwarded user TGT. Everything works fine (I have the principals, the keytab files, the config files and the result from the DB server :) ). So, if as win_usr_a on the Windows side I launch my CGI request, the CGI script connects to the remote DB, queries select user from dual, and gets back win_usr_a@EXAMPLE.COM.
I have only one issue I'd like to solve. Currently the credential cache is stored as FILE:.... On the intermediate Apache server, the user running Apache gets the forwarded TGTs of all authenticated users (as it can see all the credential caches), and while those TGTs have not expired it can request service tickets on behalf of any of those users.
I know that hosts are considered trusted in Kerberos by definition, but I would be happy if I could limit the usability of the forwarded TGTs. For example, can I configure Active Directory to limit a forwarded TGT to be valid only for requesting a given service principal? And/or is there a way to make a forwarded TGT usable only once, i.e. after requesting any service ticket it becomes invalid? Or is there a way the CGI script could detect whether the forwarded TGT was used by someone else (maybe check a usage counter?)?
For now I have only one mitigation: I can set the lifetime of the forwarded TGT to 2 seconds and run kdestroy in the CGI script right after the DB connection is established (permissions are set so the apache user can execute the CGI script but cannot modify its code). Can I do a bit more?
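Sketched in Python rather than the actual Perl CGI (connect below is a hypothetical callable standing in for whatever GSSAPI-based driver call opens the DB session), the workaround amounts to:

    # Sketch of the mitigation described above; `connect` is a
    # hypothetical callable that opens the DB session using the
    # forwarded TGT found in $KRB5CCNAME.
    import os
    import subprocess

    def connect_and_scrub(connect):
        conn = connect()
        # The service ticket for the DB is in hand; destroy the forwarded
        # TGT right away so the FILE: cache cannot be replayed later.
        subprocess.run(["kdestroy", "-c", os.environ["KRB5CCNAME"]], check=True)
        return conn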
The credential caches should be hidden somehow. I think defining the credential cache as API: would be nice, but that is only defined for Windows. On Linux, maybe KEYRING:process:name or MEMORY: would be a better solution, as those are local to the current process and destroyed when the process exits. As far as I know, Apache creates a new process for each new connection, so this may work. Maybe KEYRING:thread:name is the solution? But, according to the thread-keyring(7) man page, it is not inherited by clone and is cleared by the execve syscall. So if, e.g., Perl is invoked via execve, it will not get the credential cache. Maybe mod_perl + KEYRING:thread:name?
Any idea would be appreciated! Thanks in advance!
The short answer is that Kerberos itself does not provide any mechanism to limit the scope of who can use it if the client happens to have all the necessary bits at a given point in time. Once you have a usable TGT, you have a usable TGT, and can do with it what you like. This is a fundamentally flawed design as far as security concerns go.
Windows refers to this as unconstrained delegation, and specifically has a solution for this through a Kerberos extension called [MS-SFU] which is more broadly referred to as Constrained Delegation.
The gist of the protocol is that you send a regular service ticket (without an attached TGT) to the server (Apache), and the server is enlightened enough to know that it can exchange that service ticket with Active Directory for a service ticket to a delegated server (the DB). The server then uses the new service ticket to authenticate to the DB, and the DB sees a service ticket for win_usr_a despite it being sent by Apache.
The trick of course is that enlightenment bit. Without knowing more about the specifics of how the authentication is happening in your CGI, it's impossible to say whether whatever you're doing supports [MS-SFU].
Quoting a previous answer of mine (to a different question, focused on "race conditions" when updating the cache):
If multiple processes create tickets independently, then they have no reason to use the same credentials cache. In the worst case they would even use different principals, and the side effects would be... interesting.
Solution: change the environment of each process so that KRB5CCNAME points to a specific file -- and preferably, in an application-specific directory.
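A minimal sketch of that setup, assuming a Python worker; the directory, keytab path and principal are made up:

    # Point KRB5CCNAME at a process-specific cache file before anything
    # Kerberos-aware runs in this process.
    import os
    import subprocess

    cache_dir = "/var/run/myapp/krb5"  # assumption: app-specific directory
    os.makedirs(cache_dir, mode=0o700, exist_ok=True)
    os.environ["KRB5CCNAME"] = "FILE:%s/cc_%d" % (cache_dir, os.getpid())

    # Everything started from this process now reads and writes only its
    # own cache, so concurrent workers cannot trample each other.
    subprocess.run(
        ["kinit", "-k", "-t", "/etc/myapp.keytab", "svc/myhost@EXAMPLE.COM"],
        check=True,
    )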
If your focus is on securing the credentials, then go one step further and don't use a cache: modify your client app so that it creates the TGT and service tickets on the fly and keeps them private.
Note that Java never publishes anything to the Kerberos cache; it may either read from the cache or bypass it altogether, depending on the JAAS config. Too bad the Java implementation of Kerberos is limited and rather brittle, cf. https://steveloughran.gitbooks.io/kerberos_and_hadoop/content/sections/jdk_versions.html and https://steveloughran.gitbooks.io/kerberos_and_hadoop/content/sections/jaas.html
Good morning,
I was going through the PostgreSQL configuration files and recently noticed that there is an ssl option. I was wondering when it is required.
Say you have an app server and a database server, not running inside a private network. If a user tries to log in and SSL is not enabled, will the app server transmit the user's password in cleartext to the database when checking whether it is a valid username/password?
What is standard practice here? Should I be setting up my DB to use SSL?
If that is the case, is there any difference in the connection settings in config/database.yml in my Rails app?
Thanks!
Like for other protocols, using SSL/TLS for PostgreSQL allows you to secure the connection between the client and the server. Whether you need it depends on your network environment.
Without SSL/TLS the traffic between the client and the server will be visible by an eavesdropper: all the queries and responses, and possibly the password depending on how you've configured your pg_hba.conf (whether the client is using md5 or a plaintext password).
As far as I'm aware, it's the server that requests MD5 or plaintext password authentication, so an active Man-In-The-Middle attacker could certainly downgrade that and get your password anyway, when not using SSL/TLS.
A well-configured SSL/TLS connection should allow you to prevent eavesdropping and MITM attacks, against both passwords and data.
You can require SSL on the server side using hostssl entries in pg_hba.conf, but that's only part of the problem. Ultimately, just like for web servers, it's up to the client to verify that SSL is used at all, and that it's used with the right server.
Table 31-1 in the libpq documentation summarises the levels of protection you get.
Essentially:
if you think you have a reason to use SSL, disable, allow and prefer are useless (don't take "No" or "Maybe" if you want security).
require is barely useful, since it doesn't verify the identity of the remote server at all.
verify-ca doesn't verify the host name, which makes it vulnerable to MITM attacks.
The one you'll want if security matters to you is verify-full.
These SSL mode names are set by libpq. Other clients might not use the same names (e.g. a pure-Ruby implementation or JDBC).
As far as I can see, ruby-pg relies on libpq. Unfortunately, it only lists disable|allow|prefer|require for its sslmode. Perhaps verify-full might work too if it's passed directly, but there would also need to be a way to configure the CA certificates.
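For a libpq-based client that passes the options straight through, the settings would look like this (illustrated with Python's psycopg2, which forwards these parameters to libpq unchanged; connection details and paths are made up):

    import psycopg2

    conn = psycopg2.connect(
        host="db.example.com",
        dbname="myapp",
        user="myapp",
        password="secret",
        sslmode="verify-full",  # verify the cert chain AND the host name
        sslrootcert="/etc/ssl/certs/my-ca.pem",  # CA that signed the server cert
    )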
Considering data other than the password: whether you use SSL is largely a security-posture issue. How safe does your system need to be? If the connection only crosses your private network, then anyone on that network can listen in. If that is acceptable, don't use SSL; I would not enable it. If the connection goes over the internet, SSL should be enabled.
As #Wooble says, you should never send the password as cleartext in the first place; if you do, you have a problem. The standard solution in this case is to store a hash in the database and only send the hash for validation.
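As a sketch of the storage half of that (the scheme and parameters here are illustrative only, not something #Wooble prescribed):

    # Illustrative only: store a salted, slow hash instead of the password.
    import hashlib
    import hmac
    import os

    def hash_password(password):
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest  # persist both; never the password itself

    def verify(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, digest)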
Here are some links about the Rails part.
I'm doing HTTPS web requests in Silverlight using the WebRequest/WebResponse framework classes.
The problem is: I make a request to a URL like https://12.34.56.78.
I receive back a VeriSign-signed certificate whose subject is a domain name like www.mydomain.com.
Hence this results in a remote certificate mismatch error.
First question: can I somehow accept the invalid certificate and get the WebResponse content? (Even if it involves using other libraries, I'm open to it.)
Additional details (for those interested in why I need this scenario):
I'm trying to give a client access to a Silverlight app deployed on a test server.
The client accesses the Silverlight app at: www.mydomain.com/app
Then I make some REST requests to: https://xx.mydomain.com
The problem is I don't want to make requests to https://xx.mydomain.com, since that is our production server. For this reason I use https://12.34.56.78 instead of https://xx.mydomain.com.
The client has some firewalls/proxies, and if I simply change his hosts file to map https://xx.mydomain.com to 12.34.56.78, web requests don't resolve to the mapped IP.
I say this because on his network web requests fail if I try that; on my network the hosts-file change works without problems.
UPDATE: I fixed the problem by deploying test releases to an alternative address, https://yy.domain.com, and allowing the user to configure, for test purposes, the base URL to which I make requests to be https://yy.domain.com.
Using a certificate that contained the IP in the subject, or as a subject alternative name, would probably have worked too, but it would have cost some money to get issued by a certified provider, and it would not be as good a solution because IPs might change.
After doing more research, it looks like Microsoft won't add this feature any time soon, unless there's a scenario for non-testing/debugging uses.
See: http://connect.microsoft.com/VisualStudio/feedback/details/368047/add-system-net-servicepointmanager-servercertificatevalidationcallback-property
I'm working on building a Silverlight application where we want a client to be able to hit a URL like:
http://{client}.domain.com/
and log in, where the {client} part is their business name. So for example, Google's would be:
http://google.domain.com/
What I was wondering is whether anyone has been able, in Silverlight, to use this subdomain model to make decisions on the call to the web server, so that you can switch to a specific database to run a query. Unfortunately, this is quite necessary for the project, as we are trying to make it easy for their employees to get their company-specific information from our software.
Wouldn't it work to put the service on a specific subdomain itself, such as wcf.example.com, and then set up a cross-domain policy file on the service to allow access to it?
As long as that works, you could just load the Silverlight app on the proper subdomain and then pass that subdomain to your service and let it do its thing.
Some examples of this below:
Silverlight Cross Domain Services
Silverlight Cross Domain Policy Helpers
On the server side you can check the HTTP/1.1 Host header to see how the user reached your server and do the necessary customization based on that.
I think you cannot do this with Silverlight alone; I know you cannot do it without problems with JavaScript, Ajax, etc. That is because a subdomain is, for security reasons, treated differently from a sub-page by browsers.
What about the following idea: insert a rewrite rule into your web server software. So if http://google.domain.com is called, the web server itself rewrites the URL to something like http://www.domain.com/google/ (or better: http://www.domain.com/customers/google/). Would that help?
Georgi:
That would help if it were static, but alas, it's all going to be dynamic. My hope was to have a single deployment of the application and to use the http://google.domain.com/ idea to switch to the correct database for the user. I recall doing this once when we built an ASP.NET website, using the domain context to figure out which skin to use, etc.
Ates: Can you explain more about what you are suggesting? It sounds like you are close to what I am trying to come up with. Have you seen a tutorial for this?
The only other way I have come up with to make this work is to have a metabase such that when the user logs in, it switches them to the appropriate database as required... I was also thinking that telling client X to hit:
http://ClientX.domain.com/ would have been sweeter than telling them to hit http://www.domain.com/ and log in. Having them hit their own name and see it personalized right from the login screen would have been much more appealing to the client base.
#Richard B: No, I can't think of any such tutorial that I've seen before. I'll try to be more verbose.
The server-side approach in more detail:
Direct *.example.com to the same IP in your DNS settings.
The backend app that handles login checks the Host HTTP header (e.g. the HTTP_HOST server variable on some platforms). It contains the exact subdomain.example.com that the client used to reach your server. Extract the subdomain part and continue...
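A bare-bones sketch of that check (framework-free Python WSGI; the per-client database naming is a made-up convention):

    def application(environ, start_response):
        host = environ.get("HTTP_HOST", "")           # e.g. "google.example.com"
        subdomain = host.split(":")[0].split(".")[0]  # drop any port, keep first label

        database = "client_%s" % subdomain            # assumption: one DB per client

        body = ("Would switch to database: %s" % database).encode()
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [body]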
There can also be a client-side-only approach. I don't know much about Silverlight, but I'm assuming you can interface Silverlight with JavaScript. You could read document.location with JavaScript and pass it to your Silverlight applet, whereupon further data-fetching logic would rely on the subdomain passed in by JavaScript.
#Ates:
That is what we did when we wrote the ASP.NET system: we pointed a slew of *.example.com hosts at the web server and handled them using the HTTP headers. The hold-up comes when dealing with WCF pushing the info between the client and the server... it can only exist in one domain...
So, for example, when you have {client}.example.com and {sandbox}.example.com, the WCF service can't be registered to both. It also cannot be registered to just *.example.com or example.com, so that's where the catch-22 comes in. Everything else I have prior knowledge of handling.
I recall a method by which an application can "spoof" another domain name in certain instances. I take it that in this case I would need such a configuration? Much to research yet, I believe.