Unprotect a particular <Location> when the entire site is protected with mod_auth_openidc

I have an Apache 2.4 site protected with mod_auth_openidc. Is there a way to unprotect a particular <Location> within that protected area? Right now my Apache config has one small paragraph where mod_auth_openidc is configured to protect the entire site. I discovered there are three out of a possible 137 <Location>s that don't need to be protected. If there is no way to unprotect a <Location>, then I will have to individually protect or unprotect all 137. That's a lot of configuration changes.
In the past I've been able to unprotect specific locations by creating a <Location> tag and specifying Allow from all and Satisfy any. This doesn't work when using mod_auth_openidc. I also found a post that talked about providing public access by using SetEnvIf, but that doesn't work either.
<Location /mynewsite>
# Protect everything using oidc
AuthType openid-connect
Require claim "sub~^employeeGroup2*"
# Don't protect employeeplans
SetEnvIf Request_URI "(/mynewsite/employeeplans/)$" allow
Order allow,deny
Allow from env=allow
Satisfy any
</Location>
I'm not going to create 137 config paragraphs in my Apache config. If there is no solution, I'll have to scrap this project.

Using AuthType None on those paths should do it.
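For example, a minimal sketch under Apache 2.4 authorization (mod_authz_core), using the paths from the question; the existing /mynewsite block stays as it is, and the exception is placed after it:
<Location /mynewsite/employeeplans>
# the more specific <Location> overrides the enclosing /mynewsite protection
AuthType None
Require all granted
</Location>
Only the three exempted paths would need a block like this; the remaining 134 stay covered by the existing /mynewsite configuration.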

Related

Fixing domain display on directory redirects

I have multiple domains directing to multiple directories on my system, as an example...
shopwebsite.co.uk > public_html/useraccounts/shopsite
carwebsite.co.uk > public_html/useraccounts/carsite
foodwebsite.co.uk > public_html/useraccounts/foodsite
This was fine for a while, until I realized that framed forwarding broke the sites' mobile responsiveness, so I changed all of the domains to simple redirects. That does work, although, as you can now work out, whenever somebody types in one of these URLs the site displays as something like:
https://mymainwebsite.co.uk/useraccounts/foodsite
Which is causing a few problems for me in various ways. What I am looking to achieve is to attach each domain to its directory path, so that the domain stays in the URL and works properly, while also keeping mobile responsiveness.
Now this may seem like a really simple situation, but I have a couple of different domain hosts and my website hosting is via Hostinger, so I am not able to manipulate certain back-end features, which makes it slightly more difficult for me. I am also still learning and really don't have much knowledge of DNS, IP forwarding, etc., so I wouldn't know what I'm looking for.
If somebody can point me in the right direction I can more than likely figure the rest out on my own. I can access the .htaccess file for the main website, as well as DNS settings and the ability to park domains, etc.
Hope somebody can help me find a solution. Thanks.
I think you are looking for something called a "virtual host". This is a configuration of the web server that allows you to run multiple websites on the same server. In Apache, a simple virtual host looks like this:
<VirtualHost *:80>
DocumentRoot "/public_html/useraccounts/shopsite"
ServerName shopwebsite.co.uk
# Other directives here
</VirtualHost>
<VirtualHost *:80>
DocumentRoot "/public_html/useraccounts/carsite"
ServerName carwebsite.co.uk
# Other directives here
</VirtualHost>
There are no redirects with this method. The browser connects to the web server and sends a "Host" header containing the domain name typed in the URL; the server then returns the files from the directory you configured in the "DocumentRoot" directive for that "ServerName".
You should contact your hosting provider's support team and ask them how to manage virtual hosts.

Using subdomain cookies with mod_auth_openidc

So I have a wildcard host on an Apache server using mod_auth_openidc.
The relevant bits of my Apache config are:
<VirtualHost *:443>
ServerAlias *.sub.mydomain.com
OIDCRedirectURI https://sub.mydomain.com/oauth2callback
OIDCCookieDomain sub.mydomain.com
Is there anything that would prevent a user from authenticating with foo.sub.mydomain.com, then also being authenticated with bar.sub.mydomain.com without having to log in again?
No, that works, since the session cookie is set on sub.mydomain.com and as such is sent to foo.sub.mydomain.com as well as bar.sub.mydomain.com.
What you describe in the comment is not really an attack, since it is the same user in the same browser; sort of the equivalent of what is mentioned above, except handled manually in the browser. It would be a problem if you could somehow steal a cookie from another user, but then again that would be an attack not specific to mod_auth_openidc, and it is not possible assuming everything runs over HTTPS and there's no malware in the browser.
If you really need separation, you can split out into separate virtual hosts and run a different mod_auth_openidc configuration in each host. Then the Apache cookies won't be reusable across the two hosts. Of course, both hosts would still redirect to the OP for authentication, and an SSO session and cookie may exist there that binds the two sessions together implicitly.
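A rough sketch of that split, assuming the host names from the question; each block would also need its own client registration, crypto passphrase and TLS settings, indicated by the comments:
<VirtualHost *:443>
ServerName foo.sub.mydomain.com
OIDCRedirectURI https://foo.sub.mydomain.com/oauth2callback
OIDCCookieDomain foo.sub.mydomain.com
# OIDCProviderMetadataURL, OIDCClientID, OIDCClientSecret, OIDCCryptoPassphrase, SSL directives, ...
</VirtualHost>
<VirtualHost *:443>
ServerName bar.sub.mydomain.com
OIDCRedirectURI https://bar.sub.mydomain.com/oauth2callback
OIDCCookieDomain bar.sub.mydomain.com
# OIDCProviderMetadataURL, OIDCClientID, OIDCClientSecret, OIDCCryptoPassphrase, SSL directives, ...
</VirtualHost>
With the redirect URI and cookie domain pinned to each host name, the session cookie set for foo.sub.mydomain.com is no longer sent to bar.sub.mydomain.com.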

Nagios: creating custom menu for an authenticated user

Is there any 'easy' way to create a customized web GUI (for example, menu, default home page, etc.) for an authenticated Nagios user? I have created a user for a customer, who has access to certain hostgroups only. But after logging in, the user can obviously see the default menu, which is customized for internal use. How can I prevent this?
There are ways to restrict what a user sees in the standard GUI; check the manual pages. Basically, a user will see only those hosts and services which have contact lists containing this user. You can do a bit more configuration for special cases in the etc/cgi.cfg file.
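For reference, a sketch of the relevant cgi.cfg options (the contact name nagiosadmin is just a placeholder); any user not listed in the authorized_for_* options sees only the hosts and services they are a contact for:
use_authentication=1
authorized_for_system_information=nagiosadmin
authorized_for_configuration_information=nagiosadmin
authorized_for_all_hosts=nagiosadmin
authorized_for_all_services=nagiosadmin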
If you want to restrict a user to very few predefined pages, you can do that with a few tricks in the web server configuration. You should have some understanding of how Apache config files work for this, and this assumes you can distinguish your customer from your company employees by their IP address. If you can't, you can use groups and an AuthGroupFile, but it will be a bit harder that way.
The basic idea is:
Allow everyone access to the static pages, images, css stuff etc.
Allow access to the CGIs only from the IPs your company uses
Create special URLs for the customer that "hide" the real CGIs
This needs mod_authz_host, mod_rewrite and mod_proxy together with mod_proxy_http to work.
You should have a nagios.conf in your web server configuration directory; its exact location and contents depend on the distribution and on whether you're using an RPM or compiled Nagios yourself, so your directory paths may vary.
In the configuration for the CGI scripts, we put
<Directory /usr/local/nagios/sbin>
Order deny,allow
Deny from all
Allow from 127.0.0.1
# this should be the address of the webserver
Allow from 1.2.3.4
# these should be the addresses your company uses
Allow from 192.168.1.0/24
Require valid-user
</Directory>
This denies access to the CGIs to everyone but you.
Then, we define a few web pages that get rewritten to CGI scripts:
<Location />
RewriteEngine On
RewriteRule customer.html$ http://127.0.0.1/nagios/cgi-bin/status.cgi?host=customerhost [P]
</Location>
So when anyone accesses customer.html, the server will fetch http://127.0.0.1/nagios/cgi-bin/status.cgi?host=customerhost using its internal proxy; this will create a new request to the CGI that seems to come from 127.0.0.1 and thus match the "Allow from 127.0.0.1" rule.
mod_proxy still needs some configuration:
ProxyRequests On
<Proxy *>
AddDefaultCharset off
Order deny,allow
Deny from all
# again, use your server IP
Allow from 1.2.3.4
Allow from 127.0.0.1
</Proxy>
which restricts the proxy to internal Apache use and prevents other people on the internet from using your proxy for anything else.
Of course, it's still the original CGIs that get executed, but your customer can't use them directly; he'll only be able to access the ones you've made available in your RewriteRules. The links and action pulldowns will still be there, but accessing them will result in error messages.
If you still want more, use a programming language of your choice (I've done this with Perl, but PHP, Python, Ruby, ... should work just as well), parse the objects.cache and status.dat files, and create your very own UI. Once you've written a few library functions to parse those files (which shouldn't be too difficult, their syntax is trivial), creating your own GUI is just as hard, or as easy, as programming any other kind of web UI.
After some research, I have found a workaround for my case. The solution lies in the fact that by default Nagios uses a single password file (for HTTP auth) for two different directories:
$NAGIOS_HOME/sbin (where the cgi files are stored) and
$NAGIOS_HOME/share (HTML and PHP files are stored)
This means anyone authenticating as a user gets access to both folders and their subfolders automatically. This can be prevented by using separate password files for the folders above.
Here is a snippet from a custom nagios.conf file with two different password files:
## BEGIN APACHE CONFIG SNIPPET - NAGIOS.CONF
ScriptAlias /nagios/cgi-bin "/usr/local/nagios/sbin"
<Directory "/usr/local/nagios/sbin">
Options ExecCGI
AllowOverride None
Order allow,deny
Allow from all
AuthType Digest
AuthName "Nagios Access"
AuthDigestFile /usr/local/nagios/etc/.digest_pw1
Require valid-user
</Directory>
Alias /nagios "/usr/local/nagios/share"
<Directory "/usr/local/nagios/share">
Options None
AllowOverride None
Order allow,deny
Allow from all
AuthType Digest
AuthName "Nagios Access"
AuthDigestFile /usr/local/nagios/etc/.digest_pw2
Require valid-user
</Directory>
## END APACHE CONFIG SNIPPETS
Now, for example, let's make a custom directory for customer1 under /var/www/html/customer1, copy all the HTML and PHP files there from the Nagios share directory, customize them, and add an alias in Apache.
Alias /customer1 "/var/www/html/customer1"
<Directory "/var/www/html/customer1">
Options None
AllowOverride None
Order allow,deny
Allow from all
AuthType Digest
AuthName "Nagios Access"
AuthDigestFile /usr/local/nagios/etc/.digest_pw3
Require user customer1
</Directory>
Now one can add the same user/password for customer1 to password files 1 and 3 so that they have access both to the custom web GUI and to the CGI scripts. Of course, beforehand one must set appropriate contact groups in Nagios so that after authentication the customer sees only the groups he/she is a contact for. The default Nagios share directory is secured with the nagios-admin (or whatever) user/password, which resides in password file 2 and, of course, in 1.
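As an illustration (not part of the original answer), the digest files referenced above could be populated with htdigest, using the same realm as the AuthName directives; the user names and paths are the ones assumed in the snippets:
htdigest -c /usr/local/nagios/etc/.digest_pw1 "Nagios Access" nagios-admin
htdigest /usr/local/nagios/etc/.digest_pw1 "Nagios Access" customer1
htdigest -c /usr/local/nagios/etc/.digest_pw2 "Nagios Access" nagios-admin
htdigest -c /usr/local/nagios/etc/.digest_pw3 "Nagios Access" customer1
The -c flag creates the file, so it is only used for the first entry in each file.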

How to make Apache restrict access to file except to internal redirects

I am currently developing an Apache module. After parsing POST data from a request to another page, I make an internal redirect to a PHP page that performs some final operations and echoes out an HTML meta refresh tag. This in turn makes the browser refresh, requesting the first page.
The problem is, I don't want explicit outside requests to be able to access that page, while still letting the module perform the internal redirect successfully.
Is there a way I can do this? I have tried using:
<Directory /var/www/cc_jnlp/php/>
<Files session_init.php>
Order allow,deny
Deny from all
</Files>
</Directory>
...but that just blocks all requests, regardless of whether or not they came from an internal redirect.
Try with the following configuration:
<Directory /var/www/cc_jnlp/php/>
<Files session_init.php>
Order deny,allow
Deny from all
Allow from 127.0.0.1
</Files>
</Directory>
A good approach would be to send something with the request that identifies it as a legitimate one. My first approach was to generate a big random number when the server starts and transmit it along with the data. The module would identify all requests to that page and deny those that didn't include that specific query argument. The problem was that this was susceptible to brute-forcing, and the only way to counter it was to increase the key size.
My definitive solution will use the Apache Notes system to transmit the data instead, and assuming that only the Apache server itself can manipulate that data, we can safely deny all requests that don't include it.

Hiding Drupal admin

I'm trying to hide my Drupal 7 admin pages from the Internet, only allowing access from internal LANs.
I'm trying to hide /admin, /user and */edit, but is there anything else I need to deny to disable access to all parts of the Drupal admin?
<Location ~ "(/([aA][dD][mM][iI][nN]|[uU][sS][eE][rR])|/[eE][dD][iI][tT])">
Order deny,allow
Deny from all
Allow from 12.193.10.0/24
</Location>
Apache seems to accept this, and URL-encoded characters in the request seem to be decoded before the request is handled (e.g. /%55ser).
Edit: I've noticed parameterized paths as well, so I'm also going to check for those: ?q=admin
There are more than those you've listed; the */delete pages, for one.
Modules can tell Drupal that certain paths (other than those beginning with admin/) are supposed to be administrative by implementing hook_admin_paths().
You can invoke the same hook to get a list of all the patterns that should be treated as administrative, and update your vhost file accordingly:
$paths = module_invoke_all('admin_paths');
A devel printout of the $paths variable looks like this:
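(The printout itself isn't reproduced here. Purely as an illustration of its shape, not of your actual output: hook_admin_paths() implementations return arrays mapping path patterns to TRUE, so the merged $paths array looks something like the following, with the patterns depending entirely on your modules.)
$paths = array(
  'node/*/edit' => TRUE,
  'node/*/delete' => TRUE,
  'user/*/edit' => TRUE,
  'user/*/cancel' => TRUE,
);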
It should give you a pretty good idea of the paths you need to hide. The printout will probably look completely different for your installation; it depends on what modules you have installed.

Resources