I have multiple domains directing to multiple directories on my system, as an example...
shopwebsite.co.uk > public_html/useraccounts/shopsite
carwebsite.co.uk > public_html/useraccounts/carsite
foodwebsite.co.uk > public_html/useraccounts/foodsite
This was fine for a while, until I realized that framed forwarding broke the sites' mobile responsiveness, so I changed all of the domains to simple redirects. That works, although, as you can now work out, whenever somebody types in one of these URLs the site displays under an address like:
https://mymainwebsite.co.uk/useraccounts/foodsite
This is causing me problems in various ways. What I am looking to achieve is to attach each domain to its directory path so that the domain stays in the address bar, works properly, and the site keeps its mobile responsiveness.
Now this may seem like a really simple situation, but I have a couple of different domain hosts and my website hosting is via Hostinger, so I am not able to manipulate certain back-end features, which makes it slightly more difficult for me. I am also still learning and really don't have much knowledge of DNS, IP forwarding, etc., so I don't know what I should be looking for.
If somebody can point me in the right direction, I can more than likely figure out the rest on my own. I have access to the .htaccess file for the main website, as well as DNS settings and the ability to park domains.
Hope somebody can help me find a solution. Thanks.
I think you are looking for something called a "virtual host". This is a web server configuration that allows you to run multiple websites on the same server. In Apache, simple virtual hosts look like this:
<VirtualHost *:80>
DocumentRoot "/public_html/useraccounts/shopsite"
ServerName shopwebsite.co.uk
# Other directives here
</VirtualHost>
<VirtualHost *:80>
DocumentRoot "/public_html/useraccounts/carsite"
ServerName carwebsite.co.uk
# Other directives here
</VirtualHost>
There are no redirects in this method. The browser connects to the web server and sends a "Host" header containing the domain name typed into the URL; the server then returns files from the directory configured in the "DocumentRoot" directive for the matching "ServerName".
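You can see the mechanism for yourself by sending requests with different Host headers to the same address, for example with curl (here <server-ip> is a placeholder for your server's IP):

curl -H "Host: shopwebsite.co.uk" http://<server-ip>/
curl -H "Host: carwebsite.co.uk" http://<server-ip>/

Both requests hit the same IP, but the server picks the document root from the Host header alone.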
You should contact your hosting provider's support team and ask them how to manage virtual hosts.
Newbie/wannabe/failing webmaster here... I have a Google VM Debian 9 instance running Apache 2. For the past few months I had no domain name, so I faked one using a virtual host (crm.fake_example.com), since I was the only user. I added the IP and my fake address to my local hosts file, and everything worked just fine as I stood up a web-based, open-source software application on the VM.
I've now pointed a subdomain (crm.real_example.com) to the IP address of my VM. I've waited a full week for DNS propagation (yes, overkill) and have used MXtoolbox's DNS Lookup to verify that the subdomain resolves to the correct IP address. I erased any previous entries from my local hosts file.
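For reference, this is the kind of check I mean, expressed with dig (assuming it is installed; the hostname is the placeholder from above):

dig +short crm.real_example.com
# should print the VM's external IP if public DNS is correct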
Visiting the site from my local machine (or any machine) yields the same error ("ERR_NAME_NOT_RESOLVED").
However, if I edit my local hosts file to include the IP and crm.real_example.com...it works just fine again.
I feel like MXtoolbox proves that DNS is working, so the problem must exist on my VM somewhere (can someone verify this thinking?), but I've checked and re-checked all of the following...
On the Google VM apache2 server:
/etc/hosts is a clean file (nothing from crm.fake_example.com exists in there any longer)
/etc/apache2/sites-available/mysite.conf has the necessary virtual host info (see below)
note: I know a virtual host is not required for only one site, but I plan to add more, so I want it working this way
/etc/apache2/sites-enabled/mysite.conf activated via a2ensite
config.inc.php (a vtigercrm-specific config file) has been reconfigured for crm.real_example.com
<VirtualHost *:80>
ServerAdmin me@real_example.com
ServerName crm.real_example.com
DocumentRoot /var/www/html/crm/
<Directory /var/www/html/crm/>
Options FollowSymlinks
AllowOverride All
Require all granted
</Directory>
ErrorLog /var/log/apache2/crm_error.log
CustomLog /var/log/apache2/crm_access.log combined
</VirtualHost>
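Two more server-side sanity checks I can run (apache2ctl ships with Debian's apache2 package):

apache2ctl -S
# lists every loaded VirtualHost and the file it was defined in
curl -H "Host: crm.real_example.com" http://127.0.0.1/
# requests the site locally, bypassing DNS entirely

If the curl works while a browser doesn't, that again points at name resolution rather than Apache.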
I could also list all the things I've done on my local machine, like clearing the DNS cache and cleaning up my hosts file, but the same thing happens from any other machine with no previous successful attempts at hitting this server.
After a full week of looking at the same things over and over...any guidance is appreciated!
So I have a wildcard host on an Apache server using mod_auth_openidc.
The relevant bits of my Apache config are:
<VirtualHost *:443>
ServerAlias *.sub.mydomain.com
OIDCRedirectURI https://sub.mydomain.com/oauth2callback
OIDCCookieDomain sub.mydomain.com
Is there anything that would prevent a user from authenticating with foo.sub.mydomain.com, then also being authenticated with bar.sub.mydomain.com without having to log in again?
No, that works, since the session cookie is set on sub.mydomain.com and as such is received on foo.sub.mydomain.com as well as bar.sub.mydomain.com.
What you describe in the comment is not really an attack, since it is the same user in the same browser; it's sort of the equivalent of what is mentioned above, except handled manually in the browser. It would be a problem if you could somehow steal a cookie from another user, but then again that would be an attack not specific to mod_auth_openidc, and it is impossible assuming everything runs over HTTPS and there's no malware in the browser.
If you really need separation, you can split it out into separate virtual hosts and run a different mod_auth_openidc configuration in each host. Then the Apache cookies won't be reusable across the two hosts. Of course, both hosts would still redirect to the OP for authentication, and an SSO session/cookie may exist there that binds the two sessions together implicitly.
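A minimal sketch of that split (untested; the passphrases are placeholders, and each vhost would also carry the rest of your OIDC provider settings):

<VirtualHost *:443>
ServerName foo.sub.mydomain.com
OIDCRedirectURI https://foo.sub.mydomain.com/oauth2callback
OIDCCookieDomain foo.sub.mydomain.com
# a distinct passphrase per host keeps the session cookies incompatible
OIDCCryptoPassphrase passphrase-for-foo
</VirtualHost>
<VirtualHost *:443>
ServerName bar.sub.mydomain.com
OIDCRedirectURI https://bar.sub.mydomain.com/oauth2callback
OIDCCookieDomain bar.sub.mydomain.com
OIDCCryptoPassphrase passphrase-for-bar
</VirtualHost>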
Is there any 'easy' way to create a customized web GUI (for example, a menu, default home page, etc.) for an authenticated Nagios user? I have created a user for a customer who has access to certain hostgroups only, but after logging in, the user can obviously see the default menu, which is customized for internal use. How can I prevent this?
There are ways to restrict what a user sees in the standard GUI; check the manual pages. Basically, a user will see only those hosts and services whose contact lists contain that user. You can do a bit more configuration for special cases in the etc/cgi.cfg file.
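For example, the stock authorization directives in cgi.cfg look like this (the directive names are from a default install; the user name is illustrative):

use_authentication=1
authorized_for_all_hosts=nagiosadmin
authorized_for_all_services=nagiosadmin

A user not listed in any authorized_for_* directive falls back to seeing only the objects they are a contact for.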
If you want to restrict a user to a few predefined pages, you can do that with a few tricks in the web server configuration. You should have some understanding of how Apache config files work for this, and it assumes you can distinguish your customer from your company's employees by IP address. If you can't, you can use groups and AuthGroupFile, but it will be a bit harder that way.
The basic idea is:
Allow everyone access to the static pages, images, css stuff etc.
Allow access to the CGIs only from the IPs your company uses
Create special URLs for the customer that "hide" the real CGIs
This needs mod_authz_host, mod_rewrite and mod_proxy together with mod_proxy_http to work.
You should have a nagios.conf in your web server's configuration directory; its exact location and contents depend on your distribution and on whether you installed Nagios from an RPM or compiled it yourself, so your directory paths may vary.
In the configuration for the CGI scripts, we put
<Directory /usr/local/nagios/sbin>
Order deny,allow
Deny from all
Allow from 127.0.0.1
# the next line should be the address of the web server itself
Allow from 1.2.3.4
# and this one the address range your company uses
Allow from 192.168.1.0/24
Require valid-user
</Directory>
This denies access to the CGIs to everyone but you.
Then, we define a few web pages that get rewritten to CGI scripts:
<Location />
RewriteEngine On
RewriteRule customer.html$ http://127.0.0.1/nagios/cgi-bin/status.cgi?host=customerhost [P]
</Location>
So when anyone accesses customer.html, the server will fetch http://127.0.0.1/nagios/cgi-bin/status.cgi?host=customerhost using its internal proxy; this will create a new request to the CGI that seems to come from 127.0.0.1 and thus match the "Allow from 127.0.0.1" rule.
mod_proxy still needs some configuration:
ProxyRequests On
<Proxy *>
AddDefaultCharset off
Order deny,allow
Deny from all
# again, use your server's IP on the next line
Allow from 1.2.3.4
Allow from 127.0.0.1
</Proxy>
which restricts the proxy to internal Apache use and prevents other people on the internet from using your proxy for anything else.
Of course, it's still the original CGIs that get executed, but your customer can't use them directly; he'll only be able to access the ones you've made available in your RewriteRules. The links and action pulldowns will still be there, but accessing them will result in error messages.
If you still want more, use a programming language of your choice (I've done this with Perl, but PHP, Python, Ruby, ... should work just as well), parse the objects.cache and status.dat files, and create your very own UI. Once you've written a few library functions to parse those files (which shouldn't be too difficult; their syntax is trivial), creating your own GUI is just as hard, or as easy, as programming any other kind of web UI.
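For instance, a bare-bones status.dat parser in PHP might look like this sketch (the file path is the default for a source install; host_name and current_state are stock field names):

<?php
// Sketch: split status.dat into blocks of the form "type { key=value ... }".
function parse_status_dat($file) {
  $blocks = array();
  $current = null;
  foreach (file($file) as $line) {
    $line = trim($line);
    if ($line === '' || $line[0] === '#') {
      continue; // skip blank lines and comments
    }
    if (preg_match('/^(\w+)\s*\{$/', $line, $m)) {
      $current = array('_type' => $m[1]); // block start, e.g. "hoststatus {"
    } elseif ($line === '}') {
      if ($current !== null) {
        $blocks[] = $current; // block end
      }
      $current = null;
    } elseif ($current !== null && strpos($line, '=') !== false) {
      list($key, $value) = explode('=', $line, 2);
      $current[$key] = $value;
    }
  }
  return $blocks;
}

// Usage: print every host and its state (0 = up).
foreach (parse_status_dat('/usr/local/nagios/var/status.dat') as $b) {
  if ($b['_type'] === 'hoststatus') {
    print $b['host_name'] . ' => ' . $b['current_state'] . "\n";
  }
}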
After some research, I have found a workaround for my case. The solution lies in the fact that, by default, Nagios uses a single password file (for HTTP auth) for two different directories:
$NAGIOS_HOME/sbin (where the cgi files are stored) and
$NAGIOS_HOME/share (where the HTML and PHP files are stored)
This means anyone authenticating as a user gets access to both folders and their subfolders automatically. This can be prevented by using a separate password file for each of the folders above.
Here is a snippet from a custom nagios.conf file with two different password files:
## BEGIN APACHE CONFIG SNIPPET - NAGIOS.CONF
ScriptAlias /nagios/cgi-bin "/usr/local/nagios/sbin"
<Directory "/usr/local/nagios/sbin">
Options ExecCGI
AllowOverride None
Order allow,deny
Allow from all
AuthType Digest
AuthName "Nagios Access"
AuthDigestFile /usr/local/nagios/etc/.digest_pw1
Require valid-user
</Directory>
Alias /nagios "/usr/local/nagios/share"
<Directory "/usr/local/nagios/share">
Options None
AllowOverride None
Order allow,deny
Allow from all
AuthType Digest
AuthName "Nagios Access"
AuthDigestFile /usr/local/nagios/etc/.digest_pw2
Require valid-user
</Directory>
## END APACHE CONFIG SNIPPETS
Now, for example, let's make a custom directory for customer1 under /var/www/html/customer1, copy all the HTML and PHP files from the Nagios share directory there, customize them, and add an alias in Apache.
Alias /customer1 "/var/www/html/customer1"
<Directory "/var/www/html/customer1">
Options None
AllowOverride None
Order allow,deny
Allow from all
AuthType Digest
AuthName "Nagios Access"
AuthDigestFile /usr/local/nagios/etc/.digest_pw3
Require user customer1
</Directory>
Now one can add the same user/password for customer1 to password files 1 and 3, so that they have access both to the custom web GUI and to the CGI scripts. Of course, one must first set up appropriate contact groups in Nagios, so that after authenticating the customer sees only the groups he/she is a contact for. The default Nagios share directory is secured with the nagios-admin (or whatever) user/password, which resides in password file 2 and, of course, in 1.
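For completeness, the digest files themselves can be created with Apache's htdigest utility (the realm argument must match the AuthName above; note -c creates or overwrites the file):

htdigest -c /usr/local/nagios/etc/.digest_pw1 "Nagios Access" customer1
htdigest -c /usr/local/nagios/etc/.digest_pw3 "Nagios Access" customer1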
I have a mobile website locally hosted in IIS7, and I want to view it on my Android phone. This is not the usual case of accessing 127.0.0.1, which I have done already.
I set an inbound rule in the firewall for port 80. My router-assigned IP is 198.162.2.10, which, when opened on my mobile, shows the IIS7 welcome image.
The problem I face is that I usually keep my projects on the D:\ drive rather than in wwwroot, so as not to lose any files in case of an OS crash or reinstall. To link these projects, I create a website in IIS, give it a name like myprofile.net, and assign it to 127.0.0.1. So when I type myprofile.net in the browser, the website is rendered.
How do I access this website, from my android phone?
I know a "workaround" to
copy all the websites into wwwroot? That negates the main point.
connecting phone to PC's ad-hoc or vice versa? I don't want this, either of this puts my computer from internet, and developing without internet is impossible.
upload to ftp server and view from phone? This is what I am currently doing.
Is there a solution?
You could make a symlink in your wwwroot directory pointing to your website's files on D:\.
Here's a command tailored to your case:
C:\> mklink /D C:\path\to\wwwroot\NameOfWebsite D:\path\to\website\to\create\the\symlink\for\NameOfWebsite
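You can verify the link afterwards: a plain directory listing shows symlinked directories as <SYMLINKD> entries pointing at their targets.

C:\> dir C:\path\to\wwwroot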
You may also need to update your virtual host file... I use Apache, and it's been 6+ years since I used IIS :/ so hopefully this is still relevant. Here's an example of what you'll want to change:
<VirtualHost *:80>
ServerAdmin gopikrishna.s@gmail.com
DocumentRoot "C:\path\to\wwwroot\NameOfWebsite"
ServerName NameOfWebsite
<Directory "C:\path\to\wwwroot\NameOfWebsite">
# NOTE: FollowSymLinks is needed here
Options Indexes FollowSymLinks ExecCGI MultiViews
AllowOverride All
Order allow,deny
Allow from all
</Directory>
#EXAMPLE alias if you have any.
#Alias /graphics "D:/workspace/graphics"
</VirtualHost>
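Since the question is about IIS7, the rough equivalent there (a sketch from memory; appcmd lives in %windir%\system32\inetsrv) would be:

C:\> %windir%\system32\inetsrv\appcmd add site /name:"NameOfWebsite" /bindings:http/*:80:NameOfWebsite /physicalPath:"C:\path\to\wwwroot\NameOfWebsite"

The host-header part of the binding plays the same role as ServerName does in Apache.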
You will also have to update the hosts file, normally located in
C:\Windows\System32\drivers\etc
Right-click Notepad and run it as administrator to make edits to this system file:
Click File -> Open
Navigate to your hosts file directory
Change "Text Documents (*.txt)" to "All Files (*.*)" to see the hosts file (among others)
Open the hosts file and add this line to the bottom of the file:
127.0.0.1 NameOfWebsite
This line should have the same name you put as your ServerName in your httpd-vhosts.conf file or the IIS equivalent.
Save the hosts file and restart your IIS server.
Now, in your Android's browser (provided the phone is connected to the same local router the IIS server is on), you should be able to open the web server's IP address, which as mentioned in your post would be
198.162.2.10
and that should show your website.
Also be sure to put whichever website you want to see in your browser above any other vhost configurations.
That is to say:
in your hosts file, make sure that if there are other lines like
127.0.0.1 some.site.local
127.0.0.1 another.site
127.0.0.1 NameOfWebsite <------ This should go
That you put the one you want to see at the top of the list like so:
127.0.0.1 NameOfWebsite <-------- HERE
127.0.0.1 some.site.local
127.0.0.1 another.site
And in the httpd-vhosts.conf file, make sure the whole <VirtualHost>...</VirtualHost> section for that site is above all the others.
I hope this helps a bit; feel free to add or remove anything, changes are more than welcome. Forgive me if I overexplained or covered details you already knew.
I think the symlinks are basically all you need, but the other stuff probably will help as well.
I'm trying to hide my Drupal 7 admin pages from the Internet, allowing access only from internal LANs.
I'm trying to hide /admin, /user and */edit, but is there anything else I need to deny to block access to all parts of the Drupal admin?
<Location ~ "(/([aA][dD][mM][iI][nN]|[uU][sS][eE][rR])|/[eE][dD][iI][tT])">
Order deny,allow
Deny from all
Allow from 12.193.10.0/24
</Location>
Apache seems to accept this, and URL-encoded characters in the request seem to be decoded before the request is handled (e.g. /%55ser).
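Incidentally, the same match can be written more compactly with PCRE's inline case-insensitivity flag; this is a sketch that should behave identically, not something tested against your setup:

<LocationMatch "(?i)(/(admin|user)|/edit)">
Order deny,allow
Deny from all
Allow from 12.193.10.0/24
</LocationMatch>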
Edit: I've noticed parameterized paths as well, so I'm also going to check for those, e.g. ?q=admin.
There are more than the ones you've listed; the */delete pages, for one.
Modules can tell Drupal that certain paths (other than those beginning with admin/) are supposed to be administrative by implementing hook_admin_paths().
You can invoke the same hook to get a list of all the patterns that should be treated as administrative, and update your vhost file accordingly:
$paths = module_invoke_all('admin_paths');
A devel printout of the $paths variable (an array keyed by path pattern, e.g. 'node/*/edit' => TRUE) should give you a pretty good idea of the paths you need to hide. It will probably look quite different for your installation, depending on which modules you have installed.
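To dump the full list for your own site, a small snippet along these lines (run via drush php-eval or devel's "Execute PHP" page) would do; the output lines are what you would translate into your Apache rules:

$paths = module_invoke_all('admin_paths');
foreach ($paths as $pattern => $is_admin) {
  if ($is_admin) {
    print $pattern . "\n"; // e.g. node/*/edit, user/*/edit
  }
}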