I am using Check_MK-based monitoring on Nagios.
Check_MK Version: 1.2.0p4
OS: Linux
Nagios Core 3.2.3
I want to fetch the Nagios page of a remote server on the local server using MK Livestatus.
How can I achieve this?
Nagios Check_MK Multisite (plugin)
This plugin allows users to view/manage distributed Nagios instances through a single web-based interface.
However, by default it does not give access to the PNP4Nagios graphs of hosts/services from the remote Nagios sites through the (single) Multisite URL.
To access PNP4Nagios graphs of hosts/services from a remote Nagios through the (single) Multisite URL, we need to add an Apache proxy redirect setting.
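For example (only a sketch; "remote-nagios-1" is a placeholder host name, not taken from the setup above), a mod_proxy rule on the primary Apache server could look like this for SITE1:
<Location /SITE1/>
    # Forward everything under /SITE1/ to the remote Nagios/PNP4Nagios server
    ProxyPass http://remote-nagios-1/SITE1/
    ProxyPassReverse http://remote-nagios-1/SITE1/
</Location>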
multisite.mk conf file
This is my "check_mk/multisite.mk" conf file from the primary Multisite server (the production server); SITE1 and SITE2 are the two remote Nagios sites.
OMD[production]:~$ cat etc/check_mk/multisite.mk
…
….
sites = {
    # Primary site
    "local": {
        "alias": "PRODUCTION"
    },
    # Remote site
    "SITE1": {
        "alias": "SITE1",
        "socket": "tcp:XXX.XXX.X.XX:6557",
        "url_prefix": "/SITE1/",
        "nagios_url": "/SITE1/nagios",
        "nagios_cgi_url": "/SITE1/nagios/cgi-bin",
        "pnp_url": "/SITE1/pnp4nagios",
    },
    # Remote site
    "SITE2": {
        "alias": "SITE2",
        "socket": "tcp:XXX.XXX.X.XX:6557",
        "url_prefix": "/SITE2/",
        "nagios_url": "/SITE2/nagios",
        "nagios_cgi_url": "/SITE2/nagios/cgi-bin",
        "pnp_url": "/SITE2/pnp4nagios",
    },
}
….
…..
OMD[production]:~$
After making these changes in the multisite.mk file, the Livestatus data of the remote Nagios sites becomes visible on the local site.
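As an optional sanity check (assuming netcat is available on the Multisite server), you can verify that a remote Livestatus TCP socket is reachable before adding it to multisite.mk:
# Query the remote Livestatus socket directly; it should answer with the program version
printf 'GET status\nColumns: program_version\n\n' | nc XXX.XXX.X.XX 6557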
I'm trying to proxy outbound API calls made from a Google App Engine application via a Google Compute Engine server VM instance running Squid proxy server.
The aim is that the REST api calls will all be made from a static ip address so that the 3rd party API will be able to identify and permit the calls via their firewall.
I have read and followed the instructions on this post:
connect Google App Engine and Google Compute Engine
I have managed to do the following so far:
Created a Google cloud compute VM and successfully assigned it a static external IP address.
Created a Serverless VPC access connector successfully (all resources are located in the same GAE region).
Added the vpc_access_connector name to my app.yaml in the Google App Engine project (which runs on Node.js).
Deployed the app using gcloud beta, with API calls targeted at the internal IP address of the proxy server, using the correct Squid default port (3128).
On issuing a request from the GAE app, I can see from the server logs that the correct IP address and port are being attempted but get the following error: "Error: tunneling socket could not be established, cause=connect ECONNREFUSED [my-internal-ip-address]:3128"
I've also tried running a curl command from the cloud shell interface, but the request times out every time.
If anyone could help solve this issue, I will be very grateful.
Here is a possible example of how to proxy outbound HTTP requests from an App Engine Standard application on the Node.js runtime via a Compute Engine VM running Squid, based on a slight modification of the available Google Cloud Platform documentation [1] [2] and Quickstarts [3].
1. Create a Serverless VPC Access connector: Basically follow [2] to create the connector. After updating the gcloud components and enabling the Serverless VPC Access API on your project, running the following command should suffice:
gcloud compute networks vpc-access connectors create [CONNECTOR_NAME] \
--network [VPC_NETWORK] \
--region [REGION] \
--range [IP_RANGE]
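Optionally, you can verify that the connector finished creating (its state should be READY) with, for example:
gcloud compute networks vpc-access connectors describe [CONNECTOR_NAME] --region [REGION]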
2. Create a Compute Engine VM to use as proxy: Basically follow [1] to set up a Squid proxy server:
a. Reserve a static external IP address and assign it to a Compute Engine VM.
b. Add a Firewall rule to allow traffic on Squid's default port: 3128. This command should work if you are using the default VPC network: gcloud compute firewall-rules create [FIREWALL_RULE_NAME] --network default --allow tcp:3128
c. Install Squid on the VM with the following command sudo apt-get install squid3.
d. Enable the acl localnet src entries in the Squid config files for the VPC Access connector:
sudo sed -i 's:#\(http_access allow localnet\):\1:' /etc/squid/squid.conf
sudo sed -i 's:#\(acl localnet src [IP_RANGE]/28.*\):\1:' /etc/squid/squid.conf
For example: if you used 10.8.0.0 as value for the [IP_RANGE] field for creating the connector, it should look something like sudo sed -i 's:#\(acl localnet src 10.8.0.0/28.*\):\1:' /etc/squid/squid.conf
e. Start the server with sudo service squid start
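f. As an optional check, you can send a request through the proxy from another VM in the same VPC network to confirm Squid answers on port 3128 (the address below is a placeholder for the proxy VM's internal IP):
curl -I -x http://[Compute-Engine-Internal-IP]:3128 http://example.com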
3. Modifications on the App Engine application: Based on the Quickstart for Node.js, modify the following files in order to create an application that crawls a webpage using the request-promise library and displays its HTML. The request is sent to the webpage through the VPC Access connector and the VM acting as a proxy, thanks to the modifications in the app.yaml and app.js files.
a. package.json
...
"test": "mocha --exit test/*.test.js"
},
"dependencies": {
"express": "^4.16.3",
"request": "^2.88.0",
"request-promise": "^4.2.5"
},
"devDependencies": {
"mocha": "^7.0.0",
...
b. app.js
'use strict';

// [START gae_node_request_example]
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res
    .status(200)
    .send('Hello, world!')
    .end();
});

// Add a handler to test the web crawler
app.get('/test', (req, res) => {
  var request = require('request-promise');
  request('http://www.input-your-awesome-website.com')
    .then(function (htmlString) {
      res.send(htmlString)
        .end();
    })
    .catch(function (err) {
      res.send("Crawling Failed...")
        .end();
    });
});

// Start the server
const PORT = process.env.PORT || 8080;
app.listen(PORT, () => {
  console.log(`App listening on port ${PORT}`);
  console.log('Press Ctrl+C to quit.');
});
// [END gae_node_request_example]
c. app.yaml
runtime: nodejs10
vpc_access_connector:
  name: "projects/[PROJECT]/locations/[REGION]/connectors/[CONNECTOR_NAME]"
env_variables:
  HTTP_PROXY: "http://[Compute-Engine-IP-Address]:3128"
  HTTPS_PROXY: "http://[Compute-Engine-IP-Address]:3128"
Each time you go to the /test handler, you can verify that the requests go through the proxy by running sudo tail -f /var/log/squid/access.log on the VM and watching for new entries in the log.
Notes: The connector, application and VM need to be in the same region for this to work, and the connector is only available in certain supported regions.
I'm trying to connect to google sql cloud instance from custom runtime environment in App Engine.
When I follow the doc to connect using a Unix domain socket, it works. The problem is when I try to connect using a TCP connection. It shows:
Warning: mysqli_connect(): (HY000/2002): Connection refused in
/var/www/html/index.php on line 3
Connect error: Connection refused
This is my app.yaml file:
runtime: custom
env: flex
beta_settings:
  cloud_sql_instances: testing-mvalcam:europe-west1:testdb=tcp:3306
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
The Dockerfile:
FROM php:7.0-apache
ENV PORT 8080
CMD sed -i "s/80/$PORT/g" /etc/apache2/sites-available/000-default.conf /etc/apache2/ports.conf && docker-php-entrypoint apache2-foreground
RUN docker-php-ext-install mysqli
RUN a2enmod rewrite
COPY ./src /var/www/html
EXPOSE $PORT
And index.php:
<?php
$link = mysqli_connect('127.0.0.1', 'root', 'root', 'test');
if (!$link) {
    die('Connect error: ' . mysqli_connect_error());
}
echo 'successfully connected';
mysqli_close($link);
?>
What am I doing wrong?
The IP address '172.17.0.1' is associated with the Docker container where the web server is running; you can get more context on that in this documentation.
The documentation page you're using might not fully cover the case where you deploy with a Dockerfile present. In the following documentation you can read more about App Engine flexible runtimes.
As shown in the documentation you're using (remember to click on the TCP CONNECTION tab on that page), the app.yaml section for Cloud SQL instances needs to specify the TCP port used by the database server.
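As a sketch of what that change can look like in this example (assuming the default MySQL port 3306 declared in app.yaml), index.php would then connect to the Docker bridge address and pass the port explicitly:
// Connect over TCP to the Cloud SQL instance exposed on the Docker bridge address
$link = mysqli_connect('172.17.0.1', 'root', 'root', 'test', 3306);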
Recently I successfully installed Laravel's Homestead VM. Now I want to access my DB via phpMyAdmin, ideally the phpMyAdmin from my localhost setup (XAMPP).
Is this possible?
I've come across an article stating that I can install phpMyAdmin in my Ubuntu VM, but then every time I destroy the VM I need to reinstall phpMyAdmin all over again.
Is there any way I can have a UI for databases in the Homestead VM?
By default phpMyAdmin runs on port 8000, so make sure to forward that port so you can access it from your host.
Either add the following line directly to your Vagrantfile:
config.vm.network "forwarded_port", guest: 8000, host: 8000
or update Homestead.yaml and add the following:
ports:
    - send: 8000
      to: 8000
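In either case the mapping only takes effect after reloading the box, for example:
vagrant reload --provision
After that, phpMyAdmin running inside the VM should be reachable from the host at http://localhost:8000.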
I'm running GAE dev server within a Vagrant guest precise64 box with the following network setup (in my Vagrantfile):
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.network :forwarded_port, guest: 8080, host: 9090
end
Which does its thing:
[default] Forwarding ports...
[default] -- 8080 => 9090 (adapter 1)
I start my App Engine server with:
goapp serve
or
dev_appserver.py myappfolder
This starts app engine dev server as expected:
INFO 2013-11-22 dispatcher.py] Starting module running at: http://localhost:8080
In all cases, I'm able to ssh in to the Vagrant guest and curl localhost:8080 successfully.
Unfortunately, from the host I'm unable to get a response from localhost:9090 when running the GAE dev web server. Additionally, I've made sure that I don't have anything interfering with port 9090 on the host machine. Also, I'm almost positive this isn't related to Vagrant, as I spun up a quick Node.js web server on 8080 and was able to reach it from the host. What am I missing?
You must run the Google App Engine Go dev web server on 0.0.0.0 when leveraging Vagrant port forwarding. Like so:
goapp serve -host=0.0.0.0
See the answers here for more info on ensuring the guest web server is not bound to 127.0.0.1, which is the loopback interface. Web servers that bind to 127.0.0.1 by default (as the App Engine Go dev web server does) should be overridden to listen on 0.0.0.0.
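If you launch it with dev_appserver.py instead of goapp, the equivalent flag is --host, for example:
dev_appserver.py --host=0.0.0.0 myappfolder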
I have Apache Tomcat 6.0.35 (Windows 2008) with a bunch of applications installed.
I have deleted webapps\ROOT*, renamed my application to ROOT.war and deployed it, so now my application is the root application (the following URLs are used in the application: http://exampleapp.com/, http://exampleapp.com/SomePostUri). Port 8080 was changed to 80.
How can I forbid access (via HTTP) to the following applications?
1) Tomcat's Manager application (http://exampleapp.com/manager/html) - allow access only from localhost.
2) All other installed web applications (e.g. http://exampleapp.com/docs) - allow access only from localhost.
I installed the Apache HTTP Server and connected it to Tomcat via mod_jk.
Solved.
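For reference, a minimal sketch of such a setup (worker name and mounts are illustrative, not copied from the real configuration): mod_jk forwards only the ROOT application to Tomcat, so /manager, /docs and the other webapps are never exposed through Apache:
# workers.properties
worker.list=tomcat1
worker.tomcat1.type=ajp13
worker.tomcat1.host=localhost
worker.tomcat1.port=8009
# httpd.conf (with mod_jk loaded)
JkMount /* tomcat1
JkUnMount /manager/* tomcat1
JkUnMount /docs/* tomcat1
Tomcat's own HTTP connector can then be removed or bound to 127.0.0.1 in server.xml, so the Manager and the other applications remain reachable only from localhost.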