How to export Nagios data into CSV?

I am a newbie and I am doing this for my project. I have installed Nagios and it is monitoring successfully, but I am required to export this data into CSV. Can anyone help me with this?
Thank you so much.

You can set the host_perfdata_file and service_perfdata_file directives in your nagios.cfg configuration file to write performance data to the specified file paths, in the format defined by the host_perfdata_file_template and service_perfdata_file_template directives.
Writing Performance Data To Files
You can have Nagios write all host and service performance data
directly to text files using the host_perfdata_file and
service_perfdata_file options. The format in which host and service
performance data is written to those files is determined by the
host_perfdata_file_template and service_perfdata_file_template
options.
An example file format template for service performance data might
look like this:
service_perfdata_file_template=[SERVICEPERFDATA]\t$TIMET$\t$HOSTNAME$\t$SERVICEDESC$\t$SERVICEEXECUTIONTIME$\t$SERVICELATENCY$\t$SERVICEOUTPUT$\t$SERVICEPERFDATA$
By default, the text files will be opened in "append" mode. If you
need to change the modes to "write" or "non-blocking read/write"
(useful when writing to pipes), you can use the
host_perfdata_file_mode and service_perfdata_file_mode options.
Additionally, you can have Nagios periodically execute commands to
process the performance data files (e.g. rotate them)
using the host_perfdata_file_processing_command and
service_perfdata_file_processing_command options. The interval at
which these commands are executed is governed by the
host_perfdata_file_processing_interval and
service_perfdata_file_processing_interval options, respectively.
Source: https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/4/en/perfdata.html
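If you need true CSV rather than the tab-separated lines produced by that template, a small post-processing script can do the conversion. The following is a minimal sketch, assuming the service template shown above and a service_perfdata_file of /usr/local/nagios/var/service-perfdata.dat (adjust both paths to your own settings):

import csv

# Column names matching the $...$ macros in the template above
columns = ["datatype", "timet", "hostname", "servicedesc",
           "executiontime", "latency", "output", "perfdata"]

with open("/usr/local/nagios/var/service-perfdata.dat") as infile, \
        open("service-perfdata.csv", "w", newline="") as outfile:
    writer = csv.writer(outfile)
    writer.writerow(columns)
    for line in infile:
        fields = line.rstrip("\n").split("\t")
        if len(fields) == len(columns):   # skip malformed or partial lines
            writer.writerow(fields)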
Performance Data Processing Option
Format: process_performance_data=<0/1>
Example: process_performance_data=1
This value determines whether or not Nagios will process host and service check performance data.
0 = Don't process performance data (default)
1 = Process performance data
Host Performance Data Processing Command
Format: host_perfdata_command=<command>
Example: host_perfdata_command=process-host-perfdata
This option allows you to specify a command to be run after every host
check to process host performance data that may be returned from the
check. The command argument is the short name of a command definition
that you define in your object configuration file. This command is
only executed if the process_performance_data option is enabled
globally and if the process_perf_data directive in the host definition
is enabled.
Service Performance Data Processing Command
Format: service_perfdata_command=<command>
Example: service_perfdata_command=process-service-perfdata
This option allows you to specify a command to be run after every
service check to process service performance data that may be returned
from the check. The command argument is the short name of a command
definition that you define in your object configuration file. This
command is only executed if the process_performance_data option is
enabled globally and if the process_perf_data directive in the service
definition is enabled.
Host Performance Data File
Format: host_perfdata_file=<file_name>
Example: host_perfdata_file=/usr/local/nagios/var/host-perfdata.dat
This option allows you to specify a file to which host performance
data will be written after every host check. Data will be written to
the performance file as specified by the host_perfdata_file_template
option. Performance data is only written to this file if the
process_performance_data option is enabled globally and if the
process_perf_data directive in the host definition is enabled.
Service Performance Data File
Format: service_perfdata_file=<file_name>
Example: service_perfdata_file=/usr/local/nagios/var/service-perfdata.dat
This option allows you to specify a file to which service performance
data will be written after every service check. Data will be written
to the performance file as specified by the
service_perfdata_file_template option. Performance data is only
written to this file if the process_performance_data option is enabled
globally and if the process_perf_data directive in the service
definition is enabled.
Host Performance Data File Template
Format: host_perfdata_file_template=<template>
Example: host_perfdata_file_template=[HOSTPERFDATA]\t$TIMET$\t$HOSTNAME$\t$HOSTEXECUTIONTIME$\t$HOSTOUTPUT$\t$HOSTPERFDATA$
This option determines what (and how) data is written to the host
performance data file. The template may contain macros, special
characters (\t for tab, \r for carriage return, \n for newline) and
plain text. A newline is automatically added after each write to the
performance data file.
Service Performance Data File Template
Format: service_perfdata_file_template=<template>
Example: service_perfdata_file_template=[SERVICEPERFDATA]\t$TIMET$\t$HOSTNAME$\t$SERVICEDESC$\t$SERVICEEXECUTIONTIME$\t$SERVICELATENCY$\t$SERVICEOUTPUT$\t$SERVICEPERFDATA$
This option determines what (and how) data is written to the service
performance data file. The template may contain macros, special
characters (\t for tab, \r for carriage return, \n for newline) and
plain text. A newline is automatically added after each write to the
performance data file.
Source: https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/4/en/configmain.html#process_performance_data
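Putting those directives together, a minimal nagios.cfg fragment aimed directly at CSV-style output might look like this (the paths are examples only; also note that commas inside $HOSTOUTPUT$, $SERVICEOUTPUT$ or the perfdata macros will break naive CSV parsing, which is why the tab-separated templates shown above are often the safer choice):

process_performance_data=1

host_perfdata_file=/usr/local/nagios/var/host-perfdata.csv
host_perfdata_file_template=$TIMET$,$HOSTNAME$,$HOSTEXECUTIONTIME$,$HOSTOUTPUT$,$HOSTPERFDATA$
host_perfdata_file_mode=a

service_perfdata_file=/usr/local/nagios/var/service-perfdata.csv
service_perfdata_file_template=$TIMET$,$HOSTNAME$,$SERVICEDESC$,$SERVICEEXECUTIONTIME$,$SERVICELATENCY$,$SERVICEOUTPUT$,$SERVICEPERFDATA$
service_perfdata_file_mode=a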
NOTE: If you've followed the directions to set up pnp4nagios in "Bulk Mode", you've probably already done this. In that case, you just need to refer to the path you specified in host_perfdata_file and service_perfdata_file. But if not, here's how you do it for pnp4nagios:
Processing of performance data has to be enabled in nagios.cfg
process_performance_data=1
Additionally some new directives are required
#
# service performance data
#
service_perfdata_file=/usr/local/pnp4nagios/var/service-perfdata
service_perfdata_file_template=DATATYPE::SERVICEPERFDATA\tTIMET::$TIMET$\tHOSTNAME::$HOSTNAME$\tSERVICEDESC::$SERVICEDESC$\tSERVICEPERFDATA::$SERVICEPERFDATA$\tSERVICECHECKCOMMAND::$SERVICECHECKCOMMAND$\tHOSTSTATE::$HOSTSTATE$\tHOSTSTATETYPE::$HOSTSTATETYPE$\tSERVICESTATE::$SERVICESTATE$\tSERVICESTATETYPE::$SERVICESTATETYPE$
service_perfdata_file_mode=a
service_perfdata_file_processing_interval=15
service_perfdata_file_processing_command=process-service-perfdata-file
#
# host performance data starting with Nagios 3.0
#
host_perfdata_file=/usr/local/pnp4nagios/var/host-perfdata
host_perfdata_file_template=DATATYPE::HOSTPERFDATA\tTIMET::$TIMET$\tHOSTNAME::$HOSTNAME$\tHOSTPERFDATA::$HOSTPERFDATA$\tHOSTCHECKCOMMAND::$HOSTCHECKCOMMAND$\tHOSTSTATE::$HOSTSTATE$\tHOSTSTATETYPE::$HOSTSTATETYPE$
host_perfdata_file_mode=a
host_perfdata_file_processing_interval=15
host_perfdata_file_processing_command=process-host-perfdata-file
Source: https://docs.pnp4nagios.org/pnp-0.6/config#bulk_mode
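The process-service-perfdata-file and process-host-perfdata-file commands referenced above also need matching command definitions in your object configuration. Per the linked pnp4nagios page they look roughly like this (the paths depend on your installation, so check that page for the exact definitions):

define command {
    command_name    process-service-perfdata-file
    command_line    /usr/local/pnp4nagios/libexec/process_perfdata.pl --bulk=/usr/local/pnp4nagios/var/service-perfdata
}

define command {
    command_name    process-host-perfdata-file
    command_line    /usr/local/pnp4nagios/libexec/process_perfdata.pl --bulk=/usr/local/pnp4nagios/var/host-perfdata
}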
EDIT: Here's an easier way to generate CSV data on demand.
Browse to http://<your-nagios-server>/nagios/cgi-bin/avail.cgi
Fill out the steps of the wizard.
Be sure to check the "Output in CSV Format" checkbox on the 3rd screen.
Click the "Create Availability Report!" button.
A CSV file will be generated and downloaded by your browser.
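If you want to script this instead of clicking through the wizard, something along the following lines may work. The query parameters here are assumptions based on the fields the avail.cgi form submits, so verify them (and the credentials) against your own Nagios installation:

import requests

params = {
    "show_log_entries": "",
    "host": "all",               # or a specific host name
    "timeperiod": "last7days",
    "csvoutput": "",             # assumed to be the flag behind the CSV checkbox
}
response = requests.get(
    "http://your-nagios-server/nagios/cgi-bin/avail.cgi",   # placeholder host
    params=params,
    auth=("nagiosadmin", "password"),   # Nagios CGIs usually sit behind basic auth
)
response.raise_for_status()

with open("availability.csv", "wb") as outfile:
    outfile.write(response.content)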

Related

Check whether file has read only flag in UWP

I am working on a UWP text editor. I have added a desktop extension to it to modify system files and other read-only files. The problem is that there is no reliable way to detect whether a file has the read-only attribute: FileInfo.IsReadOnly doesn't work, and StorageFile.Attributes contains FileAttributes.ReadOnly when the file is dragged and dropped from File Explorer.
How do I reliably check whether the file has the read-only flag or not?
While there is no way to detect the read-only attribute using .NET methods, GetFileAttributesExFromApp can be used to get many attributes (read-only, hidden, etc.) of a file that aren't available via the StorageFile API. Likewise, SetFileAttributesFromApp can be used to change or remove these attributes.
Edit
After some research and a deep dive into MSDN, I came across the RetrievePropertiesAsync(IEnumerable<String>) and
SavePropertiesAsync(IEnumerable<KeyValuePair<String,Object>>) methods of Windows.Storage.FileProperties.StorageItemContentProperties, which can be used to get and set properties by name (a full list of property names is available). The property name System.FileAttributes can be used to read the file attributes and detect whether the read-only flag is present. Retrieving properties always works, but modifying them only works if the app has write access to the file (i.e. Windows.Storage.StorageFile.Attributes doesn't contain the ReadOnly flag). SetFileAttributesFromApp does work in that scenario, but its limitation is that it won't work for sensitive file types (.bat, .cmd, etc.). So the two methods can be combined for maximum effect.
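As a rough sketch of that approach (the helper class and method names below are mine, not part of the API):

using System;
using System.Threading.Tasks;
using Windows.Storage;

public static class FileAttributeHelper
{
    public static async Task<bool> IsReadOnlyAsync(StorageFile file)
    {
        // Ask the property system for the raw Win32 attribute bits
        var props = await file.Properties.RetrievePropertiesAsync(
            new[] { "System.FileAttributes" });

        if (props.TryGetValue("System.FileAttributes", out object value))
        {
            uint attributes = Convert.ToUInt32(value);   // boxed numeric value
            return (attributes & 0x1) != 0;              // 0x1 == FILE_ATTRIBUTE_READONLY
        }
        return false;   // attribute not reported for this item
    }
}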
You can check whether the Attributes property contains ReadOnly:
var filePicker = new Windows.Storage.Pickers.FileOpenPicker();
filePicker.ViewMode = Windows.Storage.Pickers.PickerViewMode.Thumbnail;
filePicker.SuggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.VideosLibrary;

// HelperUP.subtitlesFormat is this app's own list of allowed file extensions
foreach (string format in HelperUP.subtitlesFormat)
    filePicker.FileTypeFilter.Add(format);

var file = await filePicker.PickSingleFileAsync();
if (file == null)
    return;

Debug.WriteLine(file.Attributes); // FileAttributes flags, e.g. ReadOnly | Archive
The reason FileAttributes.ReadOnly throws an exception is that the System.IO APIs don't have access to arbitrary file locations on the hard drive in UWP.
On the other hand, a StorageFile opened in the app via drag & drop has this attribute set too, which is a problem that is continuously being discussed and hopefully will be fixed in a future version.
The only workaround I can think of (apart from always using the desktop extension) is declaring the broadFileSystemAccess capability (I have described the process here for example). This is a capability which gives you access to the whole filesystem and allows you to get a file using an arbitrary path with the StorageFile.GetFileFromPathAsync method (see Docs). Please note you will need to explain why this capability is required when you submit the application to the Microsoft Store.
With broad filesystem access, you could take the drag & drop StorageFile, take its Path and retrieve the same file again using StorageFile.GetFileFromPathAsync. This new copy of the file will no longer have the "false-positive" Read Only attribute and will reflect the actual attribute state from the filesystem.
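For example, something along these lines, where droppedFile is the StorageFile received via drag & drop (a sketch only; it assumes broadFileSystemAccess has been declared and granted):

// Requires this restricted capability in Package.appxmanifest:
//   <rescap:Capability Name="broadFileSystemAccess" />
StorageFile refreshedFile = await StorageFile.GetFileFromPathAsync(droppedFile.Path);
System.Diagnostics.Debug.WriteLine(refreshedFile.Attributes); // reflects the real on-disk attributes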

Behave print all tests to a text file

I have been asked to provide, for a document going to an external client, a list of every behave Feature and Scenario we run as part of our regression pack (not the steps).
As our regression test suite is currently around 50 feature files with at least 10 scenarios in each, I would rather not copy and paste manually.
Is there a way to export the Feature name and ID and then the name and ID of each scenario that comes under that feature either to a CSV or text file?
Currently our behave tests are run locally and I am using PyCharm IDE to edit them in.
I have found a roundabout way to do this.
Set behave to write its output to an external txt file using the configuration setting
outfiles = test_list
Then use the behave -d command to run my tests as a dry run.
This then populates the txt file with the feature, scenario and step of every test.
I can export this to Excel and, through filtering, isolate the feature and scenario lines (removing the steps), then use Text to Columns to split the feature/scenario description from its test path/name.
If there is a less roundabout way of doing this it would be good to know, as it looks like this is information we will need to provide on a semi-regular basis going forward.
You can take advantage of context.scenario to get the scenario name and feature name, and then write them into a text file.
You should put this code in after_scenario in environment.py so that you can also get the scenario status.
I am using this to export the scenario name, status and feature name into a text file, with each field separated by "|". I later import this file into an Excel file for reporting.
Here is the code you can use for reference:
import os

class ReportingHelper:

    @staticmethod
    def write_scenario_summary(context, scenario, report_path):
        try:
            # scenario status could be [untested, skipped, passed, failed]
            status = scenario.compute_status().name.upper()
            feature = ReportingHelper.get_feature_name(scenario)
            logging_info = '{status} | {feature} | | {scenario_name}'.format(
                status=status,
                feature=feature,
                scenario_name=scenario.name)
            print(logging_info, file=open(report_path, 'a'))
        except Exception as error:
            print('Could not write the scenario summary: {}'.format(error))

    @staticmethod
    def get_feature_name(scenario):
        feature_file_path = scenario.feature.filename
        return os.path.basename(feature_file_path)
Hope it helps.
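For completeness, a minimal environment.py hook that wires the helper above into a behave run might look like this (the module name and report path are assumptions):

# environment.py -- hypothetical wiring; adjust the import and path to your project
from reporting_helper import ReportingHelper

def after_scenario(context, scenario):
    ReportingHelper.write_scenario_summary(context, scenario, 'scenario_summary.txt')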

Caching Dynamically Generated Pages

We are looking to speed up our website. More specifically, we are looking to lower TTFB. Our website consists mainly of pages that are dynamically generated based on the url path (a subject is extracted) and on parameters in the url path.
These values are fed into a SQL query, executed with PHP, that pulls all the right data from our database.
Here is the issue:
These queries work perfectly to generate the pages and all the information associated with them (e.g. tags).
However, rerunning the code every time a visitor goes to a page takes a significant amount of time, resulting in a high TTFB/server response time. In essence, these pages only need to be regenerated from the SQL queries once a month; in between, it should be possible to serve them as pregenerated static HTML pages (until we indicate a refresh). We are currently using Cloudflare as a CDN, which has been great for speeding up the website already. However, even with the "Cache Everything" page rule enabled, we can still see that it reruns the PHP code, including the SQL queries.
The question:
Does anybody know a good way to accomplish this goal of caching the dynamic part of the website? Whether that's with Cloudflare, or via another way? I know that Akamai offers this service but evidently, there is some switching cost associated with swapping the website to another CDN, and we are rather satisfied with Cloudflare.
Thanks in advance!
If your website has hundreds of pages with many visitors every day, you might want to implement some sort of caching mechanism to speed up page loading time. Each client-server request involves multiple database queries, the server response and processing time, which all add to the overall page loading time. The most common solution is to make copies of dynamic pages, called cache files, and store them in a separate directory; they can later be served as static pages instead of regenerating the dynamic pages again and again.
Understanding Dynamic pages & Cache Files
Cache files are static copies generated by dynamic pages; they are generated once and stored in a separate folder until they expire. When a user requests the content, the same static file is served instead of a dynamically generated page, bypassing the need to regenerate the HTML and query the database over and over again with server-side code. For example, running several database queries and processing PHP code into HTML output takes a certain amount of time, increasing the overall loading time of a dynamic page, whereas a cached file consists of plain HTML; you can open it in any text editor or browser, and it requires virtually no processing time at all.
Dynamic page: The diagram below shows how a dynamic page is generated. As the name says, it is completely dynamic: it talks to the database and generates the HTML output according to the variables the user provides in the request. For example, a user might want to list all the books by a particular author; the page does that by querying the database and generating fresh HTML content. Each request takes time to process and uses some server memory, which is not a big deal if the website receives very few visitors. However, with hundreds of visitors requesting dynamically generated pages over and over again, the load increases considerably, resulting in delayed output and HTTP errors in the client's browser.
[diagram: how a dynamic page is generated]
Cached file: The diagram below illustrates how cached files are served instead of dynamic pages. As explained above, cached files are nothing but static web pages containing plain HTML; the only way the content of a cached page changes is if the developer manually edits (or regenerates) the file. Cached files require neither database connectivity nor processing time, which makes them an ideal way to reduce server load and page loading time.
[diagram: how a cached file is served]
PHP Caching
There are several ways to cache dynamic pages with PHP, but the most common approach combines the PHP output buffer with the filesystem functions; together they make a perfectly serviceable caching system.
PHP output buffer: Buffering improves performance and decreases download time because the output is not sent to the browser in pieces; the whole HTML page is held as one variable. The method is simple; take a look at the code below:
<?php
ob_start(); // start the output buffer
/* the content */
$content = ob_get_contents(); // get the contents of the output buffer
ob_end_flush(); // send the output and turn off output buffering
?>
When you call ob_start() on the top of the code, it turns output buffering on, which means anything after this will be stored in the buffer, instead of outputting on the browser. The content in the buffer can be retrieved using ob_get_contents(). You should call ob_end_flush() at the end of the code to send the output to the browser and turn buffering off.
PHP filesystem: You may be familiar with the PHP filesystem functions; they are part of the PHP core and allow us to read and write files. Have a look at the following code.
$fp = fopen('/path/to/file.txt', 'w'); //open file for writing
fwrite($fp, 'I want to write this'); //write
fclose($fp); //Close file pointer
As you can see, fopen() on the first line opens the file for writing; the mode 'w' places the file pointer at the beginning of the file and, if the file does not exist, attempts to create it. On the second line, fwrite() writes the string to the opened file, and finally fclose() closes the file that was opened at the start.
Implementing PHP caching
Now that the output buffer and the filesystem functions are clear, we can combine the two to build our PHP caching system. The flowchart below gives the basic idea of the cache system.
[flowchart: the PHP cache system]
The cycle starts when a user requests the content: we check whether a cached copy exists for the requested page; if it doesn't, we generate a new page, create the cached copy and then output the result. If the cache already exists, we just fetch the file and send it to the user's browser.
Take a look at the full PHP cache code below; you can copy and paste it into your PHP projects and it should work as depicted in the flowchart above. You can play with the settings in the code: the cache expiry time, the cache file extension, ignored pages, etc.
<?php
//settings
$cache_ext = '.html'; //file extension
$cache_time = 3600; //Cache file expires after this many seconds (1 hour = 3600 sec)
$cache_folder = 'cache/'; //folder to store Cache files
$ignore_pages = array('', '');
$dynamic_url = 'http://'.$_SERVER['HTTP_HOST'].$_SERVER['REQUEST_URI']; // requested dynamic page (full url; REQUEST_URI already includes the query string)
$cache_file = $cache_folder.md5($dynamic_url).$cache_ext; // construct a cache file
$ignore = (in_array($dynamic_url,$ignore_pages))?true:false; //check if url is in ignore list
if (!$ignore && file_exists($cache_file) && time() - $cache_time < filemtime($cache_file)) { //check Cache exist and it's not expired.
ob_start('ob_gzhandler'); //Turn on output buffering, "ob_gzhandler" for the compressed page with gzip.
readfile($cache_file); //read Cache file
echo '<!-- cached page - '.date('l jS \of F Y h:i:s A', filemtime($cache_file)).', Page : '.$dynamic_url.' -->';
ob_end_flush(); //Flush and turn off output buffering
exit(); //no need to proceed further, exit the flow.
}
//Turn on output buffering with gzip compression.
ob_start('ob_gzhandler');
######## Your Website Content Starts Below #########
?>
<!DOCTYPE html>
<html>
<head>
<title>Page to Cache</title>
</head>
<body>
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer ut tellus libero.
</body>
</html>
<?php
######## Your Website Content Ends here #########
if (!is_dir($cache_folder)) { //create a new folder if we need to
mkdir($cache_folder);
}
if(!$ignore){
$fp = fopen($cache_file, 'w'); //open file for writing
fwrite($fp, ob_get_contents()); //write contents of the output buffer in Cache file
fclose($fp); //Close file pointer
}
ob_end_flush(); //Flush and turn off output buffering
?>
You must place your page content between the marked comment lines. In fact, I'd suggest splitting the caching code into separate header and footer files, so that it can generate and serve cache files for all your different dynamic pages. If you read the comments in the code carefully, you should find it pretty much self-explanatory.
You can do this at the edge with Cloudflare to get better performance. The nice thing is you can put the domain into Development Mode at any time to see code changes, without having to change a server-side mechanism.
Your Page Rule would look something like the below. Note the URL variable with the wildcard: any URL with that variable name will be cached at the edge. You should easily see TTFB plus the entire HTML download come in under 50 ms.
Also note the expiration. You said a month, so I chose that setting, but I'd probably make it "1 Day" just so I can keep an eye on things.
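Since the screenshot isn't reproduced here, the rule described above amounts to roughly the following settings (the URL pattern is an assumption about your site's structure; use whatever wildcard pattern covers your dynamic pages):

If the URL matches:   example.com/subject/*
Settings:             Cache Level: Cache Everything
                      Edge Cache TTL: a month (or 1 Day while you keep an eye on things)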

File arrival in blob storage to trigger a Data Factory pipeline

I need to invoke a Data Factory V2 pipeline when a file is placed in a blob container.
I have tried using PowerShell to check whether the file is present. The issue is that if the file is not there, it tells me it's not there; when I then place the file in the container, PowerShell still tells me it's not there, though perhaps if it reruns, the variable will get a fresh value and report that it is there? Maybe there is a way around that? If so, I can then use the result to invoke the pipeline with the PowerShell script. Am I along the right lines here?
Another option would be to write a T-SQL query that gives a true/false result when the row condition is met, but I am not sure how I can use this result within/against ADF V2. In the If Condition activity?
I tried a Logic App but it didn't get me anywhere. It would be great to get some suggestions for ways to trigger the pipeline on the arrival of the file in the blob container; there is more than one way to skin a cat, so I'm open to any and all ideas. Thank you.
This is now available as an event trigger with ADF V2, as announced in this blog post on June 21, 2018.
Current documentation on how to set it up is available here: Create a trigger that runs a pipeline in response to an event.
From the documentation:
As soon as the file arrives in your storage location and the corresponding blob is created, this event triggers and runs your Data Factory pipeline. You can create a trigger that responds to a blob creation event, a blob deletion event, or both events, in your Data Factory pipelines.
There is a note to be wary of:
This integration supports only version 2 Storage accounts (General purpose).
Event triggers can be one, or both of:
Microsoft.Storage.BlobCreated
Microsoft.Storage.BlobDeleted
With firing conditions from the following:
blobPathBeginsWith
blobPathEndsWith
The documentation also provides the following examples of event trigger firing conditions over blobs:
Blob path begins with('/containername/') – Receives events for any blob in the container.
Blob path begins with('/containername/foldername') – Receives events for any blobs in the containername container and foldername folder.
Blob path begins with('/containername/foldername/file.txt') – Receives events for a blob named file.txt in the foldername folder under the containername container.
Blob path ends with('file.txt') – Receives events for a blob named file.txt at any path.
Blob path ends with('/containername/file.txt') – Receives events for a blob named file.txt under container containername.
Blob path ends with('foldername/file.txt') – Receives events for a blob named file.txt in foldername folder under any container.
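For reference, the JSON behind such a trigger looks roughly like the following; treat the property names and the path format as a sketch and verify them against the linked documentation (the subscription, resource group, storage account and pipeline names are placeholders):

{
    "name": "TriggerOnFileArrival",
    "properties": {
        "type": "BlobEventsTrigger",
        "typeProperties": {
            "blobPathBeginsWith": "/containername/foldername/",
            "blobPathEndsWith": ".csv",
            "events": [ "Microsoft.Storage.BlobCreated" ],
            "scope": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
        },
        "pipelines": [
            {
                "pipelineReference": {
                    "referenceName": "<pipeline-name>",
                    "type": "PipelineReference"
                }
            }
        ]
    }
}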

Can we drop a file to a folder location automatically using Camel, or at a set time (not at intervals)?

I am trying to automate the testing of a Java bundle, which starts processing once a file is dropped in a particular folder.
Can we drop a file into a folder location automatically using Camel, or at a set time (not at intervals)?
Is this possible purely with Camel, or should we incorporate other frameworks?
Sure, you can use the camel-file component to produce (create files somewhere) and consume (read/process files from somewhere), and you can easily control the initial/polling delays with endpoint attributes.
Here is a simple example of consuming -> processing -> producing:
from("file://inputdir").process(<dosomething>).to("file://outputdir")
Alternatively, you could periodically produce a file and drop it somewhere:
from("timer://foo?fixedRate=true&period=60000").process(<createFileContent>).to("file://inputdir");
Although camel could do this by creating a timer endpoint, then setting the file content and writing to a file endpoint, my answer would be to simply use a bash script. No camel needed here.
Pseudo bash script:
while true
do
    cp filefrom fileto
    sleep 10
done
