I have written a script that prints multiline output, but Nagios shows only the first line. Does anyone know how to print multiple lines in Nagios?
Multiline output is possible only with Nagios 3 and newer.
First of all, you can use the HTML tag <br/> for each new line of the desired output.
The next important thing is to disable HTML tag escaping in cgi.cfg on the Nagios server.
Find escape_html_tags=1 and change it to escape_html_tags=0.
Then restart the Nagios server.
Some advice about Nagios plugin output can be found here: https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/3/en/pluginapi.html
PS: By default Nagios will only read the first 4 KB of data that a plugin returns.
Yes! I agree, multiline output is possible only with Nagios 3 and above. However, you do not need to send <br/> tags to achieve this; you can simply end each line with '\r\n'. Use the following format.
DISK OK - free space: / 3326 MB (56%); | /=2643MB;5948;5958;0;5968
/ 15272 MB (77%);
/boot 68 MB (69%);
/home 69357 MB (27%);
/var/log 819 MB (84%); | /boot=68MB;88;93;0;98
/home=69357MB;253404;253409;0;253414
/var/log=818MB;970;975;0;980
The performance data is optional. Even the following format is acceptable. (Note that each line ends with \r\n.)
Weather OK - Temperature OK: 22 C
Humidity (77%) | Temp=22;35;39;40; Humidity=77;80;90;99;
For more details, check the official documentation on multiline output.
https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/3/en/pluginapi.html
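As an illustration, here is a minimal bash plugin emitting output in that format (a sketch only; the mount points and perfdata values are reused from the example above):
#!/bin/bash
# Line 1: short status text, optionally followed by "|" and performance data.
printf 'DISK OK - free space: / 3326 MB (56%%); | /=2643MB;5948;5958;0;5968\r\n'
# Subsequent lines: long output; later lines may carry additional perfdata.
printf '/boot 68 MB (69%%);\r\n'
printf '/home 69357 MB (27%%); | /boot=68MB;88;93;0;98\r\n'
exit 0  # exit code: 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN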
My beloved web radio has an icecast2 instance and it just works. We also have a Matomo instance to track visits on our WordPress website, using only Free/Libre and open source software.
The main issue is that, since Matomo tracks visits via JavaScript, direct visits to the web-radio stream are not intercepted by Matomo by default.
How can I use Matomo to track visits to Icecast2 audio streams?
Yep, it's possible. Here's my way.
First of all, try the Matomo internal import script. Be sure to set your --idsite= and the correct path to your Matomo installation:
su www-data -s /bin/bash
python2.7 /var/www/matomo/misc/log-analytics/import_logs.py --show-progress --url=https://matomo.example.com --idsite=1 --recorders=2 --enable-http-errors --log-format-name=icecast2 --strip-query-string /var/log/icecast2/access.log
NOTE: If you see this error:
[INFO] Error when connecting to Matomo: HTTP Error 400: Bad Request
be sure that all the needed plugins are activated:
Administration > System > Plugins > Bulk plugin
So, if the script works, it should start printing something like this:
0 lines parsed, 0 lines recorded, 0 records/sec (avg), 0 records/sec (current)
Parsing log /var/log/icecast2/access.log...
1013 lines parsed, 200 lines recorded, 99 records/sec (avg), 200 records/sec (current)
If so, immediately stop the script (CTRL+C) to avoid importing duplicate entries before installing the definitive solution.
Now we need to run this script every time the log is rotated, before rotation.
The official documentation suggests a crontab, but I don't recommend that solution; I suggest configuring logrotate instead.
Configure the file /etc/logrotate.d/icecast2. From:
/var/log/icecast2/*.log {
...
weekly
...
}
To:
/var/log/icecast2/*.log {
...
daily
prerotate
su www-data -s /bin/bash --command 'python2.7 ... /var/log/icecast2/access.log' > /var/log/logrotate-icecast2-matomo.log
endscript
...
}
IMPORTANT: In the above example replace ... with the right command.
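For example, reusing the import command from the beginning of this answer, the full prerotate block would look like this (a sketch; same paths and --idsite as above, adjust for your installation):
prerotate
su www-data -s /bin/bash --command 'python2.7 /var/www/matomo/misc/log-analytics/import_logs.py --show-progress --url=https://matomo.example.com --idsite=1 --recorders=2 --enable-http-errors --log-format-name=icecast2 --strip-query-string /var/log/icecast2/access.log' > /var/log/logrotate-icecast2-matomo.log
endscript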
Now you can also try it manually:
logrotate -vf /etc/logrotate.d/icecast2
From another terminal you should be able to see its result in real-time with:
tail -f /var/log/logrotate-icecast2-matomo.log
If it works, everything will run automatically from now on, importing all visits every day, without any duplicates and without missing any lines.
More documentation about the import script itself:
https://github.com/matomo-org/matomo-log-analytics
More documentation about logrotate:
https://linux.die.net/man/8/logrotate
Git CMD line noob here: how do I change the default plus/minus (+/-) signs to something more unique, such as (>>>/<<<) or (|/~)? Or any other symbol not as common as (+/-)!
Reason: I am trying to automate a report that collects all the changes to our schema.sql files. The line below does an adequate job:
git log -p --since="14 days ago" -- *Schema*.sql
My only real issue with the output is the plus/minus (+/-) signs which are used to show what has been added or removed:
+ This line was added
- This line was removed
Comments in SQL (T-SQL) are two minus signs (--), so when a comment is removed, I end up with this:
--- This comment was removed
If I can substitute the (+/-) with a unique value I can format the results and make a nice, pretty report for the people that want to see things like that. Thanks in advance!
--output-indicator-new=<char>
--output-indicator-old=<char>
--output-indicator-context=<char>
Specify the character used to indicate new, old, or context lines in the generated patch.
https://git-scm.com/docs/git-log#_common_diff_options
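For example, applied to the report command from the question (note that these indicator options only exist in newer versions of Git; older versions will report an unknown option):
git log -p --output-indicator-new='>' --output-indicator-old='<' --since="14 days ago" -- *Schema*.sql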
I don't know if git can do this natively, but you can certainly achieve what you want by piping the output of git log into sed. For example, to change the plus to '$' and the minus to '%' in your report, you could use the following command:
git log -p --since="14 days ago" -- *Schema*.sql | sed 's/^+/$/g' | sed 's/^-/%/g'
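If you prefer the symbols from the question, the two sed calls can also be combined into a single invocation with two expressions:
git log -p --since="14 days ago" -- *Schema*.sql | sed -e 's/^+/>>>/' -e 's/^-/<<</'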
Basically I have a Python script that I run against a web page, and it outputs a phone number contact if it finds one.
What I'm trying to code is a bash script that automates its use on a list of URLs and saves the results to a file, so I can see which URLs produced a phone number contact and which didn't. I want all results in the same txt file, with each URL's result on a different line, so that I can deduce which URLs produced which results.
E.g.
$ cat log.txt
HTTPError: 404
224 265 899
HTTPError: 404
847 718 9300, + 1, 662 538 6500
The problem is that when I run my bash script, the HTTP errors print to the screen and the phone numbers print to the log file, meaning I can't deduce which URLs produced which results.
So at the moment my bash script reads a URLS.txt file into an array, adds the script command to the beginning of each URL in the array, and then executes each command using a for loop.
I've tried to search for a way to add a string to each line of log.txt every time the python command is executed, but have had no luck.
If I could do this, I wouldn't need the errors: I could deduce which URLs didn't work, because those lines would contain only the string and no result.
So if I set the string to "none?", the log.txt would look like this:
none?
none? 224 265 899
none?
none? 847 718 9300, + 1, 662 538 6500
This is the code I have after putting the URLs into an array:
scr1="python3 extract.py "
scr1arr1=( "${array1[#]/#/$scr1}" )
for each1 in "${scr1arr1[#]}"
do
$each1 >> log.txt
done
So this is what happens on screen when I execute the script:
HTTPError: 404
HTTPError: 404
and the log.txt looks like this
224 265 8990,
847 718 9300, + 1, 662 538 6500
Am I doing this all wrong? Am I approaching the problem the wrong way?
I've literally spent days trying to work this out; I'm losing sleep over it! I am very new to coding, so sorry if this has been covered already. I've searched many forums far and wide and have not been able to find a solution.
Your script is apparently printing errors to stderr, not stdout, so you need to add 2>&1 to redirect stderr to the same file as stdout. Try this:
for each1 in "${scr1arr1[@]}"
do
$each1    # unquoted on purpose: each element is a whole command line
done > log.txt 2>&1
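If you also want each result tagged with the URL that produced it, here's a sketch that loops over the original URL array directly instead of prebuilt command strings (assuming your URLs are in array1, as in your script):
for url in "${array1[@]}"
do
printf '%s: ' "$url"              # prefix the line with its URL
python3 extract.py "$url" 2>&1    # errors and numbers go to the same place
done > log.txt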
The US Naval Observatory has an API that outputs a JSON file containing the sunrise and sunset times, among other things, as documented here.
Here is an example of the output JSON file:
{
"error":false,
"apiversion":"2.0.0",
"year":2017,
"month":6,
"day":10,
"dayofweek":"Saturday",
"datechanged":false,
"lon":130.000000,
"lat":30.000000,
"tz":0,
"sundata":[
{"phen":"U", "time":"03:19"},
{"phen":"S", "time":"10:21"},
{"phen":"EC", "time":"10:48"},
{"phen":"BC", "time":"19:51"},
{"phen":"R", "time":"20:18"}],
"moondata":[
{"phen":"R", "time":"10:49"},
{"phen":"U", "time":"16:13"},
{"phen":"S", "time":"21:36"}],
"prevsundata":[
{"phen":"BC","time":"19:51"},
{"phen":"R","time":"20:18"}],
"closestphase":{"phase":"Full Moon","date":"June 9, 2017","time":"13:09"},
"fracillum":"99%",
"curphase":"Waning Gibbous"
}
I'm relatively new to using JSON, but I understand that everything in square brackets after "sundata" is a JSON array (please correct me if I'm wrong). So I searched for instructions on how to get a value from a JSON array, without success.
I have downloaded the file to my system using:
wget -O usno.json "http://api.usno.navy.mil/rstt/oneday?ID=iOnTheSk&date=today&tz=0&coords=30,130"
I need to extract the time (in HH:MM format) from this line:
{"phen":"S", "time":"10:21"},
...and then use it to create a variable (that I will later write to a separate file).
I would prefer to use Bash if possible, preferably using a JSON parser (such as jq) if it'll be easier to understand/implement. I'd rather not use Python (which was suggested by a lot of the articles I have read previously) if possible as I am trying to become more familiar with Bash specifically.
I have examined a lot of different webpages, including answers on Stack Overflow, but none of them have specifically covered an array line with two key/value pairs per line (they've only explained how to do it with only one pair per line, which isn't what the above file structure has, sadly).
Specifically, I have read these articles, but they did not solve my particular problem:
https://unix.stackexchange.com/questions/177843/parse-one-field-from-an-json-array-into-bash-array
Parsing JSON with Unix tools
Parse json array in shell script
Parse JSON to array in a shell script
What is JSON and why would I use it?
https://developers.squarespace.com/what-is-json/
Read the json data in shell script
Thanks in advance for any thoughts.
Side note: I have managed to do this with a complex 150-odd line script made up of "sed"s, "grep"s, "awk"s, and whatnot, but obviously if there's a one-liner JSON-native solution that's more elegant, I'd prefer to use that as I need to minimise power usage wherever possible (it's being run on a battery-powered device).
(Side-note to the side-note: the script was so long because I need to do it for each line in the JSON file, not just the "S" value)
If you already have jq you can easily select your desired time with:
sun_time=$(jq '.sundata[] | select(.phen == "S").time' usno.json)
echo $sun_time
# "10:21"
If you must use "regular" bash commands (really, use jq):
wget -O - "http://api.usno.navy.mil/rstt/oneday?ID=iOnTheSk&date=today&tz=0&coords=30,130" \
| sed -n '/^"sundata":/,/}],$/p' \
| sed -n -e '/"phen":"S"/{s/^.*"time":"//'\;s/...$//\;p}
Example:
$ wget -O - "http://api.usno.navy.mil/rstt/oneday?ID=iOnTheSk&date=today&tz=0&coords=30,130" | sed -n '/^"sundata":/,/}],$/p' | sed -n -e '/"phen":"S"/{s/^.*"time":"//'\;s/...$//\;p}
--2017-06-10 08:02:46-- http://api.usno.navy.mil/rstt/oneday?ID=iOnTheSk&date=today&tz=0&coords=30,130
Resolving api.usno.navy.mil (api.usno.navy.mil)... 199.211.133.93
Connecting to api.usno.navy.mil (api.usno.navy.mil)|199.211.133.93|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/json]
Saving to: ‘STDOUT’
- [ <=> ] 753 --.-KB/s in 0s
2017-06-10 08:02:47 (42.6 MB/s) - written to stdout [753]
10:21
I'm writing a program which requires knowledge of the current load on the system, and the activity of any users (it's a load balancer).
This is a university assignment, and I am required to use the w command. I'm having a hard time parsing this command's output because it is very verbose. Any suggestions on what I can do would be appreciated. This is a small part of the program, and I am free to use whatever method I like.
The most condensed version of w that still has the information I require is `w -u -s -f`, which produces this:
10:13:43 up 9:57, 2 users, load average: 0.00, 0.00, 0.00
USER TTY IDLE WHAT
fsm tty7 22:44m x-session-manager
fsm pts/0 0.00s w -u -s -f
So out of that, I am interested in the first number after "load average" and the smallest idle time (so I will need to parse them all).
My background process will call w, so the fact that w itself has the lowest idle time will not matter (all I will see is the tty time).
Do you have any ideas?
Thanks
(I am allowed to use alternative unix commands, like grep, if that helps).
Are you allowed to use other Unix commands? You could use grep, sed or head/tail to get the lines you need, and cut to split them up as needed.
Check out: http://www.gnu.org/s/libc/manual/html_node/Regexp-Subexpressions.html#Regexp-Subexpressions
Use regular expressions to match [0-9]+\.[0-9]{2} on the first line. You may have to fiddle with which characters are escaped. That will give you 3 load averages.
The remaining output is column-based, so if you count off the string positions from w, you'll be able to just strncpy the interesting bits.
Another possible theory (which sounds like it goes against the assignment, but I'd keep it in mind) is to go grab the source code of w and hack it up to just tell you the information via function calls. If you're feeling really hardcore, you can learn all the library api calls and do it directly that way.
I found I can use a combination of commands, like so:
w -u -s -f | grep load | cut -d " " -f 11
and
w -u -s -f | grep tty | cut -d " " -f 13
The first takes the output of w, uses grep to select only the line containing "load", and then cuts everything except the 11th chunk of data (delimiter is a space), which is the first load number, with a trailing comma.
The second does something similar for the user (tty) lines; if there are multiple sessions, it yields a list.
This is easy enough to parse, unless someone has an objection or a suggestion to improve it.
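For something a bit more robust than counting single spaces (w pads its columns with variable whitespace), awk splits on runs of whitespace; a sketch based on the `w -u -s -f` layout shown above:
# first load average, with the trailing comma stripped
w -u -s -f | awk '/load average/ { sub(",", "", $(NF-2)); print $(NF-2) }'
# idle time (third column) of every session, skipping the two header lines
w -u -s -f | awk 'NR > 2 { print $3 }'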