My organization is using Nagios with the check_mk plugin to monitor our nodes. My question is: is it possible to run a manual check from the command line? It is important, process-wise, to be able to test a configuration change before deploying it.
For example, I've prepared a configuration change which uses the ps.perf check type to check the number of httpd processes on our web servers. The check looks like this:
checks = [
( ["web"], ALL_HOSTS, "ps.perf", "Number of httpd processes", ( "/usr/sbin/httpd", 1, 2, 80, 100 ) )
]
I would like to test this configuration change before committing and deploying it.
Is it possible to run this check via the command line, without first adding it to main.mk? I'm envisioning something like:
useful_program -H my.web.node -c ps.perf -A /usr/sbin/httpd,1,2,80,100
I don't see a way to do this in the check_mk documentation, but I'm hoping there is a way to achieve it.
Thanks!
That is easy to check.
Just make your config changes and then run:
cmk -nv HOSTNAME
The -n option runs all the checks without submitting the results, and -v prints the output.
So you can see the same results you would later see in the GUI.
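For the ps.perf example above, a test session might look like this (the main.mk path and host name are assumptions):
$ vi /etc/check_mk/main.mk    # add the checks = [...] entry from the question
$ cmk -nv my.web.node         # dry run: executes the checks for the host and prints the results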
List the check:
$ check_mk -L | grep ps.perf
If it lists ps.perf, then run the following command to inventorize that check on the host:
$ check_mk --checks=ps.perf -I HOSTNAME
I'm passing a variable to my .dacpac but the text received is not what I passed. Example command:
sqlpackage /v:TextTest="abc]123" /Action:Publish /SourceFile:"my.dacpac" /TargetDatabaseName:MyDb /TargetServerName:"."
My variable $(TextTest) comes out as "abc]]123" instead of the original "abc]123".
Is there anything I can do to prevent SqlPackage from corrupting my input variables before they are passed to the .dacpac scripts?
Unfortunately, I don't think there is a good answer. This appears to be a very old bug. I'm seeing references to this issue going back 10 years.
Example A: https://web.archive.org/web/20220831180208/https://social.msdn.microsoft.com/forums/azure/en-US/f1d153c2-8f42-4148-b313-3449075c612f/sql-server-database-project-sqlcmd-variables-with-closing-square-brackets
They mention a "workaround" in the post, but they link to a Microsoft Connect issue which no longer exists and is not available on archive.org.
My best guess is that the "workaround" is to generate the deploy script rather than publishing, and then manually modify the variable value in the script...which is not really a workaround if you are working on a build/release pipeline or any sort of automation.
I tested whether it would make any difference to call Microsoft.SqlServer.Dac.DacServices.Publish() directly (via the dbatools PowerShell module), but unfortunately the problem exists there as well.
I also tested it against every keyboard-accessible symbol, and the closing square bracket is the only character it seems to have a problem with.
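For reference, a sweep like that can be scripted; a rough sketch (reusing the dacpac and server from the question, and assuming your dacpac records or prints the received value somewhere so you can compare it):
for sym in '!' '#' '$' '%' '^' '&' '*' '(' ')' '[' ']' '{' '}' ';' ',' '.' '<' '>' '/' '?' '~'; do
    sqlpackage /v:TextTest="abc${sym}123" /Action:Publish /SourceFile:"my.dacpac" /TargetDatabaseName:MyDb /TargetServerName:"."
done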
Another option, though still not great, is to generate the deployment script, then execute it using SQLCMD.EXE.
So for example this would work:
sqlpackage /Action:Script `
/DeployScriptPath:script.sql `
/SourceFile:foobar.dacpac `
/TargetConnectionString:'Server=localhost;Database=foobar;Uid=sa;Password=yourStrong(!)Password' `
/p:CommentOutSetVarDeclarations=True
SQLCMD -S 'localhost' -d 'foobar' -U 'sa' -P 'yourStrong(!)Password' `
-i .\script.sql `
-v TextTest = "abc]123" `
-v DatabaseName = "foobar"
/p:CommentOutSetVarDeclarations=True - This setting is key; without it, the values you pass to SQLCMD with -v are overridden by the :setvar declarations in the generated file. Just make sure you specify ALL the variables, not just the one you need. So open the file, look at what is commented out, and make sure you are supplying everything that is needed.
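For context, with that property set, the header of the generated script has its variable declarations commented out, roughly like this (the default values here are illustrative):
-- :setvar TextTest "default"
-- :setvar DatabaseName "foobar"
Each variable listed there needs a matching -v argument when you run SQLCMD.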
It's not a great option...but it's at least scriptable and doesn't require manual intervention.
My beloved web radio has an icecast2 instance and it just works. We also have a Matomo instance to track visits on our WordPress website, using only Free/Libre and open source software.
The main issue is that, since Matomo tracks visits via JavaScript, direct visits to the web-radio stream are not intercepted by Matomo by default.
How can I use Matomo to track visits to Icecast2 audio streams?
Yep, it's possible. Here's my way.
First of all, try the Matomo internal import script. Be sure to set your --idsite= and the correct path to your Matomo installation:
su www-data -s /bin/bash
python2.7 /var/www/matomo/misc/log-analytics/import_logs.py --show-progress --url=https://matomo.example.com --idsite=1 --recorders=2 --enable-http-errors --log-format-name=icecast2 --strip-query-string /var/log/icecast2/access.log
NOTE: if you see this error:
[INFO] Error when connecting to Matomo: HTTP Error 400: Bad Request
be sure to have all the needed plugins activated:
Administration > System > Plugins > Bulk plugin
So, if the script works, it should start printing something like this:
0 lines parsed, 0 lines recorded, 0 records/sec (avg), 0 records/sec (current)
Parsing log /var/log/icecast2/access.log...
1013 lines parsed, 200 lines recorded, 99 records/sec (avg), 200 records/sec (current)
If so, immediately stop the script with CTRL+C, to avoid importing duplicate entries before the definitive solution is in place.
Now we need to run this script every time the log is rotated, just before the rotation happens.
The official documentation suggests a crontab, but I don't recommend that solution; I suggest configuring logrotate instead.
Edit the file /etc/logrotate.d/icecast2. From:
/var/log/icecast2/*.log {
...
weekly
...
}
To:
/var/log/icecast2/*.log {
...
daily
prerotate
su www-data -s /bin/bash --command 'python2.7 ... /var/log/icecast2/access.log' > /var/log/logrotate-icecast2-matomo.log
endscript
...
}
IMPORTANT: In the above example replace ... with the right command.
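For example, using the exact import command from the first step, the block becomes:
prerotate
    su www-data -s /bin/bash --command 'python2.7 /var/www/matomo/misc/log-analytics/import_logs.py --show-progress --url=https://matomo.example.com --idsite=1 --recorders=2 --enable-http-errors --log-format-name=icecast2 --strip-query-string /var/log/icecast2/access.log' > /var/log/logrotate-icecast2-matomo.log
endscript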
Now you can also try it manually:
logrotate -vf /etc/logrotate.d/icecast2
From another terminal you should be able to watch its progress in real time with:
tail -f /var/log/logrotate-icecast2-matomo.log
If that works, everything will run automatically from now on, importing all visits every day, without duplicates and without missing any lines.
More documentation here about the import script itself:
https://github.com/matomo-org/matomo-log-analytics
More documentation here about logrotate:
https://linux.die.net/man/8/logrotate
I'm trying to execute custom Giraph code from the Eclipse IDE, and when I run it I get: Exception in thread "main" java.lang.IllegalArgumentException: checkLocalJobRunnerConfiguration: When using LocalJobRunner, must have only one worker since only 1 task at a time!
So I want to set giraph.SplitMasterWorker=false. How and where do I set it?
Pass -ca giraph.SplitMasterWorker=false to your application as an argument.
If you are running Giraph on a single-node cluster, then passing "-ca giraph.SplitMasterWorker=false" will help. However, if you are running Giraph on a multi-node cluster, such as AWS EC2 on Hadoop 2.x.x, then I definitely recommend modifying the mapred-site.xml file instead, adding a parameter such as the mapred.job.tracker value.
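A sketch of such an entry in mapred-site.xml (host and port are placeholders for your own job tracker):
<property>
    <name>mapred.job.tracker</name>
    <value>jobtracker.example.com:54311</value>
</property>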
giraph.SplitMasterWorker=false is the variable you have to set when calling the Giraph runner; it can be passed in as a custom argument under -ca. Also, I think you are using the -w parameter; if you are running on your local machine, it should not be more than 1, since there are no slave nodes to act as workers.
E.g. hadoop jar /usr/local/giraph1.0/giraph-examples/target/giraph-examples-1.1.0-for-hadoop-2.7.0-jar-with-dependencies.jar org.apache.giraph.GiraphRunner org.apache.giraph.examples.ConnectedComponentsComputation -vif org.apache.giraph.io.formats.IntIntNullTextInputFormat -vip -vof org.apache.giraph.io.formats.IdWithValueTextOutputFormat -op -w 5 -ca giraph.SplitMasterWorker=false
I'm trying to specify an FBX file in MEL using the command
file -f -pmt 0 -options "v=0;" -typ "FBX" -o
On one computer this works great. On another, it fails but DOES work if I use
-typ "Fbx"
I think I'd like to query for the supported translators in my script, then either select the correct one or report an error. Is this possible? Am I mis-diagnosing the problem?
MEL has a command called pluginInfo. You could write a simple function that returns the proper spelling based on it: pluginInfo -query -version "fbxmaya" will give you the version of the FBX plugin. I haven't used MEL in a while, so I'm not going to try to make this perfect, but maybe something like:
string $fbxVersion = `pluginInfo -query -version "fbxmaya"`;
string $fbxType;
if ($fbxVersion == "2016.0")  // placeholder: the version that expects "FBX"
    $fbxType = "FBX";
else
    $fbxType = "Fbx";
Then just plug that variable into file -f -pmt 0 -options "v=0;" -typ $fbxType -o.
It might be a different version of FBX on that machine. You'd have to add another line that determines the version of FBX on that particular machine and pipes in the correct spelling.
This is the process we perform manually.
$ sudo su - gvr
[gvr/DB:DEV3FXCU]/home/gvr>
$ ai_dev.env
Gateway DEV3 $
$ gw_report integrations long
report is ******
Now I am attempting to automate this process using a shell script:
#!/bin/ksh
sudo su - gvr
. ai_dev3.env
gw_report integrations long
but this is not working; the script gets stuck after entering the environment, at the Gateway DEV3 $ prompt.
You're not running the same commands in the two examples - gw_report long != gw_report integrations long. Maybe the latter takes much longer (or hangs).
Also, in the original code you run ai_dev.env and in the second you source it. Any variables set when running a script are gone when returning from that script, so I suspect this accounts for the different behavior.
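The more immediate problem is that sudo su - gvr in a script starts an interactive shell: the script sits at that shell's prompt, and the remaining lines only run after it exits, back in the original user's context. The usual fix is to hand the whole sequence to the new shell in one go; a sketch, assuming ai_dev3.env is in gvr's home directory:
#!/bin/ksh
# Run everything inside the gvr login shell; nothing after an
# interactive su would otherwise run as gvr.
sudo su - gvr -c '. ./ai_dev3.env && gw_report integrations long'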