gdbus introspect with --recurse AND --xml options together does not recurse

I would like to recursively explore an interface with the --recurse option:
gdbus introspect --system --dest "org.fedoraproject.FirewallD1" --object-path "/org/fedoraproject/FirewallD1" --recurse
and output the result as XML with the --xml option:
gdbus introspect --system --dest "org.fedoraproject.FirewallD1" --object-path "/org/fedoraproject/FirewallD1" --xml
However, both together give only the top-level interface as XML:
gdbus introspect --system --dest "org.fedoraproject.FirewallD1" --object-path "/org/fedoraproject/FirewallD1" --recurse --xml
The end goal is to use this file with CreateInterface in java-dbus to create my class files. (That tool does not have a recursion option for its object-path file generation.)
Is there a way of recursively generating the dbus introspection XML file?
I am on Red Hat 7, BTW.
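One workaround is to walk the object tree yourself and concatenate the per-object XML. The sketch below is an assumption on my part (not a documented gdbus feature): it relies on child objects showing up as <node name="..."/> elements in each XML document, which is how gdbus reports them.

```shell
#!/bin/bash
# Recursively dump introspection XML for every object under a root path.
DEST="org.fedoraproject.FirewallD1"

walk() {
    local path="$1" xml
    xml=$(gdbus introspect --system --dest "$DEST" --object-path "$path" --xml)
    printf '%s\n' "$xml"
    # Child objects appear as <node name="..."/>; recurse into each one.
    printf '%s\n' "$xml" |
        sed -n 's/.*<node name="\([^"]*\)".*/\1/p' |
        while read -r child; do
            if [ "$path" = "/" ]; then
                walk "/$child"
            else
                walk "$path/$child"
            fi
        done
}

walk "/org/fedoraproject/FirewallD1"
```

Note that this emits each object's XML as a separate document, so you may need to run CreateInterface once per object, or merge the fragments by hand.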

Related

Use curl to post file from pipe

How might I take the output from a pipe and use curl to post that as a file?
E.g. the following works:
curl -F 'file=@data/test.csv' -F 'filename=test.csv' https://mydomain#apikey=secret
I'd like to get the file contents from a pipe instead, but I can't quite figure out how to specify it as a file input. My first guess is -F 'file=@-', but that's not quite right.
cat data/test.csv | curl -F 'file=@-' -F 'filename=test.csv' https://mydomain#apikey=secret
(Here cat is just a substitute for a more complex sequence of events that would get the data)
Update
The following works:
cat test/data/test.csv | curl -XPOST -H 'Content-Type:multipart/form-data' --form 'file=@-;filename=test.csv' $url
If you add --trace-ascii - to the command line you'll see that curl already uses that Content-Type by default (and -XPOST doesn't help either). It was rather your fixed -F option that did the trick!

Using Secrets API with dbus-send

I'm trying to figure out how to get a password from the keyring using dbus-send, but I'm struggling to understand what the session parameter is.
Here's where I've got to:
#!/bin/bash
# Find key path
KEY_PATH=$(dbus-send --dest=org.freedesktop.secrets --print-reply=literal /org/freedesktop/secrets org.freedesktop.Secret.Service.SearchItems dict:string:string:"mount-point","/home/s/.mozilla/firefox" | grep -Eo '/\S+')
# Unlock keyring
RESULT=$(dbus-send --dest=org.freedesktop.secrets --print-reply=literal /org/freedesktop/secrets org.freedesktop.Secret.Service.Unlock array:objpath:"$KEY_PATH" | grep -Eo '/\S+')
# If unlocked...
if [ "$RESULT" = "$KEY_PATH" ]; then
    # Get password
    PASSWORD=$(dbus-send --dest=org.freedesktop.secrets --print-reply=literal /org/freedesktop/secrets org.freedesktop.Secret.Service.GetSecrets array:objpath:"$KEY_PATH" objpath:<WHAT IS SESSION?>)
    # Mount ecryptfs firefox directory
    echo "$PASSWORD" | ecryptfs-simple -o key=passphrase,ecryptfs_cipher=aes,ecryptfs_key_bytes=32,ecryptfs_passthrough=no,ecryptfs_enable_filename_crypto=yes,no_sig_cache=yes /home/s/.mozilla/.firefox-ecryptfs /home/s/.mozilla/firefox
    firefox "$@"
fi
I'm lost as to how to get a session to fetch the password.
The session needs to be created using:
org.freedesktop.Secret.Service.OpenSession (
IN String algorithm,
IN Variant input,
OUT Variant output,
OUT ObjectPath result);
https://specifications.freedesktop.org/secret-service/latest/re01.html
Here is an example of creating a non-encrypted session. Be aware that the password returned by GetSecrets will be plain text, as it uses a non-encrypted session:
dbus-send --dest=org.freedesktop.secrets --print-reply=literal /org/freedesktop/secrets org.freedesktop.Secret.Service.OpenSession string:plain variant:string:''
The output is the objpath to the created session:
variant /org/freedesktop/secrets/session/s31
Then, theoretically, you can pass the session to GetSecrets. For example:
dbus-send --dest=org.freedesktop.secrets --print-reply=literal /org/freedesktop/secrets org.freedesktop.Secret.Service.GetSecrets array:objpath:/org/freedesktop/secrets/collection/login/6 objpath:/org/freedesktop/secrets/session/s31
Note: /org/freedesktop/secrets/collection/login/6 is the object path returned by SearchItems.
However, this does not work with dbus-send. I think this is because the session is closed as soon as dbus-send exits.
If you use d-feet, the session is retained until the d-feet window is closed, so you will be able to get the password using d-feet. But I understand that you want to automate it.
I suggest you use Python 3's keyring module, which can fetch the password over an encrypted session.
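As a rough sketch of that suggestion (assuming the third-party keyring package is installed and that get_password's service/username arguments map onto how your secret item was stored; both are assumptions, not facts from this thread), the fetch could be embedded in the shell script like this:

```shell
#!/bin/bash
# Hypothetical sketch: fetch the secret via python3's keyring module,
# which negotiates an encrypted session with the SecretService backend.
# The attribute names and values here are guesses carried over from the
# SearchItems call above; adjust to match how the item was stored.
PASSWORD=$(python3 - <<'EOF'
import keyring  # third-party: pip install keyring secretstorage
pw = keyring.get_password("mount-point", "/home/s/.mozilla/firefox")
print(pw if pw is not None else "")
EOF
)
```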

nagios check_http HTTP CRITICAL - Unable to open TCP socket, works fine from command line

I have the following service definition:
define service{
        use                     my-service      ; Name of service template to use
        host_name               dra
        service_description     https://www.example.com
        check_command           check_http!-I my.ip.address --ssl -H www.example.com
        notifications_enabled   1
        retry_check_interval    2
        normal_check_interval   5
        contact_groups          myadmins
}
The service check keeps failing with
Name or service not known
HTTP CRITICAL - Unable to open TCP socket
However, if I run check_http from the command line, I get a 200 OK result:
/usr/lib/nagios/plugins/check_http -I my.ip.address --ssl -H www.example.com -v
.....
HTTP OK: HTTP/1.1 200 OK - 9176 bytes in 0.074 second response time |time=0.073543s;;;0.000000 size=9176B;;;0
Note also that the URL in question works just fine from a browser, the certificate is valid, etc. I also use the exact same service definition for a bunch of other sites, and they all work fine. The only thing I can think of is that this remote host is running on DigitalOcean and has a "Floating IP" assigned to it. I tried replacing my.ip.address above (and also in the host definition of the nagios config file) with either the Floating IP or the "standard" IP assigned to the host, and it makes no difference.
How is it possible that the same command would fail when run by nagios, but succeed when run manually?
The answer to my question is: don't use check_http. Instead,
use check_https_hostname, and
make sure that the host_name stanza is the actual hostname,
which requires matching the host_name stanzas in all the service and host definitions in the same cfg file.
So:
define service{
        use                     my-service      ; Name of service template to use
        host_name               www.example.com
        service_description     https://www.example.com
        check_command           check_https_hostname
        notifications_enabled   1
        retry_check_interval    2
        normal_check_interval   5
        contact_groups          myadmins
}
Here is why. It becomes clear from the definitions of check_http and check_https_hostname, which live in the /etc/nagios-plugins/config/http.cfg file in my installation:
# 'check_http' command definition
define command{
        command_name    check_http
        command_line    /usr/lib/nagios/plugins/check_http -H '$HOSTADDRESS$' -I '$HOSTADDRESS$' '$ARG1$'
}

# 'check_https_hostname' command definition
define command{
        command_name    check_https_hostname
        command_line    /usr/lib/nagios/plugins/check_http --ssl -H '$HOSTNAME$' -I '$HOSTADDRESS$' '$ARG1$'
}
You will notice that the -H and -I arguments in check_http get the same value $HOSTADDRESS$, while in check_https_hostname they get $HOSTNAME$ and $HOSTADDRESS$, respectively.
The fact that I built my original command as check_http!-I my.ip.address --ssl -H www.example.com did not really matter. In the end, the /usr/lib/nagios/plugins/check_http command got two values for -I and two for -H, and the second pair was being ignored.
This did break "thanks" to Cloudflare, because the IP address dynamically assigned by Cloudflare to my www.example.com was not the same as the actual host IP address that I had specified in my host definition.
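To make the duplication concrete, here is roughly what the check_http command definition expanded to with my original check_command. This is a reconstruction, not a line taken from the debug log, and "my.ip.address" stands in for the address in the host definition:

```shell
# check_command: check_http!-I my.ip.address --ssl -H www.example.com
# substituting $HOSTADDRESS$ and $ARG1$ into the check_http command_line
# gives (approximately):
/usr/lib/nagios/plugins/check_http -H 'my.ip.address' -I 'my.ip.address' '-I my.ip.address --ssl -H www.example.com'
```

Both -H and -I are already set from $HOSTADDRESS$ before my supposedly overriding arguments are even seen.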
Finally, I wanted to mention that what helped me figure this out was setting
debug_level=-1
debug_verbosity=1
in my /etc/nagios3/nagios.cfg file and then looking through /var/log/nagios3/nagios.debug.
Also, check out all the different variants of the check_http commands in /etc/nagios-plugins/config/http.cfg. There are some very useful ones.

How to convert multiple documents using the Document Conversion service in a bash script?

How can I convert more than one document using the Document Conversion service?
I have between 50 and 100 MS Word and PDF documents that I want to convert using the convert_document API method.
For example, can you supply multiple *.pdf or *.doc files like this?:
curl -u "username":"password" -X POST
-F "config={\"conversion_target\":\"ANSWER_UNITS\"};type=application/json"
-F "file=@\*.doc;type=application/msword"
"https://gateway.watsonplatform.net/document-conversion-experimental/api/v1/convert_document"
That gives an error unfortunately: curl: (26) couldn't open file "*.doc".
I have also tried "file=@file1.doc,file2.doc,file3.doc" but that gives errors as well.
The service only accepts one file at a time, but you can call it multiple times:
#!/bin/bash
USERNAME="<service-username>"
PASSWORD="<service-password>"
URL="https://gateway.watsonplatform.net/document-conversion-experimental/api/v1/convert_document"
DIRECTORY="/path/to/documents"
for doc in "$DIRECTORY"/*.doc
do
    echo "Converting - $doc"
    curl -u "$USERNAME:$PASSWORD" \
         -F 'config={"conversion_target":"ANSWER_UNITS"};type=application/json' \
         -F "file=@$doc;type=application/msword" "$URL"
done
See the Document Conversion documentation and API Reference.

Why is checking a URL failing when run through Icinga?

I created my own command to check a specific URL
define command{
        command_name    check_url
        command_line    /usr/lib/nagios/plugins/check_http -f follow -H '$HOSTNAME$' -I '$HOSTADDRESS$' -u '$ARG1$'
}
If I run my command from the command line, it works:
/usr/lib/nagios/plugins/check_http -f follow -H www.example.com -u http://www.example.com/server-status
HTTP OK: HTTP/1.1 200 OK - 4826 bytes in 0.011 second response time |time=0.010625s;;;0.000000 size=4826B;;;0
But when run through Icinga, I'm getting
HTTP WARNING: HTTP/1.1 404 NOT FOUND - 314 bytes in 0.011 second response time
My guess is that for the check_http plugin's -u option you should provide the path appended after the server name, not the whole URL. E.g.:
/usr/lib/nagios/plugins/check_http -f follow -H www.example.com -u /server-status
Your manual test is not equivalent to your command definition.
The distinction between -H and -I is subtle, but very important.
When I have problems like this, where Icinga abstracts away exactly how it executes the command, I find it helpful to find out precisely what Icinga is executing. I would accomplish this as follows:
Move check_http to a temporary location
# mv /usr/lib/nagios/plugins/check_http /usr/lib/nagios/plugins/check_http_actual
Make a bash script that Icinga will call instead of the actual check_http script
# vi /usr/lib/nagios/plugins/check_http
In that file, create this simple bash script, which just echoes the command-line arguments it was called with, then exits:
#!/bin/bash
echo "$@"
Then of course, make that bash script executable:
# chmod +x /usr/lib/nagios/plugins/check_http
Now trigger the check_http command from Icinga. The return status shown in the Icinga web interface will then show exactly how Icinga is calling check_http. Seeing the raw command should make it obvious what Icinga is doing wrong. Once you correct Icinga's mistake, you can simply move the original check_http script back into place:
# mv /usr/lib/nagios/plugins/{check_http_actual,check_http}
