Using gdbus to start a systemd service - dbus

I've created a new systemd service that I would like to be able to activate via a D-Bus call. The service simply executes a shell script.
I've defined the service here:
/lib/systemd/system/testamundo.service
[Unit]
Description=Testamundo
[Service]
Type=dbus
BusName=org.freedesktop.testamundo
ExecStart=/home/test/systemd/testamundo.sh
I've also defined a D-Bus service for it here:
/usr/share/dbus-1/system-services
[D-BUS Service]
Name=org.freedesktop.testamundo
Exec=/usr/sbin/console-kit-daemon --no-daemon
User=root
SystemdService=testamundo.service
I am attempting to start it using gdbus, this is the command I'm trying to use:
sudo gdbus call --system --dest org.freedesktop.systemd1 --object-path /org/freedesktop/systemd1 --method org.freedesktop.systemd1.StartUnit "org.freedesktop.testamundo"
If I use --system as above, the command returns an Unknown Method error; if I use --session, it returns exit code 1 from the child process. When I look at journalctl with both --session and --system I can see the command, but beyond that there is no additional information.
Appreciate any thoughts or advice, thanks!

Your dbus command is using non-existing interfaces. First of all, the method is org.freedesktop.systemd1.Manager.StartUnit, not org.freedesktop.systemd1.StartUnit. Second, StartUnit takes 2 parameters: the unit name and the start mode. Ref: http://www.freedesktop.org/wiki/Software/systemd/dbus/
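A corrected call would look roughly like this (a sketch, assuming the unit is named testamundo.service as in the question and using the "replace" start mode):
sudo gdbus call --system --dest org.freedesktop.systemd1 --object-path /org/freedesktop/systemd1 --method org.freedesktop.systemd1.Manager.StartUnit "testamundo.service" "replace"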
You have defined a dbus service, but you are bypassing dbus by directly asking systemd to activate the service. Another note: for bus activation, dbus actually sends a signal to systemd rather than making a method call.
You have everything in place; if you just do an introspection on your service, it should be activated.
sudo gdbus call --system --dest org.freedesktop.testamundo --object-path /org/freedesktop/testamundo --method org.freedesktop.DBus.Introspectable.Introspect

Related

Connecting to an external HTTP api behind a proxy from nifi

I have an apache/nifi:latest instance spun up inside an Amazon Linux 2 EC2 instance. For reference, see this guide: here
I have a QuerySalesforceObject processor, ver. 1.18.0, that makes use of StandardOauth2AccessTokenProvider.
The oauth2 provider url is configured at https://test.salesforce.com/services/oauth2/token
I can curl this url from the box and from inside the docker container just fine (I don’t get a timeout).
[root@ip-10-229-18-107 ~]# docker exec -it nifi_container_persistent /bin/sh
printenv | grep -i proxy
HTTPS_PROXY=http://proxy.MY_DOMAIN.com:3128
no_proxy=localhost,127.0.0.1,MY_DOMAIN.com,.amazonaws.com
NO_PROXY=localhost,127.0.0.1, MY_DOMAIN.com,.amazonaws.com
https_proxy=http://proxy.MY_DOMAIN.com:3128
http_proxy=http://proxy.MY_DOMAIN.com:3128
HTTP_PROXY=http://proxy.MY_DOMAIN.com:3128
curl https://test.salesforce.com/services/oauth2/token
{"error":"unsupported_grant_type","error_description":"grant type not supported"}#
But when I run the task, oauth2 fails with an error
java.io.UncheckedIOException: OAuth2 access token request failed
Caused by: java.net.SocketTimeoutException: connect timed out
This leads me to believe the proxy settings are not being honored by the class. How can I fix this?
Here’s more info on this class: https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-oauth2-provider-nar/1.17.0/org.apache.nifi.oauth2.StandardOauth2AccessTokenProvider/index.html
The standard way to interface with HTTP resources behind a proxy in NiFi is via StandardProxyConfigurationService: https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-proxy-configuration-nar/1.19.1/org.apache.nifi.proxy.StandardProxyConfigurationService/index.html
If a component does not expose this property, it does not support a proxy.
You can try bootstrapping proxy settings into NiFi with /opt/nifi/nifi-current/conf/bootstrap.conf, but there is no standard and proxy support is not guaranteed; the implementation (bugs and all) depends on the underlying library. aws-java-sdk ver. 1.x, for example, has a bug where nonProxyHosts is not honoured: https://github.com/aws/aws-sdk-java/issues/2797. For example, in bootstrap.conf:
java.arg.18=-Dhttp.nonProxyHosts="foo|localhost|*.bar.org"
java.arg.19=-Dhttp.proxyHost=proxy.foo.com
java.arg.20=-Dhttp.proxyPort=123
java.arg.21=-Dhttp.proxyUser=foo
java.arg.22=-Dhttp.proxyPassword=bar
java.arg.23=-Dhttps.nonProxyHosts="foo|localhost|*.bar.org"
java.arg.24=-Dhttps.proxyHost=proxy.foo.com
java.arg.25=-Dhttps.proxyPort=123
java.arg.26=-Dhttps.proxyUser=foo
java.arg.27=-Dhttps.proxyPassword=bar

Monitor Nagios client services only when NRPE on client is up

Is there any way to achieve the scenario below in Nagios using NRPE?
The Nagios box will first check whether NRPE on the client box is up and, if yes, it will check the other services configured for that client. If NRPE is down on the client, it will send a notification for NRPE and will stop checking the rest of the services configured for that client box until NRPE comes back up.
This setting is what you are looking for. Look at your nagios.cfg:
# DISABLE SERVICE CHECKS WHEN HOST DOWN
# This option will disable all service checks if the host is not in an UP state
#
# While desirable in some environments, enabling this value can distort report
# values as the expected quantity of checks will not have been performed
host_down_disable_service_checks=1
Check your host's status via check_nrpe. Create a new command in your config if you don't have it already:
define command{
command_name check-host-alive-nrpe
command_line $USER1$/check_nrpe -H $HOSTADDRESS$
}
Now, use this command in your host definition, something like this:
define host {
host_name your_server
address your_server
use generic-host
check_command check-host-alive-nrpe
}
When NRPE on the remote host stops responding due to some problem, the host will be in a CRITICAL state and the remote service checks will be temporarily disabled.
After you configure this, don't forget to restart your Nagios service.
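For example (a sketch only; the unit name may be nagios, nagios4, or similar depending on your distribution):
sudo systemctl restart nagios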
PS: This setting works only with Nagios 4+
I achieved this via a service dependency where all NRPE checks depend on an NRPE service availability check.
define servicedependency{
hostgroup linux-servers
# host_name xyz.example.com
service_description check_nrpe_alive
dependent_service_description check_disk,check_mem,check_load,check_time
execution_failure_criteria w,c,u
notification_failure_criteria u,w,c,o
}
Below is the check_nrpe_alive command definition.
define command{
command_name check_nrpe_alive
command_line $USER1$/check_nrpe -H $HOSTADDRESS$
}
I also needed to set soft_state_dependencies=1 in nagios.cfg:
# SOFT STATE DEPENDENCIES
# This option determines whether or not Nagios will use soft state
# information when checking host and service dependencies. Normally
# Nagios will only use the latest hard host or service state when
# checking dependencies. If you want it to use the latest state (regardless
# of whether its a soft or hard state type), enable this option.
# Values:
# 0 = Don't use soft state dependencies (default)
# 1 = Use soft state dependencies
# Changing for service dependency
#soft_state_dependencies=0
soft_state_dependencies=1
When the NRPE service on the client is in a CRITICAL state, Nagios will only send out a notification for check_nrpe_alive, not for any dependent service checks. This was tested on Nagios Core 4.4.6.

systemd ignores services from overlayFS

I want to build a system with the applications on a separate partition, an app filesystem. All binaries, configs and service files which belong to the application should be in this app-fs.
I'm using the following versions: kernel 4.9.x, systemd v234.
The app partition is mounted at /opt; it includes the following files:
/opt/usr/bin/app-binary
/opt/etc/systemd/system/multiuser.target/link_2_app.service
/opt/lib/systemd/system/app.service
Here is the service file:
[Unit]
Description=The application description.
After=syslog.target basic.target
[Service]
ExecStart=/opt/usr/bin/app-binary
Type=simple
[Install]
WantedBy=multi-user.target
To synchronize the files with the root filesystem I created 2 overlays; these would be the /etc/fstab entries (sorry for the format, one line didn't work):
/dev/app-partition /opt auto defaults,x-systemd.mount 0 2
overlay /etc overlay defaults,x-systemd.mount,x-systemd.after=opt.mount,lowerdir=/etc,upperdir=/opt/etc,workdir=/work/etc 0 2
overlay /lib/systemd/system overlay defaults,x-systemd.mount,x-systemd.after=opt.mount,lowerdir=/lib/systemd/system,upperdir=/opt/lib/systemd/system,workdir=/work/lib 0 2
This is handled before local-fs.target is reached.
Result
I can start the app successfully, but only manually with systemctl start app.service. The status from "systemctl status app.service" says it is enabled. But the app does not start at boot time, and systemd does not give any message about trying to start it.
Questions
Is there a way to debug this behaviour? When does systemd check the service files? Is there a way to trigger it again? Are there other ways to handle this use case with systemd?
systemd reads the unit files once at startup; an init script which creates the overlayFS before systemd starts can handle this use case.
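A minimal sketch of such a script (hypothetical; it would run as the initial init, e.g. via init=/sbin/preinit on the kernel command line, and reuses the mount parameters from the question):
#!/bin/sh
# Hypothetical pre-init: mount the app partition and the overlays,
# then hand control to systemd so it sees the merged unit directories.
mount /dev/app-partition /opt
mount -t overlay overlay -o lowerdir=/etc,upperdir=/opt/etc,workdir=/work/etc /etc
mount -t overlay overlay -o lowerdir=/lib/systemd/system,upperdir=/opt/lib/systemd/system,workdir=/work/lib /lib/systemd/system
exec /lib/systemd/systemd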
Another idea (but not tested) would be:
systemctl daemon-reload
I tested this; it does not work. In my situation I use systemd-networkd with an overlay.
For disabling autostart of services, using
ConditionPathIsSymbolicLink=#OVERLAY_UPPER_LAYER#/etc/systemd/system/../XX.service
as an additional condition in a service drop-in seems to work, as illustrated below.
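For illustration only, such a drop-in could look like this (the drop-in path is an assumption; #OVERLAY_UPPER_LAYER# and XX.service are the placeholders used above):
# Hypothetical drop-in at /etc/systemd/system/XX.service.d/overlay-condition.conf
[Unit]
ConditionPathIsSymbolicLink=#OVERLAY_UPPER_LAYER#/etc/systemd/system/../XX.service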

Google App Engine endpointscfg.py command starting 1.8.6 does not accept argument -f

This problem just started in Google App Engine version 1.8.6:
When executing the command (based on the instructions at https://developers.google.com/appengine/docs/python/endpoints/gen_clients):
endpointscfg.py get_client_lib java -o . -f rest your_module.YourApi
We get the error:
endpointscfg.py: error: unrecognized arguments: -f
The command with the -f argument executes without any issue on Google App Engine version 1.8.5.
With 1.8.6, I don't know how to generate the client endpoint library because of this error. If you have a workaround, please help.
When you use get_client_lib to generate a client library, the REST format is the only option. So if you intend to generate a REST client library, simply remove the "-f rest" option and you will get your REST client without any problem.
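In other words, the command from the question becomes (the same invocation with the -f flag simply dropped):
endpointscfg.py get_client_lib java -o . your_module.YourApi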
If you want to use an RPC client (which is currently only supported in the iOS client), please refer to https://developers.google.com/appengine/docs/python/endpoints/consume_ios for instructions.
I think one piece might be missing from the documentation above. In order to get the api-v1-rpc.discovery file, you need to run the get_discovery_doc command like the following:
endpointscfg.py get_discovery_doc -o . -f rpc your_module.YourApi
Hope it helps.

Calling url when apache service starts

Is it possible to call a url when the apache service starts?
Why not just add a line to your apachectl script to call a URL via wget or similar?
I presume you're doing this to test startup behaviour? You can check the error code from wget (or whichever tool you use) and then take appropriate action, as in the sketch below.
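A minimal sketch, assuming a snippet added to the start) case of apachectl after httpd is launched (the URL is a placeholder):
# Hypothetical addition to apachectl's "start" case.
wget -q -O /dev/null --tries=3 --timeout=5 "http://localhost/your-startup-url"
status=$?
if [ "$status" -ne 0 ]; then
    echo "Startup URL check failed with exit code $status" >&2
fi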
