Is it possible to set custom JVM options (env.java.opts) when submitting a job, without specifying them in the conf/flink-conf.yaml file?
The reason I am asking is that I want to use some custom variables in my log4j configuration. I am also running my job on YARN.
I have tried the following command using the CLI, and it strips off everything from the = sign onwards:
$ flink run -m yarn-cluster -yn 2 -yst -yD env.java.opts="-DappName=myapp -DcId=mycId"
At the moment this is not possible due to the way Flink parses the dynamic properties. Flink assumes that dynamic properties have the form -D<KEY>=<VALUE> and that <VALUE> does not contain any = character, which is clearly too restrictive. For the moment, you therefore have to specify env.java.opts via flink-conf.yaml.
I've opened a JIRA issue to fix this problem.
Update
The problem has been fixed for Flink >= 1.2.2 and >= 1.3.0.
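With one of those versions, a -yD value containing = should no longer be truncated, so a command like the one above should work as-is (a sketch only; the trailing job jar is a placeholder):
$ flink run -m yarn-cluster -yn 2 -yst -yD env.java.opts="-DappName=myapp -DcId=mycId" ./your-job.jar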
A simple solution I tried was passing the configuration parameters from application.properties as program arguments, like below:
~/flink/bin/flink run app.jar --Brokers=Broker1:9093 --TopicName=some-topic
You can also pass in the parameters as a properties file:
~/flink/bin/flink run app.jar -Dspring.config.name=<full-path>/application.properties
Related
I need to specify different Flink settings for different applications. In other words, each application should run with its own flink-conf.yaml file. What is the proper way to do this?
I found some old recommendations to declare FLINK_CONF_DIR pointing to a custom directory with Flink configuration files (for example: How could I override configuration value in Apache Flink?). However, the official Flink documentation does not mention the FLINK_CONF_DIR variable at all (as of Flink 1.13). Therefore I have doubts that this approach is officially recommended and supported by the Flink developers.
UPDATE 1: Details on how the application is run
I am running Flink on YARN in Application mode. Here is how I launch the application:
"$flink_home/bin/flink" \
run-application \
--target yarn-application \
--class com.example.App1
The out-of-the-box Flink configuration is located in the $flink_home/conf directory. As I have several applications App1, App2, ..., I want them to use their respective Flink configurations instead of the out-of-the-box configuration.
TL;DR: The paragraph about FLINK_CONF_DIR was accidentally removed when the Flink on YARN docs were rewritten for the Flink 1.12 release. It is still the intended and supported way to establish per-application settings in YARN clusters.
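For example, a minimal sketch of launching each application with its own configuration directory (the directory path and jar name are placeholders; the directory is assumed to contain that application's flink-conf.yaml, log4j configuration, etc.):
$ export FLINK_CONF_DIR=/path/to/app1-conf
$ "$flink_home/bin/flink" run-application --target yarn-application --class com.example.App1 app1.jar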
Other ways to override the configuration:
You can override the settings specified in the cluster's flink-conf.yaml file with settings you specify on the command line, as described in this answer.
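For example, the CLI accepts individual settings as -D options (a sketch only; the keys and jar name are illustrative):
$ "$flink_home/bin/flink" run-application --target yarn-application -Dstate.backend=filesystem -Dparallelism.default=4 --class com.example.App1 app1.jar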
You can also override specific settings from the global configuration in your code, e.g.:
Configuration conf = new Configuration();
conf.setString("state.backend", "filesystem");
// pass the overrides when creating the execution environment
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(conf);
You can also load all of the settings in a flink-conf.yaml file from your application code, via
FileSystem.initialize(GlobalConfiguration.loadConfiguration("/path/to/conf/directory"));
And with Kubernetes you can mount different ConfigMaps for different applications.
I am using custom Nagios plugins for the first time and am running into this error when I create a service for the plugin.
(No output on stdout) stderr: execvp(/usr/local/nagios/libexec/check_load.py, ...) failed. errno is 2: No such file or directory
The plugin works when I run it on the command line; however, it does not work when it runs within Nagios.
I followed these steps to get the plugin into Nagios
https://assets.nagios.com/downloads/nagiosxi/docs/Managing-Plugins-in-Nagios-XI.pdf
Here is what it looks like in the Nagios UI
The plugin is in the correct path (/usr/local/nagios/libexec), and the resource.cfg file contains the same path.
I tried two separate plugins, both of which work on the command line, and the result is the same error.
The error indicates the file location is incorrect; however, the plugin is in the specified directory and runs with no errors from within that directory.
I am totally stumped and appreciate any help.
For anyone reading this, I solved the problem.
The first time I added the plugin, I forgot to add the .py extension. When I updated the already-created plugin, Nagios still threw the error.
Once I completely deleted the plugin and re-created it, the 'file not found' error went away.
I faced a similar issue when I was trying to add a custom plugin (I had custom plugins in Ruby and Python).
The issue was a missing shebang line at the start of the script (the shebang is what allows the script to be executed as a standalone executable).
For example, if you have a Python plugin custom-plugin.py, make sure the script starts with the shebang #!/usr/bin/env python3. Likewise, if you have other scripts (Ruby, Bash, etc.), add the appropriate interpreter path at the start of those scripts.
Also, check the plugin path for your Nagios version. For my setup the path was /usr/local/nagios/libexec/. Make sure your custom plugin is executable and has the correct ownership and permissions, as shown below.
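A minimal sketch (the file name is an example, and the nagios user/group is assumed to be the default for a Nagios Core install):
chmod +x /usr/local/nagios/libexec/custom-plugin.py
chown nagios:nagios /usr/local/nagios/libexec/custom-plugin.py
# test it the way Nagios would run it
sudo -u nagios /usr/local/nagios/libexec/custom-plugin.py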
Sample custom command definition I used:
define command {
    command_name    check_switch_health
    command_line    /usr/local/nagios/libexec/check_snmp.rb --host $HOSTADDRESS$ --model "$ARG1$" --community "$ARG2$"
}
The above workaround worked for me.
I'm trying z.load in Apache Zeppelin as follows:
%dep
z.load("/zeppelin-0.5.6-incubating-bin-all/lplibs/hive/csv-serde-1.0.5-jar-with-dependencies.jar")
I get an ERROR that says (not sure this is the actual error):
Must be used before SparkInterpreter (%spark) initialized
Hint: put this paragraph before any Spark code and restart Zeppelin/Interpreter
This Zeppelin paragraph is the first one in my notebook, so I'm not sure what it's complaining about.
Right now I can't check your problem, but you should restart the interpreter (by pushing the restart button) before loading the dependency jar file.
There is a chance that the SparkContext has already been started by another notebook.
So, as Kangrok mentioned, just restart the Spark interpreter.
Apart from that, why not use the latest Zeppelin, in which you don't need %dep to load your dependencies? Instead, they can be loaded from the Interpreter screen.
More details can be found here: https://zeppelin.incubator.apache.org/docs/0.6.0-incubating-SNAPSHOT/manual/dependencymanagement.html
I am following this tutorial: https://wiki.opendaylight.org/view/Getting_started
I am trying to run the following command in OpenDaylight using Karaf:
ovs-vsctl show
But the command window says: Command not found: ovs-vsctl
I have installed all the necessary libraries, and the localhost server (http://localhost:8181/dlux/index.html) is running fine. But somehow ODL can't find ovs.
Can anyone tell me what the error is? I am running Windows 8.
Thank you
You need to run this command outside of the Karaf terminal.
First, you should have OVS (Open vSwitch) or Mininet installed, and then create one or two open switches.
Basically, you started the SDN controller in Karaf, and in the step where you are encountering the problem, the switches need to be assigned the ODL controller as their manager.
You should also check that ovsdb is already installed in Karaf.
To do that, try executing the following command:
feature:list | grep ovsdb
That command will display all the ovsdb components/features available in your Karaf distribution. The third column indicates whether a given component is already installed (an x means the component is installed). If you want to install a component/feature:
feature:install <name_of_the_feature>
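For instance, a plausible invocation for the OVSDB southbound feature (the exact feature name varies between OpenDaylight releases, so confirm it against the feature:list output first):
feature:install odl-ovsdb-southbound-impl-ui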
After that, try to execute it outside of Karaf, as Sidhant01 indicated before.
Try to do it with sudo:
sudo ovs-vsctl show
If you want to configure ovsdb in an active mode:
tools-vm:~$ sudo ovs-vsctl set-manager tcp:127.0.0.1:6640
tools-vm:~$ sudo ovs-vsctl show
98d8cf7a-44b1-4b02-a60c-7d832409d06f
    Manager "tcp:127.0.0.1:6640"
        is_connected: true
    ovs_version: "2.0.2"
Cheers
I am using NagiosQL to configure Nagios Core. I have a problem with the check_mysql_health plugin: it is not returning status information in Nagios Core, but if I run the same command on the command line it gives the proper result. Can anyone suggest a solution for this?
Thanks
Somesh
Finally I found the solution to my problem. To get the status information in Nagios Core, the path of the check_mysql_health plugin has to be set in four different config files:
/etc/nagios/command-plugin.cfg - add the plugin path
/etc/nagios/nrpe.cfg - add the plugin path
/etc/nagiosql/commands.cfg - add the plugin path in the command definition
/etc/nagios/objects/commands.cfg - define the command for check_mysql_health
Also, the additional template must be a generic template in the service panel of NagiosQL.
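For reference, a hypothetical command definition for /etc/nagios/objects/commands.cfg, assuming the standard check_mysql_health options (adjust the plugin path and arguments to your setup):
define command {
    command_name    check_mysql_health
    command_line    /usr/local/nagios/libexec/check_mysql_health --hostname $HOSTADDRESS$ --username "$ARG1$" --password "$ARG2$" --mode "$ARG3$"
}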