Passing command line arguments to spark submit through zeppelin web ui - apache-zeppelin

I want to configure the Zeppelin Spark interpreter. I would like to pass --conf "spark.cassandra.connection.host=<ip>", --conf "spark.cassandra.input.split.size_in_mb=32", and --jars $(echo /home/sysadmin/ApacheSpark/jar/*.jar | tr ' ' ',') to spark-submit through my Zeppelin UI interpreter settings.
How can I pass them?
Since I have many Cassandra machines, I would like to create multiple Spark interpreters, and therefore do not want to add the configuration to the zeppelin-env file as stated here.

You can use inline configuration for this, which is straightforward and intuitive.
e.g.
%spark.conf
spark.jars jar1,jar2
spark.cassandra.connection.host <ip>
spark.cassandra.input.split.size_in_mb 32
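Note that %spark.conf takes literal property values; shell substitutions such as $(echo ... | tr ' ' ',') are not expanded, so the jar list has to be written out as a comma-separated string. A minimal sketch, with two hypothetical jar names:
%spark.conf
spark.jars /home/sysadmin/ApacheSpark/jar/first.jar,/home/sysadmin/ApacheSpark/jar/second.jar
spark.cassandra.connection.host <ip>
spark.cassandra.input.split.size_in_mb 32
Each note (or each interpreter group) can then point at a different Cassandra cluster without touching zeppelin-env.sh.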

Related

Robot Framework Parameterizing using yaml file

Hi, can anybody help me parameterize the string word so it will be fetched from my YAML file? When I try to run it, I get an error: failed: Using YAML variable files requires PyYAML module to be installed. Typically you can install it by running pip install pyyaml. However, I have already installed PyYAML on my local machine. Your response is highly appreciated. Thank you so much.
Expected result: the ${String} parameter should get the value (Ralph) from my robot.yaml.
[Screenshots of the .robot file, the robot.yaml file, and the terminal output were attached to the question.]
In robot.yaml define PYTHONPATH like this:
PYTHONPATH:
- .
- string: "RALPH"
Make sure you have installed PyYAML, then include robot.yaml and the Collections library in the robot file:
Variables    path_to_file/robot.yaml
Library    Collections
After this you can extract the string value inside the test like this:
${value} =    Pop From Dictionary    ${PYTHONPATH[1]}    string
Log To Console    ${value}
This will print:
RALPH
The second item in the PYTHONPATH list is a dictionary, so you first need to access ${PYTHONPATH[1]} and then pop the needed key (in your case string) in order to return its value.
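Putting the pieces together, a minimal .robot file (the test case name is made up for illustration) might look like this:
*** Settings ***
Variables    path_to_file/robot.yaml
Library      Collections

*** Test Cases ***
Read String From Yaml
    ${value} =    Pop From Dictionary    ${PYTHONPATH[1]}    string
    Log To Console    ${value}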

Unable to install devstack with designate

I am new to the OpenStack environment and started to get into it with a small DevStack setup. I worked through the following instructions on an Ubuntu 18.04 machine and everything worked fine. In order to play with some DNS zones, I started to research Designate. After adapting the following instructions to my setup, I got some errors.
Executing stack.sh produces the following error:
++/opt/stack/designate/devstack/plugin.sh:source:5 set +o xtrace
2021-01-12 21:44:39.009 | Initializing Designate
DROP DATABASE
Could not load 'database': type object 'deprecated' has no attribute 'WALLABY'
Could not load 'pool': type object 'deprecated' has no attribute 'WALLABY'
Could not load 'tlds': type object 'deprecated' has no attribute 'WALLABY'
usage: designate [-h] [--config-dir DIR] [--config-file PATH] [--debug]
[--log-config-append PATH] [--log-date-format DATE_FORMAT]
[--log-dir LOG_DIR] [--log-file PATH] [--nodebug]
[--nouse-journal] [--nouse-json] [--nouse-syslog]
[--nowatch-log-file]
[--syslog-log-facility SYSLOG_LOG_FACILITY] [--use-journal]
[--use-json] [--use-syslog] [--watch-log-file]
{} ...
designate: error: argument category: invalid choice: 'database' (choose from )
Error on exit
World dumping... see /opt/stack/logs/worlddump-2021-01-12-214442.txt for details
nova-compute: no process found
neutron-dhcp-agent: no process found
neutron-l3-agent: no process found
neutron-metadata-agent: no process found
neutron-openvswitch-agent: no process found
I was not sure whether my setup was valid, so I tried the example config from the Designate tutorial, but the same problem occurred.
My actual local.conf:
[[local|localrc]]
USE_PYTHON3=True
ADMIN_PASSWORD=***
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
DEST=/opt/stack
SERVICE_HOST=192.168.1.***
HOST_IP=$SERVICE_HOST
disable_service mysql
enable_service postgresql
enable_plugin designate https://opendev.org/openstack/designate
enable_service tempest
Checking plugin.sh, it looks like the error comes from this function:
function init_designate {
    # (Re)create designate database
    recreate_database designate utf8

    # Init and migrate designate database
    $DESIGNATE_BIN_DIR/designate-manage database sync

    init_designate_backend
}
Hope somebody can give me a hint to run DevStack with designate.
Thanks in advance.
The issue you are having is a version mismatch between the cloud install and the designate plugin. Designate is expecting a newer version of the oslo_log package.
Check that the "devstack" version you have checked out is on the master branch.
The line:
enable_plugin designate https://opendev.org/openstack/designate
is pulling the master branch of designate for the devstack plugin.
If you are trying to install a stable-branch version of OpenStack, you will need to specify a reference for the devstack plugin as well (for example, stable/victoria):
enable_plugin designate https://opendev.org/openstack/designate stable/victoria
As mentioned above, you will also need to enable the designate services:
enable_service designate,designate-central,designate-api,designate-worker,designate-producer,designate-mdns
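Putting both changes together, the Designate-related part of local.conf for a stable/victoria install (branch name taken from the example above) would look roughly like this:
[[local|localrc]]
# ... passwords, SERVICE_HOST, etc. as before ...
enable_plugin designate https://opendev.org/openstack/designate stable/victoria
enable_service designate,designate-central,designate-api,designate-worker,designate-producer,designate-mdns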

PowerCLI vSphere set-annotation

I am able to set notes in the annotation with the PowerCLI vSphere command below.
Input: serverdetails.txt
name,notes
Server 1, This is an application server : .Net
Command:
Import-Csv "C:\temp\serverdetails.txt" | %{ Set-Vm -Name $.Name -Description $.Name -Confirm:$false }
The current output I get in the annotation is below, with the complete content on one line:
This is an application server : .Net
However, I need the output below in the annotation (on two lines):
Line 1: This is an application server
Line 2: .Net
That's not really how the notes field operates; it's designed more around displaying information in the multi-VM view, where multi-line output wouldn't work.
If you're looking for something that can display multiple lines in a structured manner, look at using Tags or even Annotations/Custom Attributes.
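For the custom attribute route, a minimal PowerCLI sketch (the attribute name AppType is an assumption for illustration):
# Create a VM-scoped custom attribute once, then set its value per VM
New-CustomAttribute -Name "AppType" -TargetType VirtualMachine
Get-VM -Name "Server 1" | Set-Annotation -CustomAttribute "AppType" -Value ".Net"
Each custom attribute shows up as its own field on the VM, which effectively gives you one line per attribute instead of a single free-text note.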

How to retry failed scenario in Behave using python

Can someone please tell me how I can run a failed test again in Behave using Python?
I want to re-run the failed test case automatically if it fails.
The behave library actually has a RerunFormatter which can help you rerun the failing scenarios of your previous test-run. It creates a text file of all your failing scenarios like:
# -- file:rerun.features
# RERUN: Failing scenarios during last test run.
features/auth.feature:10
features/auth.feature:42
features/notifications.feature:67
To use the RerunFormatter all you need to do is put it in your behave configuration file (behave.ini):
# -- file:behave.ini
[behave]
format = rerun
outfiles = rerun_failing.features
To rerun the failing scenarios, use this command:
behave @rerun_failing.features
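A typical workflow is two invocations: the first run records the failures in rerun_failing.features, and the second run consumes that file:
# First run: the rerun formatter writes failing scenarios to rerun_failing.features
behave
# Second run: execute only the previously failing scenarios
behave @rerun_failing.features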
I know this is a late answer, but it could help others.
There is another approach that could also help: implement it in the environment.py file, where you can do the retry for a specific tag.
Provides support functionality to retry scenarios a number of times
before their failure is accepted. This functionality can be helpful
when you use behave tests in an unreliable server/network
infrastructure.
For example, I am running the tag @smoke_test on CI, so I chose this tag to patch with the retry condition.
First, in your environment.py, import the following:
# -- file: environment.py
from behave.contrib.scenario_autoretry import patch_scenario_with_autoretry
Then add the method:
# -- file: environment.py
def before_feature(context, feature):
    for scenario in feature.scenarios:
        if "smoke_test" in scenario.effective_tags:
            patch_scenario_with_autoretry(scenario, max_attempts=3)
max_attempts defaults to 3; I only wrote it out explicitly to show that you can set how many retries you want.
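For reference, a scenario tagged like this in a feature file (the feature and steps are made up for illustration) would then be retried up to three times before its failure is accepted:
# -- file: features/smoke.feature
Feature: Smoke checks

  @smoke_test
  Scenario: Service responds
    Given the service is running
    When I request the status page
    Then the response code is 200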

Loading .owl files in marklogic

Is it possible to load .owl files using mlcp?
I tried with -input_file_type rdf but it gives the error below:
bin/mlcp.sh import -host localhost -port 9010 -username uname -password pwd -mode local -input_file_path /home/user/semantics/data -input_file_type rdf -input_file_pattern '.*.owl'
FATAL contentpump.RDFReader: dbpedia1.owl: Element or attribute do not match QName production: QName::=(NCName':')?NCName.
FATAL contentpump.RDFReader: dbpedia2.owl: Element or attribute do not match QName production: QName::=(NCName':')?NCName.
What am I missing here?
MarkLogic documentation lists the supported triples file formats:
.rdf
.ttl
.json
.n3
.nt
.nq
.trig
Maybe you can convert your .owl file to one of those formats, at which point you could use MLCP to load it. I tried plugging your example into a format converter, but that didn't work; perhaps it's because we only have a snippet here.
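As one possible conversion route (not mentioned in the thread, so treat it as an assumption), Apache Jena's riot command-line tool can rewrite RDF/XML into Turtle, which MLCP accepts:
# Convert an RDF/XML .owl file to Turtle with Apache Jena's riot tool
riot --output=ttl /home/user/semantics/data/dbpedia1.owl > /home/user/semantics/data/dbpedia1.ttl
After that, the same mlcp.sh command should pick the file up with -input_file_pattern '.*.ttl'.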
MarkLogic should be able to process .owl files, but I think Joshua is right that MarkLogic is expecting .owl files to contain RDF/XML. You can also see that from the list of Mimetypes in the Admin interface. It lists the .owl extension as 'application/owl+xml', and RDF/XML seems to be the more common serialization of OWL.
It might just be that renaming the file to .nt makes it work.
HTH!
