z.load in apache zeppelin results in error - apache-zeppelin

I'm trying to use z.load in Apache Zeppelin as follows:
%dep
z.load("/zeppelin-0.5.6-incubating-bin-all/lplibs/hive/csv-serde-1.0.5-jar-with-dependencies.jar")
I get an ERROR, and it says (I'm not sure this is the actual error):
Must be used before SparkInterpreter (%spark) initialized
Hint: put this paragraph before any Spark code and restart Zeppelin/Interpreter
This Zeppelin paragraph is the first one in my notebook, so I'm not sure what it's complaining about.

Right now I can't reproduce your problem, but you should restart the interpreter (by pushing the restart button) before loading the dependency jar file.

There is a chance that the SparkContext has already been started by another notebook.
So, as Kangrok mentioned, just restart the Spark interpreter.
But apart from that, why not use the latest Zeppelin, in which you don't need %dep to load your dependencies? Instead, they can be loaded from the Interpreter screen.
More details can be found here: https://zeppelin.incubator.apache.org/docs/0.6.0-incubating-SNAPSHOT/manual/dependencymanagement.html
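If you do stay on %dep, this is a minimal sketch of what that first paragraph can look like (it has to run before any %spark paragraph, hence the hint about restarting the interpreter; the jar path is the one from the question, and z.reset() is optional):
%dep
z.reset()
z.load("/zeppelin-0.5.6-incubating-bin-all/lplibs/hive/csv-serde-1.0.5-jar-with-dependencies.jar")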

Related

KNIME Command Line Execution - ClassNotFoundException

I'd like to schedule a KNIME workflow. The workflow does its job very well as long as I start it from the KNIME GUI application. When I execute the same workflow via the command line, Java complains that com.microsoft.sqlserver.jdbc.SQLServerDriver could not be found (ClassNotFoundException).
I invoke it via:
"D:\Progamme\KNIME\knime.exe" -nosplash -application -consoleLog org.knime.product.KNIME_BATCH_APPLICATION -preferences="absolutepathto\preferences.epf" -workflowDir="absolutepathto\workflow"
Since the error message signals missing content on the Java CLASSPATH, I also tried to add the parameters
-vmargs -classpath .;"absolutepathto/sqljdbc42.jar"
But Java still slaps me with the same error...
I also tried to run the command from within the directory of knime.exe, and I tried adding the JAR file to Preferences -> Java -> Build Path -> Classpath Variable / User Libraries (referenced via the -preferences argument). But that had no effect.
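For completeness, the full command with the classpath attempt looked roughly like this (same placeholder paths as above; I put -vmargs last because, as far as I understand, KNIME is Eclipse-based and passes everything after -vmargs straight to the JVM):
"D:\Progamme\KNIME\knime.exe" -nosplash -consoleLog -application org.knime.product.KNIME_BATCH_APPLICATION -preferences="absolutepathto\preferences.epf" -workflowDir="absolutepathto\workflow" -vmargs -classpath ".;absolutepathto\sqljdbc42.jar"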
Did anybody face the same problems? Maybe with other third party JARs?
It is all about a database connector that is configured like this:
Does the integrated security setting maybe cause a misleading error?
System spec: KNIME 3.2.2 on Windows Server 2008 R2
Update - extract from preferences file
/configuration/org.eclipse.core.net/org.eclipse.core.net.hasMigrated=true
/configuration/org.eclipse.ui.ide/MAX_RECENT_WORKSPACES=10
/configuration/org.eclipse.ui.ide/RECENT_WORKSPACES=<list of some workspaces>
/configuration/org.eclipse.ui.ide/RECENT_WORKSPACES_PROTOCOL=3
/configuration/org.eclipse.ui.ide/SHOW_RECENT_WORKSPACES=false
/configuration/org.eclipse.ui.ide/SHOW_WORKSPACE_SELECTION_DIALOG=true
Could there be a problem due to the fact that it is a KNIME instance shared among several users and the command line execution does not know which workspace to choose? Is the workspace somehow needed, and why?
Partial Solution:
I finally managed it, but I don't know exactly why it works now. What I did was load a fresh portable version of KNIME and run the same commands, only changing the executable path to the new portable version. Before that I started the portable version once to set the workspace directory and register the database driver in the preferences dialog and .ini file, nothing else; the configuration is the same as for the shared KNIME instance. What I am really wondering about is that from now on the commands also work with the shared KNIME instance. I really don't know what caused the change that lets KNIME find the driver class.
Info
Because I encountered a few more problems within a shared environment in KNIME command line mode, which led to nondeterministic execution results, I wrote a little .NET library. This gives me more flexibility/control over the workflow execution (which return codes and error messages occurred and so on). You can find it here if you're interested: KnimeNet
I took a very minimal approach:
cd "C:\Program Files\KNIME"
.\knime -nosplash -noexit -consoleLog -reset -application org.knime.product.KNIME_BATCH_APPLICATION -workflowFile="D:\Work\Knime Workflows\Output\CMD_Test.knwf" -preferences="D:\Work\Knime Workflows\Output\CMD_Test.epf"

Apache Zeppelin running on Spark throws java ConnectionException

I want to ask a question about my Apache Zeppelin installation.
I downloaded zeppelin-0.5.5-incubating-bin-all, configured export JAVA_HOME=/sparkDemo/java-1.8.0-openjdk in zeppelin-env.sh and zeppelin.server.port 8084 in zeppelin-site.xml. I didn't configure SPARK_HOME in zeppelin-env.sh because I want to use Zeppelin's embedded Spark libraries.
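For reference, this is roughly what those two settings look like (the XML block mirrors the stock zeppelin-site.xml template, so treat the exact layout as an assumption):
# in conf/zeppelin-env.sh
export JAVA_HOME=/sparkDemo/java-1.8.0-openjdk
<!-- in conf/zeppelin-site.xml -->
<property>
  <name>zeppelin.server.port</name>
  <value>8084</value>
</property>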
But when I run the Zeppelin tutorial code in my browser on Windows, the following error occurs.
And even when I configure SPARK_HOME and export MASTER in zeppelin-env.sh and create a new interpreter in the Zeppelin web UI, the same error occurs.
Thanks a lot for responding!
Stack Trace here
As mentioned in other answers, most probably the issue is that the interpreter process quit due to some error.
More details on the particular error can be found in:
Interpreter process log
./logs/zeppelin-interpreter-<interpreter name>-<username>-<hostname>.log
and ZeppelinServer process log under
./logs/zeppelin-<username>-<hostname>.log
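For example, to follow the Spark interpreter log while re-running the failing paragraph (a minimal sketch; adjust the interpreter name and the glob to match your actual file names):
tail -f ./logs/zeppelin-interpreter-spark-*.log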

OpenDaylight (ODL) ovs-vsctl not found error

I am following this tutorial: https://wiki.opendaylight.org/view/Getting_started
I am trying to run the following command in OpenDaylight using Karaf
ovs-vsctl show
But the command window says Command not found: ovs-vsctl
I have installed all the necessary libraries and the local host server (http://localhost:8181/dlux/index.html) is running fine. But somehow odl can't find ovs.
Can anyone tell me what the error is? I am running Windows 8.
Thank you
You need to run this command outside of the Karaf terminal.
Firstly, you should have OVS (Open vSwitch) or Mininet installed, and then create one or two virtual switches.
Basically, you started the SDN controller in Karaf, and in the step where you are encountering the problem, the switches need to be assigned the ODL controller as their manager.
You should also check that ovsdb is already installed in Karaf.
For that, try executing the following command:
feature:list | grep ovsdb
That command will display all the ovsdb components/features that are available in your Karaf distribution. The third column will indicate whether a given component is already installed (if you see an X, that means the component is installed). If you want to install a component/feature:
feature:install <name_of_the_feature>
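For example (an illustrative feature name only; the exact OVSDB feature names differ between OpenDaylight releases, so pick one from the feature:list output above):
feature:install odl-ovsdb-southbound-impl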
After that, try executing it outside of Karaf, as Sidhant01 indicated before.
Try to do it with sudo:
sudo ovs-vsctl show
If you want to configure ovsdb in an active mode:
tools-vm:~$ sudo ovs-vsctl set-manager tcp:127.0.0.1:6640
tools-vm:~$ sudo ovs-vsctl show
98d8cf7a-44b1-4b02-a60c-7d832409d06f
    Manager "tcp:127.0.0.1:6640"
        is_connected: true
    ovs_version: "2.0.2"
Cheers

"Your GStreamer installation is missing a plug-in." (GstURIDecodeBin)

I have: gstreamer-sdk, gstreamer-ffmpeg, gstreamer-plugins-good, bad, and ugly. I googled everywhere for this error and have found nothing relevant. I'm going a little nuts trying to figure out this error:
Error received from element decodebin20: Your GStreamer installation is missing a plug-in.
Debugging information: gstdecodebin2.c(3576): gst_decode_bin_expose (): /GstPlayBin2:playbin20/GstURIDecodeBin:uridecodebin0/GstDecodeBin2:decodebin20:
no suitable plugins found
It is thrown when I run my GStreamer program. Any ideas why?
You may not be missing any plugins at all.
This error can be the result of just an unlinked pipeline.
Playbin2 (decodebin2) got some changes that made it unable to automatically link up some pipelines that formerly worked, for example connecting a decoder to an encoder for transcoding. In my case, explicitly adding the ffdec_h264 element that it used to add automatically fixed it.
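For illustration, this is the kind of explicit pipeline I mean (a sketch only, assuming a GStreamer 0.10 setup with the ffmpeg plugins and an H.264 stream inside an MP4 container; adjust the demuxer and sink to your media and platform):
gst-launch filesrc location=input.mp4 ! qtdemux ! ffdec_h264 ! ffmpegcolorspace ! autovideosink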
Relying on Playbin2 can be very frustrating when it does not work. Using the setup below, you can create .png diagrams of the pipeline in various phases of construction. They are very helpful in finding out why it isn't linking up.
export GST_DEBUG_DUMP_DOT_DIR=~/gstdump
mkdir -p $GST_DEBUG_DUMP_DOT_DIR
# run your GStreamer program, then convert the dumped .dot files to .png:
for f in $GST_DEBUG_DUMP_DOT_DIR/*.dot ; do dot -T png $f >$f.png; done
This tool also lets you learn how pipelines get linked up, so you can replace them with explicit ones that are easier to debug and less likely to break.
In Fedora, I resolved this issue by removing gstreamer1-vaapi.x86_64:
sudo yum remove gstreamer1-vaapi.x86_64
uridecodebin is part of the "base" plugin set, so make sure you have gstreamer-plugins-base.
Another thing to look into is your LD_LIBRARY_PATH and GST_PLUGIN_PATH. If they point to a different GStreamer installation, it could cause problems like this. Also, if you didn't install GStreamer with a package manager, you may need to set your LD_LIBRARY_PATH to point to it (or better yet, install it with a package manager).
Please try using the gst-inspect command to find out whether the environment is set up correctly.
Use gst-launch -v playbin2 uri="your_uri_here" to get more information for tracing this issue.
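For example, you can check whether the elements named in the error are visible at all (uridecodebin ships with gst-plugins-base, ffdec_h264 with gst-ffmpeg):
gst-inspect uridecodebin
gst-inspect ffdec_h264
If either command answers "No such element or plugin", the corresponding plugin set is not installed or not on GST_PLUGIN_PATH.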

Is JRebel necessary to run Maven?

Greetings,
I am trying to start a Scala/Liftweb project for deployment on Google App Engine. To do this, I need to package it up as a .war using Maven.
However, whenever I run the 'mvn' command, I am met with:
Error opening zip file or JAR manifest missing : /Applications/JRebel/jrebel.jar
Error occurred during initialization of VM
agent library failed to init: instrument
Is there something wrong with my Maven, or do I need JRebel? I see JRebel is not free, which is why I am so surprised.
thanks!
No, JRebel is definitely not required to run Maven.
As Matt mentioned, JRebel is not required to run Maven. However, ZeroTurnaround does offer a free version that works with Scala. You can get it here:
http://sales.zeroturnaround.com/
As for your error - it indicates you are trying to start the JVM as though you are using JRebel. What is the full Maven command you are running? What is in your MAVEN_OPTS environment variable? If either of them contains something like -noverify -javaagent:/Applications/JRebel/jrebel.jar, then that's your problem.
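A quick way to check (assuming a Unix-like shell, which the /Applications path suggests):
echo $MAVEN_OPTS
# if it contains -javaagent:/Applications/JRebel/jrebel.jar, clear it for the current session:
unset MAVEN_OPTS
mvn package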
One possible reason for the problem is a blank in the path to jrebel.jar.
Make sure that there is no blank in the path, as in "Program Files".
