Using the libsvm classifier in Weka and heap size

I have a problem working with WEKA 3.6.
I want to use libsvm.jar, but the error "not in the classpath" occurs when I run Weka from the command prompt as below:
java -Xmx900m -jar Weka.jar
Please note that the libsvm library works when I run Weka in normal mode (with the initial heap size and without using the command prompt).
Besides, I use Windows 7 32-bit.
Please help, I am running out of time.
Thanks

Find out where libsvm.jar is and include it in the classpath, e.g.
java -Xmx900m -classpath '.:/usr/share/java/libsvm.jar' -jar Weka.jar
The actual location of libsvm.jar will depend on your system.

The other answer wasn't working for me. According to the Weka documentation, including LibSVM in the classpath requires not using the "-jar weka.jar" option, because specifying -jar overrides the classpath you are trying to set.
Instead, use
java -Xmx900m -classpath $CLASSPATH:weka.jar:libsvm.jar weka.gui.GUIChooser
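Since the question is on Windows 7, note that Windows uses ; rather than : as the classpath separator, so the equivalent command there (assuming weka.jar and libsvm.jar sit in the current directory) would be:
java -Xmx900m -classpath "%CLASSPATH%;weka.jar;libsvm.jar" weka.gui.GUIChooser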

Related

Concordion Unable to find specification

java.lang.RuntimeException: Unable to find specification: com/concordion/Concordion.html
I'm using Concordion 2.2.0 with JUnit 5 Jupiter (via the JUnit 4 vintage engine) and a TFS build agent running Maven. Maven Surefire picks up the Concordion Java file but simply can't find the corresponding Concordion HTML, so the auto-tests fail.
The HTML specification file is in the resources directory, but no matter where I put it, the Surefire/Concordion libraries can't find it!
The specification files need to be on the classpath in the same package as the Java class. Typically this is under the src/test/resources folder. See https://concordion.org/coding/java/markdown/#locating-the-specification for more details.
Are you able to provide a simplified test case showing the issue?
Moving the specifications to the same location as the java files (src/test/java or src/main/java) should get it working in the short term.
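For illustration, given the error above, a layout along these lines should work (the fixture class name here is hypothetical; Concordion strips a Test/Fixture suffix when resolving the specification):
src/test/java/com/concordion/ConcordionFixture.java -- fixture in package com.concordion
src/test/resources/com/concordion/Concordion.html -- specification in the same package, on the test classpath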

How to run Apache CXF wsdl2java with JDK 12?

The following command used to work flawlessly:
C:\tools\apache-cxf-3.3.1\bin\wsdl2java -client -d generated foo.wsdl
It no longer works with the latest version of JDK - 12. I have downloaded the latest version of Apache CXF, and still get the same error:
-Djava.endorsed.dirs=C:\tools\apache-cxf-3.3.1\bin\..\lib\endorsed is not supported. Endorsed standards and standalone APIs
in modular form will be supported via the concept of upgradeable modules.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Could anyone offer a tip on how to remedy this?
I got the Apache CXF 3.3.1 wsdl2java utility to work with the latest OpenJDK 11 by doing 4 things:
1. Pull down this jar and place it into the {CXF_HOME}/lib directory: https://mvnrepository.com/artifact/javax.jws/jsr181-api/1.0-MR1 (see the download sketch after this list)
2. Pull down this jar and also place it in the {CXF_HOME}/lib directory: https://mvnrepository.com/artifact/javax.xml.ws/jaxws-api/2.3.1
3. In my case, since I'm running on a Mac, I vi'd the wsdl2java script and made sure these two jars are explicitly set on the CXF classpath, by adding the following declaration in the script right before the java command is executed:
cxf_classpath=${cxf_classpath}:../lib/jaxws-api-2.3.1.jar:../lib/jsr181-api-1.0-MR1.jar
4. Lastly, I removed the '-Djava.endorsed.dirs="${cxf_home}/lib/endorsed"' parameter from the java command at the end of the script, since newer JDKs no longer support this argument. My command now looks like this:
$JAVA_HOME/bin/java -Xmx${JAVA_MAX_MEM} -cp "${cxf_classpath}" -Djava.util.logging.config.file=$log_config org.apache.cxf.tools.wsdlto.WSDLToJava "$@"
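For steps 1 and 2, a sketch of pulling the jars straight from Maven Central (the URLs below follow the standard repository layout for those coordinates; verify them against the mvnrepository pages linked above):
cd {CXF_HOME}/lib
curl -L -O https://repo1.maven.org/maven2/javax/jws/jsr181-api/1.0-MR1/jsr181-api-1.0-MR1.jar
curl -L -O https://repo1.maven.org/maven2/javax/xml/ws/jaxws-api/2.3.1/jaxws-api-2.3.1.jar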
Now, using OpenJDK11, I'm able to point to an external WSDL file and successfully generate the client code I need to consume this SOAP service with the following command:
./wsdl2java -client -d src https://somewhere.com/service\?wsdl
Whether this all fully works is still TBD in terms of actually calling and consuming the SOAP service I'm coding against, but I've at least overcome the Java 9+ support issue with this tool as far as generating client code from a WSDL goes.
If your needs are different, I would at least remove the '-Djava.endorsed.dirs="${cxf_home}/lib/endorsed"' JVM parameter, start calling the wsdl2java command with the parameters you need, and iteratively add back the missing libs it throws java.lang.NoClassDefFoundError for.
Their FAQ specifically says that starting with 3.3.x, Java 9+ will be supported, but something clearly dropped the ball: the utility still passes hard-coded JVM arguments that are no longer supported, and it is missing the libraries needed on newer JDKs from which these legacy libs have been removed.
Hope this helps someone out there unfortunate enough to ALSO still be programming against SOAP endpoints while trying to keep the client-side code you're writing up to date and take advantage of the newer features of the modern JDK.

Why I cannot use spark interpreter in zeppelin?

As the screenshot shows, I can use the python interpreter but not spark. I have no idea why. Please give me help. I'm totally lost.🥺
Here is the command window of Zeppelin.
My code is just 1 + 1, to test whether I can run different interpreters.
Here is the bash window.
It seems you have incompatible versions of Java and Scala, or JAVA_HOME is not set.
Please go through this related question on how to fix the issue:
Failed to initialize compiler: object java.lang.Object in compiler mirror not found
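If JAVA_HOME turns out to be the problem, a minimal sketch for a Linux/macOS install is to point Zeppelin at a compatible JDK in conf/zeppelin-env.sh and restart (the JDK path below is an assumption; most Spark 2.x builds expect Java 8):
# conf/zeppelin-env.sh
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64   # adjust to your JDK install
Then restart the daemon:
bin/zeppelin-daemon.sh restart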

java eclipse hadoop map reduce program unable to access my files stored in hdfs

My Java Eclipse Hadoop map reduce program displays an error that it is unable to locate the input file. I copied the files to the Hadoop directory via the terminal using Hadoop commands. I can see the files in the Eclipse DFS location, and also using the command hadoop dfs -ls in the terminal. When I create a normal folder (not HDFS), the problem goes away, but then the program accesses the file from the local file system.
I installed Hadoop 1.2.1 on a 32-bit Red Hat server and I am using Eclipse Luna; I have already included the Hadoop plugins and external jar files from the Hadoop library. Input and output paths are given through runtime arguments.
First of all, the Hadoop Eclipse plugins don't have great reliability. I had the same problem when using the plugin with Eclipse Luna, but that compatibility issue was solved when I used Eclipse Juno. And there is no suitable plugin available for the Hadoop 2.x versions.
You can use Maven to manage all the Hadoop dependencies, just like the Hadoop Eclipse plugin does, except that you then run the job from the terminal.
Link on how to use Maven with Hadoop
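As a minimal sketch, the pom.xml would declare a Hadoop client dependency matching the cluster version (1.2.1 here, per the question):
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>1.2.1</version>
</dependency>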
Accept my answer if it fits your case. :)

Running Solr with Jetty

I'm having a little trouble understanding how Solr fits in with Jetty, and why I can't seem to get the start.jar in the distribution package to work.
I can run all of the example configurations via java -jar start.jar. However, when I try to run something like the following --
java -Dsolr.solr.home=/Users/jwwest/solr -jar $(brew --prefix solr)/libexec/example/start.jar
-- the following error occurs:
java.io.FileNotFoundException: No XML configuration files specified in start.config or command line.
at org.eclipse.jetty.start.Main.start(Main.java:506)
at org.eclipse.jetty.start.Main.main(Main.java:95)
I opened up the start.jar file, and there is a start.config file located inside of the jar which I'm assuming should handle this configuration for me. I'm not understanding why it will work when run from inside of the distribution examples directory, but not outside of it.
You also need to define the jetty.home property. Try:
java -Dsolr.solr.home=/Users/jwwest/solr -jar $(brew --prefix solr)/libexec/example/start.jar -Djetty.home=$(brew --prefix solr)/libexec/example
You can see the effective command line start.jar generates by using the --dry-run command line flag.
java -jar start.jar --dry-run
That will output everything with full path names so you can run it from outside the directory.
Source: http://www.eclipse.org/jetty/documentation/9.0.0.M3/advanced-jetty-start.html
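A minimal sketch of that workflow (the output file name is arbitrary, and you may need to re-add -Dsolr.solr.home to the captured command):
cd $(brew --prefix solr)/libexec/example
java -jar start.jar --dry-run > /tmp/start-solr.sh
cd / && sh /tmp/start-solr.sh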
The start.jar is a jetty specific mechanism that works to build out all the classpath requirements for starting up Jetty. It is generally only used in the scope of the jetty distribution. Pulling the start.jar out of the configuration and placing it somewhere else renders the default configuration of the start.config rather moot.
My understanding of Solr is that it bundles itself with a distribution of Jetty, placing what it needs to run into the distribution and repackaging it as its own. They may have a custom start.config file that further adds its own locations for classpath resources and the like, or they may not.
The exception you are seeing stems from the start.config file expecting an etc/ directory containing jetty.xml formatted XML files, which are used to configure the Jetty process.
Jetty often being used in an embedded format has little to do with this issue; it is simply a common use case because Jetty is incredibly easy to embed into an application. Embedded instances of Jetty rarely (if ever) leverage a start.jar; instead, it is up to the embedding application to manage its own classpath.
First, cd into the folder where start.jar is located, then execute the same command, as shown below.
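With the paths from the question, that would look like:
cd $(brew --prefix solr)/libexec/example
java -Dsolr.solr.home=/Users/jwwest/solr -jar start.jar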
Jetty is often used as an embedded container. If you want to use the bundled Jetty, a good start would be to copy the example directory and rename it to whatever you want; the solr directory inside it holds the basic configuration.
Otherwise, it is recommended to use Tomcat and the solr.war file, e.g. as sketched below.
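If you take the Tomcat route, a rough sketch (the war location is an assumption based on how old Solr releases shipped example/webapps/solr.war; adjust paths to your install):
export JAVA_OPTS="$JAVA_OPTS -Dsolr.solr.home=/Users/jwwest/solr"
cp $(brew --prefix solr)/libexec/example/webapps/solr.war $CATALINA_HOME/webapps/
$CATALINA_HOME/bin/startup.sh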
