Java Eclipse Hadoop MapReduce program unable to access my files stored in HDFS

My Java Eclipse Hadoop MapReduce program displays an error saying it is unable to locate the input file. I copied the files to the Hadoop directory via the terminal using Hadoop commands. I can see the files in the Eclipse DFS Locations view, and also with the command hadoop dfs -ls in the terminal. When I created a normal folder (not HDFS) the problem was solved, but then the program was accessing the file from the local file system.
I have installed Hadoop 1.2.1 on a 32-bit Red Hat server and am using Eclipse Luna. I have already included the Hadoop plugin and the external JAR files from the Hadoop library. The input and output paths are given through runtime arguments.

First of all, the Hadoop Eclipse plugin is not very reliable. I had the same problem when using the plugin with Eclipse Luna, but that compatibility issue was solved when I used Eclipse Juno. There is also no suitable plugin available for Hadoop 2.x versions.
You can use Maven to manage all the Hadoop dependencies just as the Hadoop Eclipse plugin does, except that you should run the job from the terminal.
Link on how to use Maven with Hadoop
Accept my answer if it fits your case. :)
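For reference, the driver itself usually does not need to change: whether the runtime-argument paths resolve against HDFS or against the local file system depends on the fs.default.name setting that the job's Configuration picks up from core-site.xml on the classpath, which is exactly what running the job from the terminal with hadoop jar gives you. Below is a minimal driver sketch under that assumption; the class name and the NameNode address are illustrative, not taken from your setup.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MyDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // When core-site.xml is not on the classpath (typical when launching from
        // Eclipse), fs.default.name falls back to file:///, so the job reads from
        // the local file system. Either put $HADOOP_HOME/conf on the classpath or
        // set the default file system explicitly (address below is an assumption):
        conf.set("fs.default.name", "hdfs://localhost:9000");

        Job job = new Job(conf, "my-job"); // Hadoop 1.x style constructor
        job.setJarByClass(MyDriver.class);
        // Mapper/Reducer are omitted; the identity classes are enough to show
        // where the input and output paths get resolved.
        FileInputFormat.addInputPath(job, new Path(args[0]));   // runtime argument 1
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // runtime argument 2
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}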

Related

Concordion Unable to find specification

java.lang.RuntimeException: Unable to find specification: com/concordion/Concordion.html
I'm using Concordion 2.2.0 with JUnit 5 Jupiter (via the JUnit 4 vintage engine) and a TFS build agent using Maven. Maven Surefire picks up the Concordion Java file but simply can't find the corresponding Concordion HTML, so the auto-tests fail.
The HTML specification file is in the resources directory, but no matter where I put it, the Surefire / Concordion libraries can't find it!
The specification files need to be on the classpath in the same package as the Java class. Typically this is under the src/test/resources folder. See https://concordion.org/coding/java/markdown/#locating-the-specification for more details.
Are you able to provide a simplified test case showing the issue?
Moving the specifications to the same location as the Java files (src/test/java or src/main/java) should get it working in the short term.
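As a minimal sketch of how the two files have to line up (the package and class names below just mirror the path from the error message, and the method is made up for illustration):

// src/test/java/com/concordion/ConcordionFixture.java
// The matching specification must end up on the test classpath in the same
// package, e.g. src/test/resources/com/concordion/Concordion.html
// (Concordion strips the "Fixture"/"Test" suffix, so this fixture looks for
// Concordion.html).
package com.concordion;

import org.concordion.integration.junit4.ConcordionRunner;
import org.junit.runner.RunWith;

@RunWith(ConcordionRunner.class) // JUnit 4 runner, executed by the vintage engine
public class ConcordionFixture {

    // Example method referenced from the HTML specification.
    public String greetingFor(String firstName) {
        return "Hello " + firstName + "!";
    }
}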

Ubuntu .db file: No Application Installed for OLE2 Compound Document Storage

I am doing a machine learning project using the Columbia Gaze Dataset:
http://www.cs.columbia.edu/CAVE/databases/columbia_gaze/
I am using Ubuntu 16.04 LTS.
There are files like "Thumbs.db", and when I try to open them I get the error:
No Application Installed for OLE2 Compound Document Storage
I have also read many other posts, but none of them helped.
Is there a tool to open this .db file (OLE2 format)?
Or must I switch to Windows 10 instead of Ubuntu?
I tried the following, but without success:
1) The ripole tool (it didn't extract to the /tmp directory)
2) JDBC with the UCanAccess tool (to view the database in LibreOffice Base as .odb; there was a problem with encodings)
I also found the following source code but couldn't work out how to make use of it:
1) gfs-html-1.14.42, OpenMcdf 2.0 and oletools-0.46
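In case it is useful: Thumbs.db is normally just the Windows Explorer thumbnail cache stored as an OLE2 compound file, so it can also be inspected from Java with Apache POI's POIFS component rather than switching to Windows. A rough sketch, assuming Apache POI is on the classpath; the file name and output format are only illustrative:

import java.io.FileInputStream;
import java.io.InputStream;
import org.apache.poi.poifs.filesystem.DirectoryEntry;
import org.apache.poi.poifs.filesystem.Entry;
import org.apache.poi.poifs.filesystem.POIFSFileSystem;

// Lists the storages and streams inside an OLE2 compound file such as Thumbs.db.
public class ListOle2Entries {
    public static void main(String[] args) throws Exception {
        try (InputStream in = new FileInputStream("Thumbs.db")) {
            POIFSFileSystem fs = new POIFSFileSystem(in);
            DirectoryEntry root = fs.getRoot();
            for (Entry entry : root) {
                System.out.println(entry.getName()
                        + (entry.isDirectoryEntry() ? " (storage)" : " (stream)"));
            }
        }
    }
}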

How to set up a Flink local cluster

I am trying to use Flink locally on Linux and Windows for my bachelor thesis. I have found these steps for the local setup:
https://ci.apache.org/projects/flink/flink-docs-release-1.1/quickstart/setup_quickstart.html#start-a-local-flink-cluster
When I try this, I only get errors like this:
-bash: bin/start-local.sh: No such file or directory
When I go to the directory containing the start-local.sh file, I get
/flink-1.1.2/flink-dist/src/main/flink-bin/conf/flink-conf.yaml: No such file or directory
Same problem with Windows.
What do I have to change so that it works?
It seems that you have downloaded the sources. You need to download one of the binaries from here: https://flink.apache.org/downloads.html#binaries. Then follow the given instructions for the local setup.
Of course, if you want to build Flink from source, use this guide: https://github.com/apache/flink#building-apache-flink-from-source.

appcfg.cmd java version; 1.7 installed; 1.6 in path; tells me it needs 1.6 to upload

I am trying to use this command to deploy my application to appspot.google.com:
c:\a\appeng\bin\appcfg.cmd --use_java7 update c:\a\u3e
It generates the error message:
C:\a>c:\a\appeng\bin\appcfg.cmd --use_java7 update c:\a\u3e
Registry key 'Software\JavaSoft\Java Runtime Environment\CurrentVersion'
has value '1.7', but '1.6' is required.
Error: could not find java.dll
Error: could not find Java SE Runtime Environment.
I tried setting the path to use the Java 1.6 SDK we downloaded, but that did not help or change anything.
The web resources talk about what version of Java is used by the application once it is running on Google's servers; I did not see anything about the Java version required for the upload process, including developers.google.com/appengine/docs/java/gettingstarted/uploading and developers.google.com/appengine/docs/java/tools/uploadinganapp#Command_Line_Arguments, as well as searching this site specifically and checking Google.
Can I deploy an application from the computer in my house without uninstalling the Java 1.7 I use for other purposes?
Thank you for looking at this question. I resolved the problem. It was not related to the Google application development server; it was a difficult-to-track-down path problem with the directory where the Java executables were kept.

Using JSVC to daemonize a Java app packaged with the Maven One-Jar Plugin

Here is the problem:
I have packaged my Java application into a single JAR using the Maven One-Jar plugin.
Now I want to run the application as a Unix Daemon using JSVC, i.e. Apache Commons Daemon.
I am using JSVC as follows (which works for Jars made with the Maven assembly plugin, etc):
jsvc -user $USER -home $HOME -pidfile $PID_PATH -cp $PATH_TO_ONE_JAR my.package.MyClass
The error is this:
jsvc.exec error: Cannot find daemon loader org/apache/commons/daemon/support/DaemonLoader
jsvc.exec error: Service exit with a return value of 1
Does anyone know if it is even possible to use JSVC and One-Jar together, since One-Jar uses a custom class loader? The jar runs just fine when I run java -jar my-one-jar.jar.
What can be done?
Thank you for any insight!
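As background, the class handed to jsvc (my.package.MyClass in the command above) normally implements the Commons Daemon lifecycle interface, roughly like the sketch below; the class name and body are made up for illustration, and commons-daemon.jar has to be reachable on the -cp path alongside the application classes.

import org.apache.commons.daemon.Daemon;
import org.apache.commons.daemon.DaemonContext;

// Minimal Commons Daemon lifecycle implementation (illustrative only).
public class MyDaemon implements Daemon {

    private Thread worker;
    private volatile boolean running;

    @Override
    public void init(DaemonContext context) {
        // Read context.getArguments(), open resources, etc.
    }

    @Override
    public void start() {
        running = true;
        worker = new Thread(new Runnable() {
            public void run() {
                while (running) {
                    // ... application work goes here ...
                }
            }
        });
        worker.start();
    }

    @Override
    public void stop() throws Exception {
        running = false;
        worker.join();
    }

    @Override
    public void destroy() {
        // Release any remaining resources.
    }
}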
I had to add all the dependency JARs to the classpath option of jsvc. It seems jsvc doesn't use the JARs inside another JAR.
If you use the (poorly documented) Maven Shade plugin instead of One-Jar (they achieve similar results), it should solve your problem. It unpacks the dependent JARs and stores the class files directly in the fat JAR (rather than having JARs within the JAR). I have used it to create an executable JAR for running under JSVC with some success.
Of course, things are seldom as simple as they sound. With the Shade plugin, you may have to do some work to relocate classes when there are conflicts in your dependency tree, or use resource transformers to handle your non-Java resource files. But hopefully not.
(Of course Mkyong.com has a guide on this)
