I am getting the following exception while trying to deploy an app on App Engine:
mvn gcloud:deploy [Error gcloud app command exit code is: 2]
What does exit code 2 mean here?
Stack trace:
[ERROR] Failed to execute goal com.google.appengine:gcloud-maven-plugin:2.0.9.121.v20160815:deploy (default-cli) on project userprofilev2: Error: gcloud app command exit code is: 2 -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal com.google.appengine:gcloud-maven-plugin:2.0.9.121.v20160815:deploy (default-cli) on project userprofilev2: Error: gcloud app command exit code is: 2
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:212)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:863)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:199)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: org.apache.maven.plugin.MojoExecutionException: Error: gcloud app command exit code is: 2
at com.google.appengine.gcloudapp.AbstractGcloudMojo.startCommand(AbstractGcloudMojo.java:463)
at com.google.appengine.gcloudapp.GCloudAppDeploy.execute(GCloudAppDeploy.java:43)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207)
... 20 more
Upgrading the gcloud version using the 'gcloud components update' command fixed my problem.
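For reference, a minimal sketch of that fix as shell commands (gcloud components update and gcloud version are standard Cloud SDK commands; the deploy goal is the one from the question):

# Upgrade the locally installed Cloud SDK components
gcloud components update
# Confirm which SDK version is now installed
gcloud version
# Retry the deployment with the updated SDK
mvn gcloud:deploy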
Related
I am trying out the Flink example, as explained in the Flink docs, on a single-node YARN cluster.
As mentioned in this discussion, HADOOP_CONF_DIR is also set as below before executing the YARN command:
export HADOOP_CONF_DIR=/etc/hadoop/conf
On executing the command below:
ubuntu@vrni-platform:~/build-target/flink$ ./bin/flink run-application -t yarn-application ./examples/streaming/TopSpeedWindowing.jar
it fails with the following error:
The program finished with the following exception:
org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't deploy Yarn Application Cluster
at org.apache.flink.yarn.YarnClusterDescriptor.deployApplicationCluster(YarnClusterDescriptor.java:465)
at org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer.run(ApplicationClusterDeployer.java:67)
at org.apache.flink.client.cli.CliFrontend.runApplication(CliFrontend.java:213)
at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1061)
at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1136)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1136)
Caused by: org.apache.flink.yarn.YarnClusterDescriptor$YarnDeploymentException: The YARN application unexpectedly switched to state FAILED during deployment.
Diagnostics from YARN: Application application_1614159836384_0045 failed 1 times (global limit =2; local limit is =1) due to AM Container for appattempt_1614159836384_0045_000001 exited with exitCode: -1000
Failing this attempt.Diagnostics: [2021-02-24 16:19:39.409]File file:/home/ubuntu/.flink/application_1614159836384_0045/flink-dist_2.12-1.12.1.jar does not exist
java.io.FileNotFoundException: File file:/home/ubuntu/.flink/application_1614159836384_0045/flink-dist_2.12-1.12.1.jar does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:442)
at org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:269)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:67)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:414)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:411)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:411)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.doDownloadCall(ContainerLocalizer.java:242)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:235)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:223)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I set the log level to DEBUG, and I do see that flink-dist_2.12-1.12.1.jar is getting copied to /home/ubuntu/.flink/application_1614159836384_0045:
2021-02-24 16:19:37,768 DEBUG org.apache.flink.yarn.YarnApplicationFileUploader [] - Got modification time 1614183577000 from remote path file:/home/ubuntu/.flink/application_1614159836384_0045/TopSpeedWindowing.jar
2021-02-24 16:19:37,769 DEBUG org.apache.flink.yarn.YarnApplicationFileUploader [] - Copying from file:/home/ubuntu/build-target/flink/lib/flink-dist_2.12-1.12.1.jar to file:/home/ubuntu/.flink/application_1614159836384_0045/flink-dist_2.12-1.12.1.jar with replication factor 1
I have placed the entire DEBUG log here.
The NodeManager logs have warnings like the ones below:
2021-02-24 16:36:34,219 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1614159836384_0047
2021-02-24 16:36:34,220 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1614159836384_0047_01_000001
2021-02-24 16:36:34,222 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /var/lib/hadoop-yarn/cache/yarn/nm-local-dir/nmPrivate/container_1614159836384_0047_01_000001.tokens
2021-02-24 16:36:34,222 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Initializing user ubuntu
2021-02-24 16:36:34,224 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Copying from /var/lib/hadoop-yarn/cache/yarn/nm-local-dir/nmPrivate/container_1614159836384_0047_01_000001.tokens to /var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/ubuntu/appcache/application_1614159836384_0047/container_1614159836384_0047_01_000001.tokens
2021-02-24 16:36:34,224 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Localizer CWD set to /var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/ubuntu/appcache/application_1614159836384_0047 = file:/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/ubuntu/appcache/application_1614159836384_0047
2021-02-24 16:36:34,247 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer: Disk Validator: yarn.nodemanager.disk-validator is loaded.
2021-02-24 16:36:34,268 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: { file:/home/ubuntu/.flink/application_1614159836384_0047/flink-dist_2.12-1.12.1.jar, 1614184593000, FILE, null } failed: File file:/home/ubuntu/.flink/application_1614159836384_0047/flink-dist_2.12-1.12.1.jar does not exist
java.io.FileNotFoundException: File file:/home/ubuntu/.flink/application_1614159836384_0047/flink-dist_2.12-1.12.1.jar does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:442)
at org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:269)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:67)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:414)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:411)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:411)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.doDownloadCall(ContainerLocalizer.java:242)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:235)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:223)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
The entire NodeManager logs are here.
Can someone let me know what is going wrong? Does Flink not support a single-node YARN cluster for development?
Flink version: 1.12.1
There was a configuration issue in my setup: hadoop-yarn-nodemanager runs as the yarn user.
ubuntu@vrni-platform:/tmp/flink$ ps -ef | grep nodemanager
yarn 4953 1 2 05:53 ? 00:11:26 /usr/lib/jvm/java-8-openjdk/bin/java -Dproc_nodemanager -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/heap-dumps/yarn -XX:+ExitOnOutOfMemoryError -Dyarn.log.dir=/var/log/hadoop-yarn -Dyarn.log.file=hadoop-yarn-nodemanager-vrni-platform.log -Dyarn.home.dir=/usr/lib/hadoop-yarn -Dyarn.root.logger=INFO,console -Djava.library.path=/usr/lib/hadoop/lib/native -Xmx512m -Dhadoop.log.dir=/var/log/hadoop-yarn -Dhadoop.log.file=hadoop-yarn-nodemanager-vrni-platform.log -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str=yarn -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.yarn.server.nodemanager.NodeManager
I was executing the ./bin/flink command as the ubuntu user, and the yarn user does not have permission to write to ubuntu's home folder in my setup.
ubuntu@vrni-platform:/tmp/flink$ echo ~ubuntu
/home/ubuntu
ubuntu@vrni-platform:/tmp/flink$ echo ~yarn
/var/lib/hadoop-yarn
It appears Flink needs permission to write to the submitting user's home directory to create a .flink folder, even when the job is submitted to YARN. It works fine for me if I run Flink as the yarn user in my setup.
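A minimal sketch of that workaround, assuming sudo access on the node (the job jar and the HADOOP_CONF_DIR value are taken from the question):

# Submit as the yarn user so the .flink staging folder is created under
# a home directory (/var/lib/hadoop-yarn) that the yarn user can write to
sudo -u yarn env HADOOP_CONF_DIR=/etc/hadoop/conf \
    ./bin/flink run-application -t yarn-application \
    ./examples/streaming/TopSpeedWindowing.jar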
Summary: the MapR dependency could not be resolved, and thus the Flink build on the master branch fails.
Failed to execute goal on project flink-mapr-fs: Could not resolve dependencies for project org.apache.flink:flink-mapr-fs:jar:1.10-SNAPSHOT: Failed to collect dependencies at com.mapr.hadoop:maprfs:jar:5.2.1-mapr
MapR has been in shutdown talks as a company; I'm unclear whether they're even still open. What should I do here? I'd like to simply "skip"/ignore the MapR connector, but I'm not sure how to do so.
Details:
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal on project flink-mapr-fs: Could not resolve dependencies for project org.apache.flink:flink-mapr-fs:jar:1.10-SNAPSHOT: Failed to collect dependencies at com.mapr.hadoop:maprfs:jar:5.2.1-mapr
at org.apache.maven.lifecycle.internal.LifecycleDependencyResolver.getDependencies(LifecycleDependencyResolver.java:221)
at org.apache.maven.lifecycle.internal.LifecycleDependencyResolver.resolveProjectDependencies(LifecycleDependencyResolver.java:127)
at org.apache.maven.lifecycle.internal.MojoExecutor.ensureDependenciesAreResolved(MojoExecutor.java:245)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:199)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:863)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:199)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: org.apache.maven.project.DependencyResolutionException: Could not resolve dependencies for project org.apache.flink:flink-mapr-fs:jar:1.10-SNAPSHOT: Failed to collect dependencies at com.mapr.hadoop:maprfs:jar:5.2.1-mapr
at org.apache.maven.project.DefaultProjectDependenciesResolver.resolve(DefaultProjectDependenciesResolver.java:180)
at org.apache.maven.lifecycle.internal.LifecycleDependencyResolver.getDependencies(LifecycleDependencyResolver.java:195)
... 23 more
Caused by: org.eclipse.aether.collection.DependencyCollectionException: Failed to collect dependencies at com.mapr.hadoop:maprfs:jar:5.2.1-mapr
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.collectDependencies(DefaultDependencyCollector.java:291)
at org.eclipse.aether.internal.impl.DefaultRepositorySystem.collectDependencies(DefaultRepositorySystem.java:316)
at org.apache.maven.project.DefaultProjectDependenciesResolver.resolve(DefaultProjectDependenciesResolver.java:172)
... 24 more
Caused by: org.eclipse.aether.resolution.ArtifactDescriptorException: Failed to read artifact descriptor for com.mapr.hadoop:maprfs:jar:5.2.1-mapr
at org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.loadPom(DefaultArtifactDescriptorReader.java:282)
at org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.readArtifactDescriptor(DefaultArtifactDescriptorReader.java:198)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.resolveCachedArtifactDescriptor(DefaultDependencyCollector.java:535)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.getArtifactDescriptorResult(DefaultDependencyCollector.java:519)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.processDependency(DefaultDependencyCollector.java:409)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.processDependency(DefaultDependencyCollector.java:363)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.process(DefaultDependencyCollector.java:351)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.collectDependencies(DefaultDependencyCollector.java:254)
... 26 more
Caused by: org.eclipse.aether.resolution.ArtifactResolutionException: Could not transfer artifact com.mapr.hadoop:maprfs:pom:5.2.1-mapr from/to mapr-releases (https://repository.mapr.com/maven/): sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:444)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts(DefaultArtifactResolver.java:246)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifact(DefaultArtifactResolver.java:223)
at org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.loadPom(DefaultArtifactDescriptorReader.java:267)
... 33 more
Caused by: org.eclipse.aether.transfer.ArtifactTransferException: Could not transfer artifact com.mapr.hadoop:maprfs:pom:5.2.1-mapr from/to mapr-releases (https://repository.mapr.com/maven/): sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at org.eclipse.aether.connector.basic.ArtifactTransportListener.transferFailed(ArtifactTransportListener.java:43)
at org.eclipse.aether.connector.basic.BasicRepositoryConnector$TaskRunner.run(BasicRepositoryConnector.java:355)
at org.eclipse.aether.util.concurrency.RunnableErrorForwarder$1.run(RunnableErrorForwarder.java:67)
at org.eclipse.aether.connector.basic.BasicRepositoryConnector$DirectExecutor.execute(BasicRepositoryConnector.java:581)
at org.eclipse.aether.connector.basic.BasicRepositoryConnector.get(BasicRepositoryConnector.java:249)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.performDownloads(DefaultArtifactResolver.java:520)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:421)
... 36 more
Caused by: org.apache.maven.wagon.TransferFailedException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
First, verify that you have the entry below in "flink/flink-filesystems/flink-mapr-fs/pom.xml":
<profile>
  <id>unsafe-mapr-repo</id>
  <activation>
    <property>
      <name>unsafe-mapr-repo</name>
    </property>
  </activation>
  <repositories>
    <!-- MapR -->
    <repository>
      <id>mapr-releases</id>
      <url>http://repository.mapr.com/maven/</url>
      <snapshots><enabled>false</enabled></snapshots>
      <releases><enabled>true</enabled></releases>
    </repository>
  </repositories>
</profile>
Once that is verified, run the command below from "flink/flink-filesystems/flink-mapr-fs" to make sure this module builds successfully:
mvn clean package -DskipTests -Punsafe-mapr-repo
Once this succeeds, run the same Maven command from "flink/flink-filesystems", and after that you should be good to build from the root of the Flink project.
This ensures that you end up with a successful build of the whole Flink project.
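If you would rather skip the MapR connector entirely, as the question asks, a hedged alternative (not part of the verification steps above) is Maven's reactor-exclusion flag, available since Maven 3.2.1; note this only works if no other module in your build depends on flink-mapr-fs:

# Build from the repository root, excluding the MapR filesystem module
mvn clean install -DskipTests -pl '!flink-filesystems/flink-mapr-fs'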
I'm running Apache Flink 1.4.2 (without bundled Hadoop) on Mac, and I've managed to run the WordCount example. However, when I try to run the Gelly example following the instructions in Running Gelly Examples, I stumble into the following error:
java.lang.ClassNotFoundException: org.apache.flink.graph.generator.random.BlockInfo
How can I fix it?
I first started the cluster with:
./bin/start-cluster.sh
Starting cluster.
[INFO] 1 instance(s) of jobmanager are already running on myhostname.
Starting jobmanager daemon on host myhostname.
[INFO] 1 instance(s) of taskmanager are already running on myhostname.
Starting taskmanager daemon on host myhostname.
Then in another terminal, I run
./bin/flink run examples/gelly/flink-gelly-examples_2.11-1.4.2.jar --algorithm GraphMetrics --order directed --input RMatGraph --type integer --scale 20 --simplify directed --output print
and got the following error:
Cluster configuration: Standalone cluster with JobManager at localhost/127.0.0.1:6123
Using address localhost:6123 to connect to JobManager.
JobManager web interface address http://localhost:8081
Starting execution of program
Submitting job with JobID: 099acb25fa34be1fe63fb47296605f69. Waiting for job completion.
Connected to JobManager at Actor[akka.tcp://flink@localhost:6123/user/jobmanager#57507119] with leader session id 00000000-0000-0000-0000-000000000000.
------------------------------------------------------------
The program finished with the following exception:
org.apache.flink.client.program.ProgramInvocationException: The program execution failed: Failed to submit job 099acb25fa34be1fe63fb47296605f69 (RMatGraph (s20e16d) ⇨ GraphMetrics ⇨ Hash [integer])
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:492)
at org.apache.flink.client.program.StandaloneClusterClient.submitJob(StandaloneClusterClient.java:105)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:456)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:444)
at org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:62)
at org.apache.flink.graph.Runner.execute(Runner.java:452)
at org.apache.flink.graph.Runner.main(Runner.java:507)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:525)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:417)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:396)
at org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:802)
at org.apache.flink.client.CliFrontend.run(CliFrontend.java:282)
at org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1054)
at org.apache.flink.client.CliFrontend$1.call(CliFrontend.java:1101)
at org.apache.flink.client.CliFrontend$1.call(CliFrontend.java:1098)
at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1098)
Caused by: org.apache.flink.runtime.client.JobExecutionException: Failed to submit job 099acb25fa34be1fe63fb47296605f69 (RMatGraph (s20e16d) ⇨ GraphMetrics ⇨ Hash [integer])
at org.apache.flink.runtime.jobmanager.JobManager.org$apache$flink$runtime$jobmanager$JobManager$$submitJob(JobManager.scala:1325)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1.applyOrElse(JobManager.scala:447)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
at org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:38)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
at org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
at org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
at org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
at org.apache.flink.runtime.jobmanager.JobManager.aroundReceive(JobManager.scala:122)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
at akka.actor.ActorCell.invoke(ActorCell.scala:495)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
at akka.dispatch.Mailbox.run(Mailbox.scala:224)
at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.ClassNotFoundException: org.apache.flink.graph.generator.random.BlockInfo
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$ChildFirstClassLoader.loadClass(FlinkUserCodeClassLoaders.java:115)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.flink.util.InstantiationUtil$ClassLoaderObjectInputStream.resolveClass(InstantiationUtil.java:73)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1859)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1745)
at java.io.ObjectInputStream.readClass(ObjectInputStream.java:1710)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1550)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:427)
at java.util.HashSet.readObject(HashSet.java:341)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1158)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2060)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2278)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2202)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2060)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:427)
at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:437)
at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:424)
at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:412)
at org.apache.flink.util.SerializedValue.deserializeValue(SerializedValue.java:58)
at org.apache.flink.runtime.jobmanager.JobManager.org$apache$flink$runtime$jobmanager$JobManager$$submitJob(JobManager.scala:1253)
... 19 more
This particular error shows up when you have copied the jar files to the lib/ folder but didn't stop and start the Flink cluster again after doing the copy (i.e. the cluster was already running before you performed the copy).
If you stop and start the cluster using ./bin/stop-cluster.sh and ./bin/start-cluster.sh after copying the jar files to the lib folder, as mentioned on this page, the problem gets fixed.
That's probably also why you didn't get the error with a new installation: you copied the jar files to the lib folder before starting the Flink cluster.
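A minimal sketch of the full sequence, run from the Flink installation directory (the Gelly jar names are assumed to match the 1.4.2 distribution's opt/ folder; adjust them to your version):

# Put the Gelly libraries on the cluster classpath
cp opt/flink-gelly_2.11-1.4.2.jar lib/
cp opt/flink-gelly-scala_2.11-1.4.2.jar lib/
# Restart so the already-running JobManager and TaskManager pick up the new jars
./bin/stop-cluster.sh
./bin/start-cluster.sh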
I installed a Sonar server on my own PC (Windows XP), and I deployed Sonar on Tomcat with a MySQL database.
I successfully analysed an old project. But with my current project, I get this exception when trying to execute mvn sonar:sonar:
EDIT:
[INFO] [14:39:37.254] Sensor CoberturaSensor done: 218 ms
[INFO] [14:39:37.254] Sensor Maven dependencies...
[INFO] [14:39:41.426] Sensor Maven dependencies done: 4172 ms
[INFO] [14:39:41.848] Execute decorators...
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:53.354s
[INFO] Finished at: Tue May 15 14:39:43 CEST 2012
[INFO] Final Memory: 21M/138M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.codehaus.mojo:sonar-maven-plugin:2.0:sonar (default-cli) on project XXX: Can not execute Sonar: Fail to decorate 'com.myPackage.myClass': 793 -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.codehaus.mojo:sonar-maven-plugin:2.0:sonar (default-cli) on project XXX: Can not execute Sonar
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:217)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:319)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
Caused by: org.apache.maven.plugin.MojoExecutionException: Can not execute Sonar
at org.codehaus.mojo.sonar.Bootstraper.executeMojo(Bootstraper.java:118)
at org.codehaus.mojo.sonar.Bootstraper.start(Bootstraper.java:65)
at org.codehaus.mojo.sonar.SonarMojo.execute(SonarMojo.java:90)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
... 19 more
Caused by: org.sonar.api.utils.SonarException: Fail to decorate 'com.myPackage.myClass'
at org.sonar.batch.phases.DecoratorsExecutor.executeDecorator(DecoratorsExecutor.java:84)
at org.sonar.batch.phases.DecoratorsExecutor.decorateResource(DecoratorsExecutor.java:70)
at org.sonar.batch.phases.DecoratorsExecutor.decorateResource(DecoratorsExecutor.java:63)
at org.sonar.batch.phases.DecoratorsExecutor.decorateResource(DecoratorsExecutor.java:63)
at org.sonar.batch.phases.DecoratorsExecutor.execute(DecoratorsExecutor.java:55)
at org.sonar.batch.phases.Phases.execute(Phases.java:94)
at org.sonar.batch.bootstrap.ProjectModule.doStart(ProjectModule.java:143)
at org.sonar.batch.bootstrap.Module.start(Module.java:83)
at org.sonar.batch.bootstrap.BatchModule.analyze(BatchModule.java:115)
at org.sonar.batch.bootstrap.BatchModule.doStart(BatchModule.java:105)
at org.sonar.batch.bootstrap.Module.start(Module.java:83)
at org.sonar.batch.bootstrap.BootstrapModule.doStart(BootstrapModule.java:109)
at org.sonar.batch.bootstrap.Module.start(Module.java:83)
at org.sonar.batch.bootstrapper.Batch.startBatch(Batch.java:72)
at org.sonar.batch.bootstrapper.Batch.execute(Batch.java:59)
at org.sonar.maven3.SonarMojo.execute(SonarMojo.java:143)
at org.codehaus.mojo.sonar.Bootstraper.executeMojo(Bootstraper.java:113)
... 23 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 793
at org.sonar.plugins.core.timemachine.tracking.HashedSequenceComparator.equals(HashedSequenceComparator.java:34)
at org.sonar.plugins.core.timemachine.ViolationTrackingBlocksRecognizer.computeLengthOfMaximalBlock(ViolationTrackingBlocksRecognizer.java:48)
at org.sonar.plugins.core.timemachine.ViolationTrackingDecorator.mapViolations(ViolationTrackingDecorator.java:130)
at org.sonar.plugins.core.timemachine.ViolationTrackingDecorator.decorate(ViolationTrackingDecorator.java:70)
at org.sonar.batch.phases.DecoratorsExecutor.executeDecorator(DecoratorsExecutor.java:79)
... 39 more
I don't understand where the problem comes from. Does anyone have an answer? Could it come from my POM configuration?
PS: the commands mvn clean install and mvn site work correctly...
You're facing a bug in Sonar 3.0 (see http://jira.codehaus.org/browse/SONAR-3442). We have released Sonar 3.0.1 (see http://www.sonarsource.org/downloads/), which fixes this issue.
I tried to download and install Sonar and then run an analysis with the Maven Sonar plugin.
I tried Sonar version 2.14 first, and now 2.13.1, but I get the same error over and over again.
It seems it is unable to download the JDBC driver from localhost (the Sonar server), because I get this error message: "Fail to download the file: http://localhost:9000/deploy/jdbc-driver.jar (no proxy): Read timed out -> [Help 1]"
But if I go to http://localhost:9000/deploy/jdbc-driver.jar with my browser, it downloads the file without problems, so it seems that the Sonar server is working correctly and that the Maven Sonar plugin is doing something wrong that I cannot figure out.
I have also tried changing Sonar's context root to /sonar, but I get the same error.
I have configured Sonar to use PostgreSQL instead of the embedded Derby database, but still no luck. Accessing Sonar from a browser works like a charm, but I cannot run mvn sonar:sonar, so at this point my Sonar installation is pretty useless... :(
Here is the stack trace from running Maven with the -e option:
[ERROR] Failed to execute goal org.codehaus.mojo:sonar-maven-plugin:2.0:sonar (default-cli) on project betterfarmer-engine: Can not execute Sonar: Fail to download the file: http://localhost:9000/deploy/jdbc-driver.jar (no proxy): Read timed out -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.codehaus.mojo:sonar-maven-plugin:2.0:sonar (default-cli) on project betterfarmer-engine: Can not execute Sonar
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:217)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:319)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
Caused by: org.apache.maven.plugin.MojoExecutionException: Can not execute Sonar
at org.codehaus.mojo.sonar.Bootstraper.executeMojo(Bootstraper.java:118)
at org.codehaus.mojo.sonar.Bootstraper.start(Bootstraper.java:65)
at org.codehaus.mojo.sonar.SonarMojo.execute(SonarMojo.java:90)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
... 19 more
Caused by: org.sonar.api.utils.SonarException: Fail to download the file: http://localhost:9000/deploy/jdbc-driver.jar (no proxy)
at org.sonar.api.utils.HttpDownloader.download(HttpDownloader.java:130)
at org.sonar.batch.bootstrap.ArtifactDownloader.downloadJdbcDriver(ArtifactDownloader.java:58)
at org.sonar.batch.bootstrap.JdbcDriverHolder.<init>(JdbcDriverHolder.java:43)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
at org.picocontainer.injectors.AbstractInjector.newInstance(AbstractInjector.java:147)
at org.picocontainer.injectors.ConstructorInjector$1.run(ConstructorInjector.java:319)
at org.picocontainer.injectors.AbstractInjector$ThreadLocalCyclicDependencyGuard.observe(AbstractInjector.java:274)
at org.picocontainer.injectors.ConstructorInjector.getComponentInstance(ConstructorInjector.java:341)
at org.picocontainer.injectors.AbstractInjectionFactory$LifecycleAdapter.getComponentInstance(AbstractInjectionFactory.java:56)
at org.picocontainer.behaviors.AbstractBehavior.getComponentInstance(AbstractBehavior.java:64)
at org.picocontainer.behaviors.Stored.getComponentInstance(Stored.java:91)
at org.picocontainer.DefaultPicoContainer.getInstance(DefaultPicoContainer.java:689)
at org.picocontainer.DefaultPicoContainer.getComponent(DefaultPicoContainer.java:638)
at org.picocontainer.DefaultPicoContainer.getComponent(DefaultPicoContainer.java:663)
at org.sonar.api.platform.ComponentContainer.getComponentByType(ComponentContainer.java:113)
at org.sonar.batch.bootstrap.Module.getComponentByType(Module.java:131)
at org.sonar.batch.bootstrap.BootstrapModule.configure(BootstrapModule.java:63)
at org.sonar.batch.bootstrap.Module.init(Module.java:49)
at org.sonar.batch.bootstrap.Module.init(Module.java:41)
at org.sonar.batch.Batch.<init>(Batch.java:37)
at org.sonar.maven3.SonarMojo.executeBatch(SonarMojo.java:143)
at org.sonar.maven3.SonarMojo.execute(SonarMojo.java:136)
at org.codehaus.mojo.sonar.Bootstraper.executeMojo(Bootstraper.java:113)
... 23 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:150)
at java.net.SocketInputStream.read(SocketInputStream.java:121)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at sun.net.www.MeteredStream.read(MeteredStream.java:134)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:2959)
at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:2953)
at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1262)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:1236)
at org.sonar.api.utils.HttpDownloader.download(HttpDownloader.java:125)
... 48 more
[ERROR]
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Your Sonar is probably not correctly installed.
Don't you have any errors in the server log?
Are you able to browse the web application at http://localhost:9000?
I'd suggest starting with a fresh install (in a different folder from the previous one) and checking that you don't have environment properties pointing to the previous install location before starting the new Sonar.
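To narrow down where the timeout comes from, a quick diagnostic sketch, assuming curl is available on the machine running Maven (the URL is the one from the error message):

# Time the download outside of Maven; a slow or stalled response here points
# at the server or network rather than at the Sonar Maven plugin
curl -o /dev/null -w "HTTP %{http_code}, total %{time_total}s\n" http://localhost:9000/deploy/jdbc-driver.jar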