[Zeppelin] Cannot call methods on a stopped SparkContext - apache-zeppelin

When we use Spark through the Zeppelin Spark interpreter in shared per-note mode, we sometimes get the following error:
WARN [2019-11-11 13:37:29,610] ({pool-2-thread-16} NotebookServer.java[afterStatusChange]:2302) - Job 20191028-172705_1731645157 is finished, status: ERROR, exception: null, result: %text java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext.
This stopped SparkContext was created at:
org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:925)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.apache.zeppelin.spark.BaseSparkScalaInterpreter.spark2CreateContext(BaseSparkScalaInterpreter.scala:233)
org.apache.zeppelin.spark.BaseSparkScalaInterpreter.createSparkContext(BaseSparkScalaInterpreter.scala:165)
org.apache.zeppelin.spark.SparkScala211Interpreter.open(SparkScala211Interpreter.scala:87)
org.apache.zeppelin.spark.NewSparkInterpreter.open(NewSparkInterpreter.java:102)
org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:62)
org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:617)
org.apache.zeppelin.scheduler.Job.run(Job.java:188)
org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:140)
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
java.util.concurrent.FutureTask.run(FutureTask.java:266)
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
The currently active SparkContext was created at:
(No active SparkContext.
We have reproduced this error with the following steps:
create two notes, one is note A, the other is note B
for notes A and B, run some paragraphs and see that they all succeed
now delete note A and then run the next paragraph in B; the error happens
How can we solve this problem? Is this an issue in the Zeppelin Spark interpreter itself?

I have run into the same error message. From your description I realized that I was running two Zeppelin instances with the same interpreter, pointing to the same data sources (but different notebooks). On my side, I solved the issue by closing the second Zeppelin instance and restarting my interpreter.
I would suggest investigating your interpreter's configuration (global, per user, or per note) to see whether that is the cause of the problem.
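For what it's worth, the underlying failure mode is easy to reproduce outside Zeppelin. Below is a minimal sketch in plain Spark (the class name and master setting are illustrative assumptions): once any code path stops the shared SparkContext, every later caller in the same JVM hits this exact exception.
import org.apache.spark.sql.SparkSession;

public class StoppedContextDemo {
    public static void main(String[] args) {
        // One SparkContext shared by everything in this JVM,
        // much like notes sharing one interpreter process.
        SparkSession spark = SparkSession.builder()
                .master("local[*]")
                .appName("stopped-context-demo")
                .getOrCreate();

        // What deleting note A effectively does: stop the shared context.
        spark.sparkContext().stop();

        // Any later use ("note B") now fails with
        // java.lang.IllegalStateException:
        //   Cannot call methods on a stopped SparkContext.
        spark.range(10).count();
    }
}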

Related

Beam on EMR throws a java.util.ServiceConfigurationError

I have an Apache Beam application (Beam 2.23.0) that I am trying to deploy on AWS EMR (emr-5.30.1) with Flink (1.10.0) preinstalled.
The application runs with no issues when I deploy it on my local Docker Flink cluster. But when I run
flink run -m yarn-cluster -c my_class my_jar.jar
on the master node of the EMR cluster, I get:
java.util.ServiceConfigurationError: com.fasterxml.jackson.databind.Module: Provider com.fasterxml.jackson.module.jaxb.JaxbAnnotationModule not a subtype
at java.util.ServiceLoader.fail(ServiceLoader.java:239)
at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:376)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at com.fasterxml.jackson.databind.ObjectMapper.findModules(ObjectMapper.java:1054)
at org.apache.beam.sdk.options.PipelineOptionsFactory.<clinit>(PipelineOptionsFactory.java:471)
at org.myapp.main(MainApp.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:321)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
The issue seems to be with org.apache.beam.sdk.options.PipelineOptionsFactory.<clinit>(PipelineOptionsFactory.java:471), but I am not clear on what is causing this behaviour.
Can someone please advise what may cause this?
Thank you in advance!
That is probably a classloading issue.
On the EMR Flink EC2 instances, some jars are already present, and these libraries are loaded before your own dependencies. So the version used at runtime is the one provided by EMR, not the one you declare as a dependency in your pom.xml.
There are multiple solutions (see the sketch after this list):
in your pom.xml, use the same version as the one provided by EMR
on the EC2 instance, replace the EMR version with yours
change the order of library loading
Whatever the solution, you need to ship all the required dependencies to Flink, not only the jar that contains your own code.
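To illustrate the "not a subtype" mechanics (a sketch of the general failure mode, not EMR-specific code): ObjectMapper.findModules in the trace relies on java.util.ServiceLoader, and the subtype check fails when com.fasterxml.jackson.databind.Module is visible twice, once from Flink's parent classloader and once from your application jar.
import java.util.ServiceLoader;
import com.fasterxml.jackson.databind.Module;

public class JacksonModuleProbe {
    public static void main(String[] args) {
        // Roughly what ObjectMapper.findModules() does internally.
        // If a provider such as JaxbAnnotationModule implements a Module
        // class loaded by a different classloader than the one used here,
        // ServiceLoader throws ServiceConfigurationError:
        //   "Provider ... not a subtype".
        for (Module m : ServiceLoader.load(Module.class)) {
            System.out.println(m.getModuleName()
                    + " loaded by " + m.getClass().getClassLoader());
        }
    }
}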

ERROR [QueryResource] Cannot get excel for query

I executed a query in Saiku and tried to export it to Excel, but it throws an error page.
Below are the error logs:
10:05:22,885 ERROR [QueryResource] Cannot get excel for query (01976CF4-EB20-DE88-94CA-E8E8F2A74EA5)
java.lang.NullPointerException
at sun.font.FontManager.getDefaultPlatformFont(FontManager.java:3409)
at sun.java2d.SunGraphicsEnvironment$2.run(SunGraphicsEnvironment.java:263)
at java.security.AccessController.doPrivileged(Native Method)
at sun.java2d.SunGraphicsEnvironment.<init>(SunGraphicsEnvironment.java:164)
at sun.awt.X11GraphicsEnvironment.<init>(X11GraphicsEnvironment.java:254)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
Please assist in resolving this.
You need to have the system fonts installed on your box; I'm guessing you're running a headless Linux system?
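A sketch of the usual fix, assuming a headless CentOS/RHEL box (the package names are typical examples, not Saiku-specific): install fontconfig plus at least one font package, and consider forcing headless AWT before any graphics code runs.
public class HeadlessProbe {
    public static void main(String[] args) {
        // Usually fixed at the OS level first, e.g. on CentOS:
        //   yum install fontconfig dejavu-sans-fonts
        // Forcing headless mode early (or -Djava.awt.headless=true in
        // JAVA_OPTS) keeps the JVM away from X11-dependent code paths
        // such as X11GraphicsEnvironment in the trace above.
        System.setProperty("java.awt.headless", "true");
        System.out.println("headless = "
                + java.awt.GraphicsEnvironment.isHeadless());
    }
}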

Running Flink Program on a Remote Cluster

I have a program in Apache Flink. I tested and ran it on my local machine and everything works fine. To run the program on a remote cluster, I made the necessary changes as described on the Apache Flink official website.
I made the following changes:
Replaced
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
with
ExecutionEnvironment env = ExecutionEnvironment.createRemoteEnvironment("taskManagerName", portNo, parallelismNo);
Fixed the necessary paths to read input files and write outputs.
Generated a thin jar out of the program and put the necessary jar libraries into a folder beside my project jar file, myproj.jar.
Copied the data, the jar libraries, and myproj.jar onto the cluster, and ran the following command remotely on the cluster:
java -cp pathToJarLib/* -jar myproj.jar
But I get the error below, and I don't have any clue how to fix it. There are no relevant log files that can aid me in fixing this issue.
Error:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/flink/api/common/functions/MapFunction
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2570)
at java.lang.Class.getMethod0(Class.java:2813)
at java.lang.Class.getMethod(Class.java:1663)
at sun.launcher.LauncherHelper.getMainMethod(LauncherHelper.java:494)
at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:486)
Caused by: java.lang.ClassNotFoundException: org.apache.flink.api.common.functions.MapFunction
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 6 more
Your classpath is obviously not complete. Try submitting via bin/flink run myproj.jar; this sets up the classpath correctly.
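One detail worth knowing: with java -jar, the -cp option is silently ignored (only the jar's own Class-Path manifest entries are used), so the Flink libraries beside your jar were never on the classpath at all. A submission through the Flink CLI would look roughly like this (host, port, and parallelism are placeholders, not values from your setup):
bin/flink run -m jobManagerHost:6123 -p 4 myproj.jar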

Use Infinite Graph without installing the Product

I am currently writing an InfiniteGraph database scanner where the user can connect to a remote InfiniteGraph instance by providing a *.boot file. I am using the Blueprints implementation of InfiniteGraph, i.e. com.tinkerpop.blueprints.impls.ig.IGGraph.
The code works perfectly when the machine already has InfiniteGraph installed, but fails otherwise. I tried bundling the bin folder from the installation directory within my project, but it still fails.
The code I am using:
IGGraph graph = new IGGraph("D:\\PROPERTY_GRAPH_TEST.boot");
for (Vertex vertex : graph.getVertices()) {
    System.out.println("vertex.toString() = " + vertex.toString());
}
The exception I am getting:
Exception in thread "main" java.lang.RuntimeException: com.objy.db.ObjyRuntimeException: Query setup error: Configuration Error: Unable to find the objectivity.crg file.
at com.tinkerpop.blueprints.impls.ig.IGGraph.<init>(IGGraph.java:67)
at com.globalids.test.TestIGGraph.main(TestIGGraph.java:13)
Caused by: com.objy.db.ObjyRuntimeException: Query setup error: Configuration Error: Unable to find the objectivity.crg file.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
at com.objy.pm.ErrorManager.exceptionToThrow(Unknown Source)
at com.objy.pm.ErrorManager.interpretKernelErrors(Unknown Source)
at com.objy.pm.ErrorManager.checkRegisteredErrors(Unknown Source)
at com.objy.pm.ExternalInterface.localErrorCheck(Unknown Source)
at com.objy.pm.ExternalInterface.checkedLong(Unknown Source)
at com.objy.pm.ExternalInterface.QueryScanItr(Unknown Source)
at com.objy.pm.QueryScanItr.<init>(Unknown Source)
at com.objy.db.internal.Query.execute(Unknown Source)
at com.infinitegraph.impl.ConnectionManager.verifyCompatability(ConnectionManager.java:211)
at com.infinitegraph.impl.ConnectionManager.connect(ConnectionManager.java:98)
at com.infinitegraph.GraphFactory.openGraph(GraphFactory.java:227)
at com.infinitegraph.GraphFactory.open(GraphFactory.java:86)
at com.tinkerpop.blueprints.impls.ig.IGGraph.<init>(IGGraph.java:62)
... 1 more
Can anyone help with this problem?
Thank you in advance.
Thanks for your question. In fact, the distribution requires more than just the bin directory copied over to run successfully. Can you make sure that the etc and plugins directories are each copied into the same directory as your bin directory? InfiniteGraph uses the location of the bin directory to find the other configuration files in the etc and plugins directories (which is where objectivity.crg and the other required files are located). You can email support@objectivity.com if you have any further questions. Thanks!
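In other words, the bundled runtime should look roughly like this (a sketch of the expected layout, not an exhaustive listing):
installDir/
    bin/        (objy binaries; InfiniteGraph resolves the other paths relative to this)
    etc/        (objectivity.crg and other configuration files)
    plugins/    (required plugin files)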

Tomcat 6 startup exception

I'm running Tomcat 6 on CentOS 6 and keep getting the following error in the log upon startup. I have a pretty standard out-of-the-box configuration; it's a new install.
org.apache.catalina.mbeans.ServerLifecycleListener lifecycleEvent
SEVERE: destroyMBeans: Throwable
javax.management.MalformedObjectNameException: Cannot create object name for org.apache.catalina.connector.Connector#d02b2b6
at org.apache.catalina.mbeans.MBeanUtils.createObjectName(MBeanUtils.java:764)
at org.apache.catalina.mbeans.MBeanUtils.destroyMBean(MBeanUtils.java:1416)
at org.apache.catalina.mbeans.ServerLifecycleListener.destroyMBeans(ServerLifecycleListener.java:678)
at org.apache.catalina.mbeans.ServerLifecycleListener.destroyMBeans(ServerLifecycleListener.java:1005)
at org.apache.catalina.mbeans.ServerLifecycleListener.destroyMBeans(ServerLifecycleListener.java:971)
at org.apache.catalina.mbeans.ServerLifecycleListener.lifecycleEvent(ServerLifecycleListener.java:154)
at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119)
at org.apache.catalina.core.StandardServer.stop(StandardServer.java:748)
at org.apache.catalina.startup.Catalina.stop(Catalina.java:643)
at org.apache.catalina.startup.Catalina.start(Catalina.java:618)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
According to this Tomcat 6 bug report:
https://issues.apache.org/bugzilla/show_bug.cgi?id=48612
it is a bug in 6.0.24 (the current version for CentOS 6), fixed in subsequent versions. We'll have to wait for the fix to trickle down.
Whether there is a workaround is not specified. Whether it is actually SEVERE is not specified either... Too bad.
If you install the tomcat6-webapps package (yum install tomcat6-webapps), the error disappears; it is probably a missing dependency.
