How to set up Zeppelin to use SQL Server Windows Authentication - sql-server

I'm trying to get my Zeppelin notebook to use Windows Authentication to connect to MS SQL Server. I've gotten local authentication to work using JDBC, and I've gotten Zeppelin authenticating against Active Directory. This is the final step to get the notebook working. This should be possible, right?
In my interpreter I have:
Properties
zeppelin.jdbc.auth.type = Kerberos
zeppelin.jdbc.integratedSecurity = true
Dependencies
/opt/zeppelin/interpreter/mssql/mssql-jdbc-6.4.0.jre8.jar
But when I try out my notebook I get this error:
java.lang.ClassNotFoundException: org.apache.hadoop.security.UserGroupInformation$AuthenticationMethod
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.zeppelin.jdbc.security.JDBCSecurityImpl.getAuthtype(JDBCSecurityImpl.java:65)
at org.apache.zeppelin.jdbc.security.JDBCSecurityImpl.createSecureConfiguration(JDBCSecurityImpl.java:42)
at org.apache.zeppelin.jdbc.JDBCInterpreter.open(JDBCInterpreter.java:190)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:491)
at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
at org.apache.zeppelin.scheduler.ParallelScheduler$JobRunner.run(ParallelScheduler.java:162)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

zeppelin.jdbc.auth.type = true is not a valid setting.
The supported authentication types are SIMPLE and KERBEROS:
https://zeppelin.apache.org/docs/latest/interpreter/jdbc.html#more-properties
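For what it's worth, the ClassNotFoundException in the trace is Zeppelin's Kerberos path looking for Hadoop's UserGroupInformation class, so the KERBEROS auth type needs the Hadoop client classes on the interpreter classpath. Independently of Zeppelin, the mssql-jdbc driver can do Kerberos-based integrated authentication on its own; a minimal standalone sketch (the host, port, and database name are placeholders, and it assumes a valid Kerberos ticket from kinit or a JAAS/keytab configuration is already in place):

// Hedged sketch: Kerberos integrated auth with mssql-jdbc outside Zeppelin.
// The server and database names below are placeholders, not values from the question.
import java.sql.Connection;
import java.sql.DriverManager;

public class KerberosJdbcCheck {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sqlserver://sqlhost.example.com:1433;"
                + "databaseName=TestDb;"
                + "integratedSecurity=true;"           // Windows/Kerberos auth instead of a SQL login
                + "authenticationScheme=JavaKerberos";  // pure-Java Kerberos, works on non-Windows hosts
        try (Connection conn = DriverManager.getConnection(url)) {
            System.out.println("Connected as " + conn.getMetaData().getUserName());
        }
    }
}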

Related

Could not initialize class sun.security.ssl.SSLExtension when deploying spring boot with mssql database to glassfish 5.1

I am new to Glassfish server and I am trying to deploy a Spring Boot application on Glassfish 5.1.
My Spring Boot application communicates with the database via JPA and Hibernate.
When I compile and deploy with MySQL, deployment goes fine.
However, when I change the database to Microsoft SQL Server (MSSQL), deployment fails with the error below:
java.lang.Exception: java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: org.apache.catalina.LifecycleException: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in class path resource [org/springframework/boot/autoconfigure/orm/jpa/HibernateJpaConfiguration.class]: Invocation of init method failed; nested exception is java.lang.NoClassDefFoundError: Could not initialize class sun.security.ssl.SSLExtension
at com.sun.enterprise.web.WebApplication.start(WebApplication.java:136)
at org.glassfish.internal.data.EngineRef.start(EngineRef.java:122)
at org.glassfish.internal.data.ModuleInfo.start(ModuleInfo.java:291)
at org.glassfish.internal.data.ApplicationInfo.start(ApplicationInfo.java:352)
at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:500)
at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:219)
at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:491)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$2$1.run(CommandRunnerImpl.java:540)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$2$1.run(CommandRunnerImpl.java:536)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$2.execute(CommandRunnerImpl.java:535)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$3.run(CommandRunnerImpl.java:566)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$3.run(CommandRunnerImpl.java:558)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:557)
at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1465)
at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$1300(CommandRunnerImpl.java:110)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1847)
at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1723)
at org.glassfish.admin.rest.resources.admin.CommandResource.executeCommand(CommandResource.java:408)
at org.glassfish.admin.rest.resources.admin.CommandResource.execCommandSimpInMultOut(CommandResource.java:235)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:76)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:148)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:191)
at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:200)
at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:103)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:493)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:415)
at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:104)
at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:277)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:272)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:268)
at org.glassfish.jersey.internal.Errors.process(Errors.java:316)
at org.glassfish.jersey.internal.Errors.process(Errors.java:298)
at org.glassfish.jersey.internal.Errors.process(Errors.java:268)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:289)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:256)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:703)
at org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer.service(GrizzlyHttpContainer.java:377)
at org.glassfish.admin.rest.adapter.JerseyContainerCommandService$3.service(JerseyContainerCommandService.java:174)
at org.glassfish.admin.rest.adapter.RestAdapter.service(RestAdapter.java:179)
at com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call(ContainerMapper.java:463)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:168)
at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:206)
at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:180)
at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:242)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:284)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:201)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:133)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:112)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:539)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:117)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:56)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:137)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:593)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:573)
at java.lang.Thread.run(Thread.java:750)
]]
Server: Glassfish 5.1
Database used: MSSQL
Dependency: runtimeOnly 'com.microsoft.sqlserver:mssql-jdbc'
Java: JDK 8
Properties file config:
spring.datasource.driverClassName=com.microsoft.sqlserver.jdbc.SQLServerDriver
spring.datasource.url=jdbc:sqlserver://127.0.0.1;databaseName=test_db
spring.datasource.username=SA
spring.datasource.password=Somepassword
Kindly assist.
Thanks to the explanation in Stackoverflow-glassfish5, the issue is now resolved.
There is a problem with Glassfish 5's dependency on the Java 8 sun package, which causes a clash during deployment.
To fix the issue, delete the sun folder from grizzly-npn-bootstrap.jar.
The location is: {{glassfish5_home}}/glassfish5/glassfish/modules/endorsed/grizzly-npn-bootstrap.jar.
The procedure is to open the jar with a zip application (e.g. WinRAR), delete the sun folder, and save.
Restart Glassfish 5 and you are good to go.
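On Linux or macOS the same surgery can be done from the command line; a sketch with the standard zip utility, assuming the module path given above:

cd {{glassfish5_home}}/glassfish5/glassfish/modules/endorsed
zip -d grizzly-npn-bootstrap.jar "sun/*"   # strip the conflicting sun classes, then restart Glassfish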

Problem with creating a data pipeline from SQL Server to BigQuery using cloud data fusion

I am trying to create a data pipeline from "SQL Server (on a GCP VM)" to "BigQuery" using Cloud Data Fusion. I have done all of the setup configuration below:
Created a new instance in Cloud Data Fusion.
Added it as a service account in IAM & Admin.
Installed the JDBC driver for the SQL Server plugin.
Created the wrangler and read the data from SQL Server using the SQL Server plugin (in this step I can successfully authenticate to my SQL Server and I can see my SQL table data in it).
Completed the pipeline config by adding BigQuery as a sink.
When I run the pipeline it ends up with a few errors; I have tried a few Google searches but didn't find an answer.
I was able to create a Data Fusion pipeline from "GCS to BigQuery" and it worked fine, but this "SQL Server to BigQuery" pipeline shows an error.
Could anyone please help me with this?
Here are the error details:
2020-01-10 13:00:47,528 - WARN [Thread-95:o.a.h.m.LocalJobRunner#589] - job_local976595976_0001
java.lang.Exception: java.lang.NullPointerException
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:491) ~[hadoop-mapreduce-client-common-2.9.2.jar:na]
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:551) ~[hadoop-mapreduce-client-common-2.9.2.jar:na]
java.lang.NullPointerException: null
at org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat.createDBRecordReader(DataDrivenDBInputFormat.java:281) ~[hadoop-mapreduce-client-core-2.9.2.jar:na]
at io.cdap.plugin.db.batch.source.DataDrivenETLDBInputFormat.createDBRecordReader(DataDrivenETLDBInputFormat.java:124) ~[1578661227434-0/:na]
at org.apache.hadoop.mapreduce.lib.db.DBInputFormat.createRecordReader(DBInputFormat.java:245) ~[hadoop-mapreduce-client-core-2.9.2.jar:na]
at io.cdap.cdap.etl.batch.preview.LimitingInputFormat.createRecordReader(LimitingInputFormat.java:51) ~[cdap-etl-core-6.1.0.jar:na]
at io.cdap.cdap.internal.app.runtime.batch.dataset.input.MultiInputFormat.createRecordReader(MultiInputFormat.java:92) ~[na:na]
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.<init>(MapTask.java:521) ~[hadoop-mapreduce-client-core-2.9.2.jar:na]
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) ~[hadoop-mapreduce-client-core-2.9.2.jar:na]
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) ~[hadoop-mapreduce-client-core-2.9.2.jar:na]
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:270) ~[hadoop-mapreduce-client-common-2.9.2.jar:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_232]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_232]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_232]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_232]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_232]
2020-01-10 13:00:50,841 - ERROR [MapReduceRunner-phase-1:i.c.c.i.a.r.ProgramControllerServiceAdapter#97] - MapReduce Program 'phase-1' failed.
java.lang.IllegalStateException: MapReduce JobId job_local976595976_0001 failed
at com.google.common.base.Preconditions.checkState(Preconditions.java:176) ~[com.google.guava.guava-13.0.1.jar:na]
at io.cdap.cdap.internal.app.runtime.batch.MapReduceRuntimeService.run(MapReduceRuntimeService.java:416) ~[na:na]
at com.google.common.util.concurrent.AbstractExecutionThreadService$1$1.run(AbstractExecutionThreadService.java:52) ~[com.google.guava.guava-13.0.1.jar:na]
at io.cdap.cdap.internal.app.runtime.batch.MapReduceRuntimeService$2$1.run(MapReduceRuntimeService.java:450) [na:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_232]
2020-01-10 13:00:50,842 - ERROR [MapReduceRunner-phase-1:i.c.c.i.a.r.ProgramControllerServiceAdapter#98] - MapReduce program 'phase-1' failed with error: MapReduce JobId job_local976595976_0001 failed. Please check the system logs for more details.
java.lang.IllegalStateException: MapReduce JobId job_local976595976_0001 failed
at com.google.common.base.Preconditions.checkState(Preconditions.java:176) ~[com.google.guava.guava-13.0.1.jar:na]
at io.cdap.cdap.internal.app.runtime.batch.MapReduceRuntimeService.run(MapReduceRuntimeService.java:416) ~[na:na]
at com.google.common.util.concurrent.AbstractExecutionThreadService$1$1.run(AbstractExecutionThreadService.java:52) ~[com.google.guava.guava-13.0.1.jar:na]
at io.cdap.cdap.internal.app.runtime.batch.MapReduceRuntimeService$2$1.run(MapReduceRuntimeService.java:450) [na:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_232]
2020-01-10 13:00:50,916 - ERROR [WorkflowDriver:i.c.c.d.SmartWorkflow#552] - Pipeline '0f084034-33a9-11ea-95f6-8e2648ebe039' failed.
2020-01-10 13:00:51,225 - ERROR [WorkflowDriver:i.c.c.i.a.r.w.WorkflowProgramController#89] - Workflow service 'workflow.default.0f084034-33a9-11ea-95f6-8e2648ebe039.DataPipelineWorkflow.20288f05-33a9-11ea-a505-8e2648ebe039' failed.
java.lang.IllegalStateException: MapReduce JobId job_local976595976_0001 failed
at com.google.common.base.Preconditions.checkState(Preconditions.java:176) ~[com.google.guava.guava-13.0.1.jar:na]
at io.cdap.cdap.internal.app.runtime.batch.MapReduceRuntimeService.run(MapReduceRuntimeService.java:416) ~[na:na]
at com.google.common.util.concurrent.AbstractExecutionThreadService$1$1.run(AbstractExecutionThreadService.java:52) ~[com.google.guava.guava-13.0.1.jar:na]
at io.cdap.cdap.internal.app.runtime.batch.MapReduceRuntimeService$2$1.run(MapReduceRuntimeService.java:450) ~[na:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_232]
As per the issue reported, you are hitting a java.lang.NullPointerException, which reflects the use of a null where an object is required somewhere along the application's run path.
Assuming you've successfully configured the JDBC driver, I would recommend checking the source database properties across your pipeline to find the undefined field. It is likely the Import Query property, which imports data from the specified table by supplying a SELECT query with the appropriate $CONDITIONS placeholder when the number of splits to generate is more than 1:
SELECT * FROM <table> WHERE $CONDITIONS
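As an illustration (the table and column names here are hypothetical, not from the failing pipeline), a split-friendly import query; at runtime the framework replaces $CONDITIONS with a bounding predicate on the configured split-by column for each split:

-- Hypothetical import query: with splits enabled and a numeric split-by
-- column such as order_id, $CONDITIONS expands per split into something
-- like "order_id >= 1 AND order_id < 1000".
SELECT order_id, customer_id, amount
FROM orders
WHERE $CONDITIONS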
UPDATE:
https://issues.cask.co/browse/CDAP-16453
It's a known issue, fixed in 6.1.2
"Same error on MySQL 5.x
Strange enough, if you deploy the pipeline and run it it works...
I'm thinking about decoupling pipelines to have small sql-to-storage and the big pipeline in the outgoing flow"
regards
Virgilio

Spark check fails

I'm trying to set up my interpreter, but I'm lost and I need some help. I set up my environment variables (I think), but when I try to check the Spark version using
%spark sc.version
I get this error:
java.lang.NullPointerException
at org.apache.thrift.transport.TSocket.open(TSocket.java:170)
at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:51)
at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:37)
at org.apache.commons.pool2.BasePooledObjectFactory.makeObject(BasePooledObjectFactory.java:60)
at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:861)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:435)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:363)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.getClient(RemoteInterpreterProcess.java:62)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.callRemoteFunction(RemoteInterpreterProcess.java:133)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.internal_create(RemoteInterpreter.java:165)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.open(RemoteInterpreter.java:132)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getFormType(RemoteInterpreter.java:299)
at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:407)
at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:315)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Is this saying that my Java is wrong? I have my environment variable for Java home set like so:
export JAVA_HOME=/usr/java8
So I would expect the interpreter to use Java 8 and work.
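One thing I'm not sure about is whether Zeppelin sees variables exported in my login shell at all, since it apparently reads conf/zeppelin-env.sh at startup. A sketch of setting it there, with the same path as above:

export JAVA_HOME=/usr/java8            # JVM used by Zeppelin and its interpreter processes
export PATH=$JAVA_HOME/bin:$PATH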
Any help is appreciated!

Zeppelin throws java.lang.NullPointerException when starting SparkInterpreter after adding Spark dependencies via Maven

I'm trying to add Spark dependencies via Maven by specifying groupId:artifactId:version in the Dependencies section of Zeppelin's interpreter console. Once I saved it and executed a Spark paragraph, a java.lang.NullPointerException was thrown; the full log is below:
java.lang.NullPointerException
at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:44)
at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:39)
at org.apache.zeppelin.spark.OldSparkInterpreter.createSparkContext_2(OldSparkInterpreter.java:375)
at org.apache.zeppelin.spark.OldSparkInterpreter.createSparkContext(OldSparkInterpreter.java:364)
at org.apache.zeppelin.spark.OldSparkInterpreter.getSparkContext(OldSparkInterpreter.java:172)
at org.apache.zeppelin.spark.OldSparkInterpreter.open(OldSparkInterpreter.java:740)
at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:61)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
at org.apache.zeppelin.spark.SparkSqlInterpreter.getSparkInterpreter(SparkSqlInterpreter.java:76)
at org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:92)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:103)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:633)
at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I then removed those Maven dependencies, but the exception didn't disappear.
According to the log, it seems Zeppelin is unable to start a Spark interpreter, so I looked at the Spark interpreter log at zeppelin-interpreter-spark-zeppelin-development-cluster-m.log, but nothing was logged when I ran a Spark paragraph. Below is the Zeppelin log found in zeppelin-zeppelin-development-cluster-m.log:
INFO [2018-08-10 13:24:53,036] ({qtp2110245805-15} VFSNotebookRepo.java[save]:196) - Saving note:2DM9MXZGM
INFO [2018-08-10 13:24:53,053] ({pool-2-thread-2} SchedulerFactory.java[jobStarted]:109) - Job 20180810-111509_373986425 started by scheduler org.apache.zeppelin.interpreter.remote.RemoteInterpreter-spark:shared_process-2DM9MXZGM
INFO [2018-08-10 13:24:53,054] ({pool-2-thread-2} Paragraph.java[jobRun]:380) - Run paragraph [paragraph_id: 20180810-111509_373986425, interpreter: sql, note_id: 2DM9MXZGM, user: anonymous]
WARN [2018-08-10 13:24:58,614] ({pool-2-thread-2} NotebookServer.java[afterStatusChange]:2303) - Job 20180810-111509_373986425 is finished, status: ERROR, exception: null, result: %text java.lang.NullPointerException
at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:44)
at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:39)
at org.apache.zeppelin.spark.OldSparkInterpreter.createSparkContext_2(OldSparkInterpreter.java:375)
at org.apache.zeppelin.spark.OldSparkInterpreter.createSparkContext(OldSparkInterpreter.java:364)
at org.apache.zeppelin.spark.OldSparkInterpreter.getSparkContext(OldSparkInterpreter.java:172)
at org.apache.zeppelin.spark.OldSparkInterpreter.open(OldSparkInterpreter.java:740)
at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:61)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
at org.apache.zeppelin.spark.SparkSqlInterpreter.getSparkInterpreter(SparkSqlInterpreter.java:76)
at org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:92)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:103)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:633)
at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I found similar posts facing the same issue, but they were related to Zeppelin being unable to connect to Hive; I don't think that's my issue. I also ran spark-shell and it worked fine.
I'm on Google Dataproc image version 1.3.1.
thanks
I've managed to fix it. The problem is that I added the artifact https://mvnrepository.com/artifact/spotify/spark-bigquery/0.2.2-s_2.11 as an external dependency, and it may have conflicted with Spark or Zeppelin libs. After removing everything in zeppelin.dep.localrepo (which is /usr/lib/zeppelin/local-repo in my case) and restarting Zeppelin, everything is back to normal.
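In case it helps anyone, the cleanup itself was just the following (the local-repo path is from my setup — check your zeppelin.dep.localrepo value — and the restart command will vary by install):

sudo rm -rf /usr/lib/zeppelin/local-repo/*   # clear the cached interpreter dependencies
sudo systemctl restart zeppelin              # or however Zeppelin is restarted on your install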
thanks

prestodb JDBC interpreter raises NullPointerException

I'm trying to connect to prestodb from Zeppelin using the generic JDBC interpreter. Here's the configuration:
presto %jdbc (default)
Option Shared
Properties
default.driver com.facebook.presto.jdbc.PrestoDriver
default.url jdbc:presto://presto:8080
default.user presto
zeppelin.jdbc.concurrent.max_connection 10
zeppelin.jdbc.concurrent.use true
Dependencies
/zeppelin/interpreter/jdbc/presto-jdbc-0.157.jar
I can successfully connect and query using the CLI:
./presto --server presto:8080
But when I try to run any query inside a notebook paragraph I get:
null
class java.lang.NullPointerException
org.apache.zeppelin.jdbc.JDBCInterpreter.getMaxResult(JDBCInterpreter.java:471)
org.apache.zeppelin.jdbc.JDBCInterpreter.executeSql(JDBCInterpreter.java:307)
org.apache.zeppelin.jdbc.JDBCInterpreter.interpret(JDBCInterpreter.java:408)
org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94)
org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:341)
org.apache.zeppelin.scheduler.Job.run(Job.java:176)
org.apache.zeppelin.scheduler.ParallelScheduler$JobRunner.run(ParallelScheduler.java:162)
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
java.util.concurrent.FutureTask.run(FutureTask.java:266)
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
Is there something missing in my JDBC interpreter configuration?
It turns out it was a simple configuration issue. Looking at the code for getMaxResult:
propertiesMap.get(COMMON_KEY).getProperty(MAX_LINE_KEY, MAX_LINE_DEFAULT)
propertiesMap groups the interpreter's properties by prefix, so get(COMMON_KEY) returns null when the interpreter has no common-prefixed property at all, which is exactly the NPE above. So I suspected the bundled JDBC interpreters had some configuration that I had forgotten to include on my presto interpreter. Searching for the keywords COMMON and MAX, there it was:
common.max_count 1000
Adding this property to the presto interpreter solved this problem.
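To see why the missing property turns into an NPE, here is a simplified reconstruction of the failing path (a sketch, not the verbatim Zeppelin source; names are simplified from the stack trace above):

// Hedged reconstruction of JDBCInterpreter.getMaxResult.
import java.util.Map;
import java.util.Properties;

class MaxResultSketch {
    static int getMaxResult(Map<String, Properties> propertiesMap) {
        // With no common.* property on the interpreter, get("common") is null,
        // and the chained getProperty(...) call throws the NullPointerException
        // seen at JDBCInterpreter.getMaxResult in the trace above.
        Properties common = propertiesMap.get("common");
        return Integer.valueOf(common.getProperty("max_count", "1000"));
    }
}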
