zeppelin | 0.8.0 | Disable Helium - apache-zeppelin

We are running Zeppelin in Docker containers in a locked-down enterprise environment. When Zeppelin starts, it tries to connect to AWS, times out after a while, but starts successfully. The log trace is below:
INFO [2018-09-03 14:26:25,131] ({main} Notebook.java[<init>]:128) - Notebook indexing finished: 0 indexed in 0s
INFO [2018-09-03 14:26:25,133] ({main} Helium.java[loadConf]:103) - Add helium local registry /opt/zeppelin-0.8.0/helium
INFO [2018-09-03 14:26:25,134] ({main} Helium.java[loadConf]:100) - Add helium online registry https://s3.amazonaws.com/helium-package/helium.json
WARN [2018-09-03 14:26:25,138] ({main} Helium.java[loadConf]:111) - /opt/zeppelin-0.8.0/conf/helium.json does not exists
ERROR [2018-09-03 14:28:32,864] ({main} HeliumOnlineRegistry.java[getAll]:80) - Connect to s3.amazonaws.com:443 [s3.amazonaws.com/54.231.81.59] failed: Connection timed out
INFO [2018-09-03 14:28:33,840] ({main} ContextHandler.java[doStart]:744) - Started o.e.j.w.WebAppContext#ef9296d{/,file:/opt/zeppelin-0.8.0/webapps/webapp/,AVAILABLE}{/opt/zeppelin-0.8.0/zeppelin-web-0.8.0.war}
INFO [2018-09-03 14:28:33,846] ({main} AbstractConnector.java[doStart]:266) - Started ServerConnector#1b1c538d{HTTP/1.1}{0.0.0.0:9991}
INFO [2018-09-03 14:28:33,847] ({main} Server.java[doStart]:379) - Started #145203ms
We have no use case for Helium (as of now), and the delay in the Zeppelin restart affects us. Is there a way we can disable this dependency on Helium?
Thanks!

There is PR 3082 ([ZEPPELIN-3636] Add timeout for s3 amazon bucket endpoint), which adds a timeout so Zeppelin does not wait on Amazon.
The PR was merged to master and may be backported to branch-0.8.
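Until that fix lands in a 0.8.x release, a possible workaround is to drop the online registry from the list Zeppelin loads at startup. This is a sketch that assumes your build honors the `zeppelin.helium.registry` property in `conf/zeppelin-site.xml` (check your `zeppelin-site.xml.template` to confirm the property name exists in your version):

```xml
<!-- Sketch of a workaround: point the Helium registry at the local
     directory only, so the S3 online registry is never contacted. -->
<property>
  <name>zeppelin.helium.registry</name>
  <value>helium</value>
</property>
```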

Related

Unable to start DSE, getting "Class not found: org/apache/lucene/uninverting/FieldCache"

DSE server version: 6.8
Followed the installation steps as per the DataStax documentation (tarball installation).
Startup command: bin/dse cassandra -s (we need the Search feature, so Solr is enabled as well)
Error while executing the start command:
WARN [main] 2022-03-01 19:05:59,855 DatabaseDescriptor.java:1531 - JMX is not enabled to receive remote connections. Please see cassandra-env.sh for more info.
INFO [main] 2022-03-01 19:05:59,857 DseDelegateSnitch.java:39 - Setting my workloads to [Cassandra, Search]
INFO [main] 2022-03-01 19:05:59,904 YamlConfigurationLoader.java:77 - Configuration location: file:/Users/rajamani/repositories/cassandra/dse-6.8.20/resources/cassandra/conf/cassandra.yaml
INFO [main] 2022-03-01 19:05:59,912 DseDelegateSnitch.java:41 - Initialized DseDelegateSnitch with workloads [Cassandra, Search], delegating to com.datastax.bdp.snitch.DseSimpleSnitch
INFO [main] 2022-03-01 19:06:00,049 YamlConfigurationLoader.java:77 - Configuration location: file:/Users/rajamani/repositories/cassandra/dse-6.8.20/resources/cassandra/conf/cassandra.yaml
INFO [main] 2022-03-01 19:06:01,154 AuthConfig.java:125 - System keyspaces filtering not enabled.
INFO [main] 2022-03-01 19:06:01,155 IAuditLogger.java:136 - Audit logging is disabled
WARN [main] 2022-03-01 19:06:01,215 DisabledTPCBackpressureController.java:20 - TPC backpressure is disabled. NOT RECOMMENDED.
INFO [main] 2022-03-01 19:06:01,216 TPC.java:137 - Created 9 NIO event loops (with I/O ratio set to 50).
INFO [main] 2022-03-01 19:06:01,239 TPC.java:144 - Created 1 TPC timers due to configured ratio of 5.
INFO [main] 2022-03-01 19:06:01,524 DseConfig.java:372 - CQL slow log is enabled
INFO [main] 2022-03-01 19:06:01,526 DseConfig.java:373 - CQL system info tables are not enabled
INFO [main] 2022-03-01 19:06:01,526 DseConfig.java:374 - Resource level latency tracking is not enabled
INFO [main] 2022-03-01 19:06:01,526 DseConfig.java:375 - Database summary stats are not enabled
INFO [main] 2022-03-01 19:06:01,526 DseConfig.java:376 - Cluster summary stats are not enabled
INFO [main] 2022-03-01 19:06:01,526 DseConfig.java:377 - Histogram data tables are not enabled
INFO [main] 2022-03-01 19:06:01,528 DseConfig.java:378 - User level latency tracking is not enabled
INFO [main] 2022-03-01 19:06:01,529 DseConfig.java:380 - Spark cluster info tables are not enabled
INFO [main] 2022-03-01 19:06:01,531 DseConfig.java:420 - Cql solr query paging is: off
INFO [main] 2022-03-01 19:06:01,535 DseUtil.java:324 - /proc/cpuinfo is not available, defaulting to 1 thread per CPU core...
INFO [main] 2022-03-01 19:06:01,536 DseConfig.java:424 - This instance appears to have 1 thread per CPU core and 10 total CPU threads.
INFO [main] 2022-03-01 19:06:01,538 DseConfig.java:441 - Server ID:F4-D4-88-66-17-8D
ERROR [main] 2022-03-01 19:06:02,024 DseModule.java:114 - Class not found: org/apache/lucene/uninverting/FieldCache. Exiting...
This particular class exists as part of solr-core.
Does the DSE server not include the Solr bundle? (Even after placing the jar under the Solr lib directory, this particular error occurs.)
Can you please assist in resolving the issue?
The error is most likely a symptom of another problem. For example, it's quite common to get "class not found" exceptions when using Java 11 with Cassandra. DataStax Enterprise 6.8 is compatible with Cassandra 3.11, which only supports Java 8.
For what it's worth, Java 11 support was only added to Cassandra 4.0 (CASSANDRA-16894). Older versions of Cassandra only work with Java 8.
Going back to your original question, we need a bit more information to investigate the issue but our ability to help you in a Q&A forum is limited. Please log a ticket with DataStax Support and one of our engineers will advise you on what diagnostic info is required and the next steps. Cheers!
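One quick thing to check before filing the ticket is which JDK is on the node's PATH. A sketch for a Linux host; the JDK path below is only an example and will differ on your machine:

```shell
# DSE 6.8 requires Java 8; a newer JDK commonly surfaces as
# "class not found" or reflection errors at startup
java -version 2>&1 | head -1

# If it reports 11 or later, point DSE at a Java 8 JDK before starting it
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64   # example path
export PATH="$JAVA_HOME/bin:$PATH"
bin/dse cassandra -s
```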

Issue in writing data from Apache Kafka to text file

My development environment: Windows 10 Enterprise Edition, 16 GB RAM, 2.81 GHz, 64-bit OS. I installed VirtualBox and imported an Ubuntu image into it. Within Ubuntu I installed the Confluent CLI (https://github.com/confluentinc/confluent-cli) to run Kafka, ZooKeeper, and the other services.
Scenario: I want to write data from an Apache Kafka topic to a text file using the file sink connector, without writing any code. I am following this guide:
http://bigdatums.net/2017/06/22/writing-data-from-apache-kafka-to-text-file/
Steps accomplished successfully so far:
Able to run Ubuntu image with in Virtual Box.
Able to run Confluent CLI.
Able to bring up Confluent Kafka, ZooKeeper and other services using bin/confluent start command.
Able to create Topics within Confluent CLI
Then I try to run the following command to read the messages from the Kafka topic:
osboxes#osboxes:~/ganesh/confluent-5.1.0$ bin/connect-standalone /home/osboxes/ganesh/confluent-5.1.0/etc/kafka/csx-connect-standalone.properties /home/osboxes/ganesh/confluent-5.1.0/etc/kafka/csx-connect-file-sink.properties
Property configuration details below
connect-file-sink.properties (details)
name=local-file-sink
connector.class=FileStreamSink
tasks.max=1
file=/home/osboxes/ganesh/ptc/messages/output/trainstartevent/MBCDTSKB02.json
topics=TrainStartEvent
connect-file-source.properties (details)
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=/home/osboxes/ganesh/ptc/messages/input/trainstartevent/MBCDTSKB02.json
topic=TrainStartEvent
connect-standalone.properties (details)
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.flush.interval.ms=10000
plugin.path=share/java
Actual Error Message
[2019-01-20 21:14:17,413] INFO Started
o.e.j.s.ServletContextHandler#546394ed{/,null,AVAILABLE}
(org.eclipse.jetty.server.handler.ContextHandler:850)
[2019-01-20 21:14:17,428] ERROR Stopping after connector error
(org.apache.kafka.connect.cli.ConnectStandalone:113)
org.apache.kafka.connect.errors.ConnectException: Unable to start REST
server
[2019-01-20 21:13:19,927] INFO Kafka Connect standalone worker
initializing ... (org.apache.kafka.connect.cli.ConnectStandalone:67)
[2019-01-20 21:13:20,021] INFO WorkerInfo values:
jvm.args = -Xms256M, -Xmx2G, -XX:+UseG1GC, -XX:MaxGCPauseMillis=20,
-XX:InitiatingHeapOccupancyPercent=35, -XX:+ExplicitGCInvokesConcurrent, -Djava.awt.headless=true, -Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.ssl=false, -Dkafka.logs.dir=bin/../logs, -Dlog4j.configuration=file:bin/../etc/kafka/connect-log4j.properties
jvm.spec = Oracle Corporation, Java HotSpot(TM) 64-Bit Server VM,
1.8.0_201, 25.201-b09
[2019-01-20 21:14:13,427] WARN The configuration 'plugin.path' was
supplied but isn't a known config.
(org.apache.kafka.clients.admin.AdminClientConfig:287)
[2019-01-20 21:14:13,431] WARN The configuration 'value.converter' was
supplied but isn't a known config.
(org.apache.kafka.clients.admin.AdminClientConfig:287)
[2019-01-20 21:14:13,431] WARN The configuration
'internal.key.converter.schemas.enable' was supplied but isn't a known
config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2019-01-20 21:14:13,432] WARN The configuration 'key.converter' was
supplied but isn't a known config.
(org.apache.kafka.clients.admin.AdminClientConfig:287)
[2019-01-20 21:14:13,433] INFO Kafka version : 2.1.0-cp1
(org.apache.kafka.common.utils.AppInfoParser:109)
[2019-01-20 21:14:13,433] INFO Kafka commitId : 3bce825d5f759863
(org.apache.kafka.common.utils.AppInfoParser:110)
[2019-01-20 21:14:14,047] INFO Kafka cluster ID:
jPHHwv39Riyn1krFQyhYkA (org.apache.kafka.connect.util.ConnectUtils:59)
[2019-01-20 21:14:14,139] INFO Logging initialized #55198ms to
org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log:193)
[2019-01-20 21:14:14,565] INFO Added connector for http://:8083
(org.apache.kafka.connect.runtime.rest.RestServer:119)
[2019-01-20 21:14:14,681] INFO Advertised URI: http://127.0.1.1:8083/
(org.apache.kafka.connect.runtime.rest.RestServer:267)
[2019-01-20 21:14:14,705] INFO Kafka version : 2.1.0-cp1
(org.apache.kafka.common.utils.AppInfoParser:109)
[2019-01-20 21:14:14,705] INFO Kafka commitId : 3bce825d5f759863
(org.apache.kafka.common.utils.AppInfoParser:110)
[2019-01-20 21:14:15,228] INFO JsonConverterConfig values:
converter.type = key
schemas.cache.size = 1000
schemas.enable = false
(org.apache.kafka.connect.json.JsonConverterConfig:279)
[2019-01-20 21:14:15,238] INFO JsonConverterConfig values:
converter.type = value
schemas.cache.size = 1000
schemas.enable = false
(org.apache.kafka.connect.json.JsonConverterConfig:279)
[2019-01-20 21:14:15,251] INFO Kafka Connect standalone worker
initialization took 55315ms
(org.apache.kafka.connect.cli.ConnectStandalone:92)
[2019-01-20 21:14:15,251] INFO Kafka Connect starting
(org.apache.kafka.connect.runtime.Connect:49)
[2019-01-20 21:14:15,256] INFO Herder starting
(org.apache.kafka.connect.runtime.standalone.StandaloneHerder:88)
[2019-01-20 21:14:15,256] INFO Worker starting
(org.apache.kafka.connect.runtime.Worker:172)
[2019-01-20 21:14:15,256] INFO Starting FileOffsetBackingStore with
file /tmp/connect.offsets
(org.apache.kafka.connect.storage.FileOffsetBackingStore:58)
[2019-01-20 21:14:15,258] INFO Worker started
(org.apache.kafka.connect.runtime.Worker:177)
[2019-01-20 21:14:15,259] INFO Herder started
(org.apache.kafka.connect.runtime.standalone.StandaloneHerder:90)
[2019-01-20 21:14:15,259] INFO Starting REST server
(org.apache.kafka.connect.runtime.rest.RestServer:163)
[2019-01-20 21:14:15,565] INFO jetty-9.4.12.v20180830; built:
2018-08-30T13:59:14.071Z; git:
27208684755d94a92186989f695db2d7b21ebc51; jvm 1.8.0_201-b09
(org.eclipse.jetty.server.Server:371)
[2019-01-20 21:14:15,733] INFO DefaultSessionIdManager
workerName=node0 (org.eclipse.jetty.server.session:365)
[2019-01-20 21:14:15,746] INFO No SessionScavenger set, using defaults
(org.eclipse.jetty.server.session:370)
[2019-01-20 21:14:15,748] INFO node0 Scavenging every 600000ms
(org.eclipse.jetty.server.session:149)
Jan 20, 2019 9:14:16 PM org.glassfish.jersey.internal.inject.Providers
checkProviderRuntime
WARNING: A provider
org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource
registered in SERVER runtime does not implement any provider
interfaces applicable in the SERVER runtime. Due to constraint
configuration problems the provider
org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource
will be ignored.
Jan 20, 2019 9:14:16 PM org.glassfish.jersey.internal.inject.Providers
checkProviderRuntime
WARNING: A provider
org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource
registered in SERVER runtime does not implement any provider
interfaces applicable in the SERVER runtime. Due to constraint
configuration problems the provider
org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource
will be ignored.
Jan 20, 2019 9:14:16 PM org.glassfish.jersey.internal.inject.Providers
checkProviderRuntime
WARNING: A provider
org.apache.kafka.connect.runtime.rest.resources.RootResource
registered in SERVER runtime does not implement any provider
interfaces applicable in the SERVER runtime. Due to constraint
configuration problems the provider
org.apache.kafka.connect.runtime.rest.resources.RootResource will be
ignored.
Jan 20, 2019 9:14:17 PM org.glassfish.jersey.internal.Errors logErrors
WARNING: The following warnings have been detected: WARNING: The
(sub)resource method listConnectors in
org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource
contains empty path annotation.
WARNING: The (sub)resource method createConnector in
org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource
contains empty path annotation.
WARNING: The (sub)resource method listConnectorPlugins in
org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource
contains empty path annotation.
WARNING: The (sub)resource method serverInfo in
org.apache.kafka.connect.runtime.rest.resources.RootResource contains
empty path annotation.
[2019-01-20 21:14:17,413] INFO Started
o.e.j.s.ServletContextHandler#546394ed{/,null,AVAILABLE}
(org.eclipse.jetty.server.handler.ContextHandler:850)
[2019-01-20 21:14:17,428] ERROR Stopping after connector error
(org.apache.kafka.connect.cli.ConnectStandalone:113)
org.apache.kafka.connect.errors.ConnectException: Unable to start REST
server
at
org.apache.kafka.connect.runtime.rest.RestServer.start(RestServer.java:214)
at org.apache.kafka.connect.runtime.Connect.start(Connect.java:53)
at
org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:95)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at
org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:339)
at
org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:307)
at
org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
at
org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:235)
at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.server.Server.doStart(Server.java:395)
at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at
org.apache.kafka.connect.runtime.rest.RestServer.start(RestServer.java:212)
... 2 more
[2019-01-20 21:14:17,437] INFO Kafka Connect stopping
(org.apache.kafka.connect.runtime.Connect:65)
[2019-01-20 21:14:17,437] INFO Stopping REST server
(org.apache.kafka.connect.runtime.rest.RestServer:223)
[2019-01-20 21:14:17,442] INFO Stopped
http_8083#1b90fee4{HTTP/1.1,[http/1.1]}{0.0.0.0:8083}
(org.eclipse.jetty.server.AbstractConnector:341)
[2019-01-20 21:14:17,460] INFO node0 Stopped scavenging
(org.eclipse.jetty.server.session:167)
[2019-01-20 21:14:17,493] INFO Stopped
o.e.j.s.ServletContextHandler#546394ed{/,null,UNAVAILABLE}
(org.eclipse.jetty.server.handler.ContextHandler:1040)
[2019-01-20 21:14:17,507] INFO REST server stopped
(org.apache.kafka.connect.runtime.rest.RestServer:241)
[2019-01-20 21:14:17,508] INFO Herder stopping
(org.apache.kafka.connect.runtime.standalone.StandaloneHerder:95)
[2019-01-20 21:14:17,509] INFO Worker stopping
(org.apache.kafka.connect.runtime.Worker:184)
[2019-01-20 21:14:17,510] INFO Stopped FileOffsetBackingStore
(org.apache.kafka.connect.storage.FileOffsetBackingStore:66)
[2019-01-20 21:14:17,522] INFO Worker stopped
(org.apache.kafka.connect.runtime.Worker:205)
[2019-01-20 21:14:17,523] INFO Herder stopped
(org.apache.kafka.connect.runtime.standalone.StandaloneHerder:112)
[2019-01-20 21:14:17,529] INFO Kafka Connect stopped
(org.apache.kafka.connect.runtime.Connect:70)
Caused by: java.net.BindException: Address already in use
Sounds like you ran confluent start, so a Kafka Connect server is already running on port 8083.
You therefore need to use confluent load /home/osboxes/ganesh/confluent-5.1.0/etc/kafka/csx-connect-file-sink.properties, or convert the properties file into JSON and POST it with curl -X POST -d @csx-connect-file-sink.json http://localhost:8083/connectors
See the Kafka Connect REST API documentation.
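If you go the REST route, the properties convert to the JSON shape the Connect REST API expects. A sketch reusing the values from csx-connect-file-sink.properties above (the file name csx-connect-file-sink.json is just an example):

```shell
# JSON equivalent of csx-connect-file-sink.properties: the connector
# name goes at the top level, everything else under "config"
cat > csx-connect-file-sink.json <<'EOF'
{
  "name": "local-file-sink",
  "config": {
    "connector.class": "FileStreamSink",
    "tasks.max": "1",
    "file": "/home/osboxes/ganesh/ptc/messages/output/trainstartevent/MBCDTSKB02.json",
    "topics": "TrainStartEvent"
  }
}
EOF

# Submit it to the worker already listening on port 8083
curl -X POST -H "Content-Type: application/json" \
     --data @csx-connect-file-sink.json \
     http://localhost:8083/connectors
```

Submitting the same config object with PUT to /connectors/local-file-sink/config is also valid and has the advantage of being idempotent.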
Note that to write to a file, you could also do it all from the console consumer:
kafka-console-consumer --from-beginning --property print.key=true --topic x --bootstrap-server localhost:9092 --group to-file >> /tmp/file.txt

Apache Flink Kubernetes Job Arguments

I'm trying to set up a cluster (Apache Flink 1.6.1) with Kubernetes and get the following error when I run a job on it:
2018-10-09 14:29:43.212 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - --------------------------------------------------------------------------------
2018-10-09 14:29:43.214 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Registered UNIX signal handlers for [TERM, HUP, INT]
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.flink.runtime.entrypoint.ClusterConfiguration.<init>(Ljava/lang/String;Ljava/util/Properties;[Ljava/lang/String;)V
at org.apache.flink.runtime.entrypoint.EntrypointClusterConfiguration.<init>(EntrypointClusterConfiguration.java:37)
at org.apache.flink.container.entrypoint.StandaloneJobClusterConfiguration.<init>(StandaloneJobClusterConfiguration.java:41)
at org.apache.flink.container.entrypoint.StandaloneJobClusterConfigurationParserFactory.createResult(StandaloneJobClusterConfigurationParserFactory.java:78)
at org.apache.flink.container.entrypoint.StandaloneJobClusterConfigurationParserFactory.createResult(StandaloneJobClusterConfigurationParserFactory.java:42)
at org.apache.flink.runtime.entrypoint.parser.CommandLineParser.parse(CommandLineParser.java:55)
at org.apache.flink.container.entrypoint.StandaloneJobClusterEntryPoint.main(StandaloneJobClusterEntryPoint.java:153)
My job takes a configuration file (file.properties) as a parameter. This works fine in standalone mode, but apparently the Kubernetes cluster cannot parse it.
job-cluster-job.yaml:
args: ["job-cluster", "--job-classname", "com.test.Abcd", "-Djobmanager.rpc.address=flink-job-cluster",
"-Dparallelism.default=1", "-Dblob.server.port=6124", "-Dquery.server.ports=6125", "file.properties"]
How to fix this?
Update: The job was built for Apache Flink 1.4.2 and this might be the issue; looking into it.
The job was indeed built for 1.4.2, while the class in the error (EntrypointClusterConfiguration.java) was only added in 1.6.1 (https://github.com/apache/flink/commit/ab9bd87e521d19db7c7d783268a3532d2e876a5d#diff-d1169e00afa40576ea8e4f3c472cf858), so the version mismatch caused the issue.
We updated the job's dependencies to the 1.6.1 release and the arguments are now parsed correctly.
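To catch this kind of mismatch before deploying, the job's dependency tree makes the built-against version visible. A sketch, assuming the job is built with Maven:

```shell
# List the Flink artifacts the job jar actually depends on; the versions
# printed here must match the Flink version running in the cluster (1.6.1)
mvn dependency:tree -Dincludes=org.apache.flink
```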

GAE Standard Php, Linux development server error

Using the Google Cloud SDK, as opposed to the App Launcher that is being phased out, I'm trying to set up the PHP development environment on a Linux host. I've got the recommended PHP version installed, and here are the results of attempting to start a server:
INFO 2017-09-24 02:44:31,139 devappserver2.py:115] Skipping SDK update check.
INFO 2017-09-24 02:44:31,305 api_server.py:299] Starting API server at: http://localhost:42195
INFO 2017-09-24 02:44:31,408 dispatcher.py:224] Starting module "default" running at: http://localhost:8080
INFO 2017-09-24 02:44:31,410 admin_server.py:116] Starting admin server at: http://localhost:8000
ERROR 2017-09-24 02:44:32,434 module.py:1588]
INFO 2017-09-24 02:44:33,412 shutdown.py:45] Shutting down.
INFO 2017-09-24 02:44:33,413 api_server.py:940] Applying all pending transactions and saving the datastore
INFO 2017-09-24 02:44:33,413 api_server.py:943] Saving search indexes

Apache Zeppelin - Disconnected status

I have successfully installed and started Zeppelin on an EC2 cluster with Spark 1.3 and Hadoop 2.4.1 on YARN (as described in https://github.com/apache/incubator-zeppelin).
However, Zeppelin starts with a 'Disconnected' status (in the top-right corner).
Per the log, both the Zeppelin port and the WebSocket port (Zeppelin port + 1) started without errors. Neither port is used by any other process, and I can see the Zeppelin process (pid) listening on both. The IP table is blank.
log:
INFO [2015-06-30 03:20:31,294] ({main} QuartzScheduler.java[initialize]:305) - Scheduler meta-data: Quartz Scheduler (v2.2.1) 'DefaultQuartzScheduler' with instanceId 'NON_CLUSTERED'
Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
NOT STARTED.
Currently in standby mode.
Number of jobs executed: 0
Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 10 threads.
Using job-store 'org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.
INFO [2015-06-30 03:20:31,294] ({main} StdSchedulerFactory.java[instantiate]:1339) - Quartz scheduler 'DefaultQuartzScheduler' initialized from default resource file in Quartz package: 'quartz.properties'
INFO [2015-06-30 03:20:31,294] ({main} StdSchedulerFactory.java[instantiate]:1343) - Quartz scheduler version: 2.2.1
INFO [2015-06-30 03:20:31,295] ({main} QuartzScheduler.java[start]:575) - Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.
INFO [2015-06-30 03:20:31,510] ({main} ServerImpl.java[initDestination]:94) - Setting the server's publish address to be /
INFO [2015-06-30 03:20:31,625] ({main} StandardDescriptorProcessor.java[visitServlet]:284) - NO JSP Support for /, did not find org.apache.jasper.servlet.JspServlet
INFO [2015-06-30 03:20:32,374] ({main} AbstractConnector.java[doStart]:338) - Started SocketConnector#0.0.0.0:8083
INFO [2015-06-30 03:20:32,374] ({main} ZeppelinServer.java[main]:108) - Started
INFO [2015-06-30 03:20:30,181] ({main} ZeppelinConfiguration.java[create]:98) - Load configuration from file:/home/ec2-user/incubator-zeppelin/conf/zeppelin-site.xml
INFO [2015-06-30 03:20:30,336] ({main} NotebookServer.java[creatingwebSocketServerLog]:65) - Create zeppelin websocket on 0.0.0.0:8084
INFO [2015-06-30 03:20:30,537] ({main} ZeppelinServer.java[main]:106) - Start zeppelin server
INFO [2015-06-30 03:20:30,539] ({main} Server.java[doStart]:272) - jetty-8.1.14.v20131031
zeppelin-env.sh:
export ZEPPELIN_PORT=8083
export HADOOP_CONF_DIR=/mnt/disk1/hadoop-2.4.1/etc/hadoop
export SPARK_HOME=/mnt/disk2/spark
In zeppelin-site.xml, I have only set the server IP address and port, and -1 for the WebSocket port.
When I access the WebSocket port through Chrome I get "no data received... ERR_EMPTY_RESPONSE" and "Unable to load the webpage because the server sent no data" errors.
Am I missing anything in the installation or configuration? Any help is appreciated. Thanks.
I have some experience using Apache Zeppelin with IE and Chrome. Just add your IP address to the trusted sites in the Internet options, close the browser and restart it. On reopening IE or Chrome, you should see the main page of Apache Zeppelin.
Try setting the property zeppelin.server.allowed.origins to * in conf/zeppelin-site.xml and check whether it's a WebSocket origin issue. Afterwards you can restrict it to the list of origins you actually want to allow.
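For reference, the property goes into conf/zeppelin-site.xml as a standard property block; this is a sketch (the description text is mine), and * should only be left in place while debugging:

```xml
<property>
  <name>zeppelin.server.allowed.origins</name>
  <value>*</value>
  <description>Origins allowed to connect to the REST and WebSocket endpoints</description>
</property>
```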
