I am new to Cassandra. After installing DSE on CentOS, I started the DSE service successfully, but I cannot start the Solr service. I got an error while starting Solr; kindly check the error log below.
[dba@support dse]$ bin/dse cassandra -s
Tomcat: Logging to /home/dba/tomcat
[dba@support dse]$ 18:08:21,873 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [file:/home/Datastax/dse/resources/cassandra/conf/logback.xml]
18:08:22,484 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - debug attribute not set
18:08:22,493 |-INFO in ReconfigureOnChangeFilter{invocationCounter=0} - Will scan for changes in [[/home/Datastax/dse/resources/cassandra/conf/logback.xml]] every 60 seconds.
18:08:22,493 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - Adding ReconfigureOnChangeFilter as a turbo filter
18:08:22,537 |-INFO in ch.qos.logback.classic.joran.action.JMXConfiguratorAction - begin
18:08:22,822 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.rolling.RollingFileAppender]
18:08:22,828 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [FILE]
18:08:22,941 |-INFO in ch.qos.logback.core.rolling.FixedWindowRollingPolicy@77878e70 - Will use zip compression
18:08:22,986 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
18:08:23,037 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[FILE] - Active log file name: /home/Datastax/log/cassandra/system.log
18:08:23,037 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[FILE] - File property is set to [/home/Datastax/log/cassandra/system.log]
18:08:23,039 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[FILE] - openFile(/home/Datastax/log/cassandra/system.log,true) call failed. java.io.FileNotFoundException: /home/Datastax/log/cassandra/system.log (Permission denied)
        at java.io.FileNotFoundException: /home/Datastax/log/cassandra/system.log (Permission denied)
        at at java.io.FileOutputStream.open(Native Method)
        at at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
        at at ch.qos.logback.core.recovery.ResilientFileOutputStream.<init>(ResilientFileOutputStream.java:28)
        at at ch.qos.logback.core.FileAppender.openFile(FileAppender.java:150)
        at at ch.qos.logback.core.FileAppender.start(FileAppender.java:108)
        at at ch.qos.logback.core.rolling.RollingFileAppender.start(RollingFileAppender.java:86)
        at at ch.qos.logback.core.joran.action.AppenderAction.end(AppenderAction.java:96)
        at at ch.qos.logback.core.joran.spi.Interpreter.callEndAction(Interpreter.java:317)
        at at ch.qos.logback.core.joran.spi.Interpreter.endElement(Interpreter.java:196)
        at at ch.qos.logback.core.joran.spi.Interpreter.endElement(Interpreter.java:182)
        at at ch.qos.logback.core.joran.spi.EventPlayer.play(EventPlayer.java:62)
        at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:149)
        at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:135)
        at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:99)
        at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:49)
        at at ch.qos.logback.classic.util.ContextInitializer.configureByResource(ContextInitializer.java:75)
        at at ch.qos.logback.classic.util.ContextInitializer.autoConfig(ContextInitializer.java:150)
        at at org.slf4j.impl.StaticLoggerBinder.init(StaticLoggerBinder.java:85)
        at at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:55)
        at at org.slf4j.LoggerFactory.bind(LoggerFactory.java:142)
        at at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:121)
        at at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:332)
        at at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:284)
        at at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:305)
        at at com.datastax.bdp.server.AbstractDseModule.<clinit>(AbstractDseModule.java:20)
18:08:23,933 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About t
INFO 12:38:25 Load of settings is done.
INFO 12:38:25 CQL slow log is enabled
INFO 12:38:25 CQL system info tables are not enabled
INFO 12:38:25 Resource level latency tracking is not enabled
INFO 12:38:25 Database summary stats are not enabled
INFO 12:38:25 Cluster summary stats are not enabled
INFO 12:38:25 Histogram data tables are not enabled
INFO 12:38:25 User level latency tracking is not enabled
INFO 12:38:25 Spark cluster info tables are not enabled
INFO 12:38:25 Loading settings from file:/home/Datastax/dse/resources/cassandra/conf/cassandra.yaml
INFO 12:38:25 Node configuration:[authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; auto_snapshot=true; batch_size_warn_threshold_in_kb=64; batchlog_replay_throttle_in_kb=1024; cas_contention_timeout_in_ms=1000; client_encryption_options=<REDACTED>; cluster_name=Cassandra Cluster; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_directory=/home/Datastax/commitlog; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_period_in_ms=10000; compaction_throughput_mb_per_sec=16; concurrent_counter_writes=32; concurrent_reads=32; concurrent_writes=32; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; cross_node_timeout=false; data_file_directories=[/home/Datastax/data]; disk_failure_policy=stop; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; endpoint_snitch=com.datastax.bdp.snitch.DseSimpleSnitch; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; incremental_backups=false; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; inter_dc_tcp_nodelay=false; internode_compression=dc; key_cache_save_period=14400; key_cache_size_in_mb=null; listen_address=172.16.16.250; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; memtable_allocation_type=heap_buffers; native_transport_port=9042; num_tokens=256; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_validity_in_ms=2000; range_request_timeout_in_ms=10000; read_request_timeout_in_ms=5000; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_timeout_in_ms=10000; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=172.16.16.250; rpc_keepalive=true; rpc_port=9160; rpc_server_type=sync; saved_caches_directory=/home/Datastax/saved_caches; seed_provider=[{class_name=org.apache.cassandra.locator.SimpleSeedProvider, parameters=[{seeds=172.16.16.250,202.129.198.236}]}]; server_encryption_options=<REDACTED>; snapshot_before_compaction=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=true; storage_port=7000; thrift_framed_transport_size_in_mb=15; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; write_request_timeout_in_ms=2000]
INFO 12:38:25 DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO 12:38:25 Global memtable on-heap threshold is enabled at 479MB
INFO 12:38:25 Global memtable off-heap threshold is enabled at 479MB
INFO 12:38:25 Detected search service is enabled, setting my workload to Search
INFO 12:38:25 Detected search service is enabled, setting my DC to Solr
INFO 12:38:25 Initialized DseDelegateSnitch with workload Search, delegating to com.datastax.bdp.snitch.DseSimpleSnitch
INFO 12:38:26 Loading settings from file:/home/Datastax/dse/resources/cassandra/conf/cassandra.yaml
INFO 12:38:26 Node configuration:[authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer; auto_snapshot=true; batch_size_warn_threshold_in_kb=64; batchlog_replay_throttle_in_kb=1024; cas_contention_timeout_in_ms=1000; client_encryption_options=<REDACTED>; cluster_name=Cassandra Cluster; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_directory=/home/Datastax/commitlog; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_period_in_ms=10000; compaction_throughput_mb_per_sec=16; concurrent_counter_writes=32; concurrent_reads=32; concurrent_writes=32; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; cross_node_timeout=false; data_file_directories=[/home/Datastax/data]; disk_failure_policy=stop; dynamic_snitch_badness_threshold=0.1; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; endpoint_snitch=com.datastax.bdp.snitch.DseSimpleSnitch; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; incremental_backups=false; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; inter_dc_tcp_nodelay=false; internode_compression=dc; key_cache_save_period=14400; key_cache_size_in_mb=null; listen_address=172.16.16.250; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; memtable_allocation_type=heap_buffers; native_transport_port=9042; num_tokens=256; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; permissions_validity_in_ms=2000; range_request_timeout_in_ms=10000; read_request_timeout_in_ms=5000; request_scheduler=org.apache.cassandra.scheduler.NoScheduler; request_timeout_in_ms=10000; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=172.16.16.250; rpc_keepalive=true; rpc_port=9160; rpc_server_type=sync; saved_caches_directory=/home/Datastax/saved_caches; seed_provider=[{class_name=org.apache.cassandra.locator.SimpleSeedProvider, parameters=[{seeds=172.16.16.250,202.129.198.236}]}]; server_encryption_options=<REDACTED>; snapshot_before_compaction=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; start_rpc=true; storage_port=7000; thrift_framed_transport_size_in_mb=15; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; write_request_timeout_in_ms=2000]
INFO 12:38:26 Using Solr-enabled cql queries
INFO 12:38:26 CFS operations enabled
INFO 12:38:27 UserLatencyTracking plugin using 1 async writers
INFO 12:38:27 Initializing user/object io tracker plugin
INFO 12:38:27 Initializing CQL slow query log plugin
INFO 12:38:27 Solr node health tracking is not enabled
INFO 12:38:27 Solr latency snapshots are not enabled
INFO 12:38:27 Solr slow sub-query log is not enabled
INFO 12:38:27 Solr indexing error log is not enabled
INFO 12:38:27 Solr update handler metrics are not enabled
INFO 12:38:27 Solr request handler metrics are not enabled
INFO 12:38:27 Solr index statistics reporting is not enabled
INFO 12:38:27 Solr cache statistics reporting is not enabled
INFO 12:38:27 Initializing Solr slow query log plugin...
INFO 12:38:27 Initializing Solr document validation error log plugin...
INFO 12:38:27 CqlSystemInfo plugin using 1 async writers
INFO 12:38:27 ClusterSummaryStats plugin using 8 async writers
INFO 12:38:27 DbSummaryStats plugin using 8 async writers
INFO 12:38:27 HistogramDataTables plugin using 8 async writers
INFO 12:38:27 ResourceLatencyTracking plugin using 8 async writers
INFO 12:38:27 Setting TTL to 604800
INFO 12:38:27 Setting TTL to 604800
INFO 12:38:27 Setting TTL to 604800
INFO 12:38:27 Setting TTL to 604800
INFO 12:38:27 Setting TTL to 604800
INFO 12:38:27 DSE version: 4.7.0
INFO 12:38:27 Hadoop version: 1.0.4.15
INFO 12:38:27 Hive version: 0.12.0.7
INFO 12:38:27 Pig version: 0.10.1
INFO 12:38:27 Solr version: 4.10.3.0.6
INFO 12:38:27 Sqoop version: 1.4.5.15.1
INFO 12:38:27 Mahout version: 0.8
INFO 12:38:27 Appender version: 3.1.0
INFO 12:38:27 Spark version: 1.2.1.2
INFO 12:38:27 Shark version: 1.1.1
INFO 12:38:27 Hive metastore version: 1
INFO 12:38:27 CQL slow log is enabled
INFO 12:38:27 CQL system info tables are not enabled
INFO 12:38:27 Resource level latency tracking is not enabled
INFO 12:38:27 Database summary stats are not enabled
INFO 12:38:27 Cluster summary stats are not enabled
INFO 12:38:27 Histogram data tables are not enabled
INFO 12:38:27 User level latency tracking is not enabled
INFO 12:38:27 Spark cluster info tables are not enabled
INFO 12:38:27 Using com.datastax.bdp.cassandra.cql3.DseQueryHandler as query handler for native protocol queries (as requested with -Dcassandra.custom_query_handler_class)
INFO 12:38:28 Initializing system.schema_triggers
ERROR 12:38:31 Failed managing commit log segments. Commit disk failure policy is stop; terminating thread
org.apache.cassandra.io.FSWriteError: java.io.FileNotFoundException: /home/Datastax/commitlog/CommitLog-4-1432643911014.log (Permission denied)
Can anyone point me to a way to rectify this error?
This is likely a permissions issue with the parent Datastax directory. On startup, DSE will attempt to create the log file (system.log) and will fail if permissions are not set up correctly on the parent directories. A sketch of the usual fix follows the questions below. Can you provide more info about:
install method (stand-alone installer or tarball)
DSE version
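In the meantime, a minimal sketch of the usual fix, assuming DSE runs as the dba user shown in your prompt and that /home/Datastax is the affected parent directory (adjust the user and path to your install):

# give the DSE runtime user ownership of the log, data and commitlog parents
sudo chown -R dba:dba /home/Datastax
# rwX makes directories traversable and files writable by that owner
sudo chmod -R u+rwX /home/Datastax

After that, retry bin/dse cassandra -s.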
Related
When I apply my Flink job to Kubernetes with ZooKeeper HA, I get the error below.
Our structure is a job cluster: 1 job and 1 task. We want the task to be able to keep working while we delete the job pod.
job 00000000000000000000000000000000 is not in state RUNNING but SCHEDULED instead. Aborting checkpoint
Below is my conf:
high-availability: zookeeper
high-availability.storageDir: file:///opt/flink/data/
high-availability.zookeeper.quorum: zk-0.zk-hs:2181,zk-1.zk-hs:2181,zk-2.zk-hs:2181
high-availability.zookeeper.client.acl: open
high-availability.zookeeper.path.root: /flinkha
high-availability.cluster-id: /flink-job-service-kpi-ofcwy
Below is the error log:
2020-06-19 12:56:02,254 INFO org.apache.flink.runtime.checkpoint.ZooKeeperCompletedCheckpointStore - Recovering checkpoints from ZooKeeper.
2020-06-19 12:56:02,293 INFO org.apache.flink.runtime.checkpoint.ZooKeeperCompletedCheckpointStore - Found 0 checkpoints in ZooKeeper.
2020-06-19 12:56:02,293 INFO org.apache.flink.runtime.checkpoint.ZooKeeperCompletedCheckpointStore - Trying to fetch 0 checkpoints from storage.
2020-06-19 12:56:02,312 INFO org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionService - Starting ZooKeeperLeaderElectionService ZooKeeperLeaderElectionService{leaderPath='/leader/00000000000000000000000000000000/job_manager_lock'}.
2020-06-19 12:56:02,454 INFO org.apache.flink.runtime.jobmaster.JobManagerRunner - JobManager runner for job KPI service job (00000000000000000000000000000000) was granted leadership with session id 9644799b-29cf-4ec5-9e68-5e45261aefb2 at akka.tcp://flink@flink-job-service-kpi-ofcwy:35817/user/jobmanager_0.
2020-06-19 12:56:02,532 INFO org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalService - Starting ZooKeeperLeaderRetrievalService /leader/resource_manager_lock.
2020-06-19 12:56:02,534 INFO org.apache.flink.runtime.jobmaster.JobMaster - Starting execution of job KPI service job (00000000000000000000000000000000) under job master id 9e685e45261aefb29644799b29cf4ec5.
2020-06-19 12:56:02,552 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Job KPI service job (00000000000000000000000000000000) switched from state CREATED to RUNNING.
2020-06-19 12:56:02,575 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: KPI-Kafka-Consumer -> (Sink: Print to Std. Out, Filter -> KPI Query Map -> KPI Unwind -> KPI Custom Map -> KPI filter -> KPI Data Transformation -> Filter) (1/1) (6aeaf74d5a4ee58579e79fa1d3026535) switched from CREATED to SCHEDULED.
2020-06-19 12:56:02,618 INFO org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl - Cannot serve slot request, no ResourceManager connected. Adding as pending request [SlotRequestId{4abf5ce93cd365168228b616bd80ed71}]
2020-06-19 12:56:02,634 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Process -> Flat Map (1/1) (4ac2344f71fb9b6beb4a42fe18cf77a2) switched from CREATED to SCHEDULED.
2020-06-19 12:56:02,636 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Window(TumblingProcessingTimeWindows(60000), ProcessingTimeTrigger, DistinctCountAggregateFunction, PassThroughWindowFunction) -> Map (1/1) (1fbb13647621f5e48db6f7d750c32865) switched from CREATED to SCHEDULED.
2020-06-19 12:56:02,636 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Flat Map -> (Sink: Unnamed, Sink: Print to Std. Out) (1/1) (46396671fce9498171d03a31b1cee968) switched from CREATED to SCHEDULED.
2020-06-19 12:56:02,655 INFO org.apache.flink.runtime.jobmaster.JobMaster - Connecting to ResourceManager akka.tcp://flink@flink-job-service-kpi-ofcwy:35817/user/resourcemanager(82039211570997fc83bd52bafb394879)
2020-06-19 12:56:02,674 INFO org.apache.flink.runtime.jobmaster.JobMaster - Resolved ResourceManager address, beginning registration
2020-06-19 12:56:02,677 INFO org.apache.flink.runtime.jobmaster.JobMaster - Registration at ResourceManager attempt 1 (timeout=100ms)
2020-06-19 12:56:02,692 INFO org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalService - Starting ZooKeeperLeaderRetrievalService /leader/00000000000000000000000000000000/job_manager_lock.
2020-06-19 12:56:02,693 INFO org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - Registering job manager 9e685e45261aefb29644799b29cf4ec5@akka.tcp://flink@flink-job-service-kpi-ofcwy:35817/user/jobmanager_0 for job 00000000000000000000000000000000.
2020-06-19 12:56:02,753 INFO org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - Registered job manager 9e685e45261aefb29644799b29cf4ec5@akka.tcp://flink@flink-job-service-kpi-ofcwy:35817/user/jobmanager_0 for job 00000000000000000000000000000000.
2020-06-19 12:56:02,775 INFO org.apache.flink.runtime.jobmaster.JobMaster - JobManager successfully registered at ResourceManager, leader id: 82039211570997fc83bd52bafb394879.
2020-06-19 12:56:02,775 INFO org.apache.flink.runtime.jobmaster.slotpool.SlotPoolImpl - Requesting new slot [SlotRequestId{4abf5ce93cd365168228b616bd80ed71}] and profile ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, directMemoryInMB=0, nativeMemoryInMB=0, networkMemoryInMB=0} from resource manager.
2020-06-19 12:56:02,777 INFO org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - Request slot with profile ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, directMemoryInMB=0, nativeMemoryInMB=0, networkMemoryInMB=0} for job 00000000000000000000000000000000 with allocation id dcc3d3f3537cd3f1032fe47a0aafe577.
2020-06-19 12:56:40,983 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Checkpoint triggering task Source: KPI-Kafka-Consumer -> (Sink: Print to Std. Out, Filter -> KPI Query Map -> KPI Unwind -> KPI Custom Map -> KPI filter -> KPI Data Transformation -> Filter) (1/1) of job 00000000000000000000000000000000 is not in state RUNNING but SCHEDULED instead. Aborting checkpoint.
2020-06-19 12:57:40,982 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Checkpoint triggering task Source: KPI-Kafka-Consumer -> (Sink: Print to Std. Out, Filter -> KPI Query Map -> KPI Unwind -> KPI Custom Map -> KPI filter -> KPI Data Transformation -> Filter) (1/1) of job 00000000000000000000000000000000 is not in state RUNNING but SCHEDULED instead. Aborting checkpoint.
Solved it by configuring the service; the configuration below was missing:
high-availability.jobmanager.port: 6070
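For anyone hitting the same thing: the point is that the HA JobManager port must be pinned (not random) and exposed on the Kubernetes Service in front of the job pod. A sketch of the Service fragment, where the Service name comes from the logs above and the selector label is an assumption:

apiVersion: v1
kind: Service
metadata:
  name: flink-job-service-kpi-ofcwy
spec:
  selector:
    app: flink-jobmanager    # assumed pod label
  ports:
    - name: ha-jobmanager
      port: 6070             # must match high-availability.jobmanager.port
      targetPort: 6070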
Hi. While working with MapReduceIndexerTool against Solr 4.10 Cloud, the code successfully connects to ZooKeeper, but it fails to fetch aliases.json. Below are the command and stack trace:
command:
hadoop --config /etc/hadoop/conf jar target/search-mr-*-job.jar org.apache.solr.hadoop.MapReduceIndexerTool -D 'mapred.child.java.opts=-Xmx500m' --log4j src/test/resources/log4j.properties --morphline-file /home/impadmin/app_quotes_morphline.conf --output-dir hdfs://impetus-i0056.impetus.co.in:8020/user/impadmin/MapReduceIndexerTool/output2 --zk-host 172.26.45.69:9983/solr --collection app.quotes hdfs://impetus-i0056.impetus.co.in:8020/apps/hive/warehouse/kst
stack trace:
WARNING: Use "yarn jar" to launch YARN applications.
1 [main] INFO org.apache.solr.common.cloud.SolrZkClient - Using default ZkCredentialsProvider
87 [main] INFO org.apache.solr.common.cloud.ConnectionManager - Waiting for client to connect to ZooKeeper
114 [main-EventThread] INFO org.apache.solr.common.cloud.ConnectionManager - Watcher org.apache.solr.common.cloud.ConnectionManager@1568159 name:ZooKeeperConnection Watcher:172.26.45.69:9983/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
115 [main] INFO org.apache.solr.common.cloud.ConnectionManager - Client is connected to ZooKeeper
115 [main] INFO org.apache.solr.common.cloud.SolrZkClient - Using default ZkACLProvider
Exception in thread "main" net.sourceforge.argparse4j.inf.ArgumentParserException: java.lang.IllegalArgumentException: Cannot find expected information for SolrCloud in ZooKeeper: 172.26.45.69:9983/solr
at org.apache.solr.hadoop.MapReduceIndexerTool.verifyZKStructure(MapReduceIndexerTool.java:1418)
at org.apache.solr.hadoop.MapReduceIndexerTool.run(MapReduceIndexerTool.java:716)
at org.apache.solr.hadoop.MapReduceIndexerTool.run(MapReduceIndexerTool.java:681)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.solr.hadoop.MapReduceIndexerTool.main(MapReduceIndexerTool.java:668)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.IllegalArgumentException: Cannot find expected information for SolrCloud in ZooKeeper: 172.26.45.69:9983/solr
at org.apache.solr.hadoop.ZooKeeperInspector.extractDocCollection(ZooKeeperInspector.java:88)
at org.apache.solr.hadoop.ZooKeeperInspector.extractShardUrls(ZooKeeperInspector.java:56)
at org.apache.solr.hadoop.MapReduceIndexerTool.verifyZKStructure(MapReduceIndexerTool.java:1415)
... 10 more
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /aliases.json
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:351)
at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:348)
at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
at org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:348)
at org.apache.solr.hadoop.ZooKeeperInspector.checkForAlias(ZooKeeperInspector.java:164)
at org.apache.solr.hadoop.ZooKeeperInspector.extractDocCollection(ZooKeeperInspector.java:85)
... 12 more
Please help me to identify the root cause.
The issue was with the URL used to reach the Solr configs in ZooKeeper; correcting the URL solved it. With an embedded Solr instance, the data is not placed under a solr application path, but directly under the ZooKeeper root.
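A quick way to confirm which layout you have is to inspect the ZooKeeper tree directly; a sketch using the stock ZooKeeper CLI (the zkCli.sh location is an assumption; 172.26.45.69:9983 is the embedded ZooKeeper from the question):

/path/to/zookeeper/bin/zkCli.sh -server 172.26.45.69:9983
ls /          # if aliases.json / clusterstate.json live at the root, pass --zk-host 172.26.45.69:9983 with no /solr chroot
ls /solr      # if they live here instead, keep the /solr suffix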
When I try to run my flow on an Apache Flink standalone cluster I see the following exception:
java.lang.IllegalStateException: Update task on instance aaa0859f6af25decf1f5fc1821ffa55d @ app-2 - 4 slots - URL: akka.tcp://flink@192.168.38.98:46369/user/taskmanager failed due to:
at org.apache.flink.runtime.executiongraph.Execution$6.onFailure(Execution.java:954)
at akka.dispatch.OnFailure.internal(Future.scala:228)
at akka.dispatch.OnFailure.internal(Future.scala:227)
at akka.dispatch.japi$CallbackBridge.apply(Future.scala:174)
at akka.dispatch.japi$CallbackBridge.apply(Future.scala:171)
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:28)
at scala.concurrent.Future$$anonfun$onFailure$1.apply(Future.scala:136)
at scala.concurrent.Future$$anonfun$onFailure$1.apply(Future.scala:134)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at scala.concurrent.impl.ExecutionContextImpl$AdaptedForkJoinTask.exec(ExecutionContextImpl.scala:121)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka.tcp://flink@192.168.38.98:46369/user/taskmanager#1804590378]] after [10000 ms]
at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:333)
at akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117)
at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:599)
at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:597)
at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(Scheduler.scala:467)
at akka.actor.LightArrayRevolverScheduler$$anon$8.executeBucket$1(Scheduler.scala:419)
at akka.actor.LightArrayRevolverScheduler$$anon$8.nextTick(Scheduler.scala:423)
at akka.actor.LightArrayRevolverScheduler$$anon$8.run(Scheduler.scala:375)
at java.lang.Thread.run(Thread.java:745)
It seems that port 46369 is blocked by the firewall. That is true, because I read the configuration section and opened only these ports:
6121:
comment: Apache Flink TaskManager (Data Exchange)
6122:
comment: Apache Flink TaskManager (IPC)
6123:
comment: Apache Flink JobManager
6130:
comment: Apache Flink JobManager (BLOB Server)
8081:
comment: Apache Flink JobManager (Web UI)
The same ports are described in flink-conf.yaml:
jobmanager.rpc.address: app-1.stag.local
jobmanager.rpc.port: 6123
jobmanager.heap.mb: 1024
taskmanager.heap.mb: 2048
taskmanager.numberOfTaskSlots: 4
taskmanager.memory.preallocate: false
blob.server.port: 6130
parallelism.default: 4
jobmanager.web.port: 8081
state.backend: jobmanager
restart-strategy: none
restart-strategy.fixed-delay.attempts: 2
restart-strategy.fixed-delay.delay: 60s
So, I have two questions:
Is this exception related to blocked ports?
Which ports should I open on the firewall for a standalone Apache Flink cluster?
UPDATE 1
I found a configuration problem in the masters and slaves files (I had skipped the newline separators between the hosts described in these files). I fixed it, and now I see other exceptions:
flink--taskmanager-0-app-1.stag.local.log
flink--taskmanager-0-app-2.stag.local.log
I have 2 nodes:
app-1.stag.local (with running job and task managers)
app-2.stag.local (with running task manager)
As you can see from these logs, the app-1.stag.local task manager can't connect to the other task manager:
java.io.IOException: Connecting the channel failed: Connecting to remote task manager + 'app-2.stag.local/192.168.38.98:35806' has failed. This might indicate that the remote task manager has been lost.
but app-2.stag.local has the port open:
2016-03-18 16:24:14,347 INFO org.apache.flink.runtime.io.network.netty.NettyServer - Successful initialization (took 39 ms). Listening on SocketAddress /192.168.38.98:35806
So I think the problem is related to the firewall, but I don't understand where I can configure this port (or range of ports) in Apache Flink.
I have found the problem: the taskmanager.data.port parameter was set to 0 by default (but the documentation says it should be set to 6121).
So, I set this port in flink-conf.yaml and now everything works fine.
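For reference, a sketch of the change, with an illustrative iptables rule for opening the port on each node (adapt to whatever firewall you actually run):

# flink-conf.yaml: pin the data-exchange port instead of letting Flink pick a random one (0)
taskmanager.data.port: 6121

# on every node, open the data-exchange port (illustrative):
iptables -A INPUT -p tcp --dport 6121 -j ACCEPT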
I downloaded the Zeppelin binary package and started it. However, it's disconnected.
Here is the log:
INFO [2016-01-12 14:37:56,592] ({main} QuartzScheduler.java[initialize]:305) - Scheduler meta-data: Quartz Scheduler (v2.2.1) 'DefaultQuartzScheduler' with instanceId 'NON_CLUSTERED'
Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
NOT STARTED.
Currently in standby mode.
Number of jobs executed: 0
Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 10 threads.
Using job-store 'org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.
INFO [2016-01-12 14:37:56,593] ({main} StdSchedulerFactory.java[instantiate]:1339) - Quartz scheduler 'DefaultQuartzScheduler' initialized from default resource file in Quartz package: 'quartz.properties'
INFO [2016-01-12 14:37:56,593] ({main} StdSchedulerFactory.java[instantiate]:1343) - Quartz scheduler version: 2.2.1
INFO [2016-01-12 14:37:56,593] ({main} QuartzScheduler.java[start]:575) - Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.
INFO [2016-01-12 14:37:56,805] ({main} ServerImpl.java[initDestination]:94) - Setting the server's publish address to be /
INFO [2016-01-12 14:37:56,888] ({main} WebInfConfiguration.java[unpack]:478) - Extract jar:file:/data/users/huser/zeppelin/zeppelin-web-0.5.5-incubating.war!/ to /tmp/jetty-0.0.0.0-8080-zeppelin-web-0.5.5-incubating.war-_-any-/webapp
INFO [2016-01-12 14:37:57,040] ({main} StandardDescriptorProcessor.java[visitServlet]:284) - NO JSP Support for /, did not find org.apache.jasper.servlet.JspServlet
INFO [2016-01-12 14:37:57,941] ({main} AbstractConnector.java[doStart]:338) - Started SelectChannelConnector@0.0.0.0:8080
INFO [2016-01-12 14:37:57,941] ({main} ZeppelinServer.java[main]:108) - Started zeppelin server
I searched and found that port+1 is the websocket port. However, it's not listening:
netstat -na | grep 8080
tcp 0 0 :::8080 :::* LISTEN
netstat -na | grep 8081
nothing...
And the web page returns the error:
WebSocket connection to 'ws://ip:8080/ws' failed: Establishing a tunnel via proxy server failed.
Can anyone help?
Thanks.
Make sure the port-forwarding command is running on your client system the whole time you use Zeppelin:
ssh -i ~/keys/mykey.pem -N -L 8122:xx.xxx.xxx.xxx:8890 hadoop@yy.yyy.yyy.yyy
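Once the tunnel is up, a quick sanity check from the client side (local port 8122 matches the forwarding command above):

curl -I http://localhost:8122/    # should return Zeppelin's HTTP headers through the tunnel

The browser should then be pointed at http://localhost:8122/ so the WebSocket connection also goes through the tunnel instead of the proxy.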
We are trying to import our data into SolrCloud using MapReduce batch indexing. We face a problem at the reduce phase: solr.xml cannot be found. We created a 'twitter' collection, but looking at the logs, after failing to load solr.xml it falls back to the default one and tries to create a 'collection1' SolrCore (failed) and a 'core1' SolrCore (success). I'm not sure whether we need to create our own solr.xml, or where to put it (we tried putting it in several places but it does not seem to be loaded). Below is the log:
2022 [main] INFO org.apache.solr.hadoop.HeartBeater - Heart beat reporting class is org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
2025 [main] INFO org.apache.solr.hadoop.SolrRecordWriter - Using this unpacked directory as solr home: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip
2025 [main] INFO org.apache.solr.hadoop.SolrRecordWriter - Creating embedded Solr server with solrHomeDir: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip, fs: DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_-1828461666_1, ugi=nguyen (auth:SIMPLE)]], outputShardDir: hdfs://master.hadoop:8020/user/nguyen/twitter/outdir/reducers/_temporary/_attempt_201311191613_0320_r_000014_0/part-r-00014
2029 [Thread-64] INFO org.apache.solr.hadoop.HeartBeater - HeartBeat thread running
2030 [Thread-64] INFO org.apache.solr.hadoop.HeartBeater - Issuing heart beat for 1 threads
2083 [main] INFO org.apache.solr.core.SolrResourceLoader - new SolrResourceLoader for directory: '/data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/'
2259 [main] INFO org.apache.solr.hadoop.SolrRecordWriter - Constructed instance information solr.home /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip (/data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip), instance dir /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/, conf dir /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/conf/, writing index to solr.data.dir hdfs://master.hadoop:8020/user/nguyen/twitter/outdir/reducers/_temporary/_attempt_201311191613_0320_r_000014_0/part-r-00014/data, with permdir hdfs://master.hadoop:8020/user/nguyen/twitter/outdir/reducers/_temporary/_attempt_201311191613_0320_r_000014_0/part-r-00014
2266 [main] INFO org.apache.solr.core.ConfigSolr - Loading container configuration from /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/solr.xml
2267 [main] INFO org.apache.solr.core.ConfigSolr - /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/solr.xml does not exist, using default configuration
2505 [main] INFO org.apache.solr.core.CoreContainer - New CoreContainer 696103669
2505 [main] INFO org.apache.solr.core.CoreContainer - Loading cores into CoreContainer [instanceDir=/data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/]
2515 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting socketTimeout to: 0
2515 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting urlScheme to: http://
2515 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting connTimeout to: 0
2515 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting maxConnectionsPerHost to: 20
2516 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting corePoolSize to: 0
2516 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting maximumPoolSize to: 2147483647
2516 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting maxThreadIdleTime to: 5
2516 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting sizeOfQueue to: -1
2516 [main] INFO org.apache.solr.handler.component.HttpShardHandlerFactory - Setting fairnessPolicy to: false
2527 [main] INFO org.apache.solr.client.solrj.impl.HttpClientUtil - Creating new http client, config:maxConnectionsPerHost=20&maxConnections=10000&socketTimeout=0&connTimeout=0&retry=false
2648 [main] INFO org.apache.solr.logging.LogWatcher - Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
2676 [coreLoadExecutor-3-thread-1] INFO org.apache.solr.core.CoreContainer - Creating SolrCore 'collection1' using instanceDir: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/collection1
2677 [coreLoadExecutor-3-thread-1] INFO org.apache.solr.core.SolrResourceLoader - new SolrResourceLoader for directory: '/data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/collection1/'
2691 [coreLoadExecutor-3-thread-1] ERROR org.apache.solr.core.CoreContainer - Failed to load file /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/collection1/solrconfig.xml
2693 [coreLoadExecutor-3-thread-1] ERROR org.apache.solr.core.CoreContainer - Unable to create core: collection1
org.apache.solr.common.SolrException: Could not load config for solrconfig.xml
at org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:596)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:661)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:368)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:360)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.io.IOException: Can't find resource 'solrconfig.xml' in classpath or '/data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/collection1/conf/', cwd=/data/05/mapred/local/taskTracker/nguyen/jobcache/job_201311191613_0320/attempt_201311191613_0320_r_000014_0/work
at org.apache.solr.core.SolrResourceLoader.openResource(SolrResourceLoader.java:322)
at org.apache.solr.core.SolrResourceLoader.openConfig(SolrResourceLoader.java:287)
at org.apache.solr.core.Config.<init>(Config.java:116)
at org.apache.solr.core.Config.<init>(Config.java:86)
at org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:120)
at org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:593)
... 11 more
2695 [coreLoadExecutor-3-thread-1] ERROR org.apache.solr.core.CoreContainer - null:org.apache.solr.common.SolrException: Unable to create core: collection1
at org.apache.solr.core.CoreContainer.recordAndThrow(CoreContainer.java:1158)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:670)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:368)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:360)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: org.apache.solr.common.SolrException: Could not load config for solrconfig.xml
at org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:596)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:661)
... 10 more
Caused by: java.io.IOException: Can't find resource 'solrconfig.xml' in classpath or '/data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/collection1/conf/', cwd=/data/05/mapred/local/taskTracker/nguyen/jobcache/job_201311191613_0320/attempt_201311191613_0320_r_000014_0/work
at org.apache.solr.core.SolrResourceLoader.openResource(SolrResourceLoader.java:322)
at org.apache.solr.core.SolrResourceLoader.openConfig(SolrResourceLoader.java:287)
at org.apache.solr.core.Config.<init>(Config.java:116)
at org.apache.solr.core.Config.<init>(Config.java:86)
at org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:120)
at org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:593)
... 11 more
2697 [main] INFO org.apache.solr.core.CoreContainer - Creating SolrCore 'core1' using instanceDir: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip
2697 [main] INFO org.apache.solr.core.SolrResourceLoader - new SolrResourceLoader for directory: '/data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/'
2751 [main] INFO org.apache.solr.core.SolrConfig - Adding specified lib dirs to ClassLoader
2752 [main] WARN org.apache.solr.core.SolrResourceLoader - Can't find (or read) directory to add to classloader: ../../../contrib/extraction/lib (resolved as: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/../../../contrib/extraction/lib).
2752 [main] WARN org.apache.solr.core.SolrResourceLoader - Can't find (or read) directory to add to classloader: ../../../dist/ (resolved as: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/../../../dist).
2752 [main] WARN org.apache.solr.core.SolrResourceLoader - Can't find (or read) directory to add to classloader: ../../../contrib/clustering/lib/ (resolved as: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/../../../contrib/clustering/lib).
2753 [main] WARN org.apache.solr.core.SolrResourceLoader - Can't find (or read) directory to add to classloader: ../../../dist/ (resolved as: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/../../../dist).
2753 [main] WARN org.apache.solr.core.SolrResourceLoader - Can't find (or read) directory to add to classloader: ../../../contrib/langid/lib/ (resolved as: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/../../../contrib/langid/lib).
2753 [main] WARN org.apache.solr.core.SolrResourceLoader - Can't find (or read) directory to add to classloader: ../../../dist/ (resolved as: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/../../../dist).
2753 [main] WARN org.apache.solr.core.SolrResourceLoader - Can't find (or read) directory to add to classloader: ../../../contrib/velocity/lib (resolved as: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/../../../contrib/velocity/lib).
2753 [main] WARN org.apache.solr.core.SolrResourceLoader - Can't find (or read) directory to add to classloader: ../../../dist/ (resolved as: /data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/../../../dist).
2785 [main] INFO org.apache.solr.update.SolrIndexConfig - IndexWriter infoStream solr logging is enabled
2790 [main] INFO org.apache.solr.core.SolrConfig - Using Lucene MatchVersion: LUCENE_44
2869 [main] INFO org.apache.solr.core.Config - Loaded SolrConfig: solrconfig.xml
2879 [main] INFO org.apache.solr.schema.IndexSchema - Reading Solr Schema from schema.xml
2937 [main] INFO org.apache.solr.schema.IndexSchema - [core1] Schema name=twitter
3352 [main] INFO org.apache.solr.schema.IndexSchema - unique key field: id
3471 [main] INFO org.apache.solr.schema.FileExchangeRateProvider - Reloading exchange rates from file currency.xml
3478 [main] INFO org.apache.solr.schema.FileExchangeRateProvider - Reloading exchange rates from file currency.xml
3635 [main] INFO org.apache.solr.core.HdfsDirectoryFactory - Solr Kerberos Authentication disabled
3636 [main] INFO org.apache.solr.core.JmxMonitoredMap - No JMX servers found, not exposing Solr information with JMX.
3652 [main] INFO org.apache.solr.core.HdfsDirectoryFactory - creating directory factory for path hdfs://master.hadoop:8020/user/nguyen/twitter/outdir/reducers/_temporary/_attempt_201311191613_0320_r_000014_0/part-r-00014/data
3686 [main] INFO org.apache.solr.core.CachingDirectoryFactory - return new directory for hdfs://master.hadoop:8020/user/nguyen/twitter/outdir/reducers/_temporary/_attempt_201311191613_0320_r_000014_0/part-r-00014/data
3711 [main] WARN org.apache.solr.core.SolrCore - [core1] Solr index directory 'hdfs:/master.hadoop:8020/user/nguyen/twitter/outdir/reducers/_temporary/_attempt_201311191613_0320_r_000014_0/part-r-00014/data/index' doesn't exist. Creating new index...
3719 [main] INFO org.apache.solr.core.HdfsDirectoryFactory - creating directory factory for path hdfs://master.hadoop:8020/user/nguyen/twitter/outdir/reducers/_temporary/_attempt_201311191613_0320_r_000014_0/part-r-00014/data/index
3719 [main] INFO org.apache.solr.core.HdfsDirectoryFactory - Number of slabs of block cache [1] with direct memory allocation set to [true]
3720 [main] INFO org.apache.solr.core.HdfsDirectoryFactory - Block cache target memory usage, slab size of [134217728] will allocate [1] slabs and use ~[134217728] bytes
3721 [main] INFO org.apache.solr.store.blockcache.BufferStore - Initializing the 1024 buffers with [8192] buffers.
3740 [main] INFO org.apache.solr.store.blockcache.BufferStore - Initializing the 8192 buffers with [8192] buffers.
3891 [main] INFO org.apache.solr.core.CachingDirectoryFactory - return new directory for hdfs://master.hadoop:8020/user/nguyen/twitter/outdir/reducers/_temporary/_attempt_201311191613_0320_r_000014_0/part-r-00014/data/index
3988 [main] INFO org.apache.solr.update.LoggingInfoStream - [IFD][main]: init: current segments file is "null"; deletionPolicy=org.apache.solr.core.IndexDeletionPolicyWrapper@65b01d5d
3992 [main] INFO org.apache.solr.update.LoggingInfoStream - [IFD][main]: now checkpoint "" [0 segments ; isCommit = false]
3992 [main] INFO org.apache.solr.update.LoggingInfoStream - [IFD][main]: 0 msec to checkpoint
3992 [main] INFO org.apache.solr.update.LoggingInfoStream - [IW][main]: init: create=true
3992 [main] INFO org.apache.solr.update.LoggingInfoStream - [IW][main]:
dir=NRTCachingDirectory(org.apache.solr.store.hdfs.HdfsDirectory@17e5a6d8 lockFactory=org.apache.solr.store.hdfs.HdfsLockFactory@7f117668; maxCacheMB=192.0 maxMergeSizeMB=16.0)
Solr looks for the solr.home parameter and searches for a solrconfig.xml file there; if there is none, it tries to load the default configuration.
It looks like your Solr home is
/data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip/collection1/
Check that folder for a solrconfig.xml file, as sketched below:
if there is none, copy one from the example directory of Solr;
if there is one, match the file/folder permissions with those of the server instance.
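A minimal sketch of that check, using the unpacked Solr home from your log (the example-config source path is an assumption based on a stock Solr 4.x layout):

SOLR_HOME=/data/06/mapred/local/taskTracker/distcache/3866561797898787678_-1754062477_512745567/master.hadoop/tmp/9501daf9-5011-4665-bae3-d5af1c8bcd62.solr.zip
ls -l "$SOLR_HOME/collection1/conf/solrconfig.xml"    # does it exist, and can the task user read it?
cp /opt/solr/example/solr/collection1/conf/solrconfig.xml "$SOLR_HOME/collection1/conf/"    # only if it is missing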