generating big files using apache camel and freemarker

I'm currently facing an out-of-memory problem when trying to generate a big file (> 300 MB) using FreeMarker (it works well for small files):
<to uri="freemarker:file:{{karaf.home}}/etc/fileGenerator.ftl" />
Is there a way to avoid this problem?
My configuration: Apache Karaf 4.2.6 with 4 GB of memory.
Caused by: java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3332) ~[?:?]
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124) ~[?:?]
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:596) ~[?:?]
at java.lang.StringBuffer.append(StringBuffer.java:367) ~[?:?]
at java.io.StringWriter.write(StringWriter.java:94) ~[?:?]
at java.io.Writer.write(Writer.java:127) ~[?:?]
at freemarker.core.TextBlock.accept(TextBlock.java:67) ~[?:?]
at freemarker.core.Environment.visit(Environment.java:330) ~[?:?]
at freemarker.core.Environment.visit(Environment.java:372) ~[?:?]
at freemarker.core.IteratorBlock$IterationContext.executedNestedContentForCollOrSeqListing(IteratorBlock.java:317) ~[?:?]
at freemarker.core.IteratorBlock$IterationContext.executeNestedContent(IteratorBlock.java:271) ~[?:?]
at freemarker.core.IteratorBlock$IterationContext.accept(IteratorBlock.java:242) ~[?:?]
at freemarker.core.Environment.visitIteratorBlock(Environment.java:642) ~[?:?]
at freemarker.core.IteratorBlock.acceptWithResult(IteratorBlock.java:107) ~[?:?]
at freemarker.core.IteratorBlock.accept(IteratorBlock.java:93) ~[?:?]
at freemarker.core.Environment.visit(Environment.java:330) ~[?:?]
at freemarker.core.Environment.visit(Environment.java:336) ~[?:?]
at freemarker.core.Environment.visit(Environment.java:372) ~[?:?]
at freemarker.core.IteratorBlock$IterationContext.executedNestedContentForCollOrSeqListing(IteratorBlock.java:317) ~[?:?]
at freemarker.core.IteratorBlock$IterationContext.executeNestedContent(IteratorBlock.java:271) ~[?:?]
at freemarker.core.IteratorBlock$IterationContext.accept(IteratorBlock.java:242) ~[?:?]
at freemarker.core.Environment.visitIteratorBlock(Environment.java:642) ~[?:?]
at freemarker.core.IteratorBlock.acceptWithResult(IteratorBlock.java:107) ~[?:?]
at freemarker.core.IteratorBlock.accept(IteratorBlock.java:93) ~[?:?]
at freemarker.core.Environment.visit(Environment.java:330) ~[?:?]
at freemarker.core.Environment.visit(Environment.java:372) ~[?:?]
at freemarker.core.IteratorBlock$IterationContext.executedNestedContentForCollOrSeqListing(IteratorBlock.java:317) ~[?:?]
at freemarker.core.IteratorBlock$IterationContext.executeNestedContent(IteratorBlock.java:271) ~[?:?]
at freemarker.core.IteratorBlock$IterationContext.accept(IteratorBlock.java:242) ~[?:?]
at freemarker.core.Environment.visitIteratorBlock(Environment.java:642) ~[?:?]
at freemarker.core.IteratorBlock.acceptWithResult(IteratorBlock.java:107) ~[?:?]
at freemarker.core.IteratorBlock.accept(IteratorBlock.java:93) ~[?:?]
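The trace shows why this happens: the freemarker: endpoint renders the whole template into an in-memory StringWriter before the result becomes the message body, so the entire 300 MB of output has to fit on the heap. One way around that (a sketch, not taken from the original route; the template name, data-model shape, and output path are all illustrative) is to skip the freemarker: endpoint for the large file and call FreeMarker from a plain Camel processor that streams straight to disk:

import java.io.BufferedWriter;
import java.io.File;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.util.Map;

import org.apache.camel.Exchange;
import org.apache.camel.Processor;

import freemarker.template.Configuration;
import freemarker.template.Template;

public class StreamingFreemarkerProcessor implements Processor {
    private final Configuration cfg; // an already-configured FreeMarker Configuration

    public StreamingFreemarkerProcessor(Configuration cfg) {
        this.cfg = cfg;
    }

    @Override
    public void process(Exchange exchange) throws Exception {
        // Load the same template the freemarker: endpoint was pointing at.
        Template template = cfg.getTemplate("fileGenerator.ftl");
        // Assumption: the exchange body carries the data model as a Map.
        Map<?, ?> model = exchange.getIn().getBody(Map.class);
        File out = new File("/tmp/generated.out"); // illustrative target path
        // Stream the rendered template directly to disk instead of a String.
        try (BufferedWriter writer =
                Files.newBufferedWriter(out.toPath(), StandardCharsets.UTF_8)) {
            template.process(model, writer);
        }
        // Hand downstream steps the File reference, not the 300 MB payload.
        exchange.getIn().setBody(out);
    }
}

The route would then call .process(new StreamingFreemarkerProcessor(cfg)) in place of the freemarker: endpoint, so the output never needs to be buffered in memory.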

Related

Trying to customize 404 error page from Camel Jetty

I'm working on an OSGi bundle with Apache Camel and Jetty and am trying to customize the 404 error page, but I can't find a way.
Reading on the internet indicates that it has to be done using an error handler.
I am trying to add the errorHandler as follows:
JettyHttpComponent9 jetty = getContext().getComponent("jetty", JettyHttpComponent9.class);
ErrorPageErrorHandler errorPageErrorHandler = new ErrorPageErrorHandler();
errorPageErrorHandler.addErrorPage(404,"http://localhost:8081/getActualDate");
jetty.setErrorHandler(errorPageErrorHandler);
But when I load the bundle in Karaf it shows me the following error:
java.lang.IllegalStateException: STARTED
at org.eclipse.jetty.server.handler.AbstractHandler.setServer(AbstractHandler.java:126) ~[?:?]
at org.eclipse.jetty.server.Server.doStart(Server.java:342) ~[?:?]
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72) ~[?:?]
at org.apache.camel.component.jetty.JettyHttpComponent.connect(JettyHttpComponent.java:337) ~[?:?]
at org.apache.camel.http.common.HttpCommonEndpoint.connect(HttpCommonEndpoint.java:186) ~[?:?]
at org.apache.camel.http.common.HttpConsumer.doStart(HttpConsumer.java:58) ~[?:?]
at org.apache.camel.component.jetty.JettyHttpConsumer.doStart(JettyHttpConsumer.java:31) ~[?:?]
at org.apache.camel.support.service.BaseService.start(BaseService.java:119) ~[!/:3.14.3]
at org.apache.camel.support.service.ServiceHelper.startService(ServiceHelper.java:113) ~[!/:3.14.3]
at org.apache.camel.impl.engine.AbstractCamelContext.startService(AbstractCamelContext.java:3598) ~[!/:3.14.3]
at org.apache.camel.impl.engine.InternalRouteStartupManager.doStartOrResumeRouteConsumers(InternalRouteStartupManager.java:401) ~[!/:3.14.3]
at org.apache.camel.impl.engine.InternalRouteStartupManager.doStartRouteConsumers(InternalRouteStartupManager.java:319) ~[!/:3.14.3]
at org.apache.camel.impl.engine.InternalRouteStartupManager.safelyStartRouteServices(InternalRouteStartupManager.java:213) ~[!/:3.14.3]
at org.apache.camel.impl.engine.InternalRouteStartupManager.doStartOrResumeRoutes(InternalRouteStartupManager.java:147) ~[!/:3.14.3]
at org.apache.camel.impl.engine.AbstractCamelContext.doStartCamel(AbstractCamelContext.java:3300) ~[!/:3.14.3]
at org.apache.camel.impl.engine.AbstractCamelContext.doStartContext(AbstractCamelContext.java:2952) ~[!/:3.14.3]
at org.apache.camel.impl.engine.AbstractCamelContext.doStart(AbstractCamelContext.java:2903) ~[!/:3.14.3]
at org.apache.camel.support.service.BaseService.start(BaseService.java:119) ~[!/:3.14.3]
at org.apache.camel.impl.engine.AbstractCamelContext.start(AbstractCamelContext.java:2587) ~[!/:3.14.3]
at org.apache.camel.impl.DefaultCamelContext.start(DefaultCamelContext.java:253) ~[!/:3.14.3]
at org.apache.camel.blueprint.BlueprintCamelContext.start(BlueprintCamelContext.java:241) ~[!/:3.14.3]
at org.apache.camel.blueprint.BlueprintCamelContext.maybeStart(BlueprintCamelContext.java:283) ~[!/:3.14.3]
at org.apache.camel.blueprint.BlueprintCamelContext.blueprintEvent(BlueprintCamelContext.java:188) [!/:3.14.3]
at org.apache.aries.blueprint.container.BlueprintEventDispatcher$3.call(BlueprintEventDispatcher.java:190) [!/:1.10.2]
at org.apache.aries.blueprint.container.BlueprintEventDispatcher$3.call(BlueprintEventDispatcher.java:188) [!/:1.10.2]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_302]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_302]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_302]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_302]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_302]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_302]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:1.8.0_302]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_302]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_302]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_302]
From what I understand, the error says that since the component has already started it cannot load the handler, but I don't know how to set it up before the component starts.
I am currently using Camel 3.14.3 with Karaf 4.2.8.
Any help is appreciated.
Edit:
Currently I load the data used to build the routes from the database, as follows:
configManager.getRoutes().forEach((apiBasePath, route) -> {
    Predicate requiresAudit = PredicateBuilder.constant(route.getRequiresAudit());
    Predicate requiresAuthorization = PredicateBuilder.constant(route.getRequiresAuthorization());
    from(createInternalEndPoint(route)).routeId(route.getApiBasePath())
        .setHeader(Exchange.HTTP_PATH, simple(route.getApiBasePath()))
        .bean(gateway, "checkRoute")
        .setHeader(Exchange.HTTP_PATH, simple("")) // This is done so the route in the TO does not fail
        .choice()
            ...
        .end()
        .choice()
            ..
        .end()
        .circuitBreaker().resilience4jConfiguration(Resilience4JConfig.getConfig())
            .to(createExternalEndPoint(route))
        .onFallback()
            .process(getShortCircuitedProcess())
        .end()
        .end();
});
In pom.xml I only have this dependency related to Jetty:
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-jetty</artifactId>
</dependency>
I tried to set the error handler before creating the routes (all in the same class, which extends RouteBuilder).
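A possible workaround (a sketch, untested against Camel 3.14.3; the endpoint URI and bean name are illustrative): rather than mutating the shared Jetty component after its server has started, bind the ErrorPageErrorHandler in the registry and attach it per endpoint through the camel-jetty handlers option, which is applied while the consumer endpoint is being created:

import org.apache.camel.builder.RouteBuilder;
import org.eclipse.jetty.server.handler.ErrorPageErrorHandler;

public class GatewayRoutes extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Bind the Jetty error handler under a name the endpoint URI can reference.
        ErrorPageErrorHandler errorPages = new ErrorPageErrorHandler();
        errorPages.addErrorPage(404, "/getActualDate");
        getContext().getRegistry().bind("errorPages", errorPages);

        // camel-jetty's handlers option looks up org.eclipse.jetty.server.Handler
        // beans by name and attaches them before the server reaches STARTED.
        from("jetty:http://0.0.0.0:8081/api?handlers=#errorPages")
            .to("log:request");
    }
}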

Flink StateFun high availability exception: "java.lang.IllegalStateException: There is no operator for the state ....."

I have two questions related to high availability of a StateFun application running on Kubernetes.
Here are details about my setup:
Using StateFun v3.1.0
Checkpoints are stored on HDFS (state.checkpoint-storage: filesystem)
Checkpointing mode is EXACTLY_ONCE
State backend is rocksdb and incremental checkpointing is enabled
1- I tried both ZooKeeper and Kubernetes HA settings; the result is the same (the log below is from a ZooKeeper HA environment). When I kill the jobmanager pod, minikube starts another pod, and this new pod fails when it tries to load the last checkpoint:
...
2021-12-11 14:25:26,426 INFO org.apache.flink.runtime.jobmaster.JobMaster [] - Initializing job myStatefunApp (00000000000000000000000000000000).
2021-12-11 14:25:26,443 INFO org.apache.flink.runtime.jobmaster.JobMaster [] - Using restart back off time strategy FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=2147483647, backoffTimeMS=1000) for myStatefunApp (00000000000000000000000000000000).
2021-12-11 14:25:26,516 INFO org.apache.flink.runtime.util.ZooKeeperUtils [] - Initialized DefaultCompletedCheckpointStore in 'ZooKeeperStateHandleStore{namespace='statefun_zk_recovery/my-statefun-app/checkpoints/00000000000000000000000000000000'}' with /checkpoints/00000000000000000000000000000000.
2021-12-11 14:25:26,599 INFO org.apache.flink.runtime.jobmaster.JobMaster [] - Running initialization on master for job myStatefunApp (00000000000000000000000000000000).
2021-12-11 14:25:26,599 INFO org.apache.flink.runtime.jobmaster.JobMaster [] - Successfully ran initialization on master in 0 ms.
2021-12-11 14:25:26,617 INFO org.apache.flink.runtime.scheduler.adapter.DefaultExecutionTopology [] - Built 1 pipelined regions in 1 ms
2021-12-11 14:25:26,626 INFO org.apache.flink.runtime.jobmaster.JobMaster [] - Using job/cluster config to configure application-defined state backend: EmbeddedRocksDBStateBackend{, localRocksDbDirectories=null, enableIncrementalCheckpointing=TRUE, numberOfTransferThreads=1, writeBatchSize=2097152}
2021-12-11 14:25:26,627 INFO org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend [] - Using predefined options: DEFAULT.
2021-12-11 14:25:26,627 INFO org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend [] - Using application-defined options factory: DefaultConfigurableOptionsFactory{configuredOptions={state.backend.rocksdb.thread.num=1}}.
2021-12-11 14:25:26,627 INFO org.apache.flink.runtime.jobmaster.JobMaster [] - Using application-defined state backend: EmbeddedRocksDBStateBackend{, localRocksDbDirectories=null, enableIncrementalCheckpointing=TRUE, numberOfTransferThreads=1, writeBatchSize=2097152}
2021-12-11 14:25:26,631 INFO org.apache.flink.runtime.jobmaster.JobMaster [] - Checkpoint storage is set to 'filesystem': (checkpoints "hdfs://hdfs-namenode:8020/tmp/statefun_checkpoints/myStatefunApp")
2021-12-11 14:25:26,712 INFO org.apache.flink.runtime.checkpoint.DefaultCompletedCheckpointStore [] - Recovering checkpoints from ZooKeeperStateHandleStore{namespace='statefun_zk_recovery/my-statefun-app/checkpoints/00000000000000000000000000000000'}.
2021-12-11 14:25:26,724 INFO org.apache.flink.runtime.checkpoint.DefaultCompletedCheckpointStore [] - Found 1 checkpoints in ZooKeeperStateHandleStore{namespace='statefun_zk_recovery/my-statefun-app/checkpoints/00000000000000000000000000000000'}.
2021-12-11 14:25:26,725 INFO org.apache.flink.runtime.checkpoint.DefaultCompletedCheckpointStore [] - Trying to fetch 1 checkpoints from storage.
2021-12-11 14:25:26,725 INFO org.apache.flink.runtime.checkpoint.DefaultCompletedCheckpointStore [] - Trying to retrieve checkpoint 2.
2021-12-11 14:25:26,931 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator [] - Restoring job 00000000000000000000000000000000 from Checkpoint 2 # 1639232587220 for 00000000000000000000000000000000 located at hdfs://hdfs-namenode:8020/tmp/statefun_checkpoints/myStatefunApp/00000000000000000000000000000000/chk-2.
2021-12-11 14:25:27,012 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Fatal error occurred in the cluster entrypoint.
org.apache.flink.util.FlinkException: JobMaster for job 00000000000000000000000000000000 failed.
at org.apache.flink.runtime.dispatcher.Dispatcher.jobMasterFailed(Dispatcher.java:873) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
at org.apache.flink.runtime.dispatcher.Dispatcher.jobManagerRunnerFailed(Dispatcher.java:459) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
at org.apache.flink.runtime.dispatcher.Dispatcher.handleJobManagerRunnerResult(Dispatcher.java:436) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$runJob$3(Dispatcher.java:415) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
at java.util.concurrent.CompletableFuture.uniHandle(Unknown Source) ~[?:?]
at java.util.concurrent.CompletableFuture$UniHandle.tryFire(Unknown Source) ~[?:?]
at java.util.concurrent.CompletableFuture$Completion.run(Unknown Source) ~[?:?]
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:440) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:208) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) [flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) [flink-dist_2.12-1.13.2.jar:1.13.2]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) [flink-dist_2.12-1.13.2.jar:1.13.2]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) [flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) [flink-dist_2.12-1.13.2.jar:1.13.2]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) [flink-dist_2.12-1.13.2.jar:1.13.2]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172) [flink-dist_2.12-1.13.2.jar:1.13.2]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172) [flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.actor.Actor.aroundReceive(Actor.scala:517) [flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.actor.Actor.aroundReceive$(Actor.scala:515) [flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) [flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) [flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.actor.ActorCell.invoke(ActorCell.scala:561) [flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) [flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.dispatch.Mailbox.run(Mailbox.scala:225) [flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.dispatch.Mailbox.exec(Mailbox.scala:235) [flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [flink-dist_2.12-1.13.2.jar:1.13.2]
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [flink-dist_2.12-1.13.2.jar:1.13.2]
Caused by: org.apache.flink.runtime.client.JobInitializationException: Could not start the JobMaster.
at org.apache.flink.runtime.jobmaster.DefaultJobMasterServiceProcess.lambda$new$0(DefaultJobMasterServiceProcess.java:97) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
at java.util.concurrent.CompletableFuture.uniWhenComplete(Unknown Source) ~[?:?]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(Unknown Source) ~[?:?]
at java.util.concurrent.CompletableFuture.postComplete(Unknown Source) ~[?:?]
at java.util.concurrent.CompletableFuture$AsyncSupply.run(Unknown Source) ~[?:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) ~[?:?]
at java.util.concurrent.FutureTask.run(Unknown Source) ~[?:?]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) ~[?:?]
at java.lang.Thread.run(Unknown Source) ~[?:?]
Caused by: java.util.concurrent.CompletionException: java.lang.IllegalStateException: There is no operator for the state 2edd7b5dafb2c271440b25f6da5f4532
at java.util.concurrent.CompletableFuture.encodeThrowable(Unknown Source) ~[?:?]
at java.util.concurrent.CompletableFuture.completeThrowable(Unknown Source) ~[?:?]
at java.util.concurrent.CompletableFuture$AsyncSupply.run(Unknown Source) ~[?:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) ~[?:?]
at java.util.concurrent.FutureTask.run(Unknown Source) ~[?:?]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) ~[?:?]
at java.lang.Thread.run(Unknown Source) ~[?:?]
Caused by: java.lang.IllegalStateException: There is no operator for the state 2edd7b5dafb2c271440b25f6da5f4532
at org.apache.flink.runtime.checkpoint.StateAssignmentOperation.checkStateMappingCompleteness(StateAssignmentOperation.java:712) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
at org.apache.flink.runtime.checkpoint.StateAssignmentOperation.assignStates(StateAssignmentOperation.java:100) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreLatestCheckpointedStateInternal(CheckpointCoordinator.java:1562) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreInitialCheckpointIfPresent(CheckpointCoordinator.java:1476) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
at org.apache.flink.runtime.scheduler.DefaultExecutionGraphFactory.createAndRestoreExecutionGraph(DefaultExecutionGraphFactory.java:134) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
at org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:342) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
at org.apache.flink.runtime.scheduler.SchedulerBase.<init>(SchedulerBase.java:190) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
at org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:122) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
at org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:132) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
at org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:110) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:340) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:317) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:107) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:95) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
at org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112) ~[flink-dist_2.12-1.13.2.jar:1.13.2]
at java.util.concurrent.CompletableFuture$AsyncSupply.run(Unknown Source) ~[?:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) ~[?:?]
at java.util.concurrent.FutureTask.run(Unknown Source) ~[?:?]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) ~[?:?]
at java.lang.Thread.run(Unknown Source) ~[?:?]
2021-12-11 14:25:27,017 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Shutting StatefulFunctionsClusterEntryPoint down with application status UNKNOWN. Diagnostics Cluster entrypoint has been closed externally..
2021-12-11 14:25:27,021 INFO org.apache.flink.runtime.jobmaster.MiniDispatcherRestEndpoint [] - Shutting down rest endpoint.
2021-12-11 14:25:27,025 INFO org.apache.flink.runtime.blob.BlobServer [] - Stopped BLOB server at 0.0.0.0:6124
2021-12-11 14:25:27,034 INFO org.apache.flink.runtime.jobmaster.MiniDispatcherRestEndpoint [] - Removing cache directory /tmp/flink-web-6c2dafc9-bb7d-489a-9e2d-cf78e3f19b67/flink-web-ui
2021-12-11 14:25:27,035 INFO org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - Stopping DefaultLeaderElectionService.
2021-12-11 14:25:27,035 INFO org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionDriver [] - Closing ZooKeeperLeaderElectionDriver{leaderPath='/leader/rest_server_lock'}
2021-12-11 14:25:27,036 INFO org.apache.flink.runtime.jobmaster.MiniDispatcherRestEndpoint [] - Shut down complete.
2021-12-11 14:25:27,036 INFO org.apache.flink.runtime.entrypoint.component.DispatcherResourceManagerComponent [] - Closing components.
2021-12-11 14:25:27,037 INFO org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - Stopping DefaultLeaderRetrievalService.
2021-12-11 14:25:27,037 INFO org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalDriver [] - Closing ZookeeperLeaderRetrievalDriver{retrievalPath='/leader/dispatcher_lock'}.
2021-12-11 14:25:27,037 INFO org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - Stopping DefaultLeaderRetrievalService.
2021-12-11 14:25:27,037 INFO org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalDriver [] - Closing ZookeeperLeaderRetrievalDriver{retrievalPath='/leader/resource_manager_lock'}.
2021-12-11 14:25:27,038 INFO org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - Stopping DefaultLeaderElectionService.
2021-12-11 14:25:27,038 INFO org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionDriver [] - Closing ZooKeeperLeaderElectionDriver{leaderPath='/leader/dispatcher_lock'}
2021-12-11 14:25:27,039 INFO org.apache.flink.runtime.dispatcher.runner.JobDispatcherLeaderProcess [] - Stopping JobDispatcherLeaderProcess.
2021-12-11 14:25:27,040 INFO org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager [] - Closing the slot manager.
2021-12-11 14:25:27,040 INFO org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager [] - Suspending the slot manager.
2021-12-11 14:25:27,041 INFO org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - Stopping DefaultLeaderElectionService.
2021-12-11 14:25:27,041 INFO org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionDriver [] - Closing ZooKeeperLeaderElectionDriver{leaderPath='/leader/resource_manager_lock'}
I believe this is caused by not being able to specify IDs for Flink operators (as told here) when using StateFun. It was working fine in the beginning: the operators were assigned some random IDs and checkpointing went just fine. After the restart, the operators are assigned other random IDs, and when the jobmanager (the StateFun master in this case) tries to load the state "2edd7b5dafb2c271440b25f6da5f4532", it fails to find the operator originally assigned to it.
Can someone confirm whether what I think is correct, and/or give me directions for making my StateFun app work with high availability?
Another interesting thing to note: after several restarts of the jobmanager pod with the above exception, it sometimes gets past the "Restoring job 00000000000000000000000000000000 from Checkpoint ..." line somehow (?), with a "No master state to restore" log (link), which leaves me unsure whether it really recovered or just started over, discarding the state from the last successful checkpoint. What might be causing this? Is it really recovering from the checkpoint successfully?
2- For Kubernetes deployments, the StateFun deployment documentation (link) uses the Deployment type for the jobmanager component. On the other hand, the Flink deployment documentation (Standalone / Kubernetes section) (link) uses the Job type for the jobmanager in a highly available setup (the jobmanager-application-ha.yaml file).
Basically, since Kubernetes will restart the pod on failures, either Job or Deployment can be used. But the thing is, when we try to stop the job with a savepoint and the Deployment type is used, Kubernetes restarts the pod regardless of the successful savepoint creation and the success exit status (0).
Are we supposed not to stop StateFun apps with a savepoint when running on Kubernetes? I am aware of the related bug (link) - but although it seems to be deprecated, I can do a cancel with savepoint - are we supposed to just delete the deployment as told in the High availability data clean up section (link)?
UPDATE for the first question: I turned on debug logging and could capture a session with the exception and a successful startup in a row. The following is from the unsuccessful one:
...
2021-12-11 21:55:14,001 DEBUG org.apache.flink.streaming.api.graph.StreamGraphHasherV2 [] - Generated hash '32d5ca33c915e65563a5c7f4d62703ad' for node 'router (my-ingress-1-in)-5' {id: 5, parallelism: 1, user function: }
2021-12-11 21:55:14,001 DEBUG org.apache.flink.streaming.api.graph.StreamGraphHasherV2 [] - Generated hash '33b86fe798648d648b237ddfc986200d' for node 'router (my-ingress-2-in)-4' {id: 4, parallelism: 1, user function: }
2021-12-11 21:55:14,001 DEBUG org.apache.flink.streaming.api.graph.StreamGraphHasherV2 [] - Generated hash 'bd4c3fa1570bbcf606f2dabddd61ed7f' for node 'router (my-ingress-3-in)-6' {id: 6, parallelism: 1, user function: }
and this is from the successful one:
2021-12-11 21:55:34,543 DEBUG org.apache.flink.streaming.api.graph.StreamGraphHasherV2 [] - Generated hash 'a1448ecf31ac98d2215c38bfd119abe0' for node 'router (my-ingress-3-in)-5' {id: 5, parallelism: 1, user function: }
2021-12-11 21:55:34,543 DEBUG org.apache.flink.streaming.api.graph.StreamGraphHasherV2 [] - Generated hash '05037ff96baea131d9cf1390846efd98' for node 'router (my-ingress-1-in)-4' {id: 4, parallelism: 1, user function: }
2021-12-11 21:55:34,543 DEBUG org.apache.flink.streaming.api.graph.StreamGraphHasherV2 [] - Generated hash '2edd7b5dafb2c271440b25f6da5f4532' for node 'router (my-ingress-2-in)-6' {id: 6, parallelism: 1, user function: }
It seems that the generated hashes are computed differently between the two runs.
In StateFun <= 3.2, routers do not have manually specified UIDs. While Flink's internal UID generation is deterministic, the way StateFun generates the underlying stream graph may not be in some cases. This is a bug. I've opened a PR to fix this in a backwards-compatible way [1].
[1] https://github.com/apache/flink-statefun/pull/279
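The fix lands in StateFun itself, but for reference, this is what pinning operator UIDs looks like in a plain Flink DataStream job (a minimal, illustrative sketch; all names are made up). With explicit UIDs, the checkpoint-state-to-operator mapping no longer depends on how the stream graph happens to be laid out:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class UidPinningExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.fromElements("a", "b", "c")
                .uid("demo-source")      // stable ID instead of a topology-derived hash
                .keyBy(v -> v)
                .map(String::toUpperCase)
                .uid("demo-map")         // state is restored by this name across restarts
                .print();
        env.execute("uid-pinning-demo");
    }
}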

Flink Lambda serialization error logged when running in standalone server

Version: Flink 1.12, Java 11.
There are no issues while running in a local environment. When running in a standalone cluster with the below config:
INFO [] - Loading configuration property: jobmanager.rpc.address, localhost
INFO [] - Loading configuration property: jobmanager.rpc.port, 6123
INFO [] - Loading configuration property: taskmanager.cpu.cores, 1.79
INFO [] - Loading configuration property: taskmanager.memory.task.heap.size, 4096m
INFO [] - Loading configuration property: taskmanager.memory.task.off-heap.size, 4096m
INFO [] - Loading configuration property: taskmanager.memory.managed.size, 128m
INFO [] - Loading configuration property: taskmanager.memory.network.min, 64m
WARN [] - Error while trying to split key and value in configuration file /Users/vgamini/tools/flink-1.12.0/conf/flink-conf.yaml:44: "taskmanager.memory.network.max:64m"
INFO [] - Loading configuration property: jobmanager.memory.flink.size, 4096m
INFO [] - Loading configuration property: taskmanager.numberOfTaskSlots, 1
INFO [] - Loading configuration property: parallelism.default, 1
I am seeing the below error in the task manager logs, but the job went to the RUNNING state:
2020-12-29 15:16:31,322 WARN org.apache.flink.runtime.taskmanager.Task [] - Source: Custom Source (1/3)#0 (5c850a62dc24ac6ccea8da166d5cc8f6) switched from DEPLOYING to FAILED.
org.apache.flink.streaming.runtime.tasks.StreamTaskException: Could not instantiate outputs in order.
at org.apache.flink.streaming.api.graph.StreamConfig.getOutEdgesInOrder(StreamConfig.java:470) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
at org.apache.flink.streaming.runtime.tasks.StreamTask.createRecordWriters(StreamTask.java:1138) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
at org.apache.flink.streaming.runtime.tasks.StreamTask.createRecordWriterDelegate(StreamTask.java:1122) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
at org.apache.flink.streaming.runtime.tasks.StreamTask.<init>(StreamTask.java:290) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
at org.apache.flink.streaming.runtime.tasks.StreamTask.<init>(StreamTask.java:277) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.<init>(SourceStreamTask.java:73) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.<init>(SourceStreamTask.java:69) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:?]
at jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]
at java.lang.reflect.Constructor.newInstance(Constructor.java:490) ~[?:?]
at org.apache.flink.runtime.taskmanager.Task.loadAndInstantiateInvokable(Task.java:1373) [flink-dist_2.12-1.12.0.jar:1.12.0]
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:700) [flink-dist_2.12-1.12.0.jar:1.12.0]
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:547) [flink-dist_2.12-1.12.0.jar:1.12.0]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: java.lang.ClassCastException: cannot assign instance of java.lang.invoke.SerializedLambda to field org.apache.flink.streaming.runtime.partitioner.KeyGroupStreamPartitioner.keySelector of type org.apache.flink.api.java.functions.KeySelector in instance of org.apache.flink.streaming.runtime.partitioner.KeyGroupStreamPartitioner
at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2190) ~[?:?]
at java.io.ObjectStreamClass$FieldReflector.checkObjectFieldValueTypes(ObjectStreamClass.java:2153) ~[?:?]
at java.io.ObjectStreamClass.checkObjFieldValueTypes(ObjectStreamClass.java:1407) ~[?:?]
at java.io.ObjectInputStream.defaultCheckFieldValues(ObjectInputStream.java:2371) ~[?:?]
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2278) ~[?:?]
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2087) ~[?:?]
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1594) ~[?:?]
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2355) ~[?:?]
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2249) ~[?:?]
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2087) ~[?:?]
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1594) ~[?:?]
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:430) ~[?:?]
at java.util.ArrayList.readObject(ArrayList.java:928) ~[?:?]
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1160) ~[?:?]
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2216) ~[?:?]
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2087) ~[?:?]
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1594) ~[?:?]
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:430) ~[?:?]
at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:576) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:562) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:550) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
at org.apache.flink.util.InstantiationUtil.readObjectFromConfig(InstantiationUtil.java:511) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
at org.apache.flink.streaming.api.graph.StreamConfig.getOutEdgesInOrder(StreamConfig.java:467) ~[flink-dist_2.12-1.12.0.jar:1.12.0]
Is it just a warning, or will there be any impact because of this error?
Use case:
Take a Kafka input stream and key it by a field to apply window grouping.
Note: while creating the fat jar, all the recommendations have been followed.
Has anyone faced the same issue?
The execution shows 2 tasks while the logs show tasks as 1/3. Can somebody help me understand this?
This problem occurs when you run Flink in Application Mode but only your JobManager has the job jar on its classpath (or in the usrlib/ folder).
To fix this issue, make the job jar available on your TaskManager instances as well.
You can do this either by building a custom Docker image containing the required jar(s), or by mounting them.
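Separately from the classpath fix, and purely as a diagnostic aid (a sketch; the tuple type and field are illustrative): replacing the keyBy lambda with a named KeySelector class can make this failure mode easier to read, because a missing class then surfaces as a ClassNotFoundException naming the class, instead of the opaque SerializedLambda cast error above.

import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple2;

// A named selector deserializes as an ordinary class, so a jar missing from
// the TaskManager classpath fails with a ClassNotFoundException for this
// class rather than a SerializedLambda ClassCastException.
public class FirstFieldKeySelector implements KeySelector<Tuple2<String, Integer>, String> {
    @Override
    public String getKey(Tuple2<String, Integer> value) {
        return value.f0;
    }
}

It would be used as stream.keyBy(new FirstFieldKeySelector()) in place of the lambda.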
The exception in Flink 1.13.1 looked like this for me:
2021-08-27 11:44:45,788 INFO org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler [] - Job 857fbf37156b20a8d445b9e3d9a465ab reached terminal state FAILED.
org.apache.flink.util.SerializedThrowable: Recovery is suppressed by NoRestartBackoffTimeStrategy
at org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.howToHandleFailure(AdaptiveScheduler.java:1073) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.runtime.scheduler.adaptive.Executing.handleAnyFailure(Executing.java:88) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.runtime.scheduler.adaptive.Executing.updateTaskExecutionState(Executing.java:114) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.lambda$updateTaskExecutionState$4(AdaptiveScheduler.java:471) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.runtime.scheduler.adaptive.State.tryCall(State.java:142) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler.updateTaskExecutionState(AdaptiveScheduler.java:468) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:79) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:435) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_302]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_302]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_302]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_302]
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:305) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:212) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) [flink-dist_2.11-1.13.1.jar:1.13.1]
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) [flink-dist_2.11-1.13.1.jar:1.13.1]
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) [flink-dist_2.11-1.13.1.jar:1.13.1]
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) [flink-dist_2.11-1.13.1.jar:1.13.1]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) [flink-dist_2.11-1.13.1.jar:1.13.1]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) [flink-dist_2.11-1.13.1.jar:1.13.1]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) [flink-dist_2.11-1.13.1.jar:1.13.1]
at akka.actor.Actor$class.aroundReceive(Actor.scala:517) [flink-dist_2.11-1.13.1.jar:1.13.1]
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) [flink-dist_2.11-1.13.1.jar:1.13.1]
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) [flink-dist_2.11-1.13.1.jar:1.13.1]
at akka.actor.ActorCell.invoke(ActorCell.scala:561) [flink-dist_2.11-1.13.1.jar:1.13.1]
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) [flink-dist_2.11-1.13.1.jar:1.13.1]
at akka.dispatch.Mailbox.run(Mailbox.scala:225) [flink-dist_2.11-1.13.1.jar:1.13.1]
at akka.dispatch.Mailbox.exec(Mailbox.scala:235) [flink-dist_2.11-1.13.1.jar:1.13.1]
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [flink-dist_2.11-1.13.1.jar:1.13.1]
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [flink-dist_2.11-1.13.1.jar:1.13.1]
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [flink-dist_2.11-1.13.1.jar:1.13.1]
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [flink-dist_2.11-1.13.1.jar:1.13.1]
Caused by: org.apache.flink.util.SerializedThrowable: Could not instantiate outputs in order.
at org.apache.flink.streaming.api.graph.StreamConfig.getOutEdgesInOrder(StreamConfig.java:485) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.streaming.runtime.tasks.StreamTask.createRecordWriters(StreamTask.java:1338) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.streaming.runtime.tasks.StreamTask.createRecordWriterDelegate(StreamTask.java:1322) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.streaming.runtime.tasks.StreamTask.<init>(StreamTask.java:327) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.streaming.runtime.tasks.StreamTask.<init>(StreamTask.java:308) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.<init>(SourceStreamTask.java:76) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.<init>(SourceStreamTask.java:72) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_302]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_302]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_302]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_302]
at org.apache.flink.runtime.taskmanager.Task.loadAndInstantiateInvokable(Task.java:1524) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:730) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_302]
Caused by: org.apache.flink.util.SerializedThrowable: cannot assign instance of java.lang.invoke.SerializedLambda to field org.apache.flink.streaming.runtime.partitioner.KeyGroupStreamPartitioner.keySelector of type org.apache.flink.api.java.functions.KeySelector in instance of org.apache.flink.streaming.runtime.partitioner.KeyGroupStreamPartitioner
at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2301) ~[?:1.8.0_302]
at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1431) ~[?:1.8.0_302]
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2411) ~[?:1.8.0_302]
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2329) ~[?:1.8.0_302]
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2187) ~[?:1.8.0_302]
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667) ~[?:1.8.0_302]
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2405) ~[?:1.8.0_302]
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2329) ~[?:1.8.0_302]
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2187) ~[?:1.8.0_302]
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667) ~[?:1.8.0_302]
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:503) ~[?:1.8.0_302]
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:461) ~[?:1.8.0_302]
at java.util.ArrayList.readObject(ArrayList.java:799) ~[?:1.8.0_302]
at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_302]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_302]
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1184) ~[?:1.8.0_302]
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2296) ~[?:1.8.0_302]
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2187) ~[?:1.8.0_302]
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667) ~[?:1.8.0_302]
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:503) ~[?:1.8.0_302]
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:461) ~[?:1.8.0_302]
at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:615) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:600) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:587) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.util.InstantiationUtil.readObjectFromConfig(InstantiationUtil.java:541) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.streaming.api.graph.StreamConfig.getOutEdgesInOrder(StreamConfig.java:482) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.streaming.runtime.tasks.StreamTask.createRecordWriters(StreamTask.java:1338) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.streaming.runtime.tasks.StreamTask.createRecordWriterDelegate(StreamTask.java:1322) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.streaming.runtime.tasks.StreamTask.<init>(StreamTask.java:327) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.streaming.runtime.tasks.StreamTask.<init>(StreamTask.java:308) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.<init>(SourceStreamTask.java:76) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.<init>(SourceStreamTask.java:72) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_302]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_302]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_302]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_302]
at org.apache.flink.runtime.taskmanager.Task.loadAndInstantiateInvokable(Task.java:1524) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:730) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_302]

OpenDaylight: Error installing boot features

I am facing this error:
2018-02-28T08:30:08,419 | ERROR | pool-1-thread-2 | BootFeaturesInstaller | 7 - org.apache.karaf.features.core - 4.1.3 | Error installing boot features
org.osgi.service.resolver.ResolutionException: Unable to resolve root: missing requirement [root] osgi.identity; osgi.identity=b9b64fb5-51e0-4ead-92af-087b5f324d3e; type=karaf.feature; version="[0,0.0.0]"; filter:="(&(osgi.identity=b9b64fb5-51e0-4ead-92af-087b5f324d3e)(type=karaf.feature)(version>=0.0.0)(version<=0.0.0))" [caused by: Unable to resolve b9b64fb5-51e0-4ead-92af-087b5f324d3e/0.0.0: missing requirement [b9b64fb5-51e0-4ead-92af-087b5f324d3e/0.0.0] osgi.identity; osgi.identity=odl-flowlistener-rest; type=karaf.feature [caused by: Unable to resolve odl-flowlistener-rest/0.1.0.SNAPSHOT: missing requirement [odl-flowlistener-rest/0.1.0.SNAPSHOT] osgi.identity; osgi.identity=odl-restconf; type=karaf.feature; version="[1.7.0.SNAPSHOT,1.7.0.SNAPSHOT]" [caused by: Unable to resolve odl-restconf/1.7.0.SNAPSHOT: missing requirement [odl-restconf/1.7.0.SNAPSHOT] osgi.identity; osgi.identity=odl-restconf-noauth; type=karaf.feature; version="[1.7.0.SNAPSHOT,1.7.0.SNAPSHOT]" [caused by: Unable to resolve odl-restconf-noauth/1.7.0.SNAPSHOT: missing requirement [odl-restconf-noauth/1.7.0.SNAPSHOT] osgi.identity; osgi.identity=odl-aaa-shiro; type=karaf.feature; version="[0.7.0.SNAPSHOT,0.7.0.SNAPSHOT]" [caused by: Unable to resolve odl-aaa-shiro/0.7.0.SNAPSHOT: missing requirement [odl-aaa-shiro/0.7.0.SNAPSHOT] osgi.identity; osgi.identity=odl-aaa-cert; type=karaf.feature; version="[0.7.0.SNAPSHOT,0.7.0.SNAPSHOT]" [caused by: Unable to resolve odl-aaa-cert/0.7.0.SNAPSHOT: missing requirement [odl-aaa-cert/0.7.0.SNAPSHOT] osgi.identity; osgi.identity=odl-mdsal-broker; type=karaf.feature; version="[1.7.0.SNAPSHOT,1.7.0.SNAPSHOT]" [caused by: Unable to resolve odl-mdsal-broker/1.7.0.SNAPSHOT: missing requirement [odl-mdsal-broker/1.7.0.SNAPSHOT] osgi.identity; osgi.identity=odl-mdsal-remoterpc-connector; type=karaf.feature; version="[1.7.0.SNAPSHOT,1.7.0.SNAPSHOT]" [caused by: Unable to resolve odl-mdsal-remoterpc-connector/1.7.0.SNAPSHOT: missing requirement [odl-mdsal-remoterpc-connector/1.7.0.SNAPSHOT] osgi.identity; osgi.identity=odl-mdsal-broker-local; type=karaf.feature; version="[1.7.0.SNAPSHOT,1.7.0.SNAPSHOT]" [caused by: Unable to resolve odl-mdsal-broker-local/1.7.0.SNAPSHOT: missing requirement [odl-mdsal-broker-local/1.7.0.SNAPSHOT] osgi.identity; osgi.identity=odl-config-netty; type=karaf.feature; version="[0.8.0.SNAPSHOT,0.8.0.SNAPSHOT]" [caused by: Unable to resolve odl-config-netty/0.8.0.SNAPSHOT: missing requirement [odl-config-netty/0.8.0.SNAPSHOT] osgi.identity; osgi.identity=odl-config-startup; type=karaf.feature; version="[0.8.0.SNAPSHOT,0.8.0.SNAPSHOT]" [caused by: Unable to resolve odl-config-startup/0.8.0.SNAPSHOT: missing requirement [odl-config-startup/0.8.0.SNAPSHOT] osgi.identity; osgi.identity=odl-config-persister; type=karaf.feature; version="[0.8.0.SNAPSHOT,0.8.0.SNAPSHOT]" [caused by: Unable to resolve odl-config-persister/0.8.0.SNAPSHOT: missing requirement [odl-config-persister/0.8.0.SNAPSHOT] osgi.identity; osgi.identity=org.opendaylight.controller.config-persister-file-xml-adapter; type=osgi.fragment; version="[0.8.0.SNAPSHOT,0.8.0.SNAPSHOT]"; resolution:=mandatory [caused by: Fragment was not selected for attachment: org.opendaylight.controller.config-persister-file-xml-adapter/0.8.0.SNAPSHOT]]]]]]]]]]]]]
at org.apache.felix.resolver.ResolutionError.toException(ResolutionError.java:42) ~[?:?]
at org.apache.felix.resolver.ResolverImpl.doResolve(ResolverImpl.java:391) ~[?:?]
at org.apache.felix.resolver.ResolverImpl.resolve(ResolverImpl.java:377) ~[?:?]
at org.apache.felix.resolver.ResolverImpl.resolve(ResolverImpl.java:349) ~[?:?]
at org.apache.karaf.features.internal.region.SubsystemResolver.resolve(SubsystemResolver.java:218) ~[?:?]
at org.apache.karaf.features.internal.service.Deployer.deploy(Deployer.java:291) ~[?:?]
at org.apache.karaf.features.internal.service.FeaturesServiceImpl.doProvision(FeaturesServiceImpl.java:1248) ~[?:?]
at org.apache.karaf.features.internal.service.FeaturesServiceImpl.lambda$doProvisionInThread$1(FeaturesServiceImpl.java:1147) ~[?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:?]
at java.lang.Thread.run(Thread.java:748) [?:?]
Looks like I am missing a boot dependency? How can I fix it?
According to OpenDaylight: Listen for flow updates, this happened to you when you downgraded restconf. I would not do that.

Solr DIH error 500

I get this error while using DIH to import data from SQL Server into Solr for indexing:
HTTP Status 500 - Severe errors in solr configuration. Check your log files for more detailed information on what may be wrong. If you want solr to continue after configuration errors, change: <abortOnConfigurationError>false</abortOnConfigurationError> in solr.xml
-------------------------------------------------------------
org.apache.solr.common.SolrException: Error loading class 'org.apache.solr.handler.dataimport.DataImportHandler'
at org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:389)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:423)
at org.apache.solr.core.SolrCore.createRequestHandler(SolrCore.java:459)
at org.apache.solr.core.RequestHandlers.initHandlersFromConfig(RequestHandlers.java:157)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:563)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:463)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:316)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:207)
at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:130)
at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:94)
at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:273)
at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:254)
at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:372)
at org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:98)
at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4584)
at org.apache.catalina.core.StandardContext$2.call(StandardContext.java:5262)
at org.apache.catalina.core.StandardContext$2.call(StandardContext.java:5257)
at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.ClassNotFoundException: org.apache.solr.handler.dataimport.DataImportHandler
at java.net.URLClassLoader$1.run(Unknown Source)
at java.net.URLClassLoader$1.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at java.net.FactoryURLClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Unknown Source)
at org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:373)
... 21 more
You are missing the class file, so add apache-solr-dataimporthandler-(version).jar from the "dist" directory, plus all the jars in "contrib/dataimporthandler/lib", to the classpath (for example, via <lib> directives in solrconfig.xml).
