RESTEASY002020: Unhandled asynchronous exception with Quarkus 1.0.Final and RESTEasy JAX-RS Resource - resteasy

I have the following code, which performs parallel execution of a function that makes HTTP calls. It lives inside a @Singleton service that is called from the RESTEasy JAX-RS resource:
final Flowable<Map<String, List<Data>>> relatedMaps = Flowable.range(0, requestList.size())
    .concatMapEager(index ->
            fetchByHttp(requestList.get(index))
                .subscribeOn(Schedulers.io())
                .toFlowable(),
        requestList.size(),
        1
    );
where fetchByHttp is:
Single fetchByHttp(request) {
    return Single.fromCallable(() -> {
        ...restClient.getData(request)
        ...
requestList.size() is about 100 or less,
and sometimes I get this issue:
14:28:35 ERROR [or.jb.re.re.i18n] (RxCachedThreadScheduler-232) RESTEASY002020: Unhandled asynchronous exception, sending back 500: java.lang.NullPointerException
at org.jboss.resteasy.core.ServerResponseWriter.writeNomapResponse(ServerResponseWriter.java:91)
at org.jboss.resteasy.core.AsyncResponseConsumer.sendBuiltResponse(AsyncResponseConsumer.java:148)
at org.jboss.resteasy.core.AsyncResponseConsumer.internalResume(AsyncResponseConsumer.java:115)
at org.jboss.resteasy.core.AsyncResponseConsumer$CompletionStageResponseConsumer.accept(AsyncResponseConsumer.java:237)
at org.jboss.resteasy.core.AsyncResponseConsumer$CompletionStageResponseConsumer.accept(AsyncResponseConsumer.java:216)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
at io.reactivex.internal.observers.ConsumerSingleObserver.onSuccess(ConsumerSingleObserver.java:62)
at io.smallrye.context.propagators.rxjava2.ContextPropagatorOnSingleCreateAction$ContextCapturerSingle.lambda$onSuccess$2(ContextPropagatorOnSingleCreateAction.java:50)
at io.smallrye.context.SmallRyeThreadContext.lambda$withContext$0(SmallRyeThreadContext.java:215)
at io.smallrye.context.propagators.rxjava2.ContextPropagatorOnSingleCreateAction$ContextCapturerSingle.onSuccess(ContextPropagatorOnSingleCreateAction.java:50)
at io.smallrye.context.propagators.rxjava2.ContextPropagatorOnSingleCreateAction$ContextCapturerSingle.lambda$onSuccess$2(ContextPropagatorOnSingleCreateAction.java:50)
at io.smallrye.context.SmallRyeThreadContext.lambda$withContext$0(SmallRyeThreadContext.java:215)
On the resource side:
@Timeout(20000)
@GET
@Path("/data")
@Produces(MediaType.APPLICATION_JSON)
public CompletionStage<Data> getData() {
    final Single<Data> dataSingle = service.getData();
    final CompletableFuture<Data> dataFuture = new CompletableFuture<>();
    dataSingle
        //.subscribeOn(Schedulers.io())
        .subscribe(dataFuture::complete);
    return dataFuture;
}
I found this. If it has been fixed, then my question is: what is going on, and how do I handle it?
Quarkus 1.0.Final.
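For what it's worth, here is a minimal sketch of the resource method that also propagates failures, using RxJava 2's two-argument subscribe(onSuccess, onError) so the returned CompletionStage always completes one way or the other. This only illustrates the API; it is not a confirmed fix for the 500:
@GET
@Path("/data")
@Produces(MediaType.APPLICATION_JSON)
public CompletionStage<Data> getData() {
    final CompletableFuture<Data> dataFuture = new CompletableFuture<>();
    service.getData()
        // complete normally on success, exceptionally on error,
        // so the CompletableFuture always finishes
        .subscribe(dataFuture::complete, dataFuture::completeExceptionally);
    return dataFuture;
}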

Related

java.lang.IllegalStateException: The Kryo Output still contains data from a previous serialize call on Flink KeyedProcessFunction

I am using a KeyedProcessFunction on Flink 1.16.0 with a lazily initialized ValueState:
private lazy val state: ValueState[Feature] = {
  val stateDescriptor = new ValueStateDescriptor[Feature]("CollectFeatureProcessState", createTypeInformation[Feature])
  getRuntimeContext.getState(stateDescriptor)
}
It is used in my process function as follows:
override def processElement(value: Feature, ctx: KeyedProcessFunction[String, Feature, Feature]#Context, out: Collector[Feature]): Unit = {
  val current: Feature = state.value match {
    case null => value
    case exists => combine(value, exists)
  }
  if (checkForCompleteness(current)) {
    out.collect(current)
    state.clear()
  } else {
    state.update(current)
  }
}
Feature is a protobuf class that I registered with Kryo as follows (using chill-protobuf 0.7.6):
env.getConfig.registerTypeWithKryoSerializer(classOf[Feature], classOf[ProtobufSerializer])
Within the first few seconds of running the app, I get this exception:
2023-02-07 09:17:04,246 WARN org.apache.flink.runtime.taskmanager.Task [] - KeyedProcess -> (Map -> Sink: signalSink, Map -> Flat Map -> Sink: FeatureSink, Sink: logsink) (2/2)#0 (fa4aae8fb7d2a7a94eafb36fe5470851_6760a9723a5626620871f040128bad1b_1_0) switched from RUNNING to FAILED with failure cause: org.apache.flink.util.FlinkRuntimeException: Error while adding data to RocksDB
at org.apache.flink.contrib.streaming.state.RocksDBValueState.update(RocksDBValueState.java:109)
at com.grab.grabdefence.acorn.app.functions.stream.CollectFeatureProcessFunction$.processElement(CollectFeatureProcessFunction.scala:69)
at com.grab.grabdefence.acorn.app.functions.stream.CollectFeatureProcessFunction$.processElement(CollectFeatureProcessFunction.scala:18)
at org.apache.flink.streaming.api.operators.KeyedProcessOperator.processElement(KeyedProcessOperator.java:83)
at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:233)
at org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.processElement(AbstractStreamTaskNetworkInput.java:134)
at org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:105)
at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:542)
at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231)
at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:831)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:780)
at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:935)
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:914)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:728)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550)
at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.IllegalStateException: The Kryo Output still contains data from a previous serialize call. It has to be flushed or cleared at the end of the serialize call.
at org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.serialize(KryoSerializer.java:358)
at org.apache.flink.contrib.streaming.state.AbstractRocksDBState.serializeValueInternal(AbstractRocksDBState.java:158)
at org.apache.flink.contrib.streaming.state.AbstractRocksDBState.serializeValue(AbstractRocksDBState.java:180)
at org.apache.flink.contrib.streaming.state.AbstractRocksDBState.serializeValue(AbstractRocksDBState.java:168)
at org.apache.flink.contrib.streaming.state.RocksDBValueState.update(RocksDBValueState.java:107)
... 16 more
I checked KryoSerializer.serialize and I do not understand why this exception is thrown. AbstractRocksDBState.serializeValue always does a clear() before passing the DataOutputView to the KryoSerializer, so it baffles me how output.position() != 0 could ever be true at the beginning of a serialization.
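For reference, the check that throws here guards the contract that a registered serializer must leave all of its bytes in, and flushed to, the Output that Kryo hands it, so nothing is left over when the next serialize call starts. Purely as an illustration of that contract, and not of chill-protobuf's ProtobufSerializer or a fix for this failure, a minimal Kryo serializer for a protobuf message could look like the sketch below, written against the classic Kryo 2.x Serializer API that Flink ships (the class name and length-prefix scheme are made up for the example):
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.Serializer;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;
import com.google.protobuf.InvalidProtocolBufferException;

public class FeatureKryoSerializer extends Serializer<Feature> {

    @Override
    public void write(Kryo kryo, Output output, Feature feature) {
        byte[] bytes = feature.toByteArray();  // standard protobuf serialization
        output.writeInt(bytes.length, true);   // length prefix, written through Kryo's own Output
        output.writeBytes(bytes);              // no private buffer that could stay unflushed
    }

    @Override
    public Feature read(Kryo kryo, Input input, Class<Feature> type) {
        int length = input.readInt(true);
        try {
            return Feature.parseFrom(input.readBytes(length));
        } catch (InvalidProtocolBufferException e) {
            throw new RuntimeException("Failed to deserialize Feature", e);
        }
    }
}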

Flutter - Why do I get 'Unhandled Exception: Concurrent modification during iteration' from a database query?

I'm having some trouble understanding an issue regarding Flutter's database handling (at least I assume that's where the issue is). When I run my app I get the following error message:
E/flutter (26185): [ERROR:flutter/lib/ui/ui_dart_state.cc(209)] Unhandled Exception: Concurrent modification during iteration: Instance(length:13) of '_GrowableList'.
E/flutter (26185): #0 ListIterator.moveNext (dart:_internal/iterable.dart:336:7)
E/flutter (26185): #1 _MyHomePageState._processBle.<anonymous closure> (package:blescanner/screens/home_view.dart:283:27)
E/flutter (26185): <asynchronous suspension>
I've put a lot of effort into finding out why this happens. Row 283, which the trace references, is my "for (var element in list) {" row in MyHomePage, but to my knowledge I'm not trying to modify that list anywhere. After some time I just started commenting out line after line to find the cause, and it turned out that the error disappeared when I commented out the "dbHelper.getBlItemLog" call. That led me to that function (code included further down), where the issue seems to be the database query itself. I thought it might be some issue with the database being modified during the query, so I removed all other calls to the database, but the issue remained. How can a database query cause this kind of issue? Does anyone have any insight or suggestions?
from MyHomePage (which extends StatefulWidget):
void _processBle() {
  FlutterBluePlus.instance.scanResults.forEach((list) async {
    for (var element in list) {
      String bleName = element.device.name;
      dev.log('Found device: ' + bleName);
      String blItemId = bleName.replaceAll('XXX', '');
      var blItemData = CustomBluetoothItem.fromBle(
          bleName,
          blItemId,
          element.advertisementData.manufacturerData,
          _geoPos.lat,
          _geoPos.long);
      int tempStatus = blItemData.status;
      try {
        if (blItemData.status > 1) {
          await lock.synchronized(() async {
            dev.log('${DateTime.now().toIso8601String()} _processBle .. start $bleName');
            // here's where I call the troubling function
            await dbHelper
                .getBlItemLog(bleName: bleName, startDate: blItemData.sDate)
                .then((queueItem) async {
              // the stuff in here is commented away for now and thus doesn't matter
            });
            dev.log('${DateTime.now().toIso8601String()} _processBle .. stop ${element.device.name}');
          });
        }
      } catch (e) {
        dev.log('error : $e');
      }
    }
  });
  firstStartup = false;
  dev.log('_processBle - ### done');
}
from my dbHelper:
Future<CustomBluetoothItem?> getBlItemLog({required String bleName, required String startDate}) async {
  Database? db = await database;
  // the following line seems to be where the issue is, since there's no issue if I comment it away.
  List<Map<String, dynamic>> maps = await db!.rawQuery(
      'SELECT * FROM ble_table WHERE bleName = ? AND startDate = ? ORDER BY createDate desc limit 1 offset 0',
      [bleName, startDate]);
  if (maps.isNotEmpty) {
    return CustomBluetoothItem.fromJson(maps[0]);
  }
  return null;
}
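The general mechanism behind this error is that a collection is mutated while it is being iterated; because the loop body awaits, the iteration is suspended, and the scan-results list can be modified in the meantime if the stream emits the live list rather than a copy. Purely as a language-neutral illustration of that mechanism (a Java sketch with made-up names, not Flutter/sqflite code), iterating over a snapshot copy side-steps it:
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

public class ConcurrentModificationDemo {

    public static void main(String[] args) {
        List<String> scanResults = new ArrayList<>(List.of("dev1", "dev2", "dev3"));

        // Iterating the live list while it grows mid-iteration fails,
        // much like Dart's "Concurrent modification during iteration".
        try {
            for (String device : scanResults) {
                scanResults.add(device + "-rescanned"); // simulates new results arriving mid-loop
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("live list failed: " + e);
        }

        // Iterating over a snapshot copy is unaffected by later additions.
        for (String device : List.copyOf(scanResults)) {
            scanResults.add(device + "-rescanned");
        }
        System.out.println("snapshot iteration finished; size is now " + scanResults.size());
    }
}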

How to test that the function under test has thrown an exception

In my Scala and Play code, a function throws an exception:
case None => {
println("error in updating password info")
throw new Exception("error in updating password info") //TODOM - refine errors. Make errors well defined. Pick from config/env file
}
I want to test the above code but I don't know how to test that the Exception was thrown. The spec I have written is
"PasswordRepository Specs" should {
"should not add password for non-existing user" in {
val newPassword = PasswordInfo("newHasher","newPassword",Some("newSalt"))
when(repoTestEnv.mockUserRepository.findOne(ArgumentMatchers.any())).thenReturn(Future{None}) //THIS WILL CAUSE EXCEPTION CODE TO GET EXECUTED
val passwordRepository = new PasswordRepository(repoTestEnv.testEnv.mockHelperMethods,repoTestEnv.mockUserRepository)
println(s"adding password ${newPassword}")
val passwordInfo:PasswordInfo = await[PasswordInfo](passwordRepository.add(repoTestEnv.testEnv.loginInfo,newPassword))(Timeout(Duration(5000,"millis"))) //add SHOULD THROW AN EXCEPTION BUT HOW DO I TEST IT???
}
}
Thanks JB Nizet. I must confess I was being lazy! The correct way is to use either assertThrows or intercept. E.g.:
val exception: scalatest.Assertion = assertThrows[java.lang.Exception](
  await[PasswordInfo](passwordRepository.add(repoTestEnv.testEnv.loginInfo, newPassword))(Timeout(Duration(5000, "millis")))
)
or
val exception = intercept[java.lang.Exception](await[Unit](passwordRepository.remove(repoTestEnv.testEnv.loginInfo))(Timeout(Duration(5000,"millis"))))
println(s"exception is ${exception}")
exception.getMessage() mustBe repoTestEnv.testEnv.messagesApi("error.passwordDeleteError")(repoTestEnv.testEnv.langs.availables(0))

Spark streaming nested execution serialization issues

I am trying to connect to a DB2 database in a Spark Streaming application, and the database query execution statement is causing "org.apache.spark.SparkException: Task not serializable" issues. Please advise. Below is the sample code I have for reference.
dataLines.foreachRDD { rdd =>
  val spark = SparkSessionSingleton.getInstance(rdd.sparkContext.getConf)
  val dataRows = rdd.map(rs => rs.value).map(row =>
    row.split(",")(1) -> (row.split(",")(0), row.split(",")(1), row.split(",")(2),
      "cvflds_" + row.split(",")(3).toLowerCase, row.split(",")(4), row.split(",")(5), row.split(",")(6))
  )
  val db2Conn = getDB2Connection(spark, db2ConParams)
  dataRows.foreach { case (k, v) =>
    val table = v._4
    val dbQuery = s"(SELECT * FROM $table ) tblResult"
    val df = getTableData(db2Conn, dbQuery)
    df.show(2)
  }
}
Below is the other function code:
private def getDB2Connection(spark: SparkSession, db2ConParams: scala.collection.immutable.Map[String, String]): DataFrameReader = {
  spark.read.format("jdbc").options(db2ConParams)
}

private def getTableData(db2Con: DataFrameReader, tableName: String): DataFrame = {
  db2Con.option("dbtable", tableName).load()
}

object SparkSessionSingleton {

  @transient private var instance: SparkSession = _

  def getInstance(sparkConf: SparkConf): SparkSession = {
    if (instance == null) {
      instance = SparkSession
        .builder
        .config(sparkConf)
        .getOrCreate()
    }
    instance
  }
}
Below is the error log:
2018-03-28 22:12:21,487 [JobScheduler] ERROR org.apache.spark.streaming.scheduler.JobScheduler - Error running job streaming job 1522289540000 ms.0
org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:298)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:288)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:108)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2094)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:916)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:915)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.foreach(RDD.scala:915)
at ncc.org.civil.receiver.DB2DataLoadToKudu$$anonfun$createSparkContext$1.apply(DB2DataLoadToKudu.scala:139)
at ncc.org.civil.receiver.DB2DataLoadToKudu$$anonfun$createSparkContext$1.apply(DB2DataLoadToKudu.scala:128)
at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:627)
at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:627)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:254)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:254)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:254)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:253)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.NotSerializableException: org.apache.spark.sql.DataFrameReader
Serialization stack:
- object not serializable (class: org.apache.spark.sql.DataFrameReader, value: org.apache.spark.sql.DataFrameReader@15fdb01)
- field (class: ncc.org.civil.receiver.DB2DataLoadToKudu$$anonfun$createSparkContext$1$$anonfun$apply$2, name: db2Conn$1, type: class org.apache.spark.sql.DataFrameReader)
- object (class ncc.org.civil.receiver.DB2DataLoadToKudu$$anonfun$createSparkContext$1$$anonfun$apply$2, )
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:295)
... 30 more
Ideally you should keep the closure in dataRows.foreach clear of any connection objects, since the closure is meant to be serialized to executors and run there. This concept is covered in depth at this official link.
In your case, the line below inside the closure is what causes the issue:
val df=getTableData(db2Conn,dbQuery)
So, instead of using Spark to load the DB2 table, which in your case becomes (after combining the methods):
spark.read.format("jdbc").options(db2ConParams).option("dbtable",tableName).load()
use plain JDBC in the closure to achieve this. You can use db2ConParams in the JDBC code (I assume it is simple enough to be serializable). The link also suggests using rdd.foreachPartition and a ConnectionPool to further optimize, as in the sketch below.
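A rough sketch of that idea, written in plain Java since JDBC itself is a Java API: the connection is created inside the per-partition call, so nothing non-serializable is captured by the Spark closure. The helper name, the parameter keys "url", "user" and "password", and the DB2 FETCH FIRST clause are illustrative assumptions, not taken from the question:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Iterator;
import java.util.Map;

public final class Db2PartitionLookup {

    private Db2PartitionLookup() {}

    // Intended to be called from something like dataRows.foreachPartition(...),
    // with the table names already extracted from the tuples: one JDBC
    // connection per partition, closed when the partition is done.
    public static void showTables(Iterator<String> tableNames, Map<String, String> db2ConParams) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 db2ConParams.get("url"),
                 db2ConParams.get("user"),
                 db2ConParams.get("password"));
             Statement stmt = conn.createStatement()) {
            while (tableNames.hasNext()) {
                String table = tableNames.next();
                // Rough stand-in for df.show(2): fetch and print the first two rows.
                try (ResultSet rs = stmt.executeQuery("SELECT * FROM " + table + " FETCH FIRST 2 ROWS ONLY")) {
                    while (rs.next()) {
                        System.out.println(table + ": " + rs.getString(1));
                    }
                }
            }
        }
    }
}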
You have not mentioned what you are doing with the table data other than df.show(2). If the tables are huge, you may want to share more about your use case; perhaps you need to consider a different design.

Apache Camel: CXF - returning Holder values (error: IndexOutOfBoundsException: Index: 1, Size: 1)

I have a problem with setting holders in my output message.
I have the following simple route and processor:
from("cxf:bean:ewidencjaEndpoint")
.process(new ProcessResult())
.end();
public class ProcessResult implements Processor {
public void process(Exchange exchange) throws Exception {
Object[] args = exchange.getIn().getBody(Object[].class);
long id = (long) args[0];
Holder<A> dataA = (Holder<A>) args[1];
Holder<B> dataB = (Holder<B>) args[2];
exchange.getOut().setBody(new Object[]{ dataA, dataB});
}
I get the following error:
java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
at java.util.ArrayList.RangeCheck(ArrayList.java:547)
at java.util.ArrayList.get(ArrayList.java:322)
at org.apache.cxf.jaxws.interceptors.HolderInInterceptor.handleMessage(HolderInInterceptor.java:67)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:255)
at org.apache.cxf.endpoint.ClientImpl.onMessage(ClientImpl.java:737)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponseInternal(HTTPConduit.java:2335)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream$1.run(HTTPConduit.java:2198)
I've read about many similar problems described on the web (e.g. http://camel.465427.n5.nabble.com/java-lang-IndexOutOfBoundsException-in-cxf-producer-td468541.html) but without any success in resolving the problem.
In debug I get an output message like:
Exchange[Message[null, null, A@xxx, B@yyy]]
I don't understand where those "null" values come from.
I've got only 2 output values (in Holders) according to the WSDL file (and the generated interface). I also see in the debug console that the 'out' part of the exchange body contains only the 2 values set in ProcessResult() (indexed 2 to 3), yet the size of the 'out' part is 4 (not 2). Why is that?
