Running a Flink job that uses Gelly with a custom VertexCentricIteration algorithm, I get a NoSuchElementException thrown from the ApplyNeighborCoGroupFunction class.
Caused by: java.util.NoSuchElementException
at java.base/java.util.Collections$EmptyIterator.next(Collections.java:4210)
at org.apache.flink.graph.Graph$ApplyNeighborCoGroupFunction.coGroup(Graph.java:2435)
at org.apache.flink.runtime.operators.CoGroupDriver.run(CoGroupDriver.java:177)
From the class's source code the error makes sense: all the coGroup method does is call vertex.iterator().next(), and in my case the vertex iterable has no elements.
I would like to understand whether this is really an exceptional case, i.e. whether the vertex iterable must always contain an element in coGroup, and if so, what in my code might cause that invariant to break.
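For reference, the JDK-level failure is trivially reproducible outside Flink; this standalone snippet throws the same exception from the same Collections$EmptyIterator class that appears in the stack trace, which is what Graph$ApplyNeighborCoGroupFunction runs into when the vertex iterable is empty:

import java.util.Collections;
import java.util.Iterator;

public class EmptyIteratorDemo {
    public static void main(String[] args) {
        // Calling next() without checking hasNext() on an empty iterator
        // throws java.util.NoSuchElementException, exactly as in the trace.
        Iterator<Object> it = Collections.emptyIterator();
        it.next();
    }
}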
I have a queue system, of which Camel is just a small part. In this queue system, for some queues, the broker returns FAIL when the queue is full. To handle this, I look at the JMSException I get, and from its message I can see whether or not the reason is that the queue is full.
What I want to achieve in Camel is that for the specific case of full queue, I want to retry deliveries, whereas for any other JMS Exception (or any other exception) I want to send it to DLQ.
I'm assuming that I have to use onException(JMSException.class) and provide a custom processor which will look at the exception message, but after that, I'm not sure what to do. I tried raising a specific custom exception from the processor in that case (e.g. QueueFailedException) and having another onException(QueueFailedException.class) in the setup, but then I get the following error:
"(...) ERROR org.apache.camel.processor.FatalFallbackErrorHandler - Exception occurred while trying to handle previously thrown exception on exchangeId: (...) using: [Channel[DelegateSync[org.eumetsat.gems.bridge.JmsMessageReplicator$JMSExceptionHandler#67a93d5f]]]. The previous and the new exception will be logged in the following."
As far as I understand, you need to tell Camel that it must consider the exception handled, and that it should retry sending the message according to some logic check. Note that .handled() also accepts a Predicate or an Expression besides a Boolean.
Don't the following sections help you achieve that?
Marking exceptions as handled
Using fine-grained retry with a retryWhile predicate.
I'm assuming you're able to identify, from the caught JMSException, whether the exception was really caused by queue flooding.
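Putting those pieces together, a rough sketch of such a route might look like the following. This is only an illustration of the retryWhile/handled combination, not a drop-in solution: the class name, the endpoint URIs, the "queue full" marker in the exception message, and the redelivery delay are all assumptions to adapt to your broker.

import javax.jms.JMSException;

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

public class QueueFullRetryRoute extends RouteBuilder {
    @Override
    public void configure() {
        onException(JMSException.class)
            // keep retrying only while the failure looks like a full queue
            .retryWhile(exchange -> {
                Exception cause =
                        exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Exception.class);
                return cause != null && cause.getMessage() != null
                        && cause.getMessage().contains("queue full"); // hypothetical marker
            })
            .redeliveryDelay(5000)
            // any other failure (or one we gave up retrying) is marked handled
            // and shipped to the DLQ instead of propagating back to the consumer
            .handled(true)
            .useOriginalMessage()
            .to("jms:queue:DLQ"); // hypothetical DLQ endpoint

        from("jms:queue:in")      // hypothetical source endpoint
            .to("jms:queue:out"); // hypothetical target queue that may fill up
    }
}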
For the last few weeks, I have been trying to deal with an intermittent problem on a Camel route that does XSLT processing following aggregation. It is intermittent in the sense that, while it frequently raises this exception, I can re-run the data extract and processing that failed a few seconds later and it usually succeeds. I have yet to find any data that fails consistently.
I am assuming that the aggregation is causing the problem, but I can't for the life of me understand why. I thought it might be the custom aggregation bean I was using, so I replaced it with XsltAggregationStrategy, but it still intermittently gives this issue, either when further transforming the aggregated XML, or when just writing it out to a file.
This is executing in an Apache-Karaf environment, and I have Camel-Saxon 2.21.2 and Apache ServiceMix Saxon-HE 9.8.0.8_1 bundles loaded.
Thanks for looking.
The abridged stack trace is:
...
Caused by: [java.lang.NullPointerException - null]
java.lang.NullPointerException
at net.sf.saxon.dom.DOMNodeWrapper$ChildEnumeration.skipFollowingTextNodes(DOMNodeWrapper.java:1149)
at net.sf.saxon.dom.DOMNodeWrapper$ChildEnumeration.next(DOMNodeWrapper.java:1178)
at net.sf.saxon.tree.util.Navigator$EmptyTextFilter.next(Navigator.java:1078)
at net.sf.saxon.tree.util.Navigator$AxisFilter.next(Navigator.java:1039)
at net.sf.saxon.tree.util.Navigator$AxisFilter.next(Navigator.java:1017)
at net.sf.saxon.expr.parser.ExpressionTool.effectiveBooleanValue(ExpressionTool.java:643)
at net.sf.saxon.expr.Expression.effectiveBooleanValue(Expression.java:532)
at net.sf.saxon.pattern.PatternWithPredicate.matches(PatternWithPredicate.java:141)
at net.sf.saxon.trans.Mode.searchRuleChain(Mode.java:570)
at net.sf.saxon.trans.Mode.getRule(Mode.java:476)
at net.sf.saxon.trans.Mode.applyTemplates(Mode.java:1041)
at net.sf.saxon.expr.instruct.ApplyTemplates.apply(ApplyTemplates.java:281)
at net.sf.saxon.expr.instruct.ApplyTemplates.processLeavingTail(ApplyTemplates.java:241)
at net.sf.saxon.expr.instruct.Template.applyLeavingTail(Template.java:239)
at net.sf.saxon.trans.Mode.applyTemplates(Mode.java:1057)
at net.sf.saxon.expr.instruct.ApplyTemplates.apply(ApplyTemplates.java:281)
at net.sf.saxon.expr.instruct.ApplyTemplates.process(ApplyTemplates.java:237)
at net.sf.saxon.expr.instruct.ElementCreator.processLeavingTail(ElementCreator.java:431)
at net.sf.saxon.expr.instruct.ElementCreator.processLeavingTail(ElementCreator.java:373)
at net.sf.saxon.expr.instruct.Template.applyLeavingTail(Template.java:239)
at net.sf.saxon.trans.Mode.applyTemplates(Mode.java:1057)
at net.sf.saxon.Controller.transformDocument(Controller.java:2080)
at net.sf.saxon.Controller.transform(Controller.java:1903)
at org.apache.camel.builder.xml.XsltBuilder.process(XsltBuilder.java:141)
at org.apache.camel.impl.ProcessorEndpoint.onExchange(ProcessorEndpoint.java:103)
at org.apache.camel.component.xslt.XsltEndpoint.onExchange(XsltEndpoint.java:138) ...
In 9.8.0.8, the class net.sf.saxon.dom.DOMNodeWrapper has only 1144 lines, so a stacktrace showing line 1178 suggests there's some kind of versioning problem.
The class DOMNodeWrapper was first introduced in 9.5 (previously it was called NodeWrapper), and the line numbers are just one off from those in the current 9.5 source, so I suspect what you have loaded is some sub-release of the 9.5 branch. Other line numbers in the stack trace are also consistent with this being 9.5.
That of course doesn't explain the problem, but it might give a clue.
My immediate instinct was that over the years since 9.5 we might have fixed a multi-threading bug. DOM is not thread-safe, so Saxon takes considerable care to synchronize its access. Saxon bug https://saxonica.plan.io/issues/2376 addresses this problem. On the 9.5 branch this was first fixed in maintenance release 9.5.1.11, so it's possible you don't have that patch. I think it would be useful to investigate why you are loading an old version of Saxon, and another useful angle would be to discover exactly which version it is (the static method net.sf.saxon.Version.getProductVersion() will give you this information).
Incidentally, if you are using multi-threaded access to a DOM tree then you should ask yourself whether this is a good idea. Saxon access to DOM is slow at the best of times (compared to JDOM and XOM, let alone to Saxon's native tree model), and the lack of thread safety and the need for synchronisation makes it a pretty poor choice in a multi-threaded application.
Also, note that Saxon can synchronize its own access to the DOM, but it can't synchronize with third-party code that might also be using the DOM.
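If it helps, here is a quick way to check which Saxon is actually being picked up at runtime, based on the getProductVersion() suggestion above; note that the code-source lookup may return null or a bundle URL under OSGi/Karaf:

import java.security.CodeSource;

import net.sf.saxon.Version;

public class SaxonVersionCheck {
    public static void main(String[] args) {
        // the exact maintenance release that is loaded, e.g. "9.5.1.8"
        System.out.println("Saxon version: " + Version.getProductVersion());
        // where the Version class was loaded from (may be null under OSGi)
        CodeSource src = Version.class.getProtectionDomain().getCodeSource();
        System.out.println("Loaded from: "
                + (src == null ? "unknown (OSGi?)" : src.getLocation()));
    }
}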
I wonder if there is any built-in error handling in Flink. There may be two cases:
1) the current message from Kafka (in my case) is invalid: skip it and continue with the next one;
2) an uncaught exception: from what I saw, it can stop the stream aggregation completely.
How can I handle these two cases? (Java code)
1) This is done idiomatically with a flatMap: if your message is valid, you emit a list containing your valid element (maybe already processed in the same step); if it's not valid, you simply return an empty list so that no elements are produced by that step. I could provide Scala code, but I'm not familiar with the Java APIs and I don't want to put you off track; just check the flatMap call (there is a Java sketch after the next point).
2) This depends on the type of exception: if it's thrown by your own code, just catch it and handle it inside the operator, or simply log it and move on. Without any further information about a specific case, this is the best I know of; but again, coming from Scala, I haven't experienced many runtime exceptions.
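A minimal Java sketch of both points, assuming the input is a stream of strings and the trivial parse step stands in for your own validation logic:

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.util.Collector;

public class SafeParseFlatMap implements FlatMapFunction<String, Integer> {
    @Override
    public void flatMap(String raw, Collector<Integer> out) {
        try {
            // valid record: emit it (case 1, possibly processed in the same step)
            out.collect(Integer.parseInt(raw.trim()));
        } catch (NumberFormatException e) {
            // invalid record or exception from our own code (case 2):
            // log it and emit nothing, so the stream simply moves on
            System.err.println("Skipping bad record: " + raw);
        }
    }
}

You would attach it with something like stream.flatMap(new SafeParseFlatMap()).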
When sending an Android build to the server I get the following build error:
Error! Failed to transform some classes
java.lang.RuntimeException: Method code too large!
at net.orfjackal.retrolambda.asm.MethodWriter.getSize(MethodWriter.java:2036)
at net.orfjackal.retrolambda.asm.ClassWriter.toByteArray(ClassWriter.java:827)
at net.orfjackal.retrolambda.Transformers.transform(Transformers.java:121)
at net.orfjackal.retrolambda.Transformers.transform(Transformers.java:106)
at net.orfjackal.retrolambda.Transformers.backportClass(Transformers.java:46)
at net.orfjackal.retrolambda.Retrolambda.run(Retrolambda.java:72)
at net.orfjackal.retrolambda.Main.main(Main.java:26)
I must confess I'm not sure why this is occurring, as I do not reference these classes directly. Could someone please explain how to track down the cause and fix it? I have not added any new imports since the last successful build :/ My project is also set to use Java 8. Not sure where to go from here, to be honest.
There is a hard limit of 64 KB on the bytecode of a single method in a class file. You have at least one big method that you need to split up. It may have been coming in just under the limit for the initial compilation, and the Retrolambda conversion pushed it over. You need to split these methods into smaller methods.
This error doesn't really give you a clue as to which methods are problematic, but you can probably eyeball it.
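If eyeballing doesn't narrow it down, here is a rough way to spot the culprit, assuming the ASM library (version 5+, which Retrolambda itself bundles) is on the classpath: dump the instruction count of every method in a class file so unusually large methods stand out. Instruction count is only a proxy for bytecode size, but it is usually enough to find the offender.

import java.nio.file.Files;
import java.nio.file.Paths;

import org.objectweb.asm.ClassReader;
import org.objectweb.asm.tree.ClassNode;
import org.objectweb.asm.tree.MethodNode;

public class MethodSizeScanner {
    public static void main(String[] args) throws Exception {
        // args[0]: path to a compiled .class file
        byte[] bytes = Files.readAllBytes(Paths.get(args[0]));
        ClassNode cls = new ClassNode();
        new ClassReader(bytes).accept(cls, 0);
        for (MethodNode m : cls.methods) {
            System.out.printf("%s%s -> %d instructions%n",
                    m.name, m.desc, m.instructions.size());
        }
    }
}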
Using ASP.NET MVC and Entity Framework, I keep getting "The operation cannot be completed because the DbContext has been disposed."
I am using the UnitOfWork pattern along with a GenericRepository.
The exception happens in PagedList.PagedListExtensions.ToPagedList.
I tried using AsNoTracking, without any result.
How can I avoid this exception? It crashes the client's interface with an internal error.
Most likely you aren't enumerating the query until after the context has been disposed. LINQ to Entities doesn't actually execute a query until you enumerate it, i.e. ask to see each item in the result set. ToPagedList sounds like exactly the sort of thing that requires enumeration.
The fix is to force enumeration of the query sometime before disposing of the context (probably in the repository), by calling .ToList() on it.