I see the following error in the production log. Please advise what could be causing it; these emailed logs are so abstract that I'm really not able to determine the issue.
Secondly, where is the device information included here?
[Network Thread] 45:4:44,291 - The operation couldn’t be completed. Software caused connection abort
[Network Thread] 45:4:44,297 - Exception: java.io.IOException - The operation couldn’t be completed. Software caused connection abort
java.io.IOException
at com_codename1_io_ConnectionRequest.performOperation:884
at com_codename1_io_NetworkManager_NetworkThread.run:325
at com_codename1_impl_CodenameOneThread.run:176
at java_lang_Thread.runImpl:153
This is a "feature" of iOS 12: https://forums.developer.apple.com/thread/106838
It seems you're using the connection while the app is in the background. Apple disallows that and historically would randomly kill the app when you did that without a background task.
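If you can't avoid the app being backgrounded mid-request, one option is to abort in-flight requests from the lifecycle stop() method of your main class. A minimal sketch, assuming you keep a reference to the pending ConnectionRequest (the field name here is hypothetical):

// hypothetical field tracking the in-flight request
private ConnectionRequest pending;

public void stop() {
    // called when the app is minimized; kill the request so no socket
    // is still in use while iOS has the app in the background
    if (pending != null) {
        pending.kill();
        pending = null;
    }
}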
I'm currently working with server-sent events (SSE) using RESTEasy in WildFly. So far everything is working, except that sometimes the SSE implementation somehow doesn't recognize that the client(s) listening to events is/are already closed (even though the close() method of the SseEventSource on the client side was called). As far as the program logic is concerned, this isn't a problem at all.
Unfortunately, the org.jboss.resteasy.plugins.providers.sse.SseEventOutputImpl class, which is used to send the events, does not only report the exception back to org.jboss.resteasy.plugins.providers.sse.SseBroadcasterImpl, but also logs it using the failedToWriteSseEvent(String, Throwable) method of org.jboss.resteasy.resteasy_jaxrs.i18n.LogMessages (the latter class is based on JBoss Logging). So every now and then I get an unnecessary log message at level ERROR telling me that the connection was closed by the client, and I get that log entry in addition to the onClose event I get from the SseBroadcaster.
Configuring this away via JBoss Logging seems impossible, as the logger name is org.jboss.resteasy.resteasy_jaxrs.i18n, which is also used for logging other errors (meaning that just configuring this logger in the deployment's log4j.xml won't work; it would also turn off other errors).
2021-06-22 12:59:27 [ERROR] [org.jboss.resteasy.resteasy_jaxrs.i18n:272] - RESTEASY002030: Failed to write event org.jboss.resteasy.plugins.providers.sse.OutboundSseEventImpl@fd79b33
java.io.IOException: An existing connection was forcibly closed by the remote host
at sun.nio.ch.SocketDispatcher.writev0(Native Method)
at sun.nio.ch.SocketDispatcher.writev(Unknown Source)
at sun.nio.ch.IOUtil.write(Unknown Source)
at sun.nio.ch.SocketChannelImpl.write(Unknown Source)
at org.xnio.nio.NioSocketConduit.write(NioSocketConduit.java:162)
at io.undertow.server.protocol.http.HttpResponseConduit.write(HttpResponseConduit.java:647)
at io.undertow.conduits.ChunkedStreamSinkConduit.doWrite(ChunkedStreamSinkConduit.java:166)
at io.undertow.conduits.ChunkedStreamSinkConduit.write(ChunkedStreamSinkConduit.java:128)
at org.xnio.conduits.ConduitStreamSinkChannel.write(ConduitStreamSinkChannel.java:150)
at io.undertow.channels.DetachableStreamSinkChannel.write(DetachableStreamSinkChannel.java:240)
at io.undertow.server.HttpServerExchange$WriteDispatchChannel.write(HttpServerExchange.java:2103)
at io.undertow.servlet.spec.ServletOutputStreamImpl.writeBufferBlocking(ServletOutputStreamImpl.java:574)
at io.undertow.servlet.spec.ServletOutputStreamImpl.flushInternal(ServletOutputStreamImpl.java:489)
at io.undertow.servlet.spec.ServletOutputStreamImpl.flush(ServletOutputStreamImpl.java:476)
at io.undertow.servlet.spec.HttpServletResponseImpl.flushBuffer(HttpServletResponseImpl.java:468)
at javax.servlet.ServletResponseWrapper.flushBuffer(ServletResponseWrapper.java:221)
at org.jboss.resteasy.plugins.server.servlet.HttpServletResponseWrapper.flushBuffer(HttpServletResponseWrapper.java:124)
at org.jboss.resteasy.plugins.providers.sse.SseEventOutputImpl.writeEvent(SseEventOutputImpl.java:264)
at org.jboss.resteasy.plugins.providers.sse.SseEventOutputImpl.send(SseEventOutputImpl.java:199)
at org.jboss.resteasy.plugins.providers.sse.SseBroadcasterImpl.lambda$null$4(SseBroadcasterImpl.java:150)
at java.lang.Iterable.forEach(Unknown Source)
at org.jboss.resteasy.plugins.providers.sse.SseBroadcasterImpl.lambda$broadcast$5(SseBroadcasterImpl.java:146)
at java.util.concurrent.CompletableFuture$AsyncRun.run(Unknown Source)
at java.util.concurrent.CompletableFuture$AsyncRun.exec(Unknown Source)
at java.util.concurrent.ForkJoinTask.doExec(Unknown Source)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(Unknown Source)
at java.util.concurrent.ForkJoinPool.runWorker(Unknown Source)
at java.util.concurrent.ForkJoinWorkerThread.run(Unknown Source)
Is there a way to control that logging within RESTEasy and disable the logging of failedToWriteSseEvent(String, Throwable)? For example, by introducing/injecting my own implementation of LogMessages (though as far as I understand it, the interface is used as a proxy, so...)?
You might be hitting RESTEASY-1986. You can filter these messages out with a log filter, though. In the WildFly CLI, something like:
/subsystem=logging/logger=org.jboss.resteasy.resteasy_jaxrs.i18n:add(filter-spec=not(match(".*RESTEASY002030.*")), level=INFO)
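For reference, that CLI command should end up as roughly the following logger element in the logging subsystem of standalone.xml (a sketch; verify it against your own configuration):

<logger category="org.jboss.resteasy.resteasy_jaxrs.i18n">
    <level name="INFO"/>
    <filter-spec value="not(match(&quot;.*RESTEASY002030.*&quot;))"/>
</logger>

The not(match(...)) filter drops only records whose text matches the RESTEASY002030 code, so other errors on the same logger still get through.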
We are using the Apache Zookeeper Client C bindings in our application. Client library version is 3.5.1. When the Zookeeper connection gets disconnected, the application is configured to exit with error code 116.
Systemd is being used to automate starting/stopping the application. The unit file does not override the default setting for KillMode, which is to send SIGTERM to the application.
When the process is stopped using the systemctl stop directive, the Zookeeper client threads seem to be attempting to reconnect to Zookeeper:
2016-04-12 22:34:45,799:4506(0xf14f7b40):ZOO_ERROR#handle_socket_error_msg#2363: Socket [128.0.0.4:61758] zk retcode=-4, errno=112(Host is down): failed while receiving a server response
2016-04-12 22:34:45,799:4506(0xf14f7b40):ZOO_INFO#check_events#2345: initiated connection to server [128.0.0.4:61758]
Apr 12 22:34:45 main thread: zookeeperWatcher: event type ZOO_SESSION_EVENT state ZOO_CONNECTING_STATE path
2016-04-12 22:34:45,801:4506(0xf14f7b40):ZOO_INFO#check_events#2397: session establishment complete on server [128.0.0.4:61758], sessionId=0x40000015b8d0077, negotiated timeout=20000
2016-04-12 22:34:46,476:4506(0xf14f7b40):ZOO_WARN#zookeeper_interest#2191: Delaying connection after exhaustively trying all servers [128.0.0.4:61758]
2016-04-12 22:34:46,810:4506(0xf14f7b40):ZOO_INFO#check_events#2345: initiated connection to server [128.0.0.4:61758]
2016-04-12 22:34:46,811:4506(0xf14f7b40):ZOO_ERROR#handle_socket_error_msg#2382: Socket [128.0.0.4:61758] zk retcode=-112, errno=116(Stale file handle): sessionId=0x40000015b8d0077 h
Due to this, the process is exiting with an error code. Systemd sees the failure code upon exit and does not attempt to restart the application. Does anyone know why the client is getting disconnected?
I am aware that I can work around this by setting SuccessExitStatus=116 in the unit file, but I don't want to mask out genuine errors. I have tried registering a signal handler for SIGTERM and closing the Zookeeper client in the handler. But the handler code never seems to get hit when I issue systemctl stop.
EDIT: The handler wasn't getting called because I had made it asynchronous, so it didn't execute immediately upon receiving the signal. The process, on the other hand, exits immediately upon Zookeeper disconnect.
What happens when you install the handler for SIGTERM and issue systemctl stop?
If nothing occurs, then you may have a mask blocking the signal (I guess not).
If the application keeps exiting with the same error code, then I would suggest making sure that the signal handler is being installed correctly.
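To rule out the handler itself, the usual pattern is to install it with sigaction and only set a flag in the handler, then close the Zookeeper session from the main loop. A minimal sketch (the connect string and the loop body are placeholders, not your actual code):

#include <signal.h>
#include <unistd.h>
#include <zookeeper/zookeeper.h>

static volatile sig_atomic_t stop_requested = 0;

static void on_sigterm(int sig) {
    (void)sig;
    stop_requested = 1; /* only set a flag: most functions are not async-signal-safe */
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = on_sigterm;
    sigaction(SIGTERM, &sa, NULL);

    zhandle_t *zh = zookeeper_init("128.0.0.4:2181", NULL, 20000, NULL, NULL, 0);
    while (!stop_requested) {
        sleep(1); /* placeholder for application work */
    }
    zookeeper_close(zh); /* close the session cleanly before exiting */
    return 0;
}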
This is working as expected; it's the application writer's responsibility to specify how to gracefully shut down the service. If you don't want the default, which sends SIGTERM, you can use ExecStop to define your own stop command in the unit file:
ExecStart=/usr/bin/app
ExecStop=/usr/bin/app -stop
For details, see the docs at
https://www.freedesktop.org/software/systemd/man/systemd.service.html#ExecStop=
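Put together, the relevant part of the unit file could look like the sketch below (the paths and -stop flag are the placeholders from above; SuccessExitStatus is the workaround already mentioned in the question):

[Service]
ExecStart=/usr/bin/app
ExecStop=/usr/bin/app -stop
# alternatively, treat the Zookeeper-disconnect exit code as a clean stop:
# SuccessExitStatus=116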
The issue was unrelated: someone was running a script that was killing the connection. Thank you all for your help!
Currently I am running a Flink program on a remote cluster of 4 machines using 144 TaskSlots. After running for around 30 minutes I received the following error:
INFO org.apache.flink.runtime.jobmanager.web.JobManagerInfoServlet - Info server for jobmanager: Failed to write json updates for job b2eaff8539c8c9b696826e69fb40ca14, because
org.eclipse.jetty.io.RuntimeIOException: org.eclipse.jetty.io.EofException
at org.eclipse.jetty.io.UncheckedPrintWriter.setError(UncheckedPrintWriter.java:107)
at org.eclipse.jetty.io.UncheckedPrintWriter.write(UncheckedPrintWriter.java:280)
at org.eclipse.jetty.io.UncheckedPrintWriter.write(UncheckedPrintWriter.java:295)
at org.apache.flink.runtime.jobmanager.web.JobManagerInfoServlet.writeJsonUpdatesForJob(JobManagerInfoServlet.java:588)
at org.apache.flink.runtime.jobmanager.web.JobManagerInfoServlet.doGet(JobManagerInfoServlet.java:209)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:734)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:532)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:453)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:227)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:965)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:388)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:187)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:901)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)
at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:47)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:113)
at org.eclipse.jetty.server.Server.handle(Server.java:352)
at org.eclipse.jetty.server.HttpConnection.handleRequest(HttpConnection.java:596)
at org.eclipse.jetty.server.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:1048)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:549)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:211)
at org.eclipse.jetty.server.HttpConnection.handle(HttpConnection.java:425)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:489)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:436)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.eclipse.jetty.io.EofException
at org.eclipse.jetty.http.HttpGenerator.flushBuffer(HttpGenerator.java:905)
at org.eclipse.jetty.http.AbstractGenerator.flush(AbstractGenerator.java:427)
at org.eclipse.jetty.server.HttpOutput.flush(HttpOutput.java:78)
at org.eclipse.jetty.server.HttpConnection$Output.flush(HttpConnection.java:1139)
at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:159)
at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:86)
at java.io.ByteArrayOutputStream.writeTo(ByteArrayOutputStream.java:154)
at org.eclipse.jetty.server.HttpWriter.write(HttpWriter.java:258)
at org.eclipse.jetty.server.HttpWriter.write(HttpWriter.java:107)
at org.eclipse.jetty.io.UncheckedPrintWriter.write(UncheckedPrintWriter.java:271)
... 24 more
Caused by: java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:51)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470)
at org.eclipse.jetty.io.nio.ChannelEndPoint.flush(ChannelEndPoint.java:185)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.flush(SelectChannelEndPoint.java:256)
at org.eclipse.jetty.http.HttpGenerator.flushBuffer(HttpGenerator.java:849)
... 33 more
I know that java.io.IOException: Broken pipe means that the JobManager lost some kind of connection, so I guess the whole job failed and I have to restart it. Although I think the process is not running anymore, the web interface still lists it as running. Additionally, the JobManager is still present when I use jps to list my running processes on the cluster. So my question is whether my job is lost, and whether this error happens randomly from time to time or whether my program caused it.
EDIT: My TaskManagers still send Heartbeats every few seconds and seem to be running.
It's actually a problem of the JobManagerInfoServlet, Flink's web server, which cannot send the latest JSON updates of the requested job to your browser because of the java.io.IOException: Broken pipe at sun.nio.ch.FileDispatcherImpl.write0(Native Method). Thus, only the GET request to the server failed.
Such a failure should not affect the execution of the currently running Flink job. Simply refreshing your browser (with Flink's web UI) should send another GET request which then hopefully completes successfully.
I am sending more than 50 requests to a server using node.js. However, after 20-30 requests, I get a socket hang up error.
Error --
Error: socket hang up
at createHangUpError (http.js:1472:15)
at Socket.socketOnEnd [as onend] (http.js:1568:23)
at Socket.g (events.js:180:16)
at Socket.EventEmitter.emit (events.js:117:20)
at _stream_readable.js:920:16
at process._tickCallback (node.js:415:13)
Yeah, it looks like your backend server is hanging up the socket, either due to timeout or capacity. Can you throttle the requests you are sending? Using a library like async (with one of its *Limit methods) might help you throttle the connections in an easy way.
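For example, a minimal sketch with async's eachLimit, assuming the requests are plain HTTP GETs (the URL list and the concurrency limit of 5 are placeholders to tune):

const async = require('async');
const http = require('http');

const urls = [/* your 50+ request URLs */];

// send at most 5 requests at a time instead of opening them all at once
async.eachLimit(urls, 5, (url, done) => {
    http.get(url, (res) => {
        res.resume();                 // drain the response body
        res.on('end', () => done());  // signal completion so the next request starts
    }).on('error', done);
}, (err) => {
    if (err) console.error('request failed:', err);
});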
TL;DR:
This could be a problem caused by Node firing its GC and freezing all operations, which results in unpredictable occasional "socket hang up" errors.
I experienced a similar situation. I start a bunch of server calls wrapped in promises that run in parallel, and then have a loop with a sleep that checks periodically for their completion. I observe the following pattern:
The calls complete promptly for some time
then there is a slow-down
then eventually some "socket hang up" errors
then server calls continue promptly
This pattern repeats continuously. The DataDog stats on the server I call do not show variations in latency that could be a primary cause, so my conclusion is that it is something on the Node app side.
If the problem is due to GC, that is bad news, because GC is usually referred to as "magic" :). You can't predict when it fires or how deep it goes.
HTH
I'm currently developing with the WebKitGTK+ unstable API.
I'm using the SoupSession object to connect signals and retrieve SoupMessages, in order to (again) hook signals to every message and obtain timing details of network events. My problem is how to monitor errors from this point.
Using just the signals, is there a way to detect when a network error occurs, like a DNS error or a socket error? I searched through the SoupSession manuals but found nothing usable.
Can someone give me some guidance?
Some time ago I figured it out.
The errors are reported in the response HTTP status code of the SoupMessage:
https://developer.gnome.org/libsoup/stable/libsoup-2.4-soup-status.html
I just needed to capture the status code in the SoupMessage's "finished" signal to know whether the resource failed (and why) or was successful.
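In case it helps someone, here is a minimal sketch of that approach for libsoup 2.4 (the callback names are mine): hook each message as the session queues it, then inspect the status code when it finishes. Transport-level failures such as DNS or socket errors show up as libsoup's internal status codes.

#include <libsoup/soup.h>

static void on_finished(SoupMessage *msg, gpointer user_data) {
    /* real HTTP statuses pass through; SOUP_STATUS_IS_TRANSPORT_ERROR covers
       libsoup's internal codes for DNS, connect, and I/O failures */
    if (SOUP_STATUS_IS_TRANSPORT_ERROR(msg->status_code))
        g_printerr("transport error: %u %s\n", msg->status_code, msg->reason_phrase);
}

static void on_request_queued(SoupSession *session, SoupMessage *msg, gpointer user_data) {
    /* hook every message the session queues */
    g_signal_connect(msg, "finished", G_CALLBACK(on_finished), NULL);
}

static void attach_monitor(SoupSession *session) {
    /* call this once after creating the session */
    g_signal_connect(session, "request-queued", G_CALLBACK(on_request_queued), NULL);
}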