I am working on a SPA using Spring (REST backend and security) and AngularJS. Everything is going OK, but when I run it locally I see the error below and have no idea why it is happening.
2016-10-24 10:43:29.028 DEBUG 8040 --- [io-8443-exec-10] o.apache.coyote.http11.Http11Processor : Socket: [org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper@1a572ed1:org.apache.tomcat.util.net.SecureNioChannel@1b7d9385:java.nio.channels.SocketChannel[connected local=/0:0:0:0:0:0:0:1:8443 remote=/0:0:0:0:0:0:0:1:53170]], Status in: [OPEN_READ], State out: [OPEN]
2016-10-24 10:43:31.949 DEBUG 8040 --- [nio-8443-exec-7] o.apache.coyote.http11.Http11Processor : Error parsing HTTP request header
java.io.IOException: Unable to unwrap data, invalid status [CLOSED]
at org.apache.tomcat.util.net.SecureNioChannel.read(SecureNioChannel.java:590)
at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper.fillReadBuffer(NioEndpoint.java:1200)
at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper.read(NioEndpoint.java:1149)
at org.apache.coyote.http11.Http11InputBuffer.fill(Http11InputBuffer.java:742)
at org.apache.coyote.http11.Http11InputBuffer.parseRequestLine(Http11InputBuffer.java:404)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:667)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:802)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1410)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
If anyone can give me a starting point, I can do more research.
We had the same issue.
It turned out that on the server side an external event was forcefully closing the connection at some point during the call, which was causing this error.
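If you need to confirm what is closing the connection, JSSE handshake logging will show which side sends the TLS close_notify. A minimal way to enable it, assuming the app runs as a Spring Boot jar (your-app.jar is a placeholder):

java -Djavax.net.debug=ssl,handshake -jar your-app.jar

With that flag set, the console prints each handshake and close event, so you can match the timestamps against the Tomcat DEBUG lines above.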
I was facing the same issue while creating a secured WebSocket (wss) API using Tomcat's JSR-356 support, deployed on AWS EC2 and referenced via a domain on Cloudflare.
To fix the issue I made two changes:
- Changed the Cloudflare A record from proxied mode to DNS only.
- Bought a new certificate and configured Tomcat to use it instead of Cloudflare's origin CA certificate.
For detailed steps check this:
https://techxperiment.blogspot.com/2020/06/aws-ec2-tomcat-jsr-356-secure.html
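To illustrate the second change, here is a minimal sketch of a Tomcat HTTPS connector in conf/server.xml pointed at the new keystore (the file name and password are placeholders, not values from the post):

<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           SSLEnabled="true" scheme="https" secure="true"
           keystoreFile="conf/keystore.jks" keystorePass="changeit"
           sslProtocol="TLS" />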
I see the error "server returned http response code 401" whenever I try to send an Android build for any project. I have even tried to send an unchanged version of the KitchenSink demo. I'm assuming I am missing an update of some kind. Below is the message on the output console:
You sent an android build without submitting a keystore. Notice that you will receive a build that is inappropriate for distribution (although it could be used for debugging purposes). For further details read http://www.codenameone.com/signing.html
Your build size is: 680kb
Sending build request to the server, notice that the build might take a while to complete!
Sending build to account: my@email.com
java.io.IOException: Server returned HTTP response code: 401 for URL: https://cloud.codenameone.com/appsec/7.0/build/upload
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1876)
at sun.net.www.protocol.http.HttpURLConnection.access$200(HttpURLConnection.java:91)
at sun.net.www.protocol.http.HttpURLConnection$9.run(HttpURLConnection.java:1466)
at sun.net.www.protocol.http.HttpURLConnection$9.run(HttpURLConnection.java:1464)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.AccessController.doPrivilegedWithCombiner(AccessController.java:782)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1463)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:254)
at com.codename1.build.client.BuildProcess.sendS3Build(BuildProcess.java:469)
at com.codename1.build.client.BuildProcess.sendRequestToServer(BuildProcess.java:500)
at com.codename1.build.client.CodeNameOneBuildTask.execute(CodeNameOneBuildTask.java:529)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:293)
at sun.reflect.GeneratedMethodAccessor83.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:435)
at org.apache.tools.ant.Target.performTasks(Target.java:456)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1405)
at org.apache.tools.ant.Project.executeTarget(Project.java:1376)
at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
at org.apache.tools.ant.Project.executeTargets(Project.java:1260)
at org.apache.tools.ant.module.bridge.impl.BridgeImpl.run(BridgeImpl.java:286)
at org.apache.tools.ant.module.run.TargetExecutor.run(TargetExecutor.java:555)
at org.netbeans.core.execution.RunClassThread.run(RunClassThread.java:153)
C:\Users\jn\Documents\NetBeansProjects\KitchenSink\build.xml:343: Error in server build process
BUILD FAILED (total time: 6 seconds)
Any help would be greatly appreciated.
I have a client application instrumented with the Zipkin library, with the following configuration in the Spring application.properties:
camel.zipkin.host-name=hawkular-apm-server.com
camel.zipkin.port=443
camel.zipkin.include-message-body-streams=true
Maven dependency:
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-zipkin-starter</artifactId>
</dependency>
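For comparison, the same wiring done programmatically would look roughly like this. This is a sketch based on the camel-zipkin API of that era; the Spring Boot starter performs the equivalent setup from the properties shown above:

import org.apache.camel.CamelContext;
import org.apache.camel.zipkin.ZipkinTracer;

public class ZipkinSetup {
    // Hypothetical helper mirroring the application.properties above.
    static void configure(CamelContext context) throws Exception {
        ZipkinTracer zipkin = new ZipkinTracer();
        zipkin.setHostName("hawkular-apm-server.com"); // collector host
        zipkin.setPort(443);                           // collector port
        zipkin.setIncludeMessageBodyStreams(true);
        zipkin.init(context); // attach the tracer to the CamelContext
    }
}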
The Hawkular APM server console is reachable from the local machine.
However, when the REST API exposed by the client application is invoked, the Zipkin traces are logged but they are not collected at the Hawkular APM server.
04:31:55.632 [http-nio-0.0.0.0-8080-exec-1] INFO o.a.c.c.s.CamelHttpTransportServlet - Initialized CamelHttpTransportServlet[name=CamelServlet, contextPath=]
04:31:55.668 [http-nio-0.0.0.0-8080-exec-1] DEBUG org.apache.camel.zipkin.ZipkinTracer - clientRequest [service=MyCamelClient, traceId=-5541987202080201726, spanId=-5541987202080201726]
04:31:55.672 [http-nio-0.0.0.0-8080-exec-1] DEBUG org.apache.camel.zipkin.ZipkinTracer - serverRequest [service=MyCamel, traceId=-5541987202080201726, spanId=-5541987202080201726]
04:31:55.676 [http-nio-0.0.0.0-8080-exec-1] DEBUG org.apache.camel.zipkin.ZipkinTracer - serverResponse[service=MyCamel, traceId=-5541987202080201726, spanId=-5541987202080201726]
04:31:55.677 [http-nio-0.0.0.0-8080-exec-1] DEBUG org.apache.camel.zipkin.ZipkinTracer - clientResponse[service=MyCamelClient, traceId=-5541987202080201726, spanId=-5541987202080201726]
04:31:55.758 [pool-1-thread-1] WARN o.a.t.transport.TIOStreamTransport - Error closing output stream.
java.net.SocketException: Socket closed
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:118)
at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.FilterOutputStream.close(FilterOutputStream.java:158)
at org.apache.thrift.transport.TIOStreamTransport.close(TIOStreamTransport.java:110)
at org.apache.thrift.transport.TSocket.close(TSocket.java:194)
at org.apache.thrift.transport.TFramedTransport.close(TFramedTransport.java:89)
at com.github.kristofa.brave.scribe.ScribeClientProvider.close(ScribeClientProvider.java:96)
at com.github.kristofa.brave.scribe.ScribeClientProvider.exception(ScribeClientProvider.java:75)
at com.github.kristofa.brave.scribe.SpanProcessingThread.log(SpanProcessingThread.java:123)
at com.github.kristofa.brave.scribe.SpanProcessingThread.log(SpanProcessingThread.java:109)
at com.github.kristofa.brave.scribe.SpanProcessingThread.call(SpanProcessingThread.java:95)
at com.github.kristofa.brave.scribe.SpanProcessingThread.call(SpanProcessingThread.java:35)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
04:31:55.763 [pool-1-thread-1] WARN c.g.k.b.scribe.SpanProcessingThread - Logging spans failed. 1 spans are lost!
org.apache.thrift.transport.TTransportException: null
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at com.twitter.zipkin.gen.scribe$Client.recv_Log(scribe.java:74)
at com.twitter.zipkin.gen.scribe$Client.Log(scribe.java:61)
at com.github.kristofa.brave.scribe.SpanProcessingThread.log(SpanProcessingThread.java:127)
at com.github.kristofa.brave.scribe.SpanProcessingThread.log(SpanProcessingThread.java:109)
at com.github.kristofa.brave.scribe.SpanProcessingThread.call(SpanProcessingThread.java:95)
at com.github.kristofa.brave.scribe.SpanProcessingThread.call(SpanProcessingThread.java:35)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I am not sure if it is a configuration issue in the client application, as the Hawkular APM UI opens properly.
As per my understanding, a Zipkin client can be integrated with Hawkular APM by simply replacing the Zipkin server URL with the Hawkular URL, but this does not seem to work.
Any suggestions on this? Unfortunately I could not find any examples either.
I am trying to run the simple code below after building everything from Flink's GitHub master branch (for various reasons). I get the exception below, and I wonder: what runs on port 9065, and how do I fix this exception?
val dataStream = senv.fromElements(1, 2, 3, 4)
dataStream.countWindowAll(2).sum(0).print()
senv.execute("My streaming program")
Below is the exception:
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
at org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$18(RestClusterClient.java:306)
at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$222(RestClient.java:196)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:268)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:284)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.CompletionException: java.net.ConnectException: Connection refused: localhost/127.0.0.1:9065
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:943)
at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
... 16 more
Caused by: java.net.ConnectException: Connection refused: localhost/127.0.0.1:9065
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.flink.shaded.netty4.io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:281)
I built it from the sources in the following way (I just followed the instructions on the Flink GitHub page):
git clone https://github.com/apache/flink.git
cd flink
mvn clean package -DskipTests
cd build-target
./bin/start-scala-shell.sh local
The underlying distributed runtime is currently being heavily reworked in master. Starting from 1.5 the default runtime will be the one known as FLIP-6, so occasionally some parts might not work. I think it would be very beneficial if you could create a JIRA ticket for this.
Just to add what runs on port 9065: in the new architecture it is the default port of the Dispatcher.
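If the client and the Dispatcher disagree about where that endpoint lives, the REST address and port can be set explicitly in conf/flink-conf.yaml. A sketch, using the key names introduced with FLIP-6 (defaults may differ between master snapshots):

rest.address: localhost
rest.port: 9065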
I had the same exception. My issue was a port conflict when starting the cluster alongside a Docker image on my machine, so I had changed the REST port in the Flink config file to 8084 instead of 8081. With that change the cluster would start up properly, but I was unable to submit jobs. When I killed the conflicting process and reverted the port back to 8081, I could submit jobs successfully.
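Before moving any Flink ports, it is worth checking what is already bound to the default one; on Linux/macOS, for example:

lsof -i :8081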
I got the same error.
Use JDK 1.8 for Flink 1.7.2.
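To verify which JDK the Flink scripts will actually pick up:

java -version
echo $JAVA_HOME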
I'm trying to run the Solr 6.6.0 tutorial, and after running:
bin/solr start -e cloud -noprompt
it starts Solr on ports 8983 and 7574 but fails to create the gettingstarted collection, with the following error:
ERROR: Failed to create collection 'gettingstarted' due to: {10.1.20.105:7574_solr=org.apache.solr.client.solrj.SolrServerException:IOException occured when talking to server at: http://10.1.20.105:7574/solr}
ERROR: Failed to create collection using command: [-name, gettingstarted, -shards, 2, -replicationFactor, 2, -confname, gettingstarted, -confdir, data_driven_schema_configs, -configsetsDir, /Users/rcarey/solr-6.6.0/server/solr/configsets, -solrUrl, http://localhost:8983/solr]
It looks like it's trying to create each replica on a different IP rather than on a different port of the same IP. 10.1.20.105 is not the IP that the 8983 replica is using. I'm not sure if there's something additional I need to configure so that it uses the one IP for both. I have the host set to localhost.
The Solr Admin is available on both http://localhost:8983/solr and http://localhost:7574/solr
I get the following in the log:
24/08/2017, 11:38:36 ERROR null OverseerCollectionMessageHandler Error from shard: http://10.1.20.105:7574/solr
24/08/2017, 11:38:36 ERROR null OverseerCollectionMessageHandler Error from shard: http://10.1.20.105:7574/solr
org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://10.1.20.105:7574/solr
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:624)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:163)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.http.conn.ConnectTimeoutException: Connect to 10.1.20.105:7574 timed out
at org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:119)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:177)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:515)
... 12 more
24/08/2017, 11:38:36 ERROR null OverseerCollectionMessageHandler Cleaning up collection [gettingstarted].
24/08/2017, 11:39:06 ERROR null CollectionsHandler Timed out waiting for new collection's replicas to become ACTIVE with timeout=30
How can I fix this?
I had the same issue. In bin/solr.in.sh, I uncommented and set the following:
SOLR_HOST="localhost"
Then things worked, because Solr communicated with the server via "localhost" instead of via an IP address that was timing out. That fixes the error:
SolrServerException:IOException occured when talking to server at: http://YOUR_IP/solr
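After editing bin/solr.in.sh, stop and restart the cloud example so the new host setting takes effect:

bin/solr stop -all
bin/solr start -e cloud -noprompt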
There is a similar posting that claims to have the answer, but I'm still getting the error after putting stepwise=false in my Camel route. The exception:
14:35:33,649 WARN [org.apache.camel.component.file.remote.RemoteFilePollingConsumerPollStrategy] (Camel (camel-1) thread #13 - sftp://myftp:22) Trying to recover by disconnecting from remote server forcing a re-connect at next poll: sftp://myUser@myftp:22
14:35:33,654 WARN [org.apache.camel.component.file.remote.SftpConsumer] (Camel (camel-1) thread #13 - sftp://myftp:22) Consumer Consumer[sftp://myftp:22?delay=1h&delete=true&doneFileName=done&password=xxxxxx&sortBy=ignoreCase%3Afile%3Aname&stepwise=false&username=myUser] failed polling endpoint: Endpoint[sftp://myftp:22?delay=1h&delete=true&doneFileName=done&password=xxxxxx&sortBy=ignoreCase%3Afile%3Aname&stepwise=false&username=myUser]. Will try again at next poll. Caused by: [org.apache.camel.component.file.GenericFileOperationFailedException - Cannot list directory: .]: org.apache.camel.component.file.GenericFileOperationFailedException: Cannot list directory: .
at org.apache.camel.component.file.remote.SftpOperations.listFiles(SftpOperations.java:583) [camel-ftp-2.13.2.jar:2.13.2]
at org.apache.camel.component.file.remote.SftpConsumer.doPollDirectory(SftpConsumer.java:90) [camel-ftp-2.13.2.jar:2.13.2]
at org.apache.camel.component.file.remote.SftpConsumer.pollDirectory(SftpConsumer.java:52) [camel-ftp-2.13.2.jar:2.13.2]
at org.apache.camel.component.file.GenericFileConsumer.poll(GenericFileConsumer.java:119) [camel-core-2.13.2.jar:2.13.2]
at org.apache.camel.impl.ScheduledPollConsumer.doRun(ScheduledPollConsumer.java:187) [camel-core-2.13.2.jar:2.13.2]
at org.apache.camel.impl.ScheduledPollConsumer.run(ScheduledPollConsumer.java:114) [camel-core-2.13.2.jar:2.13.2]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [rt.jar:1.7.0_65]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) [rt.jar:1.7.0_65]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178) [rt.jar:1.7.0_65]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [rt.jar:1.7.0_65]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_65]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_65]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_65]
Caused by: 4:
at com.jcraft.jsch.ChannelSftp.ls(ChannelSftp.java:1660) [jsch-0.1.49.jar:]
at com.jcraft.jsch.ChannelSftp.ls(ChannelSftp.java:1466) [jsch-0.1.49.jar:]
at org.apache.camel.component.file.remote.SftpOperations.listFiles(SftpOperations.java:574) [camel-ftp-2.13.2.jar:2.13.2]
... 12 more
Caused by: java.io.IOException: Pipe closed
at java.io.PipedInputStream.read(PipedInputStream.java:308) [rt.jar:1.7.0_65]
at com.jcraft.jsch.Channel$MyPipedInputStream.updateReadSide(Channel.java:344) [jsch-0.1.49.jar:]
at com.jcraft.jsch.ChannelSftp.ls(ChannelSftp.java:1483) [jsch-0.1.49.jar:]
... 14 more
Here is my route:
sftp://myftp:22?delay=1h&delete=true&doneFileName=done&password=xxxxxx&sortBy=ignoreCase%3Afile%3Aname&stepwise=false&username=myUser
Yeah, there is a java.io.IOException: Pipe closed.
I just checked the code of camel-ftp: it has code to check the connection, but it's hard to know whether the connection is still open if we don't send some bytes to the server socket.
The solution could be to have the FTP client send a ping message to check whether the connection is still open.
If you want to set a very long delay for your FTP polling, the workaround is to set the disconnect option to true, forcing the FTP endpoint to open a new connection the next time it polls the directory on the FTP server.
I just created a JIRA for it.
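Applied to the route above, that workaround would look roughly like this (the same URI with only the disconnect option added; the password stays elided as in the original):

sftp://myftp:22?delay=1h&delete=true&disconnect=true&doneFileName=done&password=xxxxxx&sortBy=ignoreCase%3Afile%3Aname&stepwise=false&username=myUser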