Error: java.io.IOException: UT000036: Connection terminated parsing multipart data - reactjs

I'm working with React and JHipster (Spring Boot), and I'm trying to upload a file using MultipartFile.
The request method is POST, the Content-Type is multipart/form-data; boundary=----WebKitFormBoundarykyl9Y0dZ1t72LO7Q, and the server is behind nginx.
Here is the code.
The controller:
@PostMapping(value = "/uploadCustomer", consumes = "multipart/form-data")
public void uploadMultipart(@RequestPart("file") MultipartFile file, @RequestPart("tags") String tags) {
    log.debug("tags {}", tags);
    myService.saveCustomerFromCSV(file, tags);
}
Part of the configuration file:
spring:
  servlet:
    multipart:
      enabled: false
      file-size-threshold: 2KB
      max-file-size: 15MB
      max-request-size: 50MB

ribbon:
  eureka:
    enabled: true
  ReadTimeout: 300000
  connection-timeout: 30000

# See http://cloud.spring.io/spring-cloud-netflix/spring-cloud-netflix.html
zuul: # those values must be configured depending on the application specific needs
  sensitive-headers: Cookie,Set-Cookie # see https://github.com/spring-cloud/spring-cloud-netflix/issues/3126
  ignored-headers: Access-Control-Allow-Credentials, Access-Control-Allow-Origin
  host:
    max-total-connections: 1000
    max-per-route-connections: 100
    connect-timeout-millis: 60000
    socket-timeout-millis: 60000
  semaphore:
    max-semaphores: 500
With Undertow, this throws a RuntimeException. The exception message:
java.lang.RuntimeException: java.io.IOException: UT000036: Connection terminated parsing multipart data
at io.undertow.servlet.spec.HttpServletRequestImpl.parseFormData(HttpServletRequestImpl.java:798)
at io.undertow.servlet.spec.HttpServletRequestImpl.getParameter(HttpServletRequestImpl.java:665)
at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:84)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.filterAndRecordMetrics(WebMvcMetricsFilter.java:117)
at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.doFilterInternal(WebMvcMetricsFilter.java:106)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:200)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)
at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
at io.undertow.servlet.handlers.ServletChain$1.handleRequest(ServletChain.java:65)
at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:132)
at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)
at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)
at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:292)
at io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:81)
at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:138)
at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:135)
at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48)
at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:272)
at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:104)
at io.undertow.server.Connectors.executeRootHandler(Connectors.java:336)
at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:830)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: UT000036: Connection terminated parsing multipart data
at io.undertow.server.handlers.form.MultiPartParserDefinition$MultiPartUploadHandler.parseBlocking(MultiPartParserDefinition.java:228)
at io.undertow.servlet.spec.HttpServletRequestImpl.parseFormData(HttpServletRequestImpl.java:792)
... 42 common frames omitted
Solution:
In my case the solution was to remove the Spring servlet multipart configuration from the gateway:
# spring:
#   servlet:
#     multipart:
#       enabled: false
#       file-size-threshold: 2KB
#       max-file-size: 15MB
#       max-request-size: 50MB
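The stack trace hints at the mechanism: with spring.servlet.multipart.enabled: false, Spring never parses the body itself, so the first filter that reads a request parameter (HiddenHttpMethodFilter above) forces Undertow to parse the multipart data, which is where UT000036 surfaces. If you still want the size limits, a sketch that keeps Spring's parsing enabled instead of deleting the block (property names as in the original config):

spring:
  servlet:
    multipart:
      enabled: true
      file-size-threshold: 2KB
      max-file-size: 15MB
      max-request-size: 50MB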

Related

The configuration does not specify the checkpoint directory 'state.checkpoints.dir'

While submitting a Flink job on the Dataproc cluster, I am getting the error below. Please find the code and the error; I am using Flink version 1.9.3.
The program finished with the following exception:
org.apache.flink.client.program.ProgramInvocationException: Could not retrieve the execution result. (JobID: f064ceaa5b318fdad9a77b2723b9ee64)
at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:255)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338)
at org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:60)
at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1507)
at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1489)
at org.flink.ReadFromPubsub.main(ReadFromPubsub.java:28)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:604)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:466)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1008)
at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1081)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1081)
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
at org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$8(RestClusterClient.java:391)
at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:884)
at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:866)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$8(FutureUtils.java:263)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:575)
at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:943)
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.flink.runtime.rest.util.RestClientException: [Internal server error., <Exception on server side:
org.apache.flink.runtime.client.JobSubmissionException: Failed to submit job.
at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$internalSubmitJob$2(Dispatcher.java:333)
at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)
at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.RuntimeException: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:36)
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
... 6 more
Caused by: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
at org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:152)
at org.apache.flink.runtime.dispatcher.DefaultJobManagerRunnerFactory.createJobManagerRunner(DefaultJobManagerRunnerFactory.java:83)
at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$5(Dispatcher.java:376)
at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:34)
... 7 more
Caused by: org.apache.flink.runtime.client.JobExecutionException: Could not instantiate configured state backend
at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:303)
at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:106)
at org.apache.flink.runtime.scheduler.LegacyScheduler.createExecutionGraph(LegacyScheduler.java:207)
at org.apache.flink.runtime.scheduler.LegacyScheduler.createAndRestoreExecutionGraph(LegacyScheduler.java:184)
at org.apache.flink.runtime.scheduler.LegacyScheduler.<init>(LegacyScheduler.java:176)
at org.apache.flink.runtime.scheduler.LegacySchedulerFactory.createInstance(LegacySchedulerFactory.java:70)
at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:278)
at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:266)
at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:98)
at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:40)
at org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:146)
... 10 more
Caused by: org.apache.flink.configuration.IllegalConfigurationException: Cannot create the RocksDB state backend: The configuration does not specify the checkpoint directory 'state.checkpoints.dir'
at org.apache.flink.contrib.streaming.state.RocksDBStateBackendFactory.createFromConfig(RocksDBStateBackendFactory.java:44)
at org.apache.flink.contrib.streaming.state.RocksDBStateBackendFactory.createFromConfig(RocksDBStateBackendFactory.java:32)
at org.apache.flink.runtime.state.StateBackendLoader.loadStateBackendFromConfig(StateBackendLoader.java:154)
at org.apache.flink.runtime.state.StateBackendLoader.fromApplicationOrConfigOrDefault(StateBackendLoader.java:219)
at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:299)
... 20 more
End of exception on server side>]
at org.apache.flink.runtime.rest.RestClient.parseResponse(RestClient.java:389)
at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$3(RestClient.java:373)
at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:966)
at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:940)
... 4 more
The code snippet I executed:
public class ReadFromPubsub {
    public static void main(String[] args) throws Exception {
        System.out.println("Flink Pubsub Code Read 1");
        StreamExecutionEnvironment streamExecEnv = StreamExecutionEnvironment.getExecutionEnvironment();
        System.out.println("Flink Pubsub Code Read 2");
        DeserializationSchema<String> deserializer = new SimpleStringSchema();
        System.out.println("Flink Pubsub Code Read 3");
        SourceFunction<String> pubsubSource = PubSubSource.newBuilder()
                .withDeserializationSchema(deserializer)
                .withProjectName("vz-it-np-gudv-dev-vzntdo-0")
                .withSubscriptionName("subscription1")
                .build();
        System.out.println("Flink Pubsub Code Read 4");
        streamExecEnv.addSource(pubsubSource).print();
        streamExecEnv.enableCheckpointing(10);
        System.out.println("Flink Pubsub Code Read 5");
        streamExecEnv.execute();
    }
}
I can see all the print statements during the execution of the code; after the last print statement I get the error.
How can all these exceptions be resolved?
Generally, you would supply the appropriate savepoint/checkpointing directory in your Flink configuration via flink-conf.yaml, as detailed in the docs. If you aren't currently setting it, you can do so in the following ways:
Via flink-conf.yaml (Preferred)
state.checkpoints.dir: "file://example/checkpoints"
Programmatically (within the job):
Configuration configuration = new Configuration();
configuration.setString("state.checkpoints.dir", "file://example/checkpoints");
StreamExecutionEnvironment streamExecEnv =
    StreamExecutionEnvironment.getExecutionEnvironment(configuration);
Via the CLI:
flink run -Dstate.checkpoints.dir="/example/checkpoints" your-job.jar
Additionally, if you didn't actually want to perform checkpointing, you could likely remove the following configuration within your job:
streamExecEnv.enableCheckpointing(10);
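Conversely, if you do want checkpointing, note that enableCheckpointing(10) requests a checkpoint every 10 milliseconds, which will overwhelm most jobs. A sketch with a more conventional interval (the exact value is an assumption; tune it to your latency needs):

// Checkpoint every 10 seconds instead of every 10 ms.
streamExecEnv.enableCheckpointing(10000);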

Issue with Apache Camel https rest api with username and password

I have the following piece of code, which I've built to connect to an HTTPS REST endpoint using Apache Camel. The problem is that I get a 401 error when it runs.
from("timer:learnTimer?period=100s")
.to("log:?level=INFO&showBody=true")
.setHeader("currentTime", simple(currentTime))
.setHeader(Exchange.CONTENT_TYPE,constant("application/json"))
.setHeader(Exchange.HTTP_METHOD, constant("GET"))
.setHeader(Exchange.HTTP_URI, simple("https://xxxxxx/api/siem/offenses?filter=status%20%3D%20%22OPEN%22%20and%20start_time%20%3E%201543647979000?&authMethod=Basic&authUsername=xxxxx&authPassword=xxxxx"))
.to("https://xxxxxxx/api/siem/offenses?filter=status%20%3D%20%22OPEN%22%20and%20start_time%20%3E%201543647979000?&authMethod=Basic&authUsername=xxxx&authPassword=xxxx").convertBodyTo(String.class)
.to("log:?level=INFO&showBody=true");
The error I am receiving is:
Stacktrace
org.apache.camel.http.common.HttpOperationFailedException: HTTP operation failed invoking https://xx.xx.xx.xx/api/siem/offenses?filter=status+%3D+%22OPEN%22+and+start_time+%3E+1543647979000%3F with statusCode: 401
at org.apache.camel.component.http.HttpProducer.populateHttpOperationFailedException(HttpProducer.java:243)
at org.apache.camel.component.http.HttpProducer.process(HttpProducer.java:165)
at org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:61)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:148)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:548)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:138)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:101)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
at org.apache.camel.component.timer.TimerConsumer.sendTimerExchange(TimerConsumer.java:197)
at org.apache.camel.component.timer.TimerConsumer$1.run(TimerConsumer.java:79)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
15:16| WARN | CamelLogger.java 213 | Error processing exchange. Exchange[ID-zabbixproxy-node2-1544019394005-0-1]. Caused by: [org.apache.camel.http.common.HttpOperationFailedException - HTTP operation failed invoking https://xx.xx.xx.xx/api/siem/offenses?filter=status+%3D+%22OPEN%22+and+start_time+%3E+1543647979000%3F with statusCode: 401]
org.apache.camel.http.common.HttpOperationFailedException: HTTP operation failed invoking https://10.96.40.66/api/siem/offenses?filter=status+%3D+%22OPEN%22+and+start_time+%3E+1543647979000%3F with statusCode: 401
at org.apache.camel.component.http.HttpProducer.populateHttpOperationFailedException(HttpProducer.java:243)
at org.apache.camel.component.http.HttpProducer.process(HttpProducer.java:165)
at org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:61)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:148)
Are you sure you should set these headers before making a REST call?
Unnecessary request headers in the IN message may cause issues.
Exchange exchange = ExchangeBuilder.anExchange(camelContext)
    .withHeader("").withProperty("")
    .withPattern(ExchangePattern...)
    .withHeader(Exchange.HTTP_METHOD, HttpMethod.GET)
    .build();

producer.send("the rest endpoint", exchange);
// producer is a ProducerTemplate
In the above code you can set the ExchangePattern and the required headers and properties (only if needed).
Hope this helps.
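As an alternative sketch (the credential values and URL below are the question's placeholders, and this bypasses Camel's authMethod/authUsername/authPassword endpoint options entirely), you can build the Basic credentials yourself and send them as an explicit Authorization header:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Placeholder credentials, as in the question.
String basicAuth = "Basic " + Base64.getEncoder()
        .encodeToString("xxxxx:xxxxx".getBytes(StandardCharsets.UTF_8));

from("timer:learnTimer?period=100s")
    .setHeader(Exchange.HTTP_METHOD, constant("GET"))
    .setHeader("Authorization", constant(basicAuth))
    .to("https://xxxxxx/api/siem/offenses?filter=status%20%3D%20%22OPEN%22%20and%20start_time%20%3E%201543647979000")
    .convertBodyTo(String.class)
    .to("log:?level=INFO&showBody=true");

Also note the stray "?" before &authMethod in the original URI: it gets percent-encoded into the filter value (visible as %3F in the stack trace), so it is worth removing regardless of how you authenticate.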

Exporting data using solr streaming expressions is randomly not working

I am using Solr version 7.4 and I am trying to export data using streaming expressions.
Following is the simple curl query that I am using to test after indexing my data:
curl --data-urlencode 'expr=search(collection1,
q="text:solar",
fl="id",
sort="id asc",
qt="/export")' http://localhost:8983/solr/collection1/stream
When I run the above query with only the initial few documents indexed, it works fine. After indexing a few thousand documents, I get the following error for the same query:
EXCEPTION":"java.util.concurrent.ExecutionException: java.io.IOException: --> http://localhost:8983/solr/collection1/: An exception has occurred on the server, refer to server log for details."
I've looked into my Solr logs, and the following is the error info:
java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: --> http://localhost:8983/solr/collection1/: An exception has occurred on the server, refer to server log for details.
at org.apache.solr.client.solrj.io.stream.CloudSolrStream.openStreams(CloudSolrStream.java:400)
at org.apache.solr.client.solrj.io.stream.CloudSolrStream.open(CloudSolrStream.java:275)
at org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
at org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:397)
at org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:83)
at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:539)
at org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:181)
at org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209)
at org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325)
at org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120)
at org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71)
at org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
at org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:787)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:524)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:531)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:760)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:678)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: --> http://localhost:8983/solr/collection1/: An exception has occurred on the server, refer to server log for details.
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.solr.client.solrj.io.stream.CloudSolrStream.openStreams(CloudSolrStream.java:394)
... 49 more
Caused by: java.io.IOException: --> http://localhost:8983/solr/collection1/: An exception has occurred on the server, refer to server log for details.
at org.apache.solr.client.solrj.io.stream.SolrStream.read(SolrStream.java:225)
at org.apache.solr.client.solrj.io.stream.CloudSolrStream$TupleWrapper.next(CloudSolrStream.java:484)
at org.apache.solr.client.solrj.io.stream.CloudSolrStream$StreamOpener.call(CloudSolrStream.java:507)
at org.apache.solr.client.solrj.io.stream.CloudSolrStream$StreamOpener.call(CloudSolrStream.java:494)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more
Caused by: java.io.IOException: JSONTupleStream: expected OBJECT_START but got EOF
at org.apache.solr.client.solrj.io.stream.JSONTupleStream.expect(JSONTupleStream.java:99)
at org.apache.solr.client.solrj.io.stream.JSONTupleStream.advanceToDocs(JSONTupleStream.java:179)
at org.apache.solr.client.solrj.io.stream.JSONTupleStream.next(JSONTupleStream.java:77)
at org.apache.solr.client.solrj.io.stream.SolrStream.read(SolrStream.java:194)
... 8 more
Is it because the new documents have something unusual in their content? Even if that is the case, I am requesting only one field, id, and there is nothing unusual about it.
Also, if I give q="*:*", it works fine (which is kind of weird).
Any idea what the issue is here?

Error integrating Solr 5.5.0 with Nutch 1.13: 'Connection pool shut down'

I had a problem when I tried to integrate Solr with Nutch:
Nutch version: 1.13
Solr version: 5.5.0 (as recommended by the official documentation: https://wiki.apache.org/nutch/NutchTutorial#Verify_your_Nutch_installation)
The error is:
Active IndexWriters :
SOLRIndexWriter
solr.server.url : URL of the SOLR instance
solr.zookeeper.hosts : URL of the Zookeeper quorum
solr.commit.size : buffer size when sending to SOLR (default 1000)
solr.mapping.file : name of the mapping file for fields (default solrindex-mapping.xml)
solr.auth : use authentication (default false)
solr.auth.username : username for authentication
solr.auth.password : password for authentication
Indexer: number of documents indexed, deleted, or skipped:
Indexer: finished at 2017-11-30 01:34:49, elapsed: 00:00:01
Cleaning up index if possible
apache-nutch-1.13/bin/nutch clean -Dsolr.server.url=http://localhost:8983/solr/nutch crawling_dir/crawldb
SolrIndexer: deleting 1/1 documents
ERROR CleaningJob: java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:865)
at org.apache.nutch.indexer.CleaningJob.delete(CleaningJob.java:174)
at org.apache.nutch.indexer.CleaningJob.run(CleaningJob.java:197)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.indexer.CleaningJob.main(CleaningJob.java:208)
Error running:
apache-nutch-1.13/bin/nutch clean -Dsolr.server.url=http://localhost:8983/solr/nutch crawling_dir/crawldb
Failed with exit value 255.
In the log file:
2017-11-30 01:34:50,851 WARN output.FileOutputCommitter - Output Path is null in cleanupJob()
2017-11-30 01:34:50,851 WARN mapred.LocalJobRunner - job_local531807742_0001
java.lang.Exception: java.lang.IllegalStateException: Connection pool shut down
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.lang.IllegalStateException: Connection pool shut down
at org.apache.http.util.Asserts.check(Asserts.java:34)
at org.apache.http.pool.AbstractConnPool.lease(AbstractConnPool.java:169)
at org.apache.http.pool.AbstractConnPool.lease(AbstractConnPool.java:202)
at org.apache.http.impl.conn.PoolingClientConnectionManager.requestConnection(PoolingClientConnectionManager.java:184)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:415)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:481)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:240)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:229)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:482)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:463)
at org.apache.nutch.indexwriter.solr.SolrIndexWriter.commit(SolrIndexWriter.java:191)
at org.apache.nutch.indexwriter.solr.SolrIndexWriter.close(SolrIndexWriter.java:179)
at org.apache.nutch.indexer.IndexWriters.close(IndexWriters.java:117)
at org.apache.nutch.indexer.CleaningJob$DeleterReducer.close(CleaningJob.java:122)
at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2017-11-30 01:34:51,458 ERROR indexer.CleaningJob - CleaningJob: java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:865)
at org.apache.nutch.indexer.CleaningJob.delete(CleaningJob.java:174)
at org.apache.nutch.indexer.CleaningJob.run(CleaningJob.java:197)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.indexer.CleaningJob.main(CleaningJob.java:208)
Please, do you have any idea?
I had the same problem, and yours is probably due to the same reason:
https://issues.apache.org/jira/browse/NUTCH-2269
Try patching it and the error should be gone.
From my findings, it appears to be a bug. Here's a blog post that explains it well: https://reformatcode.com/code/apache-configuration/apache-nutch-112-with-apache-solr-621-give-an-error

microsoft.exchange.webservices.data.EWSHttpException: Connection not established

I am trying to use EWS Java API 1.2 to send an email from Exchange Server 2007 as follows:
String email="myuser#mydomain";
String password="mypass";
String host="myhost";
ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2007_SP1);
ExchangeCredentials credentials = new WebCredentials(email, password);
service.setCredentials(credentials);
service.setUrl(new java.net.URI("https://" + host
+ "/EWS/Exchange.asmx"));
service.setTraceEnabled(true);
service.setTimeout(60 * 1000);
EmailMessage msg = new EmailMessage(service);
msg.setSubject("Hello world!");
msg.setBody(MessageBody.getMessageBodyFromText("Sent using the EWS Managed API."));
msg.getToRecipients().add("user2#mydomain");
msg.send();
But I am getting the following error:
microsoft.exchange.webservices.data.EWSHttpException: Connection not established
at microsoft.exchange.webservices.data.HttpClientWebRequest.throwIfConnIsNull(Unknown Source)
at microsoft.exchange.webservices.data.HttpClientWebRequest.getResponseCode(Unknown Source)
at microsoft.exchange.webservices.data.EwsUtilities.formatHttpResponseHeaders(Unknown Source)
at microsoft.exchange.webservices.data.ExchangeServiceBase.traceHttpResponseHeaders(Unknown Source)
at microsoft.exchange.webservices.data.ExchangeServiceBase.processHttpResponseHeaders(Unknown Source)
at microsoft.exchange.webservices.data.SimpleServiceRequestBase.internalExecute(Unknown Source)
at microsoft.exchange.webservices.data.MultiResponseServiceRequest.execute(Unknown Source)
at microsoft.exchange.webservices.data.ExchangeService.internalCreateItems(Unknown Source)
at microsoft.exchange.webservices.data.ExchangeService.createItem(Unknown Source)
at microsoft.exchange.webservices.data.Item.internalCreate(Unknown Source)
at microsoft.exchange.webservices.data.EmailMessage.internalSend(Unknown Source)
at microsoft.exchange.webservices.data.EmailMessage.send(Unknown Source)
and the trace:
<Trace Tag="EwsRequestHttpHeaders" Tid="1" Time="2014-07-15 11:44:43Z">
POST /EWS/Exchange.asmx HTTP/1.1
Content-type : text/xml; charset=utf-8
Accept-Encoding : gzip,deflate
Keep-Alive : 300
User-Agent : ExchangeServicesClient/0.0.0.0
Connection : Keep-Alive
Accept : text/xml
</Trace>
<Trace Tag="EwsRequest" Tid="1" Time="2014-07-15 11:44:43Z">
<?xml version="1.0" encoding="utf-8"?><soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:m="http://schemas.microsoft.com/exchange/services/2006/messages" xmlns:t="http://schemas.microsoft.com/exchange/services/2006/types"><soap:Header><t:RequestServerVersion Version="Exchange2007"></t:RequestServerVersion></soap:Header><soap:Body><m:CreateItem MessageDisposition="SendOnly"><m:Items><t:Message><t:Subject>Hello world!</t:Subject><t:Body BodyType="HTML">Sent using the EWS Managed API.</t:Body><t:ToRecipients><t:Mailbox><t:EmailAddress>myuser#mydomain</t:EmailAddress></t:Mailbox></t:ToRecipients></t:Message></m:Items></m:CreateItem></soap:Body></soap:Envelope>
</Trace>
<Trace Tag="EwsResponseHttpHeaders" Tid="1" Time="2014-07-15 11:44:43Z">
401 null
WWW-Authenticate : Basic realm="myhost"
Date : Tue, 15 Jul 2014 11:44:43 GMT
Content-Length : 0
X-Powered-By : ASP.NET
Server : Microsoft-IIS/7.0
</Trace>
<Trace Tag="EwsResponseHttpHeaders" Tid="1" Time="2014-07-15 11:44:43Z">
401 null
WWW-Authenticate : Basic realm="myhost"
Date : Tue, 15 Jul 2014 11:44:43 GMT
Content-Length : 0
X-Powered-By : ASP.NET
Server : Microsoft-IIS/7.0
</Trace>
<Trace Tag="EwsResponse" Tid="1" Time="2014-07-15 11:44:43Z">
Non-textual response
</Trace>
BTW, when I try to access the URL:
https://myhost/EWS/Exchange.asmx
it redirects me to the following URL:
https://myhost/EWS/Services.wsdl
Please advise how to fix this issue.
That's a bug in EWS Java. If there's an issue reading the response in SimpleServiceRequestBase.readResponse(), it will close the response in a finally block, then kick it back to the calling method (in this case, internalExecute()), which will try to read the headers on the response. Since the response was closed, this will throw a NullPointerException, which gets wrapped in a ServiceRequestException, effectively hiding the original Exception and preventing you from seeing it in the stack trace.
Fix the bug and you'll probably have an easier time reading the actual error. Get used to bugs in EWS Java. I've found over 30 of them and counting.
I think this is a bug in the EWS Java API library, which I fixed recently (see this pull request). You should try to use the updated library from the official ews-java-api repository (for now you'll have to build it yourself) and see if it works now.
I have faced the same issue. Check your username and password; they may be wrong.
According to the trace, the 401 status code signifies that you are an unauthorized user, which is why you are getting "Connection not established" as the error message.
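If a plain username/password pair keeps failing with 401 over Basic auth, here is a minimal sketch worth trying (an assumption, not a confirmed fix: some Exchange setups require a domain-qualified login, and the EWS Java API provides a three-argument WebCredentials constructor taking user, password, and domain):

// Placeholder values, matching the question's myuser / mypass / mydomain / myhost.
ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2007_SP1);
service.setCredentials(new WebCredentials("myuser", "mypass", "mydomain"));
service.setUrl(new java.net.URI("https://myhost/EWS/Exchange.asmx"));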
