HttpParser Full exception in Jetty 8 Solr (4.5) server - solr

I have a standalone Solr (4.5) server running on top of Jetty 8, and an application server on Apache Tomcat that is the customer-facing node.
The application server connects to the standalone Solr server to fetch search results. I send a POST request because the query to Solr is huge, but I get the below WARN message on the Jetty Solr server:
WARN org.eclipse.jetty.http.HttpParser - HttpParser Full for server1:8983 <-> server2:99988
On the Tomcat application server I get the below error message:
SEVERE: Servlet.service() for servlet [DispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.apache.solr.common.SolrException: No live SolrServers available to handle this request] with root cause
INFO | jvm 1 | main | 2017/03/30 03:30:46.402 | java.net.SocketException: Broken pipe
INFO | jvm 1 | main | 2017/03/30 03:30:46.402 | at java.net.SocketOutputStream.socketWrite0(Native Method)
INFO | jvm 1 | main | 2017/03/30 03:30:46.402 | at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113)
INFO | jvm 1 | main | 2017/03/30 03:30:46.402 | at java.net.SocketOutputStream.write(SocketOutputStream.java:159)
INFO | jvm 1 | main | 2017/03/30 03:30:46.402 | at org.apache.http.impl.io.AbstractSessionOutputBuffer.flushBuffer(AbstractSessionOutputBuffer.java:147)
INFO | jvm 1 | main | 2017/03/30 03:30:46.402 | at org.apache.http.impl.io.AbstractSessionOutputBuffer.writeLine(AbstractSessionOutputBuffer.java:246)
INFO | jvm 1 | main | 2017/03/30 03:30:46.402 | at org.apache.http.impl.io.HttpRequestWriter.writeHeadLine(HttpRequestWriter.java:56)
INFO | jvm 1 | main | 2017/03/30 03:30:46.402 | at org.apache.http.impl.io.HttpRequestWriter.writeHeadLine(HttpRequestWriter.java:44)
INFO | jvm 1 | main | 2017/03/30 03:30:46.402 | at org.apache.http.impl.io.AbstractMessageWriter.write(AbstractMessageWriter.java:90)
INFO | jvm 1 | main | 2017/03/30 03:30:46.402 | at org.apache.http.impl.AbstractHttpClientConnection.sendRequestHeader(AbstractHttpClientConnection.java:258)
INFO | jvm 1 | main | 2017/03/30 03:30:46.402 | at org.apache.http.impl.conn.DefaultClientConnection.sendRequestHeader(DefaultClientConnection.java:271)
INFO | jvm 1 | main | 2017/03/30 03:30:46.402 | at org.apache.http.impl.conn.ManagedClientConnectionImpl.sendRequestHeader(ManagedClientConnectionImpl.java:203)
INFO | jvm 1 | main | 2017/03/30 03:30:46.402 | at org.apache.http.protocol.HttpRequestExecutor.doSendRequest(HttpRequestExecutor.java:221)
INFO | jvm 1 | main | 2017/03/30 03:30:46.402 | at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
INFO | jvm 1 | main | 2017/03/30 03:30:46.402 | at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:715)
INFO | jvm 1 | main | 2017/03/30 03:30:46.402 | at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:520)
INFO | jvm 1 | main | 2017/03/30 03:30:46.402 | at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
INFO | jvm 1 | main | 2017/03/30 03:30:46.402 | at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
INFO | jvm 1 | main | 2017/03/30 03:30:46.402 | at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
INFO | jvm 1 | main | 2017/03/30 03:30:46.402 | at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:395)
INFO | jvm 1 | main | 2017/03/30 03:30:46.402 | at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:199)
INFO | jvm 1 | main | 2017/03/30 03:30:46.403 | at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:467)
INFO | jvm 1 | main | 2017/03/30 03:30:46.403 | at org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:90)
INFO | jvm 1 | main | 2017/03/30 03:30:46.403 | at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
I have tried increasing "requestHeaderSize" and "maxFormContentSize" in jetty.xml, but no luck.

First: Jetty 8 is EOL (End of Life); consider upgrading to something supported and stable.
The HttpParser Full warning comes from excessively large request entities (meaning the request URI line and request headers).
If this error is from the server side, then it's the request headers.
See https://stackoverflow.com/a/16015332/775715 for advice on configuring it properly on the server side. (Hint: it's a connector setting, so if you have 2 connectors, you have 2 configurations to change.)
maxFormContentSize applies to the request body content on POST requests and has no effect on the request URI or request headers. HttpParser Full is not triggered by excessive request body content, so ignore that aspect of the problem and focus on the request URI and request headers.
If this error is from the client side, then it's the response headers.
Pay attention to what's generating those request URIs and request headers, as that's the culprit! The default setting is specifically designed for maximum compatibility on the general internet. If you have to increase the defaults, then either something is seriously wrong with your request URI or request headers, or you are using the API improperly (such as sending documents via POST/GET URI strings instead of request body content).
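As a rough sketch of that connector setting (an assumption based on the Solr 4.x bundled jetty.xml; the connector class, port, and size value here are illustrative, so match them to your own jetty.xml), the header-size knob lives on the connector itself:

```xml
<!-- jetty.xml: raise the request header buffer on the connector.
     Repeat this Set for EVERY connector you have configured. -->
<Call name="addConnector">
  <Arg>
    <New class="org.eclipse.jetty.server.nio.SelectChannelConnector">
      <Set name="port"><SystemProperty name="jetty.port" default="8983"/></Set>
      <!-- default is 6144 bytes; bump only as far as you actually need -->
      <Set name="requestHeaderSize">16384</Set>
    </New>
  </Arg>
</Call>
```

That said, the cleaner fix is usually on the client: keep the giant query in the POST body rather than the URI or headers, so the defaults never need to grow.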

Related

CamelContext keeps restarting and shutting down

We are using JBoss Fuse 6.3. Our bundles usually have property placeholders for database connections and other project properties. The following is one of the placeholder configurations:
<cm:property-placeholder
id="property-placeholder-databaseconnection"
persistent-id="com.mycompany.database" update-strategy="reload"/>
A few of us have experienced that upon installing a bundle, the CamelContext keeps starting and shutting down. The following is the log:
11:24:44,782 | INFO | rint Extender: 2 | ManagedManagementStrategy | 232 - org.apache.camel.camel-core - 2.17.0.redhat-630187 | JMX is enabled
11:24:44,786 | INFO | rint Extender: 2 | DefaultRuntimeEndpointRegistry | 232 - org.apache.camel.camel-core - 2.17.0.redhat-630187 | Runtime endpoint registry is in extended mode gathering usage statistics of all incoming and outgoing endpoints (cache limit: 1000)
11:24:44,795 | INFO | rint Extender: 2 | BlueprintCamelContext | 232 - org.apache.camel.camel-core - 2.17.0.redhat-630187 | AllowUseOriginalMessage is enabled. If access to the original message is not needed, then its recommended to turn this option off as it may improve performance.
11:24:44,795 | INFO | rint Extender: 2 | BlueprintCamelContext | 232 - org.apache.camel.camel-core - 2.17.0.redhat-630187 | StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
11:24:44,820 | INFO | rint Extender: 2 | BlueprintCamelContext | 232 - org.apache.camel.camel-core - 2.17.0.redhat-630187 | Route: queuetable-message-router started and consuming from: Endpoint[direct-vm://queuetable-Parentid]
11:24:44,820 | INFO | rint Extender: 2 | BlueprintCamelContext | 232 - org.apache.camel.camel-core - 2.17.0.redhat-630187 | Total 1 routes, of which 1 are started.
11:24:44,820 | INFO | rint Extender: 2 | BlueprintCamelContext | 232 - org.apache.camel.camel-core - 2.17.0.redhat-630187 | Apache Camel 2.17.0.redhat-630187 (CamelContext: camelqueuetable) started in 0.076 seconds
11:24:44,825 | INFO | Thread-3577 | BlueprintCamelContext | 232 - org.apache.camel.camel-core - 2.17.0.redhat-630187 | Apache Camel 2.17.0.redhat-630187 (CamelContext: camelqueuetable) is shutting down
11:24:44,826 | INFO | Thread-3577 | DefaultShutdownStrategy | 232 - org.apache.camel.camel-core - 2.17.0.redhat-630187 | Starting to graceful shutdown 1 routes (timeout 300 seconds)
11:24:44,827 | INFO | 9 - ShutdownTask | DefaultShutdownStrategy | 232 - org.apache.camel.camel-core - 2.17.0.redhat-630187 | Route: queuetable-message-router shutdown complete, was consuming from: Endpoint[direct-vm://queuetable-Parentid]
11:24:44,827 | INFO | Thread-3577 | DefaultShutdownStrategy | 232 - org.apache.camel.camel-core - 2.17.0.redhat-630187 | Graceful shutdown of 1 routes completed in 0 seconds
This happens frequently, but not all the time. From searching the web, some people suggested changing to update-strategy="none" in the property placeholder. This change does solve the problem. However, we do want to have update-strategy="reload" for some property placeholders. Besides, we want to know why this happens.
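For reference, the suggested workaround is just the same placeholder with the reload strategy turned off (shown with the same illustrative ids as above); with "none", a config-admin update no longer triggers a Blueprint container reload, which is what restarts the CamelContext:

```xml
<!-- Workaround: "none" stops config-admin updates from reloading
     the Blueprint container (and hence restarting the CamelContext). -->
<cm:property-placeholder
    id="property-placeholder-databaseconnection"
    persistent-id="com.mycompany.database" update-strategy="none"/>
```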

OpenDaylight Boron: Config shard not getting created and Circuit Breaker Timed out

We are using ODL Boron SR2. We observe strange behavior: the "Config" shard is not created when we start ODL in cluster mode on RHEL 6.9, and we see a Circuit Breaker Timed Out exception. The "Operational" shard, however, is created without any issues. Due to the unavailability of the "Config" shard, we are unable to persist anything in the "Config" tree. We checked the JMX console and "Shards" is missing.
This is consistently reproducible on RHEL; however, it works on CentOS.
2018-04-04 08:00:38,396 | WARN | saction-29-31'}} | 168 - org.opendaylight.controller.config-manager - 0.5.2.Boron-SR2 | DeadlockMonitor$DeadlockMonitorRunnable | ModuleIdentifier{factoryName='runtime-generated-mapping', instanceName='runtime-mapping-singleton'} did not finish after 26697 ms
2018-04-04 08:00:40,690 | ERROR | lt-dispatcher-30 | 216 - com.typesafe.akka.slf4j - 2.4.7 | Slf4jLogger$$anonfun$receive$1$$anonfun$applyOrElse$1 | Failed to persist event type [org.opendaylight.controller.cluster.raft.persisted.UpdateElectionTerm] with sequence number [4] for persistenceId [member-2-shard-default-config].
akka.pattern.CircuitBreaker$$anon$1: Circuit Breaker Timed out.
This is an issue with Akka persistence, where it times out trying to write to disk. See the discussion in https://lists.opendaylight.org/pipermail/controller-dev/2017-August/013781.html.
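If slow disk I/O is indeed the cause, one knob worth knowing about (a sketch, assuming the Akka 2.4-era persistence settings shipped with Boron; the values here are illustrative, not recommendations) is the journal's circuit-breaker call timeout in the Akka configuration:

```hocon
# Circuit breaker around journal writes; "Circuit Breaker Timed out."
# is raised when a write exceeds call-timeout.
akka.persistence.journal-plugin-fallback.circuit-breaker {
  max-failures = 10
  call-timeout = 60s   # default 10s; raise if disk writes are slow
  reset-timeout = 30s
}
```

Raising the timeout only papers over slow storage, of course; the mailing-list thread above discusses the underlying I/O issue.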

Google Datastore limitation

I wish to use Datastore, but I read that an entity's size is limited to 1 MB.
I have an entity "Users", which contains around 50k "User" entries. I wonder whether the entity size restriction is too tight for my case, and whether I will be blocked if one day I have more users.
This is how I imagine my database, maybe I misunderstood how it's supposed to work:
+--------- Datastore -------------+
| |
| +---------- Users ------------+ |
| | | |
| | +---------- User ---------+ | |
| | | Name: Alpha | | |
| | +-------------------------+ | |
| | | |
| | +---------- User ---------+ | |
| | | Name: Beta | | |
| | +-------------------------+ | |
| +-----------------------------+ |
+---------------------------------+
Where "Users" is an entity which contains entities "User".
Thank you.
Your "kind" is User; your "entities" are EACH user. So no matter how MANY users you have, as long as EACH user is under a meg, you're fine.
The only limit on the size of the full "kind" is what you're willing to pay in storage. Reading up on this doc or watching this introduction video could give some high-level advice for your situation.
To better understand keys and indexes (another VERY important concept in Datastore), I would suggest this video, which explains VERY well how composite indexes work and behave :)
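The distinction can be sketched in plain Python (no Datastore client involved; the dicts and the size check are purely illustrative): the kind is User, each dict stands in for one entity, and the 1 MB limit applies per entity, never to the kind as a whole.

```python
import json

ONE_MB = 1_000_000  # approximate per-entity size limit in Datastore

# Each dict stands in for one entity of kind "User".
users = [
    {"name": "Alpha"},
    {"name": "Beta"},
]

# The limit applies to EACH entity individually...
for user in users:
    assert len(json.dumps(user).encode("utf-8")) < ONE_MB

# ...while the kind as a whole can hold any number of entities;
# its total size is bounded only by what you pay for storage.
total = sum(len(json.dumps(u).encode("utf-8")) for u in users)
```

So 50k users, or 50 million, is fine as long as no single user entity exceeds the limit.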

uml sequence diagram: create objects in a loop

In a sequence diagram I am trying to model a loop that creates a bunch of objects. I have found little information online regarding the creation of multiple objects in a sequence diagram, so I turn to you.
The classes are Deck and Card.
Cards are created by fillDeck(), which is called by the constructor of Deck (FYI, the objects are stored in an ArrayList in Deck).
There are many types of cards with varying properties. Suppose I want 8 cards of type A to be made, 12 of type B, and 3 of type C.
How would I go about modelling such a thing? This is the idea I have in mind so far, but it is obviously incomplete.
Hope someone can help! Thanks!
+------+
| Deck |
+------+
|
+--+-------+--------------+
| loop 8x / |
+--+-----+ +----------+ |
| |-------->| Card(A) | |
| | +-----+----+ |
+--+----------------------+
| |
+--+--------+------|-----------------------+
| loop 12x / | |
+--+------+ | +---------+ |
| |------------------------->| Card(B) | |
| | | +----+----+ |
|--+---------------------------------------+
| | | |
+--+-------+----------------------------------------------+
| loop 3x / | | |
+--+-----+ | | +---------+ |
| |--------------------------------------->| Card(C) | |
| | | | +----+----+ |
|--+------------------------------------------------------+
| | | |
"A sequence diagram describes an Interaction by focusing on the sequence of Messages that are exchanged, along with their corresponding OccurrenceSpecifications on the Lifelines." (UML standard) A lifeline is defined by one object. But that doesn't mean you must keep all objects on lifelines. You should show only those lifelines that exchange the messages you are thinking about.
And you needn't show all message-sequence logic in one diagram. One SD normally shows one Interaction, or maybe a few of them if they are simple.
So, if your SD shows one logical concept, it is correct. If there is another interaction between some objects, you will draw another SD for that interaction, and it will contain only the objects participating in that second interaction.
UML standard 2.5. Figure 17.25 - Overview of Metamodel elements of a Sequence Diagram
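The three creation loops from the question could be drawn with combined fragments (loop) and creation messages; as a sketch, in PlantUML notation (lifeline names are illustrative):

```plantuml
@startuml
participant Deck
[-> Deck : fillDeck()
loop 8 times
  create "a : Card" as A
  Deck -> A : Card(A)
end
loop 12 times
  create "b : Card" as B
  Deck -> B : Card(B)
end
loop 3 times
  create "c : Card" as C
  Deck -> C : Card(C)
end
@enduml
```

Per the advice above, one representative lifeline per card type inside a loop fragment is enough; you do not draw all 23 created objects.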

Can the ExtJS GridPanel support column groups?

I would like to have a GridPanel with columns that are broken into 2 sub-columns, kind of like this:
| Monday | Tuesday | Wednesday | Thursday |
| In | Out | In | Out | In | Out | In | Out |
| 9 | 4 | 10 | 5 | 8:30| 4 | 10 | 5 |
Is this possible with ExtJS?
Yes, this is definitely possible. However, you will not find it as out-of-the-box functionality. There is a user extension/plugin (2.0 here) that should do the trick for you. There is also an example in the ExtGWT samples demo with similar functionality.
In Ext 4.1.3, see http://docs.sencha.com/ext-js/4-1/#!/example/grid/group-header-grid.html
It supports group headers.
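In Ext JS 4, a grouped header is just a column config whose "columns" property holds the sub-columns; a minimal sketch for the layout above (store, field names, and widths are assumptions, not from the original question):

```javascript
// Sketch: two-row header via nested column configs (Ext JS 4.x).
// "weekStore" and the dataIndex field names are hypothetical.
Ext.create('Ext.grid.Panel', {
    title: 'Attendance',
    store: weekStore,
    columns: [
        { text: 'Monday', columns: [
            { text: 'In',  dataIndex: 'monIn',  width: 60 },
            { text: 'Out', dataIndex: 'monOut', width: 60 }
        ]},
        { text: 'Tuesday', columns: [
            { text: 'In',  dataIndex: 'tueIn',  width: 60 },
            { text: 'Out', dataIndex: 'tueOut', width: 60 }
        ]}
        // ...repeat for the remaining days
    ],
    renderTo: Ext.getBody()
});
```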
