I'm upgrading Solr from 6.x to 8.x. In the past, we built the request in our PHP script like this:
$aPostData = array(
    'stream.body' => '{"add": {"doc": {...stuff here...}}}',
    'commit' => 'true',
    'collection' => 'mycollection',
    'expandMacros' => 'false'
);
$oBody = new \http\Message\Body();
$oBody->addForm($aPostData);
sending it to our Solr server at /solr/mycollection/update/json. That worked just fine in 6.x, but now that I've upgraded to 8.x, I'm receiving the following response from Solr:
{
"responseHeader":{
"status":400,
"QTime":1
},
"error":{
"metadata":[
"error-class","org.apache.solr.common.SolrException",
"root-error-class","org.apache.solr.common.SolrException"],
"msg":"missing content stream",
"code":400
}
}
Digging around I ran across the following
https://issues.apache.org/jira/browse/SOLR-10748
and
Solr error - Stream Body is disabled
I tried following the suggestions of both answers. For the first, I now see a file called "configoverlay.json" in my ./conf directory, and it has those settings. For the second, I set the attributes on my requestParsers node. Neither worked, however. I've searched around, but at this point I'm at my wits' end. How can I keep using "stream.body"? If I shouldn't be using "stream.body", is there some other request parameter I can/should use to send my data? I couldn't find anything in the documentation; perhaps I was looking in the wrong place?
Any help would be greatly appreciated.
thnx,
Christoph
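Since stream.body is disabled by default in newer Solr, the usual alternative is to send the update as the raw POST body with Content-Type: application/json instead of as a form parameter. A minimal Python sketch, assuming a hypothetical local host, collection, and document:

```python
import json
import urllib.request

# Hypothetical host/collection; substitute your own.
url = "http://localhost:8983/solr/mycollection/update?commit=true"

# The update command goes in the POST body itself, not in a stream.body parameter.
payload = {"add": {"doc": {"id": "1", "title_s": "example"}}}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would actually send it; shown unsent here.
print(req.get_header("Content-type"))  # application/json
```

The same shape applies in PHP: put the JSON in the request body and set the Content-Type header, rather than passing it through addForm().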
Related
I'm setting up my first Solr server via Docker using solr:8.11.1-slim. I'm going to use the Schema API to set up the schema for my core, named 'products'.
While reading the docs, the information on field types seems inconsistent between these two pages:
https://solr.apache.org/guide/8_11/field-types-included-with-solr.html
vs.
https://solr.apache.org/guide/8_11/schema-api.html
I followed the first guide for the field types I can specify, and am sending requests based on the second doc, such as this:
{ "add-field": { "name": "latlong", "type": "LatLongPointSpatialField", "multiValued": False, "stored": True, "indexed": True } },
but Solr gives me back errors such as:
org.apache.solr.api.ApiBag$ExceptionWithErrObject: error processing commands, errors: [{add-field={name=latlong, type=LatLongPointSpatialField, multiValued=false, stored=true, indexed=true}, errorMessages=[Field 'latlong': Field type 'LatLongPointSpatialField' not found
So what gives? Am I misreading the docs, are they wrong, or is something wrong with the solr:8.11.1 Docker image? Why doesn't it accept the field types I'm providing?
Thanks for your help ahead of time.
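One thing worth checking: the Schema API's add-field command expects the name of a fieldType already defined in the schema, while the first page lists Java class names. A type can be defined first with add-field-type, giving the class (the spatial class ships as solr.LatLonPointSpatialField; note the spelling, with no "g"). A rough Python sketch of the two payloads, assuming a local 'products' core:

```python
import json

# Hypothetical core URL; substitute your own.
schema_url = "http://localhost:8983/solr/products/schema"

# 1) Define a field type by its Java class (note: LatLonPointSpatialField, no "g").
add_type = {
    "add-field-type": {
        "name": "location",
        "class": "solr.LatLonPointSpatialField",
    }
}

# 2) Then reference the *type name*, not the class, when adding the field.
add_field = {
    "add-field": {
        "name": "latlong",
        "type": "location",
        "multiValued": False,
        "stored": True,
        "indexed": True,
    }
}

for payload in (add_type, add_field):
    body = json.dumps(payload)
    # e.g. POST body to schema_url with Content-Type: application/json
    print(body)
```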
Below is my actual log line:
--2019-05-09 06:49:05.590 -TRACE 6293 --- [ntainer#0-0-C-1] c.s.s.service.MessageLogServiceImpl : [41a6811cbc1c66eda0e942712a12a003d6bf4654b3edb6d24bf159b592afc64f1557384545548] Event => Message Failure Identified : INVALID_STRUCTURE
My grok filter pattern:
match => {
"message" => "--%{TIMESTAMP_ISO8601:logtime} -%{LOGLEVEL:level} (?<pid>\d+) --- \[(?<thread>[^\]]+)] (?<classname>[\w.]+)\s+: \[(?<token>[^\]]+)] Event \=> Message Failure Identified : (?<code>[\w]+)"
}
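As a rough sanity check, grok's named captures translate to Python's re syntax. The following sketch approximates the pattern above against the sample log line (TIMESTAMP_ISO8601 and LOGLEVEL are expanded by hand, so this is not grok's exact expansion):

```python
import re

# Hand-translated approximation of the grok pattern above.
pattern = re.compile(
    r"--(?P<logtime>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}) "
    r"-(?P<level>[A-Z]+)\s+(?P<pid>\d+) --- "
    r"\[(?P<thread>[^\]]+)\] (?P<classname>[\w.]+)\s+: "
    r"\[(?P<token>[^\]]+)\] Event => Message Failure Identified : (?P<code>\w+)"
)

line = ("--2019-05-09 06:49:05.590 -TRACE 6293 --- [ntainer#0-0-C-1] "
        "c.s.s.service.MessageLogServiceImpl : "
        "[41a6811cbc1c66eda0e942712a12a003d6bf4654b3edb6d24bf159b592afc64f1557384545548] "
        "Event => Message Failure Identified : INVALID_STRUCTURE")

m = pattern.match(line)
print(m.group("logtime"))  # 2019-05-09 06:49:05.590
print(m.group("code"))     # INVALID_STRUCTURE
```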
After adding/removing the desired fields, below is my tokenized form:
{
"code" => "INVALID_STRUCTURE",
"event" => "message_failure",
"token" => "41a6811cbc1c66eda0e942712a12a003d6bf4654b3edb6d24bf159b592afc64f1557384545548",
"logtime" => "2019-05-09 06:49:05.590"
}
Now I want to send it to Solr, but while sending, I get this warning:
[WARN ][logstash.outputs.solrhttp] An error occurred while indexing: undefined method `iso8601' for nil:NilClass
I think it's related to the "logtime" field, since that is the only part that deals with ISO8601. I found no extra information in the logs. What is the problem here?
Finally got the answer, thanks to
this blog:
Logstash uses a Ruby library called RSolr to connect to Solr. Logstash installs the latest version of RSolr, but the latest RSolr has a bug with the timestamp datatype that prevents indexing into Solr. To avoid this, we have to manually install a working version of RSolr.
I changed the default rsolr plugin as described in the post, and everything started working.
I have been using Solr 4.10.2 and am getting ready to migrate to 7.1.
Under 4.10.2 I was able to clear an index with the following:
var address = @"http://mysolrserver:8983/solr/mysolrcore/update?stream.body=<delete><query>(*:*)</query></delete>&commit=true";
WebClient client = new WebClient();
client.DownloadString(address).Dump();
When I try this against a SOLR 7.1 server, I get a response 400 - Bad request.
{
  "error":{
    "metadata":[
      "error-class","org.apache.solr.common.SolrException",
      "root-error-class","org.apache.solr.common.SolrException"],
    "msg":"Stream Body is disabled. See http://lucene.apache.org/solr/guide/requestdispatcher-in-solrconfig.html for help",
    "code":400
  }
}
I went into solrconfig.xml for the core and set the element to
<requestParsers enableRemoteStreaming="true"
multipartUploadLimitInKB="2048000"
formdataUploadLimitInKB="2048"
addHttpRequestToContext="false"/>
but I still get the same error.
Since 7.1 defaults to JSON responses, I also tried adding
&wt=xml
to the end of the URL, but I get the same result: 400 - Bad Request.
Any ideas?
You're setting the wrong parameter. If you want to allow stream.body in the URL, you have to set enableStreamBody="true". enableRemoteStreaming controls stream.file and stream.url, which can be used to read from remote locations.
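Applied to the solrconfig.xml element from the question, that would look like this (a sketch; keep your own upload limits):

```xml
<requestParsers enableRemoteStreaming="true"
                enableStreamBody="true"
                multipartUploadLimitInKB="2048000"
                formdataUploadLimitInKB="2048"
                addHttpRequestToContext="false"/>
```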
I ran the call below (curl syntax; the same body works from Postman), and after that the delete query worked fine:
curl http://localhost:8983/solr/CORENAME/config -H 'Content-type:application/json' -d '{
  "set-property" : {"requestDispatcher.requestParsers.enableRemoteStreaming":true},
  "set-property" : {"requestDispatcher.requestParsers.enableStreamBody":true}
}'
I use Solr 4.3 for my website. When I query data with the MoreLikeThis function, this exception sometimes occurs:
org.apache.solr.common.SolrException: parsing error
at org.apache.solr.client.solrj.impl.BinaryResponseParser.processResponse(BinaryResponseParser.java:43)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:385)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
at org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:90)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
Parsing error? What does that mean, and why does it occur?
Thanks for your reply.
The code:
public MoreLikeThisQueryResponse(QueryResponse queryResponse) {
    this.queryResponse = queryResponse;
    NamedList<Object> res = this.queryResponse.getResponse();
    for (int i = 0; i < res.size(); i++) {
        String name = res.getName(i);
        if ("match".equals(name)) {
            this.matchResult = (SolrDocumentList) res.getVal(i);
        }
    }
}
This type of "parsing error" in the BinaryResponseParser.processResponse method typically indicates that the server is returning a raw HTML error page rather than a Solr javabin response. That usually means the servlet container (Tomcat or Jetty) detected and reported an error before Solr got a chance to handle it and honor the wt parameter, which sets the response format to javabin.
Check the server log for the actual error.
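As a quick illustration of that failure mode: container error pages are HTML text, while a javabin response begins with a binary version byte (currently 2), so a cheap sanity check on the raw bytes is possible. A sketch, not part of any SolrJ API:

```python
def looks_like_html_error(payload: bytes) -> bool:
    """Heuristic: servlet-container error pages start with HTML markup,
    while a Solr javabin response starts with a binary version byte."""
    head = payload.lstrip()[:9].lower()
    return head.startswith((b"<html", b"<!doctype"))

print(looks_like_html_error(b"<html><body><h2>HTTP ERROR 500</h2></body></html>"))  # True
print(looks_like_html_error(bytes([2, 0xA0, 0x20])))  # False
```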
I had a similar problem.
It seems there may be a compatibility problem if you are not using exactly compatible versions of Solr and SolrJ.
To resolve this, you have to specify:
server.setParser(new XMLResponseParser());
Quoting from the SolrJ documentation:
SolrJ generally maintains backwards compatibility, so you can use a newer SolrJ with an older Solr, or an older SolrJ with a newer Solr. There are some minor exceptions to this general rule:
If you're mixing 1.x and a later major version, you must set the response parser to XML, because the two versions use incompatible versions of javabin.
I am new to Solr. I have been following the documentation on the http://haystacksearch.org/ site.
My project is on Django 1.4.
The steps I followed:
1. Added haystack to INSTALLED_APPS.
2. Modified settings.py with:
HAYSTACK_SITECONF = 'directory.search_sites'
HAYSTACK_SEARCH_ENGINE = 'solr'
HAYSTACK_SOLR_URL = 'http://127.0.0.1:8983/solr'
HAYSTACK_CONNECTIONS = {
'default': {
'ENGINE': 'haystack.backends.solr_backend.SolrEngine',
'URL': 'http://127.0.0.1:8983/solr'
# ...or for multicore...
# 'URL': 'http://127.0.0.1:8983/solr/mysite',
},
}
3. My search_indexes.py file:
from haystack import indexes
from app.models import SellerItem

class SellerItemIndex(indexes.SearchIndex):
    text = indexes.CharField(document=True, use_template=True)
    title = indexes.CharField(model_attr='title')
    sub_title = indexes.CharField(model_attr='sub_title')
    description = indexes.CharField(model_attr='description')

    def get_model(self):
        return SellerItem

    def index_queryset(self):
        """Used when the entire index for model is updated."""
        return self.get_model().objects.filter(pk__gt=0)
4. Added search_sites.py:
import haystack
haystack.autodiscover()
5. Added templates/search/indexes/selleritem.txt:
{{ object.title }}
{{ object.sub_title }}
{{ object.description }}
6. Added this to urls.py:
(r'^search/', include('haystack.urls')),
7. Created the search template.
8. Replaced schema.xml in apache-solr-3.6.0/example/solr/conf with the XML generated by the command:
python manage.py build_solr_schema
I am getting an error like this when I start the solr server:
SEVERE: org.apache.solr.common.SolrException: undefined field text
at org.apache.solr.schema.IndexSchema.getDynamicFieldType(IndexSchema.java:1330)
at org.apache.solr.schema.IndexSchema$SolrQueryAnalyzer.getAnalyzer(IndexSchema.java:408)
at org.apache.solr.schema.IndexSchema$SolrIndexAnalyzer.reusableTokenStream(IndexSchema.java:383)
at org.apache.lucene.queryParser.QueryParser.getFieldQuery(QueryParser.java:574)
at org.apache.solr.search.SolrQueryParser.getFieldQuery(SolrQueryParser.java:206)
at org.apache.lucene.queryParser.QueryParser.Term(QueryParser.java:1429)
at org.apache.lucene.queryParser.QueryParser.Clause(QueryParser.java:1317)
at org.apache.lucene.queryParser.QueryParser.Query(QueryParser.java:1245)
at org.apache.lucene.queryParser.QueryParser.TopLevelQuery(QueryParser.java:1234)
at org.apache.lucene.queryParser.QueryParser.parse(QueryParser.java:206)
at org.apache.solr.search.LuceneQParser.parse(LuceneQParserPlugin.java:79)
at org.apache.solr.search.QParser.getQuery(QParser.java:143)
at org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:105)
at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:165)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1376)
at org.apache.solr.core.QuerySenderListener.newSearcher(QuerySenderListener.java:59)
at org.apache.solr.core.SolrCore$3.call(SolrCore.java:1182)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:679)
The server still starts, though.
When I do ./manage.py rebuild_index and run a search, I get this error:
Problem accessing /solr/select/. Reason:undefined field text
What did I miss? Has anyone had the same issue before?
Thank you
I think your issue stems from the incorrectly named template. You use search/indexes/selleritem.txt, but it should be search/indexes/app/selleritem_text.txt.
As a side note, I see that you're mixing Haystack 1.X and 2.X settings and methods. Given the lack of the indexes.Indexable mixin in your SellerItemIndex class, it appears you are actually using 1.X. Your life will be simpler if you stick with the docs for the version you are using.
1.2.7 docs
2.0.0-beta docs
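To make the corrected path concrete: the template path is built from the app label, the lowercased model name, and the document field name. A tiny illustrative helper (hypothetical, not part of Haystack):

```python
def haystack_template_path(app_label: str, model_name: str, field_name: str = "text") -> str:
    # Convention: search/indexes/<app_label>/<lowercased model name>_<field name>.txt,
    # where field_name is the document=True field ("text" here).
    return f"search/indexes/{app_label}/{model_name.lower()}_{field_name}.txt"

print(haystack_template_path("app", "SellerItem"))  # search/indexes/app/selleritem_text.txt
```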
Hope that helps,
Ben