Problem with solrQuery.setFilterQueries() method - Solr

I have the following query string, which I took from my URL:
public static String query="pen&mq=pen&f=owners%5B%22abc%22%5D&f=application_type%5B%22cde%22%5D";
public static String q="pen";
I parsed the query string, extracted each facet name and facet value, and stored them in a map:
String querydec = URLDecoder.decode(query, "UTF-8");
String[] facetswithval = querydec.split("&f=");
Map<String, String> facetMap = new HashMap<String, String>();
// Skip index 0 (the q/mq part); each remaining entry looks like owners["abc"]
for (int i = 1; i < facetswithval.length; i++) {
    String[] fsplit = facetswithval[i].split("\\[\"");
    String[] value = fsplit[1].split("\"\\]");
    facetMap.put(fsplit[0], value[0]);
}
Then I use the following code to query Solr using SolrJ:
CommonsHttpSolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr/");
SolrQuery solrQuery = new SolrQuery();
solrQuery.setQuery(q);
for (Iterator<String> iter = facetMap.keySet().iterator(); iter.hasNext();) {
    String key = iter.next();
    System.out.println("key=" + key + "::value=" + facetMap.get(key));
    solrQuery.setFilterQueries(key + ":" + facetMap.get(key));
}
solrQuery.setRows(MAX_ROW_NUM);
QueryResponse qr = server.query(solrQuery);
SolrDocumentList sdl = qr.getResults();
But after running my code I found that the solrQuery.setFilterQueries() method keeps only the last filter that was set. That means if the loop runs three times, only the filter from the last iteration is applied.
Can somebody please clarify this and suggest a better approach? I am also decoding the URL, so if a facet value contains a special character in the middle I get no results for it. I also tried it without decoding, but that didn't work either.

There is also an addFilterQuery() method; I would call that instead, since you are setting the filter queries individually in your for loop. setFilterQueries() replaces whatever filters were set before, while addFilterQuery() appends to the existing list, which is why only your last filter survives the loop.
Also, please see the post "Filter query with special character using SolrJ client" from the Solr Users mailing list about the need to still escape special characters in queries.
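As a rough sketch under those assumptions (facetMap, q, MAX_ROW_NUM and server come from your code; ClientUtils.escapeQueryChars is the standard SolrJ escaping utility from org.apache.solr.client.solrj.util), the loop could look like this:
SolrQuery solrQuery = new SolrQuery();
solrQuery.setQuery(q);
for (Map.Entry<String, String> facet : facetMap.entrySet()) {
    // Escape special characters in the value so Solr can parse the filter query.
    String escapedValue = ClientUtils.escapeQueryChars(facet.getValue());
    // addFilterQuery() appends another fq parameter instead of replacing the previous ones.
    solrQuery.addFilterQuery(facet.getKey() + ":" + escapedValue);
}
solrQuery.setRows(MAX_ROW_NUM);
QueryResponse qr = server.query(solrQuery);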

Related

How to create a Solr collection that doesn't shuffle the createNodeSet hosts using the SolrJ API?

I'm trying to create a Solr collection on SolrCloud, and I want to pass in the hosts I want the collection to exist on in a certain order, and have Solr follow that order. Solr exposes this functionality in the API with the parameter createNodeSet.shuffle, but I can't explicitly set this parameter on a CollectionAdminRequest.Create instance.
Does this functionality not exist within Solrj? Can I set the value with the setProperties() method even though it's a "param"?
I'm facing this problem too, and I noticed that you had opened a PR on GitHub. I tried several ways to achieve this, but finally gave up and shuffled the nodes myself before passing them to the Create request.
In Kotlin:
val nodes = listOf("node1", "node2")
val createNodeSet = nodes.shuffled().joinToString(",")
In Java:
List<String> nodes = Arrays.asList("node1", "node2");
Collections.shuffle(nodes);
String createNodeSet = String.join(",", nodes);
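The shuffled string can then be handed to the Create request. A minimal sketch, assuming a SolrJ version where CollectionAdminRequest.Create exposes setCreateNodeSet() (the collection name, config set name and client variable are placeholders):
// Build the request through the static builder and pass the pre-shuffled node set.
CollectionAdminRequest.Create create =
        CollectionAdminRequest.createCollection("myCollection", "myConfigSet", 1, 1);
create.setCreateNodeSet(createNodeSet); // comma-separated list shuffled above
create.process(cloudSolrClient);        // an existing CloudSolrClient instance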
As current Solr has the constructor marked as protected and only accessible via a static builder, and as I didn't want a new class to worry about, I figured out the following way to set the needed parameter.
This approach should be usable on many of the other builder-created request objects.
Create req = CollectionAdminRequest //
        .createCollection(newCollection, newConfigSet, NUM_SHARDS, NUM_REPLICAS);
final SolrParams reqParams = req.getParams();
if (reqParams instanceof ModifiableSolrParams) {
    ((ModifiableSolrParams) reqParams).set("createNodeSet.shuffle", "false");
}

How to sort results from MongoTemplate.findAll()?

I currently have a query that returns all the documents in a collection using the findAll() method of MongoTemplate. I want to sort these results, but do not see any way to do so. I see that I can use find() with a Query argument and call .with(Sort sort), but in this scenario, how do I set the Query to return all documents? I would be okay with using either approach.
Query query = new Query();
query.with(new Sort(Sort.Direction.DESC, "_id"));
List<MyClass> myClassList= mongoTemplate.find(query, MyClass.class);
An empty Query behaves like findAll(). Just as you can write db.myCollection.find({}) in the mongo shell, you can pass an empty Query with the Java MongoDB driver.
A working sample would be:
public static void main(String[] args) throws UnknownHostException
{
    ServerAddress address = new ServerAddress("localhost", 27017);
    MongoClient client = new MongoClient(address);
    SimpleMongoDbFactory simpleMongoDbFactory = new SimpleMongoDbFactory(client, "mydatabase");
    MongoTemplate mongoTemplate = new MongoTemplate(simpleMongoDbFactory);
    // Empty query criteria plus a Sort: returns all documents ordered by _id descending.
    Query query = new Query().with(new Sort(Sort.Direction.DESC, "_id"));
    List<MyClass> allObjects = mongoTemplate.find(query, MyClass.class);
    System.out.println(allObjects);
}
With newer Spring Data versions, the syntax is:
Query query = new Query();
query.with(Sort.by(Sort.Direction.DESC, "_id"));
List<MyClass> myClassList= mongoTemplate.find(query, MyClass.class);

Dapper: read multiple results with type array not generics

I have a 3-part query that I am reading using QueryMultiple. My problem is that on the first Read<T> I need to split the result into 12 different classes, which Dapper does not support from what I could see. Before I used QueryMultiple, my query was only one part and I was using the method from this example Using Dapper to map more than 5 types to get the 12 different classes. My question is, how can I split the first Read<T> into twelve classes and then continue with the GridReader? Please note I cannot create one big query.
public static IEnumerable<TReturn> Query<TReturn>(this IDbConnection cnn, string sql, Type[] types, Func<object[], TReturn> map, dynamic param = null, IDbTransaction transaction = null, bool buffered = true, string splitOn = "Id", int? commandTimeout = null, CommandType? commandType = null);
UPDATE
I tested this method by adding it to the Dapper source file and it worked, but my app only references the DLL, not the source file, so I am not sure how to add it without pulling in the Dapper source from GitHub. I was hoping there was built-in support for what I wanted and I had just missed it somewhere in the code. Thanks for any help.
public IEnumerable<TReturn> Read<TReturn>(Type[] types, Func<object[], TReturn> func, string splitOn = "id", bool buffered = true)
{
    var identity = this.identity.ForGrid(typeof(TReturn), types, gridIndex);
    try
    {
        foreach (var r in SqlMapper.MultiMapImpl<TReturn>(null, default(CommandDefinition), types, func, splitOn, reader, identity, false))
        {
            yield return r;
        }
    }
    finally
    {
        NextResult();
    }
}
As I was about to open a pull request, I noticed someone was one step ahead of me. It seems this functionality is not included in Dapper right now.
https://github.com/StackExchange/dapper-dot-net/pull/308

Solr, Special Characters, and the MultiFieldQueryParser

I need to programmatically build boolean queries against multiple Solr fields. I thought that the Lucene MultiFieldQueryParser would be a good way to go. This works well except when special characters are involved.
public class QueryParserSpike {
    String userQuery = "(-)-foo";
    String escapedQuery = ClientUtils.escapeQueryChars(userQuery); // \(\-\)\-foo
    Analyzer analyzer = new WhitespaceAnalyzer(Version.LUCENE_43);
    QueryParser parser = new MultiFieldQueryParser(Version.LUCENE_43, new String[]{"a"}, analyzer);

    @Test(expected = ParseException.class)
    public void testNoEscape() throws Exception {
        parser.parse(userQuery); // Throws an exception
    }

    @Test
    public void testEscape() throws Exception {
        Query q = parser.parse(escapedQuery);
        System.out.println(q.toString()); // a:(-)-foo (This can't be parsed by Solr)
    }

    @Test
    public void testDoubleEscape() throws Exception {
        String doubleEscapedQuery = escapedQuery.replaceAll("\\\\", "\\\\\\\\");
        Query q = parser.parse(doubleEscapedQuery);
        System.out.println(q.toString()); // (a:\) (a:\-\) (a:\-foo) (This isn't the correct query)
    }
}
What I'm trying to get out of this would be a:\(\-\)\-foo. Is there a Solr class that does something similar? Or is the best option to write something to process the result of the MultiFieldQueryParser myself?
What Query.toString() gives you is a best-effort, human-readable rendering of the query. It is not necessarily a parsable query, as in this case. You can never rely on logic like parser.parse(query.toString()). The Lucene Query API can express many things that there is no way at all to express in QueryParser syntax.
The escaping you use in testEscape() should be correct and should give you the query you are looking for. You could also use QueryParser.escape(userQuery) for the raw Lucene equivalent.
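To illustrate, a minimal sketch under those assumptions (the field name "a" and the input string come from the question; QueryParser.escape and ClientUtils.escapeQueryChars are the standard Lucene and SolrJ utilities):
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.solr.client.solrj.util.ClientUtils;

public class EscapeExample {
    public static void main(String[] args) {
        String userQuery = "(-)-foo";
        // Both utilities escape Lucene syntax characters such as ( ) and -.
        String luceneEscaped = QueryParser.escape(userQuery);           // \(\-\)\-foo
        String solrjEscaped = ClientUtils.escapeQueryChars(userQuery);  // \(\-\)\-foo
        // Build the per-field query string yourself instead of relying on Query.toString().
        System.out.println("a:" + luceneEscaped); // a:\(\-\)\-foo
    }
}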

Override "fl" parameter in Solr using SolrParams in a custom SearchComponent

I have an interesting use case for a Solr implementation we have, where there are some fields in the Solr Schema that shouldn't be returned when doing a query. The ideal solution is to change the calling program so it doesn't query for &fl=score like it does now, and only requests the necessary fields, but that won't happen in the short term so in the meantime we have to filter out some fields from the Solr response.
The approach we think has the smallest performance impact (let me know if there is a better way) is to override the &fl= parameter so it lists all the fields except the ones that should be filtered out. For this, we added a new SearchComponent to the RequestHandler's components list that modifies the &fl parameter. The issue we ran into is that once we get the SolrParams from the SolrQueryRequest, they cannot be modified (which I think is the right thing, since it could change something another SearchComponent relies on). But we still need to find a way to remove these extra fields.
So, this is the code we started to write:
public void prepare(ResponseBuilder rb) throws IOException {
    SolrQueryRequest req = rb.req;
    SolrParams params = req.getParams();
    String fl = params.get("fl");
    // Remove the "fl" parameter from params and replace it with a new list:
    // Cannot be done
    ...
And we ran into the issue of not being able to modify the SolrParams.
As a plan B, that same SearchComponent is removing the fields in the process() method, but doing it this way is slower. The code has to go through the resulting SolrDocumentList, and for each SolrDocument call removeFields(), something similar to: (simplified code)
public void process(ResponseBuilder rb) throws IOException {
    ...
    SolrQueryResponse rsp = rb.rsp;
    NamedList values = rsp.getValues();
    SolrDocumentList docs = (SolrDocumentList) values.get("response");
    Iterator<SolrDocument> docsIterator = docs.iterator();
    while (docsIterator.hasNext()) {
        SolrDocument sd = docsIterator.next();
        sd.removeFields(field); // "field" is the name of the field to strip (simplified)
        ...
Any ideas on how/if this can be achieved?
Thanks for any suggestion!
With your own SearchHandler you can specify invariants (values that stay fixed no matter what the request sends) for any query parameter, including &fl.
It's something along the lines of:
<requestHandler name="filtered" class="solr.StandardRequestHandler">
  <lst name="invariants">
    <str name="fl">score,id,something_else,etc.</str>
  </lst>
</requestHandler>
More documentation:
http://wiki.apache.org/solr/SearchHandler
The only problem is that, for now, there's no negative fl parameter (i.e. return all fields except the ones I list). See https://issues.apache.org/jira/browse/SOLR-3191
Finally, to specify which SearchHandler you want to use at query time, simply add &qt=filtered (or whatever name you gave it).
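From SolrJ, that would look something like the sketch below (the handler name comes from the config above; solrClient is an assumed existing SolrClient instance):
SolrQuery solrQuery = new SolrQuery("*:*");
solrQuery.setRequestHandler("filtered"); // sets the qt parameter, i.e. &qt=filtered
QueryResponse rsp = solrClient.query(solrQuery);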
Try removing the fields that you don't want from the ReturnFields object.
For example, something like this:
@Override
public void process(ResponseBuilder rb) throws IOException {
    String fl = rb.req.getParams().get(CommonParams.FL);
    List<String> fields = Lists.newArrayList(fl.split(","));
    List<String> newFields = Lists.newArrayList();
    for (String field : fields) {
        if (!field.equals("score")) {
            newFields.add(field);
        }
    }
    String newFl = Joiner.on(",").join(newFields);
    ReturnFields returnFields = new ReturnFields(newFl, rb.req);
    rb.rsp.setReturnFields(returnFields);
}
I've registered the custom SearchComponent under "last-components" in solrconfig.xml, as sketched below.
P.S.: I was using the Guava library for Lists and Joiner.
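For reference, a minimal solrconfig.xml sketch of that registration (the component name fieldFilter and the class com.example.FieldFilterComponent are placeholders for the custom component described above):
<searchComponent name="fieldFilter" class="com.example.FieldFilterComponent"/>

<requestHandler name="/select" class="solr.SearchHandler">
  <arr name="last-components">
    <str>fieldFilter</str>
  </arr>
</requestHandler>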
