Apache Solr search version 3.4 in ColdFusion 11 - solr

We have an ecommerce application in ColdFusion 11 that uses the built-in Apache Solr search feature for creating collections.
The bundled Solr version appears to be 3.4.
When searching a phrase containing a space, such as 'console sink', the results also include products like console sink legs, while the user expects to see the sink products. Presumably those items match because their titles contain the string 'console sink' along with 'legs'. The expected sink products are also returned, but only on the last pages.
The HTTP request call format is like this:
http://localhost:8987/solr//select?indent=on&q="console sink"~10&start=0&rows=10&fl=*,score&wt=json
How can we get the expected results (i.e. in the above case the sink products rather than their parts) to appear in the top results of the Solr search? Are there any parameters to be passed?
Is it possible to upgrade the Solr version without upgrading the ColdFusion version?
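One commonly suggested pattern for promoting exact product matches (a sketch only; the title_exact field, the copyField, and the use of the edismax parser with a bq boost are assumptions about what the ColdFusion-managed schema and bundled Solr 3.4 allow, not something verified against them) is to keep an untokenized copy of the title and boost documents whose title is exactly the searched phrase, so whole products outrank accessories whose titles merely contain the phrase:
<!-- schema.xml (hypothetical): untokenized copy of the title for exact-match boosting -->
<field name="title_exact" type="string" indexed="true" stored="false"/>
<copyField source="title" dest="title_exact"/>
With that field in place (after reindexing), a boost query can promote the exact title match, e.g.:
http://localhost:8987/solr//select?indent=on&defType=edismax&q="console sink"~10&bq=title_exact:"console sink"^10&start=0&rows=10&fl=*,score&wt=json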

Related

How can I retrieve the metrics from my Solr server using SolrJ?

We are running Solr 8.4 and SolrJ 8.4. I can successfully retrieve about 18K lines of metrics using curl: curl 'http://localhost:1080/MySolr/admin/metrics'. How can I retrieve the same metrics using SolrJ?
I was unable to find any information in either the Solr or SolrJ documentation about this.
Any help is appreciated.
You can make explicit use of the CommonParams.QT parameter to change the query path to any value.
query.setParam(CommonParams.QT, "/admin/metrics");
This lets you make a custom query to a path under a specific core name.
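A minimal SolrJ sketch of that approach (host, port, and context path are taken from the curl example above; reading the payload via the "metrics" key is an assumption about the response structure):
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.params.CommonParams;

public class FetchMetrics {
    public static void main(String[] args) throws Exception {
        // Point the client at the Solr root (not at a core) so the request resolves to /MySolr/admin/metrics
        try (HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:1080/MySolr").build()) {
            SolrQuery query = new SolrQuery();
            // Route the request to the metrics handler instead of the default /select path
            query.setParam(CommonParams.QT, "/admin/metrics");
            QueryResponse response = client.query(query);
            // The full payload is in the response NamedList; the metrics themselves sit under the "metrics" key
            System.out.println(response.getResponse().get("metrics"));
        }
    }
}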

How to Ignore a Missing Field in Solr Request Parameters API?

I have an old Solr instance that has a custom parameter enabled in the JSON API, e.g.
?json.customParam="eeebc"
In order to upgrade Solr to a newer version (on a new machine), I simply want Solr to ignore the parameter instead of returning an exception, so that some level of backward compatibility is possible. The current exception is:
Unknown top-level key in JSON request: customParam
I'm hoping there is some fairly simple way to just ignore that parameter using the schema/solrconfig etc.

bootstrap solr on tomcat with compositeID routing

We are upgrading solr 4.0 to solr 4.3.1 on tomcat 7.
We would like to use the "compositeId" router. It seems that there are two ways to do that:
1. using collections API to create a new collection by passing "numShards";
2. Passing "numShards" in bootstrap process.
For 1, we have a large amount of existing index data that we don't want to reindex. Hence, we can't create new collections.
The SolrCloud wiki uses Jetty examples where it is possible to pass the "numShards" parameter. Is it possible to do that on Tomcat?
This is what currently happens in Solr 4.3.1 on Tomcat 7 with the default bootstrap: Solr reads "solr.xml" to find all Solr cores and bootstraps all of them. However, the hash range of each Solr core shows "null" in "clusterstate.json" in ZooKeeper, which results in the "implicit" router being used.
Thanks!
When you want to set up a collection with Solr running on Tomcat (ZooKeeper running separately), you should use the Collections API: you can specify the number of shards (numShards) and other parameters when calling the CREATE action.
With Solr 4.3.1 there is also a nice new option that allows splitting existing shards; check out the SPLITSHARD action in the Collections API (example calls are sketched below the links).
https://cwiki.apache.org/confluence/display/solr/Collections+API
http://wiki.apache.org/solr/SolrCloud (some points about collections API are also there)
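For illustration, the CREATE and SPLITSHARD calls look roughly like this (collection and shard names are placeholders; adjust the host and port to your Tomcat deployment):
http://localhost:8080/solr/admin/collections?action=CREATE&name=mycollection&numShards=2&replicationFactor=2
http://localhost:8080/solr/admin/collections?action=SPLITSHARD&collection=mycollection&shard=shard1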

Search using SOLR is not up to date

I am writing an application in which I present search capabilities based on SOLR 4.
I am facing strange behaviour: during massive indexing, search requests don't always "see" newly indexed data. It seems like the index reader is not being refreshed frequently, and only after I manually refresh the core from the Solr Core Admin window do the expected results return...
I am indexing my data using JsonUpdateRequestHandler.
Is it a matter of configuration? Do I need to configure Solr to reopen its index reader more frequently somehow?
Changes to the index are not available until they are committed.
For SolrJ, do
// Solr 4-era SolrJ: open a client for the core's URL and issue an explicit commit
HttpSolrServer server = new HttpSolrServer(host);
server.commit();
For XML either send in <commit/> or add ?commit=true to the URL, e.g. http://localhost:8983/solr/update?commit=true
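Alternatively, Solr can be configured to commit on its own so you don't have to commit from the client; a minimal solrconfig.xml sketch (the intervals shown are illustrative values, not recommendations):
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- hard commit: flush to stable storage; openSearcher=false keeps it cheap -->
  <autoCommit>
    <maxTime>15000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- soft commit: reopen the searcher so newly indexed documents become visible -->
  <autoSoftCommit>
    <maxTime>1000</maxTime>
  </autoSoftCommit>
</updateHandler>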

Does solr do web crawling?

I am interested to do web crawling. I was looking at solr.
Does solr do web crawling, or what are the steps to do web crawling?
Solr 5+ DOES in fact now do web crawling!
http://lucene.apache.org/solr/
Older Solr versions do not do web crawling alone, as historically it's a search server that provides full text search capabilities. It builds on top of Lucene.
If you need to crawl web pages to feed Solr, you have a number of options, including:
Nutch - http://lucene.apache.org/nutch/
Websphinx - http://www.cs.cmu.edu/~rcm/websphinx/
JSpider - http://j-spider.sourceforge.net/
Heritrix - http://crawler.archive.org/
If you want to make use of the search facilities provided by Lucene or SOLR you'll need to build indexes from the web crawl results.
See this also:
Lucene crawler (it needs to build lucene index)
Solr does not in of itself have a web crawling feature.
Nutch is the "de-facto" crawler (and then some) for Solr.
Solr 5 started supporting simple web crawling (Java Doc). If you want search, Solr is the tool; if you want to crawl, Nutch/Scrapy is better :)
To get it up and running, you can take a detailed look here. However, here is how to get it up and running in one line:
java
-classpath <pathtosolr>/dist/solr-core-5.4.1.jar
-Dauto=yes
-Dc=gettingstarted -> collection: gettingstarted
-Ddata=web -> web crawling and indexing
-Drecursive=3 -> go 3 levels deep
-Ddelay=0 -> for the impatient use 10+ for production
org.apache.solr.util.SimplePostTool -> SimplePostTool
http://datafireball.com/ -> a testing wordpress blog
The crawler here is very "naive"; you can find all the code in the Apache Solr GitHub repo.
Here is how the response looks:
SimplePostTool version 5.0.0
Posting web pages to Solr url http://localhost:8983/solr/gettingstarted/update/extract
Entering auto mode. Indexing pages with content-types corresponding to file endings xml,json,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log
SimplePostTool: WARNING: Never crawl an external web site faster than every 10 seconds, your IP will probably be blocked
Entering recursive mode, depth=3, delay=0s
Entering crawl at level 0 (1 links total, 1 new)
POSTed web resource http://datafireball.com (depth: 0)
Entering crawl at level 1 (52 links total, 51 new)
POSTed web resource http://datafireball.com/2015/06 (depth: 1)
...
Entering crawl at level 2 (266 links total, 215 new)
...
POSTed web resource http://datafireball.com/2015/08/18/a-few-functions-about-python-path (depth: 2)
...
Entering crawl at level 3 (846 links total, 656 new)
POSTed web resource http://datafireball.com/2014/09/06/node-js-web-scraping-using-cheerio (depth: 3)
SimplePostTool: WARNING: The URL http://datafireball.com/2014/09/06/r-lattice-trellis-another-framework-for-data-visualization/?share=twitter returned a HTTP result status of 302
423 web pages indexed.
COMMITting Solr index changes to http://localhost:8983/solr/gettingstarted/update/extract...
Time spent: 0:05:55.059
In the end, you can see that all the data is indexed properly.
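To spot-check the result, a match-all query against the collection created above reports the indexed document count in numFound (the URL assumes the default local example setup):
http://localhost:8983/solr/gettingstarted/select?q=*:*&rows=0&wt=json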
You might also want to take a look at
http://www.crawl-anywhere.com/
Very powerful crawler that is compatible with Solr.
I have been using Nutch with Solr on my latest project and it seems to work quite nicely.
If you are using a Windows machine then I would strongly recommend following the 'No cygwin' instructions given by Jason Riffel too!
Yes, I agree with the other posts here, use Apache Nutch
bin/nutch crawl urls -solr http://localhost:8983/solr/ -depth 3 -topN 5
Note that your Solr version has to match the correct version of Nutch, because older versions of Solr store the indices in a different format.
Its tutorial:
http://wiki.apache.org/nutch/NutchTutorial
I know it's been a while, but in case someone else is searching for a Solr crawler like me, there is a new open-source crawler called Norconex HTTP Collector
I know this question is quite old, but I'll respond anyway for the newcomers who wander here.
In order to use Solr, you can use a web crawler that is capable of storing documents in Solr.
For instance, The Norconex HTTP Collector is a flexible and powerful open-source web crawler that is compatible with Solr.
To use Solr with the Norconex HTTP Collector, you will need the Norconex HTTP Collector to crawl the website that you want to collect data from, and you will need to install the Norconex Apache Solr Committer to store the collected documents in Solr. Once the committer is installed, you will need to configure the crawler's XML configuration file. I would recommend that you follow this link to get started and test how the crawler works, and here to learn how to set up the configuration file. Finally, you will need this link to configure the committer section of the configuration file for Solr.
Note that if your goal is not to crawl web pages, Norconex also has a Filesystem Collector that can be used with the Solr Committer as well.
Def Nutch !
Nutch also has a basic web front end which will let you query your search results. You might not even need to bother with SOLR depending on your requirements. If you do a Nutch/SOLR combination you should be able to take advantage of the recent work done to integrate SOLR and Nutch ... http://issues.apache.org/jira/browse/NUTCH-442
