Apache Nutch - indexing only the modified files in Solr

I am able to set up Apache Nutch and get the data indexed in Solr. While indexing, I am trying to make sure that only modified pages get indexed. Below are the two questions we have regarding this.
1. Is it possible to tell Nutch to send an ‘If-Modified-Since’ header while crawling the site and download a page only if it has changed since the last time it was crawled?
2. I can see that Nutch forms an MD5 digest of the retrieved page content, but even when the digest hasn’t changed (compared to the previous version), it still indexes the page in Solr. Is there any setting within Nutch to ensure that a page whose content hasn’t changed is not re-indexed in Solr?

Answering my own question here; hope it helps someone.
Once I set the AdaptiveFetchSchedule, I could see that Nutch was no longer pulling pages that hadn't changed. It honors the If-Modified-Since header.
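For reference, a minimal nutch-site.xml sketch of that change (the property names come from nutch-default.xml; the interval values are purely illustrative and should be tuned for your own site):
<configuration>
  <property>
    <name>db.fetch.schedule.class</name>
    <value>org.apache.nutch.crawl.AdaptiveFetchSchedule</value>
    <!-- replaces the stock DefaultFetchSchedule -->
  </property>
  <property>
    <name>db.fetch.schedule.adaptive.min_interval</name>
    <value>86400</value>
    <!-- illustrative: never re-fetch sooner than 1 day -->
  </property>
  <property>
    <name>db.fetch.schedule.adaptive.max_interval</name>
    <value>2592000</value>
    <!-- illustrative: always re-fetch within 30 days -->
  </property>
</configuration>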

Related

Nutch 1.6 doesn't search new entries in seed.txt

I set up Solr 7.7.1 and Nutch 1.6 and ran a test search. For that I put a URL in seed.txt and everything worked fine. After this test I removed the old core in Solr, created a new core, put multiple URLs in seed.txt, and started Nutch again for a new crawl. But on every try I got the results of the previous test run. How can I remove the previous search results and get Nutch to crawl the new URLs I put in seed.txt?
Thanks in advance for your answers.
You should remove the crawl/ directory (if it is named crawl). This directory contains the previously crawled data (before it is sent to Solr). Probably there is no new content after you run the crawl command and Nutch is sending the already stored data into Solr.
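If stale documents are still visible in the Solr core as well, they can be cleared before re-indexing. A hedged sketch of the standard Solr XML update commands, posted to the core's /update handler (for example with curl, using your own core name):
<delete><query>*:*</query></delete>
<commit/>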

How do I tell Nutch to crawl *through* a url without storing it?

Let's say I have a Confluence instance, and I want to crawl it and store the results in Solr as part of an intranet search engine.
Now let's say I only want to store a subset of the pages (matching a regex) on the Confluence instance as part of the search engine.
But, I do want Nutch to crawl all the other pages, looking for links to pages that match—I just don't want Nutch to store them (or at least I don't want Solr to return them in the results).
What's the normal or least painful way to set Nutch->Solr up to work like this?
Looks like the only way to do this is to write your own IndexingFilter plugin (or find someone's to copy from); a rough sketch of how such a plugin is wired up follows the references below.
[Will add my sample plugin code here when it's working properly]
References:
http://www.atlantbh.com/precise-data-extraction-with-apache-nutch/
http://florianhartl.com/nutch-plugin-tutorial.html
How to filter URLs in Nutch 2.1 solrindex command
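Not my working plugin yet, just a sketch of the wiring under assumed names (the plugin id index-urlregex and the class com.example.nutch.indexer.UrlRegexIndexingFilter are hypothetical). The Java class implements the org.apache.nutch.indexer.IndexingFilter extension point; returning null from its filter() method drops a page from indexing while its outlinks are still followed during the crawl.
src/plugin/index-urlregex/plugin.xml:
<plugin id="index-urlregex" name="URL Regex Indexing Filter" version="1.0.0" provider-name="example.com">
  <runtime>
    <library name="index-urlregex.jar">
      <export name="*"/>
    </library>
  </runtime>
  <requires>
    <import plugin="nutch-extensionpoints"/>
  </requires>
  <extension id="com.example.nutch.indexer.urlregex" name="URL Regex Indexing Filter"
             point="org.apache.nutch.indexer.IndexingFilter">
    <implementation id="UrlRegexIndexingFilter" class="com.example.nutch.indexer.UrlRegexIndexingFilter"/>
  </extension>
</plugin>
nutch-site.xml (example value only; start from the plugin.includes you already have and add index-urlregex so the plugin is loaded):
<property>
  <name>plugin.includes</name>
  <value>protocol-http|urlfilter-regex|parse-(html|tika)|index-(basic|anchor|urlregex)|indexer-solr|scoring-opic|urlnormalizer-(pass|regex|basic)</value>
</property>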

Nutch solrindex command not indexing all URLs in Solr

I have a Nutch index crawled from a specific domain and I am using the solrindex command to push the crawled data to my Solr index. The problem is that it seems that only some of the crawled URLs are actually being indexed in Solr. I had the Nutch crawl output to a text file so I can see the URLs that it crawled, but when I search for some of the crawled URLs in Solr I get no results.
Command I am using to do the Nutch crawl: bin/nutch crawl urls -dir crawl -depth 20 -topN 2000000
This command is completing successfully and the output displays URLs that I cannot find in the resulting Solr index.
Command I am using to push the crawled data to Solr: bin/nutch solrindex http://localhost:8983/solr/ crawl/crawldb crawl/linkdb crawl/segments/*
The output for this command says it is also completing successfully, so it does not seem to be an issue with the process terminating prematurely (which is what I initially thought it might be).
One final thing that I am finding strange is that the entire Nutch & Solr config is identical to a setup I used previously on a different server and I had no problems that time. It is literally the same config files copied onto this new server.
TL;DR: I have a set of URLs successfully crawled in Nutch, but when I run the solrindex command only some of them are pushed to Solr. Please help.
UPDATE: I've re-run all these commands and the output still insists it's all working fine. I've looked into any blockers for indexing that I can think of, but still no luck. The URLs being passed to Solr are all active and publicly accessible, so that's not an issue. I'm really banging my head against a wall here so would love some help.
I can only guess what happened based on my experience:
There is a component called the URL normalizer (for the regex normalizer plugin, configured in regex-normalize.xml) which truncates some URLs (removing URL parameters, session IDs, ...).
Additionally, Nutch enforces a uniqueness constraint: by default each URL is only saved once.
So if the normalizer truncates two or more URLs ('foo.jsp?param=value', 'foo.jsp?param=value2', 'foo.jsp?param=value3', ...) to exactly the same one ('foo.jsp'), they only get saved once, and Solr will only see a subset of all your crawled URLs.
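To check whether that is what's happening, look at the rules in conf/regex-normalize.xml (used by the urlnormalizer-regex plugin). Below is a sketch of the kind of rule that would collapse parameterized URLs; this particular pattern (strip the whole query string) is illustrative only, the stock file strips narrower things like session IDs:
<regex-normalize>
  <regex>
    <!-- illustrative only: foo.jsp?param=value and foo.jsp?param=value2
         both normalize to foo.jsp -->
    <pattern>\?.*$</pattern>
    <substitution></substitution>
  </regex>
</regex-normalize>
If a rule like this matches more than you intended, tighten or remove it so the distinct URLs stay distinct.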
cheers

Apache Nutch does not index the entire website, only subfolders

Apache Nutch 1.2 does not index the entire website, only subfolders. My index page provides links into most areas/subfolders of my website, for example stuff, students, research... But Nutch only crawls one specific folder - "students" in this case. It seems as if links in other directories are not followed.
crawl-urlfilter.txt:
+^http://www5.my-domain.de/
seed.txt in the URLs-folder:
http://www5.my-domain.de/
Starting Nutch with (Windows/Linux both used):
nutch crawl "D:\Programme\nutch-1.2\URLs" -dir "D:\Programme\nutch-1.2\crawl" -depth 10 -topN 1000000
Different variants for depth (5-23) and topN (100-1000000) were tested. Providing more links in seed.txt doesn't help at all; links found in the injected pages are still not followed.
Interestingly, crawling gnu.org works perfectly. My site uses no robots.txt and no blocking meta tags.
Any ideas?
While attempting to crawl all links from an index page, I discovered that Nutch was limited to exactly 100 links out of around 1000. The setting that was holding me back was:
db.max.outlinks.per.page
Setting this to 2000 allowed Nutch to index all of them in one shot.
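For example, as a nutch-site.xml override (2000 mirrors the value above; the stock default is 100, and -1 means no limit):
<property>
  <name>db.max.outlinks.per.page</name>
  <value>2000</value>
</property>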
Check whether you have an intra-domain links limitation enabled (the corresponding property should be false in nutch-site.xml). Also check other properties such as the maximum intra/external links per page and the HTTP content size limit; sometimes they produce wrong results during crawling.
Ciao!
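The properties that answer is most likely referring to, shown with their stock Nutch defaults as a hedged nutch-site.xml sketch:
<property>
  <name>db.ignore.external.links</name>
  <value>false</value>
  <!-- when true, outlinks that leave the current domain are ignored -->
</property>
<property>
  <name>http.content.limit</name>
  <value>65536</value>
  <!-- maximum bytes fetched per page; larger pages are truncated,
       which can cut off links near the bottom of big index pages -->
</property>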

solrindex way of mapping nutch schema to solr

We have several custom Nutch fields that the crawler picks up and indexes. Transferring these to Solr via solrindex (using the mapping file) appears to work: the log says everything is fine. However, the index on the Solr side does not reflect this.
Any help will be much appreciated,
Thanks,
Ashok
What I would do is use a tool like tcpmon to monitor exactly what Nutch is sending to Solr. By examining the XML payload, you can determine whether Nutch is correctly sending those custom fields to Solr. If Nutch is sending them correctly, something is going wrong on the Solr side; otherwise, re-check your Nutch code.
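While you're checking, two common culprits are worth ruling out: the custom fields must appear in conf/solrindex-mapping.xml, and the destination fields must also be declared in the Solr core's schema. A sketch of the mapping file, where myfield stands in for one of your custom fields (hypothetical name):
<mapping>
  <fields>
    <field dest="content" source="content"/>
    <field dest="title" source="title"/>
    <field dest="url" source="url"/>
    <field dest="myfield" source="myfield"/>
  </fields>
  <uniqueKey>id</uniqueKey>
</mapping>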
