Could anyone please give some guidance on how to properly configure Apache Nutch so that crawling a web site produces a decent number of records in the database? I would very much appreciate it!
Here are the details:
I've got the following line in my bin/urls/seed.txt file:
http://transmetod.ru/
The following is the line from the regex-urlfilter.txt file (all other regexps are commented out):
+^http://([a-z0-9]*\.)*transmetod.ru/([a-z0-9]*\.)*
Basically I expect lots of records to appear in the database as a result of crawling, but the only thing I got there is a single record with the base URL (without any further records for sublinks).
This is the command line I use to run the apache-nutch-2.1 project:
./nutch crawl urls -depth 3 -topN 10000
Can anyone point out the mistake I've made, or just give some advice?
P.S.: basically, when I built the project and ran it without any changes, I didn't get a bunch of records either... (if I remember things right)
Try changing your regex filter to:
+^http://([a-z0-9]*\.)*transmetod\.ru/
(note the escaped dots: with an unescaped `.` the group has to match one extra arbitrary character, so the bare seed URL http://transmetod.ru/ would be rejected)
Also, when you first run Nutch, it will crawl the URLs you put in your seed file.
The next time you run the crawl, using the same crawl folder, it should pick up the outlinks of the first page and crawl them.
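As a quick sanity check, the pattern can be tested against the seed URL and a subdomain URL with grep (the leading `+` in regex-urlfilter.txt is Nutch's include marker, not part of the regex, so it is dropped here):

```shell
# Both URLs should print: the group matches zero or more "label." prefixes.
pattern='^http://([a-z0-9]*\.)*transmetod\.ru/'
printf '%s\n' 'http://transmetod.ru/' 'http://www.transmetod.ru/page.html' \
  | grep -E "$pattern"
```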
I mainly followed the guide on this page. I installed Nutch 2.3, Cassandra 2.0, and Solr 4.10.3. Setup went well, but when I executed the following command, no URLs were fetched.
./bin/crawl urls/seed.txt TestCrawl http://localhost:8983/solr/ 2
Below are my settings.
nutch-site.xml
http://ideone.com/H8MPcl
regex-urlfilter.txt
+^http://([a-z0-9]*\.)*nutch.apache.org/
hadoop.log
http://ideone.com/LnpAw4
I don't see any errors in the log file. I am really lost. Any help would be appreciated. Thanks!
You will have to add a regex in regex-urlfilter.txt for the website that you want to crawl, so that it picks up the link you have added.
Right now it will only crawl "nutch.apache.org".
Try adding below line:
+^http://([a-z0-9]*\.)*ideone.com/
Try setting the Nutch logs to debug level and capture the logs while executing the crawl command.
They will show clearly why you are unable to crawl and index the site.
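For reference, a minimal fragment that raises Nutch's classes to debug level (assuming the stock conf/log4j.properties that ships with Nutch):

```properties
# conf/log4j.properties -- log fetch/filter decisions to hadoop.log at DEBUG
log4j.logger.org.apache.nutch=DEBUG
```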
Regards,
Jayesh Bhoyar
http://technical-fundas.blogspot.com/p/technical-profile.html
I ran into a similar problem recently. I think you can try the following steps to find the problem.
1. Do some tests to make sure the DB works well.
2. Instead of running the crawl in batch, call Nutch step by step and watch the log change as well as the DB content, in particular the new URLs.
3. Turn off Solr and focus on Nutch and the DB.
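For step 2, one cycle run by hand looks roughly like this (command names from the Nutch 2.x CLI; exact flags vary between 2.x releases, so treat this as a sketch):

```
bin/nutch inject urls          # seed the web table from urls/seed.txt
bin/nutch generate -topN 50    # select a batch of URLs to fetch
bin/nutch fetch -all           # fetch the generated batch(es)
bin/nutch parse -all           # parse fetched pages, extract outlinks
bin/nutch updatedb -all        # fold outlinks back into the web table
```

Check the DB (e.g. the Cassandra webpage table) after each step, especially updatedb, to see whether new URLs appeared.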
I am able to set up Apache Nutch and get the data indexed in Solr. While indexing, I am trying to make sure only modified pages get indexed. Below are the two questions we have regarding this.
1. Is it possible to tell Nutch to send an 'If-Modified-Since' header while crawling the site, and download a page only if it has changed since the last time it was crawled?
2. I can see that Nutch forms an MD5 digest of the retrieved page content, but even though the digest hasn't changed (compared to the previous version), it is still indexing the page in Solr. Is there any setting within Nutch to make sure a page whose content hasn't changed is not indexed in Solr?
Answering my own question here; hope it helps someone.
Once I set the AdaptiveFetchSchedule, I could see that Nutch was no longer pulling pages that hadn't changed. It honors the If-Modified-Since header.
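For anyone else looking: the schedule is switched in nutch-site.xml. A minimal fragment (property and class name taken from the stock nutch-default.xml):

```xml
<!-- nutch-site.xml: use the adaptive schedule so pages that come back
     unmodified are re-fetched less and less often -->
<property>
  <name>db.fetch.schedule.class</name>
  <value>org.apache.nutch.crawl.AdaptiveFetchSchedule</value>
</property>
```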
Let's say I have a Confluence instance, and I want to crawl it and store the results in Solr as part of an intranet search engine.
Now let's say I only want to store a subset of the pages (matching a regex) on the Confluence instance as part of the search engine.
But, I do want Nutch to crawl all the other pages, looking for links to pages that match—I just don't want Nutch to store them (or at least I don't want Solr to return them in the results).
What's the normal or least painful way to set Nutch->Solr up to work like this?
Looks like the only way to do this is to write your own IndexingFilter plugin (or find someone's to copy from).
[Will add my sample plugin code here when it's working properly]
References:
http://www.atlantbh.com/precise-data-extraction-with-apache-nutch/
http://florianhartl.com/nutch-plugin-tutorial.html
How to filter URLs in Nutch 2.1 solrindex command
I have a Nutch index crawled from a specific domain and I am using the solrindex command to push the crawled data to my Solr index. The problem is that it seems that only some of the crawled URLs are actually being indexed in Solr. I had the Nutch crawl output to a text file so I can see the URLs that it crawled, but when I search for some of the crawled URLs in Solr I get no results.
Command I am using to do the Nutch crawl: bin/nutch crawl urls -dir crawl -depth 20 -topN 2000000
This command is completing successfully and the output displays URLs that I cannot find in the resulting Solr index.
Command I am using to push the crawled data to Solr: bin/nutch solrindex http://localhost:8983/solr/ crawl/crawldb crawl/linkdb crawl/segments/*
The output for this command says it is also completing successfully, so it does not seem to be an issue with the process terminating prematurely (which is what I initially thought it might be).
One final thing I find strange: the entire Nutch & Solr config is identical to a setup I used previously on a different server, where I had no problems. It is literally the same config files copied onto this new server.
TL;DR: I have a set of URLs successfully crawled in Nutch, but when I run the solrindex command only some of them are pushed to Solr. Please help.
UPDATE: I've re-run all these commands and the output still insists it's all working fine. I've looked into any blockers for indexing that I can think of, but still no luck. The URLs being passed to Solr are all active and publicly accessible, so that's not an issue. I'm really banging my head against a wall here so would love some help.
I can only guess what happened, from my experience:
There is a component called the URL normalizer (configured via regex-normalize.xml) which truncates some URLs (removing URL parameters, session IDs, ...).
Additionally, Nutch uses a unique constraint: by default each URL is only saved once.
So, if the normalizer truncates two or more URLs ('foo.jsp?param=value', 'foo.jsp?param=value2', 'foo.jsp?param=value3', ...) to exactly the same one ('foo.jsp'), they only get saved once, and Solr will only see a subset of all your crawled URLs.
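The collapse is easy to illustrate outside Nutch (this sed one-liner is only a stand-in for the normalizer, not Nutch's actual code):

```shell
# Three distinct crawled URLs normalize to a single key once the query
# string is stripped, so a store with a unique-URL constraint keeps one row.
printf '%s\n' 'foo.jsp?param=value' 'foo.jsp?param=value2' 'foo.jsp?param=value3' \
  | sed 's/?.*//' \
  | sort -u
# -> foo.jsp
```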
cheers
Apache Nutch 1.2 does not index my entire website, only subfolders. My index page provides links into most areas/subfolders of my website, for example stuff, students, research... But Nutch only crawls one specific folder, "students" in this case. It seems as if links in other directories are not followed.
crawl-urlfilter.txt:
+^http://www5.my-domain.de/
seed.txt in the URLs-folder:
http://www5.my-domain.de/
Starting nutch with(windows/linux both used):
nutch crawl "D:\Programme\nutch-1.2\URLs" -dir "D:\Programme\nutch-1.2\crawl" -depth 10 -topN 1000000
Different variants of depth (5-23) and topN (100-1000000) were tested. Providing more links in seed.txt doesn't help at all; links found in injected pages are still not followed.
Interestingly, crawling gnu.org works perfectly. No robots.txt or blocking meta tags are used on my site.
Any ideas?
While attempting to crawl all links from an index page, I discovered that Nutch was limited to exactly 100 links out of around 1000. The setting that was holding me back was:
db.max.outlinks.per.page
Setting this to 2000 allowed Nutch to index all of them in one shot.
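In nutch-site.xml that looks like the following (100 is the stock default in nutch-default.xml):

```xml
<!-- nutch-site.xml: lift the per-page outlink cap (default is 100) -->
<property>
  <name>db.max.outlinks.per.page</name>
  <value>2000</value>
</property>
```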
Check whether you've got an intra-domain link limitation (the property set to false in nutch-site.xml). Also check other properties such as the maximum intra/extra links per page and the http content size limit. Sometimes they produce wrong results during crawling.
Ciao!