Version 7 (modified by waue, 13 years ago)
Nutch 1.3
[intro]
7 June 2011 - Apache Nutch 1.3 Released
[get]
cd /opt/nutch-1.3
ant
[setup]
deploy
You can copy bin/nutch and nutch-1.3.job into your Hadoop installation to integrate Nutch with it.
local
cd /opt/nutch-1.3/runtime/local
- bin/nutch (add the following line)
export JAVA_HOME="/usr/lib/jvm/java-6-sun"
- conf/nutch-site.xml (add the following)
<configuration>
  <property>
    <name>http.agent.name</name>
    <value>waue_test</value>
  </property>
  <property>
    <name>plugin.includes</name>
    <value>protocol-http|urlfilter-regex|parse-(html|tika)|index-(basic|anchor)|scoring-opic|urlnormalizer-(pass|regex|basic)</value>
  </property>
  <property>
    <name>http.robots.agents</name>
    <value>nutch</value>
  </property>
  <property>
    <name>http.agent.url</name>
    <value>waue_test</value>
  </property>
  <property>
    <name>http.agent.email</name>
    <value>waue_test</value>
  </property>
  <property>
    <name>http.agent.version</name>
    <value>waue_test</value>
  </property>
</configuration>
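Nutch refuses to fetch anything when http.agent.name is empty, so it is worth checking this file before crawling. Below is a minimal sketch, in Python rather than Nutch's own Java, of reading that property out of the configuration XML; the inline NUTCH_SITE string is an abridged copy of the config above, and in practice you would read conf/nutch-site.xml from disk (the helper name agent_name is ours, not part of Nutch).

```python
# Sketch: confirm nutch-site.xml defines a non-empty http.agent.name,
# since Nutch aborts a fetch when it is missing.
import xml.etree.ElementTree as ET

# Abridged copy of the nutch-site.xml shown above (assumption: in real
# use you would load conf/nutch-site.xml instead of this string).
NUTCH_SITE = """<configuration>
  <property>
    <name>http.agent.name</name>
    <value>waue_test</value>
  </property>
</configuration>"""

def agent_name(xml_text):
    """Return the value of http.agent.name, or None if unset or empty."""
    root = ET.fromstring(xml_text)
    for prop in root.findall("property"):
        if prop.findtext("name") == "http.agent.name":
            value = (prop.findtext("value") or "").strip()
            return value or None
    return None

print(agent_name(NUTCH_SITE))  # -> waue_test
```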
- conf/regex-urlfilter.txt (replace contents; in Nutch 1.2 this file was conf/crawl-urlfilter.txt)
-^(file|ftp|mailto):
# skip image and other suffixes we can't yet parse
-\.(gif|GIF|jpg|JPG|png|PNG|ico|ICO|css|sit|eps|wmf|zip|ppt|mpg|xls|gz|rpm|tgz|mov|MOV|exe|jpeg|JPEG|bmp|BMP)$
# skip URLs containing certain characters as probable queries, etc.
-[*!]
# skip URLs with slash-delimited segment that repeats 3+ times, to break loops
#-.*(/[^/]+)/[^/]+\1/[^/]+\1/
# accept anything else
+.
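The filter file is evaluated top-down: the first rule whose regex matches the URL decides its fate (`-` rejects, `+` accepts), and a URL matching no rule is rejected. A small Python sketch of that first-match-wins behavior, using an abridged copy of the rules above (the RULES list and accept helper are illustrative, not Nutch code):

```python
# Sketch of Nutch's regex-urlfilter semantics: rules are tried in
# order; the first matching pattern decides ('-' rejects, '+' accepts),
# and URLs matching no rule are rejected.
import re

# Abridged version of the rule list above.
RULES = [
    ("-", re.compile(r"^(file|ftp|mailto):")),            # skip non-http schemes
    ("-", re.compile(r"\.(gif|GIF|jpg|JPG|png|PNG|exe)$")),  # skip unparsable suffixes
    ("-", re.compile(r"[*!]")),                           # skip probable queries
    ("+", re.compile(r".")),                              # accept anything else
]

def accept(url):
    """Apply the rules top-down; the first match wins."""
    for sign, pattern in RULES:
        if pattern.search(url):
            return sign == "+"
    return False  # no rule matched: reject

print(accept("http://lucene.apache.org/nutch/"))   # -> True
print(accept("ftp://example.org/archive"))          # -> False
print(accept("http://example.org/logo.gif"))        # -> False
```

Because ordering matters, reject rules for schemes and file suffixes must come before the catch-all `+.` line, exactly as in the file above.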
[execute]
mkdir urls
echo "http://lucene.apache.org/nutch/" > urls/url.txt
bin/nutch crawl urls -dir crawl2 -depth 2 -topN 50
- Finally, you will get three directories:
crawldb linkdb segments