Last change on this file since 51 was 51, checked in by jazz, 16 years ago:

- Modified control:
  - Found that depending only on sun-java6-jre causes errors such as:
    Error: could not open
    `/usr/lib/jvm/java-6-sun-1.6.0.07/jre/lib/i386/jvm.cfg'
- hadoop.postinst, hadoop.postrm, hadoop.prerm:
  - Used to create the hdfsadm user account, run ssh-keygen,
    exchange SSH keys (authorized_keys), format the namenode,
    and start the namenode, datanode, tasktracker, etc.
File size: 1.5 KB
Source: hadoop
Section: devel
Priority: extra
Maintainer: Jazz Yao-Tsung Wang <jazzwang.tw@gmail.com>
Build-Depends: debhelper (>= 5)
Standards-Version: 3.7.2

Package: hadoop
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}, sun-java6-jre, sun-java6-bin
Suggests: sun-java6-jdk
Description: Apache Hadoop Core
 .
 Apache Hadoop Core is a software platform that lets one easily write and
 run applications that process vast amounts of data.
 .
 Here's what makes Hadoop especially useful:
  * Scalable: Hadoop can reliably store and process petabytes.
  * Economical: It distributes the data and processing across clusters of
    commonly available computers. These clusters can number into
    the thousands of nodes.
  * Efficient: By distributing the data, Hadoop can process it in parallel on
    the nodes where the data is located. This makes it extremely
    rapid.
  * Reliable: Hadoop automatically maintains multiple copies of data and
    automatically redeploys computing tasks based on failures.
 .
 Hadoop implements MapReduce, using the Hadoop Distributed File System (HDFS).
 MapReduce divides applications into many small blocks of work. HDFS creates
 multiple replicas of data blocks for reliability, placing them on compute
 nodes around the cluster. MapReduce can then process the data where it is
 located.
 .
 For more information about Hadoop, please see the Hadoop website.
 http://hadoop.apache.org/
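The maintainer-script steps described in the change log (create the hdfsadm account, generate and exchange SSH keys, format the namenode, start the daemons) could be sketched roughly as follows. This is a hypothetical sketch, not the actual hadoop.postinst from this package: the DRY_RUN wrapper, the home-directory path, the adduser flags, and the hadoop-daemon.sh invocation are all assumptions.

```shell
#!/bin/sh
# Hypothetical sketch of what a postinst along these lines might do.
# DRY_RUN defaults to 1 so the sketch only prints each command; the real
# maintainer script would execute them as root during package install.
set -e

HDFS_USER=hdfsadm
HDFS_HOME=/home/$HDFS_USER   # assumed home path
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "$@"            # dry run: print the command instead of executing
    else
        "$@"
    fi
}

# 1. Create the account that owns the Hadoop daemons.
run adduser --system --group --home "$HDFS_HOME" "$HDFS_USER"

# 2. Generate a passwordless SSH key and authorize it, so the start/stop
#    scripts can ssh between nodes without a password (key exchange).
run ssh-keygen -t rsa -N "" -f "$HDFS_HOME/.ssh/id_rsa"
run cp "$HDFS_HOME/.ssh/id_rsa.pub" "$HDFS_HOME/.ssh/authorized_keys"

# 3. Format HDFS on first installation.
run su - "$HDFS_USER" -c "hadoop namenode -format"

# 4. Start the daemons listed in the change log.
for daemon in namenode datanode tasktracker; do
    run su - "$HDFS_USER" -c "hadoop-daemon.sh start $daemon"
done
```

Run as-is (DRY_RUN=1) the sketch only prints the commands; a real postinst would also need idempotence guards so that package upgrades do not re-format the namenode or re-create the account.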