wiki:waue/2011/chukwa


Environment: hadoop 0.21.1 and chukwa 0.4. ub1 is the jobtracker / namenode; ub2 is the chukwa server / mysql server.

The Chukwa hadoop cluster (CC)
The monitored source nodes (SN)
The monitored source nodes set up as a hadoop cluster (SN-C)

To be safe, create the directories /chukwa/ and /tmp/chukwa on both CC and SN.
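A minimal sketch of creating those directories (assumes a sudo-capable account; the ownership change is an assumption, adjust it to whichever user runs chukwa):

$ sudo mkdir -p /chukwa /tmp/chukwa
$ sudo chown -R $USER /chukwa /tmp/chukwa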

sudo apt-get install sysstat
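A quick sanity check that the sysstat tools are available after the install (sar takes one 1-second CPU sample here):

$ sar 1 1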

conf/ files to configure on CC

alert

The e-mail address that receives Chukwa alert notifications:

waue@…

chukwa-collector-conf.xml

<property>
  <name>writer.hdfs.filesystem</name>
  <value>hdfs://ub2:9000/</value>
  <description>HDFS to dump to</description>
</property>

<property>
  <name>chukwaCollector.outputDir</name>
  <value>/chukwa/logs/</value>
  <description>Chukwa data sink directory</description>
</property>

<property>
  <name>chukwaCollector.http.port</name>
  <value>8080</value>
  <description>The HTTP port number the collector will listen on</description>
</property>
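The collector writes its sink files under the outputDir above, on the HDFS named by writer.hdfs.filesystem. A hedged sketch for creating that path ahead of time, using the paths and URL from this page:

$ /opt/hadoop/bin/hadoop fs -mkdir hdfs://ub2:9000/chukwa/logs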

jdbc.conf

Rename jdbc.conf.template to jdbc.conf and set:

demo=jdbc:mysql://ub2:3306/test?user=root
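Once the test database has been created on ub2 (see the CC section below), a hedged check that this JDBC URL is reachable from the collector host (assumes the mysql root account is allowed to connect from this machine):

$ mysql -h ub2 -u root test -e "SHOW TABLES;"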

nagios.properties

log4j.appender.NAGIOS.Host=ub2

The following files can be left at their default values:

aggregator.sql, chukwa-demux-conf.xml, chukwa-log4j.properties, commons-logging.properties, database_create_tables.sql, log4j.properties, mdl.xml

conf files to configure on both CC and SN

chukwa-env.sh

export JAVA_HOME=/usr/lib/jvm/java-6-sun
export HADOOP_HOME="/opt/hadoop"
export HADOOP_CONF_DIR="/opt/hadoop/conf"

conf files to configure on SN

agents

agents.template ==> agents

ub1
ub2

chukwa-agent-conf.xml

chukwa-agent-conf.xml.template ==> chukwa-agent-conf.xml

<property>
  <name>chukwaAgent.tags</name>
  <value>cluster="wauegroup"</value>
  <description>The cluster's name for this agent</description>
</property>

<property>
  <name>chukwaAgent.hostname</name>
  <value>localhost</value>
  <description>The hostname of the agent on this node. Usually localhost, this is used by the chukwa instrumentation agent-control interface library</description>
</property>

collectors

localhost

initial_adaptors

cp initial_adaptors.template initial_adaptors

CC

Install mysql, php, phpmyadmin, and apache2.

Create a database named test and import conf/database_create_tables.sql into it.
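A hedged sketch of those two steps from the shell (mysql prompts for the root password; /opt/chukwa matches the install path used elsewhere on this page):

$ mysql -u root -p -e "CREATE DATABASE test;"
$ mysql -u root -p test < /opt/chukwa/conf/database_create_tables.sql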

log4j.properties

$ vim hadoop/conf/log4j.properties

log4j.appender.DRFA=org.apache.log4j.net.SocketAppender
log4j.appender.DRFA.RemoteHost=ub2
log4j.appender.DRFA.Port=9096
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

$ cd /opt/hadoop
$ cp /opt/chukwa/conf/hadoop-metrics.properties.template conf/hadoop-metrics.properties
$ cp conf/hadoop-metrics.properties conf/hadoop-metrics
$ cp /opt/chukwa/chukwa-hadoop-0.4.0-client.jar ./
$ cp /opt/chukwa/chukwa-hadoop-0.4.0-client.jar ./lib/

Configuring and starting the Collector

  1. Copy conf/chukwa-collector-conf.xml.template to conf/chukwa-collector-conf.xml
  2. Edit conf/chukwa-collector-conf.xml and comment out the default properties for chukwaCollector.writerClass and chukwaCollector.pipeline. Uncomment the block for the HBaseWriter parameters, and save.
  3. If you're running HBase in distributed mode, copy your hbase-site.xml file to the collector's conf/ directory. At a minimum, this file must contain a setting for hbase.zookeeper.quorum.
  4. Copy conf/chukwa-env.sh-template to conf/chukwa-env.sh.
  5. Edit chukwa-env.sh. You almost certainly need to set JAVA_HOME, HADOOP_HOME, HADOOP_CONF_DIR, HBASE_HOME, and HBASE_CONF_DIR at least.
  6. In the chukwa root directory, run bash bin/chukwa collector
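Once the collector is up, a quick way to confirm it is listening on the HTTP port configured earlier (8080 on ub2 in this setup); just a sketch using netcat:

$ nc -z ub2 8080 && echo "collector is listening"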

Configuring and starting the local agent

  1. Copy conf/chukwa-agent-conf.xml.template to conf/chukwa-agent-conf.xml
  2. Copy conf/collectors.template to conf/collectors
  3. In the chukwa root directory, run bash bin/chukwa agent

Starting Adaptors

The local agent speaks a simple text-based protocol, by default over port 9093. Suppose you want Chukwa to monitor system metrics, hadoop metrics, and hadoop logs on the localhost:

  1. Telnet to localhost 9093
  2. Type [without quotation marks] "add org.apache.hadoop.chukwa.datacollection.adaptor.sigar.SystemMetrics SystemMetrics 60 0"
  3. Type [without quotation marks] "add SocketAdaptor HadoopMetrics 9095 0"
  4. Type [without quotation marks] "add SocketAdaptor Hadoop 9096 0"
  5. Type "list" -- you should see the adaptors you just started, listed as running.
  6. Type "close" to break the connection.

If you don't have telnet, you can get the same effect using the netcat (nc) command line tool.
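For example, a sketch that registers one of the adaptors above and lists the result over nc instead of telnet:

$ printf 'add SocketAdaptor HadoopMetrics 9095 0\nlist\nclose\n' | nc localhost 9093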

Set Up Cluster Aggregation Script

For data analytics with Pig, some additional environment setup is required. Pig does not use the same environment variable names as Hadoop, so make sure the following are set correctly:

  1. export PIG_CLASSPATH=$HADOOP_CONF_DIR:$HBASE_CONF_DIR
  2. Set up a cron job to run "pig -Dpig.additional.jars=${HBASE_HOME}/hbase-0.20.6.jar:${PIG_PATH}/pig.jar ${CHUKWA_HOME}/script/pig/ClusterSummary.pig" periodically (a sketch follows this list)
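A hedged sketch of wiring this up with cron through a small wrapper script. The script name, the 30-minute interval, and the HBASE_HOME/PIG_PATH locations are assumptions; only /opt/chukwa and /opt/hadoop/conf come from this page.

#!/bin/sh
# /opt/chukwa/ClusterSummary-cron.sh -- hypothetical wrapper; adjust the paths marked as assumptions
export CHUKWA_HOME=/opt/chukwa
export HADOOP_CONF_DIR=/opt/hadoop/conf
export HBASE_HOME=/opt/hbase     # assumption
export HBASE_CONF_DIR=$HBASE_HOME/conf
export PIG_PATH=/opt/pig         # assumption
export PATH=$PIG_PATH/bin:$PATH
export PIG_CLASSPATH=$HADOOP_CONF_DIR:$HBASE_CONF_DIR
pig -Dpig.additional.jars=${HBASE_HOME}/hbase-0.20.6.jar:${PIG_PATH}/pig.jar ${CHUKWA_HOME}/script/pig/ClusterSummary.pig

Crontab entry (every 30 minutes, as an example):

*/30 * * * * /opt/chukwa/ClusterSummary-cron.sh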

Set Up HICC

The Hadoop Infrastructure Care Center (HICC) is the Chukwa web user interface. To set up HICC, do the following:

  1. bin/chukwa hicc

Data visualization

  1. Point a web browser to http://localhost:4080/hicc/jsp/graph_explorer.jsp
  2. The default user name and password are "demo" (without quotes).
  3. System metrics collected by the Chukwa collector can be browsed through graph_explorer.jsp.