Since the author is broke and only has two shabby machines to work with, this walkthrough uses just one master and one slave. The tools and installation steps were already covered in "Server端 單一叢集式Hadoop安裝" (the single-node Hadoop install post), so we jump straight to the internal network configuration.
vi /etc/sysconfig/network-scripts/ifcfg-eth0
ONBOOT=yes
BOOTPROTO=none #disable DHCP
IPADDR=192.168.70.101 #static IP of this host; pick your own
NETMASK=255.255.255.0 #standard /24 netmask
GATEWAY=192.168.70.1 #my router's gateway
vi /etc/sysconfig/network
HOSTNAME=master01
vi /etc/hosts
192.168.70.101 master01
192.168.70.102 slave01
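Every node needs the same name-to-IP mapping, so with more machines it pays to script it. A minimal sketch that generates the entries above (writing to a temp file rather than the live /etc/hosts; the node list and IP base mirror this two-machine setup):

```shell
# Sketch: emit /etc/hosts entries for the cluster nodes.
# Node names and the 192.168.70.x base mirror the two-machine setup above.
HOSTS_FILE=$(mktemp)    # write to a temp file, not the live /etc/hosts
i=101
for node in master01 slave01; do
    echo "192.168.70.$i $node" >> "$HOSTS_FILE"
    i=$((i + 1))
done
cat "$HOSTS_FILE"
```

The same loop can append the entries to each node's real /etc/hosts once the output looks right.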
Restart the network service
service network restart
Install the Java JDK
rpm -ivh jdk-7u65-linux-x64.rpm
java -version
Set the Java paths
vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.7.0_65
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=$JAVA_HOME/jre/lib/ext:$JAVA_HOME/lib/tools.jar
source /etc/profile
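A typo in any of those exports only surfaces later as a confusing Hadoop startup failure, so a quick pre-flight check is worth it. A sketch (the `check_paths` helper is hypothetical, not part of the install; the JDK paths are the ones set above):

```shell
# Sketch: pre-flight check that paths referenced in /etc/profile exist.
# check_paths (a hypothetical helper) prints missing entries and
# returns non-zero if any are absent.
check_paths() {
    missing=0
    for p in "$@"; do
        if [ ! -e "$p" ]; then
            echo "missing: $p"
            missing=1
        fi
    done
    return $missing
}
# On the freshly installed master these should all exist:
check_paths /usr/java/jdk1.7.0_65/bin/java \
            /usr/java/jdk1.7.0_65/lib/tools.jar || echo "fix JAVA_HOME first"
```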
Install Hadoop 2.2.0
cp -r /mnt/Hadoop/hadoop-2.2.0 /opt/hadoop
Set the Hadoop paths
vi ~/.bashrc
export HADOOP_HOME=/opt/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source ~/.bashrc
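Note that every variable in ~/.bashrc above derives from the single HADOOP_HOME, so relocating the install later only means changing that one line. A sketch of the derivation:

```shell
# Sketch: all the Hadoop variables hang off one HADOOP_HOME.
HADOOP_HOME=/opt/hadoop
HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
YARN_CONF_DIR=$HADOOP_CONF_DIR
PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
echo "$HADOOP_CONF_DIR"
```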
Check the Hadoop version
hadoop version
Create the Hadoop temp directory
mkdir -p $HADOOP_HOME/tmp
Point Hadoop at the Java installation
vi /opt/hadoop/libexec/hadoop-config.sh
export JAVA_HOME="/usr/java/jdk1.7.0_65"
vi /opt/hadoop/etc/hadoop/slaves
master01
slave01
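master01 appears in the slaves file because with only two machines it has to double as a worker (DataNode/NodeManager) alongside slave01. A sketch that builds the file from a worker list (the temp path stands in for /opt/hadoop/etc/hadoop/slaves):

```shell
# Sketch: build the slaves file from a worker list; master01 is listed
# because with only two machines it doubles as a worker node.
SLAVES_FILE=$(mktemp)   # stand-in for /opt/hadoop/etc/hadoop/slaves
for node in master01 slave01; do
    echo "$node"
done > "$SLAVES_FILE"
cat "$SLAVES_FILE"
```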
vi /opt/hadoop/etc/hadoop/hadoop-env.sh
export JAVA_HOME="/usr/java/jdk1.7.0_65"
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib"
vi /opt/hadoop/etc/hadoop/yarn-env.sh
export JAVA_HOME="/usr/java/jdk1.7.0_65"
vi /opt/hadoop/etc/hadoop/core-site.xml
<property>
<name>fs.default.name</name>
<value>hdfs://master01:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/hadoop/tmp</value>
</property>
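The same name/value pattern repeats in every Hadoop config file, so generating them with a heredoc keeps edits scriptable and reproducible across nodes. A sketch that writes a minimal core-site.xml to a temp file (not the live config) and does a crude balance check:

```shell
# Sketch: generate a minimal core-site.xml and sanity-check it.
CORE_SITE=$(mktemp)     # stand-in for /opt/hadoop/etc/hadoop/core-site.xml
cat > "$CORE_SITE" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master01:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/tmp</value>
  </property>
</configuration>
EOF
# crude check: every <property> has a matching </property>
opens=$(grep -c '<property>' "$CORE_SITE")
closes=$(grep -c '</property>' "$CORE_SITE")
[ "$opens" -eq "$closes" ] && echo "core-site.xml looks balanced"
```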
vi /opt/hadoop/etc/hadoop/hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
cp /opt/hadoop/etc/hadoop/mapred-site.xml.template /opt/hadoop/etc/hadoop/mapred-site.xml
vi /opt/hadoop/etc/hadoop/mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
vi /opt/hadoop/etc/hadoop/yarn-site.xml
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master01</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
Disable the firewall
service iptables stop
chkconfig iptables off
Disable host-key checking when connecting to new hosts
vi /etc/ssh/ssh_config
#find the StrictHostKeyChecking line, remove the comment, and change it to no
StrictHostKeyChecking no
service sshd restart
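Turning StrictHostKeyChecking off globally weakens protection against man-in-the-middle attacks for every outgoing connection. A hedged alternative is to scope it to the cluster hosts only; a sketch (the Host patterns are assumptions for this subnet):

```
Host master01 slave01 192.168.70.*
    StrictHostKeyChecking no
```

This can go in /etc/ssh/ssh_config or a user's ~/.ssh/config; all other destinations keep the default checking.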
Generate an SSH key pair
ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ""
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh master01
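The -f and -P flags are what make ssh-keygen non-interactive, which matters when scripting this across many nodes. A sketch that exercises the same flags against a scratch directory, so the real ~/.ssh is left alone:

```shell
# Sketch: non-interactive key generation into a scratch directory
# (the real ~/.ssh is untouched).
KEYDIR=$(mktemp -d)
ssh-keygen -q -t rsa -f "$KEYDIR/id_rsa" -P ""
cat "$KEYDIR/id_rsa.pub" >> "$KEYDIR/authorized_keys"
ls "$KEYDIR"
```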
Copy Hadoop to slave01
scp -rp /opt/hadoop slave01:/opt
Copy the public key to slave01
ssh-copy-id -i ~/.ssh/id_rsa.pub root@slave01
Try logging in to slave01
ssh slave01
Log out
exit
Start Hadoop (format the NameNode only the first time; on Hadoop 2.x, `hdfs namenode -format` is the non-deprecated equivalent of the command below)
hadoop namenode -format
start-all.sh
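After start-all.sh, `jps` on master01 should show NameNode, SecondaryNameNode and ResourceManager, plus DataNode and NodeManager since master01 also works as a slave; slave01 should show only DataNode and NodeManager. A sketch of a check that can be pointed at real jps output (the `check_daemons` helper is hypothetical; the daemon list reflects this two-node layout):

```shell
# Sketch: verify expected Hadoop daemons appear in jps-style output.
# check_daemons (hypothetical helper) reads process names on stdin and
# prints any expected daemon that is missing.
check_daemons() {
    input=$(cat)
    rc=0
    for d in "$@"; do
        echo "$input" | grep -qw "$d" || { echo "missing: $d"; rc=1; }
    done
    return $rc
}
# Real use on the master would be:
#   jps | check_daemons NameNode SecondaryNameNode ResourceManager DataNode NodeManager
printf '1234 NameNode\n2345 DataNode\n' | check_daemons NameNode DataNode \
    && echo "daemons up"
```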
Open the Hadoop web UIs
http://192.168.70.101:8088 (YARN ResourceManager)
http://192.168.70.101:50070 (HDFS NameNode)