
Hadoop Cluster Installation (1)

August 10, 2013 · Cloud Computing

1. Machine Configuration

(1) Machine planning

master (NameNode, JobTracker)    192.168.100.123   node14

slave1 (DataNode, TaskTracker)   192.168.100.124   node15

slave2 (DataNode, TaskTracker)   192.168.100.125   node16
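
The cluster refers to these machines by hostname, so node14/node15/node16 must be resolvable on every node. A minimal sketch, assuming /etc/hosts is used for name resolution (DNS would work just as well); run as root on all three machines:

[root@node14 ~]# cat >> /etc/hosts <<EOF
192.168.100.123   node14
192.168.100.124   node15
192.168.100.125   node16
EOF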

(2) Add the hadoop user

On each of the three machines, create the hadoop user by running groupadd hadoop followed by useradd -g hadoop hadoop.
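
A possible command sequence, run as root on each node (setting a password for the new account is an extra step, not stated above):

[root@node14 ~]# groupadd hadoop
[root@node14 ~]# useradd -g hadoop hadoop
[root@node14 ~]# passwd hadoop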

(3) NFS setup

As the root user, configure an NFS server on the master and export the /home directory;

on the slaves, mount the master's /home onto the local /home.
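
A minimal sketch of one common NFS setup; the export options and exact commands are assumptions, not taken from the original text. On the master (as root):

[root@node14 ~]# echo '/home 192.168.100.0/24(rw,sync,no_root_squash)' >> /etc/exports
[root@node14 ~]# exportfs -ra

On each slave (as root):

[root@node15 ~]# mount -t nfs node14:/home /home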

(4) Passwordless SSH (for the hadoop user, on node14)

ssh-keygen -t rsa  
cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys 	
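
Because /home is shared over NFS, this single authorized_keys file is visible on every node, so one key pair covers the whole cluster. sshd is strict about permissions, so the chmod calls below are a common precaution (an addition to the steps above), followed by a quick test:

[hadoop@node14 ~]$ chmod 700 ~/.ssh
[hadoop@node14 ~]$ chmod 600 ~/.ssh/authorized_keys
[hadoop@node14 ~]$ ssh node15 hostname
node15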

(5) Directory layout

~/soft

~/program

~/study
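
These can be created in one step (soft holds downloaded packages and program holds installed software; that reading is an assumption, not spelled out above):

[hadoop@node14 ~]$ mkdir -p ~/soft ~/program ~/study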

2. Install the JDK (on the master, as the hadoop user)

(1) Extract the JDK
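
A possible extraction sequence, assuming the JDK sits in ~/soft as Sun's self-extracting installer (the exact file name is an assumption):

[hadoop@node14 ~]$ cp soft/jdk-6u22-linux-x64.bin program/
[hadoop@node14 ~]$ cd program
[hadoop@node14 program]$ chmod +x jdk-6u22-linux-x64.bin
[hadoop@node14 program]$ ./jdk-6u22-linux-x64.bin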

(2) Configure environment variables

[hadoop@node14 ~]$ vi .bashrc
export JAVA_HOME=/home/hadoop/program/jdk1.6.0_22
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$CLASSPATH
[hadoop@node14 ~]$ source .bashrc
[hadoop@node14 ~]$ which java
~/program/jdk1.6.0_22/bin/java
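
A quick sanity check that the shell now picks up the new JDK (only the first line of output is shown; the exact build string may differ):

[hadoop@node14 ~]$ java -version
java version "1.6.0_22"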

3. Install Hadoop 0.21 (on the master, as the hadoop user)

(1) Extract under ~/program

[hadoop@node14 ~]$ cp soft/hadoop-0.21.0.tar.gz program/

[hadoop@node14 program]$ tar -zxvf hadoop-0.21.0.tar.gz

(2) Configure environment variables

[hadoop@node14 ~]$ vi .bashrc

export HADOOP_HOME=/home/hadoop/program/hadoop-0.21.0

[hadoop@node14 ~]$ source .bashrc
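
It is also convenient, though not required by the steps above, to put the Hadoop scripts on the PATH in the same .bashrc:

export PATH=$HADOOP_HOME/bin:$PATH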

(3) Configure the Hadoop environment (hadoop-env.sh)

[hadoop@node14 hadoop-0.21.0]$ vi conf/hadoop-env.sh

export JAVA_HOME=/home/hadoop/program/jdk1.6.0_22

export HADOOP_LOG_DIR=${HADOOP_HOME}/logs

(4) Edit the masters and slaves files

[hadoop@node14 hadoop-0.21.0]$ cat conf/masters

node14

[hadoop@node14 hadoop-0.21.0]$ cat conf/slaves

node15

node16

Notes:

The remaining steps are:

(1) Configure conf/core-site.xml, conf/hdfs-site.xml, and conf/mapred-site.xml (a minimal sketch follows this list)

(2) Other settings such as memory limits, node heartbeats, and logging can be left unconfigured (the defaults are used)

(3) Create a tarball of /home/hadoop/program/hadoop-0.21.0 and scp it to the other slave nodes (also shown in the sketch after this list)

(4) scp the .bashrc file to the other slave nodes and source .bashrc there
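
A hedged sketch of step (1): minimal contents for the three site files, assuming the NameNode listens on node14:9000, the JobTracker on node14:9001, and a replication factor of 2 for the two DataNodes. The property names below follow the pre-0.21 style, which 0.21 should still accept (it only prints deprecation warnings):

[hadoop@node14 hadoop-0.21.0]$ cat conf/core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://node14:9000</value>
  </property>
</configuration>

[hadoop@node14 hadoop-0.21.0]$ cat conf/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>

[hadoop@node14 hadoop-0.21.0]$ cat conf/mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>node14:9001</value>
  </property>
</configuration>

For steps (3) and (4), one possible command sequence (the tarball name and target paths are assumptions):

[hadoop@node14 program]$ tar -zcvf hadoop-0.21.0-configured.tar.gz hadoop-0.21.0
[hadoop@node14 program]$ scp hadoop-0.21.0-configured.tar.gz node15:~/program/
[hadoop@node14 program]$ scp hadoop-0.21.0-configured.tar.gz node16:~/program/
[hadoop@node14 ~]$ scp ~/.bashrc node15:~/ && scp ~/.bashrc node16:~/

Then, on each slave, unpack the tarball under ~/program and run source ~/.bashrc.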
