
Setting Up a Hadoop Cluster on VMware


I. Preparation

1. Install VMware_Workstation_wmb.

2. Install three CentOS-6.3-i386-bin-DVD1 virtual machines:

      Master  192.168.66.174

      Slave1  192.168.66.171

      Slave2  192.168.66.173

II. Installation Steps

(Set each machine's hostname during the CentOS install, so you won't have to change it afterwards.)

1. On every machine, add the following to /etc/hosts:

127.0.0.1 localhost

192.168.66.174 master

192.168.66.171 slave1

192.168.66.173 slave2
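
To confirm that these entries resolve, a quick check from any of the nodes (nothing here beyond the hostnames just defined):

ping -c 1 master
ping -c 1 slave1
ping -c 1 slave2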

2. Install Java on every machine:

Add the following to /etc/profile:

export JAVA_HOME=/usr/local/java/jdk1.6.0_45

export PATH=$JAVA_HOME/bin:/sbin:/usr/bin:/usr/sbin:/bin

export CLASSPATH=.:$JAVA_HOME/lib
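
Then reload the profile and confirm the JDK is picked up (assuming the JDK really was unpacked to the path above):

source /etc/profile
java -version        # should report 1.6.0_45
echo $JAVA_HOME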

3. Set up passwordless SSH login:

   On every machine:

  
ssh-keygen -t rsa

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

chmod 700 ~/.ssh

chmod 644 ~/.ssh/authorized_keys

Edit the sshd configuration file /etc/ssh/sshd_config and remove the leading # from the line #AuthorizedKeysFile  .ssh/authorized_keys.
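
If you prefer a one-liner over hand-editing, something like the following sed command does the same thing (it simply strips the leading # from that line; back the file up first):

sed -i 's/^#AuthorizedKeysFile/AuthorizedKeysFile/' /etc/ssh/sshd_config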

On master:

ssh-copy-id -i ~/.ssh/id_rsa.pub root@slave1

ssh-copy-id -i ~/.ssh/id_rsa.pub root@slave2

Restart sshd:

service sshd restart

On slave1:

ssh-copy-id -i ~/.ssh/id_rsa.pub root@master

ssh-copy-id -i ~/.ssh/id_rsa.pub root@slave2

Restart sshd:

service sshd restart

On slave2:

ssh-copy-id -i ~/.ssh/id_rsa.pub root@slave1

ssh-copy-id -i ~/.ssh/id_rsa.pub root@master

Restart sshd:

service sshd restart

Then verify with ssh slave1, ssh slave2, and ssh master from each machine.

** If you hit "Agent admitted failure to sign using the key":
Fix it by loading the private key with ssh-add (change id_rsa if your key file is named differently):
# ssh-add ~/.ssh/id_rsa

** If you hit "ssh: connect to host master port 22: No route to host":
This is an IP address problem; check /etc/hosts.
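
Once everything is set up, a quick loop run from each machine confirms passwordless login works in every direction (hostnames as defined in /etc/hosts):

for h in master slave1 slave2; do ssh root@$h hostname; done   # should print all three hostnames with no password prompts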

4. Configure Hadoop

   Download Hadoop, then distribute and unpack it:

scp hadoop-1.2.0.tar.gz root@slave1:/usr/local

tar xzvf hadoop-1.2.0.tar.gz

mv hadoop-1.2.0 /usr/local/hadoop

Add to /etc/profile (or ~/.bashrc):

 export HADOOP_HOME=/usr/local/hadoop

 export PATH=$PATH:$HADOOP_HOME/bin
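
Reload the profile and confirm the hadoop command is found:

source /etc/profile
hadoop version       # should print Hadoop 1.2.0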

Configure hadoop-env.sh:

    Set JAVA_HOME.
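
For example, uncomment and set the JAVA_HOME line in conf/hadoop-env.sh, using the JDK path from step 2:

export JAVA_HOME=/usr/local/java/jdk1.6.0_45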

Configure core-site.xml:

<property>

       <name>fs.default.name</name>

       <value>hdfs://master:9000</value>

       <final>true</final>

</property>

<property>

       <name>hadoop.tmp.dir</name>

       <value>/home/hadoop/hadoop/data/hdfs/tmp</value>

</property>
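
Hadoop can normally create hadoop.tmp.dir on its own, but pre-creating it on every node makes the ownership explicit and avoids permission surprises:

mkdir -p /home/hadoop/hadoop/data/hdfs/tmp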

Configure mapred-site.xml:

<property>

       <name>mapred.job.tracker</name>

       <value>master:9001</value>

</property>

Configure hdfs-site.xml:

<!-- on master -->

<property>

       <name>dfs.name.dir</name>

       <value>/home/hadoop/hadoop/data/hdfs/name</value>

       <final>true</final>

</property>

<!-- on slaves -->

<property>

       <name>dfs.data.dir</name>

       <value>/home/hadoop/hadoop/data/hdfs/data</value>

       <final>true</final>

</property>

<property>

       <name>dfs.replication</name>

       <value>1</value>

       <final>true</final>

</property>
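
As with the tmp directory, it does no harm to pre-create these paths before the first format (dfs.name.dir on master, dfs.data.dir on the slaves):

mkdir -p /home/hadoop/hadoop/data/hdfs/name   # on master
mkdir -p /home/hadoop/hadoop/data/hdfs/data   # on each slave

Note that dfs.replication is set to 1 here, so each HDFS block is stored on only one datanode; with two slaves you could raise it to 2 for redundancy.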

Configure masters:

Change its content to master (or the corresponding IP address).

Configure slaves:

slave1

slave2

(Alternatively, on the slave machines you can pull the whole hadoop directory over from the master with scp: sudo scp -r test@192.168.30.20:/usr/local/hadoop /usr/local)

Turn off the firewall.

As root:

service iptables stop
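
service iptables stop only lasts until the next reboot; on CentOS 6 you can also disable the firewall permanently:

chkconfig iptables off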

Format the namenode.

On master: hadoop namenode -format

Then start the cluster from master: start-all.sh (this works because hadoop's bin directory was added to the PATH above).

Verify on master by running jps:

3711 NameNode

4085 Jps

3970 JobTracker

3874 SecondaryNameNode

Verify on each slave by running jps:

2892 Jps

2721 DataNode

2805 TaskTracker
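
You can also open the web UIs that Hadoop 1.x serves by default: the NameNode status page at http://master:50070 and the JobTracker page at http://master:50030.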

Or, on master, run: hadoop dfsadmin -report

Safe mode is ON
Configured Capacity: 15481700352 (14.42 GB)
Present Capacity: 13734293504 (12.79 GB)
DFS Remaining: 13457870848 (12.53 GB)
DFS Used: 276422656 (263.62 MB)
DFS Used%: 2.01%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 3 (3 total, 0 dead)

Name: 192.168.160.143:50010
Decommission Status : Normal
Configured Capacity: 5160566784 (4.81 GB)
DFS Used: 41160704 (39.25 MB)
Non DFS Used: 582455296 (555.47 MB)
DFS Remaining: 4536950784 (4.23 GB)
DFS Used%: 0.8%
DFS Remaining%: 87.92%
Last contact: Mon May 06 16:12:02 CST 2013

Name: 192.168.160.140:50010
Decommission Status : Normal
Configured Capacity: 5160566784 (4.81 GB)
DFS Used: 97075200 (92.58 MB)
Non DFS Used: 582545408 (555.56 MB)
DFS Remaining: 4480946176 (4.17 GB)
DFS Used%: 1.88%
DFS Remaining%: 86.83%
Last contact: Mon May 06 16:12:01 CST 2013

Name: 192.168.160.141:50010
Decommission Status : Normal
Configured Capacity: 5160566784 (4.81 GB)
DFS Used: 138186752 (131.79 MB)
Non DFS Used: 582406144 (555.43 MB)
DFS Remaining: 4439973888 (4.14 GB)
DFS Used%: 2.68%
DFS Remaining%: 86.04%
Last contact: Mon May 06 16:12:00 CST 2013

If all of the above matches, the cluster is up and running.
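
Note that "Safe mode is ON" is normal right after startup; the namenode leaves safe mode by itself once enough blocks have been reported. To leave it manually:

hadoop dfsadmin -safemode leave

As a final end-to-end smoke test, you can run the wordcount example that ships with the release. This is a minimal sketch, assuming the examples jar is named hadoop-examples-1.2.0.jar as in the stock 1.2.0 tarball (adjust the name if yours differs):

hadoop fs -mkdir /input
hadoop fs -put /etc/hosts /input                # load a small text file into HDFS
hadoop jar /usr/local/hadoop/hadoop-examples-1.2.0.jar wordcount /input /output
hadoop fs -cat /output/part-r-00000             # print the word counts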

To shut the cluster down: stop-all.sh

