
Hadoop error: File jobtracker.info could only be replicated to 0 nodes, instead of 1

September 5, 2014

Hadoop fails with the following error: File jobtracker.info could only be replicated to 0 nodes, instead of 1

All kinds of explanations float around online; here I'll just describe the fix that worked for me, for reference.

1. Turn off the firewall
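If the datanode can't reach the namenode through the firewall, it never registers, and "replicated to 0 nodes" is exactly the symptom. A minimal sketch, assuming a CentOS/RHEL-era node using iptables (adjust to your distribution's firewall tooling):

$ service iptables stop     # stop the firewall now
$ chkconfig iptables off    # keep it disabled across reboots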

2. Delete the files under Hadoop's data directory, then reformat the namenode

Details:

Quote:
Question: I am trying to resolve an IOException error. I have a basic setup and shortly after running start-dfs.sh I get: java.io.IOException: File /tmp/hadoop-root/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1. Any pointers on how to resolve this?
Thanks!


Answer: You'll probably find that even though the name node starts, it doesn't have any data nodes and is completely empty. Whenever hadoop creates a new filesystem, it assigns a large random number to it to prevent you from mixing datanodes from different filesystems by accident. When you reformat the name node its FS has one ID, but your data nodes still have chunks of the old FS with a different ID and so will refuse to connect to the namenode. You need to make sure these are cleaned up before reformatting. You can do it just by deleting the data node directory, although there's probably a more "official" way to do it.
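Before deleting anything, you can confirm this ID mismatch directly: in Hadoop 1.x both sides record their namespaceID in a VERSION file. A diagnostic sketch, assuming the default /tmp/hadoop-root layout used throughout this post:

$ cat /tmp/hadoop-root/dfs/name/current/VERSION   # the namenode's namespaceID
$ cat /tmp/hadoop-root/dfs/data/current/VERSION   # the datanode's namespaceID

If the two namespaceID values differ, the datanode will refuse to register with the namenode, which matches the symptom above.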


This situation is caused by having run the following command:
$hadoop namenode -format 



The fix is to delete the data.dir directory. In my case that was /tmp/hadoop-root/dfs/data, i.e., the directory configured as dfs.data.dir in hdfs-site.xml.



Then reformat the namenode, i.e., run $hadoop namenode -format again, and the problem is solved.
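Putting step 2 together, a minimal end-to-end sketch (assuming the default /tmp/hadoop-root paths and the Hadoop 1.x control scripts; substitute your own dfs.data.dir):

$ stop-all.sh                        # stop the namenode and datanodes first
$ rm -rf /tmp/hadoop-root/dfs/data   # remove the stale datanode storage carrying the old namespaceID
$ hadoop namenode -format            # reformat; the namenode gets a fresh namespaceID
$ start-all.sh                       # restart; the datanode now registers cleanly

Note that formatting destroys all HDFS metadata, so this is only appropriate on a test or freshly set-up cluster.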

3. Leave safe mode: hadoop dfsadmin -safemode leave
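You can verify the namenode's safe-mode state before and after with the same dfsadmin tool:

$ hadoop dfsadmin -safemode get     # prints "Safe mode is ON" or "Safe mode is OFF"
$ hadoop dfsadmin -safemode leave   # force the namenode out of safe mode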

After these three steps, the problem was solved.

Quite a few posts online also suggest checking file permissions, checking free DFS space, putting real IP addresses in /etc/hosts, and using IPs rather than localhost in the Hadoop configuration files. These are worth trying as well; a sketch of the /etc/hosts idea follows.
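As an illustration of the /etc/hosts and not-localhost suggestions (the hostname and address here are hypothetical; substitute your own):

# /etc/hosts -- map the machine's real IP, not 127.0.0.1, to its hostname
192.168.1.10   hadoop-master

Then in core-site.xml, fs.default.name should point at that address (e.g. hdfs://192.168.1.10:9000) rather than hdfs://localhost:9000, so datanodes resolve the namenode to a reachable interface.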

Another possibility: in pseudo-distributed mode (which is what I use; other modes presumably throw the same error), the datanode may simply be out of disk space, which also produces this error. Check usage with df -hl; if the disk holding Hadoop is full, either expand it or add a new datanode storage location in hdfs-site.xml.
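A sketch of that hdfs-site.xml change: dfs.data.dir accepts a comma-separated list of directories in Hadoop 1.x, and the datanode spreads blocks across all of them (/data2/hadoop/dfs/data below is a hypothetical second disk):

<property>
  <name>dfs.data.dir</name>
  <!-- comma-separated list; blocks are distributed across these paths -->
  <value>/tmp/hadoop-root/dfs/data,/data2/hadoop/dfs/data</value>
</property>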
