
Faster Datanodes with less wait io using df instead of du


I have noticed often that the check Hadoop uses to calculate usage for the DataNodes causes a fair amount of wait IO on them, driving up load.

We want every cycle we can get from every spindle!

So I came up with a nice little hack to use df instead of du.
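To see why this helps: du has to stat every block file under the data directory, while df is a single statfs() call against the filesystem. A quick way to feel the difference (the path here is just an example; use one of your own dfs.data.dir directories):

time du -sk /data/1/dfs/dn   # walks every file and directory: seek-heavy
time df -k /data/1/dfs/dn    # one statfs() call: returns instantly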

Here is basically what I did so you can do it too.



mv /usr/bin/du /usr/bin/bak_du

vi /usr/bin/du

and save this inside of it:


#!/bin/sh
# Hadoop calls this as "du -sk <path>", so $2 is the data directory.
# df -k reports the containing filesystem's used space in kilobytes
# (matching du's -sk units) without walking the directory tree.
mydf=$(df -k "$2" | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $3 }')
printf '%s\t%s\n' "$mydf" "$2"

Then give it execute permission:


chmod a+x /usr/bin/du
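Before touching Hadoop, you can sanity-check the wrapper by hand; it should answer instantly and mimic du's "<kilobytes><tab><path>" output (again, the path is just an example):

/usr/bin/du -sk /data/1/dfs/dn
# prints something like: 52428800    /data/1/dfs/dn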

Restart your DataNode, check the log for errors, and make sure it comes back up.
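How you bounce the DataNode depends on your install; on a stock Apache tarball it looks roughly like this (treat the paths as assumptions and adjust for your distro):

$HADOOP_HOME/bin/hadoop-daemon.sh stop datanode
$HADOOP_HOME/bin/hadoop-daemon.sh start datanode
tail -f $HADOOP_HOME/logs/*datanode*.log   # watch for errors as it comes up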

Voilà!

Now when Hadoop calls “du -sk /yourhdfslocation”, it will be expedient with its results.

What's wrong with this?

1) I assume you have nothing else on your disks that you are storing, so df is really close to du, since almost all of your data is in HDFS.

2) If you have more than one volume holding your HDFS blocks, this is not exactly accurate: you are skewing the size of each volume by only calculating one of them and using that result for the others. This is simple to fix: just parse your df result differently, and use the path passed in as the second parameter to know which volume to grep for in your df output (see the sketch after this list). Your first volume is most likely going to be the largest anyway, and you should be monitoring disk space another way, so it is not going to be very harmful if you just check and report the first volume's size.

3) You might not have your HDFS blocks on your first volume at all. See #2: you can just grep for the volume you want to report.
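Here is a minimal sketch of the fix described in #2 and #3, assuming each HDFS data directory sits on its own mount point: given a path argument, df already resolves it to the filesystem that contains it, so you just take df's data row for the path Hadoop passed in.

#!/bin/sh
# Per-volume variant: df resolves "$2" (the path Hadoop passed) to the
# filesystem that actually contains it; -P keeps long device names from
# wrapping, so NR==2 is always the data row after df's header.
used=$(df -Pk "$2" | awk 'NR==2 { print $3 }')
printf '%s\t%s\n' "$used" "$2"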

/*

Joe Stein
http://www.linkedin.com/in/charmalloc

*/

Source: http://allthingshadoop.com/2011/05/20/faster-datanodes-with-less-wait-io-using-df-instead-of-du/
