
Hadoop Study Notes (2): Hello World


(1) Format HDFS

Run the following command to format HDFS:

[root@localhost hadoop-0.19.0]# bin/hadoop namenode -format

The format output is shown below. Note that the confirmation prompt is case-sensitive in this version: the lowercase 'y' entered here causes the format to be aborted, so answer with an uppercase 'Y' if you actually want to reformat.
10/08/01 19:04:02 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = localhost/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.19.0
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.19 -r 713890; compiled by 'ndaley' on Fri Nov 14 03:12:29 UTC 2008
************************************************************/
Re-format filesystem in /tmp/hadoop-root/dfs/name ? (Y or N) y
Format aborted in /tmp/hadoop-root/dfs/name
10/08/01 19:04:05 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost/127.0.0.1
************************************************************/

(2) Start the Hadoop daemons

Run the following command:

[root@localhost hadoop-0.19.0]# bin/start-all.sh

The startup output is as follows:
starting namenode, logging to /root/hadoop-0.19.0/bin/../logs/hadoop-root-namenode-localhost.out
localhost: starting datanode, logging to /root/hadoop-0.19.0/bin/../logs/hadoop-root-datanode-localhost.out
localhost: starting secondarynamenode, logging to /root/hadoop-0.19.0/bin/../logs/hadoop-root-secondarynamenode-localhost.out
starting jobtracker, logging to /root/hadoop-0.19.0/bin/../logs/hadoop-root-jobtracker-localhost.out
localhost: starting tasktracker, logging to /root/hadoop-0.19.0/bin/../logs/hadoop-root-tasktracker-localhost.out

(3) Prepare the input data for the wordcount job

First, create a local data directory named input and copy some files into it, as shown below:

[root@localhost hadoop-0.19.0]# mkdir input
[root@localhost hadoop-0.19.0]# cp CHANGES.txt LICENSE.txt NOTICE.txt README.txt input/

Then upload the local input directory to the HDFS file system with the following command:

[root@localhost hadoop-0.19.0]# bin/hadoop fs -put input/ input
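
The same upload can also be done from Java through the HDFS FileSystem API. The sketch below is illustrative only (the UploadInput class name is made up here, and it assumes the Hadoop configuration files on the classpath point at the running NameNode); it has the same effect as the -put command above:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UploadInput {
  public static void main(String[] args) throws Exception {
    // Reads fs.default.name from the Hadoop configuration on the classpath.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Copy the local "input" directory into the user's HDFS home directory,
    // equivalent to: bin/hadoop fs -put input/ input
    fs.copyFromLocalFile(new Path("input"), new Path("input"));
    fs.close();
  }
}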

(4) Run the wordcount job

Run the following command:

[root@localhost hadoop-0.19.0]# bin/hadoop jar hadoop-0.19.0-examples.jar wordcount input output

The input directory is input and the output directory is output.

The job output is as follows:
10/08/01 19:06:15 INFO mapred.FileInputFormat: Total input paths to process : 4
10/08/01 19:06:15 INFO mapred.JobClient: Running job: job_201008011904_0002
10/08/01 19:06:16 INFO mapred.JobClient:  map 0% reduce 0%
10/08/01 19:06:22 INFO mapred.JobClient:  map 20% reduce 0%
10/08/01 19:06:24 INFO mapred.JobClient:  map 40% reduce 0%
10/08/01 19:06:25 INFO mapred.JobClient:  map 60% reduce 0%
10/08/01 19:06:27 INFO mapred.JobClient:  map 80% reduce 0%
10/08/01 19:06:28 INFO mapred.JobClient:  map 100% reduce 0%
10/08/01 19:06:38 INFO mapred.JobClient:  map 100% reduce 26%
10/08/01 19:06:40 INFO mapred.JobClient:  map 100% reduce 100%
10/08/01 19:06:41 INFO mapred.JobClient: Job complete: job_201008011904_0002
10/08/01 19:06:41 INFO mapred.JobClient: Counters: 16
10/08/01 19:06:41 INFO mapred.JobClient:   File Systems
10/08/01 19:06:41 INFO mapred.JobClient:     HDFS bytes read=301489
10/08/01 19:06:41 INFO mapred.JobClient:     HDFS bytes written=113098
10/08/01 19:06:41 INFO mapred.JobClient:     Local bytes read=174004
10/08/01 19:06:41 INFO mapred.JobClient:     Local bytes written=348172
10/08/01 19:06:41 INFO mapred.JobClient:   Job Counters 
10/08/01 19:06:41 INFO mapred.JobClient:     Launched reduce tasks=1
10/08/01 19:06:41 INFO mapred.JobClient:     Launched map tasks=5
10/08/01 19:06:41 INFO mapred.JobClient:     Data-local map tasks=5
10/08/01 19:06:41 INFO mapred.JobClient:   Map-Reduce Framework
10/08/01 19:06:41 INFO mapred.JobClient:     Reduce input groups=8997
10/08/01 19:06:41 INFO mapred.JobClient:     Combine output records=10860
10/08/01 19:06:41 INFO mapred.JobClient:     Map input records=7363
10/08/01 19:06:41 INFO mapred.JobClient:     Reduce output records=8997
10/08/01 19:06:41 INFO mapred.JobClient:     Map output bytes=434077
10/08/01 19:06:41 INFO mapred.JobClient:     Map input bytes=299871
10/08/01 19:06:41 INFO mapred.JobClient:     Combine input records=39193
10/08/01 19:06:41 INFO mapred.JobClient:     Map output records=39193
10/08/01 19:06:41 INFO mapred.JobClient:     Reduce input records=10860

(5) View the job results

The results can be viewed with the following command:

bin/hadoop fs -cat output/*

Part of the result is shown below:

vijayarenu      20
violations.     1
virtual 3
vis-a-vis       1
visible 1
visit   1
volume  1
volume, 1
volumes 2
volumes.        1
w.r.t   2
wait    9
waiting 6
waiting.        1
waits   3
want    1
warning 7
warning,        1
warnings        12
warnings.       3
warranties      1
warranty        1
warranty,       1
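
The output can also be read back programmatically. Here is a minimal sketch (the ReadOutput class name is made up here, and it assumes the single-reducer case, as in the job above, where the result is written to output/part-00000):

import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadOutput {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // With one reduce task the result is a single file named part-00000.
    Path result = new Path("output/part-00000");
    BufferedReader reader = new BufferedReader(new InputStreamReader(fs.open(result)));
    String line;
    while ((line = reader.readLine()) != null) {
      System.out.println(line);   // each line is "word<TAB>count"
    }
    reader.close();
    fs.close();
  }
}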

(6) The WordCount test class

Note that this listing uses the newer org.apache.hadoop.mapreduce API, whereas the job log above (mapred.JobClient, mapred.FileInputFormat) comes from the older org.apache.hadoop.mapred version of WordCount bundled in the 0.19 examples jar; the logic is the same.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

  // Mapper: tokenizes each input line and emits a (word, 1) pair for every token.
  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable>{
    
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();
      
    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }
  
  // Reducer (also used as the combiner): sums the counts for each word.
  public static class IntSumReducer
       extends Reducer<Text,IntWritable,Text,IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, 
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  // Driver: parses command-line arguments, configures the job, and submits it.
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      System.exit(2);
    }
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
