Usage: java FsShell
[-ls <path>]
[-lsr <path>]
[-df [<path>]]
[-du [-s] [-h] <path>]
[-dus <path>]
[-count[-q] <path>]
[-mv <src> <dst>]
[-cp <src> <dst>]
[-rm [-skipTrash] <path>]
[-rmr [-skipTrash] <path>]
[-expunge]
[-put <localsrc> ... <dst>]
[-copyFromLocal <localsrc> ... <dst>]
[-moveFromLocal <localsrc> ... <dst>]
[-get [-ignoreCrc] [-crc] <src> <localdst>]
[-getmerge <src> <localdst> [addnl]]
[-cat <src>]
[-text <src>]
[-copyToLocal [-ignoreCrc] [-crc] <src> <localdst>]
[-moveToLocal [-crc] <src> <localdst>]
[-mkdir <path>]
[-setrep [-R] [-w] <rep> <path/file>]
[-touchz <path>]
[-test -[ezd] <path>]
[-stat [format] <path>]
[-tail [-f] <file>]
[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
[-chown [-R] [OWNER][:[GROUP]] PATH...]
[-chgrp [-R] GROUP PATH...]
[-help [cmd]]
If you are familiar with Linux, most of these commands need little explanation. Still, here are the commands for a few common scenarios:
1. Adding files and directories
The directory structure of HDFS is also similar to Linux's, with the root directory written as /. The following command creates the directory newdir under the root:
./bin/hadoop fs -mkdir /newdir
Check with ls:
xuqiang@ubuntu:~/hadoop/src/hadoop-0.21.0$ ./bin/hadoop fs -ls /
11/06/01 18:04:11 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/06/01 18:04:11 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
Found 3 items
drwxr-xr-x - xuqiang supergroup 0 2011-06-01 17:31 /jobtracker
drwxr-xr-x - xuqiang supergroup 0 2011-06-01 18:04 /newdir
drwxr-xr-x - xuqiang supergroup 0 2011-06-01 17:31 /tmp
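Instead of eyeballing the listing, the -test flag offers a scriptable check: -e tests existence, -z tests zero length, -d tests whether the path is a directory, with the result returned in the exit status. A minimal sketch:

```shell
./bin/hadoop fs -test -d /newdir && echo "/newdir is a directory"
```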
Now that the directory exists, let's upload a local file to HDFS.
xuqiang@ubuntu:~/hadoop/src/hadoop-0.21.0$ ./bin/hadoop fs -put ./README.txt .
Note what the trailing . means here: each user logged in to HDFS has a default working directory /user/$LOGINNAME (similar to the home directory on Linux), and . refers to that default working directory.
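Since . is just shorthand for the default working directory, the following two commands should produce the same listing (the user name xuqiang matches the session above; substitute your own login):

```shell
./bin/hadoop fs -ls .
./bin/hadoop fs -ls /user/xuqiang
```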
2. Downloading files
xuqiang@ubuntu:~/hadoop/src/hadoop-0.21.0$ ./bin/hadoop fs -get /user/xuqiang/README.txt
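-get also accepts an explicit local destination, and -copyToLocal behaves the same way. A sketch (the local path is just an example):

```shell
./bin/hadoop fs -get /user/xuqiang/README.txt /tmp/README.txt
./bin/hadoop fs -copyToLocal /user/xuqiang/README.txt /tmp/README.copy.txt
```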
3. Deleting files
xuqiang@ubuntu:~/hadoop/src/hadoop-0.21.0$ ./bin/hadoop fs -rm /user/xuqiang/README.txt
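When the trash feature is enabled, -rm moves the file into the user's trash rather than deleting it immediately; -skipTrash bypasses the trash, and -rmr removes a directory and its contents recursively. A sketch:

```shell
./bin/hadoop fs -rm -skipTrash /user/xuqiang/README.txt
./bin/hadoop fs -rmr /newdir
```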
4. Getting help
xuqiang@ubuntu:~/hadoop/src/hadoop-0.21.0$ ./bin/hadoop fs -help ls
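Run with no command name, -help prints the full usage listing shown at the top; with a command name it prints only that command's usage:

```shell
./bin/hadoop fs -help
./bin/hadoop fs -help rm
```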