
Spark RDD Transformations Explained — Spark Study Notes 7


Over the past few days I have been studying Spark RDD transformations and actions. These are my notes on what I learned, shared in case they are useful to others.

1. Start spark-shell

SPARK_MASTER=local[4] ./spark-shell.sh
Welcome to
      ____              __  
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 0.8.1
      /_/                  


Using Scala version 2.9.3 (Java HotSpot(TM) 64-Bit Server VM, Java 1.6.0_20)
Initializing interpreter...
14/04/04 10:49:44 INFO server.Server: jetty-7.x.y-SNAPSHOT
14/04/04 10:49:44 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:5757
Creating SparkContext...
14/04/04 10:49:50 INFO slf4j.Slf4jEventHandler: Slf4jEventHandler started
14/04/04 10:49:50 INFO spark.SparkEnv: Registering BlockManagerMaster
14/04/04 10:49:50 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-local-20140404104950-5dd2

2. Let's use the CHANGES.txt and README.md files in the Spark root directory as examples.

scala> sc
res0: org.apache.spark.SparkContext = org.apache.spark.SparkContext@5849b49d

scala> val changes = sc.textFile("CHANGES.txt")
14/04/04 10:51:39 INFO storage.MemoryStore: ensureFreeSpace(44905) called with curMem=0, maxMem=339585269
14/04/04 10:51:39 INFO storage.MemoryStore: Block broadcast_0 stored as values to memory (estimated size 43.9 KB, free 323.8 MB)
changes: org.apache.spark.rdd.RDD[String] = MappedRDD[1] at textFile at <console>:12

scala> changes foreach println
14/04/04 10:52:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/04/04 10:52:03 WARN snappy.LoadSnappy: Snappy native library not loaded
14/04/04 10:52:03 INFO mapred.FileInputFormat: Total input paths to process : 1
14/04/04 10:52:03 INFO spark.SparkContext: Starting job: foreach at <console>:15
14/04/04 10:52:03 INFO scheduler.DAGScheduler: Got job 0 (foreach at <console>:15) with 1 output partitions (allowLocal=false)
14/04/04 10:52:03 INFO scheduler.DAGScheduler: Final stage: Stage 0 (foreach at <console>:15)
14/04/04 10:52:03 INFO scheduler.DAGScheduler: Parents of final stage: List()
14/04/04 10:52:03 INFO scheduler.DAGScheduler: Missing parents: List()
14/04/04 10:52:03 INFO scheduler.DAGScheduler: Submitting Stage 0 (MappedRDD[1] at textFile at <console>:12), which has no missing parents
14/04/04 10:52:03 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from Stage 0 (MappedRDD[1] at textFile at <console>:12)
14/04/04 10:52:03 INFO local.LocalTaskSetManager: Size of task 0 is 1664 bytes
14/04/04 10:52:03 INFO executor.Executor: Running task ID 0
14/04/04 10:52:03 INFO storage.BlockManager: Found block broadcast_0 locally
14/04/04 10:52:03 INFO rdd.HadoopRDD: Input split: file:/app/home/hadoop/shengli/spark-0.8.1-incubating-bin-hadoop1/CHANGES.txt:0+65951
Spark Change Log

Release 0.8.1-incubating

  d03589d Mon Dec 9 23:10:00 2013 -0800
  Merge pull request #248 from colorant/branch-0.8
  [Fix POM file for mvn assembly on hadoop 2.2 Yarn]

  3e1f78c Sun Dec 8 21:34:12 2013 -0800
  Merge pull request #195 from dhardy92/fix_DebScriptPackage
  [[Deb] fix package of Spark classes adding org.apache prefix in scripts embeded in .deb]

  c14f373 Sat Dec 7 22:35:31 2013 -0800
  Merge pull request #241 from pwendell/master
  [Update broken links and add HDP 2.0 version string]

  9c9e71e Sat Dec 7 12:47:26 2013 -0800
  Merge pull request #241 from pwendell/branch-0.8
  [Fix race condition in JobLoggerSuite [0.8 branch]]

  92597c0 Sat Dec 7 11:58:00 2013 -0800
  Merge pull request #240 from pwendell/master
  [SPARK-917 Improve API links in nav bar]

  cfca70e Sat Dec 7 01:15:20 2013 -0800
  Merge pull request #236 from pwendell/shuffle-docs
  [Adding disclaimer for shuffle file consolidation]

filter

Now let's find all the lines that contain "Merge".

scala> val mergeLines = changes.filter(_.contains("Merge"))
mergeLines: org.apache.spark.rdd.RDD[String] = FilteredRDD[2] at filter at <console>:14


scala> mergeLines foreach println
14/04/04 10:54:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/04/04 10:54:52 WARN snappy.LoadSnappy: Snappy native library not loaded
14/04/04 10:54:52 INFO mapred.FileInputFormat: Total input paths to process : 1
14/04/04 10:54:52 INFO spark.SparkContext: Starting job: foreach at <console>:17
14/04/04 10:54:52 INFO scheduler.DAGScheduler: Got job 0 (foreach at <console>:17) with 1 output partitions (allowLocal=false)
14/04/04 10:54:52 INFO scheduler.DAGScheduler: Final stage: Stage 0 (foreach at <console>:17)
14/04/04 10:54:52 INFO scheduler.DAGScheduler: Parents of final stage: List()
14/04/04 10:54:52 INFO scheduler.DAGScheduler: Missing parents: List()
14/04/04 10:54:52 INFO scheduler.DAGScheduler: Submitting Stage 0 (FilteredRDD[2] at filter at <console>:14), which has no missing parents
14/04/04 10:54:52 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from Stage 0 (FilteredRDD[2] at filter at <console>:14)
14/04/04 10:54:52 INFO local.LocalTaskSetManager: Size of task 0 is 1733 bytes
14/04/04 10:54:52 INFO executor.Executor: Running task ID 0
14/04/04 10:54:52 INFO storage.BlockManager: Found block broadcast_0 locally
14/04/04 10:54:52 INFO rdd.HadoopRDD: Input split: file:/app/home/hadoop/shengli/spark-0.8.1-incubating-bin-hadoop1/CHANGES.txt:0+65951
  Merge pull request #248 from colorant/branch-0.8
  Merge pull request #195 from dhardy92/fix_DebScriptPackage
  Merge pull request #241 from pwendell/master
  Merge pull request #241 from pwendell/branch-0.8
  Merge pull request #240 from pwendell/master
  Merge pull request #236 from pwendell/shuffle-docs
  Merge pull request #237 from pwendell/formatting-fix
  Merge pull request #235 from pwendell/master
  Merge pull request #234 from alig/master
  Merge pull request #199 from harveyfeng/yarn-2.2
  Merge pull request #101 from colorant/yarn-client-scheduler
  Merge pull request #191 from hsaputra/removesemicolonscala
  Merge pull request #178 from hsaputra/simplecleanupcode
  Merge pull request #189 from tgravescs/sparkYarnErrorHandling
  Merge pull request #232 from markhamstra/FiniteWait
  Merge pull request #231 from pwendell/branch-0.8
  Merge pull request #228 from pwendell/master
  Merge pull request #227 from pwendell/master
  Merge pull request #223 from rxin/transient
  Merge pull request #95 from aarondav/perftest
  Merge pull request #218 from JoshRosen/spark-970-pyspark-unicode-error
  Merge pull request #181 from BlackNiuza/fix_tasks_number
  Merge pull request #219 from sundeepn/schedulerexception
  Merge pull request #201 from rxin/mappartitions
  Merge pull request #197 from aarondav/patrick-fix
  Merge pull request #200 from mateiz/hash-fix
  Merge pull request #193 from aoiwelle/patch-1
  Merge pull request #196 from pwendell/master
  Merge pull request #174 from ahirreddy/master
  Merge pull request #166 from ahirreddy/simr-spark-ui
  Merge pull request #137 from tgravescs/sparkYarnJarsHdfsRebase
  Merge pull request #165 from NathanHowell/kerberos-master
  Merge pull request #153 from ankurdave/stop-spot-cluster
  Merge pull request #160 from xiajunluan/JIRA-923
  Merge pull request #175 from kayousterhout/no_retry_not_serializable
  Merge pull request #173 from kayousterhout/scheduler_hang

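Note that filter is a lazy transformation: nothing is computed until an action such as foreach or count runs. As a quick check, here is a minimal sketch (assuming the same spark-shell session, with changes still defined) that just counts the matching lines instead of printing them:

// Count the lines that contain "Merge" instead of printing them all.
// count() is an action, so this is where the job actually runs.
val numMerges = changes.filter(_.contains("Merge")).count()
println("lines containing Merge: " + numMerges)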
map

scala> mergeLines.map(line=>line.split(" "))
res2: org.apache.spark.rdd.RDD[Array[java.lang.String]] = MappedRDD[3] at map at <console>:17

scala> mergeLines.map(line=>line.split(" ")) take 10
14/04/04 11:05:24 INFO spark.SparkContext: Starting job: take at <console>:17
14/04/04 11:05:24 INFO scheduler.DAGScheduler: Got job 1 (take at <console>:17) with 1 output partitions (allowLocal=true)
14/04/04 11:05:24 INFO scheduler.DAGScheduler: Final stage: Stage 1 (take at <console>:17)
14/04/04 11:05:24 INFO scheduler.DAGScheduler: Parents of final stage: List()
14/04/04 11:05:24 INFO scheduler.DAGScheduler: Missing parents: List()
14/04/04 11:05:24 INFO scheduler.DAGScheduler: Computing the requested partition locally
14/04/04 11:05:24 INFO rdd.HadoopRDD: Input split: file:/app/home/hadoop/shengli/spark-0.8.1-incubating-bin-hadoop1/CHANGES.txt:0+65951
14/04/04 11:05:24 INFO spark.SparkContext: Job finished: take at <console>:17, took 0.010779038 s
res3: Array[Array[java.lang.String]] = Array(Array("", "", Merge, pull, request, #248, from, colorant/branch-0.8), Array("", "", Merge, pull, request, #195, from, dhardy92/fix_DebScriptPackage), Array("", "", Merge, pull, request, #241, from, pwendell/master), Array("", "", Merge, pull, request, #241, from, pwendell/branch-0.8), Array("", "", Merge, pull, request, #240, from, pwendell/master), Array("", "", Merge, pull, request, #236, from, pwendell/shuffle-docs), Array("", "", Merge, pull, request, #237, from, pwendell/formatting-fix), Array("", "", Merge, pull, request, #235, from, pwendell/master), Array("", "", Merge, pull, request, #234, from, alig/master), Array("", "", Merge, pull, request, #199, from, harveyfeng/yarn-2.2))

As you can see, map produces a nested dataset of the form Array(array1, array2, ...): each input line is mapped to exactly one output element (here, an array of words).

scala> val splitedLine = mergeLines.map(line=>line.split(" "))
splitedLine: org.apache.spark.rdd.RDD[Array[java.lang.String]] = MappedRDD[5] at map at <console>:16

flatMap

flatMap flattens the dataset: instead of one nested array per line, every element ends up in a single flat Seq/Array.
scala> changes.flatMap(line=>line.split(" ")) take 10
14/04/04 11:18:26 INFO spark.SparkContext: Starting job: take at <console>:15
14/04/04 11:18:26 INFO scheduler.DAGScheduler: Got job 17 (take at <console>:15) with 1 output partitions (allowLocal=true)
14/04/04 11:18:26 INFO scheduler.DAGScheduler: Final stage: Stage 17 (take at <console>:15)
14/04/04 11:18:26 INFO scheduler.DAGScheduler: Parents of final stage: List()
14/04/04 11:18:26 INFO scheduler.DAGScheduler: Missing parents: List()
14/04/04 11:18:26 INFO scheduler.DAGScheduler: Computing the requested partition locally
14/04/04 11:18:26 INFO rdd.HadoopRDD: Input split: file:/app/home/hadoop/shengli/spark-0.8.1-incubating-bin-hadoop1/CHANGES.txt:0+65951
14/04/04 11:18:26 INFO spark.SparkContext: Job finished: take at <console>:15, took 0.003913527 s
res26: Array[java.lang.String] = Array(Spark, Change, Log, "", Release, 0.8.1-incubating, "", "", "", d03589d)
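To make the contrast with map concrete, here is a minimal sketch on a tiny RDD built with sc.parallelize (the input strings are illustrative, not taken from CHANGES.txt):

// map produces one output element per input line, so the result is nested;
// flatMap produces zero or more elements per line and flattens them into one collection.
val lines = sc.parallelize(Seq("Spark Change Log", "Release 0.8.1-incubating"))
lines.map(_.split(" ")).collect()      // Array(Array(Spark, Change, Log), Array(Release, 0.8.1-incubating))
lines.flatMap(_.split(" ")).collect()  // Array(Spark, Change, Log, Release, 0.8.1-incubating)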

union

changes.union(changes) foreach println
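union simply concatenates two RDDs; duplicates are kept. A minimal sketch on small parallelized RDDs (illustrative values):

val a = sc.parallelize(Seq(1, 2, 3))
val b = sc.parallelize(Seq(3, 4))
a.union(b).collect()             // Array(1, 2, 3, 3, 4) -- the duplicate 3 is kept
a.union(b).distinct().collect()  // apply distinct() afterwards if set semantics are wanted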

groupByKey, reduceByKey

scala> val wordcount = changes.flatMap(line=>line.split(" ")).map(word=>(word,1))
wordcount: org.apache.spark.rdd.RDD[(java.lang.String, Int)] = MappedRDD[22] at map at <console>:14
After the map, the pairs print out like this:
(#534,1)
(from,1)
(stephenh/removetrycatch,1)
(,1)
(,1)
([Remove,1)
(try/catch,1)
(block,1)
(that,1)
(can't,1)
(be,1)
(hit.],1)
(,1)
(,1)
(,1)

Use groupByKey to see the result after the shuffle:

wordcount.groupByKey() foreach println

Here each key is a word (e.g. [Automatically, LogQuery), and each value is the list of 1s emitted for that key, a Seq[V]:

([Automatically,ArrayBuffer(1))
(LogQuery,ArrayBuffer(1))
(2d3eae2,ArrayBuffer(1))
(#130,ArrayBuffer(1))
(8e9bd93,ArrayBuffer(1))
(8,ArrayBuffer(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1))

Use reduceByKey to compute the word count. Conceptually, reduceByKey = groupByKey + an aggregate function over each list[v]; in practice reduceByKey also combines values on the map side before the shuffle, so it is more efficient.
wordcount.reduceByKey(_+_) foreach println
The result looks like this:
(instructions,2)
(Adds,1)
(ssh,1)
(#691,1)
(#863,1)
(README],2)
(#676,1)
(javadoc,1)
(#571,1)
(0f1b7a0,1)
(shimingfei/joblogger,1)
(links,2)
(memory-efficient,1)
(pwendell/akka-standalone,1)
(hsaputra/update-pom-asf,1)
(method],1)
(mapPartitionsWithIndex],1)
(key-value,1)
(22:19:00,1)
(sbt,4)
(e5b9ed2,1)
(loss,1)
(stephenh/volatile,1)
(code,6)
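Here is a minimal sketch of that equivalence (both pipelines produce the same (word, count) pairs; reduceByKey is usually preferred because of the map-side combining):

val pairs = changes.flatMap(_.split(" ")).map(word => (word, 1))
val viaGroup  = pairs.groupByKey().mapValues(_.sum)  // group first, then sum each Seq of 1s
val viaReduce = pairs.reduceByKey(_ + _)             // combine the values per key directly
// e.g. both should report (sbt, 4), matching the output above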

distinct

distinct removes duplicates, similar to putting the elements into a Java Set.
scala> wordcount.count()
res43: Long = 12222

wordcount.distinct.count()
res44: Long = 3354
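A minimal sketch of distinct on a small RDD (illustrative values):

val words = sc.parallelize(Seq("spark", "rdd", "spark", "spark"))
words.count()             // 4
words.distinct().count()  // 2 -- duplicates removed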

sortByKey

Sorts by key: true sorts ascending, false descending.
wordcount.sortByKey(true) take 10
res10: Array[(java.lang.String, Int)] = Array(("",1), ("",1), ("",1), ("",1), ("",1), ("",1), ("",1), ("",1), ("",1), ("",1))

wordcount.sortByKey(false) take 10
res11: Array[(java.lang.String, Int)] = Array(({0,1},1), (zookeeper,1), (zip,1), (zip,1), (zip,1), (zero-sized,1), (zero,1), (yarn],1), (yarn.version,1), (yarn.version,1))
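sortByKey only orders by the key. To order words by how often they occur instead, one common trick (a sketch, not from the original session) is to aggregate the counts first, swap key and value, sort, then swap back:

// Aggregate to (word, count), then sort by the count, descending
val counts = wordcount.reduceByKey(_ + _)
val byFreq = counts.map { case (w, c) => (c, w) }.sortByKey(false).map { case (c, w) => (w, c) }
byFreq take 10   // the ten most frequent words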

join

To keep the empty-string token from skewing the join, filter out "" elements and rebuild the word counts:
val wordcount = changes.flatMap(line=>line.split(" ")).filter(_!="").map(word=>(word,1)).reduceByKey(_+_)

scala> val readme = sc.textFile("README.md")
scala> val readMeWordCount = readme.flatMap(_.split(" ")).filter(_!="").map(word=>(word,1)).reduceByKey(_+_)

scala> wordcount.join(readMeWordCount)
res3: org.apache.spark.rdd.RDD[(java.lang.String, (Int, Int))] = FlatMappedValuesRDD[36] at join at <console>:21

For each key that appears in both RDDs, join pairs up the two values, i.e. the result elements are (key, (value from wordcount, value from readMeWordCount)):
wordcount.join(readMeWordCount) take 20
res6: Array[(java.lang.String, (Int, Int))] = Array((at,(1,2)), (do,(1,1)), (by,(9,5)), (ASF,(2,1)), (adding,(3,1)), (all,(2,1)), (versions,(6,4)), (sbt/sbt,(1,6)), (under,(2,2)), (set,(7,1)))
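Note that join is an inner join: only keys present in both RDDs survive. A minimal sketch on two tiny pair RDDs (illustrative data):

val left  = sc.parallelize(Seq(("spark", 2), ("rdd", 1)))
val right = sc.parallelize(Seq(("spark", 5), ("hadoop", 3)))
left.join(right).collect()   // Array((spark,(2,5))) -- "rdd" and "hadoop" have no match and are dropped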

cogroup

cogroup groups both datasets by key; for each key the value becomes a pair of sequences, (Seq of values from the left RDD, Seq of values from the right RDD).
wordcount.cogroup(readMeWordCount,2).filter(_._1==("do")) take 10
res18: Array[(java.lang.String, (Seq[Int], Seq[Int]))] = Array((do,(ArrayBuffer(1),ArrayBuffer(1))))
Of the two ArrayBuffers, the first comes from the left RDD and the second from the right RDD.
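Unlike join, cogroup also keeps keys that exist in only one of the two RDDs; the side with no values simply contributes an empty sequence. A minimal sketch reusing the left and right RDDs from the join sketch above (output order may vary):

left.cogroup(right).collect()
// Array((spark,(ArrayBuffer(2),ArrayBuffer(5))),
//       (rdd,(ArrayBuffer(1),ArrayBuffer())),
//       (hadoop,(ArrayBuffer(),ArrayBuffer(3))))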

cartesian

Cartesian product: every element of the left RDD is paired with every element of the right RDD, giving m * n pairs. Don't run it casually on large datasets.
wordcount.count
3353
readMeWordCount.count
res25: Long = 324

wordcount.cartesian(readMeWordCount).count()
res23: Long = 1086372

3353 * 324 = 1086372
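A minimal sketch of cartesian on small RDDs (illustrative data) to confirm the m * n size:

val xs = sc.parallelize(Seq(1, 2, 3))
val ys = sc.parallelize(Seq("a", "b"))
xs.cartesian(ys).count()    // 6 = 3 * 2
xs.cartesian(ys).collect()  // Array((1,a), (1,b), (2,a), (2,b), (3,a), (3,b))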

Original content. Please cite the source when reposting: http://blog.csdn.net/oopsoom/article/details/22918991. Thanks.
