Developing MapReduce with Python: A WordCount Demo
2020-11-27 14:22:52


As we know, MapReduce is the heart of the Hadoop elephant: in Hadoop, the core of data processing is the MapReduce programming model. A MapReduce job usually splits the input data set into independent chunks, which map tasks process in a fully parallel manner. The framework sorts the map outputs and then feeds the results to the reduce tasks. Both the input and the output of a job are typically stored in a file system. Our programming effort therefore centers on two stages: the mapper and the reducer.
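To make the data flow concrete, here is a minimal in-process sketch of the map → sort → reduce pipeline (my own illustration of the model described above, not Hadoop code; Hadoop runs these steps distributed across machines):

 from itertools import groupby

 def mapper(line):
     # Emit a (word, 1) pair for every word in the line.
     for word in line.strip().split():
         yield (word, 1)

 def reducer(pairs):
     # Pairs arrive sorted by key, so equal words are adjacent;
     # collapse each run into a single (word, total) pair.
     for word, group in groupby(pairs, key=lambda kv: kv[0]):
         yield (word, sum(c for _, c in group))

 pairs = sorted(mapper("hello ni hao ni hao"))  # the shuffle/sort step
 print(list(reducer(pairs)))  # [('hao', 2), ('hello', 1), ('ni', 2)]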

Let's now develop a MapReduce program from scratch and run it on a Hadoop cluster.
The mapper code, map.py:

 import sys

 # Emit one "word<TAB>1" record for every word read from standard input.
 for line in sys.stdin:
     word_list = line.strip().split(' ')
     for word in word_list:
         print '\t'.join([word.strip(), str(1)])

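For instance (an illustration added here, not from the original article), piping the single line "hello ni hao" through map.py produces one tab-separated record per word:

 hello	1
 ni	1
 hao	1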

The reducer code, reduce.py. Hadoop Streaming delivers the mapper output sorted by key, so all records for a given word arrive consecutively; the reducer only needs to track the current word and a running sum:

 import sys

 cur_word = None
 cur_sum = 0

 for line in sys.stdin:
     ss = line.strip().split('\t')
     # Skip malformed records (including the empty words the mapper
     # may emit for runs of consecutive spaces).
     if len(ss) < 2:
         continue

     word = ss[0].strip()
     count = ss[1].strip()

     if cur_word is None:
         cur_word = word

     # A new word means the previous word's records are complete:
     # emit its total and start counting the new word.
     if cur_word != word:
         print '\t'.join([cur_word, str(cur_sum)])
         cur_word = word
         cur_sum = 0

     cur_sum += int(count)

 # Flush the final word once the input is exhausted.
 if cur_word is not None:
     print '\t'.join([cur_word, str(cur_sum)])


The test resource file, src.txt (for local testing; when running on the cluster, remember to upload it to HDFS first):

hello 
 ni hao ni haoni hao ni hao ni hao ni hao ni hao ni hao ni hao ni hao ni hao ni hao ni hao ao ni haoni hao ni hao ni hao ni hao ni hao ni hao ni hao ni hao ni hao ni hao ni haoao ni haoni hao ni hao ni hao ni hao ni hao ni hao ni hao ni hao ni hao ni hao ni hao
 Dad would get out his mandolin and play for the family
 Dad loved to play the mandolin for his family he knew we enjoyed singing
 I had to mature into a man and have children of my own before I realized how much he had sacrificed
 I had to,mature into a man and,have children of my own before.I realized how much he had sacrificed


First, debug locally to verify that the results are correct. The sort -k 1 step stands in for Hadoop's shuffle/sort phase:

cat src.txt | python map.py | sort -k 1 | python reduce.py

The output on the command line:

a 2
 and 2
 and,have 1
 ao 1
 before 1
 before.I 1
 children 2
 Dad 2
 enjoyed 1
 family 2
 for 2
 get 1
 had 4
 hao 33
 haoao 1
 haoni 3
 have 1
 he 3
 hello 1
 his 2
 how 2
 I 3
 into 2
 knew 1
 loved 1
 man 2
 mandolin 2
 mature 1
 much 2
 my 2
 ni 34
 of 2
 out 1
 own 2
 play 2
 realized 2
 sacrificed 2
 singing 1
 the 2
 to 2
 to,mature 1
 we 1
 would 1

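As an extra sanity check (my own addition, not part of the original workflow), the pipeline's totals can be compared against a direct in-memory count over the same file:

 # Count words with collections.Counter and print them in the same
 # "word<TAB>count" format (ordering may differ from the locale-aware
 # shell sort, but the counts should match).
 from collections import Counter

 counts = Counter()
 with open('src.txt') as f:
     for line in f:
         counts.update(line.strip().split(' '))

 # Drop empty tokens from repeated spaces, mirroring the reducer's
 # len(ss) < 2 guard.
 counts.pop('', None)
 for word in sorted(counts):
     print '\t'.join([word, str(counts[word])])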

Local debugging shows the code is OK. Next, run it on the cluster. For convenience, the submission is wrapped in a script, run.sh:

 HADOOP_CMD="/home/hadoop/hadoop/bin/hadoop"
 STREAM_JAR_PATH="/home/hadoop/hadoop/contrib/streaming/hadoop-streaming-1.2.1.jar"

 INPUT_FILE_PATH="/home/input/src.txt"
 OUTPUT_PATH="/home/output"

 # Remove any previous output directory; the job fails if it already exists.
 $HADOOP_CMD fs -rmr $OUTPUT_PATH

 $HADOOP_CMD jar $STREAM_JAR_PATH \
     -input $INPUT_FILE_PATH \
     -output $OUTPUT_PATH \
     -mapper "python map.py" \
     -reducer "python reduce.py" \
     -file ./map.py \
     -file ./reduce.py

A quick walk-through of the script:

 HADOOP_CMD: path to the hadoop executable
 STREAM_JAR_PATH: path to the Hadoop Streaming jar
 INPUT_FILE_PATH: input path on the cluster (HDFS)
 OUTPUT_PATH: output path for the results on the cluster. This directory must not exist when the job starts, which is why the script deletes it up front. Note: on the very first run the directory does not exist yet, so fs -rmr reports an error; you can create the directory manually once or simply ignore the error (a guarded alternative is sketched below).

The jar invocation follows the fixed Hadoop Streaming format: it specifies the input and output paths, names the mapper and reducer commands, and ships our map.py and reduce.py to the cluster with -file, because the other nodes do not yet have these files.
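As an optional convenience (my own sketch, not from the original article; it assumes the same paths as run.sh), the output directory can be removed only when it actually exists, so the first run does not trip over fs -rmr:

 import subprocess

 HADOOP_CMD = "/home/hadoop/hadoop/bin/hadoop"
 OUTPUT_PATH = "/home/output"

 # `hadoop fs -test -e PATH` exits with status 0 when PATH exists.
 if subprocess.call([HADOOP_CMD, "fs", "-test", "-e", OUTPUT_PATH]) == 0:
     subprocess.call([HADOOP_CMD, "fs", "-rmr", OUTPUT_PATH])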

Run the following command to count the records output after the reduce stage:

cat src.txt | python map.py | sort -k 1 | python reduce.py | wc -l
It prints: 43

Open master:50030 in a browser to see the job's details in the JobTracker web UI.

Kind    % Complete  Num Tasks  Pending  Running  Complete  Killed  Failed/Killed Task Attempts
map     100.00%     2          0        0        2         0       0 / 0
reduce  100.00%     1          0        0        1         0       0 / 0

Under "Map-Reduce Framework" you can see:

Counter                Map  Reduce  Total
Reduce output records  0    43      43

The 43 reduce output records match the 43 lines from the local run, which confirms the whole pipeline succeeded. That wraps up our first Hadoop program.
