Phone Traffic Analysis: A Hadoop Implementation
1. Task Requirements
For each phone number, compute the total upstream traffic, total downstream traffic, and total traffic (upstream + downstream). Then split the results by phone-number prefix into separate output files:
13* ==> …
15* ==> …
other ==> …
In the access.log data file:
- second field: phone number
- third-from-last field: upstream traffic
- second-from-last field: downstream traffic
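The field layout above can be sketched in plain Java. The sample line below is made up for illustration (real access.log rows have more columns, but only the positions described above matter):

```java
public class FieldDemo {
    // Returns "phone\tup\tdown\ttotal" for one tab-separated log line,
    // using the documented positions: second field = phone number,
    // third-from-last = upstream traffic, second-from-last = downstream traffic.
    public static String parse(String line) {
        String[] fields = line.split("\t");
        String phone = fields[1];
        long up = Long.parseLong(fields[fields.length - 3]);
        long down = Long.parseLong(fields[fields.length - 2]);
        return phone + "\t" + up + "\t" + down + "\t" + (up + down);
    }

    public static void main(String[] args) {
        // Hypothetical sample row, not taken from the real data set
        String sample = "1363157985066\t13726230503\t120.196.100.82\t24\t27\t2481\t24681\t200";
        System.out.println(parse(sample));
    }
}
```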
2. Approach
1. Write a MapReduce program.
2. Upload the input data to the cluster, where Hadoop will read it.
Project analysis:
(1) From the source file, compute each phone number's total upstream, downstream, and overall traffic.
(2) Using the per-phone totals from (1), partition the phone numbers by prefix as required and write each group to its own output file.
3. Implementation
Uploading the data
First copy the data file to the Linux system.
(1) Start the cluster and create an empty directory on it to hold the source file:
hadoop fs -mkdir -p /xx/input/
You can confirm the directory was created in the NameNode web UI (http://<Hadoop master node IP>:9870).
(2) Upload the data file to the new directory:
hadoop fs -put access.log /xx/input/
The file should now be visible in the directory.
Writing MapReduce program (1)
(1) Create a Java project and add the Hadoop dependency to pom.xml:
```xml
<dependencies>
    <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-client -->
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>3.2.0</version>
    </dependency>
</dependencies>
```
(2) Write the MapReduce program.
It consists of the following classes:
FlowBean, FlowDriver, FlowMapper, FlowReducer
FlowBean.java (the same Writable bean is used by both programs):

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

// The bean must implement Writable so Hadoop can serialize it
public class FlowBean implements Writable {

    private long upFlow;
    private long downFlow;
    private long sumFlow;

    // Deserialization calls the no-arg constructor via reflection, so it is required
    public FlowBean() {
        super();
    }

    public FlowBean(long upFlow, long downFlow) {
        super();
        this.upFlow = upFlow;
        this.downFlow = downFlow;
        this.sumFlow = upFlow + downFlow;
    }

    public long getSumFlow() { return sumFlow; }
    public void setSumFlow(long sumFlow) { this.sumFlow = sumFlow; }
    public long getUpFlow() { return upFlow; }
    public void setUpFlow(long upFlow) { this.upFlow = upFlow; }
    public long getDownFlow() { return downFlow; }
    public void setDownFlow(long downFlow) { this.downFlow = downFlow; }

    // Serialization method
    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(upFlow);
        out.writeLong(downFlow);
        out.writeLong(sumFlow);
    }

    // Deserialization method.
    // Note: fields must be read back in exactly the order they were written.
    @Override
    public void readFields(DataInput in) throws IOException {
        upFlow = in.readLong();
        downFlow = in.readLong();
        sumFlow = in.readLong();
    }

    @Override
    public String toString() {
        return upFlow + "\t" + downFlow + "\t" + sumFlow;
    }

    public void set(long upFlow, long downFlow) {
        this.upFlow = upFlow;
        this.downFlow = downFlow;
        this.sumFlow = upFlow + downFlow;
    }
}
```
FlowMapper.java:

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class FlowMapper extends Mapper<LongWritable, Text, Text, FlowBean> {

    // Do not create new objects inside map(): it runs once per input line,
    // so allocating there wastes memory. This is also why the bean needs a
    // no-arg constructor.
    FlowBean bean = new FlowBean();
    Text k = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // 1. Read one line of input
        String line = value.toString();
        // 2. Split it into fields (steps 1-2: process the input)
        String[] fields = line.split("\t");
        // 3. Extract the phone number and traffic fields, then fill the bean
        //    (the business logic itself)
        String phoneNum = fields[1];
        long upFlow = Long.parseLong(fields[fields.length - 3]);
        long downFlow = Long.parseLong(fields[fields.length - 2]);
        // Avoid allocating per line:
        // FlowBean bean = new FlowBean(upFlow, downFlow);
        bean.set(upFlow, downFlow);
        k.set(phoneNum);
        // 4. Emit (phone number, flow bean)
        // context.write(new Text(phoneNum), bean);
        context.write(k, bean);
    }
}
```
FlowReducer.java:

```java
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class FlowReducer extends Reducer<Text, FlowBean, Text, FlowBean> {

    @Override
    protected void reduce(Text key, Iterable<FlowBean> values, Context context)
            throws IOException, InterruptedException {
        // 1. Sum the traffic for this phone number
        long sum_upFlow = 0;
        long sum_downFlow = 0;
        for (FlowBean bean : values) {
            sum_upFlow += bean.getUpFlow();
            sum_downFlow += bean.getDownFlow();
        }
        // 2. Emit the totals
        context.write(key, new FlowBean(sum_upFlow, sum_downFlow));
    }
}
```
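Outside Hadoop, the shuffle-and-reduce step amounts to grouping records by phone number and summing per group. A minimal plain-Java sketch of that behavior (phone numbers and values below are made up):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ReduceDemo {
    // Aggregates {phone, up, down} records per phone number,
    // the same way FlowReducer sums the values for each key.
    public static Map<String, long[]> aggregate(String[][] records) {
        Map<String, long[]> totals = new LinkedHashMap<>();
        for (String[] r : records) {
            long[] t = totals.computeIfAbsent(r[0], p -> new long[2]);
            t[0] += Long.parseLong(r[1]); // upstream traffic
            t[1] += Long.parseLong(r[2]); // downstream traffic
        }
        return totals;
    }

    public static void main(String[] args) {
        String[][] recs = {
            {"13726230503", "2481", "24681"},
            {"13726230503", "100", "200"},
            {"15013685858", "27", "3659"},
        };
        for (Map.Entry<String, long[]> e : aggregate(recs).entrySet()) {
            long[] t = e.getValue();
            System.out.println(e.getKey() + "\t" + t[0] + "\t" + t[1] + "\t" + (t[0] + t[1]));
        }
    }
}
```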
FlowDriver.java:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FlowDriver {

    public static void main(String[] args) throws Exception {
        // 1. Get the configuration and a job instance
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        // 2. Set the jar
        job.setJarByClass(FlowDriver.class);
        // 3. Set the mapper and reducer classes
        job.setMapperClass(FlowMapper.class);
        job.setReducerClass(FlowReducer.class);
        // 4. Set the mapper output key/value types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(FlowBean.class);
        // 5. Set the final output key/value types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(FlowBean.class);
        // No custom partitioner in this program; see program (2):
        // job.setPartitionerClass(FlowPartitioner.class);
        // Set the number of reduce tasks to match the partition count:
        // job.setNumReduceTasks(5);
        // 6. Set the input and output paths
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // 7. Submit the job
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}
```
Uploading the MapReduce program to the Linux system
Package the program in IntelliJ IDEA:
File --> Project Structure --> Artifacts --> + --> JAR --> From modules with dependencies
Select FlowDriver as the Main Class and click OK.
Then choose Build --> Build Artifacts --> Build; the jar is generated under the out directory.
After packaging, locate the jar on your machine and use scp to copy it to the Linux system running the cluster.
In that directory, run the jar with hadoop to compute each phone number's total upstream, downstream, and overall traffic from the source file.
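The invocation looks like the following. The jar name and HDFS paths here are placeholders to substitute with your own; args[0] is the input directory and args[1] the output directory, which must not already exist:

```shell
# flow.jar and /xx/output1 are example names, not from the original project
hadoop jar flow.jar FlowDriver /xx/input /xx/output1
```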
The results can be viewed on the cluster.
Writing MapReduce program (2)
Using the per-phone totals from (1), partition the phone numbers by prefix and write each group to its own file.
This program consists of the classes FlowBean, FlowDriver, FlowMapper, FlowReducer, and NumPartitioner.
FlowBean.java:

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

// The bean must implement Writable so Hadoop can serialize it
public class FlowBean implements Writable {

    private long upFlow;
    private long downFlow;
    private long sumFlow;

    // Deserialization calls the no-arg constructor via reflection, so it is required
    public FlowBean() {
        super();
    }

    public FlowBean(long upFlow, long downFlow) {
        super();
        this.upFlow = upFlow;
        this.downFlow = downFlow;
        this.sumFlow = upFlow + downFlow;
    }

    public long getSumFlow() { return sumFlow; }
    public void setSumFlow(long sumFlow) { this.sumFlow = sumFlow; }
    public long getUpFlow() { return upFlow; }
    public void setUpFlow(long upFlow) { this.upFlow = upFlow; }
    public long getDownFlow() { return downFlow; }
    public void setDownFlow(long downFlow) { this.downFlow = downFlow; }

    /**
     * Serialization method
     *
     * @param out
     * @throws IOException
     */
    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(upFlow);
        out.writeLong(downFlow);
        out.writeLong(sumFlow);
    }

    /**
     * Deserialization method.
     * Note: fields must be read back in exactly the order they were written.
     *
     * @param in
     * @throws IOException
     */
    @Override
    public void readFields(DataInput in) throws IOException {
        upFlow = in.readLong();
        downFlow = in.readLong();
        sumFlow = in.readLong();
    }

    @Override
    public String toString() {
        return upFlow + "\t" + downFlow + "\t" + sumFlow;
    }

    public void set(long upFlow, long downFlow) {
        this.upFlow = upFlow;
        this.downFlow = downFlow;
        this.sumFlow = upFlow + downFlow;
    }
}
```
FlowMapper.java:

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class FlowMapper extends Mapper<LongWritable, Text, Text, FlowBean> {

    // Reuse one bean and one Text instead of allocating inside map(),
    // which runs once per input line
    FlowBean bean = new FlowBean();
    Text k = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // 1. Read one line of program (1)'s output: phone \t up \t down \t sum
        String line = value.toString();
        // 2. Split it into fields
        String[] fields = line.split("\t");
        // 3. The phone number is now the first field
        String phoneNum = fields[0];
        long upFlow = Long.parseLong(fields[fields.length - 3]);
        long downFlow = Long.parseLong(fields[fields.length - 2]);
        bean.set(upFlow, downFlow);
        k.set(phoneNum);
        // 4. Emit (phone number, flow bean)
        context.write(k, bean);
    }
}
```
FlowReducer.java:

```java
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class FlowReducer extends Reducer<Text, FlowBean, Text, FlowBean> {

    @Override
    protected void reduce(Text key, Iterable<FlowBean> values, Context context)
            throws IOException, InterruptedException {
        // 1. Sum the traffic for this phone number
        long sum_upFlow = 0;
        long sum_downFlow = 0;
        for (FlowBean bean : values) {
            sum_upFlow += bean.getUpFlow();
            sum_downFlow += bean.getDownFlow();
        }
        // 2. Emit the totals
        context.write(key, new FlowBean(sum_upFlow, sum_downFlow));
    }
}
```
NumPartitioner.java:

```java
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

/*
 * K2, V2 correspond to the map output key/value types.
 */
public class NumPartitioner extends Partitioner<Text, FlowBean> {

    @Override
    public int getPartition(Text key, FlowBean value, int numPartitions) {
        // 1. Get the first three digits of the phone number
        String preNum = key.toString().substring(0, 3);
        int partition;
        // 2. Decide the partition by prefix
        if (preNum.startsWith("13")) {
            partition = 0; // 13* prefix
        } else if (preNum.startsWith("15")) {
            partition = 1; // 15* prefix
        } else {
            partition = 2; // other prefixes
        }
        return partition;
    }
}
```
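The prefix logic can be exercised on its own, without the Hadoop types. A standalone sketch (phone numbers below are invented examples):

```java
public class PartitionDemo {
    // Same prefix-to-partition mapping as NumPartitioner.getPartition,
    // extracted into a plain method for testing
    public static int partitionFor(String phone) {
        String preNum = phone.substring(0, 3);
        if (preNum.startsWith("13")) return 0;
        if (preNum.startsWith("15")) return 1;
        return 2;
    }

    public static void main(String[] args) {
        System.out.println(partitionFor("13726230503")); // 0
        System.out.println(partitionFor("15013685858")); // 1
        System.out.println(partitionFor("18320173382")); // 2
    }
}
```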
FlowDriver.java:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FlowDriver {

    public static void main(String[] args) throws Exception {
        // 1. Get the configuration and a job instance
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        // 2. Set the jar
        job.setJarByClass(FlowDriver.class);
        // 3. Set the mapper and reducer classes
        job.setMapperClass(FlowMapper.class);
        job.setReducerClass(FlowReducer.class);
        // 4. Set the mapper output key/value types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(FlowBean.class);
        // 5. Set the final output key/value types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(FlowBean.class);
        // Set the partitioner
        job.setPartitionerClass(NumPartitioner.class);
        // Set the number of reduce tasks to match the number of
        // partitions in NumPartitioner
        job.setNumReduceTasks(3);
        // 6. Set the input and output paths
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // 7. Submit the job
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}
```
Package the program and upload it to the Linux system running the cluster, as above.
Run it and view the results on the cluster. With three reduce tasks, the job produces three output files:
Results for numbers starting with 13 (partition 0, part-r-00000):
Results for numbers starting with 15 (partition 1, part-r-00001):
Results for other numbers (partition 2, part-r-00002):
Copyright belongs to the original author, m0_70276855. If there is any infringement, please contact us for removal.