Deployment guide: sending Flume data to Spark Streaming in real time
1. Create the data source file
echo "hello world" >> /tmp/word.txt
2. Install Flume
Follow this CSDN guide:
https://blog.csdn.net/weixin_43859091/article/details/123635082
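After installing, a quick sanity check (the later steps assume Flume ends up in /usr/local/flume) is to print the version:
/usr/local/flume/bin/flume-ng version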
3. Write the following spark.properties file and save it as /usr/local/flume/conf/spark.properties:
# Name the agent's source, channel, and sink
a1.sources = r1
a1.channels = c1
a1.sinks = k1
# Source: tail the data file and emit each new line as an event
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /tmp/word.txt
# Channel: buffer events in memory
a1.channels.c1.type = memory
# Sink: push events over Avro to the Spark Streaming receiver
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = 127.0.0.1
a1.sinks.k1.port = 44444
# Wire the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
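The memory channel runs with Flume's defaults here (a capacity of 100 events). If tail produces bursts faster than the sink drains them, the standard memory-channel settings can be raised, for example:
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100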
4. Download Spark 2.4.8
https://mirrors.aliyun.com/apache/spark/spark-2.4.8/spark-2.4.8-bin-hadoop2.7.tgz
Upload it to the Linux machine with Xftp, extract it with tar zxvf spark-2.4.8-bin-hadoop2.7.tgz, then move it into place:
mv /root/spark-2.4.8-bin-hadoop2.7 /usr/local/spark2
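To confirm the install works, print the version:
/usr/local/spark2/bin/spark-submit --version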
5. Configure environment variables
vim /etc/profile
export PYSPARK_PYTHON=python3
export SPARK_HOME=/usr/local/spark2
Exit the current Xshell session and reconnect (or run source /etc/profile) so the new variables take effect.
6. Install dos2unix and convert the script's line endings
yum install -y dos2unix
dos2unix /root/SparkStreamingFlume.py
(dos2unix strips the Windows-style CRLF line endings a script edited on Windows carries, which would otherwise break it on Linux.)
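7. Write the SparkStreamingFlume.py script
Below is a minimal sketch of what /root/SparkStreamingFlume.py could contain (an assumed version, not the original script): a word count over the Flume Avro stream, listening on the 127.0.0.1:44444 address configured for the sink in step 3 and producing the (hello,3)-style output checked in step 10. It uses the push-based FlumeUtils receiver, which requires the two spark-streaming-flume jars added in step 8.

# Minimal sketch of /root/SparkStreamingFlume.py (assumed contents)
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.flume import FlumeUtils

sc = SparkContext(appName="SparkStreamingFlume")
ssc = StreamingContext(sc, 5)  # 5-second micro-batches; any small interval works for this demo

# Listen on the address the Flume avro sink pushes to (see spark.properties)
stream = FlumeUtils.createStream(ssc, "127.0.0.1", 44444)

# Each Flume event arrives as a (headers, body) pair; count words in the body
counts = (stream.map(lambda event: event[1])
          .flatMap(lambda line: line.split(" "))
          .map(lambda word: (word, 1))
          .reduceByKey(lambda a, b: a + b))
counts.pprint()

ssc.start()
ssc.awaitTermination()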
8. Start the Spark Streaming job
First upload spark-streaming-flume_2.11-2.4.8.jar and spark-streaming-flume-assembly_2.11-2.4.8.jar into the /usr/local/spark2/jars directory so the Flume connector is on the classpath, then submit the job:
/usr/local/spark2/bin/spark-submit /root/SparkStreamingFlume.py
Then double-click Xshell to open a new tab and run netstat -ant (or netstat -ant | grep 44444) to check whether port 44444 is listening. If it is, the streaming job has started successfully.
9. Start the Flume agent
/usr/local/flume/bin/flume-ng agent -n a1 -c /usr/local/flume/conf/ -f /usr/local/flume/conf/spark.properties -Dflume.root.logger=INFO,console
The agent has started normally if no ERROR messages appear in its console output.
10. Keep appending content to /tmp/word.txt with echo
Double-click Xshell to open yet another tab, then run:
echo "hello world" >> /tmp/word.txt
Meanwhile, watch the Xshell window running the streaming job: if word-count output such as (hello,3) appears, the whole pipeline is working. A small loop for continuous appending is sketched below.
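To keep data flowing without retyping the command, one possible loop (a convenience, not from the original steps) is:
while true; do echo "hello world" >> /tmp/word.txt; sleep 1; done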