

Flume and Kafka Integration Development (and a Recent Diary)

3.16

Ever since I got Kafka working this morning, my progress today has been unusually smooth. Sure enough, when the versions match, hardly anything goes wrong; when the versions are wrong, everything is wrong. I normally get through seven or eight videos at most, but today I watched about twelve or thirteen, wrapping up around 9:40. I'll organize the notes tomorrow; I'm exhausted.

I just got some good news: the library reopens the day after tomorrow. Genuinely delighted. One more day of hard grinding in the dorm tomorrow, and then I can go back to the library.

3.17

The library is limiting entry: only 400 people can reserve online. Even camping on the booking page at the opening second, I didn't get a spot; everything was gone within half a minute. Another day stuck in the dorm.

3.18

7:59 p.m.

Seat booking opens at eight. Go, go, go!

The result!!

Got one! There were 600 seats this time, and I finally grabbed a spot.


3.19

Sitting in the library, I'm now writing up Kafka and Flume integration development.

A quick update first: I got so absorbed today that one bug took the entire day. I finally cracked it, and it was a version problem yet again, but in the process I missed the 8 p.m. library seat booking. I'm numb. Another day stuck in the dorm, and another round of booking tomorrow.

Flume and Kafka Integration Development

Create the following file in the conf folder under the Flume root directory:

flume-hbase.simple.properties

# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

# The configuration file needs to define the sources,
# the channels and the sinks.
# Sources, channels and sinks are defined per agent,
# in this case called 'agent'

agent.sources = execSource
agent.channels = memoryChannel
agent.sinks = hbaseSink

# For each one of the sources, the type is defined

agent.sources.execSource.type = exec
agent.sources.execSource.channels = memoryChannel
agent.sources.execSource.command = tail -F /home/hadoop/data/flume/logs/test.log

# Each channel's type is defined.

agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 100

# Each sink's type must be defined
agent.sinks.hbaseSink.type = asynchbase
# HBase table to write to (must match the table created below)
agent.sinks.hbaseSink.table = wordcount
# Column family of that table
agent.sinks.hbaseSink.columnFamily = frequency
agent.sinks.hbaseSink.serializer = org.apache.flume.sink.hbase.SimpleAsyncHbaseEventSerializer
agent.sinks.hbaseSink.channel = memoryChannel
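
Since the title promises Kafka integration, it may help to see what the sink section would look like with Flume's built-in Kafka sink in place of the HBase sink. This is a minimal sketch assuming Flume 1.7+ property names; the broker address and topic name are placeholders, not values from the original post:

agent.sinks.kafkaSink.type = org.apache.flume.sink.kafka.KafkaSink
# Kafka broker(s) to connect to (placeholder address)
agent.sinks.kafkaSink.kafka.bootstrap.servers = localhost:9092
# Topic the events are published to (placeholder name)
agent.sinks.kafkaSink.kafka.topic = test-topic
agent.sinks.kafkaSink.channel = memoryChannel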

**Modify flume-env.sh**

Add the following two lines:

export HADOOP_HOME=/home/hadoop/app/hadoop  # Hadoop root directory
export HBASE_HOME=/home/hadoop/app/hbase    # HBase root directory

Start ZooKeeper

Start HDFS

Start HBase

(There is no need to start YARN)
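
A minimal sketch of these start-up commands, assuming the stock ZooKeeper, Hadoop, and HBase launch scripts are on the PATH (script locations vary by install):

# Start ZooKeeper
zkServer.sh start

# Start HDFS (NameNode + DataNodes); YARN is not needed here
start-dfs.sh

# Start HBase (master + region servers)
start-hbase.sh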

Create the HBase test table

The table name and column family must match the ones in flume-hbase.simple.properties. In the HBase shell, run:

create 'wordcount', 'frequency'
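
With the table in place, the agent can be launched against the config file. A minimal sketch, run from the Flume root directory; the --name value must match the 'agent' prefix used in the properties file:

bin/flume-ng agent \
  --conf conf \
  --conf-file conf/flume-hbase.simple.properties \
  --name agent \
  -Dflume.root.logger=INFO,console

To smoke-test the pipeline, append a line to the tailed log file (the path configured on the exec source) and then scan the table from the HBase shell:

echo "hello flume" >> /home/hadoop/data/flume/logs/test.log

scan 'wordcount'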

