Flume Advanced Components: the Channel Selector

2023-10-12 Big Data / Flume

A ChannelSelector is the component that routes events from a Source to one or more Channels; it decides which Channel(s) an event should be delivered to. Flume provides two built-in selectors:

  1. Replicating Channel Selector: replicates each event to every configured Channel, so all Channels receive the same event. This is the default.
  2. Multiplexing Channel Selector: picks the target Channel based on the value of an event header, so events can be routed to different Channels under different conditions (the key configuration lines for each are sketched below).
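For quick reference, only a handful of lines actually differ between the two selectors. A minimal sketch, assuming an agent named a1 with source r1 and channels c1/c2:

# Replicating (the default): every listed channel receives a copy of each event
a1.sources.r1.selector.type = replicating
# channels marked optional will not fail the transaction if they cannot accept an event
a1.sources.r1.selector.optional = c2

# Multiplexing: route by the value of an event header (here, city)
a1.sources.r1.selector.type = multiplexing
a1.sources.r1.selector.header = city
a1.sources.r1.selector.mapping.bj = c1
a1.sources.r1.selector.default = c2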

# Hands-on: Shipping Logs to Both Kafka and HDFS

To ship logs to both Kafka and HDFS, the idea is to connect channel c1 to a Kafka Sink and channel c2 to an HDFS Sink, so the same data is stored on both platforms. (Note: to keep this Flume walkthrough simple, we use a Logger Sink in place of the Kafka Sink for the demo, since Kafka requires its own environment; the principle is identical.)
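If you do have a Kafka cluster available, sink k1 can point at it instead of the Logger Sink. A sketch of such a configuration, where the broker address and topic name are assumptions for illustration:

# hypothetical Kafka Sink configuration for k1 (example broker and topic)
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.bootstrap.servers = 192.168.133.103:9092
a1.sinks.k1.kafka.topic = flume-logs
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1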

Diagram: ChannelSelector案例一.drawio (wiring for this example)

We use a netcat source so input can be typed straight from the console; it was used in the introductory example and makes testing easy.

a1.sources = r1
a1.channels = c1 c2
a1.sinks = k1 k2

# source component configuration
a1.sources.r1.type = netcat 
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 44444
 
# channel selector configuration [replicating is the default, so this line can be omitted]
a1.sources.r1.selector.type = replicating

# channel components configuration [two channels]
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100


a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100

# sink components configuration [two sinks]
a1.sinks.k1.type = logger

a1.sinks.k2.type = hdfs
a1.sinks.k2.hdfs.path = hdfs://192.168.133.103:9000/replicating
a1.sinks.k2.hdfs.fileType = DataStream
a1.sinks.k2.hdfs.writeFormat = Text
a1.sinks.k2.hdfs.rollInterval = 3600
a1.sinks.k2.hdfs.rollSize = 4194304
a1.sinks.k2.hdfs.rollCount = 0
a1.sinks.k2.hdfs.useLocalTimeStamp = true
a1.sinks.k2.hdfs.filePrefix = data
a1.sinks.k2.hdfs.fileSuffix = .log

# wire the components together
a1.sources.r1.channels = c1 c2
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c2
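With the configuration saved as, say, replicating.conf (the file name is an assumption), starting the agent and feeding it test data might look like this:

# start the agent from the Flume installation directory
bin/flume-ng agent --conf conf --conf-file conf/replicating.conf \
  --name a1 -Dflume.root.logger=INFO,console

# in another terminal, connect to the netcat source and type a few lines
nc localhost 44444
hello flume

Each line typed into nc should show up in the agent's console output via the Logger Sink (c1) and be written under /replicating on HDFS via the HDFS Sink (c2).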

# Hands-on: Routing Data to Different Channels by Rule

Suppose each event is a JSON record with a city attribute, and we want to route events to different channels depending on the city: events from Beijing (bj) go to c1, everything else goes to c2.
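For instance, with made-up sample records like the ones below, the first line would be routed to c1 and the second would fall through to the default and land in c2:

{"name":"zhangsan","city":"bj"}
{"name":"lisi","city":"sh"}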

Diagram: ChannelSelector案例二.drawio (wiring for this example)

a1.sources = r1
a1.channels = c1 c2
a1.sinks = k1 k2

# source component configuration
a1.sources.r1.type = netcat 
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 44444

# source interceptor: regex_extractor pulls the city value out of the JSON body into an event header named city
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = regex_extractor
a1.sources.r1.interceptors.i1.regex = "city":"(\\w+)"
a1.sources.r1.interceptors.i1.serializers = s1
a1.sources.r1.interceptors.i1.serializers.s1.name = city

# channel selector configuration
a1.sources.r1.selector.type = multiplexing
a1.sources.r1.selector.header = city
a1.sources.r1.selector.mapping.bj = c1
a1.sources.r1.selector.default = c2

# channel components configuration [two channels]
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100

# sink components configuration [two sinks]
a1.sinks.k1.type = logger

a1.sinks.k2.type = hdfs
a1.sinks.k2.hdfs.path = hdfs://192.168.133.103:9000/multiplexing
a1.sinks.k2.hdfs.fileType = DataStream
a1.sinks.k2.hdfs.writeFormat = Text
a1.sinks.k2.hdfs.rollInterval = 3600
a1.sinks.k2.hdfs.rollSize = 4194304
a1.sinks.k2.hdfs.rollCount = 0
a1.sinks.k2.hdfs.useLocalTimeStamp = true
a1.sinks.k2.hdfs.filePrefix = data
a1.sinks.k2.hdfs.fileSuffix = .log

# wire the components together
a1.sources.r1.channels = c1 c2
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c2
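As before, a minimal sketch of running and testing this configuration (multiplexing.conf is an assumed file name):

bin/flume-ng agent --conf conf --conf-file conf/multiplexing.conf \
  --name a1 -Dflume.root.logger=INFO,console

# in another terminal, send the sample JSON records
nc localhost 44444
{"name":"zhangsan","city":"bj"}
{"name":"lisi","city":"sh"}

The record with city bj should appear in the Logger Sink's console output (c1), while the other record should be written under /multiplexing on HDFS (c2).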