Flume: A Distributed Log Collection Framework
1. Analysis of the Current Situation
- Web servers / application servers are scattered across many machines
- We want to run statistical analysis on the Hadoop big-data platform
- So how do the logs get onto the Hadoop platform?
- Candidate solutions and their problems
- How do we move our data from the other servers onto Hadoop?
  - shell: `cp` to a machine in the Hadoop cluster, then `hdfs dfs -put ....` (many problems are hard to solve this way: fault tolerance, load balancing, timeliness, compression)
  - Flume: move the logs from A to B
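The shell-script route can be sketched as below. Everything here is a hypothetical stand-in: `mktemp` replaces the real web-server log directory and the Hadoop gateway directory, and the actual `hdfs dfs -put` step is left as a comment since it needs a live cluster.

```shell
# A sketch of the hand-rolled shell approach (all paths are stand-ins).
LOG_DIR=$(mktemp -d)     # stands in for the web server's log directory
STAGING=$(mktemp -d)     # stands in for a directory on a Hadoop gateway host

echo "GET /index.html 200" > "$LOG_DIR/access.log"

# 1. Compress to save bandwidth (adds CPU cost and latency):
gzip -c "$LOG_DIR/access.log" > "$STAGING/access.log.gz"

# 2. Ship into HDFS -- on a real gateway this would be something like:
#    hdfs dfs -put "$STAGING/access.log.gz" /logs/$(date +%Y%m%d)/
# Retries on failure, load balancing across collectors, and near-real-time
# delivery all have to be hand-scripted -- exactly what Flume automates.
ls "$STAGING"
```

The script "works" for one file, but every concern listed above (fault tolerance, load balancing, timeliness, compression) becomes custom code you must maintain.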
2. Flume Overview
- Flume website: http://flume.apache.org/
  > Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data.
- Flume design goals
  - Reliability: highly reliable delivery
  - Scalability: modules are extensible
  - Manageability: agents are easy to manage
- Comparison with similar products
  - Flume: Cloudera/Apache, written in Java
  - Logstash: part of the ELK stack (Elasticsearch, Logstash, Kibana)
  - Scribe: Facebook, written in C/C++; load balancing is weak and it is no longer maintained
  - Chukwa: Yahoo/Apache, written in Java; load balancing is weak and it is no longer maintained
  - Fluentd: similar to Flume, written in Ruby
- Flume history
  - Cloudera released 0.9.2, known as Flume-OG
  - 2011: the FLUME-728 refactoring, a major milestone (Flume-NG), donated to the Apache community
  - July 2012: version 1.0
  - May 2015: version 1.6
  - ~ version 1.7
3. Flume Architecture and Core Components
Flume has three core components:
- Source: collection; specifies where the data comes from (Avro, Thrift, Spooling Directory, Kafka, Exec, ...)
- Channel: aggregation; buffers the data in transit (Memory, File and Kafka channels are the most widely used)
- Sink: writes the data to a destination (HDFS, Hive, Logger, Avro, Thrift, File, Elasticsearch, HBase, Kafka, ...)
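Whatever the concrete types, every agent definition follows the same wiring pattern. A generic sketch (the names `a1`, `s1`, `c1`, `k1` are arbitrary placeholders, and the angle-bracket values must be replaced with real types):

```properties
# Name the components, give each one a type, then wire them together.
a1.sources  = s1
a1.channels = c1
a1.sinks    = k1

a1.sources.s1.type  = <source type, e.g. netcat / exec / avro>
a1.channels.c1.type = memory
a1.sinks.k1.type    = <sink type, e.g. logger / hdfs / avro>

# A source may feed several channels ("channels", plural);
# a sink drains exactly one channel ("channel", singular).
a1.sources.s1.channels = c1
a1.sinks.k1.channel    = c1
```

The plural `channels` on the source versus singular `channel` on the sink is easy to get backwards and is a common source of startup errors.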
4. Flume Environment Setup
- Prerequisites
  - Java Runtime Environment: Java 1.8 or later
  - Memory: sufficient memory for the configured sources, channels and sinks
  - Disk space: sufficient disk space for the configured channels and sinks
  - Directory permissions: read/write permissions on the directories the agent uses
- Step 1: install the JDK (download, unpack, install, set environment variables)
- Step 2: install Flume (download, unpack, install, set environment variables, verify with `flume-ng version`)
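Once both tarballs are unpacked, the environment variables can be wired up roughly as follows (the install paths are hypothetical examples; substitute the directories you actually used):

```shell
# Hypothetical install locations -- adjust to wherever you unpacked the tarballs.
export JAVA_HOME=/usr/local/jdk1.8.0_144
export FLUME_HOME=/usr/local/apache-flume-1.6.0-bin
export PATH="$FLUME_HOME/bin:$JAVA_HOME/bin:$PATH"

# Sanity check (works once Flume really lives at $FLUME_HOME):
# flume-ng version
echo "$FLUME_HOME"
```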
5. Flume in Practice
- Requirement 1: collect data from a given network port and print it to the console
- flume-conf.properties:
  - A) configure the Source
  - B) configure the Channel
  - C) configure the Sink
  - D) wire the three components together
```properties
# example.conf: A single-node Flume configuration

# a1: agent name
# r1: source name
# k1: sink name
# c1: channel name

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
```
- Start the agent:
```shell
# General form:
flume-ng agent \
--name $agent_name \
--conf conf \
--conf-file conf/flume-conf.properties \
-Dflume.root.logger=INFO,console

# For this example:
flume-ng agent \
--name a1 \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/example.conf \
-Dflume.root.logger=INFO,console
```
- Requirement 2: monitor a file and collect newly appended data to the console in real time
- 1. Agent selection: exec source + memory channel + logger sink
- 2. Configuration file:
```properties
# exec-memory-logger.conf: A single-node Flume configuration

# a1: agent name
# r1: source name
# k1: sink name
# c1: channel name

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /home/k.o/data/data.log
a1.sources.r1.shell = /bin/sh -c

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
```
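What the exec source does can be reproduced by hand: `tail -F` follows a file across rotations and emits every newly appended line. A small demonstration with temporary files standing in for /home/k.o/data/data.log (note that the Flume documentation cautions the exec source offers no delivery guarantee if the agent dies):

```shell
# Temporary stand-ins for the real log file and the captured output.
DATA_FILE=$(mktemp)
TAIL_OUT=$(mktemp)

# Follow the file the same way the exec source's command does:
tail -n 0 -F "$DATA_FILE" > "$TAIL_OUT" 2>/dev/null &
TAIL_PID=$!
sleep 1

# Simulate the application appending a log line:
echo "hello flume" >> "$DATA_FILE"
sleep 2

kill "$TAIL_PID"
cat "$TAIL_OUT"
```

Each line that `tail -F` emits becomes one Flume event, which the logger sink then prints to the console.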
- Start the agent:
```shell
# General form:
flume-ng agent \
--name $agent_name \
--conf conf \
--conf-file conf/flume-conf.properties \
-Dflume.root.logger=INFO,console

# For this example:
flume-ng agent \
--name a1 \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/exec-memory-logger.conf \
-Dflume.root.logger=INFO,console
```
- Requirement 3: collect logs from server A to server B in real time
- Technology selection:
  1. exec source + memory channel + avro sink (on machine A)
  2. avro source + memory channel + logger sink (on machine B)
```properties
# exec-memory-avro.conf: A single-node Flume configuration

# exec-memory-avro: agent name
# exec-source: source name
# avro-sink: sink name
# memory-channel: channel name

# Name the components on this agent
exec-memory-avro.sources = exec-source
exec-memory-avro.sinks = avro-sink
exec-memory-avro.channels = memory-channel

# Describe/configure the source
exec-memory-avro.sources.exec-source.type = exec
exec-memory-avro.sources.exec-source.command = tail -F /home/k.o/data/data.log
exec-memory-avro.sources.exec-source.shell = /bin/sh -c

# Describe the sink
exec-memory-avro.sinks.avro-sink.type = avro
exec-memory-avro.sinks.avro-sink.hostname = localhost
exec-memory-avro.sinks.avro-sink.port = 44444

# Use a channel which buffers events in memory
exec-memory-avro.channels.memory-channel.type = memory
exec-memory-avro.channels.memory-channel.capacity = 1000
exec-memory-avro.channels.memory-channel.transactionCapacity = 100

# Bind the source and sink to the channel
exec-memory-avro.sources.exec-source.channels = memory-channel
exec-memory-avro.sinks.avro-sink.channel = memory-channel
```
```properties
# avro-memory-logger.conf: A single-node Flume configuration

# avro-memory-logger: agent name
# avro-source: source name
# logger-sink: sink name
# memory-channel: channel name

# Name the components on this agent
avro-memory-logger.sources = avro-source
avro-memory-logger.sinks = logger-sink
avro-memory-logger.channels = memory-channel

# Describe/configure the source
avro-memory-logger.sources.avro-source.type = avro
avro-memory-logger.sources.avro-source.bind = localhost
avro-memory-logger.sources.avro-source.port = 44444

# Describe the sink
avro-memory-logger.sinks.logger-sink.type = logger

# Use a channel which buffers events in memory
avro-memory-logger.channels.memory-channel.type = memory
avro-memory-logger.channels.memory-channel.capacity = 1000
avro-memory-logger.channels.memory-channel.transactionCapacity = 100

# Bind the source and sink to the channel
avro-memory-logger.sources.avro-source.channels = memory-channel
avro-memory-logger.sinks.logger-sink.channel = memory-channel
```
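The one contract that must hold in this chain is that machine A's avro sink points at the host and port where machine B's avro source listens. A sketch with hypothetical host names (the configs above use `localhost` on both sides because they run on a single machine):

```properties
# On machine A (hostname below is a hypothetical placeholder):
exec-memory-avro.sinks.avro-sink.hostname = machineB.example.com
exec-memory-avro.sinks.avro-sink.port     = 44444

# On machine B, bind the same port (0.0.0.0 listens on all interfaces):
avro-memory-logger.sources.avro-source.bind = 0.0.0.0
avro-memory-logger.sources.avro-source.port = 44444
```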
- Start the agents:
```shell
# Start avro-memory-logger first, so it is listening before the avro sink connects:
flume-ng agent \
--name avro-memory-logger \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/avro-memory-logger.conf \
-Dflume.root.logger=INFO,console

# Then start exec-memory-avro:
flume-ng agent \
--name exec-memory-avro \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/exec-memory-avro.conf \
-Dflume.root.logger=INFO,console
```
- The collection flow:
  - Machine A monitors a file: when users visit the main site, their behavior is logged to access.log
  - The avro sink sends each newly produced log entry to the hostname and port that the matching avro source listens on
  - The receiving agent's logger sink then prints the collected logs to the console