|NO.Z.00005|——————————|^^ Deployment ^^|——|Hadoop&kafka.V05|

[BigDataHadoop:Hadoop&kafka.V05] [BigDataHadoop.kafka][|Chapter 1|Hadoop ecosystem stack|kafka|Kafka standalone deployment|jdk.v8u231|zookeeper.v3.4.14|kafka.v2.12|]








I. Installing and configuring Kafka: a Java environment is a prerequisite
### --- [Kafka architecture and practice]

~~~     [Deploy Java.v8u231]
~~~     [zookeeper.v3.4.14]
~~~     [kafka2.12-1.0.2]
### --- Check the bundled OpenJDK

~~~     # Check the JDK environment that ships with the system
[root@hadoop ~]# rpm -qa | grep java
~~~     # If a bundled OpenJDK is present, remove it
[root@hadoop ~]# rpm -e java-1.6.0-openjdk-1.6.0.41-1.13.13.1.el6_8.x86_64 tzdata-java-2016j-1.el6.noarch java-1.7.0-openjdk-1.7.0.131-2.6.9.0.el6_8.x86_64 --nodeps
### --- Install the JDK

~~~     # Installation path for all software
[root@hadoop ~]# mkdir -p /opt/yanqi/servers
~~~     # Path where all software tarballs are kept
[root@hadoop ~]# mkdir -p /opt/yanqi/software
[root@hadoop ~]# cd /opt/yanqi/software/
~~~     # Upload the JDK tarball to /opt/yanqi/software and extract it
[root@hadoop software]# tar -zxvf jdk-8u231-linux-x64.tar.gz -C ../servers/
### --- Configure environment variables

~~~     # Configure environment variables
[root@hadoop ~]# vim /etc/profile
export JAVA_HOME=/opt/yanqi/servers/jdk1.8.0_231
export PATH=$JAVA_HOME/bin:$PATH
~~~     # After editing, remember to run source /etc/profile so the change takes effect
[root@hadoop ~]# source /etc/profile
### --- Check the JDK version

[root@hadoop ~]# java -version
java version "1.8.0_231"
Java(TM) SE Runtime Environment (build 1.8.0_231-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.231-b11, mixed mode)
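The same check can be scripted. A minimal sketch — the `jdk_version` helper is hypothetical, not part of the JDK — that parses the version token out of output like the above:

```shell
# Hypothetical helper: pull the quoted version token out of `java -version`
# output. The JVM prints it to stderr, hence the redirect in real use:
#   ver=$(java -version 2>&1 | jdk_version)
jdk_version() {
  sed -n 's/.*version "\([^"]*\)".*/\1/p' | head -n 1
}

# Parse the sample line shown above
ver=$(printf 'java version "1.8.0_231"\n' | jdk_version)
echo "$ver"
```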
II. Installing and configuring ZooKeeper
### --- Upload zookeeper-3.4.14.tar.gz to the server

[root@hadoop ~]# ll /opt/yanqi/software/
-rw-r--r-- 1 root root  37676320 Jul 17  2020 zookeeper-3.4.14.tar.gz
### --- Extract it to /opt/yanqi/servers:

[root@hadoop ~]# cd /opt/yanqi/software/
[root@hadoop software]# tar -zxvf zookeeper-3.4.14.tar.gz -C ../servers/
~~~     # Rename the directory, then copy zoo_sample.cfg to zoo.cfg

[root@hadoop ~]# mv /opt/yanqi/servers/zookeeper-3.4.14/ /opt/yanqi/servers/zookeeper
[root@hadoop ~]# cp /opt/yanqi/servers/zookeeper/conf/zoo_sample.cfg /opt/yanqi/servers/zookeeper/conf/zoo.cfg
### --- Set the directory where ZooKeeper stores its data (dataDir):

~~~     # Create the data directory
[root@hadoop ~]# mkdir -p /var/yanqi/zookeeper/data
~~~     # Edit the zoo.cfg file

[root@hadoop ~]# vim /opt/yanqi/servers/zookeeper/conf/zoo.cfg
dataDir=/var/yanqi/zookeeper/data            # line 12: update dataDir
dataLogDir=/var/yanqi/zookeeper/data/logs    # line 13: add dataLogDir
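For reference, a minimal standalone zoo.cfg after the edits above might look like the following — a sketch assuming the remaining keys keep their zoo_sample.cfg defaults and dataDir points at the /var/yanqi/zookeeper/data directory created above:

```properties
# Minimal standalone zoo.cfg (zoo_sample.cfg defaults plus the two edited keys)
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/var/yanqi/zookeeper/data
dataLogDir=/var/yanqi/zookeeper/data/logs
```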
### --- Configure environment variables: edit /etc/profile:

~~~     Set ZOO_LOG_DIR, the location where ZooKeeper writes its logs;
~~~     point ZOOKEEPER_PREFIX at the ZooKeeper install directory;
~~~     and add ZooKeeper's bin directory to PATH:
~~~     # Configure environment variables
[root@hadoop ~]# vim /etc/profile
#ZOOKEEPER_PREFIX
export ZOOKEEPER_PREFIX=/opt/yanqi/servers/zookeeper
export PATH=$PATH:$ZOOKEEPER_PREFIX/bin
export ZOO_LOG_DIR=/opt/yanqi/servers/zookeeper/log
 
~~~     # Apply the environment variables
[root@hadoop ~]# source /etc/profile 
### --- Verify the ZooKeeper configuration

~~~     # ZooKeeper has not been started yet, so status reports an error
[root@hadoop ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/yanqi/servers/zookeeper/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
### --- Start ZooKeeper:

[root@hadoop ~]# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/yanqi/servers/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
### --- Confirm ZooKeeper's status:

[root@hadoop ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/yanqi/servers/zookeeper/bin/../conf/zoo.cfg
Mode: standalone
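A script can confirm the mode without eyeballing the output. A sketch that parses the `Mode:` line of the sample status output above — in real use, pipe `zkServer.sh status` into the same `sed`:

```shell
# Sample output of `zkServer.sh status`, as shown above
status_output='ZooKeeper JMX enabled by default
Using config: /opt/yanqi/servers/zookeeper/bin/../conf/zoo.cfg
Mode: standalone'

# Extract the value of the "Mode:" line; empty if the server is not running
mode=$(printf '%s\n' "$status_output" | sed -n 's/^Mode: //p')
echo "$mode"
```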
III. Installing and configuring Kafka
### --- Upload kafka_2.12-1.0.2.tgz to the server and extract it:

[root@hadoop ~]# ll /opt/yanqi/software/kafka_2.12-1.0.2.tgz 
-rw-r--r-- 1 root root 44549705 Jul 17  2020 /opt/yanqi/software/kafka_2.12-1.0.2.tgz
 
[root@hadoop ~]# cd /opt/yanqi/software/
[root@hadoop software]# tar -zxvf kafka_2.12-1.0.2.tgz -C ../servers/

[root@hadoop ~]# cd /opt/yanqi/servers/
[root@hadoop servers]# mv kafka_2.12-1.0.2/ kafka
### --- Configure environment variables and apply them:

[root@hadoop ~]# vim /etc/profile
#KAFKA_HOME
export KAFKA_HOME=/opt/yanqi/servers/kafka
export PATH=$PATH:$KAFKA_HOME/bin
 
[root@hadoop ~]# source /etc/profile
### --- Edit server.properties in /opt/yanqi/servers/kafka/config:
~~~     zookeeper.connect is the address Kafka uses to reach ZooKeeper; here it is the locally
~~~     started instance at localhost:2181, and the trailing /myKafka is the root (chroot) path
~~~     for Kafka's nodes inside ZooKeeper:

~~~     # Create the directory where Kafka persists its data
[root@hadoop ~]# mkdir /opt/yanqi/servers/kafka/kafka-logs

[root@hadoop ~]# vim /opt/yanqi/servers/kafka/config/server.properties
############################# Zookeeper #############################
zookeeper.connect=localhost:2181/myKafka       # Kafka creates the /myKafka znode in ZooKeeper; its children hold Kafka's metadata

log.dirs=/opt/yanqi/servers/kafka/kafka-logs   # where Kafka stores its log segments, i.e. the message data itself, so this directory is critical
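The chroot suffix can be split off a zookeeper.connect value with plain parameter expansion; a sketch, assuming the value always carries a chroot path as it does here:

```shell
# Split a zookeeper.connect value of the form host:port/chroot
zk_connect="localhost:2181/myKafka"

ensemble="${zk_connect%%/*}"   # part before the first slash: the host:port list
chroot="/${zk_connect#*/}"     # part after it, with the leading slash restored

echo "$ensemble"
echo "$chroot"
```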
### --- Start Kafka: start/stop Kafka in the foreground
~~~     From the Kafka install directory, run:
      
~~~     # Start Kafka in the foreground
[root@hadoop ~]# cd /opt/yanqi/servers/kafka/bin
[root@hadoop bin]# kafka-server-start.sh ../config/server.properties
~~~     # On a successful start, the last line of console output shows the started state:
INFO [KafkaServer id=0] started (kafka.server.KafkaServer)

~~~     # Kafka is now running in the foreground; stop it with Ctrl+C or with the stop script.
[root@hadoop ~]# kafka-server-stop.sh
### --- Start/stop the Kafka service in the background

~~~     # Start Kafka as a daemon
[root@hadoop ~]# cd /opt/yanqi/servers/kafka/bin
[root@hadoop bin]# kafka-server-start.sh -daemon ../config/server.properties

~~~     # Stop the Kafka service
[root@hadoop ~]# kafka-server-stop.sh
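The two start modes differ only in the `-daemon` flag. A small hypothetical wrapper (the `kafka_start_cmd` name is mine, not part of Kafka) that composes the full command for either mode:

```shell
KAFKA_HOME=/opt/yanqi/servers/kafka

# Hypothetical wrapper: build the start command; $1 is "daemon" or "foreground"
kafka_start_cmd() {
  if [ "$1" = daemon ]; then
    echo "$KAFKA_HOME/bin/kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties"
  else
    echo "$KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/config/server.properties"
  fi
}

cmd=$(kafka_start_cmd daemon)
echo "$cmd"
```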
### --- Inspect the ZooKeeper nodes:

~~~     # Connect with the ZooKeeper CLI
[root@hadoop ~]# zkCli.sh

[zk: localhost:2181(CONNECTED) 6] ls /
[myKafka, zookeeper]
[zk: localhost:2181(CONNECTED) 7] ls /myKafka
[cluster, controller_epoch, controller, brokers, admin, isr_change_notification, consumers, log_dir_event_notification, latest_producer_id_block, config]
### --- Check the Kafka background process:

[root@hadoop ~]# ps aux | grep kafka
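Note that `ps aux | grep kafka` also matches the grep process itself, since its own command line contains the word. The usual bracket trick avoids this: `grep '[k]afka'` still matches "kafka", but the pattern as shown by ps contains no literal "kafka". A sketch against canned ps-style output:

```shell
# Two ps-style lines: the broker JVM and the grep process itself.
# The grep line shows the bracketed pattern, so it holds no literal "kafka".
sample='root  2301  java -Xmx1G ... kafka.Kafka config/server.properties
root  2402  grep [k]afka'

# grep -c counts matching lines: only the broker line matches
matches=$(printf '%s\n' "$sample" | grep -c '[k]afka')
echo "$matches"
```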








===============================END===============================




