Deploy ZooKeeper on three nodes: server1, server2, and server3.
1. Extract the ZooKeeper tarball into the /opt/module/ directory:
[huanchu@server1 software]$ tar -zxvf zookeeper-3.4.13.tar.gz -C /opt/module/
2. Create a zkData directory under /opt/module/zookeeper-3.4.13/:
[huanchu@server1 zookeeper-3.4.13]$ mkdir -p zkData
3. Create a file named myid in the /opt/module/zookeeper-3.4.13/zkData directory:
[huanchu@server1 zkData]$ touch myid
4. Edit the myid file:
[huanchu@server1 zkData]$ vi myid
Add the ID that corresponds to this server:
1
5. Distribute the installation to the other nodes:
[huanchu@server1 module]$ xsync zookeeper-3.4.13/
Then change the contents of myid to 2 on server2 and to 3 on server3 (a scripted alternative follows below).
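Editing each myid by hand is easy to get wrong; the IDs can also be written remotely. A minimal sketch, assuming passwordless ssh from server1 and the same installation path on every node:

for i in 1 2 3; do
    ssh server$i "echo $i > /opt/module/zookeeper-3.4.13/zkData/myid"
done

Because $i sits inside double quotes, it expands locally before ssh runs, so each node receives its own ID.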
6. Rename zoo_sample.cfg in the /opt/module/zookeeper-3.4.13/conf directory to zoo.cfg:
[huanchu@server1 conf]$ mv zoo_sample.cfg zoo.cfg
7. Open the zoo.cfg file:
[huanchu@server1 conf]$ vim zoo.cfg
Change the data storage path:
dataDir=/opt/module/zookeeper-3.4.13/zkData
Then add the cluster configuration:
#######################cluster##########################
server.1=server1:2888:3888
server.2=server2:2888:3888
server.3=server3:2888:3888
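In each server.A=B:C:D entry, A is the number in that server's myid file, B is its hostname, C is the port followers use to exchange data with the leader, and D is the port used for leader election. For reference, the full zoo.cfg for this cluster would look roughly like the sketch below; the timing values are the zoo_sample.cfg defaults, not taken from the steps above:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/module/zookeeper-3.4.13/zkData
clientPort=2181
server.1=server1:2888:3888
server.2=server2:2888:3888
server.3=server3:2888:3888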
8. Sync the zoo.cfg configuration file to the other nodes:
[huanchu@server1 conf]$ xsync zoo.cfg
9. Start ZooKeeper on each node:
[huanchu@server1 zookeeper-3.4.13]$ bin/zkServer.sh start
[huanchu@server2 zookeeper-3.4.13]$ bin/zkServer.sh start
[huanchu@server3 zookeeper-3.4.13]$ bin/zkServer.sh start
10. Check the status:
[huanchu@server1 zookeeper-3.4.13]$ bin/zkServer.sh status
JMX enabled by default
Using config: /opt/module/zookeeper-3.4.13/bin/../conf/zoo.cfg
Mode: follower
[huanchu@server2 zookeeper-3.4.13]$ bin/zkServer.sh status
JMX enabled by default
Using config: /opt/module/zookeeper-3.4.13/bin/../conf/zoo.cfg
Mode: leader
[huanchu@server3 zookeeper-3.4.13]$ bin/zkServer.sh status
JMX enabled by default
Using config: /opt/module/zookeeper-3.4.13/bin/../conf/zoo.cfg
Mode: follower
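Running zkServer.sh on every node separately gets repetitive. A small group-control script, again assuming passwordless ssh and the paths above; the name zk.sh is my own, not part of the ZooKeeper distribution:

#!/bin/bash
# zk.sh -- run start/stop/status on every node of the ensemble
case $1 in
start|stop|status)
    for host in server1 server2 server3; do
        echo "---------- $host ----------"
        # zkServer.sh needs JAVA_HOME; make sure the remote
        # non-interactive shell exports it (e.g. in /etc/profile.d/)
        ssh $host "/opt/module/zookeeper-3.4.13/bin/zkServer.sh $1"
    done
    ;;
*)
    echo "Usage: $0 {start|stop|status}"
    ;;
esac

With this in place, zk.sh start brings up the whole ensemble and zk.sh status reproduces the checks above in one command.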
Because starting the ZK nodes one by one is tedious, for convenience I use docker-compose to start the ZK cluster instead.
1. First, create a file named docker-compose.yml with the following contents:
version: '3.1'

services:
  zoo1:
    image: zookeeper
    restart: always
    hostname: zoo1
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
  zoo2:
    image: zookeeper
    restart: always
    hostname: zoo2
    ports:
      - 2182:2181
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=0.0.0.0:2888:3888 server.3=zoo3:2888:3888
  zoo3:
    image: zookeeper
    restart: always
    hostname: zoo3
    ports:
      - 2183:2181
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=0.0.0.0:2888:3888
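One caveat: this ZOO_SERVERS syntax matches the 3.4.x image (the stat output below shows 3.4.13). On zookeeper:3.5+ images each entry must also carry the client port, for example:

ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181

Pinning the tag, e.g. image: zookeeper:3.4.13, avoids surprises when the latest tag moves to a newer version.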
2. Next, run the following command in the directory containing docker-compose.yml to start the cluster:
COMPOSE_PROJECT_NAME=zk_cluster docker-compose up
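docker-compose up stays in the foreground; append -d to run the cluster detached. Note that COMPOSE_PROJECT_NAME also determines the name of the default network the containers join (zk_cluster_default here), which we will need when attaching a client container below:

COMPOSE_PROJECT_NAME=zk_cluster docker-compose up -d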
3. Use the docker-compose ps command to list the running ZK containers:
COMPOSE_PROJECT_NAME=zk_cluster docker-compose ps
4. Connect to the ZK cluster with the Docker command-line client.
From the docker-compose ps output we know the three hostnames are zoo1, zoo2, and zoo3, so we link to each of them:
docker run -it --rm \
--link zoo1:zk1 \
--link zoo2:zk2 \
--link zoo3:zk3 \
--net zk_cluster_default \
zookeeper zkCli.sh -server zk1:2181,zk2:2181,zk3:2181
5. Connect to the ZK cluster from the local host:
zkCli.sh -server localhost:2181,localhost:2182,localhost:2183
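Once connected with either method, a quick way to verify that replication works is to create a znode through one server and read it back; the /test path and its data are just examples, not from the original setup:

create /test hello
ls /
get /test

After the create, running ls / against any of the three servers should list test, since writes are replicated across the ensemble.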
6. Inspect the cluster.
We can use the nc command to connect to a given ZK server and send the stat four-letter command to view its status, for example:
huanchu-mbp:Software huanchu$ echo stat | nc 127.0.0.1 2181
Zookeeper version: 3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 04:05 GMT
Clients:
/172.19.0.1:51080[0](queued=0,recved=1,sent=0)
Latency min/avg/max: 0/0/0
Received: 2
Sent: 1
Connections: 1
Outstanding: 0
Zxid: 0x100000000
Mode: follower
Node count: 4
huanchu-mbp:Software huanchu$ echo stat | nc 127.0.0.1 2182
Zookeeper version: 3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 04:05 GMT
Clients:
/172.19.0.1:52416[0](queued=0,recved=1,sent=0)
Latency min/avg/max: 0/0/0
Received: 3
Sent: 2
Connections: 1
Outstanding: 0
Zxid: 0x0
Mode: follower
Node count: 4
huanchu-mbp:Software huanchu$ echo stat | nc 127.0.0.1 2183
Zookeeper version: 3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 04:05 GMT
Clients:
/172.19.0.1:49558[0](queued=0,recved=1,sent=0)
Latency min/avg/max: 0/0/0
Received: 2
Sent: 1
Connections: 1
Outstanding: 0
Zxid: 0x100000000
Mode: leader
Node count: 4
Proposal sizes last/min/max: -1/-1/-1
huanchu-mbp:Software huanchu$
From the output above we can see that zoo1 and zoo2 are followers while zoo3 is the leader, which confirms that the ZK cluster is really up and running.
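To spot-check all three servers at once, the same probe can be scripted over the mapped ports:

for port in 2181 2182 2183; do
    echo "--- port $port ---"
    echo stat | nc 127.0.0.1 $port | grep Mode
done

On a healthy three-node ensemble this prints exactly one leader and two followers.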