Building a MongoRocks Replicated Sharded Cluster with Docker (Docker & MongoDB & RocksDB & Replication & Sharding)


Building a MongoRocks Replicated Sharded Cluster with Docker

  • Preparation

  • Dependencies

    • Installation
    • Pulling the image
  • Basic single instance

  • Configured single instance

  • Permissions

    • Docker flags explained
    • Launch command
    • RocksDB options explained
    • Checking the startup log
    • Connection test
  • Sharded cluster of containers on an overlay network

  • Preparing the base environment

    • Creating the swarm overlay network
    • Testing overlay network connectivity
    • Creating data directories
    • Starting the config servers
    • Building the config server replica set
    • Starting the shard servers
    • Building the shard replica sets
    • Starting mongos
    • Configuring sharding
    • Testing
  • Deploying a service cluster with docker stack

  • Preparing the base environment

    • Creating data directories
    • Writing the docker stack deploy script
    • Starting the cluster
    • Verifying startup
    • Building the replica sets
    • Adding shards
    • Testing

Preparation

Dependencies

  • OS: CentOS 7.6

Installation

Follow the installation guide (link). A minimal sketch for CentOS 7 follows.
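Since the linked guide is not reproduced here, as a minimal sketch, installing Docker CE on CentOS 7 from Docker's official repository looks like this:

```
# Add Docker's yum repository and install Docker CE
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce
# Start the daemon and enable it on boot
sudo systemctl enable docker
sudo systemctl start docker
```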

Pulling the image

MongoDB changed part of its internal API after 3.4, which prevented Percona from integrating the RocksDB engine into later releases. The newest MongoDB version with RocksDB support is therefore 3.4, for which Percona publishes an official Docker image.


```
docker pull percona/percona-server-mongodb:3.4
```

Basic single instance

The basic command is simple:


```
docker run -p 27017:27017 -d percona/percona-server-mongodb:3.4 --storageEngine=rocksdb
```

We will not actually start this container; let's move straight to a properly configured instance.

Configured single instance

Permissions

Create the data directory and grant the in-container user permission to use it:


```
mkdir -p /root/volumns/mongorocks/db
chmod 777 -R /root/volumns/mongorocks/db
```

Docker flags explained

  • -v /root/volumns/mongorocks/db:/data/db maps the host data directory into the container
  • --name=mongorocks sets the container name used for network access
  • --hostname=mongorocks sets the hostname used for link-based access
  • --restart=always makes Docker restart the container automatically
  • --dbpath=/data/db sets the MongoDB data directory

Launch command

The full launch command:


```
docker run -d \
       --name=mongorocks \
       --hostname=mongorocks \
       --restart=always \
       -p 27017:27017 \
       -v /root/volumns/mongorocks/db:/data/db \
       -v /etc/localtime:/etc/localtime \
       percona/percona-server-mongodb:3.4 \
       --storageEngine=rocksdb \
       --dbpath=/data/db \
       --rocksdbCacheSizeGB=1 \
       --rocksdbCompression=snappy \
       --rocksdbMaxWriteMBPerSec=1024 \
       --rocksdbCrashSafeCounters=false \
       --rocksdbCounters=true \
       --rocksdbSingleDeleteIndex=false
```

RocksDB options explained

The full option list is in the official documentation (link).
The options you will typically change:

  • rocksdbCacheSizeGB sets the block cache size. It defaults to 30% of available physical memory. RocksDB caches data in two places: uncompressed data lives in the block cache, while compressed data lives in the kernel page cache.
  • rocksdbMaxWriteMBPerSec caps the write rate and defaults to 1 GiB/s. Currently only local NVMe SSDs can exceed that rate; a cloud ESSD reaches only about 480 MiB/s, and NVMe itself is bounded by NIC bandwidth (on Alibaba Cloud at most 4 Gib/s, i.e. roughly 500 MiB/s). Set the cap to match your deployment. The value also decides how CPU time is split between reads and writes: while it is below the disk's physical limit, a smaller value makes reads faster and writes slower, and a larger value does the opposite; once it exceeds the disk's limit, writes are saturated and raising it further only slows reads without speeding up writes. You can benchmark the disk with standard Linux commands, as in the sketch after this list (link).
  • rocksdbCompression selects the compression format. The default is snappy; the options are none, snappy, zlib, lz4 and lz4hc.
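The linked benchmark guide is not reproduced here; as a minimal sketch, sequential throughput can be measured with standard Linux tools (the test file path is a placeholder):

```
# Sequential write: 1 GiB in 1 MiB blocks, bypassing the page cache
dd if=/dev/zero of=/data/ddtest bs=1M count=1024 oflag=direct
# Sequential read of the same file, also bypassing the cache
dd if=/data/ddtest of=/dev/null bs=1M iflag=direct
rm -f /data/ddtest
# Quick buffered-read benchmark of the raw block device
hdparm -t /dev/sda
```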

Checking the startup log


```
docker logs mongorocks
```

A successful start looks like this:


```
2019-08-04T16:18:17.917+0000 I STORAGE  [main] [RocksDB] Block Cache Size GB: 1
2019-08-04T16:18:17.917+0000 I STORAGE  [main] [RocksDB] Compression: snappy
2019-08-04T16:18:17.917+0000 I STORAGE  [main] [RocksDB] MaxWriteMBPerSec: 1024
2019-08-04T16:18:17.917+0000 I STORAGE  [main] [RocksDB] Engine custom option:
2019-08-04T16:18:17.917+0000 I STORAGE  [main] [RocksDB] Crash safe counters: 0
2019-08-04T16:18:17.917+0000 I STORAGE  [main] [RocksDB] Counters: 1
2019-08-04T16:18:17.917+0000 I STORAGE  [main] [RocksDB] Use SingleDelete in index: 0
2019-08-04T16:18:17.920+0000 I ACCESS   [main] Initialized External Auth Session
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=mongorocks
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten] db version v3.4.21-2.19
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten] git version: 2e0631f5e0d868dd51b71e1e55eb8a57300d00df
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1t  3 May 2016
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten] modules: none
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten] build environment:
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten]     distarch: x86_64
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten] options: { storage: { dbPath: "/data/db", engine: "rocksdb", rocksdb: { cacheSizeGB: 1, compression: "snappy", counters: true, crashSafeCounters: false, maxWriteMBPerSec: 1024, singleDeleteIndex: false } } }
2019-08-04T16:18:17.935+0000 I STORAGE  [initandlisten] 0 dropped prefixes need compaction
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten]
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten] **          You can use percona-server-mongodb-enable-auth.sh to fix it.
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten]
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten]
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten]
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-08-04T16:18:17.937+0000 I CONTROL  [initandlisten]
2019-08-04T16:18:17.937+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
2019-08-04T16:18:17.938+0000 I INDEX    [initandlisten] build index on: admin.system.version properties: { v: 2, key: { version: 1 }, name: "incompatible_with_version_32", ns: "admin.system.version" }
2019-08-04T16:18:17.938+0000 I INDEX    [initandlisten]    building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2019-08-04T16:18:17.938+0000 I INDEX    [initandlisten] build index done.  scanned 0 total records. 0 secs
2019-08-04T16:18:17.938+0000 I COMMAND  [initandlisten] setting featureCompatibilityVersion to 3.4
2019-08-04T16:18:17.939+0000 I NETWORK  [thread1] waiting for connections on port 27017
```

Connection test


```
docker run -it --link mongorocks --rm percona/percona-server-mongodb:3.4 \
   mongo --host mongorocks
```

The connection succeeds.
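As an extra sanity check (not in the original), you can confirm the RocksDB engine is active by querying serverStatus from the same throwaway client container:

```
docker run -it --link mongorocks --rm percona/percona-server-mongodb:3.4 \
   mongo --host mongorocks --eval "db.serverStatus().storageEngine"
# The output should include: "name" : "rocksdb"
```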

Sharded cluster of containers on an overlay network

Preparing the base environment

Provision three machines: vm1, vm2 and vm3. The base environment is the same as for the single instance.

Creating the swarm overlay network

Create the swarm overlay network (link); a minimal sketch follows.
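The linked walkthrough is not reproduced here; this sketch assumes vm1 is the swarm manager and vm2/vm3 join as workers. The network must be created with --attachable so that plain docker run containers can join it:

```
# On vm1 (manager): advertise an address reachable from vm2/vm3
docker swarm init --advertise-addr <vm1-ip>
# On vm2 and vm3: paste the join command printed by `swarm init`
docker swarm join --token <worker-token> <vm1-ip>:2377
# Back on vm1: create the attachable overlay network used below
docker network create --driver overlay --attachable mongo-overlay
```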

Testing overlay network connectivity

On vm1 (the swarm manager), start a mongorocks instance:


```
docker run -d --name=mongorocks-1 \
 --hostname=mongorocks-1 \
 --network=mongo-overlay \
 --restart=always \
 -p 27017:27017 \
 -v /root/volumns/mongorocks/db:/data/db \
 -v /etc/localtime:/etc/localtime \
 percona/percona-server-mongodb:3.4 \
 --storageEngine=rocksdb --dbpath=/data/db \
 --rocksdbCacheSizeGB=1 \
 --rocksdbCompression=snappy \
 --rocksdbMaxWriteMBPerSec=1024 \
 --rocksdbCrashSafeCounters=false \
 --rocksdbCounters=true \
 --rocksdbSingleDeleteIndex=false
```

Flag explained:

  • --network=mongo-overlay joins the container to the overlay network

On vm3, test connectivity across the newly created overlay network.


```
docker pull debian:latest
docker run --network=mongo-overlay -it debian:latest ping mongorocks-1
```

Output:


```
64 bytes from mongorocks-1.mongo-overlay (10.0.1.2): icmp_seq=1 ttl=64 time=0.494 ms
64 bytes from mongorocks-1.mongo-overlay (10.0.1.2): icmp_seq=2 ttl=64 time=1.05 ms
64 bytes from mongorocks-1.mongo-overlay (10.0.1.2): icmp_seq=3 ttl=64 time=1.19 ms
```

Creating data directories

We will run three config servers, three mongos routers and three shard replica sets; each machine hosts one config server, one mongos, and one member of each shard replica set (the master of one shard, the slave of another, and the arbiter of the third, as the directory names below show).
Create the data directories on vm1, vm2 and vm3.


```
#vm1
mkdir -p /root/volumns/mongo-config-1/db /root/volumns/mongo-mongos-1/db /root/volumns/mongo-shard-1-master/db /root/volumns/mongo-shard-3-slave/db /root/volumns/mongo-shard-2-arbiter/db
chmod -R 777 /root/volumns/mongo-config-1/db /root/volumns/mongo-mongos-1/db /root/volumns/mongo-shard-1-master/db /root/volumns/mongo-shard-3-slave/db /root/volumns/mongo-shard-2-arbiter/db
#vm2
mkdir -p /root/volumns/mongo-config-2/db /root/volumns/mongo-mongos-2/db /root/volumns/mongo-shard-2-master/db /root/volumns/mongo-shard-1-slave/db /root/volumns/mongo-shard-3-arbiter/db
chmod -R 777 /root/volumns/mongo-config-2/db /root/volumns/mongo-mongos-2/db /root/volumns/mongo-shard-2-master/db /root/volumns/mongo-shard-1-slave/db /root/volumns/mongo-shard-3-arbiter/db
#vm3
mkdir -p /root/volumns/mongo-config-3/db /root/volumns/mongo-mongos-3/db /root/volumns/mongo-shard-3-master/db /root/volumns/mongo-shard-2-slave/db /root/volumns/mongo-shard-1-arbiter/db
chmod -R 777 /root/volumns/mongo-config-3/db /root/volumns/mongo-mongos-3/db /root/volumns/mongo-shard-3-master/db /root/volumns/mongo-shard-2-slave/db /root/volumns/mongo-shard-1-arbiter/db
```

The directory names use master and slave, but apart from the arbiter, which only votes, the members of a replica set are peers: if the primary fails, the set fails over automatically.

Starting the config servers


```
#vm1
docker run -d --name=mongo-config-1 --hostname=mongo-config-1 --network=mongo-overlay --restart=always -p 27019:27019 -v /root/volumns/mongo-config-1/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --configsvr --replSet=config
#vm2
docker run -d --name=mongo-config-2 --hostname=mongo-config-2 --network=mongo-overlay --restart=always -p 27019:27019 -v /root/volumns/mongo-config-2/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --configsvr --replSet=config
#vm3
docker run -d --name=mongo-config-3 --hostname=mongo-config-3 --network=mongo-overlay --restart=always -p 27019:27019 -v /root/volumns/mongo-config-3/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --configsvr --replSet=config
```

Use docker logs to check the startup log of every config container and confirm they started cleanly.

Building the config server replica set

Since version 3.4, the config servers must run as a replica set.


```
docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 \
   mongo mongo-config-1:27019 --eval \
   "rs.initiate({_id:'config',configsvr:true,members:[{_id:0,host:'mongo-config-1:27019'},{_id:1,host:'mongo-config-2:27019'},{_id:2,host:'mongo-config-3:27019'}]})"
```

Starting the shard servers


```
#vm1
docker run -d --name=mongo-shard-1-master --hostname=mongo-shard-1-master --network=mongo-overlay --restart=always -p 27018:27018 -v /root/volumns/mongo-shard-1-master/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-1
docker run -d --name=mongo-shard-3-slave --hostname=mongo-shard-3-slave --network=mongo-overlay --restart=always -p 27020:27018 -v /root/volumns/mongo-shard-3-slave/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-3
docker run -d --name=mongo-shard-2-arbiter --hostname=mongo-shard-2-arbiter --network=mongo-overlay --restart=always -p 27021:27018 -v /root/volumns/mongo-shard-2-arbiter/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-2
#vm2
docker run -d --name=mongo-shard-2-master --hostname=mongo-shard-2-master --network=mongo-overlay --restart=always -p 27018:27018 -v /root/volumns/mongo-shard-2-master/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-2
docker run -d --name=mongo-shard-1-slave --hostname=mongo-shard-1-slave --network=mongo-overlay --restart=always -p 27020:27018 -v /root/volumns/mongo-shard-1-slave/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-1
docker run -d --name=mongo-shard-3-arbiter --hostname=mongo-shard-3-arbiter --network=mongo-overlay --restart=always -p 27021:27018 -v /root/volumns/mongo-shard-3-arbiter/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-3
#vm3
docker run -d --name=mongo-shard-3-master --hostname=mongo-shard-3-master --network=mongo-overlay --restart=always -p 27018:27018 -v /root/volumns/mongo-shard-3-master/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-3
docker run -d --name=mongo-shard-2-slave --hostname=mongo-shard-2-slave --network=mongo-overlay --restart=always -p 27020:27018 -v /root/volumns/mongo-shard-2-slave/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-2
docker run -d --name=mongo-shard-1-arbiter --hostname=mongo-shard-1-arbiter --network=mongo-overlay --restart=always -p 27021:27018 -v /root/volumns/mongo-shard-1-arbiter/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-1
```

Use docker logs to check the startup log of every shard container and confirm they started cleanly.

Building the shard replica sets

arbiterOnly marks the arbiter member. Because these commands join the mongo-overlay network, they can be run from any of the machines.


```
docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 mongo mongo-shard-1-master:27018 --eval "rs.initiate({_id:'shard-1',members:[{_id:0,host:'mongo-shard-1-master:27018'},{_id:1,host:'mongo-shard-1-slave:27018'},{_id:2,host:'mongo-shard-1-arbiter:27018', arbiterOnly: true}]})"
docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 mongo mongo-shard-2-master:27018 --eval "rs.initiate({_id:'shard-2',members:[{_id:0,host:'mongo-shard-2-master:27018'},{_id:1,host:'mongo-shard-2-slave:27018'},{_id:2,host:'mongo-shard-2-arbiter:27018', arbiterOnly: true}]})"
docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 mongo mongo-shard-3-master:27018 --eval "rs.initiate({_id:'shard-3',members:[{_id:0,host:'mongo-shard-3-master:27018'},{_id:1,host:'mongo-shard-3-slave:27018'},{_id:2,host:'mongo-shard-3-arbiter:27018', arbiterOnly: true}]})"
```
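To verify that each replica set converged to one PRIMARY, one SECONDARY and one ARBITER, a quick check against shard-1 (the same pattern works for the other shards; this step is not in the original):

```
docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 \
   mongo mongo-shard-1-master:27018 --eval \
   "rs.status().members.forEach(function(m) { print(m.name, m.stateStr) })"
```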

Starting mongos

mongos is not a replica set and needs no replica set configuration; you can put a reverse proxy such as haproxy in front of the mongos routers (a sketch follows the log check below).


```
#vm1
docker run -d --name=mongo-mongos-1 --hostname=mongo-mongos-1 --network=mongo-overlay --restart=always -p 27017:27017 -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongos --configdb=config/mongo-config-1:27019,mongo-config-2:27019,mongo-config-3:27019
#vm2
docker run -d --name=mongo-mongos-2 --hostname=mongo-mongos-2 --network=mongo-overlay --restart=always -p 27017:27017 -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongos --configdb=config/mongo-config-1:27019,mongo-config-2:27019,mongo-config-3:27019
#vm3
docker run -d --name=mongo-mongos-3 --hostname=mongo-mongos-3 --network=mongo-overlay --restart=always -p 27017:27017 -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongos --configdb=config/mongo-config-1:27019,mongo-config-2:27019,mongo-config-3:27019
```

Use docker logs to check the startup log of every mongos container and confirm they started cleanly.
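As an illustration of the haproxy idea mentioned above (the hostnames, port and file location are assumptions, not part of the original setup), a minimal TCP frontend for the three mongos routers could look like this, run on a separate host or port to avoid clashing with a published 27017:

```
# /etc/haproxy/haproxy.cfg (excerpt)
listen mongos
    bind *:27017
    mode tcp
    balance roundrobin
    server mongos-1 vm1:27017 check
    server mongos-2 vm2:27017 check
    server mongos-3 vm3:27017 check
```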

Configuring sharding

Shard membership is stored in the config servers, so registering all shards through a single mongos is enough. Run the commands one at a time so that you can tell which shard, if any, fails.


```
docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 mongo mongo-mongos-1:27017 --eval "sh.addShard('shard-1/mongo-shard-1-master:27018,mongo-shard-1-slave:27018,mongo-shard-1-arbiter:27018');"
docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 mongo mongo-mongos-1:27017 --eval "sh.addShard('shard-2/mongo-shard-2-master:27018,mongo-shard-2-slave:27018,mongo-shard-2-arbiter:27018');"
docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 mongo mongo-mongos-1:27017 --eval "sh.addShard('shard-3/mongo-shard-3-master:27018,mongo-shard-3-slave:27018,mongo-shard-3-arbiter:27018');"
```

Check the shard status:


```
docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 mongo mongo-mongos-1:27017 --eval "sh.status()"
```

Output:


```
Percona Server for MongoDB shell version v3.4.21-2.19
connecting to: mongodb://mongo-mongos-1:27017/test
Percona Server for MongoDB server version: v3.4.21-2.19
--- Sharding Status ---
  sharding version: {
   "_id" : 1,
   "minCompatibleVersion" : 5,
   "currentVersion" : 6,
   "clusterId" : ObjectId("5d498c6d23c7391c25ba330a")
  }
  shards:
        {  "_id" : "shard-1",  "host" : "shard-1/mongo-shard-1-master:27018,mongo-shard-1-slave:27018",  "state" : 1 }
        {  "_id" : "shard-2",  "host" : "shard-2/mongo-shard-2-master:27018,mongo-shard-2-slave:27018",  "state" : 1 }
        {  "_id" : "shard-3",  "host" : "shard-3/mongo-shard-3-master:27018,mongo-shard-3-slave:27018",  "state" : 1 }
  active mongoses:
        "3.4.21-2.19" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
NaN
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
```

Testing

Use NoSQLBooster for MongoDB to generate a test script.
NoSQLBooster fills in random data automatically, so every document is distinct.


```
faker.locale = "en"
const STEPCOUNT = 1000; //total 10 * 1000 = 10000
function isRandomBlank(blankWeight) {
    return Math.random() * 100 <= blankWeight;
};
for (let i = 0; i < 10; i++) {
    db.getCollection("testCollection").insertMany(
        _.times(STEPCOUNT, () => {
            return {
                "name": faker.name.findName(),
                "username": faker.internet.userName(),
                "email": faker.internet.email(),
                "address": {
                    "street": faker.address.streetName(),
                    "suite": faker.address.secondaryAddress(),
                    "city": faker.address.city(),
                    "zipcode": faker.address.zipCode()
                },
                "phone": faker.phone.phoneNumber(),
                "website": faker.internet.domainName(),
                "company": faker.company.companyName()
            }
        })
    )
    console.log("test:testCollection", `${(i + 1) * STEPCOUNT} docs inserted`);
}
```

Set the shard key to match the test script: a value of 1 shards by range, while 'hashed' shards by the hash of the key. To make the sharding effect easier to observe, choose hashed.


```
sh.enableSharding('test')
sh.shardCollection(`test.testCollection`, { _id: 'hashed'})
```

Run the test script in NoSQLBooster for MongoDB, then check the sharding state once it finishes:


```
use test
db.getCollection('testCollection').stats()
```

Output:


```
{
  "sharded": true,
  "capped": false,
  "ns": "test.testCollection",
  "count": 10000,
  "size": 2998835,
  "storageSize": 2998528,
  "totalIndexSize": 346888,
  "indexSizes": {
    "_id_": 180000,
    "_id_hashed": 166888
  },
  "avgObjSize": 299,
  "nindexes": 2,
  "nchunks": 6,
  "shards": {
    "shard-1": {
      "ns": "test.testCollection",
      "size": 994917,
      "count": 3317,
      "avgObjSize": 299,
      "storageSize": 994816,
      "capped": false,
      "nindexes": 2,
      "totalIndexSize": 115072,
      "indexSizes": {
        "_id_": 59706,
        "_id_hashed": 55366
      },
      "ok": 1
    },
    "shard-2": {
      "ns": "test.testCollection",
      "size": 1005509,
      "count": 3354,
      "avgObjSize": 299,
      "storageSize": 1005312,
      "capped": false,
      "nindexes": 2,
      "totalIndexSize": 116324,
      "indexSizes": {
        "_id_": 60372,
        "_id_hashed": 55952
      },
      "ok": 1
    },
    "shard-3": {
      "ns": "test.testCollection",
      "size": 998409,
      "count": 3329,
      "avgObjSize": 299,
      "storageSize": 998400,
      "capped": false,
      "nindexes": 2,
      "totalIndexSize": 115492,
      "indexSizes": {
        "_id_": 59922,
        "_id_hashed": 55570
      },
      "ok": 1
    }
  },
  "ok": 1
}
```

Deploying a service cluster with docker stack

Preparing the base environment

The environment setup is the same as before: join vm1, vm2 and vm3 into a swarm, but there is no need to create the overlay network by hand (the stack file below defines its own).

Creating data directories

Create the directories on vm1, vm2 and vm3 and grant permissions:


```
mkdir -p /data/mongo/config/db
chmod -R 777 /data/mongo/config/db
mkdir -p /data/mongo/shard-1/db
chmod -R 777 /data/mongo/shard-1/db
mkdir -p /data/mongo/shard-2/db
chmod -R 777 /data/mongo/shard-2/db
mkdir -p /data/mongo/shard-3/db
chmod -R 777 /data/mongo/shard-3/db
```

Writing the docker stack deploy script


```
version: '3.4'
services:
  shard-1-server-1:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-1
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-1/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm1
  shard-1-server-2:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-1
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-1/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm3
  shard-1-server-3:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-1
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-1/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm2
  shard-2-server-1:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-2
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-2/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm2
  shard-2-server-2:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-2
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-2/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm1
  shard-2-server-3:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-2
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-2/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm3
  shard-3-server-1:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-3
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-3/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm3
  shard-3-server-2:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-3
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-3/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm2
  shard-3-server-3:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-3
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-3/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm1
  config-1:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --configsvr --replSet config
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/config/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm1
  config-2:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --configsvr --replSet config
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/config/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm2
  config-3:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --configsvr --replSet config
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/config/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm3
  mongos-1:
    image: percona/percona-server-mongodb:3.4
    command: mongos --configdb=config/config-1:27019,config-2:27019,config-3:27019
    networks:
      - overlay
    ports:
      - 27017:27017
    volumes:
      - /etc/localtime:/etc/localtime
    depends_on:
      - config-1
      - config-2
      - config-3
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm1
  mongos-2:
    image: percona/percona-server-mongodb:3.4
    command: mongos --configdb=config/config-1:27019,config-2:27019,config-3:27019
    networks:
      - overlay
    ports:
      - 27018:27017
    volumes:
      - /etc/localtime:/etc/localtime
    depends_on:
      - config-1
      - config-2
      - config-3
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm2
  mongos-3:
    image: percona/percona-server-mongodb:3.4
    command: mongos --configdb=config/config-1:27019,config-2:27019,config-3:27019
    networks:
      - overlay
    ports:
      - 27019:27017
    volumes:
      - /etc/localtime:/etc/localtime
    depends_on:
      - config-1
      - config-2
      - config-3
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm3
networks:
  overlay:
    driver: overlay
```

Starting the cluster


```
docker stack deploy -c mongo.yaml mongo
```

Output:


```
Creating network mongo_overlay
Creating service mongo_shard-3-server-3
Creating service mongo_shard-1-server-2
Creating service mongo_mongos-3
Creating service mongo_shard-3-server-1
Creating service mongo_mongos-1
Creating service mongo_config-2
Creating service mongo_shard-1-server-1
Creating service mongo_mongos-2
Creating service mongo_shard-2-server-2
Creating service mongo_shard-1-server-3
Creating service mongo_shard-2-server-1
Creating service mongo_config-3
Creating service mongo_shard-2-server-3
Creating service mongo_config-1
Creating service mongo_shard-3-server-2
```

Verifying startup


```
docker service ls
```

When REPLICAS shows 1/1 for every service, the cluster has started successfully:


```
ID                  NAME                     MODE                REPLICAS            IMAGE                                PORTS
9urzybegrbmz        mongo_config-1           replicated          1/1                 percona/percona-server-mongodb:3.4
jldecj0s6238        mongo_config-2           replicated          1/1                 percona/percona-server-mongodb:3.4
n9r4ld6komnq        mongo_config-3           replicated          1/1                 percona/percona-server-mongodb:3.4
ni94pd5odl89        mongo_mongos-1           replicated          1/1                 percona/percona-server-mongodb:3.4   *:27017->27017/tcp
sh4ykadpmoka        mongo_mongos-2           replicated          1/1                 percona/percona-server-mongodb:3.4   *:27018->27017/tcp
12m4nbyn77va        mongo_mongos-3           replicated          1/1                 percona/percona-server-mongodb:3.4   *:27019->27017/tcp
psolde1gltn9        mongo_shard-1-server-1   replicated          1/1                 percona/percona-server-mongodb:3.4
4t2xwavpgg26        mongo_shard-1-server-2   replicated          1/1                 percona/percona-server-mongodb:3.4
qwjfpg93qkho        mongo_shard-1-server-3   replicated          1/1                 percona/percona-server-mongodb:3.4
ztbxk12npvwo        mongo_shard-2-server-1   replicated          1/1                 percona/percona-server-mongodb:3.4
tz3n5oj55osx        mongo_shard-2-server-2   replicated          1/1                 percona/percona-server-mongodb:3.4
pcprsbo9xxin        mongo_shard-2-server-3   replicated          1/1                 percona/percona-server-mongodb:3.4
nn7mrm0iy26v        mongo_shard-3-server-1   replicated          1/1                 percona/percona-server-mongodb:3.4
ps4zqmiqzw1k        mongo_shard-3-server-2   replicated          1/1                 percona/percona-server-mongodb:3.4
iv1gvzzm3ai0        mongo_shard-3-server-3   replicated          1/1                 percona/percona-server-mongodb:3.4
```

Building the replica sets


```
docker exec -it $(docker ps | grep "config" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id: \"config\",configsvr: true, members: [{ _id : 0, host : \"config-1:27019\" },{ _id : 1, host : \"config-2:27019\" }, { _id : 2, host : \"config-3:27019\" }]})' | mongo --port 27019";
docker exec -it $(docker ps | grep "shard-1" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard-1\", members: [{ _id : 0, host : \"shard-1-server-1:27018\" },{ _id : 1, host : \"shard-1-server-2:27018\" },{ _id : 2, host : \"shard-1-server-3:27018\", arbiterOnly: true }]})' | mongo --port 27018";
docker exec -it $(docker ps | grep "shard-2" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard-2\", members: [{ _id : 0, host : \"shard-2-server-1:27018\" },{ _id : 1, host : \"shard-2-server-2:27018\" },{ _id : 2, host : \"shard-2-server-3:27018\", arbiterOnly: true }]})' | mongo --port 27018";
docker exec -it $(docker ps | grep "shard-3" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard-3\", members: [{ _id : 0, host : \"shard-3-server-1:27018\" },{ _id : 1, host : \"shard-3-server-2:27018\" },{ _id : 2, host : \"shard-3-server-3:27018\", arbiterOnly: true }]})' | mongo --port 27018";
```

Note: you cannot initiate a shard's replica set from the machine that hosts that shard's arbiter, because the grep would match the arbiter's container; run the command from a different machine. A quick check follows.
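To see in advance which container the grep will match on the current host (and avoid targeting the arbiter), a quick check, not in the original:

```
# List the shard-1 member containers running on this host
docker ps --format '{{.Names}}' | grep "shard-1"
```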

Adding shards


```
docker exec -it $(docker ps | grep "mongos-1" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard-1/shard-1-server-1:27018,shard-1-server-2:27018,shard-1-server-3:27018\")' | mongo ";
docker exec -it $(docker ps | grep "mongos-1" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard-2/shard-2-server-1:27018,shard-2-server-2:27018,shard-2-server-3:27018\")' | mongo ";
docker exec -it $(docker ps | grep "mongos-1" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard-3/shard-3-server-1:27018,shard-3-server-2:27018,shard-3-server-3:27018\")' | mongo ";
```

Testing

Same as the testing section for the overlay deployment. An additional connectivity sketch follows.
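Because every mongos publishes a port on the swarm routing mesh, you can also smoke-test from any node that has a mongo shell installed (host and port assumed from the stack file above):

```
mongo --host vm1 --port 27017 --eval "sh.status()"
```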
