k8s hands-on: deploying standalone redis and a redis cluster
Source: 博客园 (cnblogs) | Published: 2023-06-07 06:02:15
1、Deploying standalone redis on k8s
1.1、Introduction to redis

redis is an open-source, BSD-licensed, non-relational (NoSQL) database written in C, first released in 2009 by the Italian developer Salvatore Sanfilippo. redis keeps its data in memory and is one of the most popular key-value databases today; it provides a service that shares memory remotely over the network. memcache offers similar functionality, but compared with memcache, redis adds easy scaling, high performance, and data persistence. The main application scenarios are: session sharing, commonly used in web clusters to share sessions across multiple tomcat or PHP web servers; message queues, such as ELK log caching and publish/subscribe for some applications; counters, such as access leaderboards, product view counts, and other count-related statistics; and caching, such as database query results, product details on e-commerce sites, and news content. Unlike memcache, redis supports persistence: it can save the in-memory data to disk, and after a redis service or server restart it can restore the data from the backup file into memory and continue serving;

1.2、PV/PVC and standalone Redis

Because the redis data (mainly the redis snapshot) lives on the storage system, the data survives even if the redis pod dies; when standalone redis is deployed on k8s and the pod dies, k8s rebuilds the pod, mounts the same PVC into it, and redis reloads the snapshot, so a pod failure does not lose redis data;



1.3、Build the redis image
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# ll
total 1784
drwxr-xr-x  2 root root    4096 Jun  5 15:22 ./
drwxr-xr-x 11 root root    4096 Aug  9  2022 ../
-rw-r--r--  1 root root     717 Jun  5 15:20 Dockerfile
-rwxr-xr-x  1 root root     235 Jun  5 15:21 build-command.sh*
-rw-r--r--  1 root root 1740967 Jun 22  2021 redis-4.0.14.tar.gz
-rw-r--r--  1 root root   58783 Jun 22  2021 redis.conf
-rwxr-xr-x  1 root root      84 Jun  5 15:21 run_redis.sh*
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# cat Dockerfile
#Redis Image
# import the custom centos base image
FROM harbor.ik8s.cc/baseimages/magedu-centos-base:7.9.2009

# add the redis source tarball to /usr/local/src
ADD redis-4.0.14.tar.gz /usr/local/src
# compile and install redis
RUN ln -sv /usr/local/src/redis-4.0.14 /usr/local/redis && cd /usr/local/redis && make && cp src/redis-cli /usr/sbin/ && cp src/redis-server /usr/sbin/ && mkdir -pv /data/redis-data
# add the redis configuration file
ADD redis.conf /usr/local/redis/redis.conf
# expose the redis service port
EXPOSE 6379

#ADD run_redis.sh /usr/local/redis/run_redis.sh
#CMD ["/usr/local/redis/run_redis.sh"]

# add the startup script
ADD run_redis.sh /usr/local/redis/entrypoint.sh
# start redis
ENTRYPOINT ["/usr/local/redis/entrypoint.sh"]
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# cat build-command.sh
#!/bin/bash
TAG=$1
#docker build -t harbor.ik8s.cc/magedu/redis:${TAG} .
#sleep 3
#docker push  harbor.ik8s.cc/magedu/redis:${TAG}
nerdctl build -t harbor.ik8s.cc/magedu/redis:${TAG} .
nerdctl push harbor.ik8s.cc/magedu/redis:${TAG}
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# cat run_redis.sh
#!/bin/bash
# start redis
/usr/sbin/redis-server /usr/local/redis/redis.conf
# tail -f keeps a foreground process alive inside the pod
tail -f /etc/hosts
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# grep -v "^#\|^$" redis.conf
bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile ""
databases 16
always-show-logo yes
save 900 1
save 5 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error no
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /data/redis-data
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
requirepass 123456
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
slave-lazy-flush no
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble no
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis#
1.3.1、Verify that the redis image was pushed to harbor
1.4、Test the redis image
1.4.1、Run the redis image as a container and verify that it works
1.4.2、Connect to redis remotely and verify that the connection works

Being able to run the redis image as a container and to read and write data from a remote host shows that the redis image we built is fine;
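The original shows these checks as screenshots; below is a minimal sketch of the idea, assuming nerdctl on the node, the image tag v4.0.14 used later in the deployment, and the requirepass 123456 from redis.conf (host IP and container name are placeholders):

# run the freshly built image as a local container
nerdctl run -d --name redis-test -p 6379:6379 harbor.ik8s.cc/magedu/redis:v4.0.14
# from a remote host, write a key and read it back
redis-cli -h <container-host-ip> -p 6379 -a 123456 set k1 v1
redis-cli -h <container-host-ip> -p 6379 -a 123456 get k1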

1.5、Create the PV and PVC
1.5.1、Prepare the redis data directory on the NFS server
root@harbor:~# mkdir -pv /data/k8sdata/magedu/redis-datadir-1
mkdir: created directory "/data/k8sdata/magedu/redis-datadir-1"
root@harbor:~# cat /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/data/k8sdata/kuboard *(rw,no_root_squash)
/data/volumes *(rw,no_root_squash)
/pod-vol *(rw,no_root_squash)
/data/k8sdata/myserver *(rw,no_root_squash)
/data/k8sdata/mysite *(rw,no_root_squash)
/data/k8sdata/magedu/images *(rw,no_root_squash)
/data/k8sdata/magedu/static *(rw,no_root_squash)
/data/k8sdata/magedu/zookeeper-datadir-1 *(rw,no_root_squash)
/data/k8sdata/magedu/zookeeper-datadir-2 *(rw,no_root_squash)
/data/k8sdata/magedu/zookeeper-datadir-3 *(rw,no_root_squash)
/data/k8sdata/magedu/redis-datadir-1 *(rw,no_root_squash)
root@harbor:~# exportfs -av
exportfs: /etc/exports [1]: Neither "subtree_check" or "no_subtree_check" specified for export "*:/data/k8sdata/kuboard".
  Assuming default behaviour ("no_subtree_check").
  NOTE: this default has changed since nfs-utils version 1.0.x
(the same warning repeats for each of the other exports)
exporting *:/data/k8sdata/magedu/redis-datadir-1
exporting *:/data/k8sdata/magedu/zookeeper-datadir-3
exporting *:/data/k8sdata/magedu/zookeeper-datadir-2
exporting *:/data/k8sdata/magedu/zookeeper-datadir-1
exporting *:/data/k8sdata/magedu/static
exporting *:/data/k8sdata/magedu/images
exporting *:/data/k8sdata/mysite
exporting *:/data/k8sdata/myserver
exporting *:/pod-vol
exporting *:/data/volumes
exporting *:/data/k8sdata/kuboard
root@harbor:~#
1.5.2、Create the PV
root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv# cat redis-persistentvolume.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-datadir-pv-1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/k8sdata/magedu/redis-datadir-1
    server: 192.168.0.42
root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv#
1.5.3、Create the PVC
root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv# cat redis-persistentvolumeclaim.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-datadir-pvc-1
  namespace: magedu
spec:
  volumeName: redis-datadir-pv-1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv#
1.6、Deploy the redis service
root@k8s-master01:~/k8s-data/yaml/magedu/redis# cat redis.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: devops-redis
  name: deploy-devops-redis
  namespace: magedu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: devops-redis
  template:
    metadata:
      labels:
        app: devops-redis
    spec:
      containers:
        - name: redis-container
          image: harbor.ik8s.cc/magedu/redis:v4.0.14
          imagePullPolicy: Always
          volumeMounts:
          - mountPath: "/data/redis-data/"
            name: redis-datadir
      volumes:
        - name: redis-datadir
          persistentVolumeClaim:
            claimName: redis-datadir-pvc-1
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: devops-redis
  name: srv-devops-redis
  namespace: magedu
spec:
  type: NodePort
  ports:
  - name: http
    port: 6379
    targetPort: 6379
    nodePort: 36379
  selector:
    app: devops-redis
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
root@k8s-master01:~/k8s-data/yaml/magedu/redis#

Applying this manifest fails with an error saying the service port is out of range; this is because of the service node port range that was specified when the k8s cluster was initialized;

1.6.1、Modify the NodePort range

Edit /etc/systemd/system/kube-apiserver.service and change the value of its --service-node-port-range option; the other two master nodes need the same change.
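A minimal sketch of the relevant unit-file line (the binary path, the other flags, and the exact range are placeholders; any range covering 36379 works):

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  ...other options unchanged... \
  --service-node-port-range=30000-42767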

1.6.2、Reload the systemd units and restart kube-apiserver
root@k8s-master01:~# systemctl daemon-reload
root@k8s-master01:~# systemctl restart kube-apiserver.service
root@k8s-master01:~#

Deploy redis again
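The redeploy and the read/write test appear as screenshots in the original; a minimal sketch, assuming the node IP placeholder and the requirepass 123456 from redis.conf:

kubectl apply -f redis.yaml
kubectl get pods -n magedu -o wide
# test via the NodePort from any host that can reach a k8s node
redis-cli -h <any-k8s-node-ip> -p 36379 -a 123456 set testkey hello
redis-cli -h <any-k8s-node-ip> -p 36379 -a 123456 get testkey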

1.7、Verify redis data reads and writes
1.7.1、Connect to port 36379 on any k8s node and test reading and writing redis data
1.8、Verify whether data is lost when the redis pod is rebuilt
1.8.1、Is the redis snapshot file stored on the backing storage?
root@harbor:~# ll /data/k8sdata/magedu/redis-datadir-1
total 12
drwxr-xr-x 2 root root 4096 Jun  5 16:29 ./
drwxr-xr-x 8 root root 4096 Jun  5 15:53 ../
-rw-r--r-- 1 root root  116 Jun  5 16:29 dump.rdb
root@harbor:~#

We just wrote data into redis, and once redis detected the required number of key changes within the save window it took a snapshot; because the redis data directory is an NFS export mounted via PV/PVC, the snapshot file is visible in the corresponding directory on the NFS server;

1.8.2、Delete the redis pod and wait for k8s to rebuild it
1.8.3、Verify the data in the rebuilt redis pod
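These two steps are shown as screenshots in the original; a minimal sketch (the pod name suffix is a placeholder):

kubectl delete pod deploy-devops-redis-<hash> -n magedu
kubectl get pods -n magedu            # the Deployment recreates the pod
redis-cli -h <any-k8s-node-ip> -p 36379 -a 123456 get testkey   # the key written earlier should still be there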

The rebuilt redis pod still has the previous pod's data, which shows that k8s mounted the previous pod's PVC when rebuilding;

2、Deploying a redis cluster on k8s
2.1、PV/PVC and the Redis Cluster StatefulSet

A redis cluster is a bit more involved than standalone redis. As with the standalone setup, we keep the redis cluster data on the storage system via PV/PVC. Unlike standalone redis, a redis cluster runs a CRC16 over each key and takes the result modulo 16384; the resulting number is the slot the key is stored in. The 16384 slots are divided evenly among all master nodes of the cluster, so each master holds a portion of the cluster's data. This creates a problem: if a master goes down, the data in its slots becomes unavailable. To avoid a master being a single point of failure, we give each master a dedicated slave that replicates it; if the master goes down, its slave takes over and keeps serving the cluster, making the redis cluster masters highly available. In this example we use a 3-master/3-slave redis cluster: redis-0, 1 and 2 are the masters, and redis-3, 4 and 5 are the slaves backing up the data of redis-0, 1 and 2 respectively. All six pods store their data on the storage system through the k8s PV/PVC.
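A minimal python sketch of the slot mapping described above (illustrative only; real cluster clients also honour {hash tags}, which this sketch omits):

# slot = CRC16(key) % 16384, using the XModem CRC16 variant redis uses
def crc16_xmodem(data):
    crc = 0
    for byte in bytearray(data):
        crc ^= byte << 8
        for _ in range(8):
            # polynomial 0x1021, MSB-first, no reflection
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key):
    return crc16_xmodem(key.encode()) % 16384

print(key_slot("foo"))  # compare with: redis-cli CLUSTER KEYSLOT foo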

2.2、Create the PVs
2.2.1、Prepare the redis cluster data directories on the NFS server
root@harbor:~# mkdir -pv /data/k8sdata/magedu/redis{0,1,2,3,4,5}
mkdir: created directory "/data/k8sdata/magedu/redis0"
mkdir: created directory "/data/k8sdata/magedu/redis1"
mkdir: created directory "/data/k8sdata/magedu/redis2"
mkdir: created directory "/data/k8sdata/magedu/redis3"
mkdir: created directory "/data/k8sdata/magedu/redis4"
mkdir: created directory "/data/k8sdata/magedu/redis5"
root@harbor:~# tail -6 /etc/exports
/data/k8sdata/magedu/redis0 *(rw,no_root_squash)
/data/k8sdata/magedu/redis1 *(rw,no_root_squash)
/data/k8sdata/magedu/redis2 *(rw,no_root_squash)
/data/k8sdata/magedu/redis3 *(rw,no_root_squash)
/data/k8sdata/magedu/redis4 *(rw,no_root_squash)
/data/k8sdata/magedu/redis5 *(rw,no_root_squash)
root@harbor:~# exportfs -av
exportfs: /etc/exports [1]: Neither "subtree_check" or "no_subtree_check" specified for export "*:/data/k8sdata/kuboard".
  Assuming default behaviour ("no_subtree_check").
  NOTE: this default has changed since nfs-utils version 1.0.x
(the same warning repeats for each of the other exports, including the six new redis directories)
exporting *:/data/k8sdata/magedu/redis5
exporting *:/data/k8sdata/magedu/redis4
exporting *:/data/k8sdata/magedu/redis3
exporting *:/data/k8sdata/magedu/redis2
exporting *:/data/k8sdata/magedu/redis1
exporting *:/data/k8sdata/magedu/redis0
exporting *:/data/k8sdata/magedu/redis-datadir-1
exporting *:/data/k8sdata/magedu/zookeeper-datadir-3
exporting *:/data/k8sdata/magedu/zookeeper-datadir-2
exporting *:/data/k8sdata/magedu/zookeeper-datadir-1
exporting *:/data/k8sdata/magedu/static
exporting *:/data/k8sdata/magedu/images
exporting *:/data/k8sdata/mysite
exporting *:/data/k8sdata/myserver
exporting *:/pod-vol
exporting *:/data/volumes
exporting *:/data/k8sdata/kuboard
root@harbor:~#
2.2.2、Create the PVs
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# cat pv/redis-cluster-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv0
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis0
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv1
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv2
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv3
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis3
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv4
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis4
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv5
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis5
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
2.3、Deploy the redis cluster
2.3.1、The redis.conf used to create the configmap
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# cat redis.conf
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
2.3.2、Create the configmap
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl create cm redis-conf --from-file=./redis.conf -n magedu
configmap/redis-conf created
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl get cm -n magedu
NAME               DATA   AGE
kube-root-ca.crt   1      35h
redis-conf         1      6s
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
2.3.3、Verify the configmap
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl describe cm redis-conf -n magedu
Name:         redis-conf
Namespace:    magedu
Labels:       <none>
Annotations:  <none>

Data
====
redis.conf:
----
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379

BinaryData
====

Events:  <none>
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
2.3.4、Deploy the redis cluster
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# cat redis.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: magedu
  labels:
    app: redis
spec:
  selector:
    app: redis
    appCluster: redis-cluster
  ports:
  - name: redis
    port: 6379
  clusterIP: None
---
apiVersion: v1
kind: Service
metadata:
  name: redis-access
  namespace: magedu
  labels:
    app: redis
spec:
  type: NodePort
  selector:
    app: redis
    appCluster: redis-cluster
  ports:
  - name: redis-access
    protocol: TCP
    port: 6379
    targetPort: 6379
    nodePort: 36379
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
  namespace: magedu
spec:
  serviceName: redis
  replicas: 6
  selector:
    matchLabels:
      app: redis
      appCluster: redis-cluster
  template:
    metadata:
      labels:
        app: redis
        appCluster: redis-cluster
    spec:
      terminationGracePeriodSeconds: 20
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - redis
              topologyKey: kubernetes.io/hostname
      containers:
      - name: redis
        image: redis:4.0.14
        command:
          - "redis-server"
        args:
          - "/etc/redis/redis.conf"
          - "--protected-mode"
          - "no"
        resources:
          requests:
            cpu: "500m"
            memory: "500Mi"
        ports:
        - containerPort: 6379
          name: redis
          protocol: TCP
        - containerPort: 16379
          name: cluster
          protocol: TCP
        volumeMounts:
        - name: conf
          mountPath: /etc/redis
        - name: data
          mountPath: /var/lib/redis
      volumes:
      - name: conf
        configMap:
          name: redis-conf
          items:
          - key: redis.conf
            path: redis.conf
  volumeClaimTemplates:
  - metadata:
      name: data
      namespace: magedu
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#

The manifest above uses an sts (StatefulSet) controller to create 6 pod replicas. Each replica uses the configuration file from the configmap as its redis config, and the PVC template (volumeClaimTemplates) lets each pod automatically bind to a PV by creating a PVC in the magedu namespace: as long as there are free PVs in the cluster, each pod gets a PVC created from the template. Alternatively, PVCs can be provisioned automatically by a storage class, or created ahead of time; with an sts controller the usual pattern is the PVC template, provided k8s has enough free PVs available;

Apply the manifest to deploy the redis cluster
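The original shows this as a screenshot; a minimal sketch:

kubectl apply -f redis.yaml
kubectl get pods -n magedu -l app=redis   # wait until redis-0 through redis-5 are Running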

With an sts controller, a pod is named <sts-name>-<ordinal>, and a PVC created from the PVC template is named <template-name>-<pod-name>, i.e. <template-name>-<sts-name>-<ordinal>;
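So with the StatefulSet named redis and the template named data, the expected names (illustrative) are:

kubectl get pvc -n magedu
# data-redis-0 through data-redis-5, one per pod, each bound to one of the 5Gi redis-cluster PVs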

2.4、Initialize the redis cluster
2.4.1、Create a temporary container on k8s and install the redis cluster initialization tool
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl run -it ubuntu1804 --image=ubuntu:18.04 --restart=Never -n magedu bash
If you don't see a command prompt, try pressing enter.
root@ubuntu1804:/#
root@ubuntu1804:/# apt update
# install the necessary tools
root@ubuntu1804:/# apt install python2.7 python-pip redis-tools dnsutils iputils-ping net-tools
# upgrade pip
root@ubuntu1804:/# pip install --upgrade pip
# install redis-trib, the redis cluster initialization tool, with pip
root@ubuntu1804:/# pip install redis-trib==0.5.1
root@ubuntu1804:/#
2.4.2、Initialize the redis cluster
root@ubuntu1804:/# redis-trib.py create \
  `dig +short redis-0.redis.magedu.svc.cluster.local`:6379 \
  `dig +short redis-1.redis.magedu.svc.cluster.local`:6379 \
  `dig +short redis-2.redis.magedu.svc.cluster.local`:6379

Because sts pods have stable names, we can initialize the redis cluster using the pod names, each of which resolves directly to that pod's current IP. On physical machines or VMs we could use IP addresses directly, since those are fixed; on k8s, pod IPs are not fixed, so we rely on the stable DNS names instead;

2.4.3、Assign a slave to each master
Make redis-3 the slave of redis-0
root@ubuntu1804:/# redis-trib.py replicate \
  --master-addr `dig +short redis-0.redis.magedu.svc.cluster.local`:6379 \
  --slave-addr `dig +short redis-3.redis.magedu.svc.cluster.local`:6379
Make redis-4 the slave of redis-1
root@ubuntu1804:/# redis-trib.py replicate \
  --master-addr `dig +short redis-1.redis.magedu.svc.cluster.local`:6379 \
  --slave-addr `dig +short redis-4.redis.magedu.svc.cluster.local`:6379
Make redis-5 the slave of redis-2
root@ubuntu1804:/# redis-trib.py replicate \
  --master-addr `dig +short redis-2.redis.magedu.svc.cluster.local`:6379 \
  --slave-addr `dig +short redis-5.redis.magedu.svc.cluster.local`:6379
2.5、Verify the redis cluster status
2.5.1、Enter any redis cluster pod and inspect the cluster info
2.5.2、Inspect the cluster nodes

The cluster node list records each master's node id and each slave's id; a slave entry references the id of its master, indicating which master's data that slave replicates;
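These checks appear as screenshots in the original; a minimal sketch of the commands (the cluster redis.conf above configures no password):

kubectl exec -it redis-0 -n magedu -- redis-cli cluster info    # expect cluster_state:ok and all 16384 slots assigned
kubectl exec -it redis-0 -n magedu -- redis-cli cluster nodes   # each slave line references its master's node id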

2.5.3、Inspect the current node's info
127.0.0.1:6379> info
# Server
redis_version:4.0.14
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:165c932261a105d7
redis_mode:cluster
os:Linux 5.15.0-73-generic x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:8.3.0
process_id:1
run_id:aa8ef00d843b4f622374dbb643cf27cdbd4d5ba3
tcp_port:6379
uptime_in_seconds:4303
uptime_in_days:0
hz:10
lru_clock:8272053
executable:/data/redis-server
config_file:/etc/redis/redis.conf

# Clients
connected_clients:1
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0

# Memory
used_memory:2642336
used_memory_human:2.52M
used_memory_rss:5353472
used_memory_rss_human:5.11M
used_memory_peak:2682248
used_memory_peak_human:2.56M
used_memory_peak_perc:98.51%
used_memory_overhead:2559936
used_memory_startup:1444856
used_memory_dataset:82400
used_memory_dataset_perc:6.88%
total_system_memory:16740012032
total_system_memory_human:15.59G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:2.03
mem_allocator:jemalloc-4.0.3
active_defrag_running:0
lazyfree_pending_objects:0

# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1685992849
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:0
rdb_current_bgsave_time_sec:-1
rdb_last_cow_size:245760
aof_enabled:1
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:0
aof_current_size:0
aof_base_size:0
aof_pending_rewrite:0
aof_buffer_length:0
aof_rewrite_buffer_length:0
aof_pending_bio_fsync:0
aof_delayed_fsync:0

# Stats
total_connections_received:7
total_commands_processed:17223
instantaneous_ops_per_sec:1
total_net_input_bytes:1530962
total_net_output_bytes:108793
instantaneous_input_kbps:0.04
instantaneous_output_kbps:0.00
rejected_connections:0
sync_full:1
sync_partial_ok:0
sync_partial_err:1
expired_keys:0
expired_stale_perc:0.00
expired_time_cap_reached_count:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:853
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0

# Replication
role:master
connected_slaves:1
slave0:ip=10.200.155.175,port=6379,state=online,offset=1120,lag=1
master_replid:60381a28fee40b44c409e53eeef49215a9d3b0ff
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:1120
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:1120

# CPU
used_cpu_sys:12.50
used_cpu_user:7.51
used_cpu_sys_children:0.01
used_cpu_user_children:0.00

# Cluster
cluster_enabled:1

# Keyspace
127.0.0.1:6379>
2.5.4、Verify that the redis cluster reads and writes data correctly
2.5.4.1、Connect to the redis cluster manually and read/write data

When connecting to a redis cluster master manually to read and write data, there is one catch: after the key's CRC16 is taken modulo 16384, the resulting slot may not live on the current node, in which case redis tells us where the key should be written. As the screenshots showed, the redis cluster reads and writes data normally.
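A minimal sketch of what the redirect looks like (slot number and address are placeholders):

127.0.0.1:6379> set somekey somevalue
(error) MOVED <slot> <other-master-ip>:6379
# start redis-cli with -c so it follows redirects automatically:
redis-cli -c
127.0.0.1:6379> set somekey somevalue
-> Redirected to slot [<slot>] located at <other-master-ip>:6379
OK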

2.5.4.2、Use a python script to read/write data against the redis cluster
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# cat redis-client-test.py
#!/usr/bin/env python
#coding:utf-8
#Author:Zhang ShiJie
#python 2.7/3.8
#pip install redis-py-cluster

import sys,time
from rediscluster import RedisCluster

def init_redis():
    startup_nodes = [
        {"host": "192.168.0.34", "port": 36379},
        {"host": "192.168.0.35", "port": 36379},
        {"host": "192.168.0.36", "port": 36379},
        {"host": "192.168.0.34", "port": 36379},
        {"host": "192.168.0.35", "port": 36379},
        {"host": "192.168.0.36", "port": 36379},
    ]
    try:
        conn = RedisCluster(startup_nodes=startup_nodes,
                            # add the password here if one is set
                            decode_responses=True, password="")
        print("connected successfully!!!!!1", conn)
        #conn.set("key-cluster","value-cluster")
        for i in range(100):
            conn.set("key%s" % i, "value%s" % i)
            time.sleep(0.1)
            data = conn.get("key%s" % i)
            print(data)
        #return conn
    except Exception as e:
        print("connect error ", str(e))
        sys.exit(1)

init_redis()
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#

Run the script to write data into the redis cluster

root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# python redis-client-test.py
Traceback (most recent call last):
  File "/root/k8s-data/yaml/magedu/redis-cluster/redis-client-test.py", line 8, in <module>
    from rediscluster import RedisCluster
ModuleNotFoundError: No module named 'rediscluster'
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#

The error says the rediscluster module cannot be found; the fix is to install the redis-py-cluster module with pip;
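A minimal sketch, run on the host executing the script:

pip install redis-py-cluster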

Install the redis-py-cluster module
Run the script again to read and write data against the redis cluster
Connect to a redis pod and verify the data was written correctly

The screenshots show that each of the three redis cluster master pods holds part of the keys, not all of them, which confirms the python script wrote the data into the redis cluster correctly;

Can the data be read normally from a slave node?

As the screenshots show, the data cannot be read on a slave node;

Read the data from that slave's master node instead

This verifies that in a redis cluster only the masters serve reads and writes, while a slave just replicates its master's data; by default a slave redirects clients instead of answering reads (a client has to opt in with the READONLY command to read from a replica).

2.6、Verify redis cluster high availability
2.6.1、On a k8s node, push the redis:4.0.14 image to the local harbor
Re-tag the image
root@k8s-node01:~# nerdctl tag redis:4.0.14 harbor.ik8s.cc/redis-cluster/redis:4.0.14
Push the redis image to the local harbor
root@k8s-node01:~# nerdctl push harbor.ik8s.cc/redis-cluster/redis:4.0.14
INFO[0000] pushing as a reduced-platform image (application/vnd.docker.distribution.manifest.list.v2+json, sha256:1ae9e0f790001af4b9f83a2b3d79c593c6f3e9a881b754a99527536259fb6625)
WARN[0000] skipping verifying HTTPS certs for "harbor.ik8s.cc"
index-sha256:1ae9e0f790001af4b9f83a2b3d79c593c6f3e9a881b754a99527536259fb6625:    done           |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:5bd4fe08813b057df2ae55003a75c39d80a4aea9f1a0fbc0fbd7024edf555786: done           |++++++++++++++++++++++++++++++++++++++|
config-sha256:191c4017dcdd3370f871a4c6e7e1d55c7d9abed2bebf3005fb3e7d12161262b8:   done           |++++++++++++++++++++++++++++++++++++++|
elapsed: 1.4 s                                                                    total:  8.5 Ki (6.1 KiB/s)
root@k8s-node01:~#
2.6.2、Modify the image and imagePullPolicy in the redis cluster manifest

Switching the image to the local harbor and adjusting the pull policy makes it easy to test the redis cluster's high availability;
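A minimal sketch of the changed container spec (the pull policy value is an assumption; Always forces a pull on every pod start, so a pod cannot be recreated while harbor is down, which is what the test below relies on):

      containers:
      - name: redis
        image: harbor.ik8s.cc/redis-cluster/redis:4.0.14
        imagePullPolicy: Always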

2.6.3、Re-apply the redis cluster manifest
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl apply -f redis.yaml
service/redis unchanged
service/redis-access unchanged
statefulset.apps/redis configured
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#

This is effectively an update of the redis cluster; the cluster relationships survive because the cluster configuration (nodes.conf) is kept on the remote storage;

Verify that all pods are running again
Verify the cluster status and the master/slave relationships

Unlike before, redis-0 is now a slave and redis-3 has become a master; the screenshots also show that after the redis cluster pods are rebuilt (and their IP addresses change), the cluster relationships do not change; each master/slave pair only ever swaps roles between its own two pods, which is exactly the high availability we want;

2.6.4、Stop the local harbor and delete a redis master pod: is the corresponding slave promoted to master?
Stop the harbor service
root@harbor:~# systemctl stop harbor
Delete redis-3 and see whether redis-0 is promoted to master
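A minimal sketch of the test (commands assumed; the original shows screenshots):

kubectl delete pod redis-3 -n magedu
# with harbor down and imagePullPolicy: Always, redis-3 cannot be recreated;
# after cluster-node-timeout elapses, the cluster promotes the old slave:
kubectl exec -it redis-0 -n magedu -- redis-cli role   # should now report master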

After redis-3 is deleted (the equivalent of the master going down), the corresponding slave is promoted to master;

2.6.5、Restore the harbor service: once redis-3 recovers, is it again the slave of redis-0?
Restore the harbor service and verify that the redis-3 pod recovers

After deleting redis-3 once more, the pod is rebuilt normally and reaches the Running state;

Verify redis-3's master/slave relationship

Once redis-3 recovers, it automatically rejoins the cluster as the slave of redis-0;
