K8S Binary Deployment of a High-Availability Cluster - 1.22 [Part 5]
Preface to this section:
Keywords for this section: KUBE-CONTROLLER-MANAGER, KUBE-SCHEDULER;
About the "KUBE-CONTROLLER-MANAGER" component: it is responsible for maintaining cluster state, specifically: 1. Lifecycle functions: including "namespace" creation and lifecycle, "event" garbage collection, garbage collection of terminated "pod"s, cascading garbage collection, and "node" garbage collection; 2. API business logic: such as scaling operations on a "replicaset"……;
About the "KUBE-SCHEDULER" component: it is responsible for resource scheduling, placing each "Pod" onto an appropriate "NODE" according to the configured scheduling policy;
This section begins……
1. The KUBE-CONTROLLER-MANAGER Component
Note: before performing this step, you should already have copied the [ kube-controller-manager, kube-scheduler ] binaries to "/usr/local/bin" [192.168.100.41 - 43]:
All of the following operations are performed on [192.168.100.41] and then pushed to [192.168.100.42 - 43] as needed. To begin, generate the certificates for "KUBE-CONTROLLER-MANAGER":
# 1. Generate the KUBE-CONTROLLER-MANAGER certificates
# Create the certificate signing request "kube-controller-manager-csr.json"
# NOTE: the inline "#" annotations inside the JSON below are explanatory only and must be
# removed before use -- JSON does not allow comments, and cfssl will reject the file otherwise
$ cd /opt/cluster/ssl
$ cat > kubernetes/kube-controller-manager-csr.json << "EOF"
{
  "CN": "system:kube-controller-manager",  # This "CN" value is critical: it determines whether CONTROLLER-MANAGER can talk to the APISERVER
  "hosts": [                               # K8S extracts the "CN" field as the user name, i.e. the identity referenced when "subjects.kind"
    "127.0.0.1",                           # is "User" in a "RoleBinding/ClusterRoleBinding"; e.g. the built-in "system:kube-proxy" identity
    "192.168.100.41",                      # is defined as a "User"
    "192.168.100.42",                      # IPs of the nodes running KUBE-CONTROLLER-MANAGER; in theory listing only the local node's IP
    "192.168.100.43"                       # works too, but listing them all avoids generating a separate certificate per node
  ],
  "key": {
    "algo": "ecdsa",
    "size": 256
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "GuangZhou",
      "O": "KUBERNETES",                   # The "O" value has no special meaning for CONTROLLER-MANAGER
      "OU": "LEMONSYS"                     # K8S extracts the "O" field as the group, i.e. the identity referenced when "subjects.kind"
    }                                      # is "Group" in a "RoleBinding/ClusterRoleBinding"; e.g. the built-in "system:masters" group
  ]
}
EOF

# Generate the key pair -- "kube-controller-manager-key.pem" and "kube-controller-manager.pem"
$ cd /opt/cluster/ssl
$ cfssl gencert -ca=rootca/rootca.pem -ca-key=rootca/rootca-key.pem --config=cfssl-conf.json \
    -profile=common kubernetes/kube-controller-manager-csr.json | cfssljson -bare kubernetes/kube-controller-manager
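Optionally, you can confirm that the generated certificate carries the expected identity; a quick check with openssl, assuming it is installed:

# The Subject line should show CN = system:kube-controller-manager and O = KUBERNETES
$ openssl x509 -in kubernetes/kube-controller-manager.pem -noout -subject -dates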
生成"kube-controller-manager.kubeconfig"配置文件:
$ cd /opt/cluster/ssl
$ kubectl config set-cluster kubernetes --certificate-authority=/opt/cluster/ssl/rootca/rootca.pem \
    --embed-certs=true --server=https://192.168.100.40:6443 \
    --kubeconfig=kubernetes/kube-controller-manager.kubeconfig

$ kubectl config set-credentials kube-controller-manager --client-certificate=kubernetes/kube-controller-manager.pem \
    --client-key=kubernetes/kube-controller-manager-key.pem --embed-certs=true \
    --kubeconfig=kubernetes/kube-controller-manager.kubeconfig

$ kubectl config set-context default --cluster=kubernetes --user=kube-controller-manager \
    --kubeconfig=kubernetes/kube-controller-manager.kubeconfig

$ kubectl config use-context default --kubeconfig=kubernetes/kube-controller-manager.kubeconfig
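If you want to verify the result, the following optional check prints the merged view of the kubeconfig (embedded certificate data is redacted unless you add --raw):

# The output should show the cluster "kubernetes", the user "kube-controller-manager",
# and "current-context: default"
$ kubectl config view --kubeconfig=kubernetes/kube-controller-manager.kubeconfig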
Distribute the certificates and configuration files to the other servers [to: 192.168.100.42 - 43]:
scp -r /opt/cluster/ssl 192.168.100.42:/opt/cluster/
scp -r /opt/cluster/ssl 192.168.100.43:/opt/cluster/
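If you have more target nodes, a small loop keeps this step short; this is only a sketch, adjust the IP list to your environment:

# Push the whole ssl directory to every other master node
for ip in 192.168.100.42 192.168.100.43; do
  scp -r /opt/cluster/ssl ${ip}:/opt/cluster/
done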
为"KUBE-CONTROLLER-MANAGER"生成"kube-controller-manager.service"启动文件;本处特别注意,此文件通用![192.168.100.41 - 43]:
# NOTE: the trailing "#" annotations after the "\" continuations are explanatory only and must
# be removed before use -- systemd does not support inline comments on continued ExecStart= lines
cat > /usr/lib/systemd/system/kube-controller-manager.service << "EOF"
[Unit]
Description=Kubernetes:Kube-Controller-Manager
After=network.target network-online.target
Wants=network-online.target

[Service]
Restart=on-failure
RestartSec=5
ExecStart=/usr/local/bin/kube-controller-manager \
  --cluster-name=kubernetes \                                                           # Prefix name of the K8S cluster [default "kubernetes"]
  --secure-port=10257 \                                                                 # HTTPS service port
  --bind-address=127.0.0.1 \                                                            # Listen address of CONTROLLER-MANAGER
  --service-cluster-ip-range=10.96.0.0/16 \                                             # ClusterIP range [IPs allocated to "Service" resources]
  --allocate-node-cidrs=true \                                                          # Enable per-node Pod CIDR allocation
  --cluster-cidr=10.97.0.0/16 \                                                         # CIDR range for Pod IPs
  --leader-elect=true \                                                                 # Enable leader election among CONTROLLER-MANAGER instances
  --controllers=*,bootstrapsigner,tokencleaner \                                        # Enable the full set of controllers
  --kubeconfig=/opt/cluster/ssl/kubernetes/kube-controller-manager.kubeconfig \         # KUBECONFIG file
  --tls-cert-file=/opt/cluster/ssl/kubernetes/kube-controller-manager.pem \             # CONTROLLER-MANAGER certificate
  --tls-private-key-file=/opt/cluster/ssl/kubernetes/kube-controller-manager-key.pem \  # CONTROLLER-MANAGER private key
  --cluster-signing-cert-file=/opt/cluster/ssl/rootca/rootca.pem \                      # CA certificate used to sign other certificates cluster-wide
  --cluster-signing-key-file=/opt/cluster/ssl/rootca/rootca-key.pem \                   # CA private key used to sign other certificates cluster-wide
  --cluster-signing-duration=87600h0m0s \                                               # Validity period of the certificates it signs
  --use-service-account-credentials=true \                                              # Run each controller with its own service account, enabling K8S built-in RBAC policies
  --root-ca-file=/opt/cluster/ssl/rootca/rootca.pem \                                   # When set, this root CA is included in service account token Secrets
  --service-account-private-key-file=/opt/cluster/ssl/rootca/rootca-key.pem \           # Private key used to sign service account tokens
  --logtostderr=false \                                                                 # CONTROLLER-MANAGER logging configuration
  --v=2 \
  --log-dir=/opt/cluster/log/kube-controller-manager

[Install]
WantedBy=multi-user.target
EOF
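Since "--log-dir" points at "/opt/cluster/log/kube-controller-manager", make sure the directory exists on every node before starting, or logging to it will fail:

mkdir -p /opt/cluster/log/kube-controller-manager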
启动"KUBE-CONTROLLER-MANAGER"组件[192.168.100.41 - 43];你应该可以看到"KUBE-CONTROLLER-MANAGER"组件的正常启动:
systemctl daemon-reload && systemctl enable --now kube-controller-manager.service && systemctl status kube-controller-manager.service |
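Beyond the systemd status output, you can probe the component directly; a sketch, assuming the defaults above, where "/healthz" is served on the secure port 10257 and is anonymously accessible:

# Expect the literal response "ok"
curl -sk https://127.0.0.1:10257/healthz
# Recent startup logs, if anything looks wrong
journalctl -u kube-controller-manager.service --since "5 min ago" --no-pager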
使用"kubectl"命令进行复查:
# The following command shows the connection status of the "KUBE-CONTROLLER-MANAGER" component
$ kubectl get componentstatuses
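Note that "kubectl get componentstatuses" is deprecated in 1.22; as an additional check you can inspect the leader-election record (assuming the default "leases" lock), which also shows which of the three nodes currently holds leadership:

# The HOLDER column shows the instance that won the leader election
$ kubectl -n kube-system get lease kube-controller-manager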
关于"KUBE-APISERVER"组件的部署至此结束~~
2. The KUBE-SCHEDULER Component
All of the following operations are performed on [192.168.100.41] and then pushed to [192.168.100.42 - 43] as needed. To begin, generate the certificates for "KUBE-SCHEDULER":
# Create the certificate signing request "kube-scheduler-csr.json"
# NOTE: the inline "#" annotations inside the JSON below are explanatory only and must be
# removed before use -- JSON does not allow comments, and cfssl will reject the file otherwise
$ cd /opt/cluster/ssl
$ cat > kubernetes/kube-scheduler-csr.json << "EOF"
{
  "CN": "system:kube-scheduler",  # This "CN" value is critical: it determines whether SCHEDULER can talk to the APISERVER
  "hosts": [                      # K8S extracts the "CN" field as the user name, i.e. the identity referenced when "subjects.kind"
    "127.0.0.1",                  # is "User" in a "RoleBinding/ClusterRoleBinding"; e.g. the built-in "system:kube-proxy" identity
    "192.168.100.41",             # is defined as a "User"
    "192.168.100.42",             # IPs of the nodes running KUBE-SCHEDULER; in theory listing only the local node's IP works too,
    "192.168.100.43"              # but listing them all avoids generating a separate certificate per node
  ],
  "key": {
    "algo": "ecdsa",
    "size": 256
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "GuangZhou",
      "O": "KUBERNETES",          # The "O" value has no special meaning for SCHEDULER
      "OU": "LEMONSYS"            # K8S extracts the "O" field as the group, i.e. the identity referenced when "subjects.kind"
    }                             # is "Group" in a "RoleBinding/ClusterRoleBinding"; e.g. the built-in "system:masters" group
  ]
}
EOF

# Generate the key pair -- "kube-scheduler-key.pem" and "kube-scheduler.pem"
$ cd /opt/cluster/ssl
$ cfssl gencert -ca=rootca/rootca.pem -ca-key=rootca/rootca-key.pem --config=cfssl-conf.json \
    -profile=common kubernetes/kube-scheduler-csr.json | cfssljson -bare kubernetes/kube-scheduler
生成"kube-scheduler.kubeconfig"配置文件:
$ cd /opt/cluster/ssl
$ kubectl config set-cluster kubernetes --certificate-authority=/opt/cluster/ssl/rootca/rootca.pem \
    --embed-certs=true --server=https://192.168.100.40:6443 \
    --kubeconfig=kubernetes/kube-scheduler.kubeconfig

$ kubectl config set-credentials kube-scheduler --client-certificate=kubernetes/kube-scheduler.pem \
    --client-key=kubernetes/kube-scheduler-key.pem --embed-certs=true \
    --kubeconfig=kubernetes/kube-scheduler.kubeconfig

$ kubectl config set-context default --cluster=kubernetes --user=kube-scheduler \
    --kubeconfig=kubernetes/kube-scheduler.kubeconfig

$ kubectl config use-context default --kubeconfig=kubernetes/kube-scheduler.kubeconfig
Distribute the certificates and configuration files to the other servers [to: 192.168.100.42 - 43]:
scp -r /opt/cluster/ssl 192.168.100.42:/opt/cluster/
scp -r /opt/cluster/ssl 192.168.100.43:/opt/cluster/
为"KUBE-SCHEDULER"生成"kube-scheduler.service"启动文件;本处特别注意,此文件通用![192.168.100.41 - 43]:
# NOTE: the trailing "#" annotation after the "\" continuation is explanatory only and must
# be removed before use -- systemd does not support inline comments on continued ExecStart= lines
cat > /usr/lib/systemd/system/kube-scheduler.service << "EOF"
[Unit]
Description=Kubernetes:Kube-Scheduler
After=network.target network-online.target
Wants=network-online.target

[Service]
Restart=on-failure
RestartSec=5
ExecStart=/usr/local/bin/kube-scheduler \
  --kubeconfig=/opt/cluster/ssl/kubernetes/kube-scheduler.kubeconfig \
  --address=127.0.0.1 \  # Keep this address set to "127.0.0.1"
  --leader-elect=true \
  --logtostderr=false \
  --v=2 \
  --log-dir=/opt/cluster/log/kube-scheduler

[Install]
WantedBy=multi-user.target
EOF
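As with CONTROLLER-MANAGER, create the log directory referenced by "--log-dir" on every node before starting:

mkdir -p /opt/cluster/log/kube-scheduler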
启动"KUBE-SCHEDULER"组件[192.168.100.41 - 43];你应该可以看到"KUBE-SCHEDULER"组件的正常启动:
systemctl daemon-reload && systemctl enable --now kube-scheduler.service && systemctl status kube-scheduler.service |
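As before, you can probe the component directly; a sketch, assuming the scheduler's default secure port 10259 with "/healthz" anonymously accessible:

# Expect the literal response "ok"
curl -sk https://127.0.0.1:10259/healthz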
使用"kubectl"命令进行复查:
# The following command shows the connection status of the "KUBE-SCHEDULER" component
$ kubectl get componentstatuses
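The same deprecation note applies to "kubectl get componentstatuses" here; the scheduler's leader-election record offers an alternative check (again assuming the default "leases" lock):

# The HOLDER column shows the kube-scheduler instance that currently holds leadership
$ kubectl -n kube-system get lease kube-scheduler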
关于"KUBE-SCHEDULER"组件的部署至此结束~~
Conclusion
关于"KUBE-CONTROLLER-MANAGER"与"KUBE-SCHEDULER"组件,相对"KUBE-APISERVER"组件要简单一些,在完成了了这两个组件的部署,即一个MASTER节点就已经部署完成了。下一节的内容是"KUBELET"组件与"KUBE-PROXY"组件的部署,这也是K8S集群中,工作节点的主要组件。本篇完,读者可点击以下链接进入下一章或返回上一章;