Today, while checking cluster info with the kubectl command, I suddenly found that the certificates had expired. Exactly one year.
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get sc
Unable to connect to the server: x509: certificate has expired or is not yet valid: current time 2022-12-15T00:20:43+08:00 is after 2022-12-12T16:00:42Z
You can check a certificate's actual validity period with the following command.
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep Not
Not Before: Dec 12 16:00:42 2021 GMT
Not After : Dec 12 16:00:42 2022 GMT
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
As you can see, the certificate is only valid for one year and has expired. Authentication is no longer possible, so the kube-apiserver rejects kubectl requests.
The cluster was installed with kubeadm. By default, kubeadm generates all the certificates needed to run a cluster. Their validity periods are as follows:
/etc/kubernetes/pki/etcd/ca.crt                    # 10-year validity
/etc/kubernetes/pki/front-proxy-ca.crt             # 10-year validity
/etc/kubernetes/pki/ca.crt                         # 10-year validity
/etc/kubernetes/pki/apiserver.crt                  # 1-year validity
/etc/kubernetes/pki/apiserver-etcd-client.crt      # 1-year validity
/etc/kubernetes/pki/front-proxy-client.crt         # 1-year validity
/etc/kubernetes/pki/etcd/server.crt                # 1-year validity
/etc/kubernetes/pki/etcd/peer.crt                  # 1-year validity
/etc/kubernetes/pki/etcd/healthcheck-client.crt    # 1-year validity
/etc/kubernetes/pki/apiserver-kubelet-client.crt   # 1-year validity
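Instead of running openssl against each file by hand, the check above can be looped over every certificate in the PKI directory. A minimal sketch, assuming the default kubeadm layout under /etc/kubernetes/pki (adjust the directory if your cluster uses a custom --cert-dir):

```shell
#!/usr/bin/env bash
# Print the "Not After" date of every certificate under a PKI directory.
check_cert_expiry() {
  local dir="$1"
  find "$dir" -name '*.crt' | sort | while read -r cert; do
    printf '%-55s %s\n' "$cert" \
      "$(openssl x509 -in "$cert" -noout -enddate | cut -d= -f2)"
  done
}

# On a kubeadm control-plane node (default layout assumed):
# check_cert_expiry /etc/kubernetes/pki
```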
You can use the check-expiration subcommand to check when certificates expire:
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Dec 12, 2022 16:00 UTC                                           no
apiserver                  Dec 12, 2022 16:00 UTC                   ca                      no
apiserver-etcd-client      Dec 12, 2022 16:00 UTC                   etcd-ca                 no
apiserver-kubelet-client   Dec 12, 2022 16:00 UTC                   ca                      no
controller-manager.conf    Dec 12, 2022 16:00 UTC                                           no
etcd-healthcheck-client    Dec 12, 2022 16:00 UTC                   etcd-ca                 no
etcd-peer                  Dec 12, 2022 16:00 UTC                   etcd-ca                 no
etcd-server                Dec 12, 2022 16:00 UTC                   etcd-ca                 no
front-proxy-client         Dec 12, 2022 16:00 UTC                   front-proxy-ca          no
scheduler.conf             Dec 12, 2022 16:00 UTC                                           no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Dec 10, 2031 16:00 UTC   8y              no
etcd-ca                 Dec 10, 2031 16:00 UTC   8y              no
front-proxy-ca          Dec 10, 2031 16:00 UTC   8y              no
The command shows the expiration/residual time for the client certificates in the /etc/kubernetes/pki folder and for the client certificates embedded in the kubeconfig files used by kubeadm (admin.conf, controller-manager.conf, and scheduler.conf).
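Those embedded kubeconfig certificates can also be inspected directly with openssl by decoding the base64 client-certificate-data field. A sketch, assuming the default kubeadm paths and an embedded (not file-referenced) client certificate:

```shell
#!/usr/bin/env bash
# Print the expiry of the client certificate embedded in a kubeconfig file.
kubeconfig_cert_expiry() {
  grep 'client-certificate-data' "$1" \
    | awk '{print $2}' \
    | base64 -d \
    | openssl x509 -noout -enddate
}

# On a kubeadm control-plane node:
# kubeconfig_cert_expiry /etc/kubernetes/admin.conf
```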
In fact, kubeadm renews all certificates during a control-plane (master) upgrade, so automatic renewal only happens if you have performed a Kubernetes version upgrade within the past year.
For manual renewal on v1.15.x and later, you can run kubeadm certs renew with a specific certificate name to extend that certificate's validity by one year. The command uses the CA (or front-proxy-CA) certificate and the matching key (.key) stored in /etc/kubernetes/pki. On versions before v1.15.x, your only option is to regenerate the certificates yourself from the existing keys.
The current cluster is v1.22.2, so we use the kubeadm approach. Before renewing, back up the current keys and certificates:
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cp -r /etc/kubernetes /etc/kubernetes.20221214.bak
Run the renewal command; here we renew all certificates at once:
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubeadm certs renew all
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[renew] Error reading configuration from the Cluster. Falling back to default configuration
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed
Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
Check the certificate expiration times again:
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Dec 14, 2023 17:11 UTC   364d                                    no
apiserver                  Dec 14, 2023 17:11 UTC   364d            ca                      no
apiserver-etcd-client      Dec 14, 2023 17:11 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Dec 14, 2023 17:11 UTC   364d            ca                      no
controller-manager.conf    Dec 14, 2023 17:11 UTC   364d                                    no
etcd-healthcheck-client    Dec 14, 2023 17:11 UTC   364d            etcd-ca                 no
etcd-peer                  Dec 14, 2023 17:11 UTC   364d            etcd-ca                 no
etcd-server                Dec 14, 2023 17:11 UTC   364d            etcd-ca                 no
front-proxy-client         Dec 14, 2023 17:11 UTC   364d            front-proxy-ca          no
scheduler.conf             Dec 14, 2023 17:11 UTC   364d                                    no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Dec 10, 2031 16:00 UTC   8y              no
etcd-ca                 Dec 10, 2031 16:00 UTC   8y              no
front-proxy-ca          Dec 10, 2031 16:00 UTC   8y              no
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
After running this command you need to restart the control-plane (master) static Pods. This is required because dynamic certificate reload is not yet supported by all components and certificates. Static Pods are managed by the local kubelet rather than the API Server, so kubectl cannot be used to delete or restart them.
To restart a static Pod, temporarily move its manifest out of /etc/kubernetes/manifests/ and wait 20 seconds (see the fileCheckFrequency value in the KubeletConfiguration struct). If a Pod's manifest is not in the manifest directory, the kubelet terminates it. After another fileCheckFrequency period you can move the file back; the kubelet will recreate the Pod, and the component will pick up the renewed certificate.
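The move-out/wait/move-back procedure described above can also be scripted. A minimal sketch, assuming the default manifest path /etc/kubernetes/manifests and the default 20-second fileCheckFrequency (the wait is padded to 25 seconds):

```shell
#!/usr/bin/env bash
# Bounce all static Pods by moving their manifests away and back again.
bounce_static_pods() {
  local manifests="$1" backup="$2" wait="${3:-25}"
  mkdir -p "$backup"
  mv "$manifests"/*.yaml "$backup"/
  sleep "$wait"   # give the kubelet one fileCheckFrequency period to stop the Pods
  mv "$backup"/*.yaml "$manifests"/
}

# On a control-plane node:
# bounce_static_pods /etc/kubernetes/manifests /tmp/manifests-bak
```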
Here I tar up the static Pod YAML files in this directory, delete them, and then extract them again after 20 seconds:
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$ls
etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$tar -cf ./static.tar etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$ls
etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml static.tar
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$tar -tf static.tar
etcd.yaml
kube-apiserver.yaml
kube-controller-manager.yaml
kube-scheduler.yaml
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$rm -f *.yaml
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$ls
static.tar
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$
The connection is now refused, which confirms that the kube-apiserver Pod has been stopped. Then we extract the archive again:
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$kubectl get ns
The connection to the server 192.168.26.81:6443 was refused - did you specify the right host or port?
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$tar -xf static.tar
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$ls
etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml static.tar
Trying again, we're now told authentication is required:
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$kubectl get ns
error: You must be logged in to the server (Unauthorized)
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$
Since the certificates were regenerated, the kubeconfig file we copied earlier is now invalid; we need to copy the new kubeconfig into the .kube directory:
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes]
└─$ls
admin.conf controller-manager.conf kubelet.conf manifests pki scheduler.conf
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes]
└─$cp admin.conf /root/.kube/config
cp: overwrite '/root/.kube/config'? y
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes]
└─$kubectl get ns
NAME STATUS AGE
awx Active 60d
constraints-cpu-example Active 36d
default Active 367d
ingress-nginx Active 356d
..............
OK. After copying, the test succeeds and namespaces can be listed normally. Let's confirm the control-plane node's static Pods:
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes]
└─$kubectl get pods -n kube-system | grep vms81.liruilongs.github.io
etcd-vms81.liruilongs.github.io 1/1 Running 0 367d
kube-apiserver-vms81.liruilongs.github.io 1/1 Running 0 332d
kube-controller-manager-vms81.liruilongs.github.io 1/1 Running 0 365d
kube-scheduler-vms81.liruilongs.github.io 1/1 Running 0 367d
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes]
└─$
That covers certificate renewal for v1.15.x and later. For versions before v1.15.x, you have to regenerate the certificates from the existing keys yourself and write them back into the corresponding kubeconfig files. The GitHub project below is a renewal script written by a community member that can be used for versions before v1.15.x.
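For that pre-v1.15 path, the core operation is signing a new certificate from the existing private key with the existing cluster CA. A heavily simplified sketch: the subject and paths are illustrative, and a real apiserver certificate also needs the SAN entries from the old certificate (copy them out of `openssl x509 -noout -text` on the expired cert and pass them via an extensions file), which this omits:

```shell
#!/usr/bin/env bash
# Re-sign a certificate from an existing key using the existing CA.
resign_cert() {
  local key="$1" ca_crt="$2" ca_key="$3" subject="$4" out="$5"
  # Generate a CSR from the existing key and sign it with the CA for one year.
  openssl req -new -key "$key" -subj "$subject" \
    | openssl x509 -req -CA "$ca_crt" -CAkey "$ca_key" \
        -CAcreateserial -days 365 -out "$out"
}

# Illustrative example (paths follow the default kubeadm layout):
# resign_cert /etc/kubernetes/pki/apiserver.key \
#             /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/ca.key \
#             "/CN=kube-apiserver" /etc/kubernetes/pki/apiserver.crt
```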
If you found this article useful, please give it a like. Your support encourages me to keep writing!