Author: youki2008 (System Architect, DDT) · 2020-04-23 09:37

Deploying a Kubernetes Cluster with Kubeadm (v1.15)


Lab environment:

Component versions:

  • Docker 18.09
  • Kubeadm 1.15.1
  • Kubectl 1.15.1
  • Kubelet 1.15.1

Cluster machines:

  • k8s-master01 192.168.27.184
  • k8s-node01 192.168.27.185
  • k8s-node02 192.168.27.187

Part 1: System Initialization

Hostname changes:

1. Set a permanent hostname on each of the three hosts, then log back in:

192.168.27.184
hostnamectl set-hostname k8s-master01

192.168.27.185
hostnamectl set-hostname k8s-node01

192.168.27.187
hostnamectl set-hostname k8s-node02

2. Edit the /etc/hosts file and add the hostname-to-IP mappings (do this on all three hosts):

vi /etc/hosts
192.168.27.184 k8s-master01
192.168.27.185 k8s-node01
192.168.27.187 k8s-node02
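
A quick sanity check after editing /etc/hosts (assuming all three machines are online and reachable) is to confirm that each hostname resolves and answers from the master:

# run on k8s-master01; every name should resolve to the IP added above and reply
for h in k8s-master01 k8s-node01 k8s-node02; do ping -c 1 $h; done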

Install dependency packages

yum install -y conntrack ntpdate ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git

Switch the firewall to iptables and flush the rules (do this on all three hosts):

systemctl stop firewalld && systemctl disable firewalld
yum install -y iptables-services && systemctl enable iptables && systemctl start iptables && iptables -F && service iptables save

Disable swap and SELinux (do this on all three hosts):

swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab && setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
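
To confirm the change took effect (a minimal check; the sed edits above only apply at the next boot, while swapoff and setenforce take effect immediately):

# swap usage should show 0 and SELinux should report Permissive (Disabled after a reboot)
free -m | grep -i swap
getenforce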

Set up the bridge routing module:
yum install -y bridge-utils.x86_64
modprobe br_netfilter

Tune kernel parameters (do this on all three hosts):

cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0 # disallow swap; use it only when the system is OOM
vm.overcommit_memory=1 # do not check whether physical memory is sufficient
vm.panic_on_oom=0 # do not panic on OOM; let the OOM killer handle it
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
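
A quick way to confirm the parameters were applied (this assumes br_netfilter was loaded in the previous step, otherwise the bridge keys do not exist):

# both values should print 1
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward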

Adjust the system time zone
# set the system time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai

# write the current UTC time to the hardware clock
timedatectl set-local-rtc 0

# restart services that depend on the system time
systemctl restart rsyslog
systemctl restart crond

Stop unneeded system services
systemctl stop postfix && systemctl disable postfix

Configure rsyslogd and systemd journald

mkdir /var/log/journal # directory for persistent journal logs
mkdir -p /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# persist logs to disk
Storage=persistent

# compress historical logs
Compress=yes

SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000

# maximum disk usage
SystemMaxUse=10G

# maximum size of a single log file
SystemMaxFileSize=200M

# keep logs for two weeks
MaxRetentionSec=2week

# do not forward logs to syslog
ForwardToSyslog=no

EOF

systemctl restart systemd-journald
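
To confirm journald is now writing persistent logs (a minimal check):

# the journal should report its disk usage under the new /var/log/journal directory
journalctl --disk-usage
ls /var/log/journal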

Upgrade the system kernel to 4.4
The stock 3.10.x kernel that ships with CentOS 7.x has bugs that make Docker and Kubernetes unstable, so upgrade it:

rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# after installation, check that the kernel menuentry in /boot/grub2/grub.cfg contains an initrd16 entry; if not, install again!

yum --enablerepo=elrepo-kernel install -y kernel-lt

# set the system to boot from the new kernel by default
grub2-set-default 'CentOS Linux (4.4.189-1.el7.elrepo.x86_64) 7 (Core)'
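
After setting the default entry, reboot and confirm the running kernel (the menuentry name above must match the kernel-lt version that was actually installed; check with rpm -qa | grep kernel-lt):

reboot
# after the host comes back up, the running kernel should be the 4.4.x one
uname -r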

Part 2: Installing with Kubeadm

Prerequisites for enabling ipvs in kube-proxy
modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4

EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Install Docker
yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum update -y && yum install -y docker-ce

# yum list docker-ce.x86_64 --showduplicates | sort -r    # lists all available Docker versions

# create the /etc/docker directory
mkdir /etc/docker

# configure the daemon
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# restart the Docker service
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
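
To confirm Docker picked up the daemon.json above (in particular the systemd cgroup driver, which must match the kubelet's):

# should print "Cgroup Driver: systemd"
docker info | grep -i "cgroup driver"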

Install kubeadm (on the master and the nodes)
cat >/etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

EOF
yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1
systemctl enable kubelet.service
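
Confirm the expected 1.15.1 versions were installed before continuing:

kubeadm version -o short
kubelet --version
kubectl version --client --short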

Initialize the master node

kubeadm config print init-defaults > kubeadm-config.yaml

# Modify the following items in kubeadm-config.yaml:
advertiseAddress: 192.168.27.184
kubernetesVersion: v1.15.1
podSubnet: "10.244.0.0/16"

# Append the following section to enable ipvs mode for kube-proxy:
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

vi kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.27.184
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

kubeadm init --config=install-k8s/core/kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.27.184 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.27.184 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.27.184]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 40.503212 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
64ec333e72f19469304fca25102fda8e4cb068169bea3705e17e6bc2e38f8f78
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.27.184:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:ec15e02b16515b7ae03dabb8036d5ed0082f93f010814bcbf3431cb0c4854ba2

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
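
At this point kubectl on the master should be able to reach the apiserver; a quick check (the master will stay NotReady until the flannel network is deployed in the next step):

kubectl get nodes
kubectl get cs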

Deploy the flannel network (on the master node)

[root@k8s-master01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

Verify:
① The master node is Ready

[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready master 22h v1.15.1

② Check the kube-system namespace

[root@k8s-master01 ~]# kubectl get pods -n kube-system |grep flannel
NAME READY STATUS RESTARTS AGE
kube-flannel-ds-amd64-f6l9d 1/1 Running 0 36s
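
Since SupportIPVSProxyMode was enabled in kubeadm-config.yaml, it is also worth confirming that kube-proxy really started in ipvs mode (a minimal check using the ipvsadm package installed earlier):

# a non-empty virtual server table means kube-proxy is programming ipvs rules
ipvsadm -Ln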

Join the nodes to the cluster

Run all of the following on both node servers:

kubeadm join 192.168.27.184:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:ec15e02b16515b7ae03dabb8036d5ed0082f93f010814bcbf3431cb0c4854ba2

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:

  • Certificate signing request was sent to apiserver and a response was received.
  • The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
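
If the bootstrap token above has expired by the time a node tries to join (the ttl in kubeadm-config.yaml is 24h), a fresh join command can be generated on the master:

# run on k8s-master01; prints a complete kubeadm join command with a new token
kubeadm token create --print-join-command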

Verify that the cluster initialized successfully

[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready master 22h v1.15.1
k8s-node01 Ready <none> 21h v1.15.1
k8s-node02 Ready <none> 21h v1.15.1
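
It can also help to confirm that the kube-system pods (flannel, kube-proxy, coredns) are Running across all three nodes:

kubectl get pods -n kube-system -o wide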

Reference:
https://www.cnblogs.com/lonelyxmas/p/10621663.html

Appendix: problems encountered during deployment and how they were resolved

  1. After the master node was initialized, nodes got stuck at [preflight] Running pre-flight checks when joining the cluster

kubeadm join 192.168.27.184:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:ec15e02b16515b7ae03dabb8036d5ed0082f93f010814bcbf3431cb0c4854ba2

[preflight] Running pre-flight checks

Analysis:

A node joining the master communicates with the apiserver on port 6443, so first check whether the master shows any connections on port 6443.

Run the following command on the master node:

netstat -anltp |grep 192.168.27.184

[root@k8s-master01 ~]# netstat -anltp |grep 192.168.27.184
tcp 0 0 192.168.27.184:2379 0.0.0.0:* LISTEN 17526/etcd
tcp 0 0 192.168.27.184:2380 0.0.0.0:* LISTEN 17526/etcd
tcp 0 0 192.168.27.184:53702 192.168.27.184:6443 ESTABLISHED 17141/kube-controll
tcp 0 0 192.168.27.184:53972 192.168.27.184:6443 ESTABLISHED 17141/kube-controll
tcp 0 0 192.168.27.184:2379 192.168.27.184:36728 ESTABLISHED 17526/etcd
tcp 0 0 192.168.27.184:53694 192.168.27.184:6443 ESTABLISHED 17530/kube-schedule
tcp 0 0 192.168.27.184:53904 192.168.27.184:6443 ESTABLISHED 17107/kube-proxy
tcp 0 896 192.168.27.184:22 192.168.103.99:54763 ESTABLISHED 22332/sshd: root@pt
tcp 0 0 192.168.27.184:53690 192.168.27.184:6443 ESTABLISHED 11020/kubelet
tcp 0 0 192.168.27.184:36728 192.168.27.184:2379 ESTABLISHED 17526/etcd
tcp6 0 0 192.168.27.184:6443 192.168.27.184:53904 ESTABLISHED 17734/kube-apiserve
tcp6 0 0 192.168.27.184:6443 192.168.27.184:33338 ESTABLISHED 17734/kube-apiserve
tcp6 0 0 192.168.27.184:6443 192.168.27.184:53972 ESTABLISHED 17734/kube-apiserve
tcp6 0 0 192.168.27.184:6443 192.168.27.184:53694 ESTABLISHED 17734/kube-apiserve

There are no connections from the node, so run the following command on the node:

netstat -anltp |grep 192.168.27.184

[root@k8s-node01 ~]# netstat -anltp |grep 192.168.27.184
tcp 0 0 192.168.27.185:35302 192.168.27.184:6443 SYN_SENT
tcp 0 0 192.168.27.185:35280 192.168.27.184:6443 SYN_SENT

The TCP connections from the node to the master are stuck in SYN_SENT, which means the TCP three-way handshake never completes. So check the iptables firewall settings on the master node:

[root@k8s-master01 ~]# iptables --list

Chain INPUT (policy ACCEPT)
target prot opt source destination
KUBE-FIREWALL all -- anywhere anywhere
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target prot opt source destination
KUBE-FORWARD all -- anywhere anywhere /* kubernetes forwarding rules */
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
ACCEPT all -- k8s-master01/16 anywhere
ACCEPT all -- anywhere k8s-master01/16

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-FIREWALL all -- anywhere anywhere

Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- anywhere anywhere /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000

Chain KUBE-FORWARD (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT all -- k8s-master01/16 anywhere /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere k8s-master01/16 /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED

This confirms that iptables on the master is blocking the node's traffic to port 6443, so add a firewall rule to iptables:

[root@k8s-master01 ~]# vi /etc/sysconfig/iptables

# sample configuration for iptables service
# you can edit this manually or use system-config-firewall
# please do not ask us to add additional ports/services to this default configuration

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 6443 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited

COMMIT

systemctl restart iptables
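
Before re-running kubeadm join, it is worth checking from a node that port 6443 is now reachable (curl was installed in the dependency step; any HTTP response, even an authorization error, means the port is open):

# run on k8s-node01
curl -k https://192.168.27.184:6443/healthz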

Run the kubeadm join command on the node again; it now joins the Kubernetes cluster successfully.

  2. flannel reports "pod cidr not assigned" during installation
    # Symptom
    In a Kubernetes cluster installed with kubeadm, flannel fails to start when additional masters are added; its log shows the error "pod cidr not assigned".

# Cause
kube-controller-manager did not allocate a pod CIDR (IP range) to the newly joined node.

# Fix
Allocate the IP range to the node manually: on the master node run kubectl edit node {...}, then add a podCIDR field to the spec section:
apiVersion: v1
kind: Node
metadata:
  name: kube-master1
spec:
  podCIDR: 10.244.0.0/16
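
Before patching nodes by hand, it may also help to check whether kube-controller-manager was started with CIDR allocation enabled; on a kubeadm master the flags live in the static pod manifest (a minimal check, assuming the default manifest path):

# --allocate-node-cidrs should be true and --cluster-cidr should match podSubnet (10.244.0.0/16)
grep -E "allocate-node-cidrs|cluster-cidr" /etc/kubernetes/manifests/kube-controller-manager.yaml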

Comments (3)

youki2008 (System Architect, DDT)
2020-07-03 09:47
For production, installing k8s from binaries is preferable, although kubeadm also works; the default certificates are only valid for one year and must be renewed when they expire. Renewal is not complicated and tutorials are available online.
swimming03 (System Engineer, 芯火科技)
2020-07-01 13:30
Can this be used as-is in a production environment?
soisnice (System Operations Engineer, mmc)
2020-06-30 10:59
Very well written; I am learning from it.

youki2008 @soisnice Thanks, let's keep improving together!

2020-06-30 22:22