
CentOS 7.5: Deploying a Kubernetes v1.13 Cluster from Binaries


I. Overview

Kubernetes 1.13 has been released. It is the fourth and final release of 2018, and one of the shortest release cycles to date (ten weeks after the previous version). The release focuses on the stability and extensibility of Kubernetes, and three major features around storage and cluster lifecycle have reached general availability.
The core features of Kubernetes 1.13 are: simplified cluster management with kubeadm, the Container Storage Interface (CSI), and CoreDNS as the default DNS server.
Simplified cluster management with kubeadm
Most people who work with Kubernetes have used kubeadm at some point. It is the key tool for managing the cluster lifecycle, covering everything from creation through configuration to upgrades. With the 1.13 release, kubeadm has graduated to GA and is now generally available. kubeadm handles the bootstrapping of production clusters on existing hardware and configures the core Kubernetes components following best practices, providing a secure and simple join flow for new nodes and supporting easy upgrades.
The most notable part of this GA release is the graduated advanced functionality, in particular pluggability and configurability. kubeadm is intended as a toolbox for administrators and higher-level automation systems alike, and this release is an important step in that direction.
Container Storage Interface (CSI)
The Container Storage Interface was introduced as an alpha feature in 1.9, moved to beta in 1.10, and is now generally available. With CSI, the Kubernetes volume layer becomes truly extensible: third-party storage vendors can write code that interoperates with Kubernetes without touching any Kubernetes core code. The CSI specification itself has also reached 1.0.
With CSI stable, plugin authors can develop out-of-tree storage plugins at their own pace; see the CSI documentation for details.
CoreDNS becomes the default DNS server for Kubernetes
In 1.11 the team announced that CoreDNS had reached general availability for DNS-based service discovery. In 1.13, CoreDNS officially replaces kube-dns as the default DNS server in Kubernetes. CoreDNS is a general-purpose, authoritative DNS server that provides a backwards-compatible and extensible integration with Kubernetes. Because CoreDNS is a single executable running as a single process, it has fewer moving parts than the previous DNS server, and it supports flexible use cases through custom DNS entries. In addition, CoreDNS is written in Go, which gives it strong memory safety.
CoreDNS is now the recommended DNS solution for Kubernetes 1.13 and later. The project has already switched its common test infrastructure to CoreDNS by default, and the team recommends that users switch as well. kube-dns will remain supported for at least one more release, but now is the time to start planning the migration. Many OSS installers, including kubeadm since 1.11, have already made the switch.

1. Environment preparation:

Node layout
IP address       Hostname  CPU  Memory  Disk
10.167.130.201   master    4C   4G      50G
10.167.130.202   node01    4C   4G      50G
10.167.130.207   node02    4C   4G      50G
2. Architecture diagrams
Kubernetes architecture diagram

Flannel network architecture diagram

  • After data leaves the source container, it is forwarded by the host's docker0 virtual interface to the flannel0 virtual interface. flannel0 is a P2P virtual interface, and the flanneld service listens on its other end.
  • Flannel maintains a routing table between nodes in etcd; its contents are described in the configuration part below (see the inspection sketch after this list).
  • The flanneld service on the source host encapsulates the original payload in UDP and, based on its routing table, delivers it to the flanneld service on the destination node. On arrival the packet is decapsulated, enters the destination node's flannel0 virtual interface, is forwarded to the destination host's docker0 virtual interface, and is finally routed by docker0 to the target container, just as in local container-to-container communication.
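
A hedged sketch of inspecting that routing data (assuming the etcd key prefix /coreos.com/network configured later in this guide, and the etcd certificates generated in section III):

    # Read the flannel network config and list the per-node subnet leases (etcd v2 API)
    /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem \
      --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://10.167.130.201:2379" \
      get /coreos.com/network/config
    /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem \
      --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://10.167.130.201:2379" \
      ls /coreos.com/network/subnets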

3. Kubernetes workflow

Description of each functional module in the cluster:
Master node:
The master node consists of four main modules: APIServer, scheduler, controller-manager, and etcd.

  • APIServer: APIServer exposes the RESTful Kubernetes API and is the unified entry point for management commands. Every create, delete, update or query of a resource is handled by APIServer and then persisted to etcd. As shown in the diagram, kubectl (the client tool shipped with Kubernetes, which internally calls the Kubernetes API) talks directly to APIServer.
  • scheduler: the scheduler places Pods onto suitable Nodes. Treated as a black box, its input is a Pod plus a list of Nodes, and its output is a binding of that Pod to one Node. Kubernetes provides a default scheduling algorithm and also keeps the interface open so that users can implement their own scheduling algorithms as needed.
  • controller-manager: if APIServer does the front-office work, the controller manager is responsible for the back office. Each resource has a corresponding controller, and the controller manager manages these controllers. For example, when we create a Pod through APIServer, APIServer's job is done as soon as the Pod object is created; the controllers then take over and drive it towards the desired state.
  • etcd: etcd is a highly available key-value store that Kubernetes uses to store the state of every resource, which is what makes the RESTful API possible.

Node:

Each Node runs two main components: kubelet and kube-proxy.

  • kube-proxy: this module implements service discovery and reverse proxying in Kubernetes. kube-proxy supports TCP and UDP forwarding and by default distributes client traffic across the backend Pods of a Service using a Round Robin algorithm. For service discovery, kube-proxy uses etcd's watch mechanism to track changes to Service and Endpoint objects in the cluster and maintains a Service-to-Endpoint mapping, so that changes to backend Pod IPs are invisible to clients. kube-proxy also supports session affinity (see the manifest sketch after this list).
  • kubelet: kubelet is the master's agent on each Node and the most important module on the node. It maintains and manages all containers on that Node (containers not created through Kubernetes are left alone). In essence, it is responsible for reconciling the actual running state of Pods with the desired state.
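
As a hedged illustration (not part of this deployment; the Service name and selector below are hypothetical), session affinity is enabled on a Service by setting sessionAffinity: ClientIP:

    apiVersion: v1
    kind: Service
    metadata:
      name: demo-svc            # hypothetical example Service
    spec:
      selector:
        app: demo               # hypothetical label selector
      sessionAffinity: ClientIP # keep traffic from one client IP on the same backend Pod
      ports:
      - port: 80
        targetPort: 8080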

    II. Basic environment setup (all three nodes)
    1.1. Configure the hosts file
          vim /etc/hosts
    10.167.130.201 master
    10.167.130.202 node01
    10.167.130.207 node02     
    1.2. Disable the firewall and SELinux
    systemctl stop firewalld && systemctl disable firewalld
    setenforce 0
    vi /etc/selinux/config
    SELINUX=disabled
    1.3. Disable swap
    swapoff -a && sysctl -w vm.swappiness=0
    vi /etc/fstab
    #UUID=7bff6243-324c-4587-b550-55dc34018ebf swap                    swap    defaults        0 0
    1.4. Set the kernel parameters required by Docker
    cat << EOF | tee /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sysctl -p /etc/sysctl.d/k8s.conf
    1.5. Install Docker
    yum install -y yum-utils device-mapper-persistent-data lvm2
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    yum list docker-ce --showduplicates | sort -r
    yum install docker-ce -y
    systemctl start docker && systemctl enable docker
    1.6. Configure a registry mirror
    mkdir -p /etc/docker
    tee /etc/docker/daemon.json <<-'EOF'
    {"registry-mirrors": ["https://890km4uy.mirror.aliyuncs.com"],"graph": "/data/docker"}
    EOF
    systemctl daemon-reload
    systemctl restart docker
    1.7. SSH key authentication
    root@master:~# ssh-keygen (press Enter at every prompt)
    root@master:~# ssh-copy-id 10.167.130.202
    root@master:~# ssh-copy-id 10.167.130.207
    
    III. Deploying etcd v3.3.10 (3-node cluster)
    1.1. Layout
    etcd01 10.167.130.201
    etcd02 10.167.130.202
    etcd03 10.167.130.207
    1.2. Install and configure CFSSL (on the master)
     root@master:~# mkdir /data/ssl -p
     root@master:~# cd /data/
     root@master:/data# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
     root@master:/data# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
     root@master:/data# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
     root@master:/data# chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
     root@master:/data# mv cfssl_linux-amd64 /usr/local/bin/cfssl
     root@master:/data# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
     root@master:/data# mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
     root@master:/data# cd /data/ssl/
    1.3. Binary package download: https://github.com/etcd-io/etcd/releases/tag/v3.3.10
    Download etcd-v3.3.10-linux-amd64.tar.gz
    1.4. Generate certificates
     root@master:/data/ssl# mkdir /data/ssl/etcd
     root@master:/data/ssl# cd /data/ssl/etcd
     root@master:/data/ssl/etcd#vim etcd.sh
    # etcd
    # cat ca-config.json
    cat > ca-config.json <<EOF
    {
    "signing": {
      "default": {
        "expiry": "87600h"
      },
      "profiles": {
        "www": {
           "expiry": "87600h",
           "usages": [
              "signing",
              "key encipherment",
              "server auth",
              "client auth"
          ]
        }
      }
    }
    }
    EOF
    
    # cat ca-csr.json
    cat > ca-csr.json <<EOF
    {
      "CN": "etcd CA",
      "key": {
          "algo": "rsa",
          "size": 2048
      },
      "names": [
          {
              "C": "CN",
              "L": "Beijing",
              "ST": "Beijing"
          }
      ]
    }
    EOF
    
    # cat server-csr.json
    cat > server-csr.json <<EOF
    {
      "CN": "etcd",
      "hosts": [
      "10.167.130.201",
      "10.167.130.202",
      "10.167.130.207"
      ],
      "key": {
          "algo": "rsa",
          "size": 2048
      },
      "names": [
          {
              "C": "CN",
              "L": "Beijing",
              "ST": "Beijing"
          }
      ]
    }
    EOF
    
     root@master:/data/ssl/etcd#sh etcd.sh
     root@master:/data/ssl/etcd# ls -lrt
    total 16
    -rw-r--r-- 1 root root 950 Jan  4 15:46 etcd.sh
    -rw-r--r-- 1 root root 287 Jan  4 15:46 ca-config.json
    -rw-r--r-- 1 root root 209 Jan  4 15:46 ca-csr.json
    -rw-r--r-- 1 root root 293 Jan  4 15:46 server-csr.json
    
     root@master:/data/ssl/etcd# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
    2019/01/04 15:47:40 [INFO] generating a new CA key and certificate from CSR
    2019/01/04 15:47:40 [INFO] generate received request
    2019/01/04 15:47:40 [INFO] received CSR
    2019/01/04 15:47:40 [INFO] generating key: rsa-2048
    2019/01/04 15:47:40 [INFO] encoded CSR
    2019/01/04 15:47:40 [INFO] signed certificate with serial number 298742305978348987462201054923289128332949640870
     root@master:/data/ssl/etcd# ls -lrt
    total 28
    -rw-r--r-- 1 root root  950 Jan  4 15:46 etcd.sh
    -rw-r--r-- 1 root root  287 Jan  4 15:46 ca-config.json
    -rw-r--r-- 1 root root  209 Jan  4 15:46 ca-csr.json
    -rw-r--r-- 1 root root  293 Jan  4 15:46 server-csr.json
    -rw-r--r-- 1 root root 1265 Jan  4 15:47 ca.pem
    -rw------- 1 root root 1675 Jan  4 15:47 ca-key.pem
    -rw-r--r-- 1 root root  956 Jan  4 15:47 ca.csr
     root@master:/data/ssl/etcd# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
    2019/01/04 15:48:23 [INFO] generate received request
    2019/01/04 15:48:23 [INFO] received CSR
    2019/01/04 15:48:23 [INFO] generating key: rsa-2048
    2019/01/04 15:48:23 [INFO] encoded CSR
    2019/01/04 15:48:23 [INFO] signed certificate with serial number 318982732805652711605393514888589880134711342068
    2019/01/04 15:48:23 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
    websites. For more information see the Baseline Requirements for the Issuance and Management
    of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
    specifically, section 10.2.3 ("Information Requirements").
     root@master:/data/ssl/etcd# ls -lrt
    total 40
    -rw-r--r-- 1 root root  950 Jan  4 15:46 etcd.sh
    -rw-r--r-- 1 root root  287 Jan  4 15:46 ca-config.json
    -rw-r--r-- 1 root root  209 Jan  4 15:46 ca-csr.json
    -rw-r--r-- 1 root root  293 Jan  4 15:46 server-csr.json
    -rw-r--r-- 1 root root 1265 Jan  4 15:47 ca.pem
    -rw------- 1 root root 1675 Jan  4 15:47 ca-key.pem
    -rw-r--r-- 1 root root  956 Jan  4 15:47 ca.csr
    -rw-r--r-- 1 root root 1338 Jan  4 15:48 server.pem
    -rw------- 1 root root 1675 Jan  4 15:48 server-key.pem
    -rw-r--r-- 1 root root 1013 Jan  4 15:48 server.csr
    1.5. Deploy etcd
     root@master:/data/ssl/etcd# mkdir /data/src/
     root@master:/data/ssl/etcd# cd /data/src/
     root@master:/data/src# mkdir /opt/etcd/{bin,cfg,ssl} -p
     root@master:/data/src# tar xf etcd-v3.3.10-linux-amd64.tar.gz
     root@master:/data/src# mv etcd-v3.3.10-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
     root@master:/data/src# vim /opt/etcd/cfg/etcd
    # On each of the three machines, ETCD_NAME and the local IP are different
    #[Member]
    ETCD_NAME="etcd01"
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_PEER_URLS="https://10.167.130.201:2380"
    ETCD_LISTEN_CLIENT_URLS="https://10.167.130.201:2379"
    
    #[Clustering]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.167.130.201:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://10.167.130.201:2379"
    ETCD_INITIAL_CLUSTER="etcd01=https://10.167.130.201:2380,etcd02=https://10.167.130.202:2380,etcd03=https://10.167.130.207:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new"
  • ETCD_NAME node name
  • ETCD_DATA_DIR data directory
  • ETCD_LISTEN_PEER_URLS peer (cluster) listen address
  • ETCD_LISTEN_CLIENT_URLS client listen address
  • ETCD_INITIAL_ADVERTISE_PEER_URLS advertised peer address
  • ETCD_ADVERTISE_CLIENT_URLS advertised client address
  • ETCD_INITIAL_CLUSTER initial cluster member list
  • ETCD_INITIAL_CLUSTER_TOKEN cluster token
  • ETCD_INITIAL_CLUSTER_STATE state when joining the cluster: new for a brand-new cluster, existing to join an existing one (see the sketch after this list)
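
    A hedged sketch of the "existing" case (the member name etcd04 and address 10.167.130.208 below are hypothetical and not part of this 3-node deployment): first register the new member on a running node, then start it with the updated member list and ETCD_INITIAL_CLUSTER_STATE="existing". Note that the unit file below passes --initial-cluster-state=new explicitly, so that flag would also need changing.

    # Register the new member via the etcd v2 API from an existing node
    /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem \
      --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://10.167.130.201:2379" \
      member add etcd04 https://10.167.130.208:2380
    # On the new node: ETCD_INITIAL_CLUSTER must include etcd04, and
    # ETCD_INITIAL_CLUSTER_STATE="existing"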

    root@master:/data/src# vim /usr/lib/systemd/system/etcd.service
    [Unit]
    Description=Etcd Server
    After=network.target
    After=network-online.target
    Wants=network-online.target

    [Service]
    Type=notify
    EnvironmentFile=/opt/etcd/cfg/etcd
    ExecStart=/opt/etcd/bin/etcd \
    --name=${ETCD_NAME} \
    --data-dir=${ETCD_DATA_DIR} \
    --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
    --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
    --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
    --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
    --initial-cluster=${ETCD_INITIAL_CLUSTER} \
    --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
    --initial-cluster-state=new \
    --cert-file=/opt/etcd/ssl/server.pem \
    --key-file=/opt/etcd/ssl/server-key.pem \
    --peer-cert-file=/opt/etcd/ssl/server.pem \
    --peer-key-file=/opt/etcd/ssl/server-key.pem \
    --trusted-ca-file=/opt/etcd/ssl/ca.pem \
    --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
    Restart=on-failure
    LimitNOFILE=65536

    [Install]
    WantedBy=multi-user.target
    root@master:/data/src# ssh node01 "mkdir /opt/etcd/{bin,cfg,ssl} -p"
    root@master:/data/src# ssh node02 "mkdir /opt/etcd/{bin,cfg,ssl} -p"
    root@master:/data/src# ssh node01 "mkdir /data/src/ -p"
    root@master:/data/src# ssh node02 "mkdir /data/src/ -p"
    root@master:/data/src# scp /opt/etcd/bin/{etcd,etcdctl} node01:/opt/etcd/bin/
    root@master:/data/src# scp /opt/etcd/bin/{etcd,etcdctl} node02:/opt/etcd/bin/
    root@master:/data/src# scp /opt/etcd/cfg/etcd node01:/opt/etcd/cfg/
    root@master:/data/src# scp /opt/etcd/cfg/etcd node02:/opt/etcd/cfg/
    root@master:/data/src# scp /usr/lib/systemd/system/etcd.service node01:/usr/lib/systemd/system/
    root@master:/data/src# scp /usr/lib/systemd/system/etcd.service node02:/usr/lib/systemd/system/
    root@master:/data/src# ssh node01 "sed -i 's/ETCD_NAME\=\"etcd01\"/ETCD_NAME\=\"etcd02\"/g;s/ETCD_LISTEN_PEER_URLS\=\"https\:\/\/10.167.130.201\:2380\"/ETCD_LISTEN_PEER_URLS\=\"https\:\/\/10.167.130.202\:2380\"/g;s/ETCD_LISTEN_CLIENT_URLS\=\"https\:\/\/10.167.130.201\:2379\"/ETCD_LISTEN_CLIENT_URLS\=\"https\:\/\/10.167.130.202\:2379\"/g;s/ETCD_INITIAL_ADVERTISE_PEER_URLS\=\"https\:\/\/10.167.130.201\:2380\"/ETCD_INITIAL_ADVERTISE_PEER_URLS\=\"https\:\/\/10.167.130.202\:2380\"/g;s/ETCD_ADVERTISE_CLIENT_URLS\=\"https\:\/\/10.167.130.201\:2379\"/ETCD_ADVERTISE_CLIENT_URLS\=\"https\:\/\/10.167.130.202\:2379\"/g' /opt/etcd/cfg/etcd"
    root@master:/data/src# ssh node02 "sed -i 's/ETCD_NAME\=\"etcd01\"/ETCD_NAME\=\"etcd03\"/g;s/ETCD_LISTEN_PEER_URLS\=\"https\:\/\/10.167.130.201\:2380\"/ETCD_LISTEN_PEER_URLS\=\"https\:\/\/10.167.130.207\:2380\"/g;s/ETCD_LISTEN_CLIENT_URLS\=\"https\:\/\/10.167.130.201\:2379\"/ETCD_LISTEN_CLIENT_URLS\=\"https\:\/\/10.167.130.207\:2379\"/g;s/ETCD_INITIAL_ADVERTISE_PEER_URLS\=\"https\:\/\/10.167.130.201\:2380\"/ETCD_INITIAL_ADVERTISE_PEER_URLS\=\"https\:\/\/10.167.130.207\:2380\"/g;s/ETCD_ADVERTISE_CLIENT_URLS\=\"https\:\/\/10.167.130.201\:2379\"/ETCD_ADVERTISE_CLIENT_URLS\=\"https\:\/\/10.167.130.207\:2379\"/g' /opt/etcd/cfg/etcd"
    root@master:/data/src# cp /data/ssl/etcd/ca*pem /opt/etcd/ssl/
    root@master:/data/src# cp /data/ssl/etcd/server*pem /opt/etcd/ssl/
    root@master:/data/src# scp /data/ssl/etcd/ca*pem root@node01:/opt/etcd/ssl/
    root@master:/data/src# scp /data/ssl/etcd/ca*pem root@node02:/opt/etcd/ssl/
    root@master:/data/src# scp /data/ssl/etcd/server*pem root@node01:/opt/etcd/ssl/
    root@master:/data/src# scp /data/ssl/etcd/server*pem root@node02:/opt/etcd/ssl/
    1.6. Start etcd
    root@master:/data/src# systemctl daemon-reload
    root@master:/data/src# systemctl enable etcd
    root@master:/data/src# systemctl start etcd
    root@master:/data/src# ssh node01 "systemctl daemon-reload ;systemctl enable etcd;systemctl start etcd"
    root@master:/data/src# ssh node02 "systemctl daemon-reload ;systemctl enable etcd;systemctl start etcd"
    1.7. Check cluster health
    root@master:/data/src# cd /data/ssl/etcd/
    root@master:/data/ssl/etcd# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://10.167.130.201:2379,https://10.167.130.202:2379,https://10.167.130.207:2379" cluster-health
    member 60fa4f794a519ffa is healthy: got healthy result from https://10.167.130.207:2379
    member 6640869b9ef07d7c is healthy: got healthy result from https://10.167.130.201:2379
    member ab87db2316db2ca7 is healthy: got healthy result from https://10.167.130.202:2379
    cluster is healthy
    root@master:/data/ssl/etcd#
    IV. Deploying flanneld v0.10.0
    1.1. Write the cluster Pod network configuration into etcd
    root@master:/data/ssl/etcd# cd /data/ssl/etcd/
    root@master:/data/ssl/etcd# /opt/etcd/bin/etcdctl \
    --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
    --endpoints="https://10.167.130.201:2379,https://10.167.130.202:2379,https://10.167.130.207:2379" \
    set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
    The current flanneld release (v0.10.0) does not support the etcd v3 API, so the configuration key and network data are written with the etcd v2 API;
    the Pod network ${CLUSTER_CIDR} written here must be a /16 range and must match the --cluster-cidr parameter of kube-controller-manager;
    flanneld download: https://github.com/coreos/flannel/releases
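
    A hedged verification (same key and certificates as the command above) is to read the value back and confirm it was stored:

    /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
      --endpoints="https://10.167.130.201:2379" get /coreos.com/network/config
    # Expected output: { "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}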
    1.2. Download, extract and install (on master, node01 and node02; if the master will not also run workloads, it can be skipped there)
    root@master:/data/ssl/etcd#cd /data/src/
    wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
    root@master:/data/src# mkdir /opt/kubernetes/bin -p
    root@master:/data/src# tar xf flannel-v0.10.0-linux-amd64.tar.gz
    root@master:/data/src# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/

      root@master:/data/src# mkdir  /opt/kubernetes/cfg 

    1.3. Configure Flannel
    root@master:~# vim /opt/kubernetes/cfg/flanneld
    FLANNEL_OPTIONS="--etcd-endpoints=https://10.167.130.201:2379,https://10.167.130.202:2379,https://10.167.130.207:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"
    1.4. Create the flanneld systemd unit file
    vim /usr/lib/systemd/system/flanneld.service
    [Unit]
    Description=Flanneld overlay address etcd agent
    After=network-online.target network.target
    Before=docker.service

    [Service]
    Type=notify
    EnvironmentFile=/opt/kubernetes/cfg/flanneld
    ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
    ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    The mk-docker-opts.sh script writes the Pod subnet leased to flanneld into /run/flannel/subnet.env; when Docker starts later, it uses the environment variable from this file to configure the docker0 bridge;
    flanneld communicates with other nodes over the interface of the system default route; on hosts with multiple interfaces (for example internal and public networks), use the -iface parameter to pick the interface, e.g. -iface=eth0;
    flanneld must run as root;
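
    After flanneld starts, /run/flannel/subnet.env typically looks roughly like the following (the values are illustrative; the actual subnet depends on the lease this node receives):

    # Example contents of /run/flannel/subnet.env
    DOCKER_NETWORK_OPTIONS=" --bip=172.17.84.1/24 --ip-masq=false --mtu=1450"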
    1.5. Configure Docker to start with the assigned subnet
    vim /usr/lib/systemd/system/docker.service
    [Unit]
    Description=Docker Application Container Engine
    Documentation=https://docs.docker.com
    After=network-online.target firewalld.service
    Wants=network-online.target

    [Service]
    Type=notify
    EnvironmentFile=/run/flannel/subnet.env
    ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
    ExecReload=/bin/kill -s HUP $MAINPID
    LimitNOFILE=infinity
    LimitNPROC=infinity
    LimitCORE=infinity
    TimeoutStartSec=0
    Delegate=yes
    KillMode=process
    Restart=on-failure
    StartLimitBurst=3
    StartLimitInterval=60s

    [Install]
    WantedBy=multi-user.target

    root@master:/data/src#scp /opt/kubernetes/bin/{flanneld,mk-docker-opts.sh} node01:/opt/kubernetes/bin/
    root@master:/data/src#scp /opt/kubernetes/bin/{flanneld,mk-docker-opts.sh} node02:/opt/kubernetes/bin/
    root@master:/data/src#scp /opt/kubernetes/cfg/flanneld node01:/opt/kubernetes/cfg/flanneld
    root@master:/data/src#scp /opt/kubernetes/cfg/flanneld node02:/opt/kubernetes/cfg/flanneld
    root@master:/data/src#scp /usr/lib/systemd/system/docker.service node01:/usr/lib/systemd/system/docker.service
    root@master:/data/src#scp /usr/lib/systemd/system/docker.service node02:/usr/lib/systemd/system/docker.service
    root@master:/data/src#scp /usr/lib/systemd/system/flanneld.service node01:/usr/lib/systemd/system/flanneld.service
    root@master:/data/src#scp /usr/lib/systemd/system/flanneld.service node02:/usr/lib/systemd/system/flanneld.service
    1.6. Start the flanneld service
    root@master:/data/src# systemctl daemon-reload
    root@master:/data/src#systemctl start flanneld
    root@master:/data/src#systemctl enable flanneld
    root@master:/data/src#systemctl restart docker
    root@master:/data/src#ssh node01 "systemctl daemon-reload;systemctl start flanneld;systemctl enable flanneld;systemctl restart docker"
    root@master:/data/src#ssh node02 "systemctl daemon-reload;systemctl start flanneld;systemctl enable flanneld;systemctl restart docker"
    1.7. Verify
    ps -ef |grep docker
    ip addr
    Make sure docker0 and flannel.1 are in the same network range.
    To test cross-node connectivity, access another node's docker0 IP from the current node.
    If it is reachable, Flannel is deployed successfully. If not, check the logs: journalctl -u flanneld
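
    A hedged example of that check (the remote address 172.17.21.1 is illustrative; use the docker0 address shown by ip addr on the other node):

    ip addr show docker0     # note the local docker0 subnet
    ip addr show flannel.1   # should be inside the same /16 as docker0
    ping -c 3 172.17.21.1    # docker0 address of another node (example value)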
    V. Deploying the master node
    The Kubernetes master runs the following components:
    kube-apiserver
    kube-scheduler
    kube-controller-manager
    kube-scheduler and kube-controller-manager can run in cluster mode: leader election picks one working process, while the other processes stay in standby.
    Extract the binaries and copy them to the master node. Download: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#v1131
    1.1. Generate certificates
    root@master:/data/ssl/etcd# mkdir /data/ssl/k8s
    root@master:/data/ssl/etcd# cd /data/ssl/k8s
    root@master:/data/ssl/k8s# vim k8s-ssl.sh

    # Create the Kubernetes CA certificate

    # cat ca-config.json
    cat > ca-config.json << EOF
    {
      "signing": {
        "default": {
          "expiry": "87600h"
        },
        "profiles": {
          "kubernetes": {
            "expiry": "87600h",
            "usages": [
              "signing",
              "key encipherment",
              "server auth",
              "client auth"
            ]
          }
        }
      }
    }
    EOF

    # cat ca-csr.json
    cat > ca-csr.json << EOF
    {
      "CN": "kubernetes",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "L": "Beijing",
          "ST": "Beijing",
          "O": "k8s",
          "OU": "System"
        }
      ]
    }
    EOF
    cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

    # Generate the API server certificate

    # cat server-csr.json
    cat > server-csr.json << EOF
    {
      "CN": "kubernetes",
      "hosts": [
        "10.0.0.1",
        "127.0.0.1",
        "10.167.130.201",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
      ],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "L": "BeiJing",
          "ST": "BeiJing",
          "O": "k8s",
          "OU": "System"
        }
      ]
    }
    EOF
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

    # cat kube-proxy-csr.json
    cat > kube-proxy-csr.json << EOF
    {
      "CN": "system:kube-proxy",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "L": "BeiJing",
          "ST": "BeiJing",
          "O": "k8s",
          "OU": "System"
        }
      ]
    }
    EOF
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

    root@master:/data/ssl/k8s# sh k8s-ssl.sh
    2019/01/07 13:04:08 [INFO] generating a new CA key and certificate from CSR
    2019/01/07 13:04:08 [INFO] generate received request
    2019/01/07 13:04:08 [INFO] received CSR
    2019/01/07 13:04:08 [INFO] generating key: rsa-2048
    2019/01/07 13:04:08 [INFO] encoded CSR
    2019/01/07 13:04:08 [INFO] signed certificate with serial number 280137316070160036060323434482450551743079189667
    2019/01/07 13:04:08 [INFO] generate received request
    2019/01/07 13:04:08 [INFO] received CSR
    2019/01/07 13:04:08 [INFO] generating key: rsa-2048
    2019/01/07 13:04:08 [INFO] encoded CSR
    2019/01/07 13:04:08 [INFO] signed certificate with serial number 657136373340728359484392264363451731601962132138
    2019/01/07 13:04:08 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
    websites. For more information see the Baseline Requirements for the Issuance and Management
    of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
    specifically, section 10.2.3 ("Information Requirements").
    2019/01/07 13:04:08 [INFO] generate received request
    2019/01/07 13:04:08 [INFO] received CSR
    2019/01/07 13:04:08 [INFO] generating key: rsa-2048
    2019/01/07 13:04:09 [INFO] encoded CSR
    2019/01/07 13:04:09 [INFO] signed certificate with serial number 111867977348634171877412440227772356240123866110
    2019/01/07 13:04:09 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
    websites. For more information see the Baseline Requirements for the Issuance and Management
    of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
    specifically, section 10.2.3 ("Information Requirements").
    root@master:/data/ssl/k8s#
    root@master:/data/ssl/k8s# ls -lrt
    total 56
    -rw-r--r-- 1 root root 1891 Jan 7 13:04 k8s-ssl.sh
    -rw-r--r-- 1 root root 294 Jan 7 13:04 ca-config.json
    -rw-r--r-- 1 root root 264 Jan 7 13:04 ca-csr.json
    -rw-r--r-- 1 root root 1359 Jan 7 13:04 ca.pem
    -rw------- 1 root root 1675 Jan 7 13:04 ca-key.pem
    -rw-r--r-- 1 root root 1001 Jan 7 13:04 ca.csr
    -rw-r--r-- 1 root root 512 Jan 7 13:04 server-csr.json
    -rw-r--r-- 1 root root 1610 Jan 7 13:04 server.pem
    -rw------- 1 root root 1679 Jan 7 13:04 server-key.pem
    -rw-r--r-- 1 root root 1245 Jan 7 13:04 server.csr
    -rw-r--r-- 1 root root 230 Jan 7 13:04 kube-proxy-csr.json
    -rw-r--r-- 1 root root 1403 Jan 7 13:04 kube-proxy.pem
    -rw------- 1 root root 1675 Jan 7 13:04 kube-proxy-key.pem
    -rw-r--r-- 1 root root 1009 Jan 7 13:04 kube-proxy.csr
    root@master:/data/ssl/k8s#mkdir -p /opt/kubernetes/ssl/;cp /data/ssl/k8s/*pem /opt/kubernetes/ssl/
    1.2. Deploy the kube-apiserver component
    Download: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md
    root@master:/data/ssl/k8s# cd /data/src
    root@master:/data/src# ls -lrt
    total 428368
    -rw-rw-r-- 1 1001 1001 4298 Dec 24 2017 README.md
    -rw-r--r-- 1 root root 11353259 Jan 4 15:54 etcd-v3.3.10-linux-amd64.tar.gz
    drwxr-xr-x 3 6810230 users 96 Jan 4 15:55 etcd-v3.3.10-linux-amd64
    -rw-r--r-- 1 root root 9706487 Jan 4 18:22 flannel-v0.10.0-linux-amd64.tar.gz
    -rw-r--r-- 1 root root 417575573 Jan 7 13:11 kubernetes-server-linux-amd64.tar.gz
    Extract the binaries and copy them to the master node
    root@master:/data/src# tar xf kubernetes-server-linux-amd64.tar.gz
    root@master:/data/src# cd kubernetes/server/bin/
    root@master:/data/src/kubernetes/server/bin# cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin
    Create the TLS Bootstrapping token
    root@master:/data/src/kubernetes/server/bin# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
    da77c862672793987cb7ff70696d0bdc
    root@master:/data/src/kubernetes/server/bin# vim /opt/kubernetes/cfg/token.csv
    da77c862672793987cb7ff70696d0bdc,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
    Column 1: a random string; generate your own with head -c 16 /dev/urandom | od -An -t x | tr -d ' '
    Column 2: user name
    Column 3: UID
    Column 4: user group
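
    A hedged one-step sketch of generating the token and writing token.csv (mirroring the values used above; the same token must be reused later in kubeconfig.sh as BOOTSTRAP_TOKEN):

    BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
    echo "${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /opt/kubernetes/cfg/token.csv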
    Create the apiserver configuration file
    root@master:/data/src/kubernetes/server/bin# vim /opt/kubernetes/cfg/kube-apiserver
    KUBE_APISERVER_OPTS="--logtostderr=true \
    --v=4 \
    --etcd-servers=https://10.167.130.201:2379,https://10.167.130.202:2379,https://10.167.130.207:2379 \
    --bind-address=10.167.130.201 \
    --secure-port=6443 \
    --advertise-address=10.167.130.201 \
    --allow-privileged=true \
    --service-cluster-ip-range=10.0.0.0/24 \
    --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
    --authorization-mode=RBAC,Node \
    --enable-bootstrap-token-auth \
    --token-auth-file=/opt/kubernetes/cfg/token.csv \
    --service-node-port-range=30000-50000 \
    --tls-cert-file=/opt/kubernetes/ssl/server.pem \
    --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
    --client-ca-file=/opt/kubernetes/ssl/ca.pem \
    --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
    --etcd-cafile=/opt/etcd/ssl/ca.pem \
    --etcd-certfile=/opt/etcd/ssl/server.pem \
    --etcd-keyfile=/opt/etcd/ssl/server-key.pem"

  • --logtostderr enable logging to stderr
  • --v log verbosity
  • --etcd-servers etcd cluster endpoints
  • --bind-address listen address
  • --secure-port https secure port
  • --advertise-address address advertised to the cluster
  • --allow-privileged allow privileged containers
  • --service-cluster-ip-range Service virtual IP range
  • --enable-admission-plugins admission control plugins
  • --authorization-mode authorization modes; enables RBAC authorization and Node self-management
  • --enable-bootstrap-token-auth enable the TLS bootstrap mechanism, covered later
  • --token-auth-file token file
  • --service-node-port-range port range allocated to NodePort Services
    Create the kube-apiserver systemd unit file
    root@master:/data/src/kubernetes/server/bin# vim /usr/lib/systemd/system/kube-apiserver.service
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes

    [Service]
    EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
    ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    Start the service
    root@master:/data/src/kubernetes/server/bin# systemctl daemon-reload
    root@master:/data/src/kubernetes/server/bin# systemctl enable kube-apiserver
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
    root@master:/data/src/kubernetes/server/bin# systemctl restart kube-apiserver
    Check whether the apiserver is running
    root@master:/data/src/kubernetes/server/bin# ps -ef |grep kube-apiserver
    root 23220 1 51 13:22 ? 00:00:11 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://10.167.130.201:2379,https://10.167.130.202:2379,https://10.167.130.207:2379 --bind-address=10.167.130.201 --secure-port=6443 --advertise-address=10.167.130.201 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
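
    As an additional hedged check: in Kubernetes 1.13 the apiserver still serves an insecure local port (127.0.0.1:8080 by default), which is also what kube-scheduler and kube-controller-manager below connect to:

    curl http://127.0.0.1:8080/healthz   # should print: ok
    curl http://127.0.0.1:8080/version   # prints the apiserver build information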
    1.3. Deploy kube-scheduler
    Create the kube-scheduler configuration file
    root@master:/data/src/kubernetes/server/bin# vim /opt/kubernetes/cfg/kube-scheduler
    KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
    --address: serves http /metrics requests on 127.0.0.1:10251; kube-scheduler does not yet accept https requests;
    --kubeconfig: path to the kubeconfig file kube-scheduler uses to connect to and authenticate against kube-apiserver;
    --leader-elect=true: cluster mode with leader election enabled; the instance elected as leader does the work while the others stay in standby;

  • --master connect to the local apiserver
  • --leader-elect when several instances of this component run, elect a leader automatically (HA)
    Create the kube-scheduler systemd unit file
    root@master:/data/src/kubernetes/server/bin# vim /usr/lib/systemd/system/kube-scheduler.service
    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/kubernetes/kubernetes

    [Service]
    EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
    ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    Start the service
    root@master:/data/src/kubernetes/server/bin# systemctl daemon-reload
    root@master:/data/src/kubernetes/server/bin# systemctl enable kube-scheduler.service
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
    root@master:/data/src/kubernetes/server/bin# systemctl restart kube-scheduler.service
    root@master:/data/src/kubernetes/server/bin#
    Check whether kube-scheduler is running
    root@master:/data/src/kubernetes/server/bin# ps -ef |grep kube-scheduler
    root 23738 1 2 13:29 ? 00:00:01 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
    root@master:/data/src/kubernetes/server/bin# systemctl status kube-scheduler.service
    ● kube-scheduler.service - Kubernetes Scheduler
    Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
    Active: active (running) since Mon 2019-01-07 13:29:11 CST; 1min 10s ago
    Docs: https://github.com/kubernetes/kubernetes
    Main PID: 23738 (kube-scheduler)
    Tasks: 7
    Memory: 43.7M
    CGroup: /system.slice/kube-scheduler.service

         └─23738 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
    

    Jan 07 13:29:13 master kube-scheduler[23738]: I0107 13:29:13.280986 23738 shared_informer.go:123] caches populated
    Jan 07 13:29:13 master kube-scheduler[23738]: I0107 13:29:13.381096 23738 shared_informer.go:123] caches populated
    Jan 07 13:29:13 master kube-scheduler[23738]: I0107 13:29:13.481241 23738 shared_informer.go:123] caches populated
    Jan 07 13:29:13 master kube-scheduler[23738]: I0107 13:29:13.581408 23738 shared_informer.go:123] caches populated
    Jan 07 13:29:13 master kube-scheduler[23738]: I0107 13:29:13.581485 23738 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
    Jan 07 13:29:13 master kube-scheduler[23738]: I0107 13:29:13.681617 23738 shared_informer.go:123] caches populated
    Jan 07 13:29:13 master kube-scheduler[23738]: I0107 13:29:13.681679 23738 controller_utils.go:1034] Caches are synced for scheduler controller
    Jan 07 13:29:13 master kube-scheduler[23738]: I0107 13:29:13.681717 23738 leaderelection.go:205] attempting to acquire leader lease kube-system/kube-scheduler...
    Jan 07 13:29:13 master kube-scheduler[23738]: I0107 13:29:13.722706 23738 leaderelection.go:214] successfully acquired lease kube-system/kube-scheduler
    Jan 07 13:29:13 master kube-scheduler[23738]: I0107 13:29:13.823479 23738 shared_informer.go:123] caches populated
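
    The log line "successfully acquired lease kube-system/kube-scheduler" above is leader election at work. A hedged way to see which instance currently holds the lease is to look at the leader annotation on the corresponding endpoints object:

    /opt/kubernetes/bin/kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep holderIdentity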
    1.4. Deploy kube-controller-manager
    Create the kube-controller-manager configuration file
    root@master:/data/src/kubernetes/server/bin# vim /opt/kubernetes/cfg/kube-controller-manager
    KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
    --v=4 \
    --master=127.0.0.1:8080 \
    --leader-elect=true \
    --address=127.0.0.1 \
    --service-cluster-ip-range=10.0.0.0/24 \
    --cluster-name=kubernetes \
    --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
    --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
    --root-ca-file=/opt/kubernetes/ssl/ca.pem \
    --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"
    Create the kube-controller-manager systemd unit file
    root@master:/data/src/kubernetes/server/bin# vim /usr/lib/systemd/system/kube-controller-manager.service
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/kubernetes/kubernetes

    [Service]
    EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
    ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    Start the service
    root@master:/data/src/kubernetes/server/bin# systemctl daemon-reload
    root@master:/data/src/kubernetes/server/bin# systemctl enable kube-controller-manager
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
    root@master:/data/src/kubernetes/server/bin# systemctl restart kube-controller-manager
    Check whether kube-controller-manager is running
    root@master:/data/src/kubernetes/server/bin# systemctl status kube-controller-manager
    ● kube-controller-manager.service - Kubernetes Controller Manager
    Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
    Active: active (running) since Mon 2019-01-07 13:34:33 CST; 22s ago
    Docs: https://github.com/kubernetes/kubernetes
    Main PID: 24178 (kube-controller)
    Tasks: 6
    Memory: 84.7M
    CGroup: /system.slice/kube-controller-manager.service

         └─24178 /opt/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.0.0.0/...
    

    Jan 07 13:34:45 master kube-controller-manager[24178]: I0107 13:34:45.496067 24178 cronjob_controller.go:111] Found 0 jobs
    Jan 07 13:34:45 master kube-controller-manager[24178]: I0107 13:34:45.500343 24178 cronjob_controller.go:119] Found 0 cronjobs
    Jan 07 13:34:45 master kube-controller-manager[24178]: I0107 13:34:45.500367 24178 cronjob_controller.go:122] Found 0 groups
    Jan 07 13:34:50 master kube-controller-manager[24178]: I0107 13:34:50.487529 24178 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
    Jan 07 13:34:50 master kube-controller-manager[24178]: I0107 13:34:50.599078 24178 pv_controller_base.go:408] resyncing PV controller
    Jan 07 13:34:55 master kube-controller-manager[24178]: I0107 13:34:55.504427 24178 cronjob_controller.go:111] Found 0 jobs
    Jan 07 13:34:55 master kube-controller-manager[24178]: I0107 13:34:55.508401 24178 cronjob_controller.go:119] Found 0 cronjobs
    Jan 07 13:34:55 master kube-controller-manager[24178]: I0107 13:34:55.508438 24178 cronjob_controller.go:122] Found 0 groups
    Jan 07 13:34:55 master kube-controller-manager[24178]: I0107 13:34:55.558927 24178 gc_controller.go:144] GC'ing orphaned
    Jan 07 13:34:55 master kube-controller-manager[24178]: I0107 13:34:55.563519 24178 gc_controller.go:173] GC'ing unscheduled pods which are terminating.
    root@master:/data/src/kubernetes/server/bin# ps -ef |grep kube-controller-manager
    root 24178 1 4 13:34 ? 00:00:02 /opt/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.0.0.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem --root-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem
    Add the binary directory /opt/kubernetes/bin to the PATH variable
    root@master:/data/src/kubernetes/server/bin# vim /etc/profile
    PATH=/opt/kubernetes/bin:$PATH:$HOME/bin
    root@master:/data/src/kubernetes/server/bin# source /etc/profile
    Check the master cluster status
    root@master:/data/src/kubernetes/server/bin# kubectl get cs,nodes
    NAME                                 STATUS    MESSAGE             ERROR
    componentstatus/controller-manager   Healthy   ok
    componentstatus/scheduler            Healthy   ok
    componentstatus/etcd-2               Healthy   {"health":"true"}
    componentstatus/etcd-1               Healthy   {"health":"true"}
    componentstatus/etcd-0               Healthy   {"health":"true"}
    VI. Deploying the node components
    1.1. Overview
    With TLS authentication enabled on the master apiserver, a node's kubelet can only join the cluster and talk to the apiserver with a valid certificate signed by the CA. Signing certificates by hand becomes tedious when there are many nodes, which is why the TLS Bootstrapping mechanism exists: the kubelet automatically requests a certificate from the apiserver as a low-privileged user, and the kubelet's certificate is signed dynamically by the apiserver.
    1.2. The Kubernetes worker nodes run the following components:
    docker (already deployed above)
    kubelet
    kube-proxy
    1.3. Deploy the kubelet component (run on the master node)
    kubelet runs on every worker node; it receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run and logs;
    on startup kubelet automatically registers the node with kube-apiserver, and its built-in cadvisor collects and monitors the node's resource usage;
    for security, this document only opens the authenticated and authorized https port and rejects unauthorized access (for example unauthenticated calls from apiserver or heapster).
    Bind the kubelet-bootstrap user to the system cluster role
    root@master:/data/src/kubernetes/server/bin# /opt/kubernetes/bin/kubectl create clusterrolebinding kubelet-bootstrap \
    --clusterrole=system:node-bootstrapper \
    --user=kubelet-bootstrap
    clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
    root@master:/data/src/kubernetes/server/bin# cd /data/ssl/k8s/
    Create the kubeconfig files
    Run the following in the directory where the Kubernetes certificates were generated to create the kubeconfig files:

    root@master:/data/ssl/k8s# vim kubeconfig.sh

    # Create the kubelet bootstrapping kubeconfig
    # Note: update the token and the IP for your environment
    BOOTSTRAP_TOKEN=da77c862672793987cb7ff70696d0bdc
    KUBE_APISERVER="https://10.167.130.201:6443"

    # Set cluster parameters
    kubectl config set-cluster kubernetes \
    --certificate-authority=./ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=bootstrap.kubeconfig

    # Set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
    --token=${BOOTSTRAP_TOKEN} \
    --kubeconfig=bootstrap.kubeconfig

    # Set context parameters
    kubectl config set-context default \
    --cluster=kubernetes \
    --user=kubelet-bootstrap \
    --kubeconfig=bootstrap.kubeconfig

    # Use the default context
    kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

    #----------------------

    # Create the kube-proxy kubeconfig file
    kubectl config set-cluster kubernetes \
    --certificate-authority=./ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=kube-proxy.kubeconfig

    kubectl config set-credentials kube-proxy \
    --client-certificate=./kube-proxy.pem \
    --client-key=./kube-proxy-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-proxy.kubeconfig

    kubectl config set-context default \
    --cluster=kubernetes \
    --user=kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig

    kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
    root@master:/data/ssl/k8s# sh kubeconfig.sh
    Cluster "kubernetes" set.
    User "kubelet-bootstrap" set.
    Context "default" created.
    Switched to context "default".
    Cluster "kubernetes" set.
    User "kube-proxy" set.
    Context "default" created.
    Switched to context "default".
    root@master:/data/ssl/k8s# ls bootstrap.kubeconfig kube-proxy.kubeconfig
    bootstrap.kubeconfig kube-proxy.kubeconfig
    root@master:/data/ssl/k8s# cp bootstrap.kubeconfig kube-proxy.kubeconfig /opt/kubernetes/cfg/
    root@master:/data/ssl/k8s# scp bootstrap.kubeconfig kube-proxy.kubeconfig node01:/opt/kubernetes/cfg/
    root@master:/data/ssl/k8s# scp bootstrap.kubeconfig kube-proxy.kubeconfig node02:/opt/kubernetes/cfg/
    Copy the kubelet and kube-proxy binaries to the node machines
    root@master:/data/ssl/k8s# cd /data/src/kubernetes/server/bin
    root@master:/data/src/kubernetes/server/bin# cp kubelet kube-proxy /opt/kubernetes/bin/
    root@master:/data/src/kubernetes/server/bin# scp kubelet kube-proxy root@node01:/opt/kubernetes/bin/
    root@master:/data/src/kubernetes/server/bin# scp kubelet kube-proxy root@node02:/opt/kubernetes/bin/
    Create the kubelet configuration file, copy it to the node machines, and change hostname-override=10.167.130.201 to each machine's own IP
    root@master:/data/src/kubernetes/server/bin# vim /opt/kubernetes/cfg/kubelet
    KUBELET_OPTS="--logtostderr=true \
    --v=4 \
    --hostname-override=10.167.130.201 \
    --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
    --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
    --config=/opt/kubernetes/cfg/kubelet.config \
    --cert-dir=/opt/kubernetes/ssl \
    --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

  • --hostname-override the hostname shown in the cluster
  • --kubeconfig path of the kubeconfig file; it is generated automatically
  • --bootstrap-kubeconfig the bootstrap.kubeconfig file generated above
  • --cert-dir directory where the issued certificates are stored
  • --pod-infra-container-image image used for the Pod infrastructure (pause) container
    Copy to the other node machines
    root@master:/data/src/kubernetes/server/bin# scp /opt/kubernetes/cfg/kubelet node01:/opt/kubernetes/cfg/kubelet
    root@master:/data/src/kubernetes/server/bin# scp /opt/kubernetes/cfg/kubelet node02:/opt/kubernetes/cfg/kubelet
    root@master:/data/src/kubernetes/server/bin# ssh node01 "sed -i 's/hostname\-override\=10.167.130.201/hostname\-override\=10.167.130.202/g' /opt/kubernetes/cfg/kubelet"
    root@master:/data/src/kubernetes/server/bin# ssh node02 "sed -i 's/hostname\-override\=10.167.130.201/hostname\-override\=10.167.130.207/g' /opt/kubernetes/cfg/kubelet"
    Create the kubelet parameter configuration file, copy it to all node machines, and change address: 10.167.130.201 to each machine's own IP
    root@master:/data/src/kubernetes/server/bin# vim /opt/kubernetes/cfg/kubelet.config
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    address: 10.167.130.201
    port: 10250
    readOnlyPort: 10255
    cgroupDriver: cgroupfs
    clusterDNS: ["10.0.0.2"]
    clusterDomain: cluster.local.
    failSwapOn: false
    authentication:
      anonymous:
        enabled: true
    Copy to the other node machines
    root@master:/data/src/kubernetes/server/bin# scp /opt/kubernetes/cfg/kubelet.config node01:/opt/kubernetes/cfg/kubelet.config
    root@master:/data/src/kubernetes/server/bin# scp /opt/kubernetes/cfg/kubelet.config node02:/opt/kubernetes/cfg/kubelet.config
    root@master:/data/src/kubernetes/server/bin# ssh node01 "sed -i 's/address\: 10.167.130.201/address\: 10.167.130.202/g' /opt/kubernetes/cfg/kubelet.config"
    root@master:/data/src/kubernetes/server/bin# ssh node02 "sed -i 's/address\: 10.167.130.201/address\: 10.167.130.207/g' /opt/kubernetes/cfg/kubelet.config"
    Create the kubelet systemd unit file
    root@master:/data/src/kubernetes/server/bin# vim /usr/lib/systemd/system/kubelet.service
    [Unit]
    Description=Kubernetes Kubelet
    After=docker.service
    Requires=docker.service

    [Service]
    EnvironmentFile=/opt/kubernetes/cfg/kubelet
    ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
    Restart=on-failure
    KillMode=process

    [Install]
    WantedBy=multi-user.target
    Copy to the other node machines
    root@master:/data/src/kubernetes/server/bin# scp /usr/lib/systemd/system/kubelet.service node01:/usr/lib/systemd/system/kubelet.service
    root@master:/data/src/kubernetes/server/bin# scp /usr/lib/systemd/system/kubelet.service node02:/usr/lib/systemd/system/kubelet.service
    Start the service
    root@master:/data/src/kubernetes/server/bin# systemctl daemon-reload
    root@master:/data/src/kubernetes/server/bin# systemctl enable kubelet
    Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
    root@master:/data/src/kubernetes/server/bin# systemctl restart kubelet
    root@master:/data/src/kubernetes/server/bin# ssh node01 "systemctl daemon-reload;systemctl enable kubelet;systemctl restart kubelet"
    Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
    root@master:/data/src/kubernetes/server/bin# ssh node02 "systemctl daemon-reload;systemctl enable kubelet;systemctl restart kubelet"
    Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
    Approve the kubelet CSR requests
    CSR requests can be approved manually or automatically. The automatic approach is recommended, because starting with v1.8 the certificates generated after a CSR is approved can be rotated automatically.
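
    A hedged sketch of the automatic approach (not used in the rest of this guide): bind the CSR-approval cluster roles that ship with Kubernetes 1.13 so that bootstrap requests and node renewals are approved without manual intervention; adjust the user/group to your own bootstrap setup.

    # Auto-approve the initial CSRs submitted by the kubelet-bootstrap user
    kubectl create clusterrolebinding auto-approve-csrs-for-bootstrap \
      --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
      --user=kubelet-bootstrap
    # Auto-approve renewal CSRs submitted by registered nodes (group system:nodes)
    kubectl create clusterrolebinding auto-approve-renewals-for-nodes \
      --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient \
      --group=system:nodes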
    Manually approve the CSR requests
    List the CSRs:
    root@master:/data/src/kubernetes/server/bin# kubectl get csr
    NAME                                                   AGE     REQUESTOR           CONDITION
    node-csr-F1jsV3FhVnfVBURk2pDZQA1nDKR0_k0mEsxdTHHIv0s   9m2s    kubelet-bootstrap   Pending
    node-csr-OdCmMky4L_1CxpheseiKBfbnyTinOqWtDmx_BzKuBA8   81s     kubelet-bootstrap   Pending
    node-csr-m7No1oXpLFabQ47SdJVu9CNSx2YpEE8D04I-anZRWj8   9m13s   kubelet-bootstrap   Pending
    root@master:/data/src/kubernetes/server/bin# kubectl certificate approve node-csr-F1jsV3FhVnfVBURk2pDZQA1nDKR0_k0mEsxdTHHIv0s
    certificatesigningrequest.certificates.k8s.io/node-csr-F1jsV3FhVnfVBURk2pDZQA1nDKR0_k0mEsxdTHHIv0s approved
    root@master:/data/src/kubernetes/server/bin# kubectl certificate approve node-csr-OdCmMky4L_1CxpheseiKBfbnyTinOqWtDmx_BzKuBA8
    certificatesigningrequest.certificates.k8s.io/node-csr-OdCmMky4L_1CxpheseiKBfbnyTinOqWtDmx_BzKuBA8 approved
    root@master:/data/src/kubernetes/server/bin# kubectl certificate approve node-csr-m7No1oXpLFabQ47SdJVu9CNSx2YpEE8D04I-anZRWj8
    certificatesigningrequest.certificates.k8s.io/node-csr-m7No1oXpLFabQ47SdJVu9CNSx2YpEE8D04I-anZRWj8 approved
    root@master:/data/src/kubernetes/server/bin# kubectl get csr
    NAME                                                   AGE     REQUESTOR           CONDITION
    node-csr-F1jsV3FhVnfVBURk2pDZQA1nDKR0_k0mEsxdTHHIv0s   12m     kubelet-bootstrap   Approved,Issued
    node-csr-OdCmMky4L_1CxpheseiKBfbnyTinOqWtDmx_BzKuBA8   4m29s   kubelet-bootstrap   Approved,Issued
    node-csr-m7No1oXpLFabQ47SdJVu9CNSx2YpEE8D04I-anZRWj8   12m     kubelet-bootstrap   Approved,Issued
    Requesting User: the user that submitted the CSR; kube-apiserver authenticates and authorizes it;
    Subject: the certificate information being requested;
    the certificate CN is of the form system:node:<node-name> and its Organization is system:nodes; kube-apiserver's Node authorization mode grants the corresponding permissions based on this identity;
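
    A hedged way to inspect these fields for a given request:

    kubectl describe csr node-csr-F1jsV3FhVnfVBURk2pDZQA1nDKR0_k0mEsxdTHHIv0s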
    Check cluster status
    root@master:/data/src/kubernetes/server/bin# kubectl get node
    NAME             STATUS   ROLES    AGE    VERSION
    10.167.130.201   Ready    <none>   106s   v1.13.1
    10.167.130.202   Ready    <none>   96s    v1.13.1
    10.167.130.207   Ready    <none>   118s   v1.13.1
    Deploy the kube-proxy component
    kube-proxy runs on all node machines; it watches the apiserver for changes to Services and Endpoints and creates routing rules to load-balance service traffic.
    Create the kube-proxy configuration file, copy it to the node machines, and change hostname-override=10.167.130.201 to each machine's own IP
    root@master:/data/src/kubernetes/server/bin# vim /opt/kubernetes/cfg/kube-proxy
    KUBE_PROXY_OPTS="--logtostderr=true \
    --v=4 \
    --hostname-override=10.167.130.201 \
    --cluster-cidr=10.0.0.0/24 \
    --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
    Copy to the other node machines
    root@master:/data/src/kubernetes/server/bin# scp /opt/kubernetes/cfg/kube-proxy node01:/opt/kubernetes/cfg/kube-proxy
    root@master:/data/src/kubernetes/server/bin# scp /opt/kubernetes/cfg/kube-proxy node02:/opt/kubernetes/cfg/kube-proxy
    root@master:/data/src/kubernetes/server/bin# ssh node01 "sed -i 's/hostname\-override\=10.167.130.201/hostname\-override\=10.167.130.202/g' /opt/kubernetes/cfg/kube-proxy"
    root@master:/data/src/kubernetes/server/bin# ssh node02 "sed -i 's/hostname\-override\=10.167.130.201/hostname\-override\=10.167.130.207/g' /opt/kubernetes/cfg/kube-proxy"
    Create the kube-proxy systemd unit file
    root@master:/data/src/kubernetes/server/bin# vim /usr/lib/systemd/system/kube-proxy.service
    [Unit]
    Description=Kubernetes Proxy
    After=network.target

    [Service]
    EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
    ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    Copy to the other node machines
    root@master:/data/src/kubernetes/server/bin# scp /usr/lib/systemd/system/kube-proxy.service node01:/usr/lib/systemd/system/kube-proxy.service
    root@master:/data/src/kubernetes/server/bin# scp /usr/lib/systemd/system/kube-proxy.service node02:/usr/lib/systemd/system/kube-proxy.service
    Start the service
    root@master:/data/src/kubernetes/server/bin# systemctl daemon-reload
    root@master:/data/src/kubernetes/server/bin# systemctl enable kube-proxy
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
    root@master:/data/src/kubernetes/server/bin# systemctl restart kube-proxy
    root@master:/data/src/kubernetes/server/bin# ssh node01 "systemctl daemon-reload;systemctl enable kube-proxy;systemctl restart kube-proxy"
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
    root@master:/data/src/kubernetes/server/bin# ssh node02 "systemctl daemon-reload;systemctl enable kube-proxy;systemctl restart kube-proxy"
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

    Label the node or master nodes (optional)
    kubectl label node 10.167.130.201 node-role.kubernetes.io/master='master'
    kubectl label node 10.167.130.202 node-role.kubernetes.io/node='node'
    kubectl label node 10.167.130.207 node-role.kubernetes.io/node='node'
    Verify
    root@master:/data/src/kubernetes/server/bin# kubectl get componentstatus
    NAME                 STATUS    MESSAGE             ERROR
    etcd-1               Healthy   {"health":"true"}
    etcd-2               Healthy   {"health":"true"}
    scheduler            Healthy   ok
    controller-manager   Healthy   ok
    etcd-0               Healthy   {"health":"true"}
    root@master:/data/src/kubernetes/server/bin# kubectl get node
    NAME             STATUS   ROLES    AGE    VERSION
    10.167.130.201   Ready    <none>   104m   v1.13.1
    10.167.130.202   Ready    <none>   103m   v1.13.1
    10.167.130.207   Ready    <none>   104m   v1.13.1
    Run a test example
    root@master:/data/src/kubernetes/server/bin# kubectl run nginx --image=nginx --replicas=3
    kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
    deployment.apps/nginx created
    root@master:/data/src/kubernetes/server/bin# kubectl get pods -o wide
    NAME                     READY   STATUS              RESTARTS   AGE   IP       NODE             NOMINATED NODE   READINESS GATES
    nginx-7cdbd8cdc9-9572f   0/1     ContainerCreating   0          11s   <none>   10.167.130.202   <none>           <none>
    nginx-7cdbd8cdc9-dxjp4   0/1     ContainerCreating   0          11s   <none>   10.167.130.207   <none>           <none>
    nginx-7cdbd8cdc9-fcqpd   0/1     ContainerCreating   0          11s   <none>   10.167.130.201   <none>           <none>
    root@master:/data/src/kubernetes/server/bin# kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
    service/nginx exposed
    root@master:/data/src/kubernetes/server/bin# kubectl get svc nginx
    NAME    TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
    nginx   NodePort   10.0.0.178   <none>        88:36040/TCP   11s
    root@master:/data/src/kubernetes/server/bin# kubectl get pods -o wide
    NAME                     READY   STATUS    RESTARTS   AGE     IP             NODE             NOMINATED NODE   READINESS GATES
    nginx-7cdbd8cdc9-9572f   1/1     Running   0          3m34s   172.17.81.2    10.167.130.202   <none>           <none>
    nginx-7cdbd8cdc9-dxjp4   1/1     Running   0          3m34s   172.17.21.2    10.167.130.207   <none>           <none>
    nginx-7cdbd8cdc9-fcqpd   1/1     Running   0          3m34s   172.17.100.2   10.167.130.201   <none>           <none>
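
    As a final hedged check, the service should now be reachable on the assigned NodePort (36040 in the output above) via any node IP, and on port 88 via its cluster IP from any cluster node:

    curl -I http://10.167.130.201:36040   # via the NodePort on any node
    curl -I http://10.0.0.178:88          # via the cluster IP (from a cluster node)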
