#云原生征文# Installing a Multi-Master Kubernetes Cluster (v1.20.7) from Binaries

Published 2022-05-23 17:57

Deploying a Highly Available Kubernetes Cluster from Binaries - v1.20.7

This guide installs a multi-master, highly available Kubernetes cluster (v1.20.7) from binaries, using keepalived + nginx to make the kube-apiserver highly available.

1. Deployment Plan:

  • Pod CIDR: 10.0.0.0/16
  • Service CIDR: 10.255.0.0/16
  • OS: CentOS 7.6
  • Specs: 4 GB RAM / 6 vCPUs / 100 GB disk
  • Networking: static IPs
Role     IP address     Installed components
master01 172.27.11.223  apiserver, controller-manager, scheduler, etcd, docker, keepalived, nginx
master02 172.27.11.145  apiserver, controller-manager, scheduler, etcd, docker, keepalived, nginx
master03 172.27.11.217  apiserver, controller-manager, scheduler, etcd, docker, keepalived, nginx
work01   172.27.11.106  kubelet, kube-proxy, docker, calico, coredns
work02   172.27.11.128  kubelet, kube-proxy, docker, calico
work03   172.27.11.147  kubelet, kube-proxy, docker, calico
VIP      172.27.11.2    /

2. High-Availability Architecture:

[Figure: high-availability architecture diagram]

  1. The active/standby HA model:
Core component     HA mode         Implemented via
apiserver          active/standby  keepalived
controller-manager active/standby  leader election
scheduler          active/standby  leader election
etcd               cluster         /
  • apiserver: made highly available with nginx + keepalived; when the active node fails, keepalived moves the VIP to a standby node.
  • controller-manager: Kubernetes elects a leader internally (controlled by the --leader-elect flag, default true); only one controller-manager instance is active in the cluster at any moment.
  • scheduler: likewise elected internally (--leader-elect, default true); only one scheduler instance is active in the cluster at any moment.
  • etcd: achieves high availability by forming a cluster on its own; deploy an odd number of nodes. As long as more than half of the members remain available, the cluster keeps working with almost no impact (a 3-node cluster tolerates one machine failure).
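The majority arithmetic behind the odd-node recommendation can be sketched in plain shell (an illustration only, not a deployment step):

```shell
# An n-member etcd cluster needs a write quorum of n/2 + 1 members,
# so it tolerates n - quorum member failures.
for n in 1 3 5 7; do
    quorum=$(( n / 2 + 1 ))
    echo "$n members: quorum=$quorum, tolerates $(( n - quorum )) failure(s)"
done
```

This is also why 4 members tolerate no more failures than 3: quorum rises to 3 while the failure budget stays at 1.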

3. kubeadm vs. Binary Installation - Which to Use:

  • kubeadm: kubeadm is the official open-source tool for standing up a Kubernetes cluster quickly, and it is currently the convenient, recommended approach. The kubeadm init and kubeadm join commands create a cluster in short order. With kubeadm, every control-plane component runs as a pod and can recover from failures on its own. kubeadm is essentially scripted, automated deployment: it simplifies operations, but the automation hides many details, so you gain little feel for the individual modules, and without a deep understanding of the Kubernetes architecture, problems become hard to troubleshoot. kubeadm suits teams that deploy Kubernetes frequently or need a high degree of automation.

  • Binary: download each component's binary package from the official site and install it by hand; installing manually also gives you a fuller understanding of Kubernetes.

    Both kubeadm and binary installation are suitable for production and run stably there; which to choose can be evaluated against the needs of the actual project.

4. Initializing the Cluster:

4.1 Configure static IPs and hostnames according to the plan

4.2 Configure the hosts file:

  • Run on every node in the cluster (masters and workers); add the following entries:
[root@master01 ~]# vim /etc/hosts
172.27.11.223 master01
172.27.11.145 master02
172.27.11.217 master03
172.27.11.106 work01
172.27.11.128 work02
172.27.11.147 work03

4.3 Configure passwordless SSH between hosts:

  • Run on every node in the cluster (masters and workers):
#Generate an ssh key pair:
[root@master01 ~]# ssh-keygen -t rsa  #press Enter at every prompt; set no passphrase
#Install the local public key into the corresponding account on each remote host:
[root@master01 ~]# ssh-copy-id -i .ssh/id_rsa.pub master02
[root@master01 ~]# ssh-copy-id -i .ssh/id_rsa.pub master03
[root@master01 ~]# ssh-copy-id -i .ssh/id_rsa.pub work01
[root@master01 ~]# ssh-copy-id -i .ssh/id_rsa.pub work02
[root@master01 ~]# ssh-copy-id -i .ssh/id_rsa.pub work03
#The other nodes follow the same steps as master01; only master01 is shown here.

4.4 Disable firewalld:

  • Run on every node in the cluster (masters and workers):
[root@master01 ~]# systemctl stop firewalld && systemctl disable firewalld

4.5 Disable SELinux:

  • Run on every node in the cluster (masters and workers):
#After changing the SELinux config file, reboot the machine for the change to become permanent:
[root@master01 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
#After the reboot, log back in and verify the change took effect:
[root@master01 ~]# getenforce
Disabled
#"Disabled" means SELinux is now off

4.6 Disable the swap partition:

  • Run on every node in the cluster (masters and workers):
#Disable swap immediately
[root@master01 ~]# swapoff -a
#Disable it permanently: comment out the swap mount by prefixing its line with #
[root@master01 ~]# vim /etc/fstab
#/dev/mapper/centos-swap swap      swap    defaults        0 0
#On a cloned VM, also delete the UUID
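The fstab edit can also be done non-interactively with sed instead of vim. A sketch, demonstrated on a sample copy (on a real node, point it at /etc/fstab itself; sed -i.bak keeps a backup):

```shell
# Build a sample fstab, then comment out any uncommented line that
# contains "swap" as a whitespace-delimited field.
cat > /tmp/fstab.sample <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
sed -i.bak -E 's|^([^#].*[[:space:]]swap[[:space:]].*)$|#\1|' /tmp/fstab.sample
grep swap /tmp/fstab.sample   # the swap line is now prefixed with '#'
```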

4.7 Tune kernel parameters:

  • Run on every node in the cluster (masters and workers):
#1 Adjust the Linux kernel parameters to enable bridge filtering and IP forwarding
#2 Load the bridge-filter module
[root@master01 ~]# modprobe br_netfilter
#3 Edit /etc/sysctl.d/k8s.conf and add the following:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
#4 Reload the configuration
[root@master01 ~]# sysctl -p /etc/sysctl.d/k8s.conf
#5 Verify the bridge-filter module is loaded
[root@master01 ~]# lsmod | grep br_netfilter
#6 Load the module at boot
[root@master01 ~]# echo "modprobe br_netfilter" >> /etc/profile
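Note that appending modprobe to /etc/profile only loads the module when someone opens a login shell. A sketch of a more robust alternative (systemd's standard modules-load.d mechanism; not part of the original article's steps):

```shell
#Have systemd-modules-load load br_netfilter at every boot,
#independent of any login shell:
tee /etc/modules-load.d/br_netfilter.conf <<'EOF'
br_netfilter
EOF
```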

4.8 Configure the yum repositories needed for the cluster:

  • Run on every node in the cluster (masters and workers):
#1 Required yum .repo files
---CentOS-Base.repo
---docker-ce.repo
---epel.repo
---kubernetes.repo

[root@master01 yum.repos.d]# ll
total 16
-rw-r--r--. 1 root root 2523 Mar 17 09:44 CentOS-Base.repo
-rw-r--r--. 1 root root 2081 Mar 22 21:43 docker-ce.repo
-rw-r--r--. 1 root root 1050 Mar 22 22:43 epel.repo
-rw-r--r--. 1 root root  133 Mar 22 22:23 kubernetes.repo

#2 Back up the existing yum repo files
[root@master01 ~]# mkdir /root/yum.bak
[root@master01 ~]# mv /etc/yum.repos.d/* /root/yum.bak
#3 Upload the prepared .repo files via scp into /etc/yum.repos.d on every cluster node
---download links for the .repo files are provided at the end of the article
#4 Rebuild the yum cache
[root@master01 ~]# yum makecache
#5 Install the base dependency packages
[root@master01 ~]# yum install -y wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack

4.9 Configure time synchronization:

  • Run on every node in the cluster (masters and workers):
#Install the ntpdate command
yum install ntpdate -y
#Sync with a public time source
ntpdate ntp.aliyun.com
#Turn the sync into an hourly cron job
crontab -e
0 */1 * * * /usr/sbin/ntpdate ntp.aliyun.com
#Restart the crond service
systemctl restart  crond 
#Set the timezone to China (Asia/Shanghai)
[root@master01 yum.repos.d]# timedatectl set-timezone Asia/Shanghai

4.10 Install iptables:

  • Run on every node in the cluster (masters and workers):
#Install iptables
[root@master01 ~]# yum install iptables.x86_64 -y
#Stop and disable iptables, and flush its rules
[root@master01 ~]# systemctl stop iptables && systemctl disable iptables && iptables -F

4.11 Install ipvs:

  • Run on every node in the cluster (masters and workers):
#1 Install ipset and ipvsadm
[root@master01 ~]# yum install ipvsadm.x86_64 ipset  -y
#2 Write the modules to load into a script file
[root@master01 ~]#  cat <<EOF> /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
#Note: on kernels 4.19 and later, nf_conntrack_ipv4 was merged into nf_conntrack; CentOS 7's 3.10 kernel still uses nf_conntrack_ipv4.
#3 Make the script executable, run it, and verify the modules loaded
[root@master01 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs

4.12 Install docker-ce:

  • Run on every node in the cluster (masters and workers):
[root@master01 modules]# yum install docker-ce docker-ce-cli containerd.io -y
[root@master01 modules]# systemctl start docker && systemctl enable docker.service && systemctl status docker

4.13 Configure Docker registry mirrors:

  • Run on every node in the cluster (masters and workers):
tee /etc/docker/daemon.json << 'EOF'
{
 "registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com", "https://rncxm540.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
} 
EOF
systemctl daemon-reload
systemctl restart docker
systemctl status docker
#Switch Docker's cgroup driver to systemd (the default is cgroupfs); the kubelet will be configured to use systemd, and the two must match.

4.14 Configure nginx and keepalived

  • Run on all master nodes:
#1 Install keepalived and nginx
[root@master01 ~]#  yum install nginx-mod-stream nginx keepalived -y
#2 On each of the three masters, edit the keepalived config file: change the role, priority, and NIC name, and leave everything else alone; the nginx config file needs no per-node changes
--- download links for nginx.conf, keepalived.conf, and the keepalived health-check script are provided at the end of the article
[root@master01 ~]# ll /etc/keepalived/
total 8
-rw-r--r-- 1 root root 134 Mar 22 23:16 check_nginx.sh
-rw-r--r-- 1 root root 986 Mar 23 22:51 keepalived.conf
[root@master01 ~]# ll /etc/nginx/nginx.conf
-rw-r--r-- 1 root root 1442 Mar 22 23:12 /etc/nginx/nginx.conf
#3 Start nginx on all three nodes first, then start the keepalived service
[root@master01 ~]# systemctl enable nginx && systemctl start nginx
[root@master01 ~]# systemctl enable keepalived  && systemctl start keepalived
#4 Check that the VIP has been created
[root@master01 work]# ip addr
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:ba:96:34 brd ff:ff:ff:ff:ff:ff
    inet 172.27.11.223/24 brd 172.27.11.255 scope global noprefixroute dynamic ens192
       valid_lft 33828sec preferred_lft 33828sec
    inet 172.27.11.2/24 scope global secondary ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::8593:35f1:7f14:c789/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
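The nginx.conf shipped via the download link is not reproduced in the article. For orientation, a minimal sketch of what such a config typically contains (assumed values, not the author's exact file) is a stream block, enabled by the nginx-mod-stream package installed above, that load-balances the three apiservers; port 16443 matches the VIP port the kubeconfig uses later:

```nginx
# Sketch of the stream section of /etc/nginx/nginx.conf (assumed, typical values)
stream {
    upstream kube-apiserver {
        server 172.27.11.223:6443;   # master01
        server 172.27.11.145:6443;   # master02
        server 172.27.11.217:6443;   # master03
    }
    server {
        listen 16443;                # clients reach the VIP on 16443
        proxy_pass kube-apiserver;
    }
}
```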

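Likewise, the keepalived health-check script (check_nginx.sh) is only provided as a download. A typical sketch of such a script (assumed, not the author's exact file) exits non-zero when nginx is down, so that the vrrp_script block in keepalived.conf lowers the node's priority and the VIP fails over:

```shell
#!/bin/bash
#Assumed sketch of /etc/keepalived/check_nginx.sh:
#exit 0 when nginx is running, non-zero when it is not;
#keepalived's vrrp_script uses the exit status to adjust priority.
if [ "$(ps -C nginx --no-header | wc -l)" -eq 0 ]; then
    exit 1
fi
exit 0
```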
4.15 Copy the installation packages needed to build the cluster

  • Run on every node in the cluster (masters and workers):
--- download links for the installation packages and YAML files are provided at the end of the article
[root@master01 work]# pwd
/data/work
[root@master01 work]# ll
total 653180
-rw-r--r-- 1 root root   1369600 Mar 15 09:54 busybox-1-28.tar.gz
-rw-r--r-- 1 root root     13581 Mar 15 09:54 calico.yaml
-rw-r--r-- 1 root root   6595195 Mar 15 10:03 cfssl-certinfo_linux-amd64
-rw-r--r-- 1 root root   2277873 Mar 15 10:06 cfssljson_linux-amd64
-rw-r--r-- 1 root root  10376657 Mar 15 10:06 cfssl_linux-amd64
-rw-r--r-- 1 root root  83932160 Mar 15 10:07 cni.tar.gz
-rw-r--r-- 1 root root      4371 Mar 15 09:54 coredns.yaml
-rw-r--r-- 1 root root  17373136 Mar 15 10:07 etcd-v3.4.13-linux-amd64.tar.gz
-rw-r--r-- 1 root root 317099618 Mar 15 11:04 kubernetes-server-linux-amd64.tar.gz
-rw-r--r-- 1 root root  73531392 Mar 15 10:08 node.tar.gz
-rw-r--r-- 1 root root  46055424 Mar 15 10:08 pause-cordns.tar.gz
-rw-r--r-- 1 root root       165 Mar 15 09:54 tomcat-service.yaml
-rw-r--r-- 1 root root 110192128 Mar 15 10:07 tomcat.tar.gz
-rw-r--r-- 1 root root       707 Mar 15 09:54 tomcat.yaml

5. Building the etcd Cluster:

5.1 Create the etcd working directory:

  • Run on all master nodes:
[root@master01 ~]# mkdir -p /etc/etcd/ssl

5.2 Install the certificate-signing tool cfssl:

  • Run on master01 only:
[root@master01 ~]# mkdir /data/work -p
[root@master01 ~]# cd /data/work/
[root@master01 work]# ls cfssl*
cfssl-certinfo_linux-amd64  cfssljson_linux-amd64  cfssl_linux-amd64
#Make the files executable and move them to /usr/local/bin under shorter names
[root@master01 work]# chmod +x cfssl-certinfo_linux-amd64  cfssljson_linux-amd64  cfssl_linux-amd64
[root@master01 work]# mv cfssl_linux-amd64 /usr/local/bin/cfssl && mv cfssljson_linux-amd64 /usr/local/bin/cfssljson && mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

5.3 Generate the CA root certificate (used later to sign certificates for etcd, the apiserver, and others):

  • Create the CA certificate signing request (CSR) file; run on master01 only:
[root@master01 work]# vim ca-csr.json 
{
  "CN": "kubernetes",
  "key": {
      "algo": "rsa",
      "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "k8s",
      "OU": "system"
    }
  ],
  "ca": {
          "expiry": "87600h"
  }
}
[root@master01 work]# cfssl gencert -initca ca-csr.json  | cfssljson -bare ca
[root@master01 work]# ll ca*
total 16
-rw-r--r-- 1 root root 1001 Mar 15 17:58 ca.csr           CA certificate signing request
-rw-r--r-- 1 root root  256 Mar 15 17:57 ca-csr.json      file used to generate the CSR
-rw------- 1 root root 1679 Mar 15 17:58 ca-key.pem       CA private key
-rw-r--r-- 1 root root 1359 Mar 15 17:58 ca.pem           CA certificate

#Notes:
#CN (Common Name): kube-apiserver extracts this field from the certificate as the request's user name; browsers use it to check whether a site is legitimate. For SSL certificates it is generally the site's domain name; for code-signing certificates, the applicant organization's name; for client certificates, the applicant's name.

#O (Organization): kube-apiserver extracts this field as the group the requesting user belongs to. For SSL certificates it is generally the site's domain name; for code-signing certificates, the applicant organization's name; for client organization certificates, the applicant's organization.

#L: city
#ST: state or province
#C: two-letter country code only, e.g. CN for China

5.4 Generate the CA signing configuration file:

  • Generate the CA signing configuration file; run on master01:
[root@master01 work]# vim ca-config.json
{
  "signing": {
      "default": {
          "expiry": "87600h"
        },
      "profiles": {
          "kubernetes": {
              "usages": [
                  "signing",
                  "key encipherment",
                  "server auth",
                  "client auth"
              ],
              "expiry": "87600h"
          }
      }
  }
}

5.5 Generate the etcd certificate

  • Create the etcd CSR file, replacing the hosts IPs with the IPs of your own etcd nodes; run on master01:
[root@master01 work]# vim etcd-csr.json 
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "172.27.11.223",
    "172.27.11.145",
    "172.27.11.217",
    "172.27.11.2"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "Beijing",
    "L": "Beijing",
    "O": "k8s",
    "OU": "system"
  }]
}
#The hosts field must list the internal IPs of all etcd nodes; reserving a few extra IPs leaves room for scaling out later.
  • Generate the etcd certificate:
[root@master01 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson  -bare etcd
[root@master01 work]# ls etcd*.pem
etcd-key.pem  etcd.pem
#The etcd certificate has now been issued.

5.6 Deploy the etcd cluster:

  • Run on all master nodes:

5.6.1 Deploy etcd on master01:

  • Unpack the etcd tarball and move the binaries into place:
[root@master01 system]# cd /data/work/
[root@master01 work]# tar -xf etcd-v3.4.13-linux-amd64.tar.gz
[root@master01 work]# cp -p etcd-v3.4.13-linux-amd64/etcd* /usr/local/bin/
  • Create the configuration file:
[root@master01 work]# vim etcd.conf 
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.27.11.223:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.27.11.223:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.27.11.223:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.27.11.223:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://172.27.11.223:2380,etcd2=https://172.27.11.145:2380,etcd3=https://172.27.11.217:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#Parameter reference:
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: listen address for cluster (peer) traffic
ETCD_LISTEN_CLIENT_URLS: listen address for client traffic
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS: client address advertised to the cluster
ETCD_INITIAL_CLUSTER: addresses of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining: "new" for a new cluster, "existing" to join one that already exists
  • Create the systemd unit file:
[root@master01 work]# vim etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
 
[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
  • Move the newly created certificates, config file, and unit file into their directories, and copy the certificates to master02 and master03:
[root@master01 work]# cp ca*.pem etcd*.pem /etc/etcd/ssl/
[root@master01 work]# cp etcd.conf /etc/etcd/ && cp etcd.service /usr/lib/systemd/system/ 
[root@master01 work]# scp ca*.pem etcd*.pem master02:/etc/etcd/ssl
[root@master01 work]# scp ca*.pem etcd*.pem master03:/etc/etcd/ssl
  • Create the etcd data directory:
[root@master01 work]# mkdir -p /var/lib/etcd/default.etcd

5.6.2 Deploy etcd on master02:

  • Unpack the etcd tarball and move the binaries into place:
[root@master02 system]# cd /data/work/
[root@master02 work]# tar -xf etcd-v3.4.13-linux-amd64.tar.gz
[root@master02 work]# cp -p etcd-v3.4.13-linux-amd64/etcd* /usr/local/bin/
  • Copy the certificates, etcd unit file, and etcd config file from master01:
[root@master02 ~]# mkdir /etc/etcd/ssl/ -p
[root@master02 ~]# cd /etc/etcd/ssl/
[root@master02 ssl]# scp root@master01:/etc/etcd/ssl/* .
[root@master02 ssl]# ll
total 16
-rw------- 1 root root 1679 Mar 28 12:49 ca-key.pem
-rw-r--r-- 1 root root 1359 Mar 28 12:49 ca.pem
-rw------- 1 root root 1675 Mar 28 12:49 etcd-key.pem
-rw-r--r-- 1 root root 1444 Mar 28 12:49 etcd.pem

[root@master02 ssl]# cd /etc/etcd/
[root@master02 etcd]# scp root@master01:/etc/etcd/etcd.conf .
[root@master02 etcd]# ll
total 4
-rw-r--r-- 1 root root 526 Mar 28 12:53 etcd.conf

[root@master02 etcd]# cd /usr/lib/systemd/system
[root@master02 system]# scp root@master01:/usr/lib/systemd/system/etcd.service .
[root@master02 system]# ll etcd.service 
-rw-r--r-- 1 root root 634 Mar 28 12:54 etcd.service
  • Create the etcd data directory:
[root@master02 ~]# mkdir -p /var/lib/etcd/default.etcd
  • Adjust the parameters in the config file for this node:
[root@master02 ~]# vim /etc/etcd/etcd.conf 
#[Member]
ETCD_NAME="etcd2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.27.11.145:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.27.11.145:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.27.11.145:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.27.11.145:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://172.27.11.223:2380,etcd2=https://172.27.11.145:2380,etcd3=https://172.27.11.217:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

5.6.3 Deploy etcd on master03:

  • Unpack the etcd tarball and move the binaries into place:
[root@master03 system]# cd /data/work/
[root@master03 work]# tar -xf etcd-v3.4.13-linux-amd64.tar.gz
[root@master03 work]# cp -p etcd-v3.4.13-linux-amd64/etcd* /usr/local/bin/
  • Copy the certificates, etcd unit file, and etcd config file from master01:
[root@master03 ~]# mkdir /etc/etcd/ssl/ -p
[root@master03 ~]# cd /etc/etcd/ssl/
[root@master03 ssl]# scp root@master01:/etc/etcd/ssl/* .
[root@master03 ssl]# ll
total 16
-rw------- 1 root root 1679 Mar 28 12:49 ca-key.pem
-rw-r--r-- 1 root root 1359 Mar 28 12:49 ca.pem
-rw------- 1 root root 1675 Mar 28 12:49 etcd-key.pem
-rw-r--r-- 1 root root 1444 Mar 28 12:49 etcd.pem

[root@master03 ssl]# cd /etc/etcd/
[root@master03 etcd]# scp root@master01:/etc/etcd/etcd.conf .
[root@master03 etcd]# ll
total 4
-rw-r--r-- 1 root root 526 Mar 28 12:53 etcd.conf

[root@master03 etcd]# cd /usr/lib/systemd/system
[root@master03 system]# scp root@master01:/usr/lib/systemd/system/etcd.service .
[root@master03 system]# ll etcd.service 
-rw-r--r-- 1 root root 634 Mar 28 12:54 etcd.service
  • Create the etcd data directory:
[root@master03 ~]# mkdir -p /var/lib/etcd/default.etcd
  • Adjust the parameters in the config file for this node:
[root@master03 ~]# vim /etc/etcd/etcd.conf 
#[Member]
ETCD_NAME="etcd3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.27.11.217:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.27.11.217:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.27.11.217:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.27.11.217:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://172.27.11.223:2380,etcd2=https://172.27.11.145:2380,etcd3=https://172.27.11.217:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

5.6.4 Start the etcd service on the three nodes in order:

Start master01's etcd first; it will appear to hang while starting. Then start etcd on master02, at which point master01's etcd comes up normally; finally start etcd on master03.

#Start the etcd service and enable it at boot
[root@master01 work]# systemctl daemon-reload && systemctl enable etcd.service && systemctl start etcd.service
[root@master02 work]# systemctl daemon-reload && systemctl enable etcd.service && systemctl start etcd.service
[root@master03 work]# systemctl daemon-reload && systemctl enable etcd.service && systemctl start etcd.service
#Check the etcd service status:
[root@master01]# systemctl status etcd
[root@master02]# systemctl status etcd
[root@master03]# systemctl status etcd

5.6.5 Check the etcd cluster:

[root@master01 work]# ETCDCTL_API=3 /usr/local/bin/etcdctl --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints="https://172.27.11.223:2379,https://172.27.11.145:2379,https://172.27.11.217:2379" endpoint health --write-out=table
+----------------------------+--------+-------------+-------+
|          ENDPOINT          | HEALTH |    TOOK     | ERROR |
+----------------------------+--------+-------------+-------+
| https://172.27.11.223:2379 |   true |  11.82053ms |       |
| https://172.27.11.145:2379 |   true | 12.840663ms |       |
| https://172.27.11.217:2379 |   true | 15.504948ms |       |
+----------------------------+--------+-------------+-------+

6. Installing the Kubernetes Components:

Master nodes

  • apiserver
  • controller manager
  • scheduler

Worker nodes

  • kubelet
  • kube-proxy

6.1 Unpack and distribute the packages:

#Unpack the Kubernetes component tarball on every master and worker node:
[root@master01 ~]# cd /data/work/
[root@master01 work]# tar zxvf kubernetes-server-linux-amd64.tar.gz
[root@master02 ~]# cd /data/work/
[root@master02 work]# tar zxvf kubernetes-server-linux-amd64.tar.gz
[root@master03 ~]# cd /data/work/
[root@master03 work]# tar zxvf kubernetes-server-linux-amd64.tar.gz
[root@work01 ~]# cd /data/work/
[root@work01 work]# tar zxvf kubernetes-server-linux-amd64.tar.gz
[root@work02 ~]# cd /data/work/
[root@work02 work]# tar zxvf kubernetes-server-linux-amd64.tar.gz
[root@work03 ~]# cd /data/work/
[root@work03 work]# tar zxvf kubernetes-server-linux-amd64.tar.gz
#On the master nodes, move the k8s binaries into the proper directory
[root@master01 work]# cd kubernetes/server/bin/
[root@master01 bin]# cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
[root@master02 work]# cd kubernetes/server/bin/
[root@master02 bin]# cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
[root@master03 work]# cd kubernetes/server/bin/
[root@master03 bin]# cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
#On the worker nodes, move the k8s binaries into the proper directory
[root@work01 work]# cd kubernetes/server/bin/
[root@work01 bin]# cp kubelet kube-proxy /usr/local/bin/
[root@work02 work]# cd kubernetes/server/bin/
[root@work02 bin]# cp kubelet kube-proxy /usr/local/bin/
[root@work03 work]# cd kubernetes/server/bin/
[root@work03 bin]# cp kubelet kube-proxy /usr/local/bin/
#Create the kubernetes config, SSL certificate, and log directories (on every master and worker node):
[root@master01 kubernetes-work]# mkdir -p /etc/kubernetes/ssl && mkdir -p /var/log/kubernetes

6.2 Deploy the kube-apiserver component:

6.2.1 How the TLS bootstrapping mechanism works:
  • Enabling the TLS bootstrapping mechanism:

Once the master's apiserver enables TLS authentication, the kubelet on every node must use a valid certificate signed by the apiserver's CA to communicate with the apiserver. When there are many nodes, issuing these client certificates by hand demands a great deal of work and likewise complicates scaling the cluster.

To simplify the process, Kubernetes introduced the TLS bootstrapping mechanism to issue kubelet client certificates automatically: the kubelet applies to the apiserver for a certificate as a low-privilege user, and the apiserver signs the kubelet's certificate dynamically.

Bootstrap programs exist in many systems, such as the Linux boot process; a bootstrap is generally preset configuration loaded at power-on or system startup to bring up a given environment. The Kubernetes kubelet can likewise load a configuration file of this kind at startup, with contents like the following:

apiVersion: v1
clusters: null
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user: {}
  • The TLS bootstrapping flow, step by step:

    • What TLS does: TLS encrypts the connection to prevent man-in-the-middle eavesdropping; moreover, without a trusted certificate a client cannot even establish a connection to the apiserver, let alone request specific content from it.
    • RBAC: with communication secured by TLS, authorization becomes RBAC's job (other models, such as ABAC, can also be used). RBAC specifies which APIs a user or group (the subject) may request; combined with TLS, the apiserver in practice reads the client certificate's CN field as the user name and the O field as the group.
    • In summary: first, to talk to the apiserver a client must use a certificate issued by the apiserver's CA, establishing trust and the TLS connection; second, the certificate's CN and O fields supply the user and group that RBAC needs. Rather than authorizing users directly, permissions are granted to roles, and users are bound to those roles.
  • The kubelet's first startup:

TLS bootstrapping exists so the kubelet component can request a certificate from the apiserver and then use it to connect; but on the very first startup, with no certificate yet, how does it connect to the apiserver?

The apiserver configuration points at a token.csv file containing a preset user; that user's token, together with CA material signed by the apiserver's CA, is written into the bootstrap.kubeconfig file the kubelet uses. On its first request, the kubelet uses the CA-trusted identity in bootstrap.kubeconfig to establish TLS with the apiserver, and the preset user's token in bootstrap.kubeconfig to declare its RBAC identity to the apiserver.
token.csv format: 3940fd7fbb391d1b4d861ad17a1f0613,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

On first startup, the kubelet may report a 401 Unauthorized error against the apiserver. By default the kubelet declares its identity with the preset user's token from bootstrap.kubeconfig and then creates a CSR; but remember that, until we intervene, this user has no permissions at all, including the permission to create CSRs. We therefore need a ClusterRoleBinding tying the preset user kubelet-bootstrap to the built-in ClusterRole system:node-bootstrapper so that it can submit CSR requests. This is demonstrated later, when the kubelet is installed.

6.2.2 Deploy kube-apiserver on master01
  • Create the token.csv file:
[root@master01 bin]# cd /data/work/
[root@master01 kubernetes-work]# cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
#Format: token,user name,UID,user group
  • Create the CSR file, replacing the IPs with your own machines':
[root@master01 work]# vim kube-apiserver-csr.json 
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "172.27.11.223",
    "172.27.11.145",
    "172.27.11.217",
    "172.27.11.106",
    "172.27.11.128",
    "172.27.11.147",
    "172.27.11.2",
    "10.255.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
#Note: if the hosts field is non-empty, it must list every IP or domain name authorized to use this certificate. Because the certificate will be used by the Kubernetes master cluster, include all master node IPs, plus the first IP of the service network (normally the first address of the --service-cluster-ip-range given to kube-apiserver, e.g. 10.255.0.1).
  • Generate the kube-apiserver certificate:
[root@master01 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
[root@master01 work]# ll kube-apiserver*
-rw-r--r-- 1 root root 1293 Mar 28 13:30 kube-apiserver.csr
-rw-r--r-- 1 root root  561 Mar 28 13:29 kube-apiserver-csr.json
-rw------- 1 root root 1679 Mar 28 13:30 kube-apiserver-key.pem
-rw-r--r-- 1 root root 1659 Mar 28 13:30 kube-apiserver.pem
  • Create the apiserver config file, replacing the IPs with your own:
[root@master01 work]# vim kube-apiserver.conf 
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=172.27.11.223 \
  --secure-port=6443 \
  --advertise-address=172.27.11.223 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.255.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem  \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://172.27.11.223:2379,https://172.27.11.145:2379,https://172.27.11.217:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"
  
#Parameter reference:
--logtostderr: log to standard error
--v: log verbosity
--log-dir: log directory
--etcd-servers: etcd cluster endpoints
--bind-address: listen address
--secure-port: HTTPS port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: virtual IP range for Services
--enable-admission-plugins: admission-control plugins
--authorization-mode: authorization modes; enables RBAC and Node authorization
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate the apiserver uses to reach kubelets
--tls-xxx-file: apiserver HTTPS certificate
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings
  • Create the systemd unit file:
[root@master01 work]# vim kube-apiserver.service 
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service
 
[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
  • Move the newly created certificates, config file, and unit file into their directories:
[root@master01 ca]# cp ca*.pem kube-apiserver*.pem /etc/kubernetes/ssl && cp token.csv /etc/kubernetes/ && cp kube-apiserver.conf /etc/kubernetes/ && cp kube-apiserver.service /usr/lib/systemd/system/
  • Start the kube-apiserver service:
[root@master01 ~]# systemctl daemon-reload && systemctl enable kube-apiserver && systemctl start kube-apiserver && systemctl is-active kube-apiserver.service
6.2.3 Deploy kube-apiserver on master02:
  • Copy the certificates, kube-apiserver unit file, kube-apiserver config file, and token file from master01:
[root@master02 ~]# mkdir /etc/kubernetes/ssl/ -p && cd /etc/kubernetes/ssl/ && scp root@master01:/etc/kubernetes/ssl/* .
[root@master02 ssl]# ll
total 16
-rw------- 1 root root 1679 Mar 28 13:41 ca-key.pem
-rw-r--r-- 1 root root 1359 Mar 28 13:41 ca.pem
-rw------- 1 root root 1679 Mar 28 13:41 kube-apiserver-key.pem
-rw-r--r-- 1 root root 1659 Mar 28 13:41 kube-apiserver.pem

[root@master02 ssl]# cd /etc/kubernetes/ && scp root@master01:/etc/kubernetes/kube-apiserver.conf .
[root@master02 kubernetes]# ll
total 4
-rw-r--r-- 1 root root 1611 Mar 28 13:42 kube-apiserver.conf

[root@master02 kubernetes]# cd /usr/lib/systemd/system && scp root@master01:/usr/lib/systemd/system/kube-apiserver.service .
[root@master02 system]# ll kube-apiserver.service 
-rw-r--r-- 1 root root 361 Mar 28 13:43 kube-apiserver.service

[root@master02 ssl]# cd /etc/kubernetes/ && scp root@master01:/etc/kubernetes/token.csv .
[root@master02 kubernetes]# ll
total 8
-rw-r--r-- 1 root root 1611 Mar 28 13:42 kube-apiserver.conf
drwxr-xr-x 2 root root   94 Mar 28 13:41 ssl
-rw-r--r-- 1 root root   84 Mar 28 13:44 token.csv
  • Edit the kube-apiserver.conf config file:
#Change the bind address and advertise address to this node's IP:
[root@master02 kubernetes]# vim /etc/kubernetes/kube-apiserver.conf
--bind-address=172.27.11.145 \
--advertise-address=172.27.11.145 \
  • Start the kube-apiserver service:
[root@master02 ~]# systemctl daemon-reload && systemctl enable kube-apiserver && systemctl start kube-apiserver && systemctl is-active kube-apiserver.service 
6.2.4 Deploy kube-apiserver on master03:
  • Copy the certificates, kube-apiserver unit file, kube-apiserver config file, and token file from master01:
[root@master03 ~]# mkdir /etc/kubernetes/ssl/ -p && cd /etc/kubernetes/ssl/ && scp root@master01:/etc/kubernetes/ssl/* .
[root@master03 ssl]# ll
total 16
-rw------- 1 root root 1679 Mar 28 13:41 ca-key.pem
-rw-r--r-- 1 root root 1359 Mar 28 13:41 ca.pem
-rw------- 1 root root 1679 Mar 28 13:41 kube-apiserver-key.pem
-rw-r--r-- 1 root root 1659 Mar 28 13:41 kube-apiserver.pem

[root@master03 ssl]# cd /etc/kubernetes/ && scp root@master01:/etc/kubernetes/kube-apiserver.conf .
[root@master03 kubernetes]# ll
total 4
-rw-r--r-- 1 root root 1611 Mar 28 13:42 kube-apiserver.conf

[root@master03 kubernetes]# cd /usr/lib/systemd/system && scp root@master01:/usr/lib/systemd/system/kube-apiserver.service .
[root@master03 system]# ll kube-apiserver.service 
-rw-r--r-- 1 root root 361 Mar 28 13:43 kube-apiserver.service

[root@master03 ssl]# cd /etc/kubernetes/ && scp root@master01:/etc/kubernetes/token.csv .
[root@master03 kubernetes]# ll
total 8
-rw-r--r-- 1 root root 1611 Mar 28 13:42 kube-apiserver.conf
drwxr-xr-x 2 root root   94 Mar 28 13:41 ssl
-rw-r--r-- 1 root root   84 Mar 28 13:44 token.csv
  • Edit the kube-apiserver.conf config file:
#Change the bind address and advertise address to this node's IP:
[root@master03 kubernetes]# vim /etc/kubernetes/kube-apiserver.conf
--bind-address=172.27.11.217 \
--advertise-address=172.27.11.217 \
  • Start the kube-apiserver service:
[root@master03 ~]# systemctl daemon-reload && systemctl enable kube-apiserver && systemctl start kube-apiserver && systemctl is-active kube-apiserver.service 
6.2.5 Test the kube-apiserver status:
  • Log in to any node and test:
[root@master01 work]# curl --insecure https://172.27.11.223:6443/
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
#The 401 above is the expected status; the request simply has not been authenticated yet.

7. Deploying the kubectl Component:

kubectl is the client tool for operating on Kubernetes resources: creating, deleting, updating, querying, and so on.

When kubectl operates on resources, how does it know which cluster to connect to? It needs a file such as /etc/kubernetes/admin.conf; kubectl reads that file's configuration to access Kubernetes resources. The /etc/kubernetes/admin.conf file records which Kubernetes cluster to access and which certificates to use.

  • You can set the KUBECONFIG environment variable:
[root@master01 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
#kubectl will then load KUBECONFIG automatically to decide which cluster's resources to manage
  • Or use the method kubeadm itself suggests after initializing a cluster:
[root@master01 ~]# cp /etc/kubernetes/admin.conf /root/.kube/config
#kubectl will then load the /root/.kube/config file to operate on the cluster's resources

If KUBECONFIG is set, kubectl uses it first to decide which cluster to operate on; without the KUBECONFIG variable, the /root/.kube/config file determines which cluster's resources are managed.
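That lookup order can be illustrated in shell (a sketch of the simple fallback only; kubectl's real KUBECONFIG handling also supports colon-separated file lists and merging):

```shell
#Prefer $KUBECONFIG when set, otherwise fall back to ~/.kube/config,
#mirroring the precedence described above.
config="${KUBECONFIG:-$HOME/.kube/config}"
echo "kubectl would load: $config"
```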

7.1 Create the CSR file:

[root@master01 work]# vim admin-csr.json 
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",             
      "OU": "system"
    }
  ]
}

Note: kube-apiserver later uses RBAC to authorize requests from clients such as the kubelet, kube-proxy, and Pods. kube-apiserver predefines some RoleBindings for RBAC; for example, cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call all kube-apiserver APIs. O sets this certificate's Group to system:masters: because the certificate is signed by the CA, authentication succeeds, and because system:masters is a pre-authorized group, the holder is granted access to all APIs.

This admin certificate is later used to generate the administrator's kubeconfig file. RBAC is now the generally recommended way to control roles and permissions in Kubernetes, which takes the certificate's CN field as the User and the O field as the Group. "O": "system:masters" must be exactly system:masters, or the later kubectl create clusterrolebinding step fails.

With O set to system:masters, the cluster's built-in cluster-admin ClusterRoleBinding binds the system:masters group to the cluster-admin ClusterRole.

7.2 Generate the certificate:

[root@master01 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
[root@master01 work]# cp admin*.pem /etc/kubernetes/ssl/

7.3 Configure the security context:

Create the kubeconfig file (this is important): kubeconfig is kubectl's configuration file and contains everything needed to access the apiserver, such as the apiserver address, the CA certificate, and kubectl's own certificate. (If a later step reports that the kubeconfig path cannot be found, copy the file to the expected path manually; if there is no such error, ignore this.)

  • Set cluster parameters:
[root@master01 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.27.11.2:16443 --kubeconfig=/opt/kube.config
  • Set client authentication parameters:
[root@master01 work]# kubectl config set-credentials admin --client-certificate=/etc/kubernetes/ssl/admin.pem --client-key=/etc/kubernetes/ssl/admin-key.pem --embed-certs=true --kubeconfig=/opt/kube.config
  • Set context parameters:
[root@master01 work]# kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=/opt/kube.config
  • Set the current context:
[root@master01 work]# kubectl config use-context kubernetes --kubeconfig=/opt/kube.config
[root@master01 work]# mkdir ~/.kube -p && cp /opt/kube.config  ~/.kube/config
  • Authorize the kubernetes certificate to access the kubelet API:
[root@master01 work]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
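For reference, the four kubectl config commands above assemble /opt/kube.config into a structure roughly like the following (a sketch; the embedded base64 certificate blobs are elided):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    certificate-authority-data: <base64 of ca.pem>   # embedded by --embed-certs=true
    server: https://172.27.11.2:16443
users:
- name: admin
  user:
    client-certificate-data: <base64 of admin.pem>
    client-key-data: <base64 of admin-key.pem>
contexts:
- name: kubernetes
  context:
    cluster: kubernetes
    user: admin
current-context: kubernetes
```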

7.4 Check cluster component status (scheduler and controller-manager show Unhealthy below because they are deployed later, in sections 8 and 9):

[root@master01 .kube]# kubectl cluster-info
Kubernetes control plane is running at https://172.27.11.2:16443

[root@master01 .kube]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused   
etcd-0               Healthy     {"health":"true"}                                                                             
etcd-2               Healthy     {"health":"true"}                                                                             
etcd-1               Healthy     {"health":"true"} 

[root@master01 .kube]# kubectl get all --all-namespaces
NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.255.0.1   <none>        443/TCP   21m

7.5 Sync the config file used by kubectl to the other 2 masters:

[root@master02 work]# mkdir /root/.kube
[root@master03 work]# mkdir /root/.kube
[root@master01 work]# scp ~/.kube/config root@master02:/root/.kube && scp ~/.kube/config root@master03:/root/.kube

7.6 Configure kubectl command completion:

#Run on all 3 master nodes in the cluster:
[root@master01 ~]# yum install -y bash-completion && source /usr/share/bash-completion/bash_completion  && source <(kubectl completion bash) && kubectl completion bash > ~/.kube/completion.bash.inc && source '/root/.kube/completion.bash.inc' && source $HOME/.bash_profile

#Official kubectl cheat sheet:
https://kubernetes.io/zh/docs/reference/kubectl/cheatsheet/

8. Deploy the kube-controller-manager component:

8.1 Create the CSR request file:

[root@master01 .kube]# cd /data/work/
[root@master01 work]# vim kube-controller-manager-csr.json
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "172.27.11.223",
      "172.27.11.145",
      "172.27.11.217",
      "172.27.11.2"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "Beijing",
        "L": "Beijing",
        "O": "system:kube-controller-manager",
        "OU": "system"
      }
    ]
}

#Note: the hosts list contains all kube-controller-manager node IPs; CN is system:kube-controller-manager and O is system:kube-controller-manager. The kubernetes built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs to work. CN is the user name, O is the group name.

8.2 Generate the certificate:

[root@master01 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
[root@master01 work]# ll kube-controller-manager*
-rw-r--r-- 1 root root 1143 Mar 28 14:06 kube-controller-manager.csr
-rw-r--r-- 1 root root  417 Mar 28 14:06 kube-controller-manager-csr.json
-rw------- 1 root root 1675 Mar 28 14:06 kube-controller-manager-key.pem
-rw-r--r-- 1 root root 1517 Mar 28 14:06 kube-controller-manager.pem

8.3 Create the kubeconfig for kube-controller-manager:

  • Set cluster parameters:
[root@master01 ssl]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.27.11.2:16443 --kubeconfig=kube-controller-manager.kubeconfig
  • Set client authentication parameters:
[root@master01 ssl]# kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
  • Set context parameters:
[root@master01 work]# kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
  • Set the current context:
[root@master01 work]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

8.4 Create the configuration file:

[root@master01 kube-controller-work]# vim kube-controller-manager.conf 
KUBE_CONTROLLER_MANAGER_OPTS="--port=0 \
  --secure-port=10252 \
  --bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.255.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.0.0.0/16 \
  --experimental-cluster-signing-duration=87600h \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2"

8.5 Create the systemd unit file:

[root@master01 kube-controller-work]# vim kube-controller-manager.service 
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

8.6 Copy the certificates, unit file, and configuration file just created:

[root@master01 work]# cp kube-controller-manager*.pem /etc/kubernetes/ssl/ && cp kube-controller-manager.conf /etc/kubernetes/ && cp kube-controller-manager.service /usr/lib/systemd/system/

8.7 Start the service and check its status:

[root@master01 ssl]# systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl start kube-controller-manager && systemctl status kube-controller-manager

8.8 Deploy controller-manager on master02 and master03:

  • Copy the unit file, configuration file, kubeconfig file, and certificates to the corresponding directories:
[root@master01 kubernetes]# scp kube-controller-manager.conf kube-controller-manager.kubeconfig root@master02:/etc/kubernetes/ && scp kube-controller-manager.conf kube-controller-manager.kubeconfig root@master03:/etc/kubernetes/ 
[root@master01 kubernetes]# scp /usr/lib/systemd/system/kube-controller-manager.service root@master02:/usr/lib/systemd/system && scp /usr/lib/systemd/system/kube-controller-manager.service root@master03:/usr/lib/systemd/system
[root@master01 ssl]# scp /etc/kubernetes/ssl/kube-controller-manager*.pem root@master02:/etc/kubernetes/ssl/ && scp /etc/kubernetes/ssl/kube-controller-manager*.pem root@master03:/etc/kubernetes/ssl/
  • Start the service and check its status:
[root@master02 ssl]# systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl start kube-controller-manager && systemctl status kube-controller-manager
[root@master03 ssl]# systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl start kube-controller-manager && systemctl status kube-controller-manager

9. Deploy the kube-scheduler component:

9.1 Create the CSR request file:

[root@master01 work]# vim kube-scheduler-csr.json 
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "172.27.11.223",
      "172.27.11.145",
      "172.27.11.217",
      "172.27.11.2"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Beijing",
        "L": "Beijing",
        "O": "system:kube-scheduler",
        "OU": "system"
      }
    ]
}

#Note: the hosts list contains all kube-scheduler node IPs; CN is system:kube-scheduler and O is system:kube-scheduler. The kubernetes built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs to work.

9.2 Generate the certificate:

[root@master01 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

9.3 Create the kubeconfig for kube-scheduler:

  • Set cluster parameters:
[root@master01 kube-scheduler-work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.27.11.2:16443 --kubeconfig=kube-scheduler.kubeconfig
  • Set client authentication parameters:
[root@master01 ssl]# kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
  • Set context parameters:
[root@master01 kube-scheduler-work]# kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
  • Set the current context:
[root@master01 kube-scheduler-work]# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

9.4 Create the configuration file:

[root@master01 work]# vim kube-scheduler.conf 
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--leader-elect=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"

9.5 Create the systemd unit file:

[root@master01 work]# vim kube-scheduler.service 
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5
 
[Install]
WantedBy=multi-user.target

9.6 Copy the certificates, unit file, and configuration file just created:

[root@master01 work]# cp kube-scheduler*.pem /etc/kubernetes/ssl/ && cp kube-scheduler.conf /etc/kubernetes/ && cp kube-scheduler.service /usr/lib/systemd/system/ 

9.7 Start the service and check its status:

[root@master01 ssl]# systemctl daemon-reload  && systemctl enable kube-scheduler.service && systemctl start kube-scheduler.service && systemctl status kube-scheduler.service

9.8 Deploy scheduler on master02 and master03:

  • Copy the unit file, configuration file, kubeconfig file, and certificates to the corresponding directories:
[root@master01 kubernetes]# scp kube-scheduler.conf kube-scheduler.kubeconfig root@master02:/etc/kubernetes/ && scp kube-scheduler.conf kube-scheduler.kubeconfig root@master03:/etc/kubernetes/

[root@master01 kubernetes]# scp /usr/lib/systemd/system/kube-scheduler.service root@master02:/usr/lib/systemd/system && scp /usr/lib/systemd/system/kube-scheduler.service root@master03:/usr/lib/systemd/system

[root@master01 ssl]# scp /etc/kubernetes/ssl/kube-scheduler*.pem root@master02:/etc/kubernetes/ssl/ && scp /etc/kubernetes/ssl/kube-scheduler*.pem root@master03:/etc/kubernetes/ssl/ 
  • Start the service and check its status:
[root@master02 ssl]# systemctl daemon-reload  && systemctl enable kube-scheduler.service && systemctl start kube-scheduler.service && systemctl status kube-scheduler.service
[root@master03 ssl]# systemctl daemon-reload  && systemctl enable kube-scheduler.service && systemctl start kube-scheduler.service && systemctl status kube-scheduler.service

10. Upload and load the images needed by the services deployed later:

  • This only needs to be done on the work01-work03 nodes;
#Manually load pause-cordns.tar.gz, cni.tar.gz and node.tar.gz on the work01 node; the other 2 nodes are done the same way as work01:
[root@work01 ~]# docker load -i /data/work/pause-cordns.tar.gz && docker load -i /data/work/cni.tar.gz && docker load -i /data/work/node.tar.gz
[root@work01 ~]# docker image ls
REPOSITORY           TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/coredns   1.7.0     bfe3a36ebd25   21 months ago   45.2MB
k8s.gcr.io/pause     3.2       80d28bedfe5d   2 years ago     683kB
xianchao/node        v3.5.3    c56b9408bf9e   3 years ago     72.7MB
xianchao/cni         v3.5.3    0e3609429486   3 years ago     83.6MB

11. Deploy the kubelet component:

The kubelet on each Node periodically calls the API Server's REST interface to report its own status; the API Server stores the node status information in etcd. The kubelet also watches Pod information via the API Server, and manages the Pods on its node accordingly: creating, deleting, and updating them.

11.1 Create kubelet-bootstrap.kubeconfig:

  • Run the following on master01:
[root@master01 work]# BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
[root@master01 kubernetes]# echo $BOOTSTRAP_TOKEN 
3128f0ac412e60c5638e68eb8ab99acf
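The awk extraction above simply takes the first comma-separated field of token.csv. A self-contained sketch with a sample line (the token value here is just the one shown above, used illustratively):

```shell
# token.csv format: token,user,uid,"groups"
echo '3128f0ac412e60c5638e68eb8ab99acf,kubelet-bootstrap,10001,"system:kubelet-bootstrap"' > /tmp/token.csv

# Take the first comma-separated field, i.e. the token itself.
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /tmp/token.csv)
echo "$BOOTSTRAP_TOKEN"   # -> 3128f0ac412e60c5638e68eb8ab99acf
```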
  • Set cluster parameters:
[root@master01 work]#  kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.27.11.2:16443 --kubeconfig=kubelet-bootstrap.kubeconfig
  • Set client authentication parameters:
[root@master01 work]# kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
  • Set context parameters:
[root@master01 kubernetes]# kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
  • Set the current context:
[root@master01 work]# kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
  • Create a cluster role binding for the kubelet-bootstrap user:
[root@master01 work]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

11.2 Create the kubelet.json configuration file:

"cgroupDriver": "systemd" must match docker's cgroup driver. Replace address with your own work01 IP address.

[root@master01 work]# vim kubelet.json 
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "172.27.11.106",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.255.0.2"]
}

11.3 Create the systemd unit file:

[root@master01 work]# vim kubelet.service 
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
  --cert-dir=/etc/kubernetes/ssl \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet.json \
  --network-plugin=cni \
  --pod-infra-container-image=k8s.gcr.io/pause:3.2 \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
 
[Install]
WantedBy=multi-user.target

#Notes:
--hostname-override: display name, unique within the cluster
--network-plugin: enable CNI
--kubeconfig: empty path; it is generated automatically and later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration file
--cert-dir: directory where kubelet certificates are generated
--pod-infra-container-image: image for the Pod infrastructure (pause) container

#Note: change the address field in kubelet.json to each node's own IP, and start the service on each work node

11.4 Copy the unit file, configuration file, kubeconfig, and certificate to work01-work03:

[root@master01 work]# scp kubelet-bootstrap.kubeconfig root@work01:/etc/kubernetes/ && scp kubelet.json root@work01:/etc/kubernetes/ && scp kubelet.service root@work01:/usr/lib/systemd/system && scp ca.pem root@work01:/etc/kubernetes/ssl/  

[root@master01 work]# scp kubelet-bootstrap.kubeconfig root@work02:/etc/kubernetes/ && scp kubelet.json root@work02:/etc/kubernetes/ && scp kubelet.service root@work02:/usr/lib/systemd/system && scp ca.pem root@work02:/etc/kubernetes/ssl/  

[root@master01 work]# scp kubelet-bootstrap.kubeconfig root@work03:/etc/kubernetes/ && scp kubelet.json root@work03:/etc/kubernetes/ && scp kubelet.service root@work03:/usr/lib/systemd/system && scp ca.pem root@work03:/etc/kubernetes/ssl/  

11.5 Modify kubelet.json on the 3 work nodes:

[root@work01 ~]# vim /etc/kubernetes/kubelet.json 
 "address": "172.27.11.106", 
[root@work02 ~]# vim /etc/kubernetes/kubelet.json 
 "address": "172.27.11.128",
[root@work03 ~]# vim /etc/kubernetes/kubelet.json 
 "address": "172.27.11.147",
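Editing by hand works; the per-node address can also be patched with sed. A sketch against a temporary copy (an in-place edit of /etc/kubernetes/kubelet.json on each node would work the same way):

```shell
# Stand-in for /etc/kubernetes/kubelet.json carrying the template address.
printf '{\n  "address": "172.27.11.106",\n  "port": 10250\n}\n' > /tmp/kubelet.json

# Replace the address field with this node's IP (work02 in this example).
NODE_IP=172.27.11.128
sed -i "s/\"address\": \".*\"/\"address\": \"$NODE_IP\"/" /tmp/kubelet.json

# Show the updated field.
grep '"address"' /tmp/kubelet.json
```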

11.6 Start the kubelet service on the work nodes:

#Start the kubelet service
[root@work01 ~]# mkdir /var/lib/kubelet && mkdir /var/log/kubernetes && systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet && systemctl status kubelet

[root@work02 ~]# mkdir /var/lib/kubelet && mkdir /var/log/kubernetes && systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet && systemctl status kubelet

[root@work03 ~]# mkdir /var/lib/kubelet && mkdir /var/log/kubernetes && systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet && systemctl status kubelet

#After confirming the kubelet service started successfully, go to master01 and approve the bootstrap requests.

11.7 Approve the node CSR requests on master01:

#The following command shows that each worker node has sent a CSR request:
[root@master01 ~]# kubectl  get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-3XyUVgKBOjclc6ks9U7zYl0pW1BNrarWQwKlgc9JD_c   30s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-QPR-51MEtXDa76dOYlZQ5aQXNVSupCkVqhHElQ5qJTQ   35s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-pr2-yzukTicMkghMLx136nnb3UBZ0seqkhyaBVICcT4   26s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

#Approve the kubelet requests to connect to the apiserver
[root@master01 ~]# kubectl certificate approve node-csr-3XyUVgKBOjclc6ks9U7zYl0pW1BNrarWQwKlgc9JD_c
[root@master01 ~]# kubectl certificate approve node-csr-QPR-51MEtXDa76dOYlZQ5aQXNVSupCkVqhHElQ5qJTQ
[root@master01 ~]# kubectl certificate approve node-csr-pr2-yzukTicMkghMLx136nnb3UBZ0seqkhyaBVICcT4

[root@master01 ~]# kubectl  get csr
NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-3XyUVgKBOjclc6ks9U7zYl0pW1BNrarWQwKlgc9JD_c   115s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
node-csr-QPR-51MEtXDa76dOYlZQ5aQXNVSupCkVqhHElQ5qJTQ   2m     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
node-csr-pr2-yzukTicMkghMLx136nnb3UBZ0seqkhyaBVICcT4   111s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued

#Check node status; a STATUS of NotReady means the network plugin is not installed yet:
[root@master01 ~]# kubectl  get node
NAME     STATUS     ROLES    AGE     VERSION
work01   NotReady   <none>   6m12s   v1.20.7
work02   NotReady   <none>   89s     v1.20.7
work03   NotReady   <none>   83s     v1.20.7

12. Deploy the kube-proxy component:

12.1 Create the CSR request:

[root@master01 work]# vim kube-proxy-csr.json 
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "k8s",
      "OU": "system"
    }
  ]
}

12.2 Generate the certificate:

[root@master01 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

12.3 Create the kubeconfig for kube-proxy:

  • Set cluster parameters:
[root@master01 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.27.11.2:16443 --kubeconfig=kube-proxy.kubeconfig
  • Set client authentication parameters:
[root@master01 work]# kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
  • Set context parameters:
[root@master01 work]# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
  • Set the current context:
[root@master01 work]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
  • Copy the kubeconfig file to the 3 work nodes:
[root@master01 opt]# scp /data/work/kube-proxy.kubeconfig root@work01:/etc/kubernetes/ && scp /data/work/kube-proxy.kubeconfig root@work02:/etc/kubernetes/ && scp /data/work/kube-proxy.kubeconfig root@work03:/etc/kubernetes/

12.4 Create the kube-proxy configuration files:

[root@master01 work]# vim kube-proxy-work1.yaml 
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 172.27.11.106
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.0.0.0/16
healthzBindAddress: 172.27.11.106:10256
kind: KubeProxyConfiguration
metricsBindAddress: 172.27.11.106:10249
mode: "ipvs"

[root@master01 kube-proxy]# vim kube-proxy-work2.yaml 
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 172.27.11.128
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.0.0.0/16
healthzBindAddress: 172.27.11.128:10256
kind: KubeProxyConfiguration
metricsBindAddress: 172.27.11.128:10249
mode: "ipvs"

[root@master01 kube-proxy]# vim kube-proxy-work3.yaml 
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 172.27.11.147
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.0.0.0/16
healthzBindAddress: 172.27.11.147:10256
kind: KubeProxyConfiguration
metricsBindAddress: 172.27.11.147:10249
mode: "ipvs"
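The three per-node files differ only in the node IP, so they can equally be generated in a loop. A sketch writing into /tmp (adjust the IP list and output path as needed; note kube-proxy's clusterCIDR is meant to be the Pod network range, 10.0.0.0/16 in this plan):

```shell
# Generate one kube-proxy config per work-node IP from a shared template.
i=1
for ip in 172.27.11.106 172.27.11.128 172.27.11.147; do
  cat > /tmp/kube-proxy-work$i.yaml <<EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: $ip
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.0.0.0/16
healthzBindAddress: $ip:10256
kind: KubeProxyConfiguration
metricsBindAddress: $ip:10249
mode: "ipvs"
EOF
  i=$((i + 1))
done

ls /tmp/kube-proxy-work*.yaml
```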

12.5 Create the systemd unit files:

[root@master01 work]# vim kube-proxy-work1.service 
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
 
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy-work1.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target

[root@master01 work]# vim kube-proxy-work2.service 
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
 
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy-work2.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target

[root@master01 work]# vim kube-proxy-work3.service 
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
 
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy-work3.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target

12.6 Copy the corresponding configuration files and unit files to work01-work03:

[root@master01 kube-proxy]# scp kube-proxy-work1.yaml root@work01:/etc/kubernetes/ && scp kube-proxy-work1.service root@work01:/usr/lib/systemd/system

[root@master01 kube-proxy]# scp kube-proxy-work2.yaml root@work02:/etc/kubernetes/ && scp kube-proxy-work2.service root@work02:/usr/lib/systemd/system

[root@master01 kube-proxy]# scp kube-proxy-work3.yaml root@work03:/etc/kubernetes/ && scp kube-proxy-work3.service root@work03:/usr/lib/systemd/system

12.7 Start the kube-proxy service on the work nodes:

[root@work01 ~]# mkdir -p /var/lib/kube-proxy && systemctl daemon-reload && systemctl enable kube-proxy-work1 && systemctl start kube-proxy-work1 && systemctl status kube-proxy-work1
  
[root@work02 ~]# mkdir -p /var/lib/kube-proxy && systemctl daemon-reload && systemctl enable kube-proxy-work2 && systemctl start kube-proxy-work2 && systemctl status kube-proxy-work2
   
[root@work03 ~]# mkdir -p /var/lib/kube-proxy && systemctl daemon-reload && systemctl enable kube-proxy-work3 && systemctl start kube-proxy-work3 && systemctl status kube-proxy-work3

13. Deploy the calico network component:

[root@master01 work]# kubectl apply -f calico.yaml
[root@master01 work]# kubectl get pods -n kube-system
NAME                READY   STATUS    RESTARTS   AGE
calico-node-49l7g   1/1     Running   0          21s
calico-node-hlwpv   1/1     Running   0          21s
calico-node-n6d7b   1/1     Running   0          21s

14. Deploy the coredns component:

[root@master01 work]# kubectl apply -f coredns.yaml
[root@master01 work]# kubectl get pods -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE
calico-node-49l7g          1/1     Running   0          79s
calico-node-hlwpv          1/1     Running   0          79s
calico-node-n6d7b          1/1     Running   0          79s
coredns-79677db9bd-2b8jj   1/1     Running   0          6s

#Check cluster status
[root@master01 work]# kubectl  get node
NAME     STATUS   ROLES    AGE   VERSION
work01   Ready    <none>   24m   v1.20.7
work02   Ready    <none>   20m   v1.20.7
work03   Ready    <none>   19m   v1.20.7

15. Test the k8s cluster by deploying a tomcat service

  • Load tomcat.tar.gz and busybox-1-28.tar.gz on the 3 work nodes
[root@work01 ~]# docker load -i /data/work/tomcat.tar.gz && docker load -i /data/work/busybox-1-28.tar.gz 
[root@work01 ~]# docker image ls
REPOSITORY           TAG               IMAGE ID       CREATED         SIZE
tomcat               8.5-jre8-alpine   8b8b1eb786b5   2 years ago     106MB
busybox              1.28              8c811b4aec35   3 years ago     1.15MB

[root@master01 work]# kubectl apply -f tomcat.yaml
[root@master01 work]# kubectl apply -f tomcat-service.yaml

[root@master01 work]# kubectl get pod,svc
NAME           READY   STATUS    RESTARTS   AGE
pod/demo-pod   2/2     Running   0          30s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/kubernetes   ClusterIP   10.255.0.1      <none>        443/TCP          92m
service/tomcat       NodePort    10.255.250.10   <none>        8080:30080/TCP   16s
[root@master01 work]# kubectl get pod,svc -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP         NODE     NOMINATED NODE   READINESS GATES
pod/demo-pod   2/2     Running   0          59s   10.0.2.2   work03   <none>           <none>

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE   SELECTOR
service/kubernetes   ClusterIP   10.255.0.1      <none>        443/TCP          93m   <none>
service/tomcat       NodePort    10.255.250.10   <none>        8080:30080/TCP   45s   app=myapp,env=dev

#In a browser, visit work03's IP:30080 to reach the tomcat page


16. Verify that the coredns service works

[root@master01 work]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh
If you don't see a command prompt, try pressing enter.
/ # ping www.baidu.com
PING www.baidu.com (110.242.68.4): 56 data bytes
64 bytes from 110.242.68.4: seq=0 ttl=49 time=21.763 ms
64 bytes from 110.242.68.4: seq=1 ttl=49 time=10.615 ms
#The above shows the pod has network access
/ # nslookup kubernetes.default.svc.cluster.local
Server:		10.255.0.2
Address:	10.255.0.2:53
Name:	kubernetes.default.svc.cluster.local
Address: 10.255.0.1

/ # nslookup tomcat.default.svc.cluster.local
Server:    10.255.0.2
Address 1: 10.255.0.2 kube-dns.kube-system.svc.cluster.local

Name:      tomcat.default.svc.cluster.local
Address 1: 10.255.250.10 tomcat.default.svc.cluster.local

#Note:
busybox must be the specified 1.28 version, not the latest; with the latest version, nslookup cannot resolve the DNS name and IP, and reports errors like:
/ # nslookup kubernetes.default.svc.cluster.local
Server:		10.255.0.2
Address:	10.255.0.2:53
*** Can't find kubernetes.default.svc.cluster.local: No answer
*** Can't find kubernetes.default.svc.cluster.local: No answer

10.255.0.2 is our coreDNS clusterIP, which shows coreDNS is configured correctly.
Internal Service names are resolved through coreDNS.

This article is participating in the Cloud Native prize essay activity; activity link: https://ost.51cto.com/posts/12598

© Copyright belongs to the author. Please credit the source when reprinting; otherwise legal responsibility will be pursued.
Last modified 2022-5-24 09:27:23