Adding new master and worker nodes to K8s and configuring high availability

### Installing docker and kubeadm is not covered here

1. Adding nodes

1. Adding a master node

# Generate a new token on the existing master
kubeadm token create --print-join-command
kubeadm join 192.168.2.249:6443 --token 6wzuvf.01nkjn0b2oq8fdgs     --discovery-token-ca-cert-hash sha256:b37cfd7ab00c9eced97ce78261ff4466cf175bca44c69e1f3adb82718639a920
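
Optionally, you can confirm the token on the master before using the join command; kubeadm bootstrap tokens expire after 24 hours by default.
kubeadm token list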

# On the master, upload the control-plane certificates so that a new master can join
kubeadm init phase upload-certs --experimental-upload-certs      # flag used by older kubeadm releases
unknown flag: --experimental-upload-certs    (this is the error output)

Current command (the flag was renamed to --upload-certs in newer kubeadm releases):
kubeadm init phase upload-certs  --upload-certs
W1024 11:02:06.486274   21651 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://cdn.dl.k8s.io/release/stable-1.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W1024 11:02:06.486355   21651 version.go:103] falling back to the local client version: v1.20.9
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
de720595316a70ad828a4cdb3a99b28f578bceb4d6570a661c9f0304e958e3f9
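# Note: kubeadm keeps the uploaded certificates and this decryption key for about two hours by default; if the new master joins later than that, re-run the upload-certs command above to get a fresh key.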
# Run on the new master
kubeadm join 192.168.2.249:6443 --token 6wzuvf.01nkjn0b2oq8fdgs     --discovery-token-ca-cert-hash sha256:b37cfd7ab00c9eced97ce78261ff4466cf175bca44c69e1f3adb82718639a920 --control-plane --certificate-key de720595316a70ad828a4cdb3a99b28f578bceb4d6570a661c9f0304e958e3f9    (the certificate key printed above)

# You may hit this error here
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.5. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
error execution phase preflight: 
One or more conditions for hosting a new control plane instance is not satisfied.

unable to add a new control plane instance a cluster that doesn't have a stable controlPlaneEndpoint address

Please ensure that:
* The cluster has a stable controlPlaneEndpoint address.
* The certificates that must be shared among control plane instances are provided.


To see the stack trace of this error execute with --v=5 or higher

# Fix
kubectl -n kube-system get cm kubeadm-config -oyaml | grep controlPlaneEndpoint    (controlPlaneEndpoint is not set)
# If kubectl cannot be used on the new master:
# copy /etc/kubernetes/admin.conf from the primary master to the same path on this node
# and configure the environment variable
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
# kubectl can now be used
# Continue fixing the problem above
Run:
kubectl -n kube-system edit cm kubeadm-config
    kind: ClusterConfiguration
    kubernetesVersion: v1.20.0
    controlPlaneEndpoint: 192.168.2.249:6443    (add this line; the IP is the primary master's IP)
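
After saving the edit, you can confirm the value is now present with the same grep as before:
kubectl -n kube-system get cm kubeadm-config -oyaml | grep controlPlaneEndpoint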

# After the change is saved, run the join command again
kubeadm join 192.168.2.249:6443 --token 6wzuvf.01nkjn0b2oq8fdgs     --discovery-token-ca-cert-hash sha256:b37cfd7ab00c9eced97ce78261ff4466cf175bca44c69e1f3adb82718639a920 --control-plane --certificate-key de720595316a70ad828a4cdb3a99b28f578bceb4d6570a661c9f0304e958e3f9

kubectl get nodes
# If the new node stays NotReady, use journalctl -f to inspect the detailed error logs
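For example (kubelet is the standard systemd unit name on kubeadm installs, and <node-name> is a placeholder):
journalctl -u kubelet -f               # follow the kubelet logs on the problem node
kubectl describe node <node-name>      # check node conditions and events; a common cause is the CNI network plugin not yet running on the new node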

2. Adding a worker node

kubeadm join 192.168.2.249:6443 --token 6wzuvf.01nkjn0b2oq8fdgs     --discovery-token-ca-cert-hash sha256:b37cfd7ab00c9eced97ce78261ff4466cf175bca44c69e1f3adb82718639a920
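
Back on the master, verify the worker registered. Optionally give it a role label so it shows up in the ROLES column; the node name k8s-node1 is taken from the environment list in the next section, and the label is purely cosmetic:
kubectl get nodes
kubectl label node k8s-node1 node-role.kubernetes.io/worker=worker    # optional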

2. Configuring high availability

Keepalived is a lightweight high-availability solution for Linux. It provides high availability through virtual router redundancy: VRRP (Virtual Router Redundancy Protocol) removes the single point of failure inherent in static routing, so the network keeps running without interruption.

HAProxy is the load balancer in front of the cluster: it distributes kube-apiserver traffic across the master nodes, while keepalived keeps the VIP pointing at a live HAProxy instance.

Environment:

192.168.2.249    k8s-master
192.168.2.239    k8s-master2
192.168.2.250    k8s-node1
192.168.2.251    k8s-node2
192.168.2.238    k8s-node3
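
If these hostnames are not resolvable via DNS, a common convenience is to map them in /etc/hosts on every node (a sketch built from the addresses above; adjust to your environment):
cat >> /etc/hosts << 'EOF'
192.168.2.249 k8s-master
192.168.2.239 k8s-master2
192.168.2.250 k8s-node1
192.168.2.251 k8s-node2
192.168.2.238 k8s-node3
EOF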

# Install on all master nodes
yum install -y keepalived haproxy

# master1 configuration
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id k8s                        # router ID of this keepalived instance; any custom value
}

vrrp_script check_haproxy {             # define a VRRP tracking script
    script "killall -0 haproxy"         # command to run; exit code 0 means haproxy is alive
    interval 3                          # how often to run the script, in seconds
    weight -2                           # adjust the instance priority by this weight on failure; default 0
    fall 10                             # consecutive failures before the script is considered down (state switches to FAULT)
    rise 2                              # consecutive successes before the script recovers from FAULT
}

vrrp_instance VI_1 {
    state MASTER                        # MASTER or BACKUP, must be uppercase
    interface ens192                    # network interface name
    virtual_router_id 51                # virtual router ID (0-255) identifying the VRRP multicast group; must be unique within the subnet and identical for all members of this instance
    priority 250                        # priority (1-255); the node with the highest priority becomes MASTER
    advert_int 1                        # interval between VRRP advertisements, default 1 second
    authentication {
        auth_type PASS                  # authentication type: PASS or AH (IPSEC)
        auth_pass ASDFasdf123           # password; peers must use the same value to communicate
    }
    virtual_ipaddress {
        192.168.100.100                 # VIP address
    }
    }

    track_script {
        check_haproxy
    }
}
# master2 configuration
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id k8s
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 51
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ASDFasdf123
    }
    virtual_ipaddress {
        192.168.100.100
    }
    track_script {
        check_haproxy
    }
}
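
The tracking script relies on killall -0 haproxy exiting with 0 while a haproxy process exists; you can sanity-check it by hand once haproxy is running (the echo is only there to show the exit code):
killall -0 haproxy; echo $?             # 0 = haproxy is alive; non-zero = keepalived will lower this node's priority by the weight above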

Start keepalived and verify

# On both master nodes
systemctl enable keepalived
systemctl start  keepalived

ip a
# Check whether the VIP 192.168.100.100 is present. If it is, stop keepalived on master1 and confirm the VIP fails over to master2, then start keepalived again and confirm the VIP floats back to master1.
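For example (the interface name ens192 comes from the keepalived configuration above):
ip a show ens192 | grep 192.168.100.100      # on master1: the VIP should appear here first
systemctl stop keepalived                    # on master1: simulate a failure
ip a show ens192 | grep 192.168.100.100      # on master2: the VIP should now have moved here
systemctl start keepalived                   # on master1: with the higher priority it should take the VIP back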

Configure haproxy

# Identical on both master nodes
vim /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2                                         # log to the local syslog facility local2
    chroot      /var/lib/haproxy                                         # chroot directory, for extra security
    pidfile     /var/run/haproxy.pid                                     # pid file location
    maxconn     4000                                                     # maximum number of connections
    user        haproxy                                                  # run as this user
    group       haproxy                                                  # run as this group
    daemon                                                               # run as a background daemon
    stats socket /var/lib/haproxy/stats                                  # Unix socket for runtime statistics
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults                                                                 # default settings for all 'listen' and 'backend' sections unless overridden (HTTP mode, timeouts, etc.)
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend kubernetes-apiserver                                           # frontend that accepts incoming kube-apiserver traffic from clients
    mode                 tcp                                            # TCP mode
    bind                 *:16000                                        # haproxy listens on port 16000
    option               tcplog
    default_backend      kubernetes-apiserver                           # backend to forward to; must match the backend name below


backend kubernetes-apiserver                                            # backend referenced by default_backend above
    mode        tcp
    balance     roundrobin                                              # round-robin load balancing
    server      k8s-master    192.168.2.249:6443 check                  # control-plane node
    server      k8s-master2   192.168.2.239:6443 check                  # control-plane node

listen stats                                                            # stats listener, used to monitor HAProxy itself
    bind                 *:1080                                         # listen on port 1080
    stats auth           admin:ASDFasdf123                              # username:password for the stats page
    stats refresh        5s                                             # how often the stats page refreshes
    stats realm          HAProxy\ Statistics                            # authentication realm shown in the login prompt
    stats uri            /admin?stats                                   # URI path of the stats page

This configuration proxies client requests to the two Kubernetes API servers k8s-master and k8s-master2 and exposes a statistics page. In production, adjust it to your own topology and security requirements; make sure the HAProxy configuration file is correct and modify it as needed for your environment.
# Start haproxy and check the listening ports
systemctl start haproxy


systemctl enable haproxy
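
A few quick checks after haproxy is up (the ports and credentials come from the haproxy.cfg above; the VIP responds only on whichever master currently holds it):
ss -lntp | grep -E ':16000|:1080'                 # haproxy should be listening on both ports
curl -k https://192.168.100.100:16000/healthz     # should return "ok", assuming anonymous access to /healthz is allowed (the kubeadm default)
# The stats page is reachable at http://<master-ip>:1080/admin?stats (user admin, password ASDFasdf123)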