k8s(4)


Table of Contents

Load balancer deployment

Perform the initialization steps

Add hostname entries on every host

Copy the certificates, master component configs, and service unit files from master01 to master02

Update the IPs in master02's kube-apiserver, kube-controller-manager, and kube-scheduler configs

Start the services on master02 and enable them at boot

Power on the .50 and .60 hosts (the load balancer and VIP were defined earlier)

Initialize the load balancers

Configure nginx as a layer-4 reverse proxy across the two masters on port 6443

Deploy the keepalived service

Add a script that checks whether nginx is alive

Edit the keepalived config file

Add a periodically executed script block

Start keepalived

Point bootstrap.kubeconfig, kubelet.kubeconfig, and kube-proxy.kubeconfig on every node at the VIP

Change the IP in master01's ~/.kube/config

Point the masters' kube-controller-manager.kubeconfig and kube-scheduler.kubeconfig at each host's own IP


Load balancer deployment

Prepare master02: 192.168.233.20

Ideally run three or more masters, so the components can elect a leader and the cluster stays highly available. The deployment steps are identical on each master.

Perform the initialization steps:

systemctl stop firewalld
systemctl disable firewalld

setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config

swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

hostnamectl set-hostname master02

Add hostname entries on every host. The heredoc contents were lost in extraction; a typical /etc/hosts append plus the usual kernel parameters for k8s looks like:

cat >> /etc/hosts <<EOF
192.168.233.10 master01
192.168.233.20 master02
EOF

cat > /etc/sysctl.d/k8s.conf <<EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system > /dev/null

Copy the certificate files, the master component config files, and the service unit files from master01 to master02:

scp -r kubernetes/ etcd/ master02:/opt/

Check master02:

Delete the copied logs:

scp -r .kube/ master02:/root

Check it and delete the directory:

Copy the service unit files:

cd /usr/lib/systemd/system
ls kube*

scp kube-* master02:`pwd`

Check master02:

Update the IPs in master02's kube-apiserver, kube-controller-manager, and kube-scheduler config files:

cd /opt/kubernetes/cfg
vim kube-apiserver
vim kube-controller-manager
vim kube-scheduler
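The screenshots originally showed the edits; concretely, in this kind of binary deployment the addresses that carry master01's IP need to become master02's own IP. The flag names below are the standard kube-apiserver options; verify them against your own cfg files:

```
# /opt/kubernetes/cfg/kube-apiserver -- change master01's IP (192.168.233.10)
# to master02's own IP (192.168.233.20) wherever it appears, e.g.:
--bind-address=192.168.233.20 \
--advertise-address=192.168.233.20 \
```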

Start each service on master02 and enable it at boot (enable --now both starts and enables in one step):

systemctl enable --now kube-apiserver.service
systemctl enable --now kube-controller-manager.service
systemctl enable --now kube-scheduler.service

Check their status:

systemctl status kube-apiserver.service kube-controller-manager.service kube-scheduler.service

Check the node status, after symlinking the binaries into the PATH:

cd /opt/kubernetes/bin/
ln -s /opt/kubernetes/bin/* /usr/local/bin/

kubectl get nodes

kubectl get nodes -o wide

Since the load balancer and VIP address were defined earlier, simply power on the .50 and .60 hosts:

Run the same initialization steps on the load balancers:

Install nginx on both load balancers from the official nginx yum repo:

# /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

Install nginx:

yum install nginx -y

Edit the nginx config to add a layer-4 (stream) reverse proxy that load-balances across the two k8s master node IPs on port 6443:

Add:

stream {
    upstream k8s-apiserver {
        server 192.168.233.10:6443;
        server 192.168.233.20:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

Check the config file syntax:

nginx -t

Start nginx:

systemctl start nginx
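One placement detail: stream is a top-level context in nginx, so the block above must sit alongside the http block in /etc/nginx/nginx.conf, not inside it. A minimal skeleton:

```
# /etc/nginx/nginx.conf (skeleton) -- stream{} is a sibling of http{},
# never nested inside it
events { worker_connections 1024; }

stream {
    upstream k8s-apiserver {
        server 192.168.233.10:6443;
        server 192.168.233.20:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

http {
    # ... existing http configuration ...
}
```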

Deploy the keepalived service:

Move the online repo files back into place, then install:

mv bak/* .

yum install keepalived -y

Add a script that checks whether nginx is alive. If nginx is gone, it stops keepalived so the VIP fails over (the original had a typo, "keealived"):

#!/bin/bash
# /etc/keepalived/check.nginx.sh
# killall -0 sends no signal; it only tests whether an nginx process exists
if ! killall -0 nginx
then
    systemctl stop keepalived
fi

Make the script executable:

chmod +x check.nginx.sh
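The check works because signal 0 is never delivered: killall -0 nginx (and likewise kill -0 PID) only asks the kernel whether the target process exists, and the exit status carries the answer. A small sketch of the same idea using kill -0 on a PID:

```shell
# Signal 0 delivers nothing: it only tests whether the target process exists.
sleep 30 &
pid=$!

kill -0 "$pid" && echo "process exists"   # exit 0 while sleep is running

kill "$pid"                               # terminate it
wait "$pid" 2>/dev/null || true           # reap it, so the PID is really gone

kill -0 "$pid" 2>/dev/null || echo "process gone"
```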

Edit the keepalived config file:

cd /etc/keepalived
vim keepalived.conf

Delete everything below this point (the remaining sample sections).

Then add a vrrp_script block that runs the check script periodically:

vrrp_script check.nginx {
    script "/etc/keepalived/check.nginx.sh"
    interval 2
}

and reference it inside the vrrp_instance:

    track_script {
        check.nginx
    }

The .60 host acts as the backup node, so its config also needs adjusting (state BACKUP and a lower priority):
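Putting the pieces together, a master-side keepalived.conf looks roughly like the sketch below. The router_id, interface name ens33, priority values, and the VIP 192.168.233.100 are placeholders, since the walkthrough's actual VIP was defined earlier; substitute your own values:

```
# /etc/keepalived/keepalived.conf (sketch; ens33 and 192.168.233.100
# are assumed placeholders)
global_defs {
    router_id nginx_master        # a different id on the .60 backup
}

vrrp_script check.nginx {
    script "/etc/keepalived/check.nginx.sh"
    interval 2
}

vrrp_instance VI_1 {
    state MASTER                  # BACKUP on the .60 host
    interface ens33
    virtual_router_id 51          # must match on master and backup
    priority 100                  # lower (e.g. 90) on the backup
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.233.100
    }
    track_script {
        check.nginx
    }
}
```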

Start keepalived:

systemctl start keepalived
systemctl enable keepalived

To test failover, stop nginx on the master:

Check the backup .60 node; the VIP should have drifted over to it:

Start nginx on the master again. Because the check script stopped keepalived there, start keepalived again too, after which the VIP moves back:

Change bootstrap.kubeconfig, kubelet.kubeconfig, and kube-proxy.kubeconfig on every node so that their server address points at the VIP:

cd /opt/kubernetes/

vim kube-proxy.kubeconfig

vim kubelet.kubeconfig

vim bootstrap.kubeconfig
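Instead of opening each file in vim, the server line can be rewritten in one pass with sed. The snippet below demonstrates the substitution on a throwaway file; 192.168.233.100 stands in for the VIP (an assumed placeholder), and in practice you would run the same sed over the three kubeconfigs:

```shell
# Demonstrate the VIP substitution on a scratch file.
# 192.168.233.100 is an assumed placeholder for the real VIP.
tmp=$(mktemp)
printf 'server: https://192.168.233.10:6443\n' > "$tmp"

# In practice:
# sed -i 's/192.168.233.10:6443/<VIP>:6443/' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig
sed -i 's/192.168.233.10:6443/192.168.233.100:6443/' "$tmp"

cat "$tmp"
rm -f "$tmp"
```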

Restart the kubelet and kube-proxy services:

systemctl restart kubelet.service
systemctl restart kube-proxy.service

On the load balancer, check nginx's connection state; you should see established connections from the nodes to port 6443:

netstat -natp | grep nginx

On master01, change the server IP in the ~/.kube/config file:

Point the server IP in the masters' kube-controller-manager.kubeconfig and kube-scheduler.kubeconfig files at each host's own IP:

On master02:

Restart the kube-controller-manager and kube-scheduler services:

Since master01's files did not need changing, there is no need to restart them there.

systemctl restart kube-controller-manager.service kube-scheduler.service
