Kubernetes 1.9.1 cluster

Kubernetes HA (high-availability) deployment

1. Deployment architecture

  • Kubernetes component overview

kube-apiserver: the core of the cluster. It exposes the cluster API, serves as the communication hub for all other components, and enforces cluster security controls.

etcd: the cluster's data store, holding all configuration and state information. It is critical: if its data is lost, the cluster cannot be recovered, so an HA deployment starts with an HA etcd cluster.

kube-scheduler: the Pod scheduling center of the cluster. In a default kubeadm installation the --leader-elect flag is already set to true, ensuring that only one kube-scheduler in the master set is active at a time.

kube-controller-manager: the cluster state manager. When the actual state of the cluster differs from the desired state, the controller manager works to bring it back to the desired state.

kubelet: the Kubernetes node agent, responsible for talking to the Docker engine on each node.

kube-proxy: runs on every node and forwards traffic from a Service VIP to its endpoint Pods, currently implemented mainly through iptables rules.

2. Environment

  • 172.16.50.121 morepay01 CentOS 7.4.1708
  • 172.16.50.122 morepay02 CentOS 7.4.1708
  • 172.16.50.123 morepay03 CentOS 7.4.1708
  • 172.16.50.125 morepay04 CentOS 7.4.1708

3. Base configuration

3.1. Passwordless SSH

All packages are uploaded to server 172.16.50.121. To make it easy to copy images and configuration files, configure passwordless SSH from 172.16.50.121 to the other servers.

  • On the morepay01 host:
    ssh-keygen
    for i in 121 122 123 125 ; do ssh-copy-id root@172.16.50.$i ; done

3.2. Kernel parameters

The following parameters are needed on all hosts.

On the morepay01 host, create the file /etc/sysctl.d/k8s.conf with the following content:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Distribute it to the other hosts

for i in 122 123 125 ; do scp /etc/sysctl.d/k8s.conf root@172.16.50.$i:/etc/sysctl.d/k8s.conf ; done

Apply it

for i in 121 122 123 125 ; do ssh root@172.16.50.$i  "sysctl -p /etc/sysctl.d/k8s.conf " ; done

If errors like the following appear

sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory

run modprobe br_netfilter on the affected host and re-run the sysctl command above.
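The modprobe call only loads the module for the current boot. As an optional extra (not part of the original steps), the module can also be registered with systemd-modules-load so it is loaded again after a reboot; a minimal sketch for all hosts:

for i in 121 122 123 125 ; do ssh root@172.16.50.$i "modprobe br_netfilter && echo br_netfilter > /etc/modules-load.d/br_netfilter.conf" ; done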

3.3. Disable SELinux

for i in 121 122 123 125 ; do ssh root@172.16.50.$i  'setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/" /etc/sysconfig/selinux' ; done

3.4. /etc/hosts configuration

Add the following entries to /etc/hosts on all hosts (edit the file on morepay01, then distribute it):

  • 172.16.50.121 morepay01
  • 172.16.50.122 morepay02
  • 172.16.50.123 morepay03
  • 172.16.50.125 morepay04

    for i in 122 123 125 ; do scp /etc/hosts root@172.16.50.$i:/etc/hosts ; done

4. etcd cluster deployment

Reference:

Steps omitted.
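Although the etcd setup itself is omitted here, the cluster can be sanity-checked before continuing. Assuming a three-node etcd cluster listening on the HTTP endpoints that config.yaml uses later, and an etcdctl binary speaking the v2 API, a quick health check would look like:

etcdctl --endpoints=http://172.16.50.121:2379,http://172.16.50.122:2379,http://172.16.50.123:2379 cluster-health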

5. Docker installation

for i in 121 122 123 125 ; do ssh root@172.16.50.$i  "yum install docker -y" ; donefor i in 121 122 123 125 ; do ssh root@172.16.50.$i  "systemctl start docker.service && systemctl status docker.service && systemctl enable docker.service" ; done

6. Kubernetes component installation

for i in 121 122 123 125 ; do ssh root@172.16.50.$i  "mkdir /data/soft -p" ; done  cd /data/soft

Download the following packages (reaching the upstream repositories may require a proxy/VPN):

  • kubeadm-1.9.1-0.x86_64.rpm
  • kubectl-1.9.1-0.x86_64.rpm
  • kubelet-1.9.1-0.x86_64.rpm
  • kubernetes-cni-0.6.0-0.x86_64.rpm

Copy them to the other servers

for i in 122 123 125 ; do scp /data/soft/* root@172.16.50.$i:/data/soft ; done

Install the packages

for i in 121 122 123 125 ; do ssh root@172.16.50.$i  "yum install  /data/soft/* -y" ; done

Enable kubelet to start on boot

for i in 121 122 123 125 ; do ssh root@172.16.50.$i  "systemctl enable kubelet.service" ; done

7. Initialize the cluster

7.1. Import the base Kubernetes images

Download the following Docker images (reaching gcr.io may require a proxy/VPN):

  • k8s-dns-dnsmasq-nanny-amd64_1.14.7.tar
  • k8s-dns-kube-dns-amd64_1.14.7.tar
  • k8s-dns-sidecar-amd64_1.14.7.tar
  • kube-apiserver-amd64_v1.9.1.tar
  • kube-controller-manager-amd64_v1.9.1.tar
  • kube-proxy-amd64_v1.9.1.tar
  • kube-scheduler-amd64_v1.9.1.tar
  • pause-amd64_3.0.tar

Distribute the images to the other servers

for i in 121 122 123 125 ; do ssh root@172.16.50.$i  "mkdir /data/images" ; done  for i in 122 123 125 ; do scp /data/images/* root@172.16.50.$i:/data/images ; done

Import the images

for j in `ls /data/images`; do docker load --input /data/images/$j ; done    # run on all machines
docker images
REPOSITORY                                                TAG        IMAGE ID       CREATED         SIZE
gcr.io/google_containers/kube-apiserver-amd64             v1.9.1     e313a3e9d78d   6 days ago      210.4 MB
gcr.io/google_containers/kube-controller-manager-amd64    v1.9.1     4978f9a64966   6 days ago      137.8 MB
gcr.io/google_containers/kube-proxy-amd64                 v1.9.1     e470f20528f9   6 days ago      109.1 MB
gcr.io/google_containers/kube-scheduler-amd64             v1.9.1     677911f7ae8f   6 days ago      62.7 MB
gcr.io/google_containers/k8s-dns-sidecar-amd64            1.14.7     db76ee297b85   11 weeks ago    42.03 MB
gcr.io/google_containers/k8s-dns-kube-dns-amd64           1.14.7     5d049a8c4eec   11 weeks ago    50.27 MB
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64      1.14.7     5feec37454f4   11 weeks ago    40.95 MB
gcr.io/google_containers/pause-amd64                      3.0        99e59f495ffa   20 months ago   746.9 kB

7.2. Initialization

Create the configuration file config.yaml (on morepay01) as follows:

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.9.1
networking:
  podSubnet: 10.244.0.0/16
apiServerCertSANs:
- morepay01
- morepay02
- morepay03
- 172.16.50.121
- 172.16.50.122
- 172.16.50.123
- 172.16.50.200
apiServerExtraArgs:
  endpoint-reconciler-type: "lease"
etcd:
  endpoints:
  - http://172.16.50.121:2379
  - http://172.16.50.122:2379
  - http://172.16.50.123:2379
token: "deed3a.b3542929fcbce0f0"
tokenTTL: "0"

172.16.50.200 is the keepalived virtual IP (VIP) configured in section 8.

Bootstrap the cluster with kubeadm on the morepay01 host

kubeadm init --config=config.yaml

Copy the /etc/kubernetes/pki/ directory to morepay02 and morepay03, so that all masters share the same CA and certificates

for i in 122 123 ; do ssh root@172.16.50.$i  "mkdir /etc/kubernetes/pki" ; done for i in 122 123 ; do scp /etc/kubernetes/pki/* root@172.16.50.$i:/etc/kubernetes/pki ; done

Bootstrap kubeadm on the morepay02 and morepay03 hosts
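The loop below reads /data/soft/config.yaml on each remote host. Assuming the file has so far only been created on morepay01, it would need to be copied over first; a sketch:

for i in 122 123 ; do scp /data/soft/config.yaml root@172.16.50.$i:/data/soft/ ; done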

for i in 122 123 ; do ssh root@172.16.50.$i  "kubeadm init --config=/data/soft/config.yaml" ; done

Set up the kubectl admin config on each master

for i in 121 122 123 ; do ssh root@172.16.50.$i  "mkdir -p $HOME/.kube && sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config" ; done

Check the Kubernetes cluster status; kube-dns stays Pending until the network plugin is installed in the next step

kubectl get pod --all-namespaces
NAMESPACE     NAME                                READY     STATUS    RESTARTS   AGE
kube-system   kube-apiserver-morepay01            1/1       Running   0          17m
kube-system   kube-apiserver-morepay02            1/1       Running   0          1m
kube-system   kube-apiserver-morepay03            1/1       Running   0          29s
kube-system   kube-controller-manager-morepay01   1/1       Running   0          17m
kube-system   kube-controller-manager-morepay02   1/1       Running   0          1m
kube-system   kube-controller-manager-morepay03   1/1       Running   0          41s
kube-system   kube-dns-6f4fd4bdf-hh9zg            0/3       Pending   0          18m
kube-system   kube-proxy-48k7v                    1/1       Running   0          1m
kube-system   kube-proxy-m6m7j                    1/1       Running   0          2m
kube-system   kube-proxy-rl5kz                    1/1       Running   0          18m
kube-system   kube-scheduler-morepay01            1/1       Running   0          17m
kube-system   kube-scheduler-morepay02            1/1       Running   0          1m
kube-system   kube-scheduler-morepay03            1/1       Running   0          35s

7.3. Install the flannel network plugin

kubectl create -f kube-flannel.yml

Check that the Pods are running

kubectl get pod --all-namespaces
NAMESPACE     NAME                                READY     STATUS    RESTARTS   AGE
kube-system   kube-apiserver-morepay01            1/1       Running   0          21m
kube-system   kube-apiserver-morepay02            1/1       Running   0          5m
kube-system   kube-apiserver-morepay03            1/1       Running   0          4m
kube-system   kube-controller-manager-morepay01   1/1       Running   0          21m
kube-system   kube-controller-manager-morepay02   1/1       Running   0          5m
kube-system   kube-controller-manager-morepay03   1/1       Running   0          4m
kube-system   kube-dns-6f4fd4bdf-hh9zg            3/3       Running   0          22m
kube-system   kube-flannel-ds-6vwbs               1/1       Running   0          1m
kube-system   kube-flannel-ds-7gqv2               1/1       Running   0          1m
kube-system   kube-flannel-ds-k8dp9               1/1       Running   0          1m
kube-system   kube-proxy-48k7v                    1/1       Running   0          6m
kube-system   kube-proxy-m6m7j                    1/1       Running   0          6m
kube-system   kube-proxy-rl5kz                    1/1       Running   0          22m
kube-system   kube-scheduler-morepay01            1/1       Running   0          21m
kube-system   kube-scheduler-morepay02            1/1       Running   0          5m
kube-system   kube-scheduler-morepay03            1/1       Running   0          4m

8. keepalived installation

for i in 121 122 123 ; do ssh root@172.16.50.$i  "yum install keepalived -y" ; done for i in 121 122 123 ; do ssh root@172.16.50.$i  "systemctl enable keepalived" ; done

Create the health-check script check_apiserver.sh as follows; if no kube-apiserver process is seen for ten consecutive checks (5 seconds apart), it stops keepalived so the VIP can fail over to another node:

#!/bin/bash
err=0
for k in $( seq 1 10 )
do
    check_code=$(ps -ef|grep kube-apiserver | wc -l)
    if [ "$check_code" = "1" ]; then
        err=$(expr $err + 1)
        sleep 5
        continue
    else
        err=0
        break
    fi
done
if [ "$err" != "0" ]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

Copy the script to morepay01, morepay02 and morepay03 and make it executable

for i in 121 122 123 ; do scp /data/soft/check_apiserver.sh root@172.16.50.$i:/etc/keepalived/ ; done
for i in 121 122 123 ; do ssh root@172.16.50.$i  "chmod +x /etc/keepalived/check_apiserver.sh" ; done

Create the keepalived.conf configuration file as follows:

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 60
    weight -5
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 53
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 4be37dc3b4c90194d1600c483e10ad1d
    }
    virtual_ipaddress {
        172.16.50.200
    }
    track_script {
       chk_apiserver
    }
}

Copy the configuration file to morepay01, morepay02 and morepay03 (then adjust it on the backup nodes as sketched below)

for i in 121 122 123 ; do scp /data/soft/keepalived.conf root@172.16.50.$i:/etc/keepalived/keepalived.conf ; done
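The sample configuration above uses state MASTER and priority 100. For a clean failover setup, morepay02 and morepay03 would normally run as BACKUP with lower priorities so only one node holds the VIP at a time; a sketch with assumed priorities of 90 and 80:

for i in 122 123 ; do ssh root@172.16.50.$i "sed -i 's/state MASTER/state BACKUP/' /etc/keepalived/keepalived.conf" ; done
ssh root@172.16.50.122 "sed -i 's/priority 100/priority 90/' /etc/keepalived/keepalived.conf"
ssh root@172.16.50.123 "sed -i 's/priority 100/priority 80/' /etc/keepalived/keepalived.conf"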

Start keepalived

for i in 121 122 123 ; do ssh root@172.16.50.$i  "systemctl start keepalived" ; done

9. Join the worker node to the cluster

for i in 125 ; do echo $i ; ssh root@172.16.50.$i  "kubeadm join --token deed3a.b3542929fcbce0f0 172.16.50.200:6443 --discovery-token-ca-cert-hash sha256:d49e5784284ad741aaa8259b9987d52a394b5d76d137d179951f4979e27eb58d" ; done
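Once the join finishes, the new node should appear from any master; a quick check:

kubectl get nodes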

10. kube-proxy configuration

kubectl edit -n kube-system configmap/kube-proxy

Change the apiserver address in the embedded kubeconfig so kube-proxy talks to the VIP:

server: https://172.16.50.200:6443

Delete the kube-proxy Pods so they are recreated with the new configuration

kubectl get pods --all-namespaces -o wide | grep proxy
kubectl delete pod -n kube-system kube-proxy-xxxxx
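Instead of deleting each kube-proxy Pod by its generated name, all of them can be recreated in one go; a sketch (the DaemonSet recreates the Pods automatically):

kubectl -n kube-system get pods -o name | grep kube-proxy | xargs kubectl -n kube-system delete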

11. Install add-ons (optional)

kubectl create -f heapster.yaml
kubectl create -f influxdb.yaml
kubectl create -f kubernetes-dashboard.yaml
kubectl create -f grafana.yaml

12. Access the dashboard

12.1. Export the client certificate

cat /etc/kubernetes/admin.conf | grep client-certificate-data | awk -F ': ' '{print $2}' | base64 -d > /etc/kubernetes/pki/client.crt
cat /etc/kubernetes/admin.conf | grep client-key-data | awk -F ': ' '{print $2}' | base64 -d > /etc/kubernetes/pki/client.key
openssl pkcs12 -export -inkey /etc/kubernetes/pki/client.key -in /etc/kubernetes/pki/client.crt -out /etc/kubernetes/pki/client.pfx

Download the /etc/kubernetes/pki/client.pfx file and import it into the browser

12.2. Retrieve the token

kubectl get secret --all-namespaces | grep dashboard-admin
kubectl describe secret kubernetes-dashboard-admin-token-xxxx -n kube-system

13. Autoscaling

A supporting service needs to be created first

kubectl create -f deploy.yaml

Test: create a deployment

kubectl run php-apache --image=172.16.50.116/google_containers/hpa-example:latest --requests=cpu=200m --expose --port=80

Create the HorizontalPodAutoscaler

kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10

Check the HPA

kubectl get hpa
NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   0% / 50%   1         10        1          9m

Generate CPU load

while true; do wget -q -O- 172.16.50.121:30150; done

Then check the HPA again:

kubectl get hpa
NAME         REFERENCE               TARGETS      MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   290% / 50%   1         10        4          11m

14. Manually generating certificates

Reference:

Reposted from: https://blog.51cto.com/11889458/2105047

查看>>