Installing a highly available multi-master Kubernetes 1.11 cluster on CentOS 7 with kubeadm

Published: 2019-08-22 12:00

https://www.kubernetes.org.cn/3808.html

Lab environment

Lab architecture


 
lab1: etcd master haproxy keepalived 11.11.11.111
lab2: etcd master haproxy keepalived 11.11.11.112
lab3: etcd master haproxy keepalived 11.11.11.113
lab4: node 11.11.11.114
lab5: node 11.11.11.115
lab6: node 11.11.11.116
vip (load balancer IP): 11.11.11.110

Part 1: Base configuration and installation (run on all nodes)

Install and configure Docker

Kubernetes v1.11.0 recommends Docker 17.03; Docker 1.11, 1.12 and 1.13 also work, but newer Docker releases may not.
In testing, 17.09 did not work correctly: resource limits (memory/CPU) could not be applied.


 
# Remove any existing docker-ce and install the pinned version
yum remove -y docker-ce docker-ce-selinux container-selinux
yum install -y --setopt=obsoletes=0 \
    docker-ce-17.03.1.ce-1.el7.centos \
    docker-ce-selinux-17.03.1.ce-1.el7.centos
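The install above assumes a docker-ce yum repository is already configured. A minimal sketch using the upstream Docker CE repo (an Aliyun mirror can be substituted), plus a way to confirm the pinned 17.03 build is actually available:

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# list the docker-ce builds the repo offers, newest first
yum list docker-ce --showduplicates | sort -r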

Start Docker

systemctl enable docker && systemctl restart docker

Install kubeadm, kubelet and kubectl

Install from the Aliyun mirror


 
# Configure the repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install
# Note: on a current mirror this pulls the latest packages;
# pin the 1.11.x packages if you want to follow this guide exactly
yum install -y kubelet kubeadm kubectl ipvsadm

Configure system parameters


 
# Disable selinux for the current boot
# For a permanent change, edit /etc/sysconfig/selinux
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
setenforce 0

# Turn off swap for the current boot
# For a permanent change, comment out the swap line(s) in /etc/fstab
swapoff -a

# Allow forwarding
# Starting with 1.13, Docker changed its default firewall rules and
# sets the iptables filter FORWARD chain policy to DROP,
# which breaks cross-node pod communication in Kubernetes
iptables -P FORWARD ACCEPT

# Configure forwarding-related kernel parameters, otherwise errors may occur
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
sysctl --system

# Load the ipvs kernel modules
# They must be reloaded after every reboot (see the persistence sketch after this block)
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
lsmod | grep ip_vs
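Since these modules have to be reloaded after every reboot, a minimal sketch for persisting them with systemd-modules-load (standard on CentOS 7); the file name below is an arbitrary choice:

cat <<EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
# load them now and confirm
systemctl restart systemd-modules-load
lsmod | grep -e ip_vs -e nf_conntrack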

Configure hosts resolution


 
cat >>/etc/hosts<<EOF
11.11.11.111 lab1
11.11.11.112 lab2
11.11.11.113 lab3
11.11.11.114 lab4
11.11.11.115 lab5
11.11.11.116 lab6
EOF

Configure the haproxy proxy and keepalived (run on lab1, lab2 and lab3)


 
# Pull the haproxy image
docker pull haproxy:1.7.8-alpine
mkdir /etc/haproxy
cat >/etc/haproxy/haproxy.cfg<<EOF
global
    log 127.0.0.1 local0 err
    maxconn 50000
    uid 99
    gid 99
    #daemon
    nbproc 1
    pidfile haproxy.pid

defaults
    mode http
    log 127.0.0.1 local0 err
    maxconn 50000
    retries 3
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    timeout check 2s

listen admin_stats
    mode http
    bind 0.0.0.0:1080
    log 127.0.0.1 local0 err
    stats refresh 30s
    stats uri /haproxy-status
    stats realm Haproxy\ Statistics
    stats auth will:will
    stats hide-version
    stats admin if TRUE

frontend k8s-https
    bind 0.0.0.0:8443
    mode tcp
    #maxconn 50000
    default_backend k8s-https

backend k8s-https
    mode tcp
    balance roundrobin
    server lab1 11.11.11.111:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
    server lab2 11.11.11.112:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
    server lab3 11.11.11.113:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
EOF

# Start haproxy
docker run -d --name my-haproxy \
    -v /etc/haproxy:/usr/local/etc/haproxy:ro \
    -p 8443:8443 \
    -p 1080:1080 \
    --restart always \
    haproxy:1.7.8-alpine

# Check the logs
docker logs my-haproxy

# View the status page in a browser
# http://11.11.11.111:1080/haproxy-status
# http://11.11.11.112:1080/haproxy-status

# Pull the keepalived image
docker pull osixia/keepalived:1.4.4

# Load the required kernel module before starting
lsmod | grep ip_vs
modprobe ip_vs

# Start keepalived
# eth1 is the interface that carries the 11.11.11.0/24 network in this lab
docker run --net=host --cap-add=NET_ADMIN \
    -e KEEPALIVED_INTERFACE=eth1 \
    -e KEEPALIVED_VIRTUAL_IPS="#PYTHON2BASH:['11.11.11.110']" \
    -e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['11.11.11.111','11.11.11.112','11.11.11.113']" \
    -e KEEPALIVED_PASSWORD=hello \
    --name k8s-keepalived \
    --restart always \
    -d osixia/keepalived:1.4.4

# Check the logs
# Two nodes should become BACKUP and one MASTER
docker logs k8s-keepalived

# The VIP 11.11.11.110 is now assigned to one of the nodes
# (a sketch after this block shows how to check which node holds it)
# Ping test
ping -c4 11.11.11.110

# If it fails, clean up and retry
docker rm -f k8s-keepalived
ip a del 11.11.11.110/32 dev eth1
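To see which node currently holds the VIP and to verify failover, a minimal sketch assuming the eth1 interface and the container name used above:

# Run on lab1/lab2/lab3: the node that prints a line currently holds the VIP
ip addr show eth1 | grep 11.11.11.110

# On the current holder, simulate a failure and confirm the VIP moves
docker stop k8s-keepalived
ping -c4 11.11.11.110    # should still answer, now from another node
docker start k8s-keepalived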

Configure and start kubelet (run on all nodes)


 
# Point kubelet at the Aliyun-hosted pause image
# and make kubelet's cgroup driver match Docker's
# Read the cgroup driver Docker is using
DOCKER_CGROUPS=$(docker info | grep 'Cgroup' | cut -d' ' -f3)
echo $DOCKER_CGROUPS
cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
EOF

# Start kubelet
systemctl daemon-reload
systemctl enable kubelet && systemctl restart kubelet
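kubelet has no cluster configuration yet at this point, so it is expected to keep restarting until kubeadm init (or kubeadm join) runs on the node; a quick check of what was written, assuming the paths above:

cat /etc/sysconfig/kubelet
systemctl status kubelet --no-pager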

Configure the master nodes

Configure the first master node (run on lab1)


 
# On 1.11, ipvs mode has problems on CentOS
# See https://github.com/kubernetes/kubernetes/issues/65461

# Generate the config file
CP0_IP="11.11.11.111"
CP0_HOSTNAME="lab1"
cat >kubeadm-master.config<<EOF
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.0
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers

apiServerCertSANs:
- "lab1"
- "lab2"
- "lab3"
- "11.11.11.111"
- "11.11.11.112"
- "11.11.11.113"
- "11.11.11.110"
- "127.0.0.1"

api:
  advertiseAddress: $CP0_IP
  controlPlaneEndpoint: 11.11.11.110:8443

etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://$CP0_IP:2379"
      advertise-client-urls: "https://$CP0_IP:2379"
      listen-peer-urls: "https://$CP0_IP:2380"
      initial-advertise-peer-urls: "https://$CP0_IP:2380"
      initial-cluster: "$CP0_HOSTNAME=https://$CP0_IP:2380"
    serverCertSANs:
      - $CP0_HOSTNAME
      - $CP0_IP
    peerCertSANs:
      - $CP0_HOSTNAME
      - $CP0_IP

controllerManagerExtraArgs:
  node-monitor-grace-period: 10s
  pod-eviction-timeout: 10s

networking:
  podSubnet: 10.244.0.0/16

kubeProxy:
  config:
    # mode: ipvs
    mode: iptables
EOF

# Pre-pull the images
# Re-run the command if it fails
kubeadm config images pull --config kubeadm-master.config

# Initialize
# Save the join command printed at the end
kubeadm init --config kubeadm-master.config

# Pack the CA and related files and copy them to the other master nodes
cd /etc/kubernetes && tar cvzf k8s-key.tgz admin.conf pki/ca.* pki/sa.* pki/front-proxy-ca.* pki/etcd/ca.*
scp k8s-key.tgz lab2:~/
scp k8s-key.tgz lab3:~/
ssh lab2 'tar xf k8s-key.tgz -C /etc/kubernetes/'
ssh lab3 'tar xf k8s-key.tgz -C /etc/kubernetes/'
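Before adding lab2 and lab3, it can be worth confirming that the single-member etcd cluster is healthy. A minimal sketch that reuses the etcdctl flags the next steps also use; the pod name etcd-lab1 follows kubeadm's etcd-<hostname> naming:

KUBECONFIG=/etc/kubernetes/admin.conf kubectl exec -n kube-system etcd-lab1 -- etcdctl \
    --ca-file /etc/kubernetes/pki/etcd/ca.crt \
    --cert-file /etc/kubernetes/pki/etcd/peer.crt \
    --key-file /etc/kubernetes/pki/etcd/peer.key \
    --endpoints=https://11.11.11.111:2379 cluster-health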

Configure the second master node (run on lab2)


 
# On 1.11, ipvs mode has problems on CentOS
# See https://github.com/kubernetes/kubernetes/issues/65461

# Generate the config file
CP0_IP="11.11.11.111"
CP0_HOSTNAME="lab1"
CP1_IP="11.11.11.112"
CP1_HOSTNAME="lab2"
cat >kubeadm-master.config<<EOF
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.0
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers

apiServerCertSANs:
- "lab1"
- "lab2"
- "lab3"
- "11.11.11.111"
- "11.11.11.112"
- "11.11.11.113"
- "11.11.11.110"
- "127.0.0.1"

api:
  advertiseAddress: $CP1_IP
  controlPlaneEndpoint: 11.11.11.110:8443

etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://$CP1_IP:2379"
      advertise-client-urls: "https://$CP1_IP:2379"
      listen-peer-urls: "https://$CP1_IP:2380"
      initial-advertise-peer-urls: "https://$CP1_IP:2380"
      initial-cluster: "$CP0_HOSTNAME=https://$CP0_IP:2380,$CP1_HOSTNAME=https://$CP1_IP:2380"
      initial-cluster-state: existing
    serverCertSANs:
      - $CP1_HOSTNAME
      - $CP1_IP
    peerCertSANs:
      - $CP1_HOSTNAME
      - $CP1_IP

controllerManagerExtraArgs:
  node-monitor-grace-period: 10s
  pod-eviction-timeout: 10s

networking:
  podSubnet: 10.244.0.0/16

kubeProxy:
  config:
    # mode: ipvs
    mode: iptables
EOF

# Configure kubelet
kubeadm alpha phase certs all --config kubeadm-master.config
kubeadm alpha phase kubelet config write-to-disk --config kubeadm-master.config
kubeadm alpha phase kubelet write-env-file --config kubeadm-master.config
kubeadm alpha phase kubeconfig kubelet --config kubeadm-master.config
systemctl restart kubelet

# Add this node's etcd member to the cluster
CP0_IP="11.11.11.111"
CP0_HOSTNAME="lab1"
CP1_IP="11.11.11.112"
CP1_HOSTNAME="lab2"
KUBECONFIG=/etc/kubernetes/admin.conf kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP1_HOSTNAME} https://${CP1_IP}:2380
kubeadm alpha phase etcd local --config kubeadm-master.config

# Pre-pull the images
# Re-run the command if it fails
kubeadm config images pull --config kubeadm-master.config

# Deploy the control plane
kubeadm alpha phase kubeconfig all --config kubeadm-master.config
kubeadm alpha phase controlplane all --config kubeadm-master.config
kubeadm alpha phase mark-master --config kubeadm-master.config

Configure the third master node (run on lab3)


 
# On 1.11, ipvs mode has problems on CentOS
# See https://github.com/kubernetes/kubernetes/issues/65461

# Generate the config file
CP0_IP="11.11.11.111"
CP0_HOSTNAME="lab1"
CP1_IP="11.11.11.112"
CP1_HOSTNAME="lab2"
CP2_IP="11.11.11.113"
CP2_HOSTNAME="lab3"
cat >kubeadm-master.config<<EOF
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.0
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers

apiServerCertSANs:
- "lab1"
- "lab2"
- "lab3"
- "11.11.11.111"
- "11.11.11.112"
- "11.11.11.113"
- "11.11.11.110"
- "127.0.0.1"

api:
  advertiseAddress: $CP2_IP
  controlPlaneEndpoint: 11.11.11.110:8443

etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://$CP2_IP:2379"
      advertise-client-urls: "https://$CP2_IP:2379"
      listen-peer-urls: "https://$CP2_IP:2380"
      initial-advertise-peer-urls: "https://$CP2_IP:2380"
      initial-cluster: "$CP0_HOSTNAME=https://$CP0_IP:2380,$CP1_HOSTNAME=https://$CP1_IP:2380,$CP2_HOSTNAME=https://$CP2_IP:2380"
      initial-cluster-state: existing
    serverCertSANs:
      - $CP2_HOSTNAME
      - $CP2_IP
    peerCertSANs:
      - $CP2_HOSTNAME
      - $CP2_IP

controllerManagerExtraArgs:
  node-monitor-grace-period: 10s
  pod-eviction-timeout: 10s

networking:
  podSubnet: 10.244.0.0/16

kubeProxy:
  config:
    # mode: ipvs
    mode: iptables
EOF

# Configure kubelet
kubeadm alpha phase certs all --config kubeadm-master.config
kubeadm alpha phase kubelet config write-to-disk --config kubeadm-master.config
kubeadm alpha phase kubelet write-env-file --config kubeadm-master.config
kubeadm alpha phase kubeconfig kubelet --config kubeadm-master.config
systemctl restart kubelet

# Add this node's etcd member to the cluster
CP0_IP="11.11.11.111"
CP0_HOSTNAME="lab1"
CP2_IP="11.11.11.113"
CP2_HOSTNAME="lab3"
KUBECONFIG=/etc/kubernetes/admin.conf kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP2_HOSTNAME} https://${CP2_IP}:2380
kubeadm alpha phase etcd local --config kubeadm-master.config

# Pre-pull the images
# Re-run the command if it fails
kubeadm config images pull --config kubeadm-master.config

# Deploy the control plane
kubeadm alpha phase kubeconfig all --config kubeadm-master.config
kubeadm alpha phase controlplane all --config kubeadm-master.config
kubeadm alpha phase mark-master --config kubeadm-master.config
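With all three masters up, etcd should report three members and kubectl should list three master nodes; a quick sanity check along the same lines as the etcdctl call above:

KUBECONFIG=/etc/kubernetes/admin.conf kubectl exec -n kube-system etcd-lab1 -- etcdctl \
    --ca-file /etc/kubernetes/pki/etcd/ca.crt \
    --cert-file /etc/kubernetes/pki/etcd/peer.crt \
    --key-file /etc/kubernetes/pki/etcd/peer.key \
    --endpoints=https://11.11.11.111:2379 member list
KUBECONFIG=/etc/kubernetes/admin.conf kubectl get nodes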

Configure kubectl (run on any master node)


 
rm -rf $HOME/.kube
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# List the nodes
kubectl get nodes

# Nodes only show Ready after the network plugin is installed and configured
# Allow the masters to schedule pods and take workload, so that other system
# components such as dashboard, heapster, efk, etc. can be deployed there
kubectl taint nodes --all node-role.kubernetes.io/master-

Configure the network plugin (run on any master node)


 
# Download the manifest
mkdir flannel && cd flannel
wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

# Edit the manifest
# The network here must match the podSubnet used in the kubeadm config above
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }

# Change the image
image: registry.cn-shanghai.aliyuncs.com/gcr-k8s/flannel:v0.10.0-amd64

# If the nodes have more than one network interface, see flannel issue 39701
# https://github.com/kubernetes/kubernetes/issues/39701
# You currently need to pass --iface in kube-flannel.yml to name the interface
# on the cluster's internal network; otherwise DNS resolution may fail and
# containers may be unable to communicate. Download kube-flannel.yml and add
# --iface=<iface-name> to the flanneld arguments:
containers:
- name: kube-flannel
  image: registry.cn-shanghai.aliyuncs.com/gcr-k8s/flannel:v0.10.0-amd64
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth1

# Apply
kubectl apply -f kube-flannel.yml

# Check
kubectl get pods --namespace kube-system
kubectl get svc --namespace kube-system
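A quick way to confirm flannel rolled out on every node; the app=flannel label is what the v0.10.0 manifest uses for its DaemonSet pods (treat the label as an assumption):

kubectl get pods -n kube-system -l app=flannel -o wide
# nodes should turn Ready once their flannel pod is running
kubectl get nodes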

Join the worker nodes to the cluster (run on all node machines)


 
# This is the command printed when the first master was initialized
kubeadm join 11.11.11.110:8443 --token yzb7v7.dy40mhlljt1d48i9 --discovery-token-ca-cert-hash sha256:61ec309e6f942305006e6622dcadedcc64420e361231eff23cb535a183c0e77a
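After joining, it can be confirmed from any master that the new nodes registered; lab4-lab6 will stay NotReady until a flannel pod is running on each of them:

kubectl get nodes -o wide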

Basic tests

Test pod-to-pod communication and DNS

Once the network plugin is configured, kubeadm deploys coredns automatically.

The tests below can be run on any node where kubectl has been configured.
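Before running the DNS tests it may help to confirm coredns is up; a minimal check, assuming the pods keep the k8s-app=kube-dns label that kubeadm 1.11 applies to coredns:

kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide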

Start the test workload


 
kubectl run nginx --replicas=2 --image=nginx:alpine --port=80
kubectl expose deployment nginx --type=NodePort --name=example-service-nodeport
kubectl expose deployment nginx --name=example-service

Check the status


 
kubectl get deploy
kubectl get pods
kubectl get svc
kubectl describe svc example-service

DNS resolution


 
kubectl run curl --image=radial/busyboxplus:curl -i --tty
nslookup kubernetes
nslookup example-service
curl example-service

Access test


 
# 10.96.59.56 is the cluster IP shown by kubectl get svc
curl "10.96.59.56:80"

# 32223 is the NodePort shown by kubectl get svc
# open these in a browser:
# http://11.11.11.112:32223/
# http://11.11.11.113:32223/

Clean up


 
kubectl delete svc example-service example-service-nodeport
kubectl delete deploy nginx curl

High availability test

Shut down any one master node, check whether the basic tests from the previous step still pass, and inspect the cluster state. Do not shut down two masters at once: the etcd cluster has 3 members and needs a quorum of 2 (floor(3/2)+1), so it can tolerate at most one member being down.
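A minimal way to simulate the failure, assuming lab1 is the master being taken down and the commands are then run from a surviving master:

# On lab1: stop the kubelet and the container runtime (or simply power the node off)
systemctl stop kubelet
systemctl stop docker

# From a surviving master: watch lab1 go NotReady while services keep answering
kubectl get nodes -w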


 
# Check component status
kubectl get pod --all-namespaces -o wide
kubectl get pod --all-namespaces -o wide | grep lab1
kubectl get pod --all-namespaces -o wide | grep lab2
kubectl get pod --all-namespaces -o wide | grep lab3
kubectl get nodes -o wide
kubectl get deploy
kubectl get pods
kubectl get svc

# Access test
CURL_POD=$(kubectl get pods | grep curl | grep Running | cut -d ' ' -f1)
kubectl exec -ti $CURL_POD -- sh --tty
nslookup kubernetes
nslookup example-service
curl example-service

Tips

What if you forget the join command printed when the first master was initialized?


 
# Simple method
kubeadm token create --print-join-command

# Alternative method
token=$(kubeadm token generate)
kubeadm token create $token --print-join-command --ttl=0
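If only the --discovery-token-ca-cert-hash value was lost, it can be recomputed from the cluster CA on any master with the standard openssl pipeline:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'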
