Deploying a Kubernetes 1.5 Cluster on CentOS 7 with yum

Published: 2019-08-18 11:31

1. Environment and preparation

1.1 Host operating system

  The hosts run 64-bit CentOS 7.3:

[root@localhost ~]# uname -a
Linux localhost.localdomain 3.10.0-514.6.1.el7.x86_64 #1 SMP Wed Jan 18 13:06:36 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost ~]# cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)

1.2 Host information

  Three machines are used to run the k8s environment:

Hostname      IP              Roles
k8s-master    10.0.251.148    kube-apiserver, kube-controller-manager, kube-scheduler, docker, etcd, registry, flannel
k8s-node-1    10.0.251.153    kubelet, kube-proxy, docker, flannel
k8s-node-2    10.0.251.155    kubelet, kube-proxy, docker, flannel

  Set the hostname on each machine.

  On the master:

[root@localhost ~]# hostnamectl --static set-hostname k8s-master

  On node1:

[root@localhost ~]# hostnamectl --static set-hostname k8s-node-1

  On node2:

[root@localhost ~]# hostnamectl --static set-hostname k8s-node-2

  Add the host entries on all three machines:

echo '10.0.251.148 k8s-master
10.0.251.148 etcd
10.0.251.148 registry
10.0.251.153 k8s-node-1
10.0.251.155 k8s-node-2' >> /etc/hosts
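  To confirm the entries resolve, you can query one of them; with the /etc/hosts above, the output should look like:

[root@localhost ~]# getent hosts etcd
10.0.251.148    etcd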

1.3 Disable the firewall on all three machines

systemctl disable firewalld.service
systemctl stop firewalld.service

2. Deploy etcd

  Kubernetes depends on etcd, so deploy it first; here it is installed with yum:

[root@localhost ~]# yum install etcd -y

The yum-installed etcd reads its configuration from /etc/etcd/etcd.conf. Edit the file, changing ETCD_NAME, ETCD_LISTEN_CLIENT_URLS, and ETCD_ADVERTISE_CLIENT_URLS as shown:


 
[root@localhost ~]# vi /etc/etcd/etcd.conf

# [member]
ETCD_NAME=master
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""

Start etcd and verify its status:


 
[root@localhost ~]# systemctl start etcd
[root@localhost ~]# etcdctl set testdir/testkey0 0
0
[root@localhost ~]# etcdctl get testdir/testkey0
0
[root@localhost ~]# etcdctl -C http://etcd:4001 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy
[root@localhost ~]# etcdctl -C http://etcd:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy

3. Deploy the master

3.1 Install Docker

[root@k8s-master ~]# yum install docker

Configure Docker so that it can pull images from the local registry:


 
[root@k8s-master ~]# vim /etc/sysconfig/docker

# /etc/sysconfig/docker

# Modify these options if you want to change the way the docker daemon runs.
# Append --insecure-registry to the existing OPTIONS rather than assigning
# OPTIONS a second time, which would silently drop the default flags.
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry registry:5000'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi

Enable the service at boot and start it:

[root@k8s-master ~]# chkconfig docker on
[root@k8s-master ~]# service docker start
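As a sanity check, docker info should now report a running daemon; Docker versions that list them (1.12 and later) also show the insecure registry picked up from the options above:

[root@k8s-master ~]# docker info | grep -A 1 -i insecure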

3.2 Install Kubernetes

Configure a yum repository for the Kubernetes packages:


 
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0

[root@k8s-master ~]# yum install kubernetes
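To confirm which version the repository delivered (this walkthrough targets 1.5.x), you can list the installed packages; exact package names can vary with the repository used:

[root@k8s-master ~]# rpm -qa | grep -i kube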

3.3 Configure and start Kubernetes

The following components run on the Kubernetes master:

    Kubernetes API Server

    Kubernetes Controller Manager

    Kubernetes Scheduler

Edit the corresponding configuration files as shown below:

3.3.1 /etc/kubernetes/apiserver


 
[root@k8s-master ~]# vim /etc/kubernetes/apiserver

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

3.3.2 /etc/kubernetes/config


 
[root@k8s-master ~]# vim /etc/kubernetes/config

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"

Start the services and enable them at boot:

[root@k8s-master ~]# systemctl enable kube-apiserver.service
[root@k8s-master ~]# systemctl start kube-apiserver.service
[root@k8s-master ~]# systemctl enable kube-controller-manager.service
[root@k8s-master ~]# systemctl start kube-controller-manager.service
[root@k8s-master ~]# systemctl enable kube-scheduler.service
[root@k8s-master ~]# systemctl start kube-scheduler.service
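With the three services up, the master can report the health of its own components; a healthy 1.5 master typically shows something like this (output illustrative):

[root@k8s-master ~]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}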

4. Deploy the nodes

4.1 Install Docker

  See section 3.1.

4.2 Install Kubernetes

  See section 3.2.

4.3 Configure and start Kubernetes

  The following components run on each Kubernetes node:

    Kubelet

    Kube-proxy

Edit the corresponding configuration files as shown below:

4.3.1 /etc/kubernetes/config


 
[root@k8s-node-1 ~]# vim /etc/kubernetes/config

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"

4.3.2 /etc/kubernetes/kubelet


 
[root@k8s-node-1 ~]# vim /etc/kubernetes/kubelet

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
# (set to k8s-node-2 on the second node)
KUBELET_HOSTNAME="--hostname-override=k8s-node-1"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""

Start the services and enable them at boot (run on each node):

[root@k8s-node-1 ~]# systemctl enable kubelet.service
[root@k8s-node-1 ~]# systemctl start kubelet.service
[root@k8s-node-1 ~]# systemctl enable kube-proxy.service
[root@k8s-node-1 ~]# systemctl start kube-proxy.service

4.4 Check status

  On the master, list the nodes in the cluster and their status:

[root@k8s-master ~]# kubectl -s http://k8s-master:8080 get node
NAME         STATUS    AGE
k8s-node-1   Ready     3m
k8s-node-2   Ready     16s
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    AGE
k8s-node-1   Ready     3m
k8s-node-2   Ready     43s

A Kubernetes cluster is now up, but it cannot do useful work yet; continue with the following steps.

5. Create the overlay network: Flannel

5.1 Install Flannel

  Run the following on the master and on every node:

[root@k8s-master ~]# yum install flannel

The version installed here was 0.0.5.

5.2 Configure Flannel

  On the master and all nodes, edit /etc/sysconfig/flanneld, changing FLANNEL_ETCD_ENDPOINTS as shown:


 
[root@k8s-master ~]# vi /etc/sysconfig/flanneld

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

5.3 Configure the flannel key in etcd

  Flannel stores its network configuration in etcd, which keeps the configuration consistent across flannel instances, so the key must be created first. The '/atomic.io/network/config' key must match the FLANNEL_ETCD_PREFIX setting in /etc/sysconfig/flanneld above; if it does not, flanneld will fail to start.

[root@k8s-master ~]# etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'
{ "Network": "10.0.0.0/16" }
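Reading the key back confirms it was stored. Once flanneld is running (next step), each host also registers a subnet lease under the same prefix, which you can list:

[root@k8s-master ~]# etcdctl get /atomic.io/network/config
{ "Network": "10.0.0.0/16" }
[root@k8s-master ~]# etcdctl ls /atomic.io/network/subnets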

5.4 Start

  After starting Flannel, docker and the Kubernetes services must be restarted in turn so they pick up the new network.

  On the master:

systemctl enable flanneld.service
systemctl start flanneld.service
service docker restart
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service

  On each node:

systemctl enable flanneld.service
systemctl start flanneld.service
service docker restart
systemctl restart kubelet.service
systemctl restart kube-proxy.service
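After the restarts, flannel0 on each host should hold an address from 10.0.0.0/16, and docker0 should sit inside that host's flannel subnet; containers on different hosts can then reach each other. A minimal check (interface names assume the default flannel/docker setup; the ping target is an illustrative docker0 address from another host):

ip addr show flannel0
ip addr show docker0
ping -c 3 10.0.90.1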

6. Install kube-dns

6.1 Create kubedns-rc.yaml and kubedns-svc.yaml

kube-dns is installed on the master.

vi kubedns-rc.yaml


 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
spec:
  # number of replicas
  replicas: 1
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      containers:
      - name: kubedns
        # Change the image address. The default points at Google's registry;
        # use it directly if it is reachable from your network. A private
        # registry is used here; a domestic mirror such as Aliyun's also works.
        image: myhub.fdccloud.com/library/kubedns-amd64:1.9
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthz-kubedns
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        # --domain sets the cluster's top-level domain; customizable
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-map=kube-dns
        # add --kube-master-url, pointing at the kube master's address
        - --kube-master-url=http://10.0.251.148:8080
        # This should be set to v=2 only after the new image (cut from 1.5) has
        # been released, otherwise we will flood the logs.
        - --v=0
        #__PILLAR__FEDERATIONS__DOMAIN__MAP__
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
      - name: dnsmasq
        image: myhub.fdccloud.com/library/kube-dnsmasq-amd64:1.4
        livenessProbe:
          httpGet:
            path: /healthz-dnsmasq
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --cache-size=1000
        - --no-resolv
        - --server=127.0.0.1#10053
        #- --log-facility=-   # leave this line commented out
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 10Mi
      - name: dnsmasq-metrics
        image: myhub.fdccloud.com/library/dnsmasq-metrics-amd64:1.0
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 10Mi
      - name: healthz
        image: myhub.fdccloud.com/library/exechealthz-amd64:1.2
        resources:
          limits:
            memory: 50Mi
          requests:
            cpu: 10m
            # Note that this container shouldn't really need 50Mi of memory. The
            # limits are set higher than expected pending investigation on #29688.
            # The extra memory was stolen from the kubedns container to keep the
            # net memory requested by the pod constant.
            memory: 50Mi
        args:
        - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
        - --url=/healthz-dnsmasq
        - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
        - --url=/healthz-kubedns
        - --port=8080
        - --quiet
        ports:
        - containerPort: 8080
          protocol: TCP
      dnsPolicy: Default  # Don't use cluster DNS.

 

vi kubedns-svc.yaml, with the following content:

 
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  # fixed clusterIP; every pod's DNS resolver will point at this address
  clusterIP: 10.254.0.100
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

 

6.2 Start the DNS service

kubectl create -f kubedns-rc.yaml
kubectl create -f kubedns-svc.yaml

Then modify /etc/kubernetes/kubelet on each node, adding the cluster DNS settings (the DNS address matches the clusterIP set in kubedns-svc.yaml):

KUBELET_ARGS="--cluster_dns=10.254.0.100 --cluster_domain=cluster.local"

Restart the kubelet on each node:

systemctl restart kubelet
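To verify that cluster DNS works end to end, resolve a service name from inside a pod. A quick sketch using a throwaway busybox pod (image and output are illustrative):

kubectl run busybox --image=busybox --restart=Never --command -- sleep 3600
kubectl exec busybox -- nslookup kubernetes.default.svc.cluster.local
kubectl delete pod busybox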

7. Deploy the dashboard service

7.1 Deployment that accesses the API server over HTTP

cat dashboard-controller.yaml


 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
  name: kubernetes-dashboard
  namespace: kube-system
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/kubernetes-dashboard
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - args:
        - --apiserver-host=http://10.0.251.148:8080
        image: docker.io/googlecontainer/kubernetes-dashboard-amd64:v1.6.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 9090
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 30
        name: kubernetes-dashboard
        ports:
        - containerPort: 9090
          protocol: TCP
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
      dnsPolicy: ClusterFirst
      restartPolicy: Always

7.2 Deployment that accesses the API server over HTTPS

cat dashboard-controller.yaml


 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      containers:
      - name: kubernetes-dashboard
        image: docker.io/googlecontainer/kubernetes-dashboard-amd64:v1.6.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 100m
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 128Mi
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
        ports:
        - containerPort: 9090
        args:
        - --apiserver-host=https://10.0.251.148:6443
        - --kubeconfig=/etc/kubernetes/kubelet-config
        volumeMounts:
        - name: config
          mountPath: /etc/kubernetes/kubelet-config
          readOnly: True
        - name: certs
          mountPath: /etc/ssl/kube
          readOnly: True
      volumes:
      - name: certs
        hostPath:
          path: /etc/ssl/kube
      - name: config
        hostPath:
          path: /etc/kubernetes/kubelet-config

7.3 Service

cat dashboard-service.yaml


 
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090

7.4 Start the dashboard

kubectl create -f dashboard-controller.yaml
kubectl create -f dashboard-service.yaml
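Before opening the UI, confirm that the dashboard pod reaches Running state:

kubectl get pods --namespace=kube-system | grep dashboard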

7.5 Access URLs

HTTP:  http://10.0.251.148:8080/ui
HTTPS: https://10.0.251.148:6443/ui

If the API server was not started with a basic-auth file (--basic-auth-file=/etc/kubernetes/useraccount.csv), login over HTTPS will fail; if it was, log in with any of the accounts listed in /etc/kubernetes/useraccount.csv.
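For reference, the --basic-auth-file CSV holds one user per line in the form password,user,uid, optionally followed by a quoted, comma-separated group list. A hypothetical example:

admin123,admin,1
dash123,dashuser,2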

