Installing a Kubernetes Cluster

Posted by Dayong Chan on 2019-11-17
Words 1.5k and Reading Time 7 Minutes

Before you begin, you need:

One or more machines running one of the following systems:

Ubuntu 16.04+
Debian 9
CentOS 7
RHEL 7
Fedora 25/26 (best-effort)
HypriotOS v1.0.1+
Container Linux (tested with 1800.6.0)

2 GB or more of RAM per machine (any less leaves too little room for your apps)
2 or more CPUs
Full network connectivity between all machines in the cluster (a public or private network is fine)
A unique hostname, MAC address, and product UUID for every node
The required ports open on each machine
Swap disabled. Swap must be turned off on every host, otherwise kubelet will report errors (a quick sketch of checking node identity and disabling swap follows below).
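
A minimal sketch of the last two checks, run on each node. It assumes GNU sed and a standard /etc/fstab swap entry; adjust to your system:

# Verify that the hostname, MAC address and product UUID are unique per node
hostname
ip link show                        # compare MAC addresses across nodes
cat /sys/class/dmi/id/product_uuid

# Disable swap now, and comment out the fstab entry so it stays off after reboot
swapoff -a
sed -ri '/\sswap\s/s/^/#/' /etc/fstab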

安装kubeadm

The cluster in this example contains three nodes:

Node name    IP
master 192.168.0.1
worker1 192.168.0.2
worker2 192.168.0.3
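
If these hostnames are not resolvable through DNS, one option is to add them to /etc/hosts on every node; the entries below simply mirror the table above:

cat <<EOF >> /etc/hosts
192.168.0.1 master
192.168.0.2 worker1
192.168.0.3 worker2
EOF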

Since Google's repositories cannot be reached, the installation sources need to be changed to Aliyun or another mirror inside China. Run the following commands on every node to install kubelet, kubeadm and kubectl.

For CentOS / RHEL / Fedora:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

setenforce 0                                          # switch SELinux to permissive mode for the current boot
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
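
Note that setenforce 0 only lasts until the next reboot. To keep SELinux permissive persistently, as the official kubeadm guide suggests, you can also update /etc/selinux/config:

sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config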

For Debian / Ubuntu:
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

apt-get update
apt-get install -y kubelet kubeadm kubectl
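
Optionally, you can hold the three packages at the installed version so that a routine apt-get upgrade does not move them unexpectedly:

apt-mark hold kubelet kubeadm kubectl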

Initializing the Kubernetes master node

You can initialize the master by running kubeadm init directly, or pass custom options, for example to change the node IP the master advertises on, the pod CIDR, the service CIDR, and the image repository kubeadm pulls from.

kubeadm init --apiserver-advertise-address="192.168.0.1" --pod-network-cidr="172.22.0.0/16" --service-cidr="172.20.0.0/16"

The following error may appear during master initialization because Docker cannot reach the image registry "k8s.gcr.io" and fails to pull the control-plane images.
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.13.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.13.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.13.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.2.24: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.2.6: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1

Find the version of each image in the error output, pull the images from another registry, and re-tag them with the names kubeadm expects.
docker pull mirrorgooglecontainers/kube-apiserver:v1.13.0
docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.0
docker pull mirrorgooglecontainers/kube-scheduler:v1.13.0
docker pull mirrorgooglecontainers/kube-proxy:v1.13.0
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.6
docker tag mirrorgooglecontainers/kube-proxy:v1.13.0 k8s.gcr.io/kube-proxy:v1.13.0
docker tag mirrorgooglecontainers/kube-scheduler:v1.13.0 k8s.gcr.io/kube-scheduler:v1.13.0
docker tag mirrorgooglecontainers/kube-apiserver:v1.13.0 k8s.gcr.io/kube-apiserver:v1.13.0
docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.0 k8s.gcr.io/kube-controller-manager:v1.13.0
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
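
The same pull-and-retag steps can also be scripted; a minimal sketch over the same image list (coredns is handled separately because it lives in its own Docker Hub namespace):

images=(
  kube-apiserver:v1.13.0
  kube-controller-manager:v1.13.0
  kube-scheduler:v1.13.0
  kube-proxy:v1.13.0
  pause:3.1
  etcd:3.2.24
)
for img in "${images[@]}"; do
  docker pull "mirrorgooglecontainers/${img}"
  docker tag "mirrorgooglecontainers/${img}" "k8s.gcr.io/${img}"
done
docker pull coredns/coredns:1.2.6
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6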

Alternatively, kubeadm init can be told to pull images from another repository via --image-repository, for example:
kubeadm init --image-repository="mirrorgooglecontainers"

After initialization succeeds, kubeadm prints the command for worker nodes to join the cluster along with the kubectl setup commands. Run the printed kubeadm join command on each worker to add it to the cluster.
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.0.1:6443 --token wmabv0.7333s87jq258wsjy --discovery-token-ca-cert-hash sha256:f076e5e912345c658eb61866da326bcf83c0e339ee7937758799cc045086e000

Joining the worker nodes to the cluster

Run the following command on each worker node to join it to the cluster.

kubeadm join 192.168.0.1:6443 --token wmabv0.7333s87jq258wsjy --discovery-token-ca-cert-hash sha256:f076e5e912345c658eb61866da326bcf83c0e339ee7937758799cc045086e000
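
If the printed token has already expired by the time you run this (tokens are valid for 24 hours by default), a fresh join command can be generated on the master:

kubeadm token create --print-join-command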

After a worker joins successfully, it prints "Run 'kubectl get nodes' on the master to see this node join the cluster". Run the kubectl setup commands from the init output above so that you can control the cluster with kubectl.

Installing a network add-on

A network add-on must be installed before deploying applications. The official documentation lists Calico, Canal, Flannel, Kube-router, Romana, Weave Net, Juniper Contrail/Tungsten Fabric and others; this example uses Calico. Calico expects the pod CIDR 192.168.0.0/16, so either pass --pod-network-cidr=192.168.0.0/16 to kubeadm init, or change the pod CIDR in calico.yaml to the one used by the cluster (172.22.0.0/16 in this example).

kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
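
A minimal sketch of adjusting the pod CIDR before applying the manifest, assuming Calico v3.3's calico.yaml carries it in the CALICO_IPV4POOL_CIDR setting:

curl -O https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
# replace Calico's default pod CIDR with the one passed to kubeadm init
sed -i 's|192.168.0.0/16|172.22.0.0/16|' calico.yaml
kubectl apply -f calico.yaml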

Re-initializing the cluster

To undo what kubeadm set up, make sure the node has been drained and is empty before shutting it down.

Run the following commands on the master:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

Then reset everything kubeadm installed on the node:
kubeadm reset
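
Note that kubeadm reset does not flush iptables or IPVS rules. If you need a completely clean node, these can be cleared manually (the ipvsadm command only applies if kube-proxy ran in IPVS mode):

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm -C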

Wrapping up

At this point the kubeadm-based k8s cluster is complete. You can view the cluster's nodes by running kubectl get nodes on the master.

[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
worker1 Ready <none> 5h11m v1.13.0
worker2 Ready <none> 5h20m v1.13.0
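
You can also check that the control-plane and Calico pods have come up:

kubectl get pods -n kube-system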


Content on this site is licensed under CC BY-NC-ND 4.0 (Attribution-NonCommercial-NoDerivatives International). Please keep the original link and credit the author when reposting.