Kubernetes Installation Notes
Installing Kubernetes on a machine outside mainland China is straightforward, since Google's resources are freely reachable. Machines inside China need a workaround: use the mirror repositories provided by Alibaba Cloud. This article installs on Ubuntu 20.04 LTS; other distributions differ only in minor details.
Install kubeadm and Docker
#apt-get update && apt-get install -y apt-transport-https
#curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
#cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
#apt-get update
#apt-get install -y kubelet kubeadm kubectl docker.io
Initial installation
First pull the required images yourself from a mirror site inside China, then run kubeadm init to perform a default installation.
Run kubeadm to list the required image versions
#kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.18.2
k8s.gcr.io/kube-controller-manager:v1.18.2
k8s.gcr.io/kube-scheduler:v1.18.2
k8s.gcr.io/kube-proxy:v1.18.2
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
Pull the images from the domestic mirror and retag them, using the following script:
images=(`kubeadm config images list | awk -F/ '{if($1=="k8s.gcr.io"){print $2}}'`)
for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
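The awk filter in the script can be sanity-checked on its own. A minimal sketch, feeding it a hand-written sample listing (the first two image names are taken from the listing above; the quay.io line is a made-up counter-example):

```shell
# Only k8s.gcr.io entries are kept, with the registry prefix stripped
printf 'k8s.gcr.io/kube-apiserver:v1.18.2\nk8s.gcr.io/pause:3.2\nquay.io/other/image:1.0\n' \
  | awk -F/ '{if($1=="k8s.gcr.io"){print $2}}'
# kube-apiserver:v1.18.2
# pause:3.2
```

Note that recent kubeadm releases also accept an --image-repository flag on kubeadm init (e.g. --image-repository registry.aliyuncs.com/google_containers), which avoids the manual pull-and-retag loop entirely.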
Run kubeadm to install
# kubeadm init
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.4:6443 --token iv0c9w.c6hazld6y3tpnq32 \
    --discovery-token-ca-cert-hash sha256:2861372fcc7913033512b8e10441a124c94271efad0180f64efa389158ef666a
Finish the configuration
A Kubernetes cluster requires authenticated, encrypted access by default. The following commands save the security configuration file generated during deployment into the current user's .kube directory; kubectl uses the credentials in this directory by default when talking to the cluster.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
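If you are working as root, an alternative is to point the KUBECONFIG environment variable at the admin config directly instead of copying it; /etc/kubernetes/admin.conf is the standard kubeadm location:

```shell
# Tell kubectl where to find the admin kubeconfig (root only; not persistent
# across shells unless added to the shell profile)
export KUBECONFIG=/etc/kubernetes/admin.conf
echo $KUBECONFIG
```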
Once this is configured, you can check the status of the current node with kubectl get:
$kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
searky-vm   Ready    master   0h13m   v1.18.2
Configure a network plugin
Various network add-ons are listed at https://kubernetes.io/docs/concepts/cluster-administration/addons/; Weave is used as the example here. First obtain the Kubernetes version number, then install the matching version of the plugin.
$kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:48:36Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
$kubectl apply -f https://cloud.weave.works/k8s/v1.18.2/net.yaml
Deploy a container storage plugin
A volume mounts a directory or file from the host into a container's Mount Namespace, so that the container and the host can share it. Many storage projects can provide persistent storage for Kubernetes, such as Ceph, GlusterFS, and NFS. Ceph, deployed via the Rook operator, is used as the example here.
Install Ceph
$kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/crds.yaml
$kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/common.yaml
$kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/operator.yaml
$kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/cluster.yaml
Apply crds.yaml and common.yaml before operator.yaml: the operator depends on the CRDs and RBAC objects they define.
Add a StorageClass
$kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/storageclass-bucket-retain.yaml
View the StorageClass
$kubectl get storageclass
NAME                      PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-retain-bucket   rook-ceph.ceph.rook.io/bucket   Retain          Immediate           false                  0h28m
Set it as the default StorageClass
$kubectl patch storageclass rook-ceph-retain-bucket -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
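After patching, the change can be verified by listing the StorageClasses again; kubectl marks the default class with "(default)" after its name:

```shell
# The annotated class should now show up as the default
kubectl get storageclass
# NAME                                PROVISIONER                     ...
# rook-ceph-retain-bucket (default)   rook-ceph.ceph.rook.io/bucket   ...
```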
Allow the master node to run Pods
By default, the master node does not run user Pods; Kubernetes enforces this through the Taint/Toleration mechanism. Once a node carries a Taint, no Pod will run on it, because Pods are, so to speak, fastidious about where they land: only a Pod that declares a matching Toleration may be scheduled onto that node.
$kubectl describe node searky-vm
Name:               searky-vm
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=searky-vm
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        csi.volume.kubernetes.io/nodeid: {"rook-ceph.cephfs.csi.ceph.com":"searky-vm","rook-ceph.rbd.csi.ceph.com":"searky-vm"}
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 11 May 2020 01:21:04 +0000
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
As you can see, the Taints field carries the key "node-role.kubernetes.io/master" with no value. If the goal is a single-node Kubernetes cluster, this Taint can simply be removed.
$kubectl taint nodes --all node-role.kubernetes.io/master-
The trailing hyphen "-" after the key "node-role.kubernetes.io/master" means: remove every Taint whose key is "node-role.kubernetes.io/master".
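Should you later want to restore the default behavior, for example after adding worker nodes, the same taint can be put back with the standard kubectl taint key=value:effect syntax (node name taken from this cluster):

```shell
# Re-apply the NoSchedule taint so user Pods avoid the master again
kubectl taint nodes searky-vm node-role.kubernetes.io/master=:NoSchedule
```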
$kubectl describe node searky-vm
Name:               searky-vm
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=searky-vm
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        csi.volume.kubernetes.io/nodeid: {"rook-ceph.cephfs.csi.ceph.com":"searky-vm","rook-ceph.rbd.csi.ceph.com":"searky-vm"}
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 11 May 2020 01:21:04 +0000
Taints:             <none>
Unschedulable:      false
Errors encountered
docker service fails to start while installing Docker
Setting up docker.io (19.03.6-0ubuntu1~18.04.1) ...
docker.service is a disabled or a static unit, not starting it.
Job for docker.socket failed.
See "systemctl status docker.socket" and "journalctl -xe" for details.
The log contains the line: Failed to start docker.service: Unit docker.service is masked.
Solution:
#systemctl unmask docker.service
#systemctl unmask docker.socket
#systemctl start docker.service
cgroup driver issue
When kubeadm init starts, it prints the warning:
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
This means the recommended cgroup driver for Docker is systemd, while the current (default) driver is cgroupfs. To change it:
#cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
#systemctl daemon-reload
#systemctl restart docker
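After Docker restarts, the active cgroup driver can be confirmed directly; docker info supports a Go-template --format flag for this:

```shell
# Should print "systemd" once the new daemon.json is in effect
docker info --format '{{.CgroupDriver}}'
```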
Kubernetes fails to start after changing the cgroup driver
If the cgroup driver is changed after Kubernetes has been installed successfully, Kubernetes will fail to start. The logs reveal why:
#journalctl -xeu kubelet
............
May 11 01:40:24 searky-vm kubelet[18071]: F0511 01:40:24.574623   18071 server.go:274] failed to run Kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
.................
The cause: Kubernetes was installed with cgroupfs as its cgroup driver, which was later changed to systemd, creating a mismatch. The fix is to change the kubelet startup flags so that its cgroup driver is systemd, matching Docker.
#vim /var/lib/kubelet/kubeadm-flags.env
    change --cgroup-driver=cgroupfs to --cgroup-driver=systemd
#systemctl restart kubelet
Disable swap
Swap must be disabled before installing Kubernetes with kubeadm.
#swapoff -a
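swapoff -a only lasts until the next reboot. To keep swap off permanently, the swap entry in /etc/fstab must also be commented out. A minimal sketch, assuming the entry is a line containing " swap " surrounded by spaces:

```shell
# Comment out swap entries in /etc/fstab so swap stays off after reboot
# (-i.bak keeps a backup copy of the original file)
sed -i.bak '/ swap /s/^/#/' /etc/fstab
```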