
03 - Deploying a Kubernetes Cluster with kubeadm

路小飞
2024-07-09

一、Introduction to kubeadm

To simplify deploying Kubernetes and make it more approachable, the community produced a tool dedicated to installing Kubernetes onto a cluster. It is called "kubeadm", short for "Kubernetes admin".

kubeadm packages the various Kubernetes components as containers and images. Its goal is not single-machine deployment, however, but to make it easy to deploy Kubernetes in a cluster environment and bring that cluster close to, or up to, production quality.
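For example, once kubeadm is installed (section 三), you can list the component images it would pull; the version and image mirror below are the ones used later in this article:

kubeadm config images list \
  --kubernetes-version v1.23.15 \
  --image-repository registry.aliyuncs.com/google_containers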

二、Lab environment architecture

Role     Hostname    IP               Spec
master   master-01   192.168.17.110   2 CPU / 4 GB
worker   worker-01   192.168.17.120   2 CPU / 2 GB

A multi-node cluster requires two or more servers. To keep things simple we use the minimum, so this Kubernetes cluster has just two hosts: one Master node and one Worker node. Once you have fully mastered kubeadm, you can add more nodes to the cluster. The Master node runs the apiserver, etcd, scheduler, controller-manager and other components and manages the whole cluster, so its requirements are higher: at least 2 CPU cores and 4 GB of memory.

The Worker node does no management work and only runs business applications, so its spec can be lower; to save resources it gets 2 CPU cores and 2 GB of memory.

To better simulate a production environment, we also need an auxiliary server outside the Kubernetes cluster. It is called the Console, and it is where the kubectl command-line tool is installed; every management command for the Kubernetes cluster is issued from this host. This matches real practice: for security reasons, you should log in to the cluster hosts directly as little as possible once they are deployed. Note that the Console is only a logical concept and does not have to be a separate machine; when you actually install the cluster you can reuse the earlier minikube VM, or simply use the Master/Worker node as the console.
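For instance, once the cluster has been initialized (section 四), a console host only needs kubectl and a copy of the cluster's kubeconfig; a minimal sketch, assuming SSH access to the master at the lab address below:

mkdir -p ~/.kube
scp root@192.168.17.110:/etc/kubernetes/admin.conf ~/.kube/config
kubectl get nodes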

三、On both master and worker

1. Set the hostnames and add name-resolution entries
hostnamectl set-hostname master-01   # run on the master node
hostnamectl set-hostname worker-01   # run on the worker node
cat >>/etc/hosts << EOF
192.168.17.110 master-01
192.168.17.120 worker-01
EOF
2. Disable firewalld
systemctl disable firewalld && systemctl stop firewalld
3. Disable SELinux
setenforce 0
sed -i --follow-symlinks 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
4. Disable swap
swapoff -a && sed -i '/swap/d' /etc/fstab
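A quick sanity check that swap really is off (an addition to the original steps):

free -h           # the Swap line should show 0B
cat /proc/swaps   # should list no swap devices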
5. Adjust kernel network settings
cat >>/etc/sysctl.d/kubernetes.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
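The two bridge sysctls above only take effect when the br_netfilter kernel module is loaded, so it is common to load it now and on every boot. A minimal sketch (the modules-load file name is this article's choice, not part of the original steps):

modprobe br_netfilter
cat > /etc/modules-load.d/br_netfilter.conf << EOF
br_netfilter
EOF
lsmod | grep br_netfilter
sysctl --system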
6. Configure time synchronization
Option 1: the machines have Internet access; use ntpdate to sync from a public NTP server
yum -y install ntpdate
ntpdate -u ntp1.aliyun.com
clock -w
Option 2: the machines cannot reach the Internet; install chrony and run a local time server
# chrony server side
yum -y install chrony
vim /etc/chrony.conf
---
# Comment out all upstream time servers; this host will serve time itself
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst

# Allow clients on this subnet to synchronize
allow 192.168.17.0/24

# Set the stratum level
local stratum 10
---
systemctl start chronyd
systemctl enable chronyd
ss -antup | grep chrony
clock -w
# chrony client side
vim /etc/chrony.conf
---
# Point server at the LAN time server and comment out the others
server 192.168.17.110 iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
---
systemctl start chronyd
systemctl enable chronyd
ss -antup | grep chrony

# Check the client's synchronization status:
timedatectl
# List the time synchronization sources:
chronyc sources -v
clock -w
7. Install Docker
# Configure the Aliyun base yum repo
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# Rebuild the yum cache and install yum-utils
yum clean all
yum makecache
yum -y install yum-utils
# Add the Docker CE yum repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce-20.10.24-3.el7.x86_64 docker-ce-cli-20.10.24-3.el7.x86_64 containerd.io docker-buildx-plugin-0.10.5-1.el7.x86_64 docker-compose-plugin
systemctl enable --now docker
docker version
docker compose version
systemctl status docker

Edit /etc/docker/daemon.json; if the file does not exist, create it.

mkdir -p /etc/docker

cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

systemctl daemon-reload && systemctl restart docker
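A quick check that Docker picked up the systemd cgroup driver (not part of the original steps):

docker info -f '{{.CgroupDriver}}'   # expected output: systemd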
8. Install Kubernetes
8.1 Add the yum repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
8.2 Install the Kubernetes components
yum install -y kubeadm-1.23.15-0 kubelet-1.23.15-0 kubectl-1.23.15-0
  • kubeadm is a tool provided by Kubernetes for quickly deploying a Kubernetes cluster.

  • kubelet is the agent that runs on every node in the cluster. It communicates with the Master node, receives cluster-management instructions and executes them on the node.

    The kubelet process also watches a specific directory (by default /etc/kubernetes/manifests) for static Pod manifests (the four control-plane components: apiserver, etcd, scheduler and controller-manager; you can add custom static Pods as well) and creates and maintains those static Pods from them (see the listing after this list).

  • kubectl is the Kubernetes command-line tool for interacting with the cluster. With kubectl you can manage resource objects such as Pods, Services and Deployments.

    This component is optional; install it on whichever machine serves as the console.
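As a quick reference for the static-Pod directory mentioned above, after kubeadm init finishes (section 四) the control-plane manifests can be listed on the master:

ls /etc/kubernetes/manifests/
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml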

8.3 Enable the kubelet service
systemctl enable --now kubelet
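Until kubeadm init (or kubeadm join) has been run, the kubelet has no configuration and will keep restarting in a crash loop; that is expected. Its state can be watched with:

systemctl status kubelet
journalctl -u kubelet -f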

四、On master

1. Initialize the Kubernetes cluster
1.1 Generate the default kubeadm init configuration
kubeadm config print init-defaults > kubeadm-config.yaml
1.2 Edit the relevant settings
localAPIEndpoint:
  advertiseAddress: 192.168.17.110  # change to the master node's IP address
  name: master-01  # change the node name to master-01

  imageRepository: registry.aliyuncs.com/google_containers   # use the Aliyun image mirror

  kubernetesVersion: 1.23.15  # set the version to install

networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16   # add the Pod network CIDR
  serviceSubnet: 10.96.0.0/12
---                          # appended: switch kube-proxy to ipvs mode (the default is iptables)
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

The complete kubeadm-config.yaml:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.17.110
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: master-01
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers 
kind: ClusterConfiguration
kubernetesVersion: 1.23.15
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
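The ipvs mode selected above only works if the IPVS kernel modules are available; otherwise kube-proxy falls back to iptables. A minimal sketch for preparing them on both nodes (the module list and file name are this article's assumptions, not part of the original steps):

yum install -y ipset ipvsadm
cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
chmod +x /etc/sysconfig/modules/ipvs.modules && /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack

You can also pre-pull the images referenced in the config before running init:

kubeadm config images pull --config kubeadm-config.yaml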
1.3 Initialize the cluster
kubeadm init --config=kubeadm-config.yaml

Output like the following indicates the cluster initialized successfully.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.17.110:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:69cb0f91095a702bfec5c18209520e6e770682d16d33441b7d26c28bb7584f23 

2. Configure kubeconfig
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

At this point kubectl get node shows the node as NotReady, because no network plugin has been installed yet.

[root@master-01 ~]# kubectl get node
NAME        STATUS     ROLES                  AGE     VERSION
master-01   NotReady   control-plane,master   3m25s   v1.23.15

Next we install a network plugin; this article uses Calico as an example.

3. Install the network plugin
curl https://projectcalico.docs.tigera.io/archive/v3.24/manifests/calico.yaml -O
kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f calico.yaml

Ref: Calico official documentation
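Assuming the manifest applies cleanly, the Calico Pods take a minute or two to become Ready, after which the node flips to Ready; a quick check (not in the original steps):

kubectl get pods -n kube-system | grep calico
kubectl get nodes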

4. Print the token for joining nodes to the cluster
kubeadm token create --print-join-command

The output looks like this:

kubeadm join 192.168.17.110:6443 --token dbtugq.cfda2vnvih4y03bm --discovery-token-ca-cert-hash sha256:7f476986bd8597cd55eb3bd59777930840e560ffd6008789fcee54d8ebb1d24e 

五、On worker

1. Join the cluster
kubeadm join 192.168.17.110:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:c0967195798b902eec0c8cffd3f2f2c8cb2bd2c416afc1e2cd4653b1d34dcd30 --node-name node-01

The --node-name flag sets the name the node will use after joining.

六、Verification

1. Check the status of the nodes in the cluster
kubectl get node
[root@master-01 ~]# kubectl get node 
NAME        STATUS   ROLES                  AGE     VERSION
master-01   Ready    control-plane,master   7m40s   v1.23.15
node-01     Ready    <none>                 57s     v1.23.15
2. Check the cluster component status
kubectl get cs
[root@master-01 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok                              
controller-manager   Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}   
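Since ComponentStatus is deprecated (as the warning above notes), you can instead query the API server's readiness endpoint; this alternative is an addition, not part of the original article:

kubectl get --raw='/readyz?verbose'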
3. Check the status of Pods in the cluster
kubectl get pod -A

A Pod can be in one of the following states:

  • Running: the Pod is running.
  • Pending: the Pod is being scheduled or is waiting for some resources to be created.
  • Succeeded: all containers in the Pod finished their work successfully and exited.
  • Failed: one or more containers in the Pod failed and exited.
  • Unknown: the Pod's state cannot be determined or retrieved.

If every Pod is Running, the applications are working normally and nothing has gone wrong.

七、Limitations

The cluster created here has a single control-plane node running a single etcd database. This means that if the control-plane node fails, the cluster may lose data and may need to be recreated from scratch.

Workarounds:

  • Back up etcd regularly. The etcd data directory set up by kubeadm lives at /var/lib/etcd on the control-plane node (see the snapshot sketch after this list).

  • Use multiple control-plane nodes.
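A minimal sketch of such a backup, assuming the stacked etcd and default kubeadm certificate paths on the master; the backup path and the way etcdctl is obtained are this article's assumptions:

# etcdctl is not installed by kubeadm; it can be copied out of the etcd container or installed separately
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot-$(date +%F).db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key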

八、Troubleshooting

1. kubeadm init log
[root@master-01 ~]#  cat install.log  # the init output was redirected into this log file by hand
[init] Using Kubernetes version: v1.23.15
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master-01] and IPs [10.96.0.1 192.168.17.110]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-01] and IPs [192.168.17.110 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-01] and IPs [192.168.17.110 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 8.003281 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master-01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master-01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.17.110:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:7f476986bd8597cd55eb3bd59777930840e560ffd6008789fcee54d8ebb1d24e 
[root@master-01 ~]# 
2. Check the system logs
tail -f /var/log/messages
journalctl -f -u kubelet
3. Reset the cluster

You can tear down the cluster and reinstall it:

kubeadm reset
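kubeadm reset does not clean up CNI configuration, iptables rules or IPVS tables. If you plan to reinstall, it is common to clear those manually as well; a sketch, assuming the ipvs mode and Calico setup used in this article:

rm -rf /etc/cni/net.d
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear
rm -rf $HOME/.kube/config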
4. Commands supported by kubeadm
kubeadm init        bootstraps a control-plane node
kubeadm join        bootstraps a worker node and joins it to the cluster
kubeadm upgrade     upgrades a Kubernetes cluster to a newer version
kubeadm config      if you initialized your cluster with kubeadm v1.7.x or lower, configures your cluster for kubeadm upgrade
kubeadm token       manages the tokens used by kubeadm join
kubeadm reset       reverts any changes made to a host by kubeadm init or kubeadm join
kubeadm certs       manages Kubernetes certificates
kubeadm kubeconfig  manages kubeconfig files
kubeadm version     prints the kubeadm version
kubeadm alpha       previews a set of features made available for gathering feedback from the community

Ref: Kubeadm | Kubernetes

九、kubectl improvements

1. Enable kubectl command completion
yum -y install bash-completion 
echo "source <(kubectl completion bash)" >> ~/.bashrc     
source  ~/.bashrc
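Optionally, if you also use the common k alias for kubectl, completion can be attached to it as well (this alias is an addition, not part of the original steps):

echo "alias k=kubectl" >> ~/.bashrc
echo "complete -o default -F __start_kubectl k" >> ~/.bashrc
source ~/.bashrc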
