[!NOTE|label:references:]
- kubernetes official:
- others:
- K8S cluster setup: the cri-dockerd variant (including troubleshooting)
- Kubernetes odds and ends: using CRI-O as the container runtime
- * Kubernetes provisioning with CRI-O as container runtime
- deploy a highly available k8s cluster with kubeadm, keepalived, haproxy and containerd
- * kubeadm-conf.yaml: Kubernetes core techniques - HA cluster setup (kubeadm + keepalived + haproxy)
- * kubeadm deployment of a 3-master / 3-node CentOS 7 cluster with CRI-O (1.24.0) + k8s (1.26.0)
CRI-O
[!NOTE|label:references:]
$ CRIO_VERSION='v1.30'
$ cat <<EOF | sudo tee /etc/yum.repos.d/cri-o.repo
[cri-o]
name=CRI-O
baseurl=https://pkgs.k8s.io/addons:/cri-o:/stable:/$CRIO_VERSION/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/addons:/cri-o:/stable:/$CRIO_VERSION/rpm/repodata/repomd.xml.key
exclude=cri-o
EOF
$ sudo dnf install -y cri-o-1.30.4-150500.1.1 --disableexcludes=cri-o
# change the default CNI bridge subnet ( 10.85.0.0/16 ) to avoid conflicts with existing networks
$ sudo sed -i 's/10.85.0.0/10.185.0.0/g' /etc/cni/net.d/11-crio-ipv4-bridge.conflist
$ sudo systemctl daemon-reload
$ sudo systemctl enable crio --now
$ sudo systemctl status crio
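to let crictl talk to CRI-O by default (and to match the criSocket used by kubeadm below), point /etc/crictl.yaml at the CRI-O socket; a minimal sketch:
$ cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///var/run/crio/crio.sock
image-endpoint: unix:///var/run/crio/crio.sock
EOF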
kubeadm
[!NOTE|label:references:]
install kubernetes
$ cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
$ sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
$ sudo systemctl enable --now kubelet.service
# lock kube* packages against auto upgrade
$ sudo tail -1 /etc/yum.repos.d/kubernetes.repo
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
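kubeadm preflight checks also expect the br_netfilter module and IPv4 forwarding to be in place; the usual prerequisites from the official container-runtime guide:
$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
$ sudo modprobe overlay
$ sudo modprobe br_netfilter
$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
$ sudo sysctl --system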
show default kubeadm-config.yaml
[!TIP|label:references:]
$ kubeadm config print init-defaults
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock    # unix:///var/run/crio/crio.sock ( same as /etc/crictl.yaml )
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.30.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
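the kubeadm-config.yaml fed to kubeadm init below only needs to override a handful of these defaults; a minimal sketch for this CRI-O setup (podSubnet here is an assumption matching the bridge CIDR configured earlier; adjust kubernetesVersion to the installed packages):
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  # use the CRI-O socket instead of the containerd default
  criSocket: unix:///var/run/crio/crio.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.30.4
networking:
  dnsDomain: cluster.local
  podSubnet: 10.185.0.0/16      # assumption: matches the CNI bridge subnet set above
  serviceSubnet: 10.96.0.0/12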
init
[!NOTE|label:references:]
- official doc:
- references:
- * build a highly available Kubernetes cluster with CRI-O and kubeadm
- deploy a k8s cluster environment with cri-o
- deploy a Kubernetes cluster with CRI-O
- * install a Kubernetes cluster on Rocky Linux 8 with kubeadm and CRI-O
- Kubernetes provisioning with CRI-O as container runtime
- Creating Kubernetes Cluster With CRI-O Container Runtime
- Kubernetes 1.23 + CRI-O
- How To Deploy a Kubernetes Cluster Using the CRI-O Container Runtime
- Using kubeadm with CRI-O
- Install Kubernetes using kubeadm
- Deploying k8s on Oracle Linux 8
- video:
init first control plane
$ sudo kubeadm init --config kubeadm-config.yaml --upload-certs --v=5
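once init succeeds, set up the admin kubeconfig as kubeadm's own output suggests:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config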
add another control plane node
$ kubeadm join 10.28.63.16:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:23868e8ddd7888c412d5579d8d1d3e6ae7678d19e146bbae86106767c2c45add \
--control-plane --certificate-key 1c67096a3e1938d552eafbc913f8ef7d0ee966b097da21ce0c508603b29540ea
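worker nodes join with the same endpoint, token and CA hash, just without the control-plane flags (values reused from the sample above):
$ kubeadm join 10.28.63.16:6443 --token abcdef.0123456789abcdef \
      --discovery-token-ca-cert-hash sha256:23868e8ddd7888c412d5579d8d1d3e6ae7678d19e146bbae86106767c2c45add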
--discovery-token-ca-cert-hash
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'
re-upload certificates
[!NOTE|label:references:]
$ sudo kubeadm init phase upload-certs --upload-certs
# or
$ sudo kubeadm init phase upload-certs --upload-certs --config=/path/to/kubeadm-conf.yaml
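if the bootstrap token has already expired (default ttl is 24h), a fresh token plus a ready-to-paste join command can be generated on a control plane node:
$ kubeadm token create --print-join-command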
HA Cluster
[!TIP|label:references:]
extend etcd
[!TIP|label:references:]
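before or after adding members, the stacked etcd membership can be inspected with etcdctl against the kubeadm-generated certificates; a sketch, assuming an etcdctl binary is available on the control plane:
$ sudo ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
       --cacert=/etc/kubernetes/pki/etcd/ca.crt \
       --cert=/etc/kubernetes/pki/etcd/server.crt \
       --key=/etc/kubernetes/pki/etcd/server.key \
       member list -w table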
verify
cluster
- componentstatuses
$ kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   ok
crio
$ curl -s --unix-socket /var/run/crio/crio.sock http://localhost/info | jq .
{
  "storage_driver": "overlay",
  "storage_image": "",
  "storage_root": "/var/lib/containers/storage",
  "cgroup_driver": "systemd",
  "default_id_mappings": {
    "uids": [
      {
        "container_id": 0,
        "host_id": 0,
        "size": 4294967295
      }
    ],
    "gids": [
      {
        "container_id": 0,
        "host_id": 0,
        "size": 4294967295
      }
    ]
  }
}
taint control plane
# remove the control-plane taint so regular workloads can be scheduled on the node
$ kubectl taint node k8s-01 node-role.kubernetes.io/control-plane-
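confirm the taint is gone:
$ kubectl describe node k8s-01 | grep -i taints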
CNI
[!NOTE|label:references:]
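for example Calico via its official manifest (version matched to the calicoctl installed below; the manifest's default pod CIDR may need to be aligned with the cluster's podSubnet):
$ kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico.yaml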
addons
helm
$ curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
Downloading https://get.helm.sh/helm-v3.15.4-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
TLS
# build the full chain: server certificate, then intermediate, then root
$ cat star_sample_com.crt >> star_sample_com.full.crt
$ cat DigiCertCA.crt >> star_sample_com.full.crt
$ cat TrustedRoot.crt >> star_sample_com.full.crt
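optionally sanity-check the chain before wrapping it into a secret (file names as above):
$ openssl verify -untrusted DigiCertCA.crt -CAfile TrustedRoot.crt star_sample_com.crt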
# create secret yaml
$ kubectl -n kube-system create secret tls sample-tls \
--cert star_sample_com.full.crt \
--key star_sample_com.key \
--dry-run=client -o yaml > kube-system.sample-tls.yaml
# apply secret
$ kubectl -n kube-system apply -f kube-system.sample-tls.yaml
# copy to another namespace
$ kubectl --namespace=kube-system get secrets sample-tls -o yaml |
grep -v '^\s*namespace:\s' |
kubectl apply --namespace=ingress-nginx -f -
secret/sample-tls created
kubernetes-dashboard
[!NOTE|label:references:]
install kube-dashboard
# add kubernetes-dashboard repository
$ helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
# deploy a Helm Release named "kubernetes-dashboard" using the kubernetes-dashboard chart
$ helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
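recent chart versions (v7+) front the dashboard with kong, so without an ingress it can be reached via a port-forward; service name as printed in the chart's NOTES:
$ kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
# then browse to https://localhost:8443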
ingress for kubernetes-dashboard
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - sms-k8s-dashboard.sample.com
      secretName: sample-tls
  rules:
    - host: sms-k8s-dashboard.sample.com
      http:
        paths:
          - path: /
            backend:
              service:
                # or kubernetes-dashboard-kong-proxy for latest version
                name: kubernetes-dashboard
                port:
                  number: 443
            pathType: Prefix
rbac
clusterrole
$ kubectl get clusterrole kubernetes-dashboard -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
clusterrolebinding
$ kubectl -n kube-system get clusterrolebindings kubernetes-dashboard -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
serviceaccount
$ kubectl -n kube-system get sa kubernetes-dashboard -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
generate token
$ kubectl -n kube-system create token kubernetes-dashboard
ey**********************WAA
ingress-nginx
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace
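quick check that the controller is running and exposes an entry point (resource names per the chart defaults):
$ kubectl -n ingress-nginx get pods,svc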
grafana
$ helm repo add grafana https://grafana.github.io/helm-charts
$ helm repo list
NAME URL
kubernetes-dashboard https://kubernetes.github.io/dashboard/
ingress-nginx https://kubernetes.github.io/ingress-nginx
grafana https://grafana.github.io/helm-charts
$ helm repo update
$ helm search repo grafana/grafana
$ helm install grafana grafana/grafana --namespace monitoring --create-namespace
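the chart auto-generates an admin password; retrieve it as the chart's NOTES suggest (release name and namespace as used above):
$ kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode; echo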
metrics-server
$ helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
$ helm upgrade --install metrics-server metrics-server/metrics-server --namespace monitoring --create-namespace
# without tls: https://github.com/kubernetes-sigs/metrics-server/issues/1221
$ helm upgrade metrics-server metrics-server/metrics-server --set args="{--kubelet-insecure-tls}" --namespace monitoring
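once the metrics-server pod is Ready, the metrics API should start serving node and pod usage:
$ kubectl top nodes
$ kubectl top pods -A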
teardown
[!NOTE|label:references:]
$ sudo kubeadm reset --cri-socket /var/run/crio/crio.sock --v=5 -f
$ sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
$ sudo ipvsadm -C
# cni0
$ sudo ifconfig cni0 down
$ sudo ip link delete cni0
# calico
$ sudo ifconfig vxlan.calico down
$ sudo ip link delete vxlan.calico
# flannel
$ sudo ifconfig flannel.1 down
$ sudo ip link delete flannel.1
clean up images
$ crictl rmi --prune
# or
$ crictl rmi $(crictl images -q | uniq)
troubleshooting
scheduler and controller-manager unhealthy
[!NOTE|label:references:]
- scheduler: /etc/kubernetes/manifests/kube-scheduler.yaml
- controller-manager: /etc/kubernetes/manifests/kube-controller-manager.yaml
- references:
$ sudo sed -re 's:^.+port=0$:# &:' -i /etc/kubernetes/manifests/kube-scheduler.yaml
$ sudo sed -re 's:^.+port=0$:# &:' -i /etc/kubernetes/manifests/kube-controller-manager.yaml
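the kubelet restarts static pods automatically when their manifests change, so shortly afterwards the components should report Healthy again:
$ kubectl get cs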
other references
calico tools
# calicoctl
$ curl -L https://github.com/projectcalico/calico/releases/download/v3.28.1/calicoctl-linux-amd64 -o calicoctl
$ chmod +x calicoctl
$ sudo mv calicoctl /usr/local/bin/
# kubectl-calico
$ curl -L https://github.com/projectcalico/calico/releases/download/v3.28.1/calicoctl-linux-amd64 -o kubectl-calico
$ chmod +x kubectl-calico
$ sudo mv kubectl-calico /usr/local/bin/
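sanity check (must run on a host where the calico node agent is running; the plugin form works because kubectl picks up any kubectl-* binary on PATH):
$ sudo calicoctl node status
# or
$ kubectl calico node status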
kubecolor
$ [[ -d /tmp/kubecolor ]] || mkdir -p /tmp/kubecolor
$ curl -fsSL https://github.com/hidetatz/kubecolor/releases/download/v0.0.25/kubecolor_0.0.25_Linux_x86_64.tar.gz | tar xzf - -C /tmp/kubecolor
$ sudo mv /tmp/kubecolor/kubecolor /usr/local/bin/
$ sudo chmod +x /usr/local/bin/kubecolor
bash completion
$ sudo dnf install -y bash-completion
$ sudo curl -fsSL https://github.com/marslo/dotfiles/raw/main/.marslo/.completion/complete_alias -o /etc/profile.d/complete_alias.sh
$ sudo bash -c "cat >> /etc/bashrc" << 'EOF'
alias k='kubecolor'
[[ -f /etc/profile.d/complete_alias.sh ]] && source /etc/profile.d/complete_alias.sh
command -v kubectl >/dev/null && source <(kubectl completion bash)
complete -o default -F __start_kubectl kubecolor
complete -o nosort -o bashdefault -o default -F _complete_alias $(alias | sed -rn 's/^alias ([^=]+)=.+kubec.+$/\1/p' | xargs)
EOF
kubeadm-conf.yaml
apiServer:
  certSANs:
  - k8s-master-01
  - k8s-master-02
  - k8s-master-03
  - master.k8s.io
  - 192.168.1.35
  - 192.168.1.36
  - 192.168.1.39
  - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "master.k8s.io:16443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.21.3
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.1.0.0/16
scheduler: {}
kubeadm config print init-defaults --component-configs KubeletConfiguration
$ kubeadm config print init-defaults --component-configs KubeletConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.30.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerRuntimeEndpoint: ""
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMaximumGCAge: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
    text:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s