ry's Tech blog

I write about Cloud Native technologies and related topics.

Building a simple Kubernetes environment with kubeadm

Let's build Kubernetes the easy way

This article was migrated from an older blog, so it is not written against the latest release, but the steps also work with the current version (1.18.0).

Environment

Master ×1 (CentOS 7, 2 vCPU, 2 GB or more of RAM)

kubemaster1.mydom.local

Worker × 2 (CentOS 7, 2 vCPU, 2 GB or more of RAM)

kubeworker1.mydom.local

kubeworker2.mydom.local

Preparation

Let's get everything needed to install kubeadm in place.

Configuring /etc/hosts and creating hosts.list

Write the server entries into the hosts file, then create hosts.list so the for loops below can iterate over the nodes.

# vi /etc/hosts

192.168.1.20 kubemaster1.mydom.local kubemaster
192.168.1.21 kubeworker1.mydom.local kubeworker1
192.168.1.22 kubeworker2.mydom.local kubeworker2

# cat /etc/hosts | grep "192.168" | awk -F' ' '{print $2}' > /root/hosts.list 
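For reference, hosts.list should now contain just the FQDN of each node:

# cat /root/hosts.list
kubemaster1.mydom.local
kubeworker1.mydom.local
kubeworker2.mydom.local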

Disabling SELinux

# for i in `cat hosts.list`; do ssh $i "sed -i 's/=enforcing/=disabled/' /etc/selinux/config"; done

Disabling firewalld

# for i in `cat hosts.list`; do ssh $i "systemctl stop firewalld; systemctl disable firewalld"; done

Disabling IPv6

# for i in `cat hosts.list`; do ssh $i 'sed -i "s/GRUB_CMDLINE_LINUX=\"/GRUB_CMDLINE_LINUX=\"ipv6.disable=1 transparent_hugepage=never /" /etc/default/grub; grub2-mkconfig -o /boot/grub2/grub.cfg'; done

Setting vm.swappiness

# for i in `cat hosts.list`; do ssh $i 'echo "vm.swappiness=1" >> /etc/sysctl.conf'; done

SSH setup

This part is a bit of a shortcut (the same key pair, private key included, is copied to every node).

# ssh-keygen
# cd ./.ssh/
# cat id_rsa.pub >> authorized_keys
# chmod 600 ~/.ssh/authorized_keys
# scp -rp .ssh root@kubeworker1.mydom.local:~/
# scp -rp .ssh root@kubeworker2.mydom.local:~/
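It is worth confirming at this point that passwordless SSH works from the master to every node; assuming each hostname is set to its FQDN, something like this should print every node's name without prompting:

# for i in `cat hosts.list`; do ssh $i hostname; done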

Distributing the hosts file

# scp -p /etc/hosts root@kubeworker1.mydom.local:/etc/hosts
# scp -p /etc/hosts root@kubeworker2.mydom.local:/etc/hosts

Miscellaneous

Configure localrepo, http, and ntp as needed.

Installing Docker and containerd (run from the Master node)

Preparation

Run the following in order.

# yum install -y yum-utils
# for i in $(cat hosts.list | sed 1d); do ssh $i 'yum install -y yum-utils'; done

# yum install -y device-mapper-persistent-data
# for i in $(cat hosts.list | sed 1d); do ssh $i 'yum install -y device-mapper-persistent-data'; done

# yum install -y lvm2
# for i in $(cat hosts.list | sed 1d); do ssh $i 'yum install -y lvm2'; done

# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# for i in $(cat hosts.list | sed 1d); do ssh $i 'yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo'; done

# for i in $(cat /root/hosts.list); do
  ssh $i 'yum clean all'
  ssh $i 'yum repolist'
  done

# for i in $(cat hosts.list); do ssh $i 'yum update -y'; done

# yum install -y container-selinux
# for i in $(cat hosts.list | sed 1d); do ssh $i 'yum install -y container-selinux'; done

# yum install -y docker-ce-18.09.6-3.el7.x86_64 docker-ce-cli-18.09.6-3.el7.x86_64 containerd.io
# for i in $(cat hosts.list | sed 1d); do ssh $i 'yum install -y docker-ce-18.09.6-3.el7.x86_64 docker-ce-cli-18.09.6-3.el7.x86_64 containerd.io'; done

# mkdir /etc/docker
# for i in $(cat hosts.list | sed 1d); do ssh $i 'mkdir /etc/docker'; done

# cat > /etc/docker/daemon.json <<EOF 
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
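Note that the heredoc above writes daemon.json only on the node where it runs. A simple way to put the identical file on the workers, reusing hosts.list as before, is:

# for i in $(cat hosts.list | sed 1d); do scp -p /etc/docker/daemon.json root@$i:/etc/docker/daemon.json; done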

Starting Docker

Now start Docker.

# mkdir -p /etc/systemd/system/docker.service.d
# for i in $(cat hosts.list | sed 1d); do ssh $i 'mkdir -p /etc/systemd/system/docker.service.d'; done

# systemctl daemon-reload
# for i in $(cat hosts.list | sed 1d); do ssh $i 'systemctl daemon-reload'; done

# systemctl enable docker; systemctl start docker

# for i in $(cat hosts.list | sed 1d); do ssh $i 'systemctl enable docker; systemctl start docker'; done

# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-07-14 01:50:22 JST; 1min 13s ago
     Docs: https://docs.docker.com
 Main PID: 19164 (dockerd)
    Tasks: 11
   Memory: 29.0M
   CGroup: /system.slice/docker.service
           mq19164 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

 Jul 14 01:50:22 kubemaster1.mydom.local dockerd[19164]: time="2019-07-14T01:50:22.317228884+09:00" level=info msg="pickfirstBalancer: HandleSubConnStateCh...ule=grpc
 Jul 14 01:50:22 kubemaster1.mydom.local dockerd[19164]: time="2019-07-14T01:50:22.318563845+09:00" level=warning msg="Using pre-4.0.0 kernel for overlay2,...overlay2
 Jul 14 01:50:22 kubemaster1.mydom.local dockerd[19164]: time="2019-07-14T01:50:22.337250856+09:00" level=info msg="Graph migration to content-addressabili...seconds"
 Jul 14 01:50:22 kubemaster1.mydom.local dockerd[19164]: time="2019-07-14T01:50:22.338023813+09:00" level=info msg="Loading containers: start."
 Jul 14 01:50:22 kubemaster1.mydom.local dockerd[19164]: time="2019-07-14T01:50:22.553369544+09:00" level=info msg="Default bridge (docker0) is assigned wi...address"
 Jul 14 01:50:22 kubemaster1.mydom.local dockerd[19164]: time="2019-07-14T01:50:22.597964337+09:00" level=info msg="Loading containers: done."
 Jul 14 01:50:22 kubemaster1.mydom.local dockerd[19164]: time="2019-07-14T01:50:22.621984713+09:00" level=info msg="Docker daemon" commit=481bc77 graphdriv...=18.09.6
 Jul 14 01:50:22 kubemaster1.mydom.local dockerd[19164]: time="2019-07-14T01:50:22.622089589+09:00" level=info msg="Daemon has completed initialization"
 Jul 14 01:50:22 kubemaster1.mydom.local dockerd[19164]: time="2019-07-14T01:50:22.630930072+09:00" level=info msg="API listen on /var/run/docker.sock"
 Jul 14 01:50:22 kubemaster1.mydom.local systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.

Installing Kubernetes

Preparation (run on all nodes)

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
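The note above says to run this on every node. If you created the file only on the master, one way to push it out to the workers instead is:

# for i in $(cat hosts.list | sed 1d); do scp -p /etc/yum.repos.d/kubernetes.repo root@$i:/etc/yum.repos.d/; done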

# yum clean all

# yum repolist

If yum repolist shows zero packages for the kubernetes repo, add sslverify=0 to kubernetes.repo.
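Since [kubernetes] is the only section in that file, simply appending the line is enough; for example:

# for i in $(cat hosts.list); do ssh $i 'echo sslverify=0 >> /etc/yum.repos.d/kubernetes.repo'; done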

Then run yum repolist again on each node.

You will be prompted as follows; answer y.

Loading mirror speeds from cached hostfile
 * base: mirrors.vcea.wsu.edu
 * extras: centos.mirror.ndchost.com
 * updates: mirror.team-cymru.com
kubernetes/signature                                                                                                                            |  454 B  00:00:00
Retrieving key from https://packages.cloud.google.com/yum/doc/yum-key.gpg
Importing GPG key 0xA7317B0F:
 Userid     : "Google Cloud Packages Automatic Signing Key <gc-team@google.com>"
 Fingerprint: d0bc 747f d8ca f711 7500 d6fa 3746 c208 a731 7b0f
 From       : https://packages.cloud.google.com/yum/doc/yum-key.gpg
Is this ok [y/N]: y
Retrieving key from https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
kubernetes/signature                                                                                                                            | 1.4 kB  00:00:07 !!!
kubernetes/primary                                                                                                                              |  52 kB  00:00:02
kubernetes

Installing the Kubernetes packages

Run the following from the master node.

# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# for i in $(cat hosts.list | sed 1d); do ssh $i 'yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes'; done

# for i in $(cat hosts.list); do ssh $i 'docker info | grep Cgroup'; done
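Since daemon.json sets native.cgroupdriver=systemd, every node should report the systemd driver here:

Cgroup Driver: systemd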

# for i in $(cat hosts.list); do ssh $i 'mkdir -p /var/lib/kubelet'; done

Next, run the following on every node.

# cat <<EOF > /var/lib/kubelet/config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
EOF

Run the following from the master node.

# for i in $(cat hosts.list); do ssh $i 'mkdir -p /etc/systemd/system/kubelet.service.d'; done

Next, run the following on every node.

# cat <<EOF > /etc/systemd/system/kubelet.service.d/20-extra-args.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
EOF

Run the following from the master node.

# for i in $(cat hosts.list); do ssh $i 'systemctl enable kubelet'; done

# for i in $(cat hosts.list); do ssh $i 'systemctl daemon-reload'; done

Building the Master Node

From here on, we build the Master Node.

Initialize

Disable swap on every node, then initialize the cluster with kubeadm.

# for i in `cat hosts.list`; do ssh $i 'swapoff -a'; done
# kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubemaster1.mydom.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.20]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubemaster1.mydom.local localhost] and IPs [192.168.1.20 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubemaster1.mydom.local localhost] and IPs [192.168.1.20 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.003966 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kubemaster1.mydom.local as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubemaster1.mydom.local as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: owmrsk.d41wmnxetwhr2hsd
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.20:6443 --token owmrsk.d41wmnxetwhr2hsd \
    --discovery-token-ca-cert-hash sha256:35e654eeeb9c125eaee57b12202aa0139729a8cd6209ef77a719c8977dc12905
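One caveat: swapoff -a only lasts until the next reboot. To keep swap disabled permanently, you can also comment out the swap entry in /etc/fstab on every node; a minimal sketch, assuming the default whitespace-separated fstab layout:

# for i in `cat hosts.list`; do ssh $i "sed -i '/ swap / s/^/#/' /etc/fstab"; done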

Next, let's get kubectl working.

Refer to the official documentation and configure things up to the point where kubectl commands work.

Installing kubectl

After that, set things up so kubectl can reach the kube-apiserver.

# mkdir -p $HOME/.kube

# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

# chown $(id -u):$(id -g) $HOME/.kube/config

# kubectl cluster-info

# kubectl get nodes
NAME                      STATUS     ROLES    AGE   VERSION
kubemaster1.mydom.local   NotReady   master   16m   v1.15.0

At this point it is fine for STATUS to remain NotReady; the node becomes Ready once a CNI is deployed.

Setting up the CNI

We will set up flannel as the CNI.

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

Let's check the state of the Pods.

# kubectl get pod --all-namespaces
NAMESPACE     NAME                                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-6955765f44-n92tg                          1/1     Running   0          14m
kube-system   coredns-6955765f44-xxfj5                          1/1     Running   0          14m
kube-system   etcd-kubemaster1.mydom.local                      1/1     Running   0          14m
kube-system   kube-apiserver-kubemaster1.mydom.local            1/1     Running   0          14m
kube-system   kube-controller-manager-kubemaster1.mydom.local   1/1     Running   0          14m
kube-system   kube-flannel-ds-amd64-szb8d                       1/1     Running   0          63s
kube-system   kube-proxy-dw5jc                                  1/1     Running   0          14m
kube-system   kube-scheduler-kubemaster1.mydom.local            1/1     Running   0          14m

Check the node status.

# kubectl get node
NAME                      STATUS   ROLES    AGE   VERSION
kubemaster1.mydom.local   Ready    master   15m   v1.15.0

Building the Worker Nodes

This is the last part.

Run the following steps on every Worker Node.

# sysctl -n net.bridge.bridge-nf-call-iptables
0

If this returned 0, run the following.

# sysctl net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-iptables = 1

# sysctl -n net.bridge.bridge-nf-call-iptables
1
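This value also reverts on reboot, and it depends on the br_netfilter kernel module being loaded. To make both survive a reboot, one option on each Worker Node is:

# echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf
# echo 'net.bridge.bridge-nf-call-iptables = 1' >> /etc/sysctl.conf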

Now join the Worker Nodes to the cluster.

# kubeadm join 192.168.1.20:6443 --token owmrsk.d41wmnxetwhr2hsd --discovery-token-ca-cert-hash sha256:35e654eeeb9c125eaee57b12202aa0139729a8cd6209ef77a719c8977dc12905
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
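If you join a node later, the bootstrap token may have expired (tokens are valid for 24 hours by default). In that case, you can print a fresh join command on the master with:

# kubeadm token create --print-join-command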

After running this on every Worker Node, you should see the following.

# kubectl get nodes
NAME                      STATUS   ROLES    AGE    VERSION
kubemaster1.mydom.local   Ready    master   163m   v1.15.0
kubeworker1.mydom.local   Ready    <none>   29s    v1.15.0
kubeworker2.mydom.local   Ready    <none>   127m   v1.15.0

In closing

You can build a cluster this easily, so give it a try when you study Kubernetes.