
[kubernetes] Adding control-plane nodes for load balancing

by Yoon_estar 2025. 1. 11.

Overview

  • In the environment below, the plan is to add two control-plane nodes for load balancing.
# k get no
NAME            STATUS     ROLES           AGE   VERSION
kubemaster210   Ready      control-plane   61d   v1.28.15
kubenode211     NotReady   <none>          61d   v1.28.15
kubenode212     Ready      <none>          61d   v1.28.15
kubenode213     Ready      <none>          61d   v1.28.15
kubenode214     Ready      <none>          61d   v1.28.15

 

Environment

OS: Ubuntu 24.04

Kubernetes version: v1.28.15

Master (run on the existing control-plane node)

Check the existing control-plane node information

# kubeadm init phase upload-certs --upload-certs
I0111 17:47:05.903635 3408118 version.go:256] remote version is much newer: v1.32.0; falling back to: stable-1.28
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
4a148031888af8e7dc0d701ea6120f1729b12b080829166cd9507ada666f0477

#  kubeadm token create --print-join-command
kubeadm join 192.168.207.210:6443 --token abhj3i.riijmng0wmyt4jga --discovery-token-ca-cert-hash sha256:07c4ede398ee4ed3e3077925e2237f0484576e0372a79dd6502d21b5e6103973
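The two outputs above have to be combined into a single control-plane join command: the token and CA hash come from --print-join-command, and the certificate key from upload-certs. A minimal sketch of stitching them together (note the uploaded certificate key expires after two hours by default, so it may need regenerating before the join):

# Assemble the full control-plane join command from the two commands above.
JOIN_CMD=$(kubeadm token create --print-join-command)
CERT_KEY=$(kubeadm init phase upload-certs --upload-certs 2>/dev/null | tail -1)
echo "${JOIN_CMD} --control-plane --certificate-key ${CERT_KEY}"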


Set the control node endpoint (edit files)

kubectl edit configmap -n kube-system kubeadm-config


apiVersion: v1
data:
  ClusterConfiguration: |
    apiServer:
      extraArgs:
        authorization-mode: Node,RBAC
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta3
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controllerManager: {}
    dns: {}
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: registry.k8s.io
    kind: ClusterConfiguration
    kubernetesVersion: v1.28.15
    networking:
      dnsDomain: cluster.local
      podSubnet: 10.96.0.0/12
      serviceSubnet: 10.96.0.0/12
    scheduler: {}
    controlPlaneEndpoint: "192.168.207.210:6443"  # add this line
kind: ConfigMap
metadata:
  creationTimestamp: "2024-11-11T04:25:05Z"
  name: kubeadm-config
  namespace: kube-system
  resourceVersion: "8799587"
  uid: d5fc1d8c-6bf5-42e8-bb24-89fe4ab2cb7a
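After saving, it is worth confirming the new field actually landed in the ClusterConfiguration:

# Check that controlPlaneEndpoint is now present in the kubeadm-config ConfigMap.
kubectl -n kube-system get cm kubeadm-config -o yaml | grep controlPlaneEndpoint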

 

vi /etc/kubernetes/kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: "192.168.207.210:6443" # load balancer address
networking:
  podSubnet: "192.168.0.0/16"
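This local file duplicates the endpoint so it can be passed to later kubeadm phases. A hedged sketch, assuming it is used to re-upload the control-plane certificates under the new endpoint (whether this exact step was run is not recorded in the original post):

# Assumption: re-upload certs using the local ClusterConfiguration above.
kubeadm init phase upload-certs --upload-certs --config /etc/kubernetes/kubeadm-config.yaml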

Delete the existing kube-apiserver pod

kubectl delete pod -n kube-system kube-apiserver-kubemaster210
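kube-apiserver runs as a static pod, so this delete only removes the mirror pod object; the kubelet immediately recreates it from /etc/kubernetes/manifests/kube-apiserver.yaml. Watching the pod come back confirms the control plane is healthy again:

# The kubelet recreates the static pod from its manifest; watch it return to Running.
kubectl -n kube-system get pod kube-apiserver-kubemaster210 -w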

 

Run on the node being added (an error occurs)

  • Run the join with the --control-plane option added
kubeadm join 192.168.207.210:6443 --token ibp92e.4gp3lwvy36cppehx \
    --discovery-token-ca-cert-hash sha256:07c4ede398ee4ed3e3077925e2237f0484576e0372a79dd6502d21b5e6103973 \
    --control-plane --certificate-key 8144e364f98c59cda8c9acd3fe621988764ca14021b1c62ea88721d35e9c14c1

 

  • The join failed with the log below; it looks like a port conflict (a quick way to confirm is sketched after the log).
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Port-6443]: Port 6443 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
root@kubemaster230:~#
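A diagnostic sketch (not from the original post) for finding what holds the port on the joining node; in this setup an HAProxy instance bound to 6443 would be the likely culprit:

# Show which process is listening on 6443.
ss -lntp | grep 6443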

 

Proxy configuration

  • Install HAProxy and add the following at the very bottom of the configuration file. (The frontend bind port was 6443 by default, but the port error kept recurring, presumably because kube-apiserver itself needs 6443 on each control-plane node, so it was changed to 16443.)
apt -y install haproxy 
vi /etc/haproxy/haproxy.cfg
------------------------------------

frontend kubernetes-frontend
    bind *:16443
    default_backend kubernetes-backend

backend kubernetes-backend
    balance roundrobin
    server kubemaster210 192.168.207.210:6443 check
    server kubemaster220 192.168.207.220:6443 check
    server kubemaster230 192.168.207.230:6443 check
------------------------------------

systemctl restart haproxy.service
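After the restart, the new frontend can be verified locally. A sketch, assuming the default unauthenticated access to /healthz is still in place:

# Confirm HAProxy listens on 16443 and can reach a healthy apiserver backend.
ss -lntp | grep 16443
curl -k https://127.0.0.1:16443/healthz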

 

Pull the images

kubeadm config images pull
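Optionally, the exact image set for this version can be listed before pulling:

# List the control-plane images kubeadm will pull for this version.
kubeadm config images list --kubernetes-version v1.28.15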

 

Join command for the new node (the above error resolved)
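The original post does not record the final command. A hedged sketch, assuming it is the same control-plane join as before, now passing preflight because HAProxy no longer occupies port 6443 (the bracketed values are placeholders for a freshly generated token and certificate key):

# Placeholders: substitute the token, hash, and certificate key generated earlier.
kubeadm join 192.168.207.210:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<ca-cert-hash> \
    --control-plane --certificate-key <certificate-key>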

 

Done