
Applications such as databases need storage that outlives any single pod. Kubernetes abstracts this with Persistent Volumes:

  • Also referred to as PVs.
  • A storage abstraction, implemented by volume plugins.
  • Provides storage to pods.

Term                           Description                      Example
Storage Class                  The type of storage              AWS EBS SSD, AWS EBS disk-based, etc.
Persistent Volume Claim (PVC)  A request to create a PV         "Create a 5GB GCP PD SSD for my app."
Persistent Volume (PV)         The actual, persistent storage

In short: a PVC requests storage of a given StorageClass, and a PV is provisioned to satisfy the claim.

 

Storage Class

Reference documentation for storage classes:
https://kubernetes.io/ko/docs/concepts/storage/storage-classes/

You can list all storage classes with either of the commands below (sc is the short name for storageclass).

kubectl get storageclass
kubectl get sc
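
On a kind cluster (used throughout this post, as the rancher.io/local-path provisioner in the later describe output shows), the result looks roughly like this; your cluster's classes may differ:

NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
standard (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  3d22h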

 

Access Modes

  • ReadWriteOnce (RWO): read-write mount by a single node
  • ReadWriteOncePod (RWOP): read-write mount by a single pod
  • ReadOnlyMany (ROX): read-only mount by many nodes
  • ReadWriteMany (RWX): read-write mount by many nodes

 

Persistent Volume Claim

Create the file below:

#01-simple-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5G


kubectl apply -f 01-simple-pvc.yaml


After creating it, verify with the command below:

kubectl get pvc

# result
NAME     STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
my-pvc   Pending                                      standard       <unset>                 25s

 

The status is Pending; let's dig in with describe:

kubectl describe pvc

# result
Name:          my-pvc
Namespace:     default
StorageClass:  standard
Status:        Pending
Volume:
Labels:        <none>
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type    Reason                Age               From                         Message
  ----    ------                ----              ----                         -------
  Normal  WaitForFirstConsumer  4s (x6 over 74s)  persistentvolume-controller  waiting for first consumer to be created before binding


The claim is still waiting because no pod is using it yet: the storage class's volumeBindingMode is WaitForFirstConsumer, so the volume is only provisioned once a consumer pod is scheduled.
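
For reference, a storage class with this binding mode looks like the sketch below. The provisioner matches what the describe output reports (rancher.io/local-path, kind's default); treat the exact manifest as an approximation of the cluster's built-in standard class:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: rancher.io/local-path      # kind's default local-path provisioner
reclaimPolicy: Delete                   # PV is deleted together with its PVC
volumeBindingMode: WaitForFirstConsumer # provision only when a pod uses the claim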
Now create the file below:

#02-pod-with-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  terminationGracePeriodSeconds: 1
  restartPolicy: Never
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: pod-volume
  volumes:
    - name: pod-volume
      persistentVolumeClaim:
        claimName: my-pvc



kubectl apply -f 02-pod-with-pvc.yaml


After applying, check again:

kubectl get pvc

# result
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
my-pvc   Bound    pvc-27af6678-3a6c-4059-9c53-8404d125af4d   5G         RWO            standard       <unset>                 9m45s



kubectl describe pvc

# result
Name:          my-pvc
Namespace:     default
StorageClass:  standard
Status:        Bound
Volume:        pvc-27af6678-3a6c-4059-9c53-8404d125af4d
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: rancher.io/local-path
               volume.kubernetes.io/selected-node: dev-cluster-worker
               volume.kubernetes.io/storage-provisioner: rancher.io/local-path
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      5G
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       my-pod
Events:
  Type    Reason                 Age                   From                         Message
  ----    ------                 ----                  ----                         -------
  Normal  WaitForFirstConsumer   3m48s (x26 over 10m)  persistentvolume-controller  waiting for first consumer to be created before binding
  Normal  Provisioning           104s                  rancher.io/local-path_local-path-provisioner-57c5987fd4-2k8pk_20c2ba64-75fe-4e53-83b3-a9646cfc0d09  External provisioner is provisioning volume for claim "default/my-pvc"
  Normal  ProvisioningSucceeded  102s                  rancher.io/local-path_local-path-provisioner-57c5987fd4-2k8pk_20c2ba64-75fe-4e53-83b3-a9646cfc0d09  Successfully provisioned volume pvc-27af6678-3a6c-4059-9c53-8404d125af4d


Let's fetch the volume that was actually provisioned:

kubectl get pv
# result
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-27af6678-3a6c-4059-9c53-8404d125af4d   5G         RWO            Delete           Bound    default/my-pvc   standard       <unset>    2m27s


kubectl describe pv
# result
Name:              pvc-27af6678-3a6c-4059-9c53-8404d125af4d
Labels:            <none>
Annotations:       pv.kubernetes.io/provisioned-by: rancher.io/local-path
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      standard
Status:            Bound
Claim:             default/my-pvc
Reclaim Policy:    Delete
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          5G
Node Affinity:
  Required Terms:
    Term 0:        kubernetes.io/hostname in [dev-cluster-worker]
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /var/local-path-provisioner/pvc-27af6678-3a6c-4059-9c53-8404d125af4d_default_my-pvc
    HostPathType:  DirectoryOrCreate
Events:            <none>


Now let's enter the Docker container (the kind worker node) where the PV actually lives:

# pick the worker node from the describe output above
docker exec -it dev-cluster-worker bash
# the path below also comes from the describe output above
cd /var/local-path-provisioner/pvc-27af6678-3a6c-4059-9c53-8404d125af4d_default_my-pvc

# create a file
echo "Hello PVC" > index.html

# leave the container
exit


Now let's verify it via port-forward:

kubectl port-forward my-pod 8080:80


Open http://localhost:8080/ and you should see "Hello PVC".
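
You can also read the file from inside the pod without port-forwarding, using a plain kubectl exec against the mount path from the manifest:

kubectl exec my-pod -- cat /usr/share/nginx/html/index.html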

Now delete the pod and see what happens:

kubectl delete -f 02-pod-with-pvc.yaml
kubectl get pvc
kubectl get pv


Even after the pod is deleted, the PVC and PV are still there. Recreate the pod:

kubectl apply -f 02-pod-with-pvc.yaml
kubectl port-forward my-pod 8080:80

 

Once the pod is recreated and port-forwarded, "Hello PVC" is still served: the data outlived the pod.

 

Persistent Volume - Delete

Delete the pod and the PVC:

kubectl delete -f 02-pod-with-pvc.yaml
kubectl delete -f 01-simple-pvc.yaml


Now check:

kubectl get pv
kubectl get pvc


Deleting the PVC deletes the PV automatically as well, because this PV's reclaim policy is Delete (see the describe output above).
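
If you want the underlying volume to survive claim deletion, the reclaim policy can be switched to Retain before deleting the PVC. This is the standard kubectl patch from the Kubernetes docs; substitute your actual PV name:

kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'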

 

Deployment with PVC

What happens when a Deployment runs 3 replicas that all write to the same file inside one PV?
Create the file below and find out:

#03-deployment-with-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5G
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deploy
spec:
  selector:
    matchLabels:
      app: my-app
  replicas: 3
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: nginx
        image: nginx
        startupProbe: # for demo
          exec:
            command:
              - "/bin/sh"
              - "-c"
              - 'echo "$(hostname)</br>" >> /usr/share/nginx/html/index.html'
        volumeMounts:
          - mountPath: /usr/share/nginx/html
            name: pod-volume
      volumes:
        - name: pod-volume
          persistentVolumeClaim:
            claimName: my-pvc



kubectl apply -f 03-deployment-with-pvc.yaml


The commands below show that only one PVC and one PV were created: all three replicas share the same claim. Note that ReadWriteOnce restricts mounting to a single node, not a single pod, so this works as long as all replicas are scheduled onto the same worker.

kubectl get pvc
kubectl get pv
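
To confirm that all replicas did land on one node (which is what makes the shared RWO mount possible), a plain kubectl query is enough; the NODE column should show the same worker for every pod:

kubectl get pods -o wide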

 

Port-forward and check:

kubectl port-forward deploy/my-deploy 8080:80

# open localhost:8080
my-deploy-85dbd57976-49pdl
my-deploy-85dbd57976-8r9mh
my-deploy-85dbd57976-xc2jh

 

The output shows that all three pods wrote to the same file.
Use this pattern when shared storage is what you want, and be careful when it is not.
To give each pod its own PV, use a StatefulSet.

 

StatefulSet

Conceptually it is similar to a Deployment:

# 04-simple-ss.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-ss
spec:
  selector:
    matchLabels:
      app: my-app
  replicas: 3
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: nginx
          image: nginx:1.14



kubectl apply -f 04-simple-ss.yaml


Create and apply the file, then watch the resources: the pods come up one at a time, in order.

watch -t -x kubectl get all

# result
NAME          READY   STATUS    RESTARTS   AGE
pod/my-ss-0   1/1     Running   0          116s
pod/my-ss-1   1/1     Running   0          109s
pod/my-ss-2   1/1     Running   0          102s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   3d22h

NAME                     READY   AGE
statefulset.apps/my-ss   3/3     116s


Notice that the pods are not given random names; each name ends with a stable ordinal index.
Also, the pods are created sequentially rather than all at once.
Now delete one pod:

kubectl delete pod/my-ss-1

# result
NAME          READY   STATUS    RESTARTS   AGE
pod/my-ss-0   1/1     Running   0          3m44s
pod/my-ss-1   1/1     Running   0          19s
pod/my-ss-2   1/1     Running   0          3m30s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   3d22h

NAME                     READY   AGE
statefulset.apps/my-ss   3/3     3m44s


Even after deletion, the pod is recreated with the same name instead of a random one.
Next, set replicas to 5, apply, then change it back to 2 and apply again (see the commands below):
pods are removed starting from the highest ordinal.
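
Instead of editing the manifest each time, the same experiment can be run imperatively with standard kubectl scale:

kubectl scale statefulset my-ss --replicas=5
kubectl scale statefulset my-ss --replicas=2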

NAME          READY   STATUS    RESTARTS   AGE
pod/my-ss-0   1/1     Running   0          5m13s
pod/my-ss-1   1/1     Running   0          108s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   3d22h

NAME                     READY   AGE
statefulset.apps/my-ss   2/2     5m13s

 

This time, change the image:

...
    spec:
      containers:
        - name: nginx
          image: nginx

 

Apply it, and the pods are updated starting from the highest ordinal.
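
This ordering comes from the StatefulSet's default RollingUpdate strategy. As a sketch of the relevant API fields (not part of the manifest above), a partition can hold back the rollout for lower ordinals:

spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 0   # only pods with ordinal >= partition are updated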

 

StatefulSet with Service

#05-ss-svc.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-ss
spec:
  selector:
    matchLabels:
      app: my-app
  replicas: 3
  template:
    metadata:
      labels:
        app: my-app
    spec:
      terminationGracePeriodSeconds: 1
      containers:
        - name: nginx
          image: vinsdocker/nginx-gke
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  terminationGracePeriodSeconds: 1
  containers:
    - name: demo
      image: vinsdocker/util
      args:
        - "sleep"
        - "3600"


kubectl apply -f 05-ss-svc.yaml

 

Now exec into demo-pod and make a few requests:

kubectl exec -it demo-pod -- bash

# result
root@demo-pod:/# curl nginx
<H1>Hello my-ss-1 </H1>
root@demo-pod:/# curl nginx
<H1>Hello my-ss-2 </H1>
root@demo-pod:/# curl nginx
<H1>Hello my-ss-1 </H1>
root@demo-pod:/# curl nginx
<H1>Hello my-ss-0 </H1>

 

As you can see, the Service load-balances requests across the pods.
But what if you need to reach one specific pod?

 

Headless Service

A headless service has no cluster IP, and kube-proxy does no load balancing for it.
Instead, DNS records are created per pod in the form <pod-name>.<svc-name> (fully qualified: <pod-name>.<svc-name>.<namespace>.svc.cluster.local). Note that the StatefulSet below also sets serviceName: nginx, which is what ties the pods' DNS entries to this service.

# 06-ss-headless-svc.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-ss
spec:
  serviceName: nginx
  selector:
    matchLabels:
      app: my-app
  replicas: 3
  template:
    metadata:
      labels:
        app: my-app
    spec:
      terminationGracePeriodSeconds: 1
      containers:
        - name: nginx
          image: vinsdocker/nginx-gke
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ClusterIP
  clusterIP: None # headless service
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  terminationGracePeriodSeconds: 1
  containers:
    - name: demo
      image: vinsdocker/util
      args:
        - "sleep"
        - "3600"



kubectl apply -f 06-ss-headless-svc.yaml


Now exec into the pod and look around:

kubectl exec -it demo-pod -- bash

# result
root@demo-pod:/# nslookup nginx
Server:         10.96.0.10
Address:        10.96.0.10#53

Name:   nginx.default.svc.cluster.local
Address: 10.244.2.10
Name:   nginx.default.svc.cluster.local
Address: 10.244.2.9
Name:   nginx.default.svc.cluster.local
Address: 10.244.1.19

# resolve a specific pod's IP
root@demo-pod:/# nslookup my-ss-0.nginx
Server:         10.96.0.10
Address:        10.96.0.10#53

Name:   my-ss-0.nginx.default.svc.cluster.local
Address: 10.244.2.9

 

Now make requests:

root@demo-pod:/# curl nginx
<H1>Hello my-ss-1 </H1>
root@demo-pod:/# curl nginx
<H1>Hello my-ss-1 </H1>
root@demo-pod:/# curl my-ss-0.nginx
<H1>Hello my-ss-0 </H1>
root@demo-pod:/# curl my-ss-0.nginx
<H1>Hello my-ss-0 </H1>

 

This way, requests can be sent to one specific pod.
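
The fully qualified name from the nslookup output works as well, assuming the default namespace used throughout this post:

curl my-ss-0.nginx.default.svc.cluster.local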

 

Dynamic Persistent Volume Claim

Create and apply the file below; the volumeClaimTemplates section creates a dedicated PVC per pod:

# 07-ss-pvc.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-ss
spec:
  selector:
    matchLabels:
      app: my-app
  replicas: 3
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: nginx
        image: nginx
        startupProbe: # for demo
          exec:
            command:
              - "/bin/sh"
              - "-c"
              - 'echo "$(hostname)</br>" >> /usr/share/nginx/html/index.html'
        volumeMounts:
          - mountPath: /usr/share/nginx/html
            name: my-pvc
  volumeClaimTemplates:
    - metadata:
        name: my-pvc
      spec:
        storageClassName: standard
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1G



kubectl apply -f 07-ss-pvc.yaml

 

The commands below show that each pod received its own PVC and PV. The PVCs are named <template-name>-<pod-name>, e.g. my-pvc-my-ss-0:

kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
my-pvc-my-ss-0   Bound    pvc-624f0b5a-89e2-40e4-9ab9-7f260e6d7436   1G         RWO            standard       <unset>                 39s
my-pvc-my-ss-1   Bound    pvc-cd5916e7-f20d-4414-a3c7-eadb6e6c466a   1G         RWO            standard       <unset>                 22s
my-pvc-my-ss-2   Bound    pvc-c31dd928-3f4d-4c65-9cc9-8606db0b23b4   1G         RWO            standard       <unset>                 6s

kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-624f0b5a-89e2-40e4-9ab9-7f260e6d7436   1G         RWO            Delete           Bound    default/my-pvc-my-ss-0   standard       <unset>                          69s
pvc-c31dd928-3f4d-4c65-9cc9-8606db0b23b4   1G         RWO            Delete           Bound    default/my-pvc-my-ss-2   standard       <unset>                          37s
pvc-cd5916e7-f20d-4414-a3c7-eadb6e6c466a   1G         RWO            Delete           Bound    default/my-pvc-my-ss-1   standard       <unset>                          52s

 

Port-forward to each pod to verify its own storage:

kubectl port-forward pod/my-ss-0 8080:80
# open localhost:8080
my-ss-0

kubectl port-forward pod/my-ss-1 8080:80
# open localhost:8080
my-ss-1

kubectl port-forward pod/my-ss-2 8080:80
# open localhost:8080
my-ss-2


If you delete the resources and apply them again (see the commands below), no new PVCs are created;
the existing ones are reused.
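
Spelled out, the delete-and-reapply step uses the same manifest:

kubectl delete -f 07-ss-pvc.yaml
kubectl apply -f 07-ss-pvc.yaml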
Port-forward again and each page now shows its hostname twice, since the startup probe appended it once per pod start:

kubectl port-forward pod/my-ss-0 8080:80
# open localhost:8080
my-ss-0
my-ss-0

kubectl port-forward pod/my-ss-1 8080:80
# open localhost:8080
my-ss-1
my-ss-1

kubectl port-forward pod/my-ss-2 8080:80
# open localhost:8080
my-ss-2
my-ss-2

 

Note that PVCs created from volumeClaimTemplates are not removed when the StatefulSet is deleted. You can delete them all with:

kubectl delete pvc --all
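
Because the reclaim policy is Delete, the corresponding PVs are cleaned up along with the claims; an empty list from the command below confirms it:

kubectl get pv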



