===== Step 1: Create StorageClass with WaitForFirstConsumer Binding Mode =====
According to the docs, persistent local volumes require a binding mode of WaitForFirstConsumer. The only way to assign the volumeBindingMode to a persistent volume seems to be to create a StorageClass with the respective volumeBindingMode and to assign that StorageClass to the persistent volume. Let us start with the StorageClass:

//storageClass.yaml//
<code yaml>
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
</code>
<code bash>
kubectl create -f storageClass.yaml
</code>
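
We can double-check the binding mode of the new StorageClass:
<code bash>
kubectl get storageclass my-local-storage
</code>
On recent kubectl versions, the output lists <color #00a2e8>WaitForFirstConsumer</color> in the VOLUMEBINDINGMODE column.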
===== Step 2: Create Local Persistent Volume =====
Since the storage class is available now, we can create a local persistent volume that references the storage class we have just created. The local path /mnt/disk/vol1 below is just an example; it must match the directory we prepare on the node:

//persistentVolume.yaml//
<code yaml>
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-local-pv
spec:
  capacity:
    storage: 500Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: my-local-storage
  local:
    path: /mnt/disk/vol1   # example path; must match the directory prepared on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1
</code>
  
<note tip>Note: You might need to replace the hostname value „<color #ffaec9>node1</color>“ in the nodeAffinity section with the name of the node that matches your environment.

The „hostPath“ we had defined in our post is replaced by the so-called „<color #c8bfe7>local path</color>“.</note>
  
Similar to what we have done in the case of a hostPath volume in our post, we need to prepare the volume on node1 before we create the persistent local volume on the master:
<code bash>
# on the node where the POD will be located (node1 in our case);
# the directory is an example and must match the local path in persistentVolume.yaml:
mkdir -p /mnt/disk/vol1
chmod 777 /mnt/disk/vol1
</code>
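
With the directory in place, we can create the persistent volume on the master:
<code bash>
kubectl create -f persistentVolume.yaml
</code>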
===== Step 3: Create a Persistent Volume Claim =====
Similar to hostPath volumes, we now create a persistent volume claim that describes the volume requirements. One of the requirements is that the persistent volume has the <color #00a2e8>volumeBindingMode: WaitForFirstConsumer</color>. We can assure this by referencing the previously created StorageClass:

//persistentVolumeClaim.yaml//
<code yaml>
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: my-local-storage
  resources:
    requests:
      storage: 500Gi
</code>
<code bash>
kubectl create -f persistentVolumeClaim.yaml
</code>
  
From the point of view of the persistent volume claim, this is the only difference between a local volume and a host volume.
However, unlike our observations about host volumes in the post, the persistent volume claim is not bound to the persistent volume automatically. Instead, the volume will remain „Available“ until the first consumer shows up:
<code bash>
# kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS       REASON   AGE
my-local-pv   500Gi      RWO            Retain           Available           my-local-storage            3m
</code>
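
Accordingly, the claim itself stays in „Pending“ status for now (expected output; details like AGE will differ):
<code bash>
# kubectl get pvc
NAME       STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS       AGE
my-claim   Pending                                      my-local-storage   3m
</code>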
  
Okay, let us perform the last required step to complete the described picture. The only missing piece is the POD, which we will create now (the httpd image and its mount path below serve as examples):

//http-pod.yaml//
<code yaml>
apiVersion: v1
kind: Pod
metadata:
  name: www
spec:
  containers:
  - name: www
    image: httpd                             # example web server image
    volumeMounts:
    - name: local-volume
      mountPath: /usr/local/apache2/htdocs   # example: httpd's document root
  volumes:
  - name: local-volume
    persistentVolumeClaim:
      claimName: my-claim
</code>
<code bash>
kubectl create -f http-pod.yaml
</code>
  
This should yield:
<code bash>pod/www created</code>

Before, we have seen that the persistent volume claim was not yet bound to a persistent volume. Now we expect the binding to happen, since the last missing piece of the puzzle has fallen into place:
<code bash>
# kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS       REASON   AGE
my-local-pv   500Gi      RWO            Retain           Bound    default/my-claim   my-local-storage            10m
</code>

Yes, we can see that the volume is now bound to the claim named „default/my-claim“. Since we have not chosen any namespace, the claim is located in the „default“ namespace.
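
The claim reflects the same state (expected output; details will differ):
<code bash>
# kubectl get pvc
NAME       STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS       AGE
my-claim   Bound    my-local-pv   500Gi      RWO            my-local-storage   10m
</code>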

The POD is up and running:
<code bash>
# kubectl get pods
NAME     READY   STATUS    RESTARTS   AGE
www      1/1     Running   0          3m29s
</code>
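
As a final check, we can write a file into the local volume on node1 and read it back through the POD (a sketch that assumes the example paths and image from above):
<code bash>
# on node1: place a test file into the local volume
echo 'hello from the local volume' > /mnt/disk/vol1/index.html

# on the master: read the file from inside the POD
kubectl exec www -- cat /usr/local/apache2/htdocs/index.html
</code>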

===== Summary =====
In this post, we have shown that Kubernetes local volumes can be run on multi-node clusters without the need to pin PODs to certain nodes explicitly. Still, local volumes with their node affinity rules make sure that a POD is bound to a certain node implicitly. Kubernetes local volumes have the following features:

  * Persistent volume claims will wait for a POD to show up before a local persistent volume is bound
  * Once a persistent local volume is bound to a claim, it remains bound, even if the requesting POD has died or has been deleted
  * A new POD can attach to the existing data in a local volume by referencing the same persistent volume claim (see the sketch after this list)
  * Similar to NFS shares, Kubernetes persistent local volumes allow multiple PODs to have read/write access
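
For instance, a second POD can attach to the data of the first one by referencing the same claim (a minimal sketch, assuming the example image and mount path used above):
<code yaml>
apiVersion: v1
kind: Pod
metadata:
  name: www2              # hypothetical second POD
spec:
  containers:
  - name: www2
    image: httpd          # example image, as above
    volumeMounts:
    - name: local-volume
      mountPath: /usr/local/apache2/htdocs
  volumes:
  - name: local-volume
    persistentVolumeClaim:
      claimName: my-claim   # same claim, therefore same local volume and data
</code>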

Kubernetes local persistent volumes work well in clustered Kubernetes environments without the need to explicitly bind a POD to a certain node. However, the POD is bound to the node implicitly by referencing a persistent volume claim that points to the local persistent volume. Once a node has died, the data of all local volumes on that node is lost. In that sense, Kubernetes local persistent volumes cannot compete with distributed solutions like GlusterFS and Portworx volumes.