Persistent Volumes
1.1 Volume
1.1.1 Volume Types
We just used the hostPath option to configure a directory; however, it is not recommended in a multi-node cluster, because the Pods would use the /data directory on every node and expect all of those directories to contain the same data.
Kubernetes supports several standard storage solutions such as NFS, GlusterFS, Flocker, Fibre Channel, CephFS and ScaleIO, as well as public cloud solutions like AWS EBS, Azure Disk, Azure File or Google Persistent Disk.
For example: AWS EBS
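As a sketch, a pod volume backed by AWS EBS could be defined like this (the volume ID and filesystem type are example values):

volumes:
  - name: data-volume
    awsElasticBlockStore:
      volumeID: <volume-id>
      fsType: ext4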
1.2 Persistent Volume
We configure volumes within the pod definition file. In a large environment with many pods, whatever storage solution is used, the users would have to configure it in every pod definition file, and every time it changes they would have to update all of their pods.
Instead, you would like to manage storage more centrally; that is where persistent volumes can help us.
kubectl create -f pv-definition.yaml
- There are three accessModes:
  - ReadOnlyMany
  - ReadWriteOnce
  - ReadWriteMany
- The hostPath option is not recommended in production.
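A minimal pv-definition.yaml along these lines (the name, capacity and path are example values; hostPath is used here only because it needs no external storage, per the note above):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-vol1
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /tmp/data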
To list persistent volumes:
kubectl get persistentvolume
1.3 Persistent Volume Claim
kubectl create -f pvc-definition.yaml
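A minimal pvc-definition.yaml might look like this (the requested storage size is an example value; the claim name myclaim matches the one used later in these notes):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi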
Once the Persistent Volume Claims are created, Kubernetes binds Persistent Volumes to Claims based on the request and the properties set on the volume. Kubernetes tries to find a persistent volume that has sufficient capacity as requested by the claim, as well as any other requested properties such as access modes, volume modes, storage class, etc.
However, if there are multiple possible matches for a single claim and you would like to use a particular volume, you can still use labels and selectors to bind to the right volume.
Finally, note that a smaller claim may get bound to a larger volume if all the other criteria match and there are no better options. There is a one-to-one relationship between claims and volumes, so no other claim can use the remaining capacity in the volume.
If no volumes are available, the persistent volume claim remains in a Pending state until new volumes are made available to the cluster.
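As a sketch of binding via labels and selectors, a label is set on the volume and matched by a selector on the claim (the label key and value here are illustrative):

On the persistent volume:

metadata:
  labels:
    name: my-pv

On the claim:

spec:
  selector:
    matchLabels:
      name: my-pv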
To list PVCs:
kubectl get persistentvolumeclaim
To delete a PVC:
kubectl delete persistentvolumeclaim myclaim
You can choose what happens to the underlying volume when the claim is deleted by setting persistentVolumeReclaimPolicy:
- Retain - (Default) the persistent volume will remain until it is manually deleted by the administrator.
- Delete - the persistent volume is deleted automatically, freeing up storage on the underlying device.
- Recycle - the data in the volume is scrubbed before the volume is made available to other claims. (This option is deprecated.)
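The reclaim policy is set in the spec of the persistent volume, for example:

spec:
  persistentVolumeReclaimPolicy: Retain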
Reference:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes
1.3.1 Using a PVC in a Pod
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim