Deploying an Application

Overview

This document provides step-by-step instructions for deploying test applications using different types of Persistent Volumes (PVs) in a Kubernetes environment, focusing on Replicated PV Mayastor, Local PV Hostpath, Local PV LVM, and Local PV ZFS. Each section guides you through provisioning the appropriate PersistentVolumeClaim (PVC), deploying a test Pod (such as one using FIO or BusyBox), and verifying the volume status and pod health.

These procedures are useful for validating volume performance, understanding storage configurations, and confirming the correct functioning of dynamic provisioning through custom StorageClasses.

Deploying an Application - Replicated PV Mayastor

Deploy a Test Pod Using FIO

After Replicated PV Mayastor has been deployed and verified, begin by provisioning a PVC using the configured StorageClass. Then, deploy a simple test pod using the FIO utility to perform read/write operations on the volume.

The CSI driver makes a best-effort attempt to co-locate the volume target on the same node where the application pod is scheduled, thereby ensuring data gravity and enhancing availability during node failures.

The following YAML references a PVC named ms-volume-claim. If your PVC name differs, update the claimName field accordingly.
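
If you have not yet created the claim, a minimal PVC matching the sample output further below might look like the following sketch; the StorageClass name mayastor-1 and the 1Gi request are taken from that output, so adjust them to your environment.

Example PVC YAML
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ms-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: mayastor-1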

FIO Pod YAML
kind: Pod
apiVersion: v1
metadata:
  name: fio
spec:
  nodeSelector:
    openebs.io/engine: mayastor
  volumes:
    - name: ms-volume
      persistentVolumeClaim:
        claimName: ms-volume-claim
  containers:
    - name: fio
      image: nixery.dev/shell/fio
      args:
        - sleep
        - "1000000"
      volumeMounts:
        - mountPath: "/volume"
          name: ms-volume
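
Assuming you save the Pod spec above as fio.yaml (an illustrative filename), apply it to create the test pod.

Create the FIO Test Pod
kubectl apply -f fio.yaml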

Verify the PVC and Volume Resources

Before running any tests, confirm that all components have been correctly created and are in a healthy state.

  1. Verify the PVC Status.

    Check the PVC Status
    kubectl get pvc ms-volume-claim
    Sample Output
    NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
    ms-volume-claim     Bound    pvc-fe1a5a16-ef70-4775-9eac-2f9c67b3cd5b   1Gi        RWO            mayastor-1       15s

    The PersistentVolumeClaim should show a Bound status.

  2. Verify the Corresponding PV.

    Replace the volume name in the command below with the one returned in your kubectl get pvc output.

    Check the PV
    kubectl get pv pvc-fe1a5a16-ef70-4775-9eac-2f9c67b3cd5b
    Sample Output
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS     REASON   AGE
    pvc-fe1a5a16-ef70-4775-9eac-2f9c67b3cd5b   1Gi        RWO            Delete           Bound    default/ms-volume-claim     mayastor-1       16m
  3. Verify Replicated PV Mayastor Volume Status.

    Use the Replicated PV Mayastor plugin to inspect the volume and confirm that its status is Online.

    Check the Volume Status
    kubectl puls8 mayastor get volumes
    Sample Output
    ID                                      REPLICAS    TARGET-NODE                  ACCESSIBILITY    STATUS    SIZE
    18e30e83-b106-4e0d-9fb6-2b04e761e18a    3           aks-agentpool-12194210-0     nvmf             Online    1073741824 
  4. Verify that the FIO test pod has been deployed and is in a Running state.

    The pod may initially appear in a ContainerCreating state before transitioning to Running.

    Confirm Pod Deployment
    kubectl get pod fio
    Sample Output
    NAME   READY   STATUS    RESTARTS   AGE
    fio    1/1     Running   0          34s

Run the FIO Test Application

Run the FIO tool from within the test pod to simulate random read/write operations and measure I/O performance.

Execute the FIO Test
kubectl exec -it fio -- fio --name=benchtest --size=800m --filename=/volume/test --direct=1 --rw=randrw --ioengine=libaio --bs=4k --iodepth=16 --numjobs=8 --time_based --runtime=60
Sample Output
benchtest: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=16
fio-3.20
Starting 1 process
benchtest: Laying out IO file (1 file / 800MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=376KiB/s,w=340KiB/s][r=94,w=85 IOPS][eta 00m:00s]
benchtest: (groupid=0, jobs=1): err= 0: pid=19: Thu Aug 27 20:31:49 2020
  read: IOPS=679, BW=2720KiB/s (2785kB/s)(159MiB/60011msec)
    slat (usec): min=6, max=19379, avg=33.91, stdev=270.47
    clat (usec): min=2, max=270840, avg=9328.57, stdev=23276.01
     lat (msec): min=2, max=270, avg= 9.37, stdev=23.29
    clat percentiles (msec):
     |  1.00th=[    3],  5.00th=[    3], 10.00th=[    4], 20.00th=[    4],
     | 30.00th=[    4], 40.00th=[    4], 50.00th=[    4], 60.00th=[    4],
     | 70.00th=[    4], 80.00th=[    4], 90.00th=[    7], 95.00th=[   45],
     | 99.00th=[  136], 99.50th=[  153], 99.90th=[  165], 99.95th=[  178],
     | 99.99th=[  213]
   bw (  KiB/s): min=  184, max= 9968, per=100.00%, avg=2735.00, stdev=3795.59, samples=119
   iops        : min=   46, max= 2492, avg=683.60, stdev=948.92, samples=119
  write: IOPS=678, BW=2713KiB/s (2778kB/s)(159MiB/60011msec); 0 zone resets
    slat (usec): min=6, max=22191, avg=45.90, stdev=271.52
    clat (usec): min=454, max=241225, avg=14143.39, stdev=34629.43
     lat (msec): min=2, max=241, avg=14.19, stdev=34.65
    clat percentiles (msec):
     |  1.00th=[    3],  5.00th=[    3], 10.00th=[    3], 20.00th=[    3],
     | 30.00th=[    3], 40.00th=[    3], 50.00th=[    3], 60.00th=[    3],
     | 70.00th=[    3], 80.00th=[    4], 90.00th=[   22], 95.00th=[  110],
     | 99.00th=[  155], 99.50th=[  157], 99.90th=[  169], 99.95th=[  197],
     | 99.99th=[  228]
   bw (  KiB/s): min=  303, max= 9904, per=100.00%, avg=2727.41, stdev=3808.95, samples=119
   iops        : min=   75, max= 2476, avg=681.69, stdev=952.25, samples=119
  lat (usec)   : 4=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.02%, 4=82.46%, 10=7.20%, 20=1.62%, 50=1.50%
  lat (msec)   : 100=2.58%, 250=4.60%, 500=0.01%
  cpu          : usr=1.19%, sys=3.28%, ctx=134029, majf=0, minf=17
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=40801,40696,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: bw=2720KiB/s (2785kB/s), 2720KiB/s-2720KiB/s (2785kB/s-2785kB/s), io=159MiB (167MB), run=60011-60011msec
  WRITE: bw=2713KiB/s (2778kB/s), 2713KiB/s-2713KiB/s (2778kB/s-2778kB/s), io=159MiB (167MB), run=60011-60011msec

Disk stats (read/write):
  sdd: ios=40795/40692, merge=0/9, ticks=375308/568708, in_queue=891648, util=99.53%

If no errors appear in the output, it confirms that the Replicated PV Mayastor volume is functioning correctly and is able to handle I/O operations as expected.

Deploying an Application - Local PV Hostpath

Creating a Pod

  1. Save the following Pod configuration as local-hostpath-pod.yaml. This Pod will use the Local Persistent Volume:

    Pod Configuration
    apiVersion: v1
    kind: Pod
    metadata:
      name: hello-local-hostpath-pod
    spec:
      volumes:
      - name: local-storage
        persistentVolumeClaim:
          claimName: local-hostpath-pvc
      containers:
      - name: hello-container
        image: busybox
        command:
           - sh
           - -c
           - 'while true; do echo "`date` [`hostname`] Hello from Puls8 Local PV." >> /mnt/store/greet.txt; sleep $(($RANDOM % 5 + 300)); done'
        volumeMounts:
        - mountPath: /mnt/store
          name: local-storage

    Since Local PV StorageClasses use volumeBindingMode: WaitForFirstConsumer, avoid specifying nodeName in the Pod specification. If nodeName is specified, the PVC remains in the Pending state.

  2. Create the Pod.

    Create Pod
    kubectl apply -f local-hostpath-pod.yaml
  3. Verify that the container in the Pod is running.

    Verify the Container
    kubectl get pod hello-local-hostpath-pod
  4. Verify whether data is being written to the volume.

    Verify the Data
    kubectl exec hello-local-hostpath-pod -- cat /mnt/store/greet.txt
  5. Verify that the container is using the Local PV Hostpath storage.

    Verify the Container
    kubectl describe pod hello-local-hostpath-pod

The output confirms that the Pod is running on a specific node and using the persistent volume associated with local-hostpath-pvc.

Verifying the Persistent Volume Binding

Verify the PVC status.

Verify PVC Status
kubectl get pvc local-hostpath-pvc

The output should indicate that the PVC status is now Bound, signifying that a PV has been successfully created and provisioned.

Sample Output
NAME                 STATUS   VOLUME                                     CAPACITY    ACCESS MODES    STORAGECLASS      AGE
local-hostpath-pvc   Bound    pvc-864a5ac8-dd3f-416b-9f4b-ffd7d285b425   5G          RWO             puls8-hostpath    28m

To retrieve details about the dynamically provisioned PV, replace the PV name in the following command with the one displayed in your output:

Retrieve Details about Dynamically Provisioned PV
kubectl get pv pvc-864a5ac8-dd3f-416b-9f4b-ffd7d285b425 -o yaml

The output confirms that the PV was provisioned in response to the PVC request. Below are some key details:

Key Details
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-864a5ac8-dd3f-416b-9f4b-ffd7d285b425
  annotations:
    pv.kubernetes.io/provisioned-by: puls8.io/local
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5G
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: local-hostpath-pvc
    namespace: default
  local:
    fsType: ""
    path: /var/puls8/local/pvc-864a5ac8-dd3f-416b-9f4b-ffd7d285b425
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - gke-user-helm-default-pool-3a63aff5-1tmf
  persistentVolumeReclaimPolicy: Delete
  storageClassName: puls8-hostpath
  volumeMode: Filesystem
status:
  phase: Bound

Verifying the StorageClass

Confirm that the StorageClass has been created successfully:

Verify the Custom StorageClass
kubectl get sc puls8-hostpath -o yaml
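
For reference, a Local PV Hostpath StorageClass typically resembles the sketch below. The provisioner, reclaim policy, binding mode, and base path are taken from the PV output shown earlier; the annotation format follows the upstream Local PV Hostpath convention and is an assumption that may differ in your installation.

Example Hostpath StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-hostpath
  annotations:
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /var/puls8/local
provisioner: puls8.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer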

Deploying an Application - Local PV LVM

Creating the Deployment YAML

To deploy an application, create a deployment YAML file that references the PVC backed by LVM storage.

Deployment YAML File (fio.yaml)
apiVersion: v1
kind: Pod
metadata:
 name: fio
spec:
 restartPolicy: Never
 containers:
 - name: perfrunner
   image: openebs/tests-fio
   command: ["/bin/bash"]
   args: ["-c", "while true ;do sleep 50; done"]
   volumeMounts:
      - mountPath: /datadir
        name: fio-vol
   tty: true
 volumes:
 - name: fio-vol
   persistentVolumeClaim:
     claimName: csi-lvmpv
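
Assuming the PVC csi-lvmpv referenced above has already been created (see the Parameters section below), apply the file to deploy the application.

Deploy the Application
kubectl apply -f fio.yaml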

Once the application is deployed, the node utilizes the LVM volume for data read/write operations, and storage space is consumed accordingly.
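
To observe this from the node, list the logical volumes in the backing volume group; the volume group name lvmvg below is an assumption, so substitute the name configured in your StorageClass.

Check the Logical Volume on the Node
sudo lvs lvmvg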

Parameters

AccessMode

Local PV LVM supports only the ReadWriteOnce access mode, meaning the volume can be mounted as read-write by a single node.

PVC Configuration for AccessMode
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: csi-lvmpv
spec:
  accessModes:
    - ReadWriteOnce        ## Specify ReadWriteOnce(RWO) access modes
  storageClassName: puls8-lvm
  resources:
    requests:
      storage: 4Gi

AccessMode is a required field; if it is unspecified, volume creation fails.

StorageClassName

The Local PV LVM CSI driver supports dynamic provisioning of volumes through an LVM-backed StorageClass.

PVC Configuration for StorageClassName
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: csi-lvmpv
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: puls8-lvm          ## Specify the StorageClass backed by LVM
  resources:
    requests:
      storage: 4Gi

StorageClassName is a required field; if it is unspecified, volume provisioning fails.

Capacity Resource

You can specify the desired capacity for the LVM volume. The CSI driver provisions a volume only if the requested capacity is available in the underlying volume group.

PVC Configuration for Capacity Resource
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: csi-lvmpv
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: puls8-lvm
  resources:
    requests:
      storage: 4Gi       ## Specify required storage for an application
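
Before creating the claim, you can confirm that the volume group has enough free capacity for the request; the volume group name lvmvg below is an assumption.

Check Free Capacity in the Volume Group
sudo vgs lvmvg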

VolumeMode (Optional)

Local PV LVM supports two volume modes:

  • Block: Used when the application maintains the filesystem itself.
  • Filesystem (Default): Required for applications that depend on a filesystem.
PVC Configuration for VolumeMode
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: csi-lvmpv
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: puls8-lvm
  volumeMode: Filesystem     ## Specifies in which mode volume should be attached to pod
  resources:
    requests:
      storage: 4Gi
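
The example above uses the default Filesystem mode. For the Block mode described earlier, the PVC sets volumeMode: Block and the consuming Pod attaches the volume as a raw device through volumeDevices instead of volumeMounts. The following is a minimal sketch; the names and device path are illustrative.

PVC and Pod Sketch for Block Mode
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: csi-lvmpv-block
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: puls8-lvm
  volumeMode: Block              ## Attach the volume as a raw block device
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: block-consumer
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeDevices:
      - devicePath: /dev/xvda    ## Device path exposed inside the container
        name: block-vol
  volumes:
  - name: block-vol
    persistentVolumeClaim:
      claimName: csi-lvmpv-block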

Selectors (Optional)

You can bind a retained LVM volume to a new PVC using selectors. If neither selector nor volumeName is specified, the LVM CSI driver provisions a new volume.

Listing Released PersistentVolumes
kubectl get pv -ojsonpath='{range .items[?(@.status.phase=="Released")]}{.metadata.name} {.metadata.labels}{"\n"}{end}'
pvc-8376b776-75f9-4786-8311-f8780adfabdb {"openebs.io/lvm-volume":"reuse"}
Adding Labels to Persistent Volume
kubectl label pv pvc-8376b776-75f9-4786-8311-f8780adfabdb openebs.io/lvm-volume=reuse
Marking PV as Available
kubectl patch pv pvc-8376b776-75f9-4786-8311-f8780adfabdb -p '{"spec":{"claimRef": null}}'
persistentvolume/pvc-8376b776-75f9-4786-8311-f8780adfabdb patched

Create PVC with the selector.

PVC Configuration with Selector
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: csi-lvmpv
spec:
  storageClassName: puls8-lvmpv
  ## Specify a selector matching the labels on an Available PV; Kubernetes binds the claim to any Available PV with matching labels
  selector:
    matchLabels:
      openebs.io/lvm-volume: reuse
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi   ## Capacity should be less than or equal to available PV capacities
Verify Bound Status of PV
kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS    REASON   AGE
pvc-8376b776-75f9-4786-8311-f8780adfabdb   6Gi        RWO            Retain           Bound    default/csi-lvmpv   puls8-lvmpv   9h

VolumeName (Optional)

You can explicitly bind a PVC to a retained PersistentVolume by specifying volumeName. When volumeName is set, Kubernetes ignores the selector field.

Copy
PVC Configuration with VolumeName
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: csi-lvmpv
spec:
  storageClassName: puls8-lvmpv
  volumeName: pvc-8376b776-75f9-4786-8311-f8780adfabdb   ## Name of LVM volume present in Available state
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi  ## Capacity should be less than or equal to available PV capacities

Deprovisioning a Volume

Deleting the Application and PVC
kubectl delete -f fio.yaml
pod "fio" deleted
kubectl delete -f pvc.yaml
persistentvolumeclaim "csi-lvmpv" deleted

Volume resizing with snapshots is not supported.

Deploying an Application - Local PV ZFS

Creating the Deployment YAML

To deploy an application, create a deployment YAML file that references the PVC backed by ZFS storage.

Deployment YAML File (fio.yaml)
apiVersion: v1
kind: Pod
metadata:
  name: fio
spec:
  restartPolicy: Never
  containers:
  - name: perfrunner
    image: openebs/tests-fio
    command: ["/bin/bash"]
    args: ["-c", "while true ;do sleep 50; done"]
    volumeMounts:
       - mountPath: /datadir
         name: fio-vol
    tty: true
  volumes:
  - name: fio-vol
    persistentVolumeClaim:
      claimName: csi-zfspv

Once the application is deployed, you can verify that the ZFS volume is in use. Navigate to the corresponding node to confirm that the application is utilizing the volume for read/write operations, and observe the space consumption within the ZFS pool.
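
For example, after applying the Pod spec above (kubectl apply -f fio.yaml), you can list the datasets on that node to see the space consumed by the volume; the pool name zfspv-pool matches the one used later in this section.

Check Space Consumption in the ZFS Pool
zfs list -r zfspv-pool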

Modifying ZFS Volume Properties

ZFS volume properties, such as enabling or disabling compression, can be modified by editing the corresponding Kubernetes resource. Use the following command to edit the properties of a specific ZFS volume:

Edit ZFS Volume Properties
kubectl edit zv pvc-34133838-0d0d-11ea-96e3-42010a800114 -n puls8
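
In the editor, the tunable properties live under the ZFSVolume spec. The excerpt below is a sketch based on the upstream ZFS LocalPV schema, so the exact field names may differ in your installation; it shows compression and deduplication being enabled.

Relevant spec Fields (Excerpt)
spec:
  poolName: zfspv-pool
  compression: "on"        ## Enable compression for this volume
  dedup: "on"              ## Enable deduplication for this volume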

Modify the desired properties (Example: Enable compression or deduplication), save the changes, and apply them. You can verify the updated properties by executing the following command on the node:

Check ZFS Volume Properties after Modification
zfs get all zfspv-pool/pvc-34133838-0d0d-11ea-96e3-42010a800114

Deprovisioning a Volume

To remove a volume, first delete the application using it, and then delete the associated PersistentVolumeClaim (PVC). Once the PVC is deleted, the volume is removed from the ZFS pool, freeing up the allocated space.

Delete the Deployed Application
kubectl delete -f fio.yaml
Delete the PVC
kubectl delete -f pvc.yaml
