Volume Snapshots

Overview

Volume Snapshots enable point-in-time, consistent copies of persistent volumes managed by storage engines such as Replicated PV Mayastor and Local PV ZFS. These snapshots are critical for data protection, backup and restore, disaster recovery, and migration use cases in Kubernetes environments.

DataCore Puls8 supports copy-on-write (COW) based snapshots that capture only the changed blocks, thereby improving storage efficiency. Snapshots can be created, listed, and deleted using standard Kubernetes commands.

Typical use cases include:

  • Backup and disaster recovery
  • Application cloning and testing
  • Rollbacks or troubleshooting

Snapshots are:

  • Consistent: Data remains consistent across all volume replicas.
  • Immutable: Once created, snapshot data cannot be altered.

Unlike volume replicas, snapshots cannot be rebuilt if the node hosting them fails.

Requirements

Before using Volume Snapshots, ensure the following requirements are met (a quick verification example follows the list):

  • Kubernetes cluster with CSI snapshot CRDs installed.
  • CSI Snapshot Controller is deployed and running.
  • DataCore Puls8 is installed and configured with the relevant storage engine:
    • Replicated PV Mayastor
    • Local PV ZFS
  • DiskPools are created and operational (for Replicated PV Mayastor).
  • A PVC is deployed using a compatible StorageClass.
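
As a quick check, confirm that the snapshot CRDs and the Snapshot Controller are present before proceeding. The commands below are an illustrative sketch; the controller's namespace and pod name vary by distribution, so adjust the filters as needed.

Verify Snapshot CRDs and Snapshot Controller (Example)
kubectl get crd | grep snapshot.storage.k8s.io
kubectl get pods -A | grep snapshot-controller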

Volume Snapshots for Replicated PV Mayastor

This section explains how to create, manage, and validate volume snapshots for persistent volumes provisioned by Replicated PV Mayastor. These snapshots are block-level and filesystem-consistent, and they support advanced storage capabilities such as incremental backup and thin provisioning.

Creating a StorageClass

Create a Mayastor StorageClass with NVMf Protocol and 3 Replicas
cat <<EOF | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-3
parameters:
  protocol: nvmf
  repl: "3"
provisioner: io.openebs.csi-mayastor
EOF

Create and Verify a PVC
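
If the PVC does not already exist, a claim similar to the following can be created. This is a minimal sketch based on the sample output below (name ms-volume-claim, size 1Gi, access mode ReadWriteOnce); adjust the values for your environment.

Create a PVC Using the mayastor-3 StorageClass (Example)
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ms-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: mayastor-3
EOF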

Check the Status of the Created PVC
kubectl get pvc
Sample Output
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
ms-volume-claim     Bound    pvc-fe1a5a16-ef70-4775-9eac-2f9c67b3cd5b   1Gi        RWO            mayastor-3       15s

Copy the PVC name (ms-volume-claim in this case) for later use.

Ensure an application is deployed using the created PVC, as described in the Deploy an Application documentation.

Creating a Volume Snapshot

Snapshots can be created either with an active application or directly from the PVC.

  1. Define the VolumeSnapshotClass.

    Create a Default VolumeSnapshotClass for Replicated PV Mayastor
    cat <<EOF | kubectl create -f -
    kind: VolumeSnapshotClass
    apiVersion: snapshot.storage.k8s.io/v1
    metadata:
      name: csi-mayastor-snapshotclass
      annotations:
        snapshot.storage.kubernetes.io/is-default-class: "true"
    driver: io.openebs.csi-mayastor
    deletionPolicy: Delete
    EOF

    Alternatively, apply using a YAML file:

    Apply the VolumeSnapshotClass Definition
    kubectl apply -f class.yaml
    Sample Output
    volumesnapshotclass.snapshot.storage.k8s.io/csi-mayastor-snapshotclass created
  2. Create the Snapshot.

    Create a VolumeSnapshot from an Existing PVC
    cat <<EOF | kubectl create -f -
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: mayastor-pvc-snap
    spec:
      volumeSnapshotClassName: csi-mayastor-snapshotclass
      source:
        persistentVolumeClaimName: ms-volume-claim   
    EOF

    Alternatively, apply using a YAML file:

    Apply the VolumeSnapshot Configuration
    kubectl apply -f snapshot.yaml
    Sample Output
    volumesnapshot.snapshot.storage.k8s.io/mayastor-pvc-snap created

    Taking a snapshot of a thick-provisioned volume automatically converts the volume to thin provisioning.

Listing Snapshots

Use the following commands to view snapshot and snapshot content details.

List all VolumeSnapshots
kubectl get volumesnapshot
Sample Output
NAME                READYTOUSE   SOURCEPVC         SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS                SNAPSHOTCONTENT                                     CREATIONTIME      AGE
mayastor-pvc-snap   true         ms-volume-claim                           1Gi           csi-mayastor-snapshotclass   snapcontent-174d9cd9-dfb2-4e53-9b56-0f3f783518df    57s               57s


List VolumeSnapshotContent Resources
kubectl get volumesnapshotcontent
Sample Output
NAME                                               READYTOUSE   RESTORESIZE   DELETIONPOLICY   DRIVER                    VOLUMESNAPSHOTCLASS          VOLUMESNAPSHOT      VOLUMESNAPSHOTNAMESPACE   AGE
snapcontent-174d9cd9-dfb2-4e53-9b56-0f3f783518df   true         1073741824    Delete           io.openebs.csi-mayastor   csi-mayastor-snapshotclass   mayastor-pvc-snap   default                   87s

Deleting a Snapshot

Use the following commands to delete a snapshot.

Delete a specific VolumeSnapshot
kubectl delete volumesnapshot mayastor-pvc-snap
Sample Output
volumesnapshot.snapshot.storage.k8s.io "mayastor-pvc-snap" deleted

Filesystem Consistent Snapshots

DataCore Puls8 supports filesystem-consistent snapshots by default. This ensures data integrity by quiescing active I/O operations using the FIFREEZE and FITHAW ioctls during snapshot creation. If the filesystem quiescing process fails, the entire snapshot operation is retried automatically by the Mayastor CSI controller.

To disable filesystem quiescing, modify the VolumeSnapshotClass as follows:

Disable Filesystem Consistency by Setting quiesceFs to none
kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1
metadata:
  name: csi-mayastor-snapshotclass
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: "true"
parameters:
  quiesceFs: none
driver: io.openebs.csi-mayastor
deletionPolicy: Delete

Snapshot Capacity and Commitment Considerations

When using Volume Snapshots with Replicated PV Mayastor, snapshot creation is subject to capacity and commitment checks on each replica pool. Understanding this behavior is essential to prevent unexpected snapshot or backup failures in production environments.

Snapshot Commitment Behavior

Snapshot creation requires that each replica pool has sufficient free space relative to the volume size. This requirement is enforced using the snapshot commitment threshold, which is expressed as a percentage of the volume size.

If the snapshot commitment threshold is not met on any replica pool, snapshot creation fails and the operation does not proceed. For example, if the snapshot commitment is set to 40% and the volume size is 100 GiB, each replica pool must have at least 40 GiB of free space available. If even one replica pool has less than the required free space, the snapshot operation fails.
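
To check whether each replica pool has enough free space before taking a snapshot, you can inspect the DiskPool resources and compare each pool's available capacity against the required percentage of the volume size. The namespace below is an assumption; use the namespace in which DataCore Puls8 is installed.

Check DiskPool Capacity and Usage (Example)
kubectl get diskpools -n puls8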

Pool Commitment Impact

In addition to snapshot-specific checks, overall pool commitment also affects snapshot operations. Pool commitment defines how much a pool can be overcommitted when thin provisioning is enabled.

If a pool reaches its configured pool commitment limit, snapshot creation may fail even when the snapshot commitment requirement appears to be satisfied. This behavior is intentional and ensures data safety by preventing snapshot creation on pools that are approaching capacity exhaustion.

The following example illustrates how snapshot commitment affects snapshot creation when replica pool capacity is constrained:

Volume Size   Free Space per Pool   Required Free Space (40%)   Snapshot Result
7 GiB         3 GiB                 2.8 GiB                     Successful
8 GiB         2 GiB                 3.2 GiB                     Failed
9 GiB         1 GiB                 3.6 GiB                     Failed

Snapshot creation succeeds only when all replica pools meet the snapshot commitment requirement. If any replica pool fails the check, the snapshot and any dependent backup operation fails.

Default Commitment Values and Configuration

Replicated PV Mayastor enforces snapshot and pool capacity checks using the following Helm configuration parameters:

  • Snapshot Commitment
    • --set openebs.mayastor.agents.core.capacity.thin.snapshotCommitment
    • Default: 40%
    • Defines the minimum free space required on each replica pool to allow snapshot creation.
  • Pool Commitment
    • --set openebs.mayastor.agents.core.capacity.thin.poolCommitment
    • Default: 250%
    • Defines the maximum allowed overcommitment for thin-provisioned DiskPools.

The default values are suitable for most environments and provide a balanced trade-off between capacity utilization and operational safety. In typical deployments, these values do not require modification.

However, environments with large volumes, frequent snapshots, or aggressive thin provisioning may require tuning these parameters during installation or upgrade. Any changes should be accompanied by careful capacity planning and continuous monitoring of DiskPool utilization to ensure reliable snapshot creation.
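
As an illustrative sketch, the following Helm upgrade adjusts both thresholds. The release name, chart reference, and namespace are placeholders, and the 60% and 200% values are arbitrary examples rather than recommendations; substitute values appropriate for your deployment.

Tune Snapshot and Pool Commitment via Helm (Example)
helm upgrade <release-name> <puls8-chart> -n <puls8-namespace> --reuse-values \
  --set openebs.mayastor.agents.core.capacity.thin.snapshotCommitment="60%" \
  --set openebs.mayastor.agents.core.capacity.thin.poolCommitment="200%"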

Volume Snapshots for Local PV ZFS

This section explains the process for creating and managing volume snapshots for Local PV ZFS volumes. ZFS snapshots are file-system level, point-in-time copies that support clone and backup workflows in Kubernetes environments.

Creating a Volume Snapshot

  1. Define the VolumeSnapshotClass.

    Create a Default VolumeSnapshotClass for Local PV ZFS
    $ cat snapshotclass.yaml
    kind: VolumeSnapshotClass
    apiVersion: snapshot.storage.k8s.io/v1
    metadata:
      name: zfspv-snapclass
      annotations:
        snapshot.storage.kubernetes.io/is-default-class: "true"
    driver: zfs.csi.openebs.io
    deletionPolicy: Delete

    Apply the definition using the YAML file:

    Apply the VolumeSnapshotClass Definition
    kubectl apply -f snapshotclass.yaml
    Sample Output
    volumesnapshotclass.snapshot.storage.k8s.io/zfspv-snapclass created
  2. Identify the PVC.

    Identify PVC
    kubectl get pvc
    Sample Output
    NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
    csi-zfspv   Bound    pvc-73402f6e-d054-4ec2-95a4-eb8452724afb   4Gi        RWO            openebs-zfspv   2m35s
  3. Create the Snapshot.

    Create a VolumeSnapshot
    $ cat snapshot.yaml
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: zfspv-snap
    spec:
      volumeSnapshotClassName: zfspv-snapclass
      source:
        persistentVolumeClaimName: csi-zfspv
    Apply the VolumeSnapshot Configuration
    kubectl apply -f snapshot.yaml
    Sample Output
    volumesnapshot.snapshot.storage.k8s.io/zfspv-snap created

    Create the snapshot in the same namespace as the PVC. Ensure readyToUse: true before using the snapshot.
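
    To confirm readiness, read the status field directly, for example:

    Check Snapshot Readiness (Example)
    kubectl get volumesnapshot zfspv-snap -o jsonpath='{.status.readyToUse}'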

Listing Snapshots

Use the following commands to view snapshot and snapshot content details.

List all VolumeSnapshots
kubectl get volumesnapshot.snapshot
Sample Output
NAME         READYTOUSE   SOURCEPVC   SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS     SNAPSHOTCONTENT                                    CREATIONTIME   AGE
zfspv-snap   true         csi-zfspv   -                       4Gi           zfspv-snapclass   snapcontent-b747cc44-6845-4e72-b0a9-4fb65858e013   106s           106s


Get VolumeSnapshot - YAML
kubectl get volumesnapshot.snapshot zfspv-snap -o yaml
Sample Output
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"snapshot.storage.k8s.io/v1","kind":
      "VolumeSnapshot","metadata":{"annotations":{},"name":
      "zfspv-snap","namespace":"default"},"spec":{"source":{"persistentVolumeClaimName":"csi-zfspv"},"volumeSnapshotClassName":"zfspv-snapclass"}}
  creationTimestamp: "2020-02-25T08:25:51Z"
  finalizers:
  - snapshot.storage.kubernetes.io/volumesnapshot-as-source-protection
  - snapshot.storage.kubernetes.io/volumesnapshot-bound-protection
  generation: 1
  name: zfspv-snap
  namespace: default
  resourceVersion: "447494"
  selfLink: /apis/snapshot.storage.k8s.io/v1/namespaces/default/volumesnapshots/zfspv-snap
  uid: 3cbd5e59-4c6f-4bd6-95ba-7f72c9f12fcd
spec:
  source:
    persistentVolumeClaimName: csi-zfspv
  volumeSnapshotClassName: zfspv-snapclass
status:
  boundVolumeSnapshotContentName: snapcontent-3cbd5e59-4c6f-4bd6-95ba-7f72c9f12fcd
  creationTime: "2020-02-25T08:25:51Z"
  readyToUse: true
  restoreSize: "0"

Verifying Local PV ZFS Snapshot CR

Check the custom resource created by the ZFS driver.

List ZFS Snapshots
kubectl get zfssnap -n puls8
Sample Output
NAME                                            AGE
snapshot-3cbd5e59-4c6f-4bd6-95ba-7f72c9f12fcd   3m32s


Get ZFS Snapshot Details - YAML
kubectl get zfssnap snapshot-3cbd5e59-4c6f-4bd6-95ba-7f72c9f12fcd -n puls8 -o yaml
Sample Output
apiVersion: openebs.io/v1alpha1
kind: ZFSSnapshot
metadata:
  creationTimestamp: "2020-02-25T08:25:51Z"
  finalizers:
  - zfs.openebs.io/finalizer
  generation: 2
  labels:
    kubernetes.io/nodename: e2e1-node2
    openebs.io/persistent-volume: pvc-73402f6e-d054-4ec2-95a4-eb8452724afb
  name: snapshot-3cbd5e59-4c6f-4bd6-95ba-7f72c9f12fcd
  namespace: puls8
  resourceVersion: "447328"
  selfLink: /apis/openebs.io/v1alpha1/namespaces/puls8/zfssnapshots/snapshot-3cbd5e59-4c6f-4bd6-95ba-7f72c9f12fcd
  uid: 6142492c-3785-498f-aa4a-569ec6c0e2b8
spec:
  capacity: "4294967296"
  fsType: zfs
  ownerNodeID: e2e1-node2
  poolName: test-pool
  volumeType: DATASET
status:
  state: Ready

Node-Level Validation

Validate snapshot presence directly on the node.

List All ZFS Datasets and Snapshots
zfs list -t all
Sample Output
NAME                                                                                               USED  AVAIL  REFER  MOUNTPOINT
test-pool                                                                                          818K  9.63G    24K  /test-pool
test-pool/pvc-73402f6e-d054-4ec2-95a4-eb8452724afb                                                  24K  4.00G    24K  /var/lib/kubelet/pods/3862895a-8a67-446e-80f7-f3c18881e391/volumes/kubernetes.io~csi/pvc-73402f6e-d054-4ec2-95a4-eb8452724afb/mount
test-pool/pvc-73402f6e-d054-4ec2-95a4-eb8452724afb@snapshot-3cbd5e59-4c6f-4bd6-95ba-7f72c9f12fcd     0B      -    24K  -

Benefits of Volume Snapshots

  • Enables point-in-time backup for critical applications.
  • Facilitates fast and reliable disaster recovery.
  • Minimizes downtime by enabling quick volume restores.
  • Simplifies data migration across environments or clusters.
