Volume Snapshots


Overview

Volume Snapshots provide point-in-time, consistent copies of persistent volumes managed by storage engines such as Replicated PV Mayastor and Local PV ZFS. These snapshots are essential for data protection, backup and restore, disaster recovery, and migration use cases in Kubernetes environments.

DataCore Puls8 supports copy-on-write (COW) based snapshots that capture only the changed blocks, thereby improving storage efficiency. Snapshots can be created, listed, and deleted using standard Kubernetes commands.
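The copy-on-write principle can be sketched with a toy block map (illustrative Python only, not the Puls8 implementation): a snapshot starts as an empty overlay that shares every block with the live volume, and a block is copied into the snapshot only when it is about to be overwritten.

```python
# Toy illustration of copy-on-write (COW) snapshots.
# Hypothetical model, not DataCore Puls8 internals.

class Volume:
    def __init__(self, blocks):
        self.blocks = dict(enumerate(blocks))  # block index -> data
        self.snapshots = []

    def snapshot(self):
        # A snapshot starts as an empty overlay: it shares all
        # blocks with the live volume until they are overwritten.
        snap = {}
        self.snapshots.append(snap)
        return snap

    def write(self, index, data):
        # Before overwriting, preserve the old block in every
        # snapshot that has not yet captured it (copy-on-write).
        for snap in self.snapshots:
            snap.setdefault(index, self.blocks[index])
        self.blocks[index] = data

    def read_snapshot(self, snap, index):
        # Snapshot reads fall through to the live volume for
        # blocks that were never overwritten.
        return snap.get(index, self.blocks[index])

vol = Volume(["a", "b", "c"])
snap = vol.snapshot()
vol.write(1, "B")                  # only block 1 is preserved
print(vol.read_snapshot(snap, 1))  # snapshot still sees the old data
print(len(snap))                   # snapshot stores one changed block
```

After one write, the snapshot stores exactly one preserved block, which is why COW snapshots consume space proportional to the changed data rather than the full volume size.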

Typical use cases include:

  • Backup and disaster recovery
  • Application cloning and testing
  • Rollbacks or troubleshooting

Snapshots are:

  • Consistent: Data remains consistent across all volume replicas.
  • Immutable: Once created, snapshot data cannot be altered.

Unlike volume replicas, snapshots cannot be reconstructed in the event of a node failure.

Requirements

Before using Volume Snapshots, ensure the following requirements are met:

  • Kubernetes cluster with CSI snapshot CRDs installed.
  • CSI Snapshot Controller is deployed and running.
  • DataCore Puls8 is installed and configured with one of the following storage engines:
    • Replicated PV Mayastor
    • Local PV ZFS
  • DiskPools are created and operational (for Replicated PV Mayastor only).
  • A PVC is deployed using a compatible StorageClass.

Volume Snapshots for Replicated PV Mayastor

This section explains how to create, manage, and validate volume snapshots for persistent volumes provisioned by Replicated PV Mayastor. These snapshots are block-level, application-consistent, and support advanced storage capabilities such as incremental backup and thin provisioning.

Creating a StorageClass

Create a Mayastor StorageClass with NVMf Protocol and 3 Replicas
cat <<EOF | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-3
parameters:
  protocol: nvmf
  repl: "3"
provisioner: io.openebs.csi-mayastor
EOF

Create and Verify a PVC

Check the Status of the Created PVC
kubectl get pvc
Sample Output
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
ms-volume-claim     Bound    pvc-fe1a5a16-ef70-4775-9eac-2f9c67b3cd5b   1Gi        RWO            mayastor-3       15s

Copy the PVC name (ms-volume-claim in this case) for later use.

Ensure an application is deployed using the created PVC, as described in the Deploy an Application documentation.

Creating a Volume Snapshot

Snapshots can be created while an application is actively using the volume or directly from the PVC.

  1. Define the VolumeSnapshotClass.

    Create a Default VolumeSnapshotClass for Replicated PV Mayastor
    cat <<EOF | kubectl create -f -
    kind: VolumeSnapshotClass
    apiVersion: snapshot.storage.k8s.io/v1
    metadata:
      name: csi-mayastor-snapshotclass
      annotations:
        snapshot.storage.kubernetes.io/is-default-class: "true"
    driver: io.openebs.csi-mayastor
    deletionPolicy: Delete
    EOF

    Alternatively, apply using a YAML file:

    Apply the VolumeSnapshotClass Definition
    kubectl apply -f class.yaml
    Sample Output
    volumesnapshotclass.snapshot.storage.k8s.io/csi-mayastor-snapshotclass created
  2. Create the Snapshot.

    Create a VolumeSnapshot from an Existing PVC
    cat <<EOF | kubectl create -f -
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: mayastor-pvc-snap
    spec:
      volumeSnapshotClassName: csi-mayastor-snapshotclass
      source:
        persistentVolumeClaimName: ms-volume-claim   
    EOF
    Alternatively, apply using a YAML file:

    Apply the VolumeSnapshot Configuration
    kubectl apply -f snapshot.yaml
    Sample Output
    volumesnapshot.snapshot.storage.k8s.io/mayastor-pvc-snap created

    Taking a snapshot of a thick-provisioned volume automatically converts that volume to thin provisioning.

Listing Snapshots

Use the following commands to view snapshot and snapshot content details.

List all VolumeSnapshots
kubectl get volumesnapshot
Sample Output
NAME                READYTOUSE   SOURCEPVC         SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS                SNAPSHOTCONTENT                                     CREATIONTIME      AGE
mayastor-pvc-snap   true         ms-volume-claim                           1Gi           csi-mayastor-snapshotclass   snapcontent-174d9cd9-dfb2-4e53-9b56-0f3f783518df    57s               57s

 

List VolumeSnapshotContent Resources
kubectl get volumesnapshotcontent
Sample Output
NAME                                               READYTOUSE   RESTORESIZE   DELETIONPOLICY   DRIVER                    VOLUMESNAPSHOTCLASS          VOLUMESNAPSHOT      VOLUMESNAPSHOTNAMESPACE   AGE
snapcontent-174d9cd9-dfb2-4e53-9b56-0f3f783518df   true         1073741824    Delete           io.openebs.csi-mayastor   csi-mayastor-snapshotclass   mayastor-pvc-snap   default                   87s

Deleting a Snapshot

Use the following commands to delete a snapshot.

Delete a specific VolumeSnapshot
kubectl delete volumesnapshot mayastor-pvc-snap
Sample Output
volumesnapshot.snapshot.storage.k8s.io "mayastor-pvc-snap" deleted

Filesystem Consistent Snapshots

DataCore Puls8 supports filesystem-consistent snapshots by default. This ensures data integrity by quiescing active I/O operations using the FIFREEZE and FITHAW ioctls during snapshot creation. If the filesystem quiescing process fails, the entire snapshot operation is retried automatically by the Mayastor CSI controller.

To disable filesystem quiescing, modify the VolumeSnapshotClass as follows:

Disable Filesystem Consistency by Setting quiesceFs to none
kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1
metadata:
  name: csi-mayastor-snapshotclass
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: "true"
parameters:
  quiesceFs: none
driver: io.openebs.csi-mayastor
deletionPolicy: Delete

Snapshot Capacity and Commitment Considerations

Replicated PV Mayastor enforces capacity admission controls using commitment thresholds to ensure safe and reliable operation of storage pools. These commitment limits apply broadly to volume creation, replica placement, and snapshot operations, and help prevent pool over-allocation and potential data integrity risks.

Understanding these commitment controls is important for capacity planning and for avoiding failures during volume provisioning, replica creation, or snapshot-based backup operations.

Commitment Controls

Replicated PV Mayastor uses the following commitment thresholds to regulate storage allocation:

Pool Commitment

Pool commitment defines the maximum allowed logical overcommitment of a DiskPool when thin provisioning is enabled.

This allows logical allocation to exceed physical capacity within safe limits. For example, if a pool has 10 GiB physical capacity and the pool commitment is configured as 250%, the pool can support up to 25 GiB of logically provisioned storage, subject to other commitment constraints.

If the pool reaches its configured commitment limit, further operations may be blocked, including:

  • Volume creation
  • Replica creation
  • Snapshot creation

This behavior prevents unsafe over-allocation of storage resources.
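As a sketch of the arithmetic (not the actual Mayastor admission logic), the 10 GiB / 250% example above corresponds to:

```python
def pool_has_capacity(physical_gib, committed_gib, request_gib,
                      pool_commitment_pct=250):
    """Return True if a new logical allocation fits within the
    pool's commitment limit (sketch of the 250% default)."""
    limit_gib = physical_gib * pool_commitment_pct / 100
    return committed_gib + request_gib <= limit_gib

# A 10 GiB physical pool at 250% commitment allows up to 25 GiB logical.
print(pool_has_capacity(10, 20, 5))   # 25 GiB total: allowed
print(pool_has_capacity(10, 20, 6))   # 26 GiB total: blocked
```

Once the committed total would exceed the limit, further volume, replica, and snapshot operations are rejected.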

Volume Commitment

Volume commitment defines the minimum required free space on each replica pool when creating replicas for an existing volume. This ensures that sufficient space is available to safely accommodate replica data and future write operations.

If the required free space is not available on a replica pool, replica creation fails and the volume cannot be provisioned on that pool.

Initial Volume Commitment

Initial volume commitment applies when creating replicas for a new volume. It enforces the same minimum free space requirement during initial volume provisioning to ensure reliable replica placement.

Snapshot Commitment

Snapshot commitment defines the minimum required free space on each replica pool when creating snapshots of an existing volume. This ensures that sufficient space is available to support copy-on-write operations after the snapshot is created.

If any replica pool does not meet the snapshot commitment requirement, snapshot creation fails and any dependent backup operation cannot proceed.

Snapshot Commitment Behavior

VolumeSnapshots require sufficient free space on each replica pool relative to the volume size.

For example:

  • Snapshot commitment: 40%
  • Volume size: 100 GiB
  • Required free space per replica pool: 40 GiB

If any replica pool has less than the required free space, snapshot creation fails.

This check ensures reliable snapshot operation and protects against capacity exhaustion during snapshot lifecycle operations.

Pool Commitment Impact on Snapshots and Thick-Provisioned Volumes

Pool commitment can also affect snapshot creation, particularly for thick-provisioned volumes.

When a snapshot is taken of a thick-provisioned volume:

  • The snapshot’s logical size equals the volume size.
  • The snapshot initially references existing data using copy-on-write semantics.
  • As part of snapshot creation, the volume is internally converted to thin provisioning.

Due to this conversion, the pool must account for potential copy-on-write growth, increasing the pool’s committed capacity by an amount equal to the volume size. If this increase causes the pool to reach its commitment limit, snapshot creation may fail even when sufficient physical free space appears to be available.

This behavior ensures safe storage allocation and prevents capacity exhaustion scenarios.
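The following sketch illustrates the conversion effect with hypothetical numbers (not the actual admission code): snapshotting a thick-provisioned volume adds the full volume size to the pool's committed capacity before the commitment limit is checked.

```python
def snapshot_allowed_on_thick_volume(pool_physical_gib,
                                     pool_committed_gib,
                                     volume_size_gib,
                                     pool_commitment_pct=250):
    """Snapshotting a thick volume converts it to thin provisioning,
    so the pool's committed capacity grows by the volume size.
    Sketch only, not the actual Mayastor admission check."""
    limit_gib = pool_physical_gib * pool_commitment_pct / 100
    return pool_committed_gib + volume_size_gib <= limit_gib

# A 100 GiB pool at 250% commitment has a 250 GiB commitment limit.
print(snapshot_allowed_on_thick_volume(100, 140, 100))  # 240 <= 250: allowed
print(snapshot_allowed_on_thick_volume(100, 160, 100))  # 260 > 250: blocked
```

In the second case the snapshot fails even though the pool may still have physical free space, because the committed (logical) capacity would exceed the limit.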

Example: Snapshot Creation with Commitment Constraints

The following example illustrates how snapshot commitment affects snapshot creation when replica pool capacity is constrained. In this example, volumes are thick-provisioned and snapshot commitment is configured to 40%.

Volume Size   Free Space per Pool   Required Free Space (40%)   Snapshot Result
7 GiB         3 GiB                 2.8 GiB                     Successful
8 GiB         2 GiB                 3.2 GiB                     Failed
9 GiB         1 GiB                 3.6 GiB                     Failed

Snapshot creation succeeds only when all replica pools meet the snapshot commitment requirement. If any replica pool fails the check, the snapshot and any dependent backup operation fails.
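The results in the table follow from a simple per-pool check, sketched here (illustrative only, not the actual implementation):

```python
def snapshot_ok(volume_gib, free_gib_per_pool, commitment_pct=40):
    """A snapshot is admitted only if every replica pool has at
    least commitment_pct of the volume size free (sketch)."""
    required_gib = volume_gib * commitment_pct / 100
    return free_gib_per_pool >= required_gib

# Reproduce the table rows above (40% snapshot commitment).
for vol, free in [(7, 3), (8, 2), (9, 1)]:
    required = vol * 40 / 100
    result = "Successful" if snapshot_ok(vol, free) else "Failed"
    print(f"{vol} GiB volume, {free} GiB free, needs {required} GiB: {result}")
```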

Default Commitment Values and Configuration

Replicated PV Mayastor uses configurable Helm parameters to enforce commitment controls:

  • Pool Commitment
    • --set openebs.mayastor.agents.core.capacity.thin.poolCommitment
    • Default: 250%
    • Defines the maximum logical overcommitment allowed per DiskPool.
  • Volume Commitment
    • --set openebs.mayastor.agents.core.capacity.thin.volumeCommitment
    • Default: 40%
    • Defines the minimum free space required to create replicas for existing volumes.
  • Initial Volume Commitment
    • --set openebs.mayastor.agents.core.capacity.thin.volumeCommitmentInitial
    • Default: 40%
    • Defines the minimum free space required when provisioning new volumes.
  • Snapshot Commitment
    • --set openebs.mayastor.agents.core.capacity.thin.snapshotCommitment
    • Default: 40%
    • Defines the minimum free space required on each replica pool to allow snapshot creation.

The default values are suitable for most environments and provide a balanced trade-off between capacity utilization and operational safety. In typical deployments, these values do not require modification.

However, environments with large volumes, frequent snapshots, or aggressive thin provisioning may require tuning these parameters during installation or upgrade. Any changes should be accompanied by careful capacity planning and continuous monitoring of DiskPool utilization to ensure reliable snapshot creation.
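The same thresholds can also be supplied through a Helm values file instead of individual --set flags. The nesting below mirrors the parameter paths listed above; verify it against your installed chart version before use.

```yaml
# values.yaml fragment mirroring the --set paths above.
# Assumed layout; verify against your Puls8 Helm chart version.
openebs:
  mayastor:
    agents:
      core:
        capacity:
          thin:
            poolCommitment: "250%"
            volumeCommitment: "40%"
            volumeCommitmentInitial: "40%"
            snapshotCommitment: "40%"
```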

Volume Snapshots for Local PV ZFS

This section explains the process for creating and managing volume snapshots for Local PV ZFS volumes. ZFS snapshots are file-system level, point-in-time copies that support clone and backup workflows in Kubernetes environments.

Creating a Volume Snapshot

  1. Define the VolumeSnapshotClass.

    Create a Default VolumeSnapshotClass for Local PV ZFS
    $ cat snapshotclass.yaml
    kind: VolumeSnapshotClass
    apiVersion: snapshot.storage.k8s.io/v1
    metadata:
      name: zfspv-snapclass
      annotations:
        snapshot.storage.kubernetes.io/is-default-class: "true"
    driver: zfs.csi.openebs.io
    deletionPolicy: Delete

    Apply the definition:

    Apply the VolumeSnapshotClass Definition
    kubectl apply -f snapshotclass.yaml
    Sample Output
    volumesnapshotclass.snapshot.storage.k8s.io/zfspv-snapclass created
  2. Identify the PVC.

    Identify PVC
    kubectl get pvc
    Sample Output
    NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
    csi-zfspv   Bound    pvc-73402f6e-d054-4ec2-95a4-eb8452724afb   4Gi        RWO            openebs-zfspv   2m35s
  3. Create the Snapshot.

    Create a VolumeSnapshot
    $ cat snapshot.yaml
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: zfspv-snap
    spec:
      volumeSnapshotClassName: zfspv-snapclass
      source:
        persistentVolumeClaimName: csi-zfspv
    Apply the VolumeSnapshot Configuration
    kubectl apply -f snapshot.yaml
    Sample Output
    volumesnapshot.snapshot.storage.k8s.io/zfspv-snap created

    Create the snapshot in the same namespace as the PVC. Ensure readyToUse: true before using the snapshot.

Listing Snapshots

Use the following commands to view snapshot and snapshot content details.

List all VolumeSnapshots
kubectl get volumesnapshot.snapshot
Sample Output
NAME         READYTOUSE   SOURCEPVC   SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS     SNAPSHOTCONTENT                                    CREATIONTIME   AGE
zfspv-snap   true         csi-zfspv   -                       4Gi           zfspv-snapclass   snapcontent-b747cc44-6845-4e72-b0a9-4fb65858e013   106s           106s

 

Get VolumeSnapshot - YAML
kubectl get volumesnapshot.snapshot zfspv-snap -o yaml
Sample Output
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"snapshot.storage.k8s.io/v1","kind":
      "VolumeSnapshot","metadata":{"annotations":{},"name":
      "zfspv-snap","namespace":"default"},"spec":{"source":{"persistentVolumeClaimName":"csi-zfspv"},"volumeSnapshotClassName":"zfspv-snapclass"}}
  creationTimestamp: "2020-02-25T08:25:51Z"
  finalizers:
  - snapshot.storage.kubernetes.io/volumesnapshot-as-source-protection
  - snapshot.storage.kubernetes.io/volumesnapshot-bound-protection
  generation: 1
  name: zfspv-snap
  namespace: default
  resourceVersion: "447494"
  selfLink: /apis/snapshot.storage.k8s.io/v1/namespaces/default/volumesnapshots/zfspv-snap
  uid: 3cbd5e59-4c6f-4bd6-95ba-7f72c9f12fcd
spec:
  source:
    persistentVolumeClaimName: csi-zfspv
  volumeSnapshotClassName: zfspv-snapclass
status:
  boundVolumeSnapshotContentName: snapcontent-3cbd5e59-4c6f-4bd6-95ba-7f72c9f12fcd
  creationTime: "2020-02-25T08:25:51Z"
  readyToUse: true
  restoreSize: "0"

Verifying Local PV ZFS Snapshot CR

Check the custom resource created by the ZFS driver.

List ZFS Snapshots
kubectl get zfssnap -n puls8
Sample Output
NAME                                            AGE
snapshot-3cbd5e59-4c6f-4bd6-95ba-7f72c9f12fcd   3m32s

 

Get ZFS Snapshot Details - YAML
kubectl get zfssnap snapshot-3cbd5e59-4c6f-4bd6-95ba-7f72c9f12fcd -n puls8 -o yaml
Sample Output
apiVersion: openebs.io/v1alpha1
kind: ZFSSnapshot
metadata:
  creationTimestamp: "2020-02-25T08:25:51Z"
  finalizers:
  - zfs.openebs.io/finalizer
  generation: 2
  labels:
    kubernetes.io/nodename: e2e1-node2
    openebs.io/persistent-volume: pvc-73402f6e-d054-4ec2-95a4-eb8452724afb
  name: snapshot-3cbd5e59-4c6f-4bd6-95ba-7f72c9f12fcd
  namespace: puls8
  resourceVersion: "447328"
  selfLink: /apis/openebs.io/v1alpha1/namespaces/puls8/zfssnapshots/snapshot-3cbd5e59-4c6f-4bd6-95ba-7f72c9f12fcd
  uid: 6142492c-3785-498f-aa4a-569ec6c0e2b8
spec:
  capacity: "4294967296"
  fsType: zfs
  ownerNodeID: e2e1-node2
  poolName: test-pool
  volumeType: DATASET
status:
  state: Ready

Node-Level Validation

Validate snapshot presence directly on the node.

List All ZFS Datasets and Snapshots
zfs list -t all
Sample Output
NAME                                                                                               USED  AVAIL  REFER  MOUNTPOINT
test-pool                                                                                          818K  9.63G    24K  /test-pool
test-pool/pvc-73402f6e-d054-4ec2-95a4-eb8452724afb                                                  24K  4.00G    24K  /var/lib/kubelet/pods/3862895a-8a67-446e-80f7-f3c18881e391/volumes/kubernetes.io~csi/pvc-73402f6e-d054-4ec2-95a4-eb8452724afb/mount
test-pool/pvc-73402f6e-d054-4ec2-95a4-eb8452724afb@snapshot-3cbd5e59-4c6f-4bd6-95ba-7f72c9f12fcd     0B      -    24K  -

Benefits of Volume Snapshots

  • Enables point-in-time backup for critical applications.
  • Facilitates fast and reliable disaster recovery.
  • Minimizes downtime by enabling quick volume restores.
  • Simplifies data migration across environments or clusters.
