Restore a Volume from a Snapshot

Overview

Volume snapshots are a vital feature for data protection, disaster recovery, and workload portability in Kubernetes environments. A volume snapshot captures the state of a storage volume at a specific point in time. Restoring a volume from a snapshot enables users to revert to a previously known good state or clone volumes for development and testing.

This document outlines the step-by-step procedure to restore a volume from an existing snapshot.

This feature is currently supported exclusively for Replicated PV Mayastor, Local PV LVM, and Local PV ZFS.

Requirements

Before performing a snapshot restore, ensure the following requirements are met:

  • A volume snapshot has already been created for the source volume. Refer to the Volume Snapshots documentation for detailed instructions.
  • A compatible StorageClass is available for the restore operation.
  • The snapshot and restore operations are performed in the same namespace as the source Persistent Volume Claim (PVC).
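A quick way to confirm the snapshot prerequisite is to query the VolumeSnapshot API before starting the restore. This is a sketch; `mayastor-pvc-snap` is the example snapshot name used later in this document — substitute your own:

```shell
# Run in the same namespace as the source PVC.
# READYTOUSE must show "true" before the snapshot can be restored.
kubectl get volumesnapshot

# Inspect a specific snapshot's readiness (snapshot name is an example).
kubectl get volumesnapshot mayastor-pvc-snap -o jsonpath='{.status.readyToUse}'
```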

Additional Requirements for Local PV LVM

Before restoring a volume from a snapshot created with Local PV LVM, ensure the following prerequisites are met:

  • Snapshot Type: Only thin snapshots are supported for restore operations. To verify whether a snapshot is thin-provisioned, describe the LVMSnapshot Custom Resource (CR) and check the spec.thinProvision field.
  • Restore Volume Size: The restore PVC's requested capacity must exactly match the snapshot Logical Volume (LV) size. You can verify the snapshot LV size by checking the status.lvSize field in the LVMSnapshot CR.
  • Volume Group Name: The volume group (spec.volGroup) used for the restore operation must match the volume group specified in the snapshot.
  • Example: Local PV LVM Snapshot CR used for Restore Validation
    apiVersion: local.openebs.io/v1alpha1
    kind: LVMSnapshot
    metadata:
      name: snapshot-cc82975a-c652-41fa-892a-744eb04ccbd1
      namespace: openebs
    spec:
      ownerNodeID: worker-node-1
      thinProvision: true
      volGroup: lvmvg
    status:
      lvSize: 5Gi
      state: Ready
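The three prerequisites above can be checked directly against the LVMSnapshot CR with `kubectl` and JSONPath. The snapshot name below matches the example CR; adjust the name and namespace for your environment:

```shell
SNAP=snapshot-cc82975a-c652-41fa-892a-744eb04ccbd1   # from the example CR above

# Must print "true" — only thin snapshots can be restored.
kubectl get lvmsnapshot "$SNAP" -n openebs -o jsonpath='{.spec.thinProvision}'

# The restore PVC's requested storage must equal this value exactly.
kubectl get lvmsnapshot "$SNAP" -n openebs -o jsonpath='{.status.lvSize}'

# The restore StorageClass must target this volume group.
kubectl get lvmsnapshot "$SNAP" -n openebs -o jsonpath='{.spec.volGroup}'
```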

The restore operation for Local PV LVM currently supports only snapshots created using the LVM CSI driver (local.openebs.io/lvm).

Create a Volume Restore for Replicated PV Mayastor

This section describes how to restore a volume from a snapshot for Replicated PV Mayastor.

Restoring a volume from a snapshot leverages point-in-time, consistent data copies across all volume replicas. For example, restoring a source volume with a replica count of 3 (i.e., repl=3) means snapshots must exist for all three replicas. The new volume’s replica count must be less than or equal to the number of available replica snapshots. If fewer snapshots are available, the restore process will fail or remain incomplete.
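The replica count of the restored volume is set by the StorageClass used for the restore. As a sketch, a StorageClass like the `mayastor-3-restore` class used in this section could look as follows; the provisioner and parameter names here follow the upstream Mayastor CSI driver and may differ in your Puls8 deployment:

```shell
cat <<EOF | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-3-restore
provisioner: io.openebs.csi-mayastor
parameters:
  protocol: nvmf
  repl: "3"   # Must be <= the number of available replica snapshots
EOF
```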

  1. Before initiating a restore, determine the number of available replicas and snapshot readiness.

    Check Volume Replica Topology
    kubectl puls8 mayastor get volume-replica-topology ec4e66fd-3b33-4439-b504-d49aba53da26
    Sample Output
    ID                                    NODE         POOL    STATUS  CAPACITY  ALLOCATED  SNAPSHOTS  CHILD-STATUS  REASON  REBUILD 
    5de77b1e-56cf-47c9-8fca-e6a5f316684b  io-engine-1  pool-1  Online  12MiB     0 B        12MiB      <none>        <none>  <none> 
    78fa3173-175b-4339-9250-47ddccb79201  io-engine-2  pool-2  Online  12MiB     0 B        12MiB      <none>        <none>  <none> 
    7b4e678a-e607-40e3-afce-b3b7e99e511a  io-engine-3  pool-3  Online  12MiB     8MiB       0 B        <none>        <none>  <none> 
    Check Snapshot Availability Across Replicas
    kubectl puls8 mayastor get volume-snapshot-topology --volume ec4e66fd-3b33-4439-b504-d49aba53da26
    Sample Output
    SNAPSHOT-ID                           ID                                    POOL    SNAPSHOT_STATUS  SIZE      ALLOCATED_SIZE  SOURCE 
    25823425-41fa-434a-9efd-a356b70b5d7c  cb8d200b-c7d8-4ccd-bb62-78f903e444e4  pool-2  Online           12582912  12582912        78fa3173-175b-4339-9250-47ddccb79201 
                                          b09f4097-85fa-41c9-a2f8-56198641258d  pool-1  Online           12582912  12582912        5de77b1e-56cf-47c9-8fca-e6a5f316684b 
     
    25823425-41fa-434a-9efd-a356b70b5d7d  1b69b3ca-8f08-4889-8f90-1a428e088c46  pool-2  Online           12582912  0               78fa3173-175b-4339-9250-47ddccb79201 
                                          1837b17a-9773-437c-b926-dda3272e3c60  pool-1  Online           12582912  0               5de77b1e-56cf-47c9-8fca-e6a5f316684b 
                                          1f8827e7-9674-4879-8f29-b15752aef902  pool-3  Online           12582912  8388608         7b4e678a-e607-40e3-afce-b3b7e99e511a 
  2. Create a PVC from Snapshot.

    Create a PVC for Replicated PV Mayastor Volume Restore
    cat <<EOF | kubectl create -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: restore-pvc  # Name of the new PVC to be created from a snapshot
    spec:
      storageClassName: mayastor-3-restore  # StorageClass used for restoration
      dataSource:
        name: mayastor-pvc-snap  # Name of the VolumeSnapshot to restore from
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
    EOF
  3. Verify the Restored Volume.

    Verify Restored PVC
    kubectl get pvc
    Sample Output
    NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
    mayastor-pvc         Bound    pvc-a84e1aa0-6bc1-4f9f-91a9-3e9b98cf0102   10Gi       RWO            mayastor-3-replicas   3m
restore-pvc          Bound    pvc-153c9c1a-97e4-418f-a84d-b29b4f11c9ad   10Gi       RWO            mayastor-3-restore    1m
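In scripted workflows, rather than polling `kubectl get pvc` manually, `kubectl wait` can block until the restored claim is bound (requires kubectl 1.23 or later for JSONPath conditions):

```shell
# Wait up to 2 minutes for the restored PVC to reach the Bound phase.
kubectl wait pvc/restore-pvc \
  --for=jsonpath='{.status.phase}'=Bound \
  --timeout=120s
```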

After the PVC is created, DataCore Puls8 provisions a new volume that replicates the state and content of the source volume at the moment the snapshot was taken. You can now mount it to a pod and consume it like any other persistent volume.
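As a minimal sketch of consuming the restored volume, the PVC can be mounted in a pod like any other claim; the pod name, image, and mount path below are arbitrary:

```shell
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: restore-consumer
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - mountPath: /data
          name: restored-vol
  volumes:
    - name: restored-vol
      persistentVolumeClaim:
        claimName: restore-pvc   # PVC created from the snapshot above
EOF
```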

Create a Volume Restore for Local PV LVM

This section describes how to restore a volume from a snapshot for Local PV LVM.

Volume restore is supported only for thin-provisioned LVM volumes created using DataCore Puls8 version 4.4.0 or later.

  1. Create a PVC from Snapshot.

    Create a PVC for Local PV LVM Volume Restore
    cat <<EOF | kubectl create -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: lvmpv-snap-restore-pvc
    spec:
      storageClassName: lvm-snapshot-restore-sc
      dataSource:
        name: lvmpv-snap          # Name of the existing VolumeSnapshot
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
    EOF
  2. Verify the Restored Volume.

    Verify Restored PVC
    kubectl get pvc
    Sample Output
    NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                AGE
    lvmpv-origin-pvc         Bound    pvc-dca183ae-4096-48fc-bf08-740d1c03d583   5Gi        RWO            lvmpv-sc-2xz5t              46s
    lvmpv-snap-restore-pvc   Bound    pvc-e9c895c0-ddbc-44ea-8f90-7fe81ba723bb   5Gi        RWO            lvm-snapshot-restore-sc     30s
  3. Verify the Local PV LVM volumes.

    Verify Restored Local PV LVM Volumes
    kubectl get lvmvolume -n puls8
    Sample Output
    NAME                                       VOLGROUP   NODE             SIZE         STATUS   AGE
    pvc-dca183ae-4096-48fc-bf08-740d1c03d583   lvmvg      worker-node-1    5368709120   Ready    82s
    pvc-e9c895c0-ddbc-44ea-8f90-7fe81ba723bb   lvmvg      worker-node-1    5368709120   Ready    66s
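The SIZE column of the lvmvolume output is in bytes. Since the restore PVC must exactly match the snapshot's lvSize, it is worth sanity-checking that the requested 5Gi corresponds to the value shown:

```shell
# 5Gi (gibibytes) expressed in bytes; matches the SIZE column above.
echo $((5 * 1024 * 1024 * 1024))   # → 5368709120
```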
  • To prevent write failures when the thin pool becomes full, set the following parameters in the lvm.conf file:
    • thin_pool_autoextend_threshold
    • thin_pool_autoextend_percent
  • Ensure the Volume Group (VG) has sufficient free capacity. Auto-extend will not function correctly if the VG lacks space.
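A sketch of the relevant lvm.conf settings (the values are illustrative; tune them for your environment): when thin pool usage crosses the threshold percentage, LVM grows the pool by the given percentage, provided the VG has free extents.

```
activation {
    # Auto-extend the thin pool once it is 70% full...
    thin_pool_autoextend_threshold = 70
    # ...growing it by 20% of its current size each time.
    thin_pool_autoextend_percent = 20
}
```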

Create a Volume Restore for Local PV ZFS

This section describes how to restore a volume from a snapshot for Local PV ZFS.

  1. Create a PVC from Snapshot.

    Create a PVC for Local PV ZFS Volume Restore
    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: zfspv-restore-pvc
    spec:
      storageClassName: openebs-zfspv
      dataSource:
        name: zfspv-snap          # Name of the existing VolumeSnapshot
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 4Gi
    EOF
  2. Verify the Restored Volume.

    Verify Restored PVC
    kubectl get pvc
    Sample Output
    NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
    csi-zfspv          Bound    pvc-73402f6e-d054-4ec2-95a4-eb8452724afb   4Gi        RWO            openebs-zfspv    3m
    zfspv-restore-pvc  Bound    pvc-9b5f124b-82fc-47e9-9b12-9e4d210a9f45   4Gi        RWO            openebs-zfspv    1m
  3. Confirm snapshot and restore presence at the node level.

    Verify Snapshots on the Node
    zfs list -t all

Benefits of Volume Restore

  • Quick Recovery: Rapidly restore application state after data loss or corruption.
  • Disaster Recovery: Strengthen your backup and recovery strategies with consistent, replica-aware snapshots.
  • Reduced Downtime: Minimize service interruptions by speeding up volume replacement or rollback.
