StorageClass Parameters

Overview

This document provides a comprehensive guide to configuring StorageClass parameters for provisioning replicated Persistent Volumes (PVs) in DataCore Puls8. These parameters play a crucial role in defining how storage volumes behave, including aspects like filesystem type, replication levels, provisioning strategy (thick/thin), and support for volume expansion. In addition, it details topology-aware configurations to optimize the placement of volume replicas across nodes and pools in a Kubernetes cluster.

Beyond Replicated PV Mayastor, the document also outlines configuration parameters for Local PVs, enabling customized local storage provisioning with options like mount control, file system selection, and storage type enforcement.

Common StorageClass Parameters

This section describes the common StorageClass parameters supported for both Local Storage and Replicated Storage. These parameters are essential to determine the behavior of volume provisioning and binding.

Provisioner (Required)

Specifies the external Container Storage Interface (CSI) or Container Native Storage driver responsible for provisioning volumes. The provisioner value must match the storage engine being used. This field is required for the StorageClass to function properly.

Replicated Storage Example
provisioner: io.openebs.csi-mayastor

Local Storage Example
provisioner: openebs.io/local

The provisioner is responsible for managing the lifecycle of persistent volumes, including creation, attachment, detachment, and deletion. Always ensure the specified provisioner matches the installed storage engine in your cluster.

Reclaim Policy (Optional)

Reclaim Policy defines what happens to a PV after its associated PVC is deleted. If not specified, the default policy is Delete.

  • Delete: Automatically deletes the persistent volume when the associated PVC is deleted.
  • Retain: Retains the persistent volume and its data for manual recovery or reattachment.
Example StorageClass with ReclaimPolicy - Replicated PV Mayastor
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-mayastor-1
provisioner: io.openebs.csi-mayastor
parameters:
  protocol: nvmf
  repl: "3"
  fsType: "ext4"
reclaimPolicy: Delete          ## Reclaim policy can be specified here. It also accepts Retain
Example StorageClass with ReclaimPolicy - Local PV LVM
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-lvm
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  vgpattern: "lvmvg.*"
reclaimPolicy: Delete          ## Reclaim policy can be specified here. It also accepts Retain

StorageClass Parameters for Replicated PV Mayastor

The following parameters are commonly used to control the behavior of volumes provisioned through a StorageClass in DataCore Puls8. These define core attributes such as the file system type, replication, provisioning, and expansion capabilities.

repl (Optional)

Indicates the desired replication factor. This value must be a number greater than zero. A value of 1 means no fault tolerance, 2 tolerates one node failure, 3 tolerates two node failures, and so on. The default repl value is 1.
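As a sketch, a StorageClass requesting three replicas could look like the following; the class name is illustrative, while the provisioner and parameter names follow the Mayastor examples elsewhere in this document:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-mayastor-repl3   # illustrative name
provisioner: io.openebs.csi-mayastor
parameters:
  protocol: nvmf
  repl: "3"        # three replicas: tolerates two replica failures
  fsType: "ext4"
```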

protocol (Optional)

Defines the protocol used to mount the volume. Currently, only nvmf (NVMe-oF over TCP) is supported.

thin (Optional)

Enables thin provisioning of volumes when set to "true". Thin provisioning allows dynamic allocation of storage. Monitoring is required to prevent degradation or faults due to space exhaustion. Additional configurations can be set using the openebs.mayastor.agents.core.capacity.thin spec in the Helm chart:

  • poolCommitment: Maximum allowed pool commitment (%). The default value is 250%.
  • volumeCommitment: Minimum free space (%) required in each replica pool for new replicas of an existing volume. The default value is 40%.
  • volumeCommitmentInitial: Minimum free space (%) required in each replica pool for new volume creation. The default value is 40%.

The volumes can either be thick or thin provisioned. Use thin: "true" in environments with limited capacity or when using snapshots/cloning.
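A StorageClass enabling thin provisioning could look like the following sketch (class name illustrative; note that the value is a quoted string, as with the other Mayastor parameters):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-mayastor-thin    # illustrative name
provisioner: io.openebs.csi-mayastor
parameters:
  protocol: nvmf
  repl: "3"
  thin: "true"    # capacity is allocated on demand; monitor pool usage
```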

encrypted (Optional)

Enables encryption of volumes when set to "true". Encrypted volumes are provisioned only if a sufficient number of encrypted pools are available, as required by the repl (replication factor) setting in the StorageClass.
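A minimal sketch of a StorageClass requesting encrypted volumes, assuming the value is passed as a quoted string like the other Mayastor parameters (class name illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-mayastor-encrypted   # illustrative name
provisioner: io.openebs.csi-mayastor
parameters:
  protocol: nvmf
  repl: "2"            # requires at least two encrypted pools
  encrypted: "true"
```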

fsType (Optional)

Specifies the file system to use when mounting the volume. Supported file systems are ext4 (Default), xfs, and btrfs.

It is recommended to use xfs for better performance. Ensure the required filesystem driver is installed on all worker nodes in the cluster before use.

formatOptions (Optional)

Allows you to specify additional formatting options when initializing the device with a file system. By default, Replicated PV Mayastor formats devices with ext4. Depending on the fsType parameter (for example, xfs or btrfs), refer to the corresponding Linux documentation for the supported formatting options.

overrideGlobalFormatOpts (Optional)

Overrides the global XFS formatting options defined via Helm values. In certain environments, Helm charts may configure global xfs format options which get applied to all volumes using the XFS file system.

To override these global options for a specific volume, set overrideGlobalFormatOpts: true in the StorageClass and define the custom options via formatOptions. This ensures the provided formatOptions are used instead of the global settings.

If both global Helm options and per-volume formatOptions are specified, Replicated PV Mayastor applies both sets of options together unless overrideGlobalFormatOpts is explicitly set to true.

Example

If the global Helm configuration sets -m bigtime=0 -m inobtcount=0 and you wish to override these settings for a specific volume with -m bigtime=1 -m inobtcount=1, then the following parameters should be specified in the StorageClass:

Example: Overriding Global XFS Format Options for a Specific Volume
overrideGlobalFormatOpts: true
formatOptions: "-m bigtime=1 -m inobtcount=1"

allowVolumeExpansion (Optional)

Enables expansion of PVs through PVCs. Set this parameter to true in the StorageClass. To expand, edit the PVC size. Refer to the Volume Resize Documentation for more information.
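Note that allowVolumeExpansion is a top-level StorageClass field, not an entry under parameters. A sketch (class name illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-mayastor-expandable   # illustrative name
provisioner: io.openebs.csi-mayastor
allowVolumeExpansion: true          # top-level field, not a parameters entry
parameters:
  protocol: nvmf
  repl: "3"
```

Expansion is then triggered by editing spec.resources.requests.storage on the bound PVC.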

nodeAffinityTopologyLabel (Optional)

Places replicas only on nodes with labels that exactly match those defined in the StorageClass.

StorageClass Definition

Sample StorageClass YAML with nodeAffinityTopologyLabel
cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: puls8-mayastor-1
parameters:
  protocol: nvmf
  repl: "2"
  nodeAffinityTopologyLabel: |
    zone: us-west-1
provisioner: io.openebs.csi-mayastor
volumeBindingMode: Immediate
EOF

Apply Node Labels

Command to Label Nodes
kubectl puls8 mayastor label node worker-node-1 zone=us-west-1
kubectl puls8 mayastor label node worker-node-2 zone=eu-east-1
kubectl puls8 mayastor label node worker-node-3 zone=us-west-1

Get Nodes
kubectl puls8 mayastor get nodes -n puls8 --show-labels
ID             GRPC ENDPOINT        STATUS  LABELS
worker-node-1  65.108.91.181:10124  Online  zone=us-west-1
worker-node-2  65.21.4.103:10124    Online  zone=eu-east-1
worker-node-3  37.27.13.10:10124    Online  zone=us-west-1

nodeHasTopologyKey (Optional)

Places replicas on nodes that have label keys matching the provided key, regardless of their values.

StorageClass Definition

Sample StorageClass YAML with nodeHasTopologyKey
cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: puls8-mayastor-1
parameters:
  protocol: nvmf
  repl: "2"
  nodeHasTopologyKey: |
    rack
provisioner: io.openebs.csi-mayastor
volumeBindingMode: Immediate
EOF

Apply Node Labels

Command to Apply Labels to Nodes
kubectl puls8 mayastor label node worker-node-1 rack=1
kubectl puls8 mayastor label node worker-node-2 rack=2
kubectl puls8 mayastor label node worker-node-3 rack=2
Get Nodes
kubectl puls8 mayastor get nodes -n puls8 --show-labels
ID             GRPC ENDPOINT        STATUS  LABELS
worker-node-1  65.108.91.181:10124  Online  rack=1
worker-node-2  65.21.4.103:10124    Online  rack=2
worker-node-3  37.27.13.10:10124    Online  rack=2

poolAffinityTopologyLabel (Optional)

Places replicas only on pools with labels that exactly match the values provided.

StorageClass Definition

Sample StorageClass YAML with poolAffinityTopologyLabel
cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: puls8-mayastor-1
parameters:
  protocol: nvmf
  repl: "2"
  poolAffinityTopologyLabel: |
    zone: us-west-1
provisioner: io.openebs.csi-mayastor
volumeBindingMode: Immediate
EOF

Label Pools via DiskPool Definitions

YAML to Apply Labels to Pools
cat <<EOF | kubectl create -f -
apiVersion: "openebs.io/v1beta2"
kind: DiskPool
metadata:
  name: pool-on-node-0
  namespace: mayastor
spec:
  node: worker-node-0
  disks: ["/dev/sdb"]
  topology:
    labelled:
        zone: us-west-1
---
apiVersion: "openebs.io/v1beta2"
kind: DiskPool
metadata:
  name: pool-on-node-1
  namespace: mayastor
spec:
  node: worker-node-1
  disks: ["/dev/sdb"]
  topology:
    labelled:
        zone: us-east-1
---
apiVersion: "openebs.io/v1beta2"
kind: DiskPool
metadata:
  name: pool-on-node-2
  namespace: mayastor
spec:
  node: worker-node-2
  disks: ["/dev/sdb"]
  topology:
    labelled:
        zone: us-west-1
EOF

poolHasTopologyKey (Optional)

Selects pools with label keys that match the key specified in the StorageClass.

StorageClass Definition

Sample StorageClass YAML with poolHasTopologyKey
cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: puls8-mayastor-1
parameters:
  protocol: nvmf
  repl: "2"
  poolHasTopologyKey: |
    zone
provisioner: io.openebs.csi-mayastor
volumeBindingMode: Immediate
EOF

Filter Pools Based on Labels

Command to View Pools with Matching Labels
kubectl puls8 mayastor get pools -n puls8 --selector zone=us-west-1
ID              DISKS                                                     MANAGED  NODE           STATUS  CAPACITY  ALLOCATED  AVAILABLE  COMMITTED
pool-on-node-0  aio:///dev/sdb?uuid=b7779970-793c-4dfa-b8d7-03d5b50a45b8  true     worker-node-0  Online  10GiB     0 B        10GiB      0 B
pool-on-node-2  aio:///dev/sdb?uuid=b7779970-793c-4dfa-b8d7-03d5b50a45b8  true     worker-node-2  Online  10GiB     0 B        10GiB      0 B

kubectl puls8 mayastor get pools -n puls8 --selector zone=us-east-1
ID              DISKS                                                     MANAGED  NODE           STATUS  CAPACITY  ALLOCATED  AVAILABLE  COMMITTED
pool-on-node-1  aio:///dev/sdb?uuid=b7779970-793c-4dfa-b8d7-03d5b50a45b8  true     worker-node-1  Online  10GiB     0 B        10GiB      0 B

stsAffinityGroup (Optional)

Groups volumes associated with StatefulSet pods to prevent single points of failure. The following rules are enforced:

  • Anti-affinity among single-replica volumes
  • Optimized distribution for multi-replica volumes
  • Anti-affinity for volume targets

To enable, set stsAffinityGroup: "true" in the StorageClass parameters (the value is a quoted string, like other Mayastor parameters).

Limitation

For multi-replica volumes that are part of a stsAffinityGroup, scaling down is permitted only up to two replicas. Reducing the replica count below two is not supported.
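A sketch of a StorageClass with the affinity group enabled (class name illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-mayastor-sts     # illustrative name
provisioner: io.openebs.csi-mayastor
parameters:
  protocol: nvmf
  repl: "2"
  stsAffinityGroup: "true"     # distributes StatefulSet volumes to avoid single points of failure
```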

cloneFsIdAsVolumeId (Optional)

Controls how the UUID of a cloned/restored volume is handled:

  • true: The clone/restore receives a new UUID.
  • false (default): The clone retains the original UUID and is mounted using the nouuid flag.

Set to true when using btrfs and concurrent mounts on the same node are expected.
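For the btrfs case described above, a sketch of the relevant parameters (class name illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-mayastor-btrfs-clone   # illustrative name
provisioner: io.openebs.csi-mayastor
parameters:
  protocol: nvmf
  repl: "2"
  fsType: "btrfs"
  cloneFsIdAsVolumeId: "true"   # clones get a new UUID, allowing concurrent mounts on one node
```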

StorageClass Parameters for Local PV Hostpath

These parameters allow customization of HostPath volumes, enabling configurations such as storage type, custom base paths, node affinity control, and XFS quota management. The default StorageClass is called local-hostpath and its BasePath is configured as /var/openebs/local.

StorageType (Required)

Defines the type of backend storage used by the Local PV. For HostPath volumes, this must be set to hostpath.

Example StorageClass with StorageType
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-local-hostpath
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /var/local-hostpath
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

BasePath (Optional)

The BasePath parameter defines the root directory on each node where DataCore Puls8 provisions local volumes. By default, the Local PV Hostpath provisioner uses /var/openebs/local as the base path. You can customize this path in your StorageClass definition to meet your storage architecture or policy requirements.

If BasePath does not exist on the node, the Dynamic Local PV Provisioner attempts to create the directory when the first local volume is scheduled onto that node. Ensure that the value provided for BasePath is a valid absolute path.

NodeAffinityLabels (Optional)

Specifies custom node labels to restrict volume provisioning to specific nodes matching these labels.

Example StorageClass with NodeAffinityLabels
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-local-hostpath
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: NodeAffinityLabels
        list:
          - "openebs.io/custom-node-unique-id"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer

XFSQuota (Optional)

Enables support for XFS project quotas on XFS-formatted HostPath volumes. Useful for enforcing space usage limits per volume.

Example StorageClass with XFSQuota
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-hostpath-xfs
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: BasePath
        value: "/var/openebs/local/"
      - name: XFSQuota
        enabled: "true"
        data:
          softLimitGrace: "0%"
          hardLimitGrace: "0%"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete

StorageClass Parameters for Local PV LVM

These parameters allow customization of features such as volume expansion, mount options, file systems, volume sharing, and more.

volgroup or vgpattern (Required)

Either volgroup or vgpattern must be provided to identify the volume group(s) from which volumes are provisioned.

  • volgroup: The exact name of the LVM volume group. Required if vgpattern is not provided.
  • vgpattern: A regular expression matching volume group names. Required if volgroup is not provided.
Example StorageClass with volgroup
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-lvm
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  volgroup: "lvmvg"       ## volgroup specifies name of lvm volume group
Example StorageClass with vgpattern
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-lvm
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  vgpattern: "lvmvg.*"     ## vgpattern specifies pattern of lvm volume group name

It is recommended to use vgpattern, since volgroup will be deprecated in a future release.

AllowVolumeExpansion (Optional)

To enable volume expansion, set the allowVolumeExpansion field to true in the StorageClass definition. If not specified, volume expansion is not supported.

Example StorageClass with AllowVolumeExpansion
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-lvm
allowVolumeExpansion: true  # Enables dynamic expansion of volumes
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  vgpattern: "lvmvg.*"

MountOptions (Optional)

Volumes provisioned via Local PV LVM can be mounted with specified options in the StorageClass. If not specified, the -o default option is used. Invalid mount options may cause volume mount failures.

Example StorageClass with MountOptions
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-lvm
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  vgpattern: "lvmvg.*"
mountOptions:
  - debug  # Various mount options of volume can be specified here

FsType (Optional)

Defines the file system type for the volume. If not specified, it defaults to ext4.

Example StorageClass with FsType
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-lvm
allowVolumeExpansion: true
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  vgpattern: "lvmvg.*"
  fsType: xfs               ## Supported filesystems are ext2, ext3, ext4, xfs & btrfs

Shared (Optional)

To allow multiple pods on the same node to share a volume, set shared to yes.

Example StorageClass with Shared Option
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-shared-lvmsc
allowVolumeExpansion: true
provisioner: local.csi.openebs.io
parameters:
  volgroup: "lvmvg"
  shared: "yes"             ## Parameter that states volume can be shared among multiple pods

ThinProvision (Optional)

To enable thin provisioning, set thinProvision to yes (default is no). Ensure the dm_thin_pool kernel module is loaded before using thin provisioning.

Example StorageClass with Thin Provisioning
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-lvm
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  volgroup: "lvmvg"
  thinProvision: "yes"      ## Parameter that enables thinprovisioning

Verify if Thin Provisioning Module is Loaded:

Verify the Modules
lsmod | grep dm_thin_pool

If not loaded, execute:

Load the Modules
modprobe dm_thin_pool

VolumeBindingMode (Optional)

The volumeBindingMode determines when and how a PV is bound to a PersistentVolumeClaim (PVC).

  • Immediate: Volume binding and dynamic provisioning occur as soon as the PVC is created.
  • WaitForFirstConsumer (Late Binding): Binding and provisioning of the PVC are delayed until a pod requesting the PVC is created.
Example StorageClass with VolumeBindingMode - Local PV LVM
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-lvm
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  vgpattern: "lvmvg.*"
volumeBindingMode: WaitForFirstConsumer     ## Can also be set to Immediate, depending on the use case

VolumeBindingMode "Immediate" is not supported for Local PV Hostpath.

StorageClass with Custom Node Labels

To assign volumes to specific nodes based on available volume groups, use allowedTopologies.

Example StorageClass with Custom Node Labels
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-lvm-sc
allowVolumeExpansion: true
parameters:
  volgroup: "lvmvg"
provisioner: local.csi.openebs.io
allowedTopologies:
- matchLabelExpressions:
  - key: openebs.io/nodename
    values:
      - node-1
      - node-2

VolumeGroup Availability

If the LVM volume group is available only on certain nodes, use allowedTopologies to specify those nodes.

Example StorageClass with VolumeGroup Availability
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-lvmpv
allowVolumeExpansion: true
parameters:
  storage: "lvm"
  volgroup: "lvmvg"
provisioner: local.csi.openebs.io
allowedTopologies:
- matchLabelExpressions:
  - key: kubernetes.io/hostname
    values:
      - lvmpv-node1
      - lvmpv-node2

The provisioner name for the LVM driver is "local.csi.openebs.io"; use this while creating the StorageClass to ensure volume provisioning requests are correctly routed.

StorageClass Parameters for Local PV ZFS

These parameters define essential aspects of the storage configuration, such as the pool in which volumes are created, while additional optional parameters such as fstype, recordsize, compression, and deduplication enable further customization.

Poolname (Required)

The poolname parameter specifies the name of the storage pool where the volume is created. This parameter is required and can either refer to the root dataset or a child dataset. The dataset provided under poolname must exist on all nodes with the same name specified in the StorageClass.

Example Configuration
poolname: "zfspv-pool"
poolname: "zfspv-pool/child"

FsType (Optional)

Defines the file system type for the volume. If FsType is set to zfs, the driver creates a ZFS dataset, and no additional formatting is required. If set to ext2, ext3, ext4, btrfs, or xfs, the driver creates a ZVOL and formats the volume accordingly. This parameter cannot be modified once the volume has been provisioned. If omitted, Kubernetes defaults to ext4.

Allowed Values
"zfs", "ext2", "ext3", "ext4", "xfs", "btrfs"
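Putting poolname and fstype together, a Local PV ZFS StorageClass could look like the following sketch. The provisioner name zfs.csi.openebs.io is assumed from the upstream OpenEBS ZFS-LocalPV driver and is not stated elsewhere in this document; verify it against your installation.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-zfspv               # illustrative name
provisioner: zfs.csi.openebs.io   # assumed upstream driver name; verify for your installation
parameters:
  poolname: "zfspv-pool"
  fstype: "zfs"                   # creates a ZFS dataset; no additional formatting needed
```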

Recordsize (Optional)

Applicable only when FsType is set to zfs. This parameter specifies the suggested block size for files stored in the filesystem.

Allowed Values
Any power of 2 from 512 bytes to 128 KB
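As a sketch, the parameter sits alongside the other ZFS parameters in the StorageClass (values shown are illustrative):

```yaml
parameters:
  poolname: "zfspv-pool"
  fstype: "zfs"            # recordsize applies only when a ZFS dataset is created
  recordsize: "128k"       # any power of 2 from 512 bytes to 128 KB
```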

Volblocksize (Optional)

When FsType is anything other than zfs, a ZVOL (a raw block device carved from the ZFS pool) is created. The volblocksize parameter defines the block size for the ZVOL. The volume size must be a multiple of volblocksize and cannot be zero.

Allowed Values
Any power of 2 from 512 bytes to 128 KB

Compression (Optional)

The compression parameter specifies the block-level compression algorithm to be applied to the ZFS volume and datasets. Setting it to on enables ZFS to use the default compression algorithm.

Allowed Values
"on", "off", "lzjb", "zstd", "zstd-1" to "zstd-19", "gzip", "gzip-1" to "gzip-9", "zle", "lz4"
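For example, an explicit algorithm and level can be requested in the StorageClass parameters (values illustrative):

```yaml
parameters:
  poolname: "zfspv-pool"
  compression: "zstd-6"    # explicit algorithm and level; "on" selects the ZFS default algorithm
```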

Deduplication/Dedup (Optional)

The dedup parameter enables block-level deduplication, reducing redundant data storage.

Allowed Values
"on", "off"

Thin Provisioning/Thinprovision (Optional)

The thinProvision parameter determines whether space reservation is required for the source volume. If set to yes, the volume is thin-provisioned and can be created even if the ZPOOL lacks sufficient capacity. If set to no, the volume is thick-provisioned, requiring adequate reserved space.

Allowed Values
"yes", "no"
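A sketch of the parameter in a StorageClass, following the thinProvision spelling used in this document:

```yaml
parameters:
  poolname: "zfspv-pool"
  thinProvision: "yes"     # no space reservation; monitor ZPOOL capacity to avoid exhaustion
```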

Shared Volume Access/Shared (Optional)

The shared parameter specifies whether the volume can be accessed by multiple pods simultaneously. If not explicitly set to yes, the ZFS-LocalPV Driver restricts the volume to a single pod. The default value is no.

Allowed Values
"yes", "no"
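A sketch of the parameter in a StorageClass (values illustrative):

```yaml
parameters:
  poolname: "zfspv-pool"
  shared: "yes"            # allows multiple pods on the same node to access the volume
```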

Learn More