Creating a StorageClass
Explore this Page
- Overview
- Requirements
- Creating a StorageClass for Replicated PV Mayastor
- Creating a StorageClass for Local PV Hostpath
- Creating a StorageClass for Local PV LVM
- Creating a StorageClass for Local PV ZFS
Overview
DataCore Puls8 provides a flexible, pluggable architecture that allows you to configure storage classes tailored to your performance, availability, and redundancy needs. Whether you require high-performance block storage, local volumes, or replicated volumes with fault tolerance, DataCore Puls8 supports multiple backends to meet diverse workload demands.
This document provides instructions for creating custom StorageClass definitions for the following storage backends:
- Replicated PV Mayastor
- Local PV Hostpath
- Local PV LVM
- Local PV ZFS
Each backend includes example configurations for customization. Use these templates as starting points to configure storage for your cluster.
Requirements
Before configuring StorageClasses for Local PV LVM and Local PV ZFS, ensure that the following components are already set up on the nodes:
- An LVM volume group must be created and available for Local PV LVM.
- A ZFS pool must be created and available for Local PV ZFS.
These resources are not provisioned automatically by the CSI drivers and must exist prior to StorageClass creation.
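As a sketch, the prerequisites above might be prepared as follows. The device paths (`/dev/sdb`, `/dev/sdc`), the volume group name `lvmvg`, and the pool name `zfspv-pool` are placeholders; whatever names you choose must match the values referenced later in the StorageClass definitions.

```shell
# On each node that should host Local PV LVM volumes:
# create a physical volume and a volume group (device path is a placeholder).
sudo pvcreate /dev/sdb
sudo vgcreate lvmvg /dev/sdb

# On each node that should host Local PV ZFS volumes:
# create a ZFS pool (device path and pool name are placeholders).
sudo zpool create zfspv-pool /dev/sdc
```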
Creating a StorageClass for Replicated PV Mayastor
Replicated PV Mayastor dynamically provisions PersistentVolumes (PVs) based on user-defined StorageClass configurations. These define key parameters such as the replication factor, protocol, and provisioner.
One Replica
```shell
cat <<EOF | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-1
parameters:
  protocol: nvmf
  repl: "1"
provisioner: io.openebs.csi-mayastor
EOF
```
Three Replicas
```shell
cat <<EOF | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-3
parameters:
  protocol: nvmf
  repl: "3"
provisioner: io.openebs.csi-mayastor
EOF
```
DataCore Puls8 installs a default StorageClass named mayastor-single-replica with replication factor 1.
Refer to the StorageClass Parameters documentation for detailed information on the various Replicated PV Mayastor StorageClass parameters.
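To consume one of the classes above, a PersistentVolumeClaim references it by name. The claim name `ms-volume-claim` and the 1Gi size below are illustrative, not product defaults.

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ms-volume-claim   # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: mayastor-3   # three-replica class defined above
```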
Creating a StorageClass for Local PV Hostpath
This StorageClass is ideal for provisioning local volumes backed by hostpath directories. The default StorageClass is `puls8-hostpath`, with a `BasePath` of `/var/puls8/local`.
YAML Configuration
To define a custom StorageClass with a specified `BasePath`, save the following YAML configuration as `local-hostpath-sc.yaml`:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hostpath
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /var/local-hostpath
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```
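Once the class is applied, it can be consumed by a claim such as the following; the claim name and size are illustrative. Because `volumeBindingMode` is `WaitForFirstConsumer`, the PV is created only when a pod using the claim is scheduled.

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-hostpath-pvc   # illustrative name
spec:
  storageClassName: local-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```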
Custom Node Labeling (Optional)
In Kubernetes, Local PV Hostpath identifies nodes using default labels such as `kubernetes.io/hostname=<node-name>`. However, these labels may not uniquely distinguish nodes across an entire cluster. To address this, you can define custom labels when configuring a StorageClass.
Below is an example of a StorageClass with custom node labeling:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hostpath
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: NodeAffinityLabels
        list:
          - "openebs.io/custom-node-unique-id"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
```
Using `NodeAffinityLabels` does not impact the scheduling of application Pods. To configure scheduling, use Kubernetes Allowed Topologies.
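For the `NodeAffinityLabels` approach to work, each node must actually carry the custom label. As a sketch (node name and label value are placeholders):

```shell
# Attach a cluster-unique ID to each node (values are placeholders).
kubectl label node <node-name> openebs.io/custom-node-unique-id=<unique-id>
```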
Edit `local-hostpath-sc.yaml` and modify the `metadata.name` and the `BasePath` entry under `cas.openebs.io/config` as needed.
If the specified `BasePath` does not exist on a node, the Dynamic Local PV Provisioner will attempt to create the directory when the first Local Volume is scheduled on that node. Ensure that `BasePath` is a valid absolute path.
Refer to the StorageClass Parameters documentation for detailed information on the various Local PV Hostpath StorageClass parameters.
Creating a StorageClass for Local PV LVM
This StorageClass enables dynamic volume provisioning from LVM volume groups. It supports features like thin provisioning and multiple scheduling logic configurations, including SpaceWeighted, CapacityWeighted, and VolumeWeighted.
Ensure that the specified volume group (`volgroup`) already exists on the node before applying the StorageClass.
YAML Configuration
To define a custom StorageClass for Local PV LVM, save the following YAML configuration:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-lvmpv
parameters:
  storage: "lvm"
  volgroup: "lvmvg"
provisioner: local.csi.openebs.io
```
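A claim against this class looks the same as for any dynamic provisioner; the claim name and size below are illustrative.

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: lvm-pvc   # illustrative name
spec:
  storageClassName: puls8-lvmpv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```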
With Scheduler Parameter
The Local PV LVM driver supports three types of scheduling logic:
- SpaceWeighted
- CapacityWeighted
- VolumeWeighted
To define a scheduler in the StorageClass, add the scheduler parameter with an appropriate value:
```yaml
parameters:
  storage: "lvm"
  volgroup: "lvmvg"
  scheduler: "CapacityWeighted" ## or "VolumeWeighted"
```
- SpaceWeighted (Default Behavior): If the `scheduler` parameter is not specified, the driver selects the node whose volume group (matching `volgroup` or `vgpattern`) has the highest available free space.
- CapacityWeighted: Selects the node whose matching volume group has the least allocated storage in terms of capacity.
- VolumeWeighted: Selects the node whose matching volume group (matching `vgpattern` or `volgroup`) has the least number of provisioned volumes.
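Since scheduling can match volume groups by `vgpattern` as well as by a fixed `volgroup`, a class can target any group whose name matches a regular expression. The class name and pattern below are illustrative.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-lvmpv-pattern   # illustrative name
parameters:
  storage: "lvm"
  vgpattern: "lvmvg.*"   # matches any volume group whose name starts with lvmvg
provisioner: local.csi.openebs.io
```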
Refer to the StorageClass Parameters documentation for detailed information on the various Local PV LVM StorageClass parameters.
Creating a StorageClass for Local PV ZFS
This StorageClass offers robust data integrity and features like compression and snapshots. The StorageClass can provision either ZFS datasets or ZVOLs, depending on `fstype`.
Ensure that the specified ZFS pool (`poolname`) has already been created on the node.
YAML Configuration
To define a custom StorageClass for Local PV ZFS, save the following YAML configuration:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-zfspv
parameters:
  recordsize: "128k"
  compression: "off"
  dedup: "off"
  fstype: "zfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```
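A claim against this class follows the usual pattern; the claim name and size below are illustrative.

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zfs-pvc   # illustrative name
spec:
  storageClassName: puls8-zfspv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```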
Using ext2/3/4, XFS, or Btrfs as fstype
If `fstype` is set to `ext2`, `ext3`, `ext4`, `xfs`, or `btrfs`, the driver creates a ZVOL (a block device carved out of the ZFS pool). This block device is then formatted with the specified filesystem before use.
A filesystem layer on top of the ZFS volume may impact performance.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-zfspv
parameters:
  volblocksize: "4k"
  compression: "off"
  dedup: "off"
  fstype: "ext4"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```
- Use `volblocksize` instead of `recordsize` when creating a ZVOL.
- `volblocksize` must be a power of 2.
Using ZFS as fstype
If `fstype` is set to `zfs`, the driver creates a ZFS dataset within the ZFS pool. Unlike block-based storage, this setup provides optimal performance by eliminating extra layers between the application and the storage.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-zfspv
parameters:
  recordsize: "128k"
  compression: "off"
  dedup: "off"
  fstype: "zfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
```
`recordsize` defines the block size for ZFS datasets and must be a power of 2.
Managing ZFS Pool Availability
If the ZFS pool is available only on specific nodes, use the `allowedTopologies` parameter to restrict volume provisioning to those nodes.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-zfspv
allowVolumeExpansion: true
parameters:
  recordsize: "128k"
  compression: "off"
  dedup: "off"
  fstype: "zfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
allowedTopologies:
  - matchLabelExpressions:
      - key: kubernetes.io/hostname
        values:
          - zfspv-node1
          - zfspv-node2
```
This configuration ensures that volumes are provisioned only on `zfspv-node1` and `zfspv-node2`.
The provisioner name for the ZFS driver is `zfs.csi.openebs.io`. It must be used in the StorageClass definition to handle volume provisioning and deprovisioning requests.
Scheduling in ZFS Driver
The ZFS driver includes a built-in scheduler that balances storage volumes across nodes. The driver supports two scheduling algorithms:
- VolumeWeighted: Prefers nodes with fewer volumes provisioned.
- CapacityWeighted: Prefers nodes with more available storage capacity.
If finer control over scheduling is required, the Kubernetes scheduler can be used instead. To enable this, set `volumeBindingMode` to `WaitForFirstConsumer`.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-zfspv
allowVolumeExpansion: true
parameters:
  recordsize: "128k"
  compression: "off"
  dedup: "off"
  fstype: "zfs"
  poolname: "zfspv-pool"
provisioner: zfs.csi.openebs.io
volumeBindingMode: WaitForFirstConsumer
```
- When `WaitForFirstConsumer` is set, the Kubernetes scheduler first schedules the application pod and then triggers PV creation.
- Once a Local PV is assigned to a node, it remains bound to that node.
- The Local PV scheduling algorithm operates only at deployment time. Once provisioned, the application cannot be moved, since its data resides on the allocated node.
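The binding behavior described above can be observed with a minimal pod: until such a pod is scheduled, a claim against a `WaitForFirstConsumer` class stays Pending. The pod and claim names below are illustrative.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: zfs-app   # illustrative name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - mountPath: /data
          name: storage
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: zfs-pvc   # an existing claim that uses the puls8-zfspv class
```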
Refer to the StorageClass Parameters documentation for detailed information on the various Local PV ZFS StorageClass parameters.
Learn More