Installing DataCore Puls8 on OpenShift
Explore this Page
- Overview
- Requirements
- Install DataCore Puls8 on OpenShift
- Disk Pool Configuration
- StorageClass Configuration
- Deploying and Validating a Persistent Volume
- Verify Volume and Application Binding
- Deploying a Sample Application
- Benefits of Using DataCore Puls8 with OpenShift
Overview
This document provides detailed instructions for installing DataCore Puls8 on OpenShift. DataCore Puls8 delivers Kubernetes-native storage and simplifies persistent volume management for cloud-native and DevOps-driven environments. It provides scalable, high-performance, and highly available storage for stateful applications running in OpenShift clusters.
The installation process includes validating prerequisites, configuring the environment, and using Helm charts to deploy the solution. This document also covers how to set up DiskPools and StorageClasses, and how to validate the installation using Persistent Volume Claims (PVCs).
Requirements
Ensure the following requirements are met before installing DataCore Puls8 on OpenShift:
Worker Node Considerations
IO Engine pods must be scheduled on a number of worker nodes equal to or greater than the desired replication factor.
Additional Storage Disks
Additional unmounted and unformatted disks must be attached to the nodes.
Huge Pages Configuration
- 2MiB Huge Pages must be enabled on storage nodes.
- Each node should reserve at least 1024 Huge Pages (2GiB total) exclusively for the IO Engine.
- Refer to the Red Hat documentation for enabling Huge Pages during or after OpenShift Container Platform (OCP) installation; a boot-time MachineConfig sketch is shown after this list.
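The Red Hat documentation is the authoritative reference for this step. As an illustration only, Huge Pages can be allocated at boot time with a MachineConfig similar to the following sketch; the object name 50-worker-hugepages is an example, and applying a MachineConfig triggers a rolling reboot of the affected nodes.
cat <<EOF | oc create -f -
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    # Must match the MachineConfigPool that contains your storage nodes
    machineconfiguration.openshift.io/role: worker
  name: 50-worker-hugepages
spec:
  kernelArguments:
    - hugepagesz=2M
    - hugepages=1024
EOF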
Kernel Module Support
The nvme kernel modules are preloaded in CoreOS.
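To confirm this on a given node, one option (a generic check using oc debug, not part of the original instructions) is:
oc debug node/<NODE_NAME> -- chroot /host sh -c 'lsmod | grep nvme'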
Preparing the Cluster
Refer to the DataCore Puls8 Prerequisites documentation for steps to prepare the cluster environment.
Configure Security Context Constraints (SCCs)
Ensure all relevant DataCore Puls8 service accounts have the privileged SCC assigned.
oc adm policy -n openebs add-scc-to-user privileged -z openebs-promtail
oc adm policy -n openebs add-scc-to-user privileged -z openebs-loki
oc adm policy -n openebs add-scc-to-user privileged -z openebs-localpv-provisioner
oc adm policy -n openebs add-scc-to-user privileged -z openebs-nats
oc adm policy -n openebs add-scc-to-user privileged -z openebs-lvm-controller-sa
oc adm policy -n openebs add-scc-to-user privileged -z openebs-lvm-node-sa
oc adm policy -n openebs add-scc-to-user privileged -z openebs-service-account
oc adm policy -n openebs add-scc-to-user privileged -z openebs-zfs-controller-sa
oc adm policy -n openebs add-scc-to-user privileged -z openebs-zfs-node-sa
oc adm policy -n openebs add-scc-to-user privileged -z default
Install DataCore Puls8 on OpenShift
Install DataCore Puls8 with Mayastor using Helm.
helm install openebs --namespace openebs openebs/openebs --create-namespace --set openebs-crds.csi.volumeSnapshots.enabled=false
OpenShift includes default VolumeSnapshot CRDs. Disabling them in the Helm chart avoids resource conflicts during installation.
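After the chart is installed, a common sanity check (not shown in the original steps) is to confirm that all pods in the openebs namespace reach the Running state:
kubectl get pods -n openebs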
Disk Pool Configuration
Use the kubectl puls8 mayastor plugin (not compatible with oc) to view available block devices.
kubectl puls8 mayastor get block-devices <NODE_ID> -n openebs
DEVNAME DEVTYPE SIZE AVAILABLE MODEL DEVPATH MAJOR MINOR DEVLINKS
/dev/sdb disk 30GiB yes Virtual_disk /devices/pci0000:00/0000:00:10.0/host2/target2:0:1/2:0:1:0/block/sdb 8 16 "/dev/disk/by-id/scsi-SVMware_Virtual_disk_6000c2915164f6cc7af0aa6cb040cf67", "/dev/disk/by-id/wwn-0x6000c2915164f6cc7af0aa6cb040cf67", "/dev/disk/by-id/scsi-36000c2915164f6cc7af0aa6cb040cf67", "/dev/disk/by-diskseq/2", "/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:1:0"
Use stable and persistent device links such as /dev/disk/by-path or /dev/disk/by-id to ensure disks are reliably identified after node reboots.
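The original section does not show the DiskPool creation step itself. The following sketch follows the upstream OpenEBS Mayastor DiskPool resource (apiVersion openebs.io/v1beta2); the pool name pool-on-worker, the node name worker, and the by-id device link are taken from the outputs shown on this page. Verify the API version against the CRDs installed in your cluster before applying.
cat <<EOF | oc create -f -
apiVersion: openebs.io/v1beta2
kind: DiskPool
metadata:
  name: pool-on-worker
  namespace: openebs
spec:
  node: worker
  disks:
    - /dev/disk/by-id/scsi-SVMware_Virtual_disk_6000c2915164f6cc7af0aa6cb040cf67
EOF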
Verify DiskPools and Status
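The exact command used to produce the output below is not included in the original; listing the DiskPool custom resources with kubectl yields the same columns:
kubectl get diskpools -n openebs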
NAME NODE STATE POOL_STATUS CAPACITY USED AVAILABLE
pool-on-worker worker Created Online 32178700288 0 32178700288
StorageClass Configuration
Refer to the Creating a StorageClass documentation for details on creating StorageClasses. The example below defines a 3-replica configuration.
cat <<EOF | oc create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-3
parameters:
  protocol: nvmf
  repl: "3"
provisioner: io.openebs.csi-mayastor
EOF
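To confirm that the StorageClass was created (a standard check, not part of the original steps):
oc get storageclass mayastor-3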
Deploying and Validating a Persistent Volume
After defining a StorageClass, create a PVC and test it with a sample application.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ms-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: openebs-single-replica
EOF
Verify PVC and PV Status.
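The command producing the output below is not shown in the original; a standard check of the claim is:
kubectl get pvc ms-volume-claim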
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
ms-volume-claim Bound pvc-144d54db-a3cf-4194-821d-34eae9dafc1d 1Gi RWO openebs-single-replica <unset> 40s
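Similarly, the bound Persistent Volumes can be listed with:
kubectl get pv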
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
pvc-02333bf8-8a07-4ce0-a00e-bd6bc67af380 2Gi RWO Delete Bound openebs/data-openebs-etcd-0 puls8-etcd-localpv <unset> - 47h
pvc-144d54db-a3cf-4194-821d-34eae9dafc1d 1Gi RWO Delete Bound default/ms-volume-claim openebs-single-replica <unset> - 42s
pvc-233aafb1-59e9-4836-b8a1-f74ab2f5a6e4 10Gi RWO Delete Bound openebs/storage-openebs-loki-0 mayastor-loki-localpv <unset> - 47h
Verify Volume and Application Binding
After provisioning a PVC, you can inspect the corresponding volume using the following command:
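The original omits the command itself; assuming the plugin follows the same syntax as the block-devices example earlier on this page, the volumes can be listed with:
kubectl puls8 mayastor get volumes -n openebs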
ID REPLICAS TARGET-NODE ACCESSIBILITY STATUS SIZE THIN-PROVISIONED ALLOCATED SNAPSHOTS SOURCE
144d54db-a3cf-4194-821d-34eae9dafc1d 1 <none> <none> Online 1GiB false 1GiB 0 <none>
The Replicated PV Mayastor CSI driver ensures that the application pod and the associated NVMe target (also called the Nexus) are co-located on the same node. This design improves fault tolerance and accelerates recovery in case of node failures.
Deploying a Sample Application
Use the following pod specification to run an application (Example: fio) that consumes the volume:
kind: Pod
apiVersion: v1
metadata:
  name: fio
spec:
  nodeSelector:
    openebs.io/engine: mayastor
  volumes:
    - name: ms-volume
      persistentVolumeClaim:
        claimName: ms-volume-claim
  containers:
    - name: fio
      image: nixery.dev/shell/fio
      args:
        - sleep
        - "1000000"
      volumeMounts:
        - mountPath: "/volume"
          name: ms-volume
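To deploy the pod, apply the specification from a file; the filename fio.yaml below is only an example:
kubectl apply -f fio.yaml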
Once deployed, confirm that the application pod is running:
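A standard status check (the original does not show the command) is:
kubectl get pod fio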
Benefits of Using DataCore Puls8 with OpenShift
- Cloud-Native and Container-Aware Architecture: DataCore Puls8 integrates seamlessly with OpenShift, delivering Container Native Storage (CNS) that operates as Kubernetes microservices and supports dynamic provisioning.
- Dynamic and Scalable Storage: Automatically provisions volumes on demand to match the scale and pace of application growth in OpenShift.
- Support for Stateful Applications: Ideal for databases, message queues, and other workloads that require persistent, high-performance, and redundant storage.
- Integration with OpenShift Ecosystem: Compatible with OpenShift Operators, pipelines, and monitoring tools, enhancing observability and management.