DataCore Puls8 Prerequisites
Explore this Page
- Overview
- General Requirements
- Supported Versions
- System Configuration
- Storage Configuration
- Kernel Module Requirements
- Volume Management
- Other Installation Considerations
Overview
This document outlines the prerequisites for installing DataCore Puls8 in a Kubernetes environment, covering the storage configuration requirements needed for compatibility and optimal performance. Before installation, ensure that your cluster meets these requirements to support DataCore Puls8 storage effectively.
General Requirements
- Linux kernel version 5.15 or higher
- Required kernel modules: nvme_tcp and ext4 (optional: xfs)
- x86-64 CPU cores with SSE4.2 instruction support
- Helm version 3.7 or higher
- Huge Page support: a minimum of 2GiB of 2MiB-sized Huge Pages
- Dedicated resources for each io-engine pod:
- 2 CPU cores
- 1GiB RAM
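A quick way to confirm several of these requirements on each node (a sketch using standard Linux and Helm commands; compare the output against the values above):
uname -r                             # kernel version, should be 5.15 or higher
grep -m1 -o sse4_2 /proc/cpuinfo     # prints sse4_2 if the CPU supports SSE4.2
helm version --short                 # should report v3.7 or higher
grep HugePages_Total /proc/meminfo   # at least 1024 x 2MiB pages (2GiB)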
Supported Versions
| Component | Version Requirements |
| --- | --- |
| Kubernetes | 1.23 or higher |
| Linux Kernel | 5.15 or higher |
| Operating Systems | Ubuntu, RHEL 8.8 |
| LVM Version | LVM 2 |
| ZFS Version | ZFS 0.8 |
System Configuration
- Ensure the network ports 8420, 10124, 10199, 50052, and 50053 are not in use.
- Firewall settings must allow node connectivity.
- A minimum of three nodes is required.
- For synchronous replication, the node count should match or exceed the desired replication factor.
- Only NVMe-oF TCP is supported for volume export/mounting.
- Worker nodes must have the NVMe-oF TCP initiator software installed and configured.
- Outbound Network Access - Port 443 (HTTPS) must be open for outbound traffic to allow:
- Communication with S3-compatible object storage (for backups, if the DataCore Puls8 backup feature is enabled).
- Access to the call-home endpoint (for telemetry, if call-home feature is enabled).
- Configure Huge Page support. The IO engine requires at least 2GiB of Huge Pages (2MiB size) to run. Perform the following steps to verify, configure, and persist Huge Page support on your nodes:
  - Check Huge Page count.
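    The counters can be read from /proc/meminfo, the standard kernel interface for this information:
    Check Huge Page Count:
    grep Huge /proc/meminfo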
    Sample Output
    AnonHugePages:         0 kB
    ShmemHugePages:        0 kB
    FileHugePages:         0 kB
    HugePages_Total:    1024       # total number of huge pages
    HugePages_Free:      559
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB    # huge page size
    Hugetlb:         2097152 kB
    Storage nodes, where IO engine pods are deployed, must support and enable 2MiB-sized Huge Pages. Each node must allocate a minimum of 1024 such pages (equivalent to 2GiB in total) exclusively for the IO engine pod.
  - Adjust Huge Page count.
    Adjust Huge Page Count:
    echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
If fewer than 1024 pages are available, the page count must be adjusted on the worker node as needed. This adjustment should consider other workloads running on the same node that may also require Huge Pages.
  - Persist Huge Page settings across reboots.
If you modify a node's Huge Page configuration, you must either restart the kubelet service or reboot the node. The deployment of DataCore Puls8 may fail if the kubelet instance on the node does not report a Huge Page count that meets the minimum requirements.
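    One common way to persist the setting, assuming kernel parameters on the node are managed through /etc/sysctl.d (the file name below is only an example):
    Persist Huge Page Count:
    echo "vm.nr_hugepages = 1024" | sudo tee /etc/sysctl.d/90-puls8-hugepages.conf
    sudo sysctl -p /etc/sysctl.d/90-puls8-hugepages.conf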
- Label worker nodes for storage. If csi.node.topology.nodeSelector is set to true, you must label the worker nodes according to csi.node.topology.segments. Both the csi-node and agent-ha-node DaemonSets use these topology segments as node selectors.
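  A sketch of labeling a node to match topology segments; the key/value pair below is only an example and must mirror the segments defined under csi.node.topology.segments in your configuration:
  kubectl label node <node-name> topology.example.com/zone=zone-a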
Storage Configuration
- Configure the directory on your Kubernetes nodes where storage volumes will be created. Default: /var/puls8/local.
- Configuration options:
  - Root Disk: A directory on the root (OS) disk, such as /var/puls8/local.
  - Bare-Metal Kubernetes Nodes: A mounted directory using an additional drive or SSD (/mnt/puls8-local).
  - Cloud or Virtual Instances: A mounted directory using an external cloud volume or virtual disk (/mnt/disk/ssd1).
- Ensure the following container images are available in air-gapped environments:
  - openebs/provisioner-localpv
  - openebs/linux-utils
- Rancher RKE Cluster: Configure the kubelet service with extra_binds for storage paths, as shown in the sketch below.
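  A sketch of the relevant cluster.yml fragment for Rancher RKE, assuming the default /var/puls8/local path; adjust the bind path to match your storage directory:
  services:
    kubelet:
      extra_binds:
        - /var/puls8/local:/var/puls8/local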
Kernel Module Requirements
- Configure native NVMe multipathing for High Availability (HA).
  Native NVMe multipathing must be enabled for high availability in environments using NVMe-oF. Check the nvme_core multipath kernel parameter to confirm the current state: if the output is N, native NVMe multipathing is disabled; if the output is Y, it is enabled. Follow the steps below based on your Linux distribution:
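  The parameter can be read from sysfs, the standard location exposed by the nvme_core module:
  Check Native NVMe Multipathing:
  cat /sys/module/nvme_core/parameters/multipath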
  - RHEL-Based Systems
    - Enable via kernel parameter.
      Enable multipath in kernel (RHEL):
      sudo grubby --update-kernel=ALL --args="nvme_core.multipath=Y"
      For IBM Z architecture only, also run: sudo zipl
    - Reboot and verify after reboot.
    - Alternatively, enable via modprobe configuration.
      Modprobe Configuration:
      sudo cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).bak.$(date +%m-%d-%H%M%S).img
      echo "options nvme_core multipath=Y" | sudo tee /etc/modprobe.d/nvme_core.conf
      sudo dracut --force --verbose
      sudo reboot
  - Debian-Based Systems (Ubuntu, Debian)
    - Edit GRUB configuration. Append nvme_core.multipath=Y to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub.
    - Apply GRUB changes.
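      A sketch of this step on Debian/Ubuntu (the GRUB_CMDLINE_LINUX_DEFAULT value is an example; keep any options already present in /etc/default/grub):
      GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nvme_core.multipath=Y"
      sudo update-grub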
    - Reboot and verify after reboot.
    - Alternatively, use modprobe directly.
      Modprobe Configuration:
      echo "options nvme_core multipath=Y" | sudo tee /etc/modprobe.d/nvme_core.conf
      sudo update-initramfs -u
      sudo reboot
  - SUSE Linux
    - Edit GRUB configuration. Append nvme_core.multipath=Y to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub.
    - Apply GRUB changes.
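      On SUSE, the GRUB configuration is typically regenerated with grub2-mkconfig (shown here as a sketch):
      sudo grub2-mkconfig -o /boot/grub2/grub.cfg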
    - Reboot and verify after reboot. If the output of the multipath check above is Y, native NVMe multipathing is successfully enabled.
- The following modules are required by specific storage engines and must be verified and loaded accordingly:
  - For Local PV LVM:
    - dm_thin_pool
      Load dm_thin_pool Kernel Module:
      sudo modprobe dm_thin_pool
      echo "dm_thin_pool" | sudo tee -a /etc/modules-load.d/puls8-modules.conf
  - For Local PV ZFS:
    - zfs
      Load ZFS Kernel Module:
      sudo modprobe zfs
      echo "zfs" | sudo tee -a /etc/modules-load.d/puls8-modules.conf
      The ZFS kernel module is included by default in Ubuntu, but it may not be available by default in other Linux distributions. Ensure that the module is installed and loaded before proceeding with ZFS pool creation.
  - For Replicated Storage (Replicated PV Mayastor):
    - nvme_tcp
      Load nvme_tcp Module:
      sudo modprobe nvme_tcp
      echo "nvme_tcp" | sudo tee -a /etc/modules-load.d/puls8-modules.conf
    - nvme_rdma
      Load nvme_rdma Module:
      sudo modprobe nvme_rdma
      echo "nvme_rdma" | sudo tee -a /etc/modules-load.d/puls8-modules.conf
  - Verify the configuration. Check that /etc/modules-load.d/puls8-modules.conf contains the correct module names.
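    The persisted module list can be inspected directly (the file path matches the commands above):
    cat /etc/modules-load.d/puls8-modules.conf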
  - Verify modules are loaded (optional).
    Check if Kernel Modules are Loaded:
    ls /sys/module/nvme_tcp      # Required by Replicated PV Mayastor - NVMe-oF TCP
    ls /sys/module/dm_thin_pool  # Required by Local PV LVM
    ls /sys/module/zfs           # Required by Local PV ZFS
    ls /sys/module/nvme_rdma     # Required by Replicated PV Mayastor - NVMe-oF RDMA
    If a module's directory appears under /sys/module, that module is currently loaded; the entries in /etc/modules-load.d/puls8-modules.conf ensure it is loaded again after a reboot.
Volume Management
- All nodes must have lvm2 utilities installed.
- Create a Volume Group for LVM storage:
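  A minimal sketch, assuming /dev/sdb is an unused disk and lvmvg is the Volume Group name you plan to reference later; adjust both to your environment:
  sudo pvcreate /dev/sdb
  sudo vgcreate lvmvg /dev/sdb
  sudo vgs lvmvg   # confirm the Volume Group exists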
- Install ZFS utilities on each node:
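  A sketch for Ubuntu nodes (package names differ on other distributions):
  sudo apt-get update
  sudo apt-get install -y zfsutils-linux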
- Create a ZFS Pool:
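  A sketch matching the sample output below: a pool named zfspv-pool backed by the sdb disk. Adjust the disk to your environment:
  sudo zpool create zfspv-pool /dev/sdb
  sudo zpool status zfspv-pool   # should report the pool as ONLINE, as in the sample below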
- Verify the ZFS Pool:
  Sample Output
    pool: zfspv-pool
   state: ONLINE
    scan: none requested
  config:
          NAME          STATE     READ WRITE CKSUM
          zfspv-pool    ONLINE       0     0     0
            sdb         ONLINE       0     0     0
  errors: No known data errors
- Ensure the pool state is ONLINE with no data errors.
Other Installation Considerations
- Ensure admin access to your Kubernetes cluster.
- Configure appropriate bind mounts based on your Kubernetes platform (Example: Rancher, MicroK8s).
- Determine which storage devices will be used by DataCore Puls8, including LVM Volume Groups or ZFS Pools.