DataCore Puls8 Prerequisites

Overview

This document outlines the prerequisites for installing DataCore Puls8 in a Kubernetes environment. It describes the storage, system, and kernel requirements needed for compatibility and optimal performance. Before installation, ensure that your cluster meets these requirements so that DataCore Puls8 storage runs effectively.

General Requirements

  • Linux kernel version 5.15 or higher
  • Required kernel modules:
    • nvme_tcp
    • ext4 (optional: xfs)
  • x86-64 CPU cores with SSE4.2 instruction support
  • Helm version 3.7 or higher
  • HugePage Support: Minimum 2GiB of 2MiB-sized pages
  • Dedicated resources for each io-engine pod:
    • 2 CPU cores
    • 1GiB RAM
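
The commands below offer a quick, informal way to spot-check several of these requirements on a node; they are a convenience sketch, not part of the installation itself, and may need adjustment for your distribution.

  Check General Requirements
  uname -r                                # Kernel must be 5.15 or higher
  grep -m1 -o sse4_2 /proc/cpuinfo        # Prints "sse4_2" if the CPU supports SSE4.2
  helm version --short                    # Helm must be 3.7 or higher
  grep HugePages_Total /proc/meminfo      # At least 1024 pages of 2MiB (2GiB total)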

Supported Versions

Component            Version Requirements
Kubernetes           1.23 or higher
Linux Kernel         5.15 or higher
Operating Systems    Ubuntu, RHEL 8.8
LVM Version          LVM 2
ZFS Version          ZFS 0.8

System Configuration

  • Ensure the network ports 8420, 10124, 10199, 50052, and 50053 are not in use (a verification sketch follows this list).
  • Firewall settings must allow node connectivity.
  • A minimum of three nodes is required.
  • For synchronous replication, the node count should match or exceed the desired replication factor.
  • Only NVMe-oF TCP is supported for volume export/mounting.
  • Worker nodes must have the NVMe-oF TCP initiator software installed and configured.
  • Outbound Network Access - Port 443 (HTTPS) must be open for outbound traffic to allow:
    • Communication with S3-compatible object storage (for backups, if the DataCore Puls8 backup feature is enabled).
    • Access to the call-home endpoint (for telemetry, if call-home feature is enabled).
  • Configure Huge Page Support:

    The IO engine requires at least 2GiB of HugePages (2MiB size) to run. Perform the following steps to verify, configure, and persist Huge Page support on your nodes:

    • Check Huge Page count.

      Validate Huge Page Support
      grep Huge /proc/meminfo
      Sample Output
      AnonHugePages:         0 kB
      ShmemHugePages:        0 kB
      FileHugePages:         0 kB
      HugePages_Total:    1024     # total no of hugepages
      HugePages_Free:      559
      HugePages_Rsvd:        0
      HugePages_Surp:        0
      Hugepagesize:       2048 kB  # huge page size
      Hugetlb:         2097152 kB

      Storage nodes, where IO engine pods are deployed, must support and enable 2MiB-sized Huge Pages. Each node must allocate a minimum of 1024 such pages (equivalent to 2GiB in total) exclusively for the IO engine pod.

    • Adjust Huge Page count.

      Adjust Huge Page Count
      echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

      If fewer than 1024 pages are available, the page count must be adjusted on the worker node as needed. This adjustment should consider other workloads running on the same node that may also require Huge Pages.

    • Persist Huge Page settings across reboots.

      Persist Huge Page Settings
      echo vm.nr_hugepages = 1024 | sudo tee -a /etc/sysctl.conf
      If you modify a node's Huge Page configuration, you must either restart the kubelet service or reboot the node. The deployment of DataCore Puls8 may fail if the kubelet instance on the node does not report a Huge Page count that meets the minimum requirements.
  • Label worker nodes for storage.
    Label Nodes
    kubectl label node <node_name> openebs.io/engine=mayastor
    If csi.node.topology.nodeSelector is set to true, you must label the worker nodes according to csi.node.topology.segments. Both the csi-node and agent-ha-node DaemonSets will use these topology segments as node selectors.
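
As a quick sanity check for the network port, NVMe-oF initiator, and Huge Page items above, the sketch below can be run against each worker node; <node_name> is a placeholder, and the commands only inspect state.

  Verify System Configuration
  sudo ss -tlnp | grep -E ':(8420|10124|10199|50052|50053)\b' || echo "Required ports are free"
  nvme version                            # Confirms the nvme-cli userspace tooling is installed
  kubectl get node <node_name> -o jsonpath='{.status.allocatable.hugepages-2Mi}'    # Expect 2Gi or more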

Storage Configuration

  • Configure the directory on your Kubernetes nodes where storage volumes will be created. Default: /var/puls8/local.
  • Configuration options:
    • Root Disk: A directory on the root (OS) disk, such as /var/puls8/local.
    • Bare-Metal Kubernetes Nodes: A mounted directory using an additional drive or SSD (/mnt/puls8-local).
    • Cloud or Virtual Instances: A mounted directory using an external cloud volume or virtual disk (/mnt/disk/ssd1).
  • Ensure the following container images are available in air-gapped environments:
    • openebs/provisioner-localpv
    • openebs/linux-utils
  • Rancher RKE Cluster: Configure kubelet service with extra_binds for storage paths.
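
For the bare-metal option above, the following is a minimal sketch of preparing a mounted directory on an additional drive. The device name /dev/sdc is an example only; formatting erases its contents, so substitute the correct disk for your node.

  Prepare a Mounted Directory for Local Storage
  sudo mkfs.ext4 /dev/sdc                       # Format the additional drive (example device name)
  sudo mkdir -p /mnt/puls8-local                # Create the mount point
  sudo mount /dev/sdc /mnt/puls8-local          # Mount the drive
  echo '/dev/sdc /mnt/puls8-local ext4 defaults 0 0' | sudo tee -a /etc/fstab    # Persist the mount across reboots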

Kernel Module Requirements

  • Configure native NVMe multipathing for High Availability (HA):

    Check NVMe Multipath for HA
    cat /sys/module/nvme_core/parameters/multipath

    If the output is N, native NVMe multipathing is disabled. If the output is Y, it is enabled.

    Native NVMe multipathing must be enabled for high availability in environments using NVMe-oF. Follow the steps below based on your Linux distribution:

    • RHEL-Based Systems

      • Enable via kernel parameter.

        Enable multipath in kernel (RHEL)
        sudo grubby --update-kernel=ALL --args="nvme_core.multipath=Y"

        For IBM Z architecture only, also run sudo zipl after updating the kernel arguments.

      • Reboot and verify after reboot.

        Reboot System
        sudo reboot
        Verify Multipath
        cat /sys/module/nvme_core/parameters/multipath
      • Alternatively, enable via modprobe configuration.

        Modprobe Configuration
        sudo cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).bak.$(date +%m-%d-%H%M%S).img
        echo "options nvme_core multipath=Y" | sudo tee /etc/modprobe.d/nvme_core.conf
        sudo dracut --force --verbose
        sudo reboot
    • Debian-Based Systems (Ubuntu, Debian)

      • Edit GRUB configuration.

        Edit GRUB Config (Debian)
        sudo nano /etc/default/grub

        Append: nvme_core.multipath=Y to GRUB_CMDLINE_LINUX_DEFAULT.

      • Apply GRUB changes.

        Update GRUB
        sudo update-grub
      • Reboot and verify after reboot.

        Reboot
        sudo reboot
        Verify Multipath
        cat /sys/module/nvme_core/parameters/multipath
      • Alternatively, use modprobe directly.

        Modprobe Configuration
        echo "options nvme_core multipath=Y" | sudo tee /etc/modprobe.d/nvme_core.conf
        sudo update-initramfs -u
        sudo reboot
    • SUSE Linux

      • Edit GRUB configuration.

        Edit GRUB (SUSE)
        sudo nano /etc/default/grub

        Append: nvme_core.multipath=Y to GRUB_CMDLINE_LINUX_DEFAULT.

      • Apply GRUB changes.

        Update GRUB (SUSE)
        sudo grub2-mkconfig -o /boot/grub2/grub.cfg
      • Reboot and verify after reboot.

        Reboot
        sudo reboot
        Verify Multipath
        cat /sys/module/nvme_core/parameters/multipath

        If the output is Y, native NVMe multipathing is successfully enabled.

The following kernel modules are required by specific storage engines and must be verified and loaded accordingly:

  • For Local PV LVM:

    • dm_thin_pool

      Load dm_thin_pool Kernel Module
      sudo modprobe dm_thin_pool
      echo "dm_thin_pool" | sudo tee -a /etc/modules-load.d/puls8-modules.conf
  • For Local PV ZFS:

    • zfs

      Load ZFS Kernel Module
      sudo modprobe zfs
      echo "zfs" | sudo tee -a /etc/modules-load.d/puls8-modules.conf

      The ZFS kernel module is included by default in Ubuntu, but it may not be available by default in other Linux distributions. Ensure that the module is installed and loaded before proceeding with ZFS pool creation.

  • For Replicated Storage (Replicated PV Mayastor):

    • nvme_tcp

      Load nvme_tcp Module
      sudo modprobe nvme_tcp
      echo "nvme_tcp" | sudo tee -a /etc/modules-load.d/puls8-modules.conf
    • nvme_rdma

      Load nvme_rdma Module
      sudo modprobe nvme_rdma
      echo "nvme_rdma" | sudo tee -a /etc/modules-load.d/puls8-modules.conf
    • Verify the configuration.

      Recheck Modules After Reboot
      cat /etc/modules-load.d/puls8-modules.conf

      Check if the file contains the correct module names.

    • Verify Modules Are Loaded (Optional)

      Check if Kernel Modules are Loaded
      ls /sys/module/nvme_tcp       # Required by Replicated PV Mayastor - NVMe-oF TCP
      ls /sys/module/dm_thin_pool   # Required by Local PV LVM
      ls /sys/module/zfs            # Required by Local PV ZFS
      ls /sys/module/nvme_rdma      # Required by Replicated PV Mayastor - NVMe-oF RDMA

    If the corresponding directories exist, the modules are currently loaded; the entries in /etc/modules-load.d/puls8-modules.conf ensure they are loaded again after a reboot.
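
Optionally, you can also confirm that the systemd service which applies /etc/modules-load.d at boot completed without errors; this is an extra sanity check rather than a required step.

  Check the modules-load Service
  systemctl status systemd-modules-load.service --no-pager    # Should report active (exited) with no errors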

Volume Management

  • Ensure that the lvm2 utilities are installed on all nodes.
  • Create a Volume Group for LVM storage:
    Create Volume Group
    sudo pvcreate /dev/sdb
    sudo vgcreate storage-vg /dev/sdb
  • Install ZFS utilities on each node:
    Install ZFS Utilities
    sudo apt-get install zfsutils-linux
  • Create a ZFS Pool:
    Create a ZFS Pool
    sudo zpool create zfspv-pool /dev/sdb
  • Verify the ZFS Pool:
    Verify ZFS Pool Status
    sudo zpool status
    Sample Output
      pool: zfspv-pool
     state: ONLINE
      scan: none requested
    config:
        NAME        STATE     READ WRITE CKSUM
        zfspv-pool  ONLINE       0     0     0
          sdb       ONLINE       0     0     0

    errors: No known data errors
  • Ensure the pool state is ONLINE with no data errors.
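
To round out the LVM side of this configuration, the commands below install the lvm2 utilities on Debian-based nodes (use the equivalent dnf or zypper package on other distributions) and verify the storage-vg volume group created above.

  Install lvm2 and Verify the Volume Group
  sudo apt-get install lvm2       # Provides pvcreate, vgcreate, and related utilities
  sudo vgs storage-vg             # Lists the volume group if it was created successfully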

Other Installation Considerations

  • Ensure admin access to your Kubernetes cluster.
  • Configure appropriate bind mounts based on your Kubernetes platform (Example: Rancher, MicroK8s).
  • Determine which storage devices will be used by DataCore Puls8, including LVM Volume Groups or ZFS Pools.
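
As a quick way to confirm administrative access before installing, the check below can be run against your current kubectl context; it is a convenience sketch rather than a required step.

  Check Cluster Admin Access
  kubectl auth can-i '*' '*' --all-namespaces    # Prints "yes" when the current context has cluster-wide admin rights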
