Installing DataCore Puls8 on Talos

Overview

This document provides detailed instructions for installing DataCore Puls8 on a Kubernetes cluster running on Talos. Talos is an immutable, API-driven operating system designed for Kubernetes. Since Talos does not provide traditional shell access, specific system extensions and configuration patches are required to enable storage engines such as Replicated PV Mayastor, Local PV LVM, and Local PV ZFS.

This document covers:

  • Required Talos system extensions
  • Kernel module and HugePages configuration
  • NVMe and multipath verification
  • Puls8 installation using Helm
  • DiskPool creation for Replicated PV Mayastor
  • Local PV LVM and Local PV ZFS configuration
  • Talos upgrade considerations

Requirements

Ensure the following requirements are met before installing DataCore Puls8 on Talos:

  • A running Talos Kubernetes cluster
  • talosctl, kubectl, and helm installed
  • Administrative access to Talos nodes
  • Available block devices for storage configuration

Talos System Preparation

Install Required System Extensions

Talos requires specific system extensions for the Puls8 storage engines.

Generate Talos boot assets using the Talos Factory UI with the following customization:

Enable Required Talos System Extensions for Puls8 Storage
customization:
  systemExtensions:
    officialExtensions:
      - siderolabs/btrfs
      - siderolabs/nvme-cli
      - siderolabs/util-linux-tools
      - siderolabs/zfs

Using the resulting installer image ensures these extensions are included during installation and upgrades.
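
For example, the schematic ID generated by the Image Factory is referenced in the machine configuration's install image. A minimal sketch, with the schematic ID and Talos version as placeholders:

Sample Install Image Reference
machine:
  install:
    image: factory.talos.dev/installer/<schematic-id>:<talos-version>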

Verify Installed Extensions

Verify that the extensions are installed:

Verify Required Talos Extensions are Installed
talosctl --talosconfig talosconfig -n <node-ip> get extensions
Sample Command
talosctl --talosconfig talosconfig -n 10.200.35.35 get extensions
Sample Output
NODE           NAMESPACE   TYPE              ID            VERSION   NAME               VERSION
10.200.35.35   runtime     ExtensionStatus   0             1         btrfs              v1.11.2
10.200.35.35   runtime     ExtensionStatus   1             1         nvme-cli           v2.14
10.200.35.35   runtime     ExtensionStatus   2             1         util-linux-tools   2.41.1
10.200.35.35   runtime     ExtensionStatus   3             1         zfs                2.3.3-v1.11.2
10.200.35.35   runtime     ExtensionStatus   4             1         schematic          482fd6b67b34093c6b2a28d15b7a96c64f5cbebce52f59f9946aeb961518e13d
10.200.35.35   runtime     ExtensionStatus   modules.dep   1         modules.dep        6.12.48-talos

Verify NVMe TCP Module

The nvme_tcp kernel module is built into Talos. Verify that it is available:

Verify NVMe Kernel Modules are Available
talosctl --talosconfig ./talosconfig -n <node-ip> list /sys/module/ | grep -i nvme
Sample Output
10.200.35.36   nvme_core
10.200.35.36   nvme_tcp

Configure HugePages and Required Kernel Modules

Create a machine configuration patch file (example: wp.yaml):

Sample Patch (wp.yaml)
machine:
  kernel:
    modules:
      - name: btrfs
      - name: zfs
  sysctls:
    vm.nr_hugepages: "1024"
  nodeLabels:
    openebs.io/engine: "mayastor"
  kubelet:
    extraMounts:
      - destination: /var/local
        type: bind
        source: /var/local
        options:
          - bind
          - rshared
          - rw
      - destination: /var/openebs
        type: bind
        source: /var/openebs
        options:
          - bind
          - rshared
          - rw

Apply the configuration:

Patch Talos Worker Node Configuration for HugePages and Storage Requirements
talosctl patch --mode=no-reboot machineconfig -n <worker-node-ip> --patch @wp.yaml --talosconfig <path-to-talosconfig>
Sample Command
talosctl patch --mode=no-reboot machineconfig -n 10.200.35.35 --patch @wp.yaml --talosconfig ./talosconfig
Sample Output
patched MachineConfigs.config.talos.dev/v1alpha1 at the node 10.200.35.35
Applied configuration without a reboot

Verify HugePages Configuration

Each node should reserve at least 1024 2 MiB HugePages (2 GiB total) exclusively for the IO Engine.

Verify HugePages Allocation
talosctl --talosconfig <path-to-talosconfig> -n <worker-node-ip> read /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
Sample Command
talosctl --talosconfig ./talosconfig -n 10.200.35.36 read /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
Sample Output
1024
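
You can also confirm that the kubelet advertises the reserved HugePages to Kubernetes:

Verify HugePages Advertised to Kubernetes
kubectl get node <node-name> -o jsonpath='{.status.allocatable.hugepages-2Mi}'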

Label Worker Nodes

Label Worker Node for Replicated PV Mayastor Scheduling
kubectl label node <node-name> openebs.io/engine=mayastor
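
Optionally confirm that the label has been applied:

Verify Node Label
kubectl get nodes -l openebs.io/engine=mayastor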

Verify NVMe Multipath Configuration

Verify that NVMe native multipath support is enabled.

Verify NVMe Native Multipath Support
talosctl --talosconfig <path-to-talosconfig> -n <worker-node-ip> read /sys/module/nvme_core/parameters/multipath
Sample Command
talosctl --talosconfig ./talosconfig -n 10.200.35.35 read /sys/module/nvme_core/parameters/multipath
Sample Output
Y

If CPU core isolation is required, refer to the Performance Optimization documentation for instructions on configuring the necessary kernel parameters, and to the Talos documentation for more information.

Installing DataCore Puls8 on Talos

Once the system is configured, proceed with installing DataCore Puls8 on Talos.

  1. Create a Namespace.

    Create a Dedicated Namespace for DataCore Puls8
    kubectl create ns puls8
  2. Label namespace for privileged admission.

    Enable Privileged Pod Admission for Puls8 Namespace
    kubectl label ns puls8 \
      pod-security.kubernetes.io/enforce=privileged \
      pod-security.kubernetes.io/enforce-version=latest \
      pod-security.kubernetes.io/audit=privileged \
      pod-security.kubernetes.io/audit-version=latest \
      pod-security.kubernetes.io/warn=privileged \
      pod-security.kubernetes.io/warn-version=latest --overwrite
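
    To confirm the labels were applied:

    Verify Namespace Labels
    kubectl get ns puls8 --show-labels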
  3. Install DataCore Puls8 via Helm.

    Install Puls8 on Talos with Required etcd and ZFS Configuration
    helm install puls8 -n puls8 --create-namespace \
      oci://docker.io/datacoresoftware/puls8 \
      --version <puls8-chart-version> \
      --set openebs.zfs-localpv.zfsNode.encrKeysDir=/var/local/puls8/zfs/keys \
      --set openebs.mayastor.etcd.image.repository=openebs/etcd
    • Because /home/keys is read-only in Talos, the ZFS encryption key directory is changed to /var/local/puls8/zfs/keys.
    • The etcd image repository override is required for this Puls8 version.
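
    To confirm that the Helm release deployed:

    Verify Helm Release
    helm ls -n puls8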
  4. Verify Pods.

    Verify Puls8 Pods are Running Successfully
    kubectl get pods -n puls8
    Sample Output
    NAME                                                         READY   STATUS    RESTARTS   AGE
    alertmanager-puls8-kube-prometheus-stac-alertmanager-0       2/2     Running   0          26m
    dcs-puls8-down-pro-diskpoolclaim-operator-5b6c5d8cbb-8kbhz   1/1     Running   0          31m
    dcs-puls8-license-agent-c5976869f-fcvgt                      1/1     Running   0          31m
    prometheus-puls8-kube-prometheus-stac-prometheus-0           2/2     Running   0          26m
    puls8-agent-core-74976b7549-stqgd                            2/2     Running   0          31m
    puls8-agent-ha-node-pw6cf                                    1/1     Running   0          31m
    puls8-agent-ha-node-qpcj2                                    1/1     Running   0          31m
    puls8-agent-ha-node-v4rn4                                    1/1     Running   0          31m
    puls8-alloy-22z9h                                            2/2     Running   0          31m
    puls8-alloy-m2v9f                                            2/2     Running   0          31m
    puls8-alloy-r76vd                                            2/2     Running   0          31m
    puls8-api-rest-6589fcd7fb-jbbb4                              1/1     Running   0          31m
    puls8-csi-controller-76c95df644-w7hs7                        6/6     Running   0          31m
    puls8-csi-node-2qtm7                                         2/2     Running   0          31m
    puls8-csi-node-5c9nj                                         2/2     Running   0          31m
    puls8-csi-node-8dp7n                                         2/2     Running   0          31m
    puls8-etcd-0                                                 1/1     Running   0          31m
    puls8-etcd-1                                                 1/1     Running   0          31m
    puls8-etcd-2                                                 1/1     Running   0          31m
    puls8-grafana-649c9c6978-grpwv                               3/3     Running   0          31m
    puls8-io-engine-lw6lw                                        2/2     Running   0          31m
    puls8-io-engine-qfnzw                                        2/2     Running   0          31m
    puls8-io-engine-rdzx4                                        2/2     Running   0          31m
    puls8-kube-prometheus-stac-operator-7bf69c5b6c-zdw77         1/1     Running   0          31m
    puls8-kube-state-metrics-8c97b96df-6gm95                     1/1     Running   0          31m
    puls8-localpv-provisioner-98d6796cc-x4lbz                    1/1     Running   0          31m
    puls8-loki-0                                                 2/2     Running   0          31m
    puls8-loki-1                                                 2/2     Running   0          31m
    puls8-loki-2                                                 2/2     Running   0          31m
    puls8-lvm-localpv-controller-6b59b87648-2jw2c                5/5     Running   0          31m
    puls8-lvm-localpv-node-7tb92                                 2/2     Running   0          31m
    puls8-lvm-localpv-node-m7nlt                                 2/2     Running   0          31m
    puls8-lvm-localpv-node-r2xg4                                 2/2     Running   0          31m
    puls8-minio-0                                                1/1     Running   0          31m
    puls8-minio-1                                                1/1     Running   0          31m
    puls8-minio-2                                                1/1     Running   0          31m
    puls8-nats-0                                                 3/3     Running   0          31m
    puls8-nats-1                                                 3/3     Running   0          31m
    puls8-nats-2                                                 3/3     Running   0          31m
    puls8-obs-callhome-6d57669d54-rc5nw                          2/2     Running   0          31m
    puls8-operator-diskpool-74ccdb865-psdpv                      1/1     Running   0          31m
    puls8-prometheus-node-exporter-7kpcj                         1/1     Running   0          31m
    puls8-prometheus-node-exporter-dsx6s                         1/1     Running   0          31m
    puls8-prometheus-node-exporter-dx9qh                         1/1     Running   0          31m
    puls8-prometheus-node-exporter-qphlm                         1/1     Running   0          31m
    puls8-prometheus-node-exporter-tp4dr                         1/1     Running   0          31m
    puls8-prometheus-node-exporter-vpzvj                         1/1     Running   0          31m
    puls8-zfs-localpv-controller-965c45fff-sqxml                 5/5     Running   0          31m
    puls8-zfs-localpv-node-4gk7l                                 2/2     Running   0          31m
    puls8-zfs-localpv-node-4jn79                                 2/2     Running   0          31m
    puls8-zfs-localpv-node-z9q89                                 2/2     Running   0          31m

    All components should show Running status.
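
    To wait for all pods to become ready before proceeding:

    Wait for Puls8 Pods to Become Ready
    kubectl wait --for=condition=Ready pods --all -n puls8 --timeout=600s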

Replicated Storage Configuration

  1. Discover block devices.

    List Available Block Devices for DiskPool Creation
    kubectl puls8 mayastor get block-devices -n puls8 <NodeName>
    Sample Command
    kubectl puls8 mayastor get block-devices -n puls8 talos-5mz-5dl
    Sample Output
    DEVNAME    DEVTYPE    SIZE    AVAILABLE  MODEL         DEVPATH                                                                                 MAJOR  MINOR  DEVLINKS 
    /dev/sdb   disk       10 GiB  yes        Virtual_disk  /devices/pci0000:00/0000:00:15.0/0000:03:00.0/host2/target2:0:1/2:0:1:0/block/sdb       8      16     "/dev/disk/by-id/scsi-36000c29fa20cfba95e070bfe2a39e6ed", "/dev/disk/by-id/wwn-0x6000c29fa20cfba95e070bfe2a39e6ed", "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0", "/dev/disk/by-diskseq/18"  talos-5mz-5dl
  2. Create a DiskPool.

    Create Replicated PV Mayastor DiskPool
    cat <<EOF | kubectl create -f -
    apiVersion: openebs.io/v1beta3
    kind: DiskPool
    metadata:
      name: pool1
      namespace: puls8
    spec:
      node: <worker-node>
      disks: ["aio:///dev/disk/by-id/<id>"]
    EOF

    Alternatively, save the manifest as diskpool.yaml and apply it with kubectl apply -f diskpool.yaml.

  3. Verify DiskPool.

    Verify DiskPool Status
    kubectl puls8 mayastor get pools -n puls8
    Sample Output
    ID     DISKS                                                                                                   MANAGED  NODE           STATUS  CAPACITY  ALLOCATED  AVAILABLE  COMMITTED  ENCRYPTED 
    pool1  aio:///dev/disk/by-id/scsi-36000c29fa20cfba95e070bfe2a39e6ed?uuid=2ba50b9a-2489-4766-b37c-915471769955  true     talos-5mz-5dl  Online  10 GiB    0 B        10 GiB     0 B        false 
  4. Create a StorageClass.

    Create Replicated PV Mayastor StorageClass
    cat <<EOF | kubectl apply -f -
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: mayastor-1
    provisioner: io.openebs.csi-mayastor
    parameters:
      protocol: nvmf
      repl: "1"
    EOF

    Refer to the StorageClass Parameters documentation for more information.
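
    To confirm the StorageClass was created:

    Verify StorageClass
    kubectl get sc mayastor-1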

  5. Create a PersistentVolumeClaim (PVC).

    Create PVC using Replicated PV Mayastor StorageClass
    cat <<EOF | kubectl create -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ms-volume-claim
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: mayastor-1
    EOF
    Verify the PVC with kubectl get pvc ms-volume-claim:

    Sample Output
    NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    ms-volume-claim   Bound    pvc-f67eb1f9-0503-40a0-a28d-b6b328eb0761   1Gi        RWO            mayastor-1     <unset>                 5s
  6. Deploy a test pod.

    fio.yaml
    kind: Pod
    apiVersion: v1
    metadata:
      name: fio
    spec:
      nodeSelector:
        openebs.io/engine: mayastor
      volumes:
        - name: ms-volume
          persistentVolumeClaim:
            claimName: ms-volume-claim
      containers:
        - name: fio
          image: nixery.dev/shell/fio
          args:
            - sleep
            - "1000000"
          volumeMounts:
            - mountPath: "/volume"
              name: ms-volume
    Deploy Test Pod to Validate Volume Provisioning
    kubectl apply -f fio.yaml
  7. Verify the pod status.

    Verify Pod Status
    kubectl get pods
    Sample Output
    NAME   READY   STATUS    RESTARTS   AGE
    fio    1/1     Running   0          2m18s
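
    With the pod running, an illustrative fio job can exercise the volume (the job parameters below are examples, not tuned recommendations):

    Run a Sample fio Job Against the Volume
    kubectl exec -it fio -- fio --name=benchtest --size=500m --filename=/volume/test \
      --direct=1 --rw=randrw --ioengine=libaio --bs=4k --iodepth=16 --numjobs=1 \
      --time_based --runtime=60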

Local Storage Configuration

Local PV LVM

  1. Create a Volume Group.

    Talos does not provide shell access, so a privileged container with LVM utilities must be used. The following steps use the Local PV LVM node plugin container.

    Access LVM Node Plugin Container
    kubectl exec -it <lvm-node-pod> -n puls8 -c openebs-lvm-plugin -- /bin/bash
    Sample Command
    kubectl exec -it puls8-lvm-localpv-node-p8qv5 -n puls8 -c openebs-lvm-plugin -- /bin/bash
    Create Physical Volume
    pvcreate /dev/sdb
    Sample Output
    Physical volume "/dev/sdb" successfully created.
    Create Volume Group
    vgcreate storage-vg /dev/sdb
    Sample Output
    Volume group "storage-vg" successfully created
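
    To confirm the volume group from the same container:

    Verify Volume Group
    vgs storage-vg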
  2. Create a StorageClass.

    Create Local PV LVM StorageClass
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: puls8-lvm
    provisioner: local.csi.openebs.io
    parameters:
      storage: "lvm"
      volgroup: "storage-vg"
    reclaimPolicy: Delete
    allowedTopologies:
      - matchLabelExpressions:
          - key: openebs.io/nodename
            values:
              - <your-node-name>

    Refer to the StorageClass Parameters documentation for more information.

  3. Create a PVC.

    Create a Local PV LVM PVC
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: csi-lvmpv
    spec:
      storageClassName: puls8-lvm
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 4Gi

    Save the manifest (for example, as lvm-pvc.yaml) and apply it with kubectl apply -f lvm-pvc.yaml.

  4. Verify that the PVC is bound.

    Verify PVC
    kubectl get pvc csi-lvmpv
    Sample Output
    NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    csi-lvmpv   Bound    pvc-a169a946-279f-4377-a58f-ee77611fe8de   4Gi        RWO            puls8-lvm      <unset>                 2m36s

Local PV ZFS

  1. Deploy Local PV ZFS Helper Pod.

    Talos has no direct shell access. Use a privileged helper pod with ZFS tools.

    Sample Pod Manifest
    apiVersion: v1
    kind: Pod
    metadata:
      name: zfs-tools
      namespace: puls8
    spec:
      nodeSelector:
        kubernetes.io/hostname: <your-node>
      hostNetwork: true
      hostPID: true
      restartPolicy: Never
      containers:
        - name: zfs-tools
          image: jasl8r/zfs-tools:4.9.80-rancher
          args: ["sleep", "1000000"]
          securityContext:
            privileged: true

    Apply the manifest with kubectl apply -f zfs-tools.yaml, then enter the container with kubectl exec -it zfs-tools -n puls8 -- bash.

  2. Create GPT Partition.

    Create GPT Partition for Local PV ZFS Pool
    parted -s /dev/sdb mklabel gpt mkpart primary 0% 100%

    The partition type must be Solaris /usr & Apple ZFS (type 47 in the fdisk GPT type listing); see the sketch below for one way to set it.
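
    One way to set the type, assuming sgdisk (from the gdisk package) is available in the helper pod, is with its BF01 type code for Solaris /usr & Apple ZFS:

    Set Partition Type (assumes sgdisk is available)
    sgdisk -t 1:BF01 /dev/sdb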

  3. Create ZPool on the GPT partition.

    Create ZPool
    zpool create -f myzpool /dev/<partition>
    Verify the Pool
    zpool list
    Sample Output
    NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
    myzpool  9.50G   434K  9.50G         -     0%     0%  1.00x  ONLINE  -
  4. Create a StorageClass.

    Create Local PV ZFS StorageClass
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: puls8-zfspv
    parameters:
      recordsize: "128k"
      compression: "off"
      dedup: "off"
      fstype: "zfs"
      poolname: "myzpool"
    provisioner: zfs.csi.openebs.io
    allowedTopologies:
      - matchLabelExpressions:
          - key: openebs.io/nodename
            values:
              - <your-node-name>

    Refer to the StorageClass Parameters documentation for more information.

  5. Create a PVC.

    Create Local PV ZFS PVC
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: csi-zfspv
    spec:
      storageClassName: puls8-zfspv
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 4Gi

    Save the manifest as zfspvc.yaml and apply it with kubectl apply -f zfspvc.yaml.

  6. Verify that the PVC is bound.

    Verify PVC
    kubectl get pvc
    Sample Output
    NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    csi-zfspv   Bound    pvc-85891d6d-48e6-4730-8434-abb409d53fd1   4Gi        RWO            puls8-zfspv      <unset>                 23h

Talos Upgrade Considerations

Talos upgrades must be performed carefully to avoid unintended data loss or disruption to the Kubernetes cluster. The upgrade behavior differs depending on the Talos version in use.

Follow the appropriate procedure based on the Talos version running in your environment.

Talos Version 1.7 and Earlier

In Talos version 1.7 and earlier, upgrades require the --preserve flag to retain node configuration and data. If this flag is not used, Talos resets the node configuration and may remove existing etcd data and other stored state.

To ensure that cluster data remains intact during the upgrade, perform the upgrade using the --preserve flag.

  1. Upgrade the Node.

    Upgrade Talos While Preserving Node Configuration and Data
    talosctl -n <node-ip> upgrade --preserve --image $IMAGE_URL
  2. Verify the Node Version.

    Verify that the Node is Running the Upgraded Talos Version
    talosctl -n <node-ip> version
  3. Repeat this process for all nodes in the cluster.

Talos Version 1.8 and Later

Starting with Talos version 1.8, the upgrade process no longer wipes the system disk. The installer automatically preserves node configuration and data during upgrades.

As a result, the --preserve flag is no longer required when running the talosctl upgrade command. Refer to the release documentation for more information.
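
For example, the upgrade command on Talos 1.8 and later simply omits the --preserve flag:

Upgrade a Node Without the --preserve Flag
talosctl -n <node-ip> upgrade --image $IMAGE_URL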

Benefits of Using DataCore Puls8 with Talos

  • Immutable and Secure OS (Talos): Talos is an API-managed, minimal, and immutable Linux distribution designed for Kubernetes. Its secure design enhances the reliability and predictability of Replicated PV Mayastor deployments by minimizing OS-level drift and vulnerabilities.
  • Consistent and Declarative Configuration: Talos supports fully declarative and auditable configuration management. This aligns well with Replicated PV Mayastor’s infrastructure-as-code approach, making it easy to version, replicate, and automate complex storage configurations.
  • Performance-Optimized Storage: DataCore Puls8 is built for speed. It leverages SPDK and NVMe for low-latency, high-throughput block storage. Combined with Talos's lightweight, optimized OS, this pairing is ideal for performance-critical workloads.
