Installing DataCore Puls8 on Talos
Explore this Page
- Overview
- Requirements
- Talos System Preparation
- Installing DataCore Puls8 on Talos
- Replicated Storage Configuration
- Local Storage Configuration
- Talos Upgrade Considerations
- Benefits of Using DataCore Puls8 with Talos
Overview
This document provides detailed instructions for installing DataCore Puls8 on a Kubernetes cluster running on Talos. Talos is an immutable, API-driven operating system designed for Kubernetes. Since Talos does not provide traditional shell access, specific system extensions and configuration patches are required to enable storage engines such as Replicated PV Mayastor, Local PV LVM, and Local PV ZFS.
This document covers:
- Required Talos system extensions
- Kernel module and HugePages configuration
- NVMe and multipath verification
- Puls8 installation using Helm
- DiskPool creation for Replicated PV Mayastor
- Local PV LVM and Local PV ZFS configuration
- Talos upgrade considerations
Requirements
Ensure the following requirements are met before installing DataCore Puls8 on Talos:
- A running Talos Kubernetes cluster
- talosctl, kubectl, and helm installed
- Administrative access to Talos nodes
- Available block devices for storage configuration
Talos System Preparation
Install Required System Extensions
Talos requires specific system extensions for the storage engines.
Generate Talos boot assets using the Talos Factory UI with the following customization:
customization:
  systemExtensions:
    officialExtensions:
      - siderolabs/btrfs
      - siderolabs/nvme-cli
      - siderolabs/util-linux-tools
      - siderolabs/zfs
The installer image ensures these extensions are included during installation or upgrades.
Verify Installed Extensions
Verify that the extensions are installed:
talosctl --talosconfig talosconfig -n <node-ip> get extensions
NODE NAMESPACE TYPE ID VERSION NAME VERSION
10.200.35.35 runtime ExtensionStatus 0 1 btrfs v1.11.2
10.200.35.35 runtime ExtensionStatus 1 1 nvme-cli v2.14
10.200.35.35 runtime ExtensionStatus 2 1 util-linux-tools 2.41.1
10.200.35.35 runtime ExtensionStatus 3 1 zfs 2.3.3-v1.11.2
10.200.35.35 runtime ExtensionStatus 4 1 schematic 482fd6b67b34093c6b2a28d15b7a96c64f5cbebce52f59f9946aeb961518e13d
10.200.35.35 runtime ExtensionStatus modules.dep 1 modules.dep 6.12.48-talos
Verify NVMe TCP Module
The nvme_tcp kernel module is built into Talos. Verify its presence:
talosctl --talosconfig ./talosconfig -n <node-ip> list /sys/module/ | grep -i nvme
Configure HugePages and Required Kernel Modules
Create a machine configuration patch file (example: wp.yaml):
machine:
  kernel:
    modules:
      - name: btrfs
      - name: zfs
  sysctls:
    vm.nr_hugepages: "1024"
  nodeLabels:
    openebs.io/engine: "mayastor"
  kubelet:
    extraMounts:
      - destination: /var/local
        type: bind
        source: /var/local
        options:
          - bind
          - rshared
          - rw
      - destination: /var/openebs
        type: bind
        source: /var/openebs
        options:
          - bind
          - rshared
          - rw
Apply the configuration:
talosctl patch --mode=no-reboot machineconfig -n <worker-node-ip> --patch @wp.yaml --talosconfig <path-to-talosconfig>
talosctl patch --mode=no-reboot machineconfig -n 10.200.35.35 --patch @wp.yaml --talosconfig ./talosconfig
patched MachineConfigs.config.talos.dev/v1alpha1 at the node 10.200.35.35
Applied configuration without a reboot
Verify HugePages Configuration
Each node should reserve at least 1024 HugePages of 2 MiB each (2 GiB total) exclusively for the IO Engine. The read command should return 1024.
talosctl --talosconfig <path-to-talosconfig> -n <worker-node-ip> read /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
talosctl --talosconfig ./talosconfig -n 10.200.35.36 read /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
Label Worker Nodes
kubectl label node <node-name> openebs.io/engine=mayastor
Verify NVMe Multipath Configuration
Verify that NVMe native multipath support is enabled. The command should return Y.
talosctl --talosconfig <path-to-talosconfig> -n <worker-node-ip> read /sys/module/nvme_core/parameters/multipath
talosctl --talosconfig ./talosconfig -n 10.200.35.35 read /sys/module/nvme_core/parameters/multipath
If CPU core isolation is required, refer to the Performance Optimization documentation, which provides instructions on configuring the necessary kernel parameters, and to the Talos documentation for details on applying them.
Installing DataCore Puls8 on Talos
Once the system is configured, proceed with installing DataCore Puls8 on Talos.
- Create a namespace.
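No command is shown for this step; a minimal sketch using the standard kubectl command (the name puls8 matches the labels and Helm release used in the following steps):

```shell
kubectl create namespace puls8
```

Note that the Helm command later in this procedure also passes --create-namespace, so the namespace is created automatically if this step is skipped.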
- Label the namespace for privileged admission.

Enable Privileged Pod Admission for the Puls8 Namespace:

kubectl label ns puls8 \
pod-security.kubernetes.io/enforce=privileged \
pod-security.kubernetes.io/enforce-version=latest \
pod-security.kubernetes.io/audit=privileged \
pod-security.kubernetes.io/audit-version=latest \
pod-security.kubernetes.io/warn=privileged \
pod-security.kubernetes.io/warn-version=latest --overwrite

- Install DataCore Puls8 via Helm.

Install Puls8 on Talos with the Required etcd and ZFS Configuration:

helm install puls8 -n puls8 --create-namespace \
oci://docker.io/datacoresoftware/puls8 \
--version <puls8-chart-version> \
--set openebs.zfs-localpv.zfsNode.encrKeysDir=/var/local/puls8/zfs/keys \
--set openebs.mayastor.etcd.image.repository=openebs/etcd

Notes:
- /home/keys is read-only in Talos, so the encryption key directory is changed.
- The etcd repository override is required for this Puls8 version.
- Verify the pods.

Verify that the Puls8 pods are running:

kubectl get pods -n puls8

NAME READY STATUS RESTARTS AGE
alertmanager-puls8-kube-prometheus-stac-alertmanager-0 2/2 Running 0 26m
dcs-puls8-down-pro-diskpoolclaim-operator-5b6c5d8cbb-8kbhz 1/1 Running 0 31m
dcs-puls8-license-agent-c5976869f-fcvgt 1/1 Running 0 31m
prometheus-puls8-kube-prometheus-stac-prometheus-0 2/2 Running 0 26m
puls8-agent-core-74976b7549-stqgd 2/2 Running 0 31m
puls8-agent-ha-node-pw6cf 1/1 Running 0 31m
puls8-agent-ha-node-qpcj2 1/1 Running 0 31m
puls8-agent-ha-node-v4rn4 1/1 Running 0 31m
puls8-alloy-22z9h 2/2 Running 0 31m
puls8-alloy-m2v9f 2/2 Running 0 31m
puls8-alloy-r76vd 2/2 Running 0 31m
puls8-api-rest-6589fcd7fb-jbbb4 1/1 Running 0 31m
puls8-csi-controller-76c95df644-w7hs7 6/6 Running 0 31m
puls8-csi-node-2qtm7 2/2 Running 0 31m
puls8-csi-node-5c9nj 2/2 Running 0 31m
puls8-csi-node-8dp7n 2/2 Running 0 31m
puls8-etcd-0 1/1 Running 0 31m
puls8-etcd-1 1/1 Running 0 31m
puls8-etcd-2 1/1 Running 0 31m
puls8-grafana-649c9c6978-grpwv 3/3 Running 0 31m
puls8-io-engine-lw6lw 2/2 Running 0 31m
puls8-io-engine-qfnzw 2/2 Running 0 31m
puls8-io-engine-rdzx4 2/2 Running 0 31m
puls8-kube-prometheus-stac-operator-7bf69c5b6c-zdw77 1/1 Running 0 31m
puls8-kube-state-metrics-8c97b96df-6gm95 1/1 Running 0 31m
puls8-localpv-provisioner-98d6796cc-x4lbz 1/1 Running 0 31m
puls8-loki-0 2/2 Running 0 31m
puls8-loki-1 2/2 Running 0 31m
puls8-loki-2 2/2 Running 0 31m
puls8-lvm-localpv-controller-6b59b87648-2jw2c 5/5 Running 0 31m
puls8-lvm-localpv-node-7tb92 2/2 Running 0 31m
puls8-lvm-localpv-node-m7nlt 2/2 Running 0 31m
puls8-lvm-localpv-node-r2xg4 2/2 Running 0 31m
puls8-minio-0 1/1 Running 0 31m
puls8-minio-1 1/1 Running 0 31m
puls8-minio-2 1/1 Running 0 31m
puls8-nats-0 3/3 Running 0 31m
puls8-nats-1 3/3 Running 0 31m
puls8-nats-2 3/3 Running 0 31m
puls8-obs-callhome-6d57669d54-rc5nw 2/2 Running 0 31m
puls8-operator-diskpool-74ccdb865-psdpv 1/1 Running 0 31m
puls8-prometheus-node-exporter-7kpcj 1/1 Running 0 31m
puls8-prometheus-node-exporter-dsx6s 1/1 Running 0 31m
puls8-prometheus-node-exporter-dx9qh 1/1 Running 0 31m
puls8-prometheus-node-exporter-qphlm 1/1 Running 0 31m
puls8-prometheus-node-exporter-tp4dr 1/1 Running 0 31m
puls8-prometheus-node-exporter-vpzvj 1/1 Running 0 31m
puls8-zfs-localpv-controller-965c45fff-sqxml 5/5 Running 0 31m
puls8-zfs-localpv-node-4gk7l 2/2 Running 0 31m
puls8-zfs-localpv-node-4jn79 2/2 Running 0 31m
puls8-zfs-localpv-node-z9q89 2/2 Running 0 31m

All components should show Running status.
Replicated Storage Configuration
- Discover block devices.

List Available Block Devices for DiskPool Creation:

kubectl puls8 mayastor get block-devices -n puls8 <NodeName>

Sample Output:

DEVNAME DEVTYPE SIZE AVAILABLE MODEL DEVPATH MAJOR MINOR DEVLINKS
/dev/sdb disk 10 GiB yes Virtual_disk /devices/pci0000:00/0000:00:15.0/0000:03:00.0/host2/target2:0:1/2:0:1:0/block/sdb 8 16 "/dev/disk/by-id/scsi-36000c29fa20cfba95e070bfe2a39e6ed", "/dev/disk/by-id/wwn-0x6000c29fa20cfba95e070bfe2a39e6ed", "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0", "/dev/disk/by-diskseq/18" talos-5mz-5dl
- Create a DiskPool.

Create a Replicated PV Mayastor DiskPool:

cat <<EOF | kubectl create -f -
apiVersion: openebs.io/v1beta3
kind: DiskPool
metadata:
  name: pool1
  namespace: puls8
spec:
  node: <worker-node>
  disks: ["aio:///dev/disk/by-id/<id>"]
EOF

Alternatively, save the manifest as diskpool.yaml and apply it with kubectl apply -f diskpool.yaml.
- Verify the DiskPool.

Sample Output:

ID DISKS MANAGED NODE STATUS CAPACITY ALLOCATED AVAILABLE COMMITTED ENCRYPTED
pool1 aio:///dev/disk/by-id/scsi-36000c29fa20cfba95e070bfe2a39e6ed?uuid=2ba50b9a-2489-4766-b37c-915471769955 true talos-5mz-5dl Online 10 GiB 0 B 10 GiB 0 B false
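The listing command for the output above is not shown in the source. Assuming the same plugin invocation style as the block-device step earlier, a pool listing would look like the following (hypothetical subcommand; verify it against your Puls8 CLI reference):

```shell
kubectl puls8 mayastor get pools -n puls8
```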
- Create a StorageClass.

Create a Replicated PV Mayastor StorageClass:

cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-1
provisioner: io.openebs.csi-mayastor
parameters:
  protocol: nvmf
  repl: "1"
EOF

Refer to the StorageClass Parameters documentation for more information.
- Create a PersistentVolumeClaim (PVC).

Create a PVC using the Replicated PV Mayastor StorageClass:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ms-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: mayastor-1
EOF

Sample Output:

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
ms-volume-claim Bound pvc-f67eb1f9-0503-40a0-a28d-b6b328eb0761 1Gi RWO mayastor-1 <unset> 5s
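The sample output above is the result of listing the claim; assuming the PVC was created in the default namespace as shown, it can be reproduced with:

```shell
kubectl get pvc ms-volume-claim
```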
- Deploy a test pod.

fio.yaml:

kind: Pod
apiVersion: v1
metadata:
  name: fio
spec:
  nodeSelector:
    openebs.io/engine: mayastor
  volumes:
    - name: ms-volume
      persistentVolumeClaim:
        claimName: ms-volume-claim
  containers:
    - name: fio
      image: nixery.dev/shell/fio
      args:
        - sleep
        - "1000000"
      volumeMounts:
        - mountPath: "/volume"
          name: ms-volume
- Verify the pod status.
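A minimal check for this step, assuming the pod manifest above was saved as fio.yaml:

```shell
kubectl apply -f fio.yaml
# STATUS should reach Running once the Mayastor volume is attached
kubectl get pod fio
```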
Local Storage Configuration
Local PV LVM
- Create a Volume Group.

Talos does not provide shell access, so a privileged container with LVM utilities must be used. Use the LVM node plugin container.

Access the LVM Node Plugin Container:

kubectl exec -it <lvm-node-pod> -n puls8 -c openebs-lvm-plugin -- /bin/bash

Sample Command:

kubectl exec -it puls8-lvm-localpv-node-p8qv5 -n puls8 -c openebs-lvm-plugin -- /bin/bash
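The volume group commands themselves are not shown in the document. The following is a sketch of what would be run inside the plugin container, assuming a free block device at /dev/sdc (substitute the device discovered on your node; storage-vg matches the volgroup referenced by the StorageClass in the next step):

```shell
pvcreate /dev/sdc            # initialize the block device as an LVM physical volume
vgcreate storage-vg /dev/sdc # create the volume group referenced by the StorageClass
vgs                          # verify that storage-vg is listed
```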
- Create a StorageClass.

Create a Local PV LVM StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-lvm
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  volgroup: "storage-vg"
reclaimPolicy: Delete
allowedTopologies:
  - matchLabelExpressions:
      - key: openebs.io/nodename
        values:
          - <your-node-name>

Refer to the StorageClass Parameters documentation for more information.
- Create a PVC.

Create a Local PV LVM PVC:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: csi-lvmpv
spec:
  storageClassName: puls8-lvm
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi

Apply the manifest with kubectl apply -f lvm-sc.yaml.
- Verify that the PVC is bound.

Sample Output:

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
csi-lvmpv Bound pvc-a169a946-279f-4377-a58f-ee77611fe8de 4Gi RWO openebs-lvm <unset> 2m36s
Local PV ZFS
- Deploy a Local PV ZFS helper pod.

Talos has no direct shell access, so use a privileged helper pod with ZFS tools.

Sample Pod Manifest:

apiVersion: v1
kind: Pod
metadata:
  name: zfs-tools
  namespace: puls8
spec:
  nodeSelector:
    kubernetes.io/hostname: <your-node>
  hostNetwork: true
  hostPID: true
  restartPolicy: Never
  containers:
    - name: zfs-tools
      image: jasl8r/zfs-tools:4.9.80-rancher
      args: ["sleep", "1000000"]
      securityContext:
        privileged: true

Apply the manifest with kubectl apply -f zfs-tools.yaml, then enter the container with kubectl exec -it zfs-tools -n puls8 -- bash.
- Create a GPT partition.

Create a GPT Partition for the Local PV ZFS Pool:

parted -s /dev/sdb mklabel gpt mkpart primary 0% 100%

The partition type must be Solaris /usr & Apple ZFS (47).
- Create a ZPool on the GPT partition.

Sample Output:

NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
mz 9.50G 434K 9.50G - 0% 0% 1.00x ONLINE -
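The pool creation command itself is not shown. The following is a sketch consistent with the sample output above, assuming the pool is named mz and built on the first partition of /dev/sdb created in the previous step (the StorageClass in the next step references poolname myzpool, so keep the pool name consistent in your environment):

```shell
# Run inside the zfs-tools helper pod
zpool create mz /dev/sdb1
zpool list   # the new pool should report HEALTH as ONLINE
```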
- Create a StorageClass.

Create a Local PV ZFS StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-zfspv
parameters:
  recordsize: "128k"
  compression: "off"
  dedup: "off"
  fstype: "zfs"
  poolname: "myzpool"
provisioner: zfs.csi.openebs.io
allowedTopologies:
  - matchLabelExpressions:
      - key: openebs.io/nodename
        values:
          - <your-node-name>

Refer to the StorageClass Parameters documentation for more information.
- Create a PVC.

Create a Local PV ZFS PVC:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: csi-zfspv
spec:
  storageClassName: puls8-zfspv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi

Apply the manifest with kubectl apply -f zfspvc.yaml.
- Verify that the PVC is bound.

Sample Output:

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
csi-zfspv Bound pvc-85891d6d-48e6-4730-8434-abb409d53fd1 4Gi RWO openebs-zfspv <unset> 23h
Talos Upgrade Considerations
Talos upgrades must be performed carefully to avoid unintended data loss or disruption to the Kubernetes cluster. The upgrade behavior differs depending on the Talos version in use.
Follow the appropriate procedure based on the Talos version running in your environment.
Talos 1.7 and Lower
In Talos version 1.7 and earlier, upgrades require the --preserve flag to retain node configuration and data. If this flag is not used, Talos resets the node configuration and may remove existing etcd data and other stored state.
To ensure that cluster data remains intact during the upgrade, perform the upgrade using the --preserve flag.
- Upgrade the node.

Upgrade Talos while preserving node configuration and data:

talosctl -n <node-ip> upgrade --preserve --image $IMAGE_URL

- Verify the node version.
- Repeat this process for all nodes in the cluster.
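The version check in the steps above can be performed with talosctl's version command, for example:

```shell
talosctl --talosconfig ./talosconfig -n <node-ip> version
```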
Talos Version 1.8 and Later
Starting with Talos version 1.8, the upgrade process no longer wipes the system disk. The installer automatically preserves node configuration and data during upgrades.
As a result, the --preserve flag is no longer required when running the talosctl upgrade command. Refer to the release documentation for more information.
Benefits of Using DataCore Puls8 with Talos
- Immutable and Secure OS (Talos): Talos is an API-managed, minimal, and immutable Linux distribution designed for Kubernetes. Its secure design enhances the reliability and predictability of Replicated PV Mayastor deployments by minimizing OS-level drift and vulnerabilities.
- Consistent and Declarative Configuration: Talos supports fully declarative and auditable configuration management. This aligns well with Replicated PV Mayastor’s infrastructure-as-code approach, making it easy to version, replicate, and automate complex storage configurations.
- Performance-Optimized Storage: DataCore Puls8 is built for speed. It leverages SPDK and NVMe for low-latency, high-throughput block storage. Combined with Talos's lightweight, optimized OS, this pairing is ideal for performance-critical workloads.