Frequently Asked Questions

Overview

This document provides answers to frequently asked questions (FAQs) related to DataCore Puls8, including details on deployment, data protection, volume management, and platform capabilities. It is intended to help both new and experienced users understand and use DataCore Puls8 features efficiently.

Deployment and Setup Related

How do I get started, and what is the typical trial deployment?

To begin using DataCore Puls8, refer to the Getting Started section. This section provides step-by-step instructions to help you deploy and validate DataCore Puls8 in your environment.

How do I choose between Local PV and Replicated PV?

  • Local PV: Ideal for single-node deployments or environments where replication is managed externally.
  • Replicated PV: Recommended for environments that require high availability and data resilience.

Are there prerequisites for installing DataCore Puls8?

Yes. Prerequisites include:

  • System configuration
  • Storage configuration

Refer to the Prerequisites Documentation for the full list of requirements.

How do I create a StorageClass in DataCore Puls8?

Create a StorageClass using a YAML definition that specifies the correct provisioner, engine-specific parameters, and optional topology settings. Refer to the Creating a StorageClass Documentation for more information.
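As a sketch, a StorageClass definition typically looks like the following. The provisioner string and the `repl` parameter name here are illustrative placeholders, not confirmed Puls8 identifiers; substitute the values documented for your engine.

```yaml
# Hypothetical example - provisioner and parameter names are placeholders;
# use the values documented for your DataCore Puls8 engine.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-replicated
provisioner: example.puls8.csi      # engine-specific CSI provisioner name
parameters:
  repl: "3"                         # engine-specific: number of replicas
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```

Apply the definition with `kubectl apply -f storageclass.yaml` and verify it with `kubectl get storageclass`.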

Upgrade and Version Management Related

How is the DataCore Puls8 upgrade performed?

DataCore Puls8 upgrade is performed using the kubectl puls8 upgrade command. The upgrade updates control-plane components first, followed by a rolling restart of Replicated PV Mayastor io-engine pods to apply the new version while preserving existing storage resources and data.
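As a rough sketch, the flow described above might look like this. Only `kubectl puls8 upgrade` is taken from this document; the version-check flag is an assumption — consult the Upgrading documentation for the exact commands.

```shell
# Install or verify the kubectl puls8 plugin version that matches your
# target release (the plugin version determines the target Puls8 version).
kubectl puls8 --version    # assumed flag; check your plugin's help output

# Run the upgrade: control-plane components are updated first, then the
# Replicated PV Mayastor io-engine pods are restarted in a rolling fashion.
kubectl puls8 upgrade
```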

Does upgrading DataCore Puls8 cause application downtime?

For Replicated PV Mayastor volumes:

  • Multi-replica volumes remain available during the upgrade, though they may briefly operate in a degraded state.
  • Single-replica volumes become temporarily unavailable while their io-engine pod restarts.

  • For Local Storages (Local PV Hostpath, Local PV LVM, and Local PV ZFS), storage I/O is handled in kernel space; Puls8 containers manage provisioning and control-plane functions only. Restarting Puls8 components therefore does not interrupt data access for Local PV volumes.
  • For Replicated Storage, the io-engine container provides the storage data plane and handles volume I/O operations. Restarting an io-engine pod temporarily interrupts access to volumes hosted on that node until the pod is fully operational.

What determines the DataCore Puls8 version installed during upgrade?

The version of the kubectl puls8 plugin determines the target Puls8 version. To upgrade to a specific Puls8 version, install the matching version of the kubectl puls8 plugin before running the upgrade command.

Can I downgrade DataCore Puls8 after upgrading?

No. Downgrading DataCore Puls8 is not supported.

What happens if the upgrade fails?

If the upgrade fails, review the upgrade logs to identify the issue. After resolving the issue, retry the upgrade. If upgrade resources remain stuck, they can be removed using the force cleanup option. Refer to the Upgrading DataCore Puls8 Documentation for more information.

Storage Configuration Related

Can I use a replica count of 2 in the StorageClass on a single-node cluster?

No. When defining a StorageClass, specifying a replica count of 2 on a single-node cluster will prevent volume creation. The number of replicas must not exceed the number of available nodes. DataCore Puls8 requires that each replica be placed on a separate node to ensure high availability.

How do I ensure that replicas are not scheduled onto the same node? What about nodes in the same rack or availability zone?

Replica placement logic ensures that multiple replicas of the same volume are never placed on the same node, even if they are associated with different Disk Pools. For instance, a volume with a replication factor of 3 requires three distinct nodes, each with a healthy Disk Pool that has sufficient capacity.

For advanced placement control, such as spreading replicas across racks or availability zones, DataCore Puls8 supports topology-aware scheduling based on Kubernetes node labels. This ensures that Disk Pools and volumes are scheduled on nodes in distinct failure domains (Example: Racks or Zones) when correctly configured.

When using single-replica volumes (Example: For StatefulSet applications), it is equally important to ensure that both the application pods and the associated volumes are scheduled on the correct nodes. In such cases, StorageClass topology parameters can be used to align volume placement with application scheduling constraints.

Refer to the Topology Configuration Documentation for more details on configuring and using topology constraints.
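The zone constraint described above can be expressed with the standard Kubernetes `allowedTopologies` StorageClass field, sketched below; the provisioner name and zone values are placeholders, and any engine-specific topology parameters are covered in the Topology Configuration Documentation.

```yaml
# Sketch of topology-aware placement using standard StorageClass fields.
# The provisioner name and zone values are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: puls8-zoned
provisioner: example.puls8.csi
volumeBindingMode: WaitForFirstConsumer   # delay binding until pod placement is known
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values:
          - zone-a
          - zone-b
```

`WaitForFirstConsumer` is what aligns volume placement with application scheduling for single-replica StatefulSet workloads.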

Can Replicated Storage perform asynchronous replication to another node or cluster?

No. Replicated Storage does not support asynchronous replication within the same cluster or across clusters.

For disaster recovery and off-site data protection, it is recommended to use a backup solution such as Velero integrated with DataCore Puls8. Refer to the Backup and Restore Documentation for more information.
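As an illustration, a Velero backup of a namespace that uses Puls8 volumes could be taken as follows; these are standard Velero CLI commands, and the backup and namespace names are placeholders.

```shell
# Back up a namespace, including its persistent volume data.
velero backup create my-app-backup --include-namespaces my-app

# Later, restore it (for example, into a DR cluster running Velero).
velero restore create --from-backup my-app-backup
```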

Data Protection Related

How is data protected in Replicated Storage? What happens in case of host, workload, or data center failure?

Replicated Storage ensures resilience through a highly available architecture. It supports automatic NVMe controller failover to maintain I/O continuity during host failures. Data is synchronously replicated across nodes, according to the configured replication factor, to avoid any single point of failure. Failed replicas are automatically rebuilt in the background without disrupting I/O.

What happens when a single replica node fails?

It is recommended to provision volumes with at least two replicas for higher availability. If a volume is created with a single replica and the node hosting that replica fails, the volume will become unavailable, and I/O operations will be disrupted due to the absence of healthy replicas.

Does the supportability tool expose sensitive data?

No. The supportability tool creates diagnostic support bundles for debugging, which are generated only upon user request. These bundles are accessible only by the user and must be explicitly shared. Refer to the Supportability Documentation for a list of collected data.

Pool and Volume Management Related

Can the size or capacity of a Disk Pool be changed?

No. The size of a Replicated Storage Disk Pool is fixed at creation and cannot be modified. Each Disk Pool is backed by a single block device.

How do I resize a volume in DataCore Puls8?

Volume resizing is supported by Replicated PV Mayastor and Local PV LVM engines using Persistent Volume Claim (PVC) expansion. Ensure that your StorageClass and the selected engine support volume resizing.
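Assuming the StorageClass sets `allowVolumeExpansion: true`, expansion is a standard PVC edit; the claim name and sizes below are placeholders.

```shell
# Request a larger size on an existing claim (for example, up to 20Gi).
kubectl patch pvc my-claim \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

# Watch the claim until the resize completes.
kubectl get pvc my-claim -w
```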

Snapshots and Cloning Related

What can cause a snapshot operation to fail after a node reboot?

If a bare pod (a workload not managed by a controller such as a Deployment or StatefulSet) is scheduled on a node that reboots, the volume unpublish operation may not be triggered. As a result, the control plane incorrectly assumes the volume is still published, and the FIFREEZE operation fails during snapshot creation.

To resolve this, recreate or reinstate the pod to allow proper volume mounting and recognition by the control plane.
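For a bare pod, recreation is manual; the pod name and manifest file below are placeholders.

```shell
# Delete the stale pod and re-create it from its manifest so the volume is
# re-published and the control plane's view of it is refreshed.
kubectl delete pod my-app-pod
kubectl apply -f my-app-pod.yaml
```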

Does DataCore Puls8 support snapshots and clones?

  • Replicated PV Mayastor: Supports volume snapshots (via CSI) and restoring a snapshot to a new volume, but does not currently support volume-to-volume cloning.
  • Local PV ZFS: Fully supports both snapshots and clones natively through the ZFS file system.

Refer to the Snapshots and Clones documentation for more information.
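For Replicated PV Mayastor, snapshots use the standard CSI snapshot API, sketched below; the snapshot class and claim names are placeholders.

```yaml
# Standard CSI VolumeSnapshot request; names are illustrative.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-claim-snap
spec:
  volumeSnapshotClassName: puls8-snapshot-class   # placeholder class name
  source:
    persistentVolumeClaimName: my-claim
```

Restoring then means creating a new PVC whose `dataSource` references this VolumeSnapshot.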
