Disk Emulation
Overview
When testing Replicated PV Mayastor, allocating physical disk devices on every node may not always be practical or possible. To support such scenarios, Replicated PV Mayastor provides two types of disk emulation options: memory-backed disks and file-backed disks. These virtual devices allow you to simulate pool creation and volume provisioning without relying on actual hardware.
This document describes how to configure and use these virtual devices, provides important usage considerations, and offers examples for practical implementation.
Types of Emulated Disks
When physical disk devices are unavailable for testing purposes, Replicated PV Mayastor offers two types of emulated disk devices:
- Memory-Backed Disks (RAM Drives)
- File-Backed Disks
Memory-Backed Disks
Memory-backed disks are the easiest to provision, provided sufficient node resources are available. Replicated PV Mayastor automatically creates these disks when the corresponding pool is defined. However, memory-backed disks are volatile: because they exist entirely in memory, data will be lost if the IO Engine pod is terminated or rescheduled by Kubernetes.
Memory-backed disks should only be used for short-term testing and must not be used in production environments.
File-Backed Disks
File-backed disks store pool data within a file on a file system accessible to the IO Engine pod. Their durability depends on how they are provisioned:
- If placed on ephemeral storage (Example: a Kubernetes `EmptyDir`), they offer no more persistence than memory-backed disks.
- If placed on a persistent volume (Example: a Kubernetes `HostPath` volume), they can be considered stable.
Configuring Memory-Backed Disks
To create a memory-backed disk, use the `malloc:///` URI in the DiskPool manifest. The following example shows how to define a pool with a 64MiB memory-backed disk.
```yaml
apiVersion: "openebs.io/v1alpha1"
kind: DiskPool
metadata:
  name: mempool-1
  namespace: puls8
spec:
  node: worker-node-1
  disks: ["malloc:///malloc0?size_mb=64"]
```
This defines a pool named `mempool-1` on `worker-node-1`, creating a 64MiB emulated disk (`malloc0`) within the IO Engine pod, assuming sufficient Huge Page memory is available.
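To create the pool, apply the manifest and confirm that it comes online. A minimal sketch, assuming the manifest above is saved as mempool-1.yaml (the filename is illustrative, and the output columns vary by release):

```bash
# Apply the DiskPool manifest (the filename is an assumption for this example)
kubectl apply -f mempool-1.yaml

# Check that the pool reports a healthy state
kubectl get diskpools -n puls8
```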
The malloc:/// URI Schema
The `malloc:///` URI specifies the use of Huge Page-backed memory for emulated disks. The format is `malloc:///malloc<DeviceId>?<parameters>`, where:

- `<DeviceId>`: Unique device identifier (Example: `malloc0`, `malloc1`)
- `<parameters>`: Optional query parameters
Parameter | Function | Type | Notes |
---|---|---|---|
`size_mb` | Disk size in MiB | Integer | Mutually exclusive with `num_blocks` |
`num_blocks` | Disk size in number of blocks | Integer | Mutually exclusive with `size_mb` |
`blk_size` | Block size in bytes (default: 512) | Integer | Optional. Valid values: 512, 4096 |
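For example, `malloc:///malloc1?num_blocks=16384&blk_size=4096` describes a 64MiB disk built from 16384 blocks of 4096 bytes each, assuming the parameters are combined using standard query-string syntax.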
Memory is allocated from 2MiB Huge Page resources. For example, a 64MiB disk requires at least 33 Huge Pages (32 pages of 2MiB for the data, plus overhead). You may need to adjust your node's Huge Page settings and the IO Engine pod's resource limits accordingly.
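A quick way to inspect and adjust the node's 2MiB Huge Page allocation is sketched below; the page count of 1024 (2GiB) is only an illustrative value, and the IO Engine pod must be restarted to pick up changes:

```bash
# Inspect the current 2MiB Huge Page allocation on the node
grep -i hugepages /proc/meminfo

# Reserve 1024 x 2MiB Huge Pages (2GiB); the count shown here is illustrative
sudo sysctl -w vm.nr_hugepages=1024

# Persist the setting across reboots
echo "vm.nr_hugepages = 1024" | sudo tee /etc/sysctl.d/10-hugepages.conf
```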
Configuring File-Backed Disks
File-backed disks require that a backing file exists on a path accessible to the IO Engine pod. Reference the file's location using the `aio:///` URI scheme, as shown in the examples below.
```yaml
apiVersion: "openebs.io/v1alpha1"
kind: DiskPool
metadata:
  name: filepool-1
  namespace: puls8
spec:
  node: worker-node-1
  disks: ["aio:///var/tmp/disk1.img"]
```
The following variant specifies an explicit 4096-byte block size:

```yaml
apiVersion: "openebs.io/v1alpha1"
kind: DiskPool
metadata:
  name: filepool-1
  namespace: puls8
spec:
  node: worker-node-1
  disks: ["aio:///tmp/disk1.img?blk_size=4096"]
```
The aio:/// URI Schema
The `aio:///` URI specifies a disk emulated via a file on a file system. The only optional parameter is `blk_size`: the block size in bytes (512 or 4096). It defaults to 512 if not specified.
The file must exist at the full absolute path before the pool is created. Ensure this path is accessible inside the IO Engine container, especially if using persistent volumes or HostPath mounts.
Creating the Backing File
Use the `truncate` command to create a file-backed disk of the desired size. This example creates a 1GiB file.
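For example, on the node that will host the pool (the path matches the first filepool-1 manifest above):

```bash
# Create a sparse 1GiB backing file for the emulated disk
truncate -s 1G /var/tmp/disk1.img

# Confirm the size and path
ls -lh /var/tmp/disk1.img
```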
This file can then be referenced using the `aio:///` schema in the DiskPool manifest.
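Before creating the pool, you can optionally confirm that the file is visible from inside the IO Engine container. A sketch, assuming the IO Engine pods carry the label app=io-engine in the puls8 namespace (labels vary by installation):

```bash
# Find the IO Engine pod running on the target node (the label selector is an assumption)
kubectl get pods -n puls8 -l app=io-engine -o wide

# Check that the backing file is visible inside that pod's container
kubectl exec -n puls8 <io-engine-pod-name> -- ls -l /var/tmp/disk1.img
```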