Prerequisites

In this topic:

  • General requirements
  • Networking requirements
  • Recommended resource requirements
  • DiskPool requirements
  • RBAC permission requirements
  • Minimum worker node count
  • Transport protocols



General requirements

All worker nodes must satisfy the following requirements:

  • x86-64 CPU cores with SSE4.2 instruction support:

    • First generation Intel Core i5 and Core i7 (Nehalem microarchitecture) or newer

    • AMD Bulldozer processors or newer

  • Linux kernel 5.4 or higher with the following modules loaded:

    • nvme-tcp

    • ext4 and optionally xfs

  • Helm v3.7 or later

  • Supported Kubernetes versions:


Networking requirements

The minimum networking requirements to install DataCore Bolt are listed below:

  • An Ethernet connection with a minimum speed of 10 Gbps between the nodes must be available at all times.

  • Ensure that the following ports are not already in use on each node:

    • 10124: Used by the Bolt gRPC server.
    • 8420 / 4421: Used by the NVMf targets.

  • If the REST server needs to be accessed from outside the cluster:

    • The NodePort service should be accessible at ports 30010 / 30011 (see the sketch after this list).

    • The firewall settings should not restrict connections to the node.
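
If external access to the REST API is required, the api-rest Service can be exposed as a NodePort on those two ports. The following is a minimal sketch only; the Service name, namespace, selector labels, and container target ports are assumptions and must match the api-rest Deployment actually created by the Bolt installation.

apiVersion: v1
kind: Service
metadata:
  name: bolt-api-rest              # placeholder name
  namespace: bolt                  # placeholder namespace
spec:
  type: NodePort
  selector:
    app: api-rest                  # placeholder label; must match the api-rest pods
  ports:
    - name: http
      port: 8080                   # assumed internal REST port
      targetPort: 8080
      nodePort: 30010
    - name: https
      port: 8081                   # assumed internal REST port
      targetPort: 8081
      nodePort: 30011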


Recommended resource requirements

The recommended resource requirements for a DataCore Bolt deployment, per component (deployed container), are as follows:
  • io-engine DaemonSet

    resources:
      limits:
        cpu: "2"
        memory: "1Gi"
        hugepages-2Mi: "2Gi"
      requests:
        cpu: "2"
        memory: "1Gi"
        hugepages-2Mi: "2Gi"
  • csi-node DaemonSet

    resources:
      limits:
        cpu: "100m"
        memory: "50Mi"
      requests:
        cpu: "100m"
        memory: "50Mi"
  • csi-controller Deployment

    resources:
      limits:
        cpu: "32m"
        memory: "128Mi"
      requests:
        cpu: "16m"
        memory: "64Mi"
  • api-rest Deployment

    resources:
      limits:
        cpu: "100m"
        memory: "64Mi"
      requests:
        cpu: "50m"
        memory: "32Mi"
  • agent-core Deployment

    resources:
      limits:
        cpu: "1000m"
        memory: "32Mi"
      requests:
        cpu: "500m"
        memory: "16Mi"
  • operator-diskpool

    resources:
      limits:
        cpu: "100m"
        memory: "32Mi"
      requests:
        cpu: "50m"
        memory: "16Mi"
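
If these defaults need to be adjusted, they can usually be overridden through the Helm chart values at install or upgrade time. The snippet below is a sketch only, assuming the chart exposes a per-component resources key; the top-level key name (io_engine here) is an assumption and must be checked against the chart's actual values.yaml.

io_engine:                         # assumed values key; verify against the Bolt Helm chart
  resources:
    limits:
      cpu: "2"
      memory: "1Gi"
      hugepages-2Mi: "2Gi"
    requests:
      cpu: "2"
      memory: "1Gi"
      hugepages-2Mi: "2Gi"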

DiskPool requirements

The DiskPool requirements are as follows:

  • Disks must be unpartitioned, unformatted, and used exclusively by the DiskPool.

  • The minimum capacity of the disks should be 10 GB.
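
A DiskPool is created by applying a DiskPool custom resource that names the worker node and the disk device to be used. The sketch below is illustrative only: the apiVersion must match the DiskPool CRD installed with Bolt (this topic refers to both the openebs.io and datacore.com API groups), and the pool name, namespace, node name, and device path are placeholders.

apiVersion: openebs.io/v1alpha1    # assumed; use the API group/version of the installed DiskPool CRD
kind: DiskPool
metadata:
  name: pool-on-node-1             # placeholder pool name
  namespace: bolt                  # placeholder namespace
spec:
  node: worker-node-1              # placeholder worker node name
  disks: ["/dev/sdb"]              # unpartitioned, unformatted disk of at least 10 GB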


RBAC permission requirements

Access to the following Kubernetes resources is required to install, manage, use, and uninstall DataCore Bolt:

  • Kubernetes core v1 API-group resources: Pod, Event, Node, Namespace, ServiceAccount, PersistentVolume, PersistentVolumeClaim, ConfigMap, Secret, Service, Endpoint

  • Kubernetes batch API-group resources: CronJob, Job

  • Kubernetes apps API-group resources: Deployment, ReplicaSet, StatefulSet, DaemonSet

  • Kubernetes storage.k8s.io API-group resources: StorageClass, VolumeSnapshot, VolumeSnapshotContent, VolumeAttachment, CSINode

  • Kubernetes apiextensions.k8s.io API-group resources: CustomResourceDefinition

  • Bolt Custom Resources, that is, openebs.io API-group resources: DiskPool

  • Custom Resources from the Jaeger Helm chart dependencies, which are helpful for debugging:

    • ConsoleLink Resource from console.openshift.io API group

    • ElasticSearch Resource from logging.openshift.io API group

    • Kafka and KafkaUsers from kafka.strimzi.io API group

    • ServiceMonitor from monitoring.coreos.com API group

    • Ingress from networking.k8s.io API group and from extensions API group

    • Route from route.openshift.io API group

    • All resources from jaegertracing.io API group

As an example, the ClusterRole and ClusterRoleBinding YAML spec below is bound to a ServiceAccount; the same role can equally be bound to a Kubernetes User such as bolt-user or to a Group such as bolt-admin. A Kubernetes cluster administrator can use this YAML to grant restricted access to the resources on the Kubernetes cluster where DataCore Bolt is about to be installed.

ClusterRole YAML
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Release.Name }}-service-account
  namespace: {{ .Release.Namespace }}
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: bolt-cluster-role
rules:
  # must create bolt crd if it doesn't exist
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["create", "list"]
  # must read diskpool info
- apiGroups: ["datacore.com"]
  resources: ["diskpools"]
  verbs: ["get", "list", "watch", "update", "replace", "patch"]
  # must update diskpool status
- apiGroups: ["datacore.com"]
  resources: ["diskpools/status"]
  verbs: ["update", "patch"]
  # external provisioner & attacher
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "update", "create", "delete", "patch"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]

  # external provisioner
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: ["snapshot.storage.k8s.io"]
  resources: ["volumesnapshots"]
  verbs: ["get", "list"]
- apiGroups: ["snapshot.storage.k8s.io"]
  resources: ["volumesnapshotcontents"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]

  # external attacher
- apiGroups: ["storage.k8s.io"]
  resources: ["volumeattachments"]
  verbs: ["get", "list", "watch", "update", "patch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["volumeattachments/status"]
  verbs: ["patch"]
  # CSI nodes must be listed
- apiGroups: ["storage.k8s.io"]
  resources: ["csinodes"]
  verbs: ["get", "list", "watch"]

  # get kube-system namespace to retrieve Uid
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: bolt-cluster-role-binding
subjects:
- kind: ServiceAccount
  name: {{ .Release.Name }}-service-account
  namespace: {{ .Release.Namespace }}
roleRef:
  kind: ClusterRole
  name: bolt-cluster-role
  apiGroup: rbac.authorization.k8s.io
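
If the same restricted permissions are to be granted directly to the Kubernetes User bolt-user or the Group bolt-admin rather than to the ServiceAccount, an additional ClusterRoleBinding along the following lines could be used (a sketch only; the binding name is a placeholder):

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: bolt-user-cluster-role-binding   # placeholder name
subjects:
- kind: User
  name: bolt-user
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: bolt-admin
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: bolt-cluster-role
  apiGroup: rbac.authorization.k8s.io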

Minimum worker node count

The minimum supported worker node count is three.

When using the synchronous replication feature (N+1 mirroring), the number of worker nodes to which DataCore Bolt is deployed should be no less than the desired replication factor.
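
The replication factor for a volume is typically requested through the StorageClass used to provision it. The example below is a sketch only: the provisioner string and the parameter names (repl, protocol) are assumptions and must match those documented for the installed Bolt release.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: bolt-3-replicas            # placeholder name
provisioner: bolt.csi.example      # placeholder; replace with the Bolt CSI provisioner name
parameters:
  repl: "3"                        # assumed parameter name for the replication factor
  protocol: "nvmf"                 # volumes are exported over NVMe-oF TCP (see Transport protocols)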


Transport protocols

DataCore Bolt supports the export and mounting of volumes over NVMe-oF TCP only. Worker nodes on which a volume may be scheduled (that is, mounted) must have the requisite NVMe initiator support installed and configured.

To reliably mount application volumes over NVMe-oF TCP, a worker node must run kernel version 5.4 or later with the nvme-tcp kernel module loaded.
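
If worker nodes do not already load nvme-tcp at boot, one common approach is a small privileged DaemonSet that loads the module on every node. The sketch below illustrates this approach only; it is not part of the Bolt installation, and the name, namespace, and image are placeholders.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nvme-tcp-modprobe          # placeholder name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: nvme-tcp-modprobe
  template:
    metadata:
      labels:
        app: nvme-tcp-modprobe
    spec:
      containers:
      - name: modprobe
        image: busybox:1.36        # placeholder image providing a modprobe applet
        securityContext:
          privileged: true         # module loading requires elevated privileges
        command: ["/bin/sh", "-c", "modprobe nvme-tcp && while true; do sleep 3600; done"]
        volumeMounts:
        - name: modules
          mountPath: /lib/modules  # host kernel modules, mounted read-only
          readOnly: true
      volumes:
      - name: modules
        hostPath:
          path: /lib/modules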