
Deploy Persistent Storage

Helm Install

If not done previously, install Helm and add the Nexqloud repo by following the steps in this guide.

All steps in this section should be conducted from the Kubernetes control plane node on which Helm has been installed.

Rook has published the following Helm charts for the Ceph storage provider:

  • Rook Ceph Operator: Starts the Ceph Operator, which will watch for Ceph CRs (custom resources)

  • Rook Ceph Cluster: Creates Ceph CRs that the operator will use to configure the cluster

The Helm charts are intended to simplify deployment and upgrades.

Persistent Storage Deployment

  • Note - if any issues are encountered during the Rook deployment, tear down the Rook-Ceph components via the steps listed here and begin anew.

  • Deployment typically takes approximately 10 minutes to complete.

Migration procedure

If you already have the nexqloud-rook Helm chart installed, make sure to use the following documentation:

Rook Ceph repository

Add Repo

  • Add the Rook repo to Helm

  • Expected/Example Result
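
A minimal sketch of the repo-add step, assuming the upstream Rook release chart repository:

```bash
# Add the upstream Rook release Helm chart repository
# (repository URL assumed from the Rook project).
helm repo add rook-release https://charts.rook.io/release
```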

Verify Repo

  • Verify the Rook repo has been added

  • Expected/Example Result
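
For example (assuming the repo was added under the name rook-release as above; output varies by Helm version):

```bash
# Confirm the rook-release repo is registered with Helm
helm repo list | grep rook-release

# Or search the repo for the Rook Ceph charts
helm search repo rook-release
```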

Deployment Steps

STEP 1 - Install Ceph Operator Helm Chart

Testing

For additional Operator chart values refer to this page.

All In One Provisioner Replicas

For all-in-one deployments, you will likely want only one replica of the CSI provisioners.

  • Add the following to the rook-ceph-operator.values.yml file created in the subsequent step (see the sketch below)

  • By setting provisionerReplicas to 1, you ensure that only a single replica of the CSI provisioner is deployed. This defaults to 2 when it is not explicitly set.
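
As an illustrative sketch (assuming the rook-ceph-operator.values.yml file name used later in this guide), the override amounts to two lines in the csi section:

```bash
# Sketch only: add the all-in-one override to rook-ceph-operator.values.yml
# (created in the subsequent step). provisionerReplicas defaults to 2 when unset.
cat >> rook-ceph-operator.values.yml << 'EOF'
csi:
  provisionerReplicas: 1
EOF
```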

Default Resource Limits

You can disable the default resource limits by using the following YAML config; this is useful when testing:
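
A minimal sketch of such a config, assuming the rook-ceph-operator.values.yml file name used in this guide; an empty resources: key overrides the chart's default operator resource limits with none:

```bash
# Sketch: disable the chart's default resource limits in the operator values file.
# An empty "resources:" key overrides the defaults with no limits at all.
# The CSI sidecar resource settings under the chart's csi: section can be cleared
# the same way if needed (check your chart version's values for the exact key names).
cat >> rook-ceph-operator.values.yml << 'EOF'
resources:
EOF
```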

Install the Operator Chart

PRODUCTION

No customization is required by default.

  • Install the Operator chart:
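
For example (a sketch; the rook-ceph namespace and release name follow Rook's usual conventions and can be adjusted):

```bash
# Install the Rook Ceph operator chart into its own namespace.
# Append "-f rook-ceph-operator.values.yml" if you created the testing overrides above.
helm install --create-namespace -n rook-ceph rook-ceph rook-release/rook-ceph
```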

STEP 2 - Install Ceph Cluster Helm Chart

For additional Cluster chart values refer to this page. For custom storage configuration refer to this example.

TESTING / ALL-IN-ONE

  • Update deviceFilter to match your disks

  • Change the storageClass name from beta3 to the one you are planning to use, based on this table

  • Under the nodes section, add the nodes whose disks you want the Ceph storage to use (make sure to change node1, node2, ... to your K8s node names!)

When planning an all-in-one production provider (or a single storage node) with multiple storage drives (minimum 3):

  • Change failureDomain to osd

  • Change min_size to 2 and size to 3

  • Comment out or remove the resources: field to make sure the Ceph services will get enough resources when they run (see the values sketch below)
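
As an illustrative sketch only (the key layout follows the upstream rook-ceph-cluster chart; the device filter, node names, pool name, and storageClass name are assumptions to adjust for your environment), the points above translate into a cluster values file roughly like this:

```bash
# Sketch: cluster values for a single-storage-node / all-in-one setup.
# Adjust deviceFilter, node names, osdsPerDevice, and the storageClass name.
cat > rook-ceph-cluster.values.yml << 'EOF'
operatorNamespace: rook-ceph

cephClusterSpec:
  # resources:                    # left commented so Ceph services are not capped
  storage:
    useAllNodes: false
    useAllDevices: false
    deviceFilter: "^nvme."        # match your disks
    config:
      osdsPerDevice: "1"          # see the linked table for the value to use
    nodes:
      - name: "node1"             # change to your K8s node names
      - name: "node2"

cephBlockPools:
  - name: nexqloud-deployments    # assumed pool name
    spec:
      failureDomain: osd          # osd for single storage node / all-in-one, host otherwise
      replicated:
        size: 3
      parameters:
        min_size: "2"
    storageClass:
      enabled: true
      name: beta3                 # change to the storageClass you picked
EOF
```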

PRODUCTION

  • Update deviceFilter to match your disks

  • Change the storageClass name from beta3 to the one you are planning to use, based on this table

  • Update osdsPerDevice based on this table

  • Under the nodes section, add the nodes whose disks you want the Ceph storage to use (make sure to change node1, node2, ... to your K8s node names!)

  • When planning a single storage node with multiple storage drives (minimum 3):

    • Change failureDomain to osd

  • Install the Cluster chart:
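
For example (a sketch; the namespace and release name follow Rook's usual conventions, and the values file name matches the sketch above):

```bash
# Install the Rook Ceph cluster chart with the values file prepared above.
helm install --create-namespace -n rook-ceph rook-ceph-cluster \
  rook-release/rook-ceph-cluster -f rook-ceph-cluster.values.yml
```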

STEP 3 - Label the storageClass

This label is mandatory and is used by Nexqloud's inventory-operator to find the storageClass.

  • Change beta3 to the storageClass you picked before
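
A sketch only; the exact label key is defined by the inventory-operator (see the linked reference), so the key shown below is a placeholder assumption:

```bash
# Label the storageClass so the inventory-operator can discover it.
# NOTE: "nexqloud.network=true" is a placeholder key/value; use the label
# required by the inventory-operator.
kubectl label sc beta3 nexqloud.network=true
```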

STEP 4 - Update Failure Domain (Single Storage Node or All-In-One Scenarios Only)

When running a single storage node or all-in-one, make sure to change the failure domain from host to osd for the .mgr pool.
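
A sketch of one way to do this, assuming the Rook toolbox (rook-ceph-tools) pod is deployed: create a replicated CRUSH rule whose failure domain is osd and point the .mgr pool at it.

```bash
# Run Ceph commands through the Rook toolbox pod (assumes rook-ceph-tools is deployed).
TOOLS_POD=$(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}')

# Create a replicated CRUSH rule with an osd failure domain...
kubectl -n rook-ceph exec -i "$TOOLS_POD" -- ceph osd crush rule create-replicated replicated_rule_osd default osd

# ...and switch the .mgr pool to it.
kubectl -n rook-ceph exec -i "$TOOLS_POD" -- ceph osd pool set .mgr crush_rule replicated_rule_osd
```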
