Deploy Persistent Storage
Helm Install
If not done previously, install Helm and add the Nexqloud repo by following the steps in this guide.
All steps in this section should be conducted from the Kubernetes control plane node on which Helm has been installed.
Rook has published the following Helm charts for the Ceph storage provider:
- Rook Ceph Operator: starts the Ceph Operator, which will watch for Ceph CRs (custom resources)
- Rook Ceph Cluster: creates the Ceph CRs that the operator will use to configure the cluster
The Helm charts are intended to simplify deployment and upgrades.
Persistent Storage Deployment
Note - if any issues are encountered during the Rook deployment, tear down the Rook-Ceph components via the steps listed here and begin anew.
Deployment typically takes approximately 10 minutes to complete.
Migration procedure
If you already have the `nexqloud-rook` Helm chart installed, make sure to use the following documentation:
Rook Ceph repository
Add Repo
Add the Rook repo to Helm
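For example, assuming the upstream Rook release chart repository:

```
helm repo add rook-release https://charts.rook.io/release
```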
Expected/Example Result
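You should see output similar to:

```
"rook-release" has been added to your repositories
```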
Verify Repo
Verify the Rook repo has been added
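For example, assuming the `rook-release` repo name used above:

```
helm search repo rook-release
```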
Expected/Example Result
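Output similar to the following indicates the repo is available (chart versions will vary):

```
NAME                            CHART VERSION   APP VERSION     DESCRIPTION
rook-release/rook-ceph          v1.14.5         v1.14.5         File, Block, and Object Storage Services for yo...
rook-release/rook-ceph-cluster  v1.14.5         v1.14.5         Manages a single Ceph cluster namespace for Rook
```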
Deployment Steps
STEP 1 - Install Ceph Operator Helm Chart
Testing
For additional Operator chart values refer to this page.
All In One Provisioner Replicas
For all-in-one deployments, you will likely want only one replica of the CSI provisioners.
Add the following to `rook-ceph-operator.values.yml`, created in the subsequent step. Setting `provisionerReplicas` to `1` ensures that only a single replica of the CSI provisioner is deployed; it defaults to `2` when not explicitly set.
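A minimal sketch of the relevant snippet, using the `csi.provisionerReplicas` key from the Rook operator chart:

```yaml
csi:
  # run a single CSI provisioner replica on all-in-one deployments (chart default is 2)
  provisionerReplicas: 1
```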
Default Resource Limits
You can disable the default resource limits with the following YAML config; this is useful when testing:
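A testing-only sketch for `rook-ceph-operator.values.yml`; leaving these keys empty overrides the chart's built-in resource requests/limits with null (key names follow the upstream Rook operator chart):

```yaml
# testing only: clear the default resource requests/limits set by the chart
resources:
csi:
  csiRBDProvisionerResource:
  csiRBDPluginResource:
  csiCephFSProvisionerResource:
  csiCephFSPluginResource:
```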
Install the Operator Chart
PRODUCTION
No customization is required by default.
Install the Operator chart:
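A typical invocation, assuming the `rook-release` repo added earlier; append `-f rook-ceph-operator.values.yml` if you created that file in the testing steps above:

```
helm install --create-namespace --namespace rook-ceph rook-ceph rook-release/rook-ceph
```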
STEP 2 - Install Ceph Cluster Helm Chart
For additional Cluster chart values refer to this page. For custom storage configuration refer to this example.
TESTING / ALL-IN-ONE
- Update `deviceFilter` to match your disks.
- Change the storageClass name from `beta3` to the one you plan to use, based on this table.
- Add the nodes whose disks Ceph storage should use under the `nodes` section (make sure to change `node1`, `node2`, ... to your K8s node names!).

When planning an all-in-one production provider (or a single storage node) with multiple storage drives (minimum 3):

- Change `failureDomain` to `osd`.
- Change `min_size` to `2` and `size` to `3`.
- Comment out or remove the `resources:` field to make sure the Ceph services will get enough resources before running them.

A values sketch covering these settings appears after this list.
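A condensed sketch of `rook-ceph-cluster.values.yml` illustrating the values called out above. The pool name, device filter, and node names are placeholders, and the surrounding keys follow the upstream rook-ceph-cluster chart layout:

```yaml
cephClusterSpec:
  # resources:   # commented out for all-in-one so Ceph services are not starved
  storage:
    useAllNodes: false
    useAllDevices: false
    deviceFilter: "^sd[b-c]"       # placeholder: match your disks
    config:
      osdsPerDevice: "1"
    nodes:
      - name: "node1"              # placeholder: your K8s node names
      - name: "node2"

cephBlockPools:
  - name: nexqloud-deployments     # placeholder pool name
    spec:
      failureDomain: osd           # osd for all-in-one / single storage node
      replicated:
        size: 3
      parameters:
        min_size: "2"
    storageClass:
      enabled: true
      name: beta3                  # change to the storageClass you picked
```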
PRODUCTION
- Update `deviceFilter` to match your disks.
- Change the storageClass name from `beta3` to the one you plan to use, based on this table.
- Update `osdsPerDevice` based on this table (see the snippet after this list).
- Add the nodes whose disks Ceph storage should use under the `nodes` section (make sure to change `node1`, `node2`, ... to your K8s node names!).

When planning a single storage node with multiple storage drives (minimum 3):

- Change `failureDomain` to `osd`.
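The production values differ from the testing sketch above mainly in `osdsPerDevice`; for example (the value itself depends on the sizing table referenced above):

```yaml
cephClusterSpec:
  storage:
    config:
      osdsPerDevice: "2"   # placeholder: take the value from the table for your drive type
```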
Install the Cluster chart:
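A typical invocation, assuming the operator was installed into the `rook-ceph` namespace as above:

```
helm install --create-namespace --namespace rook-ceph rook-ceph-cluster \
  --set operatorNamespace=rook-ceph rook-release/rook-ceph-cluster \
  -f rook-ceph-cluster.values.yml
```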
STEP 3 - Label the storageClass
This label is mandatory and is used by Nexqloud's `inventory-operator` to discover the storageClass.
Change `beta3` to the storageClass you picked before.
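A sketch of the labeling command; the exact label key expected by the `inventory-operator` is an assumption here, so verify it against the Nexqloud provider documentation:

```
# assumption: label key consumed by the inventory-operator
kubectl label sc beta3 nexqloud.network=true
```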
STEP 4 - Update Failure Domain (Single Storage Node or All-In-One Scenarios Only)
When running a single storage node or all-in-one, make sure to change the failure domain from `host` to `osd` for the `.mgr` pool.
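A sketch of one way to do this from the Rook toolbox (assumes the `rook-ceph-tools` deployment is enabled in the cluster chart; the rule name is a placeholder, and the commands are standard Ceph CLI):

```
# open a shell in the Rook toolbox pod
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash

# create a replicated CRUSH rule with an osd failure domain and assign it to the .mgr pool
ceph osd crush rule create-replicated replicated-osd default osd
ceph osd pool set .mgr crush_rule replicated-osd
```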