
Rook Ceph: cloud-native storage orchestrator + distributed storage system

Prerequisites:

  • Kubernetes v1.22 or higher
  • Helm 3.x
  • Raw devices (no partitions or formatted filesystems; see the device check after this list)
  • Raw partitions (no formatted filesystem)
  • LVM Logical Volumes (no formatted filesystem)
  • Persistent Volumes available from a storage class in block mode
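
To verify that a device or partition meets the "no formatted filesystem" requirement, a quick check is lsblk (the device name /dev/sdb below is only an example):

lsblk -f /dev/sdb

An empty FSTYPE column means the device carries no filesystem and can be handed to Rook for an OSD.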

Installation Requirements:

  • Ceph OSDs depend on LVM when they are created on raw devices or partitions.
  • Reset the disks that Rook will use for OSDs before installation.

Prepare nodes for Ceph OSDs:

Reset the disks on all hosts (worker nodes) to a usable state:

# Check currently mounted filesystems and make sure the target disk is not in use
df -h

# Set this to the raw disk that Rook will use for the OSD
DISK="/dev/sdX"

# Wipe the partition table and any GPT/MBR structures (sgdisk is provided by the gdisk package)
sgdisk --zap-all $DISK

# Confirm the disk now shows no partitions or filesystem
lsblk -f
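
If a disk was previously used by Ceph, leftover LVM volumes and Rook state can also block the OSD prepare job. The following cleanup sketch is based on Rook's device-zapping guidance; /var/lib/rook is the default dataDirHostPath, so adjust it if your cluster uses a different path:

# Remove ceph-volume LVM mappings left over from a previous cluster
ls /dev/mapper/ceph-* | xargs -I% -- dmsetup remove %
rm -rf /dev/ceph-* /dev/mapper/ceph--*

# Remove Rook's state directory on the node (default dataDirHostPath)
rm -rf /var/lib/rook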

Install the lvm2 package on all the hosts (worker nodes) where OSDs will be running.
apt-get update -y 

apt-get install -y lvm2
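
On RHEL/CentOS-based worker nodes (not covered by the apt commands above), the equivalent would be:

dnf install -y lvm2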

Installing Ceph Operator with Helm:

The Ceph Operator helm chart will install the basic components necessary to create a storage platform for your Kubernetes cluster.

Add the rook-ceph Helm repository:

helm repo add rook-release https://charts.rook.io/release

Fetch the latest charts from the repository:

helm repo update

Retrieve the rook-ceph chart from the rook-release repository and unpack it locally:

helm fetch rook-release/rook-ceph --untar

Install the Ceph operator in the rook-ceph namespace (values.yaml here is the file unpacked from the chart, edited as needed):

helm install --create-namespace --namespace rook-ceph rook-ceph rook-release/rook-ceph -f values.yaml --version 1.12

To confirm that the deployment succeeded, run:

kubectl -n rook-ceph get pod
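
You can also block until the operator pod reports Ready instead of polling the pod list (the app=rook-ceph-operator label is the one applied by the operator Deployment; verify it against your chart version):

kubectl -n rook-ceph wait --for=condition=Ready pod -l app=rook-ceph-operator --timeout=300s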

Installing Ceph Cluster with Helm:

Fetch the rook-ceph-cluster chart and unpack it locally:

helm fetch rook-release/rook-ceph-cluster --untar

Install the Ceph cluster in the rook-ceph namespace, pointing it at the namespace where the operator is running:

helm install --namespace rook-ceph rook-ceph-cluster \
   --set operatorNamespace=rook-ceph rook-release/rook-ceph-cluster -f values.yaml
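
Before running the install, the cluster chart's values.yaml is usually edited to describe the cluster you want. A minimal sketch of commonly adjusted keys (they follow the rook-ceph-cluster chart's cephClusterSpec structure; verify against your chart version, and the deviceFilter is only an example):

cephClusterSpec:
  mon:
    count: 3
  dashboard:
    enabled: true
    ssl: false                 # dashboard listens on port 7000 when SSL is disabled
  storage:
    useAllNodes: true
    useAllDevices: false
    deviceFilter: "^sd[b-z]"   # example filter; match the raw devices prepared earlier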

Deploy the Rook toolbox to run arbitrary Ceph commands:

kubectl create -f https://raw.githubusercontent.com/rook/rook/release-1.12/deploy/examples/toolbox.yaml
Once the rook-ceph-tools pod is running, we can connect to it with:
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
bash-4.4$ ceph osd status
ID  HOST                    USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  k8s-worker01-ceph  21.1M   199G      0        0       0        0   exists,up
 1  k8s-worker03-ceph  21.1M   199G      0        0       0        0   exists,up
 2  k8s-worker02-ceph  21.1M   199G      0        0       0        0   exists,up

bash-4.4$ ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL    USED  RAW USED  %RAW USED
hdd    600 GiB  600 GiB  63 MiB    63 MiB       0.01
TOTAL  600 GiB  600 GiB  63 MiB    63 MiB       0.01

--- POOLS ---
POOL         ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
.mgr          1    1  449 KiB        2  1.3 MiB      0    190 GiB
replicapool   2   32     19 B        1   12 KiB      0    190 GiB
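
Other standard Ceph CLI checks that are useful from the toolbox:

ceph status          # overall health, mon quorum, and OSD/PG summary
ceph osd tree        # OSD placement across hosts in the CRUSH hierarchy
ceph health detail   # details for any WARN/ERR health conditions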

Accessing the Ceph Dashboard:

List the dashboard services:

kubectl get svc -n rook-ceph
NAME                      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)
rook-ceph-mgr             ClusterIP   X.X.X.X      <none>        9283/TCP
rook-ceph-mgr-dashboard   ClusterIP   X.X.X.X      <none>        7000/TCP

Port-forward the rook-ceph-mgr-dashboard service to reach the dashboard from outside the cluster:

kubectl port-forward  --address 0.0.0.0  service/rook-ceph-mgr-dashboard 8443:7000 -n rook-ceph

Then open the dashboard in a browser:

http://node-ip:8443

Login Credentials:

The dashboard has a default user named `admin`. To retrieve the generated password, run:
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo

Exposing the dashboard on the Internet (using a reverse proxy):

Create a Certificate resource to generate a TLS certificate for the Ceph dashboard:
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: dashboard-cert
  namespace: rook-ceph
spec:
  secretName: rook-ceph-secret   
  issuerRef:
    name: ca-issuer
    kind: ClusterIssuer
  dnsNames:
    - rook-ceph.com
EOF
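
The Certificate above references a ClusterIssuer named ca-issuer, which is assumed to exist already. If you do not have one, a minimal self-signed ClusterIssuer (cert-manager v1 API; named here only to match the reference above, and a sketch rather than a production CA setup) can be created the same way:

cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: ca-issuer
spec:
  selfSigned: {}
EOF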

Create an Ingress resource:

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rook-ceph-mgr-dashboard
  namespace: rook-ceph
  annotations:
    kubernetes.io/tls-acme: "false"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/server-snippet: |
      proxy_ssl_verify off;
spec:
  ingressClassName: "nginx"
  tls:
   - hosts:
     - rook-ceph.com
     secretName: rook-ceph-secret
  rules:
  - host: rook-ceph.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: rook-ceph-mgr-dashboard
            port:
              number: 8443
EOF
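
Since rook-ceph.com is a placeholder hostname, it has to resolve to your ingress controller's address. For a quick test you could add a hosts entry on your workstation (the IP below is purely illustrative; use your ingress controller's external IP):

echo "203.0.113.10  rook-ceph.com" | sudo tee -a /etc/hosts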

You can now browse to https://rook-ceph.com/ to log in to the dashboard. Note that this Ingress forwards to port 8443 with an HTTPS backend, which matches a dashboard running with SSL enabled; if SSL is disabled (as in the service listing above, where the dashboard listens on 7000), adjust the backend port and protocol accordingly.


This post is licensed under CC BY 4.0 by the author.