Rook-Ceph Installation Guide (On-Prem Kubernetes)
This guide explains how to install Ceph using Rook on a Kubernetes cluster and create an S3-compatible object store using the Ceph Dashboard.
It assumes:
- On-prem Kubernetes cluster
- One or more worker nodes with dedicated block devices
1. Prerequisites
1.1 Kubernetes requirements
- Kubernetes cluster is running
- kubectl is configured
- Helm is installed
1.2 Storage requirements
- A raw block device available on a worker node
  Example: /dev/blkdevice0
- The device must not:
  - Have a filesystem
  - Be mounted
  - Be used by LVM or any other storage system
Verify on the node:
lsblk
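For a closer look at the candidate device, the commands below (a sketch, assuming the device is /dev/blkdevice0 and root access on the node) list any existing filesystem signatures; the commented-out wipefs -a line shows how a previously used disk could be cleaned, but it is destructive, so only run it if you are certain the device can be erased.

```bash
# Show filesystem, label and mountpoint information for the device
lsblk -f /dev/blkdevice0

# List any existing partition-table or filesystem signatures
sudo wipefs /dev/blkdevice0

# Destructive: erase all signatures so Rook can claim the disk
# sudo wipefs -a /dev/blkdevice0
```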
2. Ceph Cluster Configuration
Create a file named cluster-local-values.yaml:
```yaml
cephClusterSpec:
  dataDirHostPath: /var/lib/rook-ceph
  dashboard:
    enabled: true
    ssl: false
  disruptionManagement:
    managePodBudgets: false
  storage:
    useAllDevices: false
    nodes:
      - name: worker-node-1
        devices:
          - name: /dev/blkdevice0
```
Explanation
- dataDirHostPath: Stores Ceph metadata and configuration on the node
- storage.nodes: Explicitly defines which disk Ceph is allowed to use
- Ceph will wipe and claim /dev/blkdevice0
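Once the cluster is installed (section 4), you can confirm which devices the deployed CephCluster is actually allowed to consume. The sketch below assumes the chart's default CephCluster name rook-ceph, which matches the example output in section 5.

```bash
# Print only the storage section of the CephCluster spec
kubectl -n rook-ceph get cephcluster rook-ceph -o jsonpath='{.spec.storage}'

# Or inspect the full resource
kubectl -n rook-ceph get cephcluster rook-ceph -o yaml
```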
3. Install the Rook Operator
Note: For detailed instructions and full documentation, see the official Rook documentation (https://rook.io).
Below is a short summary of the steps needed to set up Ceph with the default configuration.
Install the Rook operator in the rook-ceph namespace:
helm repo add rook-release https://charts.rook.io/release
helm install \
--create-namespace \
--namespace rook-ceph \
rook-ceph rook-release/rook-ceph
Verify:
kubectl get pods -n rook-ceph
You should see the Rook operator pod running.
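If you prefer to block until the operator is ready instead of polling, kubectl wait can be used; the label selector below assumes the chart's default app=rook-ceph-operator pod label.

```bash
# Wait up to 5 minutes for the operator pod to become Ready
kubectl -n rook-ceph wait pod \
  -l app=rook-ceph-operator \
  --for=condition=Ready \
  --timeout=300s
```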
4. Install the Ceph Cluster
Install the Ceph cluster using the configuration file:
helm upgrade --install \
--create-namespace \
--namespace rook-ceph \
rook-ceph-cluster \
rook-release/rook-ceph-cluster \
--set operatorNamespace=rook-ceph \
-f cluster-local-values.yaml
5. Verify Ceph Cluster Status
Check that the Ceph cluster is created:
kubectl -n rook-ceph get cephcluster
Wait until the cluster status becomes Ready.
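Rather than re-running the command by hand, you can watch the resource or wait on its phase. This is a sketch: the jsonpath form of kubectl wait needs kubectl 1.23 or newer, and the CephCluster name rook-ceph matches the chart default used in this guide.

```bash
# Watch the CephCluster until the PHASE column shows Ready
kubectl -n rook-ceph get cephcluster -w

# Or block until the phase is Ready (kubectl >= 1.23)
kubectl -n rook-ceph wait cephcluster/rook-ceph \
  --for=jsonpath='{.status.phase}'=Ready \
  --timeout=20m
```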
Check pods:
kubectl get pods -n rook-ceph
Example ready state:
NAME DATADIRHOSTPATH MONCOUNT AGE PHASE MESSAGE HEALTH EXTERNAL FSID
rook-ceph /var/lib/rook-ceph 1 2m1s Ready Cluster created successfully HEALTH_OK 53e56219-6088-4815-9c30-bb3e0c7af0dd
Example ready pods:
NAME READY RESTARTS STATUS IP NODE AGE
ceph-csi-controller-manager-5dc6b7cf95-zcnld 1/1 0 Running 10.1.163.28 rook-ceph-operator 3h9m
rook-ceph-crashcollector-rook-ceph-operator-74bb767fcb-gfhkf 1/1 0 Running 10.1.163.62 rook-ceph-operator 3h5m
rook-ceph-exporter-rook-ceph-operator-67bbd9ffc9-wfrg5 1/1 0 Running 10.1.163.34 rook-ceph-operator 3h5m
rook-ceph-mds-ceph-filesystem-a-65d68cdcc-dc2zt 2/2 0 Running 10.1.163.52 rook-ceph-operator 3h5m
rook-ceph-mds-ceph-filesystem-b-865c476dbf-bskfx 2/2 0 Running 10.1.163.35 rook-ceph-operator 3h5m
rook-ceph-mgr-a-7b644f9d66-tnv9x 2/2 0 Running 10.1.163.11 rook-ceph-operator 3h5m
rook-ceph-mon-a-7d5d6b6f7c-2nln4 2/2 0 Running 10.1.163.44 rook-ceph-operator 3h6m
rook-ceph-operator-84f6b7f9fb-l7c68 1/1 0 Running 10.1.163.39 rook-ceph-operator 3h9m
rook-ceph-osd-0-7966455c8c-2td6z 2/2 0 Running 10.1.163.3 rook-ceph-operator 3h5m
rook-ceph-osd-prepare-rook-ceph-operator-smf9t 0/1 0 Completed 10.1.163.21 rook-ceph-operator 3h5m
rook-ceph-rgw-ceph-objectstore-a-78997b57dd-gzr4j 2/2 0 Running 10.1.163.30 rook-ceph-operator 3h4m
rook-ceph.cephfs.csi.ceph.com-ctrlplugin-74fd6f86fbqh4 5/5 0 Running 10.1.163.26 rook-ceph-operator 3h6m
rook-ceph.cephfs.csi.ceph.com-nodeplugin-9bd8f 2/2 0 Running 192.168.29.130 rook-ceph-operator 3h6m
rook-ceph.rbd.csi.ceph.com-ctrlplugin-644c4cc86b-zr 5/5 0 Running 10.1.163.7 rook-ceph-operator 3h6m
rook-ceph.rbd.csi.ceph.com-nodeplugin-79567 2/2 0 Running 192.168.29.130 rook-ceph-operator 3h6m
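For a Ceph-level health check beyond pod status, the Rook toolbox pod can run ceph commands directly. The toolbox is not deployed by the steps above; this sketch assumes your chart version supports the toolbox.enabled value on rook-ceph-cluster.

```bash
# Enable the toolbox on the existing release (assumes toolbox.enabled is supported)
helm upgrade --namespace rook-ceph \
  rook-ceph-cluster rook-release/rook-ceph-cluster \
  --reuse-values --set toolbox.enabled=true

# Run ceph status from the toolbox pod
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
```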
6. Access the Ceph Dashboard
6.1 Port-forward the dashboard service
kubectl -n rook-ceph port-forward svc/rook-ceph-mgr-dashboard 7000:7000
Open a browser:
http://localhost:7000
6.2 Get dashboard login password
Username: admin
Retrieve the login password:
kubectl -n rook-ceph get secret rook-ceph-dashboard-password \
-o jsonpath="{.data.password}" | base64 -d && echo
7. Create an S3 User
- Log in to the Ceph Dashboard
- Go to Object Gateway
- Select Users
- Click Create
Enter:
- Username: mach5
- Display Name: mach5
After creation:
- Set Access Key: mach5
- Set Secret Key: mach5
Save the user.
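If you prefer the command line over the dashboard, the same user can be created with radosgw-admin from the Rook toolbox (see the toolbox sketch in section 5). The fixed keys below simply mirror the mach5 credentials used in this guide; treat the snippet as an optional alternative.

```bash
# Create the S3 user with fixed credentials (requires the toolbox pod)
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- \
  radosgw-admin user create \
    --uid=mach5 \
    --display-name=mach5 \
    --access-key=mach5 \
    --secret-key=mach5
```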
8. Create an S3 Bucket
- Go to Object Gateway
- Select Buckets
- Click Create
Enter:
- Bucket Name: mach5
- Owner: mach5
Create the bucket.
9. S3 Access Details
Applications can now connect using:
| Setting | Value |
|---|---|
| Access Key | mach5 |
| Secret Key | mach5 |
| Bucket | mach5 |
| Endpoint | http://rook-ceph-rgw-ceph-objectstore.rook-ceph.svc:80 |
The endpoint can be exposed via a Kubernetes Service or Ingress as needed.
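A quick way to test the credentials from a workstation, without exposing the gateway, is to port-forward the RGW service and point the AWS CLI at it. The service name comes from the endpoint above; the local port 8080 is an arbitrary choice.

```bash
# Forward the RADOS Gateway service to localhost
kubectl -n rook-ceph port-forward svc/rook-ceph-rgw-ceph-objectstore 8080:80 &

# Use the S3 credentials created earlier
export AWS_ACCESS_KEY_ID=mach5
export AWS_SECRET_ACCESS_KEY=mach5

# List buckets and upload a test object
aws --endpoint-url http://localhost:8080 s3 ls
aws --endpoint-url http://localhost:8080 s3 cp /etc/hostname s3://mach5/test-object
```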
10. Notes for Production Use
For production environments:
- Use at least 3 nodes
- Ensure low-latency networking
- Use real disks (SSD/NVMe recommended)
- Configure monitoring and backups
Summary
- Rook runs Ceph inside Kubernetes
- Ceph provides S3-compatible object storage
- Disk usage is explicit and safe
- The dashboard is used for administration
- Applications can use Ceph just like AWS S3
- Ceph Documentation: https://docs.ceph.com/en/latest/