---
updated: 2022-06-22
---

## Objective

This guide explains how to access your **OVHcloud Ceph cluster** from a machine configured as an **RBD client**. It describes how to prepare your environment, configure network access, and connect securely to your **Cloud Disk Array**.

## Requirements

Before proceeding, make sure that:

- You have a [Cloud Disk Array](/links/storage/cloud-disk-array) solution
- You have created a pool on your cluster. See our guide [Cloud Disk Array - Create a pool](/pages/storage_and_backup/block_storage/cloud_disk_array/ceph_create_a_pool)
- Your client machine’s public or private IP is allowed in the Access Control List (ACL) of your Ceph cluster. See our guide [Cloud Disk Array - IP ACL creation](/pages/storage_and_backup/block_storage/cloud_disk_array/ceph_create_an_ip_acl)
- You have the following credentials (available in the OVHcloud Control Panel):
  - Cluster monitor IPs
  - Ceph username (`client.<username>`)
  - Secret key (keyring content)

## Instructions

### Installing Ceph on the Client Machine

For **Debian/Ubuntu** distributions:

```bash
sudo apt-get update
sudo apt-get -y install ceph ceph-common
```

For **RHEL/CentOS** distributions:

```bash
sudo yum install -y ceph-common
```
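After installation, you can confirm that the client tools are available. A minimal check (the version string printed depends on your distribution's packages):

```shell
# Sanity check: confirm the RBD client tools are on the PATH.
if command -v rbd >/dev/null 2>&1; then
    rbd --version
else
    echo "rbd not found: install ceph-common first"
fi
```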

### Retrieve Connection Details

Access the [OVHcloud Control Panel](/links/manager) and navigate to your **Cloud Disk Array service**.

Under **Overview**:

- Locate the monitor IPs of your Ceph cluster.

Under **Users**:

- Find the Ceph username and key required for authentication.

> [!primary]
>
> **Note:** If no users exist yet, follow these guides:
>
> - [Cloud Disk Array - User creation](/pages/storage_and_backup/block_storage/cloud_disk_array/ceph_create_a_user)
> - [Change user rights](/pages/storage_and_backup/block_storage/cloud_disk_array/ceph_change_user_rights)
>

### Configure the Client

Create or edit the file `/etc/ceph/ceph.conf` with the following content:

```ini
[global]
mon_host = <MONITOR_IP_1>:6789, <MONITOR_IP_2>:6789, <MONITOR_IP_3>:6789
```

> [!primary]
>
> **Note:** The default Ceph monitor port is `6789` (Messenger v1). Some clusters also expose `3300` for Messenger v2.
>

Create a keyring file for your Ceph user at `/etc/ceph/ceph.client.<username>.keyring`:

```ini
[client.<username>]
key = <your_secret_key>
```

Ensure the keyring file has restricted permissions for security:

```bash
sudo chmod 600 /etc/ceph/ceph.client.<username>.keyring
```
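The two files above can also be generated from a small script. The sketch below writes them into a local `./ceph-demo` directory so it can be run without root privileges; point `CONF_DIR` at `/etc/ceph` (with `sudo`) on a real client, and replace the placeholder monitor IPs, username, and key with your own values:

```shell
# Minimal sketch: generate ceph.conf and a user keyring.
# CONF_DIR, the monitor IPs, the user name, and the key are placeholders.
CONF_DIR=./ceph-demo
CEPH_USER=myuser          # hypothetical user name
mkdir -p "$CONF_DIR"

cat > "$CONF_DIR/ceph.conf" <<'EOF'
[global]
mon_host = 203.0.113.10:6789, 203.0.113.11:6789, 203.0.113.12:6789
EOF

cat > "$CONF_DIR/ceph.client.$CEPH_USER.keyring" <<'EOF'
[client.myuser]
key = AQC...placeholder...==
EOF

# Restrict access to the secret key.
chmod 600 "$CONF_DIR/ceph.client.$CEPH_USER.keyring"
```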

### Test the Connection and Configuration

Verify that the client can successfully connect to the Ceph cluster:

```bash
ceph -s --id <username>
```

If the configuration is correct, the command returns the current cluster status.

To further validate the setup, list the images available in your pool:

```bash
rbd -n client.<username> list <pool_name>
```

An empty result indicates that no images have been created yet. If an error occurs, review the configuration files and credentials to ensure they are correct.

### Create, Map, and Mount an RBD Volume

A Ceph pool cannot be mounted directly. You must first create an RBD image within the pool and then map it to a block device.

Create an RBD image:

```bash
rbd -n client.<username> create <pool_name>/<image_name> \
-s <size_in_MB> \
--image-format 2 \
--image-feature layering
```

Verify image creation:

```bash
rbd -n client.<username> list <pool_name>
```

Map the image to a block device:

```bash
sudo rbd -n client.<username> map <pool_name>/<image_name>
```

Verify the mapping:

```bash
rbd showmapped
```

Format the block device (XFS example):

```bash
sudo mkfs.xfs /dev/rbd0
```

Mount the filesystem:

```bash
sudo mkdir -p /mnt/<mount_point>
sudo mount /dev/rbd0 /mnt/<mount_point>
df -h /mnt/<mount_point>
```
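If the volume should survive reboots, the mapping can be made persistent with the `rbdmap` service shipped with `ceph-common`. A sketch, assuming the user, pool, and image placeholders used above (adjust names and paths to your setup):

```ini
# Example /etc/ceph/rbdmap entry — one image to map at boot per line:
<pool_name>/<image_name> id=<username>,keyring=/etc/ceph/ceph.client.<username>.keyring
```

A matching `/etc/fstab` entry can then mount the udev symlink `/dev/rbd/<pool_name>/<image_name>` with the `noauto,_netdev` options, so the mount waits for the network and for `rbdmap` to run; enable the service with `sudo systemctl enable rbdmap`.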

You can now start using your Ceph block storage.

### Unmount and Unmap the RBD Volume

Before detaching an RBD image, ensure the filesystem is properly unmounted:

```bash
sudo umount /mnt/<mount_point>
sudo rbd unmap /dev/rbd0
```
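These two commands can be wrapped in a small guard so each step only runs when the volume is actually attached. A minimal sketch, assuming the mount point and device used above:

```shell
# Hypothetical teardown helper: skip steps that are already done.
MNT=/mnt/rbd      # assumption: the mount point chosen earlier
DEV=/dev/rbd0     # assumption: the device reported by 'rbd showmapped'

# Unmount only if the filesystem is currently mounted.
if mountpoint -q "$MNT"; then
    sudo umount "$MNT"
fi

# Unmap only if the block device still exists.
if [ -b "$DEV" ]; then
    sudo rbd unmap "$DEV"
fi
```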

The RBD image is now safely detached from the client.

### Notes and Best Practices

- Always use the monitor IP addresses provided in the OVHcloud Control Panel.
- Avoid storing sensitive information in plain-text configuration files.
- For Kubernetes environments, use the **CSI RBD driver** with the same configuration and credentials.

## Go further
