In this guide, we will look at ways client data can be backed up and how to restore that data in the event it becomes necessary.

We suggest having a solid backup policy in place before data ever needs to be restored. This policy will depend on various factors, such as the importance of the data, how much data needs to be stored, how far back in time backups should be kept, which data to back up, cost, and any other factors relevant to your OpenStack cloud.

Ceph acts as the distributed storage backend for this deployment of OpenStack. Ceph is self-healing and has no single point of failure; however, it can still fail, although this is rare. Using more replicas decreases the likelihood of data loss at the expense of additional storage cost. This deployment of Ceph uses three replicas, which is generally recommended as a good starting point.
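
If you want to confirm the replica count, it can be queried per pool from one of the hardware nodes. The volumes pool is used here as an example, and the output below assumes the three-replica configuration described above:

# ceph osd pool get volumes size
size: 3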

If preventing data loss is of the utmost importance, it is recommended that RBD mirroring be set up within Ceph. This requires a second Ceph cluster, with data mirrored between the two. The additional Ceph cluster can be set up in another data center to further shrink the shared failure domain.
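
A full RBD mirroring setup is outside the scope of this guide, but as a rough sketch, mirroring is enabled per pool and each cluster registers the other as a peer. The pool name volumes, the peer name client.rbd-mirror-peer, the cluster name site-b, and the image name below are placeholders; an rbd-mirror daemon must be running on the second cluster, and journal-based mirroring also requires the journaling image feature (newer releases support snapshot-based mirroring instead), so consult the Ceph documentation for your release before setting this up.

Enable image-mode mirroring on the pool, on both clusters:

# rbd mirror pool enable volumes image

Register the other cluster as a peer:

# rbd mirror pool peer add volumes client.rbd-mirror-peer@site-b

Enable mirroring on an individual image:

# rbd mirror image enable volumes/volume-663419d7-df14-4472-9eb2-1f4c976103e9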


Where is client data currently stored?

All OpenStack client data is stored in Ceph pools.

This section will explain how to see the data stored in Ceph.

By running rados lspools from one of the OpenStack hardware nodes, you can see the individual Ceph pools and what data is stored in each pool.

The following are the Ceph pools where data is stored:

# rados lspools
device_health_metrics
images
volumes
vms
backups
metrics
manila_data
manila_metadata
.rgw.root
default.rgw.log
default.rgw.control
default.rgw.meta
default.rgw.buckets.index

The configuration for these pools is maintained using ceph-ansible.

You can see the contents of each pool by running rbd ls -l POOL_NAME.

An example listing the images pool:

# rbd ls -l images
NAME                                       SIZE     PARENT  FMT  PROT  LOCK
20b56b2d-1af5-46e1-a5c6-fa1f9a45245d       500 MiB            2
20b56b2d-1af5-46e1-a5c6-fa1f9a45245d@snap  500 MiB            2  yes
26a0fde5-69e7-4d85-ae4e-e167e295ecfa           0 B            2
26a0fde5-69e7-4d85-ae4e-e167e295ecfa@snap      0 B            2  yes


How to create backups of volumes

Using Horizon

Volumes can be backed up using Horizon. When backups are made, they are created within the backups Ceph pool.

To create a volume backup in Horizon, navigate to the Volumes tab, then to the nested Volumes tab. This displays the current volumes.

Find the dropdown next to the volume you want to back up and click the “Create Backup” button.

Fill out the relevant details, such as the backup name and an optional description, then submit the form.

The volume backup will initially show as in progress. Once complete, it appears in the backup listing with a status of available.

Using OpenStackClient

Volume backups can also be created using OpenStackClient.

The following explains how to list volumes and make a volume backup.

List volumes and obtain UUID of volume to back up:

$ openstack volume list --fit-width
+--------------------------------------+-----------------------+-----------+------+----------------------------------------+
| ID                                   | Name                  | Status    | Size | Attached to                            |
+--------------------------------------+-----------------------+-----------+------+----------------------------------------+
| 40011aaa-3875-4236-9fd0-fff44c1fad21 |                       | in-use    |   20 | Attached to centos8_test on /dev/vda   |
| 663419d7-df14-4472-9eb2-1f4c976103e9 | CentOS 8 (ce8-x86_64) | in-use    |   20 | Attached to image volume testing on    |
|                                      |                       |           |      | /dev/vda                               |
| e9d98e7b-5837-45bc-a0b1-5d9e50d7e686 |                       | in-use    |   20 | Attached to debian_test on /dev/vda    |
| 057b53e7-eba9-4d2d-bec3-184240c59b29 |                       | available |   20 |                                        |
| 15848ac7-67db-460c-8be1-be1dcbb286f0 |                       | available |   20 |                                        |
+--------------------------------------+-----------------------+-----------+------+----------------------------------------+


Create volume backup of volume UUID 663419d7-df14-4472-9eb2-1f4c976103e9:

$ openstack volume backup create 663419d7-df14-4472-9eb2-1f4c976103e9 --force
+-------+--------------------------------------+
| Field | Value                                |
+-------+--------------------------------------+
| id    | 0481a8ce-e571-45bc-b133-ac5496dce181 |
| name  | None                                 |
+-------+--------------------------------------+

Note: The --force flag is needed to make a backup of a volume that is in use.


List volume backups:

$ openstack volume backup list
+--------------------------------------+-----------------+---------------------+-----------+------+
| ID                                   | Name            | Description         | Status    | Size |
+--------------------------------------+-----------------+---------------------+-----------+------+
| 0481a8ce-e571-45bc-b133-ac5496dce181 | None            | None                | available |   20 |
+--------------------------------------+-----------------+---------------------+-----------+------+


Finally, for more detail on the volume backup that was created, use openstack volume backup show UUID, where UUID is the volume backup UUID:

$ openstack volume backup show 0481a8ce-e571-45bc-b133-ac5496dce181
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| availability_zone     | nova                                 |
| container             | backups                              |
| created_at            | 2020-12-15T16:55:39.000000           |
| data_timestamp        | 2020-12-15T16:55:39.000000           |
| description           | None                                 |
| fail_reason           | None                                 |
| has_dependent_backups | False                                |
| id                    | 0481a8ce-e571-45bc-b133-ac5496dce181 |
| is_incremental        | False                                |
| name                  | None                                 |
| object_count          | 0                                    |
| size                  | 20                                   |
| snapshot_id           | None                                 |
| status                | available                            |
| updated_at            | 2020-12-15T16:56:09.000000           |
| volume_id             | 663419d7-df14-4472-9eb2-1f4c976103e9 |
+-----------------------+--------------------------------------+
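
To confirm the backup was written to Ceph, you can list the backups pool the same way the images pool was listed earlier:

# rbd ls -l backups

An image whose name includes the source volume UUID should appear in the listing; the exact naming depends on the Cinder Ceph backup driver.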

Backing up Ceph pool data

Depending on your data protection needs, it may be useful to back up data in the Ceph pools to a third-party location, such as an Amazon S3 bucket.
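
One rough approach is to export an RBD image to a file with rbd export and then upload that file with a tool of your choice. The image, file, and bucket names below are placeholders, and the example assumes the AWS CLI is installed and configured with access to the bucket:

# rbd export volumes/volume-663419d7-df14-4472-9eb2-1f4c976103e9 /tmp/volume-backup.img
# aws s3 cp /tmp/volume-backup.img s3://example-backup-bucket/volume-backup.img

Exports of large volumes take considerable time and local disk space, so it may be worth compressing the file before uploading it.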



What to do in the event of a hard drive failure?

There is currently no system in place that monitors for hardware failures; however, this is being considered for future releases.

If you have determined a hard drive has failed, create a ticket from the Flex Metal Central control panel. The ticket will be routed to our data center team, who will replace the failed drive and alert you when this task is done.

For now, hardware monitoring is something that will need to be set up. Multiple monitoring solutions already exist; Icinga and Nagios are two options that immediately stand out.
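
Whichever tool you choose, a simple starting point is to have it periodically run a few checks against the cluster and its drives. The commands below are a minimal sketch: the OSD counts in the output are illustrative, /dev/sda is a placeholder device, and smartctl comes from the smartmontools package. Anything other than HEALTH_OK, or any OSD reported down, is often the first sign of a failing drive:

# ceph health
HEALTH_OK

# ceph osd stat
9 osds: 9 up, 9 in

# smartctl -H /dev/sda

The smartctl -H command reports the drive's SMART health assessment; a result other than PASSED is a good reason to open a ticket as described above.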