OpenStack Root Disk/Volumes (Cinder) to vSphere Datastore Mapping
Cinder, the OpenStack block storage service, provides persistent block storage (volumes) to instances that are managed by the OpenStack compute service (Nova).
There are four different types of disks exposed to instances (or VMs):
Root disk ‒ The disk the instance (VM) boots from in most cases (the exception is when the instance is booted from a Cinder volume). The root disk is a clone of the image selected by the user. By default, with the vSphere Nova driver, this is a linked clone of the image in the image cache of the selected datastore. The root disk is ephemeral, so it is deleted when the instance is terminated.
Ephemeral disk ‒ Second disk that can be defined in a flavor. By default, flavors in VMware Integrated OpenStack do not have a second ephemeral disk. This additional disk is created as an empty disk alongside the root disk, and it is deleted when the instance is terminated.
Swap disk ‒ A third ephemeral disk, usually small (sub-gigabyte), that can be attached to an instance (VM). The Nova vSphere driver does not currently support swap disks.
Cinder volumes ‒ These disks are also called persistent storage because they are not destroyed when the instance (VM) is terminated. Cinder volumes are usually held externally from the hypervisor. Volumes can be mapped to and unmapped from instances, and instances can also boot from volumes, in which case the instance root disk is the Cinder volume. In most cases, however, Cinder volumes are attached as secondary (data) disks to instances.
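To illustrate how these disk types map to an instance, the following is a minimal OpenStack CLI sketch. The flavor, image, volume, and network names, as well as the sizes, are placeholder examples and not part of this design:

# Flavor with a 40 GB root disk, an 80 GB ephemeral disk, and a 512 MB swap disk
# (swap disks are currently not supported by the Nova vSphere driver)
$ openstack flavor create --vcpus 2 --ram 4096 --disk 40 --ephemeral 80 --swap 512 example.flavor

# Boot from an image: the root disk is an ephemeral clone of the image
$ openstack server create --flavor example.flavor --image example-image --network example-net vm-01

# Boot from a Cinder volume: the volume acts as the persistent root disk
$ openstack server create --flavor example.flavor --volume example-boot-vol --network example-net vm-02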
Table 75. Block Storage Service Design Decisions
For this design, <Customer> has made the decisions listed in this table.

| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
| | Blade servers mainly serve the compute needs of general application workloads. These blade servers do not hold much local disk capacity, and the disks of general workload instances (VMs) are assumed to be placed on NFS-based vSphere datastores. | | |
| | Rack mount servers with larger local storage capacity, composed of SSDs, serve the high I/O needs of Hadoop/Couchbase workloads. The instances (VMs) of this workload type are assumed to have a root disk placed on NFS-based vSphere datastores, as well as one or more secondary disks that need to be placed on the local SSD drives. | | |
The OpenStack vSphere driver is configured by the VMware Integrated OpenStack deployment to use specific vSphere datastores on a per Nova-Compute basis.
Figure 12. OpenStack vSphere Driver Configuration
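As a sketch of the underlying settings (in VMware Integrated OpenStack these are managed through the deployment rather than edited by hand), the datastores a Nova-Compute node may use are selected through the [vmware] section of the Nova configuration. All values below are placeholders:

[vmware]
host_ip = <vcenter-ip>
host_username = <vcenter-user>
host_password = <vcenter-password>
cluster_name = <vsphere-cluster-name>
# Only datastores whose names match this regular expression are considered for placement
datastore_regex = nfs-ds-.*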
By default, the instances’ root disks and ephemeral disks are placed on datastores using the following placement logic:
If there is more than one vSphere datastore available in the cluster, and multiple datastores are configured to be used by Nova-Compute, the datastore that is attached to the highest number of hosts in the vSphere cluster is preferred. This ensures that vSphere DRS and HA functions are optimized.
If all available datastores have the same number of hosts attached, the datastore that has the most available capacity left is used.
The same root disk and ephemeral disk placement logic can be used in all availability zones/vSphere clusters, because the root disk will always be placed on NFS-based shared storage. Ephemeral disks, if used, are placed alongside the root disks on the same datastore.
Using a shared datastore for the root and ephemeral disks is beneficial for the following reasons:
Without a shared datastore, instances (VMs) cannot be live migrated (using vSphere vMotion) by vSphere DRS to rebalance the vSphere cluster.
Without a shared datastore, vSphere HA cannot function.
Because the Nova vSphere driver uses linked clones, each instance (VM) requires an expanded (flat) base disk image on the same datastore as the instance (VM) itself. If multiple datastores are used, for example, if all local datastores are exposed to Nova-Compute, each datastore might receive its own expanded copy of the base image, one per flavor. This wastes storage space and creates unnecessary network traffic.
The logic used by the vSphere Cinder driver to map a persistent volume to an instance (VM) is the following:
When a new volume is created, only a database entry is made in the Cinder database. No actual volume is created yet, and no storage space is claimed.
When the volume is attached to an instance (VM), a VMDK file is created on a datastore that is configured for use by Nova-Compute and is accessible to the ESXi host that hosts the instance (VM).
With this logic for Cinder volume creation, the same placement rules apply as for root disks and ephemeral disks: the datastore with the highest number of attached hosts and the lowest storage utilization is used. This means that local datastores, such as the local SSD datastores, would not be used, because their number of attached hosts is only one.
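The following minimal CLI sketch illustrates this behavior; the volume name, instance name, and size are placeholders:

# Creates only a record in the Cinder database; no VMDK exists yet and no capacity is consumed
$ openstack volume create --size 100 data-vol-01

# The backing VMDK file is created on a suitable datastore when the volume is attached
$ openstack server add volume vm-01 data-vol-01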
Using SSD-Based Datastores
The following configuration is needed to use the local SSD-based datastores. On the Hadoop/Couchbase clusters, the NFS-based shared datastores and all local SSD-based datastores are added as datastores to the Nova-Compute node.
Figure 13. Nova-Compute Node Datastore Overview
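As a sketch, assuming datastore naming conventions such as "nfs-ds-*" for the shared NFS datastores and "local-ssd-*" for the per-host SSD datastores, the datastore_regex from the earlier configuration example would be widened for these Nova-Compute nodes so that it matches both types of datastores:

[vmware]
# Match both the shared NFS datastores and the local SSD datastores
datastore_regex = (nfs-ds-.*|local-ssd-.*)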
Next, vCenter Storage Policy-Based Management (SPBM) is configured. First, the local SSD datastores are tagged with a label, for example, “spbm-gold”. Then a storage policy is created that matches the configured datastore tag.
Figure 14. vCenter Storage Policy-Based Management Overview
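One possible way to apply the tags is the govc CLI, shown here as an illustrative sketch; the category, tag, datacenter, and datastore names are placeholders. The tag-based storage policy itself, assumed here to be named “openstack-gold” so that it matches the Cinder volume type extra spec below, can be created in the vSphere Client under Policies and Profiles:

# Create a tag category and the "spbm-gold" tag
$ govc tags.category.create storage-tier
$ govc tags.create -c storage-tier spbm-gold

# Attach the tag to each local SSD datastore
$ govc tags.attach spbm-gold /dc01/datastore/local-ssd-esx01
$ govc tags.attach spbm-gold /dc01/datastore/local-ssd-esx02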
Finally, a new Cinder volume type is created that includes the storage policy information matching the local SSD datastores:
$ cinder type-create local-ssd
$ cinder type-key local-ssd set vmware:storage_profile=openstack-gold
When creating a new volume from the IaaS deployment/orchestration layer, the specific volume type “local-ssd” can now be selected. That way, the volume is created on the local SSD datastore instead of the shared NFS datastore.
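For example (the volume name, size, and instance name are illustrative only):

# Create a 200 GB volume of type "local-ssd" and attach it as a data disk to a Hadoop/Couchbase instance
$ cinder create --volume-type local-ssd --name hadoop-data-01 200
$ openstack server add volume hadoop-node-01 hadoop-data-01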
Because the local SSD datastores in the cluster are also made available to the Nova-Compute driver in this design, it is possible for root and ephemeral disks to be placed on the local SSD datastores if the shared NFS datastore happens to be unavailable. In normal operations, this will not happen, because the first decision criterion is based on the maximum number of attached hosts.
Design Decisions Regarding Cinder Volumes in OpenStack
Based on the requirements for the storage domain, the following VIO volume design decisions have been made.
Table 76. Cinder Volume Design Decisions
For this design, <Customer> has made the decisions listed in this table.

| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
| | All root disks and ephemeral disks of instances (VMs) will be placed on the shared NFS datastore. Nova-Compute will be configured to use only the NFS shared datastore. | Using a shared datastore reduces the overall amount of disk space used and reduces network traffic caused by image copies. | If local SSD performance is needed, a Cinder volume needs to be created and attached to the instance (VM). |
| | vSphere SPBM will be used to expose the local SSD datastores as a volume type for Cinder volumes. | SPBM is required for identifying the local SSD datastores when creating Cinder volumes. | Additional SPBM and Cinder volume type configuration needs to be done. |