High-Level Shared Storage Design Guidelines
At a high level, the storage design should follow these recommendations:
The storage design must be optimized to meet the diverse needs of applications, services, administrators, and users at <Customer>.
The goal is to strategically align business applications and the storage infrastructure to reduce costs, boost performance, improve availability, provide security, and enhance functionality.
Use the information gathered from the current state analysis and SME interviews at <Customer> to match application data to tiers of storage.
Each tier of storage has different performance, capacity, and availability characteristics.
Designing multiple storage tiers is cost-efficient, because not every application requires the most expensive, high-performance, highly available storage.
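The tier-matching exercise described above can be sketched as a simple lookup from application requirements to the cheapest qualifying tier. The tier names, latency and availability thresholds below are hypothetical placeholders for illustration, not <Customer> data:

```python
# Hypothetical sketch of matching application data to storage tiers.
# Tier definitions are illustrative placeholders, not <Customer> data.

TIERS = [
    # (name, delivered_latency_ms, delivered_availability_pct, relative_cost)
    ("Tier 1 - Flash/vSAN", 2, 99.99, "high"),
    ("Tier 2 - FC/iSCSI", 10, 99.9, "medium"),
    ("Tier 3 - NFS/NAS", 30, 99.0, "low"),
]

def match_tier(required_latency_ms, required_availability_pct):
    """Return the cheapest tier that still meets both requirements."""
    for name, latency, availability, _cost in reversed(TIERS):
        if latency <= required_latency_ms and availability >= required_availability_pct:
            return name
    # Fall back to the highest tier if no cheaper tier qualifies.
    return TIERS[0][0]

# An OLTP database needing <= 5 ms latency at 99.9% lands on Tier 1;
# a file archive tolerating 60 ms at 98% fits Tier 3.
print(match_tier(5, 99.9))   # Tier 1 - Flash/vSAN
print(match_tier(60, 98.0))  # Tier 3 - NFS/NAS
```

In practice the inputs come from the current state analysis and SME interviews, and the tier attributes come from the storage platforms selected in the sections that follow.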
The design also needs to take into consideration the different types of storage available. Storage attributes for everything from price points to performance levels can factor into choosing the storage for the environment. The following sections discuss guidelines for using the different types of storage available.
Storage Platforms
VMware offers the ability to use both local and shared storage with ESXi hosts. Shared storage is the cornerstone for being able to provide services with vSphere such as high availability and optimum resource allocation (using vSphere vMotion). In addition to performance and functionality, the decision to implement one storage technology over another can be based on such considerations as:
The organization’s current in-house expertise and installation base.
The cost, including both capital and long-term operational expenses.
The organization’s current relationship with a storage vendor.
The following is a list of the types of storage that can be used:
Local storage – Storage connected directly to an ESXi host using interfaces such as SCSI or SAS. Because local storage is not shared, it cannot support vSphere features that require shared storage, such as vSphere HA and vSphere vMotion.
Fibre Channel, NFS, FCoE, and iSCSI are mature and viable options to support virtual machine needs. The block protocols (Fibre Channel, FCoE, and iSCSI) use traditional VMFS volumes to provide shared storage access in the environment, while NFS provides file-level shared access.
Software-defined storage types:
Virtual SAN – A software-based distributed storage platform that combines the compute and storage resources of vSphere hosts, aggregating local flash devices and magnetic disks into a single shared datastore. This solution makes software-defined storage a reality for VMware customers. However, hardware choices must be taken into account when sizing and designing a Virtual SAN cluster. These considerations are covered in depth in the SDDC Assess, Design and Deploy, Software-Defined Storage Technical Materials.
VMware vSphere Virtual Volumes™ based storage – vSphere Virtual Volumes bring the functionality of Virtual SAN to shared storage. With vSphere Virtual Volumes, an individual virtual machine, not the datastore, becomes the unit of storage management, allowing for greater flexibility and granularity of policy-based management. Abstract storage containers replace traditional storage volumes based on LUNs or NFS shares.
Comparing Types of Storage
The following table compares types of storage.
Table 40. Network Shared Storage Supported by ESXi
| Technology | Protocols | Transfers | Interface |
|---|---|---|---|
| Fibre Channel | FC/SCSI | Block access of data/LUN | FC HBA |
| Fibre Channel over Ethernet | FCoE/SCSI | Block access of data/LUN | Converged network adapter (hardware FCoE) |
| iSCSI | IP/SCSI | Block access of data/LUN | iSCSI HBA or iSCSI-enabled NIC (hardware iSCSI) |
| NAS | IP/NFS | File-level (no direct LUN access) | Network adapter |
| Virtual SAN | IP | Block access of data | Network adapter |
| vSphere Virtual Volumes | FC/FCoE/iSCSI/NFS/SCSI | Block or file access, depending on the underlying protocol | Same as the protocol used |
Table 41. vSphere Features Supported by Storage Type
| Type | Boot VM | vSphere vMotion | Datastore | RDM | VM Cluster | vSphere HA/DRS | Storage APIs - Data Protection | Cinder API |
|---|---|---|---|---|---|---|---|---|
| Local storage | Yes | No | VMFS | No | Yes | No | Yes | Yes* |
| Fibre Channel | Yes | Yes | VMFS | Yes | Yes | Yes | Yes | Yes* |
| iSCSI | Yes | Yes | VMFS | Yes | No | Yes | Yes | Yes* |
| NAS over NFS | Yes | Yes | NFS | No | No | Yes | Yes | Yes* |
| Virtual SAN | Yes | Yes | vSAN | No | No | Yes | Yes | Yes* |
| vSphere Virtual Volumes | Yes | Yes | VVol | No | No | Yes | Yes | No |
*Cinder integrates with the vSphere datastore through the VMware VMDK driver. For more information, see the OpenStack documentation (http://docs.openstack.org/kilo/config-reference/content/vmware-vmdk-driver.html).
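The support matrix in Table 41 can also be encoded for quick, automated checks during design reviews (for example, "which storage types support every feature this workload needs?"). A minimal sketch, with a subset of the table's columns transcribed from above:

```python
# Feature support per storage type, transcribed from Table 41
# (subset of columns for illustration).
SUPPORT = {
    "Local Storage":           {"vMotion": False, "RDM": False, "HA/DRS": False},
    "Fibre Channel":           {"vMotion": True,  "RDM": True,  "HA/DRS": True},
    "iSCSI":                   {"vMotion": True,  "RDM": True,  "HA/DRS": True},
    "NAS over NFS":            {"vMotion": True,  "RDM": False, "HA/DRS": True},
    "Virtual SAN":             {"vMotion": True,  "RDM": False, "HA/DRS": True},
    "vSphere Virtual Volumes": {"vMotion": True,  "RDM": False, "HA/DRS": True},
}

def storage_options(required_features):
    """Return the storage types that support every required feature."""
    return [
        storage for storage, features in SUPPORT.items()
        if all(features[f] for f in required_features)
    ]

# Which storage types allow both vMotion and raw device mappings?
print(storage_options(["vMotion", "RDM"]))  # ['Fibre Channel', 'iSCSI']
```

This is only a lookup over the published matrix; actual feature availability should always be validated against the vSphere release in use.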