Shared Storage Physical Design

This section describes the physical storage design of the environment.

NFS Storage Design

NFS storage access is provided over an Ethernet network. This connection is provided by an NFS client built into ESXi. The client uses the Network File System (NFS) protocol over TCP/IP to access a designated NFS volume that is located on a NAS server. The ESXi host can mount the volume and use it for its storage needs. ESXi 6.0 supports versions 3 and 4.1 of the NFS protocol.
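
As an illustration, the following is a minimal pyVmomi sketch of mounting an NFS export as a datastore on an ESXi host. The vCenter Server address, credentials, host name, NAS server, export path, and datastore name are placeholders, not values from this design.

```python
# Minimal sketch (placeholder names): mount an NFS 3 or NFS 4.1 export as a
# datastore on an ESXi host managed by vCenter Server, using pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab only; use valid certificates in production
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="CHANGE_ME", sslContext=context)
content = si.RetrieveContent()

# Locate the ESXi host that will mount the export.
host = content.searchIndex.FindByDnsName(dnsName="esxi01.example.local", vmSearch=False)

spec = vim.host.NasVolume.Specification(
    remoteHost="nas01.example.local",   # NAS server exporting the share
    remotePath="/exports/vio_mgmt",     # exported path on the NAS server
    localPath="nfs-vio-mgmt",           # datastore name as seen by the host
    accessMode="readWrite",             # use "readOnly" for read-only exports
    type="NFS",                         # use "NFS41" to mount the share as NFS 4.1
)

datastore = host.configManager.datastoreSystem.CreateNasDatastore(spec)
print("Mounted datastore:", datastore.name)

Disconnect(si)
```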

Performance, as with any storage system, is one of the major factors to consider when designing the NFS infrastructure in the environment. Take the following points into account when designing the NFS configuration; a scripted check of the protocol-version and access-mode points is sketched after this list:

Use a vSphere Distributed Switch when possible for connectivity to the NFS storage array. This allows priority to be set for storage traffic with Network I/O Control.

Check that the NFS server exports a particular share as either NFS 3 or NFS 4.1, but does not provide both protocol versions for the same share. This policy needs to be enforced by the server, because ESXi does not prevent mounting the same share through different NFS versions.

NFS 3 and non-Kerberos NFS 4.1 do not support the delegate user functionality that enables access to NFS volumes using non-root credentials. If you use NFS 3 or non-Kerberos NFS 4.1, each host must have root access to the volume. Different storage vendors have different methods of enabling this functionality, but typically this is done on the NAS servers by using the no_root_squash option. If the NAS server does not grant root access, you might still be able to mount the NFS datastore on the host. However, you will not be able to create any virtual machines on the datastore.

If the underlying NFS volume on which files are stored is read-only, make sure that the volume is exported as a read-only share by the NFS server, or configured as a read-only datastore on the ESXi host. Otherwise, the host considers the datastore to be read-write and might not be able to open files.

ESXi supports Layer 2 and Layer 3 network switches. If you use Layer 3 switches, ESXi hosts and NFS storage arrays must be on different subnets and the network switch must handle the routing information. Use of Layer 3 switches is not recommended because routing adds latency and the VMkernel TCP/IP stack supports only a single default gateway.
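
The protocol-version and access-mode checks above can be scripted. The following is a minimal pyVmomi sketch, assuming a `host` object (vim.HostSystem) obtained as in the previous example; it reports how each NFS datastore is mounted and warns if the same export is mounted through both NFS 3 and NFS 4.1.

```python
# Minimal sketch: report the protocol version and access mode of every NFS
# datastore mounted on a host. Assumes `host` is a vim.HostSystem retrieved
# as in the previous example.
from pyVmomi import vim

def report_nfs_mounts(host):
    seen = {}  # (remoteHost, remotePath) -> protocol type already observed
    for fs in host.config.fileSystemVolume.mountInfo:
        volume = fs.volume
        if not isinstance(volume, vim.host.NasVolume):
            continue  # skip VMFS, vSAN, and other volume types
        share = (volume.remoteHost, volume.remotePath)
        print(f"{volume.name}: {volume.type} export {share[0]}:{share[1]} "
              f"mounted {fs.mountInfo.accessMode}")
        if share in seen and seen[share] != volume.type:
            print(f"  WARNING: {share[0]}:{share[1]} is mounted through "
                  f"both {seen[share]} and {volume.type}")
        seen[share] = volume.type
```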

In addition to performance, the following guidelines can be used to help avoid common configuration problems with NFS arrays or servers:

Make sure that NFS servers or arrays used are listed in the VMware hardware compatibility list (HCL). The VMware HCL lists not only the supported arrays, but also the correct firmware versions to use with them.

When configuring NFS storage, follow the recommendations of your storage vendor.

NFS Storage Access Control

Several methods are available to control access to an NFS datastore. The method to implement depends on how granular the access control must be.

Network segmentation provides the least granular access control: it blocks an entire host from reaching the NFS array, so the host administrator has no option to mount or not mount individual datastores from the array.

If network segmentation is not used, the administrator of the host has the choice to mount or not mount specific NFS datastores from the array. However, all hosts in a VMware cluster typically need access to the same storage resources for virtual machine migration and failover to work properly.

Datastore permissions provide the most specific access control. An administrator can use datastore permissions in the vCenter Server inventory to control access per user or group.
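
As an illustration of the datastore-permission approach, the following is a minimal pyVmomi sketch that assigns a role to a group on a single datastore in the vCenter Server inventory. The datastore name, group, and role name used here are placeholders, not values from this design.

```python
# Minimal sketch (placeholder names): grant a role to a group on one datastore
# object through vCenter Server permissions. Assumes `content` is the
# ServiceInstance content obtained as in the first example.
from pyVmomi import vim

def grant_datastore_permission(content, datastore_name, principal, role_name,
                               is_group=True):
    # Locate the datastore object in the vCenter Server inventory.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    try:
        datastore = next(ds for ds in view.view if ds.name == datastore_name)
    finally:
        view.DestroyView()

    # Look up the role by name (for example "ReadOnly" or a custom role).
    role_id = next(r.roleId for r in content.authorizationManager.roleList
                   if r.name == role_name)

    permission = vim.AuthorizationManager.Permission(
        principal=principal,   # for example "EXAMPLE\\nfs-datastore-users"
        group=is_group,
        roleId=role_id,
        propagate=False,       # apply to the datastore object only
    )
    content.authorizationManager.SetEntityPermissions(
        entity=datastore, permission=[permission])
```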

Storage Redundancy

Providing redundancy in storage design is essential to increase availability, scalability, and performance. Configure the following areas for storage redundancy:

Redundant storage network components

Redundant storage paths

Redundant storage processors

Redundant LUN configurations (RAID)

The following guidelines should be used for redundant storage and multipathing configurations; a scripted check of path counts and path selection policies is sketched after this list:

Always configure storage multipathing.

For availability, VMware recommends a minimum of two active paths to a LUN, although four paths are better because this configuration accommodates more types of failures:

HBA0 to storage processor SPA0

HBA0 to storage processor SPB1

HBA1 to storage processor SPA1

HBA1 to storage processor SPB0

Configure paths using two HBAs or NICs, two switches, and two array storage processors. With this design, data is still available when an HBA, switch, or storage processor fails.

Multiple paths provide both availability and load balancing for performance.

VMware recommends using two single-port storage HBAs, rather than a single dual-port storage HBA, so that if a single HBA fails, storage access is not lost.

Verify that the multipath policy matches the type of array:

Use Most Recently Used (MRU) for active-passive arrays (avoids path thrashing).

Use Fixed or Round Robin as an option for active-active arrays.

Use MRU or Round Robin as an option for ALUA arrays.

Use MRU for virtual port storage arrays.

Always consult the array vendor’s documentation for specific multipathing policy support.

Third-party multipathing plug-ins might be available for your array. Follow the vendor's recommendations when using these plug-ins.

For guidance specific to Virtual SAN, see the Software-Defined Storage Technical Materials.
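
The path-count and path selection policy guidance above can be verified programmatically. The following is a minimal pyVmomi sketch, assuming a `host` object obtained as in the earlier examples; the VMW_PSP_RR policy and the two-path threshold are assumptions for illustration only and must be confirmed against the array vendor's documentation (active-passive arrays, for example, should keep MRU).

```python
# Minimal sketch: report active path counts and the current path selection
# policy for each LUN, and illustrate how a policy could be changed. The
# desired policy is an assumption for illustration; confirm the supported
# policy with the array vendor before applying it.
from pyVmomi import vim

MIN_ACTIVE_PATHS = 2  # assumption: at least two active paths per LUN

def check_and_set_multipathing(host, desired_policy="VMW_PSP_RR", apply_changes=False):
    storage_system = host.configManager.storageSystem
    device = host.config.storageDevice

    # Map internal LUN keys to human-readable canonical names (for example naa.*).
    canonical = {lun.key: lun.canonicalName for lun in device.scsiLun}

    for lun in device.multipathInfo.lun:
        active = [p for p in lun.path if p.state == "active"]
        name = canonical.get(lun.lun, lun.id)
        print(f"{name}: {len(active)} active path(s), policy {lun.policy.policy}")

        if len(active) < MIN_ACTIVE_PATHS:
            print(f"  WARNING: fewer than {MIN_ACTIVE_PATHS} active paths")

        # Only change the policy when explicitly requested; active-passive
        # arrays should generally keep VMW_PSP_MRU.
        if apply_changes and lun.policy.policy != desired_policy:
            policy = vim.host.MultipathInfo.LogicalUnitPolicy(policy=desired_policy)
            storage_system.SetMultipathLunPolicy(lunId=lun.id, policy=policy)
```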

Shared Storage Design Specifications

This section details the shared storage platform proposed for <Customer>.

Table 42. Storage Type Specification

Attribute | Specification
Storage type | NFS
Number of storage processors | Unknown
Number of switches | 1-2

Shared Storage Physical Design Decisions

The following tables list the physical storage design decisions made for this architecture design.

Table 43. Shared Storage Physical Design for Management Cluster Decisions

For this design, <Customer> has made the decisions listed in this table.

Decision ID | Design Decision | Design Justification | Design Implication
| NFS storage will be used for the management cluster. | |
| Jumbo frames are enabled. | Storage performance improvement. |

Make sure that the NFS datastore can handle the high IOPS generated by snapshots of all the VMware Integrated OpenStack cluster management instances. Otherwise, snapshot operations can trigger timeout errors during deployment or upgrades.

Table 44. Shared Storage Physical Design for the Compute and Edge Clusters Decisions

For this design, <Customer> has made the decisions listed in this table.

Decision ID | Design Decision | Design Justification | Design Implication
| NFS storage will be used for the edge and compute clusters. | |
| Jumbo frames are enabled. | Storage performance improvement. |
| Dedicated storage bandwidth will be provided. | VMware Integrated OpenStack cluster management VMs require high IOPS support. |
| RDMs will not be used. | No targeted use cases for using RDMs. | None.
