Network Design
Virtual SAN uses the network to transport all information, including communication between the cluster nodes, as well as the VM I/O operations. Transport is accomplished by a specially created VMkernel port group, which must be configured on all hosts in the cluster, whether or not the hosts are contributing storage resources to the cluster.
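As a reference point, the sketch below shows how such a VMkernel adapter can be tagged for Virtual SAN traffic using the pyVmomi SDK; the vCenter address, credentials, host name, and vmk2 device are illustrative placeholders, not values from this design. Later sketches in this section assume a similar connected session.

```python
# Minimal pyVmomi sketch: tag an existing VMkernel adapter for Virtual SAN
# traffic. The vCenter address, credentials, host name, and vmk2 device are
# placeholders for illustration only.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='changeme',
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Locate the ESXi host by DNS name (vmSearch=False returns a HostSystem).
    host = content.searchIndex.FindByDnsName(dnsName='esxi01.example.com',
                                             vmSearch=False)
    # Mark vmk2 as the adapter that carries Virtual SAN traffic on this host;
    # repeat for every host in the cluster.
    host.configManager.virtualNicManager.SelectVnicForNicType('vsan', 'vmk2')
finally:
    Disconnect(si)
```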
When designing the network, consideration must be given to how much replication and communication traffic is running between hosts. With Virtual SAN, the amount of traffic directly correlates to the number of VMs that are running in the cluster, as well as how write-intensive the I/O is for the applications running.
As with other communications such as vSphere vMotion, this traffic should be isolated on its own Layer 2 network segment. You can do this with dedicated switches or ports, or by using a VLAN.
Virtual SAN Network Configuration
Virtual SAN requires its own VMkernel network configuration to use for synchronization and replication activities.
The following are the major decision points discussed in this section:
Network speed requirements
Type of virtual switch to be used
Jumbo frames
Multicast requirements
The recommendations in this section are made as best practices so that performance is not impacted due to lack of available network capacity. The amount of network activity is directly dependent on the number of virtual machines hosted on the Virtual SAN datastore as well as the activity in them.
Network Speed Requirements
Virtual SAN supports either 1 Gbps or 10 Gbps Ethernet for network uplinks. Depending on usage, the amount of activity on the Virtual SAN might overwhelm a 1 Gbps network and might be the limiting factor in I/O intensive environments such as the following:
Rebuild and synchronization operations
Highly intensive disk operations, such as cloning a VM
High-density environments with a large number of VMs
A 10 Gbps network is required to achieve the highest performance (IOPS). Without it, a significant decrease in array performance can be expected. VMware recommends that solutions using the Software-Defined Storage module provide a 10 Gbps Ethernet connection for use with Virtual SAN, for best performance.
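For a sense of scale, the following back-of-the-envelope Python estimate compares how long moving a given amount of component data would take at each link speed; the 2 TB data size and 80% usable-throughput factor are illustrative assumptions only.

```python
# Illustrative rebuild-time estimate; 2 TB and 80% link efficiency are
# assumptions for this example, not measured values.
def rebuild_hours(data_tb, link_gbps, efficiency=0.8):
    bytes_per_sec = link_gbps * 1e9 / 8 * efficiency  # usable throughput in bytes/s
    return data_tb * 1e12 / bytes_per_sec / 3600

print(f"1 Gbps:  {rebuild_hours(2, 1):.1f} h")   # ~5.6 hours
print(f"10 Gbps: {rebuild_hours(2, 10):.1f} h")  # ~0.6 hours
```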
Table 58. Network Speed Selection
| Design Quality | Option 1: 1 Gbps | Option 2: 10 Gbps | Comments |
|---|---|---|---|
| Availability | o | o | Neither design option impacts availability. |
| Manageability | o | o | Neither design option impacts manageability. |
| Performance | ↓ | ↑ | Faster network speeds increase Virtual SAN performance (especially in I/O intensive situations). |
| Recoverability | ↓ | ↑ | Faster network speeds increase the performance of rebuilds and synchronizations in the environment, so that VMs are properly protected from failures. |
| Security | o | o | Neither design option impacts security. |
Legend: ↑ = positive impact on quality; ↓ = negative impact on quality; o = no impact on quality.
Table 59. Network Speed Selection – Design Decisions
For this design, <Customer> has made the decisions listed in this table.

| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
|  | 10 Gbps networking will be used for the Virtual SAN network. | 10 Gbps networking provides optimal performance. |  |
Type of Virtual Switch
Virtual SAN supports use of vSphere standard vSwitch configurations or vSphere Distributed Switch configurations. The benefit of using vSphere Distributed Switch configurations is that they allow VMware vSphere Network I/O Control (NIOC) to be used, which allows for prioritization of bandwidth when there is contention in an environment.
Note: vSphere Distributed Switch instances using NIOC are an attractive option for environments that have a limited number of ESXi host network ports. NIOC allows the interface to be shared while prioritizing performance levels in contention scenarios.
VMware recommends that configurations using the Software-Defined Storage module use a vSphere Distributed Switch for the Virtual SAN port group. That way, priority can be assigned using NIOC to separate and provide the bandwidth needed for Virtual SAN traffic in the environment.
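As an illustration, the following sketch raises the NIOC shares of the vSAN system traffic class on an existing vSphere Distributed Switch. It assumes NIOC version 3 and a connected pyVmomi session as in the earlier sketch; the data-object paths should be verified against the pyVmomi version in use.

```python
# Sketch: raise the NIOC shares of the vSAN system traffic class on an
# existing vSphere Distributed Switch (dvs). Assumes NIOC version 3; verify
# the data-object paths against the pyVmomi version in use.
from pyVmomi import vim

def set_vsan_nioc_shares(dvs, share_value=100):
    alloc = vim.DistributedVirtualSwitch.HostInfrastructureTrafficResource(
        key='vsan',
        allocationInfo=vim.DistributedVirtualSwitch.HostInfrastructureTrafficResource.ResourceAllocation(
            shares=vim.SharesInfo(level='custom', shares=share_value)))
    spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
        configVersion=dvs.config.configVersion,  # required for any DVS reconfigure
        infrastructureTrafficResourceConfig=[alloc])
    return dvs.ReconfigureDvs_Task(spec)
```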
Table 60. Switch Types
| Design Quality | Option 1: vSphere Standard Switch | Option 2: vSphere Distributed Switch | Comments |
|---|---|---|---|
| Availability | o | o | Neither design option impacts availability. |
| Manageability | o | o | Neither design option impacts manageability. |
| Performance | ↓ | ↑ | The vSphere Distributed Switch has added controls, such as NIOC, which allow performance to be guaranteed for Virtual SAN traffic. |
| Recoverability | o | o | Neither design option impacts recoverability. |
| Security | ↓ | ↑ | The vSphere Distributed Switch has added built-in security controls to help protect traffic. |
Legend: ↑ = positive impact on quality; ↓ = negative impact on quality; o = no impact on quality.
Table 61. Virtual Switch Selection – Design Decisions
For this design, <Customer> has made the decisions listed in this table.

| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
|  | A vSphere Distributed Switch will be used for the configuration. | A vSphere Distributed Switch is required to use Network I/O Control with Virtual SAN. |  |
Jumbo Frames
Virtual SAN supports using jumbo frames for Virtual SAN network transmissions. The environment will be supported fully whether or not jumbo frames are used. The performance gains are often not significant enough to justify the underlying configuration necessary to enable them properly on the network.
VMware recommends that designs using the Software-Defined Storage module use jumbo frames for Virtual SAN only if the physical environment is already configured to support them, they are part of the existing design, or enabling them does not add significant complexity to the design.
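Where jumbo frames are adopted, a sketch along these lines sets MTU 9000 on the Virtual SAN VMkernel adapter of a host; the vmk2 device name is a placeholder, and the distributed switch and every physical switch in the path must also carry MTU 9000 end to end.

```python
# Sketch: enable jumbo frames on the Virtual SAN VMkernel adapter of one
# host. The vmk2 device name is a placeholder; properties left unset in the
# spec are not modified by UpdateVirtualNic.
from pyVmomi import vim

def set_vmk_mtu(host, device='vmk2', mtu=9000):
    spec = vim.host.VirtualNic.Specification(mtu=mtu)
    host.configManager.networkSystem.UpdateVirtualNic(device, spec)
```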
Table 62. Jumbo Frames Selection – Design Decisions
For this design, <Customer> has made the decisions listed in this table.

| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
|  | Jumbo frames will be used in the Virtual SAN environment. | Jumbo frames are already implemented for VXLAN. |  |
VLANs
VMware recommends segregating Virtual SAN traffic on its own VLAN. When multiple Virtual SAN clusters are used, each cluster should use a dedicated VLAN or segment for their traffic. This will prevent interference between clusters and aid in troubleshooting cluster configuration.
VMware recommends that the configuration use separate VLANs for Virtual SAN traffic when using the Software-Defined Storage Technical Materials.
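As an illustration, the following sketch tags an existing Virtual SAN distributed port group with a dedicated VLAN; the port group object (pg) is assumed to come from a pyVmomi session as in the earlier sketches, and VLAN 30 matches the design decision recorded below.

```python
# Sketch: assign the dedicated VLAN to the Virtual SAN distributed port
# group (pg). VLAN 30 matches the design decision recorded in Table 63.
from pyVmomi import vim

def set_vsan_vlan(pg, vlan_id=30):
    vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
        vlanId=vlan_id, inherited=False)
    port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        vlan=vlan)
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        configVersion=pg.config.configVersion,
        defaultPortConfig=port_cfg)
    return pg.ReconfigureDVPortgroup_Task(spec)
```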
Table 63. VLAN Selection – Design Decisions
For this design, <Customer> has made the decisions listed in this table.

| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
|  | A dedicated VLAN 30 will be used for Virtual SAN traffic. |  |  |
Multicast Requirements
Virtual SAN requires that IP multicast be enabled on the Layer 2 physical network segment utilized for intra-cluster communication. Layer 2 multicast traffic can be limited to specific port groups by using IGMP (v3) snooping. As a best practice, VMware recommends not implementing multicast flooding across all ports.
Note: Virtual SAN does not require Layer 3 multicast for any network communication.
BCDR and Teaming Considerations
Business continuity and disaster recovery (BCDR) is critical in any environment in case of a network failure. Virtual SAN supports teaming configurations for network cards to improve the availability and redundancy of the network.
Note: Virtual SAN does not currently leverage teaming of network adapters for the purpose of bandwidth aggregation.
For a predictable level of performance, VMware recommends the use of multiple network adapters in either of the following configurations:
An active-passive configuration where an explicit failover is performed. Normally, this is used when the load balancing mechanism is set to route based on the originating virtual port ID.
An active-active configuration where the physical network is using a Link Aggregation Control Protocol (LACP) port channel configuration. Normally, one of the following algorithms is used in this configuration:
Route based on IP hash
Route based on physical network adapter load
VMware recommends that configurations use an active-active configuration with route based on physical NIC load for teaming in the environment when using the Software-Defined Storage Technical Materials. In this configuration, no network card sits idle waiting for a failure to occur, and bandwidth is aggregated, although Virtual SAN does not currently leverage this capability. VMware also assumes that LACP has been, or can be, configured in the environment.
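The sketch below applies the recommended load-based teaming policy to the Virtual SAN distributed port group; substitute 'loadbalance_srcid' for route based on the originating virtual port ID. The port group object (pg) is again assumed to come from an existing pyVmomi session.

```python
# Sketch: apply an active-active, load-based teaming policy to the Virtual
# SAN distributed port group (pg); use 'loadbalance_srcid' instead for route
# based on the originating virtual port ID.
from pyVmomi import vim

def set_vsan_teaming(pg, policy='loadbalance_loadbased'):
    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
        policy=vim.StringPolicy(value=policy, inherited=False),
        inherited=False)
    port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        uplinkTeamingPolicy=teaming)
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        configVersion=pg.config.configVersion,
        defaultPortConfig=port_cfg)
    return pg.ReconfigureDVPortgroup_Task(spec)
```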
Table 64. NIC Teaming and Policy
| Design Quality | Option 1: Active-Active | Option 2: Active-Passive | Comments |
|---|---|---|---|
| Availability | ↑ | ↑ | Using teaming, regardless of the option, will increase the availability of the environment. |
| Manageability | o | o | Neither design option impacts manageability. |
| Performance | ↑ | o | An active-active configuration can send traffic across either NIC, increasing the available bandwidth. Virtual SAN, however, does not currently utilize this capability. This provides a benefit if the NICs are shared among traffic types and NIOC is used. |
| Recoverability | o | o | Neither design option impacts recoverability. |
| Security | o | o | Neither design option impacts security. |
Legend: ↑ = positive impact on quality; ↓ = negative impact on quality; o = no impact on quality.
Table 65. NIC Teaming and Policy Selection – Design Decisions
For this design, <Customer> has made the decisions listed in this table.

| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
|  | The hosts use an active-passive configuration with route based on the originating virtual port ID. |  |  |