Logical Network Design
The following section discusses the logical network design of the environment.
VXLAN Transport Zones and vSphere Distributed Switch Options
A VXLAN transport zone represents a collection of vSphere clusters. The hosts in these clusters can communicate with each other via VXLAN.
ESXi hosts in clusters that are not part of a transport zone cannot communicate using VXLAN.
At <Customer>, there will be one transport zone that covers all compute and edge clusters.
For the vSphere Distributed Switch design, two options are supported by VIO (a verification sketch follows Figure 5):
One distributed virtual switch for all clusters (management, compute, and edge).
One distributed virtual switch for management and edge clusters and one distributed virtual switch for all compute clusters.
Figure 5. Virtual Distributed Switch and Transport Zone Design
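Once the switch layout is implemented, the cluster-to-switch mapping can be verified programmatically. The following is a minimal sketch using pyVmomi; the vCenter address, credentials, and certificate handling are hypothetical placeholders and not values from this design.

```python
# Minimal sketch (assumptions: pyVmomi installed, reachable vCenter, placeholder credentials).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.local",          # hypothetical address
                  user="administrator@vsphere.local",     # hypothetical account
                  pwd="changeme",
                  sslContext=ctx)
content = si.RetrieveContent()

# Report which distributed switches the hosts of each cluster are attached to.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
for cluster in view.view:
    dvs_names = set()
    for host in cluster.host:
        for proxy in host.config.network.proxySwitch:  # one entry per DVS the host has joined
            dvs_names.add(proxy.dvsName)
    print(f"{cluster.name}: {sorted(dvs_names)}")

Disconnect(si)
```

With the two-switch option, the output would show the management and edge clusters attached to one switch and the compute clusters attached to the other.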
VXLAN Replication Mode
Broadcast, unknown unicast, and multicast (BUM) traffic can be replicated by VXLAN in one of the following modes:
Unicast mode ‒ The source ESXi host replicates the packet and sends a copy to every other ESXi host (VTEP).
Multicast mode ‒ IGMP and multicast routing are required on the physical network components.
Hybrid mode ‒ Multicast within the local subnet (IGMP) and unicast to other subnets.
Hybrid mode is the recommended way to replicate BUM traffic because it uses multicast within the local subnet (IGMP snooping) and unicast replication to remote subnets. Hypervisor resources are conserved for other operations, and no multicast routing support is required on the physical network.
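In NSX for vSphere, the replication (control plane) mode is set on the transport zone. The snippet below is a sketch of what creating a transport zone with hybrid replication could look like against the NSX-V REST API; the endpoint path, XML payload shape, NSX Manager address, credentials, and cluster MoRef are assumptions drawn from the NSX-V 2.0 "vdn/scopes" API and should be checked against the product documentation.

```python
# Sketch only: assumes the NSX-V /api/2.0/vdn/scopes endpoint and the HYBRID_MODE value;
# the NSX Manager address, credentials, and cluster MoRef (domain-c10) are hypothetical.
import requests

NSX_MANAGER = "https://nsx-manager.example.local"   # hypothetical
AUTH = ("admin", "changeme")                          # hypothetical

payload = """
<vdnScope>
  <name>transport-zone-01</name>
  <clusters>
    <cluster><cluster><objectId>domain-c10</objectId></cluster></cluster>
  </clusters>
  <controlPlaneMode>HYBRID_MODE</controlPlaneMode>
</vdnScope>
"""

resp = requests.post(f"{NSX_MANAGER}/api/2.0/vdn/scopes",
                     data=payload,
                     headers={"Content-Type": "application/xml"},
                     auth=AUTH,
                     verify=False)   # lab use only
print(resp.status_code, resp.text)   # on success, the response carries the new vdnScope ID
```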
DNS Requirements
To simplify the management of vSphere, create DNS records for, at a minimum, all servers involved in vSphere management. This includes:
The ESXi hosts
The vCenter Server system(s)
The Platform Services Controller
Other services that might be involved in the infrastructure
IP storage addresses
To increase the availability of the DNS service, redundant DNS servers should be configured in the environment.
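A quick way to confirm that the management components resolve correctly is to test both forward and reverse lookups. The following is a minimal sketch; the FQDNs are hypothetical placeholders for the actual ESXi hosts, vCenter Server, and Platform Services Controller names in this environment.

```python
# Forward and reverse DNS check for vSphere management components.
# The FQDNs below are hypothetical placeholders.
import socket

fqdns = [
    "esxi01.example.local",
    "esxi02.example.local",
    "vcenter01.example.local",
    "psc01.example.local",
]

for fqdn in fqdns:
    try:
        ip = socket.gethostbyname(fqdn)            # forward lookup
        reverse, _, _ = socket.gethostbyaddr(ip)   # reverse lookup
        status = "OK" if reverse.lower().rstrip(".") == fqdn.lower() else "MISMATCH"
        print(f"{fqdn} -> {ip} -> {reverse} [{status}]")
    except socket.gaierror as err:
        print(f"{fqdn}: forward lookup failed ({err})")
    except socket.herror as err:
        print(f"{fqdn}: reverse lookup failed ({err})")
```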
Naming Conventions
Configure common port group names across hosts to support virtual machine migration and failover. Common port group names are required by vSphere features such as vSphere vMotion, vSphere HA, and vSphere Distributed Resource Scheduler.
It is best practice to use port group names that describe their use, for example, IP Storage, Management, or vSphere FT.
The choice of distributed virtual switches simplifies the creation of common port group names across hosts. An example of a port group naming convention is the following:
<network_name><_purpose><_switch_name_or_number>
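As an illustration of the convention, a small helper can assemble names from the three parts. The network names, purposes, and switch numbers below are hypothetical examples and only demonstrate the pattern.

```python
# Illustrative helper for the <network_name><_purpose><_switch_name_or_number> convention.
# All sample values are hypothetical.
def port_group_name(network_name: str, purpose: str, switch: str) -> str:
    """Build a port group name from its three components, joined by underscores."""
    return "_".join(part.lower().replace(" ", "-") for part in (network_name, purpose, switch))

examples = [
    ("vlan120", "management", "dvs01"),
    ("vlan130", "vmotion", "dvs01"),
    ("vlan200", "ip-storage", "dvs02"),
]

for parts in examples:
    print(port_group_name(*parts))
# vlan120_management_dvs01
# vlan130_vmotion_dvs01
# vlan200_ip-storage_dvs02
```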
Logical Network Design Decisions
The following table lists the logical network design decisions that <Customer> has made for this architecture design.
Table 39. Logical Network Design Decisions
| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
| | One distributed virtual switch for the management and edge clusters, and one vSphere Distributed Switch for all compute clusters. | Isolates the management network from the compute network. | |
| | A single transport zone will be used for the environment, encompassing all NSX Edge and payload clusters. It will be scaled to respect the limit of 1,000 distributed logical routers per ESXi host. | A single transport zone is a requirement for VMware Integrated OpenStack. | There is a tested limit of 1,000 distributed logical router instances per ESXi host that must be respected when creating tenants. With the single transport zone model, every tenant distributed logical router is created across all hosts in the zone, so this number must be tracked. |
| | NIC teaming will be set to “Route based on originating virtual port ID” with both uplinks set to Active. | | |
| | Distributed virtual switches will be used. | <Customer> owns an Enterprise Plus license. | Distributed virtual switches simplify management. |
| | The number of virtual distributed switches will be two: one for the management and edge clusters, and one for the compute cluster. | Due to security requirements, <Customer> has requested separation of the compute cluster network on a dedicated DVS. | None. |
| | NIC teaming will be configured for the virtual switches in the management and edge clusters, set to “Route based on originating virtual port ID” with both uplinks set to Active. | Teaming across both active uplinks increases the available throughput. | At least two NICs per ESXi host are required for NIC teaming. |
| | No NIC teaming will be implemented on compute hosts, except for the Hadoop hosts. | Compute hosts are configured with a single NIC; Hadoop hosts contain two NICs. | A NIC failure, a broken cable, or a port issue on the physical switch can isolate a host and reduce cluster capacity. |
| | The number of VTEPs per compute host will be one. | Compute hosts are configured with a single NIC and do not use a teaming policy. | |
| | The number of VTEPs per edge host will be two. | NIC load balancing based on source port ID requires two VTEPs. | Two VTEP IP addresses are needed for each host in the VTEP IP range. |
| | Network I/O Control will be configured on both distributed switches. | Network I/O Control allows prioritization of network traffic and can prevent uplink saturation. Several traffic types share the uplinks: compute hosts use a single uplink, while management and edge hosts use two uplinks. | None. |
| | VXLAN replication mode will be “Hybrid”. | ESXi host resources are conserved for other tasks. | IGMP snooping must be configured on the physical switches. |
| | VXLAN segment ID range: 100000-199999. | Provides a pool of 100,000 logical networks for the implementation. | |
| | VXLAN multicast address range: 239.40.0.0-239.41.255.255. | Provides more than 100,000 dedicated multicast addresses for VXLAN networks. | Make sure the multicast address range is not used by other services (a sizing check follows this table). |
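The segment ID pool and multicast range in the table can be cross-checked with simple arithmetic. The following is a minimal sketch; only the range boundaries are taken from the table above, and the rest is generic Python.

```python
# Cross-check: does the multicast range provide at least one address per VXLAN segment ID?
from ipaddress import IPv4Address

segment_first, segment_last = 100000, 199999
segment_count = segment_last - segment_first + 1       # 100,000 segment IDs

mcast_first = IPv4Address("239.40.0.0")
mcast_last = IPv4Address("239.41.255.255")
mcast_count = int(mcast_last) - int(mcast_first) + 1   # 131,072 multicast addresses

print(f"Segment IDs:         {segment_count:,}")
print(f"Multicast addresses: {mcast_count:,}")
print("Sufficient" if mcast_count >= segment_count else "Insufficient")
```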
Virtual Switching Diagrams and Configurations
The following diagrams describe the virtual switch configuration for the environment.
Management Cluster
The following figure shows the management host switch design.
For detailed configuration information on the physical network design specifications, see the VMware Integrated OpenStack Configuration Workbook document.
Consultant: Insert the network physical design diagrams here and provide any relevant description around it.
Figure 6. Network Switch Design for Management Hosts
Edge Cluster
The following figure shows the edge host switch design. (The diagram shows an environment that also uses VXLAN.)
For detailed configuration information on the physical network design specifications, see the VMware Integrated OpenStack Configuration Workbook document.
Consultant: Insert the network physical design diagrams here and provide any relevant description around it.
Figure 7. Network Switch Design for Edge Hosts
Compute Clusters
The following figure shows the compute host switch design. (The diagram shows an environment that also uses VXLAN.)
For more information on the physical network design specifications, see the VMware Integrated OpenStack Configuration Workbook document.
Consultant: Insert the network physical design diagrams here and provide any relevant description around it.
Figure 8. Network Switch Design for Compute Hosts