Physical Network Design

Consultant: Insert the physical network design diagrams here, along with any relevant description.

Table 35. Physical Network Design for Management Cluster Decisions

For this design, <Customer> has made the decisions listed in this table.

Decision ID | Design Decision | Design Justification | Design Implication

This architecture will be implemented at <Customer> in <location>. Between the leaf and spine switches, a Layer 3 infrastructure will be set up, and BGP will be used to transport network routing information.

Table 36. Physical Network Design for Compute and Edge Clusters Decisions

For this design, <Customer> has made the decisions listed in this table.

Decision ID | Design Decision | Design Justification | Design Implication

Physical Switches

The physical switches in a virtualized environment are just as important as the logical switch configuration because the switches provide the connectivity between hosts, racks, and the outside world. The physical switch configuration includes the following practices:

Configure redundant physical switches to enhance availability.

ESXi-facing switch ports must be manually configured as trunk ports if VLANs are used. Virtual switches are passive devices and do not send or receive trunking protocols such as DTP.

Adjust the Spanning Tree Protocol (STP) configuration on any port connected to an ESXi NIC, because virtual switches do not support STP. If STP delays a port coming online, vSphere HA may incorrectly determine that a host isolation has occurred. The following are recommendations for physical switch configuration (a scripted sketch follows the list):

    • Turn off STP on ESXi-facing ports.
    • Enable PortFast mode on ESXi-facing ports.
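As an illustration, the following Python sketch uses the Netmiko library to push these trunk and STP settings to ESXi-facing ports. This is a minimal sketch, not part of the original design: the device type, address, credentials, interface names, and VLAN IDs are placeholders, and the exact commands vary by switch platform.

```python
from netmiko import ConnectHandler

# Placeholder connection details for a Cisco IOS ToR switch;
# adjust device_type and credentials for the actual platform.
switch = ConnectHandler(
    device_type="cisco_ios",
    host="tor-a.example.local",
    username="admin",
    password="changeme",
)

# Hypothetical ESXi-facing ports and the VLANs to trunk.
esxi_ports = ["GigabitEthernet1/0/10", "GigabitEthernet1/0/11"]
allowed_vlans = "100,200,300"

config = []
for port in esxi_ports:
    config += [
        f"interface {port}",
        "switchport mode trunk",                    # static trunk; no DTP negotiation
        f"switchport trunk allowed vlan {allowed_vlans}",
        "spanning-tree portfast trunk",             # bring the port online immediately
        "spanning-tree bpduguard enable",           # guard against accidental loops
    ]

output = switch.send_config_set(config)
print(output)
switch.disconnect()
```

BPDU guard is included because an ESXi-facing port should never receive BPDUs; if one arrives, shutting the port down is safer than risking a loop.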

Table 37. Physical Network Design for Compute and Edge Clusters Decisions

For this design, <Customer> has made the decisions listed in this table.

Decision ID | Design Decision | Design Justification | Design Implication

Some components in VMware Integrated OpenStack need Layer 2 connectivity. These VMs will be connected to the management VLANs, which provide that Layer 2 connectivity.

Jumbo Frames

Jumbo frames can increase performance for workloads, if configured correctly, because raising the per-frame payload from 1500 bytes to 9000 bytes increases the efficiency of data transfer. The caveat is that jumbo frames must be configured end-to-end on every device on the network; otherwise, performance can suffer because the larger frames must be fragmented for transmission. Depending on the configuration, this might or might not be easily accomplished in a LAN. Workloads such as iSCSI, however, might benefit from having jumbo frames configured.

Several factors determine whether to configure jumbo frames for a workload. Often, configuring jumbo frames end-to-end is not easily justifiable because the cost of configuration outweighs the expected gains, and end-to-end support might be more difficult to achieve for WAN configurations than for LAN ones. If a workload consistently transfers large amounts of network data, however, configure jumbo frames where possible, because the workload will benefit.

Keep in mind that the virtual machine's operating system and its virtual NIC must also support jumbo frames. All of these factors play into whether it is feasible to configure jumbo frames for the environment.

The minimum recommended MTU for VXLAN transport is 1600 bytes.
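The 1600-byte recommendation follows directly from the encapsulation overhead. A quick back-of-the-envelope check (assuming IPv4 outer headers and an optional outer 802.1Q tag):

```python
# Back-of-the-envelope VXLAN MTU check (IPv4 outer headers).
INNER_MTU = 1500   # standard guest Ethernet payload
OUTER_ETH = 14     # outer Ethernet header
DOT1Q     = 4      # outer 802.1Q VLAN tag, if the transport VLAN is tagged
OUTER_IP  = 20     # outer IPv4 header
OUTER_UDP = 8      # outer UDP header (VXLAN uses destination port 4789)
VXLAN_HDR = 8      # VXLAN header carrying the 24-bit segment ID (VNI)

overhead = OUTER_ETH + DOT1Q + OUTER_IP + OUTER_UDP + VXLAN_HDR
print(f"encapsulation overhead: {overhead} bytes")        # 54
print(f"required transport MTU: {INNER_MTU + overhead}")  # 1554; 1600 adds headroom
```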

At <Customer>, the MTU for the VXLAN VLAN will be set to 9000 bytes. This will be done at the following (a scripted sketch of the vSphere side follows the list):

    • Physical Top of Rack (ToR) switches
    • vSphere Distributed Switch port groups for VXLAN
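On the vSphere side, the MTU could be set with a short pyVmomi sketch like the one below. This is a sketch under stated assumptions, not the project's actual tooling: the vCenter address, credentials, and switch name are placeholders. Note that on a vSphere Distributed Switch the MTU (maxMtu) is a switch-wide setting that its port groups inherit.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder vCenter connection details.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

# Find the distributed switch by its (placeholder) name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "vds-vxlan")
view.Destroy()

# Raise the switch-wide MTU to 9000 bytes for VXLAN transport.
spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
spec.configVersion = dvs.config.configVersion  # required by the reconfigure call
spec.maxMtu = 9000
task = dvs.ReconfigureDvs_Task(spec)
print(f"Reconfigure task submitted: {task.info.key}")

Disconnect(si)
```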

VLANs and Network Segmentation

Separating different types of traffic is highly recommended because it reduces contention and latency. High latency on any network can negatively affect performance, and some components are more sensitive to it than others. For example, reducing latency is important for IP storage and the vSphere FT logging network, because latency on these networks can negatively affect the performance of multiple virtual machines.

Information gathered from the current state analysis and key stakeholder and SME interviews can be used to determine the existence of specific workloads and networks that are especially sensitive to high latency.

Separate networks are also required for access security. Use information gathered from key stakeholder and SME interviews to determine the specific access requirements for users and services, and design the network to isolate them appropriately.

Number of Networks

The number of networks to provide depends on the organization's business needs. Determine the number of networks and VLANs that are required using input from the following types of traffic (an illustrative sketch follows the list):

    • vSphere operational traffic
    • Organizational service and application traffic
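As a purely illustrative sketch, a per-traffic-type VLAN plan might be captured like this; the traffic types, VLAN IDs, and subnets below are hypothetical, not <Customer>'s actual plan.

```python
# Hypothetical VLAN plan keyed by traffic type; the real plan is kept
# in the VMware Integrated OpenStack Configuration Workbook.
vlan_plan = {
    "esxi_management": {"vlan": 100, "network": "172.16.100.0/24", "mtu": 1500},
    "vmotion":         {"vlan": 101, "network": "172.16.101.0/24", "mtu": 9000},
    "ip_storage":      {"vlan": 102, "network": "172.16.102.0/24", "mtu": 9000},
    "vxlan_transport": {"vlan": 103, "network": "172.16.103.0/24", "mtu": 9000},
}

for traffic, cfg in vlan_plan.items():
    print(f"{traffic:16} VLAN {cfg['vlan']}  {cfg['network']}  MTU {cfg['mtu']}")
```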

At <Customer>, the VLANs and IP networks described in Table 38, Physical Network Design Decisions, and Table 39, Logical Network Design Decisions, will be used. VLANs shown in green cells will be available in all racks that host management devices or VXLAN tunnel endpoints on the ToR switches.

The entire VLAN plan is documented in separate Excel spreadsheet files and can also be found in the VMware Integrated OpenStack Configuration Workbook.

VXLAN

VXLAN provides the capability to create isolated, multi-tenant broadcast domains across data center fabrics and enables customers to create elastic, logical networks that span physical network boundaries.

The first step in creating this type of logical network is to abstract and pool the networking resources. Just as vSphere abstracts compute capacity from the server hardware to create virtual pools of resources that can be consumed as a service, VMware vSphere Distributed Switch™ and VXLAN abstract the network into a generalized pool of network capacity and separate the consumption of these services from the underlying physical infrastructure. This pool can span physical boundaries, optimizing compute resource utilization across clusters, pods, and geographically separated data centers. The unified pool of network capacity can then be optimally segmented into logical networks that are directly attached to specific applications.

VXLAN works by creating Layer 2 logical networks that are encapsulated in standard Layer 3 IP packets. A segment ID in every frame differentiates the VXLAN logical networks from each other, without any need for VLAN tags, allowing very large numbers of isolated Layer 2 VXLAN networks to co-exist with a common Layer 3 infrastructure.
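To make the encapsulation concrete, here is a minimal sketch that packs the 8-byte VXLAN header and shows where the 24-bit segment ID (VNI) lives; the VNI value is arbitrary, and the outer Ethernet/IP/UDP headers are omitted for brevity.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348).

    Byte 0 sets the I flag (0x08), marking the VNI as valid; the 24-bit
    VNI occupies bytes 4-6, followed by a reserved byte.
    """
    assert 0 <= vni < 2**24, "the segment ID (VNI) is a 24-bit value"
    return struct.pack("!B3xI", 0x08, vni << 8)

print(vxlan_header(5001).hex())  # 0800000000138900 (VNI 5001 = 0x001389)
```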

In the vSphere architecture, encapsulation is performed between the virtual NIC of the guest virtual machine and the logical port on the virtual switch, making VXLAN transparent to both the guest virtual machines and the underlying Layer 3 network. Gateway services between VXLAN and non-VXLAN hosts (for example, a physical server or an Internet router) are performed by VMware NSX. The edge gateway translates VXLAN segment IDs to VLAN IDs so that non-VXLAN hosts can communicate with VXLAN virtual servers.

The dedicated edge cluster hosts all edges and routers that act as a “water valve” to the Internet or to corporate VLANs, so that the network administrator can provide management in a more secure and centralized way.

For further details on VXLAN design, see the Software Defined Networking Technical Materials.

Physical Network Design Decisions

The following table lists the physical network design decisions made for this architecture design.

Table 38. Physical Network Design Decisions

For this design, <Customer> has made the decisions listed in this table.

Decision ID | Design Decision | Design Justification | Design Implication
| A leaf-and-spine network architecture will be used. | <Customer> is looking for a network design that can scale out as much as possible. | Leaf-and-spine is a newer network design methodology and might therefore present a steep learning curve.
| A Layer 3 design between leaf and spine switches will be used, with BGP as the routing protocol. | Layer 2 failure domains are kept to a minimum, and BGP is already in use within <Customer>. | The Layer 3 network has to be designed from an IP addressing perspective.
| Jumbo frames: the MTU will be set to 9000 bytes on the vSphere Distributed Switch port groups used for VXLAN. | VXLAN encapsulation requires the MTU to be increased to avoid communication errors. | Physical switches have to be configured accordingly.
| Network segmentation for devices and services in the management cluster will be accomplished with VLANs. | VLANs are already widely in use at <Customer>. | VLANs must be activated on the switches, and tagging must be configured on the respective physical switch ports.
| VXLAN will be used for compute VMs in the compute and edge clusters. | VMware Integrated OpenStack leverages VMware NSX to supply logical networking services. | VXLAN will be supplied by VMware NSX and initiated by OpenStack operations.
| VMware NSX-controlled VTEPs will get their IP addresses via DHCP. | This significantly eases implementation in a routed environment. | A properly configured DHCP server must be available, and an IP helper must be activated on the physical switches.
| DNS, NTP, DHCP, and SFTP services will be located in the management VLAN. | These are the services required by vSphere and VMware NSX. | These services must be set up prior to vSphere and VMware NSX installation.
