vCenter Server Logical Design
The following section discusses the logical design of vCenter Server in the environment.
vCenter Server Identity
It is important to configure a static IP address and host name for either a vCenter Server for Windows system or a vCenter Server Appliance. The IP address must have a valid (internal) DNS registration including reverse name resolution.
The vCenter Server system must maintain network connections to ESXi hosts and to many other solutions (such as VMware Horizon® View™). Depending on the deployment configuration, it might also have to maintain a connection to the Platform Services Controller.
If the address of the vCenter Server changes, any hosts or solutions connected to vCenter Server might lose connection until they are updated with the new information.
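Because a missing or stale reverse record typically only surfaces when hosts or solutions reconnect, the DNS registration can be validated up front. The following minimal sketch uses only the Python standard library; the FQDN and IP address are hypothetical placeholders, not values from this design.

```python
import socket

# Hypothetical placeholders -- substitute the vCenter Server FQDN and static IP used in this design.
VCENTER_FQDN = "vcenter.example.local"
EXPECTED_IP = "10.0.0.10"

# Forward lookup: FQDN -> IP address (fails if the A record is missing).
resolved_ip = socket.gethostbyname(VCENTER_FQDN)
assert resolved_ip == EXPECTED_IP, f"Forward lookup returned {resolved_ip}, expected {EXPECTED_IP}"

# Reverse lookup: IP address -> FQDN (fails if the PTR record is missing).
reverse_fqdn, _, _ = socket.gethostbyaddr(resolved_ip)
assert reverse_fqdn.lower() == VCENTER_FQDN.lower(), f"Reverse lookup returned {reverse_fqdn}"

print("Forward and reverse DNS registration for the vCenter Server are consistent.")
```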
vCenter Single Sign-On™
As of vSphere 6.0, vCenter Single Sign-On is part of the Platform Services Controller, which dramatically simplifies how it is configured and protected. Both the Platform Services Controller and the vCenter Server components must be installed on at least one system before the environment can be managed with Active Directory credentials. After installation, the appropriate identity sources must be configured.
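Once the Active Directory identity source is in place, users authenticate to vCenter Server with their domain credentials. The sketch below shows such a login through the pyVmomi SDK; pyVmomi itself, the FQDN, domain, and account are assumptions used for illustration and are not part of this design.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect

# Lab-only TLS handling; in production, trust the vCenter Server certificate instead.
context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

# Hypothetical FQDN and Active Directory account from the dedicated domain.
si = SmartConnect(
    host="vcenter.example.local",
    user="FIGARO\\vi-admin",
    pwd="********",
    sslContext=context,
)
session = si.content.sessionManager.currentSession
print("Logged in as:", session.userName)  # e.g. FIGARO\vi-admin
Disconnect(si)
```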
vCenter Server Clusters
vCenter Server clusters provide a logical way to organize the infrastructure. Cluster design is important in determining how the environment is configured. This includes the organization of different types of servers into logical groupings for failure domains, or to configure features such as VMware vSphere Distributed Resource Scheduler™ and vSphere HA.
Two methodologies can be used when building clusters:
- Scale-up clusters – Where the cluster has fewer hosts that are larger in size.
- Scale-out clusters – Where the cluster has more hosts that are smaller in size.
The decision on which approach to take should be made by (see the sizing sketch after this list):
- Evaluating the capital costs of purchasing fewer, larger hosts compared to purchasing more, smaller hosts. Costs vary between vendors and models.
- Analyzing the operational costs of managing fewer hosts compared to more hosts.
- Reviewing the configuration maximums for clusters so that the design stays within the relevant limits.
- Considering the purpose of the cluster. For example, a virtualized server cluster typically has more hosts with fewer virtual machines per host.
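To make these trade-offs concrete, the following sketch compares how many scale-up versus scale-out hosts an illustrative workload would need, including one host of N+1 failover capacity. All figures (workload size and host profiles) are hypothetical and are not taken from this design; CPU over-commitment and per-cluster maximums are deliberately ignored.

```python
import math

# Hypothetical aggregate workload requirement (not from this design).
required_vcpu = 400
required_ram_gb = 1536

# Hypothetical host profiles for the two cluster methodologies.
host_profiles = {
    "scale-up (fewer, larger hosts)":  {"vcpu": 128, "ram_gb": 1024},
    "scale-out (more, smaller hosts)": {"vcpu": 48,  "ram_gb": 384},
}

for name, host in host_profiles.items():
    # Hosts needed to satisfy whichever resource is the constraint.
    hosts_for_cpu = math.ceil(required_vcpu / host["vcpu"])
    hosts_for_ram = math.ceil(required_ram_gb / host["ram_gb"])
    hosts_needed = max(hosts_for_cpu, hosts_for_ram)
    # Add one host of failover capacity (N+1).
    cluster_size = hosts_needed + 1
    print(f"{name}: {hosts_needed} hosts for the workload, {cluster_size} with N+1 HA")
```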
vCenter Server Logical Design Decisions
The following table lists the vCenter Server logical design decisions made for this architecture design.
Table 16. vCenter Server Logical Design Decisions
| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
| | vCenter Single Sign-On will be configured to use Active Directory (Integrated Windows Authentication). | vCenter Single Sign-On will be connected to Active Directory so that users can log in and be assigned permissions with their Active Directory credentials. A dedicated domain will be available for the Figaro environment. | The majority of users rely on Active Directory availability to log in. |
Due to the complexity of cluster design, the remainder of the logical design decisions has been split into multiple sections.
Cluster Design Decisions
The following table lists the cluster design decisions made for this architecture design.
Table 17. Cluster Design Decisions
| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
| | The vSphere management cluster contains all vSphere management components, NSX Manager and NSX Controllers, the database server, VMware vRealize Operations™, and vRealize Log Insight. It is managed by a management vCenter Server instance. | Separating management from compute isolates the management workloads and avoids negative influence from compute workloads. | The Layer 2 network relies on the MLAG spanned across the ToR switches of racks CA44 and CA45. |
| | Compute clusters will be used for all VMware Integrated OpenStack workloads. There will be a total of 5 compute clusters: 4 clusters in compute rack CA45 and 1 cluster in compute rack CA44. | Compute clusters are implemented to segregate the workloads and properly protect the management traffic. The compute clusters are managed by the regular vCenter Server, which is firecell-based. | |
Management Cluster Design Decisions
The management cluster design is important in determining how the environment is configured.
The following table lists the management cluster design decisions made for this architecture design.
Table 18. Management Cluster Design Decisions
| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
| | The vSphere management cluster will be configured for N+1 HA protection. The cluster will use percentage-based reservation. The HA slot size for the management cluster must be configured; an average slot size of 4 vCPU / 12 GB RAM per VM is specified for the management cluster. | Percentage-based reservations provide flexibility where VMs have varying CPU or RAM requirements. Because no VM resource reservations are set, the HA slot size must be set manually so that HA reserves enough resources based on the adjusted slot size settings. | Risk: HA mitigates the impact of a host failure but does not protect against a rack failure. |
| | The cluster size will start with 4 hosts. | The planned Virtual SAN cluster contains 4 hosts to provide capacity for a number of failures to tolerate of 1 and to allow full data migration when a host is placed in maintenance mode. | |
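The percentage-based admission control decision above can also be applied programmatically rather than through the vSphere Web Client. The sketch below uses pyVmomi and reserves 50% of CPU and RAM, matching the reservation specified in Table 19 below; the vCenter address, credentials, and the cluster name "Management" are hypothetical placeholders, not names from this design.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-only TLS handling and hypothetical connection details.
context = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="FIGARO\\vi-admin",
                  pwd="********", sslContext=context)
content = si.RetrieveContent()

# Find the management cluster by its (hypothetical) inventory name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Management")
view.DestroyView()

# Enable HA with percentage-based admission control: reserve 50% CPU and
# 50% RAM as failover capacity, per Table 19.
policy = vim.cluster.FailoverResourcesAdmissionControlPolicy(
    cpuFailoverResourcesPercent=50,
    memoryFailoverResourcesPercent=50,
)
das_config = vim.cluster.DasConfigInfo(
    enabled=True,
    admissionControlEnabled=True,
    admissionControlPolicy=policy,
)
spec = vim.cluster.ConfigSpecEx(dasConfig=das_config)
cluster.ReconfigureComputeResource_Task(spec, modify=True)
Disconnect(si)
```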
Table 19. Management Cluster Design Specifications
| Attribute | Specifications |
|---|---|
| Number of hosts required to support management servers with no over-commitment | 2 |
| Number of hosts in cluster with HA allowance | 4 |
| Percentage of cluster resources reserved | 50% reserved CPU and RAM |
| Number of “usable” hosts per cluster | 4 usable hosts |
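One way to read the 50% figure in Table 19: with 2 of the 4 hosts sufficient to run the management servers without over-commitment, the remaining share of the cluster can be reserved as failover capacity. The short sketch below reproduces that arithmetic as an illustration only.

```python
# Values from Table 19 (management cluster).
hosts_in_cluster = 4   # hosts in the cluster, including the HA allowance
hosts_required = 2     # hosts needed for the management servers with no over-commitment

# Share of the cluster kept free as failover capacity.
spare_hosts = hosts_in_cluster - hosts_required
reserved_pct = 100 * spare_hosts / hosts_in_cluster
print(f"Reserve {reserved_pct:.0f}% of CPU and RAM")  # -> Reserve 50% of CPU and RAM
```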
Edge Cluster Design Decisions
The edge cluster design is important in determining how the environment is configured.
The following table lists the edge cluster design decisions made for this architecture design.
Table 20. Edge Cluster Design Decisions
| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
| | A dedicated edge cluster will be created for the compute cluster. The edge cluster includes the NSX Edge devices and an F5 load balancer, and will be located in a single network rack. | The edge cluster provides gateway functionality for logical networks created in OpenStack. | |
| | Tintri NFS storage will be used as shared storage for the edge cluster. | | |
Table 21. Edge Cluster Design Specifications
| Attribute | Specifications |
|---|---|
| Number of hosts required to support edge servers | 2 |
| Percentage of cluster resources reserved | 25% for CPU and RAM |
| Number of “usable” hosts per cluster | 2 |
The edge cluster will contain all edge gateway devices. This means that external connectivity needs to be configured only on the edge cluster hosts, which accommodates the adoption of a leaf-spine physical switch architecture.
Compute Cluster Design Decisions
The compute cluster design is important in determining how the environment is configured.
The following table lists the compute cluster physical design decisions made for this architecture design.
Table 22. Compute Cluster Design Decisions
| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
| | 5 compute clusters are used. The first compute rack has 3 clusters and the second rack has 2 clusters. Each compute cluster contains 2 servers. The compute clusters will be connected to the vCenter Server of the firecell. | <Customer> will provide one rack with 8 blade servers and a second rack with 2 rack-mounted servers for the compute clusters. The setup should reflect the cluster topology that will be available at the US data centers of <Customer>. | |
| | HA will be enabled, but no failover capacity will be reserved. | HA will provide VM failover in case a compute host fails. | Risk: HA enables recovery from a host failure but does not protect against a rack or chassis failure. |
| | 1 compute cluster = 1 availability zone in VMware Integrated OpenStack. | | |
| | Tintri NFS storage will be provided as shared storage for the compute clusters. | | |
| | 1 compute cluster will use the local disks of the ESXi hosts. | This setup will be required for the Hadoop project at <Customer>. | |
Table 23. Compute Cluster Design Specifications
| Attribute | Specifications |
|---|---|
| Number of hosts required to support customer resources | 10 |
| Capacity for host failures per cluster | n/a |
| Number of usable hosts per cluster | 2 |