vSphere with Tanzu and HPE Alletra dHCI

Executive Overview

Organizations are adopting containers to build distributed, resilient applications quickly and with more efficient resource utilization. The result is highly optimized automation workflows and processes to build, test, and deploy applications more rapidly with minimal manual intervention. Enterprises must balance software developers' need to iterate on applications quickly, with instant access to infrastructure, against the requirement to operationalize modern applications at scale without creating additional IT staffing overhead.

Despite the adoption of containers across many use cases and complex workloads, most enterprises cannot migrate to a 100% container environment quickly. This means that some applications run in a hybrid environment where VMs and containers coexist. For example, a database might remain virtualized either because it cannot be deployed in a container or because it is too expensive to migrate, yet it can still be accessed by a containerized application. The requirements of managing a hybrid environment and simplifying infrastructure can be met by Disaggregated HCI (dHCI), which delivers the HCI experience of unified management and VM-centric operations, but with six-nines availability, faster performance, and a lower total cost of ownership (TCO).

HPE Alletra Storage dHCI is a purpose-built, optimized stack for VMware® virtualized solutions that radically simplifies the management and scalability of IT. It is built for business-critical applications and mixed workloads, delivering them through a policy-driven storage framework with VMware Virtual Volumes (hereinafter referred to as vVols) or with traditional VMware vSphere® VMFS. As application requirements evolve in the enterprise, there is a shift to IT generalists who require VM-centric data services and a need to enable a hybrid cloud through policy-driven, self-service storage. At the same time, in a container environment, there is a shift to self-service, leading to DevOps-centric provisioning and a need for the advanced data services provided by storage vendors.

vSphere with Tanzu enables containerized workloads and traditional legacy workloads to run on the same platform with Tanzu as the single consolidated management plane. Customers can build Kubernetes-based applications on their existing network and shared storage infrastructure. VMware vSphere admins can leverage existing infrastructure and VMware Tools to deliver Kubernetes namespaces for development teams in just a few hours. Meanwhile, developers can use an upstream-compliant Kubernetes infrastructure. This allows both teams to use existing tools to collaborate and align seamlessly to support modern applications.

HPE Alletra Storage dHCI and vSphere with Tanzu enable customers to get started quickly with modernizing applications using Kubernetes, while eliminating unnecessary silos between old and new environments.

Using this document

This technical white paper provides examples, tips, best practices, and other configuration details to help you deploy vSphere with Tanzu and Tanzu Kubernetes clusters on HPE Alletra dHCI.

vSphere with Tanzu

vSphere with Tanzu is the new generation of vSphere for containerized applications. This single, streamlined solution bridges the gap between IT operations and developers with a new kind of infrastructure for modern, cloud-native applications both on premises and in public clouds.

HPE Alletra dHCI

HPE storage solutions for VMware are cloud-ready, fast, flexible, and efficient, and ship with built-in data protection. The HPE storage portfolio covers the full spectrum of business requirements, from small to large enterprise environments.

This HPE Alletra dHCI solution uses the HPE Alletra 6000 array for business-critical applications such as traditional virtualized workloads on VMware vSphere and containerized applications running on vSphere with Tanzu. HPE Alletra dHCI delivers the simplicity of a hyperconverged platform without the limitations. Built to streamline your infrastructure through data-first modernization, this disaggregated HCI platform for mixed application workloads at scale unlocks IT agility while ensuring that applications are always on and always fast.

Key features include:

  • HPE InfoSight Full Stack Analytics
  • Six 9s availability
  • Mixed application workloads without performance loss
  • Simplified lifecycle management
  • Management by Data Services Cloud Console
  • Integrated data protection and backup
  • Sub-millisecond latency at consistent high performance
  • Integration with HPE GreenLake
  • Non-disruptive scalable storage and compute upgrades
  • VMware storage plug-in for HPE Alletra dHCI
  • Storage Policy-Based Management (SPBM) using vVols

Storage plug-in for VMware

The HPE Alletra dHCI plug-in for VMware vCenter® provides an easy way to both access storage information and make storage changes directly from the vCenter interface. The plug-in is registered and installed automatically as part of the dHCI solution and accessed from the vCenter UI. Hewlett Packard Enterprise strongly recommends that storage management be performed as much as possible from the vCenter interface, which includes the dHCI plug-in.

HPE InfoSight

HPE InfoSight is the industry’s most advanced AI for infrastructure. It gives users the power of AI to create an infrastructure that is autonomous. HPE InfoSight drives global intelligence and insights for infrastructure across servers, storage, and virtualized resources with the power of cloud-based machine learning.


Figure 1. HPE InfoSight

HPE GreenLake

HPE GreenLake is the edge-to-cloud platform that brings the cloud to users and brings the cloud experience everywhere.

Data Services Cloud Console

The SaaS-based Data Services Cloud Console (DSCC) delivers a suite of cloud services that are designed to help achieve two goals:

  • Enable cloud operational agility for data infrastructure everywhere
  • Unify data operations across the data lifecycle

Because the HPE Alletra 6000 array is part of the HPE cloud-native portfolio, cloud connectivity is required for dHCI setup. During dHCI deployment, the array automatically self-registers; for dHCI, DSCC is used only to register the array.

DSCC HCI

HCI Manager is an application within Data Services Cloud Console that extends HPE Alletra dHCI into the hybrid cloud. It provides a single interface for deploying and managing HPE Alletra dHCI systems across an entire organization.

Solution Components

Hardware

In this guide, we are using a new deployment of HPE Alletra dHCI with the following configuration. The sizing and capacity shown below are for representative purposes only; actual requirements must be determined based on customer needs.

Compute System   Count   CPU                          Memory   Local Storage
DL380 Gen 10     3       Intel(R) Xeon(R) Gold 5118   256 GB   500 GB

Storage System   HPE Alletra 6070
Switches         HPE Aruba 8325

VMware software

The following VMware software and licensing was used to deploy vSphere with Tanzu.

Components Version License type
vCenter Server 7.0 U3 Standard
vSphere ESXi 7.0 U3 Enterprise Plus
vSphere Workload Management 7.0 U3 Basic
NSX Advanced Load Balancer 21.1.4 Essentials for Tanzu

Note

A 60-day evaluation license is assigned to the cluster when Workload Management is enabled on a vSphere cluster. A Tanzu license (Basic, Standard, Advanced) is required beyond the 60-day evaluation.

Configuration guidance for HPE Alletra dHCI and vSphere with Tanzu

Networking requirements

Three routable subnets are needed to support vSphere with Tanzu and the NSX Advanced Load Balancer (ALB).

Topology

Figure 2. vSphere with Tanzu network diagram example.

The following VLANs and virtual machine port groups were configured for the HPE Alletra dHCI solution deployment. VMware and HPE storage best practices were followed for the network and storage configuration.

Note

The Management, Frontend, and Workload Networks cannot be on the same subnet. They also require L2 isolation. We highly recommend using VLANs to isolate the Management and Workload Networks.

Port Group Name Description VLAN Name
dvpg_mgmt-dHCI-Cluster-11001 ESXi/Array Management mgmt_vlan1
dvpg_iscsi1-dHCI-Cluster-11001 iSCSI 1 iscsi1_vlan
dvpg_iscsi2-dHCI-Cluster-11001 iSCSI 2 iscsi2_vlan
dhci_vm-network VM Network vm_network1
dhci_tanzu-management-net Tanzu Management Network mgmt_vlan
dhci_tanzu-workload-net Tanzu Workload Network tanzu_workload
dhci_tanzu-frontend-net Tanzu Frontend Network tanzu_frontend

1 = All ports used for ESXi Management and VM traffic are configured in Trunk mode. The VLAN ID is configured on the vSphere port group.
2 = iSCSI ports are configured in Access mode.

Important

The HPE Alletra dHCI must be deployed using the vSphere Distributed Switch (vDS) deployment option. HPE Alletra dHCI deployed with vSphere Standard Switch (vSS) will not support vSphere with Tanzu.

For more details and information on HPE Alletra dHCI deployment options, see HPE Nimble Storage dHCI and VMware vSphere New Servers Deployment Guide.

DNS & NTP

Ensure DNS forward and reverse lookups and NTP are configured and available.
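
A quick pre-flight check from an admin workstation might look like the following (a minimal sketch; the vCenter hostname is hypothetical, the IP addresses are the example values used later in this guide, and ntpdate is assumed to be installed):

$ nslookup vcenter.hpelab.local     # forward lookup of a hypothetical vCenter name
$ nslookup 192.168.22.190           # reverse lookup of an example Management subnet IP
$ ntpdate -q 192.168.20.1           # query (but do not set) the example NTP server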

Storage configuration

By default, the HPE Alletra dHCI deploys two basic datastores as shown in Figure 3.


Figure 3. Default datastores.

Create a vVols datastore on HPE Alletra dHCI.

A vVols datastore will need to be created for use by a Content Library, VM virtual disks, and the deployment of the vSphere with Tanzu Supervisor and guest Kubernetes clusters. This vVols datastore will also be used to provision persistent storage for the applications running in the Tanzu clusters.

  1. Right click an object within vCenter and go to HPE Alletra 6000 and Nimble Storage Actions. Click Create VVol Datastore.

    drawing

  2. Choose the dHCI system in Group. Click Next.

    drawing

  3. Specify a Name for the datastore. Click Next.

    drawing

  4. Set a Space Limit. Click Next.

    drawing

  5. Set optional Performance Limits. Finally click Create.

    drawing

At this stage, there should be a VMFS and a vVols datastore available to deploy the various components needed for vSphere with Tanzu.

Deploying the Solution

Planning the environment

vSphere with Tanzu Solution deployment requirements

System                                Nodes   vCPUs   Memory   Storage   VM Class            Node role
NSX Advanced Load Balancer            3       8       24 GB    128 GB    -                   -
Supervisor Cluster                    3       4       16 GB    32 GB     guaranteed-large    -
Tanzu Kubernetes Cluster (per node)   3       4       16 GB    16 GB     guaranteed-large    control plane
Tanzu Kubernetes Cluster (per node)   3       4       32 GB    16 GB     guaranteed-xlarge   worker

Note

The node sizes shown above are for demonstration purposes only. Actual node sizes will be determined based upon the type and number of workloads being scheduled on each Kubernetes node. To learn more about Virtual Machine Class sizing, refer to Virtual Machine Classes for Tanzu Kubernetes Clusters

Storage Policies and tags

vSphere with Tanzu leverages the vSphere CSI Driver to provide storage to Tanzu Kubernetes clusters. The vSphere Container Storage Interface (CSI) Driver supports both policy-based and tag-based Storage Policy-Based Management (SPBM) to provision volumes. SPBM is a feature of VMware vSphere that allows an administrator to match VM workload requirements against storage array capabilities with the help of VM Storage Policies. A storage policy can expose multiple array capabilities and data services to the Tanzu Kubernetes clusters, depending on the underlying storage you use. Using Storage Policies is preferred for the Tanzu Supervisor and Kubernetes nodes because each VM will have its volumes directly on the array. vVols (and VMFS tag-based) policies assigned to Tanzu Kubernetes clusters as StorageClasses provide persistent storage for container-based applications, allowing developers to tailor storage to their application needs.

Tags allow you to attach metadata to objects in the vSphere inventory to make it easier to sort and search for these objects. A tag is a label that you can apply to objects in the vSphere inventory. With vVols, the VASA provider reports the capabilities and services that a vendor's storage can provide. For VMFS, with tag-based SPBM, you can create your own specific categories and tags based on almost anything you can envision. Performance levels or tiers, disk configurations, locations, OS type, departments, and disk types such as SAS, SATA or SSD are just a few examples.

Creating vVols Storage Policy

For our Supervisor Cluster and namespaces, we need to create a vVols-based storage policy that will control the storage placement of objects like control plane VMs, container images, and persistent storage volumes.

Go to Menu, Policies and Profiles.



tag

Highlight VM Storage Policies, click Create.

Policy

Specify a Name of the policy. Click Next.

Policy

Click Enable rules for "HPE Alletra 6000 and Nimble Storage" storage. Click Next.

Policy

To configure rules for the HPE Alletra 6000, click Add Rule.

Policy

Configure the rules to your application needs. Click Next.

Policy

Verify that the datastore is showing as compatible. Click Next.

Policy

Review and click Finish.

Policy

Create Subscribed Content Library

vSphere with Tanzu requires a subscribed Content Library to download the latest Tanzu Kubernetes release images that will be used to deploy Tanzu Kubernetes clusters.

Click on Menu then Content Libraries.

content library

Under Content Library, Click Create.

content library

Specify a Name for the Content Library. Click Next.

Content Library

Choose Subscribed content library.

Enter the following Subscription URL: http://wp-content.vmware.com/v2/latest/lib.json Click Next.

Content Library

Choose the default options on Apply security policy. Click Next.

Content Library

Choose the vVols datastore to store the Content Library objects. Click Next.

Content Library

Review the setting and click Finish to create the Content Library.

Content Library

Create Local Content Library

We will also create a local Content Library for OS images, OVAs, etc. The process is the same as above.

Click on Menu then Content Libraries.

content library

Under Content Library, Click Create.

content library

Specify a Name for the Content Library. Click Next.

Content Library

Choose Local content library. Click Next.

Content Library

Choose the default options on Apply security policy. Click Next.

Content Library

Choose the vVols datastore to store the Content Library objects. Click Next.

Content Library

Review the setting and click Finish to create the Content Library.

Content Library

vSphere with Tanzu User management

The vSphere with Tanzu platform involves two roles: the vSphere administrator and the DevOps engineer. It is important to configure the proper roles and permissions within vCenter Single Sign-On for these users to interact with the platform. Please refer to vSphere with Tanzu User Roles and Workflows.

Creating a DevOps user

The DevOps user will be a consumer (non-vSphere admin) of the assigned vSphere namespace using the kubectl CLI. This user may or may not require access to the vCenter UI.

Note

If your vCenter is joined to Active Directory, you can substitute the “devops” user with another user from the identity store.

To create the devops@vsphere.local user:

Click Menu, Administration


users

Under Single Sign On, click Users and Groups.


users

In the Users and Groups, choose the Domain vsphere.local from the drop-down menu. Click Add.


users

Create the devops user, set the Password and click Add.


users

After the devops@vsphere.local user is created, it will be assigned to the vSphere namespace at a later step.

vSphere with Tanzu IP requirements

You will need three separate, routable subnets configured. One subnet will be for the vSphere with Tanzu Management network. This is the network where vCenter, the NSX Advanced Load Balancer cluster, and the Supervisor Cluster will live. The other subnets will be used for the Frontend and Workload networks.

Note

The subnets listed below are specific to this deployment and may not match your environment IP network addressing scheme.

Each network will need to be on a separate vSphere Distributed Switch port group.

Port Group Name Description Example Subnet
dhci_tanzu-management-net Tanzu Management Network 192.168.16.0/21
dhci_tanzu-workload-net Tanzu Workload Network 192.168.100.0/23
dhci_tanzu-frontend-net Tanzu Frontend Network 192.168.102.0/24

Topology

Management Network

The Management Network is where the vCenter, ESXi hosts, Supervisor Cluster and NSX Advanced Load Balancer are deployed.

Name Minimum Quantity Example: IP Addresses
Load Balancer cluster IPs 3 192.168.22.185 - 192.168.22.187
Load Balancer cluster Virtual IP 1 192.168.22.190
Subnet Mask 1 255.255.248.0
NTP & DNS 1 192.168.20.5, 192.168.20.6
Supervisor Cluster IP Pool Block of 5 192.168.22.191 - 192.168.22.200

Frontend Network

External clients (such as users or applications) accessing cluster workloads use the Frontend Network to reach backend load-balanced services through virtual IP addresses.

Name Example: Static IP Pool
Frontend Network 192.168.102.51 - 192.168.102.90
Subnet Mask 255.255.255.0

Workload Network

This network is used by the load balancer to access the Kubernetes services on the Supervisor and guest clusters.

Components Example: Static IP Pool
Workload Network 192.168.100.51 - 192.168.100.90
Subnet Mask 255.255.254.0

Install and Configure the NSX Advanced Load Balancer

Deploy Controller VMs

To get started deploying the NSX Advanced Load Balancer, download the OVA from VMware.

You can access it from the products download page.

Once downloaded, upload the OVA to the Images content library we created earlier.

Go to Menu, click Content Library. Right click the Images content library and choose Import. This may take a few minutes to upload.

Note

If you see an error when uploading an image, you will need to browse to the URL for each of your ESXi hosts in your clusters and accept the certificate warnings within the browser. Then repeat the upload process.

We will be deploying NSX Advanced Load Balancer in a 3-node cluster to provide high availability. We configure the cluster in a later step.

Node Name Example IP
nsx-alb1 192.168.22.185
nsx-alb2 192.168.22.186
nsx-alb3 192.168.22.187

Navigate to the Images content library. Choose OVF & OVA Templates, right click on the controller OVA and click New VM from This Template...

avi content library

Specify the VM name and location. Click Next.

deployment

Select the Cluster. Click Next.

deployment

Confirm settings and click Next.

deployment

Select Storage. Click Next.

deployment

Under Management Network, select the appropriate management network to be used. Example: dhci_tanzu-management-net Click Next.

deployment

  • Specify the Management IP Example: 192.168.22.185
  • Set the Subnet Mask Example: 255.255.248.0
  • Set the Gateway Example: 192.168.20.1

Leave the other fields blank. Click Next.

Note

Administrative commands are configured on the Controller by accessing it using this IP address. The management IP address is also used by the Controller to communicate with other Service Engines. This IP address for all Controllers within a cluster should belong to the same subnet.

deployment

Confirm settings and click Finish.

deployment

Repeat these steps to deploy all nodes for the cluster.

Once all nodes are deployed, power on each node. It can take up to 10 minutes for the nodes to be in a Ready state.

Initialize Controller nodes

In a browser, enter the IP for the first node in the cluster. Example: https://192.168.22.185. Set a username and password. Click Create Account.

users

Important

For each additional load balancer node, login and set the admin password. Do NOT perform any additional configuration. All configuration will be done on the master node. When the cluster is configured, the master node configuration will be copied to each node within the cluster.

In the Welcome screen,

  • Set Passphrase Example: VMware1!
  • Set DNS server(s) Example: 192.168.20.5, 192.168.20.6
  • Set DNS Search Domain Example: hpelab.local

Click Next.

deployment

Configure an email address. Click Next.

deployment

For the Multi-Tenant settings, keep the defaults.

deployment

Check the Setup Cloud After box.

Important

Don't skip this or you will have to go through additional setup screens.

Click Save.

users

Add a License

Note

At initial startup, the Controller is assigned an Enterprise evaluation license. This guide will only use features available in the Essentials license (VMware NSX ALB essentials for Tanzu). The NSX ALB Essentials License is free to use with a valid vSphere with Tanzu license.

From the Controller dashboard, click the Administration tab.

deployment

Expand Settings, select Licensing. Click the gear icon next to Licensing.

deployment

Choose Essentials Tier. Click Save.

deployment

Configure the Default Cloud

From the Controller dashboard, click the Infrastructure tab, select Clouds.

Click the Convert Cloud Type gear icon on Default-Cloud.

deployment

In the dropdown menu, choose VMware vCenter/vSphere ESX Cloud Type. Click Yes, Continue.

users

Enter the vCenter Address, Username and Password. Keep all other settings default. Click Next.

deployment

Choose the Data Center. Click Next.

deployment

Select the Management Network, specify the IP Subnet and Default Gateway. Example: dhci_tanzu-management-net

Enter the Static IP Address Pool. Example: 192.168.22.191-192.168.22.200

These are IPs that will be assigned to Service Engines deployed on the Management Network.

deployment

Click Save.

Configure Cluster

Configuring a cluster is recommended in production environments for high availability and disaster recovery.

Important

To run a three node cluster, after you deploy the first Controller VM, deploy and power on two more Controller VMs. Set the initial admin password but do NOT complete the initial configuration wizard. The configuration of the first controller VM will be assigned to the additional Controller VMs.

From the Controller dashboard, click the Administration tab, select Controller > Nodes.

Click Edit.

Add a static IP for the Controller Cluster IP. Example: 192.168.22.190 (This IP must be available on the Management Network.)

For each Controller node to be added to the cluster, click Add.

deployment

Enter the IP Address, Name and Password for the admin user for each node.

deployment

Click Save.

deployment

Once all nodes are added to the cluster, click Save.

After a few minutes, the NSX Advanced Load Balancer will be reconfigured as a cluster.

Refresh the browser and navigate to the Cluster IP to continue with the configuration. Example: https://192.168.22.190

Configure a Certificate to the Controller

The Controller must send a certificate to clients to establish secure communication. This certificate must have a Subject Alternative Name (SAN) that matches the NSX Advanced Load Balancer Controller cluster hostname or IP address.

The Controller has a default self-signed certificate, but that certificate does not have the correct SAN. You must replace it with a valid CA-signed or self-signed certificate that has the correct SAN.

From the Controller dashboard, click the Templates tab, select Security in the left-hand menu.

Select SSL/TLS Certificate. Click Create > Controller Certificate.

deployment

Enter a Certificate Name. Example: ALB Tanzu Cert

Enter the Controller IP address in Common Name. Example: 192.168.22.190

Select either EC or RSA. EC is recommended.

Under Subject Alternate Name (SAN), click Add and enter the Controller IP. Example: 192.168.22.190

deployment

Click Save

This certificate will be used to deploy the Supervisor Cluster when configuring Workload Management.

Once the certificate has been created, click Export Certificate.

deployment

In the Export Certificate window, click Copy to Clipboard under the Certificate section. Do not use the certificate key.

deployment

Finally, we need to update the portal certificate to use the newly created certificate.

From the Controller dashboard, click the Administration > Settings.

Select Access Settings, click the pencil to update settings.

deployment

Check the box Allow Basic Authentication.

Under SSL/TLS Certificate, remove any existing certificates. Click the drop down and select the new certificate we created in previous steps. Example: ALB-Tanzu-Cert

deployment

Keep the rest of the settings Default.

Click Save.
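
To confirm that the Controller is now presenting the new certificate with the expected SAN, you can check it from any workstation with OpenSSL (a quick, optional sanity check using the example cluster IP):

$ echo | openssl s_client -connect 192.168.22.190:443 2>/dev/null \
    | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"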

Configure a Service Engine Group

From the Controller dashboard, click the Infrastructure > Cloud Resources. Edit the Default-Group.

deployment

In the High Availability & Placement Settings section, select the High Availability Mode. The default option with the Essentials license is Active/Standby.

deployment

Click the Advanced tab.

Under Host & Data Store Scope section, Select Cluster > Include and choose the vSphere Cluster. Example: dHCI-Cluster

Under Data Store, select Include and choose the vSphere datastore. Example: dhci-vvol-ds01

deployment

Click Save.

Configure a Virtual IP Network

Configure a virtual IP (VIP) subnet for the Frontend Network. You can configure the VIP range to use when a virtual service is placed on the specific VIP network.

From the Controller dashboard, click Infrastructure > Cloud Resources. Select Networks to see a list of networks available in vCenter Server. Verify that the proper subnet information has been discovered for the network.

Note

If the subnet information shows None under Discovered Subnets on a network, verify the following:

  • There is no VM attached to the port group. Fix: Attach a VM to the port group.
  • There is a VM attached to the port group, but it is not running VMware tools. Fix: Install VMware tools

Once issues above are addressed, manual discovery of the networks will need to be performed. Please refer to How to Perform a Manual Discovery of New Networks in VMware Cloud.

Choose the network that will provide virtual IP addresses. Example: dhci_tanzu-frontend-net Click Edit.

deployment

Click Edit, under IP Subnet.

deployment

Click Add Static IP Address Pool. Select Use Static IP Address for VIPs and Service Engine.

Enter the IP Address Pool to be used by the Frontend Network for VIPs and Service Engines. Example: 192.168.102.51-192.168.102.90

deployment

Click Save.

The IP Subnet should now show Configured, and the IP range should appear under the IP Address Pool.

deployment

Click Save

Configure a Static Route

A default gateway enables the Service Engines to route traffic to the pool servers on the Workload Network. Configure the Frontend Network gateway IP as the default gateway, and configure static routes so that the Service Engines can route traffic to the Workload Network and client IPs correctly.

From the Controller dashboard, click Infrastructure > Cloud Resources > Routing.

Click Create.

In Gateway Subnet, enter the Workload Network subnet. Example: 192.168.100.0/23

In Next Hop, enter the Frontend Network gateway IP address. Example: 192.168.102.1

deployment

Click Save.

Configure IPAM

Configure IPAM for the Controller and assign it to the Default-Cloud configuration. IPAM is required to allocate virtual IP addresses when virtual services get created.

From the Controller dashboard, click Templates > Profiles > IPAM/DNS Profiles. Click Create and select IPAM Profile.

deployment

Specify a Name.

Click Add Usable Network.

deployment

Under Cloud for Usable Network, choose Default-Cloud.

Set Usable Network to the Frontend Network. Example: dhci_tanzu-frontend-net

Click Save.

Verify the NSX Advanced Load Balancer

With the NSX Advanced Load Balancer deployed and configured, verify it is functioning correctly.

From the Controller dashboard, click Infrastructure > Clouds.

Verify status of Default-Cloud is green.

deployment

Enabling Workload Management

With the NSX Advanced Load Balancer deployed and configured, we are ready to enable Workload Management and deploy the vSphere with Tanzu Supervisor Cluster.

Deploying the Supervisor Cluster

Start by going to the Menu, click Workload Management.

users

If you have added a Tanzu (Basic, Standard, Advanced) license to your vCenter, then you will have the Get Started option. If a license has not been applied, you will need to complete the form to start the 60-day evaluation period for vSphere with Tanzu.

workload

Step 1. Choose the vCenter Server and Network

Choose vSphere Distributed Switch (VDS) for the networking stack. Click Next.

workload

Step 2. Cluster

Choose the compatible Cluster. Click Next.

workload

Step 3. Storage

Choose the vVol policy we created in an earlier step. Click Next.

workload

Step 4. Configure the Load Balancer

The following information comes from the NSX Advanced Load Balancer we deployed in an earlier step.

  • Specify a Name. Example: nsx-alb
  • Set the Load Balancer type: NSX Advanced Load Balancer
  • Enter the Management IP Address of the NSX Advanced Load Balancer and port number (port 443 is the default). Example: 192.168.22.190:443
  • Enter the Load Balancer Username: Example: admin
  • Enter the Load Balancer Password: Example: VMware1!
  • Enter the Server Certificate: Paste the NSX Advanced Load Balancer certificate here.

workload

Step 5. Management Network

  • Set Network Mode. Static (DHCP can be used if available)
  • Set Management Network vDS port group. Example: dhci_tanzu-management-net
  • Set the Starting IP Address, which is the first IP in a block of 5 consecutive addresses assigned to the Supervisor control plane nodes' management interfaces. Example: 192.168.22.201
  • Set Subnet Mask. Example: 255.255.248.0
  • Set Gateway. Example: 192.168.20.1
  • Set DNS Server(s). Example: 192.168.20.5, 192.168.20.6
  • Optional: Set DNS Search Domain(s). Example: hpelab.local
  • Set NTP Server(s). Example: 192.168.20.1

workload

Step 6: Workload Network

  • Set Network Mode. Static (DHCP can be used if available)
  • Set Internal Network for Kubernetes Services. Use Default setting Example: 10.96.0.0/23
  • Set Workload Network vDS port group. Example: dhci_tanzu-workload-net
  • Set Network Name: Use Default setting
  • Set IP Address Range to be used by the Tanzu Kubernetes guest clusters or other resources. Example: 192.168.100.51 - 192.168.100.90
  • Set Subnet Mask: Example: 255.255.254.0
  • Set Gateway: Example: 192.168.100.1
  • Set DNS Server(s): Example: 192.168.20.5, 192.168.20.6
  • Set NTP Server(s): Example: 192.168.20.1

workload

Step 7: Tanzu Kubernetes Grid Service

Choose the subscribed content library tanzu-content-library we created in an earlier step.

workload

Step 8: Review and Confirm

You can edit the Control Plane Size depending on the number of Pods you will have in your environment. In this case, we will keep the default setting.

workload

Once ready click Finish.

The deployment of the Supervisor Cluster will take some time.

workload

Once complete, the Config Status should show Running.

users

Create Namespace

With the Supervisor Cluster deployed, we are now ready to deploy the Tanzu Kubernetes Clusters. In order to do that, we need to create a Namespace.

Click on the Namespaces tab.

namespace

Click Create Namespace.

namespace

Choose the cluster (Example: dHCI Cluster) and specify a name for the namespace (Example: devteam).

Click Create.

namespace

Once the Namespace has been created, you should see the following.

namespace

Assign storage, permissions, and VM Classes

From here we will configure user permissions, storage, and assign VM classes. We can also set limits to the resources available to the Namespace.

Permissions

We need to add the devops@vsphere.local user we created in an earlier step and assign Edit permissions in order for the user to deploy objects within the Tanzu Kubernetes cluster.

Click Add Permission

namespace

Select the vsphere.local identity source.

Enter the devops user and assign Can edit role. Click OK.

namespace

Storage

Click Add Storage

Select the following Storage Policies. Click OK.

Storage Policy
dHCI vVOL Tanzu Gold Policy
dHCI non-vVOL Tanzu Silver Policy

VM Service

Click Add VM Class.

Select the following VM Classes. Click OK.

VM Class vCPU Memory
best-effort-large 4 vCPU 16GB
best-effort-xlarge 4 vCPU 32GB
guaranteed-large 4 vCPU 16GB
guaranteed-xlarge 4 vCPU 32GB

Click Add Content Library

Choose the subscribed content library tanzu-content-library. Click OK.

Capacity and Usage

(Optional) vSphere with Tanzu admins can set resource limits (CPU, Memory, Storage) available to the Namespace. At this time, we won't make any changes here.

Once permissions and resources have been assigned to the devteam Namespace, we are ready to start interacting with vSphere with Tanzu. In order to do that we will need to download the CLI tools.

Click Open on the link to the CLI Tools.

namespace

This will open a browser window. Select your Operating System, then download and install the CLI tools.

namespace

Authenticate to the Supervisor Cluster

Note

The rest of the guide will be primarily done within a terminal. Basic knowledge of interacting with a Linux system (running commands, editing files, etc) is assumed.

With the CLI tools installed, let's verify that kubectl is working correctly.

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.6+vmware.wcp.2", GitCommit:"619e0e4c3763423b0fe361c592ad3cd1e4914655", GitTreeState:"clean", BuildDate:"2022-02-15T10:23:25Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

The "connection refused" message for localhost:8080 is expected at this point because kubectl has not yet been configured with a cluster context. Next, we need the Supervisor Cluster IP.

Note

You can get the IP from two locations: the URL in the browser window that opened when downloading the CLI tools, or the Supervisor Cluster view in Workload Management.

supervisor

Now we are ready to log in to the Supervisor Cluster.

Example:

kubectl vsphere login --server 192.168.102.52 --insecure-skip-tls-verify
Welcome to Photon 3.0 (\m) - Kernel \r (\l)

Username: devops@vsphere.local
KUBECTL_VSPHERE_PASSWORD environment variable is not set. Please enter the password below
Password:
Logged in successfully.

You have access to the following contexts:
   192.168.102.52
   devteam

If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context <workload name>`
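
As the login output mentions, you can optionally export the KUBECTL_VSPHERE_PASSWORD environment variable before logging in to avoid the interactive password prompt, which is convenient for scripting (the password value below is a placeholder):

$ export KUBECTL_VSPHERE_PASSWORD='<devops-user-password>'
$ kubectl vsphere login --server 192.168.102.52 --vsphere-username devops@vsphere.local --insecure-skip-tls-verify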

Now change the context to work with the devteam Namespace.

$ kubectl config use-context devteam
Switched to context "devteam".

We are now logged into the Supervisor Cluster and are ready to deploy a Tanzu Kubernetes cluster.

(Optional) Global proxy configuration vs. cluster-specific proxy

If you are behind a corporate proxy, vSphere with Tanzu supports two methods to configure proxy settings: at the Supervisor Cluster level or at the guest Kubernetes cluster level.

Note

For newly created Supervisor Clusters in a vSphere 7.0 Update 3 environment, HTTP proxy settings are inherited from vCenter Server. Whether you create the Supervisor Clusters before or after you configure HTTP proxy settings on vCenter Server, the settings are inherited by the clusters.

Global proxy configuration

To configure a global proxy configuration that applies to all guest Kubernetes clusters, the following example YAML manifest is applied at the Supervisor Cluster level.

Note

Proxy settings will need to be applied by a vSphere cluster admin (for example, administrator@vsphere.local). The user devops@vsphere.local doesn't have permissions to patch Supervisor Cluster objects.
Use kubectl vsphere login --server 192.168.102.52 --insecure-skip-tls-verify to switch users.

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TkgServiceConfiguration
metadata:
  name: tkg-service-configuration
spec:
  network:
    proxy:
      httpProxy: http://192.168.1.1:8080 #Proxy URL for HTTP connections
      httpsProxy: http://192.168.1.1:8080 #Proxy URL for HTTPS connections
      noProxy: [192.0.0.0/8,172.17.0.0/16,.domain.local,.local,.svc,.svc.cluster.local] #SVC Pod, Egress, Ingress CIDRs
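
Assuming the manifest above is saved to a file named tkg-proxy.yaml (a hypothetical file name), a vSphere administrator could apply it against the Supervisor Cluster context as follows:

kubectl config use-context 192.168.102.52
kubectl apply -f tkg-proxy.yaml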

Kubernetes cluster proxy configuration

You can use a proxy server with an individual Tanzu Kubernetes cluster by applying the proxy server configuration to the cluster manifest.

spec:
  settings:
    proxy:
        httpProxy: http://192.168.1.1:8080  #Proxy URL for HTTP connections
        httpsProxy: http://192.168.1.1:8080 #Proxy URL for HTTPS connections
        noProxy: [192.0.0.0/8,172.17.0.0/16,.domain.local,.local,.svc,.svc.cluster.local] #SVC Pod, Egress, Ingress CIDRs

Deploy Tanzu Kubernetes Cluster

We need to deploy a Tanzu Kubernetes cluster to host our workloads. A Tanzu Kubernetes cluster consists of control plane nodes and worker nodes.

Information is provided to kubectl using YAML manifests.

Here is an example YAML file, tkc.yaml, for the Tanzu Kubernetes cluster that we will be deploying.

apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: tanzu-devops
spec:
  settings:
    storage:
      defaultClass: dhci-vvol-tanzu-gold-policy
    network:
      serviceDomain: hpelab.local
  topology:
    controlPlane:
      replicas: 3
      vmClass: guaranteed-large
      storageClass: dhci-vvol-tanzu-gold-policy
      tkr:
        reference:
          name:  v1.23.8---vmware.3-tkg.1
    nodePools:
    - name: worker-nodepool-a1
      replicas: 3
      vmClass: guaranteed-xlarge
      storageClass: dhci-vvol-tanzu-gold-policy
      tkr:
        reference:
          name:  v1.23.8---vmware.3-tkg.1

The following explains the components of this deployment.

This describes the type of object that we will be deploying, namely a TanzuKubernetesCluster named tanzu-devops.

apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: tanzu-devops

The spec defines the characteristics of the cluster.

  • spec.settings.storage.defaultClass - Sets a default StorageClass using one of the Storage Policy we assigned to the Namespace.
  • spec.settings.network.serviceDomain - Sets the service domain for the cluster.
  • spec.topology.controlPlane - Sets the control plane node size based on the vmClass assigned to the Namespace, the StorageClass to use when deploying the VMs, and the Kubernetes version (Example: v1.23.8) for the cluster.
  • spec.topology.nodePools - Sets the worker node size based on the vmClass assigned to the Namespace, the StorageClass to use when deploying the VMs, and the Kubernetes version (Example: v1.23.8) for the cluster.
  • To view versions of Kubernetes available, use the following command: kubectl get tanzukubernetesreleases

Note

A StorageClass references the Storage Policy we assigned to the Namespace.

To create the Tanzu Kubernetes cluster, run the following command referencing the tkc.yaml file.

kubectl create -f tkc.yaml
tanzukubernetescluster.run.tanzu.vmware.com/tanzu-devops created

This deployment can take some time (20-30 minutes). We can monitor the deployment using the watch command or the -w flag. Use Ctrl+C to exit watch.

watch kubectl get virtualmachine

or 

kubectl get virtualmachine -w
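
You can also check the status of the cluster object itself; once it reports a ready state, the control plane and worker node pools have been provisioned:

kubectl get tanzukubernetescluster tanzu-devops
kubectl describe tanzukubernetescluster tanzu-devops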

Login to Tanzu Kubernetes Cluster

Once the Tanzu Kubernetes Cluster is up and running, we need to log in.

kubectl vsphere login --server 192.168.102.52 --tanzu-kubernetes-cluster-name tanzu-devops --tanzu-kubernetes-cluster-namespace devteam --vsphere-username devops@vsphere.local --insecure-skip-tls-verify

You should see output similar to below.

$ kubectl vsphere login --server 192.168.102.52 --tanzu-kubernetes-cluster-name tanzu-devops --tanzu-kubernetes-cluster-namespace devteam --vsphere-username devops@vsphere.local --insecure-skip-tls-verify
Welcome to Photon 3.0 (\m) - Kernel \r (\l)

KUBECTL_VSPHERE_PASSWORD environment variable is not set. Please enter the password below
Password:
Logged in successfully.

You have access to the following contexts:
   192.168.102.52
   devteam
   tanzu-devops

If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context <workload name>`

Next switch contexts so that kubectl is targeting our new TKG cluster.

$ kubectl config use-context tanzu-devops
Switched to context "tanzu-devops".

If you have gotten this far, your vSphere with Tanzu cluster is ready for developers to deploy applications.
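
As a quick smoke test from the new context, confirm that the control plane and worker nodes have registered and report Ready (node names and versions will vary by environment):

kubectl get nodes -o wide
kubectl cluster-info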

Example DevOps Use Case with MongoDB

Now that vSphere with Tanzu is fully deployed, we can deploy workloads that leverage persistent storage from the HPE Alletra dHCI platform.

The following steps will be performed to deploy MongoDB.

  • Verify StorageClass availability and cluster permissions
  • Configure Helm repo for MongoDB
  • Deploy MongoDB ReplicaSet Helm chart
  • Create DNS entries to MongoDB services
  • Validate NSX Advanced Load Balancer
  • Connect to MongoDB ReplicaSet using MongoDB Compass

Verify StorageClass availability

vSphere with Tanzu leverages Cloud Native Storage and Storage Policy-Based Management to provision vSphere vVols-based volumes, also known as First Class Disks (FCDs), for use by workloads deployed on vSphere with Tanzu. This removes many of the limitations of third-party CSI drivers, which have limited ability to present block storage over traditional Fibre Channel or iSCSI data paths to Kubernetes clusters running as VMs.

To verify the available StorageClasses, run:

$ kubectl get sc
NAME                                    PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
dhci-vvol-tanzu-gold-policy (default)   csi.vsphere.vmware.com   Delete          Immediate           true                   129m

Configure user permissions (clusterrolebindings)

As the final step before deploying workloads, we need to grant the devops user the ability to run any type of container in the environment.

Important

This shouldn't be done in production, but for a quick start, this will allow all authenticated users the ability to run any type of container.

kubectl create clusterrolebinding default-tkg-admin-privileged-binding --clusterrole=psp:vmware-system-privileged --group=system:authenticated
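
A more restrictive alternative, sketched below using hypothetical namespace and binding names, is to grant the privileged pod security policy only to the service accounts of a specific namespace rather than to every authenticated user:

kubectl create namespace dev-apps
kubectl create rolebinding psp-privileged-dev-apps \
    --clusterrole=psp:vmware-system-privileged \
    --group=system:serviceaccounts:dev-apps \
    --namespace=dev-apps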

Configure Helm repo

In order to install MongoDB, we need to configure the Helm repository that contains the chart.

helm repo add bitnami https://charts.bitnami.com/bitnami

Next update Helm to pull the latest information from the repository.

helm repo update

To inspect the versions of MongoDB available in the repository:

 helm search repo mongodb
NAME                                            CHART VERSION   APP VERSION     DESCRIPTION
bitnami/mongodb                                 13.5.0          6.0.3           MongoDB(R) is a relational open source NoSQL da...
bitnami/mongodb-sharded                         6.2.0           6.0.3           MongoDB(R) is an open source NoSQL database tha...

Deploy MongoDB ReplicaSet Helm chart

With the MongoDB repository in place and a StorageClass available, deploying complex MongoDB resources is greatly simplified by using Helm.

The following is an example of deploying MongoDB using Helm customizations that allow access from outside the vSphere with Tanzu environment to the MongoDB nodes. The ReplicaSet deployment will use LoadBalancer services that will be routed through the NSX Advanced Load Balancer.

helm install mongodb \
    --set architecture=replicaset \
    --set replicaSetName=dev-mongodb \
    --set replicaCount=3 \
    --set auth.rootPassword=secretpassword \
    --set auth.username=dev-user \
    --set auth.password=dev-password \
    --set auth.database=dev-database \
    --set persistence.size=50Gi \
    --set service.type=LoadBalancer \
    --set externalAccess.enabled=true \
    --set externalAccess.autoDiscovery.enabled=true \
    --set rbac.create=true \
    --set clusterDomain=hpelab.local \
    bitnami/mongodb

Note

The clusterDomain parameter will need to be updated to match the root domain (for example, example.com) of your environment.

You should see similar output:

dev-user:~/tanzu$ helm install mongodb \
>     --set architecture=replicaset \
>     --set replicaSetName=dev-mongodb \
>     --set replicaCount=3 \
>     --set auth.rootPassword=secretpassword \
>     --set auth.username=dev-user \
>     --set auth.password=dev-password \
>     --set auth.database=dev-database \
>     --set persistence.size=50Gi \
>     --set service.type=LoadBalancer \
>     --set externalAccess.enabled=true \
>     --set externalAccess.autoDiscovery.enabled=true \
>     --set clusterDomain=hpelab.local \
>     --set rbac.create=true \
>     bitnami/mongodb
NAME: mongodb
LAST DEPLOYED: Tue Jan 24 10:46:11 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: mongodb
CHART VERSION: 13.5.0
APP VERSION: 6.0.3

** Please be patient while the chart is being deployed **

MongoDB&reg; can be accessed on the following DNS name(s) and ports from within your cluster:

    mongodb-0.mongodb-headless.default.svc.hpelab.local:27017
    mongodb-1.mongodb-headless.default.svc.hpelab.local:27017
    mongodb-2.mongodb-headless.default.svc.hpelab.local:27017

To get the root password run:

    export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 -d)

To get the password for "dev-user" run:

    export MONGODB_PASSWORD=$(kubectl get secret --namespace default mongodb -o jsonpath="{.data.mongodb-passwords}" | base64 -d | awk -F',' '{print $1}')

To connect to your database, create a MongoDB&reg; client container:

    kubectl run --namespace default mongodb-client --rm --tty -i --restart='Never' --env="MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD" --image docker.io/bitnami/mongodb:6.0.3-debian-11-r0 --command -- bash

Then, run the following command:
    mongosh admin --host "mongodb-0.mongodb-headless.default.svc.hpelab.local:27017,mongodb-1.mongodb-headless.default.svc.hpelab.local:27017,mongodb-2.mongodb-headless.default.svc.hpelab.local:27017" --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD

To connect to your database nodes from outside, you need to add both primary and secondary nodes hostnames/IPs to your Mongo client. To obtain them, follow the instructions below:

  NOTE: It may take a few minutes for the LoadBalancer IPs to be available.
        Watch the status with: 'kubectl get svc --namespace default -l "app.kubernetes.io/name=mongodb,app.kubernetes.io/instance=mongodb,app.kubernetes.io/component=mongodb,pod" -w'

    MongoDB&reg; nodes domain: You will have a different external IP for each MongoDB&reg; node. You can get the list of external IPs using the command below:

        echo "$(kubectl get svc --namespace default -l "app.kubernetes.io/name=mongodb,app.kubernetes.io/instance=mongodb,app.kubernetes.io/component=mongodb,pod" -o jsonpath='{.items[*].status.loadBalancer.ingress[0].ip}' | tr ' ' '\n')"

    MongoDB&reg; nodes port: 27017

Validate NSX Advanced Load Balancer

We can inspect the NSX Advanced Load Balancer to verify that the Virtual Services were created successfully.

In a browser, navigate to the NSX Advanced Load Balancer cluster URL and log in. Example: https://192.168.22.190

From the Applications tab, the Dashboard shows tiles representing the Virtual Services that have been configured.

Click Virtual Services in the left hand menu. Now you can inspect the Virtual Services that were created automatically for vSphere with Tanzu, including the API services, Control Plane, vSphere CSI Driver, and our MongoDB application.

mongodb

After verifying that the Virtual Services have been created, we need to create DNS A records for each of the MongoDB ReplicaSet nodes.

Create DNS entries to MongoDB services

One of the final steps in providing access to the MongoDB nodes from outside the Kubernetes cluster is creating DNS A records that map to the exposed LoadBalancer services.

Note

NSX Advanced Load Balancer can be configured with a DNS Profile that will handle DNS resolution for all applications deployed within vSphere with Tanzu. This is an advanced feature available with an NSX Advanced Load Balancer Enterprise license. For more information, refer to the NSX Advanced Load Balancer documentation on DNS profiles.

The following A records will need to be created. This information comes from the output of the Helm deployment.

MongoDB can be accessed on the following DNS name(s):

    mongodb-0.mongodb-headless.default.svc.hpelab.local
    mongodb-1.mongodb-headless.default.svc.hpelab.local
    mongodb-2.mongodb-headless.default.svc.hpelab.local

In order to find the IP address assigned to the LoadBalancer service per node, run the following command:

$ kubectl get services
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)           AGE
...
mongodb-0-external         LoadBalancer   10.102.215.237   192.168.102.56   27017:30751/TCP   72m
mongodb-1-external         LoadBalancer   10.100.185.170   192.168.102.57   27017:32610/TCP   72m
mongodb-2-external         LoadBalancer   10.99.222.65     192.168.102.58   27017:30634/TCP   72m
...

Next create a DNS A record for each service. The following is an example PowerShell command that will create a DNS A record in Microsoft DNS.

Add-DnsServerResourceRecordA -computername "dnsserver.example.com" -Name "mongodb-0.mongodb-headless.default.svc" -ZoneName "example.com" -AllowUpdateAny -CreatePtr -IPv4Address "192.168.102.56"

Connect to MongoDB ReplicaSet using MongoDB Compass

With the application deployed and services exposed, we can now connect to MongoDB using MongoDB Compass.

Launch MongoDB Compass. In the New Connection window, enter the following connection string or use Advanced Connection Options.

mongodb://dev-user:dev-password@mongodb-0.mongodb-headless.default.svc.hpelab.local:27017,mongodb-1.mongodb-headless.default.svc.hpelab.local:27017,mongodb-2.mongodb-headless.default.svc.hpelab.local:27017/?authSource=dev-database&replicaSet=dev-mongodb
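
If you prefer the command line, the same connection string also works with the mongosh shell from any host that can resolve the DNS names created earlier (a quick connectivity check):

mongosh "mongodb://dev-user:dev-password@mongodb-0.mongodb-headless.default.svc.hpelab.local:27017,mongodb-1.mongodb-headless.default.svc.hpelab.local:27017,mongodb-2.mongodb-headless.default.svc.hpelab.local:27017/?authSource=dev-database&replicaSet=dev-mongodb"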

mongodb

Upon successful login, you should see dev-database on the left. To create a collection in MongoDB, click Create Collection and specify a name for the collection. Example: dev-collection

mongodb

Highlight the newly created collection.

namespace

Finally with a collection now ready, add data.

Example: A CSV containing a list of addresses was imported.

mongodb

Example DevOps Use Case with NGINX

Now let's deploy a web application to demonstrate additional storage operations, including volume expansion.

Sample web application deployment.

dev-user:~/tanzu$ cat nginx.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-vol
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 75Gi
---  
apiVersion: apps/v1 
kind: Deployment 
metadata: 
  labels: 
    run: nginx 
  name: nginx-deploy-sample 
spec: 
  replicas: 1 
  selector: 
    matchLabels: 
      run: nginx-main 
  template: 
    metadata: 
      labels: 
        run: nginx-main 
    spec: 
      containers: 
      - image: public.ecr.aws/z9d2n7e1/nginx:1.19.5 
        name: nginx 
        volumeMounts: 
        - name: export 
          mountPath: "/usr/share/nginx/html"
        ports:
        - containerPort: 80 
      volumes: 
        - name: export 
          persistentVolumeClaim:
            claimName: nginx-vol

We will create the webserver deployment using the kubectl apply command.

$ kubectl apply -f nginx.yaml
persistentvolumeclaim/nginx-vol created
deployment.apps/nginx-deploy-sample created
$ kubectl rollout status deployment nginx-deploy-sample
deployment "nginx-deploy-sample" successfully rolled out
$ kubectl get pods
NAME                                   READY   STATUS    RESTARTS   AGE
nginx-deploy-sample-6865586d68-hkcdf   1/1     Running   0          35s

Expanding a persistent volume

We can use the Volume Expansion feature in Kubernetes to expand volumes as our application storage requirements increase.

We will show a simple example of expanding the PVC created above using the vSphere CSI Driver.

Existing PVC at 75 GB

We can verify the existing size of the PVC by running the kubectl get pvc command.

$ kubectl get pvc nginx-vol
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
nginx-vol    Bound    pvc-e489ca30-672a-44c5-996f-1c96f3c2436a   75Gi       RWO            dhci-vvol-tanzu-gold-policy   30m

Let's also log in to the Pod to observe the volume from the application's point of view.

$ kubectl exec -it nginx-deploy-sample-6865586d68-hkcdf -- /bin/bash
root@nginx-deploy-sample-6865586d68-hkcdf:/# df -h | grep /usr/share/nginx/html
Filesystem      Size  Used Avail Use% Mounted on
...
/dev/sdb         73G   60M   69G   1% /usr/share/nginx/html
...

Expand PVC to 100 GB

In order to expand the PVC, we need to use the kubectl patch command to increase the backing volume size.

kubectl patch pvc/nginx-vol --patch '{"spec": {"resources": {"requests": {"storage": "100Gi"}}}}'
persistentvolumeclaim/nginx-vol patched

Note

This may take a few minutes to complete before being able to observe the new PVC size.
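
To watch the claim update, or to inspect the resize events on it, the following optional commands can be used:

kubectl get pvc nginx-vol -w
kubectl describe pvc nginx-vol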

$ kubectl get pvc nginx-vol
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
nginx-vol    Bound    pvc-e489ca30-672a-44c5-996f-1c96f3c2436a   100Gi      RWO            dhci-vvol-tanzu-gold-policy   34m

Now we can log in to verify the application recognizes the volume expansion.

$ kubectl exec -it nginx-deploy-sample-6865586d68-hkcdf -- /bin/bash
root@nginx-deploy-sample-6865586d68-hkcdf:/# df -h | grep /usr/share/nginx/html
Filesystem      Size  Used Avail Use% Mounted on
...
/dev/sdb         98G   60M   94G   1% /usr/share/nginx/html
...

We can clean up the deployment using the kubectl delete command:

$ kubectl delete -f nginx.yaml
persistentvolumeclaim "nginx-vol" deleted
deployment.apps "nginx-deploy-sample" deleted

VMs and Kubernetes

Finally, one of the most powerful aspects of vSphere with Tanzu is that the Tanzu Supervisor Cluster and the guest Tanzu Kubernetes clusters are native objects within vSphere. Your Kubernetes clusters running in VMs are no longer hidden from the view of your vSphere admins.

vms

Using VM Storage Policies for all workloads (both VMs and containerized applications running in vSphere with Tanzu) gives the administrator the ability to match VM Storage Policies to the workloads, or to tune the performance of an application as needed during runtime, either to increase performance or to reduce bottlenecks caused by resource contention. The rules in a VM Storage Policy can be created and applied to individual vVols-based VMs, to groups of VMs, or to the persistent storage of applications in Kubernetes, providing powerful, automated storage management.

To apply or edit the rules of a policy, go to the vCenter Menu, Policies and Profiles, VM Storage Policies. Rules can be changed easily at any time.

Choose the policy and click Edit.

For example here, we are editing the Performance limits of a storage policy for the HPE Alletra dHCI.

rules

Cloud Native Storage UI in vCenter

In the devteam Namespace, you have visibility to the volumes that have been created in your Tanzu Kubernetes clusters.

cns

If we click on the number of PVCs, we can see a more detailed list of the volumes backing the PVCs.

cns

VMware admins can also view the backing volumes in use by Tanzu Kubernetes clusters in vSphere with Tanzu by highlighting the Datacenter object in the vSphere inventory, going to the Monitor tab, and clicking Container Volumes under Cloud Native Storage.

cns

When a PVC is mounted to a Pod, this view also shows the Pod information, which is very helpful when tracking workloads.

cns
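
The same container volumes can also be cross-checked from the Kubernetes side; for example, from the Tanzu Kubernetes cluster context used earlier:

kubectl get pvc -A
kubectl get pv
kubectl get volumeattachments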

Summary

This configuration guide offers practical insights into how Hewlett Packard Enterprise brings together the key components of its portfolio, along with strategic partnerships with VMware, to simply and easily create a shared containerized environment. The HPE Alletra dHCI solution combines the powerful HPE ProLiant platform with the capable HPE Alletra Storage array to deliver unified setup, configuration, and management. These benefits are enhanced by a containerized persistent storage solution for both on-premises locations and the cloud, and they are topped off by integration with vSphere with Tanzu. As one of the earliest adopters of containers for persistent storage, Hewlett Packard Enterprise has gained the depth of experience and capability to provide a true hybrid cloud for customers who want to leverage containerized stateful applications alongside their traditional VM infrastructure.

The solution enables several critical use cases for virtualized and containerized environments:

  • Dev/test workflows to develop in the cloud and deploy on-premises
  • Disaster recovery for complete data protection across on-premises environments
  • Instant availability of the right data for DevOps and CI/CD
  • Sophisticated analytics to analyze data in the cloud through HPE InfoSight without disrupting on-premises workloads
  • Consistent advanced data services and APIs
  • Everything as a service

The solution described in this configuration guide is just the beginning of the capabilities and integrations that Hewlett Packard Enterprise plans to offer in the future. Through partnerships with VMware and the integration with vSphere with Tanzu, this newly created solution will become the launching point for managing all your containerized workloads on-premises and will provide robust solutions to modernize your applications. Hewlett Packard Enterprise will continue to provide insights and best practices for stateful and stateless applications through simplified setup, configuration, and monitoring.