VMware Tanzu Mission Control (TMC) is now officially GA for VMware Cloud Provider Partners

On June 4, 2021, VMware Tanzu Mission Control (TMC) became available to our VCPP Cloud Providers. This is exciting news for many of our Managed Service Providers looking to offer Kubernetes value-added services across multiple clouds.

Tanzu Mission Control helps unlock new opportunities for VMware Cloud Providers to offer Kubernetes (K8s) managed services across multiple clouds. TMC can manage multi-tenant K8s clusters created on VMware Cloud Director (VCD) in addition to any CNCF-conformant K8s clusters on public clouds, including the native K8s implementations at hyperscalers such as EKS, AKS, and GKE.

VMware Tanzu Mission Control

Here is a summary of the top features TMC will help our VMware Cloud Providers offer to their tenants:

  • Cluster lifecycle management: Provision, scale, upgrade and delete Tanzu Kubernetes Grid clusters via Tanzu Mission Control across environments.
  • Attaching clusters: Attach any conformant Kubernetes clusters running in other environments—either on-prem or in public clouds—to Tanzu Mission Control for centralized management.
  • Centralized policy management: Apply consistent policies, such as access, security, and even custom policies to a fleet of clusters and namespaces at scale.
  • Observability and diagnostics: Gain global observability of the health of clusters and workloads across clouds for quick diagnostics and troubleshooting.
Read More

VMware Cloud Director Cannot verify the Kubernetes API endpoint certificate on this Supervisor Cluster Error

While trying to connect my VMware Cloud Director to my Tanzu Kubernetes Grid Service (TKGS) environment in vSphere 7 Update 2, I kept hitting a certificate error. The error presented itself while configuring the Kubernetes policy for my Provider VDC, after following the wizard at:

Resources ==> Cloud Resources ==> Provider VDCs ==> Kubernetes.

The wizard proceeds properly until I get to the Machine Classes tab, which is where VMware Cloud Director checks the certificate of the Supervisor Cluster. That is when the below error presents itself:

“VMware Cloud Director cannot verify the Kubernetes API endpoint certificate on this Supervisor Cluster. It might be a vSphere generated default self-signed certificate or another invalid certificates. For the steps to install your own certificate, see Change the Kubernetes API Endpoint Certificate. For trusting the certificate, see https://kb.vmware.com/s/article/80996”

Kubernetes Policies in VCD 10.2 with vCenter 7.0 and later Tanzu are non-functional

The cause of the issue is the certificate structure of Tanzu Kubernetes in vCenter: the certificate of the Supervisor Cluster is not automatically trusted by VCD, so calls made to the Supervisor Cluster by VCD fail due to SSL errors. While I hit this issue with VCD 10.2.2, it actually affected previous versions as well, especially when using self-signed certificates.

The KB80996 article on how to fix this needs to be updated with the step marked in red below, but for those who need to resolve the issue today, I wanted to document the steps here.… Read More
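Before trusting the certificate per KB80996, you need the PEM the Supervisor Cluster actually presents, and it helps to confirm it really is self-signed (subject equals issuer, as with the vSphere default). Here is a minimal openssl sketch; since it cannot reach a live Supervisor Cluster, it generates a stand-in self-signed certificate, and the commented `s_client` line shows how you would fetch the real one from your Kubernetes API endpoint:

```shell
# In a real environment, fetch the certificate the Supervisor Cluster presents:
#   openssl s_client -connect <k8s-api-endpoint>:443 -showcerts </dev/null \
#     | openssl x509 -outform PEM > supervisor.pem
# Stand-in for this sketch: a self-signed certificate like the vSphere default
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out supervisor.pem \
  -days 1 -subj "/CN=supervisor.local" 2>/dev/null

# Self-signed certificates have subject == issuer; VCD will not trust them by default
subject=$(openssl x509 -in supervisor.pem -noout -subject)
issuer=$(openssl x509 -in supervisor.pem -noout -issuer)
[ "${subject#subject=}" = "${issuer#issuer=}" ] && echo "certificate is self-signed"
```

Once you have the real PEM, KB80996 walks through importing it into VCD's trusted certificates store.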

VMware Cloud Providers Feature Fridays

My colleague Guy Bartram has been leading over 30 VMware Cloud Provider focused sessions, hosting a different expert in each. I was previously a guest on the Bitnami and App Launchpad Feature Friday.

Feature Friday Bitnami and App Launchpad

I highly recommend every VMware Cloud Provider take a look at these sessions and subscribe to the VMware Feature Fridays YouTube channel.

Below is the list of Feature Friday sessions available as of the day this article was written. Please refer to the VMware Feature Fridays YouTube channel link above to keep up with newer sessions:

Feature Friday Episode 33 – Terraform vCloud Director Provider 3.1 and NSX-T

Terraform vCloud Director Provider 3.1 delivers infrastructure as code for VMware Cloud Director partners and their customers. Learn what is available in the new 3.1 release, particularly the support for NSX-T.

Feature Friday Episode 32 – VMware Cloud Director APIs

In this video I’m joined by Benoit Serratrice, Staff Cloud Solutions Architect at VMware, to discuss VMware Cloud Director APIs and SDKs. Watch this session to understand the capabilities of the available APIs and the corresponding SDK coverage, so you can make the right decisions and save time when programmatically automating your cloud solution.… Read More

Cloud Director Kubernetes as a Service with CSE 2.6.x Demo

If you have been following our VMware Cloud Provider space for a while, you have probably been introduced to our Cloud Director Kubernetes as a Service offering based on VMware Container Service Extension. In the past, Container Service Extension was command-line only; a nice UI was introduced in CSE 2.6.0. Here is a demo of what the new UI of Cloud Director Container Service Extension looks like out of the box:

If you are curious how to install CSE 2.6.1, you can follow my earlier post: VMware Container Service Extension 2.6.1 Installation step by step… Read More

VMware Container Service Extension 2.6.1 Installation step by step

One of the most requested features in previous versions of the VMware Container Service Extension (CSE) was a native UI. As of CSE 2.6 we have added one, which adds to the friendliness of CSE and will make it much more appealing to many of our cloud providers. With just a few clicks and a few easy-to-understand fields, customers can deploy K8s clusters at our Cloud Providers.

A kubeconfig file is auto-generated as well and can be handed out right away to the developer, limiting the effort required from the tenant operations team/cloud provider administrators. Here is a quick screenshot teaser of what CSE looks like. You can find a nice demo of the CSE 2.6.1 UI in my following blog post: Cloud Director Kubernetes as a Service with CSE 2.6.x Demo

For more info on what is new with CSE 2.6.x please check my following blog post: vCloud Director Container Service Extension 2.6.x is here


Container Service Extension Create New Cluster


In this post, I am assuming you have an existing vCloud Director environment with AMQP already configured. To start the installation of CSE 2.6.1, you will need a supported OS. In my case, I have decided to go with CentOS 8.1.… Read More
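The CSE server itself is a Python package, so the install boils down to a Python prerequisite plus a pip install. A quick sketch, with the install command shown as a comment (the package name and version pin are from this walkthrough; run them on your CSE machine):

```shell
# CSE 2.6.x checks for Python 3.7.3+ at startup
python3 -c 'import sys; assert sys.version_info >= (3, 7), sys.version'
echo "python prerequisite OK"

# With a suitable Python in place, the server installs from PyPI:
#   python3 -m pip install container-service-extension==2.6.1
#   cse version
```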

CSE 2.6.1 Error: Default template my_template with revision 0 not found. Unable to start CSE server.

While trying to run my Container Service Extension 2.6.1 server after a successful installation, I kept getting the following error: “Default template my_template with revision 0 not found. Unable to start CSE server.”

To fix this you will need to:

  1. Edit your CSE config.yaml file to include the right name of the default template and revision number. (Much more on this below)
  2. Encrypt your CSE config file again
  3. Re-Run CSE with your encrypted config file
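In my case, step 1 meant pointing the broker section of config.yaml at a template that actually exists in my catalog. A hedged fragment (key names per the CSE 2.6 config format; the template name and revision are from my environment, so substitute the values CSE reports when loading templates):

```yaml
# Fragment of config.yaml -- only the relevant broker keys shown
broker:
  default_template_name: ubuntu-16.04_k8-1.15_weave-2.5.2
  default_template_revision: 1
```

After editing, steps 2 and 3 are `cse encrypt config.yaml --output encrypted-config.yaml` and `cse run --config encrypted-config.yaml` (the encrypt syntax as I used it).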

In this post I will explain this in a bit more detail for those hitting the same issue, as the resolution is quite simple but might not be as obvious if you are doing this for the first time.

For a start, here is what the exact error looks like:

[root@vtcse01 Python-3.7.3]# cse run --config encrypted-config.yaml
Required Python version: >= 3.7.3
Installed Python version: 3.7.3 (default, May  4 2020, 15:36:31)
[GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
Password for config file decryption:
Decrypting 'encrypted-config.yaml'
Validating config file 'encrypted-config.yaml'
Connected to AMQP server (VTAMQP01.vt.com:5672)
InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised.
Connected to vCloud Director (vcd.vt.com:443)
Connected to vCenter Server 'vtvc01' as 'administrator@vsphere.local' (vtvc01.vt.com:None)
Config file 'encrypted-config.yaml' is valid
Loading k8s template definition from catalog
Found K8 template 'ubuntu-16.04_k8-1.15_weave-2.5.2'
Read More

vCloud Director Container Service Extension 2.6.x is here!

As more and more of our Cloud Providers are asked to provide K8s and container services to their customers in a self-service, multi-tenant fashion, we released Container Service Extension over 30 months ago.

The goal of Container Service Extension was to offer an open source plugin for vCloud Director that gives our Cloud Providers the capability to spin up and scale Kubernetes clusters with ease and with minimal Kubernetes knowledge required of the infrastructure team. It is an easy service that our Cloud Providers can add to their catalog at no extra cost specific to CSE. For more information about CSE, I would suggest you take a look at: https://vmware.github.io/container-service-extension/INTRO.html

VMware Container Service Extension

Our Container Service Extension has been evolving quickly, and we just released CSE 2.6. The CSE 2.6 beta has been in testing for some time by several partners. This release includes a lot of great enhancements, including a native UI.

Here is a summary of features included with Container Service Extension 2.6:

  • New Templates with updated Kubernetes and Weave
  • In-place Kubernetes upgrade for clusters
    • CSE offers the new capability to do in-place upgrades of Kubernetes-related software in native clusters.
Read More

VMware Octant

VMware Octant an Open Source Project

As I have been discovering more of K8s, I have grown to be a fan of VMware Octant. As defined by the open source project:

A highly extensible platform for developers to better understand the complexity of Kubernetes clusters.

Octant is a tool for developers to understand how applications run on a Kubernetes cluster. It aims to be part of the developer’s toolkit for gaining insight and approaching complexity found in Kubernetes. Octant offers a combination of introspective tooling, cluster navigation, and object management along with a plugin system to further extend its capabilities.

The reason I like Octant is that it allows me to visualize my K8s environment very easily. I can see all of my Deployments, ReplicaSets, DaemonSets, Pods, ReplicationControllers, Jobs, and almost every other K8s construct in a visual presentation. It can also be of great value to K8s newbies in understanding K8s constructs, as it allows them to visualize the effects of changes made by running a particular kubectl command.

Here are a few examples of the useful screens you can find in Octant. Here is the Octant overview tab, listing a summary of Deployments, ReplicaSets, Jobs, Pods, Services, Ingress controllers, and more.… Read More

Kubernetes as a Service utilizing Nirmata and VMware vCloud Director

Over two years ago, VMware released the vCloud Director Container Service Extension (CSE). The idea back then was to allow service providers to spin up Kubernetes clusters for their customers with ease, using a single command line. CSE also allowed our VMware Cloud Providers to offer Kubernetes clusters as a self-service, where tenants can request, create, and delete their own clusters within a few minutes and with very minimal interaction.

Even better, CSE had two methods of integration: it could deploy Kubernetes clusters directly into vCloud Director as a vApp containing all master and worker nodes, or integrate with Enterprise PKS and deploy on top of PKS, benefiting from PKS capabilities and features.

While CSE has been a very capable open source plugin originated by VMware, it was missing a few features that many Cloud Providers desired. Nirmata saw the need and created the Nirmata Plugin for vCloud Director, which fulfilled the desired enhancements below:

Nirmata for Kubernetes Logo

Graphical User Interface (GUI)

While the vCloud Director Container Service Extension (CSE) has been very capable as far as deploying and managing Kubernetes clusters goes, it lacked a nice Graphical User Interface (GUI). Actually, out of the box, CSE had no GUI at all.… Read More

How to force delete a PKS Cluster

There are times when you want to delete a PKS cluster, but the deletion with the usual pks delete-cluster command fails.

pks delete-cluster <PKS Cluster Name>

This is usually due to issues with the cluster: either it failed for some reason during deployment, or it was altered in a way that destabilized it. No worries, there is still a way to force delete it, and that is what I will focus on in this post. Please note you should always try to delete PKS clusters using the pks delete-cluster command first and only resort to a BOSH deployment delete when that does not work.

Please note that in this post I am assuming you have already set up the BOSH CLI and are ready to use it. If you don’t have that set up already, I would suggest you follow the instructions at the following link.

Get your Bosh Credentials:

1-  Login to PCF OPS Manager Interface

Login to PCF OPS Manager Interface

2- Click on the Installation Dashboard

3- Click on BOSH Director for vSphere

4- Click Credentials tab

5- Click on the “Link to Credentials” link next to Director Credentials

PKS Get Bosh Director Credentials

6- Keep a copy of the Director Password, as you will need it later. It will look something like below:

PKS BOSH Director Password

Force Delete PKS Cluster using Bosh:

1- SSH to your Operations Manager Appliance

2- Run the following Command to login to BOSH

 $ bosh -e pks login

Use the username and password collected earlier from BOSH Director for vSphere.… Read More
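For reference, the remaining force-delete boils down to finding the cluster's BOSH deployment and deleting it. A sketch, assuming the service-instance_&lt;UUID&gt; naming convention PKS uses for its deployments (the UUID below is a placeholder, and the bosh commands are shown as comments since they require your environment):

```shell
# Placeholder -- take your cluster's UUID from `pks cluster <name>` ("UUID:" field)
CLUSTER_UUID="aa1b2c3d-4e5f-6071-8293-a4b5c6d7e8f9"
# PKS backs each cluster with one BOSH deployment named service-instance_<UUID>
DEPLOYMENT="service-instance_${CLUSTER_UUID}"
echo "$DEPLOYMENT"

# After `bosh -e pks login`, confirm the deployment exists, then force-delete it:
#   bosh -e pks deployments | grep "$DEPLOYMENT"
#   bosh -e pks -d "$DEPLOYMENT" delete-deployment --force
```

The --force flag tells BOSH to continue past errors from unresponsive VMs, which is typically what blocks a normal pks delete-cluster on a broken cluster.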