The VMware Cloud Director appliance deployment fails when you enable the setting to expire the root password upon the first login

While deploying vCloud Director 10.1 in my home lab, I ran into the errors below, and I wanted to share the resolution with others in case you are facing the same issue and have missed it in the release notes.

“No nodes found in cluster, this likely means PostgreSQL is not running on this node. Consult the management UI from another node where PostgreSQL is running. Otherwise, check /opt/vmware/var/log/vcd/vcd_ova_ui_app.log if you think this is an error.”

The following errors appear in the /opt/vmware/var/log/vcd/vcd_ova_ui_app.log file:

-------------- /opt/vmware/var/log/vcd/vcd_ova_ui_app.log start --------------

2020-04-19 13:23:52,026 | ERROR | uWSGIWorker1Core0 | ERROR: Command '['sudo', '-n', '-u', 'postgres', '/opt/vmware/appliance/bin/api/replicationClusterStatus.py']' returned non-zero exit status 255.

2020-04-19 13:23:52,027 | ERROR | uWSGIWorker1Core0 | Return code: 255

2020-04-19 13:23:52,027 | DEBUG | uWSGIWorker1Core0 | Got cluster status: {}

2020-04-19 13:24:16,968 | DEBUG | uWSGIWorker1Core0 | Appliance is of type primary or standby

2020-04-19 13:24:16,969 | DEBUG | uWSGIWorker1Core0 | getting cluster status: ['/opt/vmware/appliance/bin/api/replicationClusterStatus.py']

2020-04-19 13:24:17,098 | ERROR | uWSGIWorker1Core0 | ERROR: Command '['sudo', '-n', '-u', 'postgres', '/opt/vmware/appliance/bin/api/replicationClusterStatus.py']' returned non-zero exit status 255.

2020-04-19 13:24:17,099 | ERROR | uWSGIWorker1Core0 | Return code: 255

2020-04-19 13:24:17,099 | DEBUG | uWSGIWorker1Core0 | Got cluster status: {}

-------------- /opt/vmware/var/log/vcd/vcd_ova_ui_app.log end --------------
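Before digging further, a quick way to confirm whether the forced root-password expiry is in play is to check it from the appliance console. A minimal sketch, assuming you have console access to the appliance:

```shell
# Check whether the root password is flagged as expired (forced expiry on first login)
chage -l root

# Setting a new password clears the expired state
passwd root

# Then re-check the OVA UI log quoted above for new errors
tail -n 50 /opt/vmware/var/log/vcd/vcd_ova_ui_app.log
```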

There are usually two common names for the above issue with vCloud Director 10.1… Read More

How to Change VMware NSX-T Manager IP Address

There is often a situation where you need to change the IP addresses of your NSX-T Managers. For example, you might be changing your IP schema, as I am currently doing in my home lab. NSX-T does not have a field to change the IP address of its NSX Managers; instead, you need to add new NSX Managers with the new desired IPs and then gradually delete the old ones. Luckily, the process is easy and straightforward, as documented below.

Note: While I only have a single NSX-T Manager in my environment, as it is a small home lab, in a production environment you should always keep three NSX Managers active to sustain NSX-T availability. Follow one of the two approaches below to maintain that.

  • Scenario A:
    • Manager A has IP address 172.16.1.11.
    • Manager B has IP address 172.16.1.12.
    • Manager C has IP address 172.16.1.13.
    • Add Manager D with a new IP address, for example, 192.168.55.11.
    • Remove Manager A.
    • Add Manager E with a new IP address, for example, 192.168.55.12.
    • Remove Manager B.
    • Add Manager F with a new IP address, for example, 192.168.55.13.
    • Remove Manager C.
  • Scenario B:
    • Manager A has IP address 172.16.1.11.
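At each step of either scenario, it is worth confirming that the management cluster is healthy before removing the next manager. One way to check, sketched below with a placeholder address and credentials, is the NSX-T REST API:

```shell
# Query overall cluster health; the address and credentials are placeholders
curl -k -u 'admin:password' https://192.168.55.11/api/v1/cluster/status
# Look for an overall_status of STABLE before removing another manager
```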
Read More

VMware Cloud Director 10.1 is here!

VMware Cloud Director 10.1 has just been released and is ready for you to try! As we have grown used to over the past few releases of vCloud Director, the number of features added in each release is substantial, even in minor releases. VMware Cloud Director 10.1 is no different and comes with plenty of features that our Cloud Providers will be thrilled with.

As a start, you might have noticed the name change. vCloud Director has undergone a major rebranding and is now called VMware Cloud Director, as our Cloud Director service across the hyperscalers is on its way as well. While some of our documentation might take some time to get updated, and you might still find old references on the web to it being called vCloud Director, going forward it will be named VMware Cloud Director. Please welcome “VMware Cloud Director”, which brings a huge number of features with it.

App Launchpad

Many of our Cloud Providers have often asked about adding an application marketplace to our Cloud Director portal. Now, with App Launchpad, they get a very modern-looking marketplace within the Cloud Director portal, where they can publish applications from different sources, including Bitnami Community Catalog VM images, ISV apps, and even in-house developed applications.… Read More

Google Cloud joins Azure and AWS in offering VMware Cloud

VMware had been working hard lately on executing their vision: “Build, Run, Manage, Connect and Protect Any App on Any Cloud on Any Device.”

VMware Cloud everywhere, Amazon AWS, Microsoft Azure, and Google GCP

If you have spoken to any of my colleagues lately, or attended VMworld or any other VMware event, I am sure you have heard it, or a slightly modified version of it.

Most of you are familiar with VMware Cloud on AWS (VMC on AWS) by now, as it was the first offering of VMware Cloud on a hyperscaler. It offers the native VMware stack on a hyperscaler, where the environment gives you the same features and manageability you are used to on-prem, in addition to integration with the hyperscaler's services, in an environment where you don't have to worry about maintenance, upgrades, or patching, while enjoying all the new features as new releases hit the market.

The idea has appealed to a large number of customers, and thousands of VMC on AWS hosts were spun up over the past couple of years. Microsoft has followed Amazon's suit and lately started offering Azure VMware Solutions. I have a feeling that it will pick up quite quickly, as both VMware and Microsoft have a strong presence in the enterprise world.… Read More

vCloud Director Container Service Extension 2.6.x is here!

As more and more of our Cloud Providers are asked to provide K8s and container services to their customers in a self-service, multi-tenant fashion, we released Container Service Extension over 30 months ago.

The goal of Container Service Extension is to offer an open source plugin for vCloud Director that gives our Cloud Providers the ability to spin up and scale Kubernetes clusters with ease and with minimal Kubernetes knowledge on the infrastructure team. It is an easy service that our Cloud Providers can add to their catalog at no extra cost specific to CSE. For more information about CSE, I would suggest you take a look at: https://vmware.github.io/container-service-extension/INTRO.html

VMware Container Service Extension

Our Container Service Extension has been evolving quickly, and we just released CSE 2.6. The CSE 2.6 beta has been in testing for some time by several partners. This release brings a lot of great enhancements, including a native UI.

Here is a summary of features included with Container Service Extension 2.6:

  • New Templates with updated Kubernetes and Weave
  • In place Kubernetes upgrade for clusters
    • CSE offers the new capability to perform in-place upgrades of Kubernetes-related software in native clusters.
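The in-place upgrade above is driven from the CSE client plugin for vcd-cli. A rough sketch of what the command looks like (the cluster name, template name, and revision are placeholders, and the exact syntax may vary between CSE builds):

```shell
# Upgrade the Kubernetes software of an existing native cluster in place
vcd cse cluster upgrade mycluster ubuntu-16.04_k8-1.17_weave-2.6.0 1
```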
Read More

VMware Cloud Provider Pod 1.5.0.x Update is here

The product versions deployed with Cloud Provider Pod have been updated to include vCloud Director 10.0. Below is the full list of version changes:


  • VMware ESXi and vCenter Server are updated to 6.7 Update 3
  • VMware NSX for vSphere is updated to 6.4.6
  • VMware vCloud Director is updated to 10
  • VMware vRealize Network Insight is updated to 5.0
  • VMware vRealize Operations Manager is updated to 8.0
  • VMware vRealize Operations Manager Multi-Tenant App is updated to 2.3
  • VMware vRealize Operations Manager for NSX 
  • VMware vRealize Operations Manager – Cloud Pod Management Pack is updated to 3.0

  • You must get a new deployment guide from the Cloud Provider Pod designer and follow the instructions included in it. The instructions have changed: you must now manually upload products.json to the Cloud Provider Pod appliance to get the new product versions.

  • There is a critical fix in the Drivers and Tools section of the Cloud Provider Pod page to resolve an issue with downloading the required deployment software. See the known issues for more details.

As there are plenty of improvements and new features in vCloud Director 10.0, if you were waiting to stand up your setup with Cloud Provider Pod, you can now do it with vCloud Director 10.0.… Read More

VMware Octant

VMware Octant an Open Source Project

As I have been discovering more of K8s, I have grown to be a fan of VMware Octant. As defined by the open source project:

A highly extensible platform for developers to better understand the complexity of Kubernetes clusters.

Octant is a tool for developers to understand how applications run on a Kubernetes cluster. It aims to be part of the developer’s toolkit for gaining insight and approaching complexity found in Kubernetes. Octant offers a combination of introspective tooling, cluster navigation, and object management along with a plugin system to further extend its capabilities.

The reason I like Octant is that it allows me to visualize my K8s environment very easily. I can see all of my Deployments, ReplicaSets, DaemonSets, Pods, ReplicationControllers, Jobs, and almost every other K8s construct in a visual presentation. It can also be of great value to K8s newbies for understanding K8s constructs, as it allows them to visualize the effects of the changes made by running a particular kubectl command.
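Getting started with Octant is quick. A minimal sketch, assuming Octant is installed and kubectl already works against your cluster:

```shell
# Octant reads the same kubeconfig kubectl uses and serves a local dashboard
octant

# Or point it at a specific kubeconfig (the path is a placeholder)
KUBECONFIG=$HOME/.kube/lab-config octant
```

By default the dashboard is served locally on 127.0.0.1:7777.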

Here are a few examples of the useful screens you can find in Octant. Here is the Octant overview tab, listing a summary of Deployments, ReplicaSets, Jobs, Pods, Services, Ingress controllers, and more.… Read More

Kubernetes as a Service utilizing Nirmata and VMware vCloud Director

Over two years ago, VMware released the vCloud Director Container Service Extension (CSE). The idea back then was to allow service providers to spin up Kubernetes clusters for their customers with ease, using a single command. CSE also allowed our VMware Cloud Providers to offer Kubernetes clusters as a self-service, where tenants can request, create, and delete their own clusters within a few minutes and with very minimal interaction.
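That single-command experience looks roughly like the following sketch (the cluster name, network, and node count are placeholders):

```shell
# Tenant creates a Kubernetes cluster through the CSE plugin for vcd-cli
vcd cse cluster create mycluster --network mgmt-net --nodes 2

# And deletes it just as easily when done
vcd cse cluster delete mycluster
```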

Even better, CSE has two integration methods: it can deploy Kubernetes clusters directly into vCloud Director as a vApp containing all master and worker nodes, or it can integrate with Enterprise PKS and deploy on top of PKS, benefiting from PKS capabilities and features.

While CSE is a very capable open source plugin originated by VMware, it was missing a few features that many Cloud Providers desired. Nirmata saw the need and created the Nirmata plugin for vCloud Director, which fulfills the desired enhancements below:

Nirmata for Kubernetes Logo

Graphical User Interface (GUI)

While the vCloud Director Container Service Extension (CSE) has been very capable as far as deploying and managing Kubernetes clusters goes, it lacked a nice Graphical User Interface (GUI). Actually, out of the box, CSE had no GUI at all.… Read More

vCloud Director 10.0 is here!!

VMware vCloud Director 10

For those who have been following VMware vCloud Director's evolution lately, it has been developing quite fast, with new features added continuously at an unmatched pace. The development vCloud Director has seen over the past couple of years amazes me, as it has surpassed what was delivered in the previous six-year development cycle.

vCloud Director 10 Evolution over time

vCloud Director 10.0 is here! Below is a summary of what it brings to the table: tons of great features and enhancements, with more to come as we progress.

Modernizing Cloud Operations and Improving Core Efficiency

  • VCD as Central Point of Management: added Admin and Tenant UI, listing of vCenter inventory
  • Events API
  • VCD Appliance: Enhancements in migration, HA, and certificate management. These are very important for those keen to move to the vCD appliance, which is the direction vCloud Director is going in the future.
  • VRO Plugin: Enable Multi-tenancy in VRO plugin and VRO cluster support. This will take vCD extensibility to the next level.
  • Enhanced support for VM storage placement with SDRS: added support for intra-SPOD placement; improved support for: inter-SPOD placement, datastore + SPOD hybrid placement, named disks and linked clones’ placement
  • Upgrading the appliance using the default VMware repository
  • Template based deployments, New Template Repository.
Read More

How to force delete a PKS Cluster

There are times when you want to delete a PKS cluster, but deletion with the usual pks delete-cluster command fails.

pks delete-cluster <PKS Cluster Name>

This is usually due to issues with the cluster: either it failed for some reason during deployment, or it was altered in a way that destabilized it. No worries, there is still a way to force delete it, which is what I will focus on in this post. Please note that you should always try to delete PKS clusters using the pks delete-cluster command first, and only resort to deleting the BOSH deployment when that does not work.

Please note that in this post I am assuming you have already set up the BOSH CLI and are ready to use it. If you don't have that set up already, I would suggest you follow the instructions at the following link.

Get your Bosh Credentials:

1-  Login to PCF OPS Manager Interface


2- Click on the Installation Dashboard

3- Click on BOSH Director for vSphere

4- Click Credentials tab

5- Click on the “Link to Credentials” link next to Director Credentials

PKS Get Bosh Director Credentials

6- Keep a copy of the Director Password, as you will need it later. It will look something like below:

PKS BOSH Director Password

Force Delete PKS Cluster using Bosh:

1- SSH to your Operations Manager Appliance

2- Run the following Command to login to BOSH

 $ bosh -e pks login

Use the username and password collected earlier from BOSH Director for vSphere.… Read More
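From there, the force delete itself typically looks like the sketch below, assuming the BOSH environment alias `pks` used above and that PKS clusters show up as service-instance deployments:

```shell
# Find the deployment backing the broken cluster
bosh -e pks deployments

# Delete it; --force continues past errors left behind by the failed cluster
bosh -e pks -d service-instance_<cluster-uuid> delete-deployment --force
```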

Installation of NSX 6.4 VIB on ESXi 6.7 host failed

I often interact with customers who have had issues getting the NSX VIB installed on their ESXi hosts. Most of the time, it is a tedious configuration issue or a forgotten step. I hit a similar issue today in my lab by missing a simple step, and I wanted to share the error and the fix in the hope it helps others recover from the same error quicker.

I was getting the following error every time I tried to install the NSX 6.4.5 VIB on my ESXi 6.7U2 host, and a similar error when I clicked the Resolve button. The error stated “Unable to access agent VIB module at https://192.168.1.211/bin/vdn/vibs-6.4.5/6.7-13168956/vxlan.zip (_NSX_87_VTRES01_VMware Network Fabric).” A screenshot of the error is below.

Unable to access agent VIB Module at vxlan.zip

There was a more detailed error on my NSX screen, which unfortunately I seem to have lost the screenshot for, but it stated something like below:

vtesxi01.vt.com: Unable to access agent offline bundle at https://192.168.1.211/bin/vdn/vibs-6.4.5/6.7-13168956/vxlan.zip.
Cause : <esxupdate-response>
<version>1.50</version>
<error errorClass="MetadataDownloadError">
<errorCode>4</errorCode>
<errorDesc>Failed to download metadata.</errorDesc>
<url>https://vtvc01.vt.com:443/eam/vib?id=ecf4a884-c9f5-406c-b57e-75a6613a3651</url>
<localfile>None</localfile>
<msg>('https://vtvc01.vt.com:443/eam/vib?id=ecf4a884-c9f5-406c-b57e-75a6613a3651', '/tmp/tmpjnw369p9', '[Errno 14] curl#6 - "Couldn\'t resolve host \'vtvc01.vt.com\'"')</msg>
</error>
</esxupdate-response>

As I have seen this one before, I was immediately able to spot that the fix was more than likely that I had forgotten to set up a forward or reverse DNS record for one of my setup components, whether ESXi, vCenter, or NSX.… Read More
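A quick way to confirm that diagnosis is to test name resolution from the ESXi host itself. A sketch using the hostnames from my lab (yours will differ):

```shell
# From an SSH session on the ESXi host: can it resolve the vCenter FQDN
# that the download in the error message failed on?
nslookup vtvc01.vt.com

# Verify which DNS servers the host is actually configured with
esxcli network ip dns server list
```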

How to remove orphaned VM from vCenter the easy way

I have lately had a few orphaned VMs in my home lab vCenter, as I was recreating my setup. Some of the virtual machines were deleted directly from the ESXi host but still had records in the vCenter inventory. Below is how the orphaned VMs looked in my vCenter.

orphaned VM in vCenter

I looked online for a way to remove these orphaned VMs, and while one of the KB articles suggested adding the VM to a folder and then removing the folder, that did not work, as the KB applied only to older versions of vSphere.

One way to do this is to use one of the command-line methods below; any of them will do the trick:

PowerCLI
Remove-VM vm_name -deletepermanently

vMA
vmware-cmd --server esxhost -s unregister path_to_vmx_file
vmware-cmd --server vcenter --vihost esxhost -s unregister path_to_vmx_file
vifs --server esxhost --rm "[datastore] path_to_vmx_on_datastore"

CLI
vim-cmd vmsvc/destroy vmid
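For the vim-cmd route you first need the VM's vmid, which you can look up on the ESXi host. A short sketch:

```shell
# List all registered VMs; the vmid is the first column
vim-cmd vmsvc/getallvms

# Then destroy the orphaned entry using its vmid (42 here is a placeholder)
vim-cmd vmsvc/destroy 42
```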

For those lazy ones who don't want to fire up a command-line utility and construct a command to do the trick, I have good news for you. You can delete that orphaned VM in the GUI by right-clicking the VM, then choosing All Virtual Infrastructure Actions ==> More Uncategorized Actions ==> Remove from Inventory. … Read More

Executing vRealize Orchestrator workflows using Rest API

I have lately been involved in an integration between vCloud Director and a Cloud Provider marketplace. The Cloud Provider wanted to use vRealize Orchestrator (vRO) for this integration, as there are different integration points they wanted to reach beyond vCloud Director, and they did not want to learn multiple APIs. What is nice about vRO is that, after you learn its API, you can use it to execute workflows against multiple solutions using very similar API calls.

I was able to find multiple articles on how to invoke a vRO workflow from the REST API, but it was not easy to find one that showed my partner the full steps in a simple way, so I am sharing below what I documented to help my partner with it from scratch.

Alright, before we start on the API calls: I have created a simple vRO workflow, “AWSNEW”, that requires only one input variable, “name”. My particular workflow just deploys an instance to AWS, but in reality the workflow you are calling can be doing anything, or talking to any endpoint supported by vRO, and you will still be able to call it the same way.

To call a vRO workflow from the REST API, you will need to know the workflow ID and the required inputs. … Read More
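As a preview of where this is going, the eventual execution call looks roughly like the sketch below (the host, credentials, and workflow ID are placeholders; the JSON follows the vRO REST API's parameter format):

```shell
# Start an execution of the workflow, passing the single "name" input
curl -k -u 'vro-user:password' \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"parameters":[{"name":"name","type":"string","value":{"string":{"value":"demo"}}}]}' \
  https://vro.example.com:8281/vco/api/workflows/<workflow-id>/executions
```

A successful call returns 202 Accepted with a Location header pointing at the new execution, which you can poll for status.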

‘WIRE’ is a free self-paced learning platform to our Cloud Providers and Aggregation partners

VMware has released VMware WIRE for free to our Cloud Providers and Aggregators. It is a free self-paced learning platform with tons of amazing content. WIRE offers VMware product and solution enablement at the sales (L100), technical (L200), and sometimes deep technical (L300) levels. If you have been looking for great VCPP training materials for yourself or your team, here you have it, and for free!

VMware Wire VCPP Solution Enablement Learning Path

WIRE is free for everyone to use and doesn’t require a partner central account. You can set up an account in the platform directly – please include all your details so we can put you in the correct team to serve you the correct content.

To use WIRE, please log in initially via http://bit.ly/SelfPacedWire, where you can use a Partner Central account if you have one or create one directly in WIRE (again, please remember to fill in all your details!). Once logged in, you will be placed into the “VCPP Solution Enablement Learning Path”, where you can view all the solution areas and the various technologies within.

Watch the orientation course first and change your password so you can log back into WIRE, then save the URL or use http://bit.ly/GetEnabled for future access. If you have any questions on WIRE, you can contact: vcpp_gtm@vmware.com… Read More

VMware vCloud Availability 3.0 Update

The long-awaited VMware vCloud Availability 3.0 will be released soon, so I thought I would share an update on the exciting set of features that are coming. If you have been following the evolution of VMware vCloud Availability, you will find many of your wishes and requests materializing in vCAv 3.0.

The VMware vCloud Availability 3.0 solution introduces a new, simplified product architecture that combines replication, disaster recovery, and migration capabilities in a single product. With the service you can perform:

  • Migration and Disaster Recovery of VMs from on-premise vCenter Server to a vCloud site.
  • Migration of vApps and VMs between two Virtual Data Centers that belong to a single vCloud Director Organization.
  • Replication and recovery of vApps and VMs between vCloud Director sites.

vCloud Availability 3.0 Combined Solution

VMware vCloud Availability (vCAv) 3.0 combines the features of what were originally three different products, as shown in the table below. It replaces vCloud Director Extender, vCloud Availability for DR to Cloud, and vCloud Availability for Cloud to Cloud.

vCloud Availability 3.0 Combined Features

 

vCloud Availability 3.0 capabilities:

  • Management and monitoring of replications from on-premise vCenter Server to vCloud Director site.
  • Capability to migrate and failover workloads from on-premise vCenter Server to vCloud Director site.

vCloud Availability 3.0 On-Prem to Cloud

  • Capability to failback recovered workloads from vCloud Director site to on-premise vCenter Server site.
Read More