vCloud Director (vCD) Architecture

With all the really cool and intricate discussions around this product, I thought that now would be a good time to take a step back and look at how it is actually implemented. Terminology is abbreviated to save space, so you will see acronyms such as vCD (which stands for vCloud Director), etc.

Architecture Overview

Think of the product in two distinct layers. At the core are the vSphere cluster nodes and vCenter, coupled with vShield Manager. This is the “foundation” (if you will) that provides services to each Cloud Director server host (commonly referred to as a “cell”). I say each because you can only allocate one cell per vCenter server. The second layer is composed of the vCloud Director server hosts, made up of the individual “cells.” Each cell operates off a central vCD database (which, as of this posting, needs to be Oracle) that resides at this layer. Together, the cells form the vCloud Director cluster.

Diagram 1


vCD Database

Every cell in a vCD cluster shares information through the database, and each cell needs a minimum of 75 database connections, plus an additional 50 connections for Oracle itself.

Database sizing formula: 75 * (number of cells) + 50
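Plugging numbers into the formula, a quick shell sketch (the four-cell cluster size is just an illustrative example):

```shell
#!/bin/sh
# Worked example of the sizing formula above:
# 75 connections per cell, plus 50 for Oracle itself.
CELLS=4
CONNECTIONS=$(( 75 * CELLS + 50 ))
echo "A ${CELLS}-cell cluster needs ${CONNECTIONS} database connections"
# prints: A 4-cell cluster needs 350 database connections
```

So a modest four-cell cluster already calls for 350 connections on the Oracle side.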

vCD Database Guidelines:

  • Do not use the Oracle system account as the Cloud Director database user account.
  • Oracle must be at 10g Std. or Ent. Ed. Rel. 2 (10.2.0.x) -or- 11g Std. or Ent. Ed. (11.1.0.x)
  • A database server configured with 16GB of memory, 100GB storage, and 4 CPUs should be adequate for most Cloud Director clusters.
  • Verify that the database service starts automatically when the database server is rebooted.
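For that last bullet, on a RHEL-style database server you would typically enable the Oracle init script at boot with chkconfig. The script name "dbora" below is the conventional example name, not something Cloud Director mandates; use whatever your DBA named it:

```
# Enable the Oracle init script at boot and verify it is registered
# ("dbora" is an assumed/example script name)
chkconfig dbora on
chkconfig --list dbora
```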

Cloud Director Software

As stated above, each server host must have the Cloud Director software installed on it to run the cell. The only supported platform to date is Red Hat Enterprise Linux 5 (Update 4 or 5), and it must be 64-bit. This is usually a VM in the cluster with 2GB of memory and multiple vCPUs assigned to it. Most standard builds of RHEL will have adequate space for the installation and log files. DNS is another critical component: forward and reverse FQDN lookups must work for the host. Issue this on your RHEL box:

#nslookup <cloudhost>

#nslookup x.x.x.x (where x.x.x.x is the host’s IP address, for the reverse lookup)

The forward lookup should return the host’s IP address, and the reverse lookup should return its fully qualified domain name.

Additionally, each host must have two IP addresses assigned to it and two SSL certificates (one per IP). Each host must also mount the shared transfer server storage at $VCLOUD_HOME/data/transfer (this volume must be writable by root).
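As a sketch, the shared transfer volume is commonly an NFS export mounted on every cell via /etc/fstab. The server name and export path below are made-up examples, and the mount point assumes the default $VCLOUD_HOME of /opt/vmware/cloud-director:

```
# /etc/fstab entry on each cell ("nfshost" and "/vcd_transfer" are placeholders)
nfshost:/vcd_transfer  /opt/vmware/cloud-director/data/transfer  nfs  rw  0 0
```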

Log in to the vCD console at https://cloudhost.domain/cloud (where cloudhost = your server name & domain = your domain name).

vCD Firewall Ports

As with any management portal, protect it from the internet with a firewall. To allow external management from outside your organization, you only need to allow port 443 (HTTPS) through. The following ports are used for connections within a vCD cluster and from the cells out to other services:


Within the vCD cluster (cell-to-cell), allow:

Port       Type       Description
111        TCP & UDP  NFS portmapper for transfer service
920        TCP & UDP  NFS rpc.statd for transfer service
61611      TCP        ActiveMQ
61616      TCP        ActiveMQ

From the cells out to vSphere, vShield, and the database, allow:

Port       Type       Description
111        TCP & UDP  NFS portmapper
443        TCP        vCenter and ESXi connections
514        UDP        Syslog
902 & 903  TCP        vCenter and ESXi connections
920        TCP & UDP  NFS rpc.statd for transfer service
1521       TCP        Oracle database
61611      TCP        ActiveMQ
61616      TCP        ActiveMQ
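On the cell itself, the external-facing rules might look like this iptables sketch. The cell subnet (10.0.0.0/24) is a placeholder, and this is not a complete firewall policy, just an illustration of the two cases above:

```
# Allow inbound HTTPS (the only port needed for external management)
iptables -A INPUT -p tcp --dport 443 -j ACCEPT

# Allow ActiveMQ between cells (10.0.0.0/24 stands in for your cell subnet)
iptables -A INPUT -s 10.0.0.0/24 -p tcp -m multiport --dports 61611,61616 -j ACCEPT
```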


Web Administration Browsers

  • Microsoft Internet Explorer is supported (with the exception of IE7 on Windows 7, 32-bit or 64-bit)
  • The Cloud Director Web Console requires Adobe Flash Player version 10.1 or later
  • Cloud Director requires SSL – versions include SSL 3.0 and TLS 1.0. (more on SSL in the vCD SSL section)

vShield Manager for vCD

  • Each Cloud Director cluster requires access to a vShield Manager host, which in turn provides network services to the Cloud (refer to diagram 1)
  • You must have a unique instance of vShield Manager for each vCenter Server you add to Cloud Director
  • vCenter and vSphere must be at least at version 4.0 U2 (Build 264050 for vCenter and 261974 for vSphere) or higher.
  • vShield Manager must be at 4.1 (Build 287872)

Deployment Steps for vShield Manager:

  1. Download the OVF template
  2. Deploy the OVF template into your cluster (remember each cluster needs its own vShield Manager!)
  3. Power up the appliance and log in (User: admin & Pass: default)
  4. At the prompt, type “enable” (i.e. manager# enable) – the setup process will begin.
  5. Enter the IP, Subnet and Default Gateway for vShield Manager.
  6. Reboot!
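The console session for steps 3 through 5 looks roughly like this (the prompts are paraphrased and the IP values are examples):

```
manager login: admin
Password: default
manager> enable
Password: default
manager# setup
IP Address: 192.168.1.50
Subnet Mask: 255.255.255.0
Default Gateway: 192.168.1.1
```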

Noteworthy: There is no need to synchronize vShield Manager with vCenter or register the vShield Manager as a vSphere Client plug-in when using vShield Manager with Cloud Director.


vCD SSL

Cloud Director requires the use of SSL to secure communications between clients and servers. You must create two certificates for each member of the cluster and import the certificates into the host keystores. You need to execute this procedure on each host that you intend to use in your Cloud Director cluster!

  • The Cloud Director installer places a copy of keytool at /opt/vmware/cloud-director/jre/bin/keytool
  • You can use signed certificates (by a trusted certification authority) or self-signed certificates (most private cloud implementations)
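Using the bundled keytool, creating the two self-signed certificates looks like this. Cloud Director expects one certificate under the alias http and one under consoleproxy; the keystore filename and password below are examples:

```
# Create self-signed certificates for both services in a single JCEKS keystore
/opt/vmware/cloud-director/jre/bin/keytool -keystore certificates.ks \
    -storetype JCEKS -storepass mypassword -genkey -keyalg RSA -alias http

/opt/vmware/cloud-director/jre/bin/keytool -keystore certificates.ks \
    -storetype JCEKS -storepass mypassword -genkey -keyalg RSA -alias consoleproxy
```

Each -genkey run will prompt for the certificate details (CN, organization, etc.); the CN should match the FQDN bound to that IP address.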

vCD Network Configurations

The network configuration for vCD is built from undifferentiated network pools. These are used in turn to create vApp networks and the various types of organization networks. At the core are vSphere’s network resources: VLANs, port groups, and isolated network segments. vCD takes these network resources and creates routed NAT configurations, internal organization segments, and all of the vApp networks. Each organization (within Cloud Director) can have only one network pool, but multiple organizations can share the same pool.

There are basically three types of organizational networks:

  1. Direct Connect (External Organization Network)
  2. NAT or Routed (External Organization Network)
  3. Internal Organization Network

General Guidelines of vCD Deployments

  • The database must be configured to use the AL16UTF16 character set.
  • Cloud Director software is installed on each server host and is then connected to the shared database
  • A network pool “resource” must be created before building out organization or vApp networks. If no network pool exists, only the direct connect option to the provider (external) network will be available.
  • Each host should have access to a Microsoft Sysprep deployment package
  • Network time is critical! Make sure that all Cloud Director hosts are synchronized since the maximum drift on this is 2 SECONDS!
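A minimal sketch of keeping the cells in sync with NTP on RHEL 5 (the upstream pool servers are examples; use your site’s time sources):

```
# /etc/ntp.conf excerpt -- example upstream servers
server 0.pool.ntp.org
server 1.pool.ntp.org

# then start ntpd and enable it at boot:
#   service ntpd start && chkconfig ntpd on
```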

3 thoughts to “vCloud Director (vCD) Architecture”

  1. I miss lab manager.

    No seriously – I wish VMW would sell it to someone who will care for it (i.e. Flexera).

    vCloud Director is a great product for public cloud hosting providers who want to sell VMs, but forcing people to use vShield to get NAT to work is mean, considering Lab Manager did this out of the box with a single product (vs. two: vCD and vShield).

    Private clouds will continue to avoid it – well, that is until VMW buys Abiquo or Rightscale, and then they can bundle another acquisition into vSphere and call it vCloud Director Director, Standalone Edition.

    1. Rob,

      As a fellow Lab Manager implementer and user, I would agree with some of your statements. Although, Lab Manager was never really meant to deploy the type of workloads that vCloud Director can, and it had more of a “short term” deployment strategy. While Lab Manager did have some nice fencing options, I think the additional security that vShield provides gives you a more robust protection layer.

      I also think that larger private clouds will leverage it for the self-provisioning aspect of the product.


