
How can VMware Integrated Containers be useful in real-life scenarios - PART 2

In this post, we look at the options for deploying Virtual Container Hosts (VCHs).


The previous post covered vSphere Integrated Containers (VIC) and its benefits. VIC offers a robust solution that lets teams get containers up and running quickly in their existing vSphere infrastructure. This environment can be useful for migrating existing applications to containers or for in-house development.


In a traditional container environment, containers run as processes that share the container host's kernel. vSphere Integrated Containers instead leverages the native constructs of vSphere: each container is provisioned as its own VM running a minimal Linux kernel with just enough code to run a Docker image. Pushing container isolation down to the hypervisor layer, which is much better at handling this type of isolation, prevents one container from being accessed by another.

This isolation lets IT teams deliver a container environment without having to build a separate, specialized container infrastructure stack. By deploying every container image as a vSphere virtual machine (VM), vSphere Integrated Containers allows these workloads to take advantage of key vSphere availability and performance features such as vSphere HA, vMotion, DRS and more, while still presenting a Docker-compatible API to developers of container-based applications.

The VIC engine is the mechanism that provides this Docker API for the container VMs. It handles the provisioning and management of VMs in vSphere clusters using the Docker binary image format. It lets vSphere admins pre-allocate set amounts of compute, networking and storage and offer them to developers as a self-service portal with a familiar Docker-compatible API, so developers who already know Docker can build containers and deploy them alongside traditional VM-based workloads on vSphere clusters.
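To sketch what this Docker-compatible API means from the developer's side: an ordinary Docker client is simply pointed at the VCH endpoint. The IP address and port below are placeholders, not values from this environment, and the `--tls` flag assumes the VCH was deployed with its default server-side TLS.

```shell
# Hypothetical VCH endpoint; substitute the IP/port your deployment reports.
# Port 2376 is the conventional TLS Docker API port (2375 when TLS is off).
export DOCKER_HOST=tcp://192.168.100.50:2376

# The usual Docker workflow now provisions container VMs on vSphere:
docker --tls info
docker --tls pull nginx
docker --tls run -d --name web nginx
```

Nothing in the developer workflow changes; the VCH translates these calls into container-VM operations on the cluster.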

Virtual Container Host Deployment Options: 

In VIC, we can deploy virtual container hosts (VCHs) that serve as Docker API endpoints. VCHs allow Docker developers to provision containers as VMs in your vSphere environment.

  1. Deploy Virtual Container Hosts in the vSphere Client 
  2. Deploy Virtual Container Hosts using the vic-machine CLI utility 
  3. Deploy a Virtual Container Host through the vRealize Automation portal (vRA/vRO) 
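As a preview of option 2, a vic-machine deployment is a single CLI invocation. The sketch below uses placeholder names throughout (vCenter address, cluster, port groups and datastore are assumptions to substitute with your own); the flags themselves are standard vic-machine options.

```shell
# Sketch of a vic-machine CLI deployment (option 2). All values are
# placeholders for your own environment. Note the dedicated bridge
# port group, which must not be shared with any other VCH.
./vic-machine-linux create \
  --target vcenter.example.com \
  --user 'administrator@vsphere.local' \
  --name vch-demo \
  --compute-resource Cluster01 \
  --bridge-network vch-demo-bridge \
  --public-network VM-Network \
  --image-store datastore1 \
  --no-tlsverify
```

`--no-tlsverify` keeps server-side TLS but skips client certificate verification, which is acceptable only for a closed/POC setup.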

Option 1: Deploy a VCH in the vSphere Client 

If you have installed the HTML5 plug-in for vSphere Integrated Containers, you can deploy virtual container hosts (VCHs) interactively in the vSphere Client.

Log in to the HTML5 vSphere Client with an administrator account and choose "vSphere Integrated Containers" 

The vSphere Integrated Containers view presents the number of VCHs and container VMs that you have deployed to this vCenter Server instance.

Click the deployed vSphere Integrated Containers plug-in and select the "New Virtual Container Host" option. Provide a name for the VCH instance.

Optionally, you can also forward the logs to a syslog or vRealize Log Insight server.

Choose a cluster where you want to deploy a VCH

Provide the compute details

By default, the wizard allocates 1 vCPU and 2 GB of memory. In my experience, it works best with 2 vCPUs and 8 GB of memory.

Provide the storage/datastore information. I recommend enabling anonymous volumes, which creates a default volume store path. Otherwise, you may need to create and attach a volume manually after the deployment.
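If anonymous volumes are not enabled, a named volume can be created against the VCH after deployment. A minimal sketch follows; the endpoint IP and volume names are placeholders, and the `VolumeStore`/`Capacity` driver options are the VIC-specific volume options (capacity in MB).

```shell
# Hypothetical VCH endpoint; substitute your own.
export DOCKER_HOST=tcp://192.168.100.50:2376

# Create a 2 GB named volume in the default volume store, then mount it
# into a container at run time.
docker --tls volume create --opt VolumeStore=default --opt Capacity=2048 data-vol
docker --tls run -d -v data-vol:/var/lib/data nginx
```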

Configure Networks - you must provide both a bridge network and a public network.

NOTE: You must create a dedicated BRIDGE NETWORK PORT GROUP for EACH VCH. If you reuse the same port group across VCHs, you will end up with duplicate IP addresses on the container VMs (C-VMs).

Security options - you can turn these off in a closed/POC/test setup.

Registry access: leave the default values unless your network restricts access to registry endpoints.

Provide the operations user details - this account is used to deploy the VCH VM in your setup and to access the ESXi host logs.

In the next step, review and submit the request. Once the deployment succeeds, you should see the endpoint details with the assigned IP address.

In the vSphere Client, you should see a resource pool created with the same name as the VCH.

NOTE: Each VCH creates its own resource pool, where all of its container VMs (C-VMs) are grouped.

Once you have the Docker API endpoint IP, navigate to the VIC administration portal and add the VCH to a project.

Name the project you want to add the VCH to.

Add the host to the project.

Add members who are entitled to access the project.

Add the VCH host.

Once the host is added, it is listed under the Infrastructure tab. You can add multiple hosts to a project.

Choose the newly created project.

The hosts are now added to the project, and we are ready to spin up our first container VM (C-VM).
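With the VCH registered, that first C-VM is just a standard Docker command against the VCH endpoint. A minimal sketch, assuming a TLS-enabled VCH (the IP and port are placeholders for the values from your deployment summary):

```shell
# Hypothetical VCH endpoint from the deployment summary; substitute your own.
export DOCKER_HOST=tcp://192.168.100.50:2376

# Each container started this way is created as its own VM under the
# VCH's resource pool in vSphere.
docker --tls run -d -p 8080:80 --name web nginx
docker --tls ps
```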

In the next post, we will look at the other options for deploying a VCH. 
