
The role of VMware Integrated Containers in real life scenario - PART 3

Virtual Container Host Deployment using the "vic-machine" Utility - VMware Integrated Containers


In our previous posts, we saw the steps to deploy the VIC appliance and to deploy a VCH from the vSphere Client. In this post, we will walk through deploying a VCH using the "vic-machine" CLI utility.

Reference: https://github.com/rdjagadeesh/vic_homelab/

Once the vSphere Integrated Containers (VIC) appliance is deployed, browse to the VIC appliance IP and we land on the page below. From this page, we can download the vSphere Integrated Containers Engine bundle and unpack it on the workstation, laptop, or jump host from which we connect to our vSphere environment.




Unpack the downloaded bundle.
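The download and extraction can also be scripted from the jump host. The lines below are a minimal sketch, assuming the appliance serves the engine bundle from its file server on port 9443 and that the archive name matches the installed version (both are assumptions; use the exact link shown on the appliance landing page, and --no-check-certificate only if the appliance still uses a self-signed certificate):

$ wget --no-check-certificate https://vic_appliance_address:9443/files/vic_v1.5.0.tar.gz
$ tar -xzf vic_v1.5.0.tar.gz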


The bundle includes the following contents and utilities.



The VIC bundle includes the vic-machine CLI utility. We use "vic-machine" to deploy and manage virtual container hosts (VCHs) at the command line.
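Listing the extracted directory gives a quick sanity check. The exact contents vary by release, but the bundle typically contains the per-OS vic-machine binaries (vic-machine-linux, vic-machine-windows.exe, vic-machine-darwin), the matching vic-ui binaries, the appliance.iso and bootstrap.iso images used by the VCH endpoint VM and container VMs, and README/LICENSE files:

$ ls ./vic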

Procedure: 


Open a terminal on the system on which we downloaded and unpacked the vSphere Integrated Containers Engine binary bundle.

Navigate to the directory that contains the vic-machine utility:
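For example, assuming the bundle was unpacked to the paths below (adjust to your own location):

Windows: C:\> cd documents\vic
Linux/macOS: $ cd ~/vic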





Run the vic-machine create command.


Syntax:

vic-machine create --target homelabvc01.vsphere.local/VIC_COMPUTE_CLUSTER --user 'administrator@vsphere.local' --password 'VMware@12345' --no-tlsverify --force --bridge-network vxw-dvs-564-virtualwire-21-sid-2144-NSX-VIC-Bridge --bridge-network-range 192.168.125.0/12 --dns-server 10.126.193.47 --public-network vxw-dvs-564-virtualwire-21-sid-2144-NSX-public --container-network vxw-dvs-564-virtualwire-21-sid-2144-NSX-Container:public --container-network-firewall vxw-dvs-564-virtualwire-21-sid-2144-NSX-Container:open --compute-resource 'TEST_CLUSTER' --image-store DATASTORE_VSAN --timeout 20m --endpoint-cpu 4 --memory 30000 --endpoint-memory 8192 --volume-store DATASTORE_VSAN/volumes:default --thumbprint 09:21:29:EF:0G:DE:78:9D:FG:89:DF:8F:89:3S:89:0A:FF:67:ZX --name MyFirstVCH
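The --thumbprint value is the SHA-1 fingerprint of the vCenter Server (or ESXi) certificate. One way to retrieve it, assuming openssl is available on the jump host, is:

$ openssl s_client -connect homelabvc01.vsphere.local:443 </dev/null 2>/dev/null | openssl x509 -noout -fingerprint -sha1

vic-machine typically also reports the thumbprint it detects if the flag is omitted, which can then be copied into the command.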


Example:  



C:\>documents\vic\vic-machine-windows.exe create --target "administrator@vsphere.local":VMware@12345@homelabvc01.vsphere.local/datacenter_name --compute-resource VIC_COMPUTE_CLUSTER --bridge-network "vxw-dvs-564-virtualwire-21-sid-2144-NSX-VIC-Bridge" --public-network "vxw-dvs-564-virtualwire-21-sid-2144-NSX-public" --image-store "DATASTORE_VSAN" --volume-store DATASTORE_VSAN/volumes:default --name MyFirstVCH --thumbprint 09:21:29:EF:0G:DE:78:9D:FG:89:DF:8F:89:3S:89:0A:FF:67:ZX --no-tlsverify --timeout 20m


Linux OS:

$ vic-machine-linux create --target esxi_host_address --user root --password 'esxi_host_password' --no-tlsverify --thumbprint esxi_certificate_thumbprint


Windows OS:

C:\> vic-machine-windows create --target esxi_host_address --user root --password "esxi_host_p@ssword" --no-tlsverify --thumbprint esxi_certificate_thumbprint


Mac OS:

$ vic-machine-darwin create --target esxi_host_address --user root --password 'esxi_host_p@ssword' --no-tlsverify --thumbprint esxi_certificate_thumbprint
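When targeting an ESXi host directly, the deployment can fail if the host firewall blocks the VCH back-channel port. Recent VIC releases ship an update firewall subcommand for this; the line below is a sketch under that assumption (adjust the binary name to your OS):

$ vic-machine-linux update firewall --target esxi_host_address --user root --password 'esxi_host_password' --thumbprint esxi_certificate_thumbprint --allow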


Result


At the end of a successful deployment, vic-machine displays information about the new VCH:


Initialization of appliance successful
VCH ID: vch_id
VCH Admin Portal: https://vch_address:2378
Published ports can be reached at: vch_address
Docker environment variables: DOCKER_HOST=vch_address:2376
Environment saved in virtual-container-host/virtual-container-host.env
Connect to docker: docker -H vch_address:2376 --tls info
Installer completed successfully
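The same details can be retrieved later from the command line. A minimal sketch, assuming the vic-machine ls and inspect subcommands (adjust the binary name to your OS and the credentials to your environment):

$ vic-machine-linux ls --target homelabvc01.vsphere.local --user 'administrator@vsphere.local' --password 'VMware@12345' --thumbprint vcenter_certificate_thumbprint
$ vic-machine-linux inspect --target homelabvc01.vsphere.local --user 'administrator@vsphere.local' --password 'VMware@12345' --thumbprint vcenter_certificate_thumbprint --name MyFirstVCH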

Test the Deployment of the VCH


1. Using a Docker client, run the docker info command to confirm that we can connect to the VCH.


docker -H vch_address:2376 --tls info

2. We should see confirmation that the Storage Driver is vSphere Integrated Containers Backend Engine.

3. In our Docker client, pull a Docker container image from Docker Hub into the VCH.

         For example, pull the BusyBox container image.


docker -H vch_address:2376 --tls pull busybox

4. In the ESXi host or vCenter UI, open the datastore browser and select the datastore. We should see that vSphere Integrated Containers Engine has created a folder with the same name as the VCH. This folder contains the VCH endpoint VM files and a folder named VIC, in which container image files are stored.

5. Expand the VIC folder to navigate to the images folder.

6. The images folder contains a folder for each container image that we pull into the VCH. These folders contain the container image files.

7. In our Docker client, run a container from the image that we pulled into the VCH.


docker -H vch_address:2376 --tls run --name test busybox

8. In the ESXi host UI, go to Virtual Machines. We should see a VM named test-container_id. This is the container VM that we created from the BusyBox image.
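Once testing is done, the container VM can be listed and removed from the same Docker client using standard Docker CLI commands against the VCH endpoint:

docker -H vch_address:2376 --tls ps -a
docker -H vch_address:2376 --tls rm test

If the VCH itself is no longer needed, it can be removed with the vic-machine delete subcommand (a sketch, using the same connection parameters as above):

$ vic-machine-linux delete --target homelabvc01.vsphere.local --user 'administrator@vsphere.local' --password 'VMware@12345' --thumbprint vcenter_certificate_thumbprint --name MyFirstVCH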


Download kit: https://github.com/rdjagadeesh/vic_homelab/

Thanks for reading. In our next post, we will look at an option to automate the deployment of VCHs through vRealize Automation.
