Thursday 9 February 2012

VMware ESX Server Logs


VMkernel
a. Location: /var/log/
b. Filename: vmkernel
c. This log records information related to the VMkernel and virtual machines

VMkernel Warnings

a. Location: /var/log/
b. Filename: vmkwarning
c. This log records information regarding virtual machine warnings

VMkernel Summary
a. Location: /var/log/
b. Filename: vmksummary
c. This log records information used to determine uptime and availability statistics for ESX Server. The log is not easily human-readable; import it into a spreadsheet or database for analysis.
d. For a summary of the statistics in an easily viewed file, see vmksummary.txt

ESX Server Boot Log
a. Location: /var/log
b. Filename: boot.log
c. Log file of all actions that occurred during the ESX server boot.

ESX Server Host Agent Log
a. Location: /var/log/vmware/
b. Filename: hostd.log
c. Contains information on the agent that manages and configures the ESX Server host and its virtual machines (check the file date/time stamps to determine which log file the agent is currently writing to).

Service Console 
a. Location: /var/log/
b. Filename: messages
c. Contains general log messages used to troubleshoot virtual machines on ESX Server.

Web Access
a. Location: /var/log/vmware/webAccess
b. Filename: various files in this location
c. Various logs on Web access to the ESX Server.

Authentication Log
a. Location: /var/log/
b. Filename: secure
c. Contains the records of connections that require authentication, such as VMware daemons and actions initiated by the xinetd daemon.

VirtualCenter HA Agent Log
a. Location: /var/log/vmware/aam/
b. Filename: aam_config_util_*.log
c. These files contain information about the installation, configuration, and connections to other HA agents in the cluster.

VirtualCenter Agent
a. Location: /var/log/vmware/vpx
b. Filename: vpxa.log
c. Contains information on the agent that communicates with the VirtualCenter Server.

Virtual Machine Logs
a. Location: The same directory as the virtual machine’s configuration files.
b. Filename: vmware.log
c. Contains information useful when a virtual machine crashes or ends abnormally.
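
For quick triage, these logs can be read directly from the ESX service console. A minimal sketch (the paths simply repeat the locations listed above):

    # Follow VMkernel activity and warnings in real time
    tail -f /var/log/vmkernel
    tail -f /var/log/vmkwarning

    # Search the host agent log for errors (hostd rotates its logs, so check
    # the date/time stamps to find the file currently being written)
    grep -i error /var/log/vmware/hostd.log

    # Review general Service Console messages and authentication records
    less /var/log/messages
    less /var/log/secure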

High Availability (HA)



What It Is:
 VMware High Availability (HA) utilizes heartbeats between ESX hosts in the cluster to check that they are functioning. When a host failure is detected, VMware HA automatically restarts affected virtual machines on other production servers, ensuring rapid recovery from failures. Once VMware HA is configured, it operates without dependencies on operating systems, applications, or physical hardware.

Use Case: Protect Virtual Machines from Server Failures

When running Web servers or databases that are critical to your business, VMware HA ensures they will be restarted immediately upon failure of their servers. Interruption to your business will be minimized as the virtual machine is restarted on another available server in your HA cluster.

Step 1: Turn on VMware HA on a cluster

VMware HA can only be turned on for a cluster of ESX hosts. Please ensure that you have followed the prior steps in creating a cluster of ESX hosts. Please also ensure that DNS is set up and working properly, including forward and reverse lookups, fully-qualified domain names (FQDN) and short names. Consult your network administrator for assistance in DNS configurations.
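
For example, from the ESX Service Console you can verify that forward and reverse DNS lookups resolve consistently; the host name and IP address below are placeholders for your own environment:

    # Forward lookup of an ESX host's FQDN, then a reverse lookup of its IP
    nslookup esx05a.example.com
    nslookup 192.168.1.50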

It is also recommended, as a best practice, that you set up an alternate isolation response address; a sketch of the relevant advanced options follows the steps below.

  1. To enable VMware HA on your cluster, right-click the cluster and select Edit Settings. The cluster settings window should appear.
  2. Under Cluster Features of the cluster settings window, select Turn On VMware HA. Each ESX host in the cluster will now be configured for VMware HA. Please note that you will need cluster administrator permissions to edit the cluster settings.
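
As a sketch of the isolation-address best practice mentioned above (the IP address below is only an example and should be replaced with a reliable address in your own network, such as a gateway), the relevant VMware HA advanced options can be added under VMware HA > Advanced Options in the cluster settings window:

    das.isolationaddress1            192.168.1.254   (an additional address HA pings when deciding whether a host is isolated)
    das.usedefaultisolationaddress   false           (optional; prevents HA from also using the service console default gateway)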


Step 2: Set Admission Control and Additional VMware HA options

You may want to set additional VMware HA options to allow for admission control, monitoring, and setting policies for your hosts and virtual machines. These can be configured under VMware HA in the cluster settings window. The following is a list of these additional features.

• Disabling host monitoring allows you to perform ESX host maintenance without VMware HA treating the host as failed.

• Admission control allows you to control whether virtual machines should be restarted after host failures, depending on whether resources are available elsewhere in the cluster. VMware HA uses one of three admission control policies (a worked example of the percentage-based policy follows this list):

1) tolerate some number of host failures,
2) specify a percentage of cluster resources, or
3) specify a designated failover host.

• VM monitoring restarts virtual machines after their VMware Tools heartbeat is lost, even if their host has not failed. The monitoring sensitivity level can be set for each virtual machine.
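
For instance, assuming a hypothetical cluster of four identically sized ESX hosts, setting the percentage-based policy to reserve 25% of cluster CPU and memory keeps roughly one host's worth of capacity free:

    4 hosts x 25% reserved = 100% of one host's CPU and memory

so the virtual machines from a single failed host can be restarted on the remaining hosts.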

Distributed Resource Scheduler (DRS)


What It Is: VMware Distributed Resource Scheduler (DRS) automatically load balances resource utilization across a cluster of ESX hosts by using VMotion to migrate virtual machines from a heavily utilized ESX host to a more lightly used ESX host. VMware DRS analyzes the CPU and memory consumption of a virtual machine over time to determine whether to migrate it.

Use Case: Redistribute Virtual Machines off of an ESX Host during Maintenance

VMware DRS migrates virtual machines off of an ESX host when a user enters that host into maintenance mode. DRS will intelligently migrate virtual machines to other available hosts in the DRS cluster in a load-balanced manner. After the maintenance on that host is completed and the user takes it out of maintenance mode, DRS will then migrate virtual machines back onto the host.

Step 1: Turn on VMware DRS for a cluster.

In this step, you will turn on VMware DRS for the cluster of ESX hosts that you created earlier (see the accompanying figure).
1. Right-click the cluster and select Edit Settings. Under Cluster Features, select Turn On VMware DRS. Each host in the cluster will now be configured for VMware DRS. Please note that you will need cluster administrator permissions to edit the cluster settings.

 
Step 2: Set the automation level for the DRS cluster.

In this step you will configure your DRS cluster either to automatically balance your virtual machines across the cluster or simply to provide recommendations on where to migrate the virtual machines to achieve a load-balanced cluster (shown in the accompanying figure). You can also configure the automation level for each virtual machine within the cluster, as explained in Step 3 below.
1. Click VMware DRS in the cluster settings window and you will notice that the automation level is set to Fully automated by default. The fully automated level optimally places virtual machines within the cluster when they are powered on, and also migrates virtual machines after power-on to optimize resource usage. You can adjust the sensitivity of the automated level by moving the slider bar toward more conservative or more aggressive.
2. The partially automated level only places virtual machines within the cluster at power-on and then gives recommendations on where to migrate them to optimize resource usage.
3. The manual level gives placement recommendations at power-on as well as recommendations on where to migrate virtual machines later.
For this evaluation, leave your VMware DRS settings at the default of Fully automated with the Migration threshold set at the center level.

 
Step 3: Set the automation level for each virtual machine.
In this step, you will configure each virtual machine either to be automatically balanced across the cluster or simply to receive recommendations on where to migrate the virtual machine to achieve a load-balanced cluster.
1. To adjust the automation level for each virtual machine, click Virtual Machine Options under VMware DRS in the cluster settings window. For this evaluation, keep the Virtual Machine Options set at their default values.


Fault Tolerance (FT)



What It Is: VMware Fault Tolerance (FT) protects a virtual machine in a VMware HA cluster. VMware FT creates a secondary copy of a virtual machine and migrates that copy onto another host in the cluster. VMware vLockstep technology ensures that the secondary virtual machine always runs in lockstep synchronization with the primary virtual machine. When the host of the primary virtual machine fails, the secondary virtual machine immediately resumes the workload with zero downtime and zero loss of data.
Use Case: On-Demand Fault Tolerance for Mission-Critical Applications

VMware FT can be turned on or off on a per-virtual-machine basis to protect your mission-critical applications. During critical times in your datacenter, such as the last three days of the quarter when any outage can be disastrous, VMware FT can be used on demand to protect virtual machines for the critical 72 or 96 hours when protection is vital. When the critical periods end, FT is turned off again for those virtual machines. Turning FT on and off can be automated by scheduling the task for certain times. Refer to the figure below, which shows a server failure while running two virtual machines protected by VMware HA and a third virtual machine protected by FT.

The HA-protected virtual machines are restarted on the other host while the FT-protected virtual machine immediately fails over to its secondary and experiences no downtime and no interruption.
Step 1: Turn on VMware Fault Tolerance for a virtual machine 

Once your cluster is enabled with VMware HA, you can protect any virtual machine with VMware FT, provided the following prerequisites are met:
1. The ESX host must have an FT-enabled CPU. For details, please refer to http://kb.vmware.com/kb/1008027.
2. Hosts must be running the same build of ESX.
3. Hosts must be connected via a dedicated FT logging NIC of at least 1 Gbps (see the networking sketch after this list).
4. The virtual machine being protected must have a single vCPU.
5. The virtual machine’s virtual disk must be thick provisioned.
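
As a sketch of prerequisite 3 (the port group name, vSwitch and IP address below are example values, not settings from this evaluation), a dedicated VMkernel port for FT logging can be created from the service console:

    # Add a port group for FT logging to an existing vSwitch
    esxcfg-vswitch -A FT_Logging vSwitch1

    # Create a VMkernel NIC on that port group
    esxcfg-vmknic -a -i 10.10.10.51 -n 255.255.255.0 FT_Logging

After the VMkernel port exists, edit its properties in the vSphere Client and enable Fault Tolerance Logging so the port is dedicated to FT traffic.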
To enable VMware FT for a virtual machine, right-click the virtual machine called Win2003_VM01 on esx05a, select Fault Tolerance, and click Turn On Fault Tolerance. Please note that you will need cluster administrator permissions to enable VMware FT.



Step 2: Convert virtual disks to thick-provisioned virtual disk
VMware FT requires the virtual machine’s virtual disk to be thick provisioned. Thin-provisioned virtual disks can be converted to thick-provisioned during this step.
1. A dialog box will appear indicating that virtual machines must use thick-provisioned virtual disks. Click Yes to convert to thick-provisioned virtual disks and continue with turning on VMware FT.
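
If you prefer the service console to the dialog box above, a thin-provisioned disk can also be inflated with vmkfstools; the datastore path below is only an example, and the virtual machine must be powered off first:

    # -j (--inflatedisk) converts a thin disk to eagerzeroedthick, keeping existing data
    vmkfstools -j /vmfs/volumes/datastore1/Win2003_VM01/Win2003_VM01.vmdk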


 
Step 3: Observe the following actions after turning on VMware FT

The process of turning on FT for the virtual machine has begun, and the following steps will be executed:
1. The virtual machine, Win2003_VM01, is designated as the primary virtual machine.
2. A copy of Win2003_VM01 is created and designated as the secondary machine.


3. The secondary virtual machine is migrated to another ESX host in the cluster, esx05b in this case. VMware DRS is used to determine which host the secondary virtual machine is migrated to when FT is turned on. For subsequent failovers, a host for the new secondary virtual machine is chosen by VMware HA. Win2003_VM01 is now labeled as Protected under Fault Tolerance Status.


 
Step 5: Observe vSphere Alarms after Host Failure
Certain alarms are built into VMware vSphere to signal failures in ESX hosts as well as virtual machines. During the host failure invoked above, you can see an alarm for the FT-protected virtual machine.
1. Click the Alarms tab for Win2003_VM01. Here an alarm is generated even though the virtual machine’s workload continues to run uninterrupted because of VMware FT.


2. Click the Alarms tab for the rebooted ESX host, esx05a, to see the change in the host connection and power state.

VMware Interview Questions


  1. Explain your production environment. How many clusters, ESX hosts and datacenters do you have, and what hardware is used?
  2. How does VMotion work? What port number is used for it?
  3. What are the prerequisites for VMotion?
  4. How does HA work? Which port does it use? How many host failures are allowed, and why?
  5. What are active hosts / primary hosts in HA? Explain.
  6. What are the prerequisites for HA?
  7. How does DRS work? Which technology does it use? What are the priority counts used to migrate VMs?
  8. How do snapshots work?
  9. Which files are created when you create a VM, and which after powering on the VM?
  10. If the VMDK header file is corrupt, what will happen? How do you troubleshoot it?
  11. What are the prerequisites for VirtualCenter and Update Manager?
  12. Have you ever patched an ESX host? What steps are involved?
  13. Have you ever installed an ESX host? What pre- and post-installation steps are involved? Which partitions are created, and what is the maximum size of each?
  14. I put an ESX host into maintenance mode and all of its VMs migrated to another host, but one VM failed to migrate. What are the possible reasons?
  15. How will you start / stop a VM from the command prompt? (See the sketch after this list.)
  16. I upgraded a VM from 4 GB to 8 GB of RAM; powering it on fails at 90%. How do you troubleshoot?
  17. The storage team provided a new LUN ID to you. How will you configure the LUN in VirtualCenter? What block size would you use (say, for a 500 GB volume)?
  18. I want to add a new VLAN to the production network. What steps are involved, and how do you enable it?
  19. Explain VCB. What is the minimum priority (*) needed to consolidate a machine?
  20. How does VDR work?
  21. What is the difference between the top and esxtop commands?
  22. How will you check network bandwidth utilization on an ESX host from the command prompt?
  23. How will you generate a report listing the ESX hosts, VMs, and the RAM and CPU used in your vSphere environment?
  24. What is the difference between connecting to the ESX host through VirtualCenter and through the vSphere Client? What services are involved, and which port numbers are used?
  25. How does FT work? What are the prerequisites? Which port is used?
  26. Can I VMotion between two different datacenters? Why or why not?
  27. Can I deploy a VM from a template into a different datacenter?
  28. I want to increase the system partition size (Windows 2003 Server guest OS) of a VM. How will you do it without any interruption to the end user?
  29. Which port number is used when two ESX hosts transfer data between each other?
  30. You are unable to connect to VirtualCenter through the vSphere Client. What could be the reason, and how do you troubleshoot it?
  31. Have you ever upgraded ESX 3.5 to 4.0? How did you do it?
  32. What are the special features of vSphere 4.0, VC 4.0, ESX 4.0 and VM 7.0?
  33. What is AAM? Where is it used? How do you start or stop it from the command prompt?
  34. Have you ever called VMware support? Etc.
  35. Explain vSphere licensing and the license server.
  36. How will you change the service console IP?
  37. What is the difference between ESX and ESXi?
  38. What is the difference between ESX 3.5 and ESX 4.0?
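
As a sketch of how the command-prompt questions above (for example, 15 and 22) might be answered on a classic ESX host with a service console (the .vmx path shown is only an example):

    # List registered virtual machines and their .vmx paths
    vmware-cmd -l

    # Start and stop a virtual machine
    vmware-cmd /vmfs/volumes/datastore1/VM1/VM1.vmx start
    vmware-cmd /vmfs/volumes/datastore1/VM1/VM1.vmx stop soft

    # Check network utilization interactively: run esxtop and press 'n'
    # to switch to the network view
    esxtop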

VMware Clarifies Support for Microsoft Clustering

VMware published KB Article 1037959 (http://kb.vmware.com/kb/1037959) on April 18, 2011 in an effort to clarify VMware’s position on running Microsoft Clustering technologies on vSphere. A snapshot of the support matrix is published in the KB (always refer to KB 1037959 for the most current information).

For those familiar with VMware’s previous position on Microsoft Clustering, you will notice a couple of changes. First, VMware has made a distinction in Microsoft Clustering technologies by segmenting them into Shared Disk and Non-shared Disk:

  • Shared Disk – a solution in which the data resides on the same disks and the VMs share the disks (think MSCS).

  • Non-shared Disk – a solution in which the data resides on different disks and uses a replication technology to keep the data in sync (think Exchange 2007 CCR / 2010 DAG).

Next, VMware has extended support for Microsoft Clustering to include In-Guest iSCSI for MSCS.

For those interested in leveraging Microsoft SQL Mirroring, the KB states that VMware does not consider SQL Mirroring a clustering solution and will fully support it on vSphere.

Under the Disk Configurations section, the KB explains that, if using VMFS, the virtual disks used as shared storage for clustered virtual machines must reside on VMFS datastores and must be created using the eagerzeroedthick option. The KB provides detail on how to create eagerzeroedthick disks for both ESX and ESXi via the command line or the GUI. Additional information regarding eagerzeroedthick can be found in KB article 1011170 (http://kb.vmware.com/kb/1011170). Something to note in KB 1011170: at the bottom of the article it states that using the vmkfstools -k command you can convert a preallocated (zeroedthick) virtual disk to eagerzeroedthick while maintaining any existing data. Note that the VM must be powered off for this action.
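
For reference, a sketch of the command-line approach the KB describes (the datastore path and size below are examples only):

    # Create a new eagerzeroedthick virtual disk for a clustered virtual machine
    vmkfstools -c 10G -d eagerzeroedthick /vmfs/volumes/datastore1/ClusterNode1/shared_quorum.vmdk

    # Convert an existing preallocated (zeroedthick) disk in place while keeping
    # its data; the virtual machine must be powered off (see KB 1011170)
    vmkfstools -k /vmfs/volumes/datastore1/ClusterNode1/shared_quorum.vmdk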

In closing, the VMware support statement exists to explicitly define what VMware will and will not support. It is very important to remember that these support statements do not make any determination (either directly or indirectly) about what the software ISV (Independent Software Vendor) will and will not support. So be sure to review the official support statements from your ISV and carefully choose the configuration that makes sense for your organization and will be supported by each vendor.

New Vblock Announcements at EMC World 2011

EMC isn't the only company with some news to unveil at EMC World 2011. VCE has some announcements as well, all revolving around the BRAND NEW VBLOCKS!

The first announcement that affects VCE is the unveiling of Unified Infrastructure Manager 2.1. UIM is standard with a Vblock and is the major hardware orchestration piece, with many new road-map additions to tie it in with other VMware products. Check out Chad Sakac's post, EMC UIM v2.1 Provisioning and Operations, because he has already covered this really in depth.

The second announcement from VCE is the availability of the new VNX-based Vblocks. The original Vblock names are still there, and I've created a chart that helps depict the new differences.


Vblock Name      EMC Array   Other Notes and Features
Vblock 0         NS-120      NAS, SAN, or Both
Vblock 1         NS-480      NAS, SAN, or Both
Vblock 1U        NS-960      NAS, SAN, or Both
Vblock 300 EX    VNX 5300    SAN or Unified
Vblock 300 FX    VNX 5500    SAN or Unified
Vblock 300 GX    VNX 5700    SAN or Unified
Vblock 300 HX    VNX 7500    SAN or Unified
Vblock 2         VMAX        SAN, or NAS with Gateway

     
All new 300 series Vblocks come in SAN or Unified (SAN/NAS) configurations. No longer can Vblocks be ordered as NAS only. Why? Vblocks boot from SAN. When a Vblock was shipped as NAS only, UCS blades had to be populated with internal hard drives. Boot from SAN gives a lot of flexibility in virtual environments: less spinning media to worry about, lower power consumption for the blades, and if a blade fails it is very easy to replace it and boot it up without moving hard drives or re-installing ESX. UCS profiles with SAN boot make VMware 4.1 seem stateless, and UIM can configure blades that boot from SAN.

The Vblock 300 EX is actually cheaper than the original Vblock 0 because of new hardware components.

Some other things you may want to know about the Vblock 300 series are the minimums and maximums. All new Vblocks have specific minimums that can be shipped out. The minimum on the compute side is at least 4 blades; VCE has decided on 4 blades because that is the recommended bare minimum VMware cluster size to account for N+1+maintenance. All UCS blade upgrades can be done in packs of 2 to account for redundancy. On the storage side, a Vblock can be shipped with as few as 18 drives. The maximums depend on the Vblock type: the number of chassis depends on the model on the compute side, and the number of drives depends on the array on the storage side.

So what's going to happen with the original Vblock 0, 1, and 1U? Nothing. VCE still offers the original Vblocks and will continue to do so until EMC puts an end-of-life statement on the arrays.

Your last pondering question might be: what's up with the branding of 300? Seeing as there is just a single number, there is room for newer Vblocks to fit in the range of 0 to 1000. I can't give any more details, but things are in the pipeline!

Lastly, if you happen to be at EMC World, you can take a glimpse at the new racks. VCE is now shipping brand new custom racks built by Panduit. Simply stunning. This looks like it's a Vblock 300 FX.