Friday 13 January 2012

Awesome Video Demoing The Next Generation of Digital Books


A must-watch video for every tech lover: this is the future of our books. "Al Gore's Our Choice" is an interactive app for the Apple iPad and iPhone, featuring the former vice president's narrated tour, spiced up with great photography, interactive graphics, animations, and more than an hour of engrossing documentary footage. It is simply a great experience.

Thanks to the device's groundbreaking multi-touch interface, the app provides an immersive experience, letting users enjoy the content seamlessly. Do check the video demonstration posted after the jump.

DO WATCH THE FULL VIDEO
Al Gore's Our Choice


VMware Clarifies Support for Microsoft Clustering

VMware published KB Article 1037959 (http://kb.vmware.com/kb/1037959) on April 18, 2011 in an effort to clarify VMware’s position on running Microsoft Clustering technologies on vSphere. Below is a snapshot of the support matrix published by VMware in the KB (always refer to KB 1037959 for the most current information).

For those familiar with VMware’s previous position on Microsoft Clustering, you will notice a couple of changes. First, VMware has made a distinction between Microsoft Clustering technologies by segmenting them into Shared Disk and Non-shared Disk solutions.

  • Shared Disk – a solution in which the data resides on the same disks and the VMs share those disks (think MSCS).

  • Non-shared Disk – a solution in which the data resides on different disks and a replication technology keeps the data in sync (think Exchange 2007 CCR / 2010 DAG).

Next, VMware has extended support for Microsoft Clustering to include In-Guest iSCSI for MSCS.

    For those interested in leveraging Microsoft SQL Mirroring, the KB states that VMware does not consider Microsoft SQL Mirroring a clustering solution and will fully support Microsoft SQL Mirroring on vSphere.

    Under the Disk Configurations section, the KB explains that when using VMFS, the virtual disks used as shared storage for clustered virtual machines must reside on VMFS datastores and must be created with the eagerzeroedthick option. The KB details how to create eagerzeroedthick disks for both ESX and ESXi via the command line or the GUI. Additional information on eagerzeroedthick can be found in KB article 1011170 (http://kb.vmware.com/kb/1011170). Worth noting: at the bottom of KB 1011170 it states that, using the vmkfstools -k command, you can convert a preallocated virtual disk to eagerzeroedthick while maintaining any existing data. The VM must be powered off for this action.
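
    As a rough command-line sketch of both operations from the ESX/ESXi console (the datastore and disk paths below are made-up placeholders; adapt them to your environment and double-check the KBs before running anything):

        # Create a new 20 GB shared disk in eagerzeroedthick format on a VMFS datastore
        vmkfstools -c 20G -d eagerzeroedthick /vmfs/volumes/datastore1/node1/quorum.vmdk

        # Convert an existing preallocated disk to eagerzeroedthick in place (per KB 1011170);
        # the owning VM must be powered off first
        vmkfstools -k /vmfs/volumes/datastore1/node1/data.vmdk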

    In closing, the VMware support statement exists to explicitly define what VMware will and will not support. It is very important to remember that these support statements make no determination (either directly or indirectly) about what the software ISV (Independent Software Vendor) will and will not support. Be sure to review the official support statements from your ISV and carefully choose a configuration that makes sense for your organization and is supported by each vendor.

    New Vblock Announcements at EMC World 2011


    EMC isn't the only company with news to unveil at EMC World 2011. VCE has some announcements as well, all revolving around the brand new Vblocks!

    The first announcement that affects VCE is the unveiling of Unified Infrastructure Manager 2.1. UIM is standard with a Vblock and is the major hardware orchestration piece, with many new road-map additions to tie it in with other VMware products. Check out Chad Sakac's post, EMC UIM v2.1 Provisioning and Operations, because he has already covered this in depth.

    The second announcement from VCE is the availability of the new VNX-based Vblocks. The original Vblock names are still there, and I've created a chart that helps depict the differences.


    Vblock Name      EMC Array    Other Notes and Features
    Vblock 0         NS-120       NAS, SAN, or Both
    Vblock 1         NS-480       NAS, SAN, or Both
    Vblock 1U        NS-960       NAS, SAN, or Both
    Vblock 300 EX    VNX 5300     SAN or Unified
    Vblock 300 FX    VNX 5500     SAN or Unified
    Vblock 300 GX    VNX 5700     SAN or Unified
    Vblock 300 HX    VNX 7500     SAN or Unified
    Vblock 2         VMAX         SAN, or NAS with Gateway

    All new 300 series Vblocks come in SAN or Unified (SAN/NAS) configurations. Vblocks can no longer be ordered as NAS only. Why? Vblocks boot from SAN. When a Vblock was shipped as NAS only, the UCS blades had to be populated with internal hard drives. Boot from SAN gives a lot of flexibility in virtual environments: less spinning media to worry about, lower blade power consumption, easy blade replacement (a failed blade can be swapped and booted without moving hard drives or re-installing ESX), UCS profiles with SAN boot that make VMware 4.1 seem stateless, and UIM can configure blades that boot from SAN.
    The Vblock 300 EX is actually cheaper than the original Vblock 0 because of new hardware components.
    Some other things you may want to know about the Vblock 300 series are the minimums and maximums. All new Vblocks ship with specific minimum configurations. The minimum on the compute side is at least 4 blades; VCE settled on 4 because that is the recommended bare-minimum VMware cluster size to account for N+1 plus maintenance. UCS blade upgrades can be done in packs of 2 to preserve redundancy. On the storage side, a Vblock can ship with as few as 18 drives. The maximum number of chassis and drives depends on the Vblock type and the array.
    So what's going to happen with the original Vblock 0, 1, and 1U? Nothing. VCE still offers the original Vblocks and will continue to do so until EMC puts an end-of-life statement on the arrays.
    Your last question might be: what's up with the 300 branding? Since it is a single number, there is room for newer Vblocks to fit into the 0-1000 range. I can't give any more details, but things are in the pipeline!
    Lastly, if you happen to be at EMC World, you can catch a glimpse of the new racks. VCE is now shipping brand new custom racks built by Panduit. Simply stunning. This one looks like a Vblock 300 FX.

    Distributed Resource Scheduling (DRS) for Storage on next vSphere version

    Storage DRS will enhance Storage vMotion to provide automatic load balancing for storage. Users will be able to define groups of datastores, called “storage pods,” that automatically load-balance based on capacity, increasing storage utilization. If a datastore becomes overloaded, Storage Distributed Resource Scheduler (DRS) will use Storage vMotion to rebalance the load. Users can then provision virtual machines (VMs) to a storage pod rather than to a specific datastore.


    EMC Isilon 15PB Storage system





    EMC has announced a new storage system called the Isilon 108NL. The system consists of 36-disk 4U nodes, which can be joined together to create a 144-node cluster with a capacity of 15 PB.
    It’s a scale-out storage system where each node contains its own 1 Gbps and 10 Gbps interfaces, so you can add capacity while maintaining throughput.

    According to EMC, the Isilon 108NL can achieve a throughput of 85 GBps and a maximum of 1.4 million IOPS. The Isilon can be equipped with 1, 2 or 3 TB Hitachi SATA disks, giving it 36, 72 or 108 TB per storage node. The Isilon is powered by the OneFS operating system, which supports NFS, SMB, iSCSI, HTTP and FTP and delivers N+4 data protection at the cluster, directory or file level.

    VMware is still the best

    Now, you can nit-pick about the measurements he made or the criteria he chose, but in general I think it’s a solid test of up-to-date versions.

    The best conclusions I can draw from his report are these:

    VMware might not always be the cheapest, and it might not always be the fastest, but VMware is still the one with the most diverse OS support (any x86 OS can be virtualized), the best management toolkit and the most reliable architecture.


    The article also shows some interesting trends. If you go back in time a bit, you will remember that Citrix was aiming at the server virtualization market when they bought Xen; they even re-branded their entire portfolio around it after the purchase. Looking at the results in the test Paul did, the conclusion I draw is that Citrix has run out of fuel in this part of town and is concentrating on the desktop again.
    Another remarkable trend can be seen at Microsoft and Red Hat. A few years back, Red Hat didn’t really compete in this part of town and Microsoft was more or less the laughing stock of the bunch. Nobody really considered running Hyper-V in their data center, as it was not even ready for a proper single-server deployment, let alone a complete data center cluster.

    Well, Microsoft did what was to be expected of them; they improved, and improved again. One thing can be said of their current version: it can be deployed in a data center scenario. But as the shoot-out shows, there is still a lot of room for improvement, and we all know that statements made about previous versions of Hyper-V like ‘who needs live migration’ quickly changed into ‘look, we can do live migration too!’. Reliability and scalability have hugely improved; management is still a pain in the butt for Redmond. One thing strikes me most when I am helping select a virtualization platform: many clients tend to think Hyper-V is free. You get it with any server license you buy. Indeed you do. But keep in mind that you burn that specific license for your virtual platform AND you still need to buy a collection of management software to properly manage the lot. But I guess we haven’t seen the last of this yet.

    Red Hat is one of the most remarkable companies in this list. They have been on the virtualization train for quite some time, but as this test shows, they really can compete with the big three in this field and come out second. It seems that open source is quickly closing the gap with the enterprise environments and really showing off what it can do, although implementation is a bit limited, with Windows and Red Hat Linux as the only supported guest VMs.

    Thursday 12 January 2012

    How to convert VHD to VMDK disk format (Convert format to VMDK)


    Download Winimage 8.10

    The first thing you will need is WinImage 8.10, which is shareware. You may evaluate it for a period of 30 days, after which you need to register if you intend to continue using it. Make sure to download WinImage version 8.10. You can get it from:

    This is the Winimage download page.
    http://www.winimage.com/download.htm


    Install Winimage 8.10 and start it

    After installing WinImage, you will get to this screen:


    Click menu "Disk" and "Convert Virtual Hard Disk Image".


    Select file type "Virtual Hard Disk (*.vhd)" under file type field.
    Choose the folder where the source virtual disk is, select the virtual disk and click "Open".


    Select either "Create Fixed Size Virtual Hard Disk" or "Create Dynamically Expanding Virtual Hard Disk".


    Select file type "VMware VMDK (*.vmdk)" under file type field.
    Choose the folder where the destination virtual disk will be, type the virtual disk name and then click "Save".


    Give it time to finish the conversion. It may take a while depending on the size of the virtual hard disk.


    Once it is done, you will return to the opening window. The conversion is complete.

    This is "Virtual to Virtual" (V2V) conversion.

    vSphere 4.1 Memory Enhancements - Compression


    Finally, with Transparent Memory Compression, vSphere 4.1 compresses memory on the fly to increase the amount of memory that appears to be available to a given VM. Transparent Memory Compression is of interest in workloads where memory -- rather than CPU cycles -- is the limiting factor. ESX/ESXi provides a memory compression cache to improve virtual machine performance when using memory over-commitment. Memory compression is enabled by default; when a host's memory becomes overcommitted, ESX/ESXi compresses virtual pages and stores them in memory.

    Since accessing compressed memory is faster than accessing memory swapped to disk, memory compression in ESX/ESXi allows memory over-commitment without significantly hindering performance. When a virtual page needs to be swapped, ESX/ESXi first attempts to compress the page. Pages that can be compressed to 2 KB or smaller are stored in the virtual machine's compression cache, increasing the capacity of the host. The maximum size of the compression cache can be set, and memory compression can be disabled, using the Advanced Settings dialog box in the vSphere Client.
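
    The same knobs should also be reachable from the host command line via esxcfg-advcfg; the sketch below assumes the advanced settings are named Mem.MemZipEnable and Mem.MemZipMaxPct, so verify the names on your build before changing anything:

        # Show whether memory compression is enabled (1 = enabled, the default)
        esxcfg-advcfg -g /Mem/MemZipEnable

        # Cap the compression cache at 10% of each VM's memory size
        esxcfg-advcfg -s 10 /Mem/MemZipMaxPct

        # Disable memory compression entirely
        esxcfg-advcfg -s 0 /Mem/MemZipEnable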

    vSphere 4.1 Network Traffic Management - Emergence of 10 GigE


    The diagram at left should be familiar to most. When using 1GigE NICs, ESX hosts are typically deployed with NICs dedicated to particular traffic types. For example, you might dedicate four 1GigE NICs to VM traffic, one NIC to iSCSI, another to vMotion, and another to the service console. Each traffic type gets dedicated bandwidth by virtue of the physical NIC allocation.



    Moving to the diagram at right: ESX hosts deployed with 10GigE NICs are likely to be deployed (for the time being) with only two 10GigE interfaces, with multiple traffic types converged over those two interfaces. So long as the load offered to a 10GigE interface is less than 10 Gbps, everything is fine and the NIC can service the offered load. But what happens when the offered load from the various traffic types exceeds the capacity of the interface? What happens when you offer, say, 11 Gbps to a 10GigE interface? Something has to suffer. This is where Network IO Control (NetIOC) steps in: it addresses oversubscription by allowing you to set the relative importance of predetermined traffic types.

    NetIOC is controlled with two parameters—Limits and Shares.

    Limits, as the name suggests, set a cap for a given traffic type (e.g. VM traffic) across the NIC team. The value is specified in absolute terms, in Mbps. When set, that traffic type will not exceed that limit *outbound* (or egress) of the host.

    Shares specify the relative importance of a traffic type when traffic types compete for a particular vmnic (physical NIC). Shares are specified in abstract units between 1 and 100 and indicate the relative importance of that traffic type. For example, if iSCSI has a shares value of 50 and FT logging has a shares value of 100, then FT traffic will get 2x the bandwidth of iSCSI when they compete. If they were both set to 50, or both to 100, they would get the same level of service (bandwidth).
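
    To make the arithmetic concrete (my own back-of-the-envelope numbers, not from the documentation): if only iSCSI (shares 50) and FT logging (shares 100) are saturating a single 10GigE vmnic, bandwidth under contention is split in proportion to shares, so FT gets roughly 100/150 × 10 Gbps ≈ 6.7 Gbps and iSCSI roughly 50/150 × 10 Gbps ≈ 3.3 Gbps. If a third active traffic type with 50 shares joins in, the split becomes 100/200, 50/200 and 50/200, i.e. about 5, 2.5 and 2.5 Gbps.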

    There are a number of preset values for shares, ranging from low to high, and you can also set custom values. Note that the limits and shares apply to output (egress) from the ESX host, not input. Remember that shares apply per vmnic; limits apply across the team.

    vSphere 4.1 offers improved vMotion speeds


    The vSphere 4.1 vMotion enhancements significantly reduce the overall time for host evacuations, with support for more simultaneous virtual machine migrations and faster individual virtual machine migrations. The result is a performance improvement of up to 5x for an individual virtual machine migration and support for four to eight simultaneous vMotion migrations per host, depending on the vMotion network adapter (1GbE or 10GbE respectively).

     

    Fault Tolerance Checklist


    I delivered my first “vSphere 4 what’s new” training this week. During the last lab, the students managed to configure VMware Fault Tolerance. After powering down one of the ESX servers, the protected virtual machine switched over to the second ESX server. FT works fabulously. You can use the following checklist to make sure you configure FT the right way.

    Fault Tolerance Checklist 

    • Required  ESX/ESXi Hardware: Ensure that the processors are supported: AMD Barcelona+, Intel Penryn+ (run the CPU compatibility tool to determine compatibility).
    • Required  ESX/ESXi Hardware: Ensure that HV (Hardware Virtualization) is enabled in the BIOS (a few of these items can be spot-checked from the command line; see the sketch after this list).
    • Optional  ESX/ESXi Hardware: Ensure that power management (also known as power-capping) is turned OFF in the BIOS (performance implications).
    •  Optional  ESX/ESXi Hardware: Ensure that hyper-threading is turned OFF in the BIOS (performance implications).
    •  Required  Storage: Ensure that FT protected virtual machines are on shared storage (FC, iSCSI or NFS). When using NFS, increase timeouts and have a dedicated NIC for NFS traffic.
    •  Required  Storage: Ensure that the datastore is not using physical RDM (Raw Disk Mapping). Virtual RDM is supported.
    •  Required  Storage: Ensure that there is no requirement to use Storage VMotion for VMware FT VMs since Storage VMotion is not supported for VMware FT VMs.
    •  Required  Storage: Ensure that NPIV (N-Port ID Virtualization) is not used since NPIV is not supported with VMware FT.
    •  Optional  Storage: Ensure that virtual disks on VMFS3 are thick-eager zeroed (thin or sparsely allocated will be converted to thick-eager zeroed when VMware FT is enabled requiring additional storage space).
    •  Optional  Storage: Ensure that ISOs used by the VMware FT protected VMs are on shared storage accessible to both primary and secondary VMs (else errors reported on secondary as if there is no media, which might be acceptable).
    •  Optional  Network: Ensure that at least two NICs are used (NIC teaming) for ESX  management/VMotion and VMware FT logging. VMware recommends four VMkernel NICs: two dedicated for VMware VMotion and two dedicated for VMware FT.
    •  Required  Network: Ensure that at least gigabit NICs are used (10 Gbit NICs can be used as well as jumbo frames enabled for better performance).
    •  Optional  Redundancy: Ensure that the environment does not have a single point of failure (i.e. use NIC teaming, multiple network switches, and storage multipathing).
    •  Required  vCenter Server: Ensure that the primary and secondary ESX hosts and virtual machines are in an HA-enabled cluster.
    •  Required  vCenter Server: Ensure that there is no requirement to use DRS for VMware FT protected virtual machines; in this release VMware FT cannot be used with VMware DRS (although manual VMotion is allowed).
    • Required  vCenter Server: Ensure that host certificate checking is enabled (enabled by default) before you add the ESX/ESXi host to vCenter Server.
    •  Required  ESX/ESXi: Ensure that the primary and secondary ESX/ESXi hosts are running the same build of VMware ESX/ESXi.
    •  Required  Virtual Machines: Ensure that the virtual machines are NOT using more than 1 vCPU (SMP is not supported).
    •  Required  Virtual Machines: Ensure that there is no user requirement to use NPT/EPT (Nested Page Tables/Extended Page Tables) since VMware FT disables NPT/EPT on the ESX host.
    •  Required  Virtual Machines: Ensure that there is no user requirement to hot add or remove devices since hot plugging devices cannot be done with VMware FT.
    •  Required  Virtual Machines: Ensure that there is no user requirement to use USB (USB must be disabled) or sound devices (must not be configured) since these are not supported for Record/Replay (and VMware FT).
    •  Required  Virtual Machines: Ensure that there is no user requirement to have virtual machine snapshots since these are not supported for VMware FT. Delete snapshots from existing virtual machines before protecting with VMware FT.
    •  Required  Virtual Machines: Ensure that virtual machine hardware is upgraded to v7.
    •  Optional  Virtual Machines: Ensure that there will be no more than four (to eight) VMware FT-enabled virtual machine primaries or secondaries on any single ESX/ESXi host (a suggested general guideline based on ESX/ESXi host and VM size and workloads, which can vary).
    •  Required  Guest OS: Ensure that the virtual machines do not use a paravirtualized guest OS.
    •  Required  3rd Party: Ensure MSCS clustered virtual machines will have MSCS clustering removed prior to protecting with VMware FT (and make sure that the virtual machines are not SMP).
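
    A few of the items above can be spot-checked from the ESX service console. Below is an unofficial sketch (output formats vary between ESX builds, and the VM path is a placeholder), not a substitute for the official compatibility checks:

        # Confirm both hosts run the same ESX build
        vmware -v

        # Check whether hardware virtualization (HV) support is enabled
        esxcfg-info | grep "HV Support"

        # Check the VM's virtual hardware version and vCPU count
        # (an absent numvcpus entry means 1 vCPU)
        grep -E "virtualHW.version|numvcpus" /vmfs/volumes/datastore1/vm1/vm1.vmx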