Raj2796's Blog

February 18, 2010

VMware vSphere PVSCSI white paper and article on interrupt coalescing

Filed under: pvscsi,vmware — raj2796 @ 10:51 am

As a final post on PVSCSI, I've stumbled upon the VMware PVSCSI storage performance study paper:
vsp_4_pvscsi_perf.pdf

Also see the VMware VROOM! page.

And lastly, a more detailed explanation of why PVSCSI is not always best:

PVSCSI and Low IO Workloads
Filed under: Uncategorized — Tags: pvscsi, storage, vmkernel — Scott @ 10:46 am

Scott Sauer recently asked me a tough question on Twitter. My roaming best practices talk includes the phrase “do not use PVSCSI for low-IO workloads”. When Scott saw a VMware KB echoing my recommendation, he asked the obvious question: “Why?” It took me a couple of days to get a sufficient answer.

One technique for storage driver efficiency improvements is interrupt coalescing. Coalescing can be thought of as buffering: multiple events are queued for simultaneous processing. For coalescing to improve efficiency, interrupts must stream in fast enough to create large batch requests. Otherwise the timeout window will pass with no additional interrupts arriving. This means the single interrupt is handled as normal but after a useless delay.

An intelligent storage driver will therefore coalesce at high IO but not low IO. In the years we have spent optimizing ESX’s LSI Logic virtual storage adapter, we have fine-tuned the coalescing behavior to give fantastic performance on all workloads. This is done by tracking two key storage counters:

* Outstanding IOs (OIOs): Represents the virtual machine’s demand for IO.
* IOs per second (IOPS): Represents the storage system’s supply of IO.

The robust LSI Logic driver increases coalescing as OIOs and IOPS increase. No coalescing is used with few OIOs or low throughput. This produces efficient IO at large throughput and low latency IO when throughput is small.

Currently the PVSCSI driver coalesces based on OIOs only, and not throughput. This means that when the virtual machine is requesting a lot of IO but the storage is not delivering, the PVSCSI driver is coalescing interrupts. But without the storage supplying a steady stream of IOs there are no interrupts to coalesce. The result is a slightly increased latency with little or no efficiency gain for PVSCSI in low throughput environments.
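To make the contrast concrete, here is a rough Python sketch of the two policies as described above; it is purely illustrative (not actual driver code) and the thresholds are invented:

def lsi_coalescing_depth(oios, iops):
    """LSI Logic style: coalesce only when both demand (OIOs) and supply (IOPS) are high."""
    if oios <= 4 or iops < 2000:
        return 1                      # low IO: deliver every interrupt immediately
    return min(oios // 4, 32)         # batch more as load grows, capped to bound latency

def esx40_pvscsi_coalescing_depth(oios, iops):
    """ESX 4.0 PVSCSI style: coalesces on OIOs alone; iops is intentionally ignored."""
    if oios <= 4:
        return 1
    return min(oios // 4, 32)         # can delay interrupts even when IOPS are low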

LSI Logic is so efficient at low throughput levels that there is no need for a special device driver to improve efficiency. The CPU utilization difference between LSI and PVSCSI at hundreds of IOPS is insignificant. But at massive amounts of IO–where 10-50K IOPS are streaming over the virtual SCSI bus–PVSCSI can save a large number of CPU cycles. Because of that, our first implementation of PVSCSI was built on the assumption that customers would only use the technology when they had backed their virtual machines by world-class storage.

But VMware’s marketing engine (me, really) started telling everyone about PVSCSI without the right caveat (“only for massive IO systems!”). So everyone started using it as a general solution. This meant that in one condition–slow storage (low IOPS) with a demanding virtual machine (high OIOs)–PVSCSI has been inefficiently coalescing IOs, resulting in performance slightly worse than LSI Logic.

But now VMware’s customers want PVSCSI as a general solution and not just for high IO workloads. As a result we are including advanced coalescing behavior in PVSCSI for future versions of ESX. More on that when the release vehicle is set.

PVSCSI In A Nutshell

If you plodded through the above technical explanation of interrupt coalescing and PVSCSI I applaud you. If you just want a summary of what to do, here it is:

* For existing products, only use PVSCSI against VMDKs that are backed by fast (greater than 2,000 IOPS) storage.
* If you have installed PVSCSI in low IO environments, do not worry about reconfiguring to LSI Logic. The net loss of performance is very small. And clearly these low IO virtual machines are not running your performance-critical applications.
* For future products*, PVSCSI will be as efficient as LSI Logic for all environments.

(*) Specific product versions not yet announced.

The original article can be found here at Scott Drummonds' site – apologies for spelling your name wrong earlier 😛

When to use VMware PVSCSI and when to use LSI Logic

Filed under: pvscsi,vmware — raj2796 @ 10:32 am

Now that the technology is officially supported by VMware, and we can expect the number of supported systems to grow with subsequent VMware updates, many of us are starting to use PVSCSI for all new VMs. I've found it works fine with OSes that are not officially supported, so long as you have the latest VMware Tools installed (I have not tried NetWare yet, though).

PVSCSI gives 12% more throughput with 18% less CPU use; that's enough of a gain to avoid having to upgrade your servers this year. Add thin provisioning and you can skip your storage upgrades too, roll the money over, and get something nice and shiny the following year.

It seems PVSCSI isn't always the best choice, though: for heavy workloads it's great, but for lower workloads LSI is still champ. VMware has a knowledge base article on this, which can be summed up as:

PVSCSI – everything else (workloads of 2,000 IOPS or more)
LSI Logic – less than 2,000 IOPS and more than 4 outstanding I/Os
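
As a trivial helper (my own sketch of that rule of thumb, not anything from VMware), in Python:

def recommended_adapter(iops, outstanding_ios):
    """Encode the rule of thumb above for choosing a virtual SCSI adapter."""
    if iops < 2000 and outstanding_ios > 4:
        return "LSI Logic"   # low-throughput but demanding VM: PVSCSI coalescing adds latency
    return "PVSCSI"          # everything else, especially workloads of 2,000+ IOPS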

VMware's knowledge base article:

Do I choose the PVSCSI or LSI Logic virtual adapter on ESX 4.0 for non-IO intensive workloads?
Details
VMware evaluated the performance of PVSCSI and LSI Logic to provide a guideline to customers on choosing the right adapter for different workloads. The experiment results show that PVSCSI greatly improves the CPU efficiency and provides better throughput for heavy I/O workloads. For certain workloads, however, the ESX 4.0 implementation of PVSCSI may have a higher latency than LSI Logic if the workload drives low I/O rates or issues few outstanding I/Os. This is due to the way the PVSCSI driver handles interrupt coalescing.

One technique for storage driver efficiency improvements is interrupt coalescing. Coalescing can be thought of as buffering: multiple events are queued for simultaneous processing. For coalescing to improve efficiency, interrupts must stream in fast enough to create large batch requests. Otherwise, the timeout window will pass with no additional interrupts arriving. This means the single interrupt is handled as normal but after an unnecessary delay.

The behavior of two key storage counters affects the way the PVSCSI and LSI Logic adapters handle interrupt coalescing:

* Outstanding I/Os (OIOs): Represents the virtual machine’s demand for I/O.
* I/Os per second (IOPS): Represents the storage system’s supply of I/O.

The LSI Logic driver increases coalescing as OIOs and IOPS increase. No coalescing is used with few OIOs or low throughput. This produces efficient I/O at large throughput and low-latency I/O when throughput is small.

In ESX 4.0, the PVSCSI driver coalesces based on OIOs only, and not throughput. This means that when the virtual machine is requesting a lot of I/O but the storage is not delivering, the PVSCSI driver is coalescing interrupts. But without the storage supplying a steady stream of I/Os, there are no interrupts to coalesce. The result is a slightly increased latency with little or no efficiency gain for PVSCSI in low throughput environments.

The CPU utilization difference between LSI and PVSCSI at hundreds of IOPS is insignificant. But at larger numbers of IOPS, PVSCSI can save a lot of CPU cycles.

Solution
The test results show that PVSCSI is better than LSI Logic, except under one condition–the virtual machine is performing less than 2,000 IOPS and issuing greater than 4 outstanding I/Os.

February 11, 2010

Configuring disks to use VMware Paravirtual SCSI (PVSCSI) adapters

Filed under: pvscsi,Virtualisation,vmware — raj2796 @ 5:08 pm

NOTE – although not officially supported, boot from PVSCSI adapters seems to work fine so far on the OSes I have tried – boot from PVSCSI is supported from ESX 4.0 Update 1 onwards.

Details
This article includes supplemental information about configuring and using VMware Paravirtual SCSI (PVSCSI) adapters.

PVSCSI adapters are high-performance storage adapters that can result in greater throughput and lower CPU utilization. PVSCSI adapters are best suited for environments, especially SAN environments, where hardware or applications drive a very high amount of I/O throughput. PVSCSI adapters are not suited for DAS environments.

Paravirtual SCSI adapters are supported on the following guest operating systems:

* Windows Server 2008
* Windows Server 2003
* Red Hat Enterprise Linux (RHEL) 5

Paravirtual SCSI adapters also have the following limitations:

* Hot add or hot remove requires a bus rescan from within the guest.
* Disks with snapshots might not experience performance gains when used on Paravirtual SCSI adapters or if memory on the ESX host is overcommitted.
* If you upgrade from RHEL 5 to an unsupported kernel, you might not be able to access data on the virtual machine’s PVSCSI disks. You can run vmware-config-tools.pl with the kernel-version parameter to regain access.
* Because the default type of newly hot-added SCSI adapter depends on the type of primary (boot) SCSI controller, hot-adding a PVSCSI adapter is not supported.
* Booting a Linux guest from a disk attached to a PVSCSI adapter is not supported. A disk attached using PVSCSI can be used as a data drive, not a system or boot drive. Booting a Microsoft Windows guest from a disk attached to a PVSCSI adapter is not supported in versions of ESX prior to ESX 4.0 Update 1.

Solution
To configure a disk to use a PVSCSI adapter:

1. Launch a vSphere Client and log in to an ESX host.
2. Select a virtual machine, or create a new one.
3. Ensure a guest operating system that supports PVSCSI is installed on the virtual machine.

Note: Booting a Linux guest from a disk attached to a PVSCSI adapter is not supported. Booting a Microsoft Windows guest from a disk attached to a PVSCSI adapter is not supported in versions of ESX prior to ESX 4.0 Update 1. In these situations, the system software must be installed on a disk attached to an adapter that does support a bootable disk.

4. In the vSphere Client, right-click on the virtual machine and click Edit Settings.
5. Click the Hardware tab.
6. Click Add.
7. Select Hard Disk.
8. Click Next.
9. Choose any one of the available options.
10. Click Next.
11. Specify the options you require. Options vary depending on which type of disk you chose.
12. Choose a Virtual Device Node between SCSI (1:0) and SCSI (3:15) and specify whether you want to use Independent mode.
13. Click Next.
14. Click Finish to complete the process and exit the Add Hardware wizard. A new disk and controller are created.
15. Select the newly created controller and click Change Type.
16. Click VMware Paravirtual and click OK.
17. Click OK to exit the Virtual Machine Properties dialog.
18. Power on the virtual machine.
19. Install VMware Tools. VMware Tools includes the PVSCSI driver.
20. Scan and format the hard disk.

Note: For some operating system types, to perform this procedure you need to create the virtual machine with the LSI controller, install VMware Tools, and then change the drives to paravirtualized mode.

The above article is from VMware's knowledge base.
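
If you would rather script the same change, here is a minimal Python sketch using the pyVmomi bindings (assuming they are installed); the vCenter address, credentials and VM name below are hypothetical placeholders, and the wizard steps above remain the documented route:

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="password")  # hypothetical credentials
vm = si.content.searchIndex.FindByDnsName(None, "MyVM", True)                        # hypothetical VM name

# New ParaVirtual SCSI controller on bus 1 (SCSI 1:x, matching step 12 above)
ctrl = vim.vm.device.ParaVirtualSCSIController()
ctrl.key = -101                     # temporary negative key for a not-yet-created device
ctrl.busNumber = 1
ctrl.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing
ctrl_spec = vim.vm.device.VirtualDeviceSpec()
ctrl_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
ctrl_spec.device = ctrl

# New thin-provisioned data disk attached to that controller
disk = vim.vm.device.VirtualDisk()
disk.key = -102
disk.controllerKey = ctrl.key
disk.unitNumber = 0
disk.capacityInKB = 10 * 1024 * 1024        # 10 GB
disk.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
disk.backing.diskMode = "persistent"
disk.backing.thinProvisioned = True
disk_spec = vim.vm.device.VirtualDeviceSpec()
disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
disk_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
disk_spec.device = disk

spec = vim.vm.ConfigSpec(deviceChange=[ctrl_spec, disk_spec])
vm.ReconfigVM_Task(spec=spec)               # the PVSCSI driver inside the guest still comes from VMware Tools
Disconnect(si)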

Raphael from Hypervisor.fr has a great video on this:

vSphere Virtual Machine Upgrade Process

Filed under: pvscsi,Virtualisation,vmware — raj2796 @ 5:06 pm

vSphere Virtual Machine Upgrade Process

All over the world people are upgrading to VMware vSphere. The upgrade procedure for the hypervisor and the management layer is straightforward. However, if you want to take full advantage of all the new features and performance improvements, your virtual machines will also need to be upgraded. Scott Lowe wrote an excellent article on this topic and I wanted to bring this post to your attention as it's an important part of the upgrade path in my opinion. Please visit the source article for feedback or comments.

Scott Lowe – vSphere Virtual Machine Upgrade Process

Upgrading a VMware Infrastructure 3.x environment to VMware vSphere 4 involves more than just upgrading vCenter Server and upgrading your ESX/ESXi hosts (as if that wasn’t enough). You should also plan on upgrading your virtual machines. VMware vSphere introduces a new hardware version (version 7), and vSphere also introduces a new paravirtualized network driver (VMXNET3) as well as a new paravirtualized SCSI driver (PVSCSI). To take advantage of these new drivers as well as other new features, you’ll need to upgrade your virtual machines. The process I describe below works really well. I’d like to thank Erik Bussink, whose posts on Twitter got me started down this path. Please note that this process will require some downtime. I personally tested this process with both Windows Server 2003 R2 and Windows Server 2008; it worked flawlessly with both versions of Windows. (I’ll post a separate article on doing something similar with other operating systems, if it’s even possible.)

1. Record the current IP configuration of the guest operating system. You’ll end up needing to recreate it.
2. Upgrade VMware Tools in the guest operating system. You can do this by right-clicking on the virtual machine and selecting Guest > Install/Upgrade VMware Tools. When prompted, choose to perform an automatic tools upgrade. When the VMware Tools upgrade is complete, the virtual machine will reboot.
3. After the guest operating system reboots and is back up again, shutdown the guest operating system. You can do this by right-clicking on the virtual machine and selecting Power > Shutdown Guest.
4. Upgrade the virtual machine hardware by right-clicking the virtual machine and selecting Upgrade Virtual Hardware.
5. In the virtual machine properties, add a new network adapter of the type VMXNET3 and attach it to the same port group/dvPort group as the first network adapter.
6. Remove the first/original network adapter.
7. Add a new virtual hard disk to the virtual machine. Be sure to attach it to SCSI node 1:x; this will add a second SCSI adapter to the virtual machine. The size of the virtual hard disk is irrelevant.
8. Change the type of the newly-added second SCSI adapter to VMware Paravirtual.
9. Click OK to commit the changes you’ve made to the virtual machine.
10. Power on the virtual machine. When the guest operating system is fully booted, log in and recreate the network configuration you recorded for the guest back in step 1. Windows may report an error that the network configuration is already used by a different adapter, but proceed anyway. Once you’ve finished, shut down the guest operating system again.
11. Edit the virtual machine to remove the second hard disk you just added.
12. While still in the virtual machine properties, change the type of the original SCSI controller to VMware Paravirtual (NOTE: See update below.)
13. Power on the virtual machine. When the guest operating system is fully booted up, log in.
14. Create a new system environment variable named DEVMGR_SHOW_NONPRESENT_DEVICES and set the value to 1.
15. Launch Device Manager and from the View menu select Show Hidden Devices.
16. Remove the drivers for the old network adapter and old SCSI adapter. Close Device Manager and you’re done!

If you perform these steps on a template, then you can be assured that all future virtual machines cloned from this template also have the latest paravirtualized drivers installed for maximum performance.

The above post was found on the VMware community blogs, an excellent resource which I suggest you check out.
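
If you want to script the first half of this process, the following is a hedged pyVmomi sketch covering steps 2, 3 and 4 above (Tools upgrade, guest shutdown, virtual hardware upgrade); the connection details and VM name are hypothetical placeholders, and the adapter swaps are still easiest to do in the vSphere Client:

import time
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="password")  # hypothetical credentials
vm = si.content.searchIndex.FindByDnsName(None, "MyVM", True)                        # hypothetical VM name

WaitForTask(vm.UpgradeTools_Task())                 # step 2: upgrade VMware Tools (the guest reboots itself)

vm.ShutdownGuest()                                  # step 3: graceful guest shutdown
while vm.runtime.powerState != "poweredOff":        # wait for the guest to finish shutting down
    time.sleep(5)

WaitForTask(vm.UpgradeVM_Task(version="vmx-07"))    # step 4: upgrade virtual hardware to version 7
Disconnect(si)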
