Tag Archives: VMotion

VMware vSphere Storage Appliance (vSA) Performance Benchmark Test

The article below details my experience with VMware vSA performance benchmark testing. I have tried to be as thorough as possible to give you a complete picture of my findings. I believe there is a space where storage virtualization may thrive, but after recent experience with the VMware vSA product, I am less than satisfied with the results, the manageability, and most of all the performance. I believe storage virtualization is a few years away from the maturity needed to be a serious candidate for small and remote office scenarios. This holds true for other two- and three-node storage virtualization technologies, including FalconStor's storage virtualization.

VMware Version Information

VMware vCenter Server 5.1.0, 947673
VMware vSphere Storage Appliance 5.1.3, 1090545
VMware ESXi 5.1 U1, HP OEM Bundle, 1065491 (VMware-ESXi-5.1.0-Update1-1065491-HP-5.50.26.iso)

HP ProLiant DL385 G2 Hardware Configuration
– 4 CPUs x 2.6 GHz
– Dual-Core AMD Opteron Processor 2218
– AMD Opteron Generation EVC Mode
– HP Smart Array P400, 512MB Cache, 25% Read / 75% Write
– RAID-5, 8x 72 GB 10K RPM Hard Drives
– HP Service Pack 02.2013 Firmware

vSphere Storage Appliance Configuration
– 2 Node Cluster
– Eager Zero Full Format
– VMware Best Practices

IOZone Virtual Machine Configuration
– Oracle Linux 6.4 x86_64
– 2 vCPU
– 1 GB Memory
– 20 GB Disk, Thick Eager Zero Provisioned
– VMware Tools 9.0.5.21789 (build-1065307)

IOZone Test Parameters
/usr/bin/iozone -a -s 5G -o

-a   Used to select full automatic mode. Produces output that covers all tested file operations for record sizes of 4k to 16M for file sizes of 64k to 512M.

-s #   Used to specify the size, in Kbytes, of the file to test. One may also specify -s #k (size in Kbytes) or -s #m (size in Mbytes) or -s #g (size in Gbytes).

-o   Writes are synchronously written to disk. (O_SYNC). Iozone will open the files with the O_SYNC flag. This forces all writes to the file to go completely to disk before returning to the benchmark.
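If you want to capture the raw numbers in a spreadsheet-friendly form (as with the raw Excel data linked below), iozone can also emit an Excel-compatible results file via its -b flag. A minimal sketch, assuming your iozone build includes the option (the output filename is my own choice):

/usr/bin/iozone -a -s 5G -o -b vsa-results.xls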

VMware ESXi/vSA Network Configuration

VMware vSA Architecture

IOZone Performance Benchmark Results

(Graphs: vSA Read, Stride Read, Random Read, Backward Read, Fread, Write, Random Write, Record Rewrite, and Fwrite results)

Download RAW Excel Data

Summary

The vSA performed far worse than the native onboard storage controller, which was expected due to the additional layer of virtualization. I honestly expected better performance out of the 8-disk RAID-5 even without storage virtualization, since the drives were 10,000 RPM. On average, across all the tests, there is a 76.3% difference between the native storage and the virtualized storage! Wow! That is an expensive downgrade! I understand that the test bed was not using the latest and greatest hardware, but disk performance is generally limited by the spinning platters. I would really be interested in seeing the difference with newer hardware.
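To put that figure in concrete terms: taking percent difference as (native throughput − vSA throughput) / native throughput × 100, a 76.3% average difference means the vSA delivered less than a quarter of the native throughput across the test suite.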

I believe this depicts only a fraction of the entire picture: performance. I have other concerns at the moment with storage virtualization, such as complexity and manageability. I found the complexity very frustrating while setting up the vSA; there are many design considerations and limitations with this particular storage virtualization solution, most of which were observed during the test trials. The vSA management interface is a Flash-based application, which had its quirks and crashes as well. Crashes at the storage virtualization layer left me thinking this would be a perfect recipe for data loss and/or corruption. In addition, a single instance could not manage multiple vSA deployments due to IP addressing restrictions, which was a must for the particular use case I was testing for.

For now, storage virtualization is not there yet, in my opinion, for any production use. It has a lot of room to grow, and I will certainly be interested in revisiting this subject down the road since I believe in the concept.



vMotion a VMware Horizon View Replica (VDI)

When working with a linked-clone replica in VMware Horizon View, you must unprotect the linked-clone base image prior to vMotioning it to another datastore. The following commands will enable you to unprotect the base image, vMotion it to the new datastore, and finally reprotect it for continued use by VMware Horizon View.
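Note that sviconfig ships with View Composer and is typically run from the Composer server itself (by default it lives under C:\Program Files (x86)\VMware\VMware View Composer\). A filled-in example of the command follows the steps below.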

1. Disable provisioning for the VMware View Pool.
2. Change settings in VMware View to reflect the new datastore to use.
3. Unprotect replica.

sviconfig -operation=UnprotectEntity -DsnName=<dsnname> -DbUsername=<dbusername> -DbPassword=<dbpassword> -VcUrl=https://<vcenterurl>/sdk -VcUsername=<username> -VcPassword=<password> -InventoryPath=//vm/VMwareViewComposerReplicaFolder/ -Recursive=true

4. vMotion replica.
5. Reprotect replica.

sviconfig -operation=ProtectEntity -DsnName=<dsnname> -DbUsername=<dbusername> -DbPassword=<dbpassword> -VcUrl=https://<vcenterurl>/sdk -VcUsername=<username> -VcPassword=<password> -InventoryPath=//vm/VMwareViewComposerReplicaFolder/ -Recursive=true

6. Re-enable provisioning.
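For illustration, here is what the reprotect command might look like once filled in; every concrete value below (DSN name, accounts, passwords, and vCenter address) is a hypothetical placeholder of mine, not from the original procedure:

sviconfig -operation=ProtectEntity -DsnName=ViewComposerDSN -DbUsername=svi_admin -DbPassword=S3cretPass -VcUrl=https://vcenter.example.local/sdk -VcUsername=administrator -VcPassword=S3cretPass -InventoryPath=//vm/VMwareViewComposerReplicaFolder/ -Recursive=true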

Reference : http://kb.vmware.com/kb/1008704


Failed to Start Migration Pre-copy Error 0xbad003f vMotion Migration Fix

“A general system error occurred: Failed to start migration pre-copy. Error 0xbad003f. Connection closed by remote host, possibly due to timeout.”
“A general system error occurred: Failed to start migration pre-copy. Error 0xbad004b. Connection reset by peer.”

Another issue I recently came across was a live vMotion migration that would fail during the pre-copy, always at 10%. The error reported in the vSphere event log was one of the two above:

VMware vCenter vSphere Event Log

I performed some basic troubleshooting, such as a vmkping. I used the ping command and watched the response times remain consistent during the attempted vMotion migration. No packets were being lost, and I assumed that a Layer 3 IP addressing problem would have shown itself as packet loss.

VMware ESXi vmkping
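For reference, the basic form of the test from the ESXi shell is below; the destination is the vMotion VMkernel address of the target host (shown here as a placeholder):

vmkping <destination-vmotion-ip>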

While still on the command line of the ESXi host, I decided to look for any ARP entries anyway, despite my logic having ruled that out. I ran the following:

cat /var/log/vmkernel | grep arp
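(On ESXi 5.x and later the VMkernel log is /var/log/vmkernel.log, so the equivalent there would be: grep -i arp /var/log/vmkernel.log)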

I was wrong: there was another host on the network that had the same IP address!

VMware ESXi Log

I found a new IP address for my VMkernel interface, updated DNS, then updated the IP address on the ESXi host, and my issue was resolved!
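For reference, the address change itself can be made from the ESXi 5.x shell with esxcli; a sketch, with a hypothetical interface name and addresses of my own:

esxcli network ip interface ipv4 set -i vmk1 -I 192.168.10.12 -N 255.255.255.0 -t static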


Enable VMware EVC with Workloads Running!

As of ESX/ESXi 4.1, you can enable Enhanced vMotion Compatibility (EVC) on a live cluster if you meet specific criteria. Historically, EVC required all virtual machines in the cluster to be powered off before it could be enabled. You can now enable EVC if all the processors in your cluster are already using the same processor baseline. If there are different processor baselines in your cluster, you will not be able to enable EVC while the cluster is running, and enabling the feature will require downtime. This is why it is recommended to set EVC on a new cluster prior to any VMs being added.

To enable EVC, right-click your cluster in vSphere and select Edit Settings. In the left panel, select VMware EVC, then click Change. Select your desired EVC mode, and the Compatibility box will indicate whether you can enable EVC with or without downtime. Below is a screenshot showing EVC being enabled on a running VMware cluster.

VMware Enable EVC
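If you want to confirm the cluster's current baseline before making the change, a quick PowerCLI check (a sketch, assuming an existing Connect-VIServer session to vCenter) is:

Get-Cluster | Select-Object Name, EVCMode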

Reference: Enhanced vMotion Compatibility (EVC) processor support
