Category Archives: Virtualization

Poor Man’s vCPU & vRAM Right Size Recommendation Tool

VMware vCenter Operations Management Suite can be expensive. If you are like me and there is no budget for vCOPs, this script will give you vCPU & vRAM recommendations based on past virtual machine usage. The script connects to your vCenter, grabs historical performance data and provides recommendations that were designed around two vKernel whitepapers.


The script is simple to use, requiring only the vCenter parameter to run with all defaults:

PoorMansRecommendations.ps1 -vCenter site1.local.domain


Specifying additional authentication information and grabbing 60 days of past performance instead of the default 30 days:

PoorMansRecommendations.ps1 -vCenter site1.local.domain -Username fred -Password root -PastDays 60


Specifying more samples for accuracy and using a larger ‘building block’ for memory recommendations:

PoorMansRecommendations.ps1 -vCenter site1.local.domain -PastDays 60 -MaxSamples 25000 -MemoryBuildingBlockMB 1024


When running the script interactively, a progress bar is displayed as it calculates recommendations per virtual machine:
Poor Man's Right Sizing

The results:

Poor Man's Recommendations Results

This should only be used as guidance: a point of reference, a conversation starter or a rough estimate. Every environment and workload is unique, so please use your own judgment along with this data to arrive at a solution that is right for your environment.
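To give a sense of what a sizing heuristic like this looks like, here is a minimal sketch. This is an illustration only, not the script's actual math: it assumes sizing to a high percentile of historical usage (so rare spikes do not drive the result) and rounding memory up to the building block implied by the -MemoryBuildingBlockMB parameter. The function name and percentile choice are my own.

```python
import math

def recommend(cpu_usage_pct, mem_usage_mb, vcpus, block_mb=512):
    """Illustrative right-sizing heuristic (hypothetical, not the script's exact math).

    cpu_usage_pct: historical per-sample CPU usage, as a percent of allocated vCPUs
    mem_usage_mb:  historical per-sample active memory, in MB
    vcpus:         currently allocated vCPU count
    block_mb:      memory 'building block' size, per -MemoryBuildingBlockMB
    """
    # Size to roughly the 95th percentile of the samples.
    def pct95(samples):
        s = sorted(samples)
        return s[min(len(s) - 1, int(0.95 * len(s)))]

    # vCPU: scale the allocated vCPUs by near-peak utilization, minimum of 1.
    rec_vcpus = max(1, math.ceil(vcpus * pct95(cpu_usage_pct) / 100))

    # vRAM: round the near-peak memory demand up to the next building block.
    rec_mem_mb = math.ceil(pct95(mem_usage_mb) / block_mb) * block_mb
    return rec_vcpus, rec_mem_mb
```

For example, a 4-vCPU VM whose CPU usage peaks around 35% would be recommended down to 2 vCPUs, and a peak active memory of 1500 MB would round up to 1536 MB with the default 512 MB block.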

Download the script: PoorMansRecommendations.ps1

Thanks for looking. Please leave any questions or comments below and have a great day!


vCenter 5.1 Single Sign-on (SSO) Unable to expose the remote JMX registry. Port value out of range: -1

VMware vCenter 5.1 Single Sign-On posed many problems from its introduction until VMware replaced it with the 5.5 version of Single Sign-On. If you are still required to use vCenter 5.1's Single Sign-On server and encounter the errors “Unable to expose the remote JMX registry” or “Port value out of range: -1”, the resolution is simple, but let's first confirm this is the issue by analyzing the catalina log.

The following is an example from C:\Program Files\VMware\Infrastructure\SSOServer\logs\catalina.2013-09-02.log.

02-Sep-2013 00:38:51.903 INFO [WrapperSimpleAppMain]<init> tc Runtime property decoder using memory-based key
02-Sep-2013 00:38:52.854 INFO [WrapperSimpleAppMain]<init> tcServer Runtime property decoder has been initialized in 960 ms
02-Sep-2013 00:38:56.364 INFO [WrapperSimpleAppMain] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-bio-7445"]
02-Sep-2013 00:38:56.396 INFO [WrapperSimpleAppMain] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-bio-7444"]
02-Sep-2013 00:38:56.396 INFO [WrapperSimpleAppMain] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-bio-7080"]
02-Sep-2013 00:38:56.396 INFO [WrapperSimpleAppMain] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["ajp-bio-7009"]
02-Sep-2013 00:38:57.784 SEVERE [WrapperSimpleAppMain] com.springsource.tcserver.serviceability.rmi.JmxSocketListener.init Unable to expose the remote JMX registry.
 java.lang.IllegalArgumentException: Port value out of range: -1
	... debug junk ...

In C:\Program, towards the bottom, you will find the base.jmx.port property. Change base.jmx.port to equal 6969. The default value of -1 disables remote JMX, but it is also what causes the SEVERE errors in the Single Sign-On (SSO) log files.
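After the change, the edited property would look something like this (a sketch of the relevant line only; the surrounding contents of the file will differ):

```properties
# JMX listener port for the tc Server runtime used by SSO.
# The default of -1 disables remote JMX but triggers the
# "Port value out of range: -1" SEVERE error at startup.
base.jmx.port=6969
```

Restart the vCenter Single Sign-On service afterwards so the new port takes effect.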


Further detail about the base.jmx.port property can be found in the tc Server documentation.


vSA vSphere Storage Appliance Performance Benchmark Test

The article below goes in depth on my experience with VMware vSA performance benchmark testing. I have tried to be as detailed as possible to give you a complete picture of my findings. I believe there is a space where storage virtualization may thrive, but after my recent experience with the VMware vSA product, I am less than satisfied with the results, the manageability and, most of all, the performance. I believe storage virtualization needs a few more years to mature before it can truly be considered a serious candidate for small and remote office scenarios. This statement holds true for other two/three-node storage virtualization technologies, including FalconStor's storage virtualization.

VMware Version Information

VMware vCenter Server 5.1.0, 947673
VMware vStorage Appliance 5.1.3, 1090545
VMware ESXi 5.1 U1, HP OEM Bundle, 1065491 (VMware-ESXi-5.1.0-Update1-1065491-HP-5.50.26.iso)

HP ProLiant DL385 G2 Hardware Configuration
– 4 CPUs x 2.6 GHz
– Dual-Core AMD Opteron Processor 2218
– AMD Opteron Generation EVC Mode
– HP Smart Array P400, 512MB Cache, 25% Read / 75% Write
– RAID-5, 8x 72 GB 10K RPM Hard Drives
– HP Service Pack 02.2013 Firmware

vStorage Appliance Configuration
– 2 Node Cluster
– Eager Zero Full Format
– VMware Best Practices

IOZone Virtual Machine Configuration
– Oracle Linux 6.4 x86_64
– 2 vCPU
– 1 GB Memory
– 20 GB Disk, Thick Eager Zero Provisioned
– VMware Tool (build-1065307)

IOZone Test Parameters
/usr/bin/iozone -a -s 5G -o

-a   Used to select full automatic mode. Produces output that covers all tested file operations for record sizes of 4k to 16M for file sizes of 64k to 512M.

-s #   Used to specify the size, in Kbytes, of the file to test. One may also specify -s #k (size in Kbytes) or -s #m (size in Mbytes) or -s #g (size in Gbytes).

-o   Writes are synchronously written to disk. (O_SYNC). Iozone will open the files with the O_SYNC flag. This forces all writes to the file to go completely to disk before returning to the benchmark.

VMware ESXi/vSA Network Configuration

VMware vSA Architecture

IOZone Performance Benchmark Results

vSA Read Graph

vSA Stride Read Graph

vSA Random Read Graph

vSA Backward Read Graph

vSA Fread Graph

vSA Write Graph

vSA Random Write Graph

vSA Record Rewrite Graph

vSA Fwrite Graph

Download RAW Excel Data


The vSA performed far worse than the native onboard storage controller, which was expected due to the additional layer of virtualization. I honestly expected better performance out of the 8-disk RAID-5, even without storage virtualization, since they were 10,000 RPM drives. On average, across all the tests, there is a 76.3% difference between the native storage and the virtualized storage! Wow! That is an expensive downgrade! I understand the test bed was not using the latest and greatest hardware, but disk performance is generally limited by the spinning platter. I would be really interested in seeing the difference using newer hardware.
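For reference, the per-test numbers behind an average like that can be computed along these lines. This is a sketch with made-up throughput values, not the actual benchmark data, and it reads "percent difference" as the percent drop relative to native throughput:

```python
def percent_drop(native_kbps, vsa_kbps):
    """Percent throughput lost going from native storage to the vSA for one test."""
    return (native_kbps - vsa_kbps) / native_kbps * 100

# Hypothetical (native KB/s, vSA KB/s) pairs for a few iozone tests.
pairs = [(120_000, 30_000), (90_000, 20_000), (60_000, 15_000)]

drops = [percent_drop(n, v) for n, v in pairs]
average_drop = sum(drops) / len(drops)  # averaged across all tests
```

With the numbers above, each test loses roughly 75-78% of its native throughput, and the average lands near 76% — the same kind of figure quoted for the real data set.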

I believe this only depicts a fraction of the entire picture: performance. There are other concerns that I have at the moment with storage virtualization, such as complexity and manageability. I found the complexity very frustrating while setting up the vSA; there are many design considerations and limitations with this particular storage virtualization solution, most of which were observed during the test trials. The vSA management is a Flash-based application which had its quirks and crashes as well. Crashes at the storage virtualization layer left me thinking that this would be a perfect recipe for data loss and/or corruption. In addition, a single instance could not manage multiple vSA deployments due to IP addressing restrictions, which was a must for the particular use case I was testing for.

For now, storage virtualization is not there yet, in my opinion, for any production use. It has a lot of room to grow, and I will certainly be interested in revisiting this subject down the road since I believe in the concept.

