Impressive Equallogic PS6000XV IOPS result
I just performed the test again three times and confirmed the following. This is with the default single worker only; IOmeter tests the VM's VMFS directly, with no MPIO direct mapping to the EQL array. The VM is hardware version 7, the disk controller is paravirtualized (PVSCSI) and the NIC is VMXNET3.
SERVER TYPE: VM on ESX 4.1 with EQL MEM Plugin, VAAI enabled with Storage Hardware Acceleration
CPU TYPE / NUMBER: vCPU / 1
HOST TYPE: Dell PE R710, 96GB RAM; 2 x Xeon 5650, 2.66 GHz, 12 cores total
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Equallogic PS6000XV x 1 (15K) / 14+2 600GB disks / RAID10 / 500GB volume, 1MB block size
SAN TYPE / HBAs : ESX Software iSCSI, Broadcom 5709C TOE+iSCSI Offload NIC
##################################################################################
TEST NAME——————-Av. Resp. Time ms——Av. IOs/sec——-Av. MB/sec——
##################################################################################
Max Throughput-100%Read……4.1913………13934.42………435.45
RealLife-60%Rand-65%Read……13.4110………4051.49………31.65
Max Throughput-50%Read………5.5166………10240.39………320.01
Random-8k-70%Read……………14.1525………3915.15………28.95
EXCEPTIONS: CPU Util. 67.82, 38.12, 56.80, 40.2158%;
##################################################################################
RealLife-60%Rand-65%Read at 4051 IOPS is really impressive for a single array with 14 x 15K RPM spindles!
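A quick way to sanity-check a table like this: since MB/s = IOPS x transfer size, you can back out the implied transfer size of each access spec. A minimal sketch in Python, using the figures from the table above (the expected block sizes, 32KB for the Max Throughput specs and 8KB for the random ones, are the usual community-thread IOmeter settings):

```python
# Back out the implied transfer size per test: KB per I/O = MB/s * 1024 / IOPS.
results = [
    # (test name, measured avg IOPS, reported avg MB/s)
    ("Max Throughput-100%Read",  13934.42, 435.45),
    ("RealLife-60%Rand-65%Read",  4051.49,  31.65),
    ("Max Throughput-50%Read",   10240.39, 320.01),
    ("Random-8k-70%Read",         3915.15,  28.95),
]
for name, iops, mbs in results:
    kb_per_io = mbs * 1024 / iops
    print(f"{name}: ~{kb_per_io:.1f} KB per I/O")
```

The Max Throughput lines come out at ~32KB and the random lines near ~8KB, which is what the usual IOmeter access specs would produce.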
I think you should probably try a larger test file than the default 4GB one; it seems you're hitting cache a lot.
Thanks for your comment, I will test it again later with 8GB.
FYI, the EQL 6000XV has 2GB of cache on each active controller, but still, I will give it another try.
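To illustrate the cache-hit concern, here is a toy Python model. Assuming uniform random access over the test file (a deliberate simplification, not an EQL figure), the fraction of random reads that a 2GB controller cache can possibly serve shrinks as the file grows:

```python
# Upper bound on controller-cache hit rate under uniform random access.
CACHE_GB = 2.0  # per active controller, per the thread above
for file_gb in (4, 8, 16):
    max_hit_rate = min(1.0, CACHE_GB / file_gb)
    print(f"{file_gb}GB test file: at most ~{max_hit_rate:.0%} cache hits")
```

With the default 4GB file, up to half of the random reads could in principle be cache hits, which is why an 8GB or larger file gives a more disk-bound number.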
Yeah, I know mate, I have 4 x 6000XVs myself and a 6000E.
I've tested with MEM and vSphere 4.1 on a Dell R900, with the standard 4GB file, and I'm not seeing the same results you are.
I will post my numbers when I've run the final test in about an hour or two.
OK, done.
Mind that I am running RAID50; unfortunately I cannot change our production environment to RAID10.
MEM seems to deliver a lot of performance, and MUCH better response times!!
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS With MEM (2 NICs) 3 x PS6000XV
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: VM
CPU TYPE / NUMBER: VCPU / 2
HOST TYPE: Dell R900, 8GB RAM; 4 x 4-core, 2200 MHz
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EQL PS6000XV x 3 / 3×14+2 Disks / R50
##################################################################################
TEST NAME——————-Av. Resp. Time ms——Av. IOs/sec——-Av. MB/sec——
##################################################################################
Max Throughput-100%Read……..9.33……….6200………193.92
RealLife-60%Rand-65%Read……5.05……….6017………47.01
Max Throughput-50%Read……….6.67……….8909………278.41
Random-8k-70%Read……………..5.14……….6191………48.37
EXCEPTIONS: CPU Util. 30, 39, 26, 38%;
##################################################################################
Thanks for your feedback.
My Comments:
1. The second line is the most important part, especially the IOPS number (Av. MB/sec doesn't matter in this case), as it simulates real-world VM loading (i.e., mostly random I/O):
RealLife-60%Rand-65%Read……5.05……….6017………47.01
Your numbers for Av. Resp. Time and Av. IOs/sec on R50 aren't bad at all with 3 x PS6000XV. Oh, one more thing: did you really see a linear performance gain when you added the 2nd and 3rd box?
2. Max Throughput-100%Read: are you using only 2 NICs for iSCSI? It seems to top out around 200MB/s, so if you have 4 NICs, you can definitely boost the whole thing to over 13,000 IOPS (see the throughput sketch after this list).
Btw, if you change the testing client to an R710 with a 5600-series Xeon, or an R910 with the latest Xeons, your performance will get even higher; I found the testing client also plays a big role in the final result.
3. Yes, MEM and VAAI are really lovely!!! (I have another blog entry; search for VAAI offload.) I am also waiting for EQL to release their thin-provisioning reclaim feature in the coming FW 5.x; that will really help in our environment.
4. I assume your testing machine is a VM and you attached directly to the EQL SAN from within that VM, right? In my case it's different: I simply test the VM's own VMFS, which is on the EQL SAN.
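On point 2, a rough back-of-the-envelope for the GbE ceiling (the efficiency factor is my assumption, not a measured value):

```python
# Rough sequential-read ceiling for software iSCSI over N GbE ports.
GBE_MBS = 125.0        # 1 Gb/s = 125 MB/s raw line rate
EFFICIENCY = 0.90      # assumed TCP/IP + iSCSI protocol overhead factor
BLOCK_KB = 32          # Max Throughput specs use ~32KB transfers

for nics in (2, 4):
    mbs = nics * GBE_MBS * EFFICIENCY
    iops = mbs * 1024 / BLOCK_KB
    print(f"{nics} iSCSI NICs: ~{mbs:.0f} MB/s, ~{iops:.0f} IOPS at {BLOCK_KB}KB")
```

Two ports cap out around 225 MB/s, consistent with the ~194 MB/s result above; four ports lift the ceiling to roughly 14,000 IOPS at 32KB, in line with the 13,000+ IOPS figure.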
1) Unfortunately it's a bit late to test that in its entirety, but I've seen close to double the performance when going from 1 to 2 units, at least.
2) Yeah, the test server only has 4 NICs: 1 for vMotion, 1 for Ethernet and 2 for iSCSI.
3) Yup, gonna be cool!
4) It's run on a HDD mounted directly in a VMDK.
Will a 6000E with 14+2 1TB 7200 RPM drives work for our office? We have 100 employees, SQL Server 2008 used only for our BlackBerry 5.02 HA server, two Exchange 2010 servers, plus a Terminal Server, a File Server and two Domain Controllers. EQL says there is no need to get the 6000X/XV; they say 7200 RPM will be fine in R50. We have two dedicated redundant Cisco 4948 switches for the EQL device. Knowing we have 12 VMs… I realize a DC probably uses 10-20 IOPS, but if I just say all our VMs need 40 IOPS each, then we only need 480 IOPS, and 1000 IOPS if we double our VM count? Just trying to make the E vs X/XV decision.
Thanks in advance everyone.
Exchange and File Server can use a lot of IOPS due to random I/O, and the cost difference between the E and the X/XV is quite small (less than 10-15%). If you are aiming for performance, go for the XV; if your environment requires a lot of space and performance is not the top factor, then the E in R50 is also fine.
I would say that for the same RAID10, one XV equals roughly 3 x E and 1.5 x X.
Finally, 10 to 20 IOPS may be right on average, but backup windows can easily use up to 100-150 IOPS each (see the sizing sketch at the end of this reply).
We did go for the XV in R10; if more space is required, we can simply expand it online to R50 (i.e., roughly double the space).
Hope this helps.
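For readers weighing the same E vs X/XV question, here is a minimal sizing sketch. The per-drive IOPS figures and RAID write penalties are common rules of thumb, not EqualLogic specifications:

```python
# Back-of-the-envelope host-visible IOPS for one 14-spindle member.
DRIVE_IOPS = {"7200rpm": 80, "10K": 140, "15K": 180}   # rule-of-thumb per-drive IOPS
WRITE_PENALTY = {"RAID10": 2, "RAID50": 4}             # back-end I/Os per host write

def host_iops(drive: str, spindles: int, raid: str, read_pct: float) -> float:
    """Host-visible random IOPS for a given read fraction (0..1)."""
    raw = DRIVE_IOPS[drive] * spindles
    return raw / (read_pct + (1 - read_pct) * WRITE_PENALTY[raid])

print(f"6000E  / R50: ~{host_iops('7200rpm', 14, 'RAID50', 0.70):.0f} IOPS")
print(f"6000XV / R10: ~{host_iops('15K',     14, 'RAID10', 0.70):.0f} IOPS")
```

At a 70% read mix this puts the E in R50 around 590 IOPS, right between the 480 and 1000 IOPS estimates above, while the XV in R10 lands near 1,900; that gap is roughly the 3x rule of thumb mentioned earlier.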