Category: Network & Server (網絡及服務器)

Equallogic Bad Disk Kills All Your VMs and Never Alerts You!

By admin, August 6, 2011 9:27 pm

I noticed that both of the latest EqualLogic firmware releases, v5.0.5 and v5.0.7, STILL contain a bug where a failed disk is not indicated in Group Manager. What's worse, you will soon discover all your VMs starting to BSOD or kernel panic and assume it's related to your switch, VMware, or the ESX host, until you find out it is actually caused by the EqualLogic firmware. The bug causes volume corruption and excessive disk latency, eventually killing all your VMs; both scenarios have previously been reported by actual EQL users on VMTN.

This is VERY SERIOUS! Users who bought EqualLogic are supposed to get 99.999% reliability (what they call Five 9s), but a failed disk that doesn't show up in the error window, and doesn't even flash the amber light to alert you, is a very scary experience and plain stupid!

I read on VMTN that one unlucky EQL user had no way to find out which drive had failed; he finally asked EQL support to WebEx into his console, and they spent hours figuring out which one it was.

Simply NOT ACCEPTABLE for this kind of high-end SAN!

Yes, that's why I am still holding off my upgrade to v5.0.7 (I'm currently at v5.0.2). A Dell consultant once told me: "DON'T UPGRADE ANY OF YOUR FIRMWARE if you are not having any problem." I guess he meant something by that, and now I fully understand!

VMware Finally Revised the License Under Heavy Pressure!

By admin, August 4, 2011 10:34 am

After the overwhelming 99.9% negative comments on VMTN, VMware finally surrendered, sort of! Why did you do it in the first place? A stupid, $$$-driven decision! You have deeply hurt the hearts of many loyal followers this time. Now I finally see why a monopoly is a bad thing!

Finally the ESXi Free Edition is able to use 32GB, compared to 8GB previously. I also found that VSPP actually encourages use of the reserved-RAM model that allows memory over-commitment, but doesn't that contradict what the vRAM model is supposed to do for enterprise customers? It seems VMware has confused everyone, including itself.

As you are probably aware, when VMware announced our new Cloud Infrastructure Suite, we also introduced changes to the vSphere licensing based on a consumption and value-based model rather than on physical components and capacity.

While we believe this model is the right long-term strategy as we move into the cloud-computing era, the announcement generated a great deal of passionate feedback from partners and customers that led us to examine the impact of the new licensing model on every possible use case and scenario – and equally importantly, taking into consideration our partners’ and customers’ desire to broadly standardize on VMware. We’ve listened to your ideas and advice, and we are taking action with the following three updates to the vSphere 5 licensing model:

•We’ve increased vRAM entitlements for all vSphere editions, including the doubling of the entitlements for vSphere Enterprise and Enterprise Plus. Below is a comparison of the previously announced and the new vSphere 5 vRAM entitlements per vSphere edition:

[Image: table comparing the previously announced and revised vSphere 5 vRAM entitlements per edition]

•We’ve capped the amount of vRAM we count in any given VM, so that no VM, not even the “monster” 1TB vRAM VM, would cost more than one vSphere Enterprise Plus license. This change also aligns with our goal to make vSphere 5 the best platform for running Tier 1 applications.

•We’ve adjusted our model to be much more flexible around transient workloads and short-term spikes that are typical, for example, in test and development environments. We will now calculate a 12-month average of consumed vRAM rather than tracking the high water mark of vRAM.

Finally, we introduced the vSphere Desktop Edition to address vSphere licensing in a desktop environment. vSphere Desktop is licensed on the total number of Powered On Desktop Virtual Machines, allowing customers to purchase vSphere for the VDI use case on a per-user basis. Our price books are being updated and will be available on Partner Central shortly.
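To make the new 12-month averaging rule concrete, here is a small, purely illustrative Python sketch (my own made-up monthly numbers and a simplified model that ignores the per-CPU minimum, using the revised 96GB Enterprise Plus entitlement) comparing the licenses a high-water-mark model would demand versus the 12-month-average model:

```python
# Hypothetical illustration of the licensing change described above:
# compliance is judged on a 12-month rolling average of consumed vRAM
# rather than on the peak ("high water mark"). Numbers are made up.

def licenses_needed(vram_gb, entitlement_gb_per_license):
    """Smallest number of licenses whose pooled entitlement covers vram_gb."""
    return -(-vram_gb // entitlement_gb_per_license)  # ceiling division

# Monthly consumed vRAM (GB) for a pool with a short test/dev spike in month 6.
monthly_vram = [150, 155, 160, 158, 162, 420, 165, 160, 158, 162, 166, 170]

high_water_mark = max(monthly_vram)
twelve_month_avg = sum(monthly_vram) / len(monthly_vram)

ENTITLEMENT = 96  # GB of vRAM per Enterprise Plus license after the revision

print("High-water-mark model :", licenses_needed(high_water_mark, ENTITLEMENT), "licenses")
print("12-month-average model:", licenses_needed(round(twelve_month_avg), ENTITLEMENT), "licenses")
```

With these made-up numbers the averaging model needs 2 licenses instead of the 5 a peak-based model would have demanded, which is exactly why the change matters for spiky test and development workloads.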

Update Sep-14

There is an interesting article about how enterprises in Taiwan responded to the vRAM change; it seems 99% are still against it.

Hot-add: A Different Term in Veeam B&R

By admin, August 4, 2011 9:06 am

I have always wondered what Hot-add means in Veeam's terminology, as it's mentioned in their forum all the time. Now I understand after reading the blog post, but then again I never use virtual appliance mode, as it's meant for small environments, say if you have 30 or fewer VMs to back up.

Hot-add – This is frequently used when referring to virtual appliance mode backups within Veeam Backup & Replication. Basically, once the virtual machine backup is underway, the associated VMDK files are added dynamically to the Veeam backup server. This dynamic procedure can only happen when the virtual machine has a snapshot in place, and the VMDK files of the source virtual machine are disconnected from the Veeam backup server once the backup steps are completed. Further, this option is only available when Veeam Backup & Replication is installed on a virtual machine. Veeam Backup & Replication has supported Hot-add mode since version 4, released in 2009.

Dell’s Customized ESXi ISO

By admin, July 28, 2011 5:38 pm

Recently, I was involved in a project to deploy the latest ESXi 4.1 at a client site, where all of the ESX hosts are Dell PowerEdge based.

We know there are two variations of ESXi images: one is the VMware version you normally download from VMware's web site, and the other is customized by a server vendor such as IBM, HP or Dell. The main difference is that the customized ones include vendor-specific IPMI/hardware-monitoring components; in Dell's case, they have added drivers so that the OpenManage SNMP agent can recognize and use the hardware. So it's common sense to use the vendor-specific ISO if you happen to use their servers.

However, when I looked into Dell's FTP site, I found two completely different sets of images:

VMware-RecoveryCD-4.1.0.update1-348481.x86_64-Dell_Customized.iso
VMware-RecoveryCD-4.1.0.update1-348481.x86_64-Dell_Customized_A01.iso
VMware-RecoveryCD-4.1.0.update1-348481.x86_64-Dell_Customized_A02.iso
VMware-RecoveryCD-4.1.0.update1-348481.x86_64-Dell_Customized_A03.iso
VMware-RecoveryCD-4.1.0.update1-348481.x86_64-Dell_Customized_A04.iso

VMware-VMvisor-Installer-4.1.0.update1-348481.x86_64-Dell_Customized.iso
VMware-VMvisor-Installer-4.1.0.update1-348481.x86_64-Dell_Customized_A01.iso
VMware-VMvisor-Installer-4.1.0.update1-348481.x86_64-Dell_Customized_A02.iso
VMware-VMvisor-Installer-4.1.0.update1-348481.x86_64-Dell_Customized_A03.iso
VMware-VMvisor-Installer-4.1.0.update1-348481.x86_64-Dell_Customized_A04.iso

The answer: VMware-VMvisor-Installer is to be installed on local storage (such as a hard disk or SAN), while VMware-RecoveryCD is for internal embedded devices such as an SD card or USB key.

In addition, Dell has created more confusion, as there are different revisions within each distribution; for VMware-VMvisor-Installer-4.1.0.update1-348481.x86_64-Dell_Customized, there are A01, A02, A03 and A04 versions.

Luckily, by carefully reviewing the contents of each download, you will soon discover the shortcut: always use the latest revision, A04 in this case, because it contains all the previous fixes from A01, A02 and A03. The sketch below shows the idea.
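Just to illustrate the rule, here is a small Python sketch (my own throwaway script, not a Dell tool) that picks the latest A-revision from a listing like the one above:

```python
# A small sketch (not a Dell tool) that applies the "always take the latest
# A0x revision" rule to a directory listing like the one above.
import re

isos = [
    "VMware-VMvisor-Installer-4.1.0.update1-348481.x86_64-Dell_Customized.iso",
    "VMware-VMvisor-Installer-4.1.0.update1-348481.x86_64-Dell_Customized_A01.iso",
    "VMware-VMvisor-Installer-4.1.0.update1-348481.x86_64-Dell_Customized_A02.iso",
    "VMware-VMvisor-Installer-4.1.0.update1-348481.x86_64-Dell_Customized_A03.iso",
    "VMware-VMvisor-Installer-4.1.0.update1-348481.x86_64-Dell_Customized_A04.iso",
]

def revision(name):
    """Return the numeric A-revision, treating a missing suffix as revision 0."""
    m = re.search(r"_A(\d+)\.iso$", name)
    return int(m.group(1)) if m else 0

latest = max(isos, key=revision)
print(latest)  # ..._A04.iso, the one that already contains the A01-A03 fixes
```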

Hope this helps those who have been scratching their heads recently.

Tiered Enterprise Storage with SSD

By admin, July 20, 2011 4:26 pm

I found an excellent article on ITHome (Taiwan) regarding the latest trend of tiered enterprise storage with SSD, but it's in Chinese, so you may need Google Translate to read it.


Update:

Aug-17-2011

There is a new topic about distributed SSD cache in enterprise SANs and servers; new SSD caching products are coming from EMC and NetApp, but where is EqualLogic in this area?

Also, Dell just released its SSD caching solution (CacheCade) for the H700/H800 RAID cards with 1GB NVRAM, but it's quite expensive and not really worth the money (one 149GB Solid State Drive SAS 3Gbps 2.5in HotPlug drive in a 3.5in hybrid carrier is listed at US$4,599.00).

CacheCade is used to improve random read performance of the Hard Disk Drive (HDD) based Virtual Disks. A solid-state drive (SSD) is a data storage device that uses solid-state memory to store persistent data. SSDs significantly increase the I/O performance (IOPS) and/or write speed in Mbps from a storage device. With Dell Storage Controllers, you can create a CacheCade using SSDs. The CacheCade is then used for better performance of the storage I/O operations. Use either Serial Attached SCSI (SAS) or Serial Advanced Technology Attachment (SATA) SSDs to create a CacheCade.

Create a CacheCade with SSDs in the following scenarios:

Maximum application performance—Create a CacheCade using SSDs to achieve higher performance without wasted capacity.

Maximum application performance and higher capacity—Create a CacheCade using SSDs to balance the capacity of the CacheCade with high performance SSDs.

Higher capacity—If you do not have empty slots for additional HDDs, use SSDs and create a CacheCade. This reduces the number of HDDs required and increases application performance.

The CacheCade feature has the following restrictions:

Only SSDs with the proper Dell identifiers can be used to create a CacheCade.

If you create a CacheCade using SSDs, the SSD properties are still retained. At a later point of time, you can use the SSD to create virtual disks.

A CacheCade can contain either SAS drives or SATA drives but not both.

Each SSD in the CacheCade does not have to be of the same size. The CacheCade size is automatically calculated as follows (a quick worked example follows the notes below):
CacheCade size = capacity of the smallest SSD * the number of SSDs

The unused portion of an SSD is wasted and cannot be used as an additional CacheCade or an SSD-based virtual disk.

The maximum total cache pool with CacheCade is 512 GB. If you create a CacheCade that is larger than 512 GB, the storage controller still uses only 512 GB.

The CacheCade is supported only on Dell PERC H700 and H800 controllers with 1 GB NVRAM and firmware version 7.2 or later.

In a storage enclosure, the total number of logical devices including virtual disks and CacheCade(s) cannot exceed 64.

NOTE: The CacheCade feature is available from first half of calendar year 2011.
NOTE: In order to use CacheCade for the virtual disk, the Write and Read policy of the HDD based virtual disk must be set to Write Back or Force Write Back and read policy must be set to Read Ahead or Adaptive Read Ahead.
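A quick worked example of the sizing rules above (my own Python sketch, not anything from Dell's documentation):

```python
# Worked example of the sizing rules quoted above (my own sketch, not Dell code):
# usable CacheCade size = smallest SSD * number of SSDs, and the controller
# only ever uses up to 512 GB of cache pool.

CACHE_POOL_LIMIT_GB = 512  # per the H700/H800 documentation above

def cachecade_size_gb(ssd_sizes_gb):
    """Size of one CacheCade built from the given SSDs (GB)."""
    return min(ssd_sizes_gb) * len(ssd_sizes_gb)

ssds = [149, 149, 100]          # mixed sizes: the extra 49 GB on two drives is wasted
size = cachecade_size_gb(ssds)  # 100 * 3 = 300 GB
usable = min(size, CACHE_POOL_LIMIT_GB)
wasted = sum(ssds) - size

print(f"CacheCade size: {size} GB, used by controller: {usable} GB, wasted: {wasted} GB")
```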

  

Aug-24-2011

I've found out that only the H700/H800 RAID cards with 1GB NVRAM support FastPath and CacheCade. Unlike the retail LSI product, which requires a hardware or software key, Dell's LSI OEM H700/H800 provides these features free of charge!!!

[Screenshot: LSI MegaRAID Storage Manager]

In addition, if you are tired of using OpenManage to manage your storage, you can try LSI's MegaRAID Storage Manager (aka MSM), which is a client-based GUI tool similar to the old Array Manager.

However, this still leaves one big question: even if you have an H800 with 1GB NVCache, it only supports the SAS interface on the PowerVault MD1200/MD1220. Where can we find a cheap compatible SAS SSD as an alternative? (OCZ still costs a lot.) If you know, please let me know.

For those of you who are interested in enabling the H700 with a cheap SATA SSD for CacheCade and FastPath, here is the How To link, but it's in Chinese.

 

Aug-27-2011

CacheCade software enables SSDs to be configured as a secondary tier of cache to maximize transactional I/O performance. Adding one SSD can substantially improve IOPS performance, as opposed to adding more hard drives to an array.

By utilizing SSDs in front of hard disk drives (HDDs) to create a high-performance controller cache of up to 512GB, CacheCade allows very large data sets to be present in cache and delivers up to a 50X performance improvement in read-intensive applications, such as file, Web, OLTP and database servers.

The solution is designed to accelerate the I/O performance of HDD-based arrays while minimizing investments in SSD technology.


CacheCade Characteristics
The following list contains various characteristics of CacheCade technology:

 

- A CacheCade virtual disk cannot be created on a controller where the CacheCade feature is disabled. Only controllers with a 1GB NVDIMM will have the CacheCade feature enabled.

- Only SSDs that are non-foreign and in Unconfigured Good state can be used to create a CacheCade virtual disk.

- Rotational drives cannot be used to create a CacheCade virtual disk.

- Multiple CacheCade virtual disks can be created on a controller, although there is no benefit in doing so.

- The total size of all the CacheCade virtual disks is combined to form a single secondary cache pool. However, the maximum size of the pool is limited to 512GB.

- Virtual disks containing secured Self-Encrypting Disks (SEDs) or SSDs will not be cached by CacheCade virtual disks.

- CacheCade is a read cache only. Write operations will not be cached by CacheCade.

- IOs equal to or larger than 64KB are not cached by CacheCade virtual disks.

- A foreign import of a CacheCade virtual disk on a controller with the CacheCade feature disabled will fail.

- A successfully imported CacheCade VD will immediately start caching.

- CacheCade VDs are based on R0. As such the size of the VD will be the number of contributing drives x the size of the smallest contributing drive.

- In order to use CacheCade for a virtual disk, the write policy of the HDD-based virtual disk must be set to Write Back or Force Write Back, and the read policy must be set to Read Ahead or Adaptive Read Ahead.

- CacheCade has NO interaction with the battery learn cycle or any battery processes. The battery learn behavior operates completely independently of CacheCade. However, during the battery learn cycle, when the controller switches to Write-Through mode due to low battery power, CacheCade will be disabled.

NOTE: Any processes that may force the controller into Write-Through mode (such as RAID Level Migration and Online Capacity Expansion) will disable CacheCade.
 

Reconfiguration of CacheCade Virtual Disks

A CacheCade virtual disk that is made up of more than one SSD will automatically be reconfigured upon a removal or failure of a member SSD.

The virtual disk will retain an Optimal state and will adjust its size to reflect the remaining number of member disks.

If auto-rebuild is enabled on the controller, when a previously removed SSD is inserted back into the system or replaced with a new compatible SSD, the CacheCade will once again be automatically reconfigured and will adjust its size to reflect the addition of the member SSD.

 

Sep-4-2011

LSI Corp. today updated its MegaRAID CacheCade software to support write and read solid state drive (SSD) caching via a controller, providing faster access to more frequently used data. 

LSI MegaRAID CacheCade Pro 2.0 speeds application I/O performance on hard drives by using SSDs to cache frequently accessed data. The software is designed for high-I/O, transaction-based applications such as Web 2.0, email messaging, high-performance computing and financials. The caching software works on LSI MegaRAID 9260, 9261 and 9280 series 6 Gbps SATA and SAS controller cards.

LSI delivered the first version of CacheCade software about a year ago with read-only SSD caching. MegaRAID CacheCade Pro 2.0 is priced at $270 and is available to distributors, system integrators and value added resellers. LSI’s CacheCade partners include Dell, which in May began selling the software with Dell PowerEdge RAID Controller (PERC) H700 and H800 cards.

“What we want to do is close the gap between powerful host processors and relatively slow hard disk drives,” said Scott Cleland, LSI’s product marketing manager for the channel. “Hosts can take I/O really fast, but the problem is traditional hard disk drives can’t keep up.”

LSI claims the software is the industry’s first SSD technology to offer both read and write caching on SSDs via a controller.

LSI lets users upgrade a server or array by plugging in a controller card with CacheCade. Cleland said users can place hot-swappable SSDs into server drive slots and use LSI’s MegaRAID Storage Manager to create CacheCade pools. The software will automatically place more frequently accessed data to cache pools.

“In traditional SSD cache and HDD [hard disk drive] configurations, the HDDs and SSDs are exposed to the host,” Cleland said. “You have to have knowledge of the operating system, file system and application. With CacheCade, the SSDs are not exposed to the host. The controller is doing the caching on the SSDs. All the I/O traffic is going to the controller.”

SSD analyst Jim Handy of Objective Analysis said it took time for LSI to build in the write caching capability because “write cache is phenomenally complicated.”

With read-only cache, data changes are copied in the cache and updated on the hard drive at the same time. “If the processor wants to update the copy, then the copy in cache is invalid. It needs to get the updated version from the hard disk drive,” Handy said of read-only cache.

For write cache, the data is updated later on the hard drive to make sure the original is still updated when the copy is deleted from cache.
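Here is a toy Python sketch of the difference Handy is describing (purely conceptual, nothing to do with LSI's actual implementation): a read-only (write-through) cache pushes every write straight to the HDD, while a read/write (write-back) cache absorbs the write and flushes it to the HDD later.

```python
# Toy illustration of read-only vs. read/write caching as described above.
# Purely conceptual; not LSI or Dell code.

class WriteThroughCache:
    """Read-only SSD cache: writes always go to the HDD immediately."""
    def __init__(self, hdd):
        self.hdd, self.cache = hdd, {}

    def read(self, block):
        if block not in self.cache:             # cache miss: fetch from HDD
            self.cache[block] = self.hdd[block]
        return self.cache[block]

    def write(self, block, data):
        self.hdd[block] = data                  # slow path on every write
        self.cache[block] = data                # keep the cached copy valid


class WriteBackCache:
    """Read/write SSD cache: writes land in cache and are flushed later."""
    def __init__(self, hdd):
        self.hdd, self.cache, self.dirty = hdd, {}, set()

    def read(self, block):
        if block not in self.cache:
            self.cache[block] = self.hdd[block]
        return self.cache[block]

    def write(self, block, data):
        self.cache[block] = data                # fast path: no HDD I/O yet
        self.dirty.add(block)

    def flush(self):
        for block in self.dirty:                # update the HDD later
            self.hdd[block] = self.cache[block]
        self.dirty.clear()


hdd = {0: "old"}
wb = WriteBackCache(hdd)
wb.write(0, "new")      # the HDD still holds "old" at this point
wb.flush()              # now the HDD copy is brought up to date
print(hdd[0])           # "new"
```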

LSI also has a MirrorCache feature, which prevents the loss of data if it is copied in cache and not yet updated on the hard drive.

Handy said read and write caching is faster than read-only caching.

“Some applications won’t benefit from [read and write caching],” Handy said. “They won’t notice it so much because they do way more reads than writes. For instance, software downloads are exclusively reads. Other applications, like OLTP [online transaction processing], use a 50-50 balance of reads and writes. In these applications, read-write is really important.”

 

Sep-5-2011

The SSD Review did excellent research in their latest article, “LSI MegaRAID CacheCade Pro 2.0 Review – Total Storage Acceleration Realized”.

vSphere Storage Appliance 1.0, Is It Really Necessary After All?

By admin, July 19, 2011 2:08 pm

A week ago, one of the latest products derived from vSphere 5 caught my attention: the vSphere Storage Appliance v1.0. Basically, it's a software SAN solution for the ESX shared-storage requirement. VMware vSphere Storage Appliance provides virtual shared storage volumes without the hardware.

VSA enables different key features depending on your vSphere edition:

Essentials Plus, Standard
•High Availability
•vMotion

Enterprise
•Fault Tolerance
•Distributed Resource Scheduler

Enterprise Plus
•Storage vMotion

And it offers Storage Protection
RAID 1 (mirroring) protection across nodes
RAID 10 protection within each node

Licensing
vSphere Storage Appliance is licensed on a per-instance basis (like vCenter Server)

Each VSA instance supports up to 3 nodes (i.e., maximum expandability is 3 ESX hosts).
At least two nodes need to be part of a VSA deployment.

Pros: There is only ONE: it doesn't require you to purchase an expensive SAN in order to use vMotion/DRS.

Cons: Too Many!!! Read on…

1. The license fee is USD5,995 per instance, with 40% off if bought with vSphere Essentials Plus. Again VMware wants all of you to purchase Essentials Plus, a $$$-driven price structure created by its fleet of "genius" MBAs. If your company has the money to purchase VSA, then I am pretty sure a proper SAN won't cost you an arm and a leg.

2. "Run vCenter separate from the VSA cluster for best protection." Why is that? These days the ultimate goal is to virtualize everything, even vCenter; this goes against the most fundamental rule of virtualization, just like vRAM asking you to purchase more servers with less RAM installed on each host!

3. You need additional disk space to enable RAID protection: VSA protects your data by mirroring it in multiple locations, which means your business data will require additional raw disk capacity. A good rule of thumb is to provision 4x the server-internal disk space you expect to use. (You're kidding me! RAID 10, then splitting the rest for the other node's mirror, leaves you ONLY 1/4 of the original storage; this is, again, not environmentally friendly. A quick worked example follows this list.) In VSA 1.0, disk capacity and the number of nodes cannot be changed after setup; that feature is planned for a future release.

4. Two VSA hosts can support up to 25 VMs, and three VSA hosts can support up to 35 VMs. This in particular renders VSA not worth spending the $$$ on; 3 nodes supporting only 35 VMs at most is hard to justify in terms of ROI.

5. Since it's NFS based, you can't use RDM or VAAI, which is bad news for those who run Exchange/SQL and are looking for performance. But then again, if you are after IOPS, you presumably have the $$$. Not to mention that NFS is well known for lower performance over an IP network compared to block-based protocols such as FC or iSCSI, and placing the shared storage on the ESX hosts themselves will inevitably reduce overall performance.
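To see why the rule of thumb in point 3 works out to roughly one quarter of the raw capacity, here is a back-of-the-envelope Python sketch (my own numbers, not VMware's sizing tool):

```python
# Rough VSA 1.0 capacity sketch: RAID 10 inside each node halves local capacity,
# and RAID 1 mirroring across nodes halves it again, leaving ~1/4 usable.

def vsa_usable_gb(raw_gb_per_node, nodes):
    per_node_after_raid10 = raw_gb_per_node / 2      # local RAID 10 mirror
    pooled = per_node_after_raid10 * nodes
    return pooled / 2                                 # cross-node RAID 1 mirror

raw, nodes = 2000, 3                                  # e.g. 3 hosts with 2 TB raw each
print(f"{raw * nodes} GB raw -> ~{vsa_usable_gb(raw, nodes):.0f} GB usable")
# 6000 GB raw -> ~1500 GB usable, i.e. about 25% of what you paid for
```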

It seems to me an immature, rushed-to-release product; using 400% more space is not a solution for the enterprise, it's a big waste and a second point against virtualization. It reminds me of NetApp's Network RAID software feature, which uses 2/3 of your storage for its N+1 RAID feature; well, it worked, but it just doesn't justify the cost after all.

Virtualization should save cost by fully utilizing CPU/RAM/space, but with the release of vSphere 5, it seems to me that VMware is trying its best to discourage everyone with the vRAM model as well as this VSA product. There is a hot debate on this topic on VMTN; 99.9% are against this "vmw screwed us" license change, and VMware's officials are hiding in the dark, afraid to respond with a valid point.

Nobody wants to purchase more servers with less RAM on each host these days; that uses more power and is absolutely not environmentally friendly. Now we see VMware becoming an anti-environmental corporation.

Finally, VSA uses host CPU and local storage IOPS, and it's not a model for future aggregate IOPS growth compared to a real SAN. After all, it's just a temporary product for SMBs, so why does VMware charge so much for it? In my own opinion, vSphere Storage Appliance 1.0 could be made obsolete, and it's unnecessary, as there are already free products such as Microsoft iSCSI Target, StarWind, and many similar products from the Linux world that provide shared storage for ESX.

PS. I just found out the free vSphere 5.0 ESXi version has a pathetic 8GB limit. Now I finally understand why a monopoly is a bad thing; I shall start to look into Xen and Hyper-V seriously for my clients.

Concept from the Movie Inception Becomes True with ESX: How Many More Layers Can You Create?

By admin, July 19, 2011 1:50 pm

Last time I tried a nested iSCSI VM setup (EQL > MS iSCSI Target > StarWind), but today I found something even crazier!

This is probably one of the best and most-commented blog posts regarding nested ESX and its unlimited possibilities.

The author even went so far as to migrate a running VM with vMotion from the physical ESX host to the virtual ESX host, as well as running Hyper-V within ESX.

Wow, why don't you install an additional ESX within that Hyper-V and see what happens when you migrate a running VM back to the original mother ESX, two layers back? :)

Equallogic PS Series Firmware Version V5.1 Released

By admin, July 17, 2011 4:54 pm

It's an Early Production Access release. Originally, I thought we would need VMware vSphere 5.0 in order to use these great features, but apparently it also works with vSphere 4.1 (wrong, see the update below). In addition, Dell is finally moving into its Fluid Data solution with the EqualLogic and Compellent products (i.e., moving hot data to SSD/15K SAS tiers automatically).

Two of the major improvements are:

Support for VMware Thin Provision Stunning

Version 5.1 of the PS Series Firmware supports VMware’s Thin Provision Stunning feature. When thinly-provisioned volumes containing VMware virtual machines approach their in-use warning threshold, the PS Series Group alerts vCenter that the volume is low on space. When the volume reaches the maximum in-use threshold, the PS Series Group does not take the volume offline, which would cause the virtual machines to crash. Instead, vCenter stuns (ie, suspend) the virtual machines on the volume to prevent them from crashing. When additional volume space is created, either by growing the volume or moving virtual machines to other volumes, the administrator can resume the virtual machines.
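In other words, the behaviour comes down to two thresholds. A minimal Python sketch of the flow described above (my own simplification with example threshold values, not EqualLogic firmware logic):

```python
# Conceptual sketch of the Thin Provision Stun flow described above
# (not EqualLogic firmware logic; the threshold values are just examples).

IN_USE_WARNING = 0.60   # group alerts vCenter that the volume is low on space
IN_USE_MAXIMUM = 0.90   # vCenter stuns (suspends) the VMs instead of the
                        # volume going offline and crashing them

def volume_action(used_gb, provisioned_gb):
    ratio = used_gb / provisioned_gb
    if ratio >= IN_USE_MAXIMUM:
        return "stun VMs on this volume until space is added"
    if ratio >= IN_USE_WARNING:
        return "raise low-space alert to vCenter"
    return "no action"

print(volume_action(460, 500))   # 92% used -> stun
print(volume_action(350, 500))   # 70% used -> alert
```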

Performance Load Balancing

In Version 5.1, improvements have been made to the load balancing capabilities in PS Series groups. This enhancement is designed to provide sub-volume performance load balancing and tiering in both heterogeneous and homogeneous storage pool configurations. The load balancer detects pool member arrays that are at, or near, overload, and shifts some of the workload to less-loaded arrays. To ensure that the load balancer responds to long-term trends rather than brief increases in activity, the operation takes place gradually, over a period of hours. The result is improved performance balancing in pools, especially in pools containing mixed drive or RAID configurations. Mixed pools experience the greatest benefit with workloads that show tiering and are regular in their operating behavior.

 

Fix List v5.1:

- A problem with an internal management process may disrupt in-progress management commands. This issue affects arrays running version 5.0.4 or 5.0.5 of the PS Series Firmware. In rare circumstances, on arrays running version 5.0.5 Firmware, this may also result in a control module becoming temporarily unresponsive.

- Unplanned control module failovers may occur in multi-member groups running in environments in which VMware ESX version 4.1 with VAAI, the Host Integration Tools for Microsoft v3.5, or Host Integration Tools for VMware 3.0 are used. (This is serious!)

- In some cases, a controller failover occurred because of a drive error during a RAID rebuild.

- In some cases, the background drive scanning process encountered an error during drive testing, and the drive was reported as "faulted" and left online when it should have been marked "failed" and removed from use. In rare cases, a failing drive in an array could not be correctly marked as failed; when this occurred, the system was unable to complete other I/O operations on group volumes until the drive was removed. This error affected PS3000, PS4000, PS5000X, PS5000XV, PS5500, PS6000, PS6010, PS6500, and PS6510 arrays running Version 5.0 of the PS Series Firmware. (They still haven't fixed this since v5.0.4! So the predictive failure feature doesn't work in reality.)

- A failure in a hot spare drive being used as the target in a drive mirroring operation could have resulted in the group member becoming unresponsive to user I/O.

- Connection load balancing could result in volume disconnects in VMware environments using the EqualLogic Multipath Extension Module. (You're KIDDING ME, right?)

 

Update Oct 26, 2011

I previously had the wrong impression that the VMware Thin Provision Stun option would also work with the existing ESX 4.1 version; to be clear, the vSphere version needs to be 5.0.

vSphere Storage APIs – Array Integration (VAAI) were first introduced with vSphere 4.1, enabling offload capabilities support for three primitives:

1. Full copy, enabling the storage array to make full copies of data within the array
2. Block zeroing, enabling the array to zero out large numbers of blocks
3. Hardware-assisted locking, providing an alternative mechanism to protect VMFS metadata

With vSphere 5.0, support for the VAAI primitives has been enhanced and additional primitives have been introduced:

• vSphere® Thin Provisioning (Thin Provisioning), enabling the reclamation of unused space and monitoring of space usage for thin-provisioned LUNs
• Hardware acceleration for NAS
• SCSI standardization by T10 compliancy for full copy, block zeroing and hardware-assisted locking

My Own Interpretation of the Sudden Release of vSphere 5.0, What’s WRONG ?!!!

By admin, July 13, 2011 1:17 pm

After reviewing all the latest features, I would say it should be called vSphere 4.5 instead of vSphere 5.0, as there aren't many feature improvements over the previous 4.1 version.

To my great surprise, VMware launched its latest flagship product vSphere in such a hurry; it was originally planned to be released in Q3 2011 or later. Why is this?

As people say, "the devil is always in the details"; after reading just half of the latest pricing guide, I quickly figured out the answer to the above question.

It's all about $$$. VMware tells you the latest vSphere 5.0 no longer has any restriction on CPU/RAM in an ESX host. That sounds fabulous, doesn't it? Or DOES it?

Let’s make a simple example:

Say you have the simplest cluster with two ESX hosts, each with 2 CPUs and 128GB RAM; your Enterprise Plus edition licenses for these two hosts cost USD13,980.

With the previous vSphere 4.1, you had an UNLIMITED vRAM entitlement and up to 48 cores.

With the brand new vSphere 5.0 pricing model, for the same amount of licensing (i.e., USD13,980) you are only entitled to 192GB of vRAM, so in order to get back the original 256GB of vRAM you need to pay for 2 more Enterprise Plus licenses, an extra USD6,990.

The more RAM your server has, the more you are going to pay with the new licensing model.

So my conclusion is that VMware is in reality discouraging people from going into the cloud. Think about this: why would you buy a Dell PowerEdge R710 (2 sockets) with only 96GB RAM installed? The PowerEdge R710 can take up to 288GB of RAM, but to use it all you need to pay for an EXTRA (288GB - 96GB) / 48GB = 4 more Enterprise Plus licenses.
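The arithmetic is easy to reproduce. Here is a small Python sketch using the originally announced 48GB-per-license Enterprise Plus entitlement and the USD3,495 list price implied by the numbers above (a simplification that ignores discounts and support costs):

```python
# Reproduces the licensing arithmetic above for the originally announced
# vSphere 5 model (48 GB of vRAM per Enterprise Plus license at USD 3,495).

import math

ENTITLEMENT_GB = 48
PRICE_USD = 3495

def eplus_licenses(sockets, ram_gb):
    """Licenses needed: one per socket, plus more if vRAM exceeds the pool."""
    return max(sockets, math.ceil(ram_gb / ENTITLEMENT_GB))

# Two hosts, 2 sockets and 128 GB RAM each.
sockets, ram = 2 * 2, 2 * 128
need = eplus_licenses(sockets, ram)
print(f"{need} licenses = USD {need * PRICE_USD:,}")                      # 6 licenses = USD 20,970
print(f"vs. {sockets} licenses under vSphere 4.1 = USD {sockets * PRICE_USD:,}")

# A single R710 maxed out at 288 GB with 2 sockets:
print(eplus_licenses(2, 288) - 2, "extra licenses just for the RAM")      # 4 extra
```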

In reality, CPU is always the last resource to run out, but RAM is the first! Future servers will have much more powerful CPUs for sure, but RAM is still the number 1 factor deciding your cloud capacity, IOPS is the 2nd, network is the 3rd and, just to remind you once more, CPU is the last!

Very clever, VMware, but whether potential customers will buy this concept is another story.

Hmm… maybe it's a strong sign that I can finally sell my VMW after all these years.

Finally, interestingly enough, Microsoft also responded to this topic: “Microsoft: New VMware Pricing Makes VMware Cloud Costs 4x Microsoft’s”

* Please note the above is my own personal interpretation as a user, it doesn’t represent my current employer or related affiliates.

vSphere 5.0 is Released today by Surprise!

By admin, July 13, 2011 10:34 am

Today VMware announced the release of VMware vSphere 5.0 by surprise! The following are some of the key features of this release:

  • Convergence. vSphere 5.0 is the first vSphere release built exclusively on the vSphere ESXi 5.0 hypervisor architecture as the host platform.
  • VMware vSphere Auto Deploy. VMware vSphere Auto Deploy simplifies the task of managing ESXi installation and upgrade for hundreds of machines.
  • New Virtual machine capabilities. 32-way virtual SMP, 1TB virtual machine RAM, Software support for 3D graphics, and more.
  • Expanded support for VMware Tools versions. VMware Tools from vSphere 4.x is supported in virtual machines running on vSphere 5.0 hosts.
  • Storage DRS. This feature delivers the DRS benefits of resource aggregation, automated initial placement, and bottleneck avoidance to storage.
  • Profile-driven storage. This solution allows you to have greater control and insight into characteristics of your storage resources.
  • VMFS5. VMFS5 is a new version of vSphere Virtual Machine File System that offers improved scalability and performance, and provides Internationalization support.
  • Storage vMotion snapshot support. Allows Storage vMotion of a virtual machine in snapshot mode with associated snapshots.
  • vSphere Web Client. A new browser-based user interface that works across Linux and Windows platforms.
  • vCenter Server Appliance. A vCenter Server implementation running on a pre-configured Linux-based virtual appliance.
  • vSphere High Availability. VMware High Availability has been transformed into a cloud-optimized availability platform.

Here are some of the reasons why I’m excited about this: 

  • App Aware API.  This is the same API that has been used by Neverfail’s vAppHA and Symantec’s ApplicationHA products.  Now with 5.0, this API is publicly available.  This means anyone can craft a solution that allows for the monitoring of an application and interfacing with vSphere HA to restart the VM.  Couple this with the new vSphere Web Client’s ease of extensibility and you have the potential to do some great things.
  • Ever had DNS resolution cause you issues when using vSphere HA?  With 5.0, all dependency on DNS for vSphere HA has been removed!
  • IPv6 is now supported.
  • Logging.  There have been a lot of improvements to the log messages with vSphere HA.  This was done to make the log messages more descriptive than ‘unknown HA error’ and should help with identifying configuration issues.
  • Several user interface enhancements.  Now you can see more detailed state information about the hosts in your cluster and what role they play with vSphere HA.

The changes to the vSphere HA infrastructure eliminate the primary/secondary constructs that existed in previous versions of vSphere.  Replacing them is a master/slave model.  In this model, one of the hosts is designated as the master, while the other hosts are designated as slaves.  The master coordinates most of the activities within the cluster and relays information to and from vCenter.

For now though, the reason why this is so great is that you no longer have to worry about details such as what hosts act as your primary nodes and which ones act as secondary nodes.  If you are installing vSphere HA in a blade chassis or making a stretched cluster, this is excellent news for you!

Heartbeat datastores are a feature that allows vSphere HA to utilize the storage subsystem as an alternate communication path.  Using heartbeat datastores allows vSphere HA to do things like determine the state of the hosts in the event of a failure of the management network, which brings up another enhancement to be excited about: management network partitions are now supported in 5.0!
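Roughly speaking, the master combines the management-network heartbeat with the datastore heartbeat to decide what happened to a host. A tiny Python sketch of that decision (my own simplification of the behaviour described above, not VMware code):

```python
# Rough sketch of how a vSphere HA master could distinguish host states using
# the management-network heartbeat plus the datastore heartbeat described above.
# This is a conceptual simplification, not VMware's implementation.

def classify_host(network_heartbeat_ok, datastore_heartbeat_ok):
    if network_heartbeat_ok:
        return "healthy"
    if datastore_heartbeat_ok:
        return "isolated or network-partitioned (host still alive; do not restart its VMs)"
    return "failed (restart its VMs on other hosts)"

print(classify_host(True, True))
print(classify_host(False, True))    # management network down, storage path alive
print(classify_host(False, False))   # host really dead
```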

Finally, vSphere 5 has changed entitlements around CPU cores and memory use. The company has lifted licensing restrictions on cores per processor and RAM per host. It has also eliminated the “Advanced” edition of vSphere, leaving just Essentials, Essentials Plus, Standard, Enterprise, and Enterprise Plus. vSphere 5 has also introduced a small change to the entitlement process around what is known as virtual memory, or vRAM.

You must obtain new licenses to deploy VMware vSphere 5. Your existing VMware vSphere 4 licenses will not work on vSphere 5.

For more, please refer to What’s New in vSphere 5.0 and vSphere 5.0 Licensing, Pricing and Packaging Whitepaper as well as Compare VMware vSphere Editions.
