Dell PowerEdge 12G Server: R720 Sneak Preview

By admin, December 8, 2011 10:03 am

It seems that Dell is already using eMLC (enterprise MLC) SSDs on its latest PowerEdge 11G series servers.

100GB Solid State Drive SATA Value MLC 3G 2.5in HotPlug Drive,3.5in HYB CARR-Limited Warranty Only [USD $1,007]

200GB Solid State Drive SATA Value MLC 3G 2.5in HotPlug Drive,3.5in HYB CARR-Limited Warranty Only [USD $1,807]

I think the PowerEdge R720 will probably be released at the end of March. I don’t see the point of adding more cores, as CPU is usually the last resource to run out; RAM is in fact the number one constraint, so more cores or higher clock speeds mean almost nothing to most ESX admins. Hum…also, if VMware allowed ESX 5.0 Enterprise Plus to address more than 256GB per processor, that would be a real change.
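To illustrate the point with rough, made-up sizing numbers (these are assumptions for illustration, not Dell or VMware figures), a quick consolidation estimate shows RAM running out well before CPU on a typical two-socket host:

```python
# Hypothetical consolidation estimate: why RAM, not cores, usually
# caps VM density on an ESX host. All figures are illustrative.
host_cores = 16            # e.g. two 8-core sockets
host_ram_gb = 192
vm_vcpus = 2               # assumed per-VM sizing
vm_ram_gb = 8
vcpu_overcommit = 4.0      # 4:1 vCPU:pCPU is common; RAM overcommits far less safely

max_vms_by_cpu = int(host_cores * vcpu_overcommit / vm_vcpus)
max_vms_by_ram = int(host_ram_gb / vm_ram_gb)

print(max_vms_by_cpu)      # 32 VMs before CPU becomes the limit
print(max_vms_by_ram)      # 24 VMs before RAM becomes the limit
```

With these (assumed) ratios the host exhausts RAM at 24 VMs while the CPUs could still carry 32, which is why extra cores buy little without extra memory.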


Michael Dell dreams about future PowerEdge R720 servers at OpenWorld

Dell, the man, said that Dell, the company, would launch its 12th generation of PowerEdge servers during the first quarter, as soon as Intel gets its “Sandy Bridge-EP” Xeon E5 processors out the door. Dell wasn’t giving away a lot when he said that future Intel chips would have lots of bandwidth. As readers of El Reg know from way back in May, when we divulged the feeds and speeds of the Xeon E5 processors and their related “Patsburg” C600 chipset, that bandwidth is due to the integrated PCI-Express 3.0 peripheral controllers, LAN-on-motherboard adapters running at 10 Gigabit Ethernet speeds, and up to 24 memory slots in a two-socket configuration supporting up to 384GB using 16GB DDR3 memory sticks running at up to 1.6GHz.

But according to Dell, that’s not the end of it. He said that Dell would be integrating “tier 0 storage right into the server,” which is server speak for front-ending external storage arrays with flash storage located in the server and making them work together seamlessly. “You can’t get any closer to the CPU,” Dell said.

Former storage partner and rival EMC would no doubt agree, since it was showing off the beta of its own “Project Lightning” server flash cache yesterday at OpenWorld. The idea, which has no doubt occurred to Dell, too, is to put flash cache inside of servers but put it under control of external disk arrays. This way, the disk arrays, which are smart about data access, can push frequently used data into the server flash cache without requiring the operating system or databases to be tweaked to support the cache. It makes cache look like disk, but it is on the other side of the wire and inside the server.

Dell said that the new PowerEdge 12G systems, presumably with EqualLogic external storage, would be able to process Oracle database queries 60 times faster than earlier PowerEdge 11G models.

The other secret sauce that Dell is going to bring to bear to boost Oracle database processing, hinted Dell, was the system clustering technologies it got by buying RNA Networks back in June.

RNA Networks was founded in 2006 by Ranjit Pandit and Jason Gross, who led the database clustering project at SilverStorm Technologies (which was eaten by QLogic) and who also worked on the InfiniBand interconnect and the Pentium 4 chip while at Intel. The company gathered up $14m in venture funding and came out of stealth in February 2009 with a shared global memory networking product called RNAMessenger that links multiple server nodes together deeper down in the iron than Oracle RAC clusters do, but not as deep as the NUMA and SMP clustering done by server chipsets.

Dell said that a rack of these new PowerEdge systems – the picture above shows a PowerEdge R720, which would be a two-socket rack server using the eight-core Xeon E5 processors – would have 1,024 cores (that would be 64 servers in a 42U rack), 40TB of main memory (that’s 640GB per server), over 40TB of flash, and would do queries 60 times faster than a rack of PowerEdge 11G servers available today. Presumably these machines also have EqualLogic external storage taking control of the integrated tier 0 flash in the PowerEdge 12G servers.
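The quoted rack-level figures are easy to sanity-check with back-of-the-envelope arithmetic (server count and per-server RAM are taken from the claims above):

```python
# Sanity-check the rack-level figures quoted for a 42U rack of R720s.
servers = 64              # claimed servers per 42U rack
sockets = 2               # R720 is a two-socket box
cores_per_cpu = 8         # eight-core Xeon E5
ram_per_server_gb = 640   # quoted per-server memory

total_cores = servers * sockets * cores_per_cpu
total_ram_tb = servers * ram_per_server_gb / 1024

print(total_cores)        # 1024 cores, matching the quoted figure
print(total_ram_tb)       # 40.0 TB of main memory, matching the claim
```

The numbers line up: 64 two-socket, eight-core servers give exactly 1,024 cores, and 64 × 640GB is 40TB of RAM.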

Update: March 6, 2012

Got an update from The Register regarding the coming 12G servers. One of the most interesting features is that the R720 now supports SSDs plugging directly into PCI-Express slots.

Every server maker is flash-happy these days, and solid-state memory is a big component of any modern server, including the PowerEdge 12G boxes. The new servers are expected to come with Express Flash – memory modules that plug directly into PCI-Express slots on the server without going through a controller. Depending on the server model, the 12G machines will offer two or four of these Express Flash ports, and a PCI slot will apparently be able to handle up to four of these units, according to Payne. In early tests, PowerEdge 12G machines with Express Flash were able to crank through 10.5 times more SQL database transactions per second than earlier 11G machines without flash.

Update: March 21, 2012

It seems the SSDs that plug directly into PCI-Express slots are going to be externally accessible and hot-swappable.

SSD vendor Micron recently announced its first 2.5-inch PCIe solid-state drive (SSD). Unlike the PCIe SSD products currently on the market, this new product is not an add-in card but a hot-swappable 2.5-inch solid-state drive.

In recent years servers have begun to ship with SSDs, but the transfer interface has remained mostly SATA or SAS, or vendors provide PCIe expansion slots so enterprises can add PCIe add-in-card SSDs to extend a server’s storage performance and capacity. The PCIe interface delivers higher transfer rates, but the drawback is that PCIe slots usually sit inside the server chassis and do not support hot-swapping; to expand or replace a unit, the server must be shut down and the case opened.

Dell’s latest 12th-generation PowerEdge servers add 2.5-inch PCIe drive bays, and Micron’s new 2.5-inch PCIe SSD can be installed in the front of these servers with hot-swap support. Besides retaining the high transfer speeds, this also improves manageability: IT staff can expand or replace drives much more easily.


Vendor Acquisitions & Partnerships

By admin, December 8, 2011 9:51 am

Found that someone (HansDeLeenheer) actually made this very interesting relationship diagram. It’s like a love triangle; very innovative and informative! I just love it!


Question about Latest EqualLogic Firmware 5.1 Feature: Auto Load Balancing

By admin, December 8, 2011 9:36 am

I’ve opened a case with EqualLogic, but have received no reply so far.

As I understand it, EqualLogic Firmware 5.1 supports Auto Load Balancing, in which the firmware (or software) automatically reallocates hot data to the appropriate group members.

In fact, there are two great videos (Part I and Part II) about this new feature on YouTube.

As we know, the best practice before FW 5.1 was to group arrays of similar generation and spindle speed in the same storage pool. For example, place PS6000XV with PS6000XV, PS6000E with PS6000E, etc.

Now with FW 5.1, it is indeed possible to place whatever you want in the same pool (i.e., different generations and spindle speeds together), as Auto Load Balancing will take care of the rest.

Dell called this Fluid Data Technology, actually a term borrowed from its recently acquired storage vendor Compellent.

My question is: in terms of performance and updated best design practice, does Dell EqualLogic still recommend the old way (i.e., separate storage tiers with similar generation and spindle speed)?

 

Update: Dec 12, 2011

Finally got the reply from Dell; it seems the old rule still applies.

The recommendation from Dell to use drives with the same drive speed is still the same within a pool. When you mix drives of different speeds, you slow the speed of the faster drives to the speed of the slower drives. The configuration should WORK but is not optimal and not recommended by Dell.

Then my next question is: what’s the use of the new “Fluid Data Technology” feature if the old rule still applies? Huh?

 

Update: Dec 21, 2011

Received another follow up from EQL support, this really solved my confusion now.

Load balancing is across disks, controllers, cache, and network ports. That means storage, processing, and network port access are all balanced – not ONLY disk speed.

Dell Equallogic customers create their own configuration.  It is an option for you to add disks of different speeds to the group/pool; however, the disk spin speed will change to be the speed of the slowest drive.  Most customers do not have a disk spin speed bottleneck; however, most customers are also aware of the rule-of-thumb in which they keep disks of like speeds together.

http://www.cns-service.com/equallogic/pdfs/CB121_Load-balancing.pdf
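Support’s point about keeping like spindle speeds together can be illustrated with rough, hypothetical per-disk IOPS figures (my own assumptions, not Dell-published numbers):

```python
# Illustrative (assumed) per-disk IOPS showing why mixing spindle speeds
# in one pool drags aggregate performance toward the slower tier.
iops_15k = 180       # rough per-disk IOPS for a 15K SAS drive (assumption)
iops_7k2 = 80        # rough per-disk IOPS for a 7.2K SATA drive (assumption)
disks_15k = 16
disks_7k2 = 16

# Two like-for-like pools: each tier contributes its full throughput.
separate_pools = disks_15k * iops_15k + disks_7k2 * iops_7k2

# Pessimistic mixed-pool case: if volume slices stripe evenly across all
# members, each I/O completes only as fast as the slowest member allows.
mixed_pool_worst_case = (disks_15k + disks_7k2) * iops_7k2

print(separate_pools)         # 4160 total IOPS with separate pools
print(mixed_pool_worst_case)  # 2560 in the pessimistic mixed case
```

Under these assumptions the mixed pool gives up roughly 40% of aggregate IOPS in the worst case, which matches support’s “should work but is not optimal” wording.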

 

Update: Jan 18, 2012

After spending a few hours at the Dell Storage Community, I found the following useful information from different people; the answer is still the same.

DO NOT MIX different RPM disks in the same pool even with the latest EQL FW v5.1 APLB feature!

Yes, the new improvements to Performance Load Balancing in v5.1.x and the sub-volume tiering performance balancing capabilities now allow mixed drive technologies and mixed RAID policies to coexist in the same storage pool.

In your case, you would have mixed drive technologies (the PS400E, SATA 5k and the PS4100X, SAS 10k) with each member running the same RAID policies.

When the PS4100X is added to the pool, normal volume load balancing will take the existing volumes on the two PS400E’s and balance them onto the PS4100X. Once the balancing is complete and you then remove the PS400E from the group (which is the requirement to remove one PS400E), the volume slices contained on this member will be moved to the remaining two members and balanced across both (the PS400E SATA and PS4100X SAS) at that point.

Note, however, that Sub-volume performance load balancing may not be so noticeable until the mixed pools experience workloads that show tiering and are regular in their operating behavior. Because the operation takes place gradually, this could take weeks or EVEN MONTHS depending on your specific data usage.

 

Arisadmin,

Hi, I’m Joe with Dell EqualLogic. Although we support having members with mixed RAID policies in the same pool, in your case this is not advisable due to the two different drive types on your two members, i.e., your PS6000E is SATA and your PS6000XV is SAS. Mixing different drive types in the same pool will most likely degrade performance in the group.

If the arrays were of the same drive type, i.e., both SATA (or both SAS), then combining the two (RAID 10 and RAID 6), would not be a problem, however the actual benefits, in your case, may not be as great as expected.

In order for load balancing to work “efficiently”, the array will analyze the disk I/O for several weeks (2-3) and determine whether the patterns are sequential (RAID 5/6) or random (RAID 10), and will migrate those volumes to the corresponding member.

However, in a two-member group this is often less efficient, hence EQL always suggests a three-member group for load balancing.

Since the array will try to balance across both members, you may end up with 80% of the volume on one member and 20% on the other instead of a 50/50 split.

We also support manually assigning a RAID level to the volume, but this would in effect eliminate the load balancing that you are trying to achieve, since it is only a two-member group.

So in summary, we don’t recommend combining different Drive types (or Disk RPM) in the same pool.

You can go to http://www.delltechcenter.com/page/Guides and review the following documents for more information:
Deploying Pools and Tiered Storage in a PS Series SAN
PS Series Storage Arrays Choosing a member RAID Policy

Regards,
Joe

 

This is a very popular and frequently asked question regarding what type of disks and arrays should be used in a customer’s environment. First, you asked about the APLB (Automatic Performance Load Balancing) feature in EqualLogic arrays.

Yes, it will move “hot blocks” of data to faster or less used disks, but it’s not instantaneous. This load balancing within volumes, better known as sub-volume load balancing, uses an advanced algorithm that monitors data performance over time. I recommend reading the APLB whitepaper, which should help you better understand how this technology works.

see here: www.virtualizationimpact.com

In terms of what disks to buy, well, that comes down to what you are going to require in your environment. From my experience and from reading other forums, if you are trying to push for the best performance and capacity, I would look at the X series arrays or 10K series drives. You can now get 600GB 10K drives in 2.5in and 3.5in form factors (I believe), and you won’t have to worry about whether your 7200RPM drives will be able to keep up with your workload, or at least they will be faster if you mix them with the 15K SAS/SSD arrays. Not saying that the 7200s won’t work; it just depends on your requirements.

Hope that’s some help; maybe someone else will chime in with more info too.

Jonathan

 

The best solution is to work with Sales and get more specific about the environment than can easily be done via a forum. Your entire environment should be reviewed. Your IOPS requirements will determine whether you can mix SAS/SATA in the same pool and still achieve your goals.

One thing, though: it’s not a good idea to mix the XVS with non-XVS arrays in the same pool. Yes, the tiering in 5.1.x firmware will move hot pages, but it’s not instant, and you’ll lose some of the high-end performance features of the XVS.

Regards,

-don