Category: Equallogic & VMWare (Virtualization Technology)

Shrinking an Equallogic Thin Volume = Thin Reclaim on Dirty Blocks

By admin, December 22, 2011 10:28 pm

I thought this was only available in FW 5.1 with ESX 5.0 or above (in fact, it turns out this EQL feature is not even ready until Q2/Q3 2012), so I was surprised to find the following online today.

Still, I do not have the courage to do this in production; it's just too scary. Better to Storage vMotion from one datastore (ie, EQL volume) to another datastore and then remove the old dirty volume to reclaim the space.

Shrinking is only supported in EqualLogic firmware version 3.2.x or greater. Shrinking a Thin Provisioned volume is only supported in v4 or higher. For v3.2, you can, however, convert a TP volume to a standard thick volume and then resize it, assuming there is space available to do so.

CAUTION: Improperly shrinking a filesystem can result in data loss. Be very careful about shrinking volumes, and always create snapshots to fall back to in the event of a problem.

To shrink a volume you must first shrink the file system and partition. While a few operating systems, like Windows 2008 for example, support this natively, most do not. For Windows NT, 9x, XP and W2K3, you need to use a tool such as Partition Magic, Partition Commander, or similar. We do not test these tools in house, so we cannot make any statements about how well they work.
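For example, on Windows 2008 the shrink can be done natively with diskpart; a minimal sketch, where the volume number and the shrink amount are purely illustrative:

DISKPART> list volume
DISKPART> select volume 3
DISKPART> shrink querymax
DISKPART> shrink desired=102400

(shrink querymax first shows how many MB can actually be reclaimed; shrink desired is also given in MB, so 102400 shrinks the selected file system/partition by roughly 100GB.)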

WARNING: be certain that you shrink the volume to slightly larger than what you have reduced the filesystem/partition size to. Shrinking a volume to be smaller than the partition/filesystem WILL result in data loss. For example, if you reduced the filesystem to 500GB, shrink the EQL volume to 501GB.

Always create a snapshot before attempting any resize operation. Shrink the file system & partition slightly more than you intend to shrink the volume to avoid rounding or other math discrepancies.

The volume will need to be offline to shrink it, so log off or shut down the server(s) connected to that volume first.

The shrink command is a CLI-only option. You have to first select the volume (covered in the CLI manual available on the FW download page).

GrpName> vol sel volume_name
GrpName(volume_volume_name)> offline
GrpName(volume_volume_name)> shrink
GrpName(volume_volume_name)> online
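Note that shrink needs the new, smaller size as its argument. Following the 500GB/501GB example above, and assuming the size syntax matches the rest of the CLI (confirm it in the CLI manual for your firmware version), the key line would be something like:

GrpName(volume_volume_name)> shrink 501GB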

Another Great FREE Tool from SolarWinds: Storage Response Time Monitor

By admin, December 22, 2011 9:47 am

Just found Storage Response Time Monitor 10 minutes ago, downloaded and installed it, and loved it at first sight! It correctly shows me the real-time breakdown of each Equallogic volume's latency and, what's more, it lists the top 5 BUSIEST VMs with IOPS numbers for that particular volume in one single window.

Having trouble with storage latency issues? Seeing your response times getting slower and slower? Download SolarWinds Storage Response Time Monitor and start tracking those sluggish VMs. Storage Response Time Monitor shows the top host to datastore total response times and a count of additional high latency offenders. Keeping track of your storage response times has never been easier.

Storage Response Time Monitor Highlights:

  • Get at-a-glance insight into host to datastore connections with the worst response times, and the busiest VMs using those connections
  • See a breakdown of the datastore including type and device versus kernel latency


Equallogic Versus Lefthand Blog

By admin, December 22, 2011 9:32 am

Amazing: someone actually made such an interesting blog collecting all the related articles he could find on the net, listing out the pros & cons of each vendor. For me, of course, I am biased towards Equallogic, but it's nice to see such an in-depth comparison.

What a BIG Surprise! Equallogic FW 5.1's VMware Thin Provisioning Stun feature WORKED on ESX 4.1!

By admin, December 21, 2011 10:21 pm

This happened two days ago and was followed by a very happy ending.

Around 8AM, I received an SMS alert saying one of the email servers wasn't responding, and it was followed by many emergency calls from clients on that email server (obvious…haha).

I logged into vCenter and found the email server had a question mark on top of its icon. Shortly afterwards I realized the EQL volume it sits on was full; SANHQ had also generated a few warnings over the previous days, but I was too busy and too overconfident, ignoring all the warnings and thinking it might get through the weekend.

The solution is quite simple. First increase the Thin Provisioned EQL volume size, then extend the existing datastore in vCenter, and finally answer the question on the stunned (ie, suspended) VM with Retry. Voila, it's back to normal again, with no need to restart or shut down the VM at all.
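For reference, that last step can also be scripted instead of clicking through the vSphere Client; a minimal PowerCLI sketch, assuming the VM is named MAIL01 (hypothetical), that your PowerCLI version includes the Get-VMQuestion/Set-VMQuestion cmdlets, and that the pending question's option label really is Retry:

# Answer the pending out-of-space question on the stunned VM with Retry
Get-VM MAIL01 | Get-VMQuestion | Set-VMQuestion -Option "Retry" -Confirm:$false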

This is really beautiful! I was a bit depressed previously when I learned it would only work with ESX 5.0, which was also confirmed by a Dell storage expert; then the other day I found out from Dell's official documentation that the VMware Thin Provisioning Stun feature would work with ESX 4.1 after all. I was completely confused, not knowing who was right, until two days ago.

FYI, I had a nasty EQL Thin Provisioned volume issue last year (EQL firmware was 4.2): the whole over-provisioned thin volume simply went offline when it reached its maximum limit, and all my VMs on that volume crashed and needed to be restarted manually even after extending the volume in EQL and vCenter.

No more worries, thank you so much Equallogic, you really made my day! :)

Finally, for those who may be interested to know why I didn't upgrade to ESX 5.0: why? Why should I? Normally I wait for a major release like ESX 3.5 or ESX 4.1 before making the big move.

There is another issue: the latest Veeam Backup & Replication 6.0 still has many minor problems with ESX 5.0, and frankly speaking, I don't see many advantages in moving to ESX 5.0 while my ESX 4.1 environment is so rock solid. The new features such as Storage DRS, SSD caching or VSA are all minor stuff compared with VAAI, iSCSI multipathing and thin provisioning in ESX 4.1. In addition, the latest EQL firmware features are mostly backward compatible with older ESX versions, so that's why I still prefer to stay on ESX 4.1 for at least another year.

Oh, one thing I missed: I really do hope EQL firmware 5.1 Thin Reclaim can work on ESX 4.1, but it seems to be mission impossible. Never mind, I've got plenty of space; moving a VM off a dirty volume isn't exactly a big deal, so I can live with it and manually create a new clean volume.

Update Jan 17, 2012

Today, I received the latest Equallogic newsletter and somehow it also indicates this VMware Thin Provisioning Stun feature is supported with ESX 4.1; I hope it's not a typo.

Dell EqualLogic Firmware Maintenance Release v5.1.2 (October 2011)

Dell EqualLogic Firmware v5.1.2 is a maintenance release on the v5.1 release stream. The firmware v5.1 release stream introduces support for the Dell EqualLogic FS7500 Unified Storage Solution, and advanced features like Enhanced Load Balancing, Data Center Bridging, VMware Thin Provisioning awareness for vSphere v4.1, Auditing of Administrative actions, and Active Directory Integration. Firmware v5.1.2 is a maintenance release with bug fixes for enhanced stability and performance of EqualLogic SANs.

Update Mar 5, 2012

I found something new today: Thin Provisioning Stun is actually a hidden API in ESX 4.1, and apparently only two storage vendors support it on ESX 4.1, one being Equallogic. No wonder this feature worked even with firmware v5.0.2, when I thought at least v5.1 was required. Thanks EQL, this gives me a bit more time to upgrade to FW 5.1 or even FW 5.2.

Thin Provisioning Stun is officially a vSphere 5 VAAI primitive. It was included in vSphere 4.1 and some array plugins support it, but it was never officially listed as a vSphere 4 primitive.

Out of Space Condition (AKA Thin Provisioning Stun)

Running out of capacity is a catastrophe, but it’s easy to ignore the alerts in vCenter until it’s too late. This command allows the array to notify vCenter to “stun” (suspend) all virtual machines on a LUN that is running out of space due to thin provisioning over-commit. This is the “secret” fourth primitive that wasn’t officially acknowledged until vSphere 5 but apparently existed before. In vSphere 5, this works for both block and NFS storage. Signals are sent using SCSI “Check Condition” operations.

VAAI Commands in vSphere 4.1

esxcli corestorage device list

esxcli vaai device list

esxcli corestorage plugin list
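Note that the corestorage namespace is specific to ESX/ESXi 4.1; it was renamed to esxcli storage core in ESXi 5.0, so these commands will not work verbatim on a 5.0 host. As a quick sanity check from the 4.1 service console, the device list can be grepped for the hardware acceleration / VAAI status lines (the exact field names vary slightly between builds, so adjust the pattern if needed):

esxcli corestorage device list | grep -i -B 8 "VAAI"
esxcli vaai device list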

Storage I/O Control (SIOC) Causing VM to Fail

By admin, December 9, 2011 12:53 am

Recently, I encountered a strange situation: sharp at 2 AM, which is the backup window (Acronis agent running inside the VM), one of the VMs occasionally failed to function and I had to reboot it to restore service. CPU on the VM went to 100% for a few hours and it gradually stopped responding to ping.

However, I was still able to log in to the console, but could not launch any program; a reboot was OK though.

There were tons of red alert errors under the Event Log (System), most of them related to I/O problems, as if a hard disk on the EQL SAN had bad blocks or similar.


Event ID: 333
An I/O operation initiated by the Registry failed unrecoverably. The Registry could not read in, or write out, or flush, one of the files that contain the system's image of the Registry.

Event ID: 2019
The server was unable to allocate from the system nonpaged pool because the pool was empty.

Event ID: 50
{Delayed Write Failed} Windows was unable to save all the data for the file . The data has been lost. This error may be caused by a failure of your computer hardware or network connection. Please try to save this file elsewhere.

Event ID: 57
The system failed to flush data to the transaction log. Corruption may occur.

I couldn't find the exact reason during the preliminary investigation, and an email exchange with EQL technical support returned nothing.

Event ID: 2019
Unable to read the disk performance information from the system. Disk performance counters must be enabled for at least one physical disk or logical volume in order for these counters to appear. Disk performance counters can be enabled by using the Hardware Device Manager property pages. The status code returned is in the first DWORD in the Data section.

Then I suddenly noticed a vCenter alert saying there was non-VI workload storage congestion on the volume where the VM is located. Right away I figured out it was related to SIOC, and checking the I/O latency around 2 AM confirmed this: SIOC throttled the volume back when latency went over 30 ms during the backup window, and somehow this caused the backup software (Acronis) and Windows to crash.

I've had SIOC disabled on that particular volume for 3 days already and everything has run smoothly so far, so it seems I have solved the mystery.
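For reference, toggling SIOC per datastore can also be scripted rather than done through the vSphere Client; a minimal PowerCLI sketch, assuming the datastore is named after the EQL volume (EQL-VOL01 is hypothetical) and that your PowerCLI version exposes the Storage I/O Control parameters on Set-Datastore:

# Disable Storage I/O Control on one datastore...
Get-Datastore "EQL-VOL01" | Set-Datastore -StorageIOControlEnabled:$false
# ...or keep it enabled and just raise the congestion threshold above the 30 ms default
# Get-Datastore "EQL-VOL01" | Set-Datastore -CongestionThresholdMillisecond 50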

If you have encountered something like this, please do drop me a line, thanks!

Update: Dec 23, 2011

The same problem occurred again, so it's definitely not related to SIOC. The funny thing is that it happened at the same time the scheduled antivirus scan and the Acronis backup window started together, so I've moved the Acronis backup window to a later time because I think these two I/O intensive programs were competing with each other.

I do hope this is the root of the problem; I will keep observing.

Update: Jan 14, 2012

I think I've finally nailed the problem: no more crashes since Dec 23, 2011. Last night I observed a very interesting fact: the VM's CPU went to 90% at 2 AM and stayed there for 15 minutes. Ah, I realized it's the weekly scheduled virus scan that's causing the huge I/O and latency; some of the services on this VM even stopped responding during the busy period.

So I’ve decided to remove the weekly scan completely, it’s useless anyway.

Update: Jan 18, 2013

The above change (removing the weekly scheduled virus scan) proves it was the cause of the problem after all.

Dell Poweredge 12G Server: R720 Sneak Preview

By admin, December 8, 2011 10:03 am

It seems to me that Dell is already using e-MLC SSDs (enterprise MLC) on its latest Poweredge G11 series servers.

100GB Solid State Drive SATA Value MLC 3G 2.5in HotPlug Drive,3.5in HYB CARR-Limited Warranty Only [USD $1,007]

200GB Solid State Drive SATA Value MLC 3G 2.5in HotPlug Drive,3.5in HYB CARR-Limited Warranty Only [USD $1,807]

I think the Poweredge R720 will probably be released at the end of March. I don't see the point of having more cores, as CPU is always the last resource to run out; RAM is in fact the number one most important thing, so having more cores or faster GHz means almost nothing to most ESX admins. Hmm, also, if VMware would allow ESX 5.0 Enterprise Plus to have more than 256G per processor, then that would be a real change.


Michael Dell dreams about future PowerEdge R720 servers at OpenWorld

Dell, the man, said that Dell, the company, would launch its 12th generation of PowerEdge servers during the first quarter, as soon as Intel gets its “Sandy Bridge-EP” Xeon E5 processors out the door. Dell wasn't giving away a lot when he said that future Intel chips would have lots of bandwidth. As readers of El Reg know from way back in May, when we divulged the feeds and speeds of the Xeon E5 processors and their related “Patsburg” C600 chipset, that bandwidth is due to the integrated PCI-Express 3.0 peripheral controllers, LAN-on-motherboard adapters running at 10 Gigabit Ethernet speeds, and up to 24 memory slots in a two-socket configuration supporting up to 384GB using 16GB DDR3 memory sticks running at up to 1.6GHz.

But according to Dell, that's not the end of it. He said that Dell would be integrating “tier 0 storage right into the server,” which is server speak for front-ending external storage arrays with flash storage that is located in the server and making them work together seamlessly. “You can't get any closer to the CPU,” Dell said.

Former storage partner and rival EMC would no doubt agree, since it was showing off the beta of its own “Project Lightning” server flash cache yesterday at OpenWorld. The idea, which has no doubt occurred to Dell, too, is to put flash cache inside of servers but put it under the control of external disk arrays. This way, the disk arrays, which are smart about data access, can push frequently used data into the server flash cache and not require the operating system or databases to be tweaked to support the cache. It makes cache look like disk, but it is on the other side of the wire and inside the server.

Dell said that the new PowerEdge 12G systems, presumably with EqualLogic external storage, would be able to process Oracle database queries 60 times faster than earlier PowerEdge 11G models.

The other secret sauce that Dell is going to bring to bear to boost Oracle database processing, hinted Dell, was the system clustering technologies it got by buying RNA Networks back in June.

RNA Networks was founded in 2006 by Ranjit Pandit and Jason Gross, who led the database clustering project at SilverStorm Technologies (which was eaten by QLogic) and who also worked on the InfiniBand interconnect and the Pentium 4 chip while at Intel. The company gathered up $14m in venture funding and came out of stealth in February 2009 with a shared global memory networking product called RNAMessenger that links multiple server nodes together deeper down in the iron than Oracle RAC clusters do, but not as deep as the NUMA and SMP clustering done by server chipsets.

Dell said that a rack of these new PowerEdge systems – the picture above shows a PowerEdge R720, which would be a two-socket rack server using the eight-core Xeon E5 processors – would have 1,024 cores (that would be 64 servers in a 42U rack), 40TB of main memory (that's 640GB per server), over 40TB of flash, and would do queries 60 times faster than a rack of PowerEdge 11G servers available today. Presumably these machines also have EqualLogic external storage taking control of the integrated tier 0 flash in the PowerEdge 12G servers.

Update: March 6, 2012

Got some updates from The Register regarding the coming 12G servers; one of the most interesting features is that the R720 now supports SSDs connected directly to PCI-Express slots.

Every server maker is flash-happy these days, and solid-state memory is a big component of any modern server, including the PowerEdge 12G boxes. The new servers are expected to come with Express Flash – the memory modules that plug directly into PCI-Express slots on the server without going through a controller. Depending on the server model, the 12G machines will offer two or four of these Express Flash ports, and a PCI slot will apparently be able to handle up to four of these units, according to Payne. On early tests, PowerEdge 12G machines with Express Flash were able to crank through 10.5 times more SQL database transactions per second than earlier 11G machines without flash.

Update: March 21, 2012

Seems the SSDs connected directly to PCI-Express slots are going to be externally accessible and hot-swappable.

SSD vendor Micron recently launched the first 2.5-inch solid state disk (SSD) with a PCIe interface. Unlike the PCIe SSD products already on the market, the biggest difference with this new product is that it is not an add-in card, but a hot-swappable 2.5-inch SSD.

In recent years servers have started to ship with SSDs, but the interface has mainly been SATA or SAS, or a PCIe expansion slot is provided so that enterprises can separately purchase an add-in-card PCIe SSD to expand the server's storage performance and capacity. The PCIe interface offers higher transfer performance, but the drawback is that PCIe expansion slots sit inside the server chassis and do not support hot-swapping; if anything needs to be expanded or replaced, the server must be shut down and the chassis opened before the swap can be made.

Dell's latest 12th-generation PowerEdge servers add 2.5-inch PCIe drive bays, and the 2.5-inch PCIe SSD that Micron has just launched can be installed in the front of these servers with hot-swap support. Besides keeping the advantage of high transfer speed, this also improves manageability, since IT staff can expand and replace drives more easily.


Vendor Acquisitions & Partnerships

By admin, December 8, 2011 9:51 am

Found that someone (HansDeLeenheer) actually made this very interesting relationship diagram; it's like a love triangle, and it's very innovative and informative! I just love it!


Question about Latest Equallogic Firmware 5.1 Feature: Auto Load Balancing

By admin, December 8, 2011 9:36 am

I've opened a case with Equallogic, but so far have not received a reply.

As I understand it, Equallogic Firmware 5.1 supports Auto Load Balancing, where the firmware (or software) automatically reallocates hot data to the appropriate group members.

In fact, there were two great videos (Part I and Part II) about this new feature on Youtube.

As we know, the best practice before FW 5.1 was to group similar generations and spindles in the same storage pool. For example, place PS6000XV with PS6000XV, PS6000E with PS6000E, etc.

Now with FW 5.1, it is indeed possible to place whatever you want in the same pool (ie, different generations and spindles in the same pool), as Auto Load Balancing will take care of the rest.

Dell calls this Fluid Data Technology, a term actually borrowed from its recently acquired storage vendor Compellent.

My question is, in terms of performance and updated best design practice, is it still recommended by Dell Equallogic to go the old way? (ie, separate storage tiers with similar generation and spindle speed)


Update: Dec 12, 2011

Finally got a reply from Dell; it seems the old rule still applies.

The recommendation from Dell to use drives with the same drive speed within a pool is still the same. When you mix drives of different speeds, you slow the faster drives down to the speed of the slower drives. The configuration should WORK but is not optimal and is not recommended by Dell.

Then my next question is: what's the use of the new "Fluid Data Technology" feature if the old rule still applies? Huh?


Update: Dec 21, 2011

Received another follow-up from EQL support; this really cleared up my confusion.

Load balancing is across disks, controllers, cache, and network ports. That means storage, processing, and network port access are all balanced, not ONLY disk speed.

Dell Equallogic customers create their own configuration. It is an option for you to add disks of different speeds to the group/pool; however, the pool will effectively perform at the speed of the slowest drives. Most customers do not have a disk spin speed bottleneck; however, most customers are also aware of the rule of thumb of keeping disks of like speeds together.

http://www.cns-service.com/equallogic/pdfs/CB121_Load-balancing.pdf


Update: Jan 18, 2012

After spending a few hours on the Dell Storage Community, I found the following useful information from different people; the answer is still the same.

DO NOT MIX different RPM disks in the same pool even with the latest EQL FW v5.1 APLB feature!

Yes, the new improvements to the Performance Load Balancing in v5.1.x and the sub-volume tiering performance balancing capabilities now allow for mixed drive technologies and mixed RAID policies coexisting in the same storage pool.

In your case, you would have mixed drive technologies (the PS400E, SATA 5k and the PS4100X, SAS 10k) with each member running the same RAID policies.

When the PS4100X is added to the pool, normal volume load balancing will take the existing volumes on the two PS400E's and balance them onto the PS4100X. Once the balancing is complete and you then remove the PS400E from the group (which is the requirement to remove 1 PS400E), the volume slices contained on this member will be moved to the remaining two members and be balanced across both members (the PS400E SATA and PS4100X SAS) at that point.

Note, however, that Sub-volume performance load balancing may not be so noticeable until the mixed pools experience workloads that show tiering and are regular in their operating behavior. Because the operation takes place gradually, this could take weeks or EVEN MONTHS depending on your specific data usage.


Arisadmin,

Hi, I'm Joe with Dell EqualLogic. Although we support having members with mixed RAID policies in the same pool, in your case this is not advisable due to the two different drive types on your two members, i.e., your PS6000E is SATA and your PS6000XV is SAS. Mixing different drive types in the same pool will most likely degrade performance in the group.

If the arrays were of the same drive type, i.e., both SATA (or both SAS), then combining the two (RAID 10 and RAID 6), would not be a problem, however the actual benefits, in your case, may not be as great as expected.

In order for load balancing to work “efficiently”, the array will analyze the disk I/O for several weeks (2-3) and determine if the patterns are sequential (RAID’s 10/5/6) or random (RAID 10), and would migrate those volumes to the corresponding member.

However, in a two-member group this is often less efficient, hence EQL always suggests a three-member group for load balancing.

Since the array will try to balance across both members, you may end up with 80% of the volume on one member and 20% on the other member instead of a 50/50 split.

We also support manually assigning a RAID level to the volume, but this would in effect eliminate the load balancing that you are trying to achieve, since it is only a two-member group.

So in summary, we don’t recommend combining different Drive types (or Disk RPM) in the same pool.

You can go to http://www.delltechcenter.com/page/Guides and review the following documents for more information:
Deploying Pools and Tiered Storage in a PS Series SAN
PS Series Storage Arrays Choosing a member RAID Policy

Regards,
Joe


This is a very popular and frequently asked question regarding what type of disks and arrays should be used in a customer's environment. First, you asked about the APLB (Automatic Performance Load Balancing) feature in EqualLogic arrays.

Yes, it will move "hot blocks" of data to faster or less used disks, but it's not instantaneous. This load balancing between volumes, better known as sub-volume load balancing, uses an advanced algorithm that monitors data performance over time. I recommend reading the APLB whitepaper, which should help you better understand how this technology works.

see here: www.virtualizationimpact.com

In terms of what disks to buy, well, that comes down to what you are going to require in your environment. From my experience and from reading other forums, if you are trying to push for the best performance and capacity I would look at the X series arrays or 10K drives. You can now get 600GB 10K drives in 2.5 and 3.5 inch form factors (I believe), and you won't have to worry whether your 7200 drives can keep up with your workload, or at least they will be faster, and you can mix them with the 15K SAS/SSD arrays. Not saying that the 7200s won't work, it just depends on your requirements.

Hope that's some help; maybe someone else will chime in with more info too.

Jonathan


The best solution is to work with Sales and get more specific about the environment than can easily be done via a forum. Your entire environment should be reviewed. Your IOPS requirements will determine whether you can mix SAS/SATA in the same pool and still achieve your goals.

One thing though: it's not a good idea to mix the XVS with other non-XVS arrays in the same pool. Yes, the tiering in 5.1.x firmware will move hot pages, but it's not instant and you'll lose some of the high-end performance features of the XVS.

Regards,

-don

Selecting between Intel 320 and Crucial M4 SSD

By admin, December 2, 2011 2:15 pm

I will soon purchase an SSD for testing ESX performance on a Poweredge R710 (H700 512MB RAID card). After researching a bit on Storage Review and AnandTech, I narrowed it down to the two most reliable branded SSDs.

Crucial M4 128GB 2.5″ SATA 3 6Gb/s SSD (FIRMWARE 009)

Intel 320 Series 120GB (Retail Box w/External Case) G3 Postville SSDSA2CW120G3B5 2.5″ SATA SSD


1. Intel has two more years of warranty (5 years) versus Crucial's 3 years.

Locally, Intel is warranted by 聯強 (Synnex) and Crucial by 建達 (Xander); I have had good experience with both.

2. The Intel retail box comes with a USB 3.0 connector and I really like this! After the testing, I can still use the drive with my desktop's USB 3.0.

3. The Crucial M4 is much faster in terms of sequential read/write, and its random IOPS is extremely impressive according to various benchmarks; random IOPS is what SSDs are all about for VMs.

http://www.anandtech.com/print/4256
http://www.anandtech.com/print/4421

4. The Intel SSD comes with a free, nice GUI, the "Intel SSD Toolbox"; it offers nice features too, like firmware updates and secure erase to restore like-new speed. The M4 has nothing comparable in this area.

5. Intel has power-loss data protection, the M4 has nothing. However, the Intel 320 had a dark period three months ago when losing power could trigger the 8MB bug and render the SSD unusable, so maybe the protection is just a gimmick from Intel as always.

6. The Intel 320 is still SATA2, 3Gbps; the Crucial M4 is SATA3, 6Gbps.

7. Both the Intel 320 and Crucial M4 use 25nm NAND.

8. Also found out the latest Crucial M4 is the SAME as the Crucial C400 (just a different name) and is the newer generation of the famous Crucial C300.

I don't like OCZ, as I have read so many negative reports about their firmware problems.

It seems the Crucial M4 128GB SSD is the right one for me: it's 25nm, SATA3 6Gbps, and 2-3 times faster than the Intel 320 120GB SSD in terms of random 4K IOPS.

I have decided to go for the M4, and I can live with giving up the Intel SSD Toolbox and the USB 3.0 cable; a compatible USB 3.0 enclosure for the M4 should be very easy to find.

All I want is the huge IOPS. A single Crucial M4, advertised at 40,000 IOPS for 4K, seems to be equivalent to 10 Equallogic PS6000XV boxes with 16 x 15K RPM disks each; that's 160 15K RPM spindles (at roughly 250 IOPS per spindle, 160 x 250 ≈ 40,000) compared to 1 SSD, absolutely ridiculous! Not to mention the cost is 1,000 times LESS, OMG!!! I CAN'T BELIEVE MY CALCULATION!!!

That's in theory. I've read that the actual M4 IOPS for a real-life 4K, 60% random / 65% read workload is about 8,000, so it's equivalent to 2 PS6000XV boxes with 16 x 15K RPM disks each; still amazing, 1 SSD beats 32 enterprise 15K RPM SAS drives in terms of IOPS. In addition, sustained IOPS over a long period still favours 15K RPM SAS over SSD: the extremely high number can only be sustained for the first few minutes and then drops rapidly to 1/10 over a period of several hours, so if you have a heavy OLTP or transactional application running 24/7, 15K RPM SAS is still the only choice.
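If you want to approximate that kind of 4K mixed workload yourself before trusting the published numbers, here is a minimal fio sketch for a Linux test box; the device name, queue depth and runtime are illustrative, and rwmixread=65 only approximates the 60% random / 65% read Iometer-style pattern (it writes to the raw device, so only point it at a blank test drive):

fio --name=4k-mixed --filename=/dev/sdb --direct=1 --ioengine=libaio --bs=4k --rw=randrw --rwmixread=65 --iodepth=32 --runtime=300 --time_based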

So who cares about those extra 2 years of warranty from Intel; if I am looking for reliability later, all I need to do is purchase another unit and make it a RAID 1.

Dell’s ESX Monitor and Reporting Tool: DPACK

By admin, December 2, 2011 8:57 am

During the Equallogic Seminar yesterday, Dell’s storage consultant introduced DPACK (Dell Performance Analysis Collection Kit) to us.

Basically it's a monitoring and reporting utility that runs over time (4-24 hours) to collect data (such as CPU, memory, IOPS, network traffic, etc.) from your physical or virtual servers.

The best thing is the footprint of this program is very small, about 1.5MB, still remember that crappy Dell Management Console 2.0?

The best part is you don't need to install it at all: simply run it, key in the server or vCenter parameters, select a time range, and wait a few hours; that's it.

The output is a single file with the extension .iokit; you are then supposed to use the online generator to turn the DPACK result into a PDF (http://go.us.dell.com/DPACK). However, the site is currently not working, as it's still not ready according to the DPACK team.

So what to do? The DPACK team was kind enough to help: I sent the result .iokit to them and Dell generated the PDF for me manually.


The Overview of DPACK and Download DPACK.

Finally, I would say it's a lite version of Veeam Monitor, and Veeam Monitor does a lot more and is also free. :)
