Category: Equallogic & VMWare (Virtualization Technology)

Battles Between the Two: Veeam Backup & Replication and Acronis vmProtect

By admin, October 10, 2011 12:36 pm

I wouldn't exactly call it professional, but somehow the words between the lines did reveal the truth.

Btw, I love both products and use both in my clients' environments; each has its advantages and disadvantages. IMO, Veeam is the leader in the virtual world, while Acronis continues to be the leader in the physical world or within each VM, as it allows more granular restores.

The whole fight is that Anton was pissed that Sergey mentioned their own method of running a VM directly from a backup image via NFS without saying a word about the fact that this feature was invented by Veeam in the first place (i.e., a copycat move from Acronis).

In the end it was Anton from Veeam vs. Sergey from Acronis. Fight start, Round 1, KO! :)

 

Sergey Kandaurov wrote:

In vmProtect we offer an alternative solution which can effectively replace replication in most scenarios: a virtual machine can be started directly from a compressed and deduped backup. It only takes several seconds, sometimes up to a minute, to create the virtual NFS share, mount it to the ESXi host, and register and power on the VM.

 

Anton responded:

You only forgot to mention that this feature was originally invented by Veeam, has been available as a part of Veeam Backup & Replication for 1 year now, and is patent-pending (as you are well aware). While it is a smart thing to copy the leader (not for long though), I think you should at least be fair to the inventor, and should have referenced us (in this topic about Veeam), instead of making it look like something you guys have unique to replace your missing replication with.

Nevertheless, back to your replication comment, I must mention that you seem to completely lack any understanding of when and how replication is used in disaster recovery. Replicas are to be used when your production VMware environment goes down, which means you cannot even run your appliance (vmProtect can only run on VMware infrastructure, and does not support being installed on a standalone physical server, am I correct?). Also, even if your appliance is somehow magically able to work after a VMware environment or production storage disaster, I would love to see it running a few dozen of a site's VMs through an NFS server, the disk I/O performance of those VMs, and how they will be meeting their SLAs. You clearly still have a lot to learn about replication and production environments if you are positioning vPower as an alternative to replication.

I would have never replied to you, because I generally avoid vendor battles on public forums, just like I avoid advertising Veeam products on these forums (instead, I only respond to specific questions, comments or remarks about Veeam). With thousands of daily visitors on Veeam's own forums, I have a place to talk about my stuff. But this one looked so very special, I just could not pass it up. First, clearly your only intent and sole purpose in registering on these forums a few days ago was to advertise your solution (which is NOT the purpose of these forums). Not a good start already; however, it would be understandable if you had created a new topic. Instead, you have chosen the topic where the OP is asking about two very specific solutions (I assume you found it by searching for Veeam), and crashed the party with blatant advertisement paired with pure marketing claims having nothing in common with reality. And that was really hard to let go, I am sorry.

 

Why, instead, have you decided to go with a 100% copycat of Veeam's patent-pending, virtualization-specific architecture, which is nowhere near what you have patented? No need to answer; I perfectly realize that this was because your patented approach simply would not work with virtualization, as in-guest logic is not virtualization-aware – so things like Storage vMotion (which is essential to finalize VM recovery) would not produce the desired results if you went that route.
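For anyone wondering what "running a VM directly from a backup" actually involves under the hood, here is a rough, purely illustrative sketch of the workflow both vendors describe. None of this is Veeam's or Acronis' real code; the classes and names below are made-up stand-ins for the backup appliance and the vSphere calls.

```python
# Illustrative simulation of the "instant recovery" workflow described above:
# expose the backup as an NFS share, mount it on ESXi as a datastore,
# register and power on the VM, then Storage vMotion it back to production.
# All classes here are hypothetical stand-ins, not any vendor's real API.

class VirtualNfsShare:
    def __init__(self, backup_file):
        self.backup_file = backup_file           # compressed/deduped backup
        self.export = "/vpower/" + backup_file   # path the appliance exports

class EsxiHost:
    def mount_nfs_datastore(self, server, export, name):
        print(f"Mounting NFS {server}:{export} as datastore '{name}'")
        return name

    def register_vm(self, datastore, vmx_path):
        print(f"Registering {vmx_path} from datastore '{datastore}'")
        return vmx_path

    def power_on(self, vm):
        print(f"Powering on {vm}")

    def storage_vmotion(self, vm, target):
        print(f"Storage vMotioning {vm} to {target}")  # finalizes the recovery


def instant_recovery(backup_file, appliance_ip="10.0.0.5"):
    share = VirtualNfsShare(backup_file)                 # 1. virtual NFS share
    host = EsxiHost()
    ds = host.mount_nfs_datastore(appliance_ip, share.export, "InstantRecovery")
    vm = host.register_vm(ds, "exchange01/exchange01.vmx")  # 2. register the VM
    host.power_on(vm)                                    # 3. run it from the backup
    host.storage_vmotion(vm, "production-datastore")     # 4. move it back later

instant_recovery("exchange01.vbk")
```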

Differences between Veeam Replication and VMware SRM

By admin, September 22, 2011 11:00 am

Just another good one from Veeam’s Anton!

Functionality-wise, the top 3 differences between Veeam v5 host-based replication and SRM v5 are:

1. Multiple replica restore points vs. single restore point
In other words, if you don't spot the corruption immediately, your replica will be of absolutely no use. Consider virus infection or application data corruption – both are very hard to spot immediately.

2. Application-aware image processing vs. basic file-level consistency
We perform an application-specific VSS backup step when grabbing the image, as well as application-specific VSS restore steps upon failover (when the replica VM is first started), as per Microsoft requirements. vSphere replication just picks up the image without looking inside or trying to meet any Microsoft requirements on how the application should be backed up and restored. The Exchange webinar goes into specific details, using Exchange as an example.

3. File-level restores from replica
You decide how important this is for you; I know many customers love it.

v6 adds much more stuff to this list (big focus on replication there), but since v6 is not generally available yet, it is not fair to use its features as differentiators (even though we are so close!)

SRM is great for storage-level replication and whole-site failover orchestration, and our replication is great for smaller-scope outages and better protection of individual, most-critical workloads thanks to multiple restore points. My take is that vSphere 5 replication is way too basic to replace our replication (especially with v6 functionality), but neither does our v6 remove the need for SRM v5 in scenarios where storage-level replication and site failover orchestration are required to meet RTO and RPO in a large-scale, site-wide disaster.

Of course, there is also a huge price difference, but that does not matter for you since you already own SRM anyway.

Finally Enterprise MLC SSD Enters Storage World!

By admin, September 17, 2011 7:28 pm

Dell added eMLC drives from SanDisk (with technology acquired from Pliant Technology) to its EqualLogic PS6100 storage arrays launched this month.

The 5.1 firmware handles tiering and load balancing that can help manage SSDs by moving data based on access patterns, Vigil said. Although EqualLogic has been offering single-level cell (SLC) SSDs in the PS6000 line since 2009, Vigil said less than 10% of EqualLogic systems ship with SSDs. “We’re seeing that our customers don’t need a lot of SSDs, but SSDs gives a nice performance boost for those who do need them,” he said.

 

Does it make sense to use "not so reliable" MLC SSD for enterprise storage? Well, it really depends on your situation. Hitachi was the first vendor to launch eMLC SSDs, and I am sure others will follow this trend. MLC costs a lot less than SLC (at least 50% less), and SSDs are known for huge read IOPS performance, so it may be a good alternative for your database and Exchange type of applications.

Of course, for those who can afford it, FusionIO is currently your only choice; think about the IOPS coming from a 5TB SSD (also based on MLC). It's just unbelievably fast, and unbelievably expensive as well!

Finally, I enjoyed reading this article related to Equallogic and SSD: "Are costly SSDs worth the money?"

He bought three SSDs to serve as top-tier storage for business intelligence (BI) applications on his SAN. The flash storage outperformed 60 15,000rpm Fibre Channel disk drives when it came to small-block reads.

However, when Marbes used the SSDs for large-block random reads and any writes, “the 60 15K spindles crushed the SSDs,” he said, demonstrating that flash deployments should be strategically targeted at specific applications.
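To see why that result is not surprising, here is a quick back-of-envelope comparison. The per-device figures are generic planning assumptions (not numbers from the article), so treat it as a sketch of the shape of the trade-off rather than a benchmark.

```python
# Back-of-envelope comparison of 3 SSDs vs. 60 x 15K spindles.
# Per-device numbers are generic assumptions for illustration only.

ssd    = {"random_read_iops": 5000, "seq_mb_s": 200, "write_iops": 1500}
hdd15k = {"random_read_iops": 180,  "seq_mb_s": 150, "write_iops": 180}

n_ssd, n_hdd = 3, 60

print("Small-block random reads (IOPS):")
print("  3 x SSD :", n_ssd * ssd["random_read_iops"])      # ~15,000 -> SSDs win
print("  60 x 15K:", n_hdd * hdd15k["random_read_iops"])    # ~10,800

print("Large-block / streaming throughput (MB/s):")
print("  3 x SSD :", n_ssd * ssd["seq_mb_s"])               # ~600 MB/s
print("  60 x 15K:", n_hdd * hdd15k["seq_mb_s"])             # ~9,000 MB/s -> spindles win

print("Random writes (IOPS):")
print("  3 x SSD :", n_ssd * ssd["write_iops"])             # early SSDs wrote slowly
print("  60 x 15K:", n_hdd * hdd15k["write_iops"])           # spindles win again
```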

 

Updated Equallogic Host Integration Tools for VMware (HIT/VE)

By admin, September 10, 2011 7:15 pm

The first release of HIT/VE (version 3.0.1) was back in April 2011, and it made storage administrators' lives a lot easier, particularly if you are managing hundreds of volumes. Well, to be honest, I don't mind creating volumes, assigning them to ESX hosts and setting the ACLs manually; it only takes a few minutes more. I feel I need to know every step is carried out correctly, which matters more to me because I can control everything. Obviously, I don't have a huge SAN farm to manage, so HIT/VE is not a lifesaver tool for me.

Today I came across a new video about the latest HIT/VE. The video clearly shows the version as 3.1.0, a new release with a VASA option added; too bad it works on vSphere 5.0 but not vSphere 4.1.

However, version 3.1 cannot be found on the EQL support web site; I guess it's only for internal evaluation.

Dell EqualLogic Host Integration Tools for VMware 3.1 — provide customers with enhanced storage visibility and datastore management as well as improved performance and availability through tight integration with VMware vSphere 5, VMware vSphere Storage APIs for Storage Awareness, VMware Storage Distributed Resource Scheduler (SDRS), and VMware vCenter Site Recovery Manager™ 5.

Dell EqualLogic Host Integration Tools for VMware 3.1, Dell EqualLogic Host Integration Tools for Microsoft 4.0, and SAN Headquarters 2.2 software are in beta now and planned for release this year.

Update Sep-13, 2011

EqualLogic has released HIT for VMware v3.1 Early Production Access. I prefer not to put any EPA product into production, as it might contain bugs that could create a disaster no one can guarantee won't happen, so I shall skip it and wait for the official release.

Partition Alignment with VMware vCenter Converter Standalone 5.0

By admin, September 2, 2011 11:47 pm

VMware has just released vCenter Converter Standalone 5.0 this week. Among all the new features, "Optimized disk and partition alignment and cluster size change" is the one I consider the most important! This means a V2V of your existing unaligned Windows Server 2003 machines is no longer mission impossible, and there is no need to pay those alternative tools that charge a premium for this simple task.
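A quick refresher on why old Windows Server 2003 guests are unaligned: the installer starts the first partition at sector 63 (a 31.5 KB offset), which straddles the storage array's block boundaries. A tiny check, assuming the 1 MiB boundary that newer tools align to, looks like this:

```python
# Check whether a partition's starting offset is aligned to a boundary.
# Windows Server 2003 starts its first partition at sector 63 (31.5 KiB),
# which is misaligned; Converter 5.0 can realign it during the V2V.

SECTOR = 512  # bytes per sector

def is_aligned(start_sector, boundary=1024 * 1024):
    """True if the partition start falls on the given byte boundary (default 1 MiB)."""
    return (start_sector * SECTOR) % boundary == 0

print(is_aligned(63))    # False -> classic Windows 2003 layout, unaligned
print(is_aligned(2048))  # True  -> 1 MiB-aligned layout (Windows 2008+ default)
```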

Also, the new version supports more 3rd-party vendors:

Convert third-party image or virtual machine formats such as Parallels Desktop, Symantec Backup Exec System Recovery, Norton Ghost, Acronis, StorageCraft, Microsoft Virtual Server or Virtual PC, and Microsoft Hyper-V Server virtual machines to VMware virtual machines.

Enable centralized management of remote conversions of multiple physical servers or virtual machines simultaneously.

Ensure conversion reliability through quiesced snapshots of the guest operating system on the source machine before data migration.

Enable non-disruptive conversions through hot cloning, with no source server downtime or reboot.
Third-party backup images and virtual machines:

Microsoft Virtual PC 2004 and Microsoft Virtual PC 2007
Microsoft Virtual Server 2005 and Microsoft Virtual Server 2005 R2
Hyper-V Server virtual machines that run Windows guest operating systems
Hyper-V Server virtual machines that run Linux guest operating systems
Acronis True Image Echo 9.1, 9.5, and Acronis True Image 10.0, 11.0 (Home product)
Symantec Backup Exec System Recovery (formerly LiveState Recovery) 6.5, 7.0, 8.0 and 8.5, LiveState Recovery 3.0 and 6.0 (only .sv2i files)
Norton Ghost version 10.0, 11.0, 12.0, 13.0, and 14.0 (only .sv2i files)
Parallels Desktop 2.5, 3.0, and 4.0
StorageCraft ShadowProtect 2.0, 2.5, 3.0, 3.1, and 3.2

An Insightful Discussion about 3PAR SAN Technology

By admin, August 30, 2011 1:50 pm

I came across an interesting thread about 3PAR's unique SAN capabilities, especially its chunklets; again, it's in Chinese. Those who participated in that thread seemed to be SAN elites in their fields.

To my surprise, I've learnt that 3PAR is still using the old PCI-X bus and has NO SAS drives. Well, I heard they are going to release a SAS version shortly, in Q4 2011, along with a new V series after the T & F.

Interestingly enough, another SMB SAN/NAS product named Drobo can somehow achieve the same unique feature as 3PAR: Drobo's BeyondRAID technology can also form a RAID set using drives of different sizes and different RPMs (i.e., mixing SSD/FC/SATA). What's more, Drobo doesn't use any dedicated hot spare for redundancy, as all the drives can devote part of their capacity to the hot-spare role; this is another similarity to 3PAR. Well, the price difference between the two is of course beyond numbers.

1. A midrange architecture positioned as high end: 3PAR is positioned as high-end storage, but its architecture somewhat resembles a midrange design in that it has the concept of controllers, up to 8 of them (the S400 supports up to 4 controllers, the S800 up to 8). The controllers are connected by a full-mesh backplane; each controller has two Intel CPUs handling control instructions, plus 3PAR's own controller ASIC that performs the data movement. With ordinary midrange storage, a LUN can only belong to one controller and the user has to manually distribute LUNs across the two controllers, whereas on 3PAR every LUN is actually processed by multiple controllers in parallel.

How to put this?
Personally I feel it may originally have been intended to replace the EVA, but in HP's current product line it sits between the XP and the EVA. Of course, a fully configured unit isn't far off HP's XP, and it is better suited to the cloud anyway; besides, Oracle used to claim that combining it with ASM would improve performance in all sorts of ways.

After all, it is HP's own product, so HP is probably more willing to push 3PAR. 3PAR actually has two series, F and T, and the number that follows is the maximum number of controllers supported: the F200 supports up to 2, the F400 up to 4, the T400 up to 4, and the T800 up to 8.

As for the controller chips, they feel a bit like LSI's: the Intel CPU only handles control information, while the real RAID calculation is done by the ASIC. So pitting 3PAR against other vendors' all-Intel controllers and concluding that 3PAR's CPUs are weak would be rather amateurish.

Of course, word is that a new 3PAR coming in Q4 will use dual-ASIC controllers, support SAS drives, and move from PCI-X to PCI-e.
Another point is true dual-active (not ALUA). To be fair, it isn't the case that every other vendor's midrange storage only goes as far as ALUA: HDS's AMS is also genuinely dual-active, while the EVA and CX are ALUA and the DS5000 is only active/passive.

2. A very distinctive drive enclosure: the way 3PAR loads drives is unique and completely different from other storage. In a 4U drive chassis there are 10 drive magazines, and each magazine holds 4 drives inserted vertically, so four drives occupy the space where other arrays would fit one; 3PAR's drive density is therefore very high. Of course this has its problems: for example, when replacing a drive you have to pull the whole magazine out each time, and if the magazine's own backplane fails it could cause data loss, and so on. But 3PAR has considered these situations and has solutions for them.

Actually 3PAR has three kinds of drive cages: DC2 and DC4 (T-Class) and DC3 (F-Class). DC2 and DC4 are as described above, but the DC3 has 16 individual drive bays (0-15), so the drives are no longer packaged four to a magazine.

3. Virtual volume management. Layer one: every disk is divided into 256MB chunklets, and since every drive enclosure is connected to two controllers, each chunklet has two access paths. Layer two: the chunklets are combined into logical disks (LD) based on RAID type and chunklet location. Layer three: one or more LDs are mapped into a virtual volume (VV), which is finally exported to the host as a LUN (VLUN). This spreads I/O across all the disks, fibre links and controllers in the system, unlike some arrays that can only achieve this with the help of host-side LVM. If you use a file system or ASM, this is very convenient. Another characteristic is that management is very easy: you just tell the system how large a LUN you want and it handles the rest automatically.

This feels a bit similar to the EVA, or the V7000 and XIV, although they are not exactly the same. XIV's pool is purely logical: it does not physically isolate disks (unlike the Disk Group in EVA; EVA has the RSS concept), so external I/O requests are served by all the disks, and the pool is only a logical division in terms of capacity.

 

For the domestic Chinese storage market, 3PAR is very much a latecomer and a relatively unfamiliar product, to the point that even its competitors' staff don't know the company has already entered the Chinese market.

3PAR was founded in 1999 by a group of founders mainly from Sun; it was originally called 3PARdata and went public in 2008. Competition in the storage field is fierce, with EMC, HDS and the like controlling most of the high-end market, so for 3PAR to break through the technical barriers and eventually pull off a successful IPO, it clearly has some real ability.

InSpire hardware architecture

The 3PAR backplane uses a full-mesh interconnect with high-speed direct links between every pair of controller nodes. Because it is a full mesh, a broken link basically only affects communication between the two directly connected nodes and has no impact on the others. Each controller node has an internal hard disk for the operating system. The controller nodes, which scale up to 8, are the core component of a 3PAR array.

By comparison, the HDS architecture uses a fully switched design (Universal Star Network), while EMC uses a direct matrix (its newer generation uses the Virtual Matrix architecture and has effectively abandoned the direct matrix). Which interconnect is better has always been a favourite point of attack between vendors; what users really need to care about is whether the performance can be fully exploited.

3PAR uses different processing chips for I/O commands and for data movement: I/O commands (metadata / control cache) run on Intel CPUs, while data movement / data cache is handled by a purpose-built ASIC.

Because a dedicated hardware ASIC does the RAID 5 XOR parity, 3PAR claims that with its third-generation ASIC its RAID 5 implementation is the fastest in the industry, and even SATA disks perform quite well. (Judging from Oracle's test data, it really is almost as fast as RAID 10.)

InForm operating system software and virtualization

3PAR's operating system is called InForm, and it was designed around layering from the start. Unlike other storage, every 3PAR disk is divided into uniform 256MB chunklets, and multiple chunklets can be combined as needed into RAIDlets (logical disks). Thanks to this unique design, 3PAR can easily mix disks of different capacities; a single RAID group can contain disks of different sizes and different rotational speeds, which other arrays cannot do. Moreover, every disk can be used, because hot-spare chunklets are scattered across different disks at a finer granularity and no dedicated hot-spare drive needs to be set aside, so space utilization is better.

One more thing: with this redundancy mechanism, 3PAR drive replacement is also unusual. You simply pull out the whole drive magazine (and a magazine holds four disks!); I was genuinely startled the first time I saw a 3PAR engineer do it.

Because the chunklets are a fixed size, I/O can be spread very evenly across many disks.
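To make the chunklet idea concrete, here is a toy model of how mixed-size, mixed-speed disks can all contribute chunklets to the same RAID set. The capacities, stripe width and allocation rule are simplified assumptions for illustration, not 3PAR's actual allocator:

```python
# Toy model of 256MB chunklets: disks of different sizes are carved into
# equal chunklets, and a "RAIDlet" picks one chunklet from several distinct
# disks, so mixed-capacity / mixed-speed drives can all join the same RAID
# set and I/O is spread across every spindle. Simplified illustration only.

CHUNKLET_MB = 256

disks = {"fc_300g_a": 300_000, "fc_300g_b": 300_000,
         "sata_1t_a": 1_000_000, "sata_1t_b": 1_000_000,
         "ssd_100g": 100_000}   # usable capacity in MB (invented numbers)

# Carve every disk into 256MB chunklets.
free = {d: [f"{d}:c{i}" for i in range(cap // CHUNKLET_MB)]
        for d, cap in disks.items()}

def build_raidlet(width=4):
    """Take one free chunklet from each of `width` different disks (one RAID stripe)."""
    members = sorted(free, key=lambda d: len(free[d]), reverse=True)[:width]
    return [free[d].pop() for d in members]

# A virtual volume (VV) is just a collection of RAIDlets / logical disks (LDs),
# so capacity grows chunklet by chunklet rather than whole-disk by whole-disk.
vv = [build_raidlet() for _ in range(3)]
for i, ld in enumerate(vv):
    print(f"LD {i}: {ld}")
```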

Anyone familiar with Oracle will notice this is very close to the idea behind ASM, so it can also integrate seamlessly with Oracle databases.

Because the software is very usable, day-to-day management and maintenance are nowhere near as complex as on other high-end storage; something like adding disks is a single command, after which the lower layers handle everything automatically. 3PAR's Thin Provisioning is also well worth a mention, being far more substantial than some vendors' pseudo thin provisioning, but space does not allow going into it here.

3PAR has many customers in the US financial and securities industries, plus Web 2.0 customers such as MySpace. With I/O response kept under 10ms, 3PAR's IOPS capability is excellent (that is the real selling point, and it is easy to see why its customers are concentrated in securities and finance). Some vendors claim even higher IOPS, but those figures come at the cost of very poor response times. That said, now that some storage vendors also support SSD in their high-end systems, the next few years remain to be seen.

A couple of years ago 3PAR promoted the so-called Utility Storage concept; now it seems to have been rebranded as agile storage. To be honest, I think "agile storage" fits quite well: creating LUNs in bulk from the 3PAR command line really is a pleasant experience. Of course, they also market cloud storage and green storage concepts, but that is beside the point.

3PAR originally addressed only the mid-to-high-end market with a single T series; it has now started paying attention to the lower midrange as well and launched the F series. The software and hardware architecture is basically unchanged, though I have not looked at it closely.

 

Of course, 3PAR also has a few weak points:

1. No SAS drive support. Everyone else uses SAS; isn't it a bit odd not to?
2. Drive replacement is a hassle; you cannot just pull a single drive the way you can on an XP or EVA (T series only, admittedly). This is not a problem for customers or sales, but it is extra work for the engineer, probably an extra 10 minutes per swap.
3. A PCI-X 133 bus.
4. The backplane may not really be an issue, but it is still a single point of failure.
5. The back end is still JBOD.
6. The hardware does not feel as impressive as the software.
7. Resizing a LUN requires an extra license, unlike the EVA, where this is a basic free feature.
8. 3PAR's technical advantage belongs to the mechanical-disk era; the future is the SSD era, and if you are chasing high IOPS and low I/O response times you can simply go all-SSD.
9. No RAID 3 support.

 

1. Compared with other high-end storage, 3PAR greatly reduces configuration time and complexity. Leaving aside how complex XP configuration is and talking only about time: every time we do a DCI, once the format kicks off we can go to sleep.

2. Compared with other midrange dual-controller storage: those are hard to scale for performance and hard to make any safer. With certain hardware failures their performance can drop dramatically; even on some high-end arrays, if a single cache board dies the performance hit is not small at all, close to 90% or more.

3. Automatic tiering.
Different types of drive, such as SSD, FC or SATA, are carved into 256MB chunklets, which are automatically grouped to form a VV. RAID used to be applied to a whole volume; now it is applied to sub-volumes. This implements tiered storage, so a single volume can span SSD, FC, SATA and other drive types.

By monitoring the IOPS of the different drive types within a VV, policy-based automatic data migration becomes possible, balancing the workload across the underlying drive tiers automatically, for example using SSD to satisfy an application's IOPS requirements. The system decides which application needs the higher IOPS and automatically migrates that data onto the SSDs.

Unlike other storage, where a redo log has to sit either on SSD or on FC/SATA disks, and it gets painful if many of my files all need high performance and would all have to go onto SSD, with 3PAR you do not have to: throw them all into the pool, add SSD when performance runs short and SATA when capacity runs short, and let the array worry about the rest. The CPUs inside a 3PAR must be exhausted; it is not an easy job. I feel this is the direction of the future, ever more transparent to the user, although once everything is SSD this feature can wave goodbye. As for the marketing hype around auto-tiering, let's not go there; it is covered at length and in detail online, and I notice many other vendors have this technology now too.
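To make the auto-tiering idea concrete, here is a minimal sketch of the policy described above: watch per-chunklet IOPS and promote hot chunklets to SSD while demoting cold ones to SATA. The thresholds and sample data are invented for illustration; real arrays use far more elaborate heuristics.

```python
# Minimal illustration of IOPS-driven sub-volume (chunklet) tiering:
# hot chunklets migrate up to SSD, cold ones down to SATA.
# Thresholds and sample data are invented for illustration.

chunklets = {               # chunklet id -> (current tier, observed IOPS)
    "c1": ("FC", 900), "c2": ("FC", 15), "c3": ("SATA", 700),
    "c4": ("SSD", 5),  "c5": ("SATA", 20),
}

HOT_IOPS, COLD_IOPS = 500, 50   # promotion / demotion thresholds (assumed)

def plan_migrations(chunklets):
    moves = []
    for cid, (tier, iops) in chunklets.items():
        if iops >= HOT_IOPS and tier != "SSD":
            moves.append((cid, tier, "SSD"))     # promote hot data
        elif iops <= COLD_IOPS and tier != "SATA":
            moves.append((cid, tier, "SATA"))    # demote cold data
    return moves

for cid, src, dst in plan_migrations(chunklets):
    print(f"Migrate {cid}: {src} -> {dst}")
```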

Treasure Hunt for Free vSphere Goodies

By admin, August 27, 2011 9:41 pm

Today is another one of my lucky days for the vSphere treasure hunt. I was able to locate three really useful items again.


VMWare Compliance Checker for vSphere

Not much to say here; simply get it and run it through your infrastructure. The assessment report mainly covers default settings that cause security holes on ESX hosts, with recommendations on how to fix them.


VMTurbo

If you have already got Veeam Monitor (Free Version), then this is the one you shouldn't miss. It's a complement to Veeam's monitoring tool: you can use the VMTurbo Community Version (Monitor + Reporter) to see details that are only available in the Veeam paid version. Furthermore, it clearly displays a lot of very useful information, such as the IOPS breakdown of individual VMs.

The exception is the recommendation engine, which is a bit too aggressive. For example, if a VM has been idle for 1 day, VMTurbo will recommend reducing its RAM size, but in reality that size is required during user sessions. Another example: VMTurbo always suggests shrinking disks that are already thin-provisioned, because it thinks they are wasting too much storage space.

The reporting feature is probably the best I've ever seen and the easiest to set up; there are no more complicated steps like in Veeam Reporter (setting up MSSQL Reporting Services alone is a pain), as VMTurbo itself is a virtual appliance. :)

Overall, it's a very good product, way better than Xangati IMO, and it has great potential to match Veeam's product. I still haven't had time to play with the Capacity Planning feature. Last time I ran VKernel's Capacity View (Free Version), the recommendation was simply a joke, as it said we could add only 2 more VMs while in reality there was a couple of hundred GB of RAM free to use; hopefully VMTurbo can give me some real insight into capacity growth prediction.

 

AVG Anti-Virus ISO
Simply use it to boot the VM, then scan and clean the whole volume or designated directories. It also allows you to update the virus definition file in real time. Just make sure the disk type is not paravirtual, as the ISO won't recognize this newer disk type.

 

Update Aug-31

Well, after extensively testing VMTurbo for 3 days, I must admit it's probably the most comprehensive monitoring and reporting product I've ever seen. I love the features that I couldn't find in Veeam's free edition, and VMTurbo probably covers every scenario combination you can think of: vCPU/MEM/IOPS/network, etc.

Of course there are some drawbacks. One of the biggest is that managing the product itself is too complicated; somehow I don't get the same easy-to-use feeling as with Veeam. I understand both have a tree in the left pane and details on the right, but somehow it just takes me a few more clicks to find what I need in VMTurbo; it would be perfect if they could improve this aspect. It actually happened that I got lost after 10 clicks drilling down for the in-depth information I was looking for.

Another thing is that I still haven't got the Planning feature working correctly. I tested on a 2-host cluster with an even load (probably 53%:47%), and the result strongly recommended shifting almost all the load from host 1 to host 2, leaving me with 2%:98%. Isn't this supposed to be evenly balanced (i.e., close to 50%:50%)?

Finally, today I understood the concept and the calculation behind deriving the available HA slots. ESX 4.1 did a great job of simplifying the whole calculation and presents the available slot number automatically. This HA slot availability number is crucial for estimating future capacity growth. The other thing I've learnt is to use a Limit instead of a Reservation for your SLA, as this will not change your default HA slot size. So simply create a resource pool (i.e., a "Jail"), assign CPU & MEM limits to it (for network, use a VLAN with capped speed; for IOPS, use SIOC), and place the abusive VMs into the "Jail". Btw, there is no chance they can ever "jailbreak" under vSphere. :)
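To make the slot idea concrete, here is the textbook HA slot-size calculation with example numbers I picked for illustration. Note how a large reservation inflates the slot size (and therefore shrinks the slot count), while a Limit leaves it untouched, which is exactly why the "Jail" pool above uses Limits:

```python
# Worked example of the classic vSphere HA slot-size calculation.
# Slot size is driven by the LARGEST reservation among powered-on VMs
# (limits do not count, which is the point made above). All numbers are
# illustrative, not from any real cluster.

DEFAULT_CPU_MHZ = 256               # slot CPU when no VM has a CPU reservation
vm_cpu_reservations = [0, 0, 2000]  # MHz; one VM carries a 2 GHz reservation
vm_mem_reservations = [0, 0, 4096]  # MB
mem_overhead = 150                  # MB per-VM overhead (rough assumption)

slot_cpu = max(vm_cpu_reservations) or DEFAULT_CPU_MHZ
slot_mem = max(vm_mem_reservations) + mem_overhead

hosts = [{"cpu_mhz": 2 * 4 * 2400, "mem_mb": 48 * 1024}] * 2   # two identical hosts

slots_per_host = [min(h["cpu_mhz"] // slot_cpu, h["mem_mb"] // slot_mem)
                  for h in hosts]
total_slots = sum(slots_per_host)
# With one host failure tolerated, the slots of the largest host are held back.
available_slots = total_slots - max(slots_per_host)

print(f"slot size: {slot_cpu} MHz / {slot_mem} MB")
print(f"slots per host: {slots_per_host}, available with 1 failover: {available_slots}")
```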

Latest Equallogic PS6100 vs PS6000 Series

By admin, August 23, 2011 2:52 pm

Dell has released the updated version of its iSCSI flagship Equallogic PS6100/PS4100 series today. Why do I say it's an updated version instead of a completely new generation? It's because of how Equallogic names its products: previously there were the PS50, PS100, PS3500, PS5000 and PS6000, so a new generation would have been a PS7000 series. After going through various documents, release notes and blog articles, I confirmed my guess was right. Best of all, you can mix and match any generation of Equallogic box in your SAN; I simply love this!


The following are some of the major changes:

1. 4GB cache vs. 2GB cache: probably won't do much in terms of IOPS gain. I know other vendors such as EMC/Hitachi have products using huge caches of up to 32GB/64GB, but real-world data shows the amount of cache isn't the deciding factor for IOPS.

2. The look of the new PS6100/PS4100: the front and back look almost identical to Dell's SMB PowerVault storage series, especially the back. It seems to me that Dell has finally decided to use the same OEM hardware to save cost since the acquisition of Equallogic back in 2007. In particular, I don't like the cheap look of the controller module on the back; it's too PowerVault MD1200/MD1220/MD3200-like, and the design of the new PS6100/PS4100 drive tray looks really ugly to me. Well, I am a person with a design background, so I have particular requirements for the look. "Look is everything!", as Agassi once said!

3. Dell claimed 60% more IOPS with the PS6100. Well, it's kind of misleading, as there are 24 drives in the PS6100 vs. 16 drives in the PS6000; simple math shows the extra 8 drives are only 50% more spindles, not counting spare drives (see the quick spindle math after this list). It's also interesting that if you use 24 x 2.5″ 10K drives in the PS6100X, the IOPS performance is probably going to be similar to the 16 x 3.5″ 15K drives of the PS6000XV. Of course, if there is a major upgrade in the PS6100 controller hardware, then the story is different; just as between the PS5000XV and PS6000XV, where the controller module hardware is vastly different, the performance also differs even with the same number of spindles and the same disk RPM.

4. All PS6100 models have shrunk from 3U to 2U with 24 x 2.5″ 15K/10K drives, which saves some rack space and power (or probably not, as there are more drives per 2U). The exception is the PS6100XV 3.5″ model, which has grown from 3U in the PS6000XV to 4U; that extra 1U is for the additional 8 drives.

5. IMO, the best product is the PS6100XS, which really combines Tier 0 (7 x 400GB SSDs) with Tier 1 (17 x 15K drives); this is the one I will purchase in our next upgrade cycle.

6. There is a dedicated management port on all PS6100XV models; it's a plus for some deployment scenarios.

7. On the VMware side, Dell said EqualLogic’s firmware 5.1 has “thin provisioning awareness” for VMware vSphere. As a result, the integration can save on recovery time and manage data loss risks. Dell has focused on VMware first given that 90 percent of its storage customers use virtualization and 80 percent of those customers are on VMware.
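As promised in point 3, here is a quick spindle-math sketch. The per-drive IOPS values are common rule-of-thumb planning figures, not Dell's numbers:

```python
# Rule-of-thumb spindle math for point 3 above. Per-drive IOPS values are
# generic planning figures (15K ~ 180 IOPS, 10K ~ 140 IOPS), not Dell's data.

ps6000xv = 16 * 180   # 16 x 3.5" 15K drives
ps6100x  = 24 * 140   # 24 x 2.5" 10K drives
ps6100xv = 24 * 180   # 24 x 15K drives

print("PS6000XV (16 x 15K):", ps6000xv, "IOPS")   # ~2,880
print("PS6100X  (24 x 10K):", ps6100x,  "IOPS")   # ~3,360 -> similar ballpark
print("PS6100XV (24 x 15K):", ps6100xv, "IOPS")   # ~4,320 -> 50% more spindles,
                                                  # not 60%, before controller gains
```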


Over the third and fourth quarters, Dell and VMware will launch:

• EqualLogic Host Integration Tools for VMware 3.1. (Great!)
• Compellent storage replication adapters for VMware’s vCenter Site Recovery Manager 5.
• And Dell PowerVault integration with vSphere 5. (Huh? PV is becoming the low cost solution VMware SAN for 2-4 hosts deployment)

Finally, I wonder what "vertical port failover" means in the PS6100 product PDF?

In fact, as usual, I am more interested in the hardware architecture of the new PS6100 series. One thing is for sure: the PS6100 storage processor is 4-core vs. 2-core in the PS4100.

Can someone post a photo of the controller module please? What chipset does it use now?

Luckily, Jonathan of virtualizationbuster  solved part of the mystery, thank you very much!

-SAS 6GB Backplane – Finally. That’s all I can say. Will we see a big difference? With 24 drives SAS 6GB + SSD’s SAS 6GB is definitely needed. There have been independent review sites of other storage hardware showing that we are getting pretty close to maxing SAS 6GB out, especially with 24 SSD’s….I will be talking to the engineering team at EQL about the numbers on this in the future, I am curious.

-Cache to Flash Controllers – Bye-bye battery-backed cache, hello flash-memory-based controllers. All I have to say is it's about time! (this goes for ALL storage vendors – hello? It's 2011!). Note: in my review of the Dell R510, the PERC H700 was refreshed at the beginning of the year to support 1GB NV Cache, which is similar to what EQL is using, and it rocks!

-2U Form Factor – Well, you can’t beat consolidation right? The more space we save in a rack the better + more drives in each array = more performance per U, per rack, per aisle. I have been a HUGE proponent of the 2.5in drive form factor for many reasons outside the room for this post. I have been running 2.5in server drives from Dell for about 4 years now and they have been rock solid.

-24 Drives vs 16 Drives – Well, you can't beat more spindles, not much to say here, but if you are a shop that thinks it NEEDS 15K drives you really need to take a look at the 2.5in 10K drives. Their performance is stellar; yes, it's slightly slower than its 15K 3.5in counterpart, but performance is pretty close, especially with 24 of these.
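On the "maxing out SAS 6Gb" point above, here is a quick bandwidth sanity check. The lane count and per-device throughput are assumptions for illustration only:

```python
# Quick sanity check on the "maxing out 6Gb SAS" point above.
# Lane count and per-device throughput are illustrative assumptions.

LANE_MB_S = 600          # ~6 Gb/s per lane after 8b/10b encoding
wide_port_lanes = 4      # a common x4 wide-port backplane link
backplane_mb_s = LANE_MB_S * wide_port_lanes    # ~2,400 MB/s

ssd_mb_s, hdd_mb_s = 300, 120                   # assumed per-device streaming rates
print("x4 6Gb SAS link :", backplane_mb_s, "MB/s")
print("24 x SSD demand :", 24 * ssd_mb_s, "MB/s")   # ~7,200 MB/s -> link-bound
print("24 x 10K demand :", 24 * hdd_mb_s, "MB/s")   # ~2,880 MB/s -> close to the link
```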

An Excellent Example About Veeam’s Incremental Backup Retention

By admin, August 22, 2011 1:50 pm

A Good Example About Veeam’s Retention

Re: Job Set for 14 Mount Points – 46 exist
Posted: Thu Jan 20, 2011 4:57 am by tsightler

Logically your requirement to keep 14 restore points can’t be met until there are at least 14 more restore points from the last VBK.

In other words, you have a full backup, then 46 incrementals, then a full backup. If Veeam deletes the first full backup, and 33 incrementals, you’ll be left with 13 incrementals and a full backup, except those 13 incrementals will be worthless since the full backup on which they were based would be deleted. Setting the number of retention points for incremental backups sets the minimum number that will be retained. The maximum number might vary based on the interval that you run full backups (either active or synthetic).

This has been explained in another thread, but here it is again for simplicity. Let’s say you run backups every day, and you want to keep 14 restore points, and you run a full every week. After one week you get this:

Week 1: F…I…I…I…I…I…I

So that’s one full, and 6 incrementals, for a total of 7 restore points, now the second week:

Week 1: F…I…I…I…I…I…I
Week 2: F…I…I…I…I…I…I

So that’s two fulls and 12 incrementals for a total of 14 restore points, now it’s day 15, and you run another full and end up with this:

Week 1: F…I…I…I…I…I…I
Week 2: F…I…I…I…I…I…I
Week 3: F

You’ve told veeam to keep 14 restore points, but it can’t delete the full backup from week one, because that would invalidate all of the incremental backups from week 1 and leave you with the following:

Week 2: F…I…I…I…I…I…I
Week 3: F

That's only 8 restore points, and you've told Veeam to keep 14. Veeam will not delete the first week's backups until the last backup of Week 3, because at that point you'd have 21 backups:

Week 1: F…I…I…I…I…I…I
Week 2: F…I…I…I…I…I…I
Week 3: F…I…I…I…I…I…I

Thus if Veeam deletes Week 1 I get:

Week 2: F…I…I…I…I…I…I
Week 3: F…I…I…I…I…I…I

14 backups, which meets the requirement.

If you want Veeam to keep the exact number of restore points, you can use reversed incrementals, since in that case the oldest backup can always be deleted. Otherwise, it's no different from how tape backup retention has always worked: the ratio of full to incremental backups determines the number of backups that have to be kept to meet a minimum retention period.
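tsightler's walkthrough is easy to verify with a little simulation. Here is a minimal sketch, assuming daily backups, a weekly full, and the rule that a full can only be deleted together with the incrementals that depend on it:

```python
# Simulate forward-incremental retention: a full (F) plus its dependent
# incrementals (I) can only be deleted as a whole chain, and only while at
# least `keep` restore points remain afterwards.

def simulate(days, keep=14, full_every=7):
    chains = []                          # each chain = list like ["F","I","I",...]
    for day in range(days):
        if day % full_every == 0:
            chains.append(["F"])         # weekly full starts a new chain
        else:
            chains[-1].append("I")       # daily incremental extends the chain
        # Retention: drop the oldest full chain only if enough points remain.
        while len(chains) > 1 and sum(map(len, chains)) - len(chains[0]) >= keep:
            chains.pop(0)
        total = sum(map(len, chains))
        print(f"day {day + 1:2d}: {total:2d} points  " +
              "  ".join("".join(c) for c in chains))

simulate(days=22)   # the Week 1 chain is only dropped after day 21, leaving 14 points
```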

A GFS (Google File System)-like Company: Nutanix

By admin, August 20, 2011 1:55 am

I came across an interesting article today: a new storage startup, Nutanix, can do what Google does for a living, using cheap hardware to build a big pool of distributed storage. Of course it's software-based RAID, something like the hardware-based (actually still software-based, plus a vendor-specific array box) NetApp Network RAID, 3PAR and Compellent cluster file systems, or even the newly launched vSphere Storage Appliance (purely software-based). The concept is very nice, but in the end IOPS is still tied to how many spindles you have.


Yes, it may be true that Nutanix's storage has no boundary, but in my own opinion only the next Google- or Amazon-like company may need this kind of technology to drive down its operational cost. For most SMBs (which Nutanix is aiming for), a low-end SAN/NAS or even a free iSCSI target like MS iSCSI Target or StarWind would do the job beautifully for their vSphere hosts. If Nutanix really targets SMBs, then they will definitely need to reduce the price and start using really cost-effective DIY servers as the selling point.

Finally, it's interesting to know that Nutanix was in fact founded by ex-Google employees, probably some of the brains behind GFS. No wonder!

2013-05-05

Nutanix's new servers let the storage drives inside multiple servers form a single, unified storage cluster, so enterprises do not need to build a separate SAN storage architecture.

Server startup Nutanix entered the Taiwan market at the beginning of this year through distributor 逸盈科技, launching new servers that use the Nutanix Distributed File System (NDFS) to let the storage drives inside multiple servers form a single clustered storage pool. Enterprises do not need to build an additional SAN, and the server nodes can share the storage space.

Nutanix regional sales manager PK Lim said the Nutanix product design originates from the design concepts of large data centers such as Google, Facebook, Amazon and Twitter.

In the past, enterprise data center architectures required compute servers and storage to be purchased separately; beyond the compute equipment, an additional SAN storage environment had to be built to share large amounts of disk capacity.

Large data centers such as Google, Facebook and Amazon, however, wanted to speed up data-access I/O so that computed data no longer had to be stored on external disks. They therefore designed technologies such as the Google File System, which lets the SSDs or hard disks inside the servers be shared by many servers and can replace the traditional enterprise SAN architecture.

Borrowing the storage architecture concepts of large data centers like Google, Nutanix designed its server line: each machine is a 2U rack-mount unit containing four compute nodes; multiple machines can be chained together through their network ports and share the storage inside every machine.


The new-style Nutanix server is 2U high, contains four compute nodes, and is distributed in Taiwan by 逸盈科技.

Nutanix products are currently divided into the NX-2000 and NX-3000 series according to specification. The high-end XN-3050 can house up to 3.2TB of SATA SSD, and each node is a two-socket design with Intel Xeon E5-2670 series processors, so a fully populated 2U XN-3050 with four nodes offers up to 8 processors and 64 cores. Each node provides two 10GbE and two 1GbE network ports. Enterprises can choose the number of compute nodes and the amount of storage inside each server according to the scale of the deployment.

Nutanix has also partnered with VMware to offer a version with the vSphere 5.1 virtualization platform preloaded on Nutanix servers, which is already on sale in Taiwan.
