Category: Network & Server (網絡及服務器)

Oh…My…Microsoft Partner Gathering

By admin, September 4, 2012 1:03 pm

Boring as usual. It was all about Windows 8; none of the sales people there had deep knowledge of Windows Server 2012 and Hyper-V 3.0, and the ones on the panel at the front just went blah-blah-blah non-stop. The noise nearly put me to sleep, though the Microsoft promotion girls were, as always, the center of the camera flashes. :)

I ended up leaving much earlier than expected, and I am certainly not going to attend this kind of seminar again; it was a complete waste of my time.

VMware Returns to Per Processor License, vRam has GONE!

By admin, August 30, 2012 8:37 am


NO MORE vTax YEAH!!!

VMware finally has to face the truth: sales are going down, and competitor Microsoft is chasing like a dog, almost at its heels.

This week’s VMworld 2012 got off to a great start with new CEO Pat Gelsinger announcing the cancellation of the vTax, and even outgoing CEO Paul Maritz admitted the vRAM thing was indeed TOO COMPLICATED to implement.

So I am sure 1TB/2TB ESX hosts will become the norm pretty soon, not to mention that the cost per VM will be driven down greatly, making VMware much more competitive against Hyper-V or Xen.
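To put a rough number on that, here is a back-of-the-envelope sketch. The list price and the old 96GB-per-license Enterprise Plus vRAM entitlement are my assumptions for illustration, not official VMware figures:

```python
import math

# Illustrative comparison of the old vRAM model vs the restored per-CPU model.
# Assumed figures: Enterprise Plus at ~$3,495 list per license, with a 96 GB
# vRAM entitlement per license under the old (now cancelled) scheme.
LICENSE_PRICE = 3495
VRAM_PER_LICENSE_GB = 96

def licenses_needed(sockets: int, host_ram_gb: int, vram_model: bool) -> int:
    if not vram_model:
        return sockets  # per-CPU model: one license per socket, RAM is free
    # vRAM model: at least one per socket, plus enough to cover the vRAM pool
    return max(sockets, math.ceil(host_ram_gb / VRAM_PER_LICENSE_GB))

# A 2-socket host with 1 TB of RAM running 120 VMs:
for model, vram in [("vRAM", True), ("per-CPU", False)]:
    n = licenses_needed(2, 1024, vram)
    print(f"{model} model: {n} licenses, ~${n * LICENSE_PRICE / 120:,.0f} per VM")
```

Under those assumed numbers, the per-CPU model cuts the licensing cost per VM on a fully loaded 1TB host by more than 80%.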

Bravo! It’s time to buy some more VMW now!

Finally, I can’t wait to attend the latest VMware conference in Hong Kong and experience those exciting new features from ESX 5.1!

PS. Congratulations to EqualLogic, which was a finalist in the New Technology category of the VMworld 2012 Awards.

Update: Sep 10, 2012

VMware vRAM still in place for VSPPs

VMware isn’t completely “striking the word vRAM from the vocabulary of the vDictionary,” as CEO Pat Gelsinger said at VMworld. The controversial pricing model will still apply to those running vSphere in public clouds.

Participants in the VMware Service Provider Program (VSPP) will continue to be charged according to the amount of memory allocated per virtual machine (VM). Some service providers said they have no major problems with vRAM licensing and pricing because it aligns well with the public cloud’s pay-per-use business model.

“We’re fairly happy with where we are right now,” said Pat O’Day, chief technology officer of VSPP partner BlueLock LLC.

But a West Coast-based VSPP partner said VMware vRAM can put VSPPs at a disadvantage against other service providers, especially when it comes to high-density hosts.

“We’re seeing a lot of large service providers and industry players moving off of VMware and into open source … and that’s putting competitive pressure on us,” said the partner, who spoke on the condition of anonymity.

How VMware vRAM works for VSPPs

The VMware vRAM allocation limits for VSPPs are different from those abolished at VMworld, which were for business customers purchasing vSphere. For VSPPs, vRAM is capped at 24 GB of reserved memory per VM, and VSPPs are charged for at least 50% of the memory they reserve for each VM.

Also, unlike with the licenses for private organizations, VSPP vRAM is not associated with a CPU-based entitlement, and it does not require the purchase of additional licenses to accommodate memory pool size limits.
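Read together, those two rules reduce to a simple per-VM formula. Here is a minimal sketch of my interpretation of the article, not VMware’s official point math:

```python
# My reading of the VSPP rules above (illustration only): a VM is billed on
# its reserved vRAM, never less than 50% of the memory given to the VM and
# never more than the 24 GB per-VM cap.

def billable_vram_gb(allocated_gb: float, reserved_gb: float) -> float:
    return min(max(reserved_gb, 0.5 * allocated_gb), 24.0)

print(billable_vram_gb(4, 0))    # 2.0  -> the 50% floor applies
print(billable_vram_gb(64, 64))  # 24.0 -> the 24 GB cap applies
```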

The infrastructure for BlueLock Virtual Datacenters runs on Hewlett-Packard servers, packing in 512 GB of RAM each. VMware licensing is a very small percentage of BlueLock’s overall costs, O’Day said. Given the 24 GB per VM cap, the company could probably add even more RAM to each server, O’Day said.

“But that’s not worth disrupting our business model to do,” he added.

VSPPs’ VMware vRAM reactions

VMware discontinued the wildly unpopular vRAM licensing and pricing model for business customers after more than a year of discontent. Some VSPPs worried the move away from vRAM on the enterprise side would also affect their businesses, according to an email one VSPP software distributor sent to service provider clients.

“We have received feedback from quite a few partners who have expressed concern to us over the announcement that VMware is moving away from vRAM,” the distributor said in the email, which was obtained by SearchCloudComputing.com. “The question to a person seems to be, ‘How does this impact me as a VSPP partner whose billing is completely based around vRAM?’ The answer is, not at all.”

But the West Coast partner, who received that email, said the cost of memory on high-density hosts can eat into VSPPs’ margins or force VSPPs to pass the costs on to end-user customers. This partner offers services including hosted virtual desktops that don’t run on VMware View, and vRAM can cost as much as $7 per gigabyte under this model.

“If you’re trying to provision a desktop with four gigabytes of RAM … the memory alone is $30,” the partner said. “You’re at a significant disadvantage as the market tries to bring the price down.”

Furthermore, the complexity of the licensing model makes it difficult to set the monthly costs of vRAM based on usage, the partner said.

“You basically hand VMware your wallet and let them take whatever they want,” the partner added. “… The pricing is so complicated, we have no way to estimate how much we’re going to pay VMware at the end of the month.”

In a blog post discussing the continuation of vRAM for VSPPs, VMware said the vRAM model allows service providers to sell more computing capability from the same infrastructure, and to control the memory oversubscription and service profit margin through allocated vRAM delivered to customers.

VMware vRAM timeline

July 2011: vRAM licensing and pricing model announced at vSphere 5 launch.

Aug. 2011: VMware increases vRAM limits in response to complaints.

Aug. 2012: At VMworld, VMware eliminates vRAM licensing and pricing for business customers.

Best practices for the use of RAID 5 and RAID 50 on Dell EqualLogic arrays have changed

By admin, August 17, 2012 10:13 am

Just received the latest EQL news update this morning; two things caught my eye.

With the release of firmware version 5.2.5:
- Faster recovery from a drive failure, especially with the 7200 RPM drives
- Dell is also providing, for download, an update to hard drive firmware for selected 500GB, 1TB and 2TB SATA 7200 RPM hard drives. This new drive firmware will help to improve the overall life expectancy of the drives. Updating drive firmware takes approximately 15 minutes.

My view is still the same, and I’ve heard various bad stories about those bigger 7200 RPM 2TB/3TB drives on EQL or PowerVault arrays: they tend to fail MUCH more often than SAS 10K/15K. So if your application is mission critical, I would stay away from those drives and opt for IOPS instead (ie, SAS 10K/15K).

Please be advised that the best practices for the use of RAID 5 and RAID 50 on Dell EqualLogic arrays have changed. The changes to the RAID policy best practice recommendations are being made to offer enhanced protection for your data.

  • RAID 5 is no longer recommended for any business critical information on any drive type
  • RAID 50 is no longer recommended for business critical information on Class 2 7200 RPM drives of 1TB and higher capacity.

What? It seems to me that Dell EqualLogic is implicitly saying RAID 10 is the ONLY CHOICE for business-critical applications due to its superior performance and reliability, while RAID 60 is the optimal choice where space is the concern. Then again, 7200 RPM drives bigger than 1TB are not recommended for highly demanding applications, according to the above.

So can we conclude that Near-Line (NL) SATA or SAS based 7200 RPM drives are not suitable candidates for mission-critical applications after all, even with the highest level of RAID 5 protection (ie, RAID 50)? The answer seems to be YES.
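A quick back-of-the-envelope check of why big 7200 RPM drives plus parity RAID worry people (my own illustration; the URE ratings are typical datasheet values, not Dell’s numbers): the bigger the surviving drives, the more bits a rebuild must read, and the more likely an unrecoverable read error kills the rebuild.

```python
# Probability of hitting at least one unrecoverable read error (URE) while
# rebuilding a degraded RAID 5 set, assuming independent bit errors at the
# drive's rated URE rate (typical datasheet values, assumed for illustration).

def rebuild_ure_probability(drive_tb: float, surviving_drives: int,
                            ure_per_bit: float) -> float:
    bits_read = surviving_drives * drive_tb * 1e12 * 8  # full read of each survivor
    return 1.0 - (1.0 - ure_per_bit) ** bits_read

# 7-drive RAID 5 (6 survivors during rebuild): NL 7200 RPM vs 15K SAS
print(f"{rebuild_ure_probability(2.0, 6, 1e-14):.0%}")   # ~62% for 2TB NL drives
print(f"{rebuild_ure_probability(0.3, 6, 1e-16):.2%}")   # ~0.14% for 300GB 15K
```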

Dell spills its hot cache Fluid, hopes to beat off rivals (by theregister)

By admin, July 30, 2012 9:22 am

Dell recently lifted the lid on its “Hermes” project, named for the speedy chap from Greek mythology, at the Dell Storage Forum. The product, which Dell hopes will smack down EMC’s VFCache, will carry writes between flash caches in a Dell cluster to make sure they all carry the same data. That’s a big deal, so project “Hermes” has to be quick.

Caching is pretty straightforward: you put hot data from a slow storage medium in a small chunk of faster, and more expensive, storage in front of it. This means that accesses to the cached data are faster than accesses to data in the slow storage. We have flash caches in front of disk drive arrays, or in array controllers, to perform this function. They cache data to be read from the array and data to be written to the array.
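The read side of that idea fits in a few lines. Here is a minimal sketch (my illustration, with LRU eviction assumed) of a small fast tier in front of a big slow one:

```python
from collections import OrderedDict

class ReadCache:
    """Tiny read cache: a small fast tier in front of a slow backing store."""

    def __init__(self, backing: dict, capacity: int):
        self.backing = backing     # the "slow disk array"
        self.capacity = capacity   # the "small chunk of flash"
        self.fast = OrderedDict()  # insertion order doubles as LRU order

    def read(self, block: int) -> bytes:
        if block in self.fast:               # hit: serve from the fast tier
            self.fast.move_to_end(block)
            return self.fast[block]
        data = self.backing[block]           # miss: fetch from slow storage
        if len(self.fast) >= self.capacity:
            self.fast.popitem(last=False)    # evict the least recently used
        self.fast[block] = data
        return data
```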

Currently there’s lots of activity putting PCIe flash caches in servers, such as EMC’s VFCache and Fusion-io’s ioDrive products, to accelerate applications by avoiding enforced waits for data to be fetched from slow disk drive arrays. When the servers have multiple nodes there is a caching problem with data writes. When one server writes data into its cache, such as an updated price for an XYZ dongle, that data is only in its cache. If an application in another server then looks up the price of an XYZ dongle, it gets a different price.

Typically this prevents write caching being implemented across multiple nodes, because the different caches don’t necessarily hold the same data; they are not coherent. Cache coherency problems generally prevent distributed write caches across a server cluster or a group of server blades in a single chassis.

Fast disk arrays with a very high speed controller interconnect, like the VMAX’s virtual matrix, can solve that problem. But for ordinary x86 server nodes in a cluster it’s impractical, which is where Dell’s Hermes sprints into view. Carter George, Dell’s exec director for storage strategy, spoke about it in a keynote at Dell’s Storage Forum in Boston last week.

He said that a coherent distributed cache needs software technology to detect a write being made to one cache node and replicate it quickly to the other nodes (ie, so there will be a single XYZ dongle price across the cache).
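A toy version of that write-replication idea (my sketch only; RNA’s real protocol would also need ordering, locking and failure handling): every write to one node’s cache is pushed to its peers before completing, so any node answering a read sees the same XYZ dongle price.

```python
# Toy write-update coherency across per-server caches (illustration only).

class CacheNode:
    def __init__(self) -> None:
        self.data: dict[str, object] = {}
        self.peers: list["CacheNode"] = []

    def write(self, key: str, value: object) -> None:
        self.data[key] = value
        for peer in self.peers:        # replicate before acknowledging
            peer.data[key] = value

    def read(self, key: str) -> object:
        return self.data.get(key)

a, b = CacheNode(), CacheNode()
a.peers, b.peers = [b], [a]
a.write("price:xyz-dongle", 9.99)          # written on node A...
assert b.read("price:xyz-dongle") == 9.99  # ...node B sees the same price
```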

In effect, a single virtual cache is formed from the individual ones. Dell acquired its virtual cache technology by buying RNA Networks [1] in June last year. Its Memory Virtualisation Acceleration (MVX) technology can pool server DRAM or flash into single virtual pools – with Ethernet or InfiniBand carrying the linking messages to ensure coherency across the physical instances of DRAM or flash.

Naturally Dell is calling this Fluid Cache, the “Fluid” tag being applied to most storage concepts at Dell these days, and says it will move its tier 1 storage from its arrays, the Compellent and maybe EqualLogic ones, and put it into the servers. Existing Compellent storage tiering software will be developed to place hot data into the Hermes physical caches for fast read access, and the RNA technology will be used to spread one server node’s fresh write data in its physical cache across the separate flash stores to provide a single version of the truth.

These caches will physically be PCIe flash cards. In this scheme there is no shared flash box sitting between the servers and the array, as is the case with EMC’s Project Thunder [2] and XtremIO technology. Dell would say that a networked flash box would not be as fast as a coherent server flash cache scheme because it still slows data access through network latency. This will provide Dell with a response to competition from networked all-flash arrays like those from Nimbus, Pure Storage and Violin Memory.

Dell could also use its RNA technology to aggregate a clustered server’s DRAM into a single virtual pool and so enable apps running in that cluster to use an in-memory database [3].

Going further, “Hermes” could then sprint between clustered x86 controllers in a storage array and give them either a coherent virtual memory pool or a coherent virtual flash pool, or both, and enable them to handle vastly more I/O traffic, meaning support for more servers and more applications.

Dell’s RNA technology can support more than 100 server nodes in a cluster. The Hermes coherent distributed cache scheme has a first quarter 2013 target introduction date and should speed up applications significantly – especially those with a lot of write I/O which would not be so accelerated by read-only caching schemes such as VFCache. ®

IOPS, Seq. Read, Power, MTBF of Various 2.5/3.5 Inch Disk Drives

By admin, June 30, 2012 8:55 am

[Screen capture: table of IOPS, sequential read, power consumption and MTBF figures for various 2.5″ and 3.5″ disk drives]

Update: July 30, 2012

As you can see, the bigger the drive, the higher the cost per IOPS. So if you are looking for IOPS, buy 15,000 RPM disks: the 3.5″ models (300GB/450GB/600GB) or the 2.5″ 300GB, or even a 10,000 RPM 2.5″ 300GB, will do the job.
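For reference, spindle IOPS can be estimated from the mechanics alone. A common rule of thumb (my sketch, with assumed typical seek times, not figures from the capture above) is one I/O per average seek plus half a rotation:

```python
# Rule-of-thumb random IOPS for a single spindle: 1 / (avg seek + half rotation).
# Seek times below are typical datasheet values, assumed for illustration.

def spindle_iops(rpm: int, avg_seek_ms: float) -> float:
    half_rotation_ms = 0.5 * 60_000 / rpm   # half a revolution, in ms
    return 1000.0 / (avg_seek_ms + half_rotation_ms)

for label, rpm, seek_ms in [("15K SAS", 15_000, 3.4),
                            ("10K SAS", 10_000, 4.2),
                            ("7.2K NL-SAS/SATA", 7_200, 8.5)]:
    print(f"{label}: ~{spindle_iops(rpm, seek_ms):.0f} IOPS")  # ~185 / ~139 / ~79
```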

[Screen capture: cost-per-IOPS comparison across drive types and capacities]

Equallogic: Snapshots are not Backups

By admin, June 30, 2012 8:02 am

Got this interesting fact from the latest EqualLogic Customer Connection (June 2012).

So in the end you still need a proper backup solution from vendors like Veeam, PHD, Symantec or Acronis.

Question of the Month

Snapshots are not Backups

Q: Are Snapshots a replacement for backups?

A: Snapshots are point in time copies, stored with the original volume in a SAN. Snapshots provide improvements to backup operations in open file handling, and the ability to offload the backup copy operation to a different server than where the application is running.

While snapshots provide a fast and efficient way to create copies of SAN volumes, the snapshots are still stored with the SAN volumes. This means that both the primary application data and its snapshots are vulnerable to catastrophic loss scenarios such as fire, flood and earthquake. Administrative mistakes can likewise lead to the loss not only of the primary data but also of its snapshots.

Snapshots are also inherently temporary – while an administrator can ask the system to keep many snapshots, they typically have a life span measured in weeks to months, rather than the years typical of backup archives. Depending on your system policy, they may also be deleted automatically to make room for newer snapshots.

Well-designed true backup environments ensure that copies of data are regularly created and stored separately from the primary volume data. Typically this backup data is stored in a secure location away from your primary storage devices and retained for months to years, depending on your organization’s policies. In this configuration, the backup data can be used both for small data recovery operations, such as a user accidentally deleting a file, and to recover from a catastrophic failure such as a fire or flood in the data center.

Dell EqualLogic strongly recommends that customers design and run a comprehensive backup environment, and consider utilizing snapshots as part of this environment to improve backup operations.

Snapshot Management in VMware Virtualized Environments (reposted article)

By admin, June 23, 2012 10:20 am

Snapshots let you create restore points for virtual machines and are very convenient, but they must be used and managed carefully to avoid creating additional problems.

For users of the VMware virtualization platform, the snapshot is an extremely handy tool: it can create multiple restore points for a virtual machine (VM) and, when necessary, roll the VM back to the state of any chosen restore point.

Many users therefore treat snapshots as a form of VM backup. For example, before applying a major update to a VM’s Guest OS, they take a snapshot as a restore point, so that the VM can be rolled back to its pre-update state if anything goes wrong. Many VMware services and features also rely on snapshots under the hood, such as the VCB and VADP backup mechanisms, Storage vMotion, and VMware Lab Manager.

How snapshots work in a VMware environment
VMware snapshots are of the copy-on-write type. When a snapshot is taken, the system creates a new file called delta.vmdk (the actual file name is usually vmname-00001.vmdk). From then on, all of the VM’s write I/O, both new and changed data, is directed into the newly created delta.vmdk file instead of the original vmdk file, which becomes read-only.

Each time another snapshot is taken, a new delta.vmdk file is created, the previous snapshot’s delta.vmdk is made read-only, and the new delta.vmdk receives the VM’s new writes.

When creating a snapshot, two optional features can be enabled: (1) snapshot the VM’s memory along with its disks; (2) use the quiesce function of the Guest OS file system in the virtual machine.

If you choose to include the VM’s memory in the snapshot (the default when the VM is powered on at snapshot time), the ESX host dumps the contents of the VM’s memory to disk. The drawbacks are a longer snapshot operation and a larger vmsn file recording the VM’s state.

The other option, a quiesced snapshot, requires VMware Tools to be installed in the VM. When selected, the Sync driver in VMware Tools puts the VM’s file system into a quiesced (frozen) state: application writes to the file system are stopped and buffered or cached data is flushed to disk, yielding a consistent snapshot suitable for backup jobs.

Reverting a VM to a snapshot
There are two ways to roll a VM back to a snapshot state:

The first is to right-click the VM and choose Revert to Current Snapshot from the Snapshot menu, which can only restore the VM to the most recent snapshot. The second is to use the Snapshot Manager in the same menu: pick any snapshot to return to from the tree view and click the Go to button.

After the revert, the VM runs on top of the selected snapshot, that is, writes go to that snapshot’s delta.vmdk file, and the current VM state is discarded.

The hard part of snapshot management: deletion

Snapshots bring plenty of convenience, but as the saying goes, it is easier to invite a god in than to send him away. Because copy-on-write snapshots form a chain and depend on one another, you cannot delete a snapshot file at will without rendering other snapshots unusable. In VMware’s case, everything the VM adds or changes after a snapshot is written into the new snapshot file; deleting that file outright would lose all of those additions and changes, which is clearly unacceptable.

So although VMware has a delete-snapshot function, what is actually deleted is the restore point, not the data contained in the snapshot file. After the delete, that restore point is gone, but the snapshot’s data is first consolidated into the previous snapshot rather than discarded. Deleting a snapshot is therefore really a “merge first, then delete” operation, which preserves the integrity and availability of the VM’s data.

Suppose three snapshots, snap1, snap2 and snap3, were taken in that order. Deleting snap2 makes the system first merge snap2’s data into snap1 and then remove snap2. Choosing to delete all of a VM’s snapshots makes the ESX host merge each snapshot into the one before it in turn; only after the data has been folded back into the original vmdk are all the snapshot files deleted.
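To make the “merge first, then delete” behavior concrete, here is a toy copy-on-write chain (my illustration, not VMware’s actual on-disk format): writes always land in the newest delta, reads fall back through the chain, and deleting a snapshot folds its delta into its parent.

```python
# Toy copy-on-write snapshot chain (illustration only, not the vmdk format).

class SnapshotChain:
    def __init__(self, base: dict):
        self.layers = [base]  # layers[0] = original vmdk, layers[-1] = active delta

    def snapshot(self) -> None:
        self.layers.append({})  # new delta; earlier layers become read-only

    def write(self, block: int, data: bytes) -> None:
        self.layers[-1][block] = data   # all writes go to the newest delta

    def read(self, block: int) -> bytes:
        for layer in reversed(self.layers):  # newest data wins
            if block in layer:
                return layer[block]
        raise KeyError(block)

    def delete_snapshot(self, i: int) -> None:
        """Delete = consolidate: merge delta i into its parent, then drop it.
        The merge temporarily needs extra space on the order of layer i's size."""
        self.layers[i - 1].update(self.layers[i])
        del self.layers[i]

vm = SnapshotChain({0: b"base"})
vm.snapshot(); vm.write(0, b"after-snap1")   # snap1's delta
vm.snapshot(); vm.write(1, b"after-snap2")   # snap2's delta
vm.delete_snapshot(1)                        # snap1's delta folded into the base
assert vm.read(0) == b"after-snap1" and vm.read(1) == b"after-snap2"
```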

This procedure, however, creates two problems:
● Consolidation takes time
If a snapshot is kept for a long time without being deleted, its file keeps growing as data is added and changed, and the merge performed during deletion can take a very long time, sometimes measured in hours.

● Large temporary space requirements
Deleting snapshots requires enough temporary disk space. To delete snap2, the original snap2 file is only removed after the snap1+snap2 merge completes, so the minimum temporary space needed equals the size of snap2. If a VM has been snapshotted many times and you delete all snapshots, the system merges them in reverse order, from newest to oldest, and only deletes the files afterwards, so the accumulated temporary space can be enormous, possibly exceeding the datastore’s free space and causing the deletion to fail.

Worse, before ESX 4.x, the snapshot information was removed from Snapshot Manager as soon as the delete was issued, so a failed deletion could not be seen from the vSphere Client interface. Users often assumed the snapshots were gone and the VM was running on its original vmdk again, when in fact, because the deletion had failed, the VM was still running on the latest snapshot. The problem was typically discovered only after the snapshot had grown huge and eaten a large share of the datastore, at which point fixing it became very troublesome (sometimes the VM had to be migrated to a new datastore with enough space before the delete and merge could be re-run).

There are two ways to avoid this. The first is not to use the “Delete All Snapshots” function, but to delete individual snapshots manually, one at a time. Each snapshot is then removed right after its own merge, so far less temporary space is needed, although the procedure is more tedious.

The second is to upgrade to ESX 4.0 Update 2 or later, where the consolidation logic was improved. First, instead of merging backwards from the newest snapshot, it merges forwards, starting from the oldest one. Second, each snapshot is deleted immediately after it has been merged, so the temporary space requirement drops dramatically.

In addition, from vSphere 4.x onwards an alarm can be configured in vCenter so the vSphere Client reminds users that a VM is running on a snapshot rather than on its snapshot-free original vmdk, avoiding the old problem of overlooked failed deletions.

vSphere 5.0 improves things further: it adds an alert for failed snapshot removal, and a new Consolidate option in the snapshot menu that re-runs the consolidation after a deletion has failed.

Compared with a proper backup job, snapshots are quick to take and need no extra deployment, which makes them very convenient, so some users press them into service as backups. VMware snapshots, however, cannot serve as real backups.

First, snapshots can only cope with VM-level or Guest OS-level failures. Because a snapshot lives on the same physical device as the VM, a failure at the physical device level takes the snapshot down together with the VM. A real backup product stores the backup copies on a separate storage device and can therefore survive a physical device failure.

Second, the point of backup is the relatively long-term retention of copies of the original data from multiple points in time. Normally a backup runs at least once a day, and some of the copies are often kept for months or even longer. Using snapshots as backups, keeping them too long or keeping too many versions, causes all sorts of trouble, from snapshots consuming too much datastore space to deletion becoming difficult later on.

The better approach is to treat snapshots as a temporary means of creating restore points, and to deploy a dedicated backup product to meet actual backup needs.

Text: 張明德

Are You Looking for Help? Got Any Interesting VMware Project for Me?

By admin, June 14, 2012 7:18 pm

Currently I am also available as a freelance virtualization consultant in Hong Kong, with more than 8 years of extensive hands-on experience, from initial Case Analysis to Architecture Design, actual Site Implementation and Post Maintenance.

If your company is looking for a guru to build your next virtualization project (VMware/EqualLogic/Dell centric), or simply an additional helping hand, please do not hesitate to contact me.

The main reason is that I am kind of bored with my existing client projects: most of them have already been implemented, and what’s left is just maintenance, so I am looking for some fresh challenges. Oh… and I prefer not to travel if possible. :)

Dell Equallogic PS-M4110 and Firmware 6.0

By admin, June 12, 2012 12:18 pm

Fourteen drives in total per EqualLogic PS-M4110 blade: a nice way to utilize the existing investment (ie, M1000e blade enclosures). As usual, there are XV and XS (for SSD) models to choose from. The PS-M4110 starts shipping in August 2012.

Some of the new EqualLogic Firmware 6.0 features:

  • Synchronous Replication – real-time synchronization
  • Volume Unmap – thin provision reclaim; I would say this is the MOST IMPORTANT FEATURE of all! We have waited for it for so long (see the sketch below).
  • Volume Undelete – deleted volumes are preserved for 1 week or until the space is required
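For anyone new to thin-provision reclaim, here is a minimal conceptual sketch (my illustration only, not EqualLogic’s firmware): a thin volume backs logical blocks with physical pages on first write, and an unmap from the host returns freed pages to the shared pool.

```python
# Toy thin-provisioned volume (illustration only): pages are allocated on
# first write and handed back to the shared pool when the host unmaps them.

class ThinVolume:
    def __init__(self, pool: set[int]):
        self.pool = pool                   # free physical pages shared by the group
        self.mapping: dict[int, int] = {}  # logical block -> physical page

    def write(self, lba: int) -> None:
        if lba not in self.mapping:   # allocate only on first write
            self.mapping[lba] = self.pool.pop()

    def unmap(self, lba: int) -> None:
        """What Volume Unmap enables: reclaim blocks the host no longer uses."""
        page = self.mapping.pop(lba, None)
        if page is not None:
            self.pool.add(page)       # immediately reusable by other volumes

pool = set(range(1000))
vol = ThinVolume(pool)
vol.write(0); assert len(pool) == 999
vol.unmap(0); assert len(pool) == 1000  # e.g. after the guest deletes a file
```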

To me, EqualLogic really is a great product, much like Apple’s iOS: the coming iOS 6.0 will still support even the 3GS. With EQL, you can always enjoy the latest features by running the latest firmware on your old EQL boxes, provided, of course, you have a maintenance plan.

Finally, EQL will also release SANHQ 2.5 in fall 2012.

Update: July 30, 2012

Dell customers not so keen on Blade Arrays concept

BOSTON — Dell Inc. still has to convince customers that the storage blades rolled out this week at Dell Storage Forum 2012 are a good idea.

When asked about the iSCSI EqualLogic PS-M4110 Blade Arrays that Dell launched Monday, several customers said they’re unsure about the storage blade concept. A few said integrating storage, servers and networking into a condensed system is too new of an approach, while others wondered if it would have the same performance as traditional rack-mounted storage.

“The performance is not there to stick everything in a single box. You would need to stack a ton of them together just to get decent high performance, especially if you have high I/O loads,” said Tibor Pazera, a senior technology specialist at Irvine, Calif.-based Advantage Sales and Marketing LLC and a Compellent customer. “Convergence is nice for ease of deployment, but there’s not enough spindle capacity to maintain high I/O performance.”

Other customers characterized the Blade Arrays as unproven.

“We get concerned about risk, partly because it’s new,” said Alex Rodriguez, vice president of systems engineering and product development at Cleveland-based IT services company Expedient Communications. “If a blade chassis has a failure, it’s gone.”

A virtualization and storage architect at a major New York-based retailer, who asked not to be identified, said he “dabbled with the idea, but it’s a bit too new for us.”

Equallogic Hardware Architecture Exposed! Difference between PS6100 and PS4100

By admin, May 23, 2012 10:51 pm

This article from Mainland China reveals the secrets of the latest EqualLogic hardware architecture.

The PS6100 uses a 64-bit RISC CPU, the NetLogic XLS616, which has 4 MIPS cores with 4 threads per core at 1GHz (ie, 16 threads per CPU in total) and 1MB of L2 cache.

In contrast, the PS4100 uses its younger brother, the NetLogic XLS608, which has 2 MIPS cores with 4 threads per core at 800MHz (ie, 8 threads per CPU in total) and 1MB of L2 cache, so the processing power is cut by at least 50%.
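A quick sanity check of that figure (my own arithmetic, assuming throughput scales roughly with threads times clock, ignoring IPC and memory differences):

```python
# Crude aggregate-throughput comparison: cores x threads x clock (GHz).
ps6100 = 4 * 4 * 1.0   # XLS616: 16 threads at 1.0 GHz -> 16.0
ps4100 = 2 * 4 * 0.8   # XLS608:  8 threads at 0.8 GHz ->  6.4
print(f"PS4100 is ~{ps4100 / ps6100:.0%} of the PS6100")  # ~40%, ie a 60% cut
```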

Everything else is the same apart from the processor speed and core count above. It is also interesting to learn that the controllers protect cache data with a supercapacitor instead of a traditional BBU.

