Category: Network & Server (網絡及服務器)

Interesting Discovery When Customizing a VM

By admin, November 2, 2011 3:28 pm

One of my clients requires W2K3 with MSSQL 2005, .NET 4 and Silverlight 4, so I started to deploy the VM from a Windows Server 2003 Standard SP2 template. The template itself is configured with only the most basic features; there is no IIS and there are no .NET components.

I’ve encountered a few problems that really surprised me:

When I installed IIS, it asked me for both the Windows Server 2003 CD-ROM and the SP2 CD-ROM (it never did when I used a W2K3 without SP2). Where do I get that SP2 CD? After a while, I figured out the solution is simply to extract the files from the SP2 package, which lets the whole IIS installation complete.

Then I encountered another problem when installing SQL Server 2005: the installation failed with error code 1063. Google helped me again; the solution is to install .NET 2.0 first, as the SQL Server installation depends on it.

Finally, the client requires .NET 4.0 and Silverlight 4. Luckily, I came across this blog page “Configuring Windows Server 2003 with ASP.NET 4.0 while supporting ASP.NET 2.0”.

Do More Spindles Mean More IOPS? RAID10 vs RAID50

By admin, October 26, 2011 10:21 pm

I asked EqualLogic Support the following question today; it turns out I had the wrong understanding of RAID10 before.

The client environment is running VMware on an EQL PS6000XV with 16 15K RPM disks (i.e., 14 disks + 2 hot-spares).

In RAID10, there are 7 spindles, as RAID10 only utilizes half the disks, so only 7 spindles produce the actual IOPS.

In RAID50, all 14 spindles are used.

I/O pattern is mostly random (30% Read, 70% Write)

Does this mean that in this case RAID50 is faster than RAID10 in terms of IOPS, as there are 14 spindles in RAID50 compared to 7 spindles in RAID10?

 

Reply from Equallogic’s US Support

The thing to remember here is that with RAID10 it will make two writes, one to each disk.
With RAID50, it will need to make the read/write/parity to all of the member disks in the raidset.

With Read/Write sequential IO, the difference would be minimal. With an environment as you described (virtual environment) where there are many small random I/Os, RAID10 will be quicker.

RAID10 will also be quicker if a drive fails whereas the performance with RAID5/50 is poor when the raidset is degraded and a rebuild is going on.

And of course, the usable space difference between RAID10 and RAID50 is a huge consideration.
 

Then, I got an even better answer from a Dell storage expert in Hong Kong

[Chart: RAID level write performance comparison]

Actually, the spindle count in RAID10 is still 14, not 7; 7 is the capacity-oriented view. All disks are still used during I/O.

But you may notice that the above chart doesn't mention read performance. And yes, read performance for RAID 10, RAID 50 or even RAID 5 will be almost the same! The difference is mainly in writes.

RAID 50 is two RAID 5 sets striped together, so the RAID parity overhead still exists across the disks; the more disks you have in the RAID group, the more overhead and performance penalty you have!

In RAID 10, every RAID group consists of two disks only, so the overhead and performance penalty are minimal.
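To put rough numbers on this, here is a back-of-the-envelope Python sketch using the classic write-penalty model (2 back-end I/Os per front-end write for RAID10, 4 for RAID5/50). The per-disk IOPS figure is an assumed rule-of-thumb value for a 15K RPM drive, not an EqualLogic number:

```python
# Rough front-end IOPS estimate for a given read/write mix, assuming the
# classic write-penalty model: RAID10 = 2 back-end writes per front-end write,
# RAID5/50 = 4 (read data, read parity, write data, write parity).

def effective_iops(spindles, disk_iops, read_pct, write_penalty):
    raw = spindles * disk_iops                       # total back-end IOPS available
    write_pct = 1.0 - read_pct
    # reads cost 1 back-end I/O each, writes cost 'write_penalty' back-end I/Os
    return raw / (read_pct + write_pct * write_penalty)

spindles, disk_iops = 14, 180      # 16 disks minus 2 hot-spares; ~180 IOPS assumed per 15K disk
read_pct = 0.30                    # the 30% read / 70% write pattern from the question

raid10 = effective_iops(spindles, disk_iops, read_pct, write_penalty=2)
raid50 = effective_iops(spindles, disk_iops, read_pct, write_penalty=4)
print(f"RAID10 ~{raid10:.0f} IOPS, RAID50 ~{raid50:.0f} IOPS "
      f"({(1 - raid50 / raid10) * 100:.0f}% lower)")
```

With these assumptions RAID50 comes out roughly 45% lower for this write-heavy workload, which is in the same ballpark as the ~41% drop the SAN HQ RAID Evaluator predicts further down this page.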

 

Equallogic Latest Release: Firmware V5.1.2, SANHQ 2.2, HIT/VMware 3.1.1

By admin, October 25, 2011 1:28 pm

No more worries about running pre-release or pre-production versions. As usual, it's recommended to wait at least another month before upgrading to the latest firmware.

I am going to try the latest SANHQ and HIT/VMware this week for sure.

Equallogic PS Series Firmware, version V5.1.2
Enhanced Load Balancing, VSSA and Thin Provision Stun

SANHQ 2.2
Live View and 95th Percentile Reporting, as well as the RAID Evaluator (i.e., RAID10 to RAID50 performance prediction)

HIT/VMware 3.1.1
Support for VMware Version 5, VMFS 5, Thin Provision Stun, VASA, Datastore Clusters and Storage DRS

Finally, I also noticed the release of a newer version of EqualLogic Multipathing Extension Module (MEM) V1.0.1

 

Update: Oct 27, 2011

I've upgraded SANHQ to version 2.2 and immediately tried two of the most interesting features:

1. Live View

[Screenshot: SAN HQ Live View]

It provides very useful detail exactly when you need it, such as when you suddenly find IOPS is abnormal and want to identify which volume is causing the problem. To be honest, though, I would rather use VMTurbo's Usage for VM table; it makes it much easier to find the bad VMs.

2. RAID Evaluator

[Screenshot: SAN HQ RAID Evaluator]

I found this the most useful feature in my own environment, as I may expand my EqualLogic SAN capacity by changing from RAID10 to RAID50, so I definitely need to know the estimated performance decrease if I do so. It shows me 41%. Wow, that's huge!

 

I received a false-alarm email with the subject "SAN HQ Notification Alert" immediately after the SANHQ upgrade completed.

Caution conditions:

10/27/2011 9:02:44 PM to 10/27/2011 9:08:08 PM

Caution: Controller failed over in member eql (IT DID SCARE THE HELL OUT OF ME)

Caution: Firmware upgrade on member eql Secondary controller.
 - Controller Secondary in member eql was upgraded from firmware version to version V5.0.2 (R138185)
 - Condition already generated an e-mail message. If the condition persists, additional messages will be sent approximately every 6 hours.

Caution: Firmware upgrade on member eql Primary controller.
- Controller Primary in member eql was upgraded from firmware version to version V5.0.2 (R138185)

Deploying Windows 2008 R2 and Windows 7 templates with vmxnet3 renames the NIC as #2

By admin, October 21, 2011 10:09 am

This is interesting, as I have just found the solution. The hotfix is not available for public download; you will need to contact Microsoft or your server OEM to get it. Note that there are two different hotfix versions besides the 32/64-bit variants: one for Win7/Win2k8R2 with SP1, and one for systems without SP1.

Symptoms

When deploying or cloning virtual machines from a Windows 2008 R2 or Windows 7 template configured with the VMXNET3 virtual network device:

• The resulting virtual machine’s guest operating system shows the Ethernet network interface as:

- VMXNET Device #2 in Device Manager
- Local Area connection #2 in Network Properties

• The old Ethernet network interface remains present in Device Manager when Show Hidden Devices is enabled.

• The old Ethernet network interface retains its network configuration, preventing the new interfaces from reusing the previous static IP addresses. For more information, see Networking Error: IP address already assigned to another adapter (1179).

Note: Upgrading the virtual hardware from version 4 to 7, or upgrading the VMware Tools, may automatically convert Flexible or VMXNET2 virtual network interfaces to VMXNET3. This issue may be observed after such an upgrade.

I also encountered the following error when trying to open "Edit Settings" after converting the Template back to a VM.

Call “PropertyCollector.RetrieveContents” for object “propertyCollector” on vCenter Server “VC” failed.

The solution is simple: remove the VM from the inventory and add it back again.

Of course, you also need to add back the extra VMX parameters if you are using OEM Windows, and make sure devices.hotplug = false is added, which prevents the hot-plug warning in the OS.

* Just found out you don't need to do this if you add the VM to the inventory directly from the datastore, as the original VMX already contains this information.
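If you ever need to re-add the hot-plug setting by hand, the minimal Python sketch below appends it to the VMX file when it is missing. The datastore path is hypothetical, and this should only be done while the VM is powered off and not registered (otherwise the host may overwrite the file):

```python
# Append devices.hotplug = "false" to a .vmx file if it is not already set.
# Hypothetical path; run only against a powered-off, unregistered VM.
vmx_path = "/vmfs/volumes/datastore1/myvm/myvm.vmx"

with open(vmx_path, "r+") as vmx:
    lines = vmx.read().splitlines()
    if not any(line.startswith("devices.hotplug") for line in lines):
        lines.append('devices.hotplug = "false"')   # suppress the hot-plug warning in the guest OS
        vmx.seek(0)
        vmx.write("\n".join(lines) + "\n")
        vmx.truncate()
```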

What Will Happen After Veeam Maintenance is Over?

By admin, October 13, 2011 1:12 pm

Interesting that there is a 10-year glass ceiling; I don't think it was ever mentioned on their web site.

If you open a license file with notepad, you will see that there are two expiration dates: one for support and one for the product. Each license is issued for 10 years, so if your support expires, you can use an existing installation until the product expiration date is reached. You will not, however, be eligible for support and product upgrades.

Thanks,
Andrey Beck
Veeam Software Support

Equallogic MEM and Number of iSCSI Connections in the Pool

By admin, October 12, 2011 11:13 pm

The following is from the latest October Equallogic newsletter:

How will installing Dell EqualLogic Multipathing Extension Module (MEM) for MPIO on my ESXi hosts affect the number of iSCSI connections in an EqualLogic pool?

The interesting answer to this question is that for most environments the number of connections will increase, but for a few it will go down. The keys to understanding the number of iSCSI connections created by MEM are the following variables:

  • The number of ESX hosts you have
  • The number of vmkernel ports assigned for iSCSI on each ESX server
  • The number of EqualLogic members in the pool
  • The number of volumes in the EqualLogic pool being accessed from the ESXi hosts
  • The MEM parameter settings

The MEM is designed to provide many enhancements over the VMware vSphere Round Robin multipathing and standard fixed path functionality including – automatic connection management, automatic load balancing across multiple active paths, increased bandwidth, and reduced network latency.  However, before installing the MEM you should evaluate and understand how the number of iSCSI connections will change in your environment and plan accordingly. In some cases it may be necessary to adjust the MEM parameter settings from the defaults to avoid exceeding the array firmware limits.

The MEM parameters are listed below. You'll notice each parameter has a default, minimum and maximum value. What we'll go over in this note is how to determine whether the default values are suitable for your environment and, if not, what values will keep the total iSCSI connections to the EqualLogic pool below the pool's iSCSI connection limit.

Value            Default   Maximum   Minimum   Description
totalsessions    512       1024      64        Maximum total sessions created to all EqualLogic volumes by each ESXi host.
volumesessions   6         12        3         Maximum number of sessions created to each EqualLogic volume by each ESXi host.
membersessions   2         4         1         Maximum number of sessions created to each volume slice (portion of a volume on a single member) by each ESXi host.

Single EqualLogic Array in Pool

Let's start with a simple example: a four-node ESXi cluster with 4 vmkernel ports on each host. Those hosts all connect to 30 volumes in an EQL pool, and that pool has a single EQL array. For each individual ESXi host, the following variables affect how many connections MEM creates:

Input                                 Value
membersessions MEM parameter value    2
volumesessions MEM parameter value    6
totalsessions MEM parameter value     512
# of vmkernel ports                   4
# of volumes                          30
# of arrays in pool                   1

So the first step in our ESXi host connection math is to derive some subvalues from these parameters, which we'll call X, Y and Z.

X = [Lesser of (# of vmkernel ports) or (membersessions parameter value)]

Y = [# of volumes connected to by this ESXi host]

Z = [Lesser of (# of arrays in pool) or (volumesessions/membersessions)]

We then use X, Y and Z to calculate the total MEM sessions for one ESXi host using the formula below.

Total iSCSI Sessions from ESXi host = [Lesser of (X * Y  *  Z)  or (totalsessions MEM parameter value)]

So in this particular scenario X = 2 (the membersessions MEM parameter value),   Y = 30 (# of volumes connected to by this ESXi host) and Z = 1 (the # of arrays in the pool).  So for one ESXi host in this scenario we have a total of 60 iSCSI connections.  Since 60 is less than the totalsessions MEM parameter limit of 512 the MEM will create all 60 connections on this ESXi host.

Since we have 4 ESXi hosts in our environment, this EQL array will have a total of 240 (4 x 60) connections from those ESXi hosts.
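Here is a small Python sketch of that connection math. The function name and structure are mine; the parameter defaults are the MEM defaults from the table above, and the scenario numbers (4 vmkernel ports, 30 volumes, 1 array) come straight from this example:

```python
# EqualLogic MEM iSCSI session estimate per ESXi host, following the
# X * Y * Z formula in this note, capped by the totalsessions parameter.

def mem_sessions_per_host(vmk_ports, volumes, arrays_in_pool,
                          membersessions=2, volumesessions=6, totalsessions=512):
    x = min(vmk_ports, membersessions)                        # sessions per volume slice
    y = volumes                                               # volumes this host connects to
    z = min(arrays_in_pool, volumesessions // membersessions) # slices reachable per volume
    return min(x * y * z, totalsessions)

per_host = mem_sessions_per_host(vmk_ports=4, volumes=30, arrays_in_pool=1)
print(per_host, 4 * per_host)   # -> 60 sessions per host, 240 for the 4-host cluster
```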


Why would I get fewer iSCSI connections with MEM?

Let's go back to our statement that in some environments you may have fewer connections with MEM than with VMware vSphere Round Robin multipathing. Typically this will only happen if you have a single-member group and the number of vmkernel ports is greater than the membersessions MEM parameter. In our original example we had four vmkernel ports, so with standard VMware vSphere Round Robin multipathing you would have four connections to each volume. When you install MEM, it will look at the membersessions MEM parameter and change the number of connections to the default of 2 connections per volume.

You may be concerned that changing from four connections per volume to two connections per volume might have a performance impact but this is usually not true. MEM will use all four VMkernel ports for the various volumes but just not use all vmkernel ports for all volumes. In addition, the EqualLogic array connection load balancing will keep the network load on each individual array port balanced out as evenly as possible.


Add Additional EqualLogic Arrays in Pool

Let’s say your environment grows and you need more storage capacity.  Let’s add another two EqualLogic arrays to that single member pool in our original example.   The volumes in the pool will now spread out over all three arrays. That is, there is a “slice” of the volume on each of the three arrays.  Now the 3rd MEM parameter – volumesessions – comes into play.  MEM will create 2 connections (membersessions default) to each of the three arrays that have a slice of the volume. MEM is aware of what portion of each volume is on each array so these connections allow it to more efficiently pull volume data down to the ESX server. Standard ESX MPIO doesn’t know about the EqualLogic volume structure so it can’t be as efficient as MEM.

The only parameter that changes in the table from the first example is the number of arrays which increases from 1 to 3.

So let’s get our subvalues X, Y and Z for the situation where there are 3 arrays rather than just one:

X = [Lesser of (# of vmkernel ports) or (membersessions parameter value)]  = 2

Y = [# of volumes connected to by this ESXi host] = 30

Z = [Lesser of (# of arrays in pool) or (volumesessions/membersessions)]
  = Lesser of (3) or (6/2)
  = 3

So   X * Y * Z = 180 connections per ESXi host in this example.  180 is less than the totalsessions limit of 512 so MEM will create a total of 180 connections from each ESXi host. 

Since we have 4 ESXi hosts in our environment, this EQL pool will have a total of 720 (4 x 180) connections from those ESXi hosts. 720 total connections is within the limits for a PS6xxx series array but is well over the connection limit for a PS4xxx series pool. However, if any additional expansion of the environment occurs, such as adding two additional ESXi hosts, the session count will become 1080. So in some circumstances we may need to make some adjustments to the MEM parameters or the array group configuration to optimize our configuration. We'll talk about that in the next section.
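To see how the totals scale as the pool and cluster grow, the self-contained Python sketch below reruns the same arithmetic for the three scenarios in this note. The pool connection limit is a placeholder only; the real limit differs by array series (as noted above, 720 connections already exceeds a PS4xxx pool limit), so check the firmware release notes for your arrays:

```python
# Total pool iSCSI connections for a few cluster sizes and pool member counts,
# using the MEM defaults. POOL_CONNECTION_LIMIT is an assumed placeholder.
MEMBERSESSIONS, VOLUMESESSIONS, TOTALSESSIONS = 2, 6, 512
VMK_PORTS, VOLUMES = 4, 30
POOL_CONNECTION_LIMIT = 1024

for hosts, arrays in [(4, 1), (4, 3), (6, 3)]:
    x = min(VMK_PORTS, MEMBERSESSIONS)
    z = min(arrays, VOLUMESESSIONS // MEMBERSESSIONS)
    per_host = min(x * VOLUMES * z, TOTALSESSIONS)
    total = hosts * per_host
    print(f"{hosts} hosts, {arrays} array(s): {per_host}/host, {total} total, "
          f"{'ok' if total <= POOL_CONNECTION_LIMIT else 'over limit'}")
# -> 240, 720 and 1080 total connections, matching the figures in this note
```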
Planning the Number of iSCSI Connections

So if you have done your math and see that you're getting near the array firmware limits for connections, how can you alter the number of connections? There are several choices, including:

  • reduce the number of volumes per pool
  • reduce the number of ESX servers that connect to each volume
  • move some of the EQL arrays/volumes to another pool
  • reduce the membersessions MEM parameter limit on each ESX server
  • reduce the volumesessions MEM parameter limit on each ESX server
  • reduce the totalsessions MEM parameter limit on each ESX server

Remember when you’re doing your math that you also need to include any connections from non-ESXi hosts when deciding if you’re going to exceed the array iSCSI connection limit.

As we've seen, a little bit of planning will help you keep the iSCSI connections to your EqualLogic pool at an optimal level.

Battles Between the Two: Veeam Backup & Replication and Acronis vmProtect

By admin, October 10, 2011 12:36 pm

I wouldn't exactly call it professional, but reading between the lines, their words did reveal the truth.

Btw, I love both products and use both in my clients' environments; each has its advantages and disadvantages. IMO, I would say Veeam is the leader in the virtual world, while Acronis continues to be the leader in the physical world or within each VM, as it allows more granular restores.

The whole fight is that Anton was pissed that Sergey mentioned their own method of running a VM directly from a backup image via NFS without saying a word about the fact that this feature was invented by Veeam in the first place (i.e., a copycat move by Acronis).

In the end it came down to Anton from Veeam vs Sergey from Acronis: fight start, Round 1, KO! :)

 

Sergey Kandaurov wrote:

In vmProtect we offer an alternative solution which can effectively replace replication in most scenarios: a virtual machine can be started directly from a compressed and deduped backup. It only takes several seconds, sometimes up to a minute, to create the virtual NFS share, mount it to the ESXi host, and register and power on the VM.

 

Anton's response:

You only forgot to mention that this feature was originally invented by Veeam, has been available as a part of Veeam Backup & Replication for 1 year now, and is patent-pending (as you are well aware). While it is a smart thing to copy the leader (not for long though), I think you should at least be fair to the inventor, and should have referenced us (in this topic about Veeam), instead of making it look like something you guys have unique to replace your missing replication with.

Nevertheless, back to your replication comment, I must mention that you seem to completely lack any understanding of when and how replication is used in disaster recovery. Replicas are to be used when your production VMware environment goes down, which means you cannot even run your appliance (vmProtect can only run on VMware infrastructure, and does not support being installed on standalone physical server, am I correct)? Also, even if your appliance is somehow magically able to work after VMware environment or production storage disaster, I would love to see it running a few dozens of site’s VMs through NFS server, the disk I/O performance of those VMs, and how they will be meeting their SLAs. You clearly still have a lot to learn about replication and production environments, if you are positioning vPower as an alternative to replication.

I would have never replied to you, because I generally avoid vendor battles on public forums, just like I avoid advertizing Veeam products on these forums (instead, I only respond to specific questions, comments or remarks about Veeam). With thousands of daily visitors on Veeam’s own forums, I have a place to talk about my stuff. But this one looked so very special, I just could not pass it. First, clearly your only intent and sole purpose of registering on these forums a few days ago was to advertize your solution (which is NOT the purpose of these forums). Not a good start already, however, it would be understandable if you created the new topic. But instead, you have chosen the topic where OP is asking about the two very specific solutions (assume you found it by searching for Veeam), and crashed the party with blatant advertizement paired with pure marketing claims having nothing in common with reality. And that was really hard to let go, I am sorry.

 

Why, instead, have you decided to go with a 100% copycat of Veeam's patent-pending, virtualization-specific architecture, which is nowhere near what you have patented? No need to answer, I perfectly realize that this was because your patented approach simply would not work with virtualization, as in-guest logic is not virtualization-aware – so things like Storage vMotion (which is essential to finalize VM recovery) would not produce the desired results if you went that route.

Microsoft Reveals Windows Server 8 Core Features (Reposted Article)

By admin, October 3, 2011 6:34 pm

Microsoft has demonstrated several core features of Windows Server 8 for the first time: multiple virtual machines can be live-migrated simultaneously without shared storage, and remote virtual desktops now support touch operation.

At the Build conference, Microsoft released developer preview versions of its next-generation operating systems, Windows 8 and Windows Server 8, and revealed the core features of the Windows Server 8 developer preview for the first time. Besides the new built-in Hyper-V 3.0, live migration has been enhanced so that multiple virtual machines can be migrated at the same time without shared storage. It also provides more than a thousand PowerShell commands and a new server management console that can manage multiple servers from a single interface, making it easier for enterprises to orchestrate Hyper-V virtualization resources. Microsoft noted, however, that the preview features may still change.

Bill Laing, Microsoft's vice president of the Server and Cloud Division, said Microsoft has drawn on the operational experience accumulated from its public cloud services to develop the new features of Windows Server 8, positioning it as a tool for enterprises to build private clouds. The new capabilities include enhanced virtualization, simplified multi-server management, a multi-tenant network architecture, and a multimedia-capable remote virtual desktop.

Live migration of multiple VMs without shared storage
Windows Server 8 includes the new Hyper-V 3.0, which supports virtual machines with up to 32 virtual processors and 512GB of memory, and adds virtual Fibre Channel support so a virtual SAN can be built within the virtualized environment.

In addition, another virtualization improvement is the enhanced Live Migration mechanism. Previously only one virtual machine could be migrated at a time; now multiple virtual machines can be moved simultaneously without downtime, and live migration no longer requires shared storage to be set up first. As long as two servers are connected over the network, a virtual machine on the first server can be live-migrated to the other. Live migration of virtual storage has also been added.

The virtual desktop features have been greatly enhanced on the multimedia side, including remote 3D graphics, remote touch and remote USB support, allowing users to operate the server-hosted virtual desktop from different mobile devices.

System management also gains new capabilities. Besides managing multiple servers from a single console, there is a new resource monitoring interface, and fault management adds features such as guest application health monitoring, VM hardware error isolation, failover prioritization when a VM goes down, and trusted boot support.

The Windows 8 preview that Microsoft has made available for download includes preview versions of Visual Studio 11 and Team Foundation Server 11, plus .NET Framework 4.5, while the Windows Server 8 preview is only available to MSDN subscribers.

Windows Server 8 core features

● Virtualization: VMs support 32 virtual processors and 512GB of memory; the VHDX format scales up to 16TB; Hyper-V supports up to 2TB of physical memory and 160 logical processors. A single cluster supports up to 63 nodes and 4,000 VMs. Multiple VMs can be live-migrated, the new Hyper-V Replica can replicate VMs online, and live storage migration, online VHD merge and virtual Fibre Channel are supported.

● Management: over a thousand PowerShell commands, including a REST API for PowerShell commands, 150 Hyper-V PowerShell commands, storage commands, networking commands, a removable Shell and IE, multi-machine management protocols, and workflow integration between PowerShell and system functions.

● Networking: Virtual NIC monitor mode, SR-IOV networking, multi-tenant network architecture (including port ACLs and firewall), Virtual NUMA support, NIC Teaming, and more.

● Multimedia remote virtual desktop: VDI, RemoteFX, remote 3D graphics, remote touch and remote USB support.

● Monitoring: a new resource monitoring interface covering processor, networking, storage metering and storage space management. Multiple servers can be managed from a single console. Guest application health monitoring.

My Next Dream Desktop PC

By admin, September 26, 2011 12:00 am

Well, I have been using my buddy PIII for over a decade! Seems I am the odd one out now. I have been planning to buy a new desktop with the following features:

  • No more DIY; probably a Dell OptiPlex (currently the 990, not sure what comes after that) with a 3-5 year NBD warranty.
  • Intel Ivy Bridge (22nm) or an i5 above 3GHz; 2 cores is more than enough
  • SFF case with low power usage
  • 120GB SSD for the OS
  • External 3-4TB 7200RPM USB 3.0 hard disk x 2 (one for data, the other for backing up the SSD and data drives)
  • OS: Windows 8 or Windows 7 Ultimate
  • Two gigabit NICs, one for the internal LAN
  • 8GB RAM
  • Cost less than HKD5,000

Hopefully Dell will release an upgraded Optiplex shortly in December.

Differences between Veeam Replication and VMware SRM

By admin, September 22, 2011 11:00 am

Just another good one from Veeam’s Anton!

Functionality-wise, the top 3 differences between Veeam v5 host-based replication and SRM v5 are:

1. Multiple replica restore points vs. single restore point
In other words, if you don't spot the corruption immediately, your replica will be of absolutely no use. Consider a virus infection or application data corruption – both are very hard to spot immediately.

2. Application-aware image processing vs. basic file-level consistency
We perform an application-specific VSS backup step when grabbing the image, as well as application-specific VSS restore steps upon failover (when the replica VM is first started), as per Microsoft requirements. vSphere replication just picks up the image without looking inside or trying to meet any Microsoft requirements on how the application should be backed up and restored. The Exchange webinar goes into specific details, using Exchange as an example.

3. File level restores from replica
You decide how this is important for you, I know many customers love it.

v6 adds much more to this list (there is a big focus on replication there), but since v6 is not generally available yet, it is not fair to use its features as differentiators (even though we are so close!)

SRM is great for storage-level replication and whole-site failover orchestration, and our replication is great for smaller-scope outages and better protection of individual, most critical workloads thanks to multiple restore points. My take is that vSphere 5 replication is way too basic to be a replacement for our replication (especially with v6 functionality), but neither does our v6 remove the need for SRM v5 in scenarios where storage-level replication and site failover orchestration are required to meet RTO and RPO in case of a large-scale, site-wide disaster.

Of course, there is also huge price difference, but that does not matter for you since you already own SRM anyway.
