Windows Server Backup in Windows Server 2008 R2

By admin, October 8, 2010 1:17 pm

I’ve been using Windows Server Backup in Windows Server 2008 R2 for almost a month and found it can do everything Acronis True Image Server (TIS) does; in my own opinion, there is really no need to buy TIS in the future.

See what WSB can offer you (my requirement list); there is a quick wbadmin sketch right after the list:

  • WSB works at block level, so taking the snapshot is very fast, and so are backup and restore.
  • Full server bare-metal backup.
  • Full backup the first time and incrementals afterwards; excluding files during backup; compression.
  • System State backup is integrated with AD, so you won’t end up restoring a crash-consistent state where the server comes back but AD cannot be started.
  • Individual folder/file restore.
  • Backup to network shared folders (with the limitation that you cannot keep incremental copies, only ONE copy; the later backup over-writes the previous one), which does suck badly! However, I don’t use a network folder to store my backups, so it’s fine for me.
  • Maximum of 64 copies (in my case that’s almost 2 months, since I only have 1 scheduled backup running), or whatever your backup disk size allows.
  • The backup copies are hidden from the file system; in TIS you need to create a partition (the Acronis Secure Zone) to hide the backup copies.
  • WSB can back up Hyper-V VM images as well.
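
For reference, here is a rough sketch of how a scheduled backup like mine can be set up from the command line with wbadmin instead of the GUI wizard (the E: target, the 21:00 schedule and the excluded C:\Temp folder are example values only, adjust them for your server):

  :: List the local disks so you can pick a dedicated backup target
  wbadmin get disks

  :: Schedule a daily bare-metal backup of all critical volumes to E:, excluding a
  :: scratch folder; the first run is full, later runs are incremental
  wbadmin enable backup -addtarget:E: -schedule:21:00 -allCritical -exclude:C:\Temp -quiet

  :: Check which backup versions are kept on the target (up to 64)
  wbadmin get versions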

Best of all, Windows Server Backup (WSB) in Windows Server 2008 R2 IS FREE! TIS is over USD 1,200, and I don’t need features like Convert to VM, Universal Restore or Central Management, so WSB works perfectly for a standalone server.

Finally, you may ask: what about a rescue bootable DVD/CD-ROM? You don’t get a dedicated one, so what? Yes, the Windows Server 2008 DVD itself is your ultimate rescue bootable DVD, fair enough? :)
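
If you prefer the command line over the “System Image Recovery” wizard on that DVD, a bare-metal restore looks roughly like this, assuming the backup sits on a local E: drive (the drive letter and the version identifier are placeholders):

  :: Boot the Windows Server 2008 R2 DVD, choose "Repair your computer",
  :: then open a command prompt in the recovery environment
  wbadmin get versions -backupTarget:E:

  :: Restore the whole server from the chosen version identifier
  wbadmin start sysrecovery -version:10/08/2010-21:00 -backupTarget:E: -restoreAllVolumes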

A Possible Bug in VMware ESX 4.1 or EQL MEM 1.0 Plugin

By admin, October 8, 2010 12:37 am

This week I encountered a strange problem during redundancy testing; all paths between our network switches, servers and EQL arrays have been set up correctly with redundancy. Each ESX host iSCSI VMKernel (or pNIC) has 16 paths to the EQL arrays, and we tested every single possible failure scenario and found ONLY ONE that doesn’t work: when we power OFF the master PC5448 switch, we are no longer able to ping the PS6000XV.
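
For context, this is roughly how the iSCSI binding and path count were verified from the ESX 4.1 service console (vmhba33 is a placeholder for the software iSCSI adapter name on your hosts):

  # List the VMkernel NICs bound to the software iSCSI adapter
  esxcli swiscsi nic list -d vmhba33

  # List all storage paths grouped by device; each EQL volume should show
  # its expected number of paths (16 per host in our setup)
  esxcfg-mpath -b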

After two days of troubleshooting with local Pro-Support as well as US EQL support, we have narrowed the problem down to “A Possible Bug in VMware ESX 4.1 or EQL MEM 1.0 Plugin”.

During the troubleshooting, I found another EqualLogic user in Germany having a similar (though not identical) problem (see “Failover problems between esx an Dell EQL PS4000”); he’s using a PS4000 and only has two iSCSI paths, and his problem is more serious than ours.

Oct 5, 2010 3:16 AM
Fix-List from v5.0.2 Firmware:
iSCSI Connections may be redirected to Ethernet ports without valid network links.

Also, his problem is similar in that whatever iSCSI connections are left on the LAG won’t get redirected to the slave switch after shutting down the master switch. I’ve got 4 paths while his PS4000 has two, so my iSCSI connections survived because there is an extra path to the slave switch, but somehow vmkping doesn’t work.

And if you look at comment #30:

Jul 27, 2010
Dell acknowledged that the known issue they report in the manual of the EqualLogic Multipathing Extension Module is the same one I get.

They didn’t open a ticket with VMware for now, but they will, after some more tests.

I think this issue has been there since ESX 4.0. In VI3 they used only one VMkernel for swiscsi with redundancy at layer 1/2, so there it should not be the case.

My case number for this issue at vmware is 1544311161, the case number at dell is 818688246.

If VMware acknowledges this as a bug in 4.1 and doesn’t have a workaround, we will go with at least 4 logical paths for each volume and hope that at least one path is still connected after switch1 fails, until they fix it.
Finally, it could also be something related to the EQL MEM Plugin for ESX, which we have installed (comment #29 on page 2).

It indicates there is a known issue: when a network link fails (which could be caused by shutting down the master switch) and the physical NIC with the failure is the only uplink for the VMKernel port used as the default route for the subnet, several types of kernel network traffic are affected, including the ICMP pings which the EqualLogic MEM uses to test for connectivity on the SAN.
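
As a quick sketch, these are the service console checks I run on ESX 4.1 to see why the ICMP ping is tied to a single VMKernel port (the group IP 10.10.10.100 below is an example address only):

  # List the VMkernel ports with their IPs; vmk2 is the first sw-iSCSI port on my hosts
  esxcfg-vmknic -l

  # Show the VMkernel routing table; all iSCSI vmks sit on the same subnet, so
  # kernel-generated traffic like vmkping leaves via the default port for that subnet
  esxcfg-route -l

  # vmkping the EQL group IP; ESX 4.1 has no option to pick the outgoing vmk, so if
  # vmk2's uplink is down this fails even though the other iSCSI vmks are still fine
  vmkping 10.10.10.100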

Jul 23, 2010

from the dell eql MEM-User_Guide 4-1:

Known Issues and Limitations
The following are known issues for this release.

Failure On One Physical Network Port Can Prevent iSCSI Session Rebalancing
In some cases, a network failure on a single physical NIC can affect kernel traffic on other NICs. This occurs if the physical NIC with the network failure is the only uplink for the VMKernel port that is used as the default route for the subnet. This affects several types of kernel network traffic, including ICMP pings which the EqualLogic MEM uses to test for connectivity on the SAN. The result is that the iSCSI session management functionality in the plugin will fail to rebuild the iSCSI sessions to respond to failures of SAN changes.

Could it be the same problem I have? So they (DELL/VMware) already know about this problem?

Aside from this, it looks like the Dell MEM only makes sense in setups with more than one array per PS group, because the PSP selects a path to an interface of the array where the data of the volume is stored. And it has a lot of limitations. We only have one array per group for now, so I think I’ll skip this.

I still don’t understand why there is no way to prevent the connections from going through the LAG in the first place; it should be possible to prefer direct connections…

My last reply to EQL Support today:

Some updates; maybe you can pass them on to L3 for further analysis.

The problem seems to be due to an EQL MEM version 1.0 known issue (User Manual 4-1).

==================================================
Failure On One Physical Network Port Can Prevent iSCSI Session Rebalancing

In some cases, a network failure on a single physical NIC can affect kernel traffic on other NICs. This occurs if the physical NIC with the network failure is the only uplink for the VMKernel port that is used as the default route for the subnet. This affects several types of kernel network traffic, including ICMP pings which the EqualLogic MEM uses to test for connectivity on the SAN. The result is that the iSCSI session management functionality in the plugin will fail to rebuild the iSCSI sessions to respond to failures of SAN changes.
==================================================
I’ve performed the test again and it showed your prediction is correct: the path is always fixed to vmk2.

1. I restarted the 2nd ESX host and found the C0 to C1 paths showed up correctly.
2. Then I rebooted the master switch; from the 1st or 2nd ESX host, I cannot ping vCenter or vmkping the EQL or the iSCSI vmks on other ESX hosts, and this time I CANNOT PING OTHER VMKernel ports such as VMotion or FT either.
3. But the VMs did not crash or restart, so underneath, all iSCSI connections stayed online; that’s good news.
4. After the master switch came back, under the storage paths on the ESX hosts, I see those C4, C5 paths generated.

Could you please confirm with VMware and EQL whether they have this bug in ESX 4.1? (i.e., the path is somehow always fixed to vmk2)

I even did a test by switching off the iSCSI ports on the master switch one by one; the problem ONLY HAPPENS when I switch off the ESX host’s vmk2 port (which is the first iSCSI vmk port, i.e., the default route for the subnet?).

It confirmed that vmkping IS BOUND to the 1st iSCSI vmk port, which is vmk2, and this time all my vmkpings died, including VMotion as well as FT.

The good news is that underneath the surface everything works as expected: iSCSI connections are fine, VMs are still running and can be VMotioned around, and FT is also working fine.
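
A rough way to double-check that from the service console while vmkping is failing (again, vmhba33 is a placeholder, and I’m quoting the 4.x esxcli swiscsi syntax from memory, so verify it on your build):

  # Paths per EQL volume: the surviving paths via the slave switch should still be listed
  esxcfg-mpath -b

  # Detailed path state: paths through the dead master switch show as dead, the rest stay active
  esxcfg-mpath -l | grep -i state

  # iSCSI sessions on the software initiator should still be present
  esxcli swiscsi session list -d vmhba33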

We do hope VMware and EQL can release a patch soon to fix this vmkping problem: vmkping always goes out through the default vmk2 (the 1st iSCSI VMKernel in the vSwitch) and never through any other vmk, so when the master switch dies, vmkping dies along with vmk2, since vmk2 is what it uses to send ICMP pings to other VMKernel IP addresses.