How to get ESX MPIO working on StarWind iSCSI SAN

By admin, April 21, 2011 10:23 pm

Anton (CTO of StarWind) gave me a great gift last night (StarWind v5.6 Ent), thanks! I couldn’t wait to set it up and do some tests on this latest toy!

The setup is very easy and took me less than 5 minutes, probably because I had already installed the previous v4.x back in 2009. Setting it up to my own taste is a bit trickier, though, as you need to tweak starwind.cfg and understand the first few parameters, especially those under the <Connection> section.

It took me about an hour to get everything working (i.e., ESX MPIO + StarWind), as I wanted to limit the NICs to the iSCSI subnet only and change the default iSCSI port to 3268. Yes, you can use a non-default port such as 3268; in my case 3260 is already occupied by Microsoft’s iSCSI Target 3.3. I also found that the default installation opens the management and iSCSI ports (3261/3260) to the public in the firewall. You definitely want to close that off and limit NIC access in the StarWind management console as well as in the .cfg file.
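
For example, to keep those ports off the public network, one option (just a sketch, assuming the built-in Windows firewall on the StarWind box and the 10.0.8.0/24 iSCSI subnet; the rule names and the 10.0.8.100 management IP are placeholders of mine) is to remove or tighten the installer’s open rules and add scoped ones with netsh:

  rem allow the custom iSCSI port 3268 only from the iSCSI subnet
  netsh advfirewall firewall add rule name="StarWind iSCSI 3268" dir=in action=allow protocol=TCP localport=3268 remoteip=10.0.8.0/24
  rem allow the StarWind management port 3261 only from the management workstation (placeholder IP)
  netsh advfirewall firewall add rule name="StarWind Mgmt 3261" dir=in action=allow protocol=TCP localport=3261 remoteip=10.0.8.100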

So I configured two Gbit NICs on the StarWind box:

10.0.8.2:3268
10.0.8.3:3268

Each ESX host has 4 Gbit NICs on the iSCSI subnet. I added one of the target IPs, 10.0.8.2:3268, and found that ONLY 4 MPIO paths were discovered instead of the expected 8 (4 NICs × 2 target portals). All 4 paths were going through 10.0.8.2, which means the redundant portal 10.0.8.3:3268 was not being used at all, so MPIO was, technically speaking, not working. In contrast, Microsoft iSCSI Target adds the other portal 10.0.8.3:3268 automatically, so it correctly shows 8 paths.

After searching the StarWind forum with Google (yes, use the site: operator, it’s so powerful), I quickly located the problem: it lies in starwind.cfg.

You can do normal ESX multipathing in StarWind without the HA cluster feature of StarWind 5: just follow the instructions for configuring StarWind to work with Xen and uncomment the <iScsiDiscoveryListInterfaces value="1"/> line in the starwind.cfg file. This allows ESX to see all the possible paths to the iSCSI target on the server.
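
For reference, that part of starwind.cfg ends up looking roughly like this once uncommented (a sketch from memory; the surrounding lines will differ between versions):

  <!-- advertise every configured iSCSI portal during discovery, so initiators see all paths -->
  <iScsiDiscoveryListInterfaces value="1"/>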

After enabling it and restarting the StarWind service, bingo! Everything worked as expected: 8 MPIO paths showing Active (I/O). This tweak works for ESX as well, not just Xen, and in fact it is a MUST to enable it in order to see all paths.
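
A quick way to double-check from the ESX side is to list the paths in the service console (a sketch only; device names and vmhba numbers will differ on your hosts):

  # list every path in detail; each StarWind LUN should now show 8 paths,
  # 4 via 10.0.8.2:3268 and 4 via 10.0.8.3:3268
  esxcfg-mpath -l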

So within the last 3 days, I was able to add two software iSCSI SANs to my VMware environment alongside the EqualLogic, so now I virtually have three SANs to play with. I will try to test Storage vMotion between all 3 SANs and perform some interesting benchmarking on StarWind as well as Microsoft iSCSI Target.

Later, I will try to configure StarWind HA mode on a VM (which is hosted on the EqualLogic), so it will be an iSCSI SAN within another iSCSI SAN. :)

Finally Pulled the Trigger

By admin, April 21, 2011 7:10 pm

The day before yesterday I finally called the model car shop and ordered that gold 1967 Autoart Ford Mustang GT390, absolutely because my neighbour’s 1:1 maroon real thing is just too captivating!

Today I passed by the Lancel handbag shop and suddenly saw this. What a coincidence! It turns out I really do have a soft spot for supercars with good-looking rear ends. Who would have thought that even a classic American muscle car could now captivate me!


Official picture from Lancel web site:


EqualLogic PS Series Firmware Version V5.0.5 Released

By admin, April 21, 2011 4:02 pm

As usual, I will wait at least a month before applying this firmware update, and probably won’t update at all, as none of the following issues have occurred to me.

Issues Corrected in this version (v5.0.5) are described below:

In rare cases, a failing drive in an array may not be correctly marked as failed. When this occurs, the system is unable to complete other I/O operations on group volumes until the drive is removed. This error affects PS3000, PS4000, PS5000X, PS5000XV, PS5500, PS6000, PS6010, PS6500, and PS6510 arrays running Version 5.0 of the PS Series Firmware.

I thought this had been fixed in v5.0.4, whose fix list indicates “Drives may be incorrectly marked as failed.” So basically, in one case a failed drive gets marked as healthy, and in the other a healthy drive gets marked as failed. Wow, interesting! :)

• A resource used by an internal process during the creation of new volumes may be exhausted, causing the process to restart.

• If an array at the primary site in a volume replication relationship is restarted while the replication of the volume is paused, resuming replication could cause an internal process to restart at the secondary site.

• A resource used by the network management process could be exhausted causing slow GUI response.

• Volume login requests in clustered host environments could time out, resulting in the inability of some hosts to connect to the shared volume.

• A management process could restart while attempting to delete a local replica snapshot of a volume, resulting in slow array response at the primary site for that volume.

• When a management process is restarted, or a member array is restarted, a volume that is administratively offline could be brought online.

• If a member of the secondary site restarts while a volume replication is active, the group at the primary site could continue to report that the secondary site group is offline after the secondary site member is back online.

How to Extend VM partition under Linux (CentOS)

By admin, April 21, 2011 8:12 am

I often extend partitions live (without downtime) for Windows VMs using either diskpart or extpart from Dell, but extending a partition under Linux is a totally different thing; it’s a bit complex if you come from the Windows world.

  1. Increase the disk size from the vCenter GUI, then reboot the server. (Take a snapshot first.)
  2. Run ls -al /dev/sda* to find the last created partition; sda2 is the one in my case.
  3. Run fdisk /dev/sda, type n for a new partition, then p and 3 for the partition number (i.e., sda3), accept the default first and last cylinders, and finally type w to write the partition table. Finish with a reboot.
  4. Run pvcreate /dev/sda3 to create a new Physical Volume.
  5. Run vgextend VolGroup00 /dev/sda3 to add this new volume to the default Volume Group VolGroup00.
    (Note: vgextend cl /dev/sda3 in CentOS 7)
  6. Run vgdisplay to show the Free PE (the amount of free disk space) and lvdisplay to show the volume name.
  7. Extend the volume with lvextend -L +XXG /dev/VolGroup00/LogVol00; you can find the exact path of the default Logical Volume with lvdisplay. (e.g., lvextend -L +20G…)
    (Note: lvextend -L +XXG /dev/cl/root in CentOS 7)
  8. Resize the file system with resize2fs /dev/VolGroup00/LogVol00 to complete the whole process. (If everything works, remove the snapshot.) A consolidated sketch of the commands follows this list.
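
For reference, here is the whole sequence as a minimal sketch, assuming the new partition comes up as /dev/sda3, the default CentOS 5/6 names VolGroup00/LogVol00, and a 20 GB extension as in the example above; adjust names and sizes based on the output of vgdisplay and lvdisplay:

  # run after growing the disk in vCenter and creating /dev/sda3 with fdisk
  pvcreate /dev/sda3                          # initialize the new partition as an LVM physical volume
  vgextend VolGroup00 /dev/sda3               # add it to the existing volume group
  lvextend -L +20G /dev/VolGroup00/LogVol00   # grow the logical volume by 20 GB
  resize2fs /dev/VolGroup00/LogVol00          # grow the ext3/ext4 file system to match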

Update: May 15, 2017
For CentOS 7, use xfs_growfs /dev/cl/root, as CentOS 7 uses the XFS file system instead of the traditional ext3/ext4 file systems. The Volume Group and Logical Volume names have also changed to cl (was VolGroup00) and root (was LogVol00).
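
Again as a minimal sketch for CentOS 7, assuming the same hypothetical /dev/sda3 partition, a 20 GB extension, and the default cl/root names:

  pvcreate /dev/sda3               # new physical volume
  vgextend cl /dev/sda3            # extend the default volume group
  lvextend -L +20G /dev/cl/root    # grow the root logical volume
  xfs_growfs /dev/cl/root          # grow the mounted XFS file system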