Category: Equallogic & VMware (Virtualization Technology)

How to get ESX MPIO working on StarWind iSCSI SAN

By admin, April 21, 2011 10:23 pm

Anton (CTO of StarWind) gave me a great gift last night (StarWind v5.6 Enterprise), thanks! I couldn't wait to set it up and run some tests on this latest toy!

The setup is very easy and took me less than 5 minutes, probably because I had already installed the previous v4.x back in 2009. Setting it up to my own taste is a bit trickier, though, as you need to tweak starwind.cfg and understand the first few parameters, especially those under the <Connection> section.

It took me about an hour to get everything working (i.e., ESX MPIO + StarWind), as I wanted to limit the NICs to the iSCSI subnet only and change the default iSCSI port to 3268. Yes, you can use a non-default port such as 3268; in my case 3260 was already occupied by Microsoft iSCSI Software Target 3.3. I also found that the default installation opens the management and iSCSI ports (3261/3260) to Public in the firewall; you definitely want to disable that and limit NIC access in the StarWind management console as well as in the .cfg file.

So I configured two Gbit NICs on the StarWind box:

10.0.8.2:3268
10.0.8.3:3268

On each of the ESX hosts there are 4 Gbit NICs on the iSCSI subnet. I added one of the target IPs, 10.0.8.2:3268, and then found that ONLY 4 MPIO paths were discovered instead of 8; all 4 were using the 10.0.8.2 path, which means the other redundant portal 10.0.8.3:3268 was not being used at all, so technically speaking MPIO was not working. In contrast, Microsoft iSCSI Target adds the other portal 10.0.8.3:3268 automatically, so it correctly shows 8 paths.
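A quick way to double-check the discovered path count is from the ESX 4.x service console, using the standard multipath listing tools (a rough sketch; the output will differ per host):

  esxcfg-mpath -b        # brief listing, one block per device with all of its paths
  esxcfg-mpath -l        # detailed listing, shows which target portal each path goes through

With only the 10.0.8.2 portal in play, this showed 4 paths per LUN instead of the expected 8.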

After searching the StarWind forum with Google (yes, use the site: operator, it's so powerful), I quickly located the problem: it's within starwind.cfg.

You can do normal ESX multipathing with StarWind without the HA cluster feature of StarWind 5: just follow the instructions for configuring StarWind to work with Xen and uncomment the <iScsiDiscoveryListInterfaces value="1"/> line in the starwind.cfg file. This allows ESX to see all the possible paths to the iSCSI target on the server.
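In other words, before restarting the StarWind service, the relevant line in starwind.cfg should end up uncommented, looking like this:

  <iScsiDiscoveryListInterfaces value="1"/>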

After enabling it and restarting the StarWind service, bingo! Everything worked as expected: 8 MPIO paths showing Active (I/O). This tweak works for ESX as well, not just Xen, and in fact it's a MUST to enable it in order to see all the paths.

So within the last 3 days I was able to add two software iSCSI SANs to my VMware environment alongside the Equallogic, which means I now have three SANs to play with. I will try to test Storage vMotion between all 3 SANs and perform some interesting benchmarking on StarWind as well as Microsoft iSCSI Target.

Later, I will try to configure StarWind HA mode on a VM (which is itself hosted on the Equallogic), so it will be an iSCSI SAN within another iSCSI SAN. :)

Equallogic PS Series Firmware Version V5.0.5 Released

By admin, April 21, 2011 4:02 pm

As usual, I will wait at least a month before taking the firmware update, and probably won't update the firmware at all, as none of the following issues affect me.

Issues Corrected in this version (v5.0.5) are described below:

In rare cases, a failing drive in an array may not be correctly marked as failed. When this occurs, the system is unable to complete other I/O operations on group volumes until the drive is removed. This error affects PS3000, PS4000, PS5000X, PS5000XV, PS5500, PS6000, PS6010, PS6500, and PS6510 arrays running Version 5.0 of the PS Series Firmware.

I thought this had been fixed in v5.0.4, where the fix list indicated "Drives may be incorrectly marked as failed." So basically a drive that should be marked as failed is marked as healthy, while a healthy drive gets marked as failed. Wow, interesting! :)

• A resource used by an internal process during the creation of new volumes may be exhausted, causing the process to restart.

• If an array at the primary site in a volume replication relationship is restarted while the replication of the volume is paused, resuming replication could cause an internal process to restart at the secondary site.

• A resource used by the network management process could be exhausted causing slow GUI response.

• Volume login requests in clustered host environments could time out, resulting in the inability of some hosts to connect to the shared volume.

• A management process could restart while attempting to delete a local replica snapshot of a volume, resulting in slow array response at the primary site for that volume.

• When a management process is restarted, or a member array is restarted, a volume that is administratively offline could be brought online.

• If a member of the secondary site restarts while a volume replication is active, the group at the primary site could continue to report that the secondary site group is offline after the secondary site member is back online.

How to Extend VM partition under Linux (CentOS)

By admin, April 21, 2011 8:12 am

I often extend partitions live (without downtime) for Windows VMs using either diskpart or extpart from Dell, but extending a partition under Linux is a totally different thing; it's a bit complex if you come from the Windows world. (A consolidated command sketch follows the steps below.)

  1. Increase the disk size from the vCenter GUI, then reboot the server. (Take a snapshot first.)
  2. Run ls -al /dev/sda* to find the last created partition; sda2 is the one in my case.
  3. Run fdisk /dev/sda, type n for a new partition, then p and 3 for the partition number (i.e., sda3), accept the default first and last cylinders, and finally w to write the new partition table. Finish with a reboot.
  4. Run pvcreate /dev/sda3 to create a new Physical Volume.
  5. Run vgextend VolGroup00 /dev/sda3 to add this new Physical Volume to the default Volume Group VolGroup00.
    (Note: vgextend cl /dev/sda3 in CentOS 7)
  6. Run vgdisplay to show the Free PE (the amount of free space) and lvdisplay to show the Logical Volume name.
  7. Extend the volume with lvextend -L +XXG /dev/VolGroup00/LogVol00; you can find the exact path of the default Logical Volume with lvdisplay. (e.g., lvextend -L +20G…)
    (Note: lvextend -L +XXG /dev/cl/root in CentOS 7)
  8. Resize the file system with resize2fs /dev/VolGroup00/LogVol00 to complete the whole process. (If everything works, remove the snapshot.)

Update: May 15, 2017
For CentOS 7, use xfs_growfs /dev/cl/root, as it uses the XFS file system instead of the traditional ext3/4 file systems; also, the Volume Group and Logical Volume names have changed to cl (was VolGroup00) and root (was LogVol00).
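Putting the whole flow together, here is a rough sketch of the commands, assuming the new partition ends up as /dev/sda3 and the default VolGroup00/LogVol00 names (substitute cl/root and xfs_growfs on CentOS 7):

  ls -al /dev/sda*                            # find the last existing partition
  fdisk /dev/sda                              # n, p, 3, accept defaults, w, then reboot
  pvcreate /dev/sda3                          # create the new Physical Volume
  vgextend VolGroup00 /dev/sda3               # add it to the Volume Group (vgextend cl /dev/sda3 on CentOS 7)
  vgdisplay                                   # check the Free PE
  lvextend -L +20G /dev/VolGroup00/LogVol00   # grow the Logical Volume (/dev/cl/root on CentOS 7)
  resize2fs /dev/VolGroup00/LogVol00          # grow ext3/4 (use xfs_growfs /dev/cl/root for XFS)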

“The operation is not supported on the object” encountered when deploying a VM from a Template

By admin, April 19, 2011 2:55 pm

Today, when deploying a CentOS VM from a Template, I encountered an error:

The Reconfigure virtual machine task status showed “The operation is not supported on the object”.

I Googled around and found nothing, then realized it probably had something to do with the hardware configuration. I checked the VM's .vmdk descriptor file on the VMFS datastore and found ddb.adapterType = "lsilogic"; after removing it, everything was back to normal. Of course, I've updated my template as well. It happened because the CentOS template VM's disk controller had been changed, but the old setting was somehow still kept in the descriptor.
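If you prefer to do it from the ESX console instead of a text editor, something along these lines works (a sketch only; the datastore path and descriptor file name are examples, and you should only touch the small descriptor .vmdk, never the -flat.vmdk):

  cd /vmfs/volumes/datastore1/centos-template/
  sed -i '/ddb.adapterType/d' centos-template.vmdk   # drop the stale adapterType entry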

I also discovered that deploying a Linux VM somehow adds a new NIC; the solution is to remove the nic.bak and reconfigure the IP on the new eth0.

Update Jun-21-2011

I encountered the same problem today when deploying from a W2K8R2 template; the annoying alert simply wouldn't go away. Luckily, I found the solution by trial and error: simply converting the Template to a VM and then back to a Template solved the problem completely. I suspect this is a bug in ESX 4.1; the original template was cloned from a running VM, maybe that's why!

 

It’s REAL, Microsoft iSCSI Software Target 3.3 is FREE now!

By admin, April 18, 2011 2:06 pm

This is probably one of the most exciting pieces of news from Microsoft for the virtualization community in recent years! It's bad news for the many vendors offering similar products at a high cost, like StarWind, SANmelody, etc., and it's even sadder news for those who purchased W2K8R2 storage servers from OEMs like Dell, HP and IBM a year ago.

On April 8, Microsoft made its iSCSI Target software (originally WinTarget) FREE! So go grab it and use it with your VMware ESX. :) Sure, it doesn't have many fancy features compared to Equallogic, but it's FREE, it supports ESX and Hyper-V as well as Xen, and it comes with scheduled snapshots, so there is really nothing to complain about.

Setup is very simple. Just make sure you DE-SELECT the non-iSCSI NIC interfaces and leave only the iSCSI ones, or you risk opening your iSCSI SAN to the world, and please double-check that the firewall settings disable Public access; oddly, after the default installation it somehow enables Public access to the iSCSI Target. Huh? Ouch!
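A quick way to review and tighten those rules from an elevated command prompt (a sketch; the rule name is whatever the installer actually created on your box):

  netsh advfirewall firewall show rule name=all dir=in | findstr /i "iscsi"
  rem then scope the rule it finds back to the domain/private profiles, for example:
  netsh advfirewall firewall set rule name="<rule name from above>" new profile=domain,private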

I got ESX 4.1 to connect to the MS iSCSI Target without any problem, and even went as far as changing the multipathing policy to EQL_PSP_EQL_ROUTED. Haha... guess what, it apparently did work, as all paths showed Active (I/O), but obviously it doesn't really, since I later found there was no disk to mount under Storage; well, that was expected. This leaves Round Robin (VMware) as the best choice for the MPIO setting. Finally I loaded IOmeter, and MPIO did shoot up to 60-80Mbps since I have two active links, not bad considering the underlying RAID-5 has only 4 disks on a PERC H700, and the good news is that the CPU load of WinTarget.exe is very low, almost 0%.
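For reference, this is roughly how a device gets switched to Round Robin from the ESX 4.1 console (a sketch; the naa ID below is a placeholder, take the real one from the device list):

  esxcli nmp device list                                    # find the naa.* ID of the MS iSCSI Target LUN
  esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR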

FAQ
Q: The Microsoft iSCSI Software Target is now free. Is it supported in a production environment?
A: Yes. The Microsoft iSCSI Software Target is supported in a production environment. The Hyper-V team regularly tests with the MS iSCSI Software Target and it works great with Hyper-V.

Q: On what operating systems is the Microsoft iSCSI Software Target supported?
A: The Microsoft iSCSI Software Target is supported for Windows Server 2008 R2 Standard, Enterprise and Datacenter Editions with or without SP1 (in fact, that’s what is recommended), and it can only be installed on Windows Server 2008 R2 Full install, but not Core Install.

The Microsoft iSCSI Software Target 3.3 is provided only in an x64 (64-bit) version.

Q: Can the free Microsoft Hyper-V Server 2008 R2 use the free Microsoft iSCSI Software Target?
A: Yes and No. Yes, Microsoft Hyper-V Server 2008 R2 can act as a client to access virtual machines via iSCSI. The way to do that is to type iscsicpl.exe at the command prompt to bring up the Microsoft iSCSI Initiator (client) and configure it to access an iSCSI Target (server). However, you can’t install the Microsoft iSCSI Software Target on a Microsoft Hyper-V Server. The Microsoft iSCSI Software Target requires Windows Server 2008 R2.
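On a GUI-less box such as Hyper-V Server or a Core install, the same initiator setup can also be scripted with iscsicli instead of iscsicpl.exe (a sketch; the portal IP and target IQN are examples):

  iscsicli QAddTargetPortal 10.0.8.2
  iscsicli ListTargets
  iscsicli QLoginTarget iqn.1991-05.com.microsoft:mytarget-target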

Q: Can I use the Microsoft iSCSI Software Target 3.3 as shared storage for a Windows Server Failover Cluster?
A: Yes. That is one of its most common uses.

To download the Microsoft iSCSI Software Target 3.3 for Windows Server 2008 R2, go to http://www.microsoft.com/downloads/en/details.aspx?FamilyID=45105d7f-8c6c-4666-a305-c8189062a0d0 and download the single file called “iSCSITargetDLC.EXE”.

Finally, make sure you read and understand the Scalability Limits!

Barracuda Spam & Virus Firewall Vx, NICE!

By admin, April 11, 2011 10:19 pm

I got the Barracuda Spam & Virus Firewall Vx virtual appliance working in less than 30 minutes. It's really easy to set up and use; there is almost no learning curve if your daily job involves managing an email server.

In fact, I almost got a Barracuda 300 back in 2008, but the performance back then wasn't good. Now, with the VM version, it's lightning fast thanks to the latest CPUs and the Equallogic SAN, so I may eventually purchase one of these nice toys for my clients this month.

One major drawback is the lack of comprehensive reporting capability, even after almost 10 years of product life. It can't even provide simple things like listing the top 100 domains with the most email usage and size for a specific day or time, or listing the users who consumed the most email bandwidth, etc. Without this kind of reporting capability, I would say Barracuda will never make it to the real ISP/enterprise market; it's good for SMB though.


PS. I just found out Barracuda Networks introduced the Barracuda Reputation Block List (b.barracudacentral.org) for free! Of course, you need to register your DNS server first in order to use it.
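Just to illustrate how a DNSBL like this gets consumed, if your MTA happens to be Postfix (purely an example, nothing to do with the Barracuda box itself), it plugs in like any other RBL in main.cf:

  smtpd_recipient_restrictions =
      permit_mynetworks,
      reject_rbl_client b.barracudacentral.org,
      permit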

Entering Maintenance Mode but VMs won't be automatically migrated to other ESX Hosts by DRS

By admin, April 11, 2011 5:10 pm

Then I found this thread on VMTN which explained everything.

This is not a BUG; rather, it is how the HA settings and functionality work. When in the HA settings you specify the Failover Capacity as "1" and you have a 2-node cluster, you are simply telling HA that at any given instance it must have at least 1 spare host. Now, when you manually (or using Update Manager) try to put a host into Maintenance Mode, the HA Failover Capacity is violated, because while that host is in Maintenance Mode there is no spare host left for HA. Meaning, in the event the second node goes down, everything goes down and HA can never work. This is a straight violation of the Failover Capacity you have specified.

Hence, in a 2-node cluster you have to "Allow VMs to be powered on even if they violate availability constraints" if you want them to be automatically migrated when you put a host into Maintenance Mode or use Update Manager. If you don't want to change this setting and still want to use this feature, you need to add another host to the cluster while keeping the Failover Capacity at 1.

The new version 4.1 of vShield Manager and vShield Zone

By admin, April 11, 2011 4:12 pm


  • The good thing: it's FREE with the ESX Advanced/Enterprise/Enterprise Plus editions.
  • Yes, it's simply a transparent firewall utilizing the VMsafe API, so there is no need to change the existing public IPs on VMs. vShield Zone (i.e., the firewall) comes with limited functions compared to the real stuff like NetScreen, but it does get the job done by limiting ports, source, destination and direction at the L2/L3 and L4 layers. One extra nice thing is that vShield Zone handles a bunch of dynamic-port applications such as FTP, DNS, etc.
  • In version 4.1 there is no longer a separate OVF for the vShield agent (it has been renamed to vShield Zone), and deployment of vShield Zone is simply a matter of clicking the Install link on the menu. It's so much simpler to install a firewall on each ESX host with v4.1, and there is no need to create any template for vShield Zone like in the old days. In addition, a new vSwitch called vmservice-vswitch is created. It has no physical NICs assigned to it and has a VMkernel interface with a 169.x IP address. This vSwitch should not be modified; it's used exclusively by the Zones firewall VM, which has two vNICs connected to it. Through these vNICs the Zones VM communicates with the LKM in the VMkernel: one vNIC is used for control, and the other is for data path communication.
  • The original version of vShield operated in bridged mode and sat inline between vSwitches so that all traffic to the protected zones passed through it. The new method of monitoring traffic at the vNIC, instead of the vSwitch, eliminates the vSwitch reconfiguration that previously occurred, and it provides better protection. In bridged mode, VMs in a protected zone had no protection from other VMs in the protected zone, but now that vShield Zones operates at the vNIC level, every VM is totally protected.
  • So if something happens to the Zones virtual firewall VM (e.g., it’s powered off), the networking on a host will go down, because nothing can route without the virtual firewall VM. If you migrate a VM from a Zones-protected host to an unprotected one, vCenter Server automatically removes the filter, so a VM won’t lose network connectivity on its new host.
  • Also, in the new version 4.1, VM Flow is gone (it was available for free in the previous version); you need to upgrade to vShield App to get it back. For my environment I use PRTG's packet analyzer on switch mirror ports, so this feature is not required.
  • In this new version 4.1, committed firewall policy is applied in real time; there is no need to log in to the console and issue validate sessions anymore.
  • The vShield Zone firewall can be applied at 3 levels: Data Center, Cluster and Port Groups (?). I usually deploy it at the Cluster level due to DRS.
  • If you have a cluster, it's highly recommended to install vShield Zone on all ESX hosts, as VMs may get vMotioned between ESX hosts in the cluster and will still be protected by vShield Zone (i.e., the firewall).
  • Installing vShield Zone does not require rebooting the ESX host, but uninstalling it does. After the reboot you will often find the originally configured vShield Zone vSwitch has not been removed, so you need to remove it manually.
  • It’s nice to see the extra tab in vCenter interface, but I still prefer to manage vShield using the web interface.
  • You can always get more features by upgrading to other advanced vShield Products like vShield Edge as those will provide features like VPN, routing, load balancing, etc.
  • During Maintenance Mode, the vShield Zone VM for a particular ESX host should NEVER be vMotioned away, although the vShield Manager VM can be. You will need to manually shut it down (using a CLI shutdown) after DRS automatically migrates all the other VMs off the host, and then reboot the host. I am still trying to figure out how to set the Maintenance Mode DRS recommendation for the vShield VM; if you know, please do let me know, thanks.

Parts of the above were quoted from Eric Siebert's newly revised "Installing VMware vShield Zones for a virtual firewall", and don't forget to review his article "Top 10 VMware security tips for vShield users".

Update: Oct 30, 2011

I found the solution to my last question about how to prevent the vShield Zone VM from being moved away by DRS during Maintenance Mode; the answer is at the end of the Install vShield Zone PDF (page 13, I can't believe I missed this extremely important piece of information). Basically, you need to set the vShield Zone VM's Restart Priority to Disabled under HA and its Automation Level to Disabled under DRS.
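If you prefer to set these per-VM overrides with PowerCLI rather than digging through the cluster settings GUI, something like this should do it (a sketch; the VM name is made up):

  $vm = Get-VM "vShield-Zones-esx01"     # the Zones firewall VM on that host (name is an example)
  Set-VM -VM $vm -HARestartPriority Disabled -DrsAutomationLevel Disabled -Confirm:$false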

To prove it's working, I did a test: I put the host into Maintenance Mode, and DRS was able to vMotion everything away to the other available nodes except the vShield Zone VM, which was then shut down nicely; everything was done automatically.

Bravo!

How to Update Firmware on ESX using Dell Repository Manager

By admin, April 9, 2011 9:55 pm

No more guessing, I finally got it working and performed the firmware updates on ESX. So now, is it time for Dell to make its vCenter Plugin product FREE? :)

The following is what I did today; by the way, many thanks to the local Dell Pro-Support as well:

1. Download the latest version of Dell Repository Manager (v1.2.158) and install it with all the default options.

2. Start DRM in Server Mode (Client Mode is for desktop and notebook).

3. Create a new inventory, selecting the options with care and according to your requirements ONLY; this will greatly reduce the total update package size (e.g., Rack, R710, Windows Package Only).

* IMPORTANT: You ONLY NEED TO DOWNLOAD the LATEST Windows Package; there is no need to download any Linux Package, as USC (Lifecycle Controller) 1.4 or above only supports an SUU ISO/USB built with the Windows Package anyway. So using Windows DUPs (Dell Update Packages) to update PowerEdge server firmware, even with ESX installed, is perfectly O.K.!

4. Export the whole update to ISO, select Export as Server Update Utility (SUU) ISO.

5. Reboot your ESX server (of course, migrate your VMs away to another ESX host first), then use the iDRAC console to enter USC (i.e., F10), select firmware update, and mount the DRM SUU ISO created in the step above as virtual media. Soon you will encounter "Catalog file not authenticated correctly". Don't worry! Just ignore it and continue with the update, because the exported SUU is simply NOT digitally signed.

6. Select the firmware that needs to be updated and then reboot; the system will return to USC by default. Go to firmware update again and compare the update result against the DRM ISO if needed.

Finally, the catalog.xml on ftp.dell.com is always a few months behind the latest updates. This is where Dell RM does its magic: you can simply add the latest firmware to an existing inventory and customize it to whatever you want it to be. Just remember to download the latest firmware first to any folder; Dell RM will copy it over to the root of the RM's inventory folder. Then save it and export a new SUU ISO again.

PS. I just found another great article, “The Easiest Way To Update A Dell Server’s Firmware”.

 

Update Jul-19-2011

After reading pages 77-79 of the latest Dell vCenter Plugin 1.0.1 User Guide, I found DVP still requires manual creation of a Firmware Repository like the above (and requires CIFS or NFS). So this step really renders DVP obsolete, as we can achieve the same result using Dell Repository Manager with even better control and management.

In that sense, I strongly believe Dell will make its Dell vCenter Plugin free within the next 12-24 months.

 

Update Nov-26-2011

The following improvements have been made in Repository Manager v1.3 which was released in July 2011:

  • Support for Dell PowerEdge maintenance driver packs and Dell Unified Server Configurator (USC) driver pack management
  • Repository Manager now displays the size of bundles and components
  • New plugin management support for Server Update Utility (SUU) and Deployment media plug-ins
  • Tool tips in the Export Bundles/Components wizards to explain the various options
  • Enhanced performance and reliability for downloading files from ftp.dell.com
  • Runtime logs are now enabled to capture error, warning, and information messages
  • Improved success/failure feedback after each process, with reasons for any failure and recommendations to remedy it
  • Report dialog box displayed upon completion of tasks such as repository creation and component downloads
  • Cancellation functionality added for time-consuming tasks
  • Compare button easier to find, with wizards to guide you through the process.
  • Added ability to download, store, and run SUU & DTK plugins locally 
  • Improved user interaction for time-consuming tasks

Capping the Resources on VM

By admin, April 8, 2011 10:40 pm

I am starting to put different capping policies on VM resources such as CPU, Memory, Storage and Network this week.

The concept is very simple, but the implementation takes a lot more thinking and planning than the following suggests:

High: 2000 shares per vCPU, 20 shares per MB of VM RAM
Normal: 1000 shares per vCPU, 10 shares per MB of VM RAM
Low: 500 shares per vCPU, 5 shares per MB of VM RAM

High:Normal:Low = 4:2:1

For example, take a VM with 1 vCPU and 1GB (1024MB) of RAM whose Shares are set to Normal.

So its CPU Shares will show 1000 (1 × 1000) and its Memory Shares will show 10240 (1024 × 10).
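You can verify the resulting share values with PowerCLI, roughly like this (a sketch; the VM name is an example):

  Get-VMResourceConfiguration -VM (Get-VM "web01") |
      Select-Object CpuSharesLevel, NumCpuShares, MemSharesLevel, NumMemShares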

In fact, after the capping started to kick in, I began to fall in love with VMware's resource management!
