Category: Others (其它)

Equallogic Firmware 5.0.2, MEM, VAAI and ESX Storage Hardware Acceleration

By admin, October 1, 2010 6:43 pm

Finally got this wonderful EqualLogic plugin working; the speed improvement is HUGE after intensive testing with IOmeter.

At 100% sequential, read and write always top 400MB/sec, and sometimes I see 450-460MB/sec for 10 minutes from a single array box, until the PS6000XV starts complaining that all of its interfaces are saturated.

For IOPS at 100% random, read and write easily reach 4,000-4,500.

The other thing about EqualLogic’s MEM script is that IT IS JUST TOO EASY to set up the whole iSCSI vSwitch/VMkernel with Jumbo Frames or a hardware iSCSI HBA!

There are NO MORE complex command lines such as esxcfg-vswitch, esxcfg-vmknic or esxcli swiscsi nic; life is as easy as a single setup.pl --configure or --install command. Of course, you need to install VMware vSphere PowerCLI first.
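For reference, kicking off the interactive configuration is a one-liner; a minimal sketch assuming the same ESX host IP as the examples below, and that your MEM version spells the option --configure (the script then prompts for credentials and the vSwitch/NIC/IP/MTU details):

C:\>setup.pl --configure --server=10.0.20.2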

Something worth mentioning is the set of MPIO parameters that you can actually tune and play with.

C:\>setup.pl --setparam --name=volumesessions --value=12 --server=10.0.20.2
You must provide the username and password for the server.
Enter username: root
Enter password:
Setting parameter volumesessions  = 12

Parameter Name  Value  Max   Min  Description
--------------  -----  ----  ---  -----------
reconfig        240    600   60   Period in seconds between iSCSI session reconfigurations.
upload          120    600   60   Period in seconds between routing table upload.
totalsessions   512    1024  64   Max number of sessions per host.
volumesessions  12     12    3    Max number of sessions per volume.
membersessions  2      4     1    Max number of sessions per member per volume.

 

C:\>setup.pl --setparam --name=membersessions --value=4 --server=10.0.20.2
You must provide the username and password for the server.
Enter username: root
Enter password:
Setting parameter membersessions  = 4

Parameter Name  Value  Max   Min  Description
--------------  -----  ----  ---  -----------
reconfig        240    600   60   Period in seconds between iSCSI session reconfigurations.
upload          120    600   60   Period in seconds between routing table upload.
totalsessions   512    1024  64   Max number of sessions per host.
volumesessions  12     12    3    Max number of sessions per volume.
membersessions  4      4     1    Max number of sessions per member per volume.

Yes, why not push it to the maximum: volumesessions=12 and membersessions=4. Each volume won’t spread across more than 3 array boxes anyway, and the new firmware 5.0.2 allows 1,024 total sessions per pool, which is way more than enough. Say you have 20 volumes in a pool and 10 ESX hosts, each with 4 iSCSI NICs: that’s still only 800 iSCSI connections.


Update Jan-21-2011

Do NOT set membersessions higher than the number of available iSCSI NICs. I hit a problem where setting membersessions = 4 with only 2 NICs caused high TCP retransmits!
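In my case the fix was simply to drop the value back to match the number of iSCSI NICs, using the same --setparam call shown above, e.g.:

C:\>setup.pl --setparam --name=membersessions --value=2 --server=10.0.20.2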

To check whether the EqualLogic MEM has been installed correctly, issue:

C:\>setup.pl --query --server=10.0.20.2
You must provide the username and password for the server.
Enter username: root
Enter password:
Found Dell EqualLogic Multipathing Extension Module installed: DELL-eql-mem-1.0.0.130413
Default PSP for EqualLogic devices is DELL_PSP_EQL_ROUTED.
Active PSP for naa.6090a078c06ba23424c914a0f1889d68 is DELL_PSP_EQL_ROUTED.
Active PSP for naa.6090a078c06b72405fc9b4a0f1880d96 is DELL_PSP_EQL_ROUTED.
Active PSP for naa.6090a078c06b722496c9c4a2f1888d0e is DELL_PSP_EQL_ROUTED.
Found the following VMkernel ports bound for use by iSCSI multipathing: vmk2 vmk3 vmk4 vmk5

One word to summarize the whole thing: “FANTASTIC”!

More about VAAI from the EQL FW 5.0.2 Release Notes:
 

Support for vStorage APIs for Array Integration

Beginning with version 5.0, the PS Series Array Firmware supports VMware vStorage APIs for Array Integration (VAAI) for VMware vSphere 4.1 and later. The following new ESX functions are supported:

• Hardware Assisted Locking – Provides an alternative means of protecting VMFS cluster file system metadata, improving the scalability of large ESX environments sharing datastores.

• Block Zeroing – Enables storage arrays to zero out a large number of blocks, speeding provisioning of virtual machines.

• Full Copy – Enables storage arrays to make full copies of data without requiring the ESX Server to read and write the data.

VAAI provides hardware acceleration for datastores and virtual machines residing on array storage, improving performance with the following:

• Creating snapshots, backups, and clones of virtual machines

• Using Storage vMotion to move virtual machines from one datastore to another without storage I/O

• Data throughput for applications residing on virtual machines using array storage

• Simultaneously powering on many virtual machines

Refer to the VMware documentation for more information about vStorage and VAAI features.
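If you want to confirm the host side is actually using these primitives, ESX 4.1 exposes them as advanced options; a quick sketch from the service console (the same values are visible under Configuration > Advanced Settings in the vSphere Client), where 1 means enabled and 0 disabled:

# Full Copy
esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
# Block Zeroing
esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit
# Hardware Assisted Locking
esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking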

 

Update Aug-29-2011

I noticed there is a minor update for MEM (Apr-2011); the latest version is v1.0.1. Since I am not seeing the error it fixes, and as a rule of thumb if nothing is broken, don’t update, I won’t update MEM for the moment.

Finally, I wonder if MEM will work with vSphere 5.0, as the release notes say “The EqualLogic MEM V1.0.1 supports vSphere ESX/ESXi v4.1”.

Issue Corrected in This Release: Incorrect Determination that a Valid Path is Down

Under rare conditions when certain types of transient SCSI errors occur, the EqualLogic MEM may incorrectly determine that a valid path is down. With this maintenance release, the MEM will continue to try to use the path until the VMware multipathing infrastructure determines the path is permanently dead.

 

Import VM from previous VMware ESX versions into vSphere ESX 4.1

By admin, September 19, 2010 12:38 am

The following method proved to work even for vmdk files from pre-vSphere ESX servers (e.g., dinosaurs like ESX 2.5.4), and the same methodology also worked for True Image Server images.

Steps:

1. Use Veeam’s great free tool FastSCP to upload the original.vmdk file (nothing else; just that single vmdk file is enough, not even the vmx or any other associated files) to /vmfs/san/import/original.vmdk. In case you didn’t know, Veeam FastSCP is way faster than the old WinSCP or accessing VMFS from the VI Client.

2. PuTTY SSH into the ESX host and switch to root with su -, then cd to the directory /vmfs/san/import/ and issue the command “vmkfstools -i original.vmdk www.abc.com.vmdk”.

(Note: of course you can add “-d thin” for thin provisioning, but I don’t recommend it as it’s a waste of time and takes 2-3 times longer to convert to version 7; you will see why later.)

I would also suggest you DO NOT remove the original.vmdk until you have successfully completed the whole migration process.

3. Create a new VM as usual (for example, www.abc.com); when the wizard asks for disk size, just put 1GB (it’s going to be overwritten later anyway). For best performance, select VMXNET3 during the configuration. (We shall add PVSCSI later, see step 5.)

Now go back to PuTTY and simply issue “mv www.abc.com* /vmfs/san/www.abc.com/”; it will ask if you want to overwrite the two default files (www.abc.com.vmdk and www.abc.com-flat.vmdk), say yes to both.

4. Now right-click the VM and select Re-configure (you must do this, or your VM won’t boot). Select everything and change everything, including the Windows license, workgroup name, time, etc. Then you can boot this VM and log in as usual. Btw, you will find the network is not ready as it’s on DHCP; you can change to a static IP later. After login, VM sysprep will do all the tricks for you, and it will also reboot itself and reconfigure a few more things.

5. In order to use ESX 4’s newly added PVSCSI as your disk controller, you will need to add a new disk of, say, 10MB, MUST choose a Virtual Device Node between SCSI (1:0) and SCSI (3:15), and specify whether you want Independent mode; then change that new disk’s controller to PVSCSI, while keeping the original disk’s controller as whatever it is for now.

If you try to put the original (boot) disk’s controller on PVSCSI without booting the VM first, you will get the famous BSOD, as you haven’t installed or updated the VMware Tools PVSCSI driver yet. Boot the VM, log in, and let VMware Tools install all the necessary drivers for you, including PVSCSI and VMXNET3.

For details, see:
http://xtravirt.com/boot-from-paravirtualized-scsi-adapter
http://www.vladan.fr/changement-from-lsilogic-paralel-into-pvscsi/

Official from VMware (KB Article: 1010398)

Now shut down your VM again, go to the original disk’s controller, change it to PVSCSI, and power on again. You’ve got it! Simple as that!

Of course, there are always some reasons you might NOT want to use VMware PVSCSI:
http://blog.scottlowe.org/2009/07/05/another-reason-not-to-use-pvscsi-or-vmxnet3/

6. You may also want to re-configure the VM now: right-click the VM and choose “Re-Configure”. Before doing so, you will of course need to install the corresponding sysprep files in Virtual Center first.

For example, to install the Windows 2003 sysprep files, you can download them from:

http://www.microsoft.com/downloads/details.aspx?FamilyID=93f20bb1-97aa-4356-8b43-9584b7e72556

Instructions: run filename.exe /x and extract the files into a folder; inside you will find deploy.cab. Extract that again to C:\ProgramData\VMware\VMware vCenter Converter\sysprep\svr2003 (this is for the ESX 4.1 VMware Converter). You will also need to do the same for C:\ProgramData\VMware\VMware vCenter\sysprep\svr2003 if you want to use sysprep for w2k3 template deployment later. (A command-prompt sketch of this extraction follows this step.)

Use sysprep to edit the computer name, re-enter the Windows activation code and other stuff if necessary.

Now power on the VM and log in; it will automatically use sysprep to update everything for you. Just relax and watch the whole magical thing happen. It will reboot the server, and once it’s done, the last step is to change the display Hardware Acceleration to Full.
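A rough command-prompt sketch of the extraction in step 6, assuming the download is saved as filename.exe and extracted to a scratch folder such as C:\temp\sysprep (a hypothetical path), then using the built-in expand tool to unpack deploy.cab into the Converter path mentioned above:

C:\>filename.exe /x
(point the extractor at C:\temp\sysprep; it will contain deploy.cab)
C:\>expand C:\temp\sysprep\deploy.cab -F:* "C:\ProgramData\VMware\VMware vCenter Converter\sysprep\svr2003"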

7. That’s it! Well, not yet! After the reboot, when I tried to re-configure the IP, I got:

“The IP address you entered for this network adapter is already assigned to another adapter ‘VMware PCI Ethernet Adapter’. The reason is ‘VMware PCI Ethernet Adapter’ is hidden from the Network Connections folder because it is not physically in the computer.”

Solution:

- Select Start > Run.

- Enter cmd.exe and press Enter. This opens a command prompt. Do not close this command prompt window; in the steps below you will set an environment variable that only exists in this command prompt window.

- At the command prompt, run this command:
set devmgr_show_nonpresent_devices=1

- In the same command prompt, run this command (press Enter to start Device Manager):
start devmgmt.msc

- Select View > Show Hidden Devices.

- Expand the Network adapters tree (click the plus sign next to the Network adapters entry).

- Right-click the dimmed network adapter, and then select Uninstall.

- Close Device Manager.

- Close the command prompt.

Actually, you may want to uninstall all the previous NICs just to make sure you have a clean environment.

8. To change the VM back to a uniprocessor HAL

According to Microsoft, “If you run a multiprocessor HAL with only a single processor installed, the computer typically works as expected, and there is little or no effect on performance.” But if you’re like me and just want to be absolutely sure that there won’t be issues, switching back to the uniprocessor HAL in Windows Server 2003 is pretty easy:

- Make sure you have at least Windows Server 2003 Service Pack 2 installed.

- Shut down the virtual machine.

- Change the number of virtual processors to 1.

- Power on the virtual machine.

- In Windows, go to Device Manager -> Computer.

- Right-click “ACPI Multiprocessor PC” and choose “Update Driver…”.

- Select the “No, not this time” option -> “Install from a list or specific location” -> “Don’t search. I will choose the driver to install.” -> select “ACPI Uniprocessor PC.”

- Reboot the virtual machine.

9. VMware ESX 4 Reclaiming Thin Provisioned disk Unused Space

http://www.virtualizationteam.com/virtualization-vmware/vsphere-virtualization-vmware/vmware-esx-4-reclaiming-thin-provisioned-disk-unused-space.html

The summary of the solution: use sdelete and Storage vMotion on the virtual machine to free up the unused space.

That’s all!!! You have just successfully imported or migrated a VM from an older ESX version to the latest ESX 4.1, and this method should work for all ESX versions. Best of all, you have upgraded your existing VMs to virtual hardware version 7 with the enhanced PVSCSI and VMXNET3 drivers, so you can really take advantage of technologies like vStorage/VAAI and Veeam Changed Block Tracking (CBT).

For more info about importing or converting a VM into ESX, see:

http://blog.lewan.com/2009/12/22/vmware-vsphere-using-vmware-converter-to-import-vms-or-vmdks-from-other-vmware-products/

10. To extend a disk in real time without downtime for Windows Server 2003/2000. (Windows 2008 has this built in; you can expand/shrink a partition on the fly.)

a. You can use Diskpart to extend any non-bootable partition (e.g., D:\) on the fly (see the diskpart sketch below). *You need to disable the page file on that volume first!!!
b. You can use Dell’s ExtPart to extend the bootable partition (e.g., C:\) on the fly under ONE CONDITION: the unallocated space must sit RIGHT BEHIND the C:\ partition. (You can use Acronis Disk Director’s rescue media to arrange this.)
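As a concrete example of (a), a minimal diskpart session; a sketch assuming the D:\ data partition shows up as volume 2 and that the page file on it has already been disabled:

C:\>diskpart
DISKPART> list volume
DISKPART> select volume 2
DISKPART> extend
DISKPART> exit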

 

Update:

I found that sometimes when importing an old VMDK file or Acronis image, the default disk controller is IDE, which you definitely need to change to SCSI for much better performance.

Converting a virtual IDE disk to a virtual SCSI disk (KB Article: 1016192)

To convert the IDE disk to SCSI:
1. Locate the datastore path where the virtual machine resides. For example:

/vmfs/volumes/

2. From the ESX Service Console, open the primary disk descriptor (.vmdk) in a text editor.
3. Look for the line:

ddb.adapterType = "ide"

4. To change the adapter type to LSI Logic, change the line to:

ddb.adapterType = "lsilogic"

To change the adapter type to BusLogic, change the line to:

ddb.adapterType = "buslogic"

5. Save the file. (A one-line sed alternative to steps 2-5 is sketched after step 8.)
6. From the VMware Infrastructure or vSphere Client:
a. Click Edit Settings for the virtual machine.
b. Select the IDE virtual disk.
c. Choose to Remove the Disk from the virtual machine.
d. Click OK.

Caution: Make sure that you do not choose Remove from disk.

7. From the Edit Settings menu for this virtual machine:
a. Click Add > Hard Disk > Use Existing Virtual Disk.
b. Navigate to the location of the disk and select to add it into the virtual machine.
c. Choose the same adapter type you set in Step 4. The SCSI ID should read SCSI 0:0.

8. If a CD-ROM device exists in the virtual machine, it may need to have its IDE channel adjusted from IDE 0:1 to IDE 0:0. If this option is greyed out, remove the CD-ROM from the virtual machine and add it back; this sets it to IDE 0:0.
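If you prefer to skip the text editor, steps 2 to 5 can be collapsed into a single command from the Service Console; a sketch assuming the descriptor file is www.abc.com.vmdk as in the import example above (back it up first):

cp www.abc.com.vmdk www.abc.com.vmdk.bak
sed -i 's/ddb.adapterType = "ide"/ddb.adapterType = "lsilogic"/' www.abc.com.vmdk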

PCCW 100Mbps Fiber Broadband

By admin, September 18, 2010 3:32 pm

Today PCCW came to my place to install 100Mbps FTTH fiber optics for FREE! :)

It did take the whole team of 4 staff 3 hours to finish the testing and QC. Now I have two 100Mbps broadband connections at home, one from PCCW (fiber) and the other from HKBN (RJ-45), and both can reach 100Mbps upload/download. During the setup, the PCCW staff logged into the Huawei HG863 web UI, and I asked if there was any special tuning I could do to increase performance or add features; the answer was "not much".

Shortly, I am going to use the NetScreen 5GT’s dual-home feature to combine the two for load balancing and failover.


 

According to figures published in February 2010 by the Fibre-to-the-Home Council, an international industry body, the household penetration rate of fibre-to-the-home and fibre-to-the-building services in Hong Kong (i.e., the proportion of households using these two types of services) was 33%, the third highest in the world, behind only South Korea and Japan. The corresponding household penetration rates in some other Asian regions are as follows:

    Region        FTTH/FTTB household penetration
    ------        -------------------------------
    South Korea   52%
    Japan         34%
    Taiwan        24%

Equallogic takes time to kick in the additional paths under Windows MPIO

By admin, September 17, 2010 4:02 pm

I’ve spent almost 4 hours on the phone, from midnight to 4am, troubleshooting with Dell EqualLogic consultants in the US via WebEx today.

We found the EQL I/O testing performance was low: only 1 path was active out of the 2 MPIO paths, and disk latency was particularly high during writes on the newly configured array.

It was finally solved when we realized we had forgotten the most fundamental concept: EqualLogic takes time to kick in the additional paths under MPIO!!! You need to wait, say, at least 5 minutes to see the remaining paths kick in.

The following are my findings, mostly from the email exchange with EqualLogic support. Yes, it is long and boring to many, but it’s extremely useful for those who are looking for a solution to the same problem. I wish someone had put this on their blog before; then I could have slept much better last night.

Timeline in descending order:

- 2pm

We found a very interesting fact: the 2ND LINK WILL ONLY KICK IN AFTER THE 1ST LINK HAS BEEN SATURATED/OVERLOADED for a period of time (see 1.gif and 2.gif). So MPIO with the Dell EqualLogic DSM (not the Microsoft generic DSM) was actually working perfectly all along!

1.gif shows both links activated: I saw the 2nd link (EQL Mgt 2) suddenly kick in (maybe because we opened more copy windows to the iSCSI target), then drop out again, and come back again when needed.

2.gif shows the throughput of the two active ports on the EQL iSCSI target also increased by a lot (from 45% to 80%).

So I am pretty sure the issue never existed in the first place; it just TAKES TIME FOR THE REMAINING NICs (LINKS) to be activated gradually over the testing period, automatically according to the load. Previously we only tested for less than 2 minutes; in other words, we didn’t give the MPIO logic enough time to kick in additional paths for throughput or I/O.

- 12pm

See attached TR1036-MPIO_EQLX-DSM.pdf PS Series Best Practices
Configuring and Deploying the Dell EqualLogic™ Multipath I/O Device Specific Module (DSM) in a PS Series

MPIO DSM Load-Balance Policy

The Microsoft MPIO DSM allows the initiator (server) to log in multiple sessions to the same target (storage) and then aggregate them into a single device. Multiple target sessions can be established using different NICs to the target ports.

If one of the sessions fails, then another session continues to process I/O without interrupting the application.

The Dell EqualLogic MPIO DSM supports the following load-balancing policies (a quick mpclaim example follows this list).

• Fail Over Only: Data is sent on one path, while the other paths are on standby. This connection is used for routing data until it fails or times out. If the active connection fails, one of the available paths is chosen until the former is available again. This load-balancing policy is the default configuration when the MPIO DSM is disabled.

• Round Robin: All available paths are used to perform I/O in a rotating sequence (round robin sequence). There is no disruption in sending I/O even if any of the paths fails. Using this policy, all paths are used effectively.

• Least Queue Depth: I/O is sent to the path that has the least queue length. (The performance analyses for these load-balancing policies are presented in the TR document referenced above.)

• EQL recommends using the Microsoft DSM with the “Least Queue Depth” load-balancing policy on Windows Server 2003/2008.

• To fully utilize Microsoft’s MPIO capabilities, Dell EqualLogic provides an MPIO DSM that complements ASM for both high availability and performance.
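To follow that recommendation on Windows Server 2008 R2, the load-balance policy can be switched per MPIO disk with mpclaim; a sketch assuming the EQL volume shows up as MPIO Disk0 (as in the mpclaim output further down) and that the controlling DSM accepts the policy, where 4 is the Least Queue Depth policy number:

C:\>mpclaim -l -d 0 4
C:\>mpclaim -s -d 0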

- 11am

I found something very important on Google.

Device Initialization: Recall that MPIO allows devices from different storage vendors to coexist and be connected to the same Windows Server 2008 based or Windows Server 2003 based system. This means a single Windows server may have multiple DSMs installed. When a new eligible device is detected via PnP, MPIO attempts to determine which DSM is appropriate to handle this particular device.

MPIO contacts each DSM, one device at a time. The first DSM to claim ownership of the device is associated with that device and the remaining DSMs are not allowed a chance to press claims for that already claimed device. There is no particular order in which the DSMs are contacted, one at a time. The only guarantee is that the Microsoft generic DSM is always contacted last. If the DSM does support the device, it then indicates whether the device is a new installation, or the same device previously installed but which is now visible through a new path.

Does this mean that if we see multiple DSMs in MPIO, the Dell EqualLogic DSM will always be used first, or that its priority is always higher than the Microsoft DSM?

- 10am

Some updates on what I found: even after I added it back with mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9",

MPIO still shows Dell EqualLogic as the DSM instead of Microsoft. How can I force MPIO to select Microsoft instead of Dell EqualLogic as desired? That exactly explains why there is ONLY ONE PATH (or NIC) working at a time, rather than load balancing across two NICs.

I even did a real-time test: by disabling a NIC, all traffic automatically shifted to the 2nd NIC (or path), and vice versa. So it seemed Windows Server 2008 R2 doesn’t understand the Dell EqualLogic DSM for MPIO. In other words, if Dell EqualLogic is the DSM, then only one path is active.

I also found out from Google that Windows Server 2008 DOES NOT add "MSFT2005iSCSIBusType_0x9" automatically like Windows Server 2003 does; we need to add it manually from the MPIO GUI or CLI.

See the output.

C:\Users\Administrator>mpclaim -s -d

For more information about a particular disk, use 'mpclaim -s -d #' where # is the MPIO disk number.

MPIO Disk    System Disk   LB Policy   DSM Name
-------------------------------------------------------------------------------
MPIO Disk0   Disk 2        RR          Dell EqualLogic DSM

C:\Users\Administrator>mpclaim -s -d 0

MPIO Disk0: 02 Paths, Round Robin, ALUA Not Supported
Controlling DSM: Dell EqualLogic DSM
SN: 6090A078C06B1219D3C8D49CF188CD5B
Supported Load Balance Policies: FOO RR LQD

Path ID            State              SCSI Address      Weight
---------------------------------------------------------------------------
0000000077070001   Active/Optimized   007|000|001|000   0
0000000077070000   Active/Optimized   007|000|000|000   0

C:\Users\Administrator>mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

So the KEY question is: how can we FORCE MPIO TO USE the Microsoft DSM instead of Dell EqualLogic?

- 9am

1. Removed the MPIO feature from W2K8, rebooted, then removed HIT, rebooted, re-installed it, and rebooted again; under MPIO there is still no MSFT2005iSCSIBusType_0x9.

2. This time, I changed the NICs’ Flow Control to TX & RX, and read performance from the EQL also increased to 99%.

I do think we need to enable Flow Control RX as well; as we saw yesterday, only writing to the EQL was running at 99% while reading from the EQL was at 20%, so this proves it’s required.

3. Also, disk latency for reads is very small (39ms compared to 350ms for writes) when we saturated the link using multiple 16GB files; however, writing to the EQL and overloading the link still gives us over 300ms disk latency. The high TCP retransmit percentages all went down from 5-6% to 1-2%.

4. No more MPIO initiator drop-out problems even without MSFT2005iSCSIBusType_0x9 in place; maybe it isn’t necessary after all?
As I installed HIT twice and MSFT2005iSCSIBusType_0x9 was never added, I suspect manually adding it could actually cause more problems. Or shall I remove the MPIO feature from W2K8 and install it again manually to see if MSFT2005iSCSIBusType_0x9 pops up?

Extra Notes:

MPIO CLI Commands

mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"
(Note: the HIT installation on Windows Server 2008 R2 DID NOT add this to MPIO.)

mpclaim -s -d

mpclaim -s -d device_name

mpclaim.exe -v C:\Config.txt


Equallogic and ESX 4.1 iSCSI Setup Crack Sheet

By admin, September 16, 2010 7:17 pm

For the whole month, my mind has been full of VMware, ESX 4.1, EqualLogic, MPIO, SAN HQ, iSCSI, VMkernel, Broadcom BACS, Jumbo Frames, IOPS, LAG, VLAN, TOE, RSS, LSO, thin provisioning, Veeam, Vizioncore, Windows Server 2008 R2, etc.

It’s definitely like taking an extremely fast track to an enterprise storage degree, and after all, it was worth every bit of the struggle: many long nights and endless calls to Pro-Support in Hong Kong and EQL support in the US.

 

Here is the EqualLogic and ESX 4.1 iSCSI setup crack sheet to save you from typing many commands (a consolidated command sketch follows the list).

  1. Configure the iSCSI vSwitch using the GUI first and assign multiple NICs to the vSwitch; in my case, 4 NICs.
  2. Create multiple VMkernel ports on this vSwitch; in my case, there are 4 VMkernel ports (named iSCSI 1 to iSCSI 4).
  3. Remove the extra NICs from each individual VMkernel port by unselecting 3 of those NICs; do this for each VMkernel port.
  4. # Enable Jumbo Frames on the iSCSI vSwitch using the CLI
    esxcfg-vswitch -m 9000 vSwitch4
    esxcfg-vswitch -l to verify MTU=9000
  5. # Enable Jumbo Frames on each VMkernel port using the CLI
    esxcfg-vmknic -m 9000 iSCSI1 - iSCSI4
    esxcfg-vmknic -l to verify MTU=9000
    * I also enabled Jumbo Frames for the VMotion and FT networks.
  6. Go to the GUI, enable the software iSCSI initiator, and note down the vmhba number; in my case, it's vmhba47.
  7. # Bind the VMkernel ports to the iSCSI adapter using the CLI
    esxcli swiscsi nic add -n vmk2 -d vmhba47
    esxcli swiscsi nic list -d vmhba47 to verify all 4 NICs are bound to vmhba47
  8. Do a rescan of the storage and you will see the EQL volume now. Please make sure you check "Allow simultaneous connections..." under the EQL volume properties, or multiple ESX connections to the same volume won't work.
  9. To verify from the EQL side, go to Group Manager and click that volume; you will see there are 8 connections with 8 different IP addresses (i.e., 2 ESX hosts with 4 NICs each).
  10. To verify from the ESX host side, go to the storage, right-click Manage Paths, and you will see there are 4 IP addresses from the EQL.
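Put together, the CLI part of the list above boils down to a handful of commands; a sketch assuming the same names as my setup (vSwitch4, VMkernel port groups iSCSI 1 to iSCSI 4 bound as vmk2 to vmk5, software iSCSI adapter vmhba47):

# Jumbo Frames on the vSwitch and on each iSCSI VMkernel port
esxcfg-vswitch -m 9000 vSwitch4
esxcfg-vmknic -m 9000 "iSCSI 1"
esxcfg-vmknic -m 9000 "iSCSI 2"
esxcfg-vmknic -m 9000 "iSCSI 3"
esxcfg-vmknic -m 9000 "iSCSI 4"
esxcfg-vswitch -l
esxcfg-vmknic -l

# Bind each VMkernel port to the software iSCSI adapter, then verify
esxcli swiscsi nic add -n vmk2 -d vmhba47
esxcli swiscsi nic add -n vmk3 -d vmhba47
esxcli swiscsi nic add -n vmk4 -d vmhba47
esxcli swiscsi nic add -n vmk5 -d vmhba47
esxcli swiscsi nic list -d vmhba47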

 

Just got a reply from the EqualLogic support team regarding my customized configuration.

The document on the web site is the supported method of setting Jumbo Frames on the switch. This is the method that we have tested and confirmed to work.

Of course, as with many things, there is typically a method of doing this through the GUI as well. The method you are following appears to work in my tests as well, but we cannot confirm if it is a viable operation as it has not been tested through our QA process.

My suggestion would be to utilize the tested method. You may also want to check with VMware directly as it is possible that the GUI method you are utilizing simply calls the CLI commands we provide, but we cannot confirm that for certain (we do not have access to their code).

(Name Removed)

Enterprise Technical Support Consultant
Dell EqualLogic, Inc.

 

Finally, test-ping your destination with a large packet and specify don't-fragment (see also the MTU note after this list).

  • Linux VMs:         ping -M do -s 8000 <ip address or destination>
  • Windows VMs:    ping -f -l 8000 <ip address or destination>
  • ESX(i):                vmkping -d -s 8000 <ip address or destination>
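Note that an 8000-byte payload only proves the path accepts something bigger than 1500; to verify the full 9000-byte MTU end to end, the largest payload that still fits without fragmentation is 8972 bytes (9000 minus the 20-byte IP header and 8-byte ICMP header), for example:

vmkping -d -s 8972 <ip address or destination>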

Dell Poweredge R710 iSOE key DDR3L Broadcom Quad NICs

By admin, August 27, 2010 12:09 pm

Finally I’ve got time to inspect each individual part thoroughly, and the following are my findings.

  1. Dell PowerEdge iSCSI Offload Key for the LOM NICs: strange, funny little thing that makes a hell of a lot of difference for some people. Broadcom charges extra for this on their 5709 NICs (5709C, not 5709S), and the same applies to HP ProLiant NICs. According to one of the EQL engineers we talked to, it is still best NOT to use the 5709C as an iSOE HBA in ESX 4.1, as you will lose the Jumbo Frame feature and some other nice features will be gone if HBA mode is used with EQL boxes.


  2. DDR3 Low Voltage ECC Registered DIMM 8GB by Samsung: it's nice to have that 20% power saving, but when you go to 2 DPC, the nice low-voltage mode (i.e., 1.35V) will be disabled automatically (i.e., raised to 1.5V instead); the good part is you still get the 1333MHz bandwidth at 2 DPC. DDR3L 1.35V only applies in 1 DPC mode. What about 3 DPC? The old story applies: it drops to 800MHz (tested and proved), and if you populate 3 DPC and fully fill all 18 DIMMs (i.e., 144GB), it will take twice as long to verify memory and boot the server. So it's better not to, as losing 40% of the memory bandwidth matters a lot for ESX.


    Dell’s online documentation nowhere mentions these findings, and they complete what I found in HP’s resources previously. Btw, why does DDR3L still need that aluminum heat spreader if its voltage is really that low?

    More about Samsung’s DDR3 Low-Voltage Ram

  3. Broadcom NetXtreme II 5709 Gigabit NIC with TOE & iSCSI Offload, Quad Port, Copper, PCIe x4: nice to have two of these besides the embedded quad NICs, so in total you will have 12 NICs in one server. The chipset is still BCM5709CC0KPBG; there is no iSCSI key to be found on the NIC, so I guess it's already embedded as well.


I Love You Phillip Morris

By admin, August 18, 2010 9:17 pm

One of Jim Carrey's rare tour-de-force performances in recent years: laughter through tears, sweetness amid the bitterness. Absolutely worth watching!


Who said there aren’t angels living around us?

By admin, August 17, 2010 12:50 pm

Jackie Evancho, 10 years old, sings ‘O mio babbino caro’ from the opera ‘Gianni Schicchi’

Paul Potts, mobile phone salesman, sings ‘Nessun Dorma’

Susan Boyle, housewife, sings ‘I Dreamed a Dream’ from the musical ‘Les Misérables’

Some of my findings from past 2 months research

By admin, August 9, 2010 11:21 pm

The new virtual data center project has been keeping me really busy; the following are some of my findings.

  • Learned that the R810/M910 with 4 sockets will only use 1 memory controller instead of two, so memory bandwidth is cut in half; that sucks! Strangely enough, when the R810 is populated with only 2 sockets, it uses both memory controllers and gains access to all 32 DIMMs. So the R810/M910 is still best as a 2-socket server; that's why we switched to the R710 instead after reading the benchmarks, as it's a waste of money to go for the R810.
  • DDR3L low voltage (1.35V): we populated 2 DPC (8GB x 12 DIMMs), and guess what? The voltage shoots up to the normal 1.5V! No one at Dell pre-sales or Pro-Support could answer this for us; I found out this fact from HP's resources. Ridiculous! Anyway, it's still running at 1333MHz, and that's a blessing.
  • The EqualLogic PS6000XV 15K should be a monster; in the future we will only need to worry about adding front-ends (like R710/R720/R730) and adding back-ends (PS6000XV/PS7000XV, etc.). That's the main selling point of this solution: scalable beyond imagination, which really is the whole reason we selected EQL boxes.
  • With the release of vSphere 4.1: VAAI; iSCSI offload fully supported on the Broadcom 5709 but with no Jumbo Frames, what the heck!; EQL vStorage offload improvements; and the multipath plugin (EQL finally solved the big problem).
  • One PS6000XV box is good enough for 4Gbps; there is absolutely no need to go for 10Gbps for the time being unless you are aiming for that extra 200MB/s (yes, it can only reach about 650MB/s at max; there is no way to reach 1000MB/s in reality). We were also told that by paying about 1/4 of the box's price, you can always upgrade to the 10Gbps PS6010XV in the future, but for our environment IOPS is far more demanding than throughput, so 4Gbps is more than enough. Just get a 48-port PowerConnect 5448 and we should have lots of room to grow from here.
  • There is a special iSCSI key that needs to be purchased in order to have TOE + iSCSI offload on the R710; it's the same with HP ProLiant servers.
  • Found that HP's ProLiant server resources are much more professional than Dell's, but Dell's gear is a lot cheaper (by 1/3 at least), so we just have to live with that.
  • In ESX 4 or above, thick provisioning is always recommended for performance-sensitive VM applications; it's a lot faster.
  • Talked to two of the local IDCs that are going to start a cloud business, but their core technical teams don't seem to know what they are really doing; it seems they have a long way to go compared with their US counterparts.
  • Again, virtualization is the future and you need a good SAN to support it!
  • Talk to your inside sales manager and show your sincerity; you will be rewarded with an unbelievable discount at quarter end!
  • Dell’s EQL expert is really helpful and resourceful, thank you so much! You are really the super-hero, kick-ass type!

Kick-Ass

By admin, August 6, 2010 5:54 pm

All I can say is: what a film! Make sure you stay till the end; that's where the real ass-kicking comes in.

