EqualLogic MEM and Number of iSCSI Connections in the Pool

By admin, October 12, 2011 11:13 pm

The following is from the latest October Equallogic newsletter:

How will installing Dell EqualLogic Multipathing Extension Module (MEM) for MPIO on my ESXi hosts affect the number of iSCSI connections in an EqualLogic pool?

The interesting answer to this question is that for most environments the number of connections will increase, but for a few it will go down.  The number of iSCSI connections created by MEM depends on the following variables:

  • The number of ESX hosts you have
  • The number of vmkernel ports assigned for iSCSI on each ESX server
  • The number of EqualLogic members in the pool
  • The number of volumes in the EqualLogic pool being accessed from the ESXi hosts
  • The MEM parameter settings

The MEM is designed to provide many enhancements over VMware vSphere Round Robin multipathing and standard fixed-path functionality, including automatic connection management, automatic load balancing across multiple active paths, increased bandwidth, and reduced network latency.  However, before installing the MEM you should evaluate and understand how the number of iSCSI connections will change in your environment and plan accordingly. In some cases it may be necessary to adjust the MEM parameter settings from the defaults to avoid exceeding the array firmware limits.

The MEM parameters are listed below.  You’ll notice each parameter has a default, minimum, and maximum value.  This note explains how to determine whether the default values are suitable for your environment and, if not, what values will keep the total number of iSCSI connections to the EqualLogic pool below the pool’s iSCSI connection limit.

Parameter        Default  Minimum  Maximum  Description
totalsessions    512      64       1024     Maximum total sessions created to all EqualLogic volumes by each ESXi host.
volumesessions   6        3        12       Maximum number of sessions created to each EqualLogic volume by each ESXi host.
membersessions   2        1        4        Maximum number of sessions created to each volume slice (portion of a volume on a single member) by each ESXi host.

Single EqualLogic Array in Pool

Let’s start with a simple example: a four-node ESXi cluster with 4 vmkernel ports on each host.  Those hosts all connect to 30 volumes in an EQL pool, and that pool has a single EQL array. For each individual ESXi host the following variables affect how many connections MEM creates:

Input                                 Value
membersessions MEM parameter value    2
volumesessions MEM parameter value    6
totalsessions MEM parameter value     512
# of vmkernel ports                   4
# of volumes                          30
# of arrays in pool                   1

So the first step in our ESXi host connection math is to derive some subvalues from these parameters, which we’ll call X, Y and Z.

X = [Lesser of (# of vmkernel ports) or (membersessions parameter value)]

Y = [# of volumes connected to by this ESXi host]

Z = [Lesser of (# of arrays in pool) or (volumesessions / membersessions)]

We then use X, Y and Z to calculate the total MEM sessions for one ESXi host using the formula below.

Total iSCSI Sessions from ESXi host = [Lesser of (X * Y  *  Z)  or (totalsessions MEM parameter value)]

So in this particular scenario X = 2 (the membersessions MEM parameter value), Y = 30 (# of volumes connected to by this ESXi host) and Z = 1 (the # of arrays in the pool).  So for one ESXi host in this scenario we have a total of 60 iSCSI connections.  Since 60 is less than the totalsessions MEM parameter limit of 512, the MEM will create all 60 connections on this ESXi host.

Since we have 4 ESXi hosts in our environment, this EQL array will have a total of 240 (4 x 60) connections from those ESXi hosts.
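The per-host math above can be sketched in a few lines of Python. This is only an illustration of the formula from this note; the function name and structure are mine, not part of the MEM tooling:

```python
def sessions_per_host(vmk_ports, volumes, arrays_in_pool,
                      membersessions=2, volumesessions=6, totalsessions=512):
    """Estimate the iSCSI sessions MEM creates from one ESXi host."""
    x = min(vmk_ports, membersessions)                         # X: sessions per volume slice
    z = min(arrays_in_pool, volumesessions // membersessions)  # Z: slices used per volume
    # Y is simply the volume count; the product is capped by totalsessions
    return min(x * volumes * z, totalsessions)

# Single-array pool from the example: 4 vmkernel ports, 30 volumes
per_host = sessions_per_host(vmk_ports=4, volumes=30, arrays_in_pool=1)
print(per_host)      # 60 sessions per ESXi host
print(4 * per_host)  # 240 sessions to the pool from all four hosts
```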


Why would I get fewer iSCSI connections with MEM?

Let’s go back to our statement that in some environments you may have fewer connections with MEM than with VMware vSphere Round Robin multipathing.  Typically this will only happen if you have a single-member group and the number of vmkernel ports is greater than the membersessions MEM parameter.  In our original example we had four vmkernel ports, so with VMware vSphere Round Robin multipathing you would have four connections to each volume.  When you install MEM it will look at the membersessions MEM parameter and change the number of connections to the default of 2 connections per volume.

You may be concerned that going from four connections per volume to two might have a performance impact, but this is usually not the case. MEM will still use all four vmkernel ports across the various volumes; it just won’t use every vmkernel port for every volume. In addition, EqualLogic array connection load balancing will keep the network load on each individual array port balanced as evenly as possible.


Adding Additional EqualLogic Arrays in Pool

Let’s say your environment grows and you need more storage capacity.  Let’s add another two EqualLogic arrays to the single-member pool in our original example.  The volumes in the pool will now spread out over all three arrays. That is, there is a “slice” of each volume on each of the three arrays.  Now the third MEM parameter, volumesessions, comes into play.  MEM will create 2 connections (the membersessions default) to each of the three arrays that hold a slice of the volume. MEM is aware of what portion of each volume is on each array, so these connections allow it to pull volume data down to the ESX server more efficiently. Standard ESX MPIO doesn’t know about the EqualLogic volume structure, so it can’t be as efficient as MEM.

The only parameter that changes in the table from the first example is the number of arrays, which increases from 1 to 3.

So let’s get our subvalues X, Y and Z for the situation where there are 3 arrays rather than just one:

X = [Lesser of (# of vmkernel ports) or (membersessions parameter value)]  = 2

Y = [# of volumes connected to by this ESXi host] = 30

Z = [Lesser of (# of arrays in pool) or (volumesessions / membersessions)]

= Lesser of (3) or (6/2)

= 3

So X * Y * Z = 180 connections per ESXi host in this example.  180 is less than the totalsessions limit of 512, so MEM will create a total of 180 connections from each ESXi host.

Since we have 4 ESXi hosts in our environment, this EQL pool will have a total of 720 (4 x 180) connections from those ESXi hosts.  720 total connections is within the limits for a PS6xxx series array but is well over the connection limit for a PS4xxx series pool.  And if any additional expansion of the environment occurs, such as adding two additional ESXi hosts, the session count grows to 1080 (6 x 180). So in some circumstances we may need to make some adjustments to the MEM parameters or the array group configuration to optimize our configuration.  We’ll talk about that in the next section.
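The three-array numbers can be checked with the same formula, this time totaled across all hosts. Again, this is only a sketch of the arithmetic in this note, not a tool from the MEM package:

```python
def pool_sessions(hosts, vmk_ports, volumes, arrays_in_pool,
                  membersessions=2, volumesessions=6, totalsessions=512):
    """Total MEM iSCSI sessions to the pool from all ESXi hosts."""
    x = min(vmk_ports, membersessions)                         # X
    z = min(arrays_in_pool, volumesessions // membersessions)  # Z
    per_host = min(x * volumes * z, totalsessions)             # capped per host
    return hosts * per_host

# Three-array pool, 30 volumes, 4 vmkernel ports per host
print(pool_sessions(hosts=4, vmk_ports=4, volumes=30, arrays_in_pool=3))  # 720
print(pool_sessions(hosts=6, vmk_ports=4, volumes=30, arrays_in_pool=3))  # 1080
```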

Planning the Number of iSCSI Connections

So if you have done your math and see that you’re getting near the array firmware limits for connections, how can you alter the number of connections?  There are several options, including:

  • reduce the number of volumes per pool
  • reduce the number of ESX servers that connect to each volume
  • move some of the EQL arrays/volumes to another pool
  • reduce the membersessions MEM parameter limit on each ESX server
  • reduce the volumesessions MEM parameter limit on each ESX server
  • reduce the totalsessions MEM parameter limit on each ESX server

Remember when you’re doing your math that you also need to include any connections from non-ESXi hosts when deciding if you’re going to exceed the array iSCSI connection limit.
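As a sketch of the last few options: lowering membersessions is often the quickest lever, because it directly reduces the sessions per volume slice. The pool limit used below is purely illustrative (the real figure depends on your array series and firmware), and the function is my own restatement of this note’s formula:

```python
# Illustrative pool connection limit only; check the firmware
# documentation for your array series for the actual figure.
POOL_LIMIT = 1024

def pool_sessions(hosts, vmk_ports, volumes, arrays_in_pool,
                  membersessions=2, volumesessions=6, totalsessions=512):
    x = min(vmk_ports, membersessions)
    z = min(arrays_in_pool, max(1, volumesessions // membersessions))
    return hosts * min(x * volumes * z, totalsessions)

# Expanded environment: 6 hosts, 3 arrays, 30 volumes, 4 vmkernel ports
for ms in (2, 1):
    total = pool_sessions(6, 4, 30, 3, membersessions=ms)
    print(ms, total, "OK" if total <= POOL_LIMIT else "over limit")
```

With the default membersessions of 2 this environment lands at 1080 sessions, over the illustrative limit; dropping membersessions to 1 brings it down to 540.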

As we’ve seen, a little planning will help you keep the iSCSI connections to your EqualLogic pool at an optimal level.

2 Responses to “EqualLogic MEM and Number of iSCSI Connections in the Pool”

  1. Danny says:

    Wah! I just use the default value on my PS6000XV.

  2. Martin says:

    I wish I had read this before starting to deploy MEM – on Dell’s recommendations I might add.

    I have just added MEM to the third of 27 ESXi hosts and hit the “1024 connections” limit already. We have 7 arrays in our pool configured as one storage pool to capitalise on max number of spindles per LUN and dynamic performance management.

    It looks like we are going to have to manage the way hosts are allowed to connect to volumes. Presently all hosts see all volumes – which looks like a no-go option with MEM.

    And I thought MEM was going to be one of those nice quick wins !

    Great article. Thanks.
