Citrix XenServer Configuration Guide (Formerly Known as FAQ 1561)
This topic includes:
Citrix XenServer Compatibility Lists
The DataCore Server's Settings
Known Issues in Citrix XenServer Configuration Guide
Overview
This guide provides configuration settings and considerations for Hosts running Citrix XenServer with SANsymphony.
Basic XenServer storage administration skills are assumed including how to connect to iSCSI and Fibre Channel target ports and the discovering, mounting, and formatting of disk devices.
Earlier releases of XenServer may have used different settings than those listed here. If, when upgrading a Citrix host, there are previously configured DataCore-specific settings that are no longer listed, leave the original setting's value as it was.
Change Summary
Refer to DataCore FAQ 838 for the latest version of the FAQ and see the Previous Changes section that lists the earlier changes made to this FAQ.
Changes since January 2021
Section(s) | Content Changes |
---|---|
XenServer compatibility lists – All | Added: XenServer 8.2. |
XenServer compatibility lists – All, Known Issues – All | Removed: all references to End-of-Life XenServer versions up to and including XenServer 7.0. |
If there is still a requirement to use an older XenServer version with SANsymphony, contact DataCore Technical Support for advice on any relevant information that may have been removed from this document.
Citrix XenServer Compatibility Lists
XenServer Versions
Citrix Hypervisor | With ALUA | Without ALUA |
---|---|---|
7.1 | Qualified | Qualified |
7.2 | Qualified | Qualified |
8.2 | Requires SANsymphony 10.0 PSP 16 (minimum) | Requires SANsymphony 10.0 PSP 16 (minimum) |
- Fibre Channel and iSCSI front end port connection types are both qualified.
- SCSI UNMAP support is not available in SANsymphony versions 10.0 PSP 5 and earlier.
Qualified vs. Not Qualified vs. Not Supported
Qualified
This combination has been tested by DataCore and all the host-specific settings listed in this document applied using non-mirrored, mirrored, and Dual virtual disks.
Not Qualified
This combination has not yet been tested by DataCore using Mirrored or Dual virtual disk types. DataCore cannot guarantee 'high availability' (failover/failback, continued access, etc.) even if the host-specific settings listed in this document are applied. Self-qualification may be possible; please see Technical Support FAQ #1506.
Mirrored or Dual virtual disk types are configured at the user's own risk; however, any problems that are encountered while using XenServer versions that are 'Not Qualified' will still receive root-cause analysis.
Non-mirrored virtual disks are always considered 'Qualified' - even for 'Not Qualified' combinations of XenServer/SANsymphony.
Not Supported
This combination has either failed 'high availability' testing by DataCore using Mirrored or Dual virtual disk types, or the operating system's own requirements/limitations (e.g., age, specific hardware requirements) make it impractical to test. DataCore will not guarantee 'high availability' (failover/failback, continued access, etc.) even if the host-specific settings listed in this document are applied. Self-qualification is not possible.
Mirrored or Dual virtual disk types are configured at the user's own risk; however, any problems that are encountered while using XenServer versions that are 'Not Supported' will get best-effort Technical Support (e.g., to get access to virtual disks), but no root-cause analysis will be done.
Non-mirrored virtual disks are always considered 'Qualified' – even for 'Not Supported' combinations of XenServer/SANsymphony.
XenServer Versions that are End of Support Life
Self-qualification may be possible for versions that are considered ‘Not Qualified’ by DataCore but only if there is an agreed ‘support contract’ with Citrix. Please contact DataCore Technical Support before attempting any self-qualification of XenServer versions that are End of Support Life.
For any problems that are encountered while using XenServer versions that are EOSL with DataCore Software, only best-effort Technical Support will be performed (e.g., to get access to virtual disks). Root-cause analysis will not be done.
Non-mirrored virtual disks are always considered 'Qualified'.
The DataCore Server's Settings
Operating System Type
When registering the Host for the first time, choose the 'Citrix XenServer' menu option.
Port Roles
Ports that are used to serve virtual disks to hosts should only have the front-end role checked. While it is technically possible to check additional roles on a front-end port (i.e., Mirror and Backend), this may cause unexpected results after stopping the SANsymphony software.
When SANsymphony has been stopped, Front-end and default Mirror Ports will also be 'stopped'. Ports with the back-end role or Mirror Ports that are explicitly configured to use only the Initiator SCSI Mode will remain ‘running’.
Multipathing
The Multipathing Support option should be enabled so that Mirrored virtual disks or Dual virtual disks can be served to hosts from all available DataCore FE ports. Refer to the Multipathing Support section from Hosts.
Non-mirrored Virtual Disks and Multipathing
Non-mirrored virtual disks can still be served to multiple hosts and/or multiple host ports from one or more DataCore Server FE ports if required; in this case the host can use its own multipathing software to manage the multiple host paths to the Single virtual disk as if it was a Mirrored or Dual virtual disk.
Serving Virtual Disks
For the First Time
DataCore recommends that, before serving any virtual disk to a host for the first time, all DataCore front-end ports on all DataCore Servers are correctly discovered by the host.
Then, from within the SANsymphony Console, verify that the virtual disk is marked Online, up to date, and that the storage sources have a host access status of Read/Write.
To More than One Host Port
DataCore virtual disks always have their own unique Network Address Authority (NAA) identifier that a host can use to manage the same virtual disk being served to multiple ports on the same host server and the same virtual disk being served to multiple hosts.
While DataCore cannot guarantee that a host's operating system uses a disk device's NAA to identify a disk device served to it over different paths, we have generally found that it does. And while there is sometimes a convention that all paths to the same disk device should always use the same LUN number to guarantee consistent device identification, this may not be technically required. Always refer to the host operating system vendor's own documentation for advice on this.
DataCore's software does, however, always try to create the mappings between the host's ports and the DataCore Server's front-end (FE) ports for a virtual disk using the same LUN number where it can. The software first finds the next available (lowest) LUN number for the host/DataCore FE mapping combination being applied, and then tries to apply that same LUN number to all other mappings being attempted while the virtual disk is being served. If any host/DataCore FE port combination being requested at that moment is already using that LUN number (e.g., because the host already has other virtual disks served to it), the software finds the next available LUN number and applies it to those specific host/DataCore FE mappings only.
The XenServer Host’s Settings
Multipath Configuration Settings
The 'Defaults' Section
Applies to all versions of XenServer
Add or modify the polling_interval value to 10 (seconds):

defaults {
    polling_interval 10
}
This is a DataCore-required value that controls how often the host checks for access to a virtual disk path previously detected as failed. A smaller setting will interfere with host performance.
Do not add the 'polling_interval' parameter to the 'device' section as it will not work as expected.
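After editing, the presence of the required value can be confirmed with a quick grep. The sketch below runs against a scratch copy so it is safe to try anywhere; on an actual XenServer host, point CONF at /etc/multipath.conf instead (the variable name and scratch path are illustrative only):

```shell
# Stage a scratch copy of the defaults section (illustrative only);
# on a XenServer host, set CONF=/etc/multipath.conf instead.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
defaults {
    polling_interval 10
}
EOF
# Confirm the DataCore-required value is present
grep -q 'polling_interval[[:space:]]*10' "$CONF" && echo "polling_interval OK"
```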
The 'Device' Section
The following entries are all DataCore-required values. See the notes section following for more information.
With ALUA:
# XenServer 8.2
device {
    vendor "DataCore"
    product "Virtual Disk"
    path_checker "tur"
    path_grouping_policy group_by_prio
    failback immediate
    path_selector "round-robin 0"
    prio alua
}

# XenServer 7.1/7.2
device {
    vendor "DataCore"
    product "Virtual Disk"
    path_checker tur
    failback 30
    path_selector "round-robin 0"
    rr_min_io_rq 100
    # rr_min_io 100
    path_grouping_policy group_by_prio
    prio alua
}
Without ALUA:
# XenServer 8.2
device {
    vendor "DataCore"
    product "Virtual Disk"
    path_checker "tur"
    path_grouping_policy group_by_prio
    failback immediate
    path_selector "round-robin 0"
}

# XenServer 7.1/7.2
device {
    vendor "DataCore"
    product "Virtual Disk"
    path_checker tur
    failback 10
    path_grouping_policy failover
    prio hp_sw
}
- vendor / product
  By default, all virtual disks created in SANsymphony will use the vendor and product strings 'DataCore' and 'Virtual Disk' listed above. Also see Changing Virtual Disk Settings.
- failback
  For XenServer 7.1/7.2, an extra 'wait' period (10 seconds) was added to prevent unnecessary 'failback' attempts.
- path_checker
  This is a DataCore-specific value. No other value should be used.
- path_grouping_policy
  This is a DataCore-specific value. No other value should be used.
- path_selector (applies to ALUA-enabled hosts only)
  This is a DataCore-specific value. No other value should be used.
- prio
  For XenServer 7.1/7.2, the 'hp_sw' switch was required to get consistent failover/failback during DataCore's own qualification tests.
- rr_min_io_rq (applies to ALUA-enabled hosts only)
  Use on systems running kernels newer than 2.6.30. This is a DataCore-required value. No other value should be used.
- rr_min_io (applies to ALUA-enabled hosts only)
  Use on systems running kernels older than 2.6.31. This is a DataCore-required value. No other value should be used.
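One cautious way to apply a stanza is to stage it in a scratch file first, review it, and only then merge it into /etc/multipath.conf by hand. A minimal sketch, using the XenServer 8.2 'With ALUA' entry from above (the temporary path is illustrative):

```shell
# Stage the DataCore device stanza for review before merging it into
# /etc/multipath.conf yourself (the scratch file path is illustrative).
STAGE=$(mktemp)
cat > "$STAGE" <<'EOF'
device {
    vendor "DataCore"
    product "Virtual Disk"
    path_checker "tur"
    path_grouping_policy group_by_prio
    failback immediate
    path_selector "round-robin 0"
    prio alua
}
EOF
# Sanity check: the stanza names the DataCore vendor/product pair
grep -q '"DataCore"' "$STAGE" && grep -q '"Virtual Disk"' "$STAGE" \
    && echo "stanza staged in $STAGE"
```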
Known Issues in Citrix XenServer Configuration Guide
The following is intended to make DataCore Software users aware of any issues that affect performance, access or may give unexpected results under specific conditions when SANsymphony is used in configurations with XenServer Hosts.
Some of these Known Issues have been found during DataCore’s own testing, but others have been reported by users. Often, the solutions identified for these issues were not related to DataCore's own products.
DataCore cannot be held responsible for incorrect information regarding another vendor’s products and no assumptions should be made that DataCore has any communication with these other vendors regarding the issues listed here.
We always recommend that the vendors should be contacted directly for more information on anything listed in this section.
For ‘Known issues’ that apply specifically to DataCore Software’s own products, please refer to the relevant DataCore Software Component’s release notes.
Hardware
Affects all XenServer Versions
QLogic QLA405x/406x iSCSI HBAs
Use QLogic’s Firmware revision 3.01.49 or greater.
General
Affects XenServer 7.1/7.2
Virtual disks must be served to both the XenServer pool master and its members to be discovered, as pool member operations always go 'via' the XenServer 'Master' server.
For example, if a XenServer Pool member has virtual disk 1 served to it but the Master Server does not (or vice versa), then the XenServer will never discover virtual disk 1 when scanning for it.
Affects XenServer 7.1/7.2
XenServer may not detect any new size changes from a previously served virtual disk
Even if a Storage Repository (SR) is destroyed (and the virtual disk(s) unserved), any subsequent change to a virtual disk's size may not be re-detected if it is served back to the same XenServer host; the previous virtual disk size will be reported.
Affects all XenServer Versions
The multipath.conf file may get overwritten when upgrading
After upgrading from an older version of XenServer, always check that the upgrade has not overwritten changes made to the multipath.conf file. Depending on the version of XenServer, multipath.conf may be a soft link that points to /etc/multipath-enabled.conf.
It is usually not enough to simply save a copy of this file and re-apply it after the upgrade without a further reboot – contact Citrix for more advice.
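One way to make this check systematic is to keep a copy of the working file before the upgrade and diff it afterwards. The sketch below simulates the comparison with scratch files; on a real host, the pre-upgrade copy would be taken from /etc/multipath.conf itself (all file names here are illustrative):

```shell
# Simulate comparing a pre-upgrade copy of multipath.conf against the
# post-upgrade file (scratch files stand in for the real paths).
PRE=$(mktemp); POST=$(mktemp)
printf 'defaults {\n    polling_interval 10\n}\n' > "$PRE"
printf 'defaults {\n}\n' > "$POST"   # pretend the upgrade clobbered the file
if ! diff -q "$PRE" "$POST" >/dev/null; then
    echo "multipath.conf changed - re-apply the DataCore settings"
fi
```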
Appendices
A: Preferred Server and Preferred Path Settings
Without ALUA Enabled
If Hosts are registered without ALUA support, the Preferred Server and Preferred Path settings will serve no function. All DataCore Servers and their respective Front End (FE) paths are considered ‘equal’.
It is up to the Host’s own Operating System or Failover Software to determine which DataCore Server is its preferred server.
With ALUA Enabled
Setting the Preferred Server to ‘Auto’ (or an explicit DataCore Server), determines the DataCore Server that is designated ‘Active Optimized’ for Host IO. The other DataCore Server is designated ‘Active Non-Optimized’.
If for any reason the Storage Source on the preferred DataCore Server becomes unavailable, and the Host Access for the virtual disk is set to Offline or Disabled, then the other DataCore Server will be designated the ‘Active Optimized’ side. The Host will be notified by both DataCore Servers that there has been an ALUA state change, forcing the Host to re-check the ALUA state of both DataCore Servers and act accordingly.
If the Storage Source on the preferred DataCore Server becomes unavailable but the Host Access for the virtual disk remains Read/Write (for example, if only the storage behind the DataCore Server is unavailable while the FE and MR paths remain connected, or if the Host physically becomes disconnected from the preferred DataCore Server, e.g., a Fibre Channel or iSCSI cable failure), then the ALUA state will not change for the remaining 'Active Non-Optimized' side. In this case, the DataCore Server will not prevent access to the Host, nor will it change the way READ or WRITE IO is handled compared to the 'Active Optimized' side, but the Host will still register this DataCore Server's paths as 'Active Non-Optimized', which may (or may not) affect how the Host behaves generally.
Refer to Preferred Servers and Preferred Paths sections from Port Connections and Paths for more information.
In the case where the Preferred Server is set to ‘All’, then both DataCore Servers are designated ‘Active Optimized’ for Host IO.
All IO requests from a Host will use all paths to all DataCore Servers equally, regardless of the distance that the IO has to travel to the DataCore Server. For this reason, the 'All' setting is not normally recommended. If a Host has to send a WRITE IO to a 'remote' DataCore Server (where the IO path is significantly more distant than that of the other, 'local' DataCore Server), significant WAIT time can accrue: the IO travels across the SAN to the remote DataCore Server, the remote DataCore Server mirrors it back to the local DataCore Server, the mirror write is acknowledged from the local DataCore Server to the remote DataCore Server, and finally the acknowledgment is sent back across the SAN to the Host.
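To make the cost concrete, assume a one-way SAN latency of 5 ms to the remote DataCore Server and negligible local latency (both figures are illustrative assumptions, not measured values). A WRITE acknowledged via the remote side then crosses the long link four times:

```shell
# Illustrative arithmetic only: the 5 ms one-way latency is an assumed figure.
ONE_WAY_MS=5
# host -> remote write, remote -> local mirror, local -> remote mirror ack,
# remote -> host ack: four traversals of the long link per WRITE
echo "$(( ONE_WAY_MS * 4 )) ms added per WRITE"   # prints: 20 ms added per WRITE
```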
The benefits of being able to use all Paths to all DataCore Servers for all virtual disks are not always clear cut. Testing is advised.
For Preferred Path settings it is stated in the SANsymphony Help:
A preferred front-end path setting can also be set manually for a particular virtual disk. In this case, the manual setting for a virtual disk overrides the preferred path created by the preferred server setting for the host.
So for example, if the Preferred Server is designated as DataCore Server A and the Preferred Paths are designated as DataCore Server B, then DataCore Server B will be the ‘Active Optimized’ Side not DataCore Server A.
In a two-node Server group there is usually nothing to be gained by making the Preferred Path setting different to the Preferred Server setting and it may also cause confusion when trying to diagnose path problems, or when redesigning your DataCore SAN with regard to Host IO Paths.
For Server Groups that have three or more DataCore Servers, where one (or more) of these DataCore Servers shares Mirror Paths with other DataCore Servers, setting the Preferred Path makes more sense.
For example, if DataCore Server A has two mirrored virtual disks, one with DataCore Server B and one with DataCore Server C, and DataCore Server B also has a mirrored virtual disk with DataCore Server C, then using just the Preferred Server setting to designate the 'Active Optimized' side for the Host's virtual disks becomes more complicated. In this case, the Preferred Path setting can be used to override the Preferred Server setting for a much more granular level of control.
B: Reclaiming Storage from Disk Pools
How Much Storage will be Reclaimed?
This is impossible to predict. SANsymphony can only reclaim Storage Allocation Units that have no block-level data on them. If a host writes its data ‘all over’ its own filesystem, rather than contiguously, the amount of storage that can be reclaimed may be significantly less than expected.
Defragmenting data on virtual disks
It may be possible to use a host's own defragmentation tools to consolidate data spread out across the host's filesystem, but care should be taken, as even more storage may be allocated while the existing data is defragmented.
Once any defragmentation is complete, additional steps are needed to wipe the 'free' filesystem space on the host and then use SANsymphony's 'Manual Reclamation' feature; see below.
Notes on SANsymphony's Reclamation Feature
Automatic Reclamation
SANsymphony checks for any ‘zero’ write I/O as it is received by the Disk Pool and keeps track of which block addresses they were sent to. When all the blocks of an allocated SAU have received ‘zero’ write I/O, the storage used by the SAU is then reclaimed. Mirrored and replicated virtual disks will mirror/replicate the ‘zero’ write I/O so that storage can be reclaimed on the mirror/replication destination DataCore Server in the same way.
Manual Reclamation
SANsymphony checks for ‘zero’ block data by sending read I/O to the storage. When all the blocks of an allocated SAU are detected as having ‘zero’ data on them, the storage used by the SAU is then reclaimed.
Mirrored virtual disks will receive the manual reclamation ‘request’ on all DataCore Servers involved in the mirror configuration at the same time and each DataCore Server will read from its own storage. The Manual reclamation ‘request’ is not sent to replication destination DataCore Servers from the source. Replication destinations will need to be manually reclaimed separately.
Reclaiming Storage on the Host Manually
A suggestion would be to create a raw VDI on the storage repository using the 'vdi-create' command.
Here is an example with a 50 GB VDI:

xe vdi-create sr-uuid={Datacore-SR-UUID} type=user virtual-size=50GiB \
    name-label=reclaimdisk sm-config:type=raw

Now present the VDI device to an existing virtual machine, then from within the VM's own operating system issue 'zeroes' to the VDI disk device.
- For Microsoft Windows VMs use Microsoft’s own ‘sdelete’ tool.
- For Unix/Linux-based Hosts, use the dd command to fill all of the unused file system space with 'all-zero' write I/O.
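A minimal sketch of the Unix/Linux zero-fill step, run from inside the VM. The mount point is an assumption (a scratch directory stands in for it here); substitute the filesystem that sits on the DataCore virtual disk, and drop the count= limit so dd runs until the free space is consumed:

```shell
# Zero-fill sketch (illustrative): FS stands in for the mount point of the
# filesystem on the DataCore virtual disk; count=8 limits this demo to 8 MiB.
FS=$(mktemp -d)
dd if=/dev/zero of="$FS/zerofill.tmp" bs=1M count=8 2>/dev/null
sync                       # flush the zero writes down to the virtual disk
rm -f "$FS/zerofill.tmp"   # release the space before running Manual Reclamation
```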
Previous Changes
Section(s) | Content Changes | Date |
---|---|---|
General | Updated: This document has been reviewed for SANsymphony 10.0 PSP 11. No additional settings or configurations are required. | January 2021 |
The Citrix Hypervisor Host’s settings | Added: Both 7.x and 6.5 Multipath Configuration: use the rr_min_io_rq setting (instead of rr_min_io) for kernels newer than 2.6.30. | October 2019 |
General | Updated: This document has been reviewed for SANsymphony 10.0 PSP 9. No additional settings or configurations are required. | |
The Citrix Hypervisor Host’s settings | Updated: Both 7.x and 6.5 Multipath Configuration: fixed the inconsistent explanation for the path_selector and path_checker settings within the notes section. | |
The DataCore Server’s settings – Port Roles | Updated | July 2019 |
General | Removed | |
Citrix Hypervisor Compatibility Lists | Added: Citrix Hypervisor 8.x added to the compatibility list. Currently, this version is considered ‘Not Supported’ for SANsymphony 9.0 PSP 4 Update 4 and ‘Not Qualified’ for all versions of SANsymphony 10.x. | June 2019 |
Citrix Hypervisor Compatibility Lists | Updated: Versions of XenServer that were marked as ‘Not Qualified’ but are now considered ‘End of Life’ by Citrix have been marked as ‘Not Supported’. See the Qualified vs. Not Qualified vs. Not Supported section for the difference. | |
General | Updated: This document has been reviewed for SANsymphony 10.0 PSP 8. No additional settings or configurations are required. | October 2018 |
XenServer compatibility list | Updated: SANsymphony is now supported for all 7.x Citrix versions. | May 2018 |
General | Updated: This document has been reviewed for SANsymphony 10.0 PSP 7. No additional settings or configurations are required. | February 2018 |
Appendix B – Configuring Disk Pools | Removed: The information here has been superseded by the information in The DataCore Server – Best Practice Guidelines. What was previously ‘Appendix C’ has been moved to ‘Appendix B’. | |
XenServer compatibility list | Updated: XenServer 7.2 is now Qualified with SANsymphony-V 10.x with or without ALUA. | November 2017 |