DataCore SANsymphony - iSCSI Network and Best Practices (Formerly Known as FAQ 1691 and 1626)
Overview
This document combines DataCore FAQs 1691 (iSCSI Best Practices) and 1626 (iSCSI Network) into a single guide for iSCSI network design, configuration, and tuning. It includes key performance considerations, PowerShell scripts for Windows (2008 R2–2022), and recommendations for NIC, TCP, BIOS, security, and virtualization settings to ensure reliable, high‑performance iSCSI deployments with DataCore SANsymphony.
iSCSI Network
DataCore has supported sending SCSI commands over an IP network since 2001, first with its own STP driver, until the iSCSI standard was ratified and an iSCSI driver for Windows became available on Windows 2000 in 2003.
DataCore provides an iSCSI target driver but relies on third-party iSCSI initiator drivers to send the packets across the IP network. iSCSI sits above TCP in the network stack and uses it as the transport protocol to carry commands from the initiator to the target. iSCSI uses specific name formats, typically IQN (iSCSI Qualified Name) but sometimes EUI (Extended Unique Identifier).
DataCore recommends a few specific parameters that help stabilize and improve performance. Please refer to the iSCSI Best Practices section for scripts that apply these recommendations. This document explains these settings and their potential impact.
This networking area is complex; the operating system version and the specifics of each customer implementation determine the best settings for a given installation.
What are the differences between iSCSI Network and Fibre Channel?
One of the major differences between iSCSI and FC relates to I/O congestion. When an iSCSI path is overloaded, the TCP/IP protocol drops packets and requires them to be resent. FC communication over a dedicated path has a built-in pause mechanism when congestion occurs.
When a network path carrying iSCSI storage traffic is oversubscribed, a bad situation quickly grows worse and performance further degrades as dropped packets must be resent.
There can be multiple reasons for an iSCSI path being overloaded, including:
- Oversubscription (too much traffic)
- Network switches that have a low port buffer
- Insufficient network bandwidth for the Ethernet standard used (1Gb, 10Gb, 25Gb, 40Gb, etc.)
Other mechanisms, such as port aggregation and link bonding, can deliver greater network bandwidth.
When implementing software iSCSI with network interface cards rather than dedicated iSCSI Adapters, the interfaces can consume additional CPU resources. One way of reducing this demand is to use a feature called a TOE (TCP/IP offload engine).
TOEs shift TCP packet processing tasks from the server CPU to specialized TCP processors on the network adapter or storage device. Most enterprise-level networking chipsets today offer TCP offload or checksum offload, which reduces CPU overhead.
How does iSCSI Network architecture work?
iSCSI initiators must manage multiple, parallel communication links to multiple targets. Similarly, iSCSI targets must manage multiple, parallel communications links from multiple initiators.
Several identifiers exist in iSCSI to make this happen, including:
- iSCSI Name
- ISID (initiator session identifier)
- TSID (target session identifier)
- CID (iSCSI connection identifier)
- iSCSI portals
What are iSCSI Network Names, Initiators, and Targets?
iSCSI nodes have globally unique names that do not change when Ethernet adapters or IP addresses change. iSCSI supports two name formats as well as aliases:
- EUI (Extended Unique Identifier)
- IQN (iSCSI Qualified Name)
iSCSI nodes act as either:
- Initiators – such as hosts, which are data consumers
- Targets – such as disk arrays or tape libraries, which are data providers
iSCSI adapters come in two forms:
- Software iSCSI adapters that use standard network cards
- Hardware iSCSI adapters with their own processing and offload capabilities
DataCore provides a target iSCSI driver that interfaces with a standard network adapter or iSCSI hardware adapter. The DataCore iSCSI driver has been built and configured to handle many hosts accessing it across the network.
A DataCore target is implemented so that each distinct IP address has its own distinct IQN, whereas many hosts use a single global IQN that all of the host's ports share.
When using software iSCSI on the Host, multiple NICs will assume the same IQN (as is the case with Microsoft Windows and VMware ESX). With versions up to SANsymphony 10.0.7.2, only one IP address (sharing an IQN) connecting to the same DataCore target is supported.
What are the best practices for iSCSI Network design?
Network design is key to a reliable, well-performing iSCSI deployment.
Avoid Oversubscription:
Oversubscription occurs when more users are connected to a system than can be fully supported at the same time.
Networks and servers are almost always designed with some amount of oversubscription, assuming that users do not all need the service simultaneously.
If they do, delays are certain and outages are possible. Oversubscription is permissible on general-purpose LANs, but you should not use an oversubscribed configuration for iSCSI.
Dedicated LAN for iSCSI:
Best practice is to have a dedicated LAN for iSCSI traffic and not share the network with other traffic. Do not oversubscribe the dedicated LAN, and keep the iSCSI traffic separate from other networks, including management networks.
What TCP congestion settings are recommended for iSCSI Network?
TCP settings control many characteristics of the connection between IP addresses. Poorly chosen settings can mean that unnecessary extra traffic is transferred over the connection, or that space is reserved for traffic that will never be used.
Prior to Windows 2012, these settings had to be applied individually.
Since Windows 2012, DataCore recommends the transport filter template called 'Datacenter' to optimize these settings; a single command applies numerous changes to TCP parameters.
Refer to the New-NetTransportFilter documentation for more details.
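On Windows 2012 and later this is a single PowerShell command, shown again in the iSCSI Best Practices section below, which applies the Datacenter template to iSCSI traffic on the default port 3260:

New-NetTransportFilter -SettingName Datacenter -LocalPortStart 3260 -LocalPortEnd 3260 -RemotePortStart 0 -RemotePortEnd 65535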
What power settings are recommended for iSCSI Network?
SANsymphony handles all I/O and as such needs the CPU optimized for high performance at all times.
- Set System Power Option: High Performance
- Disable Power Saving: Power Saving should be disabled on all Network Adapters to maintain the highest response.
All BIOS settings should also be applied as in the Best Practice Guide.
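As a minimal sketch, both power recommendations above can be applied from PowerShell; the adapter name "iSCSI1" is a hypothetical example:

# Switch the system power plan to High Performance (well-known scheme GUID)
powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
# Turn off power saving on an iSCSI network adapter
Disable-NetAdapterPowerManagement -Name "iSCSI1"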
What network adapter settings are recommended for iSCSI Network?
Make sure the Network card has the latest driver and firmware from the manufacturer applied, as updates often improve performance and resolve bottlenecks. Network throughput and resource usage can be improved by tuning the network adapter.
General Network Adapter Settings
The correct tuning settings depend on the network adapter, the workload, the host computer resources, and performance targets.
Recommended NIC Settings (a PowerShell sketch for applying several of these follows the list):
- Interrupt Moderation: Enabled (preferably Adaptive).
- Interrupt Moderation limits the rate of interrupts to the CPU during packet transmit and receive. Interrupts are handled as a group, and CPU utilization decreases. As a result, the performance gains from load balancing in adapter teaming become a smaller part of overall performance compared to the gains from the increased server CPU headroom that interrupt moderation provides. Some adapters offer an 'Adaptive' setting, which adjusts according to the interrupt rates that occur.
- Low Latency Interrupts: Off
- Disabling low latency interrupts prevents excessive context switching, improving efficiency for high‑throughput workloads.
- NUMA Node: System Default
- Ensures the NIC aligns with the system’s NUMA architecture for optimal memory access and performance.
- Receive Side Scaling (RSS): Disabled
- RSS distributes network processing across multiple cores, but for iSCSI traffic, disabling it can reduce latency and improve predictability.
- Virtualization: Disabled
- Disabling virtualization features on iSCSI NICs prevents additional overhead caused by hypervisor‑level network virtualization.
- Jumbo Frames: Disabled.
- Jumbo frames aim to deliver additional throughput by increasing the size of the payload in each frame from a default MTU of 1,500 to an MTU of up to 9,000. Delivery of the larger payload can take longer, and if any part is not delivered the whole payload has to be resent. Implementation also needs great care and consideration: all devices in the I/O path (iSCSI target, physical switches, network interface cards) must implement jumbo frames for this option to operate correctly. A common issue with jumbo-frame configurations is that the MTU value on the switch is not set correctly. In most cases it must be higher than that of the hosts and storage, which are typically set to 9,000; switches must be set higher to account for IP overhead.
- Large Send Offload (LSO): Enabled
- LSO enables the adapter to offload the task of segmenting TCP messages into valid Ethernet frames. The adapter is able to complete data segmentation much faster than the OS, and LSO can improve transmission performance. In addition, the adapter will use fewer CPU resources. It is enabled by default on both Broadcom and Intel adapters. Disabling LSO resulted in decreased performance in almost every workload.
- Offloading Options: All enabled (IPsec Offload set to Auth Header & ESP Enabled).
- The network adapter may not be powerful enough to handle the offload requirements when there is high throughput. Generally, the throughput itself is not a limitation of the adapter, and enabling offload capabilities allows throughput to be sustained if needed.
- Packet Priority & VLAN: Both enabled.
- This allows control of sending and receiving tagged frames for QoS and VLAN. Priority Enabled sends and receives QoS‑tagged frames, while VLAN Enabled sends and receives VLAN‑tagged frames.
- Flow Control: Enabled.
- Flow control regulates network traffic by pausing frames when necessary, helping prevent buffer overflows. It is enabled by default and is beneficial in congested network environments.
- Network Adapter Transmit/Receive Buffers: Transmit 16384, Receive 4096
- Some network adapters default to low transmit and receive buffer values to conserve host memory. These low settings can cause dropped packets and reduced performance. Where the adapter does not support these exact values, configure them as close as possible, particularly the transmit buffer size.
- Receive Segment Coalescing (RSC): Enabled.
- RSC allows the adapter to combine multiple incoming packets into larger segments, reducing the number of IP headers that need processing. This decreases CPU overhead and improves performance, especially in receive‑intensive workloads such as DataCore Servers handling traffic from many hosts. DataCore recommends enabling RSC.
- For ESX NIC settings:
- Use VMware VMXNET 3 virtual NIC adapters (no special settings are required in SSV).
- Update the ESX physical NIC drivers and firmware to the latest versions listed on the VMware HCL. Running the latest ESX build is not sufficient.
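A minimal PowerShell sketch for applying several of the Windows settings above, assuming a dedicated iSCSI adapter named "iSCSI1" (a hypothetical name; advanced-property names and values vary by vendor, so check what the driver exposes first):

# List the advanced properties this driver actually exposes
Get-NetAdapterAdvancedProperty -Name "iSCSI1"
# Disable RSS and SR-IOV; enable LSO and RSC, per the recommendations above
Disable-NetAdapterRss -Name "iSCSI1"
Disable-NetAdapterSriov -Name "iSCSI1"
Enable-NetAdapterLso -Name "iSCSI1"
Enable-NetAdapterRsc -Name "iSCSI1"
# Keep jumbo frames disabled (value meaning varies by driver; 1514 = standard frames here)
Set-NetAdapterAdvancedProperty -Name "iSCSI1" -RegistryKeyword "*JumboPacket" -RegistryValue 1514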
What other considerations should be made for iSCSI Network?
Reducing the Target Commands can help reduce latency.
- DataCore recommends only changing these if latency is seen.
- Reducing these to 256 or 64 limits the queue depth, so a command spends less time in a queue.
- Warning: If set too low, the queue may become full and no more commands will be accepted.
Refer to the Changing the Maximum Outstanding Target Commands section from SANsymphony Help for more information.
What are the recommendations for Delayed ACK in iSCSI Network?
When packets are received, an acknowledgement (ACK) is sent from the target to the initiator to confirm the packet has been dealt with. Grouping several packets together before sending an ACK can save some network overhead.
However, this can also delay sending, as some applications queue up packets until they receive the ACK, so DataCore recommends disabling this setting.
Disabling Delayed ACK will not prevent packets from being dropped, but when a packet is dropped it prevents the resend rate from collapsing to 5 frames/sec: the resend delay is improved and timeouts do not occur.
Default Windows delay: 200 milliseconds.
To disable:
- Set the Registry value TcpAckFrequency to 1.
- For Server 2003 and 2008, refer to Microsoft KB328890.
- For Server 2012 and 2016, refer to Set-NetTCPSetting.
- For ESX, refer to Broadcom KB1002598.
What are the recommendations for Nagle’s Algorithm in iSCSI Network?
Nagle's algorithm is a technique to improve TCP performance by buffering output in the absence of an ‘ack’ response until a packet's worth of output has been reached.
The delayed ‘ack’ algorithm will delay sending an ‘ack’ under certain conditions. Disabling Nagle's algorithm can improve iSCSI SAN performance where bursts of SAN I/O or a high frequency of iSCSI commands can trigger ‘ack’ delays and increase write latency.
To disable, set the Registry value TcpNoDelay to 1 (see the sketch after this section).
To turn off TCP Nagle in the Windows TCP/IP stack, refer to KB235624 for more information.
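As a sketch, both the Delayed ACK and Nagle registry values can be set per interface from PowerShell. The interface GUID below is a placeholder for the GUID of the iSCSI NIC (listed under the Interfaces key); confirm the exact key location against the Microsoft articles referenced above for your OS version. A reboot or NIC restart is required afterwards:

# Per-interface TCP parameters live under the Tcpip service key
$if = "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{<interface-GUID>}"
# Disable Delayed ACK: acknowledge every packet immediately
New-ItemProperty -Path $if -Name TcpAckFrequency -Value 1 -PropertyType DWord -Force
# Disable Nagle's algorithm
New-ItemProperty -Path $if -Name TcpNoDelay -Value 1 -PropertyType DWord -Force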
What are the TCP Global & Custom Settings for Windows 2008/2008 R2 in iSCSI Network?
TCP Global & Custom Settings
Windows 2008 is nearing end of life, but many companies still run it as a host. Compared to later versions of Windows (2012 and 2016), Windows 2008 is particularly vulnerable to performance issues if specific TCP settings are not in place; hence these Windows versions require many specific settings that are not needed with later OS versions.
All the TCP settings have to be applied manually on Windows 2008. The TCP settings control many characteristics of the connection between IP addresses, and poor values can mean that a lot of unnecessary extra traffic is transferred over the connection, or that space is reserved on the connection for traffic that will never be used.
To view current settings:
- Command prompt: netsh int tcp show global
- Or run the PowerShell cmdlet: Get-NetTCPSetting
To change settings:
- Command prompt: netsh int tcp set global
- Or run the PowerShell cmdlet: Set-NetTCPSetting
Some Windows 2008 settings cannot be customized without a hotfix. Refer to KB2472264.
Recommended TCP Global Parameters for Windows 2008 & 2008 R2:
netsh int tcp set global rss=enabled
netsh int tcp set global chimney=disabled
netsh int tcp set global netdma=disabled
netsh int tcp set global dca=disabled
netsh int tcp set global autotuninglevel=normal
netsh int tcp set global congestionprovider=ctcp
netsh int tcp set global ecncapability=enabled
netsh int ip set global taskoffload=disabled
netsh int tcp set global timestamps=disabled
netsh interface tcp set global initialRto=3000
netsh interface tcp set supplemental custom minrto=20
netsh interface tcp set supplemental custom icw=4
netsh interface tcp set supplemental custom delayedacktimeout=10
- Reboot or disable/enable the NIC for changes to take effect.
- Additional details are available in Microsoft’s PowerShell documentation. Refer to Set-NetTCPSetting.
What do additional TCP settings mean for iSCSI Network?
The following TCP settings are recommended for Windows 2008 and 2008 R2 to optimize iSCSI performance. These must be applied manually:
- Receive-Side Scaling (RSS): Enabled
- RSS enables network adapters to distribute the kernel-mode network processing load across multiple processor cores in multi‑core computers. This improves scalability for high‑traffic iSCSI environments. Refer to Microsoft article HH997036 for more information.
- Chimney Offload State: Disabled
- Disables TCP Chimney Offload, which transfers TCP/IP connection processing from the CPU to a compatible network adapter. In Windows Server 2008, this feature offloads network processing to reduce CPU usage, but may cause unpredictable performance in some environments. Refer to Microsoft Article 951037.
- NetDMA: Disabled
- Disables NetDMA, which provides operating system support for Direct Memory Access (DMA) offload. NetDMA allows TCP/IP to bypass the CPU when copying received data into application buffers, reducing CPU load. Refer to Microsoft documentation.
- Direct Cache Access (DCA): Disabled
- Direct Cache Access provides a mechanism for NetDMA clients to indicate that destination data is targeted for a CPU cache. Refer to Microsoft documentation.
- Receive Window Auto-Tuning: Normal
- The TCP receive window size is the amount of data that a TCP receiver allows a TCP sender to send before having to wait for an acknowledgement. After the connection is established, the receive window size is advertised in each TCP segment. Advertising the maximum amount of data that the sender can send is a receiver-side flow control mechanism that prevents the sender from sending data that the receiver cannot store. A sending host can send at most the amount of data advertised by the receiver before waiting for an acknowledgment and a receive window size update. Reducing the receive buffer reduces the receiving rate, making dropped packets less likely. For more information about TCP receive window tuning, see Microsoft Knowledge Base article 878127.
- Add-On Congestion Control Provider: CTCP
- CTCP (Compound TCP) prevents a sending TCP peer from overwhelming the network by controlling the number of segments sent (the send window). It aggressively increases the receive window for connections with large window sizes and maximizes throughput by monitoring delay variations and losses. CTCP can significantly improve performance on high‑latency connections. Refer to Microsoft documentation.
- ECN Capability: Enabled
- When TCP segments are lost, TCP assumes the loss is due to congestion and reduces the sender's transmission rate, which significantly impacts throughput. ECN allows routers that support this feature to mark packets instead of dropping them when congestion occurs. This early signaling helps prevent packet loss and improves overall network efficiency and throughput between TCP peers. Refer to Microsoft Knowledge Base article 878127 for more information.
- RFC 1323 Timestamps: Disabled
- Disables TCP timestamping and window scaling features. Window scaling allows TCP to negotiate a scaling factor for the receive window size, supporting a very large window of up to 1 GB. The TCP receive window determines how much data a sender can transmit before requiring an acknowledgment. Refer to Microsoft Knowledge Base article 938205 for more information.
- TCP/IP Task Offload: Disabled
- Disables TCP/IP task offload to the NIC, avoiding driver conflicts. Refer to TCP/IP Task Offload (Microsoft documentation) for more information.
- Initial RTO (ms): 3000ms
- Specifies the period, in milliseconds, before a connect (SYN) is retransmitted. The acceptable values for this parameter are increments of 10, from 300 ms through 3000 ms.
- Minimum RTO (ms): 20
- Sets the minimum retransmission timeout for TCP connections. Acceptable values range from 20 ms to 300 ms in increments of 10.
- Initial Congestion Window: 4 MSS
- Specifies the initial size of the congestion window. Provide a value to multiply by the maximum segment size (MSS). The acceptable values for this parameter are: even numbers from 2 through 64.
- Delayed Ack Timeout: 10ms
- Sets the time, in milliseconds, to wait before sending an acknowledgment (ACK) when fewer than the delayed acknowledgment frequency of packets are received. Use the DelayedAckFrequency parameter to adjust this value. Reducing the timeout can improve throughput on low‑latency networks by accelerating TCP window growth. Valid values range from 10 to 600 ms, in 10‑ms increments.
The following settings do not exist in Windows 2008, only in Windows 2008 R2:
- Receive Segment Coalescing State (RSC): Enabled
- RSC is a stateless offload technology that reduces CPU utilization for network processing on the receive side by offloading tasks to an RSC‑capable network adapter. High CPU usage from networking tasks can limit server scalability, decreasing transaction rates, raw throughput, and efficiency. With RSC enabled, an RSC‑capable NIC can parse multiple TCP/IP packets and strip their headers while preserving the payloads, combine the payloads of multiple packets into one larger packet, and deliver this single combined packet to the network stack for further processing by applications. This helps improve performance and efficiency in receive‑heavy workloads.
- Non‑SACK RTT Resiliency: Disabled
- Specifies whether to enable round‑trip‑time resiliency for clients that do not support Selective Acknowledgment (SACK).
- Max SYN Retransmissions: 2
- Sets the maximum number of times the server resends SYN packets without receiving a response.
For more details about TCP Chimney Offload, Receive‑Side Scaling, and Network Direct Memory Access features in Windows Server 2008, refer to Microsoft Knowledge Base article 951037 for more information.
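On Windows 2012 and later, several of these values can be inspected, and where needed tuned, with the NetTCPSetting cmdlets instead of netsh. A hedged sketch; note that on current Windows versions the built-in templates (including the Datacenter template used by the transport filter) are read-only, so only the *Custom templates accept changes:

# Inspect the values the Datacenter template applies
Get-NetTCPSetting -SettingName Datacenter
# Tune the writable custom template to match the recommendations above
Set-NetTCPSetting -SettingName DatacenterCustom -CongestionProvider CTCP -EcnCapability Enabled -InitialRtoMs 3000 -MinRtoMs 20 -InitialCongestionWindowMss 4 -DelayedAckTimeoutMs 10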
How can buffer sizes and data lengths be tuned for iSCSI Network?
The TCP stack may wait for a predetermined time before notifying the application that data is available, which can cause latency, particularly on Windows Server 2008. Reducing the buffer size has been observed to help smooth out these latency spikes.
This change cannot be made using the UI or PowerShell and must be performed through the registry.
To reduce latency and improve I/O smoothing, adjust the following parameters:
Registry edits for receive segment length
- Locate the MAC address of each target port to be changed.
- Locate the corresponding DataCore Software iSCSI Adapter by matching the MAC address:
- Right‑click the adapter and go to Properties > Details > Location Information to match the MAC address.
- Navigate to the "Portal0" subkey and change the value of MaxRcvDataSegLen:
- Default: 256 KB (0x40000)
- Recommended:
- 64 KB (0x10000) for physical servers.
- 32 KB (0x8000) for ESX hosts.
- Incremental adjustment: Reduce in 8 KB increments (example: 56 KB = 0xE000, 48 KB = 0xC000, 40 KB = 0xA000, 32 KB = 0x8000) until I/O smooths out.
- Restart the adapter:
- After making changes, disable and re‑enable the corresponding iSCSI adapter in Device Manager > DataCore FibreChannel Adapters, or reboot the system.
Adjust initiator I/O parameters
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\<bus id>\Parameters\maxburstlength
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\<bus id>\Parameters\maxtransferlength
Set both to 64 KB (0x10000; default is 256 KB).
Changing Max Outstanding Commands may reset these settings. Always re‑verify after making modifications. These changes will temporarily disrupt connections, but initiators will automatically reconnect once the adapter restarts. A reboot of the server will also reset the changes.
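A hedged PowerShell sketch of the initiator I/O change; the '0001' bus id below is a placeholder, so first identify the correct instance under the class key for the DataCore iSCSI adapter:

$class = "HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}"
$params = "$class\0001\Parameters"   # '0001' is a placeholder bus id
# 0x10000 = 64 KB (default is 0x40000 = 256 KB)
New-ItemProperty -Path $params -Name maxburstlength -Value 0x10000 -PropertyType DWord -Force
New-ItemProperty -Path $params -Name maxtransferlength -Value 0x10000 -PropertyType DWord -Force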
What BIOS settings are recommended for iSCSI Network?
Refer to "The DataCore Server" FAQ for more information.
What are the recommendations for configuring SANsymphony in a Virtual Machine?
Refer to "Hyper-converged Virtual SAN Deployment" FAQ for more information.
What other considerations are important for iSCSI Network?
Use 10GbE or higher
- Providing a larger pipe (10GbE or more) can help achieve greater throughput. However, if there is insufficient I/O to fully utilize a 1GbE connection, upgrading to higher bandwidth may not provide any benefit.
Multi-Path policy
The Round Robin (RR) Path Policy automatically rotates I/O across all available paths, distributing load across configured connections.
- While this can improve performance, it may also hinder it if the network is saturated.
- Typically, RR sends a set number of I/O commands down one path before moving to the next, and so on.
- In geographically dispersed configurations, if the Preferred Path is set to "ALL", some I/O may unnecessarily travel across a geographical gap. This means I/O might need to be mirrored back to the originating DataCore Server, acknowledged across the gap, and then returned to the host, resulting in four trips (two for I/O and two for ACK) instead of the two trips needed with a local Preferred path.
- Recommendation: Carefully design multipathing policies based on your topology; a sketch for inspecting and setting the Windows MPIO default policy follows.
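On Windows hosts using Microsoft MPIO, the global default load-balance policy can be inspected and changed with the MPIO PowerShell module. A sketch; treat Round Robin here as an illustration, and choose the policy that suits the topology guidance above:

# Show the current default policy for Microsoft DSM-claimed devices
Get-MSDSMGlobalDefaultLoadBalancePolicy
# Set Round Robin as the default (other values include FOO, LQD, LB and None)
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR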
Minimize switch hops
- Reduce the number of switch hops between host and storage to improve latency.
NIC Teaming
To ensure high availability, iSCSI networks must be designed to avoid any single points of failure (SPOF).
- Best practice: Avoid NIC teaming for iSCSI where possible.
- If teaming is used, disable port security on the switch ports where the virtual IP addresses are shared.
- Port security is often enabled to prevent IP spoofing, but this can block virtual IP failover.
- For most LAN switches, port security can be enabled or disabled on a per‑port basis.
- Hardware-based flow control is recommended for all NICs and switches.
What are the security considerations for iSCSI Network?
- Private Network:
- iSCSI traffic is unencrypted and should only be used on trusted networks. Best practice is to isolate iSCSI traffic on separate physical switches or private VLANs.
- Encryption:
- iSCSI supports IPSec for securing communication.
- IKE can also be used for VPN security.
- Authentication:
- DataCore supports CHAP for authentication:
- CHAP uses challenge‑response with a shared secret key.
- The target initiates the challenge, and both parties know the key.
- It periodically repeats challenges to prevent replay attacks.
- While CHAP is inherently one‑way, it can be configured in both directions for mutual authentication.
How does disk alignment affect iSCSI Network performance?
This recommendation is not specific to iSCSI; misaligned partitions can adversely affect the performance of all block storage. Nevertheless, to account for every contingency, it should be considered a best practice to align the guest OS partitions of a virtual machine to the storage.
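As a quick check, partition starting offsets on a Windows guest can be listed from PowerShell; offsets that divide evenly by 1 MB (the default alignment on current Windows releases) are generally aligned, but verify against the underlying storage's block or stripe size:

# List partition starting offsets and flag 1 MB alignment
Get-Partition | Select-Object DiskNumber, PartitionNumber, Offset, @{Name='AlignedTo1MB'; Expression={ $_.Offset % 1MB -eq 0 }}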
What is the summary of best practices for iSCSI Network?
There are many parameters and settings to consider when using an iSCSI Network. DataCore has tested many of these settings and recommends following them.
In troubleshooting iSCSI performance issues, there are generally three stages to follow:
- Check the basics:
- Physical issues
- BIOS settings
- Power Saving disabled everywhere
- Latest Driver/Firmware
- Make TCP changes:
- Only one command for Windows 2012/2016
- Adjust Windows 2008 and network card settings:
- Reduce I/O Transfer (MaxRcvSegLen)
- Reduce Initiator I/O to 64K
- Network Card Settings
- Disable SR‑IOV
- Disable Nagle
- Disable Delayed Ack
- Apply specific Virtual Server settings
iSCSI Best Practices
DataCore provides a software iSCSI target driver but relies on third-party iSCSI initiator drivers to send the packets across the IP network. iSCSI sits above TCP in the network stack and uses it as the transport protocol to carry commands from the initiator to the target.
There are many network settings that can affect iSCSI performance. DataCore has found that a few specific parameter settings help to stabilize (Windows 2008 R2) and improve (Windows 2012, 2012 R2, 2016, 2019, and 2022) performance.
Attached to this FAQ are the scripts that can be run to implement these recommended settings on NICs used for iSCSI Targets. Please refer to iSCSI Network for more detailed information.
It should be noted that this networking area is complex, and settings can get changed deliberately or through changes in components. The scripts can be re-run to re-apply the Best Practice settings.
- These scripts are intended for use only on ports dedicated to iSCSI. It is not recommended to use these scripts on ports such as the DataCore Server Management Port, as they will disable features that such ports may require.
- These settings should be applied to DataCore servers and iSCSI initiators on any Windows hosts mapped to the DataCore Servers.
What are the key changes in this version?
Section(s) | Content Changes | Date
---|---|---
Scripts | Revised iSCSI script to support Windows 2022. | 5 May 2022 |
FAQ and Scripts | Clarified Windows 2019 support; added note that script is recommended for dedicated iSCSI ports only. | 19 Jan 2022 |
Scripts | Updated script signature; no other changes. | 10 Dec 2020 |
Scripts | Revised iSCSI script to support Windows 2019. | 26 Nov 2019 |
Scripts | Revised iSCSI script to support double-byte characters. | 20 Mar 2019 |
Scripts | Improved scripts for clarity. | 4 Jan 2019 |
FAQ | Clarified scripts are applicable to both Windows hosts and DataCore servers. | 11 Dec 2018 |
Scripts | Updated Windows 2008 script to prevent execution on non-Windows 2008 servers. | 12 Nov 2018 |
vCenter | Fixed vCenter Server naming contention. | 1 Nov 2018 |
Scripts & FAQ | New Shorter Version – vSphere VM Windows 2012 and 2016 scripts removed and a new single PS script to make it easier to run. FAQ simplified in order to help implement correct settings, and additional FAQ 1691 for the more detailed information created. | 19 Oct 2018 |
How does iSCSI work in DataCore environments?
DataCore provides a software iSCSI target driver but relies on third-party iSCSI initiator drivers to send the packets across the IP network. iSCSI sits above TCP in the network stack and uses it as the transport protocol to carry commands from the initiator to the target.
What are the recommended baseline settings for iSCSI?
There are different baseline optimal settings for:
- Windows 2008 R2
- Windows 2012/2012 R2, 2016, 2019 and 2022 configurations
At the bottom of this FAQ is a link to iSCSI Best Practices PowerShell scripts which are provided to help automate the configuration of these settings. Please ensure that the system is healthy and mirrors are synchronized before running the script.
Be aware that the scripts will reinitialize the network cards for the changes to take effect; therefore, allow the script to complete on each DataCore Server before running it on the next one in a Server Group to avoid any production impact.
How do I configure Windows 2008 R2 for iSCSI?
To configure iSCSI on Windows 2008 R2 hosts and DataCore servers running SANsymphony (up to version 10.0.6.5):
- Run the TCP_iSCSI_Best_Practices_WIN2008.ps1 script.
- Be aware that the script will reinitialize the network cards for the changes to take effect; therefore, allow it to complete on each DataCore Server before running it on the next one in a Server Group to avoid any production impact.
- Windows 2008 R2, and all releases of SANsymphony which run on this OS, are now end-of-life. This script is provided as-is and is no longer supported.
How do I configure Windows 2012/2012 R2/2016/2019/2022 for iSCSI?
To configure iSCSI on Windows 2012, 2012 R2, 2016, 2019, and 2022 for use with all versions of SANsymphony (and for DataCore servers running SANsymphony 10.0 PSP6 or later):
Use the PowerShell script:
- Run the iSCSI_Best_Practices_3.9.ps1 script provided by DataCore.
- This script will automatically configure all detected iSCSI ports with DataCore’s recommended best practice settings.
- Run the script interactively:
- Launch a native PowerShell window (do not invoke PowerShell from cmd.exe).
- Running the script without any parameters allows interactive configuration for each port.
- Prepare the system before execution:
- Ensure the system is healthy and all mirrors are synchronized before making changes.
- Be aware of network card reinitialization:
- The script will reinitialize network adapters to apply changes.
- To avoid production impact, run the script on one DataCore Server at a time within a server group.
What scripts are provided for applying these best practices?
DataCore provides PowerShell scripts to automate applying iSCSI best practice configurations.
When executed, these scripts will:
- Configure TCP congestion control for iSCSI on the default port 3260.
- Set the system power plan to High Performance.
- Configure the network cards used for iSCSI with the recommended settings.
What power settings should I use?
High Performance rather than Balanced (the default setting) should always be used so that the full power of the CPU is immediately available.
What network adapter settings are recommended?
This script will apply the following recommended changes to the selected iSCSI network adapter (a PowerShell sketch of equivalent commands follows the list):
- Disable DNS registration
- Disable WINs
- Disable net adapter bindings (including "Client for Microsoft Networks", "File and Printer Sharing for Microsoft Networks" and IPv6)
- Disable Nagle
- Disable Delayed Ack
- Disable adapter power saving
- Disable SRIOV
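A minimal sketch of equivalent PowerShell commands, assuming a dedicated adapter named "iSCSI1" (a hypothetical name); the DataCore script remains the supported way to apply these settings:

# Stop the iSCSI interface registering its address in DNS
Set-DnsClient -InterfaceAlias "iSCSI1" -RegisterThisConnectionsAddress $false
# Unbind client, server and IPv6 components
Disable-NetAdapterBinding -Name "iSCSI1" -ComponentID ms_msclient, ms_server, ms_tcpip6
# Disable adapter power saving and SR-IOV
Disable-NetAdapterPowerManagement -Name "iSCSI1"
Disable-NetAdapterSriov -Name "iSCSI1"
# Disable NetBIOS over TCP/IP (WINS); option 2 = disabled
$nic = Get-NetAdapter -Name "iSCSI1"
$cfg = Get-CimInstance Win32_NetworkAdapterConfiguration -Filter "InterfaceIndex=$($nic.InterfaceIndex)"
Invoke-CimMethod -InputObject $cfg -MethodName SetTcpipNetbios -Arguments @{ TcpipNetbiosOptions = 2 }
# Nagle and Delayed ACK are per-interface registry changes; see the iSCSI Network section above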
What are the best practices for TCP Congestion Control?
TCP Settings control the connection between IP Addresses.
- The following command provides congestion control for iSCSI traffic:
New-NetTransportFilter -SettingName Datacenter -LocalPortStart 3260 -LocalPortEnd 3260 -RemotePortStart 0 -RemotePortEnd 65535
- To verify the setting is in place, run:
Get-NetTransportFilter -SettingName Datacenter
- To remove this setting, use:
Get-NetTransportFilter -SettingName Datacenter -LocalPortStart 3260 -LocalPortEnd 3260 | Remove-NetTransportFilter
Are there any special considerations for iSCSI Best Practices in virtual environments?
Refer to "Hyper-converged Virtual SAN Deployment" FAQ documentation for more information.
What other considerations can improve iSCSI performance?
In troubleshooting iSCSI performance issues, follow these two stages:
- Make sure all the usual items are checked, including physical issues, BIOS settings, Power Saving disabled everywhere, latest driver/firmware, High Performance power plan, and Windows Updates.
- Run the script:
- Apply the recommended TCP changes (only one command for Windows 2012/2012 R2/2016/2019/2022).
- Configure the network card used for iSCSI.
Summary of iSCSI Best Practices
- Use dedicated NICs for iSCSI.
- Run the provided scripts to apply recommended network and TCP settings.
- Always set power to High Performance and keep drivers/firmware up to date.