Major SANsymphony Features

Please note that the best practices for Disk Pools that are documented here may not always be practical (or possible) to apply to existing Disk Pools in your configuration but should be considered for all new Disk Pools.
These recommendations favor optimal performance over minimal capacity: a larger storage allocation unit (SAU) means less work for the Disk Pool to keep its internal indexes up to date, which results in better overall performance within the Disk Pool, especially for very large configurations.
While a larger SAU size often means more initial capacity is allocated by the Disk Pool, new Host writes are less likely to require additional SAUs and will instead land in SAUs that are already allocated. The downsides of a larger SAU size are less granular data tiering, more overhead consumption for differential snapshots, and a lower chance of successfully trimming (reclaiming) allocations on fragmented filesystems.
The following applies to all types of Disk Pools, including normal, shared, and SMPA Pools.
Also see:
The Disk Pool Catalog
The Disk Pool Catalog is an index that is used to manage information about each Storage Allocation Unit's location, its allocation state, and its relationship to the storage source it is allocated for within the Disk Pool. The Catalog is stored on one or more of the physical disks in the Disk Pool - also used for SAU data - but in a location that only the Disk Pool driver can access. Each DataCore Server's Disk Pool has its own independent Catalog, regardless of whether the Disk Pool is shared or not. Information held in the Catalog includes:
- Whether an SAU is allocated to a Storage Source or not
- The Physical Disk within the Disk Pool where an allocated SAU can be located
- The Storage Source an allocated SAU 'belongs' to
- The Virtual Disk's Logical Block Address (LBA) that the allocated SAU represents when accessed by a Host.
Whenever an SAU is allocated, reclaimed, or moved to a new physical disk within the same Disk Pool, the Catalog is updated.
Catalog updates must happen as fast as possible so as not to interfere with other I/O within the Disk Pool; for example, if the Catalog is being updated for one SAU allocation and another Catalog update for a different SAU is required, then this second update will have to wait a short time before its own index entry can be updated. This can be noticeable when a lot of SAUs need to be allocated within a very short time; while the Disk Pool will try to be as efficient as possible when handling multiple updates for multiple SAUs, there is additional overhead while the Catalog is updated for each new allocation before the I/O written to the SAU is considered complete. In extreme cases, this can result in unexpected I/O latency during periods of significant SAU allocation.
Therefore, we recommend that the Catalog be located on the fastest disk possible within the Disk Pool. As of SANsymphony 10.0 PSP9, the location of the catalog will be proactively maintained per Disk Pool to be located on the fastest storage.
DataCore therefore recommends that all Disk Pools have a minimum of 2 physical disks in tier 1, which are also used to store the primary and secondary Disk Pool Catalogs, and that these physical disks are as fast as possible.
As the Catalog is located within the first 1GB of the physical disk used to store it, and as every physical disk in a Disk Pool must have enough free space to allocate at least one SAU, it is required that no physical disk be smaller than 2GB in size: 1GB for the Catalog itself and 1GB for the largest SAU possible within the Disk Pool (see the Storage Allocation Unit Size section in this chapter).
Where is the Catalog Located?
The Catalog is always stored within 1GB of the start of a physical disk's LBA space, and its location depends on the release of SANsymphony.
In all releases, there is only ever a maximum of two copies of the Catalog in a Disk Pool at any time.
SANsymphony 10.0 PSP9 and later
When creating a new Disk Pool, the Catalog is always located on the first physical disk added. If a second physical disk is added to the Disk Pool, then a backup copy of the Catalog is stored on that second physical disk; the Catalog on the first physical disk is then considered the primary copy. If subsequent disks are added to the pool, the Catalog(s) may be moved so that they reside on the smallest disk(s) in the lowest-numbered tier in the pool.
Before 10.0 PSP9
When creating a new Disk Pool, the Catalog is always located on the first physical disk added. If a second physical disk is added to the Disk Pool, then a backup copy of the Catalog is stored on that second physical disk; the Catalog on the first physical disk is then considered the primary copy. The tier assignment of the physical disk does not influence where the Catalog is stored. Any further physical disks added to the Disk Pool will not be used to store additional copies.
Catalog Location and Pool Disk Mirrors
If the physical disk that holds the primary Catalog is mirrored within the Disk Pool, then the physical disk used as the mirror will now hold the backup Catalog. If the backup Catalog was previously on another physical disk in the Disk Pool before the mirroring took place, then this other (non-mirrored) physical disk will no longer hold the backup Catalog.
Also see:
How the Catalog Location is Managed during Disk Decommissioning
If the physical disk that holds the primary copy of the Catalog is removed then the backup copy on the remaining second physical disk in the Disk Pool will be automatically 'promoted' to be the primary Catalog and, if available, a backup copy will be written to the ‘next available’ physical disk in the lowest tier added to the Disk Pool. If the physical disk that holds the backup copy of the Disk Pool is removed, then a new backup copy of the Catalog will be written to the 'next available' physical disk in the lowest tier in the Disk Pool. The location of the primary copy remains unchanged.
The system always tries to keep all copies of the Catalog at the lowest available tier number, usually tier 1. A user cannot move the Catalog to a physical disk of their choice in a Disk Pool.
Also see:
How the Catalog Location is Managed during Physical Disk I/O Failures
If there is an I/O error when trying to update or read from the primary Catalog, then the backup Catalog will become the new primary Catalog and, if another physical disk is available, that disk will become the new backup Catalog location according to the rules above.
Storage Allocation Unit Size
The SAU size is chosen at the time the Disk Pool is created and cannot be changed - which is why it is not always easy to apply some of these best practices to existing Disk Pools.
Each SAU represents a number of contiguous Logical Block Addresses (LBAs) equal to its size and, once allocated, will be used for further reads and writes within the LBA range it represents for a Virtual Disk's storage source. Any time the Host sends a write I/O to an LBA that is not covered by an already allocated SAU for that Virtual Disk, a new SAU is allocated by the Disk Pool.
The amount of space taken in the Disk Pool's Catalog (see previous section) for each allocated SAU is the same regardless of the size of the SAU that was chosen when the Disk Pool was created. The Catalog has a maximum size, which means that the larger the SAU size chosen, the larger the amount of physical disk that can be added to a Disk Pool.
As each SAU is allocated, the Disk Pool's Catalog is updated. The smaller the SAU size, the more likely it is that a new write I/O will fall outside the LBA range of already allocated SAUs, and so the more likely that the Catalog will need to be updated. The previous section - The Disk Pool Catalog - recommends placing the Catalog on the fastest disk possible so that Catalog updates complete as quickly as possible; recommendations for SAU sizes, however, largely depend on the use case. Larger SAU sizes tend to require fewer Catalog updates but also concentrate data on fewer physical disks and reduce space and tiering granularity. See the table below for guidance on SAU size selection, and the sizing sketch that follows it.
| SAU Size | Maximum Pool Size | Use Cases |
|---|---|---|
| 4 MB | 32 TB | Highly space efficient; highest random IO; legacy Linux/Unix filesystems; small snapshot pool |
| 8 MB | 64 TB | Database workloads; medium snapshot pool |
| 16 MB | 128 TB | High random IO; large snapshot pool |
| 32 MB | 256 TB | VDI workloads; transactional file services |
| 64 MB | 512 TB | Medium random IO |
| 128 MB | 1 PB | General purpose; generic VMware; generic Hyper-V |
| 256 MB | 2 PB | General file services |
| 512 MB | 4 PB | Low random IO |
| 1 GB | 8 PB | Maximum capacity; high sequential IO; archival storage; large file services |
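The "Maximum Pool Size" column scales linearly with the SAU size, which implies a fixed upper limit of roughly 8 million Catalog entries per Disk Pool (32 TB divided by 4 MB). The short Python sketch below is based purely on that ratio inferred from the table, not on an official SANsymphony constant, and shows how the approximate maximum pool capacity can be estimated for a given SAU size.

```python
# Illustrative only: the catalog entry count below is inferred from the table
# ratios (32 TB / 4 MB = 8,388,608 entries); it is not an official
# SANsymphony constant.
MAX_CATALOG_ENTRIES = (32 * 2**40) // (4 * 2**20)   # ~8.4 million SAUs per pool

def max_pool_size_bytes(sau_size_bytes: int) -> int:
    """Approximate maximum Disk Pool capacity for a given SAU size."""
    return sau_size_bytes * MAX_CATALOG_ENTRIES

for sau_mib in (4, 8, 16, 32, 64, 128, 256, 512, 1024):
    max_tib = max_pool_size_bytes(sau_mib * 2**20) / 2**40
    print(f"SAU {sau_mib:>4} MiB -> maximum pool size ~{max_tib:,.0f} TiB")
```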
In addition, the more SAUs there are in a Disk Pool, the more work must be done to analyze them, whether for Migration Planning, reclamation within the Disk Pool, or GUI updates. Hence, the larger the SAU, the fewer resources are required to carry out those tasks.
The impact on reclamation should also be considered when choosing an SAU size. Previously allocated SAUs are only reclaimed when they consist entirely of "zero" writes; any non-zero data within an SAU will cause it to remain allocated. The larger the SAU, the higher the likelihood that it contains at least some non-zero data and therefore remains allocated.
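As a rough illustration of that effect, the toy model below assumes that non-zero filesystem blocks are scattered uniformly at random (a simplification; real filesystems cluster data) and estimates the chance that an entire SAU is zero-filled and therefore reclaimable.

```python
# Toy model: assume a fraction `used` of 4 KiB filesystem blocks hold non-zero
# data, scattered uniformly at random (a simplification; real filesystems
# cluster data, so treat the output as a trend rather than a prediction).
def reclaim_probability(sau_bytes: int, used: float, block_bytes: int = 4096) -> float:
    """Probability that every block inside one SAU is zero, i.e. the SAU can be reclaimed."""
    blocks_per_sau = sau_bytes // block_bytes
    return (1.0 - used) ** blocks_per_sau

for sau_mib in (4, 32, 128, 1024):
    p = reclaim_probability(sau_mib * 2**20, used=0.001)   # 0.1% of blocks in use
    print(f"SAU {sau_mib:>4} MiB: chance of being fully zeroed ~{p:.2%}")
```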
It is recommended that mirrored Virtual Disks use Disk Pools with the same SAU size for each of their storage sources.
Physical Disks in a Disk Pool
The number of physical disks
The more physical disks there are in a Disk Pool the more I/O can be distributed across them at the same time giving an overall performance gain. Performance increases almost linearly for up to 24 physical drives added, then the gain flattens out. However, price and capacity may be significant determining factors when deciding whether to have many small, fast disks (optimum for performance) or fewer large slower disks (optimum for price).
The number of disks in a Pool should be assigned to satisfy the expected performance requirements with an allowance of 50% (or more) for future growth. It is not uncommon for a Disk Pool that has been sized appropriately to eventually suffer because of increased load over time.
Backend Device Multipathing
SANsymphony uses a generic multipathing solution to access Fibre-Channel backends. This solution does not support ALUA for path management and operates in a simple failover-only mode, with the option to select a preferred path to return to after a path failure is corrected. With more than 2 paths per device, this can lead to extended failover times and, in some cases, pools failing temporarily. It is recommended not to expose more than 2 paths to a backend device, to guarantee a failover within the timeout period.
If ALUA to the backend is required for Fibre-Channel attached backend devices, a system can be configured to use non-DataCore Fibre-Channel drivers with MPIO and ALUA support. Path management and monitoring then need to be performed with appropriate 3rd party tools.
For iSCSI attached backends, SANsymphony uses the built-in iSCSI initiator from the Windows OS and relies on the Windows MPIO framework for path failover. This allows round-robin access and ALUA for non-SMPA pools.
SMPA pools require the multipathing to operate in Failover-Only mode, whether iSCSI or a 3rd party Fibre-Channel connection is in use.
Configuring RAID sets in a Disk Pool
See RAID Controllers and Storage Arrays.
Auto-Tiering Considerations
Whenever write I/O from a Host causes the Disk Pool to allocate a new SAU, it will always be from the highest performance / lowest numbered available tier as listed in the Virtual Disk's tier affinity unless there is no space available, in which case the allocation will occur on the 'next' highest tier and so on. The allocated SAU will then only move to a lower tier if the SAU's data temperature drops sufficiently. This can mean that higher tiers in a Disk Pool always end up being full, forcing further new allocations to lower Tiers unnecessarily.
When there is no space in a tier that matches the Virtual Disk's Storage Profile, new allocations will instead go to tiers that are out of affinity. Allocations outside the set tiers for a Virtual Disk will only migrate when they can move back to a tier that is within affinity. Before PSP15, no heat-based migrations or in-tier re-balancing would occur until that had happened. PSP15 introduced fair migration queuing to allow the migration plan to react faster and better to user- and access-driven changes. The system now has several concurrent migration buckets which are permanently serviced in a fair manner. The buckets are for:
- Affinity Migrations
- Encryption Migrations
- Config Settings Migrations
- Decryption Migrations
- Temperature Migrations
- Rebalancing Migrations
Use the Disk Pool's 'Preserve space for new allocations' setting to ensure that a Disk Pool will always try to move previously allocated, low-temperature SAUs down to the lower tiers without having to rely on temperature migrations alone. DataCore recommends initially setting this value to 20% and then adjusting it according to your I/O patterns.
All tiers in a Disk Pool should be populated; there should not be any tiers without physical disks assigned.
Also see:

The Disk Pool Catalog
- DataCore recommends all new Disk Pools have 2 physical disks in tier 1 used for storing the primary and secondary Disk Pool Catalogs and that these physical disks are as fast as possible.
Storage Allocation Unit Size
- Use an SAU size according to your intended use case and the target size of the pool. Pools for snapshots and legacy Unix/Linux filesystems may benefit in efficiency from smaller SAU sizes.
The Number of Physical Disks in a Disk Pool
- The more physical disks there are in a Disk Pool, the more I/O can be distributed across them at the same time, giving an overall performance gain that is almost linear for up to 24 disks.
Configuring RAID Sets for Use in a Disk Pool
- The information on how to configure RAID sets for use in a Disk Pool is covered in the chapter 'RAID Controllers and Storage Arrays'.
Auto-Tiering Considerations
- Use the Disk Pool's 'Preserve space for new allocations' setting to ensure that a Disk Pool will always try to move any previously allocated, low-temperature SAUs down to the lower tiers without having to rely on temperature migrations alone. DataCore recommends initially setting this value to 20% and adjusting it according to your IO patterns, after a period.

Also see:
Replication Settings
Data Compression
When enabled, the data is not compressed while it is in the buffer but within the TCP/IP stream as it is being sent to the remote DataCore Server. This may help increase potential throughput sent to the remote DataCore Server where the link between the source and destination servers is limited or a bottleneck. It is difficult to know for certain if the extra time needed for the data to be compressed (and then decompressed on the remote DataCore Server) will result in quicker replication transfers compared to no Data Compression being used at all. For replication links at 10GBit or faster line speed, it usually takes longer to compress/decompress than to transmit the raw data sets.
A simple comparison test should be made after a reasonable period by disabling compression temporarily and observing what (if any) differences there are in transfer rates or replication time lags.
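As a rough aid for that comparison, the sketch below estimates transfer time with and without compression. The compression throughput and ratio are placeholder assumptions and should be replaced with values measured on your own DataCore Servers and replication link.

```python
# Back-of-the-envelope comparison only; replace the placeholder rates and the
# assumed compression ratio with values measured on your own configuration.
def transfer_time_s(data_gib: float, link_gbit_s: float,
                    compress_gib_s: float = 0.0, compression_ratio: float = 0.6) -> float:
    """Seconds to ship `data_gib` of replication data; compress_gib_s == 0 means send raw."""
    data_bytes = data_gib * 2**30
    link_bytes_s = link_gbit_s * 1e9 / 8
    if compress_gib_s == 0.0:
        return data_bytes / link_bytes_s                       # raw transfer
    compress_s = data_gib / compress_gib_s                     # time spent compressing
    wire_s = (data_bytes * compression_ratio) / link_bytes_s   # smaller payload on the wire
    return compress_s + wire_s                                 # pipelining ignored for simplicity

print(f"1 Gbit/s,  raw        : {transfer_time_s(100, 1.0):6.0f} s")
print(f"1 Gbit/s,  compressed : {transfer_time_s(100, 1.0, compress_gib_s=0.5):6.0f} s")
print(f"10 Gbit/s, raw        : {transfer_time_s(100, 10.0):6.0f} s")
print(f"10 Gbit/s, compressed : {transfer_time_s(100, 10.0, compress_gib_s=0.5):6.0f} s")
```

With these placeholder figures, compression wins on the 1 Gbit/s link but loses on the 10 Gbit/s link, matching the guidance above that faster links usually spend longer compressing than transmitting raw data.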
See the section ‘Data Compression’ from the online help for more information.
Any third-party, network-based compression tool can be used to replace or add additional compression functionality between the links used to transfer the replication data between the local and remote DataCore Servers, again comparative testing is advised.
Transfer Priorities
Use the Replication Transfer Priorities setting - configured as part of a Virtual Disk’s storage profile - to ensure the Replication data for the most important Virtual Disks are sent more quickly than others within the same Server Group.
See the section ‘Replication Transfer Priority’ from the online help for more information.
The Replication Buffer
The location of the Replication buffer will determine the speed that the replication process can perform its three basic operations:
- Creating the Replication data (write).
- Sending the Replication data to the remote DataCore Server (read).
- Deleting the Replication data (write) from the buffer once it has been processed successfully by the remote DataCore Server.
Therefore, the disk device that holds the Replication buffer should be able to manage at least 2x the write throughput for all replicated Virtual Disks combined. If the disk device used to hold the Replication buffer is too slow it may not be able to empty fast enough (to be able to accommodate new Replication data). This will result in a full buffer and an overall increase in the replication time lag (or latency) on the Replication Source DataCore Server.
A full Replication buffer will prevent future Replication checkpoint markers from being created until there is enough available space in the buffer and, in extreme cases, may also affect overall Host performance for any replicated Virtual Disks served to it. Using a dedicated storage controller for the physical disk(s) used to create the Windows disk device where the buffer is located will give the best possible throughput for the replication process. Do not use the DataCore Server's boot disk, to avoid contention for space and disk access.
It is technically possible to 'loop back' a Virtual Disk to the DataCore Server as a local SCSI disk device and use it as the Replication buffer's location. This is not recommended. Apart from the extra storage capacity this requires, there may be unexpected behavior when the SANsymphony software is 'stopped' (e.g., for maintenance), as the Virtual Disk being used would suddenly no longer be available to the Replication process, potentially corrupting replication data that was being flushed while the SANsymphony software was stopping. Creating a mirror from the 'looped-back' Virtual Disk may appear to be a solution, but if the mirrored Virtual Disk used for the Replication buffer also has to handle a synchronous mirror resynchronization (e.g., after an unexpected shutdown of the DataCore mirror partner), the additional reads and writes used by the mirror synchronization process, together with the loss of DataCore Server write caching while the mirror is not healthy, will significantly reduce the overall speed of the Replication buffer. This configuration is therefore not recommended either.
The size of the Replication buffer
The size of the buffer will depend on the following:
- The number of write bytes that are sent to all Virtual Disks configured for replication
- The speed of the Windows disk device that the buffer is using
- The speed of the Replication Network Link (see the next section) to the Replication Group
Situations where the Replication Link is 'down' and the replication process continues to create and store replication data in the buffer until the link is re-established also need to be considered. For example, plan for an 'acceptable' amount of network downtime for the Replication Group (e.g., 24 hours); knowing (even approximately) how much replication data could be generated in that time allows for appropriate sizing to prevent the Replication 'In log' state.
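A minimal sizing sketch along those lines is shown below. The write rate, downtime window, and headroom factor are assumptions to be replaced with your own measurements, and the 2x throughput figure follows the guidance in the previous section.

```python
# Rough sizing aid; the average write rate, downtime window, and headroom are
# assumptions that should come from your own measurements and availability planning.
def replication_buffer_sizing(avg_write_mib_s: float, downtime_hours: float,
                              headroom: float = 1.5) -> dict:
    """Estimate buffer capacity and minimum buffer-disk write throughput."""
    buffered_gib = avg_write_mib_s * downtime_hours * 3600 / 1024
    return {
        "buffer_capacity_gib": round(buffered_gib * headroom),  # margin for bursts and growth
        "min_disk_write_mib_s": avg_write_mib_s * 2,            # ">= 2x combined write throughput"
    }

# Example: 40 MiB/s of combined replicated writes, 24 hours of tolerated link outage
print(replication_buffer_sizing(avg_write_mib_s=40, downtime_hours=24))
# -> roughly 5000 GiB of buffer capacity and an 80 MiB/s buffer disk
```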
Planning for future growth of the amount of replication data must also be considered. Creating GPT-type Windows disk devices and using Dynamic Disks will give the most flexibility in that it should be trivial to expand an existing NTFS partition used for the location of an existing Replication buffer if required. It is also possible to switch to a different drive path at any given time to make use of a larger capacity in case Dynamic Disks are not a desired choice.
Be aware that determining the optimum size of the buffer for a particular configuration is not always trivial and may take a few attempts before it is known.
Which RAID do I use for the Replication buffer?
While the performance of the Replication buffer is important, a balance may need to be struck between protecting the data held in the Replication buffer (i.e., RAID 1 or 5) and improving read/write performance (e.g., RAID 0 or even no RAID at all!). It is therefore difficult to give a specific recommendation here as this will depend on the properties of the physical disks and the RAID controller being used to create the Windows disk device used to hold the Replication buffer as to whether the gain in read/write performance by not (for example) using RAID 5, can be considered insignificant or not. Comparative testing is strongly advised.
Filesystem and Formatting considerations
The files stored for replication buffering are usually 4 MByte in size and so NTFS formatting with the largest possible cluster size improves the overall performance of the buffer. It also makes sense to turn off NTFS “8.3 naming” and last access time tracking to optimize performance.
Replication Connections
TCP/IP link speed
The speed of the network link will affect how fast the replication process will be able to send the replication data to the remote DataCore Server and therefore influence how fast the buffer can empty. Therefore, the link speed will have a direct effect on the sizing of the Replication buffer. For optimum network bandwidth usage, the network link speed should be at least half the speed of the read access speed of the buffer.
WAN/LAN optimization
The replication process does not have any specific WAN or LAN optimization capabilities but can be used alongside any third-party solutions to help improve the overall replication transfer rates between the local and remote DataCore Servers.
Connections between single Replication Groups
A remote Replication Server will receive two different types of TCP/IP traffic from the local DataCore Server Group:
- All Replication configuration changes and updates made via the SANsymphony Console, including Virtual Disk states and all Replication performance metrics (e.g., transfer speeds and the number of files left to transfer). This TCP/IP traffic is always sent to and from the 'controller nodes' of both the Source and Destination Replication Groups.
- The Replication data between the Source and Destination Replication Groups. This TCP/IP traffic is always sent from the DataCore Server selected when the Virtual Disk was configured for Replication on the Source Server Group regardless of which DataCore Server is the ‘controller node’.
In both cases, the DataCore Server’s own Connection Interface setting is still used.
This means that if the 'controller node' is not the same DataCore Server that is configured for a particular Virtual Disk's Replication, then the two different TCP/IP traffic streams (i.e., configuration changes and updates, and Replication data) will be split between two different DataCore Servers on the Source, with each DataCore Server using its own Connection Interface setting.
As the elected ‘controller node’ can potentially be any DataCore Server in the same Server Group it is very important to make sure that all DataCore Servers in the same Local Replication Group can route all TCP/IP traffic to all DataCore Servers in the Remote Replication Group and vice versa.
Connections between Multiple Replication Groups
For Server Groups that have more than one Replication Group configured in them, using a separate, dedicated network interface for each connection to each Replication Group may help improve overall replication transfer speeds.
Other Replication Considerations
Replication destination storage performance
The storage write performance of the replication destination system determines the rate of replication file transfers from the source. The destination will only request another file to be shipped after the current one has been written to its pool. Replication files for different replicated Virtual Disks may be transmitted in parallel, as each of them can use its own data stream into a different pool. Slow pools at the destination may lead to underutilized network bandwidth and source replication buffers filling up. Plan to scale the destination system as the load grows on the source side.
Anti-Virus software
The data created and stored in the replication buffer cannot be used to 'infect' the DataCore Server's own Operating System. Using Anti-Virus software to check the replication data is therefore unnecessary and will only increase the overall replication transfer time, as scanning the files delays their sending and removal from the buffer and adds to the number of reads on the buffer disk.
Avoiding unnecessary ‘large’ write operations
Some special, Host-specific operations – for example, Live Migration, vMotion, or host-based snapshots - may generate significant ‘bursts’ of write I/O that may, in turn, unexpectedly fill the Replication buffer adding to the overall replication time lag.
A Host Operating System’s page (or swap) file can also generate a ‘large’ amount of extra, unneeded replication data - which will not be useful after it has been replicated to the remote DataCore Server. Use separate Virtual Disks if these operations are not required to be replicated.
Some third-party backup tools may 'write' to any file that they have just backed up (for example, to set the file's 'archive bit'), and this too can potentially generate extra replication data. Use timestamp-based backups to avoid this.
Encryption

SANsymphony Replication Settings
- Enable Data Compression to improve overall replication transfer rates where the link between the source and destination Server Group is limited.
- Use the Transfer Priorities setting to prioritize 'more important' Virtual Disks to replicate faster than the others.
The Replication Buffer
- Use as fast a disk as possible (for example, RAID 1 or no RAID) for the best read/write performance; only use RAID protection if required and if the loss of overall performance from that protection is negligible. It should be capable of handling at least twice the write I/O throughput of all replicated Virtual Disks combined. Comparative testing between RAID 1 and RAID 5 is advised.
- Use a dedicated SCSI controller for the best possible throughput and do not use the DataCore Server’s boot disk.
- Do not use a SANsymphony Virtual Disk as a location for a Replication buffer.
- In the case of unexpected Replication Network Link connection problems, size the buffer accordingly to consider a given period that the network may be unavailable without the buffer being able to fill up.
- Use a GPT partition style and, optionally, Dynamic Disks to allow the Replication buffer to be expanded in the future.
- Format with the largest possible NTFS cluster size and disable 8.3 naming and last access time tracking.
Replication Network Connections
- Each DataCore Server in both the 'local' Server Group and the 'remote' Replication Group must have its own routable TCP/IP connection to and from the other group's servers.
- For optimum network bandwidth usage, the network link speed should be at least half the speed of the read access speed of the buffer.
- Use a dedicated network interface for each Replication Group to get the maximum possible replication transfer speeds to and from the DataCore Server(s).
- Enable Compression to improve overall replication transfer rates.
Other
- Replication destination pool write performance may limit replication file requests from the source.
- Exclude the replication buffer from any Anti-Virus software checks.
- Host operations that generate large bursts of writes - such as Live Migration, vMotion, host-based snapshots, or even page/swap files - that are not required to be replicated should use separate, un-replicated Virtual Disks.
- Use timestamp-based backups on Host files that reside on a Virtual Disk to avoid additional replication data being created by using a file’s ‘archive-bit’ instead.
- Where encrypted Virtual Disks are being replicated, see the “Encryption” section of this document.

Also see:
Considerations for Disk Pools
In the Disk Pool chapter - see Storage Allocation Unit Size - we recommended using an SAU size appropriate to the Disk Pool's use case. When using Snapshots, however, it is recommended to use the smallest SAU size possible. This is because Virtual Disks often have multiple, differential snapshots created from them that are deleted after a relatively short time. As each snapshot destination created from a Virtual Disk is an independent storage source, using a large SAU size in this situation can sometimes lead to excessive and unnecessary allocations of storage from a Disk Pool (and in extreme cases cause the Disk Pool to run out of SAUs to allocate).
Multiple snapshots for a Virtual Disk also mean that, as each snapshot is deleted, any SAUs that were allocated to that snapshot will need to be reclaimed. This not only contributes to extra I/O within the Disk Pool (to zero out the previously allocated SAUs) but also to more Catalog updates as the SAUs are 'removed' from their association with the snapshot's storage source.
As the recommendation here for the smallest SAU size may conflict with the SAU size chosen for a Disk Pool's other use cases, it is recommended that Snapshots have their own dedicated Disk Pools.
Considerations for 'Preferred' DataCore Servers
When using mirrored Virtual Disks, a snapshot destination can be created from either, or both, storage sources on each DataCore Server. If possible, create all snapshots on the non-preferred DataCore Server for a Virtual Disk. The overall workload on the preferred DataCore Server of a Virtual Disk is significantly higher than on the non-preferred side, which only has to manage mirror write I/O, whereas the preferred side receives both reads and writes from the Host and also has to manage the write I/O to the mirror on the other DataCore Server.
Number of Snapshots per Virtual Disk
Although it is possible to have up to 1024 snapshots per Virtual Disk, each active snapshot relationship may add load to the source Virtual Disk for the copy-on-write process.
Copy-on-Write
Copy-on-write is the mechanism whereby data is copied to a snapshot before being overwritten on the source Virtual Disk. This is required any time a write I/O would otherwise lose data required for a snapshot.
Copy-on-Write behavior before 10.0 PSP11
As an example, if there are 10 differential snapshots enabled for a single Virtual Disk then any write I/O sent to the Virtual Disk could end up generating 10 additional I/O requests – one for each snapshot – as the copy-on-write process has to be completed to each snapshot before the DataCore Server can accept the next write to the same location on the source Virtual Disk. The additional wait time for all 10 I/Os to complete for that 1 initial I/O sent to the source can be significant and end up causing considerable latency on the Host.
For this reason, it’s best to keep the number of differential snapshots for each source Virtual Disk to as few as possible.
Copy-on-Write behavior from 10.0 PSP11 onwards
As of SANsymphony 10.0 PSP11, the copy-on-write process has been changed. Using the same example as before, if there are 10 differential snapshots enabled for a single Virtual Disk then for any write I/O to a section of the Virtual Disk that has not yet been migrated into a snapshot, there will be only a single copy-on-write process to the most recently created snapshot. All other snapshots which require it will refer to that data.
This gives a significant improvement to migration times and storage usage for Virtual Disks with multiple snapshots, but should the snapshot containing that data fail, all other snapshots which require it will be affected.
Deletion of a snapshot containing copy-on-write data required by other snapshots will first need to migrate the data to another snapshot before completing. This happens silently and access to the data being migrated is maintained during the process.
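The simplified model below illustrates the difference in copy-on-write amplification described above. It only counts the extra copies triggered by the first overwrite of a region that has not yet been preserved, and it ignores caching and region granularity.

```python
# Simplified model of the first overwrite of a not-yet-preserved region; it
# ignores caching and region granularity and is only meant to show the scaling.
def cow_copies_per_write(active_snapshots: int, psp11_or_later: bool) -> int:
    """Extra copy-on-write operations triggered by one host write."""
    if active_snapshots == 0:
        return 0
    # Before 10.0 PSP11: one copy per differential snapshot.
    # From 10.0 PSP11: a single copy to the most recent snapshot, which the
    # older snapshots then reference.
    return 1 if psp11_or_later else active_snapshots

for snaps in (1, 5, 10):
    print(f"{snaps:2} snapshots: pre-PSP11 -> {cow_copies_per_write(snaps, False)} copies, "
          f"PSP11+ -> {cow_copies_per_write(snaps, True)} copy")
```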

Considerations for Disk Pools
- Create dedicated Disk Pools for Snapshots.
- Use the smallest SAU size possible for the Snapshot Disk Pool.
Considerations for Virtual Disks
- Where possible create snapshots on the non-preferred side of a mirrored Virtual Disk.
- Keep the number of differential snapshots per Virtual Disk to as few as possible.

Also see:
How does it work - Continuous Data Protection (CDP)
Considerations for SANsymphony Servers
CDP requires adequate resources (memory, CPU, disk pool storage performance, and disk capacity) and should not be enabled on DataCore Servers with limited resources. Before enabling, review the following FAQ:
The DataCore Server - System Memory Considerations
Considerations for Disk Pools
Writes to the History Log are sequential, but I/O being destaged may not be, as it could be for any SAU presently assigned to the Virtual Disk with CDP enabled or a new allocation.
For this reason, it is recommended to use separate, dedicated pools for CDP-enabled Virtual Disks, and for the History Logs for those Virtual Disks.
When creating a Disk Pool for CDP History Logs, it is recommended to use an SAU size suited to large sequential operations (see the SAU size table in the Disk Pool chapter).
Disk Pools used should always have sufficient free space. Configure System Health thresholds and email notifications via tasks so that a notification is sent when Disk Pool free space reaches the attention threshold.
Considerations for Virtual Disks
Enabling CDP for a Virtual Disk increases the amount of write I/O to that Virtual Disk as it causes writes to go to the History Log as well as the underlying physical disk. This may increase I/O latency to the Disk Pools used by the Virtual Disk and the History Log and decrease host I/O performance to Virtual Disks using these Disk Pools if not sized accordingly.
The default history log size (5% of the Virtual Disk size with a minimum size of 8 GB) may not be adequate for all Virtual Disks. The history log size should be set according to I/O load and retention time requirements. Once set, the retention period can be monitored, and the history log size can be increased if necessary. The current actual retention period for the history log is provided in the Virtual Disk Details > Info Tab (see Retention period).
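The sketch below applies the default sizing rule quoted above and gives a first approximation of the retention period. It assumes a steady write rate, so the actual retention shown in the Virtual Disk Details > Info tab should always be used to verify the result.

```python
# The 5% / 8 GB default comes from the text above; the retention estimate
# assumes a steady write rate and is only a first approximation.
def default_history_log_gib(virtual_disk_gib: float) -> float:
    return max(virtual_disk_gib * 0.05, 8.0)

def estimated_retention_hours(history_log_gib: float, avg_write_mib_s: float) -> float:
    """Hours of history the log can hold at a steady write rate."""
    return history_log_gib * 1024 / avg_write_mib_s / 3600

vd_gib = 2048                                # example: a 2 TiB Virtual Disk
log_gib = default_history_log_gib(vd_gib)    # -> 102.4 GiB
print(f"Default history log size : {log_gib:.1f} GiB")
print(f"Retention at 10 MiB/s    : ~{estimated_retention_hours(log_gib, 10):.1f} hours")
```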
Enable CDP on non-Preferred Servers to reduce the impact of History Log filling.
Wait to enable CDP until after recoveries are completed and/or when large amounts of data have been copied or restored to the Virtual Disk.
When copying large amounts of data at one time to newly created Virtual Disks, enable CDP after copying the data to avoid a significant I/O load.
Use caution when enabling or disabling CDP on a Virtual Disk which is served to Hosts. These activities may be disruptive and may result in slower performance, mirrors failing, or loss of access (if not mirrored).
Do not send large blocks of zero writes to a CDP-enabled Virtual Disk as this could result in shortened retention time and does not allow the pool space to be reclaimed until destaged. If pool reclamation is needed, disable CDP and wait for the History Log to destage, then run the zeroing routines. CDP can then be re-enabled.
Considerations for Rollbacks
Rollbacks are designed to be enabled for short periods and then to be split or reverted once the desired data has been found or recovered. Where possible, do not send writes to a rollback or keep it enabled for a long period.
Rollbacks should only be created to find a consistent condition before a disruptive event, and then restore the Virtual Disk data using the best rollback. Rollbacks should be split once the required point-in-time has been found. Delete any rollbacks which are no longer needed.
After an event that requires restoration of data, I/O to the affected Virtual Disk should be immediately suspended and then rollbacks should be created. Suspending I/O will keep older data changes from being destaged, which in turn will keep the rollback from expiring or I/O to the Virtual Disk from failing (where a “Persistent” rollback has been created). Keep I/O suspended until data recovery is complete.

Considerations for Disk Pools
- Create dedicated Disk Pools for CDP History Logs.
- Use the appropriate SAU size for the History Log Disk Pool.
Considerations for Virtual Disks
- Where possible enable CDP on the non-preferred side of a mirrored Virtual Disk.
- Enable CDP after mirror recoveries are complete.
- Enable CDP after copying large amounts of data to the Virtual Disk.
- Do not send large blocks of zero writes to a CDP-enabled Virtual Disk, as this can result in shortened retention time.
Considerations for Rollbacks
- When an issue occurs which requires a rollback, stop I/O to the affected Virtual Disk to avoid older data being destaged (where a persistent rollback has not been created).
- Do not send writes to a rollback Virtual Disk which has not been split.
- Enable rollbacks for the least amount of time possible.
- Once the point in time required has been found, split the rollback.

Also see:
SANsymphony: Disk Pool Encryption - Technical Details
Before implementing encryption, review the BIOS section.
Encryption and decryption are performed as data is written to or read from pool disks and as such there is a small performance overhead while the encryption or decryption takes place.
Considerations for Disk Pools
When using encryption, you can use either:
- Mixed pools containing both encrypted and unencrypted Virtual Disk sources, or
- Dedicated pools containing either encrypted or unencrypted Virtual Disk sources.
| Mixed | Dedicated |
|---|---|
| Pro: Provides ease of management as both Virtual Disk types can reside in the same Disk Pool. | Con: Requires manual management of Disk Pools to ensure they only contain Virtual Disks of one type. |
| Con: Increased migration activity due to conversion of SAUs between types (encrypted and unencrypted). | Pro: Only one type of SAU (encrypted or unencrypted) is required. |
Where dedicated Disk Pools are required, the log store and map store should be set to the unencrypted pool, as on server shutdown these write to a hidden, unencrypted Virtual Disk in the specified pool. Setting either of these features to a pool intended to be dedicated to only encrypted Virtual Disks would therefore cause it to have both encrypted and unencrypted storage sources in it.
Also see:
As of 10.0 PSP11, Virtual Disks in mixed Disk Pools can allocate an SAU of the opposite type, for example, an encrypted Virtual Disk can allocate an unencrypted SAU where no SAU of the required type is available. That SAU will then be converted to the correct type. This functionality is enabled by default but can be disabled per Disk Pool where required. Please open an incident with DataCore Support for details when this is necessary.
Encryption Keys
Upon creation of the first encrypted Virtual Disk in a pool, an encryption key will be generated. This is stored locally in the Windows key store. This key should be exported and stored in a safe location (not on the SANsymphony node itself) for use in a disaster recovery scenario. For shared pools (supported with PSP10 and later), the same key is used for all SANsymphony nodes, but for standard mirrored Virtual Disks the encryption key for each Disk Pool is unique, and the keys for the Disk Pools on both sides of the mirror should be exported and retained.
Also see:
With SANsymphony 10.0 PSP11, it is now possible to use a central KMS with the Key Management Interoperability Protocol (KMIP) for the centralized storage of encryption keys. When used, the encryption key(s) required by a node will be retrieved when the DataCore Executive Service is started on that server, they will not be stored in the local registry or Windows key store. If it is not possible to communicate with the KMS at service startup, encrypted Virtual Disks will be unavailable. As such, the KMS must be reachable from all nodes in the Server Group at service startup. However, a loss of access to the KMS will not interrupt I/O if this occurs while SANsymphony is already running.
Also see:
When using KMS, security certificates used for the secure session with the KMS server are stored on each of the SANsymphony nodes in the group in a subfolder of the SANsymphony installation folder called “Client”. This folder is not included in support bundles or backups.
It is recommended to ensure that any remote KMS in use is highly available and does not use the same DataCore Server Group for its storage.
Replicating Encrypted Virtual Disks
Data is encrypted as it is written to Disk Pool storage, and not before. As such, when encrypted Virtual Disks are replicated, the data written to the buffer, and therefore also sent to the destination Virtual Disk, will not be encrypted. In cases where this data needs to be encrypted as well, hardware-level encryption will need to be implemented on the replication buffer and the link between Server Groups. DataCore does not have any recommendations for encryption other than to mention that the extra time required for the encryption/decryption process of the replication data might add to the overall replication time lag. Comparative testing is advised.

Considerations for Disk Pools
- Where dedicated encrypted and unencrypted Disk Pools are required, the log store and map store should be set to an unencrypted pool.
- Ensure the encryption keys are backed up and stored in a safe location (not on the SANsymphony servers) before writing any data to encrypted Virtual Disks.
- Where a KMS is used, ensure it is highly available, and avoid using storage served from any Server Group which is also using it for key storage.
Considerations for Replication
- Where end-to-end encryption is required, implement hardware-level encryption on the replication buffer, and the link between Server Groups.