General Performance Considerations

What to Expect from Fibre-Channel

With 16 Gbit or faster Fibre-Channel boards, a CPU thread is capable of forwarding 250k IOps per port. If the performance requirement is lower, multiple FC ports can share a single thread. Scale the number of ports and cores according to the actual performance requirements of the targeted environment.
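As a rough sizing sketch based on the 250k-IOps-per-thread figure above (the helper name and example workloads are illustrative, not part of SANsymphony):

```python
import math

def fc_threads_needed(required_iops: int, iops_per_thread: int = 250_000) -> int:
    """Estimate dedicated CPU threads needed to forward a given FC workload.

    Assumes each thread forwards ~250k IOps per port (16 Gbit or faster HBAs).
    """
    return max(1, math.ceil(required_iops / iops_per_thread))

# A 600k IOps workload needs 3 dedicated threads/ports, while a 100k IOps
# workload leaves headroom for several FC ports to share a single thread.
print(fc_threads_needed(600_000))  # 3
print(fc_threads_needed(100_000))  # 1
```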

What to Expect from iSCSI

A CPU thread, depending on clock speed, is capable of forwarding 60k-80k IOps per virtual iSCSI target portal. If the performance requirement is lower, multiple iSCSI target portals can share a single thread. Multiple IP addresses can be assigned per NIC to bind several iSCSI target portals to fast physical interfaces and maximize utilization of the connections.
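The same style of estimate applies to iSCSI, using the 60k-80k IOps range per portal. A minimal sketch, assuming the midpoint of that range (the helper name is illustrative; adjust the per-portal figure for the actual CPU clock speed):

```python
import math

def iscsi_portals_needed(required_iops: int, iops_per_portal: int = 70_000) -> int:
    """Estimate how many iSCSI target portals (each on its own CPU thread)
    a workload requires, using the midpoint of the 60k-80k IOps range."""
    return max(1, math.ceil(required_iops / iops_per_portal))

# A 280k IOps workload needs about 4 portals at 70k IOps each; those portals
# can be bound to separate IP addresses on fast physical NICs.
print(iscsi_portals_needed(280_000))  # 4
```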

Port Scaling Considerations

When it comes to port scaling, each functional port role should have a redundant layout per SANsymphony server. Ports are assigned Front-End, Mirror, and Back-End roles. Having at least 2 ports per role gives a good level of resiliency within the server and prevents mirrored Virtual Disks from failing or failing over to a remote SANsymphony server.

While it is possible to share multiple roles per port, doing so is not recommended. The only roles that may reasonably be shared are Mirroring and Back-End on the same port, because the Back-End performance requirement is usually reduced by cache optimization and may therefore coexist with the requirements of the mirror traffic.

Front-End roles should always be exclusive per port to guarantee non-blocking access to the cache.

When Front-End is scaled up with additional ports, then Mirroring should be scaled as well.

As a rule of thumb, try not to have more than 50 initiators logging into a single target port. SANsymphony technically supports considerably more, but all initiators will compete for the bandwidth, and under high load conditions some hosts may lose the competition and suffer from unexpectedly high latency.
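The 50-initiator guideline follows from simple bandwidth division. A minimal sketch of the worst-case even share per initiator (the function name and the 16 Gbit example port are illustrative assumptions):

```python
def per_initiator_bandwidth(port_gbit: float, initiators: int) -> float:
    """Worst-case even share of a target port's bandwidth (Gbit/s) per
    initiator, assuming all initiators are driving load simultaneously."""
    return port_gbit / initiators

# A 16 Gbit FC target port shared by 50 initiators leaves ~0.32 Gbit/s each
# under full contention; adding more initiators shrinks that share further,
# which is where some hosts start to see unexpectedly high latency.
print(round(per_initiator_bandwidth(16, 50), 2))  # 0.32
```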

In Fibre-Channel environments, try not to mix more than 2 different initiator line speeds connecting to a target, to prevent "slow drain" effects from degrading the entire fabric under high load conditions.

Scale Up vs. Scale Out

Scale-up means adding more resources to a single SANsymphony Server in a Server Group.

Scale-out means adding more SANsymphony Servers to a Server Group up to 64 Servers in total.

Scale-out may also mean having multiple SANsymphony Server Groups in a single datacenter, which allows operating on different code revisions and applying application-specific Service Level Agreements (SLAs) in a more differentiated way than a single instance of federated Servers allows.

While it may make sense to add as much as possible to the responsibility of a single Server, it may be wise to spread the risk across more than just 2 Servers for high availability and distribution of work. SANsymphony Server Groups operate as a federated grid of loosely coupled Servers. This means that each Server contributes to a parallel effort with all other Servers to achieve the desired goal in performance and availability. With just 2 Servers, 100% of the presented Virtual Disk mirrors lose redundancy as soon as one of the Servers is compromised. With 4 Servers this risk is already down to 50%, plus it is possible to have 3-way mirrors, which tolerate the loss of a mirror member and still stay highly available with 2 out of 3 data copies in case of a Server outage. The more Servers exist in a Server Group, the lower the risk. It is always a question of "how many eggs to carry in a single basket"!
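The 100% and 50% figures above follow from counting server pairs. A minimal sketch, under the simplifying assumption that 2-way mirrors are spread evenly across all possible server pairs (the function name is illustrative):

```python
def mirrors_losing_redundancy(servers: int) -> float:
    """Fraction of evenly distributed 2-way Virtual Disk mirrors that lose
    redundancy when a single server in the group fails.

    With 2 servers, every mirror involves the failed node (100%). With 4
    servers, the failed node appears in 3 of the C(4,2) = 6 pairs (50%).
    """
    pairs_total = servers * (servers - 1) // 2   # all possible mirror pairs
    pairs_affected = servers - 1                 # pairs involving the failed server
    return pairs_affected / pairs_total

print(mirrors_losing_redundancy(2))  # 1.0
print(mirrors_losing_redundancy(4))  # 0.5
```

The fraction keeps shrinking as the group grows, which is the quantitative form of the "eggs in a single basket" argument.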

A similar situation exists from the storage pool perspective. While it is possible to scale a single pool up to 8 PB in capacity, it may not be wise to do so, because as soon as a single back-end device of a pool fails, the pool goes offline and causes Virtual Disk mirrors to lose redundancy.

Furthermore, a single pool has limited IO performance because it is a single performance instance, while multiple pools scale performance out in parallel.