Transfer Policies Explained

by John Heaton

In the beginning...

Historically, transfer policies have been referenced by the following terms:

  • fixed
  • aggressive fair
  • fair or adaptive
  • trickle

With Console 1.4 and fasp 2.5, these names have been mapped as follows:

  • fixed remains fixed
  • aggressive becomes high
  • fair/adaptive is simply fair
  • trickle is now low

The details

In the end, this is all semantics. For an explanation of what these policies mean in practice, we must first define fixed and adaptive:

Transfers can be classified as either fixed or adaptive. As the names indicate, a fixed-mode transfer uses a constant rate regardless of network conditions or end-to-end throughput. An adaptive transfer, on the other hand, adapts to the available bandwidth, whether that limit comes from network capacity, a virtual capacity imposed by Vlink, or congestion. Bottlenecks such as endpoint line/circuit capacity or intermediate router bandwidth are gauged, and the fasp™ protocol adjusts itself to maximize use of what is available without negative impact. Concurrent UDP and TCP flows on any shared segment that also use the available bandwidth are observed and allowed to co-exist with fasp™.

Fixed-mode transfers can be quite destructive, flooding the network with more data than it can handle. Fixed mode is useful for certain applications of the fasp™ technology and in troubleshooting, but is generally discouraged. The adaptive rate control mechanism allows multiple flows to run concurrently without the risk of over-driving the network.

The terms high (originally aggressive), fair, and low (formerly trickle) are modes of the adaptive rate control technology. Fundamentally, the names map to 2α, α, and α/10, respectively, where α is a target queuing factor in the internal congestion control algorithm. In practice, this means that all adaptive (fair-type) transfers react the same way on the network with regard to non-fasp™ traffic, but among themselves they allocate bandwidth according to the ratios above. For example, take three transfers running concurrently. The end-to-end bandwidth is 60Mb/s, and every transfer targets that rate using the fair policy. A single transfer would get the entire 60Mb/s, but three running concurrently share it equally at 20Mb/s each. Now change one of the transfers from fair to high: the transfers adjust so that the high transfer gets 30Mb/s while the two fair transfers run at 15Mb/s apiece.
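The weighted sharing described above can be sketched as a simple proportional split. This is an illustrative model only, not Aspera's actual rate control implementation; the weights mirror the 2α : α : α/10 ratios of the high, fair, and low policies.

```python
# Relative weights for each adaptive policy (in units of alpha).
# These ratios come from the article; the function itself is a
# hypothetical sketch of proportional bandwidth sharing.
WEIGHTS = {"high": 2.0, "fair": 1.0, "low": 0.1}

def share_bandwidth(total_mbps, policies):
    """Split total_mbps among transfers in proportion to policy weight."""
    total_weight = sum(WEIGHTS[p] for p in policies)
    return [total_mbps * WEIGHTS[p] / total_weight for p in policies]

# Three fair transfers on a 60 Mb/s path get 20 Mb/s each:
print(share_bandwidth(60, ["fair", "fair", "fair"]))  # [20.0, 20.0, 20.0]

# Change one to high: it gets 30 Mb/s, the others 15 Mb/s apiece:
print(share_bandwidth(60, ["high", "fair", "fair"]))  # [30.0, 15.0, 15.0]
```

The key point is that the split is purely ratio-based: the policies only change how fasp™ transfers divide bandwidth among themselves, not how aggressively they compete with other traffic (except for low).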

The high and fair modes have similar characteristics to TCP. The low policy, however, is deliberately less aggressive than TCP: a transfer using the low setting scales back significantly when other network traffic is introduced. This is desirable on networks carrying congestion-intolerant protocols such as VoIP. VoIP is very sensitive to congestion; under load imposed by fasp™, VoIP packets might still be delivered, but not within the timeframe required for a well-formed call. Other techniques can be used in conjunction with VoIP, but this is a typical use of the low policy.

The Vlink technology can also be used to address this scenario (although it is not limited to it). It imposes a virtual aggregate bandwidth cap. Revisiting the 60Mb/s example above: if a Vlink on the server (or user, or user group) enforced a 15Mb/s cap, the transfers would have only 15Mb/s to share. Three fair transfers would get 5Mb/s each; one high and two fair transfers would run at 7.5Mb/s and 3.75Mb/s apiece, respectively.
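The cap arithmetic works out as follows; this is a hypothetical illustration of the proportional split under a virtual cap, not Vlink's actual enforcement mechanism:

```python
# Sketch of how a Vlink-style virtual cap interacts with policy weights
# (high = 2*alpha, fair = alpha): the same proportional split, but the
# total is the 15 Mb/s cap rather than the 60 Mb/s physical path.
WEIGHTS = {"high": 2.0, "fair": 1.0}

def share_bandwidth(total_mbps, policies):
    total_weight = sum(WEIGHTS[p] for p in policies)
    return [total_mbps * WEIGHTS[p] / total_weight for p in policies]

cap = 15  # virtual aggregate bandwidth cap in Mb/s

print(share_bandwidth(cap, ["fair", "fair", "fair"]))  # [5.0, 5.0, 5.0]
print(share_bandwidth(cap, ["high", "fair", "fair"]))  # [7.5, 3.75, 3.75]
```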
