All About Adaptive Transfers

The Value of Playing Fairly

One of the key benefits of fasp™ is the adaptive rate control technology. Another term for this is fair, as in a fair-mode transfer. Simple UDP blasters like UDT and Tsunami achieve speed by flooding the network with packets in a way that incurs excessive overhead and squeezes out other traffic. Aspera's fasp™ is designed to be efficient without sacrificing speed while also playing fairly with other traffic, like HTTP. The fasp™ protocol also supports the notion of a fixed rate transfer, which simply transmits data at the rate specified (somewhat like a UDP blaster). This mode is useful in some situations, but fair mode transfers are much nicer to the network.

Adjusting Down to the Weakest Link

Aspera's fasp™ views the network as one end-to-end queue. The capacity of this queue is only as great as its smallest part; put another way, from one end to the other a transfer can only achieve the rate allowed by the most restrictive bottleneck. These bottlenecks cause congestion and can come from a number of sources: oversubscribed routers, QoS policies, poor links, and other traffic. As bottlenecks are encountered, queuing occurs, which signals back to fasp™ that the network will be overdriven if the rate is not reduced. This is the essence of the adaptive rate: as the available bandwidth changes, fasp™ adapts to the change. Congestion is measured thousands of times a second, and as bandwidth becomes available or is taken away, the protocol works to use exactly the amount available to it. The result is a fully utilized network without impact to concurrent fasp™ transfers or other traffic.
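The queuing-delay idea described above can be sketched in a few lines. This is a minimal illustration of delay-driven rate control, not Aspera's actual algorithm; the function name, thresholds, and back-off rule are all assumptions made for the sketch.

```python
# Toy sketch of delay-based adaptive rate control (illustrative only):
# the sender watches queuing delay (RTT above the empty-queue baseline)
# and backs off when queues build, ramping up when they drain.

BASE_RTT_MS = 50.0        # assumed propagation delay with empty queues
QUEUE_THRESHOLD_MS = 10.0 # queuing delay that signals congestion

def adapt_rate(current_rate, target_rate, measured_rtt_ms):
    """Return a new send rate (Mb/s) from one RTT measurement."""
    queuing_delay = measured_rtt_ms - BASE_RTT_MS
    if queuing_delay > QUEUE_THRESHOLD_MS:
        # Queues are building: back off in proportion to the excess delay.
        return current_rate * (QUEUE_THRESHOLD_MS / queuing_delay)
    # Queues are shallow: ramp back up, never exceeding the target.
    return min(target_rate, current_rate * 1.05)

rate = adapt_rate(100.0, 100.0, 90.0)  # heavy queuing -> sharp back-off
rate = adapt_rate(rate, 100.0, 52.0)   # queues drained -> gentle ramp-up
```

Run thousands of times a second against fresh delay measurements, a rule of this shape converges on whatever rate the tightest bottleneck allows.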

Setting Rates

The Target Rate

Every Aspera transfer has a target rate, which is the goal for the transfer to send at. The target rate is a ceiling the transfer cannot exceed, and it sets an expectation for the performance the transfer should achieve. Setting a proper target ensures the protocol is not too aggressive in its adaptation to the available bandwidth. There is currently no mechanism to simply start a transfer with no target and let the protocol adjust up or down as needed. The reason for this is that not all network traffic is the same. Protocols like FTP and HTTP are tolerant of data pipes being used to their fullest, with some contention as fasp™ probes the network. Other protocols, like VoIP, are very sensitive to jitter and congestion. For these sensitive protocols it is normally good to set aside a portion of the available bandwidth. For example, on a T3 (with 45Mb/s available) it may be advisable to target transfers at only 35Mb/s. This gives transfers 35Mb/s to work with, while the remaining 10Mb/s can be allocated to VoIP or other traffic.

For a single transfer, the target rate is also used to gauge the performance of the transfer against expected network conditions. Setting a transfer to 155Mb/s (the downstream bit-rate of an OC3) and getting this rate consistently means the user can calculate the amount of data expected to move in a given time frame. A user transferring a 14GB file could do so in roughly 12 minutes.
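That back-of-envelope figure is easy to verify. The helper below is just an illustration (the function name and the decimal-gigabyte assumption are mine, not from the article):

```python
def transfer_minutes(size_gb, rate_mbps):
    """Minutes to move size_gb gigabytes at a sustained rate in Mb/s."""
    megabits = size_gb * 1000 * 8   # decimal GB -> megabits
    return megabits / rate_mbps / 60

print(round(transfer_minutes(14, 155)))  # prints 12
```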

The target rate is a suggestion to the protocol on how fast to go. In adaptive mode the transfer will adjust to the available bandwidth. If the user believes the circuit is an OC3, but it is a T3/DS3 (effective rate of 44.7Mb/s) the fasp™ protocol will adjust to 44.7Mb/s. In addition to this, if a user shares the circuit with another application that uses HTTP, the Aspera transfer will also accommodate the HTTP transfer.

If there is available bandwidth for multiple transfers to get their target rate, then those transfers will go at that rate. On a T3, two 10Mb/s transfers would go at 10Mb/s for an aggregate throughput of 20Mb/s. Conversely, if the T3 had two transfers running at 45Mb/s, they would both share equally the available 45Mb/s, each taking roughly 21-22Mb/s.
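The sharing behavior described above amounts to a max-min fair allocation: transfers whose targets fit within an even split get their target, and the remainder is divided among the rest. The sketch below is a simplified model of that outcome, not fasp™'s actual mechanics; the function name is hypothetical.

```python
def fair_shares(link_mbps, targets):
    """Max-min fair allocation of a link among transfers with target
    rates. Returns shares in ascending order of target rate."""
    allocated = []
    remaining = link_mbps
    pending = sorted(targets)
    while pending:
        even_split = remaining / len(pending)
        target = pending.pop(0)
        grant = min(target, even_split)  # take target if it fits the split
        allocated.append(grant)
        remaining -= grant
    return allocated

fair_shares(45, [10, 10])  # both fit: [10, 10], 20Mb/s aggregate
fair_shares(45, [45, 45])  # contention: [22.5, 22.5]
```

The ideal split for two 45Mb/s transfers on a T3 is 22.5Mb/s each; the roughly 21-22Mb/s observed in practice reflects protocol overhead.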

Minimum Target Rate

Aspera transfers also have the notion of a minimum transfer rate. This essentially makes the transfer behave like a fixed-rate transfer up to the minimum rate; above it, the transfer has the same network characteristics as a fair transfer. This setting is one method for enforcing the priority of some transfers relative to others: by placing data on the network at a guaranteed rate, disregarding other network traffic, the transfer gives itself priority over other flows. Setting a minimum transfer rate is desirable in well-controlled environments where priority is necessary, but multiple concurrent transfers with minimum rates can also lead to the same congestion scenarios that exist with fixed-rate transfers.
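One simple way to model how the minimum, adaptive, and target rates combine (an illustrative assumption, not Aspera's published formula): the adaptive rate governs, but it is floored at the minimum and capped at the target.

```python
def effective_rate(adaptive_rate, min_rate, target_rate):
    """Model of a transfer's send rate (Mb/s): below min_rate the
    transfer behaves like a fixed-rate flow; above it, the adaptive
    rate governs, capped by the target."""
    return min(target_rate, max(min_rate, adaptive_rate))

effective_rate(5, 20, 45)   # congested network, but floor holds: 20
effective_rate(30, 20, 45)  # adaptive rate governs: 30
effective_rate(60, 20, 45)  # ceiling applies: 45
```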

Variations on the Adaptive Rate

A good description of the differences is found in the Transfer Policies Explained article. It describes the concept of a low-priority adaptive transfer, the normal fair mode, and a high-priority transfer mode that is aggressive relative to other Aspera flows.

Artificial Bottlenecks (are good)

Aggregate Rate Limiting with VLINK

The concept of vlink is quite simple: a vlink trunk is a virtual bottleneck used to control how transfers consume bandwidth. The simple solution mentioned above was to set a lower target rate to leave sensitive traffic room to operate. The problem, also explained above, is that when more than one transfer is running, each will attempt to reach its own target rate if the bandwidth is available. A vlink creates an aggregate bottleneck for Aspera transfers. On the example T3, a vlink of 35Mb/s would limit all Aspera transfers, whether one or several running concurrently, to 35Mb/s in aggregate, regardless of their target rates. This provides the "breathing room" necessary to ensure sensitive protocols like VoIP can run without problems.
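The effect of a vlink cap can be pictured as a proportional scale-down of all concurrent transfers so their aggregate never exceeds the trunk. This is a toy model of the outcome, not the actual vlink mechanism; the function name is hypothetical.

```python
def apply_vlink(vlink_mbps, rates):
    """Scale concurrent transfer rates (Mb/s) down proportionally so
    their aggregate never exceeds the vlink cap."""
    total = sum(rates)
    if total <= vlink_mbps:
        return list(rates)           # cap not reached; rates unchanged
    scale = vlink_mbps / total       # shrink every flow by the same factor
    return [r * scale for r in rates]

apply_vlink(35, [30, 40])  # aggregate 70 -> capped to [15.0, 20.0]
apply_vlink(35, [10, 20])  # aggregate 30 -> unchanged
```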

Disk Based Rate Adaptation

As network technology has improved, the speed of the network has quickly outpaced the capability of the hardware connected to it. The slowest portion of the hardware platform is currently the storage. While advances in storage capacity are abundant in the marketplace, most available disk systems still rely on spinning platters, seeking for the places to read and write data. Various techniques can be used to overcome this physical limit, such as building high-speed volumes or using expensive solid-state memory. Aspera's contribution is the disk-based rate control introduced with fasp™ version 2.2.

Because the fasp™ protocol views the network as one big queue with various bottleneck points where congestion occurs, the core protocol has been extended to treat the disk as an additional congestion signal. This ability of fasp™ to adjust to disk conditions prevents the protocol from overdriving the disk caches, and ultimately the volumes themselves.
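One way to picture a disk-side congestion signal is a watermark on the write buffer: as the buffer approaches full, the sender throttles. This is a toy sketch under assumed parameters, not the actual fasp™ 2.2 implementation; the name and watermark value are inventions for illustration.

```python
def disk_adjusted_rate(network_rate_mbps, buffer_fill, high_watermark=0.8):
    """Reduce the send rate as the disk write buffer nears full.

    buffer_fill is the fraction (0.0-1.0) of the buffer in use; below
    the watermark the network-adapted rate passes through unchanged."""
    if buffer_fill <= high_watermark:
        return network_rate_mbps
    # Scale down in proportion to how far past the watermark we are.
    overflow = (buffer_fill - high_watermark) / (1.0 - high_watermark)
    return network_rate_mbps * (1.0 - overflow)

disk_adjusted_rate(100.0, 0.5)  # buffer healthy: full 100Mb/s
disk_adjusted_rate(100.0, 0.9)  # buffer filling: throttled to ~50Mb/s
```

The key point survives the simplification: the disk becomes one more bottleneck in the end-to-end queue, and the rate adapts to it just as it does to network congestion.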
