It is common to use ascp to test the capability of the network. Because the fasp™ protocol can fully utilize the network bandwidth, it is an ideal tool for testing the boundaries of the end-to-end network path.
Note: This method is illustrated using the command-line ascp. The same tests can be performed with the GUI, barring a few exceptions. In fact, since command-line transfers can be viewed in the GUI, it is often convenient to watch the transfers graphically while running the tests from the CLI.
There are several ways to go about testing bandwidth; this is one method, and it is iterative. The steps are as follows:
1. Identify a reasonably large file to transfer, so that transfers last long enough to observe.
2. Run a fair-policy transfer of that file with a target rate that is expected to be achievable on the network.
3. If the transfer is...
   - smooth and consistent, and the achieved rate equals the target rate: raise the target rate and repeat step 2.
   - smooth and consistent, but the achieved rate is lower than the target rate: the achieved rate is the channel capacity.
   - inconsistent and non-uniform: there may be congestion on the channel; lower the target rate and repeat step 2.
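The iterative procedure above can be sketched as a small script. This is a hypothetical dry run that only prints the ascp command to try at each step; the file path, destination, and rate schedule are illustrative assumptions, not recommendations:

```shell
# Hypothetical sketch of the iterative bandwidth test: print the ascp
# invocation for each candidate target rate, stepping up in 50 Mb/s
# increments. Replace 'echo' with real execution when actually testing.
FILE=/tmp/20G.mov            # assumed large test file
DEST=email@example.com:/     # assumed destination
for RATE in 50 100 150 200; do
  # -Q: fair rate policy, -T: disable encryption, -l: target rate
  CMD="ascp -Q -T -l ${RATE}m $FILE $DEST"
  echo "$CMD"
done
```

In a real test you would stop stepping up as soon as the achieved rate falls below the target or the transfer graph becomes inconsistent, per step 3 above.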
The above is very generic, but the method provides a consistent way to test bandwidth. To be effective, the target rate chosen in step 2 of each iteration needs to be realistic. This can be difficult, and each scenario is different. A good rule of thumb is to raise or lower the target in increments of 1, 5, 10, or 100 Mb/s, with the increment chosen based on a general sense of how much bandwidth is available. For example, there is no reason to test at 100 Mb/s if the link is only expected to run at 45 Mb/s; instead try 50 Mb/s, and if performance is lower than expected and the transfer is not consistent, lower the rate and try again.
If the end-to-end capacity is fully available for a transfer, then all transfers should look smooth in a graph of the bandwidth. If the bandwidth fluctuates, it is normally a sign of contention of some form, typically related to concurrent use of the bandwidth (i.e., congestion).
Congestion may also manifest as a transfer that cannot go faster than some rate the link is supposed to support. In fair mode, the Aspera fasp™ protocol adjusts to whatever the network will allow, in an attempt not to overdrive the network. If you expect a higher transfer rate but it is not being achieved, congestion may be the cause.
Accounting for storage
A key feature of fasp™ is its ability to throttle itself if the storage cannot keep up. Just as with congestion above, the transfer will not go faster than the storage is capable of, to avoid sending data multiple times when disk caches are overrun. This is essentially an extension of the congestion-based rate control to include the storage's capability.
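The effect can be reasoned about as a simple minimum: the sustained transfer rate cannot exceed the slower of the network path and the storage. A back-of-the-envelope check, with purely illustrative numbers:

```shell
# Illustrative only: the sustained rate is bounded by whichever is
# slower, the network path or the destination storage.
NET_MBPS=1000                 # assumed network capacity, in Mb/s
DISK_MBS=60                   # assumed disk write throughput, in MB/s
DISK_MBPS=$((DISK_MBS * 8))   # convert MB/s to Mb/s
if [ "$DISK_MBPS" -lt "$NET_MBPS" ]; then
  MAX_MBPS=$DISK_MBPS
else
  MAX_MBPS=$NET_MBPS
fi
echo "Expected transfer ceiling: ${MAX_MBPS} Mb/s"
```

With these example numbers, a disk that sustains only 60 MB/s (480 Mb/s) caps the transfer well below a 1 Gb/s network, so raising the target rate further would not help.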
If storage is believed to be contributing to slow transfer speeds, two things can be done:
- Benchmark the disks (KB article TBD)
- Run a transfer without disks
For point 2, fasp™ 2.7 introduced a new set of options for ascp. Since disks can be too slow to read from or write to, ascp can now be told not to read from disk and not to write to disk. These options can be mixed and matched for a wide variety of testing, but in the simplest case, take the following example:
$ ascp -Q -T -l 1000m --no-read --no-write /tmp/20G.mov email@example.com:/
The --no-read and --no-write switch names are self-explanatory. This transfer only looks at the source file as a reference for how much data to send, and the destination directory is only a formality to satisfy the syntax. The example attempts a transfer at 1 Gb/s regardless of the disks on either end. If a lower rate results, the transfer is not disk-bound, so the limit is likely the network or a maxed-out processor.
As is apparent, while conceptually simple, there is a lot that goes into testing bandwidth. The information here is a guideline; please contact Aspera Support or your SE for assistance with these tests to make sure the systems are performing to expectations.