Aspera supports the configuration of high-availability transfer nodes. In such a design, Aspera transfers are spread across, or rotated among, two or more nodes. There are several architectural designs from which to choose, depending on your site's complexity and expertise.
As with any Aspera transfer, each node must be properly licensed. Aspera sells an HA license for this purpose; as long as all nodes sit behind a single aggregation point, you may use that HA license for as many transfer nodes as you wish, within the licensed aggregate bandwidth limit.
There are three steps involved in the configuration:
- Synchronize configuration files and user IDs
- Configure shared storage
- Choose and implement an HA method
Synchronize configuration files and user IDs
- The aspera.conf must be identical on each transfer node.
- The UID and GID of all Aspera transfer users must be the same on each transfer node.
- Via the asnodeadmin command, ensure that each Aspera node user maps to the proper Aspera transfer user.
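The synchronization steps above can be sketched in shell. The hostname (node2), transfer user (xfer), node user (node_user), and password here are placeholders for your environment; the asnodeadmin invocation shown (-a to add a node user, -x to map it to a system transfer user) is one common form and should be checked against your installed version's documentation.

```shell
# Print "UID:GID" for a given user -- the pair must match on every node.
ids_of() { echo "$(id -u "$1"):$(id -g "$1")"; }

# 1. Keep aspera.conf identical on every node (node2 is a placeholder):
scp -o ConnectTimeout=3 /opt/aspera/etc/aspera.conf node2:/opt/aspera/etc/aspera.conf \
  || echo "copy failed; synchronize aspera.conf by hand" >&2

# 2. Compare the transfer user's UID/GID against a peer node:
remote=$(ssh -o ConnectTimeout=3 node2 "echo \$(id -u xfer):\$(id -g xfer)" 2>/dev/null) || true
[ "$(ids_of xfer)" = "$remote" ] || echo "UID/GID mismatch for user xfer" >&2

# 3. Map the node API user to the system transfer user (run on each node):
/opt/aspera/bin/asnodeadmin -a -u node_user -p 'secret' -x xfer \
  || echo "asnodeadmin mapping failed; verify the mapping with asnodeadmin -l" >&2
```

Run the comparison from the primary against every other node; any mismatch must be fixed before transfers will behave identically across the pool.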
Configure shared storage
Each node must have access to shared storage, mounted and docrooted identically.
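As an illustration, assuming NFS with invented server names and paths, every node would carry the same mount and the same docroot for the transfer user. The aspera.conf fragment reflects the usual per-user docroot location (an absolute path under the user's file_system access paths); verify the exact element names against your version's documentation.

```
# /etc/fstab -- identical on every node
nfs-server:/export/aspera  /mnt/aspera  nfs  rw,hard  0 0
```

```xml
<!-- aspera.conf fragment: the same docroot for the transfer user on every node -->
<user>
    <name>xfer</name>
    <file_system>
        <access>
            <paths>
                <path>
                    <absolute>/mnt/aspera</absolute>
                </path>
            </paths>
        </access>
    </file_system>
</user>
```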
Choose and implement an HA method
Lastly, you will choose one of the three HA schemes below. Each has its advantages and disadvantages; which is best for you will depend on your particular situation. Aspera will be happy to advise you.
Round-robin DNS
The Domain Name System (DNS) can rotate among a specified set of IP addresses. This is accomplished by assigning the same FQDN (fully qualified domain name) to two or more IP addresses via multiple A records. Setting the TTL (time to live) to a very low value (e.g. five minutes) ensures that clients cache a particular IP address only briefly, so they receive a fresh address from the rotation almost every time they connect. It is also possible to weight the different IP addresses so that some are returned more often than others. The main disadvantage of round-robin DNS is its inability to automatically remove failed nodes from the rotation, although some advanced DNS services (such as Route 53 on Amazon Web Services) can perform health checks and remove down nodes from the pool.
Advantages:
- cheap; everyone already uses DNS
- your DNS is probably already HA, by virtue of multiple DNS servers
Disadvantages:
- only the one rotation scheme is available
- most DNS tools know nothing about node health and will continue to point at dead nodes
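A minimal BIND-style zone fragment for the scheme above (the name and addresses are invented): two A records share one name, and a 300-second TTL keeps client caching short.

```
; round-robin: one name, two A records, short TTL
aspera  300  IN  A  203.0.113.10
aspera  300  IN  A  203.0.113.11
```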
OS cluster
In this scenario, you configure the local operating system to create a cluster. The cluster monitors a defined set of services on the primary node; should any of those services fail, the cluster fails them over to one of the other cluster nodes. The main disadvantage of an OS cluster is the administrative overhead.
Advantages:
- cheap; the cluster software probably already exists on your host
- the cluster presents a single VIP (virtual IP), or a virtual interface and associated IP, to which clients refer
- automatic failover if the primary node dies
Disadvantages:
- only one node is active at a time
- tricky to configure and maintain
- short downtime while a service is failed over to another node
- each OS has different clustering software
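As a sketch of what such a cluster might look like with Pacemaker (one common choice on Linux; other operating systems ship different tooling), the commands below create a virtual IP and tie the Aspera node service to it. The service name asperanoded, the address, and the netmask are assumptions to adapt to your environment.

```shell
# Create the VIP that clients will use (address/netmask are placeholders):
pcs resource create aspera_vip ocf:heartbeat:IPaddr2 ip=203.0.113.20 cidr_netmask=24 \
    op monitor interval=30s

# Manage the Aspera node service as a cluster resource (service name is an assumption):
pcs resource create aspera_node systemd:asperanoded op monitor interval=30s

# Keep the service on the same node as the VIP, started after it:
pcs constraint colocation add aspera_node with aspera_vip INFINITY
pcs constraint order aspera_vip then aspera_node
```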
Load balancers
This approach involves purchasing load balancer hardware, whose job is to monitor the pool of transfer nodes and decide where each new incoming service request should be sent. All nodes are active, and the load is spread equally across them. The load balancers can dynamically add and remove nodes as circumstances dictate. The main disadvantage of load balancers is their expense.
Advantages:
- finer control
- the workload is spread across all active nodes
- can preserve persistent HTTP sessions
- dynamically add/remove nodes in the pool (never point to a dead node)
- multiple balancing schemes from which to choose
Disadvantages:
- expensive; requires the purchase of separate hardware
- for a full HA solution, you must purchase two load balancers, so that they themselves are HA
- may require advanced configuration, such as sticky sessions, to ensure traffic is routed to the correct nodes
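Dedicated appliances each have their own configuration language, but the idea can be illustrated with a software balancer such as HAProxy (an illustration, not part of the Aspera product): health checks drop dead nodes from the pool automatically. Port 9092 stands in for your Node API port, and the addresses are invented.

```
# Hypothetical HAProxy fragment balancing the Aspera Node API across two nodes
frontend aspera_node_api
    bind *:9092
    mode tcp
    default_backend aspera_nodes

backend aspera_nodes
    mode tcp
    balance roundrobin                     # one of several available schemes
    server node1 203.0.113.10:9092 check   # "check" enables health checking
    server node2 203.0.113.11:9092 check
```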