Release Notes Updated: September 30, 2016
- Cluster Transfer Performance Enhancements
- The transfer server has been upgraded to IBM Aspera Enterprise Server 3.6.2. For more information, see the release notes at: http://downloads.asperasoft.com/en/release_notes/default_1/release_notes_304.
- Cluster node network drivers have been upgraded to enable AWS "enhanced networking".
- Improved Support for Running Clusters in a Private VPC (AWS)
- Admins can configure a specific private IP address to serve as the state-store host and the node-configuration download source when launching clusters.
- Admins can use the Cluster Manager's private IP address for cluster nodes to download the node configuration and connect to the state store.
- Admins can now specify private DNS names when launching a cluster.
- Cluster Provisioning Enhancements
- Admins can now run a custom first boot script, specified through user data, on the Cluster Manager. For example, the script can assign an elastic IP address or a secondary private IP address needed for automated failure recovery.
- Admins can now run a custom first boot script, specified in the cluster configuration, on cluster nodes.
- Admins can now configure cluster nodes to create and mount a separate "swap volume" when using instance types that do not provide local instance store volumes.
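A first-boot user-data script like the one described above might look as follows. This is an illustrative sketch only: the Elastic IP allocation ID is a placeholder, the `aws ec2 associate-address` call assumes the instance role has the necessary EC2 permissions, and the exact hook the product uses to run the script is not shown here.

```shell
#!/bin/sh
# Hypothetical first-boot user-data script for the Cluster Manager.
# The allocation ID below is a placeholder, not a product default.
ALLOCATION_ID="eipalloc-0123456789abcdef0"   # assumption: your EIP allocation

# Read this instance's ID from the EC2 instance metadata service.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

# Associate the Elastic IP so the Cluster Manager keeps a stable address
# across instance replacement during automated failure recovery.
aws ec2 associate-address \
    --instance-id "$INSTANCE_ID" \
    --allocation-id "$ALLOCATION_ID"
```

This is a provisioning fragment: it only runs meaningfully inside EC2 with valid credentials.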
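The swap-volume option above can be approximated with standard Linux tooling. The sketch below uses a swap file rather than a dedicated volume, and the path and size are illustrative, not the product's defaults; enabling swap (`swapon`) requires root, so that step is skipped when run unprivileged.

```shell
#!/bin/sh
# Sketch: create and enable swap space, approximating the cluster's
# "swap volume" behavior on instance types without local instance-store
# volumes. Path and size below are illustrative only.
make_swap() {
    swapfile="$1"; size_mb="$2"
    # Allocate the backing file (dd is portable; fallocate is faster where available).
    dd if=/dev/zero of="$swapfile" bs=1M count="$size_mb" 2>/dev/null
    chmod 600 "$swapfile"      # swap files must not be world-readable
    mkswap "$swapfile"         # write the swap signature
    # Activating swap requires root; skip gracefully otherwise.
    if [ "$(id -u)" -eq 0 ]; then
        swapon "$swapfile"
    fi
}

make_swap /tmp/example-swap 16   # 16 MiB demo; real setups use GiB sizes
```

A dedicated volume works the same way: run `mkswap` and `swapon` against the block device instead of a file.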
- Cluster Management Enhancements
- Cluster manager and nodes now include jq and cloud-specific command line utilities.
- Error messages in the status tab are greyed out when an activity returns to healthy.
- Admins can now specify multiple DNS hosted zones and configure transfer nodes with separate hosted zones for public and private IP addresses.
- Admins can now configure hosted zone IDs for cases in which multiple hosted zones share the same name.
- The default Cluster Manager Console timeout has been set to two weeks.
- The password for the admin user of the ATC-API is automatically set to the Instance ID on first boot of the instance.
- The Cluster Manager console now shows the Public IP and Private IP columns instead of the Hostname column for cluster nodes.
- Logs on cluster nodes and the Cluster Manager now rotate.
- The recursive file count feature is disabled in aspera.conf.
ISSUES FIXED IN THIS RELEASE
ATC-176 - State store backup fails because the redis.rdb file is too large.
ATC-149 - Stats-collector fails to connect to RDS.
ATC-147 - Instances terminated manually during image upgrade remain in "REPLACEMENT IN PROGRESS" state forever.
ATC-145 - The atcm service fails to start with error code 143.
ATC-142 - The cluster refuses new transfers when the Cluster Manager is not available.
ATC-141 - AWS cluster nodes of instance type M3 cannot be rebooted.
ATC-133 - The Cluster Manager monitors transfers using port 9092 instead of 443.
ATC-117 - Creating a cluster with an invalid cluster configuration results in a NullPointerException error instead of a ValidationException error.
ATC-114 - Auto scale policy lacks validation.
ATC-99 - Stats-collector fails if backup or restore configuration contains password with special characters (for example, ^).
ATC-80 - Locking timeouts are set incorrectly for the AK sync and Cluster Master periodic activities.
ATC-74 - A degraded cluster node is shut down even if it has the SCALEKV cluster role.
ATC-45 - Failures to start new nodes are counted when calculating and verifying the Start Frequency: Count setting for the auto scale policy.
ATC-26 - Nodes in DEGRADED state are not replaced with new nodes.
ATC-25 - Failure to acquire AUTO_SCALE activity lock during image upgrade prevents upgrade from proceeding.
AMAZON MACHINE IMAGE (AMI) INFORMATION
Cluster Manager Image
ATC-207 - The Cluster Manager Console displays a max of ten access keys in the drop-down menu.
For online support resources for Aspera products, including raising new support tickets, visit the Aspera Support Portal. You may already have an account if you have contacted the Aspera support team in the past; before creating a new account, first try setting a password for the email address you use to interact with us. You may also call one of our regional support centers.