Migrate AWS (Qbert) cluster GP2 Volumes to GP3
Introduction
AWS has introduced the Amazon EBS General Purpose SSD volume type gp3, designed to provide a predictable baseline performance of 3,000 IOPS and 125 MiB/s throughput, regardless of volume size. With gp3 volumes, you can provision IOPS and throughput independently, without increasing storage size, at costs up to 20% lower per GB compared to gp2 volumes. Read more here: https://aws.amazon.com/blogs/storage/migrate-your-amazon-ebs-volumes-from-gp2-to-gp3-and-save-up-to-20-on-costs/
With the Platform9 5.6.5 release, AWS gp3 EBS volumes are the default for all new AWS-type clusters, with a default throughput of 125 MiB/s and 3,000 IOPS.
Platform9 users can migrate their existing gp2 cluster volumes to gp3 using the following procedure.
Migration Requirements
Platform9 Version
The Platform9 version must be PMK 5.6.8 or PMK 5.8.
Migrate volumes to GP3 for a specific cluster
Step 1: Update the AWS launch configuration to set the GP3 volume type as the default:
Call the edit operation on the cluster. The edit operation updates the AWS launch configuration from the gp2 volume type to gp3 (it updates the launch configuration with VolumeType: gp3, VolumeThroughput: 125, and VolumeIops: 3000). Users can use the PF9 UI or the edit API directly:
UI: Infrastructure > Clusters > [select your cluster] > edit > update cluster
- There is no need to modify any parameters on the UI page.
API: PUT /qbert/v4/<project_id>/clusters/<cluster_uuid>
- Note: for custom VolumeThroughput/VolumeIops values, customers can use the edit API with the payload below:
PUT /qbert/v4/<project_id>/clusters/<cluster_uuid>
Payload: { "ebsVolumeThroughput": 250, "ebsVolumeIops": 4000 }
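For illustration, a minimal Python sketch of that API call is shown below. It assumes you have already obtained a Keystone token for your Platform9 management plane and that the token is passed in the X-Auth-Token header; the FQDN, project ID, cluster UUID, and token values are placeholders, not part of the official tooling.
# Minimal sketch (assumptions noted above): send the edit payload to qbert.
# The FQDN, project ID, cluster UUID, and Keystone token are placeholders;
# the X-Auth-Token header is assumed for authentication.
import requests

DU_FQDN = "XXXX.platform9.horse"      # Platform9 management plane FQDN
PROJECT_ID = "<project_id>"
CLUSTER_UUID = "<cluster_uuid>"
TOKEN = "<keystone_token>"            # obtain via Keystone authentication

url = f"https://{DU_FQDN}/qbert/v4/{PROJECT_ID}/clusters/{CLUSTER_UUID}"
payload = {"ebsVolumeThroughput": 250, "ebsVolumeIops": 4000}

resp = requests.put(url, json=payload, headers={"X-Auth-Token": TOKEN})
resp.raise_for_status()
print("Cluster edit accepted, status:", resp.status_code)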
Step 2: Migrate the volumes from GP2 to GP3
There are two ways to do this:
- Migrate all volumes in a cluster using the script provided by Platform9.
- Migrate individual volumes on a cluster.
Migrate all volumes in a cluster using the script provided by Platform9.
Download the script from this link
Pre-requisites for running the script:
- The AWS CLI is installed on the system: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
- The AWS CLI is configured and the credentials are present in ~/.aws/credentials [Configuring the AWS CLI - AWS Command Line Interface] (a sample credentials file is shown below).
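For reference, a minimal ~/.aws/credentials file (the format written by 'aws configure') looks like the following; the key values are placeholders:
[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>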
Migrate the volumes:
python3 migrate_ebs_volume_gp2_to_gp3.py --kdu="XXXX.platform9.horse" --tenant=test --username="XXXX@platform9.com" --password="XXXXXX" --cluster_id 6d99076d-8137-444a-bcaf-e165f873a9f7
For clusters with a large number of worker nodes, volumes can be migrated in batches of a desired number of nodes in parallel; the default batch size is 5 nodes. Example:
python3 migrate_ebs_volume_gp2_to_gp3.py --kdu="XXXX.platform9.horse" --tenant=test --username="XXXX@platform9.com" --password="XXXXXX" --cluster_id 6d99076d-8137-444a-bcaf-e165f873a9f7 --workerbatchsize 10
Migrate individual volumes on a cluster.
- Configure the AWS CLI with the access key, secret key, and region [Configuring the AWS CLI - AWS Command Line Interface]
- Get the volume information to migrate
AWS Console > Instances > select the node > Storage > Block devices > <get the volume ID>, OR use the script below:
"""
This script fetches the volume IDs for a given cluster_uuid
pre-req:
aws credentials must be present in ~/.aws/credentials (you can add them using 'aws configure')
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html
Usage:
python3 fetch_volumeids_qbert_awsclusters.py --cluster_uuid 4460981f-1d3e-4373-bf41-7e7be4d662ad
Output:
{'i-0355d93e5c5f5e0f3': {'Name': 'pf9-master-nehatest5-4460981f',
'PrivateDnsName': 'ip-10-0-1.77.us-west-2.compute.internal',
'PrivateIpAddress': '10.0.1.77',
'Volumes': [('vol-057556257a215e1', '/dev/sda1'),
('vol-0ae4dff010dfdd7', '/dev/sdb')]},
.....
}
"""
import argparse
from pprint import pprint
import boto3
import botocore
client = boto3.client('ec2')
if __name__ == '__main__':
    result = {}
    parser = argparse.ArgumentParser()
    parser.add_argument('--cluster_uuid', type=str, required=True)
    args = parser.parse_args()
    cluster_uuid = args.cluster_uuid
    # Filter EC2 instances by the ClusterUuid tag applied to the cluster's nodes.
    custom_filter = [{
        'Name': 'tag:ClusterUuid',
        'Values': [cluster_uuid]}]
    try:
        response = client.describe_instances(Filters=custom_filter)
        for node in response["Reservations"]:
            for instance in node["Instances"]:
                result[instance["InstanceId"]] = {
                    "PrivateIpAddress": instance.get("PrivateIpAddress", ""),
                    "PrivateDnsName": instance.get("PrivateDnsName", "")
                }
                # Record the instance's Name tag, if present.
                name = ""
                for tags in instance.get('Tags', []):
                    if tags["Key"] == 'Name':
                        name = tags["Value"]
                        break
                result[instance["InstanceId"]]["Name"] = name
                # Collect (volume ID, device name) pairs for every attached EBS volume.
                result[instance["InstanceId"]]["Volumes"] = []
                for disks in instance["BlockDeviceMappings"]:
                    result[instance["InstanceId"]]["Volumes"].append(
                        (disks["Ebs"]["VolumeId"], disks["DeviceName"]))
        pprint(result)
    except botocore.exceptions.NoCredentialsError as err:
        print(str(err), ". Please configure the AWS credentials using 'aws configure'")
- Modify the volume using https://docs.aws.amazon.com/cli/latest/reference/ec2/modify-volume.html
aws ec2 modify-volume --volume-type <type of volume> --iops <value> --throughput <value> --volume-id <volume_id>
Example:
Modify volume with default throughput/iops:
aws ec2 modify-volume --volume-type gp3 --volume-id vol-0ae4dff010dfdd799
Modify volume with custom throughput/iops:
aws ec2 modify-volume --volume-type gp3 --iops 3500 --throughput 130 --volume-id vol-0ae4dff010dfdd799
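If you prefer to drive the modification from Python instead of the AWS CLI, a minimal boto3 sketch equivalent to the commands above is shown below; the volume ID and the custom IOPS/throughput values are placeholders.
# Sketch: change a single volume to gp3 with boto3 (equivalent to 'aws ec2 modify-volume').
# The volume ID, IOPS, and throughput values are placeholders.
import boto3

ec2 = boto3.client("ec2")

resp = ec2.modify_volume(
    VolumeId="vol-0ae4dff010dfdd799",
    VolumeType="gp3",
    Iops=3500,          # optional; omit to get the gp3 default of 3000 IOPS
    Throughput=130,     # optional; omit to get the gp3 default of 125 MiB/s
)
print(resp["VolumeModification"]["ModificationState"])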
Check the status of the modification
Once the "ModificationState" is "completed", the migration is complete.
aws ec2 describe-volumes-modifications --volume-id vol-0ae4dff010dfdd799
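To script the status check, a small boto3 sketch that polls the modification state until it finishes is shown below; the volume ID is a placeholder.
# Sketch: poll the volume modification state with boto3 (equivalent to repeating
# 'aws ec2 describe-volumes-modifications'). The volume ID is a placeholder.
import time
import boto3

ec2 = boto3.client("ec2")
volume_id = "vol-0ae4dff010dfdd799"

while True:
    mods = ec2.describe_volumes_modifications(VolumeIds=[volume_id])
    state = mods["VolumesModifications"][0]["ModificationState"]
    print("ModificationState:", state)
    if state in ("completed", "failed"):
        break
    time.sleep(30)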