Configure automatic cluster scaling

Last updated: 2021-05-11 10:41:31

Cluster Autoscaler (CA) is an add-on that automatically scales the nodes of a Kubernetes cluster. When the capacity of a cluster is insufficient to schedule pods, CA calls the cloud provider to create a new node. When the resource utilization of a node stays below 50% for more than 10 minutes, CA deletes the node to reduce costs.

Create a scaling group

  1. Log in to the KCE console.
  2. In the left navigation pane, click Cluster.
  3. Click the name of a cluster to go to the cluster management page.
  4. In the left navigation pane, choose Manage Nodes > Scaling Group.
  5. Click Create Scaling Group.
  6. In the Launch Configuration section, set the parameters as required.
    • Name: the custom name of the launch configuration. The name must be unique in the region.
    • Cluster: the cluster to which the scaling group belongs.
    • Creation Method: the method of creating KEC instances in the scaling group. Valid values: Custom Server Configuration and Based on existing node configuration. If you select Based on existing node configuration, the launch configuration inherits the configurations of the selected node, including the instance type, CPU, memory, system disk size, data disk size, disk type, security group, VPC, and subnet.
    • Billing Mode: the billing mode of KEC instances in the scaling group. Only Pay-As-You-Go is supported.
    • Instance Configuration: the configurations of KEC instances in the scaling group, including the data center, CPU, memory, IPv6 support, architecture, type, image, and system disk.
    • Data Disk: data disk specifications of KEC instances. You can customize the specifications and select whether to format the disk and mount it to a specified directory.
    • Container Storage Directory: specifies whether to customize a directory to store container and image data. We recommend that you store the data in a data disk. If a directory is not set, /data/docker is used.
    • Login Mode: the login mode of KEC instances in the scaling group. Valid values: Password and Key.
    • Project: the project to which KEC instances in the scaling group belong.
    • Label: the label to be attached to KEC instances in the scaling group. Labels allow you to implement flexible scheduling, for example with a nodeSelector (see the sketch after these steps).
  7. In the Scaling Group Configuration section, set the parameters as required.
    • VPC: the default VPC where the cluster resides. The VPC cannot be changed.
    • Associated Subnet: the subnet where KEC instances in the scaling group reside.
    • Network Expansion Policy: the policy for selecting the subnet. Valid values:
      • Balanced Distribution: During a scale-out, the instances are evenly distributed across the selected subnets. If the target availability zone has insufficient resources for the scale-out, other subnets are selected based on the Selected First rule.
      • Selected First: During a scale-out, the instances are created in the subnets in the order in which the subnets appear in the list.
    • Security Group: the security group of KEC instances in the scaling group.
    • Quantity Range of Node: the minimum and maximum number of KEC instances in the scaling group. CA scales the group only within this range.
  8. Click Create to create the scaling group.
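
For example, if you attach a custom label in the Launch Configuration section, pods can target the nodes created by the scaling group with a nodeSelector. A minimal sketch, assuming a hypothetical label workload-type: gpu:

    # Sketch only: workload-type: gpu is a hypothetical label that you would
    # set in the Label field of the launch configuration.
    apiVersion: v1
    kind: Pod
    metadata:
      name: nodeselector-demo
    spec:
      nodeSelector:
        workload-type: gpu    # the pod schedules only onto nodes with this label
      containers:
      - name: app
        image: nginx:1.21

If no existing node carries the label, the pod stays Pending, and CA scales out the scaling group whose nodes would satisfy the selector.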

Scale-out condition

Every 10 seconds, CA checks whether the cluster has sufficient resources to schedule newly created pods. If resources are insufficient, CA calls the cloud provider to create a new node.
Whenever the Kubernetes scheduler cannot find a node for a pod, it sets the pod's PodScheduled condition to False with the reason Unschedulable. CA checks for unschedulable pods at the specified interval. If an unschedulable pod exists, CA creates a node to schedule the pod.
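
An unschedulable pod reports this condition in its status. A sketch of the relevant fields only, not a complete manifest:

    status:
      phase: Pending
      conditions:
      - type: PodScheduled
        status: "False"
        reason: Unschedulable
        message: '0/3 nodes are available: 3 Insufficient cpu.'   # example message; exact text varies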

Scale-out policy

If a cluster has more than one scaling group, you can specify the policy for selecting the scaling group to scale out (see the sketch after this list). The following options are supported:

  • random: CA randomly selects a scaling group.
  • most-pods: CA selects the scaling group that can schedule the most pending pods after the scale-out.
  • least-waste: CA selects the scaling group that will have the least idle CPU and memory after the pending pods are scheduled.
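
In KCE, the policy is selected in the console. For reference, the same option names are the values of the --expander flag of the open-source Cluster Autoscaler. A hypothetical excerpt from a self-managed cluster-autoscaler Deployment:

    # Sketch only: KCE manages this setting for you in a hosted cluster.
    containers:
    - name: cluster-autoscaler
      image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.21.0   # example image tag
      command:
      - ./cluster-autoscaler
      - --expander=least-waste   # or: random, most-pods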

Scale-in condition

CA checks the resource usage of nodes at the specified interval, which is 10 seconds by default. If the resource utilization of a node stays below 50% (the default threshold) for 10 minutes and the pods on the node can be moved to other nodes, CA automatically deletes the node from the cluster. The pods on the node are then rescheduled to other nodes.
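
These values match the defaults of the open-source Cluster Autoscaler. For reference, a self-managed deployment would tune them with flags such as the following (hypothetical excerpt; KCE manages these values for you):

    command:
    - ./cluster-autoscaler
    - --scan-interval=10s                        # how often CA re-evaluates the cluster
    - --scale-down-utilization-threshold=0.5     # a node becomes a scale-in candidate below 50%
    - --scale-down-unneeded-time=10m             # the node must stay underutilized for 10 minutes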

A node will not be deleted if the pods on this node meet one of the following conditions:

  • The pods are configured with a PodDisruptionBudget (PDB), and evicting them would violate the PDB (see the sketch after this list).
  • The pods belong to the kube-system namespace.
  • The pods are not created by a controller such as a Deployment, ReplicaSet, Job, or StatefulSet.
  • The pods use local storage.
  • The pods cannot be rescheduled due to other reasons. For example, resources are insufficient or other nodes do not meet the nodeSelector or nodeAffinity settings of the pods.
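
A minimal PodDisruptionBudget sketch, assuming a hypothetical workload labeled app: my-app. It keeps at least one replica running, so CA does not delete a node if draining it would violate the budget:

    apiVersion: policy/v1            # use policy/v1beta1 on clusters earlier than v1.21
    kind: PodDisruptionBudget
    metadata:
      name: my-app-pdb               # hypothetical name
    spec:
      minAvailable: 1                # always keep at least one replica running
      selector:
        matchLabels:
          app: my-app                # hypothetical label; match your own workload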

Usage notes

  • CA conflicts with the Auto Scaling service that is based on monitoring metrics. Do not configure automatic scaling based on monitoring metrics for the scaling groups in a cluster.
  • You must specify resource requests for pods. CA evaluates schedulability based on the request values and is triggered when pods cannot be scheduled because the requested resources are unavailable (see the sketch after this list).
  • Do not directly modify the nodes that belong to a scaling group. Make sure that the nodes in a scaling group have the same configurations.
  • When you delete a scaling group, KEC instances in the scaling group are also deleted. Exercise caution when you perform this operation.
  • Services may be interrupted during a scale-in. For example, if a Service is backed by a controller with a single replica, the pod of that replica may be restarted on another node when its current node is deleted. Before you enable automatic scaling, make sure that your Services can tolerate potential interruptions. We recommend that you configure a PDB for pods to prevent the nodes that run them from being deleted during a scale-in.
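
A minimal sketch of a pod with explicit requests; the name, image, and values are placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: requests-demo            # hypothetical name
    spec:
      containers:
      - name: app
        image: nginx:1.21
        resources:
          requests:                  # CA evaluates schedulability against these values
            cpu: 250m
            memory: 256Mi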
