
Implement load balancing in a multi-cluster federation

Last updated: 2021-05-11 10:41:36

Overview

Use Istio to implement load balancing in a cluster federation

Istio supports multi-cluster deployments and provides cross-cluster service discovery and load balancing. Istio offers two multi-cluster deployment modes: multiple control plane replicas, and a shared control plane. The shared control plane mode is further divided into single-network and multi-network deployments. In a multi-cluster federation, clusters often reside in different availability zones and belong to different networks. Therefore, the multi-network deployment mode with a shared control plane meets the requirements for load balancing in a multi-cluster federation.

Istio multi-network shared control plane deployment


In the multi-network shared control plane deployment, one cluster serves as the host cluster and runs the control plane. The other, remote clusters connect to the control plane of the host cluster to join the Istio service mesh. In Istio 1.7, the control plane is deployed as istiod; the istiod instance deployed in each remote cluster runs the Certificate Authority (CA) and performs workload webhook injection for that cluster. Service discovery is implemented on the control plane of the host cluster. Clusters communicate through gateways, so communication between workloads in different clusters does not require VPN connections or direct network access.

Configure clusters

Prerequisites

  • Each cluster runs Kubernetes 1.16 or later.
  • The federation contains at least two Kubernetes clusters, which can reside in different VPCs.
  • If a remote member cluster resides in a VPC different from that of the host cluster, API server public access must be enabled for that member cluster.
  • At least 4 CPU cores and 8 GB of memory are reserved on the worker nodes of the host cluster.

In the following example, two clusters that reside in different VPCs are used to implement an Istio multi-cluster service mesh and cross-cluster load balancing based on a cluster federation.

Preparations

Create two clusters

Log in to the KCE console. Create a Kubernetes cluster in the CN North 1 (Beijing) region and another in the CN East 1 (Shanghai) region.


Create a cluster federation

In the KCE console, choose Multi-Cluster > Federation Management in the left navigation pane. On the page that appears, select the region where the target host cluster resides, and click Create Cluster Federation.

Select the region and host cluster, and click Create to start deploying the control plane of the federation.

Check the deployment status of the components. After all components are deployed, the host cluster enters the Not added state. Choose More > Join Federation to add the host cluster to the federation.

Click Add Member Cluster to add the other Kubernetes cluster to the federation.

In this example, the region of the member cluster is different from that of the host cluster. This is the typical deployment mode to implement cross-region disaster recovery based on a cluster federation.

After the two Kubernetes clusters are added to the federation, you can view them on the Federation Management page.


View the status of the cluster federation

Connect to the host cluster and obtain the cluster information of the federation.

$ kubectl get kubefedclusters -n kube-federation-system
NAME                                   AGE    READY
d161c69e-286b-4541-901f-b121c9517f4e   5m     True
fcb0f8d3-9907-4a4a-9653-4584b367ee29   12m    True
Obtain the federation certificate and kubefedctl

After the cluster federation is created, the federation certificate is stored in the kubefedconfig ConfigMap in the kube-system namespace of the host cluster. Save the federation certificate to the local ~/.kube/config file so that kubectl and kubefedctl can read it.

$ kubectl get cm -n kube-system kubefedconfig -o yaml
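The ConfigMap contains the kubeconfig of the federation. The following is a minimal sketch of merging it into ~/.kube/config, assuming the kubeconfig is stored under a data key named kubeconfig.yaml (check the actual key name in the output above before running it):

# Extract the kubeconfig (the key name kubeconfig.yaml is an assumption).
$ kubectl get cm -n kube-system kubefedconfig -o jsonpath='{.data.kubeconfig\.yaml}' > /tmp/kubefedconfig
# Merge it with the existing ~/.kube/config.
$ KUBECONFIG=~/.kube/config:/tmp/kubefedconfig kubectl config view --flatten > /tmp/merged-config
$ mv /tmp/merged-config ~/.kube/config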

Check whether the certificate is correctly configured, and switch the context to the host cluster.

$ kubectl config get-contexts
CURRENT   NAME                                           CLUSTER                                        AUTHINFO                                     NAMESPACE
          context-d161c69e-286b-4541-901f-b121c9517f4e   cluster-d161c69e-286b-4541-901f-b121c9517f4e   admin-d161c69e-286b-4541-901f-b121c9517f4e   
          context-fcb0f8d3-9907-4a4a-9653-4584b367ee29   cluster-fcb0f8d3-9907-4a4a-9653-4584b367ee29   admin-fcb0f8d3-9907-4a4a-9653-4584b367ee29

$ kubectl config use-context context-fcb0f8d3-9907-4a4a-9653-4584b367ee29
Switched to context "context-fcb0f8d3-9907-4a4a-9653-4584b367ee29".

Download the kubefedctl command-line tool.

$ wget https://github.com/kubernetes-sigs/kubefed/releases/download/v0.4.1/kubefedctl-0.4.1-linux-amd64.tgz
$ tar zxvf kubefedctl-0.4.1-linux-amd64.tgz
$ mv kubefedctl /usr/local/bin/
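Verify that the tool is installed correctly:

$ kubefedctl version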

Deploy an Istio multi-cluster service mesh

In the following example, the istioctl command-line tool and the sample files provided by Istio are used. Download and install them, pinning the version so that it matches the directory name used below.

$ curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.7.2 sh -
$ cd istio-1.7.2 && export PATH=$PWD/bin:$PATH
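Optionally, confirm that istioctl is on the PATH:

$ istioctl version --remote=false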

In this example, mutual Transport Layer Security (TLS) is enabled between the control plane and application pods during the Istio installation. Use the sample certificates in the Istio samples directory as the shared root CA. Create the same namespace and secret in the host cluster and the member cluster to store the root certificate.

# Create a namespace and a federated namespace both called istio-system.
$ kubectl create namespace istio-system
$ kubefedctl federate ns istio-system

# Create the secret and federated secret.
$ kubectl create secret generic cacerts -n istio-system \
    --from-file=samples/certs/ca-cert.pem \
    --from-file=samples/certs/ca-key.pem \
    --from-file=samples/certs/root-cert.pem \
    --from-file=samples/certs/cert-chain.pem

$ kubefedctl federate secret cacerts -n istio-system
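To confirm that the federated resources have been propagated, check that the secret exists in the member cluster (the context name is taken from the output of kubectl config get-contexts above). If the secret is missing, check the placement of the federated secret.

$ kubectl get secret cacerts -n istio-system --context=context-d161c69e-286b-4541-901f-b121c9517f4e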

Install Istio in the host cluster

When you deploy the Istio control plane in the host cluster, set the following environment variables. They replace the variables in the configuration template to generate the configuration file.

# Set environment variables.
$ export MAIN_CLUSTER_CTX=context-fcb0f8d3-9907-4a4a-9653-4584b367ee29
$ export REMOTE_CLUSTER_CTX=context-d161c69e-286b-4541-901f-b121c9517f4e

$ export MAIN_CLUSTER_NAME=cluster-fcb0f8d3-9907-4a4a-9653-4584b367ee29
$ export REMOTE_CLUSTER_NAME=cluster-d161c69e-286b-4541-901f-b121c9517f4e

$ export MAIN_CLUSTER_NETWORK=network1
$ export REMOTE_CLUSTER_NETWORK=network2

# Create a configuration file for the host cluster.
cat <<EOF> istio-main-cluster.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      multiCluster:
        clusterName: ${MAIN_CLUSTER_NAME}
      network: ${MAIN_CLUSTER_NETWORK}

      # Mesh network configuration. This is optional and may be omitted if
      # all clusters are on the same network.
      meshNetworks:
        ${MAIN_CLUSTER_NETWORK}:
          endpoints:
          - fromRegistry: ${MAIN_CLUSTER_NAME}
          gateways:
          - registry_service_name: istio-ingressgateway.istio-system.svc.cluster.local
            port: 443

        ${REMOTE_CLUSTER_NETWORK}:
          endpoints:
          - fromRegistry: ${REMOTE_CLUSTER_NAME}
          gateways:
          - registry_service_name: istio-ingressgateway.istio-system.svc.cluster.local
            port: 443

      # Use the existing istio-ingressgateway.
      meshExpansion:
        enabled: true
EOF

# Deploy the control plane.
$ istioctl install -f istio-main-cluster.yaml --context=${MAIN_CLUSTER_CTX}
Detected that your cluster does not support third party JWT authentication. Falling back to less secure first party JWT. See https://istio.io/docs/ops/best-practices/security/#configure-third-party-service-account-tokens for details.
✔ Istio core installed                                                                                                                                       
✔ Istiod installed                                                                                                                                           
✔ Ingress gateways installed                                                                                                                                 
✔ Installation complete    

$ kubectl get pod -n istio-system --context=${MAIN_CLUSTER_CTX}
NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-6bdbbc5566-c9kxk   1/1     Running   0          26s
istiod-689b5cbd7d-2dsml                 1/1     Running   0          37s

# Set the environment variable ISTIOD_REMOTE_EP.
$ export ISTIOD_REMOTE_EP=$(kubectl get svc -n istio-system --context=${MAIN_CLUSTER_CTX} istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ echo "ISTIOD_REMOTE_EP is ${ISTIOD_REMOTE_EP}"

If the echo command prints an empty value, the LoadBalancer of istio-ingressgateway has not finished provisioning; wait a moment and run the export command again.

Install Istio in the remote cluster

# Create a configuration file for the remote cluster.
cat <<EOF> istio-remote0-cluster.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      # The remote cluster's name and network name must match the values specified in the
      # mesh network configuration of the primary cluster.
      multiCluster:
        clusterName: ${REMOTE_CLUSTER_NAME}
      network: ${REMOTE_CLUSTER_NETWORK}

      # Replace ISTIOD_REMOTE_EP with the value of ISTIOD_REMOTE_EP set earlier.
      remotePilotAddress: ${ISTIOD_REMOTE_EP}

  ## An istio-ingressgateway is required in the remote cluster here because the
  ## two clusters are on different networks. If both clusters were on the same
  ## network, the gateway could be disabled by setting enabled to false.
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
EOF

$ istioctl install -f istio-remote0-cluster.yaml --context=${REMOTE_CLUSTER_CTX}
$ kubectl get pod -n istio-system --context=${REMOTE_CLUSTER_CTX}

Configure cross-cluster load balancing

Configure Ingress gateways

In the multi-network shared control plane deployment for the cluster federation, Istio ingress gateways serve as the entry points for cross-network communication. To improve communication security, configure the ingress gateways to use port 443 and the Server Name Indication (SNI) header.

cat <<EOF> cluster-aware-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cluster-aware-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: tls
      protocol: TLS
    tls:
      mode: AUTO_PASSTHROUGH
    hosts:
    - "*.local"
EOF

Create gateways in the host cluster and member cluster to implement cross-cluster service routing.

$ kubectl apply -f cluster-aware-gateway.yaml --context=${MAIN_CLUSTER_CTX}
$ kubectl apply -f cluster-aware-gateway.yaml --context=${REMOTE_CLUSTER_CTX}
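Confirm that the Gateway resource exists in both clusters:

$ kubectl get gateway -n istio-system cluster-aware-gateway --context=${MAIN_CLUSTER_CTX}
$ kubectl get gateway -n istio-system cluster-aware-gateway --context=${REMOTE_CLUSTER_CTX}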

Configure cross-cluster service registration

To implement cross-cluster load balancing in the cluster federation, the control plane deployed in the host cluster must be able to access the kube-apiserver of every cluster, so that it can perform service discovery and obtain endpoint and pod attributes. To grant access to the member cluster, configure the kube-apiserver public access certificate of the member cluster: log in to the KCE console, click the name of the member cluster, click Get Cluster Config on the Basic Information page, and then click Public Access Config in the Cluster Config File dialog box to obtain the public access certificate.


Add the public access certificate of the remote cluster to the ~/.kube/config file, and run the following command. istioctl uses the certificate to create the service account istio-reader-service-account and the related role and role binding in the remote cluster, and then generates a secret that is applied to the host cluster to store the credentials of istio-reader-service-account.

$ istioctl x create-remote-secret --name ${REMOTE_CLUSTER_NAME} --context=${REMOTE_CLUSTER_CTX} | \
    kubectl apply -f - --context=${MAIN_CLUSTER_CTX}
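To confirm that the remote secret was created in the host cluster, you can list the secrets labeled for multi-cluster use (in Istio 1.7, remote secrets created by istioctl carry the istio/multiCluster=true label):

$ kubectl get secret -n istio-system -l istio/multiCluster=true --context=${MAIN_CLUSTER_CTX}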

Test cross-cluster load balancing

Deploy helloworld v1 in the host cluster and helloworld v2 in the member cluster. You can then access the Service from either cluster, and traffic is routed between the clusters.

Deploy the helloworld Service

Create a namespace called sample and attach the istio-injection=enabled label to the namespace.

$ kubectl create namespace sample --context=${MAIN_CLUSTER_CTX}
$ kubectl label namespace sample istio-injection=enabled --context=${MAIN_CLUSTER_CTX}

Create a federated namespace called sample.

$ kubefedctl federate ns sample
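The federated namespace propagates the sample namespace and its labels to the member cluster. Verify that the istio-injection=enabled label arrived; if it was not propagated, add it to the member cluster manually.

$ kubectl get ns sample --show-labels --context=${REMOTE_CLUSTER_CTX}
# Only needed if the label is missing in the member cluster:
$ kubectl label namespace sample istio-injection=enabled --context=${REMOTE_CLUSTER_CTX}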

Create a FederatedDeployment to deploy the helloworld application in the host cluster and member cluster with the v1 and v2 images, respectively. helloworld-deploy.yaml:

apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: helloworld
  namespace: sample
spec:
  template:
    metadata:
      labels:
        app: helloworld
        version: v1
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: helloworld
          version: v1
      template:
        metadata:
          labels:
            app: helloworld
            version: v1
        spec:
          containers:
            - image: docker.io/istio/examples-helloworld-v1
              name: helloworld
  placement:
    clusters:
      - name: fcb0f8d3-9907-4a4a-9653-4584b367ee29
      - name: d161c69e-286b-4541-901f-b121c9517f4e
  overrides:
    - clusterName: d161c69e-286b-4541-901f-b121c9517f4e
      clusterOverrides:
        - path: "/spec/template/spec/containers/0/image"
          value: "docker.io/istio/examples-helloworld-v2"
        - path: "/spec/template/metadata/labels/version"
          value: "v2"
        - path: "/spec/selector/matchLabels/version"
          value: "v2"
        - path: "/metadata/labels/version"
          value: "v2"

helloworld-svc.yaml:

apiVersion: v1
kind: Service
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  ports:
  - port: 5000
    name: http
  selector:
    app: helloworld

Create the FederatedDeployment, and deploy the helloworld Service in the host cluster and member cluster.

$ kubectl apply -f helloworld-deploy.yaml --context=${MAIN_CLUSTER_CTX}
$ kubectl apply -f helloworld-svc.yaml -n sample --context=${MAIN_CLUSTER_CTX}
$ kubectl apply -f helloworld-svc.yaml -n sample --context=${REMOTE_CLUSTER_CTX}

Verify load balancing

To verify that the traffic of the helloworld Service is distributed between the clusters, deploy the sample sleep Service to call the helloworld Service.

# Deploy the sleep Service in the host cluster and member cluster.
$ kubectl apply -f samples/sleep/sleep.yaml -n sample --context=${MAIN_CLUSTER_CTX}
$ kubectl apply -f samples/sleep/sleep.yaml -n sample --context=${REMOTE_CLUSTER_CTX}

# Call the helloworld.sample Service in the host cluster repeatedly.
$ kubectl exec -it -n sample -c sleep --context=${MAIN_CLUSTER_CTX} $(kubectl get pod -n sample -l app=sleep --context=${MAIN_CLUSTER_CTX} -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello

# Call the helloworld.sample Service in the member cluster repeatedly.
$ kubectl exec -it -n sample -c sleep --context=${REMOTE_CLUSTER_CTX} $(kubectl get pod -n sample -l app=sleep --context=${REMOTE_CLUSTER_CTX} -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello
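Each command above issues a single request. To call the Service repeatedly in one shot, you can wrap the request in a small loop, for example:

$ for i in $(seq 1 10); do kubectl exec -n sample -c sleep --context=${MAIN_CLUSTER_CTX} $(kubectl get pod -n sample -l app=sleep --context=${MAIN_CLUSTER_CTX} -o jsonpath='{.items[0].metadata.name}') -- curl -s helloworld.sample:5000/hello; done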

Note: If the Service is deployed properly, the traffic of the helloworld.sample Service is distributed between the cluster-d161c69e-286b-4541-901f-b121c9517f4e and cluster-fcb0f8d3-9907-4a4a-9653-4584b367ee29 clusters, and the responses alternate between v1 and v2:

Hello version: v2, instance: helloworld-v2-758dd55874-cxjnw
Hello version: v1, instance: helloworld-v1-7c5df4c84d-vzjtj

Run the following command in the host cluster to view the endpoints that Istio registered for the helloworld Service. Istio discovers and registers two endpoints: one points to the local helloworld Service, and the other points to the helloworld Service in the member cluster through the EIP of the member cluster's ingress gateway.

$ kubectl get pod -n sample -l app=sleep --context=${MAIN_CLUSTER_CTX} -o name | cut -f2 -d'/' | xargs -I{} istioctl -n sample --context=${MAIN_CLUSTER_CTX} proxy-config endpoints {} --cluster "outbound|5000||helloworld.sample.svc.cluster.local"
ENDPOINT              STATUS      OUTLIER CHECK     CLUSTER
10.2.2.14:5000        HEALTHY     OK                outbound|5000||helloworld.sample.svc.cluster.local
120.92.145.63:443     HEALTHY     OK                outbound|5000||helloworld.sample.svc.cluster.local

# Obtain the endpoint information of the helloworld Service in the local host cluster.
$ kubectl get ep helloworld -n sample --context=${MAIN_CLUSTER_CTX}
NAME         ENDPOINTS        AGE
helloworld   10.2.2.14:5000   20m

# Obtain the ingress gateway address of the remote member cluster. Its EXTERNAL-IP matches the second endpoint above.
$ kubectl get svc -n istio-system istio-ingressgateway --context=${REMOTE_CLUSTER_CTX}
NAME                   TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)                                                      AGE
istio-ingressgateway   LoadBalancer   10.254.60.1   120.92.145.63   15021:31509/TCP,80:32473/TCP,443:30816/TCP,15443:31135/TCP   20m