
...

Prerequisites

If you don’t have an existing EKS cluster, you can use the following command to provision one that uses the eksctl default cluster parameters:

Code Block
languagebash
eksctl create cluster --name poccluster
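
Once eksctl finishes provisioning, you can confirm that the cluster exists and that its worker nodes are ready (eksctl writes the kubeconfig entry for you):

Code Block
languagebash
# List the cluster created above
eksctl get cluster --name poccluster

# Confirm the worker nodes have registered and are Ready
kubectl get nodes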

The following tools are required to complete the integration setup:

...

kubectl: Version 1.28+

...

eksctl

...

AWS CLI
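
To confirm that these tools are installed and that kubectl meets the version requirement, you can run:

Code Block
languagebash
kubectl version --client
eksctl version
aws --version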

...

EKS Nodegroup IAM

By default, the EKS node group’s IAM role should have the following AWS managed policies attached:

  • AmazonEC2ContainerRegistryReadOnly: Allows read-only access to Amazon Elastic Container Registry (Amazon ECR) repositories.

  • AmazonEKS_CNI_Policy: Provides the Amazon VPC CNI add-on the permissions it requires to modify the IP address configuration on your EKS worker nodes.

  • AmazonEKSWorkerNodePolicy: Allows Amazon EKS worker nodes to connect to Amazon EKS clusters.

  • AmazonSSMManagedInstanceCore: Enables AWS Systems Manager service core functionality.
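
If any of these policies are missing, they can be attached with the AWS CLI. A minimal sketch, assuming a node group IAM role named my-eks-node-role (a hypothetical name; substitute your own):

Code Block
languagebash
# Hypothetical role name; replace with your node group's IAM role
NODE_ROLE=my-eks-node-role

for policy in AmazonEC2ContainerRegistryReadOnly AmazonEKS_CNI_Policy \
              AmazonEKSWorkerNodePolicy AmazonSSMManagedInstanceCore; do
  aws iam attach-role-policy \
    --role-name "$NODE_ROLE" \
    --policy-arn "arn:aws:iam::aws:policy/$policy"
done

# Verify the attachments
aws iam list-attached-role-policies --role-name "$NODE_ROLE"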

AWS IAM Authenticator

...

Apply the following changes to the EKS cluster’s aws-auth ConfigMap to ensure the dynamic X-Compute EKS nodes can join the cluster:

  1. Edit the aws-auth ConfigMap in the kube-system namespace:

    Code Block
    languagebash
    kubectl edit configmap aws-auth -n kube-system
  2. Insert the following groups into the mapRoles section, replacing the role ARN placeholders with the outputs generated at the prerequisite step.

    Code Block
    languageyaml
        - groups:
          - system:masters
          rolearn: <Insert the Role ARN of your Worker IAM Role>
          username: admin
        - groups:
          - system:masters
          rolearn: <Insert the Role ARN of your Controller IAM Role>
          username: admin
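
If you prefer not to edit the ConfigMap by hand, eksctl can create equivalent mappings. A sketch, assuming a cluster named poccluster; run it once per role ARN:

Code Block
languagebash
eksctl create iamidentitymapping \
  --cluster poccluster \
  --arn <Insert the Role ARN of your Worker or Controller IAM Role> \
  --group system:masters \
  --username admin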

AWS Node Termination Handler Configuration for Spot Instances

The AWS Node Termination Handler is a tool that monitors spot instance termination events in AWS. By default, when a spot interruption occurs, the handler drains the affected node and attempts to reschedule the pods on other machines. This behavior can result in all pods on the node being removed and the node eventually being terminated, even if the workload is migrated elsewhere.

To prevent this behavior, you can either:

  1. Uninstall the AWS Node Termination Handler.

  2. Modify its configuration to disable node draining on spot interruptions, as shown in the steps below.

  1. Identify the DaemonSet name.
    Run the following command to find the DaemonSet associated with the AWS Node Termination Handler:

    Code Block
    languagebash
    kubectl -n kube-system get daemonset | grep aws-node-termination-handler

    Example output:

    Code Block
    languagebash
    aws-node-termination-handler-example   4         4         4       4            4           kubernetes.io/os=linux   11d
  2. Edit the DaemonSet.
    Use the following command to edit the DaemonSet returned in the previous step:

    Code Block
    languagebash
    kubectl -n kube-system edit daemonset aws-node-termination-handler-example
  3. Update the configuration.
    In the editor, search for the parameters ENABLE_SPOT_INTERRUPTION_DRAINING and ENABLE_REBALANCE_DRAINING, and set them both to false:

    Code Block
    languageyaml
    - name: ENABLE_SPOT_INTERRUPTION_DRAINING
      value: "false"
    - name: ENABLE_REBALANCE_DRAINING
      value: "false"

Amazon VPC CNI

Infrastructure Optimizer supports the Amazon VPC CNI plugin v1.18.2-eksbuild.1 or newer.
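
To check which plugin version is currently running, you can inspect the image tag of the aws-node DaemonSet; the tag contains the plugin version (for example, amazon-k8s-cni:v1.18.2-eksbuild.1):

Code Block
languagebash
kubectl -n kube-system get daemonset aws-node \
  -o jsonpath='{.spec.template.spec.containers[0].image}'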

...

Info

If you are using a Mac, please install gnu-sed and replace sed with gsed in the above script.

Code Block
languagebash
brew install gnu-sed

This script will restart the Amazon VPC CNI DaemonSet.
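
You can wait for the restarted DaemonSet to become ready again with:

Code Block
languagebash
kubectl -n kube-system rollout status daemonset aws-node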

Amazon VPC CNI Plugin With IRSA

Info

OPTIONAL - This section is required only if your cluster uses customized IAM roles for the Amazon VPC CNI plugin’s service account (IAM Roles for Service Accounts, IRSA). For more information about EKS IRSA, see the documentation here.
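
To check whether your cluster uses IRSA for the plugin, look for an eks.amazonaws.com/role-arn annotation on the aws-node service account; if a role ARN is printed, this section applies to you:

Code Block
languagebash
kubectl -n kube-system describe serviceaccount aws-node | grep role-arn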

...