
Prerequisites

The following tools are required to complete the integration setup:

If you don’t have an existing EKS cluster, you can use the following command to provision one that uses the eksctl default cluster parameters:

Code Block
languagebash
eksctl create cluster --name <cluster_name>
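
eksctl also accepts flags such as --region, --nodes, and --node-type if you need more control than the defaults. The sketch below only prints the command rather than executing it, since provisioning a cluster requires AWS credentials and creates billable resources; all values are placeholders:

```shell
# Placeholder values -- adjust for your environment.
CLUSTER_NAME="example-cluster"
REGION="us-west-2"

# Printed rather than run: this provisions real AWS infrastructure.
CMD="eksctl create cluster --name ${CLUSTER_NAME} --region ${REGION} --nodes 2 --node-type m5.large"
echo "$CMD"
```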


EKS Nodegroup IAM

By default, the EKS node group should have the following AWS-managed IAM policies attached:

  • AmazonEC2ContainerRegistryReadOnly : This allows read-only access to Amazon EC2 Container Registry repositories

  • AmazonEKS_CNI_Policy : This provides the Amazon VPC CNI Add-on permissions it requires to modify the IP address configuration on your EKS worker nodes

  • AmazonEKSWorkerNodePolicy : This allows Amazon EKS worker nodes to connect to Amazon EKS Clusters

  • AmazonSSMManagedInstanceCore : This is to enable AWS Systems Manager service core functionality
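
A quick way to confirm these policies are present is to list what is attached to the node group's instance role. The sketch below only prints the aws iam commands to run (the role name is a hypothetical placeholder; substitute your node group's actual instance role):

```shell
# Hypothetical role name -- replace with your node group's instance role.
NODE_ROLE="eksctl-example-nodegroup-NodeInstanceRole"

# Emit one verification command per required AWS-managed policy.
for policy in AmazonEC2ContainerRegistryReadOnly \
              AmazonEKS_CNI_Policy \
              AmazonEKSWorkerNodePolicy \
              AmazonSSMManagedInstanceCore; do
  echo "aws iam list-attached-role-policies --role-name ${NODE_ROLE} --query \"AttachedPolicies[?PolicyName=='${policy}']\""
done > node-role-checks.sh

cat node-role-checks.sh
```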

AWS IAM Authenticator

Apply the following changes to the EKS cluster’s aws-auth ConfigMap to ensure the dynamic X-Compute EKS nodes can join the EKS cluster:

  1. Edit the aws-auth ConfigMap in the kube-system namespace:

    Code Block
    languagebash
    kubectl edit configmap aws-auth -n kube-system
  2. Insert the following groups into the mapRoles section, replacing the role ARN values with the outputs generated at the prerequisite step.

    Code Block
    languageyaml
        - groups:
          - system:masters
          rolearn: <Insert the Role ARN of your Worker IAM Role>
          username: admin
        - groups:
          - system:masters
          rolearn: <Insert the Role ARN of your Controller IAM Role>
          username: admin
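
To reduce copy-paste errors in step 2, you can render the two mapRoles entries from shell variables first and then paste the result into the ConfigMap. The ARNs below are illustrative placeholders; use the outputs from the prerequisite step:

```shell
# Illustrative placeholder ARNs -- substitute the outputs from the prerequisite step.
WORKER_ROLE_ARN="arn:aws:iam::111122223333:role/example-worker-role"
CONTROLLER_ROLE_ARN="arn:aws:iam::111122223333:role/example-controller-role"

# Render the mapRoles entries, indented to match the aws-auth ConfigMap.
cat > maproles-snippet.yaml <<EOF
    - groups:
      - system:masters
      rolearn: ${WORKER_ROLE_ARN}
      username: admin
    - groups:
      - system:masters
      rolearn: ${CONTROLLER_ROLE_ARN}
      username: admin
EOF

cat maproles-snippet.yaml
```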

AWS Node Termination Handler Configuration for Spot Instances

The AWS Node Termination Handler is a tool that monitors spot instance termination events in AWS. By default, when a spot interruption occurs, the handler drains the affected node and attempts to reschedule the pods on other machines. This behavior can result in all pods on the node being removed and the node eventually being terminated, even if the workload is migrated elsewhere.

To prevent this behavior, you can either:

  1. Uninstall the AWS Node Termination Handler.

  2. Modify its configuration to disable node draining on spot interruptions.

  • Identify the DaemonSet name.
    Run the following command to find the DaemonSet associated with the AWS Node Termination Handler:

    Code Block
    languagebash
    kubectl -n kube-system get daemonset | grep aws-node-termination-handler

    Example output:

    Code Block
    languagebash
    aws-node-termination-handler-example   4         4         4       4            4           kubernetes.io/os=linux   11d
  • Edit the DaemonSet.
    Use the following command to edit the DaemonSet:

    Code Block
    languagebash
    kubectl -n kube-system edit daemonset aws-node-termination-handler-example
  • Update the configuration.
    In the editor, search for the parameters ENABLE_SPOT_INTERRUPTION_DRAINING and ENABLE_REBALANCE_DRAINING, and set them to false:

    Code Block
    languageyaml
    - name: ENABLE_SPOT_INTERRUPTION_DRAINING
      value: "false"
    - name: ENABLE_REBALANCE_DRAINING
      value: "false"
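
If you prefer a non-interactive change over kubectl edit, the same settings can be applied with kubectl patch. This is a sketch: the DaemonSet and container names are assumptions, so verify yours first (for example with kubectl -n kube-system describe daemonset):

```shell
# Assumed names -- confirm both against your cluster before running.
DS_NAME="aws-node-termination-handler-example"
CONTAINER="aws-node-termination-handler"

# Strategic-merge patch that sets both draining flags to "false".
PATCH=$(cat <<EOF
{"spec":{"template":{"spec":{"containers":[{"name":"${CONTAINER}","env":[{"name":"ENABLE_SPOT_INTERRUPTION_DRAINING","value":"false"},{"name":"ENABLE_REBALANCE_DRAINING","value":"false"}]}]}}}}
EOF
)

# Printed rather than executed here; run it against your cluster.
echo "kubectl -n kube-system patch daemonset ${DS_NAME} --patch '${PATCH}'"
```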

Amazon VPC CNI

Infrastructure Optimizer supports the Amazon VPC CNI plugin v1.18.2-eksbuild.1 or newer.

Download and run this

View file
nameconfigure-aws-nodes.sh
script to:

  • Configure the node affinity rules of the aws-node DaemonSet to not run on x-compute nodes

  • Install and configure the exo-aws-node DaemonSet to run on x-compute nodes

Info

If you are using a Mac, please install gnu-sed and replace sed with gsed in the above script.

Code Block
brew install gnu-sed
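
Because BSD sed (the macOS default) and GNU sed differ in behavior, it can help to check which one is on your PATH before running the script. GNU sed supports --version; BSD sed does not:

```shell
# GNU sed accepts --version; BSD/macOS sed exits with an error instead.
if sed --version >/dev/null 2>&1; then
  SED_FLAVOR="GNU"
  echo "GNU sed detected; the script's sed calls should work as-is."
else
  SED_FLAVOR="BSD"
  echo "BSD sed detected; install gnu-sed and use gsed in the script."
fi
```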

This script will restart the Amazon VPC CNI DaemonSet.

Amazon VPC CNI Plugin With IRSA

Info

OPTIONAL - This section is required only if your cluster customized the IAM roles used by the Amazon VPC CNI plugin’s service account (IRSA). For more information about EKS IRSA, see the AWS documentation.

Determine whether an IAM OpenID Connect (OIDC) provider is already associated with your EKS cluster:

Code Block
languagebash
oidc_id=$(aws eks describe-cluster --name <cluster_name> --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5) && aws iam list-open-id-connect-providers | grep $oidc_id | cut -d "/" -f4

If the final command returns a non-empty output, your EKS cluster already has an IAM OIDC provider attached.
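
To see what that pipeline extracts, here is the same cut applied to a made-up issuer URL of the shape EKS returns (the ID value is fabricated for illustration):

```shell
# Fabricated issuer URL with the same shape as cluster.identity.oidc.issuer.
issuer="https://oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B7"

# Field 5 of the '/'-delimited URL is the OIDC provider ID.
oidc_id=$(echo "$issuer" | cut -d '/' -f 5)
echo "$oidc_id"  # prints EXAMPLED539D4633E53DE1B7
```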

If no IAM OIDC provider is attached, associate one with your cluster:

Code Block
languagebash
eksctl utils associate-iam-oidc-provider --cluster <cluster_name> --approve

Run this command to save the inline IAM policy to a JSON file named cni_iam.json:

Code Block
languagebash
cat > cni_iam.json <<EOT 
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "ec2:UnassignPrivateIpAddresses",
      "Resource": "*"
    }
  ]
}
EOT

This user-defined policy ensures that the Amazon VPC CNI doesn’t unassign the IP addresses of your workloads running on Infrastructure Optimizer sandboxes, by denying the ability to perform such unassignments.
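
Before creating the policy, you can sanity-check the file. This sketch re-creates cni_iam.json so it is self-contained, then validates it with python3 (assumed to be available):

```shell
# Re-create the policy document (identical to the step above).
cat > cni_iam.json <<EOT
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "ec2:UnassignPrivateIpAddresses",
      "Resource": "*"
    }
  ]
}
EOT

# python3 -m json.tool exits non-zero on malformed JSON.
python3 -m json.tool cni_iam.json >/dev/null && echo "cni_iam.json is valid JSON"
```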

Use the following command to create the policy:

Code Block
languagebash
aws iam create-policy --policy-name cni_iam_policy --policy-document file://cni_iam.json

Then use eksctl to override the existing Amazon VPC CNI IRSA settings:

Code Block
languagebash
new_policy_arn=$(aws iam list-policies --query 'Policies[?PolicyName==`cni_iam_policy`].[Arn]' --scope Local --no-cli-pager --output text)

eksctl update iamserviceaccount \
  --name aws-node \
  --namespace kube-system \
  --cluster <cluster_name> \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy \
  --attach-policy-arn "${new_policy_arn}" \
  --approve