(v2.3.0.0) Integrating with EKS
Prerequisites
The following tools are required to complete the integration setup:
If you don’t have an existing EKS cluster, you can use the following command to provision one that uses the eksctl default cluster parameters:
eksctl create cluster --name poccluster
EKS Nodegroup IAM
By default, the EKS node group should have the following AWS-managed IAM policies attached:
AmazonEC2ContainerRegistryReadOnly: This allows read-only access to Amazon EC2 Container Registry repositories.
AmazonEKS_CNI_Policy: This provides the Amazon VPC CNI add-on the permissions it requires to modify the IP address configuration on your EKS worker nodes.
AmazonEKSWorkerNodePolicy: This allows Amazon EKS worker nodes to connect to Amazon EKS clusters.
AmazonSSMManagedInstanceCore: This enables AWS Systems Manager core functionality.
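To confirm these policies are attached, you can list the managed policies on the node group's IAM role; the role name below is a placeholder for your own node role.
# List the AWS-managed policies attached to the node group's IAM role.
# Replace <node-instance-role-name> with the IAM role used by your node group.
aws iam list-attached-role-policies --role-name <node-instance-role-name>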
AWS IAM Authenticator
Apply the following changes to the EKS cluster’s aws-auth ConfigMap to ensure the dynamic X-Compute EKS nodes can join the cluster:
1. Edit the aws-auth ConfigMap in the kube-system namespace:
   kubectl edit configmap aws-auth -n kube-system
2. Insert the following groups into the mapRoles section and replace the role ARN values with the outputs generated at this prerequisite step:
   - groups:
       - system:masters
     rolearn: <Insert the Role ARN of your Worker IAM Role>
     username: admin
   - groups:
       - system:masters
     rolearn: <Insert the Role ARN of your Controller IAM Role>
     username: admin
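To verify the change, you can print the ConfigMap again and confirm that both role ARNs appear under mapRoles:
# Print the aws-auth ConfigMap and check the two new mapRoles entries.
kubectl get configmap aws-auth -n kube-system -o yaml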
AWS Node Termination Handler Configuration for Spot Instances
The AWS Node Termination Handler is a tool that monitors spot instance termination events in AWS. By default, when a spot interruption occurs, the handler drains the affected node and attempts to reschedule the pods on other machines. This behavior can result in all pods on the node being removed and the node eventually being terminated, even if the workload is migrated elsewhere.
To prevent this behavior, you can either:
Uninstall the AWS Node Termination Handler.
Modify its configuration to disable node draining on spot interruptions.
1. Identify the DaemonSet name. Find the DaemonSet associated with the AWS Node Termination Handler (see the command sketch after this list).
2. Edit the DaemonSet with kubectl (see the command sketch after this list).
3. Update the configuration. In the editor, search for the parameters ENABLE_SPOT_INTERRUPTION_DRAINING and ENABLE_REBALANCE_DRAINING, and set them to false.
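The exact resource names depend on how the handler was installed; the commands below are a minimal sketch that assumes a default Helm-based install named aws-node-termination-handler in the kube-system namespace.
# Step 1: locate the Node Termination Handler DaemonSet (name and namespace may differ).
kubectl get daemonsets --all-namespaces | grep -i termination

# Step 2: edit the DaemonSet found above.
kubectl edit daemonset aws-node-termination-handler -n kube-system

# Step 3: the two environment variables in the container spec should end up as:
#   - name: ENABLE_SPOT_INTERRUPTION_DRAINING
#     value: "false"
#   - name: ENABLE_REBALANCE_DRAINING
#     value: "false"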
Amazon VPC CNI
Infrastructure Optimizer supports the Amazon VPC CNI plugin v1.18.2-eksbuild.1 or newer.
Download and run this script to:
Configure the node affinity rules of the aws-node DaemonSet so that it does not run on x-compute nodes
Install and configure the exo-aws-node DaemonSet to run on x-compute nodes
If you are using a Mac, please install gnu-sed and replace sed with gsed in the above script.
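Assuming Homebrew is your package manager, one way to install it:
# Install GNU sed on macOS (provides the gsed binary).
brew install gnu-sed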
This script will restart the Amazon VPC CNI DaemonSet.
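For reference, the node-affinity change the script applies to the aws-node DaemonSet is conceptually similar to the sketch below; the label used here to identify x-compute nodes is an assumption and will differ in practice, so treat this as illustrative only.
# Illustrative only: keeps aws-node off nodes carrying a (hypothetical) x-compute label.
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: example.com/x-compute   # hypothetical label key
                    operator: DoesNotExist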
Amazon VPC CNI Plugin With IRSA
OPTIONAL - This section is required only if your cluster customized the IAM roles used by the Amazon VPC CNI plugin’s service account (IRSA). For more information about EKS IRSA, see the AWS documentation on IAM roles for service accounts.
Determine whether an IAM OpenID Connect (OIDC) provider is already associated with your EKS cluster:
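A common way to check, sketched here under the assumption that your cluster is named poccluster (as in the eksctl example above):
# Look up the cluster's OIDC issuer and extract its unique ID.
oidc_id=$(aws eks describe-cluster --name poccluster \
  --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)

# Check whether an IAM OIDC provider with that ID already exists.
aws iam list-open-id-connect-providers | grep $oidc_id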
If the final command returns a non-empty output, then your EKS cluster already has an IAM OIDC provider attached.
Otherwise, associate an IAM OIDC provider with the cluster using the next command:
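A standard way to do this with eksctl, again assuming the cluster name poccluster:
# Associate an IAM OIDC provider with the cluster.
eksctl utils associate-iam-oidc-provider --cluster poccluster --approve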
Run this command to save the inline IAM policy to a JSON file named cni_iam.json:
This user-defined policy ensures that the Amazon VPC CNI doesn’t unassign the IP address of your workloads running on Infrastructure Optimizer sandboxes by denying the ability to perform such unassignments.
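The exact policy document is provided with the product; purely as an illustration of the shape such a deny policy could take (the statement and the ec2:UnassignPrivateIpAddresses action below are assumptions, not the vendor's published policy):
# Illustrative sketch of a deny policy; use the policy document supplied with the product.
cat > cni_iam.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnassign",
      "Effect": "Deny",
      "Action": [
        "ec2:UnassignPrivateIpAddresses"
      ],
      "Resource": "*"
    }
  ]
}
EOF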
Use the following command to create the policy:
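A sketch using the AWS CLI; the policy name cni-ip-deny-policy is a placeholder, not a required name.
# Create the customer-managed policy from the JSON file above.
aws iam create-policy \
  --policy-name cni-ip-deny-policy \
  --policy-document file://cni_iam.json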
Then use eksctl to override the existing Amazon VPC CNI IRSA settings:
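One way to do this with eksctl, assuming the cluster name poccluster, the default aws-node service account in kube-system, and the placeholder policy name from the previous sketch:
# Recreate the aws-node IAM service account with both the managed CNI policy
# and the deny policy created above attached. Replace <account-id> with your AWS account ID.
eksctl create iamserviceaccount \
  --cluster poccluster \
  --namespace kube-system \
  --name aws-node \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy \
  --attach-policy-arn arn:aws:iam::<account-id>:policy/cni-ip-deny-policy \
  --override-existing-serviceaccounts \
  --approve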