...
Prerequisites
The following tools are required to complete the integration setup:

- kubectl - https://kubernetes.io/docs/tasks/tools/
- eksctl - https://eksctl.io/
- AWS CLI - https://aws.amazon.com/cli/
- Helm - https://helm.sh

If you don’t have an existing EKS cluster, you can use the following command to provision one with the eksctl default cluster parameters:

```bash
eksctl create cluster --name <cluster_name>
```
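If you provisioned a new cluster, a quick sanity check is to point kubectl at it and confirm the worker nodes are Ready. This is a minimal sketch: eksctl normally writes the kubeconfig context for you, so the update-kubeconfig call is only needed if the context is missing.

```bash
# Add/update the kubeconfig entry for the cluster (only needed if eksctl
# did not already create the context), then confirm the nodes are Ready.
aws eks update-kubeconfig --name <cluster_name>
kubectl get nodes
```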
EKS Nodegroup IAM
By default, the EKS node group role should have the following AWS-managed IAM policies attached:

- AmazonEC2ContainerRegistryReadOnly: Allows read-only access to Amazon EC2 Container Registry repositories.
- AmazonEKS_CNI_Policy: Provides the Amazon VPC CNI add-on the permissions it requires to modify the IP address configuration on your EKS worker nodes.
- AmazonEKSWorkerNodePolicy: Allows Amazon EKS worker nodes to connect to Amazon EKS clusters.
- AmazonSSMManagedInstanceCore: Enables AWS Systems Manager service core functionality.
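One way to confirm these policies are attached is to look up the node group's IAM role and list its attached policies. The sketch below assumes a managed node group; <cluster_name> and <nodegroup_name> are placeholders for your own values.

```bash
# Look up the IAM role used by the managed node group...
node_role_arn=$(aws eks describe-nodegroup \
  --cluster-name <cluster_name> \
  --nodegroup-name <nodegroup_name> \
  --query "nodegroup.nodeRole" --output text)

# ...and list the AWS-managed policies attached to it.
aws iam list-attached-role-policies \
  --role-name "${node_role_arn##*/}" --output table
```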
AWS IAM Authenticator
Apply the following changes to the EKS cluster’s aws-auth ConfigMap to ensure the dynamic X-Compute EKS nodes can join the EKS cluster:

1. Edit the aws-auth ConfigMap in the kube-system namespace:

```bash
kubectl edit configmap aws-auth -n kube-system
```

2. Insert the following groups into the mapRoles section, replacing the role ARN values with the outputs generated at the prerequisite step:

```yaml
- groups:
    - system:masters
  rolearn: <Insert the Role ARN of your Worker IAM Role>
  username: admin
- groups:
    - system:masters
  rolearn: <Insert the Role ARN of your Controller IAM Role>
  username: admin
```
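To confirm the edit was saved, print the ConfigMap and check that both role mappings appear under mapRoles:

```bash
# Print the aws-auth ConfigMap and verify the new mapRoles entries are present
kubectl -n kube-system get configmap aws-auth -o yaml
```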
AWS Node Termination Handler Configuration for Spot Instances
The AWS Node Termination Handler is a tool that monitors spot instance termination events in AWS. By default, when a spot interruption occurs, the handler drains the affected node and attempts to reschedule the pods on other machines. This behavior can result in all pods on the node being removed and the node eventually being terminated, even if the workload is migrated elsewhere.
To prevent this behavior, you can either:

- Uninstall the AWS Node Termination Handler, or
- Modify its configuration to disable node draining on spot interruptions.

To modify the configuration, follow these steps:
1. Identify the DaemonSet name. Run the following command to find the DaemonSet associated with the AWS Node Termination Handler:

```bash
kubectl -n kube-system get daemonset | grep aws-node-termination-handler
```

Example output:

```
aws-node-termination-handler-exodemo 4 4 4 4 4 kubernetes.io/os=linux 11d
```
2. Edit the DaemonSet. Use the following command, substituting the DaemonSet name returned in the previous step:

```bash
kubectl -n kube-system edit daemonset aws-node-termination-handler-exodemo
```
3. Update the configuration. In the editor, search for the ENABLE_SPOT_INTERRUPTION_DRAINING and ENABLE_REBALANCE_DRAINING environment variables and set them both to false:

```yaml
- name: ENABLE_SPOT_INTERRUPTION_DRAINING
  value: "false"
- name: ENABLE_REBALANCE_DRAINING
  value: "false"
```
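If you prefer not to edit the DaemonSet interactively, a non-interactive alternative is to set both variables with kubectl set env. This is a sketch that assumes the example DaemonSet name shown in step 1; substitute your own.

```bash
# Set both draining flags directly on the DaemonSet without opening an editor.
# Replace the DaemonSet name with the one found in step 1.
kubectl -n kube-system set env daemonset/aws-node-termination-handler-exodemo \
  ENABLE_SPOT_INTERRUPTION_DRAINING=false \
  ENABLE_REBALANCE_DRAINING=false
```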
Amazon VPC CNI
Infrastructure Optimizer supports the Amazon VPC CNI plugin v1.18.2-eksbuild.1 or newer.
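To check which version your cluster is currently running, you can inspect the image tag of the aws-node DaemonSet, or, if the CNI is installed as an EKS managed add-on, query it with the AWS CLI. Both commands below are a sketch; <cluster_name> is a placeholder.

```bash
# The image tag of the aws-node DaemonSet contains the VPC CNI version,
# e.g. amazon-k8s-cni:v1.18.2-eksbuild.1
kubectl -n kube-system describe daemonset aws-node | grep amazon-k8s-cni

# If the CNI is installed as an EKS managed add-on:
aws eks describe-addon --cluster-name <cluster_name> --addon-name vpc-cni \
  --query "addon.addonVersion" --output text
```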
Download and run the attached script, which will:

- Configure the node affinity rules of the aws-node DaemonSet so that it does not run on x-compute nodes
- Install and configure the exo-aws-node DaemonSet to run on x-compute nodes
Info: If you are using a Mac, please install ... This script will restart the Amazon VPC CNI DaemonSet.
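After the script completes, a simple way to confirm the result (assuming the DaemonSet names above) is to check that both DaemonSets exist and are scheduled on the expected nodes:

```bash
# Both DaemonSets should be listed; aws-node should no longer schedule onto
# x-compute nodes, while exo-aws-node should run only on them.
kubectl -n kube-system get daemonset aws-node exo-aws-node
```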
...
...
Amazon VPC CNI Plugin With IRSA
Info: OPTIONAL - This section is required only if your cluster has customized the IAM role used by the Amazon VPC CNI plugin’s service account (IRSA). For more information about EKS IRSA, see the AWS documentation.
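If you are unsure whether your cluster uses a customized IRSA role for the CNI, one way to check (a sketch, not the only method) is to look for an eks.amazonaws.com/role-arn annotation on the aws-node service account:

```bash
# A non-empty role-arn annotation indicates the aws-node service account
# is using IRSA; its value shows which IAM role is attached.
kubectl -n kube-system get serviceaccount aws-node -o yaml | grep role-arn
```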
Determine whether an IAM OpenID Connect (OIDC) provider is already associated with your EKS cluster:
```bash
oidc_id=$(aws eks describe-cluster --name <cluster_name> --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
aws iam list-open-id-connect-providers | grep $oidc_id | cut -d "/" -f4
```
If the final command returns a non-empty output, then your EKS cluster already has an IAM OIDC provider attached.
...
```bash
eksctl utils associate-iam-oidc-provider --cluster <cluster_name> --approve
```
Run this command to save the inline IAM policy to a JSON file named cni_iam.json:
...
This user-defined policy ensures that the Amazon VPC CNI doesn’t unassign the IP addresses of your workloads running on Infrastructure Optimizer sandboxes by denying the ability to perform such unassignments.
...
```bash
aws iam create-policy --policy-name cni_iam_policy --policy-document file://cni_iam.json
```
Then use eksctl to override the existing Amazon VPC CNI IRSA settings. First, capture the ARN of the policy you just created:

```bash
new_policy_arn=$(aws iam list-policies --query 'Policies[?PolicyName==`cni_iam_policy`].[Arn]' --scope Local --no-cli-pager --output text)
```
...
```bash
eksctl update iamserviceaccount \
  --name aws-node \
  --namespace kube-system \
  --cluster <cluster_name> \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy \
  --attach-policy-arn "${new_policy_arn}" \
  --approve
```
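To confirm the override took effect, you can check that the aws-node service account annotation points at the updated role. Restarting the CNI pods afterwards is an optional step we suggest here so they pick up the new credentials promptly; it is not described in the steps above.

```bash
# Verify the IRSA annotation on the aws-node service account
kubectl -n kube-system describe serviceaccount aws-node | grep role-arn

# Optionally restart the CNI pods so they pick up the updated role
kubectl -n kube-system rollout restart daemonset aws-node
```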