This section guides you through submitting jobs to the EKS Cluster by modifying the workload Pod configuration.
Pre-Submission Requirements:
Step 1: Modifying the Workload Pod YAML File
To ensure that your workload runs on Exostellar nodes with the desired attributes and is scheduled by the Exostellar Karpenter autoscaler, you need to set node affinity in your workload YAML file. Below is an example of how to modify your YAML file to include these settings:
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
  annotations:
    karpenter.sh/do-not-disrupt: "true"
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: karpenter.sh/nodepool
            operator: In
            values:
            - pool-a
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
    resources:
      requests:
        cpu: "1"
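If you are unsure which NodePool names are available to reference in the affinity values, they can be listed first (a quick check, assuming the Exostellar Karpenter NodePool CRD is installed in your cluster):

kubectl get nodepools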
Step 2: Adding Node Labels in Exostellar Karpenter NodePool
If your workload requires scheduling on nodes with specific labels, you must configure the Exostellar Karpenter NodePool so that nodes are created with those labels during autoscaling. To ensure that Exostellar Karpenter provisions nodes with the appropriate labels, update the NodePool configuration with the following command:
kubectl edit nodepool pool-a
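This opens the NodePool in your default editor. As a non-interactive alternative, a merge patch can add the same labels used in the example below; this is a sketch, and the label keys and values should be adjusted for your workload:

kubectl patch nodepool pool-a --type merge \
  -p '{"spec":{"template":{"metadata":{"labels":{"jobtype":"batch","nodeFamily":"c5"}}}}}'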
The example NodePool configuration below creates nodes with a jobtype of batch and a node family of c5:
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  creationTimestamp: "2024-06-02T20:06:58Z"
  name: pool-a
spec:
  limits:
    cpu: "400"
  template:
    metadata:
      labels:
        jobtype: batch
        nodeFamily: c5
    spec:
      nodeClassRef:
        name: pool-a
      requirements:
      - key: topology.kubernetes.io/zone
        operator: In
        values:
        - us-east-1a
      resources: {}
status: {}
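Once the NodePool applies these labels, the workload Pod can target them. A minimal sketch of the relevant part of the Pod spec, assuming the jobtype and nodeFamily labels from the example above:

spec:
  nodeSelector:
    jobtype: batch
    nodeFamily: c5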
Step 3: Submitting the Workload
After modifying the workload YAML file, submit the job to the EKS Cluster just like you would run a regular pod:
kubectl apply -f workload-pod.yaml
To monitor the job and check for its completion, use the following command:
kubectl get pods
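To see which node the Pod landed on, and to confirm that a node from the expected NodePool was provisioned (Karpenter labels the nodes it creates with karpenter.sh/nodepool; pool-a follows the earlier example):

kubectl get pod my-workload-pod -o wide
kubectl get nodes -l karpenter.sh/nodepool=pool-a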
To view the scheduling events and further details for the workload Pod:
kubectl describe pod my-workload-pod
To check the logs for the workload Pod:
kubectl logs my-workload-pod
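For long-running workloads, the logs can be streamed instead:

kubectl logs -f my-workload-pod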