This section will guide you through the process of submitting jobs to the EKS Cluster by modifying the workload configuration files.
Pre-Submission Requirements:
Submitting a Single Pod
Step 1: Modifying the Workload Pod YAML File
To ensure that your workload runs on Exostellar nodes provisioned by Exostellar Karpenter, and to prevent Karpenter from voluntarily disrupting your workload, set node affinity and the karpenter.sh/do-not-disrupt annotation in your workload YAML file. Below is an example of how to modify your YAML file to include these settings:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
  annotations:
    karpenter.sh/do-not-disrupt: "true"
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: karpenter.sh/nodepool
                operator: In
                values:
                  - pool-a
  containers:
    - name: nginx
      image: nginx:1.14.2
      ports:
        - containerPort: 80
      resources:
        requests:
          cpu: 1
```
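The nodepool value pool-a in the affinity rule must match an existing Exostellar Karpenter NodePool in your cluster. If you are unsure which NodePools are available, a quick way to list them is:

```bash
# List the Karpenter NodePools defined in the cluster
kubectl get nodepools
```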
Step 2: Adding Node Labels in Exostellar Karpenter NodePool (optional)
If your workload requires scheduling on nodes with specific labels, you must configure the Exostellar Karpenter NodePool so that nodes are created with those labels during autoscaling. To ensure that Exostellar Karpenter provisions nodes with the appropriate labels, update the NodePool configuration as follows:
```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: pool-a
spec:
  limits:
    cpu: "400"
  template:
    metadata:
      labels:
        jobtype: batch
        nodeFamily: c5
    spec:
      nodeClassRef:
        name: pool-a
      requirements:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
            - us-east-1a
      resources: {}
status: {}
```
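After editing the NodePool, apply the change and confirm that the labels appear in the template. The manifest file name below is only an example; use whichever file holds your NodePool definition:

```bash
# Apply the updated NodePool manifest (file name is illustrative)
kubectl apply -f pool-a-nodepool.yaml

# Verify the NodePool and inspect its template labels
kubectl get nodepool pool-a -o yaml
```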
Step 3: Submitting the Pod
After modifying the pod YAML file, submit the job to the EKS Cluster just like you would run a regular pod:
```bash
kubectl apply -f my-nginx.yaml
```
Post-Submission Results
Exostellar Karpenter will automatically bring up a node suitable for your request. To check that the new node has been added, run kubectl get nodes:
```
NAME                                            STATUS   ROLES    AGE   VERSION
ip-192-0-128-xxx.us-east-2.x-compute.internal   Ready    <none>   40s   v1.29.3-eks-ae9a62a
```
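To confirm that the pod actually landed on the newly provisioned node, the following checks (assuming the pod name my-nginx from the example above) can be useful:

```bash
# Show which node the pod was scheduled on
kubectl get pod my-nginx -o wide

# List only the nodes created for the NodePool referenced in the affinity rule
kubectl get nodes -l karpenter.sh/nodepool=pool-a
```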
Scaling your Workloads
Step 1: Modifying the Workload YAML File
Similar to the single-pod example above, here is an example Deployment YAML file:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  annotations:
    karpenter.sh/do-not-disrupt: "true"
spec:
  replicas: 100
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
      annotations:
        # Karpenter reads this annotation from Pods, so it must also be set
        # on the Pod template for the disruption protection to take effect.
        karpenter.sh/do-not-disrupt: "true"
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: karpenter.sh/nodepool
                    operator: In
                    values:
                      - pool-a
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 1
```
Step 2: Submitting the Workload
After modifying the YAML file, submit the workload to the EKS Cluster:
```bash
kubectl apply -f deployment.yaml
```
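If you later need to change the replica count without editing the manifest, kubectl scale is one option (the Deployment name below assumes the my-nginx example):

```bash
# Scale the example Deployment to a new replica count
kubectl scale deployment/my-nginx --replicas=200
```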
Post-Submission Results
Exostellar Karpenter will automatically bring up all nodes required for your request, as in the single-pod example above. To check the deployment status, commands such as the sketch below can be used.
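A minimal sketch, assuming the Deployment name my-nginx from the example above; exact output will vary with your cluster:

```bash
# Watch the rollout progress across the newly provisioned nodes
kubectl rollout status deployment/my-nginx

# Check how many replicas are ready
kubectl get deployment my-nginx

# List the nodes Exostellar Karpenter has brought up for the pool
kubectl get nodes -l karpenter.sh/nodepool=pool-a
```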