This section will guide you through the process of submitting jobs to the EKS Cluster by modifying the workload configuration files.
Pre-Submission Requirements:
Submitting a Single Pod
Step 1: Modifying the Workload Pod YAML File
To ensure that your workload runs on Exostellar nodes scheduled by the Exostellar Karpenter, and to block Karpenter from voluntarily disrupting your workload, you need to set affinity settings and annotations in your workload YAML file. Below is an example of how to modify your YAML file to include these settings:
Code Block:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  annotations:
    karpenter.sh/do-not-disrupt: "true"
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      tolerations:
      - key: "exokarpenter.sh/x-compute"
        operator: "Exists"
        effect: "NoSchedule"
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: exokarpenter.sh/nodepool
                operator: In
                values:
                - pool-a
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 1
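You can optionally validate the modified manifest before submitting it. The check below is not part of the original workflow; it assumes the file is saved as my-nginx.yaml, as in Step 3:
Code Block:
kubectl apply --dry-run=client -f my-nginx.yaml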
Step 2: Adding Node Labels in Exostellar Karpenter NodePool (optional)
If your workload requires scheduling on nodes with specific labels, you need to configure the Exostellar Karpenter NodePool so that nodes are created with those labels during autoscaling. To ensure that Exostellar Karpenter provisions nodes with the appropriate labels, update the NodePool configuration with the following command:
Code Block:
kubectl edit exonodepool pool-a
An example NodePool configuration shown below creates nodes that have a jobtype of batch and a nodeFamily of c5:
Code Block:
apiVersion: karpenter.sh/v1beta1
kind: ExoNodePool
metadata:
  name: pool-a
spec:
  limits:
    cpu: "400"
  template:
    metadata:
      labels:
        jobtype: batch
        nodeFamily: c5
    spec:
      nodeClassRef:
        name: pool-a
      requirements:
      - key: topology.kubernetes.io/zone
        operator: In
        values:
        - us-east-1a
      resources: {}
status: {}
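With the NodePool labeled this way, a workload can then target those nodes. The snippet below is a sketch of the pod-template fragment you could add to a Deployment such as the one in Step 1; the jobtype and nodeFamily values are taken from the example above:
Code Block:
# Illustrative pod template spec fragment: schedule only onto nodes
# carrying the labels created by the NodePool above.
spec:
  nodeSelector:
    jobtype: batch
    nodeFamily: c5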
Step 3: Submitting the Pod
After modifying the pod YAML file, submit the job to the EKS Cluster just like you would run a regular pod:
Code Block:
kubectl apply -f my-nginx.yaml
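While the new node is being provisioned, you can watch the pod go from Pending to Running; a minimal check, assuming the app: nginx label from the Step 1 example:
Code Block:
kubectl get pods -l app=nginx -o wide -w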
Post-Submission Results
Exostellar Karpenter will automatically bring up a node that is suitable for your request. To check the new node that has been added:
Code Block:
kubectl get nodes -A
The created node has x-compute as part of its name.
Code Block:
NAME STATUS ROLES AGE VERSION
ip-192-0-128-xxx.us-east-2.x-compute.internal Ready <none> 40s v1.29.3-eks-ae9a62a
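If you added labels to the NodePool in Step 2, you can also confirm that the new node carries them; a sketch using the jobtype and nodeFamily values from the earlier example:
Code Block:
kubectl get nodes -l jobtype=batch,nodeFamily=c5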
Scaling your Workloads
Step 1: Modifying the Workload YAML File
Similar to the above, here is an example workload YAML file:
Code Block:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  annotations:
    karpenter.sh/do-not-disrupt: "true"
spec:
  replicas: 100
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      tolerations:
      - key: "exokarpenter.sh/x-compute"
        operator: "Exists"
        effect: "NoSchedule"
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: exokarpenter.sh/nodepool
                operator: In
                values:
                - pool-a
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 1
Step 2: Submitting the Workload
After modifying the YAML file, submit the workload to the EKS Cluster:
Code Block:
kubectl apply -f deployment.yaml
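To follow the rollout while Exostellar Karpenter brings up nodes, you can use the standard rollout command; a minimal sketch, assuming the Deployment name my-nginx from the example above:
Code Block:
kubectl rollout status deployment/my-nginx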
Post-Submission Results
Exostellar Karpenter will automatically bring up all nodes required for your request as above. To check the deployment status:
Code Block:
kubectl get deployment -A
You will see:
Code Block:
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
default my-nginx 100/100 100 100 6m3s
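To see how many x-compute nodes were created to back the 100 replicas, you can count them by name; an illustrative one-liner, not part of the original workflow:
Code Block:
kubectl get nodes --no-headers | grep -c x-compute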
To check all created pods:
Code Block:
kubectl get pod -A -o wide
Results (only showing the top 10 lines):
Code Block:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default my-nginx-7587f74b9b-2jsjf 1/1 Running 0 63m 192.0.131.16 ip-192-0-132-33.us-east-2.x-compute.internal <none> <none>
default my-nginx-7587f74b9b-2vkg6 1/1 Running 0 63m 192.0.128.55 ip-192-0-133-107.us-east-2.x-compute.internal <none> <none>
default my-nginx-7587f74b9b-4d2pp 1/1 Running 0 63m 192.0.128.48 ip-192-0-133-107.us-east-2.x-compute.internal <none> <none>
default my-nginx-7587f74b9b-4g4p5 1/1 Running 0 63m 192.0.141.70 ip-192-0-129-67.us-east-2.x-compute.internal <none> <none>
default my-nginx-7587f74b9b-4n4bv 1/1 Running 0 63m 192.0.130.213 ip-192-0-142-59.us-east-2.x-compute.internal <none> <none>
default my-nginx-7587f74b9b-5hxm8 1/1 Running 0 63m 192.0.140.133 ip-192-0-134-198.us-east-2.x-compute.internal <none> <none>
default my-nginx-7587f74b9b-5jzlk 1/1 Running 0 63m 192.0.131.212 ip-192-0-132-33.us-east-2.x-compute.internal <none> <none>
default my-nginx-7587f74b9b-5mxqt 1/1 Running 0 63m 192.0.130.208 ip-192-0-142-59.us-east-2.x-compute.internal <none> <none>
default my-nginx-7587f74b9b-5nv76 1/1 Running 0 63m 192.0.131.250 ip-192-0-138-106.us-east-2.x-compute.internal <none> <none>
default my-nginx-7587f74b9b-5zt54 1/1 Running 0 63m 192.0.131.218 ip-192-0-132-33.us-east-2.x-compute.internal <none> <none>
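Once the Deployment is running, you can resize it without editing the YAML file; a minimal example using the standard scale command, assuming the Deployment name my-nginx from the example above:
Code Block:
kubectl scale deployment/my-nginx --replicas=200
Exostellar Karpenter should then provision additional x-compute nodes, or reclaim idle ones, to match the new replica count.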