The following manual steps for EAR will be replaced with a simplified workflow for command-line users; alternatively, the Management Console (Web UI) will be able to replace most of these steps as well.
Connect to Your LSF Head Node
To integrate Infrastructure Optimizer with your LSF cluster, the configuration must be updated and assets added to perform the integration tasks. Root or sudo access on the LSF Master node is required to complete these steps.
Get a shell on the head node and navigate to the resource connector directory under LSF_TOP:
cd $LSF_TOP/conf/resource_connector
Make subdirectories here:
mkdir -p exostellar/json exostellar/conf exostellar/scripts
cd exostellar/json
Pull Down the Default LSF Environment Assets as a JSON Payload:
The jq and curl packages are required:
yum install jq curl
CentOS EoL: You may need to ensure yum can still function after Red Hat's recent End of Life for CentOS 7. The following command is an example mitigation for an internet-dependent yum repository:
sed -i -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*.repo
yum clean all
Pull down the default assets from the MGMT_SERVER for customization:
curl -X GET http://${MGMT_SERVER_IP}:5000/v1/env/lsf | jq > default-lsf-env.json
The asset will look like:
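The downloaded asset is not reproduced here. The following is an illustrative sketch only, assuming a single default pool; the grouping key ("Pools") and all values shown are placeholders, and the exact fields and line positions in your default-lsf-env.json may differ:
{
  "EnvName": "lsf",
  "Pools": [
    {
      "PoolName": "xio",
      "Priority": 10,
      "PoolSize": 10,
      "ProfileName": "az1",
      "CPUs": 4,
      "ImageName": "xio-compute-image",
      "MaxMemory": 16384,
      "MinMemory": 0,
      "VolumeSize": 0,
      "UserData": "<Base64EncodedUserData>"
    }
  ]
}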
Edit the LSF Environment JSON for Your Purposes:
Copy default-lsf-env.json to something convenient like env0.json:
cp default-lsf-env.json env0.json
Note: Line numbers listed below reference the example file above. Once changes start being made on the system, the line numbers may change.
Line 2: "EnvName" is set to lsf by default, but you can specify something unique if needed.
Lines 5-17 can be modified for a single pool of identical compute resources, or they can be duplicated and then modified for each "hardware" configuration or "pool" you choose. When duplicating, be sure to add a comma after the closing brace on line 17, except when it is the last brace, i.e. the final pool declaration (see the two-pool sketch after this list).
PoolName: This will be the apparent hostnames of the compute resources provided for LSF. It is recommended that all pools share a common trunk or base in each PoolName.
Priority: LSF will treat all pools as having equal priority and then make scheduling decisions based on alphabetical naming. It may be beneficial to set smaller nodes with a lower priority, something like:
2-core nodes : Priority=10
4-core nodes : Priority=100
8-core nodes : Priority=1000
so that jobs are scheduled on the smallest node that fulfills the resource requirements of the job.
PoolSize: This is the maximum number of these compute resources.
ProfileName: This is the default profile name, az1. If this is changed, you will need to carry the change forward.
CPUs: This is the targeted CPU-core limit for this "hardware" configuration or pool.
ImageName: This is tied to the AMI that will be used for your compute resources. This name will be used in subsequent steps.
MaxMemory: This is the targeted memory limit for this "hardware" configuration or pool.
MinMemory: reserved for future use; can be ignored currently.
UserData: This string is a base64-encoded version of user_data. To generate it:
cat user_data.sh | base64 -w 0
To decode it:
echo "<LongBase64EncodedString>" | base64 -d
It’s not required to be perfectly fine-tuned at this stage; it will be refined and corrected later.
You may format user_data.sh in the usual ways:
#cloud-config
runcmd:
  - [sh, -c, "set -x"]
  - [sh, -c, "hostname $( echo ip-$( hostname -I |sed 's/\./-/g' |sed 's/ //g' ) )"]
  - [sh, -c, "echo root:AAAAAA |chpasswd"]
  - [sh, -c, "sed -i.orig '3d' /etc/hosts"]
  - [sh, -c, "echo >> /etc/hosts"]
  - [sh, -c, "echo -e \"$( hostname -I )\t\t\t$( hostname )\" >> /etc/hosts"]
  - [sh, -c, "sed -i 's/awshost/xiohost/g' /opt/lsf/conf/lsf.conf"]
  - [sh, -c, "source /opt/lsf/conf/profile.lsf"]
  - [sh, -c, "lsadmin limstartup"]
  - [sh, -c, "lsadmin resstartup"]
  - [sh, -c, "badmin hstartup"]
or
#!/bin/bash
set -x
IP=$( hostname -I |awk '{print $1}' )
NEW_HOSTNAME=ip-$( echo ${IP} |sed 's/\./-/g' )
hostname ${NEW_HOSTNAME}
echo >> /etc/hosts
echo -e "${IP}\t\t${NEW_HOSTNAME}" >> /etc/hosts
. /opt/lsf/conf/profile.lsf
lsadmin limstartup
lsadmin resstartup
badmin hstartup
VolumeSize: reserved for future use; can be ignored currently.
All other fields/lines in this asset can be ignored.
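For illustration only, the two-pool sketch referenced above might look like the fragment below; pool names, sizes, and limits are placeholders, and only the Pools portion of env0.json is shown. Note the comma after the closing brace of the first pool and the lower Priority on the smaller pool:
"Pools": [
  {
    "PoolName": "xio-small",
    "Priority": 10,
    "PoolSize": 10,
    "ProfileName": "az1",
    "CPUs": 2,
    "ImageName": "xio-compute-image",
    "MaxMemory": 8192,
    "MinMemory": 0,
    "VolumeSize": 0,
    "UserData": "<Base64EncodedUserData>"
  },
  {
    "PoolName": "xio-large",
    "Priority": 100,
    "PoolSize": 10,
    "ProfileName": "az1",
    "CPUs": 4,
    "ImageName": "xio-compute-image",
    "MaxMemory": 16384,
    "MinMemory": 0,
    "VolumeSize": 0,
    "UserData": "<Base64EncodedUserData>"
  }
]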
Validate and Push the Customized Environment to the MGMT_SERVER
Validate the JSON asset with jq:
jq . env0.json
You will see well-formatted JSON if jq can read the file, indicating no errors. If you see an error message, the JSON is not valid.
When the JSON is valid, the file can be pushed to the MGMT_SERVER:
curl -d "@env0.json" -H 'Content-Type: application/json' -X PUT http://${MGMT_SERVER_IP}:5000/v1/env
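To confirm the environment was stored, it can be pulled back down with the same GET used earlier (substitute your EnvName if you changed it from lsf):
curl -X GET http://${MGMT_SERVER_IP}:5000/v1/env/lsf | jq .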
Pull Down the Default Profile Assets as a JSON Payload:
The default is named az1.
curl -X GET http://${MGMT_SERVER_IP}:5000/v1/profile/az1 | jq > default-profile.json
Copy it to facilitate customization, leaving the default for future reference.
cp default-profile.json profile0.json
The asset will look like this:
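The downloaded profile is not reproduced here. The sketch below is illustrative only: the Controller/Worker grouping, field order, and all values are assumptions, and the line numbers in your default-profile.json may differ from those referenced below:
{
  "Controller": {
    "InstanceTags": [
      {
        "Key": "exostellar.xspot-role",
        "Value": "xspot-controller"
      }
    ],
    "InstanceType": "c5.xlarge",
    "MaxControllers": 10
  },
  "ProfileName": "az1",
  "Worker": {
    "InstanceTags": [
      {
        "Key": "exostellar.xspot-role",
        "Value": "xspot-worker"
      }
    ],
    "InstanceTypes": [
      "m5.xlarge",
      "c5.xlarge"
    ],
    "SpotFleetTypes": [
      "m5:1",
      "c5:0"
    ]
  },
  "Hyperthreading": false,
  "NodeGroupName": "ngn"
}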
Edit the Profile JSON for Your Purposes:
Tagging instances created by the backend is controlled by two sections, depending on the function of the asset:
Controllers are On-Demand instances that manage other instances. By default, they are tagged as seen on lines 6-9, above, and 1-4 below.
{
  "Key": "exostellar.xspot-role",
  "Value": "xspot-controller"
}
To add additional tags, duplicate lines 1-4 as 5-8 below (as many times as you need), noting that an additional comma is added on line 4.
{
  "Key": "exostellar.xspot-role",
  "Value": "xspot-controller"
},
{
  "Key": "MyCustomKey",
  "Value": "MyCustomValue"
}
Don’t forget the comma between tags.
Workers are created by Controllers as needed, and they can be On-Demand/Reserved instances or Spot. By default, they are tagged as seen on lines 26-30 above and 1-4 below:
{
  "Key": "exostellar.xspot-role",
  "Value": "xspot-worker"
}
Add as many tags as needed.
{
  "Key": "exostellar.xspot-role",
  "Value": "xspot-worker"
},
{
  "Key": "MyCustomKey",
  "Value": "MyCustomValue"
}
Don’t forget the comma between tags.
Note: Line numbers listed below reference the example file above. Once changes start being made on the system, the line numbers may change.
Line 11 - InstanceType: Controllers do not generally require large instances. In terms of performance, these On-Demand Instances can be set to c5.xlarge or m5.xlarge with no adverse effect.
Line 20 - MaxControllers: This defines an upper bound for your configuration. Each Controller will manage up to 80 Workers, so the default "MaxControllers": 10 on line 20 gives an upper bound of 800 nodes joining your production cluster. If you plan to autoscale past 800 nodes, MaxControllers should be increased; if you want to lower that upper bound, MaxControllers should be decreased.
Line 21 - ProfileName: This is used for your logical tracking, in the event you configure multiple profiles.
Lines 31-34 - InstanceTypes: Here in the Worker section, this refers to On-Demand instances: if there is no Spot availability, these are the instance types you want to run on.
Lines 38-43 - SpotFleetTypes: Here in the Worker section, this refers to Spot instance types. Because of the discounts, you may be comfortable with a much broader range of instance types; more types and families here means more opportunities for cost optimization. Priorities can be managed by appending a : and an integer, e.g. m5:1 is a higher priority than c5:0 (see the fragment after this list).
Line 48 - Hyperthreading: This is reserved for future use and can be ignored currently.
Line 52 - NodeGroupName: This string appears in Controller Name tagging: <profile>-NGN-count.
All other fields/lines in the asset can be ignored.
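The SpotFleetTypes fragment referenced above might look like the following (instance families and priorities are placeholders; only this portion of profile0.json is shown), with m5 and m5d preferred over c5 and c5d:
"SpotFleetTypes": [
  "m5:1",
  "m5d:1",
  "c5:0",
  "c5d:0"
]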
Validate and Push the Customized Profile to the MGMT_SERVER
Validate the profile with the quick jq test:
jq . profile0.json
Push the changes live.
curl -d "@profile0.json" -H 'Content-Type: application/json' -X PUT http://${MGMT_SERVER_IP}:5000/v1/profile
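As with the environment, the stored profile can be pulled back down to confirm the update (substitute your ProfileName if you changed it from az1):
curl -X GET http://${MGMT_SERVER_IP}:5000/v1/profile/az1 | jq .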
Download Scheduler Assets from the Management Server
curl -X GET http://${MGMT_SERVER_IP}:5000/v1/xcompute/download/lsf -o lsf.tgz
If the EnvName was changed (above, in Edit the LSF Environment JSON for Your Purposes, step 2), then the following command can be used with your CustomEnvironmentName:
curl -X GET http://${MGMT_SERVER_IP}:5000/v1/xcompute/download/lsf?envName=CustomEnvironmentName -o lsf.tgz
Unpack them into the exostellar folder:
tar xf lsf.tgz -C ../
cd ..
mv assets/* .
rmdir assets
Ensure lsb.modules is prepared for Resource Connector
The schmod_demand plugin must be enabled in ${LSF_TOP}/conf/lsbatch/<cluster-name>/configdir/lsb.modules:
Begin PluginModule
SCH_PLUGIN         RB_PLUGIN     SCH_DISABLE_PHASES
...
schmod_demand      ()            ()
...
End PluginModule
Add New Queue and Resource Definitions to LSF for Resource Connector
Add a new queue to lsb.queues:
cd ${LSF_TOP}/conf/lsbatch/<ClusterName>/configdir
vi lsb.queues
Insert a Queue Declaration such as below:
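The original queue example is not reproduced here; the following is a minimal sketch, assuming a queue named xio, the xiohost resource defined in the next step, and placeholder PRIORITY and RC_DEMAND_POLICY values that you should adjust for your site:
Begin Queue
QUEUE_NAME       = xio
PRIORITY         = 50
RC_HOSTS         = xiohost
RC_DEMAND_POLICY = THRESHOLD[[1,10]]
DESCRIPTION      = Queue for Infrastructure Optimizer resource connector hosts
End Queue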
Add a line to the Resource Definitions in lsf.shared:
cd ${LSF_TOP}/conf
vi lsf.shared
Example before modification:
Begin Resource
RESOURCENAME   TYPE      INTERVAL   INCREASING   DESCRIPTION                                 # Keywords
awshost        Boolean   ()         ()           (instances from AWS)
vm_type        String    ()         ()           (vm types for different templates)
templateID     String    ()         ()           (template ID for the external hosts)
End Resource
Example with the required line added:
Begin Resource
RESOURCENAME   TYPE      INTERVAL   INCREASING   DESCRIPTION                                 # Keywords
awshost        Boolean   ()         ()           (instances from AWS)
xiohost        Boolean   ()         ()           (instances from Infrastructure Optimizer)
vm_type        String    ()         ()           (vm types for different templates)
templateID     String    ()         ()           (template ID for the external hosts)
End Resource
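These lsb.modules, lsb.queues, and lsf.shared changes typically take effect only after the cluster configuration is reloaded; for example (verify against your site's change procedures):
lsadmin reconfig
badmin mbdrestart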
Compute AMI Import
During Prepping the LSF Integration, an AMI for compute nodes was identified or created. This step imports that AMI into Infrastructure Optimizer. Ideally, this AMI is capable of booting quickly.
./parse_helper.sh -a <AMI-ID> -i <IMAGE_NAME>
The AMI-ID should be based on an LSF compute node from your cluster, capable of running your workloads.
The AMI should be created by this account.
The AMI should not have product codes.
The Image Name was specified in the environment set up previously and will be used in this command.
Additionally, we can pass -s script.sh if troubleshooting is required.
Validation of Migratable VM Joined to Your LSF Cluster
The script test_createVm.sh exists for a quick validation that new compute resources can successfully connect and register with the scheduler.
./test_createVm.sh -h xvm0 -i <IMAGE_NAME> -u user_data.sh
The hostname specified with -h xvm0 is arbitrary.
The Image Name specified with -i <IMAGE_NAME> should correspond to the Image Name from the parse_helper.sh command and the environment setup earlier.
The -u user_data.sh option is available for any customization that may be required: temporarily changing a password to facilitate logging in, for example.
The test_createVm.sh script will continuously output updates until the VM is created. When the VM is ready, the script will exit and you'll see that all the fields in the output are filled with values:
Waiting for xvm0... (4)
NodeName: xvm0
Controller: az1-qeuiptjx-1
Controller IP: 172.31.57.160
Vm IP: 172.31.48.108
This step is meant to provide a migratable VM so that sanity checking may occur:
Have network mounts appeared as expected?
Is authentication working as intended?
What commands are required to finish bootstrapping?
Et cetera.
Lastly, LSF services should be started at the end of bootstrapping.
It may take 5 minutes or longer for the LSF services to register with the LSF Master Host.
When the RC Execution Host is properly registered, it will be visible via the lshosts command.
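For example, registration can be confirmed with standard LSF commands (xvm0 matches the -h value used above):
lshosts | grep xvm0
bhosts xvm0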
To remove this temporary VM:
Replace VM_NAME with the name of the VM (-h xvm0 in the example above):
curl -X DELETE http://${MGMT_SERVER_IP}:5000/v1/xcompute/vm/VM_NAME
The above steps may need to be iterated through several times. When totally satisfied, stash the various commands required for successful bootstrapping and overwrite the user data scripts in the ${LSF_TOP}/conf/resource_connector/exostellar/conf directory.
There will be a per-pool user_data script in that folder. It can be overwritten any time a change is needed, and the next time a node is instantiated from that pool, the node will get the changes.
A common scenario is that all the user_data scripts are identical, but it could be beneficial for different pools to have different user_data bootstrapping assets.
Validate Integration with LSF
Integration steps are complete and a job submission to the new queue is the last validation:
As a user, navigate to a valid job submission directory and launch a job as normal, but be sure to specify the new queue:
bsub -q NewQueueName < job-script.sh
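To confirm the job was dispatched through the new queue, standard LSF commands such as the following can be used:
bjobs -q NewQueueName
bhosts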