The following manual steps will be replaced with a simplified workflow for command-line users; alternatively, the Management Console (Web UI) is available for login and configuration.

To ensure an optimal setup of Infrastructure Optimizer, please make a note of the following information, which will be used during installation and integration:

Slurm Installation

  • SLURM_CONF_DIR : directory where slurm.conf is located

  • SLURM_BIN_DIR : directory where Slurm's binaries are located, usually on users' PATH
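
If these locations are not known in advance, they can usually be discovered from an existing login or head node. The commands below are a suggested starting point; the resulting paths will differ per site.

Code Block
# The path to slurm.conf is reported as SLURM_CONF in the running configuration
scontrol show config | grep SLURM_CONF
# Directory containing Slurm binaries such as sinfo, squeue, and scontrol
dirname "$(command -v sinfo)"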

Exostellar Management Server Information

  • MGMT_SERVER_IP : the internal (private) IP address of the Management Server, which can be found in the CloudFormation Outputs tab.
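
If you prefer the command line, the same Outputs can be listed with the AWS CLI; the stack name below is a placeholder for the name you used when deploying the CloudFormation template.

Code Block
# List the Outputs of the deployment stack and look for the Management Server's private IP
aws cloudformation describe-stacks \
  --stack-name <EXOSTELLAR_STACK_NAME> \
  --query 'Stacks[0].Outputs' \
  --output table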

Facilitating Commands

Variables can be exported to facilitate copy/paste of the commands in the next sections of this guide, or collected in a file and sourced, for example: . /root/facilitate or source /root/facilitate.

Code Block
export MGMT_SERVER_IP=172.31.23.23
export SLURM_CONF_DIR=/opt/slurm/etc
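
As an alternative to exporting the variables in every shell, they can be collected once in the file mentioned above and sourced later. The SLURM_BIN_DIR value here is only an example, and /root/facilitate is an arbitrary file name.

Code Block
cat > /root/facilitate <<'EOF'
export MGMT_SERVER_IP=172.31.23.23
export SLURM_CONF_DIR=/opt/slurm/etc
# Example path only; set this to your actual Slurm binary directory
export SLURM_BIN_DIR=/opt/slurm/bin
EOF
source /root/facilitate    # equivalently: . /root/facilitate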

1. Slurm Compute Environment

...

To prepare the nested compute node, we need an AMI. If you have an existing AMI for your compute environment, we can leverage that. If an AMI needs to be created from an existing compute node, the following steps walk through that process.

The key concept to keep in mind for a good AMI is that it should boot fast and do little to no work at startup, for example in bootstrapping or user_data. It should also have everything required to run the workflows and authenticate users.

To create an AMI from a Slurm compute node:

...

Allocate a compute node:

  • Code Block
    salloc -N 1 -J ami-creation --no-shell --exclusive --nodelist=<NODENAME>
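
If you have not yet chosen a <NODENAME>, one convenient (but optional) way to find a candidate is to list nodes and their states and pick an idle node from the partition the AMI should represent:

Code Block
# Node-oriented listing: node name, partition, and state
sinfo -N -o "%N %P %t"
# Restrict the listing to idle nodes only
sinfo -N -t idle -o "%N %P %t"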

When the job is allocated, gather some information on the node running the job:

...

The salloc command above should have output a JOB_ID.

...

The squeue command should show the JOB_ID running on a particular node.
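
For example, filtering squeue by the job name used above prints the JOB_ID, its state, and the allocated node; the output format string is just one convenient choice:

Code Block
# Show job ID, state, and node for the ami-creation allocation
squeue --name=ami-creation -o "%A %T %N"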

...

Issue the following command to capture information about that node:

  • Code Block
    scontrol show node <NODENAME>
  • Look for the NodeAddr= field in the output to find the Private IPv4 Address of the node running the ami-creation job.
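
As an optional convenience, the NodeAddr= field can be extracted directly, since scontrol prints space-separated key=value pairs:

Code Block
# Print only the NodeAddr= key/value pair for the node
scontrol show node <NODENAME> | tr ' ' '\n' | grep '^NodeAddr='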

...

Navigate to the AWS Console's EC2 Instances page and search for the Private IPv4 Address.
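
If you prefer the command line to the console, the instance can also be located by filtering on that private IP address; the address below is a placeholder for the value found in the previous step.

Code Block
# Find the instance ID that owns the private IPv4 address reported by scontrol
aws ec2 describe-instances \
  --filters "Name=private-ip-address,Values=<PRIVATE_IPV4_ADDRESS>" \
  --query 'Reservations[].Instances[].InstanceId' \
  --output text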

...

Select that instance and click “Create an image” from the Actions button in the upper right corner.

...

AMI Creation takes several minutes to complete. Give the AMI a unique name, optionally a description, and accept the default values provided by the prompts.
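
The same image can also be created from the AWS CLI; the instance ID, AMI name, and AMI ID below are placeholders. Note that, by default, EC2 reboots the instance while creating the image; add --no-reboot to skip the reboot.

Code Block
# Create the AMI from the allocated compute node (placeholders for ID and name)
aws ec2 create-image \
  --instance-id <INSTANCE_ID> \
  --name "slurm-compute-ami" \
  --description "Compute AMI for Infrastructure Optimizer Slurm integration"
# Wait until the new AMI becomes available (this can take several minutes)
aws ec2 wait image-available --image-ids <AMI_ID>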

...


2. Slurm Head Node Configuration Location

3. Slurm Application Environment

4. Customizing the Slurm Application Environment

5. Upload Slurm Application Environment

6. The Default Profile for Slurm Integration

7. Customizing Profiles for Slurm Integration

8. Upload Slurm Profiles

9. Download Configuration Assets for Slurm Integration

10. Import Slurm Compute AMI

11. Validation of Migratable VM and Slurm Communications

12. Final Validation with Slurm Job

13. Slurm Knowledge Base