UNDER CONSTRUCTION

The following manual steps for EAR will be replaced with a simplified workflow for command-line users; alternatively, the Management Console (Web UI) will be able to replace most of these steps as well.

Connect to Your LSF Head Node

During Early Access, integration requires a handful of commands and root or sudo access on the LSF Master Node.

  1. Get a shell on the head node and navigate to the resource_connector directory under LSF_TOP.

    1. Code Block
      cd $LSF_TOP/conf/resource_connector
  2. Make subdirectories here:

    1. Code Block
      mkdir -p exostellar/json exostellar/conf exostellar/scripts
      cd exostellar/json
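
The next section pulls assets from the Exostellar management server, referenced as ${MGMT_SERVER_IP}. A minimal sketch of setting that variable, assuming 10.0.0.100 stands in for your management server's address (substitute your own):

Code Block
# Hypothetical address; replace with your MGMT_SERVER's IP
export MGMT_SERVER_IP=10.0.0.100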

Pull Down the Default LSF Environment Assets as a JSON Payload:

  1. The jq and curl packages are required:

    1. Code Block
      yum install jq curl
    2. CentOS EoL: You may need to ensure yum can still function because CentOS 7 has reached End of Life at Red Hat. The following commands are an example mitigation for an internet-dependent yum repository, repointing it at the vault archive:

    3. Code Block
      sed -i -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*.repo
      yum clean all
  2. Pull down the default assets from the MGMT_SERVER for customization:

    1. Code Block
      curl -X GET http://${MGMT_SERVER_IP}:5000/v1/env/lsf | jq > default-lsf-env.json
  3. The asset will look like:

default-lsf-env.json:
Code Block
{
  "EnvName": "slurm",
  "HeadAddress": "<HeadAddress>",
  "Pools": [
    {
      "PoolName": "xvm16-",
      "PoolSize": 10,
      "ProfileName": "az1",
      "VM": {
        "CPUs": 16,
        "ImageName": "ubuntu",
        "MaxMemory": 60000,
        "MinMemory": 4096,
        "UserData": "I2Nsb3VkLWNvbmZpZwpydW5jbWQ6CiAgLSBbc2gsIC1jLCAibWtkaXIgLXAgL3hjb21wdXRlIl0KICAtIFtzaCwgLWMsICJtb3VudCAxNzIuMzEuMjQuNToveGNvbXB1dGUgL3hjb21wdXRlIl0KICAtIFtzaCwgLWMsICJta2RpciAtcCAvaG9tZS9zbHVybSJdCiAgLSBbc2gsIC1jLCAibW91bnQgMTcyLjMxLjI0LjU6L2hvbWUvc2x1cm0gL2hvbWUvc2x1cm0iXQogIC0gW3NoLCAtYywgInJtIC1yZiAvZXRjL3NsdXJtIl0KICAtIFtzaCwgLWMsICJsbiAtcyAveGNvbXB1dGUvc2x1cm0vIC9ldGMvc2x1cm0iXQogIC0gW3NoLCAtYywgImNwIC94Y29tcHV0ZS9zbHVybS9tdW5nZS5rZXkgL2V0Yy9tdW5nZS9tdW5nZS5rZXkiXQogIC0gW3NoLCAtYywgInN5c3RlbWN0bCByZXN0YXJ0IG11bmdlIl0KICAjIEFMV0FZUyBMQVNUIQogIC0gWwogICAgICBzaCwKICAgICAgLWMsCiAgICAgICJlY2hvIFhTUE9UX05PREVOQU1FID4gL3Zhci9ydW4vbm9kZW5hbWU7IHNjb250cm9sIHVwZGF0ZSBub2RlbmFtZT1YU1BPVF9OT0RFTkFNRSBub2RlYWRkcj1gaG9zdG5hbWUgLUlgIiwKICAgIF0KCg==",
        "VolumeSize": 10
      }
    }
  ],
  "Security": {
    "User": "slurm",
    "UserKeyPem": ""
  },
  "Slurm": {
    "BinPath": "/bin",
    "ConfPath": "/etc/slurm",
    "PartitionName": "normal"
  },
  "Type": "slurm",
  "Id": "1f019d92-d356-42b4-ba2c-d65bea40474a"
}
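
The UserData value is a base64-encoded cloud-init payload. Before customizing the asset, you can decode it for inspection; a minimal sketch, assuming the file name used above:

Code Block
# Decode the first pool's cloud-init user data for review
jq -r '.Pools[0].VM.UserData' default-lsf-env.json | base64 -d

Fields such as PoolSize or MaxMemory can be adjusted in place with jq before continuing; for example, a hypothetical edit with illustrative values:

Code Block
# Write a customized copy with PoolSize raised to 20
jq '.Pools[0].PoolSize = 20' default-lsf-env.json > custom-lsf-env.json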

...