
v2.4.0.0 Adding or Modifying Pools (Slurm)

It’s common to refine or otherwise modify configurations over time. A CLI tool that will obviate the following manual steps is planned; until it is available, these are the steps required for reconfiguration after the initial integration has been pushed into production.

  1. Navigate to the exostellar directory where the configuration assets reside:

    1. cd ${SLURM_CONF_DIR}/exostellar
  2. Set up a timestamp folder in case there’s a need to roll back:

    1. PREVIOUS_DIR=$( date +%Y-%m-%d_%H-%M-%S ); mkdir ${PREVIOUS_DIR}
  3. Place the contents of the exostellar directory in the timestamp directory:

    1. mv * ${PREVIOUS_DIR}
  4. Make a new json folder and copy env.json and profile.json into it:
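For example — the target names env1.json and profile0.json are assumptions matching the file names used in the later steps, and may differ in your installation:

```shell
# Assumes PREVIOUS_DIR was set in step 2.
# The env1.json / profile0.json names are illustrative, chosen to match
# the files edited in the following steps.
mkdir json
cp ${PREVIOUS_DIR}/env.json json/env1.json
cp ${PREVIOUS_DIR}/profile.json json/profile0.json
```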

  5. Edit env1.json as needed, e.g.:

    1. Add more pools if you need more CPU-core or memory options available in the partition.

    2. Increase the node count in pools.

    3. See Environment Configuration Information for reference.
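As a minimal sketch of such an edit, using jq to raise a pool's node count — the `.Pools[0].NodeCount` path is hypothetical; match it to the actual schema documented in Environment Configuration Information:

```shell
# Hypothetical jq path: adjust ".Pools[0].NodeCount" to your env1.json schema.
jq '.Pools[0].NodeCount = 12' env1.json > env1.json.tmp && mv env1.json.tmp env1.json
```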

  6. profile0.json likely does not need any modification.

    1. See Profile Configuration Information for reference.

  7. Validate the JSON asset with jq:
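For example:

```shell
# Prints well-formatted JSON on success; a parse error means the file is invalid.
jq . env1.json
```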

  8. If jq can read the file, it prints well-formatted JSON, indicating no errors; an error message means the JSON is not valid.

  9. When the JSON is valid, the file can be pushed to the MGMT_SERVER:
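The exact transfer mechanism depends on your deployment. As a sketch, assuming the management server accepts the environment JSON over HTTP — the endpoint, port, and path below are hypothetical; consult the Exostellar documentation for the actual API:

```shell
# Hypothetical endpoint and port; verify against your Exostellar documentation.
curl -H "Content-Type: application/json" -X PUT \
     -d @env1.json http://${MGMT_SERVER}:5000/v1/env
```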

  10. If the profile was changed, validate it with the quick jq test.
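For example:

```shell
jq . profile0.json
```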

  11. Push the changes live:
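As with the environment file, this is deployment-specific; a sketch under the same hypothetical-endpoint assumption as above:

```shell
# Hypothetical endpoint and port; verify against your Exostellar documentation.
curl -H "Content-Type: application/json" -X PUT \
     -d @profile0.json http://${MGMT_SERVER}:5000/v1/profile
```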

  12. Grab the assets from the MGMT_SERVER:

    1. If the EnvName was changed (above, in Edit the Slurm Environment JSON for Your Purposes, Step 2), the following command can be used with your CustomEnvironmentName:
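As a sketch, assuming the assets are served over HTTP by the MGMT_SERVER — the URL, path, and bundle name are hypothetical; consult the Exostellar documentation for the actual download command:

```shell
# Hypothetical URL and file name; substitute your CustomEnvironmentName.
curl -o assets.tgz "http://${MGMT_SERVER}:5000/v1/env/${CustomEnvironmentName}/assets"
```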

  13. Unpack them into the exostellar folder:
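Assuming the bundle from the previous step was saved as assets.tgz (the name is illustrative):

```shell
# Unpack into the exostellar configuration directory.
tar -xzf assets.tgz -C ${SLURM_CONF_DIR}/exostellar
```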

  14. Edit resume_xspot.sh and append the sed snippet ( | sed "s/XSPOT_NODENAME/$host/g" ) to the corresponding line for every pool, so that the XSPOT_NODENAME placeholder becomes the actual host name:
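As a hypothetical before/after illustration — the actual line in resume_xspot.sh will differ; only the appended sed filter is prescribed:

```shell
# Before (hypothetical line writing a pool's VM definition):
cat xspot-vm_pool1.json > $TMP_FILE
# After, substituting the real host name for the XSPOT_NODENAME placeholder:
cat xspot-vm_pool1.json | sed "s/XSPOT_NODENAME/$host/g" > $TMP_FILE
```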

  15. Introducing new nodes into a Slurm cluster requires a restart of the Slurm control daemon:
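For example, on the Slurm controller host:

```shell
sudo systemctl restart slurmctld
```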

  16. The integration steps are now complete; submitting a job to the new partition is the final validation:

      1. As a user, navigate to a valid job submission directory and launch a job as normal, but be sure to specify the new partition:

      1. sbatch -p NewPartitionName < job-script.sh
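The job and the newly resumed nodes can then be watched with standard Slurm tools, for example:

```shell
# Watch the job and the partition's nodes come up.
squeue -u $USER -p NewPartitionName
sinfo -p NewPartitionName
```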

 
