Workload Management

The workload management/queueing system of the Virtual Cluster is Slurm. This chapter explains how the system is configured and how to perform common administrative tasks. The master daemon of the system and all configuration files and programs are installed on the “service” instance (or on the “slurm” instance) of the Virtual Cluster.

Configuration#

The configuration file for Slurm can be found in /etc/slurm/slurm.conf on the "service" or "slurm" instance, respectively. Refer to the Slurm website to find out more about the different configuration options.
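
The configuration values currently in effect can also be displayed without opening the file, e.g. with:

scontrol show config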

Warning

Making changes in the Slurm configuration can damage the Virtual Cluster and lead to unexpected behavior. It is strongly recommended to contact Schrödinger prior to applying any changes to the Slurm configuration to ensure undisrupted operation of the Virtual Cluster.

Each partition with dynamic instances (i.e., the “session” partition and all “compute” partitions) corresponds to a Slurm “partition” of the same name. This makes it easy to choose the appropriate hardware resources for a given job simply by selecting the corresponding partition when submitting the job. Each instance is referred to as a “node” by Slurm.
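
For example, a job can be directed to the desired hardware by naming the partition at submission time (the partition and script names below are illustrative):

sbatch --partition=compute-cpu my_job.sh
srun --partition=session --pty /bin/bash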

The partitions of a Virtual Cluster and their state as seen by Slurm can be displayed by the following command:

sinfo
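
The output lists each partition together with its availability, time limit, node count, node state, and node list; dynamically created instances that are currently powered down carry the “~” state suffix. The names and counts below are purely illustrative:

PARTITION    AVAIL  TIMELIMIT  NODES  STATE  NODELIST
session         up   infinite      2  idle~  session-[1-2]
compute-cpu     up   infinite     10  idle~  compute-cpu-[1-10]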

Changing the Maximum Size of the Virtual Cluster#

The maximum number of instances allowed for each partition can be configured in the Slurm configuration file.

Note

Prior to increasing the maximum size of the Virtual Cluster, please check that the current quota of the resource provider account/subscription used to host the Virtual Cluster allows starting the higher number of instances.

The following procedure can be used to change the maximum size of the Virtual Cluster; a consolidated shell sketch of steps 2-6 is shown after the list:

  1. Drain all instances of the Virtual Cluster (see Draining Instances and Partitions) and verify that all dynamically created instances have been terminated using the command:

    sudo -i vc-instance-manager instance list
    
  2. Shut down the Slurm controller daemon on the “service” instance or the "slurm" instance, respectively, using the following command:

    sudo systemctl stop slurmctld
    
  3. Open the file /etc/slurm/slurm.conf in any text editor and search for lines starting with NodeName and PartitionName. For each partition there is one entry of each type. The nodes/instances are named using the following scheme:

    <partition-name>-<N>
    

    where “N” is an integer number (index). The node/instance with the highest index defines the maximum size of the Virtual Cluster for a particular partition. E.g., the entries:

    NodeName=compute-cpu-[1-10] ...
    PartitionName=compute-cpu Nodes=compute-cpu-[1-10] ...
    

    in /etc/slurm/slurm.conf describe the instances in the “compute-cpu” partition. To change the maximum size of this partition from 10 to 20 instances, the above lines need to be modified as follows:

    NodeName=compute-cpu-[1-20] ...
    PartitionName=compute-cpu Nodes=compute-cpu-[1-20] ...
    
  4. Open the file /etc/slurm/gres.conf found on the "service" instance or the "slurm" instance, respectively, in any text editor. The lines in this file also contain the NodeName parameter. Change its value in the same way as described in the previous step.

  5. Open the file /usr/local/sbin/vc-instance-manager in any text editor and change the “max” property of the partition whose size you want to change to the same value entered in the /etc/slurm/slurm.conf file.

  6. Start the Slurm controller daemon using the following command:

    sudo systemctl start slurmctld
    
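For the illustrative case used above, i.e. growing the “compute-cpu” partition from 10 to 20 instances, steps 2-6 could be carried out roughly as follows once step 1 is complete. The sed patterns are only a sketch that assumes the node range shown above; always verify the edited files before restarting the daemon.

sudo systemctl stop slurmctld

# Rewrite the node range in the Slurm and GRES configuration files
# (keeps a .bak backup of each file; patterns are illustrative):
sudo sed -i.bak 's/compute-cpu-\[1-10\]/compute-cpu-[1-20]/g' /etc/slurm/slurm.conf
sudo sed -i.bak 's/compute-cpu-\[1-10\]/compute-cpu-[1-20]/g' /etc/slurm/gres.conf

# Adjust the "max" property of the "compute-cpu" partition by hand:
sudo vi /usr/local/sbin/vc-instance-manager

sudo systemctl start slurmctld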

License Aware Scheduling (License Checking)#

The workload management system is configured for license aware scheduling, i.e., jobs requiring Schrödinger licenses currently in use are not started and instances for these jobs are not created. A service on the “service” instance interacts with the accounting system of the workload management system to obtain and update the available licenses. The status of this service can be queried using the following commands (on the “service” instance):

sudo systemctl status slurm-schrodinger-license-sensor

shows the output of the service and

scontrol show lic

shows the current licenses registered in the accounting system of the workload management system. The number displayed as “Total” describes the number of licenses available for jobs running on the Virtual Cluster. It may differ from the overall total number if licenses are checked out by systems external to the Virtual Cluster; licenses checked out by such systems are subtracted from the total because they are unavailable for jobs on the Virtual Cluster.
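
The output of the scontrol show lic command has roughly the following form, with one block per license (the license name and counts below are purely illustrative):

LicenseName=GLIDE_SP_DOCKING
    Total=100 Used=12 Free=88 Reserved=0 Remote=yes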

Draining Instances and Partitions#

To drain and disable a partition of the Virtual Cluster, a user with “admin” permissions can use the following command:

scontrol update Partition=<name> State=DRAIN Reason=<any-reason>

This can be useful for performing configuration changes which require the Virtual Cluster to be idle.
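
For example, to take a partition named “compute-cpu” offline before such a change (partition name and reason are illustrative):

scontrol update Partition=compute-cpu State=DRAIN Reason=maintenance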

Note

The partition will continue to process all jobs which started execution before this command was issued, but it will not accept the submission of new jobs and will not start any jobs assigned to this partition which have not yet begun execution.

A partition and its nodes can be enabled again using the following command:

scontrol update Partition=<name> State=UP

The command:

scontrol show partitions

shows the current properties and states of all partitions.
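
The output contains one block per partition; the State field shows whether a partition is currently UP, DOWN, DRAIN, or INACTIVE. An abbreviated, illustrative excerpt (most fields omitted):

PartitionName=compute-cpu
   ...
   Nodes=compute-cpu-[1-20]
   ...
   State=DRAIN TotalCPUs=320 TotalNodes=20 ...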