Over-Quota Fairness

In this section, we expand on the scenario in which multiple teams submit workloads and exceed their quotas, to demonstrate how Run:AI’s fairness algorithm allocates cluster resources according to the ratio of the preset quotas.

Goals for this test scenario:

  • Show the queuing mechanism when multiple teams request GPUs over their quota.

  • Show how the system distributes spare cluster capacity among multiple teams that are over their quota according to the ratio between their quotas, so that the team with the larger quota gets a larger share of the spare capacity (see the sketch after this list).
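As a minimal sketch of this sharing rule (ours, not Run:AI’s scheduler code; the function name and inputs are illustrative), spare GPUs can be split among over-quota teams in proportion to their preset quotas:

```python
def share_spare_gpus(quotas, spare_gpus):
    """Split spare GPUs among over-quota teams in proportion to their preset quotas.

    Illustrative sketch only; the real scheduler also handles preemption,
    node placement, and fractional GPUs.
    """
    total_quota = sum(quotas.values())
    # Each team's share of the spare capacity follows the ratio of its quota.
    # Integer division keeps the example simple; remainders are ignored.
    return {team: spare_gpus * quota // total_quota for team, quota in quotas.items()}
```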

At the end of Basic Resource Allocation Fairness, there are two workloads queued: one for team-b and one for team-c. In this section, we queue additional workloads.

For details including job submissions, container images used, and command sequences executed, see Testing Details for section 4.10.

When all jobs are submitted as described in Testing Details for section 4.10, the system dashboard shows that team-a, team-b, and team-c all hold more GPUs than their preset quotas: team-a occupies four GPUs more than its preset soft quota (four), whereas team-b and team-c each occupy two GPUs more than their soft quota (two). The ratio of over-quota GPUs allocated matches the ratio of the preset quotas, because the system uses the preset quotas as a priority reference and provisions accordingly when multiple teams request more GPUs than their quotas allow. This automatic load balancing provides fairness and prioritization when enterprise data science teams are actively engaged in AI model development and production.

[Figure: Run:AI dashboard showing team-a, team-b, and team-c allocated GPUs beyond their preset quotas]
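Applying the same ratio arithmetic to this test, with preset quotas of 4, 2, and 2 for team-a, team-b, and team-c and team-d’s eight idle GPUs as the spare capacity, reproduces the allocation shown on the dashboard:

```python
quotas = {"team-a": 4, "team-b": 2, "team-c": 2}  # preset soft quotas
spare = 8                                         # team-d's eight idle GPUs

total = sum(quotas.values())
allocated = {team: quota + spare * quota // total for team, quota in quotas.items()}
print(allocated)  # {'team-a': 8, 'team-b': 4, 'team-c': 4}
```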

The results of this testing scenario show the following:

  • The system starts to dequeue the workloads of the other teams.

  • The dequeuing order is decided by the fairness algorithm: team-b and team-c get the same number of over-quota GPUs (because their quotas are equal), and team-a gets twice as many GPUs because its quota is twice that of team-b and team-c (a toy illustration of this ordering follows this list).

  • All of the allocation is done automatically.
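As a rough mental model of this ordering (a toy sketch of ours, not Run:AI’s actual scheduling algorithm), one can picture the scheduler repeatedly admitting a queued job from the team that currently has the lowest allocated-to-quota ratio; with quotas in a 2:1:1 ratio, this converges on the same 2:1:1 split of over-quota GPUs:

```python
from collections import deque

def toy_dequeue_order(quotas, pending, free_gpus):
    """Toy model of fairness-based dequeuing: repeatedly admit the next queued
    job of the team that is furthest below its fair share (lowest
    allocated-to-quota ratio). Not Run:AI's actual scheduling algorithm."""
    allocated = {team: 0 for team in quotas}
    queues = {team: deque(jobs) for team, jobs in pending.items()}
    admitted = []
    while free_gpus > 0 and any(queues.values()):
        # Among teams that still have queued jobs, pick the one with the
        # lowest allocated/quota ratio.
        team = min((t for t in quotas if queues.get(t)),
                   key=lambda t: allocated[t] / quotas[t])
        gpus = queues[team][0]           # GPUs requested by that team's next job
        if gpus > free_gpus:
            break                        # next job does not fit; stop the toy run
        queues[team].popleft()
        allocated[team] += gpus
        free_gpus -= gpus
        admitted.append((team, gpus))
    return admitted, allocated

# Hypothetical job sizes (illustrative, not the exact test workloads):
# toy_dequeue_order({"team-a": 4, "team-b": 2, "team-c": 2},
#                   {"team-a": [2, 2, 2, 2], "team-b": [2, 2], "team-c": [2, 2]},
#                   free_gpus=16)
# ends with allocations of 8, 4, and 4 GPUs, the same 2:1:1 ratio as the quotas.
```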

Therefore, the system should stabilize in the following state:

Project   GPUs allocated/quota   Comment
team-a    8/4                    Four GPUs over the quota. Empty queue.
team-b    4/2                    Two GPUs over the quota. One workload queued.
team-c    4/2                    Two GPUs over the quota. One workload queued.
team-d    0/8                    Not using any GPUs; no workloads queued.
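To confirm this stable state from a workstation, the per-project allocations can also be checked through the Run:AI CLI (assuming it is installed and authenticated; the exact subcommand may differ between CLI versions), for example from a small Python wrapper:

```python
import subprocess

# Assumes the Run:AI CLI is installed and logged in; the exact subcommand
# ("runai list projects") may vary between CLI versions.
result = subprocess.run(["runai", "list", "projects"],
                        capture_output=True, text=True, check=True)
print(result.stdout)  # Compare the allocated vs. deserved GPU columns with the table above.
```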

The following figure shows the GPU allocation per project over time in the Run:AI Analytics dashboard for the sections Achieving High Cluster Utilization with Over-Quota GPU Allocation, Basic Resource Allocation Fairness, and Over-Quota Fairness. Each line in the figure indicates the number of GPUs provisioned for a given data science team at any time. We can see that the system dynamically allocates GPUs according to the workloads submitted: teams go over quota when GPUs are available in the cluster, jobs are then preempted according to the fairness algorithm, and all four teams finally reach a stable state.

[Figure: Run:AI Analytics dashboard showing GPU allocation per project over time]