Run:AI Platform for AI Workload Orchestration
Faster time to innovation. By using Run:AI resource pooling, queueing, and prioritization mechanisms together with NetApp storage systems, researchers are freed from infrastructure management and can focus exclusively on data science. Run:AI and NetApp customers increase productivity by running as many workloads as they need without compute or data-pipeline bottlenecks.
Increased team productivity. Run:AI fairness algorithms guarantee that all users and teams get their fair share of resources. Policies for priority projects can be preset, and the platform enables dynamic allocation of resources from one team to another, helping users get timely access to coveted GPU resources.
Improved GPU utilization. The Run:AI Scheduler enables users to easily make use of fractional GPUs, integer GPUs, and multiple nodes of GPUs for distributed training on Kubernetes. In this way, AI workloads run based on needs, not capacity. Data science teams are able to run more AI experiments on the same infrastructure.
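As a rough illustration of the fractional-GPU capability described above, a workload can be handed to the Run:AI scheduler through a standard Kubernetes pod spec. This is a minimal sketch only: the annotation key `gpu-fraction`, the scheduler name `runai-scheduler`, and the container image are assumptions and should be checked against the Run:AI documentation for your installed version.

```yaml
# Hypothetical example: a pod requesting half of one GPU via Run:AI.
apiVersion: v1
kind: Pod
metadata:
  name: frac-gpu-train
  annotations:
    gpu-fraction: "0.5"          # assumption: annotation key for a half-GPU request
spec:
  schedulerName: runai-scheduler # assumption: hands scheduling to Run:AI instead of the default scheduler
  containers:
    - name: train
      image: tensorflow/tensorflow:latest-gpu  # example training image
```

Because the fraction is expressed per workload, two such pods could share a single physical GPU, which is how the scheduler raises utilization beyond one-job-per-GPU allocation.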