
Test procedure


This section describes the test procedures used to validate this solution.

Operating system and AI inference setup

For the AFF C190, we used Ubuntu 18.04 with NVIDIA drivers and Docker with NVIDIA GPU support, along with the MLPerf code available as part of the Lenovo submission to MLPerf Inference v0.7.

For the EF280, we used Ubuntu 20.04 with NVIDIA drivers and Docker with NVIDIA GPU support, along with the MLPerf code available as part of the Lenovo submission to MLPerf Inference v1.1.
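Before setting up the inference code, it can help to confirm that Docker can access the GPUs. The check below is a minimal sketch, assuming the NVIDIA Container Toolkit is installed; the CUDA image tag is only an example:

# Confirm that containers can see the GPUs
docker run --rm --gpus all nvidia/cuda:11.4.2-base-ubuntu20.04 nvidia-smi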

To set up the AI inference, follow these steps:

  1. Download the datasets that require registration (the ImageNet 2012 validation set, the Criteo Terabyte dataset, and the BraTS 2019 training set), and then unzip the files.

  2. Create a working directory with at least 1TB of free space, and define the environment variable MLPERF_SCRATCH_PATH pointing to that directory.

    This directory should be located on the shared storage for the network storage use case, or on the local disk when testing with local data. An example is shown below.
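    The following is a minimal sketch of this step; the mount point and directory name are illustrative and depend on how your storage is mounted:

    # Create the scratch directory on the chosen storage (shared or local)
    mkdir -p /mnt/netapp_share/mlperf_scratch
    # Point the MLPerf code at the working directory
    export MLPERF_SCRATCH_PATH=/mnt/netapp_share/mlperf_scratch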

  3. Run the make prebuild command, which builds and launches the Docker container for the required inference tasks.

    Note: The following commands are all executed from within the running Docker container:
    • Download the pretrained AI models for the MLPerf Inference tasks: make download_model

    • Download additional datasets that are freely downloadable: make download_data

    • Preprocess the data: make preprocess_data

    • Build the required libraries and binaries: make build

    • Build inference engines optimized for the GPU in compute servers: make generate_engines

    • To run an inference workload, run the following as a single command:

make run_harness RUN_ARGS="--benchmarks=<BENCHMARKS> --scenarios=<SCENARIOS>"
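For example, to run the ResNet-50 benchmark in the Offline scenario (the benchmark and scenario names shown are the ones used by the MLPerf Inference harness; substitute the ones you intend to test):

make run_harness RUN_ARGS="--benchmarks=resnet50 --scenarios=Offline"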

AI inference runs

Three types of runs were executed:

  • Single-server AI inference using local storage

  • Single-server AI inference using network storage

  • Multi-server AI inference using network storage