
Deploy Grafana Dashboard


After everything is deployed, we run inference on new data. The models predict failure of network device equipment, and the predictions are stored in an Iguazio TimeSeries table. You can visualize the results with Grafana, which is integrated into the platform along with Iguazio's security and data-access policies.
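Before building the dashboard, it can help to confirm that predictions are landing in the table. The sketch below reads recent rows with the v3io_frames client; the service address, container name, and table path are assumptions to adapt to your environment.

```python
# A minimal sketch of reading stored predictions back from the
# TimeSeries (TSDB) table with the v3io_frames client. The service
# address, container, and table path are placeholders; substitute
# the values used by your deployment.
import v3io_frames as v3f

client = v3f.Client("framesd:8081", container="bigdata")  # hypothetical address/container

# Read the last hour of predictions into a pandas DataFrame.
df = client.read(
    backend="tsdb",
    table="netops/predictions",  # hypothetical table path
    start="now-1h",
    end="now",
)
print(df.head())
```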

You can deploy the dashboard by importing the provided JSON file into the Grafana interface in the cluster.

  1. To verify that the Grafana service is running, look under Services.

    Figure: The Grafana service listed under Services

  2. If it is not present, deploy an instance from the Services section:

    1. Click New Service.

    2. Select Grafana from the list.

    3. Accept the defaults.

    4. Click Next Step.

    5. Enter your user ID.

    6. Click Save Service.

    7. Click Apply Changes at the top.

  3. To deploy the dashboard, download the file NetopsPredictions-Dashboard.json through the Jupyter interface.

    Figure: Downloading NetopsPredictions-Dashboard.json from the Jupyter interface

  4. Open Grafana from the Services section and import the dashboard.

    Figure: Importing the dashboard in Grafana

  5. Click Upload *.json File and select the file that you downloaded earlier (NetopsPredictions-Dashboard.json). The dashboard appears after the upload completes. For a scripted alternative to the manual upload, see the sketch after the figure below.

Figure: The NetopsPredictions dashboard after import
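If you prefer to automate the import, Grafana's HTTP API accepts the same JSON. The sketch below is a minimal example; the Grafana URL and API token are placeholders for your service endpoint and a key you create in Grafana.

```python
# A minimal sketch of importing the dashboard through Grafana's HTTP API
# instead of the web UI. GRAFANA_URL and API_TOKEN are placeholders.
import json

import requests

GRAFANA_URL = "http://grafana:3000"   # hypothetical service address
API_TOKEN = "YOUR_API_TOKEN"          # create an API key in Grafana first

with open("NetopsPredictions-Dashboard.json") as f:
    dashboard = json.load(f)

dashboard["id"] = None  # clear the id so Grafana treats this as a new dashboard

resp = requests.post(
    f"{GRAFANA_URL}/api/dashboards/db",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"dashboard": dashboard, "overwrite": True},
)
resp.raise_for_status()
print(resp.json())  # returns the dashboard UID and URL on success
```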

Deploy Cleanup Function

When you generate a lot of data, it is important to keep things clean and organized. To do so, deploy the cleanup function with the cleanup.ipynb notebook.
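The notebook's contents are not reproduced here, but the core of such a function is straightforward. Below is a minimal sketch of the kind of retention logic it might run; the data path and seven-day window are assumptions, and the actual cleanup.ipynb notebook defines its own targets.

```python
# A minimal sketch of cleanup logic for generated data files. The data
# path and retention window are assumptions, not the notebook's actual
# configuration.
import os
import time

DATA_DIR = "/v3io/bigdata/netops"      # hypothetical output location
MAX_AGE_SECONDS = 7 * 24 * 60 * 60     # keep one week of data

now = time.time()
for root, _dirs, files in os.walk(DATA_DIR):
    for name in files:
        path = os.path.join(root, name)
        if now - os.path.getmtime(path) > MAX_AGE_SECONDS:
            os.remove(path)
            print(f"removed {path}")
```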

Benefits

NetApp and Iguazio speed up and simplify the deployment of AI and ML applications by building in essential frameworks, such as Kubeflow, Apache Spark, and TensorFlow, along with containerization and orchestration tools such as Docker and Kubernetes. By unifying the end-to-end data pipeline, NetApp and Iguazio reduce the latency and complexity inherent in many advanced computing workloads, effectively bridging the gap between development and operations. Data scientists can run queries on large datasets and securely share data and algorithmic models with authorized users during the training phase. After the containerized models are ready for production, you can easily move them from development environments to operational environments.