Conclusion

AI-driven automation and edge computing are leading approaches to help organizations achieve digital transformation and maximize operational efficiency and safety. With edge computing, data is processed much faster because it does not have to travel to and from a data center, so the cost of sending data back and forth to data centers or the cloud is reduced. Lower latency and increased speed are beneficial when businesses must make decisions in near-real time using AI inferencing models deployed at the edge.

NetApp storage systems deliver the same or better performance as local SSD storage and offer the following benefits to data scientists, data engineers, AI/ML developers, and business or IT decision makers:

  • Effortless sharing of data between AI systems, analytics, and other critical business systems. This data sharing reduces infrastructure overhead, improves performance, and streamlines data management across the enterprise.

  • Independently scalable compute and storage to minimize costs and improve resource usage.

  • Streamlined development and deployment workflows using integrated Snapshot copies and clones for instantaneous and space-efficient user workspaces, integrated version control, and automated deployment.

  • Enterprise-grade data protection for disaster recovery and business continuity.

The NetApp and Lenovo solution presented in this document is a flexible, scale-out architecture that is ideal for enterprise-grade AI inference deployments at the edge.
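As an illustration of the Snapshot-and-clone workflow described above, a user workspace can be provisioned as a space-efficient clone of an existing Snapshot copy through the standard Kubernetes CSI snapshot API, which NetApp Trident supports. This is a hypothetical sketch: the storage class, snapshot name, and sizes are assumptions, not values from this document.

```yaml
# Hypothetical example: create a data scientist workspace as a clone of an
# existing ONTAP Snapshot copy via the Kubernetes CSI snapshot API.
# The storage class and VolumeSnapshot name below are assumed, not taken
# from this solution document.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: workspace-clone-user1
spec:
  storageClassName: ontap-flexvol          # assumed Trident-backed class
  dataSource:
    name: training-data-snapshot           # assumed existing VolumeSnapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
```

Because the clone shares blocks with the Snapshot copy, the new workspace is available almost instantly and consumes additional capacity only as it diverges from the source data.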

Acknowledgments

  • J.J. Falkanger, Sr. Manager, HPC & AI Solutions, Lenovo

  • Dave Arnette, Technical Marketing Engineer, NetApp

  • Joey Parnell, Tech Lead E-Series AI Solutions, NetApp

  • Cody Harryman, QA Engineer, NetApp

Where to find additional information

To learn more about the information described in this document, refer to the following documents and websites:

Version history

Version       Date            Document version history
Version 1.0   March 2021      Initial release
Version 2.0   October 2021    Updated with EF and MLPerf Inference v1.1