Microsoft today announced that Windows ML, the API for running machine learning inference on Windows devices, will soon make its way to more places. Going forward, it'll be available as a standalone package that can be shipped with any Windows app, enabling Windows ML support for CPU inference on Windows 8.1 and newer and GPU hardware acceleration on Windows 10 version 1709 and newer.
That should make it easier for developers to ship AI-imbued Windows apps with feature parity. As for business and consumer users of those apps, the change should translate to improved in-app experiences.
Previously, Windows ML was supported as a built-in Windows component on Windows 10 version 1809 (October 2018 Update) and newer. Microsoft says it’ll continue to update the API with each new version of Windows, but that in the future, there will be a corresponding redistributable Windows ML package with matching new features and optimizations.
“We understand the complexities developers face in building applications that offer a great customer experience, while also reaching their wide customer base,” wrote Windows AI platform senior program manager Nick Geisler in a blog post. “Delivering reliable, high-performance results across the breadth of Windows hardware, Windows ML is designed to make ML deployment easier, allowing developers to focus on creating innovative applications.”
Roughly a year after its release, Windows ML has made its way into a number of popular Windows apps. Windows Photos taps Windows ML to help organize photo collections, while Windows Ink leverages it to analyze handwriting, converting ink strokes into text, shapes, lists, and more. And Adobe Premiere Pro offers a Windows ML-powered feature that crops videos to any aspect ratio while preserving the important action in each frame.
Microsoft also today revealed plans to unify its approach across Windows ML, ONNX Runtime, and DirectML. Specifically, it will bring the Windows ML API and a DirectML execution provider to the ONNX Runtime GitHub project, so that developers can choose the API set that works best for their app. (ONNX Runtime is an inference engine for the Open Neural Network Exchange format, which aims to provide interoperability between machine learning frameworks.) The Windows ML and DirectML preview is available in source form as of this week, with instructions and samples on how to build it, as well as a prebuilt package for CPU deployments.