For conceptual guidance, see Run ONNX models with Windows ML.
You can think of the APIs in the Microsoft.WindowsAppSDK.ML NuGet package as comprising these two sets:
- Windows ML APIs. APIs in the Microsoft.Windows.AI.MachineLearning namespace, such as the ExecutionProviderCatalog class and its methods (these are Windows Runtime APIs). These APIs are documented in the topic you're reading now.
- ONNX Runtime APIs. Windows ML implementations (in the Microsoft.WindowsAppSDK.ML NuGet package) of certain APIs from the ONNX Runtime (ORT), such as the OrtCompileApi struct. For documentation, see the ONNX Runtime API docs. For code examples that use these APIs, and more links to documentation, see the Use Windows ML to run the ResNet-50 model tutorial.
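To make the split concrete, here's a minimal sketch that touches one API from each set (the model path is illustrative):

```csharp
// Windows ML API (Microsoft.Windows.AI.MachineLearning): discover and register providers.
var catalog = Microsoft.Windows.AI.MachineLearning.ExecutionProviderCatalog.GetDefault();
await catalog.EnsureAndRegisterAllAsync();

// ONNX Runtime API (Microsoft.ML.OnnxRuntime): create a session for inference.
using var session = new Microsoft.ML.OnnxRuntime.InferenceSession(@"C:\models\model.onnx");
```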
The Microsoft.WindowsAppSDK.ML NuGet package
The Microsoft Windows ML runtime provides APIs for machine learning and AI operations in Windows applications. The Microsoft.WindowsAppSDK.ML NuGet package provides the Windows ML runtime .winmd files for use in both C# and C++ projects.
The pywinrt Python wheels
The Microsoft Windows ML runtime leverages the pywinrt project to provide Python access to the same Windows ML APIs. The package name is winui3-Microsoft.Windows.AI.MachineLearning. Additional packages are required to use the Windows App SDK in Python; for details, see the Run ONNX models with Windows ML topic.
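For example, the wheel itself can be installed with pip (a hedged sketch; remember that additional Windows App SDK packages are also required, as noted above):

```console
pip install winui3-Microsoft.Windows.AI.MachineLearning
```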
Windows ML APIs
ExecutionProviderCatalog class
The ExecutionProviderCatalog class provides methods to discover, acquire, and register AI execution providers (EPs) for use with the ONNX Runtime. It handles the complexity of package management and hardware selection.
This class is the entry point for your app to access hardware-optimized machine learning acceleration through the Windows ML runtime.
```csharp
// Get the default catalog
var catalog = Microsoft.Windows.AI.MachineLearning.ExecutionProviderCatalog.GetDefault();

// Ensure and register all compatible execution providers
await catalog.EnsureAndRegisterAllAsync();

// Use ONNX Runtime directly for inference (using the Microsoft.ML.OnnxRuntime namespace)
```
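Continuing that last comment with the ONNX Runtime C# API, a hedged sketch of inference; the model path, input name, and shape are illustrative and depend on your model:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

// The model path, input name, and shape below are illustrative; match them to your model.
using var session = new InferenceSession(@"C:\models\model.onnx");

// Build a dummy input tensor (batch of one 224x224 RGB image).
var input = new DenseTensor<float>(new[] { 1, 3, 224, 224 });
var inputs = new List<NamedOnnxValue>
{
    NamedOnnxValue.CreateFromTensor("input", input)
};

// Run inference and read the first output as a float tensor.
using var results = session.Run(inputs);
var output = results.First().AsTensor<float>();
Console.WriteLine($"Output element count: {output.Length}");
```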
ExecutionProviderCatalog methods
ExecutionProviderCatalog.GetDefault method
Returns the default ExecutionProviderCatalog instance that provides access to all execution providers on the system.
```csharp
var catalog = Microsoft.Windows.AI.MachineLearning.ExecutionProviderCatalog.GetDefault();
```
ExecutionProviderCatalog.FindAllProviders method
Returns a collection of all execution providers compatible with the current hardware.
```csharp
var catalog = Microsoft.Windows.AI.MachineLearning.ExecutionProviderCatalog.GetDefault();
var providers = catalog.FindAllProviders();

foreach (var provider in providers)
{
    Console.WriteLine($"Found provider: {provider.Name}, Type: {provider.DeviceType}");
}
```
ExecutionProviderCatalog.EnsureAndRegisterAllAsync method
Ensures all compatible execution providers are ready and registers them with ONNX Runtime.
```csharp
var catalog = Microsoft.Windows.AI.MachineLearning.ExecutionProviderCatalog.GetDefault();

try
{
    // This will ensure providers are ready and register them with ONNX Runtime
    await catalog.EnsureAndRegisterAllAsync();
    Console.WriteLine("All execution providers are ready and registered");
}
catch (Exception ex)
{
    Console.WriteLine($"Failed to prepare execution providers: {ex.Message}");
}
```
ExecutionProviderCatalog.RegisterAllAsync method
Registers all compatible execution providers with ONNX Runtime without ensuring they're ready. This registers only the providers that are already present on the machine, avoiding the potentially long download times that EnsureAndRegisterAllAsync can incur.
```csharp
var catalog = Microsoft.Windows.AI.MachineLearning.ExecutionProviderCatalog.GetDefault();
await catalog.RegisterAllAsync();
```
ExecutionProvider class
The ExecutionProvider class represents a specific hardware accelerator that can be used for machine learning inference.
ExecutionProvider methods
ExecutionProvider.EnsureReadyAsync method
Ensures the execution provider is ready for use by downloading and installing any required components.
```csharp
var catalog = Microsoft.Windows.AI.MachineLearning.ExecutionProviderCatalog.GetDefault();
var providers = catalog.FindAllProviders();

foreach (var provider in providers)
{
    await provider.EnsureReadyAsync();
    Console.WriteLine($"Provider {provider.Name} is ready");
}
```
ExecutionProvider.TryRegister method
Attempts to register the execution provider with ONNX Runtime and returns a boolean indicating success.
```csharp
var catalog = Microsoft.Windows.AI.MachineLearning.ExecutionProviderCatalog.GetDefault();
var providers = catalog.FindAllProviders();

foreach (var provider in providers)
{
    await provider.EnsureReadyAsync();
    bool registered = provider.TryRegister();
    Console.WriteLine($"Provider {provider.Name} registration: {(registered ? "Success" : "Failed")}");
}
```
ExecutionProvider properties
| Name | Type | Description |
|---|---|---|
| Name | string | Gets the name of the execution provider. |
| DeviceType | ExecutionProviderDeviceType | Gets the type of device (CPU, GPU, NPU, and so on). |
| IsReady | bool | Gets whether the execution provider is ready for use. |
| LibraryPath | string | Gets the path to the execution provider library. |
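For example, you might use these properties to prepare only a particular class of device. A hedged sketch; the ExecutionProviderDeviceType value name Npu is an assumption for illustration:

```csharp
var catalog = Microsoft.Windows.AI.MachineLearning.ExecutionProviderCatalog.GetDefault();

foreach (var provider in catalog.FindAllProviders())
{
    // Note: the enum value name Npu is assumed here for illustration.
    if (provider.DeviceType == Microsoft.Windows.AI.MachineLearning.ExecutionProviderDeviceType.Npu
        && !provider.IsReady)
    {
        await provider.EnsureReadyAsync();
    }

    Console.WriteLine($"{provider.Name}: ready = {provider.IsReady}, library = {provider.LibraryPath}");
}
```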
Implementation notes
The Windows ML runtime is integrated with the Windows App SDK and relies on its deployment and bootstrapping mechanisms. The runtime:
- Automatically discovers execution providers compatible with current hardware
- Manages package lifetime and updates
- Handles package registration and activation
- Supports different versions of execution providers
Framework-dependent deployment
Windows ML is delivered as a framework-dependent component. This means your app must do one of the following:
- Reference the main Windows App SDK NuGet package by adding a reference to Microsoft.WindowsAppSDK (recommended; see the project-file sketch below).
- Or reference both Microsoft.WindowsAppSDK.ML and Microsoft.WindowsAppSDK.Runtime.
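For example, the recommended option looks like this in a C# project file (the version is a placeholder; use the latest stable release):

```xml
<ItemGroup>
  <!-- Placeholder version; use the latest stable Windows App SDK release. -->
  <PackageReference Include="Microsoft.WindowsAppSDK" Version="1.*" />
</ItemGroup>
```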
For more information on deploying Windows App SDK applications, see the Package and deploy Windows apps documentation.
Using ONNX Runtime with Windows ML
For C++ applications, after registering execution providers, use the ONNX Runtime C API directly to create sessions and run inference.
For C# applications, use the ONNX Runtime directly for inference via the Microsoft.ML.OnnxRuntime namespace.
For Python applications, use the separate ONNX Runtime wheel (onnxruntime) for inference. For the experimental release, use the onnxruntime-winml==1.22.0.post2 package from the index https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple.
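For example, a pip command along these lines should pull the experimental package (the exact flag choice is a hedged suggestion):

```console
pip install onnxruntime-winml==1.22.0.post2 --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple
```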
Python notes
Initialize Windows App SDK
All Windows ML calls should happen after the Windows App SDK is initialized. This can be done with the following code:
```python
from winui3.microsoft.windows.applicationmodel.dynamicdependency.bootstrap import (
    InitializeOptions,
    initialize,
)

with initialize(options=InitializeOptions.ON_NO_MATCH_SHOW_UI):
    # Your Windows ML code here
    pass
```
Registration happens outside of Windows ML
ONNX Runtime keeps the Python and native environments separate, so native registration calls made in the same process have no effect on the Python environment. Register execution providers directly through the ONNX Runtime Python API instead.
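A hedged sketch of what that registration might look like from Python; it assumes the onnxruntime wheel exposes register_execution_provider_library (present in recent ONNX Runtime releases), and the name and path values are illustrative:

```python
import onnxruntime as ort

# Illustrative values; in practice these come from the Windows ML catalog
# (ExecutionProvider.Name and ExecutionProvider.LibraryPath).
provider_name = "ExampleExecutionProvider"
provider_library_path = r"C:\path\to\provider.dll"

# Assumption: this API is available in your onnxruntime build.
ort.register_execution_provider_library(provider_name, provider_library_path)
```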
Use pywinrt in another process
Due to some limitations in the Python projection of WinRT, it's recommended to get the execution provider information in a separate worker process. For a complete example, see the Windows ML Python sample.