Integrating an NVidia GPU with Hyper-V VMs and Linux Containers on Windows

I have written in the past about the differences between running Linux and Windows Docker containers and about some of the challenges with Windows containers. One such challenge has been interacting with AI hardware from vendors like NVidia and Intel.

Today, building on a capability Microsoft introduced in 2016, I will walk you through how to expose an NVidia Tesla GPU to VMs and to Linux Containers on Windows.

Enabling Feature: Discrete Device Assignment

Starting with Windows Server 2016, Microsoft shipped a Hyper-V capability called “Discrete Device Assignment” (DDA): https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/deploy/deploying-graphics-devices-using-dda. Notably, this gives us the ability to dedicate physical devices to VMs, which is exactly how GPU-optimized VMs are delivered in Azure: https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-gpu and on Azure Stack Edge: https://azure.microsoft.com/en-us/blog/microsoft-is-expanding-the-azure-stack-edge-with-nvidia-t4-gpu-preview/.
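Before assigning anything, it can help to see which devices, if any, have already been dismounted from the host and are ready to assign. A minimal sketch using standard Hyper-V module cmdlets (the VM name “Win10IoT” is simply the example used later in this post):

#List devices dismounted from the host and available for assignment
Get-VMHostAssignableDevice
#List devices already assigned to a given VM
Get-VMAssignableDevice -VMName "Win10IoT"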

Enabling NVidia GPU for VMs

Below is a simple PowerShell script, based on https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/deploy/deploying-graphics-devices-using-dda#configure-the-vm-for-dda, which dismounts the GPU from the host and grants the VM access to it.

#Configure the VM for Discrete Device Assignment
$vm = "Win10IoT"
#Set automatic stop action to TurnOff
Set-VM -Name $vm -AutomaticStopAction TurnOff
#Enable Write-Combining on the CPU
Set-VM -GuestControlledCacheTypes $true -VMName $vm
#Configure 32 bit MMIO space
Set-VM -LowMemoryMappedIoSpace 3Gb -VMName $vm
#Configure Greater than 32 bit MMIO space
Set-VM -HighMemoryMappedIoSpace 33280Mb -VMName $vm

#Find the Location Path and disable the Device
#Enumerate all PNP Devices on the system
$pnpdevs = Get-PnpDevice -presentOnly
#Select only those devices that are Display devices manufactured by NVIDIA
$gpudevs = $pnpdevs |where-object {$_.Class -like "Display" -and $_.Manufacturer -like "NVIDIA"}
#Select the location path of the first device that's available to be dismounted by the host.
$locationPath = ($gpudevs | Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths).data[0]
#Disable the PNP Device
Disable-PnpDevice -InstanceId $gpudevs[0].InstanceId -Confirm:$false

#Dismount the Device from the Host
Dismount-VMHostAssignableDevice -force -LocationPath $locationPath

#Assign the device to the guest VM.
Add-VMAssignableDevice -LocationPath $locationPath -VMName $vm
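After the script runs, it is worth confirming the assignment and knowing how to give the GPU back to the host later. A rough sketch re-using the $vm, $locationPath and $gpudevs variables from the script above (the reversal steps are commented out so they are not run by accident):

#Confirm the GPU now shows up as assigned to the guest VM
Get-VMAssignableDevice -VMName $vm

#To return the GPU to the host later, reverse the steps:
#Remove-VMAssignableDevice -LocationPath $locationPath -VMName $vm
#Mount-VMHostAssignableDevice -LocationPath $locationPath
#Enable-PnpDevice -InstanceId $gpudevs[0].InstanceId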

Below is a screenshot showing Hyper-V configured with DDA, granting the guest VM dedicated access to the NVidia Tesla GPU. The host is named “Server2016” and the VM is named “Win10IoT”.

While shown on Windows Server 2016 above, this same feature works on Windows Client.

Enabling NVidia GPU for Linux Containers on Windows

As discussed here: https://docs.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/linux-containers, Linux Containers on Windows (LCOW) are enabled via Hyper-V and a special VM. Because of this, enabling GPU access for LCOW works much like enabling it for ordinary VMs.

Whether you are using Moby, Microsoft’s Moby build (used on Azure IoT Edge), or Docker Desktop, the special VM is named ‘DockerDesktopVM’.
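Because that utility VM is just another Hyper-V VM, the same DDA commands from earlier apply; only the VM name changes. A sketch, assuming the GPU has already been dismounted from the host with the script above and that your utility VM really is named ‘DockerDesktopVM’:

#Point the earlier DDA configuration at the LCOW utility VM
$lcowVm = "DockerDesktopVM"
Set-VM -Name $lcowVm -AutomaticStopAction TurnOff
Set-VM -GuestControlledCacheTypes $true -VMName $lcowVm
Set-VM -LowMemoryMappedIoSpace 3Gb -VMName $lcowVm
Set-VM -HighMemoryMappedIoSpace 33280Mb -VMName $lcowVm
#Assign the dismounted GPU (re-using $locationPath from the earlier script)
Add-VMAssignableDevice -LocationPath $locationPath -VMName $lcowVm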
