update: using a single GPU, same results. It’s not the dual-GPU unsupported problem. Something else is hosing my Hyper-V host from the guest install.
The results, well, sucked. Nuked the system totally. So don’t cross the streams!
In my lab I was building an MDT share so I could mass-deploy endpoints. I’m a Hyper-V guy, and a gamer, and an Ethereum miner, so I have a couple of video cards in my rig. Now, I know multiple video cards aren’t supported by Hyper-V RemoteFX. I get it. I don’t know why; it sounds like some code laziness, but whatever. I don’t use RemoteFX acceleration in my guest VMs, so I didn’t think much of it…
Until I stood up an updated Windows 7 Professional SP1 install and injected the Hyper-V Integration Services (the guest additions) into it as part of the normal MDT process.
Apparently the guest driver initialization hosed the host? I didn’t think that was supposed to be possible. I’d file a bug, but I’m a Pro customer now, not Enterprise level, so I can’t.
How to find this? Well, good luck looking in the System event log, where crashes are typically recorded. It’s empty.
The esoteric Microsoft-Windows-Hyper-V-VMMS/Admin log, though? That was pretty helpful. You just have to think to look down in there…
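If you’d rather not click down through Event Viewer, the same log can be pulled from PowerShell. A minimal sketch, assuming the channel name `Microsoft-Windows-Hyper-V-VMMS-Admin` (Event Viewer displays the path with a slash, but `Get-WinEvent` takes the hyphenated channel name; verify yours first):

```powershell
# List the Hyper-V channels on this host to confirm the exact log name
# (channel naming is an assumption and can vary by Windows version).
Get-WinEvent -ListLog *Hyper-V-VMMS*

# Pull the 20 most recent entries from the VMMS Admin channel.
Get-WinEvent -LogName 'Microsoft-Windows-Hyper-V-VMMS-Admin' -MaxEvents 20 |
    Select-Object TimeCreated, Id, LevelDisplayName, Message |
    Format-List
```

To narrow it to just the scary stuff, pipe through `Where-Object { $_.LevelDisplayName -eq 'Error' }` before selecting.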
“The required GPU resources could not be accessed. This server cannot run as a RemoteFX host without a GPU. Verify that the GPU is correctly installed.”
Followed quickly by “The machine contains non-identical GPUs. This is not a supported configuration for RemoteFX.”
So I get it. “You shouldn’t be running in this config”. Fair enough.
Should Hyper-V allow a guest in this configuration to crash the host? Probably not.
Another fun fact: I don’t use the GPUs in the guests anyway…
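Since the guests don’t need the GPUs at all, one hedged workaround is to take every physical GPU out of RemoteFX’s pool so VMMS stops trying to initialize them as a RemoteFX host. This is a sketch, assuming your Hyper-V PowerShell module ships the RemoteFX cmdlets (it may not on every SKU; check first) and an elevated prompt:

```powershell
# Cmdlet availability varies by Hyper-V version; confirm with:
#   Get-Command *RemoteFXPhysicalVideoAdapter*
# Mark all physical GPUs as unavailable to RemoteFX.
Get-VMRemoteFXPhysicalVideoAdapter |
    Disable-VMRemoteFXPhysicalVideoAdapter
```

`Enable-VMRemoteFXPhysicalVideoAdapter` reverses it if you ever do want RemoteFX back.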