July 7, 2024

Finding EFI Firmware That Is Compatible

I run some QEMU/KVM virtual machines as part of one of my GitHub Actions workflows. To reduce the workflow’s runtime and to help make it reproducible, it uses pre-built disk images and pre-defined XML domain configurations for libvirt. That worked well until I changed the workflow to use the new Ubuntu 24.04 images. Afterwards, libvirt rejected the domain configuration with an error I had not encountered before:

libvirt.libvirtError: operation failed: Unable to find 'efi' firmware that is compatible with the current configuration

Why does a virtual machine image suddenly stop working just because I upgraded Ubuntu? I was stumped, particularly because a quick web search did not turn up much useful information.

The offending domain configuration looked as follows:

<os firmware='efi'>
  <type arch='x86_64' machine='q35'>hvm</type>
  <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
  <nvram template='/usr/share/OVMF/OVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/mymachine_VARS.fd</nvram>
  <boot dev='hd'/>
</os>

<loader/> describes the UEFI firmware to use. <nvram/> defines the path to the firmware’s variable store. The variable store is machine-specific and holds, among other things, the path to the boot loader. See libvirt’s Domain XML format documentation for further information.

Thankfully, the mailing list message accompanying the patch that introduced the error message into libvirt explains what the problem is:

The [old] message can be misleading, because it seems to suggest that no firmware of the requested type is available on the system.

What actually happens most of the time, however, is that despite having multiple firmwares of the right type to choose from, none of them is suitable because of lacking some specific feature or being incompatible with some setting that the user has explicitly enabled.

In Stephen Finucane’s helpful article UEFI Support in Libvirt, I learnt that we can ask libvirt what firmware it knows about by running:

$ virsh domcapabilities --machine q35 | xmllint --xpath '/domainCapabilities/os' -

And sure enough, /usr/share/OVMF/OVMF_CODE.fd is no longer there because it has been removed[1] from the ovmf package shipped with Ubuntu 24.04:

<os supported="yes">
  <enum name="firmware">
    <value>efi</value>
  </enum>
  <loader supported="yes">
    <value>/usr/share/OVMF/OVMF_CODE_4M.ms.fd</value>
    <value>/usr/share/OVMF/OVMF_CODE_4M.secboot.fd</value>
    <value>/usr/share/OVMF/OVMF_CODE_4M.fd</value>
    <enum name="type">
      <value>rom</value>
      <value>pflash</value>
    </enum>
    <enum name="readonly">
      <value>yes</value>
      <value>no</value>
    </enum>
    <enum name="secure">
      <value>yes</value>
      <value>no</value>
    </enum>
  </loader>
</os>

Solving the Problem of the Missing Firmware

We have multiple options for dealing with the missing firmware, and none of them is fun. The reason is that the firmware (OVMF_CODE.fd) and its variable store (OVMF_VARS.fd) are inextricably linked: you cannot simply switch the firmware to OVMF_CODE_4M.fd and keep using the old variable store. The virtual machine simply will not boot.

Recreating the Virtual Machine Images

As I use Packer, re-creating the virtual machine with a different firmware image is a matter of minutes. While older versions of Debian and Ubuntu already come with OVMF_CODE_4M.fd, Fedora has since switched from a raw image to a qcow2 file (OVMF_CODE_4M.qcow2), making it impossible to share the same virtual machine image across distributions. If that is important to you, you might fare better with Adding OVMF_CODE.fd Back In.

Migrating to OVMF_CODE_4M.fd

If re-creating the virtual machine is difficult, you can migrate to OVMF_CODE_4M.fd instead. Debian 12 and Ubuntu 24.04 include the program 2M_VARS-to-4M_VARS.sh (direct link) in the ovmf package. According to the HOWTO, converting the old variable store to the 4M variant is a matter of running:

$ 2M_VARS-to-4M_VARS.sh -i mymachine_VARS.fd 
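After the conversion, the domain configuration has to point at the 4M firmware and variable-store template as well. A sketch of the adjusted <os/> section, assuming the default Ubuntu paths and that the converted variable store keeps its original location:

```xml
<os firmware='efi'>
  <type arch='x86_64' machine='q35'>hvm</type>
  <!-- 4M firmware image and the matching variable-store template -->
  <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE_4M.fd</loader>
  <nvram template='/usr/share/OVMF/OVMF_VARS_4M.fd'>/var/lib/libvirt/qemu/nvram/mymachine_VARS.fd</nvram>
  <boot dev='hd'/>
</os>
```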

Adding OVMF_CODE.fd Back In

OVMF is portable and can be copied between systems. Unfortunately, it is insufficient to grab OVMF_CODE.fd and OVMF_VARS.fd from a system that still has them and put them into /usr/share/OVMF. If you do this and run virsh domcapabilities, they will not show up, and libvirt will not accept the domain configuration despite the firmware image and variable store being in the right place.

You also need a matching firmware descriptor in /usr/share/qemu/firmware. These descriptors not only record the known firmware images but also the supported machine types and features like Secure Boot. It is usually sufficient to grab the JSON file that refers to OVMF_CODE.fd and copy it over. The firmware should then appear in virsh domcapabilities.
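The descriptors follow QEMU’s firmware metadata format, documented in docs/interop/firmware.json in the QEMU sources. For illustration, a trimmed-down sketch of what a descriptor for the copied 2 MB firmware might look like; the description and file names here are assumptions, not what any distribution actually ships:

```json
{
    "description": "OVMF with 2MB flash, copied from an older release",
    "interface-types": ["uefi"],
    "mapping": {
        "device": "flash",
        "executable": {
            "filename": "/usr/share/OVMF/OVMF_CODE.fd",
            "format": "raw"
        },
        "nvram-template": {
            "filename": "/usr/share/OVMF/OVMF_VARS.fd",
            "format": "raw"
        }
    },
    "targets": [
        {
            "architecture": "x86_64",
            "machines": ["pc-i440fx-*", "pc-q35-*"]
        }
    ],
    "features": [],
    "tags": []
}
```

The descriptor files are evaluated in lexical order, so pick a file name (for example, something like 60-ovmf-2m.json) whose numeric prefix places it relative to the descriptors your distribution already ships.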

Resetting the Variable Store

That is the approach for adventurous people. Change the firmware to OVMF_CODE_4M.fd and reset the variable store the next time you start the domain:

$ virsh start --reset-nvram domain

Then, use the UEFI shell to manually select the boot loader, start the system and re-install grub-efi. If the guest is a Windows machine, it should boot right away without you ever seeing the UEFI shell (see below for why this works).
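The manual part might look roughly like the following sketch; the exact file-system mapping and boot loader path depend on the guest:

```shell
# At the UEFI shell prompt (firmware syntax, not a POSIX shell):
#   Shell> FS0:                       # switch to the EFI System Partition
#   FS0:\> cd \EFI\ubuntu             # \EFI\debian on Debian, and so on
#   FS0:\EFI\ubuntu\> grubx64.efi     # start the boot loader by hand
#
# Then, inside the booted guest, re-create the boot entry in the fresh
# variable store:
#   $ sudo grub-install
#   $ sudo update-grub
```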

Solving the Problem of the Incompatible Firmware

Not only can the firmware be absent. It can also be incompatible with the domain configuration, resulting in the same error message. Usually, that happens when creating a domain for the first time.

The key to finding the problem is to look at the domain configuration and compare it with the output of virsh domcapabilities.

Let’s suppose the domain configuration looks as follows:

<os firmware='efi'>
  <type arch='x86_64' machine='pc'>hvm</type>
  <loader secure='no'/>
  <boot dev='hd'/>
</os>

To list the supported configurations, run:

$ virsh domcapabilities --machine pc | xmllint --xpath '/domainCapabilities/os' -

On my machine, it yields

<os supported="yes">
  <enum name="firmware">
    <value>efi</value>
  </enum>
  <loader supported="yes">
    <value>/usr/share/edk2/ovmf/OVMF_CODE_4M.qcow2</value>
    <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
    <enum name="type">
      <value>rom</value>
      <value>pflash</value>
    </enum>
    <enum name="readonly">
      <value>yes</value>
      <value>no</value>
    </enum>
    <enum name="secure">
      <value>no</value>
    </enum>
  </loader>
</os>

As we gather from the output, the configuration above should work. But if we were to enable Secure Boot, it would no longer work because the only supported value for secure is no.
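To make that concrete, this minimal variation of the configuration would fail with the familiar “Unable to find 'efi' firmware” error on this machine, because no installed firmware offers Secure Boot for the pc machine type:

```xml
<os firmware='efi'>
  <type arch='x86_64' machine='pc'>hvm</type>
  <!-- secure='yes' is not among the supported values listed above -->
  <loader secure='yes'/>
  <boot dev='hd'/>
</os>
```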

Generally speaking, there are two ways to fix such a problem:

  • Change the domain configuration.
  • Install firmware that supports the domain configuration.

As such problems usually manifest when creating the domain for the first time, changing the domain configuration to match the capabilities is the best option. Linux distributions typically ship firmware covering all supported combinations, so if a combination is rejected, it is most likely not supported at all.

The only case I can think of where installing firmware makes sense is to test old configurations no longer supported by newer Linux distributions. One example would be using firmware for 2 MB flash devices that are no longer present on Ubuntu 24.04 and newer. In those cases, you can follow the advice in the section Adding OVMF_CODE.fd Back In.

The Easy Way Out

The hands-down easiest way to avoid any problems with UEFI firmware is to forego UEFI boot and rely on the venerable BIOS to boot. You have never seen your problems go away this fast.

If that is not an option, it is time to do it the Windows way. Also known as doing it wrong. In addition to placing the boot loader into the vendor-specific directory (\EFI\Microsoft\Boot) in the EFI System Partition (ESP), the Windows installer puts it into the Removable Media Path (\EFI\boot\), too. The reason is that every firmware knows how to start a system from the Removable Media Path because that is how installers are started from removable media (hence the name). By putting the boot loader into the Removable Media Path, Windows avoids problems with buggy firmware that mishandles boot entries.

We can do the same and force the installation of grub-efi onto the Removable Media Path. Because the Removable Media Path is known, we no longer need a variable store (OVMF_VARS.fd and friends) that tells the firmware where to look for the boot loader. Consequently, we can drop the <nvram/> entry from our domain configuration and tell libvirt to automatically select whatever firmware it finds:

<os firmware='efi'>
  <type arch='x86_64' machine='q35'>hvm</type>
  <loader secure='no'/>
  <boot dev='hd'/>
</os>

That should work everywhere as long as there is some UEFI firmware on the system. Use secure='yes' for UEFI Secure Boot.

This excellent post on the Unix & Linux StackExchange provides a deep dive into the different boot loaders. Force grub-efi installation to the removable media path on the Debian Wiki[2] explains why using the Removable Media Path is wrong, and how to force grub-efi into the Removable Media Path on Debian. Microsoft has instructions for doing it manually on Ubuntu[3].
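On Debian and Ubuntu, the debconf flag described in the wiki article can be set non-interactively; a sketch, assuming the grub-efi-amd64 package is installed in the guest:

```shell
# Tell the grub-efi-amd64 package to also copy the boot loader to the
# Removable Media Path (\EFI\boot) and re-run the installation:
echo 'grub-efi-amd64 grub2/force_efi_extra_removable boolean true' \
    | sudo debconf-set-selections
sudo dpkg-reconfigure --frontend noninteractive grub-efi-amd64
```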


  1. The firmware image OVMF_CODE.fd (and its similarly named siblings) is for guests with a 2 MB flash device. 2 MB flash is no longer considered sufficient for use with Secure Boot and no longer seems to work with recent versions of Windows 11. ↩︎

  2. If you ever need to know more about UEFI, the UEFI page on the Debian Wiki is a great start. ↩︎

  3. Because relying on the Removable Media Path is apparently the only way to boot a Linux guest on a Hyper-V Generation 2 virtual machine. ↩︎