A lot has been said about UEFI Secure Boot, and how it essentially puts the bulk of desktop and laptop computers sold today under the control of Microsoft. On the other side of the fence, proponents of the technology keep arguing that bootloader signing is actually a useful security measure, and that Secure Boot, as an implementation of it, should not be dismissed as an evil scheme for world domination. Both sides rarely engage with each other’s arguments, which makes for a sterile debate. To improve upon this status quo, I would like to discuss here some limitations of Secure Boot in the realm of computer security, and show how embracing a more open-minded design could greatly mitigate these problems.
On the necessity of a signature check bypass
First, I just have to point out that there are actually few use cases for bootloader-level malware. Writing one has a complexity similar to that of writing a small OS, while making it do something actually useful for its author, in a fashion that survives target OS updates, is as hard as reverse-engineering the target OS altogether. Even when someone manages to do both, there is still lots of work to do before the malware can actually be committed to the UEFI boot partition of the victim and thus booted by the firmware, since this area of the HDD is generally kept under fairly tight scrutiny by all modern OSs. With this in mind, for typical malware tasks such as personal information theft and denial of service attacks, malware authors are generally better off messing with higher-level layers of the OS stack, such as web browsers, which are directly able to perform the relatively advanced tasks that such malware aims at.
That being said, there exist a few malware tasks that are best performed by code operating close to the bottom of the target OS’ stack. Rendering a computer unbootable is a very good example of such a task. However, it so happens that Secure Boot, in its current design, actually makes this specific attack easier to carry out. On a traditional, unsigned operating system, achieving such a result would involve significantly damaging one of the core OS components involved in booting, in a manner that totally breaks its operation. Whereas with Secure Boot, all one has to do is to alter a single byte of one of these binaries. That byte can even be located anywhere: error message strings, default class members that are never used, deprecated API function stubs… just manage to flip a bit in a binary that is to be loaded by the UEFI firmware, and on subsequent boots, the machine will refuse to start due to a signature check failure. Because, you know, that keeps you safe.
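To make the fragility concrete, here is a toy model of a firmware integrity check (a hash comparison standing in for a real signature verification; none of this is actual UEFI code). Flipping a single bit anywhere in the image, even in a string that is never displayed, is enough to make the check fail and the machine refuse to boot:

```python
import hashlib

def firmware_boot_check(binary: bytes, trusted_digest: bytes) -> bool:
    """Toy firmware policy: boot only if the binary's hash matches
    the trusted digest (a stand-in for signature verification)."""
    return hashlib.sha256(binary).digest() == trusted_digest

bootloader = b"BOOTX64.EFI contents, including unused error strings..."
trusted = hashlib.sha256(bootloader).digest()
assert firmware_boot_check(bootloader, trusted)  # pristine binary boots

# Flip a single bit anywhere in the image, e.g. inside a message string:
corrupted = bytearray(bootloader)
corrupted[10] ^= 0x01
assert not firmware_boot_check(bytes(corrupted), trusted)  # boot refused
```

The point is that the check is all-or-nothing: the firmware cannot tell a malicious modification from a harmless one, so any single-bit change bricks the boot path.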
Think this is a bad scenario? How about having it happen without even requiring the intervention of malware? HDDs and SSDs can only store data for a finite time, because their contents get gradually corrupted by various physical phenomena, in a manner that is at first silently fixed by error correction codes, then becomes permanent. The most common method used to mitigate this outcome is to keep multiple copies of vital OS binaries around, and to replace corrupted ones with fresh copies when problems occur on boot. Windows, Mac OS X and ChromeOS are three examples of mainstream OSs which support such an automated recovery scenario to some degree. But with Secure Boot’s current design, such a feature is effectively impossible to implement: if a binary signature check fails, the UEFI firmware will refuse to boot the OS altogether, and recovery features will thus never get a chance to run.
To avoid this outcome, a Secure Boot-equipped system should not totally abort OS boot in the event of a signature check failure. Instead, it should at least offer a way to boot the OS anyway after displaying a scary data corruption warning, so as to let users make emergency backups of their data before they send their computer out for lengthy repairs. Ideally, OSs should also be able to bypass this default behaviour, for example by providing “recovery kernels” which are booted in the event of a signature check failure, and silently restore a fresh copy of corrupted system files from a trusted location without the user ever knowing about it. In case you wonder where I get such a scenario from, this is my understanding of how ChromeOS’ integrity checking mechanism works.
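The fallback policy I am proposing can be sketched as a small decision function (again a toy model, with hashes standing in for signatures and string return values standing in for firmware actions):

```python
import hashlib

def verify(binary: bytes, trusted_digest: bytes) -> bool:
    """Stand-in for the firmware's signature check."""
    return hashlib.sha256(binary).digest() == trusted_digest

def boot(main, recovery, trusted_main, trusted_recovery, user_accepts_warning):
    """Proposed policy: on a failed check, try a trusted recovery
    kernel first, then offer a warned boot instead of a hard abort."""
    if verify(main, trusted_main):
        return "boot main"
    if verify(recovery, trusted_recovery):
        return "boot recovery"    # restores fresh system files, a la ChromeOS
    if user_accepts_warning:
        return "boot unverified"  # after a scary data corruption warning
    return "abort"                # only when the user declines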
Towards a secure handling of Platform Keys
Now, let’s discuss the matter of key management. Since binary signing uses public-key cryptography, checking a bootable binary’s signature involves the use of a public cryptographic key, which must first be enrolled inside the firmware’s key management database. In Secure Boot’s design, however, not all such public keys are considered equal. There is a single “Platform Key”, which can be used to check binary signatures but also, as a bonus, holds absolute control over the whole key management mechanism. If you want to enroll or delete additional “Key Exchange” public keys, replace a previously installed Platform Key or Key Exchange Key, or disable Secure Boot altogether, using a standard method that works on all UEFI-equipped computers, you will need knowledge of the private key associated with the Platform Key.
And as a matter of fact, it so happens that most Windows PCs sold today share a common Platform Key, which is under the control of Microsoft Corporation.
Now, I would like you to think for a second about how big of a security vulnerability this is. Recent incidents at Comodo and DigiNotar should remind us that humans are bad at keeping secrets, and that no private key can be guaranteed never to leak, even if you spend ridiculous amounts of money trying to prevent it. With Secure Boot’s current design, if the private key associated with Microsoft’s Platform Key were to leave the walls of the company and end up in evil hands, the result would be a worldwide security breach across all computers sharing that Platform Key. Windows exploits notwithstanding, attackers with physical access to a target machine could also exploit Secure Boot’s obtuse current design to replace Microsoft’s Platform Key with their own, and wipe all other public keys from the firmware database, effectively creating an unbootable computer that cannot be fixed by any standard means.
At this point, I should clarify that I understand why there is such a thing as an almighty Platform Key in Secure Boot’s design. It makes sense in a firmware like UEFI, where all configuration tasks can be carried out by OS binaries and where this is the only standard way to do things. And this API-based design, in turn, also makes sense, because an API is easier to standardize across multiple device form factors and hardware architectures than any other programming construct. Yet even with this in mind, putting the same Platform Key on billions of computers is worse than stupid: it is a crime against computer security and a disaster waiting to happen.
What should be done, instead, is to use randomly generated, machine-specific Platform Keys, which are subsequently used to enroll whatever OS-specific public keys need to be present. For maximal security, it should also be possible to reset said Platform Key with minimal disturbances. Here is an example of how it could be done in practice:
- UEFI firmware is initially in a default, platform key-less state.
- A standard UEFI function, which could be called GeneratePlatformKey() as an example, is used to generate and enroll a first Platform Key and the associated private key.
- Said private key is subsequently used to enroll OS-specific public keys.
- Subsequent calls to GeneratePlatformKey() from OS code require explicit user validation of a scary warning to be carried out. Doing it this way is possible because UEFI provides standard mechanisms for managing such scary warnings.
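The flow above can be modelled in a few lines. In this sketch, a hash of a random token stands in for a real public/private keypair, and `GeneratePlatformKey()` is the hypothetical UEFI call named in the list above, not an existing API:

```python
import os, hashlib

class FirmwareKeyStore:
    """Toy model of the proposed per-machine Platform Key scheme."""

    def __init__(self):
        self.platform_key = None     # "public" half, kept by the firmware
        self.key_exchange_keys = []  # OS-specific public keys

    def generate_platform_key(self, user_confirmed=False):
        # Regenerating an existing PK must go through a scary warning.
        if self.platform_key is not None and not user_confirmed:
            raise PermissionError("PK reset needs explicit user validation")
        private = os.urandom(32)                           # machine-specific
        self.platform_key = hashlib.sha256(private).digest()
        self.key_exchange_keys.clear()                     # old keys invalidated
        return private  # handed to the OS installer, then discarded

    def enroll_kek(self, pk_private, os_public_key):
        # Enrollment requires proof of knowledge of the PK's private half.
        if hashlib.sha256(pk_private).digest() != self.platform_key:
            raise PermissionError("Platform Key proof failed")
        self.key_exchange_keys.append(os_public_key)
```

A leaked Platform Key now compromises exactly one machine, and a PK reset wipes the Key Exchange Keys so that an attacker cannot smuggle a stale enrollment past the new key.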
In the common case of automated OS installation by OEMs, it is also worth pointing out that there is no step in this process that fundamentally cannot be automated. Still, additional changes are in order before our improved Secure Boot achieves feature and convenience parity with the current design…
On public key updates and self-signed binaries
So far, we have taken satisfactory steps towards securing the Platform Key against the effects of leaks. However, what about OS-specific public keys? These can be compromised too, and it would be nice if trusted OSs could silently replace them as part of their standard security update process, without requiring explicit user or OEM intervention. To do so, we could mirror Secure Boot’s current design for Platform Keys, by making it possible for a bootable binary to replace a public key in the firmware database. Just like with Platform Keys today, the OS would have to prove knowledge of its private key to do so, for example by signing some random firmware-generated data with it.
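The challenge-response part of this proposal looks like the following sketch. To keep it runnable with the standard library alone, an HMAC over a shared secret stands in for a real asymmetric signature (an actual design would use RSA or ECDSA, with the firmware holding only the public key):

```python
import hmac, hashlib, os

def firmware_challenge() -> bytes:
    """Firmware generates fresh random data for the OS to sign,
    so that old proofs cannot be replayed."""
    return os.urandom(32)

def os_sign(private_key: bytes, challenge: bytes) -> bytes:
    # Stand-in for an asymmetric signature over the challenge.
    return hmac.new(private_key, challenge, hashlib.sha256).digest()

def firmware_allow_key_replacement(enrolled_key, proof, challenge) -> bool:
    """Permit a key-database update only if the caller proved
    knowledge of the currently enrolled key."""
    expected = hmac.new(enrolled_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)
```

Only a binary that can answer the firmware’s fresh challenge with a valid signature gets to swap its own key, which is exactly the property the prose above asks for.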
However, it’s worth pointing out that for every convenience feature of this sort, there is an associated security risk. In this case, if attackers could get their hands on a private key before the OS manufacturer has replaced the corresponding public key, they would be able to render a number of computers temporarily unbootable by replacing the OS manufacturer’s public key with their own in the firmware database. This is not the same as the binary corruption case discussed above, because the problem here is not that the binary signature is wrong, but rather that the key used to sign it is unknown. Thus, automated recovery scenarios based on restoring fresh binaries would not work, and a more complex repair procedure would be in order. In the meantime, it is, as before, a good thing to be able to boot the compromised OS in spite of the failed signature check, so as to make preparations before sending it for repairs (which would be necessary anyway, since someone with knowledge of an OS’ private key can do much more harm than just altering the firmware database).
On a slightly related note, the common use case of installing an OS from a trusted installation medium (such as the official DVD/liveUSB/whatever) also prompts me to request the inclusion of a standard procedure for handling self-signed binaries in a Secure Boot environment. Obviously, self-signed binaries are not great, because if someone can tamper with the binary itself, then he can tamper with the public key that accompanies it as well. However, nothing prevents security-conscious users from checking that the key included in the self-signed binary is the right one before installing the OS. Thus, it should be enough to have bootable self-signed binaries come with some sort of scary firmware warning, providing the option to either let the binary run once or enroll its signing key permanently, so as to make sure users think twice before booting untrusted OS binaries.
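As a sketch, the self-signed policy boils down to a three-way choice presented by the firmware warning dialog (the string values here are illustrative placeholders, not proposed spec wording):

```python
def handle_self_signed(binary_key: bytes, enrolled_keys: set, user_choice):
    """Proposed policy for a bootable self-signed binary.
    user_choice is the answer to the scary firmware warning."""
    if binary_key in enrolled_keys:
        return "boot"              # key already trusted, no warning shown
    if user_choice == "run once":
        return "boot once"         # key is NOT remembered
    if user_choice == "enroll":
        enrolled_keys.add(binary_key)
        return "boot"              # key trusted for future boots too
    return "abort"                 # user declined the warning
```

The "run once" path covers trying out a live medium, while "enroll" covers a permanent OS installation, after the user has had a chance to check the key.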
Think that leaving the option to boot untrusted binaries is too much of a security risk? As a matter of fact, one has to do this at some point even with Secure Boot’s current design. In said design, the first OS that gets installed initializes the Secure Boot functionality and enrolls its public key as the machine’s Platform Key, thus preventing the installation of other OSs that are signed with a different signing key. Or, to say it otherwise, a self-signed OS install disc first boots without warning and manages to install its public key in the firmware database, before other OS install discs which attempt to do things in exactly the same way are prevented from booting altogether. Well, that’s UEFI logic for you…
When discussing the issues of Secure Boot as it exists today, many people get stuck at the “It gives Microsoft too much control over our computer” stage. As has been shown here, however, there are a lot of other problems with Secure Boot’s spec and current usage, which hamper its security, reliability, and practicality as a security measure. More than just complaining about this fact, I have also proposed the following changes to the UEFI spec, which should be carried out, in my opinion, before Secure Boot can proudly bear the “Secure” part of its name:
- For security and reliability reasons, there should be a (scary) way to boot an OS even when the firmware signature check has failed
- To avoid this worst-case scenario, automated OS recovery from voluntary and involuntary kernel corruption, a la ChromeOS, should also be supported
- Platform Keys should not be governed by OS manufacturers, but rather be machine-specific signing keys that may be reset through a standard procedure
- For feature parity with the current spec, OSs should be able to reset their public “Key Exchange” key without having knowledge of the Platform Key
- For feature parity and convenient OS installation, self-signed binary and foreign public key enrollment should also be supported, with appropriate precautions
As it stands, Secure Boot is, in my opinion, a rather poor implementation of bootloader and OS kernel signing, one that feels more like something designed for platform control than for OS security, and thus kind of deserves the ridiculous “Restricted Boot” nickname which it has been given by some FSF extremists. I profoundly dislike the way this UEFI feature gives a bad name to kernel signing, which is a worthwhile security measure, and I wish members of the UEFI Forum had thought a bit more about what their design entailed, and how appropriate the naming was, before unleashing this technological slander upon the PC world.