Although this seems to have become a minority opinion among the main actors of the OS market, I firmly believe that as soon as a piece of computer hardware becomes widely used, the interface through which it interacts with software should be standardized. In this article, I want to discuss the benefits of such hardware standardization, its true cost, and what happens when it is neglected. I’ll use graphics hardware as my main example throughout, but the reasoning applies equally to all other varieties of widespread computer hardware, such as sound and wireless network chips.
What is the point of hardware standards?
The most obvious benefit of hardware standards is that they avoid effort duplication in system software. As an example, if there were such a thing as a standard GPU interface, operating systems would only need one single GPU driver to interface with every graphics chipset out there. Less duplicated effort means in turn less driver code to write (and thus spare time for more interesting things), lighter kernels that load and run faster, fewer compatibility problems on OS upgrades, and, perhaps most importantly, fewer bugs and low-level exploits.
Standardized hardware interfaces are also vital to smaller OS projects, which cannot rely on manufacturers writing drivers for them and don’t have the technical reference manuals and workforce it takes to do it themselves. Without such standards, these projects are forced to focus on a few pieces of hardware to the detriment of others, leaving them at the mercy of manufacturer blackmail and user base fragmentation. As it turns out, practices that worked in the 80s do not work so well in the modern, diversified computing landscape, and OSs that rely on specific hardware face a high risk of falling into irrelevance if they happen to have bet on the wrong horse.
Finally, standardized interfaces also ease the comparison of similar pieces of hardware, though only from the point of view of capabilities, not performance. As an example, consider the way Windows games request hardware that is compatible with DirectX version N or higher: using this information, potential players can tell whether the game will run at all, but not how fast. Keeping performance requirements sane thus remains the responsibility of developers, at least until someone comes up with such a thing as a balanced synthetic benchmark for graphics cards.
How to get the most out of it?
Standardization detractors will argue that hardware standardization harms innovation. Their rationale is that if every new bit of, say, graphics hardware is to be standardized and added to a spec, it will become dramatically harder for chipset vendors to introduce new technology, and hardware evolution will turn into a boring race of faster clocks and higher core counts. I would gladly argue that this is already pretty much the case, and not a bad thing at all, but I still owe this criticism a more serious answer than “the problem does not really exist”.
First, let’s be clear on something: what I advocate is the standardization of hardware-software interfaces, not of hardware itself. It is certainly easier to build hardware that stays close to the spec, but ever since the invention of microcontrollers and PROMs, it has also been fairly easy for hardware manufacturers to implement a spec-mandated operation in a new, more efficient way. We thus have to distinguish hardware features that can be silently implemented without changing the spec from hardware features that explicitly require software support.
This is, obviously, a classical abstraction design trade-off. At one extreme, the spec is very low-level, close to the bare metal: it must be updated very often to keep up with hardware evolution and puts relatively tight constraints on hardware design. At the other extreme, the spec is very high-level, closer to a user-mode API, and it becomes fairly easy to introduce hardware that complies with it… but a lot of hardware power also ends up hidden, and chipsets need to embed a large amount of spec-support firmware code, which can be buggy.
The optimal amount of spec abstraction strikes a balance between all these desirable and undesirable characteristics, and obviously depends on the kind of hardware we are talking about. As an example, in the case of GPUs, we don’t want to set in stone either the equivalent of an arbitrary GPU’s technical reference manual or a full graphics API like DirectX or OpenGL, but rather something in between. Defining that something is too GPU-specific to fit in this article, but I hope I have shown fairly clearly how it should be done: either by starting from low-level documentation and slowly abstracting away minor chip-specific details until a satisfactory common abstraction is obtained, or by starting from a high-level API and working downwards.
For now, let’s explore instead what happens when this standardization work is NOT done.
The horrors of a nonstandard universe
My first non-standard hardware horror story will again be about graphics chipsets, in the x86 world of the average Joe’s desktop or laptop. Here, the consequences of GPUs’ lack of standardization are easy to behold even on the majority platform, Windows, which has the support of every GPU manufacturer behind it: since it is up to GPU drivers to implement a large part of OpenGL or Direct3D support, the wonderful result is drivers that weigh a good hundred MBs and are full of device-specific glitches, resulting in turn in random game bugs and crashes on specific hardware that can only be fixed through a highly annoying update process. And when Microsoft tried to ask for a slight modernization of graphics drivers in the Vista days, the result was an epic blunder, to the point where it took years before the new drivers were comparable with their older XP cousins. Minority OSs have it worse: they are forced to either stick with a very small set of supported chipsets, like OS X, or go without modern OpenGL support on hardware that does support it, like Linux and BSD. Not what I would call a shining success.
Think GPUs are an exception? Well, let’s discuss sound and wireless networking on that same PC platform, then. Microsoft only managed to make sound drivers work properly on Windows by going berserk on sound chipset manufacturers and reducing the sound hardware support of Vista to an ultra-minimalist state, emulating everything else in software and effectively killing the sound card market. Wireless networking support was also pretty awful before the Windows 7 days, with plenty of crashes and vendor-specific “connection utilities”, but this time I don’t know how they managed to fix it. My bet is on calling out every manufacturer one by one and integrating the fixed drivers directly into the Windows NT kernel. And what of Linux? Well, support for both families of chipsets remained totally hit-or-miss for a long time, as became painfully obvious when the desktop guys recently attempted to update the high-level audio architecture with PulseAudio and exposed countless bugs in the underlying drivers. Frequently used audio chipsets got a quick update, while for others… let’s say it took more time. I can’t wait for the next time the Linux wireless networking stack needs an update of its own.
So, surely, the problem lies with x86. After all, isn’t it an awful hardware architecture to build computers with, full of legacy cruft that makes the life of low-level developers a nightmare? Well, although x86 has been getting a lot of hate from the tech world lately, I’d say that it’s not so bad in practice. But don’t just take my word for it, as we’re now going to explore the deepest abyss of non-standard hardware with my last example: the whole ARM ecosystem.
Imagine a world where one cannot even run code on devices in a standard way. A world where there is no standard way to send and receive even the most basic input and output. A world where the only thing you can rely on, likely by chance, is the ability of your compiler to produce code that performs arithmetic operations on memory contents, in an unoptimized way. This is the world in which people who develop operating systems for phones and tablets live. Who doesn’t like building one static kernel per target device because there is no standard way to dynamically probe hardware features in the ARM world?
And how well does this dream of standards haters work out for the actors of the mobile market? Well, the developers of iOS and Windows Phone 7 have already given up on supporting more than a handful of individual ARM systems-on-chip, and greatly lag behind the competition in their ability to cater to every user’s needs because of it. Android accepted the challenge, but had to leave it to phone manufacturers to produce quality drivers and low-level OS layers for their individual devices, with a result that is now apparent: lots of phones randomly crashing and freezing, OS updates that are not even distributed to 10% of users nearly half a year after their release, and software compatibility across devices that seems to be driven by sheer luck. I fail to see a compelling case for nonstandard hardware here.
Well, at least Microsoft is going to attempt introducing standard firmware features like UEFI and ACPI on Windows 8 and Windows Phone 8 devices, so as to bring their OEMs’ ARM devices nearly on par with x86 computers as far as standardization is concerned. I wish them luck in this endeavour, assuming they have not accidentally killed their main phone OEM by the time these OSs are ready to ship.