Designing new systems is hard, so good engineers are very skilled at reusing old designs in new applications. And most of the time, this is the right approach. But sometimes, the old tool is no longer the best for its new job. I have discussed why I think this is the case with the C programming language and its closest derivatives, which most existing OS implementations use for legacy reasons. In this post, I would like to ask whether the same isn’t true of the core design and user experience of today’s main personal computer (PC) operating systems. As usual, I will take “personal computer” not in the sense of IBM’s 1984 trademark, but in that of a general-purpose computer designed for use by one person at a time.
The role of a PC operating system
To answer the question I raise today, I first need to answer a more fundamental one: what is the problem being solved by a PC operating system? What is its basic purpose?
Like nearly all computer programs in existence, an operating system is designed to help human users perform a number of tasks with the machine it runs on. In the case of general-purpose PC OSs, the users of the software can be split into three broad groups: engineers (hardware or software), who add functionality to the machine; end users, who make use of that functionality; and administrators, who keep the machine in proper condition, ensuring continuity and quality of service.
As is often the case, these groups of users have conflicting needs. For example, software developers prefer to design command-line interfaces, as these are often simpler to implement on today’s OSs, while most end users prefer graphical user interfaces due to their higher discoverability. And administrators would rather have end users and developers use as little of the machine’s capabilities as possible, so as to limit the amount of harm they can do, whereas said users usually prefer, on the contrary, to be able to do as much as possible with their hardware, with as little hassle as possible.
An interesting peculiarity of personal computers, though, fairly unique to this branch of computing, is that there is significant overlap between the aforementioned categories of users. People with advanced computer knowledge tend to play all three roles on their machine to a variable extent. And when the OS allows for it, engineers will work directly on the platform they are targeting instead of using another kind of computer to get their work done.
The difficult job of a PC operating system is thus to offer standard interfaces for these three categories of users to interact smoothly with one another, taking into account the fact that there may be people who fit the use cases of multiple user categories. Additionally, like all operating systems, a PC OS will also need to provide all kinds of users with standard ways to interact with the machine that it is installed on.
Designing for corporate environments
For historical reasons, most PC OSs available today have been designed for corporate environments, in which end user and administrator roles are kept strictly distinct. They typically target hardware configurations from a long-gone era, in which computers were massively multi-user, while today’s machines are mostly individually owned, and only sparingly shared. And their security infrastructure follows the threat model of the early days of computing, when the main security challenge was to protect machine users from one another, and one could be very trusting of the software installed on the machine.
This mindset can be seen at work in many areas of the user interface they expose. For example, in how end users are discouraged from using the machine with an administrator account, how even then they need to constantly click “yes” or type their password for every action that affects the whole system, and how administrative tools are often deliberately kept extremely obscure. The latter usually require a very high level of tech literacy to be used properly, and basically slap inexperienced users in the face with a massive “THIS IS NOT YOUR JOB, YOU SHOULD NOT BE HERE, GO AWAY BEFORE YOU BREAK SOMETHING!!!”.
Unfortunately, such a design does not reflect the modern reality of PC usage. With individual computer ownership growing faster than ever, having dedicated and well-trained administrators take care of the well-being of each and every machine in existence has long stopped being an economically viable option. More and more, as computing moves forward, end user and administrator roles will merge, and computer users will need to acquire the knowledge necessary to perform basic machine administration tasks. We can regret and ridicule this situation all we want, but ultimately there is no way around it. So the sooner the design of operating systems acknowledges this and starts making end user machine administration less of a chore, the better.
Denial is a strong force in human psychology, though, and not everyone has realized the inescapable nature of this fate yet. As an example, in a desperate effort to preserve the system administration status quo, and make some money along the way, some actors of the emerging mobile device market have gone so far as to deprive computer owners of basic control over their machines and what they can run, arguing that this is for their own good. I can only hope that this is as ridiculous as things will ever get before the personal computing world accepts its fate, but that one could go this far without major user opposition is already frightening.
Oh, and by the way, to those who still think that the administrative model of present-day PC OSs is about security… I would like to know, if this is the case, why it is only extremely recently that said OSs’ security model even started to incorporate protection against malware that will delete or encrypt all of an end user’s personal files. If we are being serious for a second, the security model of PC OSs has neglected modern threats and been in need of a significant overhaul for a while now. And making OSs no better than malware when it comes to locking users away from their machine’s true potential is most certainly not going to help.
Meanwhile, in the engineer’s cave…
As for engineers, the amount of energy that OSs expend on catering to their needs varies quite significantly from one OS to another. Typically, OSs that were born as the result of an engineer wanting to scratch an itch, like Linux or some BSDs, prove a bit better in this area. But overall, the OS-engineer interface could still use a lot more work.
If we consider hardware interfacing, as an example, one can only be amazed at how half-hearted desktop OSs have been about properly abstracting “new” hardware categories such as GPUs or multifunction printers. More often than not, instead of providing thin layers of hardware-side interfacing, the way they should, they have left it up to hardware engineers themselves to implement heavy-weight abstractions, like full OpenGL implementations, or parts of the platform’s printing infrastructure.
But as sysadmins used to dealing with vendor-written drivers will attest, this is a bad route to take. Hardware companies are skilled at making hardware and coding the tiny resource-constrained firmware that powers it, not at implementing complex operating system components. The user experience of any user-facing software created by hardware vendors is more often than not absolutely awful. Hardware vendors are terrible at designing OS abstractions that also span their competitors’ products, causing much duplication of effort (CUDA being a good example of this). And finally, most developers working for hardware companies seem to be in some kind of competition to produce the buggiest, most insecure, and most bloated piece of software in existence, so one should seek to get as little of their code as possible inside an OS.
And yet, for some reason, in a move that can only be described as crazy, modern-day OSs basically give these people the keys to your front door. They will readily give hardware vendors’ “driver” software full access to one’s machine, with root access to OS APIs and kernel-mode CPU control. They will give the world’s worst software writers their most absolute trust, only asking in exchange, from time to time, that they follow some code-signing money extortion scheme.
This insanity reflects a more general rule of thumb: in modern OSs, the lower one gets in the abstraction stack, the lower-quality and more obscure OS APIs and ABIs get. Function names lose their vowels, strings lose Unicode support, APIs accumulate more deprecated versions than recommended ones, and interface documentation weakens. This is problematic because low-level APIs are, in effect, those most critical to an OS’ reliability and continued operation. I would argue that ideally, when resources are lacking to share development effort equally, the lowest level of an OS’ stack should be the one receiving the most attention. But due to its lack of user-visible effect, more often than not, the reverse is unfortunately true.
Another area of the OS-engineer interface that could use improvement is the lack of synergy between OS interfaces targeted at developers on the one hand, and those targeted at end users and administrators on the other. The inconsistency between the two is problematic, as it simultaneously means more work for OS developers, who need to duplicate their engineering efforts, and more work for users who want to learn new things. It also causes an artificial disconnect between user roles, which is a source of unnecessary tension, such as when developers and end users are served distinct UI metaphors.
This is a relatively recent trend. The original UNIX design, as an example, took much care to provide unified abstractions covering both end user and developer use cases: files, pipes, and standard inputs and outputs are examples of such. It seems that it was only when such elegant legacy designs failed to adapt to modern use cases that developers started hacking together ad-hoc and inconsistent interfaces in a rush. Perhaps the modern computing world would be very different, had they chosen instead to sit down and think calmly of clean and efficient ways to solve the new OS development challenges that rose before them.
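The unifying power of that design is easy to demonstrate. The byte-stream abstraction an end user drives from the shell with `<`, `>` and `|` is exactly what a developer programs against with `read()`. A minimal sketch in C (the byte-counting function is illustrative, not a real utility):

```c
#include <unistd.h>

/* count_bytes: drain a file descriptor and return the number of bytes
 * read from it. Because UNIX exposes regular files, pipes, and
 * terminals through the same read() call, this one function works
 * unchanged on all of them -- the unified abstraction described above.
 * A program built around it serves `./count < somefile` (a file),
 * `ls | ./count` (a pipe), and interactive terminal input alike. */
long count_bytes(int fd) {
    char buf[4096];
    long total = 0;
    ssize_t n;
    /* read() returns the number of bytes read, and 0 at end of stream,
     * whatever kind of stream fd refers to. */
    while ((n = read(fd, buf, sizeof buf)) > 0)
        total += n;
    return total;
}
```

The shell user composing pipelines and the C programmer calling `read()` are thus manipulating the very same concept, which is precisely the end user/developer synergy whose loss this post laments.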
Was this all for end users?
Much like the last-resort defense of an inexperienced user experience designer typically involves someone’s grandparents and close relatives, OS designers sometimes try to justify their worst decisions with an appeal to some alleged benefit to the end user experience.
I personally fail to see how the end user experience is supposed to benefit from bad engineering practices throughout a given software product.
I fail to see how end users benefit from having to learn an entirely new interaction paradigm every time they try a new computer form factor, in disregard of the decades of research done on responsive user interfaces in such areas as web development.
I fail to see how end users benefit from being held hostage to UI beta-testing, as people who design end user UIs end up releasing half-baked, incoherent prototypes like Windows 8, early releases of GNOME 3 and KDE 4, or about one in three OS X releases.
I fail to understand how the ideology of iterative development is supposed to stand for this, when it explicitly presents itself as suitable for non-critical, relatively unimportant software: in short, everything that an OS or desktop environment is not. Because once such a product has been labeled “final” or “released”, other software will be built upon it, barring the road to simple changes in future iterations.
I fail to see much work getting done on such important end user matters as data protection (against software crashes and “unsaved changes”), perceived performance (no need to buy a computer that’s thousands of times faster if its UI remains every bit as sluggish as before), or interface organization (say, are hierarchical filesystems really the best we can do?).
“End user” is an actual part of an OS’ user base, not an excuse. If OS designers want to claim that they are doing something for their sake, perhaps it would be about time they actually started doing so.
On personal computers, operating systems serve as a standard interface between the machine they are installed on and three categories of users: engineers, who build functionality, end users, who use it, and administrators, who keep the machine running well.
Over PC history, these user roles have seen deep mutations, such as the merging of the end user and administrator roles, and the appearance of new security, hardware interfacing, and software development challenges. PC operating systems have failed to adapt to these changes correctly, often resorting instead to ill-designed, hackish solutions, whose continued operation is sometimes only vaguely ensured by ever-increasing platform control from OS developers. Consequently, the overall quality of the interface provided by these OSs to all of their users has suffered.
Said interface has become increasingly fragmented from one class of users to another, causing work duplication and unnecessary difficulty for users shifting from one role to another. It has also become increasingly ill-suited for its job: it makes life unnecessarily hard for end user administrators (which is now the common case), provides a security model that proves increasingly inadequate and hard to adjust to modern-day constraints, and makes the work of engineers increasingly challenging. End users, who are often used as an excuse, do not benefit from the increasing OS development sloppiness either.
I would thus argue that we have reached a point where a clean break in PC OS design, favoring a full design-level cleanup of existing OS metaphors over holy software compatibility and useless GPU-accelerated fluff, has become necessary.