Ever since students started playing pranks on the first academic computers and malware made its appearance, operating systems have needed to deal with security considerations. These days, one cannot just let random software do whatever it wants on a personal computer, just as one couldn’t let users do whatever they wanted back in the days of multi-user timesharing systems.
As a middleman between software and hardware, the operating system is in a privileged position to police what software and users are up to. But how exactly should it do that? Answering this question revolves around building an OS security policy. Who is the OS working for? What is it protecting? From which threats? And how?
This post will attempt to draft some security policy ideas for TOSP by answering a number of these questions. It will be light on the “how”, however, which will be discussed further in subsequent articles.
Who is the OS working for?
This is one of those draw-a-line-in-the-sand questions that have no single, universally correct answer, but that need to be answered in order to move forward.
As a basis for my answer, let us consider that in the personal computing landscape, operating systems are installed on computers that are the private property of an individual. This individual is usually an end user of that machine, and may also take on other roles such as administering the machine or engineering new functionality for it.
By installing an operating system on that machine, or having it installed by a third party, the machine owner is showing an enormous amount of trust in that operating system. Operating systems can do anything they want with hardware, from playing with storage drives to infecting USB peripherals with malware and destroying expensive industrial property. They can delegate this capability to any user-mode software, either voluntarily or through a bug. Their potential to wreak havoc is enormous, and fundamentally has to be for OSs to do their job properly.
For this to be a reasonable trade-off, the operating system has to prove itself worthy of that end user trust. It has to work entirely at the service of that end user. And it has to provide clear benefits in exchange for the trust it receives.
A consequence of this view, for example, is that any self-respecting operating system developer must not implement technology that is openly hostile to a sizeable fraction of its user base. Technology that shuts users away from the data flows occurring between software, for example, or that leaks personal user data to a commercial partner (government intelligence agency, advertising groups…) without opt-in user consent, has no business being in an operating system.
What is the OS protecting?
Operating systems see everything. They have access to the address space of all running applications, to all data exchanges ongoing between them, and to all the packets they exchange with the hardware and the outside world. They are also tasked with managing all data that is stored on the permanent storage devices of the machine. For this reason, the first thing that an OS has to protect carefully is information.
A large body of literature has already studied the problems of information security, so I’m not going to go over all that in the limited space of a weekly WordPress article. But essentially, a computer system tasked with protecting information must primarily ensure three things about that information:
- Confidentiality (that the information is only accessed with user consent)
- Integrity (that the information is only modified with user consent)
- Availability (that the information is only destroyed or otherwise made unavailable to software and users with user consent)
Operating systems may also use their authority as a trusted third party to prove a number of properties about a piece of information. For example, they may guarantee authenticity (that data originates from a specific user or software and has not been tampered with), accountability (that data modification may be traced back to a specific actor), or non-repudiation (that a user cannot deny that he created a piece of information).
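As a toy illustration of the integrity and authenticity properties, a trusted party holding a secret key can tag a piece of data with an HMAC; any later tampering invalidates the tag. A minimal sketch in Python (the key, message, and helper names are invented for this example, not taken from any real OS interface):

```python
import hashlib
import hmac

def tag(key: bytes, data: bytes) -> bytes:
    """Compute an authenticity/integrity tag over the data."""
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(key: bytes, data: bytes, mac: bytes) -> bool:
    """Check that the data was tagged with this key and not modified since."""
    return hmac.compare_digest(tag(key, data), mac)

key = b"secret held by the trusted third party"
doc = b"transfer 100 euros to Bob"

mac = tag(key, doc)
assert verify(key, doc, mac)             # untouched data passes verification
assert not verify(key, doc + b"!", mac)  # any modification is detected
```

The same primitive underlies accountability and non-repudiation schemes, although those typically require asymmetric signatures rather than a shared key.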
Beyond responsible information handling, operating systems are also trusted as the only gatekeeper to hardware functionality on a machine. Unlike operating system software, user-mode software can be made to have access to nothing but a private address space and unprivileged CPU instructions (across an OS-controlled share of CPU time). This means that the OS has absolute control over what software can and can’t do.
Using this control, the OS can guarantee that software does not perform any task that the machine user has not consented to, such as filling the screen with black pixels, screaming random stuff on the LAN’s broadcast channel, or running the house’s computer-controlled heater at full power. In a context where software source code has long grown too complicated for even an experienced developer to understand in a reasonable time, the OS is the only trusted third party that may rigorously ensure that software will bend to a user’s will.
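This mediation role can be sketched as a toy reference monitor: every action a program attempts is checked against the set of operations the user has consented to for that program. All the names below are invented for illustration; a real kernel enforces this at the system-call boundary, not with a dictionary lookup.

```python
# Toy reference monitor: per-program sets of user-consented operations.
consented = {
    "web_browser": {"open_network_socket", "read_downloads_folder"},
    "text_editor": {"read_documents_folder", "write_documents_folder"},
}

def request(program: str, operation: str) -> bool:
    """Allow the operation only if the user consented to it for this program.
    Unknown programs get an empty set, i.e. deny by default."""
    return operation in consented.get(program, set())

assert request("web_browser", "open_network_socket")      # expected use: allowed
assert not request("web_browser", "format_system_drive")  # never consented: denied
```

The deny-by-default stance is the important design choice here: anything not explicitly consented to is refused, rather than the other way around.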
Which threats can an OS defend against?
Software running on bare metal computer hardware can do exactly whatever it wants. And as history has shown repeatedly, what software wants isn’t always in line with what a machine’s users want. As a middleman between software and hardware, operating system software can put the user back in control, and has a duty to do so.
As mentioned earlier, a specific area in which the OS can intervene to defend its user against rogue software is information security, that is, defense against unauthorized information access, tampering, and destruction by software and the third parties behind it. It is important, however, to realize that the OS can only do this for information that falls under its control, and that is clearly labeled by a user as sensitive.
For example, no operating system functionality can prevent a user from typing his online banking password into a well-made imitation of a bank’s website. From the OS’s point of view, what happens is just random keyboard input being sent, in a legit fashion, to a web browser process, which subsequently orders the transmission of encrypted SSL packets towards a certain IP address. The OS does not operate at the right level of abstraction to handle this kind of threat to information security, which is better managed by user-mode software such as password manager plug-ins to web browsers.
The information security role of an OS, in this kind of scenario, is to ensure that no software can intercept, modify or destroy keyboard strokes on their way to the web browser. And, similarly, that intruders cannot access in-RAM copies of sensitive data, or cleartext copies of network packets before they are encrypted.
Similarly, an OS cannot prevent the web browser, in this scenario, from going rogue and sending out sensitive user data to a third party. All it can do is limit the impact of software going rogue. This is done by reducing the set of actions available to evil or exploited code to those which may be reasonably expected from a given program. For example, a web browser cannot suddenly decide to go and format the system hard drive.
For operating systems, the security goals of information security and process least privilege are intimately linked. The less functionality untrusted processes have access to, the lower the chances that they can find a way to abuse it in order to violate information security or acquire more privilege. Conversely, if the integrity of operating system data cannot be guaranteed, then the OS cannot limit software privilege.
Finally, although TOSP does not aim for multi-user operation as a core design goal, it also has to support a minimal amount of it, by separating a legit machine user from an untrusted one. An example of a scenario where this must be done is the situation where a computer is left physically unattended while it is turned off or performing some task (typically heavy computation or network transfers). In this case, an OS must feature protections against an unauthorized user gaining access to such a machine and the information it holds.
This example scenario leads to a number of attack vectors, from basic UI interactions (that may be prevented with lock screens, a strong password, and/or some kind of physical “second factor” device), to plugging malicious peripherals into the computer, all the way to physically opening the computer and taking some of its components in order to extract data from them. The latter scenario is unfortunately hard to fully defend against, but proper use of mass storage encryption can considerably reduce its odds of success.
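The reason mass storage encryption blunts the stolen-drive attack is that the disk key is derived from a secret the attacker does not have, through a deliberately slow function. A minimal sketch of such a derivation with PBKDF2 (the function name and parameters are illustrative, not a recommendation for production use):

```python
import hashlib
import os

def derive_disk_key(passphrase: bytes, salt: bytes) -> bytes:
    # Slow, salted key derivation: brute-forcing passphrases is expensive,
    # and the raw key never needs to be stored on the drive itself.
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 200_000)

salt = os.urandom(16)  # stored on the drive in the clear
key = derive_disk_key(b"correct horse battery staple", salt)

# The same passphrase and salt always reproduce the key...
assert derive_disk_key(b"correct horse battery staple", salt) == key
# ...but a wrong guess yields a completely different key.
assert derive_disk_key(b"wrong guess", salt) != key
```

An attacker who physically extracts the drive finds only the salt and ciphertext; without the passphrase, the decryption key cannot be reconstructed short of a brute-force search.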
Building an operating system comes with a number of design choices, and the area of security, where OSs have a huge role to play, is no exception.
Here, I presented my vision of TOSP’s security system as something that keeps a machine’s legitimate owner as the only one in full control of his computer and data. This control is to be achieved mainly by carefully restricting what software and untrusted users are allowed to do, and restricting it to what a legit user has clearly consented to.
I acknowledge that this vision of computer users as the ones in control of their computer’s security is not popular these days. Recent developments in popular personal computer operating systems have rather gone in the direction of infantilizing users for the OS developer’s personal profit. However, I believe that much of the bad reputation that personal computer users have gotten when it comes to security stems from an education problem, and a lack of proper information from OS software itself.
For example, when an operating system presents me with a dialog whose basic information content is “A program wants to perform administrative tasks. [Cancel/Allow]”, as Windows, OS X, and popular Linux distros all do, I honestly have no idea why the software requests root access. I allow it if it’s software I trust, but then I basically put as much trust in that software as in my whole operating system, since as soon as it’s running as root, it has indirect access to everything on my computer. It’s a fundamentally broken security model.
So TOSP is about those PC users who swallow the red pill. Who are ready to learn a thing or two in order to use their computer safely. Who won’t shy away from grabbing a search engine and looking up a couple of unknown terms when an actually informative OS dialog requests confirmation for a dangerous action. For these users, it is about getting more control over what goes on in their computers, running a system that is secure by default, and actually having the information they need handed to them, in language that is as clear as feasible, when they need it.
In the past, the emergence of such fine-grained security systems has been hampered by performance concerns. People would shove the entire hardware abstraction layer into their kernel, or use absurdly broad access control policies like “application can access the full user home folder”, in an attempt to reduce the number of security checks that have to be performed during system execution. I hope that the extra power of modern hardware, in conjunction with less drastic performance compromises such as letting drivers access only their assigned device’s address space, will end this kind of practice.