Imagine that one day, arriving at the office, you found a mysterious muscular man at the entrance, performing something that looks like an ID check. Mumbling about this world’s pandemic tendency towards security psychosis, you would search through your bag for your ID card or driver’s license. But when you showed it to him, he would declare that he does not recognize it as valid.
Instead, he would direct you towards the services of Don Corleone, inc., which sells *certified* IDs (basically a cheap copy of your ID with a shiny stamp on it) for a hefty price, and say that he won’t let you enter without one of these.
Obviously, this scenario would be illegal in most countries, and you would be well within your rights to call the police. The state is the only legitimate authority trusted with the power to issue ID documents, and when it is democratically elected and subjected to the scrutiny of millions of citizens, one can expect that it won’t abuse its ID-issuing powers for fun and profit.
Yet for some reason, this is exactly the kind of protection racket that we deal with daily as we connect to the World Wide Web using HTTPS and have to interact with the current-generation Public Key Infrastructure (PKI), which is built on the concept of Certification Authorities (CAs). And as I will elaborate, it gets worse. But first, let’s discuss why we put up with it. Continue reading
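To see how deeply the CA model is baked in, here is a minimal sketch, using Python’s standard `ssl` module purely as an illustration, of how a stock TLS client delegates the entire trust decision to a pre-installed bundle of Certification Authorities:

```python
import ssl

# A default client context trusts whatever CA bundle the operating system
# ships with, and will refuse connections whose certificate chain does not
# lead back to one of those pre-anointed authorities.
ctx = ssl.create_default_context()

print("certificates required:", ctx.verify_mode == ssl.CERT_REQUIRED)
print("hostname checking on: ", ctx.check_hostname)

# The trust decision is entirely delegated to this pile of root certificates:
stats = ctx.cert_store_stats()
print("trusted root CAs loaded:", stats.get("x509_ca", 0))
```

Any single CA in that bundle can vouch for any hostname on the planet, which is precisely the “shiny stamp” monopoly the analogy above complains about.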
As discussed in the last post, the purpose of authentication, using mechanisms such as passwords, is to ensure that the person in front of the keyboard is indeed the user she claims to be, by testing her knowledge of some secret data. This is typically necessary in situations where users are not around to check that no one physically tampers with their computers, such as when a computer is left powered off, sleeping, or crunching numbers on its own with the screen locked.
Obviously, due to their sensitive nature, authentication secrets should be used sparingly. The more often they are used, the more likely they are to be intercepted by an attacker intent on identity theft, with potentially disastrous consequences. Yet I don’t know about you, but in a typical week, the two system dialogs I interact with most often on my computers are these two fellows:
Both are dialogs of such simplistic appearance that any user application could replicate them, and both ask me, with only a vague justification (“make changes”, “modify essential parts of [my] system”), to input a sensitive authentication credential that grants full access to my system.
To Microsoft’s credit, their own implementation is a bit better: when set up properly, the dialog does not necessarily require a password to be input, appears less frequently, and can have a quite distinctive appearance (translucent black background) that is hard for malicious software to replicate. But I still think that these privilege elevation dialogs, and the habit they give users of typing their (administrative) password whenever they are asked to without a clear justification, are a bit of a security issue. Continue reading
Moving on in the present series on TOSP’s stance on security matters, I would like to set aside file access control and process sandboxing for a bit and discuss a somewhat higher-level, but equally important, concept: authentication.
For those new to the term, here’s a short reminder: when you log into an operating system, much like on any other computer service, a process occurs in which the operating system checks that you are the user you claim to be. The OS does this by having you prove that you possess a piece of information which only you should know, such as a password. This process is what is called user authentication.
In this post, I will discuss the various options which the modern world offers for authentication, their respective advantages and drawbacks, and the role which an operating system could play not only for the authentication of its own users, but also for authentication of users to third-party services.
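As an illustration of the secret-testing step described above, here is a minimal sketch of password authentication using a salted, deliberately slow hash. The function names are hypothetical, not an actual TOSP or OS API:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt=None):
    """Derive a slow, salted hash so the secret is never stored in clear."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def authenticate(attempt: str, salt, stored) -> bool:
    """Test the user's claim by re-deriving the hash from the attempt,
    comparing in constant time to avoid leaking information via timing."""
    _, candidate = hash_password(attempt, salt)
    return hmac.compare_digest(candidate, stored)

# At enrollment time, only the salt and the derived hash are kept:
salt, stored = hash_password("hunter2")

print(authenticate("hunter2", salt, stored))  # True
print(authenticate("wrong", salt, stored))    # False
```

Note that even this textbook scheme only authenticates knowledge of the secret, not the person; the post’s point about other authentication options starts from exactly that limitation.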
Ever since people decided, around the UNIX days, that file access needed to be controlled, all mainstream operating systems have featured file access control policies that follow the same basic principles.
Every user of a computer has a unique numerical identifier (UID) and belongs to one or more groups, each with a similarly unique identifier (GID); somewhere up in the sky there is a “super-user”, called root or Administrator, who can bypass file access control entirely whenever the need arises. Processes are run under the responsibility of a given user and group, and files are also owned by a user and group, by default those of the process which created them.
In this post, I will discuss why this security model has, in my view, outlived its usefulness in the area of general-purpose personal computing, and why I believe that process capabilities would be a better fit in this area. Continue reading
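To make the classic model above concrete before criticizing it, here is a small POSIX-flavoured sketch (it assumes a Unix-like system; Windows only partially honours these permission bits) that creates a file and reads back its owner, group, and per-category permission bits:

```python
import os
import stat
import tempfile

# Create a scratch file, then set the classic rw-r----- permissions:
# read/write for the owning user, read-only for the group, nothing for others.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o640)

info = os.stat(path)
print("owner UID:", info.st_uid, "- group GID:", info.st_gid)
print("owner can read: ", bool(info.st_mode & stat.S_IRUSR))  # True
print("group can read: ", bool(info.st_mode & stat.S_IRGRP))  # True
print("others can read:", bool(info.st_mode & stat.S_IROTH))  # False

os.unlink(path)
```

Every mainstream OS check ultimately reduces to comparing the requesting process’s UID/GID against bits like these, which is precisely the coarseness the post takes issue with.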
Files are one of the most frequently used OS abstractions. They provide a convenient way for programs to produce long-lived output, and to store persistent state between runs. From a user’s point of view, files also provide a standard, application-agnostic way to share any kind of data between computers, to be contrasted with the hoops that one has to jump through on those ill-designed mobile operating systems that do not have a user-visible file abstraction.
The power of files lies in the fact that, by using them, programs can store and retrieve named data without having to care about where and how it is stored. But such great power comes, as usual, with great responsibility, which is why the abstraction can get pretty complex under the hood. After discussing the UX of files a bit earlier this year, I will now discuss the security implications of typical file manipulations. Continue reading
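As a taste of those security implications, consider one classic pitfall hidden under the file abstraction: checking a path and then opening it are two separate system calls, and the file a path points to can be swapped in between (a time-of-check/time-of-use race). A minimal sketch of the descriptor-based defence, assuming a POSIX system (`O_NOFOLLOW` is not available on Windows):

```python
import os
import stat
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)
try:
    # O_NOFOLLOW makes open() fail outright if the final path component is a
    # symlink, closing the classic "swap the file for a link" attack window.
    safe_fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
    try:
        # fstat() inspects the very file we opened, not whatever the path
        # happens to resolve to *now* - so later decisions based on `info`
        # cannot be raced by renaming things on disk.
        info = os.fstat(safe_fd)
        print("opened a regular file:", stat.S_ISREG(info.st_mode))
    finally:
        os.close(safe_fd)
finally:
    os.unlink(path)
```

The general lesson, which the full post expands on, is that path-based checks are advisory at best; only operations on an already-open descriptor are about a definite file.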
As we saw last week, a core security goal of a personal computer operating system is to ensure that programs only do what the computer users want. Unfortunately, not all software is equal on this front. Large, monolithic software with a broad feature set, such as modern web browsers, is for example hard to sandbox properly. In this post, I will discuss an old software design trick from UNIX which can be used to reduce the need for such large software monoliths, and ways it can be made competitive with monolithic software from a performance point of view. Continue reading
Ever since students started playing pranks on the first academic computers and malware made its appearance, operating systems have needed to deal with security considerations. These days, one cannot just let random software do whatever it wants on personal computers, just like one couldn’t let users do whatever they wanted back in the days of multi-user timesharing systems.
As a middleman between software and hardware, the operating system is in a privileged position to police the whereabouts of software and users. But how exactly should it do that? Answering this question revolves around building an OS security policy. Who is the OS working for? What is it protecting? From which threats? And how?
This post will attempt to draft some security policy ideas for TOSP by answering a number of these questions. It will be light on the “how”, however, which will be further discussed in subsequent articles. Continue reading