Moving data around: on dataflow IPC semantics

As we saw last week, a core security goal of a personal computer operating system is to ensure that programs only do what the computer's users want. Unfortunately, not all software is equal on this front. Large, monolithic software with a broad feature set, such as a modern web browser, is for example hard to sandbox properly. In this post, I will discuss an old software design trick from UNIX which can be used to reduce the need for such large software monoliths, along with ways it can be made competitive with monolithic software from a performance point of view. Continue reading
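
As a teaser, here is a minimal sketch of the trick in question: composing two small programs through an anonymous pipe rather than building one monolith. This is illustrative C only, equivalent to the shell pipeline “ls | wc -l”, with error handling kept to a minimum.

```c
/* Illustrative sketch: two small, single-purpose programs cooperating
 * through a pipe, instead of one program doing both jobs itself.
 * Equivalent to the shell pipeline "ls | wc -l". */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                      /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) == -1) { perror("pipe"); return EXIT_FAILURE; }

    pid_t producer = fork();
    if (producer == 0) {            /* child #1: writes into the pipe */
        dup2(fd[1], STDOUT_FILENO); /* its stdout becomes the pipe */
        close(fd[0]); close(fd[1]);
        execlp("ls", "ls", (char *)NULL);
        perror("execlp"); _exit(127);
    }

    pid_t consumer = fork();
    if (consumer == 0) {            /* child #2: reads from the pipe */
        dup2(fd[0], STDIN_FILENO);  /* its stdin becomes the pipe */
        close(fd[0]); close(fd[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        perror("execlp"); _exit(127);
    }

    close(fd[0]); close(fd[1]);     /* parent keeps no pipe ends open */
    waitpid(producer, NULL, 0);
    waitpid(consumer, NULL, 0);
    return EXIT_SUCCESS;
}
```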

TOSP security goals

Ever since students started playing pranks on the first academic computers and malware made its appearance, operating systems have needed to deal with security considerations. These days, one cannot just let random software do whatever it wants on personal computers, just like one couldn't let users do whatever they wanted back in the days of multi-user timesharing systems.

As a middleman between software and hardware, the operating system is in a privileged position to police the behavior of software and users. But how exactly should it do that? Answering this question revolves around building an OS security policy. Who is the OS working for? What is it protecting? From which threats? And how?

This post will attempt to draft some security policy ideas for TOSP by answering a number of these questions. It will be light on the “how”, however, which will be discussed further in subsequent articles. Continue reading

Primum non nocere

Input sanitization is a well-accepted software design principle these days. Pretty much every developer worth their salt will agree that production-grade programs should not hang, crash, or do very funky and undesirable things in response to user input, no matter how malicious the user or how malformed the input. But in low-level software development, when building programs that wield a lot of power, I would argue that software should also be very mindful of its output, and of the harm it may cause to the system it runs on. This post will be a rough discussion of that. Continue reading
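
To make the input half of the argument concrete, here is a hedged sketch of strict validation in C: parsing an integer from untrusted text with strtol() and rejecting anything malformed, rather than letting atoi() silently return garbage. The helper name parse_int_strict() is mine, not from any particular codebase.

```c
/* Illustrative input sanitization: accept an integer from untrusted
 * text only if the whole string is a well-formed, in-range number. */
#include <errno.h>
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Returns true and stores the value in *out only if `text` is a
 * well-formed integer that fits in an int; otherwise returns false. */
static bool parse_int_strict(const char *text, int *out) {
    char *end = NULL;
    errno = 0;
    long value = strtol(text, &end, 10);
    if (end == text || *end != '\0')        /* empty, or trailing junk */
        return false;
    if (errno == ERANGE || value < INT_MIN || value > INT_MAX)
        return false;                       /* out of range */
    *out = (int)value;
    return true;
}

int main(void) {
    const char *samples[] = { "42", "12abc", "99999999999999", "" };
    int n;
    for (size_t i = 0; i < sizeof samples / sizeof *samples; i++)
        printf("\"%s\" -> %s\n", samples[i],
               parse_int_strict(samples[i], &n) ? "accepted" : "rejected");
    return 0;
}
```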

Usability considerations

Usability matters, for all software. Unless you have a reliable way to force users to use your software no matter how crappy it is, you had better make sure that they enjoy it. But improving usability doesn't have to be a chore, and can actually be a very fun and fulfilling challenge.

For example, contrary to popular belief, usability does not necessarily come at the expense of software power, but can actually enhance it: usability helps users discover the functionality that is available and use it more efficiently, by reducing both the frequency and the impact of errors. Usability can also improve the effectiveness of security systems, by reducing the odds that users will try to work around them in potentially harmful ways instead of understanding them and making the most of them.

Now, if we want to improve the usability of something as big and general-purpose as an operating system, how do we go about it? Unfortunately, there is no set of Big Rules Set In Stone that will make every piece of software usable in a straightforward way. But, over time, a number of rules of thumb have proven quite effective at making software more pleasant to use, and this blog post will discuss them. Continue reading

The scope of shared libraries

NOTE: This post was supposed to go online on the 19th, but remained marked as a draft in WordPress for some reason. Apologies for the unnecessary publication delay.

Operating systems are essentially about three things: abstracting out hardware, making unrelated application programs cooperate, and standardizing important functionality that is hard for application software to get right, such as GUI rendering or cryptography.

In current-generation OSs, the latter mission is most frequently fulfilled by shared libraries, sometimes complemented by kernel code and user-mode processes when these prove absolutely necessary. In practice, however, the benefits of isolating system code into separate processes may be underrated. This post will discuss the relative merits of these two ways of implementing system services, and why I believe that in many cases, the latter option should receive more love than it currently does. Continue reading
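
As a rough illustration of the trade-off, consider one hypothetical service consumed both ways: directly as library code in the client's address space, and remotely from an isolated process behind a UNIX-domain socket. All names here (checksum(), /run/checksumd.sock, the one-shot wire protocol) are invented for the sketch.

```c
/* Hypothetical sketch: the same system service consumed two ways.
 * Option A links the code into the client's address space; option B
 * keeps it in a separate, sandboxable process behind a socket. */
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Option A: shared-library style. Fast (a plain function call), but
 * the service code runs with the full privileges of its caller. */
unsigned checksum(const void *buf, size_t len) {
    unsigned sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += ((const unsigned char *)buf)[i];
    return sum;
}

/* Option B: service-process style. One round trip over a UNIX-domain
 * socket to a daemon that can be sandboxed, and whose crashes do not
 * take the client down. Wire protocol: send the data, half-close,
 * read back one unsigned checksum. */
int checksum_remote(const void *buf, size_t len, unsigned *out) {
    struct sockaddr_un addr = { .sun_family = AF_UNIX,
                                .sun_path = "/run/checksumd.sock" };
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) return -1;
    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        write(fd, buf, len) != (ssize_t)len ||
        shutdown(fd, SHUT_WR) < 0 ||
        read(fd, out, sizeof *out) != (ssize_t)sizeof *out) {
        close(fd);
        return -1;
    }
    close(fd);
    return 0;
}
```

The library call wins on raw speed; the socket version buys fault isolation and the ability to sandbox the service code, at the cost of a round trip per request.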

There’s more to standards than filesystems

About 40 years ago, when the UNIX operating system was designed, its creators thought that a powerful virtual filesystem, shell scripts, and text I/O streams ought to be enough for everybody. And this is how Linux nowadays ends up with many incompatible ways to do such fundamental things as boot-time initialization, GUI rendering, audio input and output, and now hardware-accelerated graphics.

“Them fools!”, the Windows NT designers probably thought some 20 years later. “We shall make sure that every system functionality is specified in a standard way, so that this never happens to us.” And nowadays, there are several incompatible APIs for pretty much every single piece of Windows functionality.

If there's a lesson to be learned from this bit of history, I think it's that operating systems need standard ways to do important things, but that doing standards right is unfortunately fiendishly hard. Continue reading

Doing one thing and doing it well

Back when the UNIX operating system was designed, its development team set out to follow a set of design principles that later became known as the “Unix philosophy”. One of these principles was that the UNIX software ecosystem should be planned as a set of small software modules that do one thing, do it well, and are easy to combine with one another in order to achieve a greater purpose. This approach is to be contrasted with the more common monolithic approach to software design, which attempts to build huge software products that address an entire business area on their own, such as Microsoft Visual Studio or Adobe Photoshop. The present post will be dedicated to exploring the former approach. Continue reading
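
To give a taste of what this composition looks like in practice, here is an illustrative C program that finds the ten most common words in a file without implementing any splitting, counting, or sorting itself, by delegating the whole job to existing single-purpose tools through a shell pipeline (the input file /etc/motd is an arbitrary stand-in).

```c
/* Illustrative "do one thing well" composition: a new program built by
 * gluing small, single-purpose tools together rather than rewriting
 * their logic. Runs: tr | sort | uniq -c | sort -rn | head */
#include <stdio.h>

int main(void) {
    FILE *p = popen(
        "tr -cs '[:alpha:]' '\\n' < /etc/motd"  /* split into words   */
        " | sort | uniq -c"                     /* count occurrences  */
        " | sort -rn | head -10",               /* ten most frequent  */
        "r");
    if (!p) { perror("popen"); return 1; }

    char line[256];
    while (fgets(line, sizeof line, p))  /* relay the pipeline's output */
        fputs(line, stdout);
    return pclose(p) == -1 ? 1 : 0;
}
```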