The scope of shared libraries

NOTE: This post was supposed to go online on the 19th, but remained marked as a draft in WordPress for some reason. Apologies for the unnecessary publication delay.

Operating systems are essentially about three things: abstracting out hardware, making unrelated application programs cooperate, and standardizing important functionality that is hard for application software to get right, such as GUI rendering or cryptography.

In current-generation OSs, the last of these missions is most frequently fulfilled by shared libraries, sometimes complemented by kernel code and user-mode processes when these prove absolutely necessary. In practice, however, the benefits of isolating system code into separate processes may be underrated. This post will discuss the relative merits of these two ways of implementing system services, and why I believe that in many cases, the latter option should receive more love than it currently does.

What libraries are good at

Libraries are a bunch of code and static data that can be linked to a program at compile, load or run time in order to augment that program's functionality. This is usually implemented by keeping some function pointers uninitialized in a program's code, and altering these to point to the library's code when the library is loaded. In some advanced programming languages like Ada or Python, it is also possible to have some initialization code run automatically at library load time, when static initialization is not sufficient.
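For the curious, here is what the run-time flavour of this looks like through the standard POSIX dlopen() interface. The library name and the do_work() symbol are made up for the sake of the example (on Linux, compile with -ldl):

    #include <dlfcn.h>   /* dlopen(), dlsym(), dlclose() -- POSIX dynamic linking */
    #include <stdio.h>

    int main(void) {
        /* Load a (hypothetical) shared library at run time. */
        void *handle = dlopen("libexample.so", RTLD_NOW);
        if (handle == NULL) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        /* Bind a function pointer that the program left unresolved until now. */
        int (*do_work)(int) = (int (*)(int))dlsym(handle, "do_work");
        if (do_work != NULL)
            printf("do_work(21) = %d\n", do_work(21));

        dlclose(handle);
        return 0;
    }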

Most programming languages these days support the creation and use of libraries, and can interoperate with one another using C calling conventions, which makes libraries the most standard and simplest way to augment a computer program's functionality in any language. Creating a software library is also pretty trivial using any popular compiler/linker or interpreter-based toolchain, as sketched below.
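To make the "pretty trivial" part concrete, here is the entirety of a toy shared library, the same hypothetical libexample.so loaded in the previous sketch, together with the usual Unix build commands (shown for gcc/clang-style toolchains):

    /* mylib.c -- a minimal shared library.  With a typical Unix toolchain:
     *
     *     cc -fPIC -shared mylib.c -o libexample.so
     *
     * and a client program links against it with:
     *
     *     cc client.c -L. -lexample -o client
     */
    int do_work(int x) {
        return x * 2;   /* stand-in for whatever the library really does */
    }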

Another advantage of libraries is that their state is not shared between processes. This means that when a crash occurs in a software library, it will usually only affect the application that ran the faulty library routine. This provides some level of fault containment, though of course crashes should not happen at all, especially in system code that is shared by thousands of applications, possibly including fairly critical ones.

The simplicity of the library abstraction also makes for nice benefits in terms of computation performance. Library code is usually nearly indistinguishable from program code in terms of performance, as calling it only incurs a function call overhead. This overhead is so small (around 30 ns on a modern processor) that it only becomes a bottleneck when a function is called about 10 million times per second. Any multi-process scheme involves a context-switching overhead that is about 100 times higher (around 4 μs on Linux), so even the most efficient IPC protocol becomes a bottleneck in tight program loops much sooner (at around 100,000 calls per second).
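To put some back-of-the-envelope numbers behind that, taking the figures above at face value rather than as a benchmark: 10 million calls per second at 30 ns each is about 0.3 s of pure call overhead per second of execution, i.e. roughly a third of a core doing nothing but function calls. A synchronous request–reply exchange, on the other hand, needs at least two context switches, so 2 × 4 μs ≈ 8 μs per round trip caps a client at somewhere around 100,000–125,000 requests per second before the server has done any actual work.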

Another benefit of implementing system functionality as libraries is that they provide a simple scheme for task prioritization. Code that runs inside a library is understood by the system scheduler as eating into the host process's time budget, so if a decent process scheduling algorithm is available, it will work as well when processes run system libraries as when they run their own code. Achieving similar results when code runs inside an external process requires much more elaborate system abstractions, such as clients passing "scheduling tokens" to servers, allowing the latter to run a thread at the client's priority.
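To make the "scheduling token" idea a bit more tangible, here is a very rough sketch of the server side. The request structure is invented, and using POSIX real-time priorities is just one possible way to express "run at the client's priority", not a description of any particular OS:

    #include <pthread.h>
    #include <sched.h>

    /* Hypothetical IPC request carrying the client's priority as a "scheduling token". */
    struct request {
        int client_priority;
        /* ... the actual payload of the request would follow ... */
    };

    /* Server side: lift the worker thread to the requesting client's priority, so
     * that work done on the client's behalf is scheduled as if it were the
     * client's own code.  (On Linux this typically requires CAP_SYS_NICE.) */
    void handle_request(const struct request *req) {
        struct sched_param param = { .sched_priority = req->client_priority };
        pthread_setschedparam(pthread_self(), SCHED_RR, &param);
        /* ... perform the requested work at that priority ... */
    }

    int main(void) {
        struct request req = { .client_priority = 10 };
        handle_request(&req);
        return 0;
    }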

Finally, shared libraries ease deployment. This is not a big concern for OS functionality, which can be assumed to be either already there or readily available on any sufficiently recent OS install. But for third parties distributing code libraries, which is where much system functionality originates in open-source OSs, libraries make developers' lives a lot easier than process-based services do.

When to use external processes

As introduced before, the main way to implement system functionality besides libraries is to put it either inside the OS kernel or inside another user-mode process. For the purpose of this article the distinction between the two is irrelevant, so I will not make it in what follows. Suffice it to say that when an application process wants the system to perform some task, it locates the system service process responsible for that task and sends it a request to be fulfilled through an interprocess communication mechanism.
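As a rough sketch of what "locating the service and sending it a request" can look like in practice on a Unix-like system, assuming a hypothetical service listening on a well-known socket path (both the path and the little text protocol are invented for the example):

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void) {
        /* Locate the service: here, a hypothetical well-known socket path. */
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, "/run/example-service.sock", sizeof(addr.sun_path) - 1);

        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }

        /* Send a request and wait for the reply -- the "protocol" is invented. */
        const char request[] = "STAT /home/user/file\n";
        write(fd, request, sizeof(request) - 1);

        char reply[256];
        ssize_t n = read(fd, reply, sizeof(reply) - 1);
        if (n > 0) {
            reply[n] = '\0';
            printf("service replied: %s", reply);
        }
        close(fd);
        return 0;
    }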

One area where such server processes shine is updating. Only foolish programmers believe that they will get their code right the first time and that it will need zero maintenance and improvement in the future, yet updating software libraries is a surprisingly painful task. It is impossible when libraries are statically linked into their host program, pretty tricky even when they are dynamically linked (as the library must remain consistent with whatever part of it is included inside client programs), and extremely difficult at run time. With the looser coupling of IPC, updating a server process is comparatively much easier to achieve, even at run time if the server was designed for it.

Related to run-time updating is the problem of fault recovery. As mentioned before, library crashes cause an instant crash of the host application process too, which will cause user data loss if the application was not designed to keep its state backed up in a safe place. When a system server process crashes, however, its client process is still alive, which means that only a comparatively small number of system processes need to be designed for crash resiliency in order to protect the OS from its own crashes.

This also has implications for debugging. It can be pretty difficult to isolate the exact cause of a crash when an application dies upon running a library function, whereas in a client-server inter-process contract, the identity of the crashing process provides a direct and unambiguous indication of who is to blame for a software failure. Servers should never crash in response to client input, however invalid, and clients should never crash in response to server output, however invalid; end of story.

Another obvious benefit of putting system functionality inside processes is that system code becomes rigorously separated from application code. In OSdeving parlance, they live in separate address spaces. While this may become a liability in the event that the server process has a bug and ends up leaking data about one client to another, in general it has the major benefit of ensuring that clients and servers cannot tamper with each other's code, data, and stack. Moreover, their code is understood as separate by the OS's security infrastructure, which makes it possible to give different system access permissions to either.

The latter means that it is a lot easier to enforce a separation of concerns between applications and the OS, as well as between OS components, when system services are implemented as processes. To understand why this is useful, consider that if an operating system's virtual filesystem were implemented using shared libraries instead of a separate process, every application process would need full disk access in order to read or write a single file. Considering the potential for massive data loss from either malware or buggy processes acting drunk, that would be putting way too much trust in application processes, which is why even in the most library-friendly OSs, the VFS is never implemented that way.

Server processes are centralized and their integrity is guaranteed by the OS, so they are also perfect for implementing OS-wide policies. Although here, too, the security implications are obvious, let’s consider instead an OS in which users are allowed to change such UI properties as font sizes. If user interface rendering is handled by libraries, keeping such system settings consistent across applications is close to impossible, and requires active cooperation from every user application, whereas if GUI rendering is handled by a small number of system processes, keeping track of such system policy changes becomes a lot easier, even at run time.

The centralized nature and guaranteed integrity of server processes can also be used to avoid state and effort duplication. With the help of copy-on-write mechanisms and disk caching, libraries can partially mimic this efficiency for their code and constant data, but server processes take it to another level by making even data that is generated at run time shareable between processes. There is thus no need for every application process to repeat slow service initialization procedures, or to duplicate state that could be shared with other application processes, such as complex rendered bitmaps of user interface elements in GUI rendering.

Although shared libraries have a performance advantage in the realm of highly repetitive computational tasks, server processes shine in the area of asynchronous I/O and parallel computation. Most imperative programming languages are designed to process an isolated sequence of synchronous tasks, not to handle slow I/O or to exploit parallel hardware, and these limitations propagate into the libraries written in them. IPC protocols, by contrast, being artificial constructs, can be made much friendlier to asynchronous and parallel workflows, freeing application software from the burden of caring about these things when it performs no significant computational work of its own.
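To illustrate the point, here is a small self-contained sketch in which a client fires a request at a "service" (simulated here by a forked child over a socketpair, with an invented one-line protocol) and keeps doing its own work until the reply arrives, instead of blocking on the call:

    #include <poll.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        /* Stand-in for a connection to a real service process: a socketpair
         * with a forked child playing the part of a slow server. */
        int fds[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) < 0)
            return 1;

        if (fork() == 0) {                       /* "service" side */
            char buf[64];
            read(fds[1], buf, sizeof(buf));
            sleep(1);                            /* pretend the work is slow */
            write(fds[1], "done\n", 5);
            _exit(0);
        }

        /* "application" side: fire the request, then keep working until the
         * reply shows up rather than blocking on it. */
        write(fds[0], "RENDER button.png\n", 18);

        struct pollfd pfd = { .fd = fds[0], .events = POLLIN };
        while (poll(&pfd, 1, 0) == 0) {
            /* ... no reply yet: the application is free to do useful work here ... */
        }

        char reply[64];
        ssize_t n = read(fds[0], reply, sizeof(reply) - 1);
        if (n > 0) {
            reply[n] = '\0';
            printf("service replied: %s", reply);
        }
        return 0;
    }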

Conclusions

Modern operating systems love shared libraries, sometimes for excellent reasons. They are easy to use and deploy, standard, can’t take the whole OS with them when they crash, and naturally support fast and well-prioritized computations.

However, like every proverbial hammer, shared libraries can make every problem look like a nail, and it is important to keep some perspective on what they can't do well. In a sense, libraries and external processes are very much akin to client-side and server-side web development: simple and unimportant tasks are probably best served by libraries, while any major infrastructure would be better served by dedicated service processes.

External processes provide a better basis to build easily upgradeable, fault-tolerant operating systems. They should be considered necessary for any security-critical work, as well as whenever system-wide policies have to be maintained. And the performance cost of IPC may, in some cases, actually be offset by the benefits of resource centralization and the abstraction freedom that custom IPC primitives provide for asynchronous and parallel computation.

As an aside, let us note that the problems presented by shared libraries may also be encountered when one deals with interpreted programs. Due to the nature of the interpretation process, two unrelated programs, an interpreter and an application, have to interact in a fairly intimate way, which raises important security concerns that have yet to find a satisfying answer, even in modern variants of interpretation such as JIT compilation. There is a reason why a large fraction of security exploits originate from interpreter software such as office suite macros, CLI shell interpreters, web browsers, Adobe’s Acrobat and Flash, or Oracle’s Java runtime…

See you next week!
