Doing one thing and doing it well

Back when the UNIX operating system was designed, its development team set out to follow a set of design principles that later became known as the “Unix philosophy”. One of these principles was that the UNIX software ecosystem should be built as a set of small software modules that do one thing, do it well, and are easy to combine with one another to achieve a greater purpose. This approach contrasts with the more common monolithic approach to software design, which attempts to build huge software products that address an entire business area on their own, such as Microsoft Visual Studio or Adobe Photoshop. The present post is dedicated to exploring that contrast.
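The composability this philosophy aims for can be sketched in a few lines of Python. The functions below are illustrative stand-ins for the classic grep, sort, and uniq tools, not real APIs; together they mirror the shell pipeline `grep err log | sort | uniq -c`.

```python
# Each function does exactly one thing; greater functionality
# comes from combining them, as in a shell pipeline.

def grep(lines, pattern):
    """Keep only the lines containing the pattern."""
    return (line for line in lines if pattern in line)

def sort_lines(lines):
    """Sort the stream (must materialize it, as sort(1) does)."""
    return sorted(lines)

def uniq_c(lines):
    """Collapse repeated adjacent lines into [count, line] pairs."""
    result = []
    for line in lines:
        if result and result[-1][1] == line:
            result[-1][0] += 1
        else:
            result.append([1, line])
    return result

log = ["err: disk", "info: boot", "err: net", "err: disk"]
print(uniq_c(sort_lines(grep(log, "err"))))
# -> [[2, 'err: disk'], [1, 'err: net']]
```

Each stage knows nothing about the others; they agree only on the "stream of lines" convention, which is exactly the role the byte pipe plays between UNIX processes.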

Of modules and monoliths

The modular approach to software design was devised to address a number of well-known shortcomings of the monolithic approach that came before it:

  • Monolithic software is resource-intensive, because using it involves loading, initializing, and paying the run-time costs of a large amount of functionality, only a fraction of which will actually be used
  • Due to its lack of focus, it is hard to design properly
  • Due to its design complexity, it is hard to understand and extend
  • Due to its implementation complexity, it is hard to optimize and debug
  • Due to its wide range of functionality, it is hard to secure
  • Due to its all-encompassing philosophy, it does not play well with other software
  • And due to its functional complexity, it is hard to grasp and master for users

Modular software design averts these problems to a large extent. It reduces the “big ecosystem design” concern of monolithic software to a simpler problem of vertical integration, where module interfaces must be made usable, future-proof, and consistent with one another. However, that’s also where its main weakness lies.

Designing a modular software ecosystem requires designing and documenting many more software interfaces than monolithic software needs. Of course, properly implementing monolithic software also requires splitting it into a large number of independent code modules, but the monolithic approach essentially allows developers not to do that when they don’t want to.

This means that when one needs to develop software quickly and on a low budget, either because the gods of economics commanded it or because the software is a hobby project and its developers want to keep it fun, the monolithic approach gives more flexibility. Of course, that flexibility comes at a serious cost in software reliability and maintainability, which should never be neglected, but these are concerns that users and company executives are usually not sensitive to until it is too late.

The fall of the UNIX legacy

For its day, UNIX did a fairly good job at being the poster child for modular software design. It had problems, of course, such as CLI argument inconsistency or an overweight kernel, but overall it got the basics right. Yet if one looks at the main heirs of UNIX today, Linux distributions, one cannot help but notice that they have increasingly strayed from the path of modular software design.

For example, I am typing this post on the 17.1 Xfce release of the Linux Mint distribution. At the bottom layer, it uses a modern Linux kernel, which is essentially a huge monolith that does everything from filesystem management to 3D rendering in a single big software package. Parts of it that are too dated for direct use but would be too expensive for developers to replace, such as the ALSA audio driver interface, are then abstracted away behind higher-level wrappers like PulseAudio.

Most of the graphical user interface that I use is generated and handled using an enormous monolith called X11, which is well known for its tendency to crash and take all applications out with it when it dies. The rest is shared between two other fairly big pieces of software, a desktop shell known as Xfce, and a huge monolithic library called GTK+.

From time to time, one of these monoliths grows so messy that it needs to be replaced, because like any good monolith, nobody understands it well enough to fix it. This happens an awful lot with audio, networking and graphics infrastructure, for example. Because the components involved are huge, replacing them tends to involve major development effort and to cause a small community crisis every time, with lots of software breaking, people ranting, and users of other OSs smirking.

So effectively, it is not too far of a stretch to say that the modern Linux ecosystem, in contrast to its UNIX ancestors, has become as monolithic and unmaintainable as its Windows and OS X competitors. I don’t know much about how the BSDs do it, but I can only guess that, because of their much smaller user base, they have no choice but to follow Linux technology trends most of the time if they want to reach comparable application and hardware support.

Modular software is still relevant

Now, we have to consider that since all of today’s personal computer operating systems were designed, the computing world has changed dramatically. And, in my view, it has done so in a fashion that only favors the modular software design model that was once heralded by UNIX.

We trust computers with more and more of our daily lives. They know everything about us, they have access to our bank accounts, and our jobs and lives depend on their continued proper operation in more ways than we usually realize. Gone are the days when critical software tasks were solely relegated to specialized high-integrity embedded systems: nowadays, even a tiny cellphone holds enough power over us to cause tremendous harm should a malfunction or security exploit place it outside of our control.

The issues raised by this growing trust in personal computers are showcased by the recent massive increase in ransomware attacks all over the world. These attacks remind us that it is no longer acceptable to let untrusted or partially trusted software do whatever it wants. As a protection against malicious software, application sandboxing is here to stay, and the only way to do it properly is to carefully separate software into isolated components with well-defined tasks, living in separate address spaces, something modular software designs have taken care of for ages.

Simultaneously, computer functionality is constantly expanding, and modular software is better suited than monolithic software to managing this expansion. Want to open many data file formats? Support many devices? Implement many cryptographic primitives? Handle many networking protocols? Modular software is a natural way to handle this kind of “more of the same” scenario, using the translator concept pioneered by UNIX and BeOS, where the implementation details of multiple algorithms are hidden behind a large number of independent software components, developed by many different people, all operating under common abstract interfaces.
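As a rough illustration of this pattern (all class and function names below are hypothetical), a common abstract interface can hide any number of independently developed handlers, and supporting one more format means adding one small component rather than growing a monolith:

```python
from abc import ABC, abstractmethod

class Codec(ABC):
    """The common interface every format module implements."""
    @abstractmethod
    def decode(self, payload: bytes) -> str: ...

class Utf8Codec(Codec):
    def decode(self, payload: bytes) -> str:
        return payload.decode("utf-8")

class HexCodec(Codec):
    def decode(self, payload: bytes) -> str:
        return payload.hex()

# A registry maps a format name to the component handling it;
# callers never need to know which concrete module does the work.
REGISTRY = {"utf-8": Utf8Codec(), "hex": HexCodec()}

def decode(fmt: str, payload: bytes) -> str:
    return REGISTRY[fmt].decode(payload)

print(decode("hex", b"\x01\xff"))  # -> 01ff
```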

Finally, a third way in which modular designs help software address the concerns of the 21st century is by making it easier to build fault-tolerant applications. When a monolithic program or library crashes, a huge amount of information is lost, and it is very hard to reconstruct that information in order to cleanly recover from the crash. On the contrary, small software components can much more easily be designed with full awareness of which parts of their state require backup and persistence, which do not, and how lost state can be recovered when the unforeseen happens.
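A minimal sketch of this idea, with made-up names: a small component that knows exactly which part of its state must survive a crash (a counter) and which can be thrown away (a cache), so that a restarted instance recovers cleanly from disk.

```python
import json
import os
import tempfile

class Counter:
    def __init__(self, path):
        self.path = path          # durable state lives here
        self.cache = {}           # throwaway state: rebuilt on demand
        self.value = 0
        if os.path.exists(path):  # recover after a crash or restart
            with open(path) as f:
                self.value = json.load(f)["value"]

    def increment(self):
        self.value += 1
        with open(self.path, "w") as f:  # persist only what matters
            json.dump({"value": self.value}, f)

path = os.path.join(tempfile.mkdtemp(), "state.json")
c = Counter(path)
c.increment()
c.increment()
del c                       # simulate the component dying
print(Counter(path).value)  # -> 2, recovered from disk
```

Doing the same for a monolith would mean auditing its entire state space; here the backup policy fits in two comments.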


In some areas, the design of UNIX and its descendants shows its age today. For example, their traditional focus on textual CLI interfaces does not mix well with non-technical and multilingual users, and the inconsistency between their end-user and developer interfaces leads to considerable duplication of development effort. Conversely, historical UNIX development concerns such as portability across many hardware architectures are much less relevant today, as Intel and ARM have essentially acquired a monopoly on all things personal computing.

However, it appears to me that the concept of building an operating system out of many small software modules is as brilliant and relevant today as it was in the days when UNIX was designed. Nor has the concept proven outdated elsewhere: the popularity of software based on a dataflow metaphor, such as Microsoft Excel, LabVIEW, SPAD, and Max/MSP in their respective realms of application, shows that modular software paradigms can be made very accessible to a non-programmer audience and used to produce extremely advanced software functionality.

Consequently, I believe that if a personal computer operating system were to be designed today, it would be very wise for it to follow the UNIX principle of basing itself on a modular design, with extra support for modern-day concerns like GUI operation, application sandboxing, and vertical integration. And I really hope that as time goes by, this software design lesson from the past will not be forgotten.

