Usability considerations

Usability matters, for all software. Unless you have a reliable way to force users to use your software no matter how crappy it is, you had better make sure that they enjoy it. And making software usable doesn’t have to be a chore: it can actually be a very fun and fulfilling challenge.

For example, contrary to popular belief, usability does not necessarily come at the expense of software power; it can actually enhance it, since usable software helps users discover the functionality that is available and use it more efficiently, by reducing both the frequency and the impact of errors. Usability can also improve the efficiency of security systems, by reducing the odds that users will try to work around them in potentially harmful ways instead of understanding them and making the most of them.

Now, if we want to improve the usability of something as big and general-purpose as an operating system, how do we go about it? Unfortunately, there is no set of Big Rules Set In Stone that will make every piece of software usable in a straightforward way. But over time, a number of rules of thumb have proven quite effective at making software more pleasant to use, and this blog post will be about discussing these. Continue reading

The scope of shared libraries

NOTE: This post was supposed to go online on the 19th, but remained marked as a draft in WordPress for some reason. Apologies for the unnecessary publication delay.

Operating systems are essentially about three things: abstracting out hardware, making unrelated application programs cooperate, and standardizing important functionality that is hard for application software to get right, such as GUI rendering or cryptography.

In current-generation OSs, the latter mission is most frequently fulfilled by shared libraries, sometimes complemented by kernel code and user-mode processes when these prove absolutely necessary. In practice, however, the benefits of isolating system code into separate processes may be underrated. This post will discuss the relative merits of these two ways of implementing system services, and why I believe that in many cases, the latter option should receive more love than it currently does. Continue reading
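
To make the contrast concrete, here is a minimal sketch in Python (a toy of my own, not code from the post): the same routine invoked in-process, the way shared library code is, and behind an isolated service process. When the routine fails, only the second design keeps the damage out of the application.

    import multiprocessing

    def risky_routine(data):
        """Toy stand-in for system code (e.g. a codec) that may fail badly."""
        if data == b"bad":
            raise RuntimeError("internal failure")  # simulates a crash
        return len(data)

    def service_main(conn):
        """Run the routine in its own process, like an isolated system service."""
        while True:
            data = conn.recv()
            if data is None:
                break
            try:
                conn.send(("ok", risky_routine(data)))
            except Exception as error:
                conn.send(("error", str(error)))  # failure is contained here

    if __name__ == "__main__":
        # Shared-library style: the system code runs inside our process,
        # so its failure is our failure.
        try:
            risky_routine(b"bad")
        except RuntimeError:
            print("in-process call blew up inside the application")

        # Service style: the failure stays inside the service process.
        parent, child = multiprocessing.Pipe()
        service = multiprocessing.Process(target=service_main, args=(child,))
        service.start()
        parent.send(b"bad")
        print("service replied:", parent.recv())  # ('error', 'internal failure')
        parent.send(None)
        service.join()

The price of that isolation is an IPC round-trip on every call, which is precisely the kind of trade-off the full post weighs.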

There’s more to standards than filesystems

About 40 years ago, when the UNIX operating system was designed, its creators thought that a powerful virtual filesystem, shell scripts, and text I/O streams ought to be enough for everybody. And this is how Linux ended up nowadays with many incompatible ways to do such fundamental things as boot-time initialization, GUI rendering, audio input and output, and now hardware-accelerated graphics.

“Them fools!”, probably thought the Windows NT designers some 20 years later. “We shall make sure that every system functionality is specified in a standard way, so that this never happens to us.” And nowadays, there are several incompatible APIs for pretty much every single functionality of Windows.

If there’s a lesson to be learned from this bit of history, I think it’s that operating systems need standard ways to do important things, but that doing standards right is unfortunately fiendishly hard. Continue reading

Doing one thing and doing it well

Back when the UNIX operating system was designed, its development team set out to follow a set of design principles that later became known as the “Unix philosophy”. One of these principles was that the UNIX software ecosystem should be planned as a set of small software modules that do one thing, do it well, and are easy to combine with one another in order to achieve a greater purpose. This approach is to be contrasted with the more common monolithic approach to software design, which attempts to build huge software products that address an entire business area on their own, such as Microsoft Visual Studio or Adobe Photoshop, and the present post will be dedicated to exploring it. Continue reading
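
As a toy illustration of that philosophy (my own sketch, not material from the post), here are three small Python functions that each do one thing and share a uniform “stream of lines” interface, which is what lets them combine as naturally as shell tools in a pipeline:

    import sys

    def grep(pattern, lines):
        """Keep only the lines containing the pattern (like grep)."""
        return (line for line in lines if pattern in line)

    def strip_comments(lines):
        """Drop everything after a '#' on each line (like a small sed script)."""
        return (line.split("#", 1)[0] for line in lines)

    def count(lines):
        """Count the lines coming out of the pipeline (like wc -l)."""
        return sum(1 for _ in lines)

    if __name__ == "__main__":
        # Equivalent in spirit to: grep ERROR file | sed 's/#.*//' | wc -l
        with open(sys.argv[1]) as f:
            print(count(strip_comments(grep("ERROR", f))))

Each piece stays trivial to test and to replace; the power comes from the composition.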

The importance of error management

After spending some time considering good practices for the handling and organization of data, let’s move a bit to the related subject of data loss and software reliability.

Most of the time, data loss occurs due to errors, a broad term which in software engineering jargon can designate two very different things:

  1. A hardware, software or human issue requiring attention, which breaks the normal flow of computer programs.
  2. A kind of user mistake where software does what the user asked for, but not what the user meant.

In this post, we will discuss what happens when errors are not managed properly, and software strategies for managing them by identifying, preventing, detecting, handling, and possibly reporting them. Continue reading
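
As a minimal sketch of what that chain of strategies can look like in practice (a toy example of mine, not taken from the post), consider saving a document to disk without risking the previous version:

    import os
    import tempfile

    def save_document(path, text):
        """Toy save routine illustrating prevent / detect / handle / report."""
        # Prevent: refuse input that we know will end badly later.
        if not isinstance(text, str):
            raise TypeError("document text must be a string")

        try:
            # Handle: write to a temporary file first, so a failure mid-write
            # cannot destroy the previous version of the document.
            directory = os.path.dirname(os.path.abspath(path))
            fd, tmp_path = tempfile.mkstemp(dir=directory)
            with os.fdopen(fd, "w") as tmp:
                tmp.write(text)
            os.replace(tmp_path, path)  # atomic on POSIX and modern Windows
        except OSError as error:
            # Detect + report: surface the failure in terms that a user-facing
            # layer can turn into an actionable message.
            raise RuntimeError(f"could not save {path!r}: {error}") from error

    if __name__ == "__main__":
        save_document("notes.txt", "don't lose me")
        print("saved; a failure mid-write would have kept the old version")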

Ideas for a universal serialized data format

To conclude the ongoing post series on data serialization and organization, I would like to introduce a proposal for a universal data serialization format. If you have studied the subject a bit, a knee-jerk reaction may be “Why? Don’t we already have JSON, CSV, XML, YAML, HDF5, and countless programming language-specific options?”. However, the huge number of formats we have suggests to me that the data serialization format problem has not been fully solved. And I hope I can convince you that I have a sound proposal to solve it in a very general way, without going for extreme complexity at the same time. Continue reading
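
To see part of the problem, here is a quick sketch (mine, not from the post) pushing the same record through two of the formats named above; each format forces its own compromise, which is why none of them has settled the question:

    import csv, io, json

    record = {"name": "Ampère", "year": 1775, "fields": ["physics", "math"]}

    # JSON keeps the nesting, but has no schema and no native binary support.
    print(json.dumps(record, ensure_ascii=False))

    # CSV streams straight into a spreadsheet, but it is flat: the nested
    # list has to be squashed into a string by some ad-hoc convention.
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["name", "year", "fields"])
    writer.writeheader()
    writer.writerow({**record, "fields": ";".join(record["fields"])})
    print(out.getvalue(), end="")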

Intermission

This week, I had a very busy weekend, one of my storage drives went bad on me (yooohooo, ATA errors madness!), and I’ve been attending the ESRF User meeting for professional networking reasons. All of this has eaten quite a bit into my weekly spare time, so there will be no post tomorrow. See you next week!