Some thoughts on targeted user experience
Howdy everyone!
Just to follow up on my previous post and reassure you that I did not shoot myself or abandon the project: yesterday was a rather reassuring day as far as my coding skills are concerned ;) I gave myself two hours (2×50 minutes, with a 20-minute break in the middle) to write some code, and I wrote a satisfying amount of it during that time. If I tentatively put numbers on it, maybe 20-25% of the initial virtual memory management code (I won’t even try to estimate how much time the polishing step will take; I consider it an everyday process). Guess I should give myself deadlines and breaks more often, including when I code.
But this is not the subject I want to talk about today. Those pieces of memory management are worth nothing until they finally come together in a proper memory allocation system. Today, I wanted to talk to you about the targeted end-user experience. Because, well, it’s what matters in the end, and it’s been a while since I stuck my head in the code and didn’t take the time to think about it for a reasonably long while.
To sum it up, I would like to stick to the following principles:
- Maximum reliability, in all senses of the term.
- Slightest level of annoyance.
- Maximum level of flexibility and user freedom.
- And never, ever, forget usability.
In this blog post, I will explain what each of those points means. Part of this has already been said in the design work done earlier, but I think it’s good to recall it, and part of this is totally new.
The “reliable” adjective has two meanings, in English as in French. Derived from the verb “to rely”, it can be applied both to objects and people. One may rely on a tool if it always does its task properly, with no unexpected and destructive behaviour. Applied to a human being, it can take the much broader sense of trusting that person: will they keep a secret? If we lend them some money, won’t they run away with it? And so on, through every aspect of the complex system of human relationships where trusting someone is required.
With information technology, both meanings apply. We don’t want our computer to crash and lose some complex work we were doing on it (e.g. if SVN didn’t exist, I would be *highly* pissed off if I lost my whole OS codebase to some random virus). We don’t want it to give our credit card number to untrusted people or divulge private data about us, either. High reliability is achieved when people feel at ease with their computers, and when their computers are actually safe to use with such nonchalance.
The secret to high reliability is threefold: information duplication through backups (so that if some software or hardware fails, its data can be recovered); strong separation of system components (first to minimize the impact of any component crashing, second to reduce the chance of it crashing in the first place and to ease bugfixing, by keeping its code smaller and more open to careful human review); and a well-thought-out security model which only gives each application the bare minimum of security rights required to achieve its mission (so that the impact of it running amok or being infected is greatly reduced).
One of my goals is to bring those three approaches to my operating system.
As you may have noticed, “slightest annoyance” is not “no annoyance”. Now, I hear you right away: “Come on, you’re okay with computers being annoying now? You’re on your way to making the new Windows, dude.” I agree that computers should be as non-annoying as possible; my point is that for optimal productivity, some annoyance and complexity is sometimes required, though it should be avoided like the plague where it’s not needed.
Case in point: automated security and bugfix updates.
Keeping the system updated is necessary. Not applying updates means reducing system reliability, which we don’t want. It’s also necessary to inform the user that updates exist, first because not doing so is illegal in many countries, and second because we must take the blame if some security update introduces a regression which breaks a program. The user should not be led to think that an application which worked perfectly yesterday can suddenly break one morning for no reason; this is the kind of thing which leads many people to distrust computers nowadays.
Now, there’s a difference between these basic principles and displaying, each time our user turns on his computer (or worse, while he’s busy, interrupting his workflow), “New updates for X are available, do you want to install them?”, where X is Java, Flash Player, Opera, or whatever software he installed on his computer. This is WRONG. Annoying the user makes him hate updates, disable them, and reduce his security. Then he gets attacked by some virus and his files get trashed. Then he hates computers as a whole, and begins some pointless holy war against them.
The proper way of doing this is as follows: first, all software updates must be managed by a single application, created by the system manufacturer. This is globally a safer practice, as it reduces the number of third parties the user has to trust, and it allows the system manufacturer to keep better control over the update process and prevent it from becoming the hell described above. Next, permission to install security updates must be asked only once, preferably at first boot, when the user is busy setting up his computer anyway. The proper wording is something like this, I think:
“Although we do our best to provide you with best-quality software, some mistakes may go unnoticed during the testing phase. When a user tells us about a flaw he found, we listen to him and quickly fix it, so that no future user has to experience it.
You, too, can benefit from increased software quality, by allowing us to automatically fix the bugs and glitches in the software on your computer. It’s free, the process won’t interrupt your work in any way, and agreeing to this does not imply automatically installing new versions of our software if you don’t like them.
[X] I agree, keep my software as reliable as possible.”
And then, updating works just as advertised: a lowest-priority process, including from a network usage point of view. While using a non-free internet connection (e.g. a pay-per-MB CDMA/UMTS link), auto-updating is automatically disabled. No pop-up window or notification about updates, ever (except a tiny and extremely non-intrusive one as an apology, if a major regression ever occurs due to an update).
To paraphrase Einstein : “Everything should be as friendly as possible, but no friendlier”.
Maximum flexibility and user freedom
Looking at some commercial operating systems nowadays, a popular belief seems to be that end users are idiots: if we provide them with something they can break, they will break it. The answer is then to reduce user freedom to the bare minimum required to complete their tasks, for example by restricting access to the filesystem and system settings, or by preventing the user from installing applications which have not first been checked and digitally signed by some (unknown) third party.
In my opinion, this is taking computing backwards. Computers are powerful tools because they are flexible. You take the same machine, and you make an image editor, a web browser, or a general-purpose gaming system out of it. This is just wondrous. Why should we get rid of it? And how could some random bearded geek know exactly what each of his users needs, in order to make only those features available?
In my opinion…
- If a third-party software can mess up the computer without the user knowing, the operating system is faulty.
- If a user, presented with a dialog asking him whether it is safe to let a program do something, does not understand what that something is, and hence can’t decide and ends up clicking “Yes”, the operating system is faulty.
- If the user can do something that may somehow break the computer without knowing it, and thus without adopting the appropriately cautious mindset, the operating system is faulty.
- If it takes expert knowledge of computer science to know if something is dangerous or not, the operating system is, again, faulty.
Provided that the operating system warns the user about dangerous behaviours, whether his own or his applications’, in an appropriate, understandable, and friendly manner, nothing wrong should happen. There should be no need to jail the user. If such a need appears, the operating system must urgently be cured of the defects leading to this situation.
Well, in fact, most of what I say in this post is already about usability, but usability as a whole is a killer feature too, so let’s consider what it is in general. Usability is about two things: knowing your user, and adapting the product to his needs.
Our main user is someone who doesn’t want to know what’s under the hood of the computer, except when knowing it is an absolute requirement. Someone who’s busy, and doesn’t want to be annoyed, especially by the machine he’s working on. Tweaking the machine must be a hobby and a first-boot task, not an everyday task. For everyday use, the machine should work “just fine”: no questions whatsoever, no hassle. No manual should be necessary either; what the user is doing and what he has to do should be quite obvious to him.
We have other target users too, obviously, like the people who are going to develop software for this operating system, and the people who are going to install it on our non-knowledgeable user’s computer and service it when things go so wrong that the average guy can’t fix them. I’ve already tried to introduce some personas, user stereotypes, in an earlier post, but I still have to polish them a bit more.
Apart from that, all of our users are human beings, with various hardware. Their operating system must be human-friendly (including when humans become visually impaired or hard of hearing with age), and flexible enough to work just as nicely on all future and past hardware that has at least the power and screen size of a cheap 10″ netbook. There are good books on that subject; I can suggest an excellent one in French ;)
Well, that’s all I can think of right now with user experience in mind. It may sound obvious, but most of the software I use on a daily basis strangely lacks some or all of these simple, obvious qualities, which make the difference between good operating system software and great operating system software…
Thank you for reading, as usual!