
Some thoughts on targeted user experience

September 12, 2010

Howdy everyone!

Just to follow up on my previous post and reassure you that I did not shoot myself or abandon the project: yesterday was a rather reassuring day as far as my coding skills are concerned ;) I gave myself two hours (2×50 minutes with a 20-minute break in the middle) to write some code, and I did write a satisfying amount during that time. If I tentatively put numbers on it, maybe 20-25% of the initial virtual memory management code (I won’t even try to estimate how long the polishing step will take; I consider it an everyday process). Guess I should give myself deadlines and breaks more often, including when I code.

This is not the subject I want to talk about today, though. Those pieces of memory management code are worth nothing until they finally come together in a proper memory allocation system. Today, I wanted to talk to you about the targeted end-user experience. Because, well, it’s what matters in the end, and having had my head stuck in the code for a while, I haven’t taken the time to think about it properly in quite some time.

To sum it up, I would like to follow these principles:

  • Maximum reliability, in all senses of the term.
  • Slightest level of annoyance.
  • Maximum level of flexibility and user freedom.
  • And never, ever, forget usability.

In this blog post, I will explain what each of those points means. Part of this has already been said in the design work done earlier, but I think it’s good to recall it, and part of this is totally new.

Maximum reliability

The adjective “reliable” has two meanings, in both English and French. Derived from the verb “to rely”, it can be applied both to objects and people. One may rely on a tool if it always does its task properly, with no unexpected or destructive behaviour. Applied to a human being, the verb encompasses the much larger sense of trusting that person: will they keep a secret? If we give them some money, won’t they just keep it and run away? And so on, through all the aspects of the complex system of human relationships where trusting someone is required.

With information technology, both meanings apply. We don’t want our computer to crash and lose some complex work we were doing on it (e.g. if SVN didn’t exist, I would be *highly* pissed off if I lost all of my OS codebase to some random virus). We don’t want it to give our credit card number to untrusted people or divulge private data about us either. High reliability is achieved when people feel at ease with their computers, and when their computer is actually safe to use with such nonchalance.

The secret to high reliability is information duplication through backups (so that if some software or hardware fails, its data can be retrieved), strong separation of system components (first to minimize the impact of each component crashing, second to minimize the chance of each component crashing, and to ease bugfixing by making each component’s code smaller and more open to careful human review), and a well-thought-out security model which only gives each application the bare minimum of security rights required to achieve its mission (so that the impact of it running amok or being infected is greatly reduced).

One of my goals is to bring those three approaches into my operating system.

Slightest annoyance

As you may have noticed, “slightest annoyance” is not “no annoyance”. Now, I hear you right away: “Come on, you’re okay with computers being annoying now? You’re on your way to making the new Windows, dude”. I agree that computers should be as non-annoying as possible; my point is that for optimal productivity, some annoyance and complexity is sometimes required, though it should be avoided like the plague where it’s not needed.

Case in point : automated security and bugfix updates.

Keeping the system updated is necessary. Not applying updates means reducing system reliability, which we don’t want to do. It’s also necessary to inform the user that updates exist, first because not doing so is illegal in many countries, and second because we must take the blame if some security update introduces a regression which breaks a program. The user should not be led to think that an application which worked perfectly yesterday can suddenly break one morning for no reason; this is the kind of thing that leads to the distrust many people feel towards computing nowadays.

Now, there’s a difference between these basic principles and displaying, each time our user turns on the computer (or worse, while he’s busy, interrupting his workflow), “New updates for X are available, do you want to install them?”, where X is Java, Flash Player, Opera, or whatever software he installed on his computer. This is WRONG. Annoying the user makes him hate updates, disable them, and reduce his security. Then he gets attacked by some virus and gets his files trashed. Then he hates computers as a whole, and begins some pointless holy war against them.

The proper way of doing this is as follows: first, all software updates must be managed by a single application, created by the system manufacturer. This is globally a safer practice, as it reduces the number of third parties the user has to trust, and it allows the system manufacturer to keep better control over the update process and prevent it from becoming the hell described above. Next, permission to install security updates must be asked only once, preferably at first boot, when the user is busy setting up his computer anyway. The proper wording is something like this, I think:

“Although we do our best to provide you with best-quality software, some mistakes may go unnoticed during the testing phase. When a user tells us about some flaws he found, we listen to him and quickly fix them, so that no future user has to experience them.

You, too, can benefit from increased software quality by allowing us to automatically fix bugs and glitches in the software on your computer. It’s free, the process won’t interrupt your work in any way, and agreeing to this does not mean we will automatically install new versions of our software if you don’t like them.

[X] I agree, keep my software as reliable as possible.”

And then, updating works just as advertised: a lowest-priority process, including from a network usage point of view. While using a non-free internet connection (e.g. pay-per-MB CDMA/UMTS), auto-updating is automatically disabled. No pop-up window or notification about updates, ever (except a tiny and extremely non-intrusive one, as an apology, if a major regression occurs due to an update someday).
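As a minimal sketch of that policy (every function and parameter name here is mine, not from any real updater), the daemon’s “should we fetch now?” decision could look like this:

```python
# Sketch of the update daemon's "should we fetch now?" policy.
# All names are illustrative; a real daemon would query the OS
# for connection metadata and system load.

def should_fetch_updates(user_opted_in: bool,
                         connection_metered: bool,
                         system_idle: bool) -> bool:
    """Fetch only when the user agreed at first boot, the
    connection is not pay-per-MB, and the machine is idle enough
    that the download stays invisible."""
    if not user_opted_in:
        return False       # permission was asked once; respect the answer
    if connection_metered:
        return False       # never burn a metered connection
    return system_idle     # lowest priority: wait for idle time
```

Everything funnels into one quiet yes/no, asked by the daemon to itself, never to the user.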

To paraphrase Einstein : “Everything should be as friendly as possible, but no friendlier”.

Maximum flexibility and user freedom

Looking at some commercial operating systems nowadays, a popular belief seems to be that end users are idiots: if we provide them with something they can break, they will break it. The answer, then, is to reduce user freedom to the bare minimum required to complete their tasks, as an example by restricting access to the filesystem and system settings, or by preventing them from installing applications which have not first been checked and digitally signed by some (unknown) third party.

In my opinion, this is taking computing backwards. Computers are powerful tools because they are flexible. You take the same machine, and you make it an image editor, a web browser, or a general-purpose gaming system. This is just wondrous. Why should we get rid of it? And how could some random bearded geek know exactly what each of his users needs, in order to make only those features available?

In my opinion…

  • If a third-party software can mess up the computer without the user knowing, the operating system is faulty.
  • If a user, when presented with a dialog asking whether it is safe to let a program do something, does not understand what that something is, and hence can’t decide and ends up clicking “Yes”, the operating system is faulty.
  • If the user is doing something that can somehow break the computer without knowing it and without adopting the appropriately cautious mindset, the operating system is faulty.
  • If it takes expert knowledge of computer science to know whether something is dangerous or not, the operating system is, again, faulty.

Provided that the operating system warns the user about dangerous behaviours, from him or his applications, in an appropriate, understandable, and friendly manner, nothing wrong should happen. There should be no need to jail the user. If such a need appears, the operating system must urgently be cured of the defects leading to this situation.

Prioritizing usability

Well, in fact, most of what I say in this post is already about usability, but usability as a whole is a killer feature too, so let’s consider what it generally is. Usability is about two things: knowing your user, and adapting the product to their needs.

Our main user is someone who doesn’t want to know what’s under the hood of the computer, except when knowing it is an absolute requirement. Someone who’s busy and doesn’t want to be annoyed, especially by the machine he’s working on. Tweaking the machine must be a hobby and a first-boot task, not an everyday task. For everyday use, the machine should work “just fine”. No questions whatsoever, no hassle. No manual should be necessary either; what he’s doing and what he has to do should be quite obvious to the user.

We have other target users, obviously, like the people who are going to develop software for this operating system, and the people who are going to install it on our non-knowledgeable user’s computer and service it when things go so wrong that the average guy can’t fix them. I’ve already tried to introduce some personas, user stereotypes, in an earlier post, but I still have to polish them a bit more.

Apart from that, all of our users are human beings, with various hardware. Their operating system must be human-friendly (including when humans become visually impaired or deaf with age), and flexible enough to work just as nicely on all future and past hardware that has at least the power and the screen size of a cheap 10″ netbook. There are good books on that subject; I can give suggestions about an excellent one in French ;)

Well, that’s all I can think of currently with user experience in mind. It may sound obvious, but most of the things I use on a daily basis strangely lack some or all of these simple, obvious qualities that make the difference between good operating system software and great operating system software…

Thank you for reading, as usual!


From → OS development

6 Comments
  1. Compaq_Owner permalink

    I don’t know, I mean, it’s a good theory, but I think implementing it will be the hardest part.

    For example, there is a way to prevent all malicious code from being compiled: have all code checked, before compilation, against some big database in the sky. Maybe even a registry key is generated for that application, and the application is marked as safe for the system. But then again, that would piss developers off so much, and it would make compiling and debugging a longer process.

    I guess what I’m trying to say is that building an operating system for both low-level and high-level users is, well, a very difficult task. I won’t say impossible, but difficult nonetheless.

    It would be good if the computer knew how smart the user was, and then worked to that level. Maybe you should put in some kind of test at setup to see how smart the user is?

    But then, that would piss off the high level users…

  2. I would love to reply and argue with you about this, but first I must better understand what exactly you are talking about and consider a hardly implementable theory. Can you be more precise about it?

    As for when the multiple user targets enter into a conflict of interest, I have two answers.
    1/When no compromise is possible, the main targeted user has precedence as a general rule.
    2/At first boot, I indeed thought about asking “How knowledgeable do you consider yourself about computers?”, the answer adjusting some UI parameters to favor either power or ease of use in a careful balance. The wording must be extremely well thought out, though, so that no one finds it offensive. It must not lead inexperienced users to say they’re very knowledgeable, nor piss off more experienced users. Finally, it makes writing tutorials and documentation more complicated, which is why it would be best if we could avoid asking this question, in my opinion, although you’re right that it might become the best thing to do someday.

    About this…

    For example, there is a way to prevent all malicious code from being compiled: have all code checked, before compilation, against some big database in the sky. Maybe even a registry key is generated for that application, and the application is marked as safe for the system. But then again, that would piss developers off so much, and it would make compiling and debugging a longer process.

    Well, I don’t want to do this for several reasons.
    1/I think it won’t be needed if the app capability model, and the security system in general, are well thought out from the very start (only time will prove me right or wrong on this).
    2/This is a hobby project; I don’t have the financial resources it takes to build and maintain that big database in the sky.
    3/Databases only work against already-known viruses, like most antiviruses, while my idea would adapt itself to new viruses to some extent and hence sounds safer.
    4/Finally, I don’t want my OS to rely on extensive work from me or my hypothetical future trusted security board in order to be usable. I hate the likes of Steam, which go down if the company behind them goes down, and which require the user to trust an unknown and ever-changing board of random employees.
    5/Then, as you say, it would piss developers off. One of my bets is that I can make programming on my OS a very enjoyable task; this is not exactly the proper way to achieve that goal ;)

  3. Compaq_Owner permalink

    Arggggggh…

    Sorry, but after reading my last post I realised what a poorly constructed piece of text it was.

    I guess what I’m trying to say is that it would be good to have all applications written for every OS checked by some independent third-party group, like a “Safe Software Foundation” (if only it existed), and then once an application has been checked, it’s granted a certificate, so that the OS it’s on can declare it safe to execute.

    But it’s only an idea; I’m not saying you should do it, or that you have to. Maybe someone else reading this blog will say “hmm, that’s a good idea, I have the money, time and resources to make that happen”. And I don’t mean third party as in that guy down the road you buy all your spare parts from; I mean third party as in a group like ISO, or the Free Software Foundation. But hell, it’s only an idea.

  4. Well, this looks like an interesting idea, indeed, but I wonder who could have enough power on the computer world to make this a reality…
    * OS-independent apps already exist. HTML, JavaScript, Java, Python, and to a lesser extent C#, are cross-platform technologies. The problem is that they sometimes lack raw horsepower for power-hungry applications, and that they often lack integration with each OS as a drawback of their cross-platform model. All those issues must be fixed.
    * Checking all the applications in the universe requires some major financial, human, and computing power.

    There are some OSes which have their applications more or less deeply checked before entry, like Android and iOS, and this raises some concerns about the power and efficiency of the application-checking group, too. Those concerns must be heard and answered with a social structure that is not a dictatorship of any kind. Such a structure is harder to manage.

    Yup, you might have something of an idea there, but making it a reality would be hard. Not impossible, but hard. Little by little, whoever supports this idea might succeed, but only in the long term (~10 years minimum).

  5. Amenel permalink

    “death with age” ? that’s 100% of humans. I think you meant “deaf”.

    I pretty much agree with all you wrote, i.e. the principles. And just like Compaq_Owner, I am wondering how practical some ideas are.

    For instance, “all software updates must be managed by a single application”… I don’t know whether a model, between the Windows Setup.exe’s, the Mac OS X bundles and the Linux repositories, has emerged as the better software distribution/installation model, although I have a preference for the Mac bundles and I avoid Linux partly because of the dependencies nightmare.

    In my eyes, being a Java dev in my daytime job and a fervent admirer of the Eclipse platform, I think any software should be hot-updatable: I mean, updating should simply consist of overwriting a lib file with the newer version, whether it be one or many libs/files. Unload the already loaded dynamic lib and reload and you’re done; problem is, I’m not sure any dynamic library system allows this, which is a shame. For sure, Java does not. The program, meaning the .exe that a Windows user double clicks, should just be a stupid bootstrap thing, not more. I can already hear people screaming about performance, etc. All computers are fast enough for doing this on-the-fly. Apps just have to be better coded and rid of the bloat that fattens them. I know you’ll agree with at least part of this as I recall that you’ve written about on-the-fly patching of live processes.

    The Windows DLL with multiple versions of the files scattered all around the system and application folders with no directly identifiable version number? Hell. The Linux dependency thingy? Nightmare-ish joke. The Mac bundles seem to be the best solution so far, despite the space wasting… which is not so much of a problem for me.

    Another example of things whose practicality is not that obvious: the system preventing software misdeeds… Will there be a giant permissions list for each software? And if yes, how will the permissions be granted/refused?

    By the way, did you listen to the early OSnews podcasts? I don’t remember the exact number, was it 8, 9 10 or else, I don’t remember as I’ve listened to several episodes during the last week-end, but Kroc talked about the same “annoyance” that you talked about and he takes a similar stance about programs getting in the way of using the computer. Only he was talking about the Ubuntu notification system and how some developer abused it and made notifications for things like “your application image preview is downloading…”. It was hilarious.

    Lastly, what’s the excellent book in French you were referring to? I could use some insights as I’m currently giving a lot of thought into UI, usability, ease of use, intuitiveness, etc. Oh, btw, I surrendered to the smartphone trend and since yesterday, I have an HTC Desire, the Android power monster. I must say I am not impressed, maybe even disappointed. I remember your article about touch screens, cloud computing, etc. It’s really not intuitive and some basic things like “Rename” or “Move” or configuration options are nowhere to be found. “Frustrated” is my current state of mind and I don’t get yet what all the fuss around touchscreen-ed smartphones is all about.

  6. “death with age” ? that’s 100% of humans. I think you meant “deaf”.

    Indeed. Thanks for the fix!

    I pretty much agree with all you wrote, i.e. the principles. And just like Compaq_Owner, I am wondering how practical some ideas are.

    Well, I see two possible sources of issues: unfeasible implementations, and a non-cooperative user base for things which require user cooperation.

    Against the second source, starting a new codebase that’s explicitly not an N-th Windows/Unix clone seems like a good start. Users, be they end-users or developers, will expect something new, including new rules. If they do not agree with those rules, they can discuss them with the members of the team and see if we can reach an agreement. Otherwise, well… this project is not for them; it’s sad, but I fear we can do nothing about it.

    Against the first source, I try to think about the working product when I envision something, and mentally stress-test the idea a little (e.g. how would my everyday software behave in that environment? What would on-screen I/O be like?). That should remove some of the most obvious traps. For the rest, well… I’ll know it when I find myself totally stuck, and at that point it’ll still be time to look for an alternative, or to ditch the idea altogether if there’s none.

    Sure, many of my ideas are ambitious, if they are implementable at all. But in my opinion, that’s the point of hobby OSdeving. Breaking compatibility, experimenting with possibly unfeasible new ideas, trying to do things the way I think they should be done… The frustration of discovering someday that one of my ideas is not implementable in practice is the price I pay for this, but I think that it’s really worth it.

    For instance, “all software updates must be managed by a single application”… I don’t know whether a model, between the Windows Setup.exe’s, the Mac OS X bundles and the Linux repositories, has emerged as the better software distribution/installation model, although I have a preference for the Mac bundles and I avoid Linux partly because of the dependencies nightmare.

    In my case, it’s technical constraints that make the choice: being a hobbyist, I don’t have the financial resources it takes to maintain a full system repository and keep it safe from the dependencies nightmare. At best I can host updates for the OS itself, but I cannot host them for every single fart app out there. Plus, I want this project to depend as little as possible on its team once its codebase is mature, so things should be managed in a decentralized fashion, just like on Windows and OSX.

    It seems to me that a Windows installer has more or less the same contents as a .app bundle. The difference is that it asks the user to give admin permissions to an untrusted executable, spreads its contents all over the hard drive and the Windows Registry in a messy fashion, potentially alters system settings or other applications (e.g. by installing toolbars in web browsers), and finally fills up several locations of the user’s workspace with shortcuts, sometimes without asking whether it should do that. Oh, and its repetitive “next, next, I agree, next” model introduces bad habits in the user base: any malware with an installer look and next buttons will tend to be trusted much more easily.

    So for me, the answer is obvious : I must start from the OSX app bundle model and tweak it to fit my needs. It’s not perfect, but it’s the sanest option of all three.

    To ensure that updating is done in a system-wide fashion, I’d proceed like this:
    -Forbid applications from overwriting executable files except with special user permission (development tools would require this permission). Make sure that the official IDE issues a clear warning about why this is being done. Aside from offering some protection against old-fashioned malware, this makes sure that developers trying to introduce their own update solution, as they always do on Windows, will encounter some issues and read the software development guidelines more carefully before going any further.
    -Propose an update API that allows the update daemon to download updates from the app manufacturer’s website and install them in a controlled fashion. Make it simple and fun to use, so that developers actually use it instead of bypassing the previous limitation (which is not meant to be fully attack-proof, but only to act as a warning).
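    A toy sketch of that two-part scheme (every name here is invented for illustration, and a real kernel would enforce this at the filesystem layer, not in application code):

```python
# Toy sketch: executable files are write-protected by default, and
# the update daemon is the only code path allowed to replace them.
# Every name here is invented for illustration.

class Kernel:
    def __init__(self):
        self.privileged = set()   # apps granted "overwrite executables"

    def write_executable(self, app, path, data, filesystem):
        if app not in self.privileged:
            # Ordinary apps hit a wall here, and hopefully read the
            # development guidelines instead of shipping their own updater.
            raise PermissionError(app + " may not overwrite " + path)
        filesystem[path] = data

kernel = Kernel()
kernel.privileged.add("system.update_daemon")
fs = {"/apps/editor/bin": b"v1"}

try:
    kernel.write_executable("com.vendor.editor", "/apps/editor/bin", b"v2", fs)
except PermissionError:
    pass                          # third-party self-updater blocked
assert fs["/apps/editor/bin"] == b"v1"

# The update daemon goes through the same call, but is allowed:
kernel.write_executable("system.update_daemon", "/apps/editor/bin", b"v2", fs)
assert fs["/apps/editor/bin"] == b"v2"
```

    The point is that both paths exist, but only one is blessed; the other fails loudly enough to push developers toward the update API.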

    In my eyes, being a Java dev in my daytime job and a fervent admirer of the Eclipse platform, I think any software should be hot-updatable: I mean, updating should simply consist of overwriting a lib file with the newer version, whether it be one or many libs/files. Unload the already loaded dynamic lib and reload and you’re done; problem is, I’m not sure any dynamic library system allows this, which is a shame. For sure, Java does not. The program, meaning the .exe that a Windows user double clicks, should just be a stupid bootstrap thing, not more. I can already hear people screaming about performance, etc. All computers are fast enough for doing this on-the-fly. Apps just have to be better coded and rid of the bloat that fattens them. I know you’ll agree with at least part of this as I recall that you’ve written about on-the-fly patching of live processes.

    Yup, I did, and I still think it’s a good idea, but getting it to work might require some help from app developers, especially those who make compilers. Here are my thoughts about it so far:

    First, let’s consider a library update. I think we both know how library loading works internally: the operating system looks for undefined symbols in the executable file’s headers, then looks for the corresponding symbols in the library’s headers. It loads the library and the program in memory, then makes sure that the program’s symbols are replaced with memory addresses pointing to the library’s functions.
    Now consider the following mechanism: when a new version of the library comes around, the system determines whether some program is currently using the old version. If so, it finds that program’s executable file and has a look at its symbol table. This way, it knows which memory addresses in the program’s executable segment have to be changed. So it loads the new version of the library, and the program’s executable segment is tweaked so that it points to the new library. Updating is complete.
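    As a toy model of that relink step (a simulation in ordinary code, not real ELF patching): represent the indirect call slots the loader fills in as a table, and “updating” rewrites each slot to point at the new library’s function:

```python
# Toy model of dynamic-library relinking: the "executable segment"
# calls through indirect slots (much like a GOT) that the loader
# filled in at load time. Hot-updating rewrites the slots so they
# point at the new library image. All names are illustrative.

def lib_v1_greet():
    return "hello from v1"

def lib_v2_greet():
    return "hello from v2"

# The loader resolved 'greet' against v1 when the program started.
got = {"greet": lib_v1_greet}   # toy global offset table

def program():
    # The program only ever calls through the table, so swapping
    # the slot is enough to retarget every future call.
    return got["greet"]()

assert program() == "hello from v1"
got["greet"] = lib_v2_greet      # the update daemon's relink step
assert program() == "hello from v2"
```

    Real ELF relinking patches relocation entries rather than a Python dict, but the invariant is the same: calls go through one level of indirection that the system alone rewrites.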

    This is the simplest way of doing this that I can think of. It works provided that the following assumptions are satisfied:
    1/The program’s executable segment has not been modified since it was loaded (self-modifying code is bad anyway, so let’s just forbid it: no program segment should have the W and X attributes at the same time).
    2/The executable file has not been modified either (another good reason to forbid its modification).
    3/The loading process has remained unchanged, so that the kernel can reliably know where that executable segment is (no issue, the ELF spec is not subject to change).
    4/The library does not maintain an internal set of data even when none of its functions are running (global variables, static class properties…) that would not be transferred to the image of the new library loaded in memory.

    That last issue, unlike the others, is far from trivial to get around. In fact, I don’t think it is possible while keeping strict System V ELF compatibility. Should we break that compatibility, an option would be to have compilers register all global variables in an OS-specific part of the ELF headers, so that the update daemon has enough information about them to transfer their contents to the new version of the library.
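    In miniature, the proposed state transfer could work like this (the registry of globals stands in for the hypothetical OS-specific ELF section; all names invented):

```python
# Miniature of the global-state transfer: each library "image"
# declares its globals in a registry (standing in for the proposed
# OS-specific ELF section), and the update daemon copies values
# from the old image into the new one before the swap.

class LibImage:
    def __init__(self, version):
        self.version = version
        # Globals the compiler registered for this library version,
        # starting at their initial values.
        self.globals = {"request_count": 0, "cache": {}}

def transfer_state(old, new):
    # Copy every registered global that both versions declare;
    # globals introduced by the new version keep their defaults.
    for name in new.globals:
        if name in old.globals:
            new.globals[name] = old.globals[name]

old = LibImage("1.0")
old.globals["request_count"] = 42   # state accumulated at runtime
new = LibImage("1.1")
transfer_state(old, new)
assert new.globals["request_count"] == 42
```

    The hard part the sketch glosses over is layout: if the new version changed a global’s type or size, a plain copy is no longer valid, which is exactly why the compiler’s cooperation is needed.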

    Now, consider hot-updating of programs. There are several new issues to consider compared to libraries:
    1/All programs have an internal state that must be transferred from the old version of the program to the new version. So finding an answer to that problem becomes mandatory, unless we want all code to be interpreted and the interpreter to be aware of such internal OS mechanisms in order to work. Not really attractive.
    2/It’s generally safe to assume that at some point, a library will not be called and can be replaced. This is not the case with programs, which are always in the middle of doing something except when they’re calling a library. The problem here is that what a program is currently doing is information that’s nightmarishly hard to store when what you have in mind is to restore a different version of the program in the same state.

    The solution for the library can be applied to issue #1 about programs as well, but as far as #2 is concerned, I don’t know what I can do about it. In this precise case, I’d tend to use the ostrich algorithm: programs tend to be small and spend most of their time calling large libraries, so there’s little chance that they actually include a vulnerability themselves. If the update is a bugfix and the user encounters the bug while the update is being installed, he’ll just close the program and re-launch it, effectively starting the updated version. We can afford such isolated crashes. The sole thing to do to increase reliability in this model is to heavily encourage devs to use dynamic linking instead of static linking, since statically-linked libraries can’t easily be updated.

    The Windows DLL with multiple versions of the files scattered all around the system and application folders with no directly identifiable version number? Hell. The Linux dependency thingy? Nightmare-ish joke. The Mac bundles seem to be the best solution so far, despite the space wasting… which is not so much of a problem for me.

    Actually, OSX’s solution is just as DLL-hell-prone as Windows’. There are system-wide SOs and per-application SOs on OSX too (though the latter are hidden in bundles), and DLLs have the option of including an identifiable version number in their headers, if I remember correctly. The OSX solution is as poor from a security point of view as the Windows one (because if a piece of software doesn’t get updated, its libraries don’t get updated either), and as good from a software distribution point of view too (software using incompatible or bleeding-edge versions of the same library can go on living a life of its own).

    I just happen to choose the Mac bundle solution because it’s best from a distribution point of view, but between DLL hell and dependency hell, no perfect solution to this problem has been found yet, afaik. I just choose the compromise which fits my needs best.

    Another example of things whose practicality is not that obvious: the system preventing software misdeeds… Will there be a giant permissions list for each software? And if yes, how will the permissions be granted/refused?

    There would be a set of basic tasks that all software can do, because they are almost always needed and not very dangerous. Memory allocation is an example of a task that all software has the right to do (the software allocating the most memory being killed in the event of a memory outage anyway). For filesystem-related permissions, that could be full access to the application’s private folder (except write access to its executable files), the right to use system-wide libraries, and access to files that the user has explicitly given to the application (as an example through drag-and-drop, shell parameters, or some standard open/save dialog).

    Then, to do some dangerous tasks, the application would have to acquire an explicit permission. Just like in real life you can own a pair of round-tipped scissors, but have to acquire a licence if you want to own a firearm. As an example, a compiler has to acquire permission to create and edit executable files, because this is useful for malware and not for normal apps.

    An example of a mechanism for acquiring permissions would be a file, located in a standard location of the app bundle, that describes the permissions required by the application. On its side, the operating system keeps track of the permissions granted to each application. If the application is launched for the first time, or requires new security permissions compared to the previous launch (as an example following an update), a warning is issued to the user.

    That warning (I’ve made some mockups if you want) explains that the software wants to do something unusual (the “unusual” side of it is extremely important: the user must feel concerned, and the warning dialog must not be omnipresent and considered an everyday hassle). It gives the list of security permissions required by the software and offers quick on-demand explanations of what each permission means and why the user’s approval is required before software is allowed to do that (e.g.: “This software wants full access to your personal directory. This requires your permission, because malicious software can use this access to wipe out all your personal data”). If the user agrees, the system updates its internal data about app permissions and never asks again.
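    The “ask once, and only about what’s new” logic boils down to a set difference (manifest format and permission names are invented for the sketch):

```python
# Sketch: compare the bundle's declared permissions against what was
# granted at the previous launch, and prompt only for the difference.
# Manifest format and permission names are invented.

def permissions_to_prompt(manifest, granted):
    """Return the declared permissions the user has not yet approved."""
    return manifest - granted

# What the OS remembers from previous launches:
granted_store = {"app.editor": {"read_own_folder"}}

# The bundle's manifest after an update added a new requirement:
manifest = {"read_own_folder", "full_home_access"}
new = permissions_to_prompt(manifest, granted_store["app.editor"])
assert new == {"full_home_access"}   # only this one triggers the dialog

# If the user agrees, record it and never ask again:
granted_store["app.editor"] |= new
assert permissions_to_prompt(manifest, granted_store["app.editor"]) == set()
```

    An unchanged app therefore launches silently every time; only a first launch or a manifest change can produce the dialog.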

    If running software tries to do something it has not previously been permitted to do, the system kills it instantly by default and reports the intrusion to the user. Power users might have the option to tweak granted permissions themselves, or to have the privileged instruction replaced by a NOP or a simple warning instead (handy for debugging, for example, but not a sane default, because it can lead applications to do complete nonsense).
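    The three enforcement behaviours just described could be sketched like this. The policy names, function signature, and the use of an exception to stand in for "the kernel kills the process" are all illustrative assumptions:

```python
from enum import Enum

class Policy(Enum):
    KILL = "kill"   # default: terminate the offender, report the intrusion
    NOP = "nop"     # power-user/debug: skip the privileged operation
    WARN = "warn"   # power-user/debug: allow it, but complain loudly

def enforce(app, permission, granted, policy=Policy.KILL):
    """Decide what happens when `app` attempts an operation guarded
    by `permission`. Returns True if the operation may proceed."""
    if permission in granted:
        return True
    if policy is Policy.KILL:
        # Stands in for the kernel killing the process and notifying the user.
        raise SystemError(f"{app} killed: unauthorized '{permission}'")
    if policy is Policy.NOP:
        return False  # the privileged instruction becomes a no-op
    print(f"warning: {app} attempted '{permission}' without a grant")
    return True       # WARN mode lets the operation through, loudly
```

The point of making KILL the default is that NOP and WARN leave the application running in a state its developer never tested, which is exactly the "complete nonsense" risk mentioned above.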

    Now, making this work is where it becomes even more fun. If we want the user to actually read our dialog, most apps must require only a few permissions, and those permissions must be human-readable. The "human-readable" aspect in turn requires proper wording ("accessing your personal folder" rather than "recursive file access to ~/*", for example). The "few permissions" part is where developing this becomes an interesting challenge: it requires a handful of somewhat vague, high-level permissions, each grouping together several low-level permissions while remaining safe.

    Add to this that, for obvious security reasons, applications should not define permissions themselves. So the system should provide, as far as possible, a complete set of permissions that describes everything that can go wrong in a clear fashion and covers every developer's use case. Of course, more permissions can be added through a system update (or as a privileged instruction ;)), but that's a fallback and should not be treated as a universal solution.
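    Tying the two ideas together, a system-defined table could map each user-visible, high-level permission onto the low-level rights the kernel actually checks. The table contents and naming scheme below are made up for the sake of the example:

```python
# Hypothetical system-defined mapping: a few human-readable,
# high-level permissions, each grouping several low-level rights.
HIGH_LEVEL = {
    "full_home_access": {           # shown as "access your personal folder"
        "fs.read_recursive:~",
        "fs.write_recursive:~",
    },
    "create_executable_files": {    # needed by compilers, abused by malware
        "fs.create:exec",
        "fs.write:exec",
    },
}

def expand(high_level_grants):
    """Translate the user-visible grants into the set of low-level
    rights that enforcement actually tests against."""
    rights = set()
    for name in high_level_grants:
        rights |= HIGH_LEVEL[name]
    return rights
```

Because only the system defines `HIGH_LEVEL`, an application can never smuggle in a vague permission of its own; and because the dialog only ever shows the handful of top-level names, the list stays short enough that the user might actually read it.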

    So here’s the current state of the security idea. It’s challenging but doable, I think. And you, are you convinced too ? ;)

    By the way, did you listen to the early OSnews podcasts? I don't remember the exact episode, 8, 9, 10 or another; I listened to several over the last weekend. But Kroc talked about the same "annoyance" you mentioned, and he takes a similar stance about programs getting in the way of using the computer. Only he was talking about the Ubuntu notification system and how some developers abused it, producing notifications for things like "your application image preview is downloading…". It was hilarious.

    I didn't listen, because I have trouble staying concentrated on audio or video content for a long time, especially when the sound quality is not crystal clear. I prefer reading: it goes at the speed I want, and I can reliably skip the boring parts ;)

    But yeah, the issue of notifications and pop-ups annoying and distracting the user is well known and has been reported by usability experts for a long time. The KDE4 notification system is abused in its default settings as well, IMO. I plan to work on this later, when I get to the UI part, and it promises to be quite an interesting problem to solve.

    Lastly, what's the excellent book in French you were referring to? I could use some insights, as I'm currently giving a lot of thought to UI, usability, ease of use, intuitiveness, etc.

    Ergonomie web, by Amélie Boucher, published by Eyrolles in the Accès libre collection. I found the second edition while browsing the Gibert Joseph near the university. As its title suggests, the book is primarily about web site design, but it contains a wealth of fascinating material on ergonomics in general and software ergonomics in particular. It also has a good bibliography if you want to dig deeper into any particular area of ergonomics. (Phew, it feels good to write a bit in French from time to time ;))

    Oh, by the way, I surrendered to the smartphone trend, and since yesterday I own an HTC Desire, the Android power monster. I must say I am not impressed, maybe even disappointed. I remember your article about touch screens, cloud computing, etc. It's really not intuitive, and some basic things like "Rename" or "Move" or configuration options are nowhere to be found. "Frustrated" is my current state of mind, and I don't yet get what all the fuss around touchscreen smartphones is about.

    Nice to see that some people think the same way as me on that matter ;) Endlessly arguing with Tony Swash on OSnews just for the fun of it sometimes makes me feel like a dinosaur for thinking that touchscreens are just a spawn of the old 2D WIMP interface, with poor input resolution but nice flexibility on the output side.

    I discovered capacitive touchscreens on my cousin's iPod Touch, and was likewise left largely unimpressed. It just looked so… uninteresting. I thought that direct physical contact with the device might make things feel better, but in the end I only miss the haptic feedback and precision of old-fashioned buttons and stylus. It's fun to toy with, but it's not the revolution everyone claims it is; just a way to put a portable media player, a portable gaming console, and a cellphone in a single device without suffering button overflow. As Jeff Han said when he presented his FTIR technology for cheap large multitouch screens at TED, multitouch input only becomes a really interesting means of input at large screen sizes.

    In my opinion, touch screens are currently good for showing off and for unifying several current content-consumption devices. They allow nice aesthetics, and maybe in the future they will allow sturdier portable devices (though as a physicist I have some doubts about that, because of the increase in weight). But working on one? Oh no, no, no thanks, I'll keep my E63 ^^ As much as Symbian deserves some polish, it's just the best for getting things done at the moment, in my opinion.
