Warning: This paper includes strong opinions and harsh criticism of many current ways of thinking in the computing world. Children below the age of 8 may get a headache from attempting to read it.
It is common, these days, to hear about the “post-PC era”.
If we were to believe the users of this expression, the post-PC era is some upcoming time when computer manufacturers stop building for engineers and start building for the user, with more usable interfaces and a feature set better tuned to each person’s needs. In practice, two categories of devices are often classified as “post-PC”:
- App-centric touchscreen computers (some phones, tablets, …)
- “Cloud”-centric computers and services, which are not designed for offline applications and data (Chromebooks; iCloud has a bit of this but isn’t quite there yet, …)
Before talking more about these devices, I’d like to begin by appreciating the irony of the terminology.
“PC” stands for “Personal Computer”. You can associate it with two things: either the generic concept of personal computing, that is, independent computers designed to be owned by an individual with little CS background, or IBM’s trademark on one of the first computers of this kind, whose heirs currently govern the computer world.
What did we have before personal computers? Very expensive and complex machines that either filled an entire room (mainframes) or a significant part of it (minicomputers). Due to these characteristics, individuals could not afford them. Their owners and maintainers were also, reasonably enough, wary of letting users rent and operate them directly. So a device which quickly became very popular was the computer terminal, or thin client. It was basically a keyboard, a screen, and some basic circuitry. Computer terminals gave the user limited (but sufficient) access to a larger computer’s capabilities, and reduced the risks for the owner. Spilling coffee on an inexpensive terminal was much less of a big deal than doing so on a mainframe that cost millions of dollars or a minicomputer that cost a few tens of thousands.
Terminals also paved the way for time-sharing systems, on which several people use the same big computer at the same time, in a multitasking fashion. For computing companies, this was great news, because it meant more people paying rent without any need to buy and maintain more hardware on their side. They offered less, and they earned more. Such were the early days of computing: users were the slaves of big companies which had absolute power over them, because the characteristics of those days’ computers made it the best option.
Fast forward to 2011. Today, we have computers that are significantly more powerful, smaller, and easier to operate than the mainframes of the past. They are good enough for most people’s daily work, and they cost the price of a bicycle. The use of computer terminals has regressed considerably, to the point where they are only used by some large institutions that value the ability to centrally manage the whole company network from the techies’ office. Paying for computing time has become extremely uncommon, and is only done for extreme computing needs (like high-precision, large-scale physics and market simulations), using monster computers called supercomputing clusters. The main remnant of centralized computing in people’s everyday life is the Internet, and healthy competition has done a good job of making basic web services like e-mail or online picture and video storage affordable and usable by everyone. Network access costs are, for now, ridiculously cheap in many countries: around the same price as basic land-line phone service, without any pay-per-use extra.
This is what the “personal computing” expression means to me: setting computer users free from the influence of dictatorial corporations, giving them independent computing power and data storage, and making the power of computing affordable to everyone. It restores some equilibrium to the sysadmin-user relationship, compared to the admin’s past “high priest” status. The sysadmin is the person to meet for tricky questions, but you can solve everyday problems by yourself.
So now, let’s examine what people call “post-PC” these days…
Post-PC or pre-PC?
With this history in mind, the “let’s put everything on the cloud” tendency is interesting, to say the least. It means that now that computers are cheaper than ever and data storage is affordable for everyone, instead of asking for more usable operating systems and easier home backups, users are supposed to willingly put themselves back in the hands of big server owners, and let Mamma manage everything for them. And since Mamma is in control of their data, which is the most precious part of their computer in the end, she can start charging a hefty price for her services anytime, in the purest form of blackmail, and they’ll have no choice but to pay or lose their hard work.
As long as the web remains a centralized network, doing that essentially amounts to going back to the pre-PC era, with the comfort of new technology as the sole extra. By doing so, we’d also transfer tremendously high volumes of data through the invisible pipes behind the web, which will certainly not please the ISPs that have to invest in faster infrastructure and get no extra money for doing so. In fact, ISPs are already complaining in some countries. The possible outcomes I see are either Internet prices flying high like in the early days, or an alliance between ISPs and cloud companies. The latter is a terrible prospect from a monopoly-abuse point of view.
Touchscreen device Q&A
Modern touchscreen ecosystems are pretty interesting too, but for more subtle reasons, which I’ll now explain starting with one question: what is a computer? What is it that makes computers so great and gives them such universal reach in this world, compared to other technological tools like wrenches, drills, pumps, and space telescopes?
Computers are universal. They can be used to solve a significantly larger class of problems than other existing tools, offering a power comparable to that of pen and paper in this regard. They are also one of the few tools which can significantly help the human mind, rather than simply giving users a stronger body.
What is it that makes computers so universal?
Computers are flexible and programmable. Knowledgeable people can easily turn a computer into a tool suitable for purposes that were not even imaginable at the time the computer was designed. Computers have a very basic and generic instruction set at their core, and it’s precisely for this reason that they can serve so many purposes.
Who can program desktops and laptops?
Current-generation desktops and laptops may be programmed by anyone who has the knowledge to do so. There are tons of freely and widely available tools around that make it a relatively comfortable and easy task, once one has gotten past the initial language-learning barrier. This is why such computers nowadays have an extremely wide range of software targeting an extremely wide range of domains. Past computers, which could only be accessed by an elite, served far fewer purposes.
Who can program touchscreen devices like iPod Touches, iPads, and Android phones and tablets?
Due to their limited interfaces, but also to explicit design choices from their manufacturers, these devices may only be programmed using a desktop or laptop computer, and only one from a specific brand in the case of iDevices. The barrier to entry for software developers is higher than on traditional desktop and laptop platforms, due to higher financial costs and lower profits and visibility for apps. Also, the fact that a single actor has a monopoly on software distribution on each platform means that it does whatever it wants, including trashing any software that may hurt its own business on the platform.
So what if desktops and laptops became solely used by touchscreen device developers?
They would become a specialized tool, so the number of manufacturers would sink and prices would fly high. So high, in fact, that they’d take up a significant part of a developer’s budget, raising the barrier to entry for software development on those platforms. Add to this that developers who had lived in the touchscreen kindergarten all their lives would no longer have any knowledge of how their computer works internally, since this is deliberately hidden from them. This means that the knowledge-side barrier to entry would also fly high enough to discourage many actors who get into software development today. Add a bit of manufacturer monopoly on software distribution to the scenario, which would become a stronger and stronger asset as those devices become more and more popular, and you get the picture: servicing touchscreen devices and developing for them would become a niche market, with only big actors surviving, in a way reminiscent of the mainframe market of the past.
Hell, yeah :)
The PC crisis
A summary of the current situation
In the past, due to technical constraints, computing was centralized, managed by a few big monopolies. This benefited only those monopolies and no one else, so the situation was unstable and cried out for the invention of a more decentralized, personal form of computing. Inventors came up with individual-friendly microcomputers, and even the establishment, embodied by IBM and similar companies, participated, thinking that this was only a new lucrative market that would not hurt their existing business, and that even if it did, so be it, they had to make money off it too in that case!
Nowadays, the computing world has never been better. But it still has its problems. Computer operating systems remain insufficiently clear in their interfaces and too complex in their behavior, so many users don’t understand them well and make mistakes. These mistakes may in turn cause significant incidents or be exploited by malicious persons, unless users hand the management of their computer to someone else. Programming simplicity has gone downhill since the days of Visual Basic and Delphi, becoming gradually more arcane and less accessible. That’s not a problem for the developers of today, who have seen the gradual rise of complexity and learned to follow it and deal with it. It is a problem for the developers of tomorrow, who will have to face the behemoth of modern programming with no prior knowledge. As programming becomes harder, more of them are going to give up, making programming more and more of a small market, with all the issues that this brings.
A few rich actors think that now is the right time to capitalize on this crisis and establish a new oligarchy on top of the computing world, like the one which governed it in its early days due to technical constraints. If they succeed, computing will sink back into the dark ages of oligarchy, innovation will drop to zero, and prices will fly high. As with drug dealers, people will come back anyway: our current world is too dependent on the power of computing to live without it, no matter how badly it is managed.
There must be some kind of way out of here, said the joker to the thief…
Myself, I think that a world of free, independent, innovative computing is worth fighting for. I believe that users could control everything about their computer sufficiently well with a small amount of initial knowledge, and only resort to external assistance in case of hardware failure. And I believe that we should strive to make such a world happen, not put users back in a new mainframe-computing jail.
Also, I think that although the situation is bad, the fight is not over yet.
One unknown parameter that could prevent a comeback of the mainframe age is Microsoft. Their position is interesting, because they are strong in the current computing market, and would have a lot to lose if a consumption-centric computing model were to rise. They are not good in that market; they have done too little, too late. First they tried to renovate Windows Mobile much too late for it to be effective, and now they’ve decided to go with something completely different in an app/media-centric market where only strong, well-established actors stand a chance. Microsoft are the kings of the current desktop and laptop market, but in a consumerist world they can’t hope to be more than second-class citizens. New actors are bound to lose, since, as the tragic “post-PC” zombies frequently say: “We want apps! Where are the apps? If it has no apps, it’s not interesting to us!”. Pouring lots of money into the thing to compensate for the lack of legacy might work, but the costs would be extreme.
All this means that Microsoft would benefit from trying to keep the current independent computing world alive, at least to some extent. The port of the full Windows 8 to tablets suggests they have understood that and could be taking steps in the right direction, though they still have a lot to do before Windows 8 becomes something other than a schizophrenic OS with two clearly separated personalities: an independent-but-impractical desktop side and a consumerist tablet side.
But perhaps, like me, you don’t appreciate depending on the actions of an evil corporation to survive. Another unknown factor is the role which open-source and independent projects have to play. Although recent computer evolution has done a good job of destroying any kind of low-level computer mastery and programming knowledge among the youngest, we still have lots of good programmers who are alive and well. And there are also those few who didn’t learn computer sorcery when they were young but decided to learn it at university out of curiosity or vocation. Though this late immersion in the computing philosophy leads to higher failure rates, and though many CS education programs steer students towards application-level programming, some still get through. All that knowledge could be used to build the desktop OS of tomorrow, one that doesn’t have the flaws of current ones, yet doesn’t put users in the baby seat without letting them know a thing about what’s happening, aggravating their alienation.
Sadly, out of geek pride and legacy preservation, many of the current independent OSs, be they Linux distros, BSDs, or Haiku, only aim at being understandable to people who have grown up with computers, know them to the core, and are not afraid of reading kilometric tutorials and manuals to go further. Through their actions, but also sometimes verbally, they transmit the following message: “We who understand computers well are a superior caste. Computers do not need to become simpler to use; it’s the user who must bend to their complexity. People who have not acquired, through unspecified means, extensive knowledge about them should not be allowed to use them. Using a computer is not a good way to learn about it. Computers should be like cars, not like bikes.”
Great. So what is to save computing as it stands today is a bunch of intellectually masturbating, elitist gurus, who advocate noisy, polluting, and impractical means of transportation over clean, healthy, and cheap alternatives? Well, it is my hope that the open-source world can go beyond that. After all, GNOME 3 is a good example of a product which aims at making computing accessible yet powerful, though it is still too limited for those who don’t master text configuration. VLC is both very easy to manipulate for basic use and powerful for the advanced user. Firefox offers a clean and lean default package, but keeps power-user features around in its extensions. Overall, the open-source world features many examples of user-friendly application-level designs. The problem is that those who know how to design user-centric products tend to stay away from the lowest levels of OSs. It’s dirty, it’s complicated, and they think there’s nothing down there which the user interacts with. And that’s where they’re deeply, horribly wrong.
Users do interact with the lowest layers of their OSs. They just don’t know it, and don’t know whom to blame when those layers fail, so they blame the OS as a whole. When Linux distributions supply broken PulseAudio setups that break sound on lots of hardware, people blame Linux as a whole. When distributions ship broken GPU drivers causing many X crashes, instead of sticking with VESA until there’s a truly working driver, people blame Linux. When distributions don’t take the time to make good graphical interfaces to the low-level configuration and force newcomers to manipulate arcane CLIs when the default configuration goes wrong, people, again, blame Linux. And don’t get me started on Linux’s high UI latency. That’s not stupidity; it’s a perfectly human way of reasoning: if you can’t target precisely, target broadly, and you’ll hit the right target along the way.
Because users interact directly with their OSs down to the lowest layers, we need designers and developers to work on OSs that aim at satisfying users’ needs down to the lowest layers, and to get over the snobbery that people who care about users frequently show towards low-level programming. Putting a pretty shell on top of arcane low-level layers, as Linux distros or OS X do, won’t ever work, in my opinion, because at some point an archaic dialog asking for root access will always show up, and because it puts the burden of withstanding the gory mess of those low levels on application developers, unless APIs end up reinventing all the low-level functions, or unless you hide the hidden power from everyone but yourself.
The OS-periment is an attempt to apply a holistic, user-centric design to computer OS design, and see how well it goes. It sure is a crazy project, and the chances of survival and success are low, but hey, craziness is what makes life exciting :) I don’t know how far it will go, but if I get to the point where I have something which can do some common tricks and can obviously be extended to do more, which is enjoyable and easy to use without resorting to “kindergarten philosophy”, and which makes software development at all levels a breeze, I’ll consider that I have succeeded, no matter what the numbers say.