Tomorrow’s personal computer world?
These days, it is commonly claimed that the future of the computer industry looks as follows:
- DAPs, video game consoles, and simple cellphones are going to disappear, merged into the touchscreen smartphone market: a device with a screen under 5″ in size which does everything those currently do.
- The netbook market and part of the desktop/laptop market will be merged into content consumption-centric devices with a 7–11″ touchscreen: tablets.
- The rest of the desktop/laptop market will remain, as it is best for work and content creation as a whole, but people will find a full PC overkill for everyday tasks.
Now, I wonder how relevant this separation into three families actually is.
The tablet illusion
There is a qualitative jump between phones and tablets, no question. Having owned an EeePC 701, I’ve decided that I won’t support screens under 8″ in this OS, because I consider them just too small for anything even slightly complex, be it word processing, graphical editing, or coding. A 7″ screen is good for light web browsing and simple entertainment, but don’t ask for more. And that’s with a precise pointing device like a mouse or a trackpad; replace it with a touchscreen and things get even worse.
On the other hand, the distinction between tablets and desktops/laptops is highly artificial, in the same way as the distinction between a modern netbook and other laptops is.
What makes today’s 10″ tablets differ from a small laptop?
- Touchscreen input
- A different OS
- Closed hardware specs, fewer I/O ports
Add a keyboard and a trackpad, Windows or a Linux distro, and some I/O ports, and you get a modern netbook. With an optical drive and a bigger screen, it would be called a laptop (although its hardware would be underpowered for the price by today’s standards).
What tablets could have been
In short, tablets are only good for content consumption because their makers want it that way. With a modification as small as adding optional stylus input with good precision (think Wacom tablets), they would be a much more versatile device, actually useful for several content creation tasks: grab your stylus for tasks which require precise input, and put it back in your bag for entertainment. Add a hardware keyboard (like the optional one for Apple’s iPad), and as far as hardware is concerned you’re back to a fully capable laptop, with modularity as an extra: people buy the optional accessories they need, and can buy other ones later if they change their mind.
Modularity has always been the selling point of computers. What’s the whole point of desktops and laptops, after all? To be a versatile machine, available to anyone, programmable at will for whatever tasks you want from it, with only hardware capabilities as a limitation. It’s only logical to make such a machine as modular as possible at the hardware level too.
Tablets had the opportunity to be the logical evolution of laptops: a more versatile personal computer which is (nearly) as modular as a desktop computer (instead of being a bulky all-in-one device) and much easier to carry around. Yet they have rejected this opportunity. My question is simple: why? Let’s look at popular answers.
Debunking some myths
Complexity equals complication
A commonly invoked reason for dumbing down tablets is that they couldn’t be easy to use otherwise. It is based on the widespread misconception that complexity equals complication: if your device does lots of things, it must necessarily be harder to operate, so the fewer functions something has, the easier it is to operate.
The failure of this way of thinking is that it does not take into account the power of hierarchization. The implicit assumption behind equating complexity and complication is that everything sits on the same level, overwhelming the user’s perception with an overflow of icons and unneeded information, like the iPhone’s home screen. It ignores the fact that it is fairly easy (and commonplace) to build user interfaces where the most common tasks are immediately accessible and more advanced tasks are slightly hidden, though still easy to reach for those who want them.
As an example, I’ll take my Nokia E63 phone, running Symbian S60 3.1 (note how old that OS is). Sending texts, making calls, and other common communication operations are done simply by typing the beginning of a contact’s name on the home screen and playing a bit with the joystick. Schedule functions are only one keypress away from the home screen. By simply looking at said home screen, I already have access to a lot of information about incoming messages and scheduled events. And yes, for tasks which the underlying OS does not do well, I can set up a few shortcuts to commonly used third-party applications.
Among the phones I’ve owned, this one is probably the easiest to use on an everyday basis. Yet once I start digging into its menu hierarchy, I find an incredibly rich feature set which regularly proves useful, and which was not there on my previous, lower-end phones.
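This design principle is usually called progressive disclosure: frequent actions live at the top level, while rich features sit one or two levels down. A minimal sketch of the idea, with menu names invented for illustration (not taken from Symbian):

```python
from typing import Optional

# Sketch of a hierarchical ("progressively disclosed") interface:
# frequent actions sit at the top level, advanced ones deeper down.
# All menu names here are invented for illustration.
menu = {
    "Send message": None,            # top level: the everyday tasks
    "Call contact": None,
    "Calendar": None,
    "Settings": {                    # advanced tasks, one level deeper
        "Network": {"Access points": None, "VPN": None},
        "Security": {"Certificates": None, "App permissions": None},
    },
}

def depth(tree: dict, target: str, level: int = 1) -> Optional[int]:
    """Return how many menu levels deep `target` is, or None if absent."""
    for name, sub in tree.items():
        if name == target:
            return level
        if isinstance(sub, dict):
            found = depth(sub, target, level + 1)
            if found is not None:
                return found
    return None

# Common tasks are one step away; rich features exist but are tucked deeper.
print(depth(menu, "Send message"))   # 1
print(depth(menu, "Certificates"))   # 3
```

The feature count of the whole tree can grow arbitrarily without making the top level any harder to operate, which is exactly why complexity need not mean complication.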
So much for equating complexity and complication, eh?
Desktop OS/touchscreen’s fundamental incompatibility
Another, more interesting claim is that current desktop OSs are unsuitable for touchscreen input: that changing the means of input implies a fundamental paradigm shift. It usually goes as follows: “Have you ever seen Windows 7 on tablets? It’s just laughable how hard it is to use. We need something designed specifically for touchscreens, which means totally dropping some of the overly complex current paradigms, like the filesystem or the extensibility which USB ports provide…”
Notice how the claim shifts from complaining about how a specific operating system performs to complaining about the whole concept of a desktop operating system being ported to a touchscreen device. I did not invent this; it’s usually the way things go. Point it out, and with a bit of luck you’ll get a small list of other desktop operating systems which would not perform well on a touchscreen. Fundamental reasons will rarely be mentioned. The reasoning is that if it has not been done so far, it is probably not doable.
This reminds me of a quote from one of the greatest physicists of the 20th century : “One shouldn’t work on semiconductors, that is a filthy mess; who knows whether any semiconductors exist.” (Wolfgang Pauli, 1931).
I, for one, can easily explain why current desktop operating systems don’t play well with touchscreens: they were simply never designed for them. Nobody could have expected how quickly the popularity of touchscreens would rise, after all these years of failed tablet PCs. Past touchscreens were overly expensive for the benefits they brought anyway, so most people just designed for the mouse, a common, cheap, and easy-to-use PC input peripheral.
As desktops and laptops had more or less one “serious” pointing device (few things are clunkier than a laptop’s trackpad, which is why most laptop owners also own a mouse), it was only logical to optimize for it. Therefore all software was designed with mice in mind, with control positions and sizes hard-coded, in pixels for ultimate dirtiness (because screens had only one output resolution at the time too). Make no mistake: when put on a tablet, these operating systems and their software mainly pay the price of this inflexible UI toolkit design, not of other unrelated design decisions. Being able to plug a USB pen drive into a computer and access it has nothing to do with touchscreen accessibility.
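To make the point concrete, here is a small sketch of the difference between hard-coding pixel sizes and sizing controls from the actual screen density and input device. The specific numbers (the 1/72″ point, a ~9 mm minimum touch target) are my own illustrative assumptions, not taken from any particular toolkit:

```python
# Sketch: why pixel-hard-coded layouts break on touchscreens, and how
# resolution- and input-aware sizing avoids it. The 9 mm minimum touch
# target is an illustrative assumption, not a real toolkit's constant.

def control_size_px(base_size_pt: float, dpi: float, input_kind: str) -> int:
    """Compute an on-screen control size in pixels.

    base_size_pt: design size in points (1 pt = 1/72 inch)
    dpi:          the actual screen density, instead of a hard-coded one
    input_kind:   "mouse" (precise) or "touch" (imprecise fingers)
    """
    # Scale the design size to the real screen density, rather than
    # baking in one pixel size as old toolkits did.
    size_px = base_size_pt / 72.0 * dpi
    if input_kind == "touch":
        # Fingers are imprecise: enforce a minimum physical target size.
        min_touch_px = 9.0 / 25.4 * dpi  # 9 mm expressed in pixels
        size_px = max(size_px, min_touch_px)
    return round(size_px)

# A 12 pt control on a 96 dpi desktop monitor with a mouse: 16 px is fine.
print(control_size_px(12, 96, "mouse"))   # 16
# The same control on a 160 dpi touchscreen must grow to stay tappable.
print(control_size_px(12, 160, "touch"))  # 57
```

A toolkit which hard-codes “16 px” skips both computations, which is precisely why its controls become untappable specks on a dense touchscreen.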
If a new desktop operating system were designed today, it would obviously not have this defect, and would smoothly adapt itself to a wide range of input/output peripherals. But you know what? It’s not even necessary to rewrite a whole OS stack to make it touchscreen-friendly: only the highest-level UI layers, and the applications which depend on them, need to take this into account. Even if it unfortunately still means breaking application compatibility (at best, current applications would perform as badly as they currently do), it is totally doable, and in fact I’d expect Microsoft to do this with Windows before they get trapped in a shrinking part of the computer ecosystem. If they were able to port Windows to ARM, they are able to do this too. But I digress…
Benefiting from the phone ecosystem
Another interesting argument is that putting a phone OS on a tablet lets one benefit from the existing ecosystem around that OS: you get the already existing phone applications, and a consistent user interface across phones and tablets appeals to your already satisfied phone customers.
What this fails to take into account is that people who would buy a touchscreen smartphone have little interest in also buying a tablet if it is just an enlarged version of their phone. It’s less portable, more expensive, and has a shorter battery life, all without doing much more: better web browsing and e-books are neat, sure, but are they worth a $500 budget? Not so much.
As devices which fit in a backpack, tablets compete not with phones, but with laptops and e-book readers. Against e-readers, it’s debatable: tablets offer more functionality, but e-readers offer a superior screen. For people for whom reading books is a secondary feature, tablets will win, but they cannot fully replace e-readers in the mind of people who buy the device primarily for reading, as long as they do not switch to some technology better suited to the job, like transflective screens. As for laptops… Well, as we’ve seen, a tablet could theoretically compete, and win, in this field, but current designs are not suitable for it.
A bit of conspiracy theory
How to make money by hurting consumers
Now that I’ve shown that several commonly invoked reasons for making tablets mere big phones, where they could be better laptops, are not valid, I’d like to tentatively suggest an alternate explanation of what has actually happened.
Currently, in the laptop market, users are relatively free to do what they want. You own your hardware, you install whatever OS you want on it, and good extensibility lets you do just about anything for some years, until the processing power of the beast becomes insufficient or some unrecoverable technical incident forces you to buy another one.
Due to the relatively high level of hardware standardization in the x86 world, all laptops are interchangeable. You buy one based on its feature set, design, brand experience, and build quality, and you’ll buy the next one based on the same considerations. Laptop manufacturers have to compete on merit, and past merit, alone; apart from that, nothing really distinguishes them from each other. It puts them in a difficult situation.
Now compare this with Apple’s iDevices market. Apple makes sure its users depend on a large number of proprietary technologies, and deliberately neglects open standards support, making it a painful experience to switch from Apple products to similar or better competitors. Brand loyalty is not earned and carefully kept, but enforced through a variety of mechanisms which make it as hard as possible to switch to another product.
This use of non-standard technologies is akin to releasing yet another new kind of screw: quite neat from a mechanical point of view, but requiring a new kind of screwdriver, which only you are allowed to produce because you patented the design. Once a lot of people have started using your screws, you gradually increase the cost of your screwdrivers. This approach benefits your business, but it is detrimental to your customers at first, then to the market as a whole. Soon, other companies in the field will see how much profit you make this way, and start creating proprietary screw standards too. Gradually, everyone becomes dependent on the proprietary screws of one manufacturer or another, and the market stabilizes in an equilibrium where manufacturers make boatloads of money and every single customer has lost.
How much the tablet market benefits from cheating
Let’s go back to what distinguishes a tablet from a laptop: one of the key differences is that tablets come with an OS that’s chosen and tweaked by the manufacturer, which you’re not able to modify much, and which you can’t replace with another one because the hardware specs are carefully hidden. This means that once you have bought the product, you are forced to use whatever technology the vendor puts in its custom OS – proprietary ones included.
Once the vendor has full control over which technologies are present on its products, the next step is to introduce a proprietary software development platform. Said platform must be fully incompatible with competitors’ products, making every piece of software written for your platform a potential competitive advantage for you. If you’re really nasty, you can also take this opportunity to sell totally unrelated products from your company (see e.g. how iOS development requires a Mac).
For this to work, multi-platform development must be made so hard that no one dares to attempt it. You must make sure that no multi-platform development environment can compete with your proprietary one. This is typically achieved by making user applications perform worse than the system ones they run on top of. Another way is to only allow installation of software from your own application store, and to ban from it any software which allows the execution of third-party code. This approach has the additional advantage of extra financial benefits: you can charge an insane amount of money for each piece of software sold on the application store, and developers will have no choice but to deal with it if they want to reach your customers.
Everyone will love the triangular screwdrivers which you introduce this way. Especially you, the manufacturer. And your shareholders.
At this point, reducing the hardware’s feature set and extensibility, and marketing the product differently so that people forget that a tablet is nothing but a small laptop with a touchscreen, becomes totally logical. If people realized that a tablet is a laptop at its core, they wouldn’t accept the loss which a tablet represents for them. They’d ask for a fully capable OS (sadly, probably Windows, so that it runs the same applications as their laptop), and wouldn’t be satisfied with the restricted, tightly-controlled ecosystem of phones being applied to more capable hardware. Using buzzwords like curated computing, mobile devices, and the cloud allows the manufacturer to conveniently avoid this outcome.
It also has the advantage of helping the manufacturer make even more money, because non-extensible hardware becomes obsolete and is replaced more quickly. Which is an environmental disaster, sure, but also a big commercial win.
Well, when I take a look at the current tablet market, I’m pretty disgusted. This is nothing but an attempt to weaken the personal computer model and replace it with something based on the phone ecosystem, where customers are treated like sh*t by manufacturers and carriers, because on a small, less capable device, it doesn’t seem to matter enough for people to fight back.
I hope that I am wrong, or that this is going to change in the long run. After all, when IBM tried to lock down the PC hardware platform with the PS/2, it was one of the most memorable failures in computer history. In the mobile OS world, Apple’s iOS platform, which was the most dangerous of all in what it tried to impose, is slowly losing ground to the less dangerous Android. Maybe in a few years, some manufacturer who desperately needs to innovate, like Motorola, will try to introduce a more open tablet and it will be a commercial win. In the meantime, I remain extremely cautious about tablets. These devices are cool, and I still consider targeting them at some point with my OS. But on their side, will they let me run it?