Internet everywhere! Doesn’t it sound like a dream? A world-wide network of computers, under the wise and democratic control of the United States of America, which crosses illusory geographical boundaries and allows human intellect to unite under a single, bright banner of hope and creativity! Who wouldn’t want that fully integrated into their OS? Sadly, the reality of things is a bit more complex than that, and in this article I will aim to show what happens when “Internet integration” goes wrong, and why I plan to be relatively conservative in this OS as far as cloudy things are concerned.
Lol, Internet!
When Internet connections became affordable to everyone, the main OS developers started to shove web-related things everywhere in their products in the name of modernity. Though the best-known relic of this era is probably the inclusion of Internet Explorer in every install of Microsoft Windows, I can pretty much guarantee that everything that could be tried in the name of the web hype was tried.
- Naming products after the Internet? Meet Apple’s internet-Mac (which you probably know better as the iMac).
- Filling the desktop with ads and URLs so that it looks more like a web page? That would be Microsoft’s Windows 98, which also allowed you to use a web page as part of your desktop background.
- Oh, and noticed that weird fake hard drive icon in the Mac OS X interface that tries to sell you something when you click on it? That’s Apple’s iDisk, the online storage component of .Mac and MobileMe (successors to iTools, and predecessors to iCloud).
In retrospect, this trend is not so surprising. First, as mentioned, the web was a new and shiny thing, so to be new and shiny every OS had to be as webby as possible. Second, the Internet was a beacon of hope for OS manufacturers who wanted to sell you not only an OS but also tons of little side-services whose cumulative revenue ends up dwarfing that of the OS itself. Third, as Apple and Microsoft quickly realized in these dark ages of computing, internet connections allow you to sell blatantly unfinished software and fix it later through automated updates, blaming customers’ issues on a lack of updates. The same pattern could recently be observed in another area of home computing, video game consoles, although the situation of cellphones is a little bit more complicated.
Now, all this is business as usual. If you want something that doesn’t attempt to be shiny, to make maximal profit, or to ship as early as possible, but simply to do what it is supposed to do and do it well, no matter the cost and delays, then the consumer society is not very well suited to you, and I welcome you to the small world of hobbyists and serious businesses. But beyond these childish games, mainstream OSs also use Internet connections for other purposes, with frightening implications from both a technical and a social point of view.
The darker side
Copy protection gone wrong
The history of software copy protection technologies can be roughly divided into three periods.
In the early days, software was not protected against copying, like every other information storage medium out there. You bought your software, and you could make as many copies for friends and family as you wanted. Of course, this was illegal, just like lending tape copies of records, and as with audio tapes no one cared, not even most developers. The implicit permission to make copies of the software was just part of the retail price.
Offline copy protection made its true appearance in the CD era, thanks to the new possibilities offered by this storage medium. Of course, variants of it existed before, but most of them, such as serial numbers or lookup checks in the manual, were trivial to bypass and turned out to be more of an annoyance for users than anything else. With CDs appeared such things as deliberately damaged disc sectors that were repeatedly read to check whether the resulting data was random, hardcoded serial numbers, or even encrypted streams of data once computers grew powerful enough to decipher them. In the end, though, none of this lasted very long, because all of these technologies had the same fundamental flaw: the user is provided with a working copy of the software, from which all there is to do is strip out the copy protection. And encryption where the decryption key ships on the software’s storage medium is about as effective as locking your house but leaving the key next to the door.
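The lock-and-key analogy can be made concrete with a toy sketch. This uses a deliberately weak XOR cipher for brevity and does not depict any real DRM scheme; the point is only that whatever key the software needs to decrypt itself must ship alongside the ciphertext, so the user holds everything required to undo the protection.

```python
from itertools import cycle

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with a repeating key."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

program = b"print('the protected program')"
key = b"s3cret"

# What ships on the disc: the encrypted program... and the key right next
# to it, since the software must be able to decrypt itself in order to run.
disc = {"payload": xor_crypt(program, key), "key": key}

# Anyone holding the disc can therefore do exactly what the software does:
recovered = xor_crypt(disc["payload"], disc["key"])
assert recovered == program
```

The same reasoning holds for real ciphers: the strength of the algorithm is irrelevant once the key is distributed together with the ciphertext.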
This is why, once Internet connections were sufficiently widespread and stable, online copy protection technologies made their appearance. In this scheme, you are not provided with a working copy of your software anymore. Your software either works in a reduced functionality mode or doesn’t work at all unless some unknown piece of information is retrieved from a web server. The idea is that said server centralizes data about all installations of a piece of software and is able to tell if a given retail copy is used multiple times, in which case it won’t provide the required information. Now, of course, the devil is in the details, and one can argue that all crackers have to do is monitor network accesses to retrieve all the information that’s processed by the server, then fire up their disassembler of choice and use everything they have at hand to create a version of the software that doesn’t need to contact the web server in order to run.
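As a concrete sketch of the idea (a hypothetical protocol, not any actual vendor’s scheme), here is a toy activation server that tracks which machines have used each serial number, together with the one-line substitution a cracker effectively applies after reverse-engineering the client:

```python
# Server-side state: which machine(s) each retail serial has been activated on.
activations: dict[str, set[str]] = {}
ACTIVATION_LIMIT = 1  # hypothetical policy: one machine per retail copy

def server_activate(serial: str, machine_id: str) -> bool:
    """Grant activation unless the serial is already used on another machine."""
    machines = activations.setdefault(serial, set())
    if machine_id in machines:
        return True   # reactivating the same machine is fine
    if len(machines) >= ACTIVATION_LIMIT:
        return False  # copy already in use elsewhere: refuse
    machines.add(machine_id)
    return True

def run_software(serial: str, machine_id: str, activate=server_activate) -> str:
    """Client side: refuse to start without the server's blessing."""
    if not activate(serial, machine_id):
        raise RuntimeError("activation refused: copy already in use?")
    return "full functionality unlocked"

# A crack amounts to patching the client so the check always succeeds:
cracked_activate = lambda serial, machine_id: True
```

Activating a serial on a first machine succeeds, the same serial on a second machine is refused, and swapping in `cracked_activate` bypasses the server entirely. That last line is exactly why vendors now move functionality itself, not just the check, onto the server.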
For that reason, I believe that the future of copy protection, as we already begin to see it, is software which cannot work at all without external server assistance. Examples include so-called cloud computing services, where your software runs on a distant server and your computer is only a dumb terminal to said server, like OnLive or Google Apps, and software which only makes sense when it transmits information over the Internet, like multiplayer games.
And this should frighten us.
Let’s say that tomorrow I buy a knife. It’s a very good knife, sharp and comfortable to hold. If the company that manufactures it goes bust tomorrow, it will remain a very good knife for all its remaining useful life. Logical, isn’t it? Now, if tomorrow I buy some trendy cloudy stuff, like an OnLive game, and the company goes bust, shuts down, and wipes its servers, my software becomes useless junk. When I try to run it, I get some error message about connection problems, and that’s it. My software, and in the most extreme cases all the data that I have processed with it, is gone forever. No hacker will ever be able to retrieve it, because I don’t have a working copy on my computer. Just a dumb terminal that sends input events over the web and retrieves moving pictures and audio frames.
I have no big issue with old-school copy protection schemes. They are annoying as hell, but I know that they will be cracked one day. Which is the way all intellectual property works: temporary monopoly on distribution, then release into the public domain. With the latest crop of copy protections, this contract is breached. Software and media files can, in some cases, be gone forever even if there are still people who want to use or store them and got their copies in time. The digital society just forgot them, in an unnatural, unbelievably wrong way.
This kind of technology also has big applications in the realm of information censorship, by the way.
Software distribution monopolies
Moving closer to OSs in particular, we can see another case of Internet connectivity gone wrong: OS manufacturers acquiring a total and legal monopoly over what kind of software and media can be used on their platform, by only allowing software to be distributed through an online store they own, under ridiculous distribution conditions. This is, of course, best embodied by Apple’s App Store on the iOS platform.
In the past, mobile phones were fixed-purpose machines. They happened to use a CPU on the inside, but they were essentially the digital, long-range wireless equivalent of a landline phone. You could make calls, then send texts, and some had a few extra functions like an alarm clock or a calendar. That was about it. Then, slowly but surely, mobile phones entered the multimedia age. You could put sound, images, and videos on them, change the ringtone, take photographs, and play basic Java games. After that, with OSs such as Symbian and PalmOS, cellphones entered the personal computing revolution and allowed native code to run on their platform, with full performance and maximal capabilities. Software could be fetched from friends, magazines, and websites for those who would bear the download times. Fun times.
However, glitches gradually appeared in that pretty picture. Phone OS vendors looked at video game consoles and thought, “Hey, I smell money down there!” They realized that money could be made by making software developers pay a fee to access the full capabilities of the device. And so appeared schemes like Symbian Signed, where software can only do basic things unless signing keys are bought from Symbian’s vendor for a hefty price on a per-release basis. A basic feature set, reminiscent of Java MIDlets, was still accessible to self-signed software, but it really didn’t go very far. And when Apple, one of the biggest control freaks in computer history, introduced third-party software development for their cell phone platform in 2008, they took things one step further.
In the iOS ecosystem, you cannot distribute software without Apple’s explicit permission, and you can only distribute it through Apple’s online store. Although Apple has stated some guidelines on what kind of software will never get accepted, these guidelines also include the explicit statement that, in the end, they do whatever they want. Among the kinds of software banned on the platform are interpreters and emulators (which threaten Apple’s monopoly on software distribution), anything involving nudity, and every kind of newspaper-like publication that does not put Apple in a bright light. Multiplatform code which dares to run on something other than Apple’s platform is a very sensitive subject, and don’t even think about redoing a part of the OS which you don’t like in third-party software. Besides, you must pay around a hundred dollars per year to get the iOS SDK, which may only be run on Apple computers, and any financial transaction in your software must go through Apple, with a 30% cut going to the company.
Now, my answer to this kind of stuff is simple: don’t buy anything running iOS. I don’t own anything from Apple myself and actively discourage everyone around me from buying their stuff. The problem is that they are not alone anymore. Other OS vendors have realized the financial benefits of such a locked-down ecosystem, and we are reaching a point where mobile platforms which allow software distribution outside of the vendor’s locked-down online shop could become the exception rather than the norm. The only major mobile OS still alive that doesn’t stand for this is Google’s Android, which has other issues, to say the least.
Also, it would be a good idea not to fool yourself into thinking that this will only affect mobile platforms, on which most software is arguably just funny toys. The recent arrival of Apple’s Mac App Store and Microsoft’s Windows Store is here to warn us that the desktop and laptop computer form factors are not intrinsically protected from this kind of monstrosity. And if you think you could just install another OS on those platforms to escape vendor control, think twice, for UEFI’s Secure Boot feature (mandated by Windows 8’s Logo program, and thus to be bundled with all upcoming computers) is designed specifically to prevent you from doing that.
Gratuitous use of web technology
Modern web technology is ugly and clunky. In its defense, it probably owes a lot of its horror to its legacy and to the fact that developers try to stretch it into doing things it was never designed to do. Let’s recapitulate what makes web-based software inferior to locally run native code when both can be used for a given task:
- Web-based software frequently relies on the availability of an internet connection and a web server, although recent additions to web standards allow one to bypass this in a few specific circumstances.
- Web technology uses interpreted programming languages, and not even well-designed ones. As a result, performance is terrible, and the use of big and insecure interpreters is required.
- Web standards ignore the concept of modularity. This made sense when HTML was about interactive books, but is absolute nonsense nowadays. When a web browser parses, say, HTML code, it must match its markup against the namespace of all known HTML instructions, which means a slow parsing process and a huge RAM footprint. And since the performance of web technology is terrible, web developers and browser manufacturers strive to include more and more features in the core standards, which in turn means that web browsers become bigger and slower, leading to yet more stuff being added to the core standards, in a vicious circle that will never end.
- Web standards are not actually standard. When you build web-based stuff that’s even a tiny bit complex, you cannot assume that if it works in your web browser, it will work in all other browsers, on all platforms. You have to test your stuff everywhere, in a cumbersome and time-consuming process. That’s because web standards are so bloated that everyone fails to implement them properly in a reasonable time. Markup can also be so ill-defined that everyone does their own thing, as in the recent video markup debacle, where proprietary web browser manufacturers couldn’t bear to agree on using the same royalty-free video formats as their competitors. In the end, what’s most likely to work everywhere is, ironically, plugin-based technology such as Flash or Java, which tends to be platform-specific, ill-integrated into the web browser, and rather heavy, but pretty powerful if you see beyond that.
- There is no such thing as standard widgets and UI on the web. Everyone does their own thing, like a bunch of ants on LSD, to paraphrase Jakob Nielsen. This results in the web being a huge mess from a usability point of view, compared to native code written by someone with the minimal amount of mental discipline required to use a standard widget set.
- The web is massively based on ad funding, which results in abusive advertising everywhere, from distracting animated GIF banners to giant talking Flash ads that take up your whole browser window. Special mention to the one from Microsoft which I saw the other day: while I was reading a technology-related article, something animated popped up in front of me and started loading a video. The beginning of that video: “Think Hotmail is spammy?”. Well… yes, obviously!
- While web technology’s goal of platform independence is admirable, I have to say that it fails miserably at it as soon as you consider a sufficiently wide range of screen sizes. As explained earlier, a local API can do much better in the realm of device independence. On the web, what we get is desktop websites that render poorly on phones, alongside “iPhone-specific” or “iPad-specific” sites that deny their web nature and do their best to look like an iOS application. Ridiculous.
In spite of all the bitterness I may exhibit here, I actually like the current Internet, with its blogs, forums, and e-commerce sites. It’s a nice place to communicate, learn, buy and distribute stuff, read comics, etc. For all that I hate current web technology, I admire what web developers manage to do with it. Working with such a horrible tool must be so painful that I can almost understand how people who got burned become so traumatized that they start putting web stuff everywhere, including in places where it has no business being.
However, I am frustrated by some horrible byproducts of web technology, such as software which needs a distant server to work, excessive OS vendor control over software distribution, and gratuitous use of web technology in areas where better alternatives exist. In this OS, I aim to provide a future-proof and open-minded ecosystem in which everyone, not only a few wealthy companies, can scratch their own itch without any risk of censorship, which is incompatible with the first two issues. From a technical point of view, I also consider web technology to be ridiculously immature for its age, and as such I will not use it except in areas where it’s explicitly needed — websites, content hosting. That is, unless a sudden web standard reform surprises me by ditching the current mess and starting over from a clean slate, based on sane programming principles and modern web use cases. But such a rewrite would need to gain acceptance, which seems quite unlikely…