The need for a new desktop OS, 2.75 years after

Two years and nine months ago, frustrated by mainstream desktop OSs, I decided that it was time for me to do something crazy and design one that satisfies my needs better. This is how, in the middle of January 2010, the OS|periment project was born. Now, today, as I prepare myself to review and update a lot of the project’s documentation as part of a move to a new project hosting service, I have decided to sit down a bit and think about what would be the motivations, today, for someone like me to work on a new desktop operating system.

Adapting to the modern computing landscape

Windows NT, Linux and Apple’s OS X all carry some amount of baggage from ancient computer eras where…

  • x86 processors were 32-bit
  • Computer screens shared a common pixel density of 96 ppi and started at the 12″, 4:3 form factor
  • HIDs revolved around keyboard, mice, and game peripherals
  • GUI was for newbies and real men used CLI
  • Computer security systems were for those paranoid guys in the military
  • Only almighty and fully trusted sysadmins were supposed to know how to administrate computers
  • Computer networks were either absent or guaranteed to be available through expensive infrastructure
  • CPU power increased very fast while RAM was expensive
  • The computing market was immature and failure was largely tolerated in consumer products

In contrast, nowadays…

  • Every x86 processor on the market is x86_64-compatible; running it in legacy operating modes wastes its resources
  • There is a wide variety of screens around and counting in pixels propagates the “high-res screens are unreadable” myth
  • Touchscreens and styluses are widespread
  • CLI is for highly exotic tasks and situations where the GUI is broken
  • Malware and security exploits are everywhere
  • Computer users own their computers and administrate them themselves
  • Everyone is connected to the internet, many using largely unreliable cellular connections
  • CPU power stagnates, and even decreases on “mobile devices”; 4 GB of RAM is a commodity
  • The computing market is mature, users expect perfect reliability and manual-free usability

Taking these changes into account requires new approaches to operating system and software design down to very fundamental levels. Recent designs targeted at cellphones and tablet computers show that commercial OS designers tend to acknowledge this problem, but fail to realize how universal and important it is. Once again, they produce inflexible designs that will need to be rewritten when the next big thing in hardware comes out, and either leave desktop customers in the dust or hand them hideous and barely usable backports of cellphone-oriented technology with the justification that “mobile devices are the future anyway”. In my opinion, this madness has to stop.

Proprietary operating systems become even harder to tweak

In the past, I have discussed how Windows and OS X displayed, in many ways, unattractive characteristics to build a better desktop environment on. Today, things are becoming worse on this front. Since Microsoft and Apple are too incompetent to build an operating system that anyone can safely administrate, they have taken the alternate option of becoming the new dictatorial sysadmin for the millions of personal computer users around the world.

Through unavoidable vendor software distribution channels and incremental tweak-ability lockdowns, computers running Windows and OS X are gradually becoming as locked down as a cellphone from the Java 2 ME era. Addressing failing OS components through customization becomes increasingly infeasible as the companies tighten their grip on their users, and make gigantic amounts of profit in the process by extorting software developers with ridiculously high distribution fees.

In PR documents, these companies will claim that they have a vision, build upon it, ensure that everyone shares it, and that if you don’t like it you are better off using something else. Well, let’s build that something else while we can still install that on our computers, shall we ?

Linux is still not a good base to build upon

What of the Linux ecosystem ? It has this huge developer community, it is moddable, and it provides the highest level of hardware support achievable outside of the Windows and OS X worlds. Surely, I could save myself a lot of effort by making myself more deeply familiar with Linux tech and reusing everything I can. Unfortunately, I think that there is a huge problem with using Linux tech to build a more modern desktop operating system, and that’s one of careless developer attitude.

I could trace the problem back to the origins of the Linux kernel and the Torvalds-Tanenbaum debate. But let’s look at how it is today instead. No one cares about such things as ABI stability in the kernel and user-space libraries, so regular system updates break third-party kernel modules and software unless they can be recompiled (and sometimes, even that is not enough, as can be seen with the recent update that broke the hard-earned Optimus support on my laptop’s Fedora distro). As soon as some major element of the Linux ecosystem stabilizes, developers invariably decide to “improve” it by dropping it wholesale and rewriting a successor from scratch all over again. The GUI is still a largely nonstandard and unstable component, and the bulk of Linux software still solely uses the CLI for such vital functionality as crash reporting. Complete software rewrites that are incompatible with their predecessors are presented as updates rather than as new products. It is pretty much impossible to write software for Linux once and distribute it to users of all distributions. And so on…

I use Linux because the alternatives suck either as much or more. But from the distant viewpoint of an end user, it seems like developers in the Linux world pay no attention to such vital concerns as stability, reliability, usability by inexperienced users, user feedback, or the needs of commercial developers who DO NOT want to join the kernel team just to write a stupid hardware driver. Making something reliable and enjoyable out of the generally used Linux stack would not just require carefully picking the right components, but basically forking everything from the kernel to the GUI libraries to ensure that the constant upstream breakages do not propagate to users. Taken together, that seems even more difficult than starting from scratch and taking a lot of care not to fall into the same situation, at least if you are not extremely knowledgeable about Linux tech to start with. I applaud the Android team for having the courage to do it, but I wouldn’t want to be in their place.

What of more minor OS projects ?

Overall, the advantages gained from joining or forking an existing project must outweigh the pain it involves. This generally means that I have to find a project with goals similar to mine. So far, I have yet to find one.

  • Haiku, as an example, is technologically quite interesting, but currently stuck in the past by trying to revive the dead and buried BeOS operating system. Attempts at criticizing the design of that operating system will often be met with hostility from its developers, who seemingly never truly accepted its failure.
  • BSDs essentially follow the trail of Linux on the desktop front as their developers have apparently neither the creativity nor the courage to come up with something unique and interesting in this area, often preferring to work on functionality for paranoid server sysadmins instead.
  • MenuetOS has lots of interesting features, but its ASM codebase identifies it as a hobby project without any pretension of becoming something more later. Not that there’s anything wrong with that, it simply doesn’t meet my goals.
  • Syllable is now essentially the pet project of “Kaj-de-Vos”, who has very specific ideas of where he wants to go and will react with extreme hostility to opposing propositions. It is developed in almost complete secrecy, the only news coming from it being related to the support of a niche programming language, REBOL, and its derivatives.
  • The Genode Operating System Framework is a very interesting project from a technological point of view. From what I can gather, it attempts to define a standard OS architecture and a standard set of abstractions that can plug into many existing OS components (both at the kernel and user-space level). Kind of like UNIX set the standard low-level abstractions for many operating systems to this day. However, for now, the project doesn’t seem to have clearly stated goals, and it is unclear what this architecture they define is to be used for, how it is superior to alternatives, and so on.

Conclusion

If there are no mistakes in this article (please report any in the comments), it seems to me that as of today, attempting to design a new operating system that addresses modern personal computing needs is still relevant. It also appears that although inspiration can obviously be taken from the design of other open-source operating systems, none is currently suitable as a base for this project. Thus, I still have work to do in my spare time… And I will hurry up, once I’m done catching up with all those things that have happened in my online life since I tried to send my computer for repairs in July.

12 thoughts on “The need for a new desktop OS, 2.75 years after”

  1. Alfman September 11, 2012 / 7:42 pm

    “Proprietary operating systems become even harder to tweak”

    Not just software, but the hardware too. Obviously locked down hardware is going to raise the difficulty/cost level for anyone who was previously able to use commodity computing devices as a leaping board towards alternative operating systems. I expect you’re planning on supporting ARM devices, how much effort do you see yourself spending on trying to defeat deliberate lockouts which are prevalent on those? You might be able to run on the coattails of linux & its bootloaders, is that a viable plan?

    The reason I bring it up is because most people won’t realistically buy specialised open hardware to test out TOSP, and yet developing and maintaining procedures to hack into their existing devices is undoubtedly a full-time job in itself.

    You could stick with x86, which is still widely hackable for the time being (we’ll see what happens with win8 machines), but that implicitly means not reaching the majority of the tablet market, which you have expressed interest in.

    “Linux is still not a good base to build upon”

    I agree, linux has a lot of faults. A kernel should be modular enough to allow changes to occur independently from one another without cascading through the kernel, and without even the need for a recompile. Ideally a module should never break until the interface to it is changed, which should be an infrequent event. Most problems are fixable but they persist because of Linus’ own stubbornness to improve the situation. I actually think some poor decisions were influenced by an unspoken goal of making it deliberately difficult to maintain 3rd party code outside the mainline kernel, but that’s just a conspiracy theory. In any case, this topic is so taboo it’s often better not to talk about it directly :)

    “What of more minor OS projects ?”

    You mentioned BSD doesn’t have a creative effort going on for the GUI, but maybe you could use its kernel and save yourself some work, particularly with regards to drivers. I for one have felt the appeal of writing my own kernel. I’m even confident that you & I could beat the big guys at building a solid kernel with a better model. However for me one inescapable fact is that there are probably a million man hours that have gone into drivers, which frankly is important to users.

    I guess I can boil my whole comment down into one question:
    Precisely what hardware are you going to support?

  2. Hadrien September 11, 2012 / 9:45 pm

    “Proprietary operating systems become even harder to tweak”

    Not just software, but the hardware too. Obviously locked down hardware is going to raise the difficulty/cost level for anyone who was previously able to use commodity computing devices as a leaping board towards alternative operating systems. I expect you’re planning on supporting ARM devices, how much effort do you see yourself spending on trying to defeat deliberate lockouts which are prevalent on those? You might be able to run on the coattails of linux & its bootloaders, is that a viable plan?

    As for ARM, I do intend to support it at some point, if standard and unlocked hardware becomes available in large numbers the way it is in the x86 world currently. I would not support ARM in its current state, because having to write vast amounts of device-specific code requires huge teams of coders, which I don’t have, and for hardware that is obsoleted and renewed every 2-3 years it doesn’t make practical sense.

    Windows 8’s requirements of minimal hardware standardization (UEFI, ACPI…) give me hope that all consumer-oriented ARM hardware could end up following those standards for the sake of cost reduction. In such a case, I could start to envision proper ARM support, either on the myriads of cheap hackable ARM boards that have been popping up recently, on consumer devices that provide official support for bootloader unlocking (cf Sony Xperia devices), or even on purely locked-down devices once bootloader unlock tool usability has improved.

    The reason I bring it up is because most people won’t realistically buy specialised open hardware to test out TOSP, and yet developing and maintaining procedures to hack into their existing devices is undoubtedly a full-time job in itself.

    You could stick with x86, which is still widely hackable for the time being (we’ll see what happens with win8 machines), but that implicitly means not reaching the majority of the tablet market, which you have expressed interest in.

    My freshly rewritten current plan is to stick with x86 (_64, to be precise) until the codebase has reached a certain level of maturity and geek attention. After that, there will still be time to think about extending hardware support and hiring driver developers, including on ARM.

    If even x86 becomes locked down, then this project will probably be terminated, as it will no longer have any standard and large hardware base to target.

    You mentioned BSD doesn’t have a creative effort going on for the GUI, but maybe you could use its kernel and save yourself some work, particularly with regards to drivers. I for one have felt the appeal of writing my own kernel. I’m even confident that you & I could beat the big guys at building a solid kernel with a better model. However for me one inescapable fact is that there are probably a million man hours that have gone into drivers, which frankly is important to users.

    I honestly have absolutely no idea of how good BSDs are on the hardware support front. Echoes here and there seem to imply that they do not directly benefit from the popularity of Linux (drivers have to be rewritten), and that they struggle in similar areas (accelerated graphics, laptop hardware, wireless networking…).

    Another potentially problematic area, but which I again know nothing about, is how much BSD drivers assume special design choices in the upper layers of the OS stack.

    I guess I can boil my whole comment down into one question:
    Precisely what hardware are you going to support?

    Short-term : Standard x86 features, including framebuffer graphical output through VESA VBE and UEFI GOP, standard USB and PS/2 mouse and keyboard input, storage on CDROM drives, and USB pen drives.

    Long-term : As much as possible, but particularly hardware that is vital to match everyday desktop use cases such as networking hardware (both wired and wireless), power management for laptops, sound, and printers.

    Then, in no particular order, hardware that is necessary to accomplish more specialized computing tasks and whose support is potentially harder to provide : accelerated graphics, scanners, webcams, pen tablets, GPIB cards…

    ARM stuff belongs either to the first or the second category depending on how it’s going to evolve in the upcoming years.

  3. Alfman September 12, 2012 / 1:40 am

    I noticed a touchscreen in particular wasn’t listed, is that deliberate? In theory you could support it even on x86 where it is much less common.

    I’m actually unaware of who the target demographic is. Obviously much of the microkernel design under the hood is the same regardless of how it will be used, but different users probably have very different expectations of the OS interface. Who is it being designed for? Have you answered this already?

  4. Hadrien September 12, 2012 / 9:21 am

    I noticed a touchscreen in particular wasn’t listed, is that deliberate? In theory you could support it even on x86 where it is much less common.

    Nope, just a big mistake. I’m putting it in the “if testing hardware is available” category right away, thanks ! :)

    I’m actually unaware of who the target demographic is. Obviously much of the microkernel design under the hood is the same regardless of how it will be used, but different users probably have very different expectations of the OS interface. Who is it being designed for? Have you answered this already?

    The most important scope restriction is that I target people who own their computers, and use and administrate them locally. Family- and roommate-shared machines can probably be supported too, as it requires little extra work (just the introduction of extra users with limited privileges, whereas mono-user is sufficient for the former). Company computer networks which are centrally managed through a LAN, as an example, are definitely a non-goal.

    As for use cases, pretty much those for which we use such computers nowadays : creating new stuff (inc. software), working, media consumption, and random fun. I’m probably going to target content creation first because it’s easier to support in a niche OS (you just need to write good tools once, not to create a full ecosystem of games and such). I aim to make software more flexible across computer form factors and input devices, so that “write once, use everywhere” actually becomes possible (or at least stays just a recompile away for architectures which the original developer did not target).

    I do not ask for an especially high level of competence from users. Early releases will probably require computer literacy to use since they would be too incomplete to attract the attention of anyone else anyway, but my goal is that in the long run, it will be possible to mostly own and administrate these machines with basic computer knowledge, only requiring expert help for installation and in case of major (mostly hardware) failures. This should be an OS that one geeky person like you and me can install on a relative’s computer with confidence that it won’t break all the time or require constant support for everyday use, yet still enjoy when we use it ourselves.

    I should probably write that one down in the project wiki too…

  5. lielawisely September 20, 2012 / 11:23 pm

    Hi from the Silicon Valley! Just wanted to say, I’m a fan of your blog. It’s funny and has a lot of interesting points. Currently reading for inspiration (my friends and I are trying to make an OS as a project). :) Oh, I stumbled on your blog searching “tiling window manager” — trying to get people’s ideas on the pros and cons.

  6. Hadrien September 21, 2012 / 10:47 pm

    Hi from France ! It’s always nice to hear about other OS geeks, and I am aware that people sometimes discover my blog through surprising search engine queries.

    As for your project, I hope this blog will provide you with at least some inspiration, and think that you might also want to take a deep look at OSdev.org’s excellent wiki, which contains a wealth of information on OS development, from the basics required to start an OS project (“Getting started”, “Beginner mistakes”, a fairly extensive directory of existing projects, and those priceless ready-to-tweak Bare-bones kernel examples) to fairly advanced technical matters (such as very detailed pages on USB and its protocols, and keyboard handling on x86).

    For a more architecture-independent and theoretical overview of typical OS work, I also recommend getting yourself a copy of Andrew S. Tanenbaum’s “Modern Operating Systems” : it sure is fairly expensive, but also priceless as a reference documentation. It is rather easily found in university libraries, so you might want to check it out to decide for yourself if it’s worth buying. Myself, I got my copy from my brother as he completed his CS studies.

    Whatever you do, good luck with that, and feel free to drop a link to the project’s website sometime if you want an exterior person to take a fresh look at it at some point !

  7. fran September 27, 2012 / 11:15 am

    I am interested in what applications you are planning to support.
    Except for a very few people, most are just concerned with running software.
    Will you, for instance, strive to support the JRE, and how easy will it be to recompile an application, say in C, to run on the OS?

  8. Hadrien September 27, 2012 / 7:54 pm

    That is a very interesting question, which I have asked myself quite a lot of times. And I’m still not sure what the right answer is.

    The thing is, I am heading towards a system and API architecture that is much different from the standard UNIX stack. Since it is required for most of the distinctive features which I have in mind, it sounds unavoidable. But as a result, porting software would require some kind of emulation layer, and unless some parts of the ports are rewritten, they couldn’t benefit from all the OS features and be well integrated with the overall environment.

    So what do I do ? Do I spend much time creating emulation layers, knowing that apps which go through them will offer a sub-par experience as compared to native ones and that I will be unfairly blamed for that ? Do I try to make and encourage making native-quality ports of popular software even if it means forking it and rewriting large chunks of its code ? Both ? Neither ? Something in between ?

    A historical example of such a situation would be Xorg on Mac OS X : it is universally hated for how badly it performs and how alien it feels, but it is also the only way to use some software which has never been ported to native libraries on the Mac platform. Is this where I’m heading to ? I just don’t know yet. Perhaps I should focus on building a solid foundation first, and only care about app compatibility layers next.

  9. Alfman September 29, 2012 / 11:35 pm

    “Do I spend much time creating emulation layers, knowing that apps which go through them will offer a sub-par experience as compared to native ones and that I will be unfairly blamed for that ? Do I try to make and encourage making native-quality ports of popular software even if it means forking it and rewriting large chunks of its code ?”

    Well, it’s an undeniable catch-22. The OS isn’t usable for most of us until it can run our software, however if you focus on standard APIs like those in POSIX/glibc then you’ll probably end up with another ‘nix clone OS that will get judged on its ability to run existing linux-like software instead of native TOSP software. It will inevitably fall short because it wasn’t engineered to be a ‘nix.

    One example is POSIX file permissions: there’s no good way to bridge them with other approaches like ACLs, even with an “emulation layer”. I think linux basically painted itself into a corner here in the name of POSIX compatibility. It’s true the kernel’s core ext3 drivers now support ACLs, but the rest of the OS and software ecosystem are heavily geared towards POSIX-style permissions. I’ve given up on linux ACLs because I can’t manage them through SFTP. Furthermore the headache of legacy POSIX permissions never goes away; many daemons such as sshd will still complain even if the ACL permissions are correct.

    Another example is process and thread semantics; one function specifically calling for attention is “fork”. Consider that perl’s interpreter had to be specially rewritten for windows.

    I don’t think this changes anything you said; an emulation layer is the right way to go about it, but I think there will be certain areas that will be mighty tough to emulate well. Nevertheless my vote on the matter is that you forge ahead on the design you think is best without regard to how it might not be compatible. In the end, you’re not aiming to be a linux clone. (Alas, it doesn’t solve the catch-22)

  10. F September 29, 2012 / 11:49 pm

    IMO if this OS can support a full Web 2.0 (sorry for the buzzword) browser then the OS gets a huge web app boost. Obviously stuff like google apps, and many other apps, will run.
    This sorts out office productivity apps.
    I am not sure what the challenges are in implementing, say, a webkit-based browser on TOSP, but I think this will be the way to go, at least at first.

  11. Hadrien October 1, 2012 / 6:54 pm

    Porting a web browser should not be incredibly hard. Networking drivers, on the other hand, will probably be quite a bit tricky to get right.
    I see it as one of the things that have to be done at some point, but that I won’t be able to implement very quickly.
