A bit of history – From a mess of wires to microcomputers: the appearance of early operating system concepts

You don't need to read this article to follow the blog's main discussion line; it's just a little aside for those interested in how things went from the fixed phone and hi-fi systems to modern operating systems.

In the early days of computing, somewhere in the 1940s, computers were huge machines filling an entire room, costing so much that only rich scientists could afford to build one, and providing a tiny fraction of the computing power of modern pocket calculators. At this time, the whole thing was vastly experimental, and no one suffered from the fact that one had to specify instructions by plugging wires between sockets, or that components burned out about once an hour. It was just part of the fun. The very concept of an operating system didn't exist in people's minds then, and they dealt (almost) directly with the hardware. This approach was error-prone, and required reading some huge manual from cover to cover before being able to make the machine calculate 2+2. But who cared, when a single insect bridging two bare wires could make the whole thing crash? There were far more urgent things to do on the hardware side at the time.

First lesson from the past: always focus on what's most important first. Trying to make a machine user-friendly when it doesn't even work properly is a recipe for disappointment.

Then a tiny electronic component bound for a great destiny came into existence: the transistor. Built around it as a new core component, computers progressively got smaller, cheaper, and a lot more reliable. Reliable enough, in fact, to be sold from the mid-50s onwards to large organisations ready to pay several million dollars for them. Since the people making the computers and the people using them weren't the same anymore, a lot had to be done about user-friendliness. As a result, the first programming languages, FORTRAN and assembler, came out. They allowed people to write instructions in a (somewhat) human-readable way on paper, check them carefully for errors (because computer time was very precious at the time), and then punch holes into pieces of cardboard – the famous punch cards – encoding the instructions in a machine-readable way. The cards were then brought to the computer, which read them, translated them with a pre-loaded program into its own ugly but simple language, and executed them. This ability to hand the computer not just one instruction but a whole batch of instructions and then go do something else until it's done, together with features like translating a non-native language into machine language (a concept called compilation) and limiting the time a program may spend on the machine, is the first example of an operating system in computer history.
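To make the idea concrete, here is a minimal sketch in Python of what such a batch monitor did (the job format and names are purely made up for illustration): run the submitted jobs one after another, cutting each one off once it has used up the machine time its owner paid for.

```python
# A toy batch monitor (hypothetical job format): jobs run one after another,
# and each is cut off once it has used up its owner's paid machine time.
def run_batch(jobs, time_limit):
    for name, steps in jobs:
        used = 0
        for step in steps:
            if used == time_limit:
                print(f"{name}: aborted after {used} time unit(s) (limit reached)")
                break
            step()       # execute one unit of the job's work
            used += 1
        else:
            print(f"{name}: completed in {used} time unit(s)")

# Each "job" is just a named list of callable steps in this sketch.
run_batch(
    [("payroll", [lambda: None] * 3), ("simulation", [lambda: None] * 10)],
    time_limit=5,
)
```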

Primitive as it was, it already exhibited the most basic features of a modern operating system:

  • Simplify human-machine communication
  • Avoid duplicated effort in program development
  • Supervise running programs, making sure they abide by the rules (here, not exceeding the running time the developer has paid for)

Then, in the 60s, computing technology moved on from transistors to integrated circuits (basically, a lot of transistors on a tiny chip). Big companies like IBM started to wonder whether, instead of making several kinds of computers for several ranges of needs, they could make a single family of computers, differing only in price and performance, on which software written for one machine would run on any other. This compatibility goal later became a popular goal of operating system software: as technology moved on ever faster, it let people buy a new computer without having to rewrite any program from scratch.

Aside from that, IBM squeezed more performance out of existing computers by introducing multiprogramming and spooling. Multiprogramming means keeping several programs in memory at once, so that while one program has, for example, sent its results to a printer and waits for the printing to finish, another program can immediately do its own calculations, with no processor time lost. Spooling means that as soon as a punch card arrives, the computer copies it onto some kind of magnetic tape, so that the card can be taken out and a new one put in the tray, and so forth. This way, when the computer was ready to execute a new program, that program was already waiting on the tape, with no need to wait for technicians to feed new cards into the tray one by one.
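Here is a toy sketch of the spooling idea, using nothing beyond Python's standard library (all the names are made up): a slow "card reader" thread copies jobs onto a buffer, while the "CPU" pulls jobs from the buffer whenever it's free instead of waiting on the reader directly.

```python
import queue
import threading
import time

spool = queue.Queue()  # stands in for the magnetic tape

def card_reader(jobs):
    """Slowly read card decks and copy each one onto the spool."""
    for job in jobs:
        time.sleep(0.1)   # reading a card deck takes a while...
        spool.put(job)    # ...but once copied, it waits on the fast "tape"
    spool.put(None)       # sentinel: no more jobs coming

def cpu():
    """Execute jobs as soon as they show up on the spool."""
    while (job := spool.get()) is not None:
        print(f"executing {job}")

reader = threading.Thread(target=card_reader, args=(["job-1", "job-2", "job-3"],))
reader.start()
cpu()
reader.join()
```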
Then came a logical evolution of multiprogramming, namely timesharing. With timesharing, several programs appear to run at once on a single computer that can actually process only one instruction at a time. It works by giving each program a certain amount of time before switching to the next one. If the hardware is powerful enough, a real illusion of simultaneity can be achieved this way. Developers were well pleased, because they could now test their programs quickly, without booking machine time and waiting a month only to learn that they had forgotten a period somewhere in their program.
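A minimal sketch of that switching policy follows (this particular flavour is known as round-robin scheduling; the job names and time quantum are made up): each job gets at most `quantum` time units on the processor before going to the back of the line.

```python
from collections import deque

def round_robin(jobs, quantum):
    """Give each job at most `quantum` time units, then switch to the next."""
    queue = deque(jobs.items())   # (name, remaining_time) pairs
    clock = 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        print(f"t={clock:2}: ran {name} for {run} unit(s)")
        if remaining > run:
            queue.append((name, remaining - run))  # not finished: back of the line

# Three fictitious jobs needing 5, 3 and 4 time units, with a quantum of 2.
round_robin({"payroll": 5, "compile": 3, "report": 4}, quantum=2)
```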

To sum up, the new computing capabilities introduced with the help of this new hardware and these operating systems were:

  • Make software less dependent on the underlying hardware (a further reduction of duplicated effort)
  • Do multiple things at once, improving both performance and user satisfaction

After that, circuitry went one last step further in the 80s with Large Scale Integration (even more transistors on a little chip, at much lower prices). While at the end of the preceding era computers had become inexpensive enough for a department in a company or university to own one, in this era they became so inexpensive that anyone ready to save up a few bucks could buy one. The need for a computer usable by unskilled individuals led to very interesting developments in computer science and operating systems, pushed forward especially by Xerox engineers and by a conflict between two companies called Apple and Microsoft – which we're going to study later, when we go into the details of Microsoft Windows and Apple Mac OS.

That's all for now. Thank you for reading!

Credits: Most of the material used in this article comes from the first chapter of Andrew Tanenbaum's "Modern Operating Systems", a book I recommend to anyone interested in operating system design.
