System design 2 – Processes, threads, scheduling

The first goal of the core of any modern operating system is to provide the developer with an abstraction: the illusion of simultaneously and independently running programs. This abstraction is provided here through the two usual mechanisms: processes (for independence) and threads (for simultaneity). Let’s see how it works in detail.

SIMULTANEITY WITH THREADS

The need to have several tasks running simultaneously is obvious as soon as one talks about GUIs or multimedia: you may want to hear music while writing an e-mail, want your YouTube video to keep playing while you browse the comments, and so on. Simultaneously running tasks are also needed at a lower level, because your computer has to manage a lot of hardware at once, and hence simultaneous access to various pieces of hardware *will* occur someday. And finally, with the advent of multiple processors, dividing programs into multiple tasks has become mandatory in order to get full performance from the hardware.

Now that we see why it is needed, let’s see how it may work. If there are more available processors than running tasks, a simple approach may be envisioned: put each task on a dedicated processor and let it do its computations alone. Unfortunately, there are generally more running tasks than available processors. Moreover, this simple model assumes all tasks to be equal, whereas we need some tasks to be more equal than others: if a single malicious or badly written program is working on 40 tasks that basically do nothing at all, all that power is lost for programs doing something more useful. One would sometimes like to rate the “usefulness” (or priority) of a program on a numeric scale, when needed, in order to make it more or less important.

How do we address those two problems? By using a well-known technique in the operating system world called priority scheduling. The idea is to give each task a certain amount of time (called a quantum), depending on its priority, and then switch to another task. When a program is done with its task or can’t do anything for a while (generally because it is waiting for something else to complete, like fetching a file from the HDD), the switch to the next scheduled thread occurs immediately.
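
To fix ideas, here is how the mapping from priority to quantum could look in C. It’s only a sketch: the names and the linear formula are made up for illustration, not a commitment to any particular strategy.

    #include <stdint.h>

    /* Illustrative baseline; a real kernel would tune this value. */
    #define BASE_QUANTUM_MS 10u

    /* Hypothetical mapping: the time slice granted to a thread grows with
     * its priority. Any monotonic function would do; linear is simply the
     * easiest one to write down. When the quantum expires (or the thread
     * blocks), the scheduler switches to the next scheduled thread. */
    static uint32_t quantum_for_priority(uint32_t priority)
    {
        return BASE_QUANTUM_MS * (priority + 1u);
    }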

In the computing world, tasks are called threads. In order to manage and schedule them, we’ll need:

  • Saving/restoring of processor state : Threads are executed by the processor, which internally uses some data (its registers) that varies from one task to another. There should hence be a way to save and restore this per-thread data.
  • A metronome : Switching from one thread to another is, as we’ve seen, a task that must be performed at regular intervals. Hoping that threads will just tell the operating system “Hey, I’m done!” after they’ve spent a reasonable amount of time would be a big mistake. There should hence be a way to wake up the core of the operating system when a thread’s time has expired, telling it “Hey, it’s done!”.
  • Some internal data : For proper thread management, this should include a unique way to identify each running thread, the state of the thread (scheduled, waiting, running, dying), the saved processor state if the thread is not running, the process the thread belongs to (see below), the relative priority of the thread, the processor it currently runs on (for optimization purposes), and other things like that. A minimal sketch of such a structure is given right after this list.
  • A scheduling strategy : How much time should be given to each thread? How should priority influence it? On what processor should it run? Clearly, some internal logic must be defined. There are many options, and none of them is the obvious best choice.
  • Ways to create threads : This sounds simple, but the issue is actually tricky enough that more than one solution exists. Some operating systems favor the option of simply handing a function to the operating system and letting it create a thread that runs it. Others prefer the more complicated yet sometimes more powerful option of allowing a thread to duplicate itself and then letting the duplicate do whatever it wants. Let’s ask ourselves in which circumstances we may want to create threads:
    • The program itself is thread-aware : Some programmers spontaneously split their work into several threads all by themselves. Operating system developers bless those guys.
    • The program receives some kind of signal : As we said, we tend to favor the “popup thread” approach to interprocess communication. When an event occurs, a thread should be spawned in order to take care of it.

    The first circumstance is the way most thread-aware programs work, and both creation approaches work fine there. However, thread duplication may not work in the second case: what if a program has no running thread and is only waiting for an event to occur in order to spawn one? Which thread are we supposed to clone then? Clearly, the first approach (creating a thread from a given function) is better in our case, so we’ll keep it.

  • Ways to destroy threads : Threads are destroyed in two cases: when they choose to end their own lives (either because they completed their work or because they encountered an unrecoverable error) and when someone does it for them (generally because they have hung). The core should provide a function for killing threads and should manage the “natural death” of threads (when the end of the thread’s function is reached, an instruction asks the thread to kill itself).
  • Management of waiting threads : In some cases, for example when they have asked for something and are waiting for the answer, threads are put to sleep. This behavior should be avoided as much as possible, as it’s a major source of hangs (for example when two threads each wait for the other one to do something, a situation known as a deadlock), but it is relatively safe and necessary in some cases (like when a display thread wants to wait a certain amount of time before displaying the next frame of a video). The core should hence be able to manage the wake-up conditions of a thread. This will be described in more detail in the communication section.
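
To make the “internal data” item above more concrete, here is one possible thread descriptor, written in C. Every name in it is invented for the sake of the sketch; the fields simply mirror the list.

    #include <stdint.h>

    /* What the processor needs saved while a thread is not running; the
     * actual contents are architecture-specific. */
    typedef struct cpu_state {
        uintptr_t registers[16];   /* general-purpose registers, stack pointer,
                                      instruction pointer, flags... */
    } cpu_state_t;

    typedef enum {
        THREAD_SCHEDULED,          /* runnable, waiting for its turn */
        THREAD_WAITING,            /* sleeping until some wake-up condition */
        THREAD_RUNNING,            /* currently executing on a processor */
        THREAD_DYING               /* being destroyed, never scheduled again */
    } thread_state_t;

    typedef struct thread {
        uint32_t        id;          /* unique identifier */
        thread_state_t  state;
        cpu_state_t     saved_state; /* only meaningful when not running */
        struct process *owner;       /* process this thread belongs to */
        int             priority;    /* relative priority */
        int             cpu;         /* processor it runs on, -1 if none */
        struct thread  *next;        /* link used by the scheduler's queues */
    } thread_t;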

Now, threads are great, but they address the “simultaneity” goal, not the “independence” goal. They’re just parts of a program that seemingly run all at once: from a thread’s point of view, there is no memory protection or anything like that. Making programs independent from each other is what processes are for.

INDEPENDENCE WITH PROCESSES

Now that we’re able to run several tasks at the same time, we want to isolate separate programs from each other so that they can’t hurt their comrades. This isolation will also be needed when we have hardware resources to mind: giving printer access to the printer driver shouldn’t mean giving that access to the sound system too.

An isolated program is called a process. In its simplest implementation, it has some fully private memory allocated at run time and can do computations. That’s all. When I say fully private memory, I mean that a process does not see the memory of other processes, nor the “real” memory of the computer: its memory is fully virtual. The limitations of this model will be addressed later; for now, let’s consider a process this way, as the running counterpart of a program.

For process support, we’ll need the following:

  • Thread support : Processes run simultaneously and independently from each other, and simultaneity is what threads provide; therefore we need threads in order to support processes. Conversely, the existence of processes may influence thread behavior too, in particular scheduling: we generally prefer to give equal run time to processes of equivalent priority, not to threads of equivalent priority, so that a process that creates a huge number of threads cannot bring the whole system to its knees.
  • Virtual memory : In order to achieve process independence, there should be a way to prevent a process from sneaking into its siblings’ data and to give it its own private space in memory. We’ll provide this through the paging system: main memory is sliced into a large number of tiny “pages”, which are attributed on a per-process basis, and thanks to a hardware trick processes can only see the pages they “own”.
  • Some internal data : In order to schedule processes, a unique identifier should be assigned to each running process; again, a number should do fine. Process data should also include which threads belong to the process (in order to destroy them all when the process is closed, for example) and the global priority of the process for scheduling purposes (along with other scheduler data). The structures necessary for virtual memory appear here too, as they vary from one process to another. A sketch of such a structure, along with the creation call described in the next item, is given after this list.
  • Ways to create and delete processes : As with threads, two approaches dominate: creating a new process to run a specific program, or cloning a process and letting the clone do whatever it wants (including running a specific program). Here we have a dilemma: we would like some cloning capabilities, but that would bring inconsistent behavior (why should processes be cloned while threads are created?). We’ll hence choose a hybrid approach: processes are created, like threads, but the process creation function does not necessarily run a program. It may create a clone of an existing process instead.
  • Managing thread-less processes : Should processes with no thread, containing only a private zone of memory, be allowed in a world where threads pop up out of nowhere? For drivers or other communication-oriented software, this sounds great. But now take an average program, written by an average novice C programmer who doesn’t know about threads. Once the program has completed its task, its thread dies. And then… the process haunts the operating system forever. Clearly, only processes that are aware of popup threads should be allowed to live with no running thread.
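
And here is the matching sketch for processes: a descriptor holding the elements listed above, plus the hybrid creation call we just settled on. Again, every name is made up for illustration purposes.

    #include <stdint.h>

    typedef struct process {
        uint32_t        id;          /* unique identifier */
        int             priority;    /* global priority used by the scheduler */
        struct thread  *threads;     /* threads belonging to this process */
        void           *page_table;  /* per-process virtual memory structures */
        struct process *next;        /* link in the kernel's process list */
    } process_t;

    /* Hybrid creation call: start a fresh process running 'program', or,
     * if 'clone_of' is given instead, duplicate an existing process.
     * Exactly one of the two arguments is expected to be non-NULL. */
    process_t *process_create(const char *program, struct process *clone_of);

    /* Destruction kills every remaining thread of the process, then frees
     * its private memory and its descriptor. */
    void process_destroy(process_t *proc);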

That’s all for now. The process abstraction will grow in the future, but here we only need memory isolation and simultaneous operation through the use of threads. Now let’s talk a bit about moving some work out of the kernel.

PUTTING SOME BITS IN USER SPACE

The kernel part of the core, the part which runs with full privileges, is almighty: it has access to everything, and any bug in it is highly dangerous. Therefore, it should remain as small as possible. It may sound strange to have parts of the process management work done by a process, but it can actually be done. One just has to think about what must go into the kernel.

The kernel does things: all actual work requiring privileges requires its help. Making decisions, however, does not. This is called the separation of policy and mechanism.

The mechanisms that absolutely must be put in the kernel here are the following (a minimal sketch of how they fit together comes after the list):

  • Saving/restoring processor state.
  • Setting up the clock for waking up the kernel in order to run the scheduler at regular intervals.
  • Switching from the virtual memory area of one process to that of another.
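
The sketch below shows how the first and third mechanisms fit together during a context switch, reusing the thread and processor state structures sketched earlier (the clock setup is left out). The three helper routines are assumed to be provided by architecture-specific code; their names, like everything else here, are invented.

    /* Architecture-specific primitives, assumed to exist elsewhere. */
    extern void save_cpu_state(cpu_state_t *state);
    extern void restore_cpu_state(const cpu_state_t *state);
    extern void switch_address_space(struct process *proc);

    /* Pure mechanism: no decision is taken here, 'next' has already been
     * chosen by whatever policy is in charge. */
    void context_switch(thread_t *current, thread_t *next)
    {
        save_cpu_state(&current->saved_state);    /* save processor state */

        if (current->owner != next->owner)
            switch_address_space(next->owner);    /* swap virtual memory areas
                                                     only when changing process */

        restore_cpu_state(&next->saved_state);    /* restore processor state;
                                                     execution resumes in 'next' */
    }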

Now let’s think about the implications of having the scheduler implemented as a separate process: each time the clock ticks, the kernel is awakened. It saves the interrupted thread’s processor state, loads its own private memory and processor state, opens its eyes, and says “It’s scheduling time!”. Then it awakens the scheduler, which means switching to the scheduler’s private memory and loading its processor state, and tells it to check whether there is something new to do.

First, this is a computationally expensive operation, whereas scheduling runs very often and hence should stay light. Next, the scheduler is not really a process, since it is not itself scheduled and is given the right to run more often than other processes. Finally, the kernel has to do some process and thread switching at regular intervals anyway, and hence effectively does a scheduler’s job.

For all those reasons, it sounds more reasonable to give the whole basic scheduling job (switching processes, switching tasks, knowing which task to switch to at a given time) to the kernel. The kernel part running on each processor should be capable of cycling through several running tasks of various priorities indefinitely when the situation doesn’t change, and should provide ways to move threads in and out of a waiting (non-scheduled) state as needed.
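
Here is what that per-processor machinery could boil down to, still building on the earlier thread structure and still with invented names: one run queue per processor, plus the two operations that move a thread out of and back into the scheduled state.

    #include <stddef.h>

    /* One of these per processor. */
    typedef struct run_queue {
        thread_t *head;   /* runnable threads, walked round-robin by the kernel */
    } run_queue_t;

    /* Take a thread off its processor's run queue: it will not be scheduled
     * again until someone wakes it up. */
    void thread_block(run_queue_t *rq, thread_t *t)
    {
        thread_t **link = &rq->head;
        while (*link != NULL && *link != t)
            link = &(*link)->next;
        if (*link != NULL) {
            *link = t->next;             /* unlink from the queue */
            t->next = NULL;
            t->state = THREAD_WAITING;
        }
    }

    /* Put a thread back on the run queue once its wake-up condition is met. */
    void thread_wake(run_queue_t *rq, thread_t *t)
    {
        t->state = THREAD_SCHEDULED;
        t->next  = rq->head;             /* head insertion keeps the sketch short */
        rq->head = t;
    }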

The death of threads (and of processes that can’t live without a running thread) should also get its first handling in the kernel, for speed reasons (we need the thread to be removed from the schedule and its state switched to “dying” as soon as possible, so that we won’t lose time scheduling dead threads), but such events should then be notified to other, higher-level processes that implement per-thread and per-process strategies.
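
Sticking with the same invented names, the kernel-side part of a thread’s death can then stay very small, with the policy part delegated to a hypothetical higher-level manager through a simple notification:

    /* Hypothetical notification sent to a higher-level manager process;
     * how it travels (message, signal...) is out of scope here. */
    extern void notify_thread_manager(thread_t *t);

    void thread_kill(run_queue_t *rq, thread_t *t)
    {
        thread_block(rq, t);          /* off the run queue immediately: no time
                                         is ever wasted scheduling a dead thread */
        t->state = THREAD_DYING;      /* overrides the WAITING state set above */
        notify_thread_manager(t);     /* cleanup and rebalancing policies run
                                         later, outside the kernel */
    }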

Before adding any other process- or thread-related feature to the kernel, the decision of putting it inside or outside the kernel should be thought through carefully. As an example, adjusting the schedule when a thread dies (for instance in order to balance load between multiple cores) should be managed by a process that is independent from the kernel.
