System design 3 – Resource allocation, hardware-process communication

The process abstraction presented in the previous article remains somewhat crude: processes can do some calculations, okay, but they can't handle exceptions, get more memory when they need it, or talk to any part of the hardware other than the processor and their slice of main memory. Let's see how we may improve it…


First of all, let's quickly review what our processes might want, outside of their little private memory:

  • More memory: Some programs can tell how much memory they need at startup. Some cannot, especially those that rely on a variable amount of data coming from the user. In the latter case, there has to be a way for a process to say "Hey, I'm out of memory, give me some!", and to actually get some if memory is available.
  • I/O ports: All processors have some means of communicating with peripherals directly. A port is a socket directly connecting the processor to some peripheral's input/output. One may send data through it or receive data from it. That's about all there is to say about them.
  • Memory-mapped places and DMA: Some hardware, like video cards, makes part of its internal memory directly accessible from the processor: one simply reads and writes at a specific place in main memory (that internal memory is said to be memory-mapped, and managing it is part of the operating system development hell). As an aside, there exists a mechanism called DMA that is somewhat related to this feature, but in a more modern way: memory is not directly mapped into main memory as before; instead, one tells hardware like CD readers to load some data at a specific place in main memory, and then may go and do something else while the hardware does its work. As far as I know, using DMA is just a matter of using I/O ports for most modern hardware.
  • Interrupts and exceptions: Sometimes, the hardware has something to say. Hardware is stupid; all it knows is how to ring a bell in such a case. That bell is called an interrupt, and in the software world, using interrupts means running a specific bit of program when an event occurs (e.g. the internal clock ticked, the CD reader is done transferring data). Somewhat related to interrupts are exceptions: an exception is exactly like an interrupt, but is triggered by the processor itself rather than by some hardware connected to it. Exceptions occur when software did its maths wrong and divided something by zero, when it tries to access a place in memory which it cannot access, and so on. Generally, exceptions are errors which the CPU cannot handle all by itself. There are two options: either the program handles the exception itself, or the operating system kills it for misbehaving without knowing how to recover.


Allocating memory is simple. Remember: some time ago, in order to achieve process isolation, we sliced memory into tiny bits called pages. Each process got a "fake" memory which only gave it access to the pages it owns. So giving more memory to a process mostly amounts to giving it some more pages and access to those pages. In order to avoid unplanned sharing of memory, we'll keep track of which process owns each page in memory, and of which pages each process owns. Actually, the final implementation will also keep track of how full pages are in order to reduce memory fragmentation, but that's the basic idea.
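The double bookkeeping described above can be sketched in a few lines. This is only an illustration, not the final implementation: the page count, process identifiers, and class name are all made up for the example.

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <optional>
#include <set>
#include <vector>

// Hypothetical sketch: track which process owns each physical page,
// and which pages each process owns.
class PageAllocator {
public:
    explicit PageAllocator(std::size_t page_count) {
        for (std::size_t i = 0; i < page_count; ++i) free_pages_.insert(i);
    }

    // Give one more page to a process, if any page is free.
    std::optional<std::size_t> allocate(int pid) {
        if (free_pages_.empty()) return std::nullopt;  // out of memory
        std::size_t page = *free_pages_.begin();
        free_pages_.erase(free_pages_.begin());
        owner_[page] = pid;                // page -> process direction
        pages_of_[pid].push_back(page);    // process -> pages direction
        return page;
    }

    std::optional<int> owner_of(std::size_t page) const {
        auto it = owner_.find(page);
        if (it == owner_.end()) return std::nullopt;
        return it->second;
    }

    const std::vector<std::size_t>& pages_of(int pid) { return pages_of_[pid]; }

private:
    std::set<std::size_t> free_pages_;
    std::map<std::size_t, int> owner_;                  // page -> owning process
    std::map<int, std::vector<std::size_t>> pages_of_;  // process -> its pages
};
```

Keeping both maps makes the two common questions ("who owns this page?" and "which pages does this process own?") cheap to answer, at the cost of updating both on every allocation.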

Giving access to I/O ports isn't much more difficult. We just have to make sure that multiple processes are never granted access to a single port at once, and that using ports never blocks process operation entirely. So there are two rules:

  • Each process must be explicitly granted access to a port before being able to access it – as usual.
  • These accesses themselves will pass through operating system functions, in order to ensure that simultaneous access to the same port will not occur.

To improve speed, buffers will be used when processes need to send a lot of data to an I/O port. Their use should be as invisible to programmers as the use of buffers in cout/printf instructions.
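Such a buffered, stream-like port wrapper could look like the following sketch. The syscall is simulated by a callback here, and all names (class, port number, buffer size) are assumptions for illustration only.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <vector>

// Hypothetical sketch: a stream-like wrapper that buffers bytes destined
// for an I/O port and hands them to the OS in one batch, much like
// cout/printf buffer their output before writing it.
class PortStream {
public:
    using KernelWrite = std::function<void(std::uint16_t port,
                                           const std::vector<std::uint8_t>&)>;

    PortStream(std::uint16_t port, std::size_t buf_size, KernelWrite write)
        : port_(port), capacity_(buf_size), kernel_write_(std::move(write)) {}

    PortStream& operator<<(std::uint8_t byte) {
        buffer_.push_back(byte);
        if (buffer_.size() >= capacity_) flush();  // transparent to the caller
        return *this;
    }

    void flush() {
        if (buffer_.empty()) return;
        kernel_write_(port_, buffer_);  // one "system call" for many bytes
        buffer_.clear();
    }

private:
    std::uint16_t port_;
    std::size_t capacity_;
    KernelWrite kernel_write_;
    std::vector<std::uint8_t> buffer_;
};
```

The point of the buffer is visible in the flush count: many `<<` operations collapse into few expensive kernel round-trips, without the programmer having to think about it.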

Memory maps and DMA are trickier. Memory maps are about being granted access to a specific place in main memory, so only one application should be allowed to touch a given map at a time. DMA is theoretically manipulated solely through I/O ports; however, the application needs to tell the peripheral where to write in main memory (not in its private memory), so some access to the main memory map is needed too. Moreover, some older hardware only allows DMA to work in certain places of main memory, which basically means those places have to be managed like memory maps.
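The "tell the peripheral where to write" step boils down to a virtual-to-physical translation performed by the operating system before the DMA transfer is programmed. A minimal sketch, assuming 4 KiB pages and a trivial page-table representation (both are assumptions, not the real data structures):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <optional>

// Hypothetical sketch: the process hands the OS a virtual address, the OS
// looks up the physical page and gives *that* address to the DMA
// controller. The 4096-byte page size is an assumption for the example.
constexpr std::uint64_t kPageSize = 4096;

// Per-process page table: virtual page number -> physical page number.
using PageTable = std::map<std::uint64_t, std::uint64_t>;

std::optional<std::uint64_t> virt_to_phys(const PageTable& pt,
                                          std::uint64_t vaddr) {
    auto it = pt.find(vaddr / kPageSize);
    if (it == pt.end()) return std::nullopt;  // page not mapped: refuse DMA
    return it->second * kPageSize + vaddr % kPageSize;
}
```

Refusing the translation when the page is not mapped is what keeps DMA from becoming a hole in process isolation: a process can only direct transfers into memory it actually owns.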

Interrupts and exceptions are a more complicated business. First, the operating system's core has to handle them at a core level – if a third-party process hangs during interrupt handling, an instant system freeze occurs, because process scheduling relies on interrupts fired by the system clock. So interrupts have to be redirected to processes afterwards. And then there are two options, which are equally useful and hence should both be kept:

  • Making the running thread stop what it's doing in order to handle the interrupt/exception (the C++ way of doing things, mandatory for error management).
  • Creating a new thread in order to handle it (the "pop-up thread" approach, excellent for drivers).

Should an exception or interrupt that was unplanned by the program be encountered, ancient wisdom suggests that the process shalt be killed.
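The pop-up-thread option can be sketched with ordinary `std::thread`s standing in for whatever the real kernel would do. Everything here (class name, interrupt numbers, the idea of handlers as plain callables) is an illustrative assumption:

```cpp
#include <atomic>
#include <cassert>
#include <functional>
#include <map>
#include <thread>
#include <vector>

// Hypothetical sketch of the "pop-up thread" approach: each incoming
// interrupt spawns a fresh thread running the registered handler, so the
// driver's main thread never has to stop what it is doing.
class PopupDispatcher {
public:
    void register_handler(int irq, std::function<void()> handler) {
        handlers_[irq] = std::move(handler);
    }

    // Called when an interrupt fires: pop up a new thread for it.
    bool raise(int irq) {
        auto it = handlers_.find(irq);
        if (it == handlers_.end()) return false;  // unplanned interrupt
        threads_.emplace_back(it->second);
        return true;
    }

    void join_all() {
        for (auto& t : threads_) t.join();
        threads_.clear();
    }

private:
    std::map<int, std::function<void()>> handlers_;
    std::vector<std::thread> threads_;
};
```

An unregistered interrupt simply reports failure here; per the "ancient wisdom" above, a real system would react by killing the offending process instead.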

So to sum it up: memory allocation will work the usual way (malloc/free/new/delete and friends), I/O ports will probably be managed as C++-like streams, memory-mapped places will be seamlessly mapped into the program's memory (provided it has been granted access to them, that is), DMA operations will work in the program's virtual memory (with translation to real memory addresses being done by the operating system), and exceptions/interrupts will be handled through C++-like exceptions and pop-up threads (more details about pop-up threads in the interprocess communication section).


Again, that's a lot of work. Any part of it that can move to user space should, for the sake of improved system reliability.

What can be done about memory allocation? Managing paging (like allocating a new page to a process) must be done by the kernel, but determining whether there is room in an existing memory page or whether a new page should be allocated, along with handling out-of-memory situations, can be done by a user process. So most of the memory allocation management can be done by a user process, which would only rely on some primitive paging-related operations.
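A toy version of that split: a user-space bump allocator that only falls back on a kernel paging primitive when its current page is full. `kernel_alloc_page()` is a stand-in for the real system call, and the page size is an assumption.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical sketch: user-space allocation logic on top of a single
// kernel primitive, "give me one more page".
constexpr std::size_t kPageSize = 4096;

static int pages_requested = 0;
void* kernel_alloc_page() {          // stub for the kernel paging primitive
    ++pages_requested;
    return new char[kPageSize];      // leaked on purpose in this sketch
}

class BumpAllocator {
public:
    void* allocate(std::size_t size) {
        if (size > kPageSize) return nullptr;  // kept simple on purpose
        if (current_ == nullptr || used_ + size > kPageSize) {
            current_ = static_cast<char*>(kernel_alloc_page());
            used_ = 0;               // ask the kernel only when needed
        }
        void* out = current_ + used_;
        used_ += size;
        return out;
    }

private:
    char* current_ = nullptr;
    std::size_t used_ = 0;
};
```

The kernel only sees a page request once in a while; all the fine-grained "is there room left?" decisions happen in user space, which is exactly the division of labour argued for above.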

I/O ports are accessed through the OUT processor instruction, which only the kernel may use. Buffer memory should be shared with the kernel for speed reasons. Managing simultaneous access and per-process port allocation does not require kernel privileges, and should hence be handled by a separate process.
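The bookkeeping of that separate port-manager process could be as simple as the following sketch: one owner per port at a time, everything else refused. All names and the example port numbers are made up.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <optional>

// Hypothetical sketch of the port-manager process's state: a port may be
// claimed by at most one process at any given time.
class PortRegistry {
public:
    bool claim(std::uint16_t port, int pid) {
        auto it = owner_.find(port);
        if (it != owner_.end() && it->second != pid) return false;  // taken
        owner_[port] = pid;
        return true;
    }

    void release(std::uint16_t port, int pid) {
        auto it = owner_.find(port);
        if (it != owner_.end() && it->second == pid) owner_.erase(it);
    }

    std::optional<int> owner_of(std::uint16_t port) const {
        auto it = owner_.find(port);
        if (it == owner_.end()) return std::nullopt;
        return it->second;
    }

private:
    std::map<std::uint16_t, int> owner_;  // port -> owning process
};
```

Note that `release` checks the caller's identity: a process can only give up ports it actually owns, which keeps one misbehaving process from stealing another's hardware.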

DMA and memory-mapped places both require that a process have some insight into what's happening in real memory, outside of its private memory. This insight will be given by the same memory management process which takes care of memory allocation.

Interrupts and exceptions have to be caught by the kernel, then sent to someone else. Since the kernel will have to take care of interprocess communication all by itself (see the next article for more details), I think it should also manage the transmission of interrupts and exceptions to processes.
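That "catch, then forward" step amounts to a routing table from interrupt numbers to the processes that registered for them. A minimal sketch, with a plain queue standing in for the real IPC mechanism (all names are assumptions):

```cpp
#include <cassert>
#include <deque>
#include <map>

// Hypothetical sketch: the kernel catches the interrupt, then drops a
// message in the mailbox of whichever process registered for it. The
// deque stands in for whatever IPC mechanism the next article describes.
struct Process {
    std::deque<int> mailbox;  // pending interrupt numbers, oldest first
};

class InterruptRouter {
public:
    void subscribe(int irq, Process* p) { route_[irq] = p; }

    // Kernel-side entry point: catch, then hand off.
    bool deliver(int irq) {
        auto it = route_.find(irq);
        if (it == route_.end()) return false;  // nobody registered
        it->second->mailbox.push_back(irq);
        return true;
    }

private:
    std::map<int, Process*> route_;  // interrupt number -> subscriber
};
```

Queuing instead of calling the handler directly matters here: the kernel returns immediately after `deliver`, so a slow or hung subscriber cannot freeze interrupt handling for everyone else.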

That’s all for now. Thank you for reading !
