Returning from RPC: a follow-up
For more than a year now, I have been working on the design of an IPC primitive, which I call Remote Procedure Call (RPC), that I want to put at the heart of this operating system. While the basics have been covered for a long time, ironing out the details and polishing this abstraction until it shines has been an endless struggle. In particular, the way an asynchronously called function may return results is not yet well defined, and that must be clarified before implementation of the IPC functionality starts. In this post, I’ll start by recalling what RPC is and why I’m working on it, then discuss the design criteria that a good RPC equivalent of “return” should follow, and finally discuss the suitability of various existing IPC and asynchronous programming abstractions to this end.
The nature and purpose of RPC
RPC’s core goal is to provide a way for a process to access another process’ functionality in a way that is as close as possible to a simple shared library function call. The goal here is to make it easier for user applications to communicate with system daemons without going through extremely thick library-based abstraction layers, so as to avoid all the typical pitfalls of using dynamic libraries to put system tasks in the hands of user software.
One such pitfall is reduced system security and reliability, since sandboxes are less effective when system and application code are regrouped within the boundaries of a single process. Another is reduced performance: system tasks could theoretically get a performance boost from their trusted status (e.g. by using real-time threads), but cannot safely do so if they run within the boundaries of an untrusted process. Shared libraries also tend to induce more effort duplication than daemons in a running system, since they can only share minimal bookkeeping work between processes, which can in turn hurt performance. And finally, shared libraries are notoriously difficult to update in a way that extends their functionality without breaking backward compatibility with software that relies on earlier revisions, or without restarting running user processes, to the point where the expression “DLL hell” was coined to describe this deficiency of modern OSs.
To be complete, though, such a “library-less” abstraction needs a way for system services to return replies to user requests, much like the “return” statement of a C library’s code does. Let’s now discuss the various criteria that such an “RPC return” should follow.
Criteria for an RPC equivalent of return
An equivalent of “return” for RPC calls must…
- Be usable beyond just returning results, to notify software of events that it has planned but not directly triggered.
- Be compatible with RPC’s asynchronous design, so as to efficiently take advantage of modern computers’ parallel processing capabilities.
- Be compatible with both “semi-synchronous” (send a bunch of RPC requests at once, then wait for their completion) and event-driven (manage each task completion event separately) programming without a need for explicit server support.
- Be easy to set up for client software, ideally no more complex than allocating and initializing a variable.
- Offer a way for the RPC system itself to report errors and status information to the RPC caller.
- Offer a way to discriminate signals and data that come from different requests made on the same RPC channel.
- Not rely on the availability of advanced programming language constructs, such as generics or operator overloading.
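To make the semi-synchronous versus event-driven criterion more concrete, here is a minimal sketch of the two client styles. It uses C++ and `std::future` purely as a stand-in for the eventual RPC primitive; all of the names below are illustrative assumptions, not part of the actual design.

```cpp
#include <chrono>
#include <future>
#include <vector>

// Hypothetical asynchronous call, standing in for an RPC request.
std::future<int> fetch_async(int i) {
    return std::async(std::launch::async, [i] { return i * 2; });
}

// Semi-synchronous style: fire a batch of requests, then wait for them all.
int semi_synchronous_sum() {
    std::vector<std::future<int>> batch;
    for (int i = 0; i < 3; ++i) batch.push_back(fetch_async(i));
    int sum = 0;
    for (auto& f : batch) sum += f.get();  // block on each completion in turn
    return sum;
}

// Event-driven style: handle each completion separately, as it becomes ready.
int event_driven_sum() {
    std::vector<std::future<int>> pending;
    for (int i = 0; i < 3; ++i) pending.push_back(fetch_async(i));
    int sum = 0;
    while (!pending.empty()) {
        for (auto it = pending.begin(); it != pending.end();) {
            if (it->wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
                sum += it->get();          // per-event handling
                it = pending.erase(it);
            } else {
                ++it;
            }
        }
    }
    return sum;
}
```

Note how, without system support, the event-driven version degenerates into a polling loop; a good RPC return mechanism should make this style efficient too.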
Taking into account these criteria, I have considered the following options:
Futures
A variable that behaves much like a pointer to the returned data, but on which access attempts block synchronously until the RPC task is completed.
While elegant, this abstraction is ill-suited to event-driven programming, requires advanced programming language features to be used comfortably, offers no natural way to return RPC errors, and is overall a trap for newbie programmers, who are misled by its apparently synchronous simplicity. I thus reject it.
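For the curious, C++’s `std::future` behaves much like this blocking-pointer abstraction, and it illustrates the newbie trap well: the code below reads like synchronous code, yet the innocent-looking read at the end silently blocks. This is an analogy only, not the proposed API.

```cpp
#include <chrono>
#include <future>
#include <thread>

// Hypothetical stand-in for an asynchronous RPC call: the "pointer to the
// result" is a std::future, and reading it blocks until the server is done.
std::future<int> rpc_call_async() {
    return std::async(std::launch::async, [] {
        std::this_thread::sleep_for(std::chrono::milliseconds(10));  // fake server work
        return 42;
    });
}

int demo_future_return() {
    std::future<int> result = rpc_call_async();  // the call itself returns immediately
    // ... the client is free to do other work here ...
    return result.get();  // first access blocks until the RPC completes
}
```

Notice also that this relies on generics (`std::future<int>`), exactly the kind of advanced language construct the criteria above rule out.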
RPC callbacks
Traditional callbacks do not work in an RPC world, because daemon code has no way to run client code once a task is completed. The closest equivalent is for clients to set up an RPC entry point associated with the function which they want to use as a callback, and direct the daemon there for reporting results and errors.
Such RPC callbacks, however, have a number of issues. They are pretty hard and expensive to set up, especially when one wants to discriminate which callback is a reply to which request without asking every daemon to implement its own version of that functionality. They are also ill-suited to semi-synchronous programming. They do, however, free up the RPC call’s own return value for error codes and status information.
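A sketch of how such RPC callbacks could look on the client side, with request-id multiplexing funneled through a single completion entry point. All names here are hypothetical, and the daemon-side invocation is simulated by a plain function call:

```cpp
#include <functional>
#include <map>
#include <utility>

// Completion handler type: receives a status code and the returned value.
using Callback = std::function<void(int /*status*/, int /*value*/)>;

static std::map<int, Callback> pending;  // request id -> completion handler
static int next_request_id = 0;

// Client side: issue a request and remember which handler owns it.
// (The actual transmission of the request to the daemon is elided.)
int rpc_request(Callback on_done) {
    int id = next_request_id++;
    pending[id] = std::move(on_done);
    return id;
}

// The client's single RPC entry point, conceptually invoked by the daemon.
// The request id is what lets one entry point serve many outstanding requests.
void rpc_completion_entry_point(int request_id, int status, int value) {
    auto it = pending.find(request_id);
    if (it != pending.end()) {
        it->second(status, value);
        pending.erase(it);  // one-shot: drop the handler after delivery
    }
}
```

The bookkeeping around `pending` is exactly the per-request discrimination machinery that, without system support, every client (or every daemon) would have to reimplement.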
Shared memory and signaling
If callbacks won’t do the trick, perhaps a more old-school approach is in order! How about just using RPC’s memory sharing capabilities to provide the server with some storage space to return results, then implement a simple “signaling” mechanism to notify the client software of an incoming result or event?
While this approach is getting there, it still has an overly convoluted setup procedure that is decidedly not suitable for simple tasks (think “return 1;”). Technical limitations of memory sharing in paging-based process designs also make it a waste of memory to use a shared memory block for every single return value. Shared memory has its own pitfalls too, especially when it must be accessed by two simultaneously executing threads, and the signaling mechanism must scale well to thousands, or even millions, of concurrent notification channels, which may be nontrivial to design.
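A minimal sketch of this shared-storage-plus-signal idea, using a C++ condition variable as a stand-in for the kernel signaling mechanism and a thread as a stand-in for the server process; the structure and function names are hypothetical:

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// Client-provided storage for one return value, plus the signaling state.
struct ReturnSlot {
    std::mutex m;
    std::condition_variable signal;
    bool ready = false;
    int value = 0;  // the shared storage, here sized for "return 1;"
};

// Server side: write the result into client-provided storage, then signal.
void server_return(ReturnSlot& slot, int result) {
    std::lock_guard<std::mutex> lk(slot.m);
    slot.value = result;
    slot.ready = true;
    slot.signal.notify_one();
}

// Client side: wait for the signal, then read the shared storage.
int client_wait(ReturnSlot& slot) {
    std::unique_lock<std::mutex> lk(slot.m);
    slot.signal.wait(lk, [&] { return slot.ready; });
    return slot.value;
}

int demo_shared_return() {
    ReturnSlot slot;
    std::thread server([&] { server_return(slot, 1); });
    int v = client_wait(slot);
    server.join();
    return v;
}
```

Even in this toy form, the locking discipline hints at how easily a shared-memory newbie could get the concurrent-access part wrong.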
Message passing
This last abstraction, which involves having processes pass packets of data across some kind of virtual pipe over which they can synchronously wait for data (including by monitoring multiple pipes simultaneously), can arguably be seen as a variant of the previous one with a fresh coat of paint. But it has a few unique characteristics that are worth mentioning.
As with shared memory, the message passing channel’s buffer space can be allocated by the client and shared with the server. This setup procedure can be made relatively seamless and language-agnostic, and the voluntarily limited nature of these channels ensures that shared memory newbies won’t shoot themselves in the foot when using them. What remains to be addressed is the memory cost of memory sharing and the implementation of a scalable signaling infrastructure, but overall, message passing channels are the best candidate I have so far for returning results from RPC requests and setting up notification channels.
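Finally, a sketch of the channel semantics described above, again with C++ threads standing in for the two processes and hypothetical names throughout. A real implementation would use client-allocated, fixed-size packet buffers rather than an unbounded deque:

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>
#include <thread>

// A reply channel: the server pushes small packets, the client blocks
// until one arrives. Capacity would be fixed at creation time for real.
struct Channel {
    std::mutex m;
    std::condition_variable nonempty;
    std::deque<int> packets;  // stand-in for fixed-size reply packets
};

// Server side: deposit one reply packet and wake up the client.
void channel_send(Channel& ch, int packet) {
    std::lock_guard<std::mutex> lk(ch.m);
    ch.packets.push_back(packet);
    ch.nonempty.notify_one();
}

// Client side: block until a packet is available, then consume it.
int channel_receive(Channel& ch) {
    std::unique_lock<std::mutex> lk(ch.m);
    ch.nonempty.wait(lk, [&] { return !ch.packets.empty(); });
    int packet = ch.packets.front();
    ch.packets.pop_front();
    return packet;
}

int demo_channel() {
    Channel ch;  // allocated by the client, then shared with the server
    std::thread server([&] { channel_send(ch, 7); });
    int reply = channel_receive(ch);
    server.join();
    return reply;
}
```

Because the client only ever sees `channel_receive`, all the shared-memory hazards stay hidden behind the channel boundary, which is precisely the foot-gun protection argued for above.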