Are compatibility layers overrated?
Every time a new cellphone operating system starts to attract critical interest, it has become the norm to ask whether it can run Android applications, and to declare it doomed to failure otherwise. After all, many believe that the software catalogue is all that counts on a modern OS, and the bigger the better. But one should not forget that compatibility layers have costs of their own, which may well outweigh their benefits. Here is an attempt at discussing those.
A few compatibility myths
“Reusing existing software avoids reinventing the wheel”
Writing a compatibility layer is, in itself, much like reinventing the wheel, only worse. First, one has to write many new functions that do nothing but call a similar one from the system API, and maintain them as the APIs of both OSs evolve. Then come the problems of features that are handled in radically different ways by the two OSs, in which case compatibility functions can only approximate things very crudely. That often makes them use system resources inefficiently, display a user interface that is completely alien to the rest of the OS, or both. The only alternative to these issues is to create a true clone of the original OS, in the spirit of ReactOS, in which case users will just use the real thing and stop caring.
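To make the "wrapping and maintaining" cost concrete, here is a minimal sketch of what one such shim might look like. The names guest_open and GUEST_RDONLY are invented for illustration, not taken from any real layer; a real one contains thousands of such functions, each of which must be revisited whenever either the guest or the host API changes.

```c
/* Hypothetical shim translating one call of an emulated ("guest") OS
 * onto the host's POSIX API. All guest-side names here are invented. */
#include <fcntl.h>   /* open(), O_RDONLY */

#define GUEST_RDONLY 0x1  /* flag constant as the emulated OS would define it (invented) */

/* Translate the guest's flag convention into the host's, then forward. */
int guest_open(const char *path, int guest_flags)
{
    int host_flags = 0;
    if (guest_flags & GUEST_RDONLY)
        host_flags |= O_RDONLY;  /* the easy, one-to-one case */
    /* Guest semantics with no host equivalent (sharing modes,
     * case-insensitive paths, ...) would have to be approximated
     * here, often at a cost in efficiency or fidelity. */
    return open(path, host_flags);
}
```

Note that the wrapper adds no functionality at all: it is pure translation overhead, written and maintained solely so that existing binaries keep running.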
“Compatibility brings users, and users speed up development”
Again, users know the difference between a compatibility layer and true native code. Usability aside, since compatibility layers always lag one or two API versions behind the OS they emulate, new programs will often work very poorly, or not at all. If all users are interested in is the software ecosystem of another OS, they will simply go for that OS and the battle is lost. As for speeding up development, let us remember again that developing a compatibility layer has a cost too, and that bringing new developers into the project may not be worth the development time it takes from experienced ones.
“Familiar software lowers the barrier to entry for newbies”
This is a tricky one. The argument does hold, but only if the ported software also behaves like native software. Otherwise, users will just be confused by its apparently inconsistent behaviour. What is especially treacherous about compatibility layers is that ported software can look similar and feel similar, yet behave in a totally alien way in key areas such as window focus or keyboard shortcuts, due to differences in the underlying infrastructure. Thus, I would argue that in practice, software can only be ported efficiently when both platforms have very similar user interfaces.
“With these layers, one can write software once and run it everywhere”
See above. Porting through compatibility layers is only straightforward when the source and target platforms share many user interface elements; otherwise the ported version will feel completely alien either to users of other ports (if it follows the host OS's UI guidelines) or to users of native applications (if it stays consistent with the other platforms' ports). So far, every promise of making code behave consistently across multiple platforms has either been broken or has required extreme care from developers.
“Compatibility helps new OSs to benefit from the work spent on others”
This argument is very similar to the wheel-reinvention one, except that it can also imply that as new applications are released, the OS that implements a compatibility layer benefits from them too, for free. This is obviously wrong: a compatibility layer is, like the OS it emulates, not set in stone, and supporting the latest software also means supporting the latest APIs, which in turn requires extra work on the compatibility layer side… The only thing that comes cheap, in the end, is precisely the kind of native software whose development is hampered by the existence of compatibility layers.
It is not particularly easy to be a new player in the OS market without the established software ecosystem of your competitors. Some would argue that compatibility layers are a solution to this problem, but I would argue that they cost more than they bring, that they can hamper the development of the OS that hosts them, and that they do not truly solve the software library problem. As such, it seems to me that they should be avoided unless absolutely necessary. Instead, the best strategy for a new OS project is probably the classical plan for any resource-constrained team: assess the priority of the various tasks at hand, and work from the most beneficial ones down to the least.