Today, as another filler until I get enough time to resume OSdeving, I tried a quick performance experiment with important implications for PC OSdev: is it still possible, these days, to get decent graphics performance out of software rendering when no dedicated GPU driver is available? I'm not talking about displaying millions of triangles per second here, just about displaying bitmap animations at a modern screen's native refresh rate. Well, let's find out!
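To put a rough, purely illustrative number on it (assuming a 1920×1080 display with 32-bit pixels at 60 Hz, which is my guess at a typical setup rather than a measurement), just filling the framebuffer once per frame already means moving

$$1920 \times 1080~\text{px} \times 4~\text{B/px} \times 60~\text{Hz} \approx 475~\text{MiB/s},$$

and every extra copy or compositing pass multiplies that figure, so the answer largely depends on how much memory bandwidth a CPU can sustain in practice.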
Hi everyone!
In case you are wondering, I'm not back working on TOSP just yet. My PhD has now entered its worst phase, with a dreadful last month of manuscript writing and an awfully ill-timed conference next week on top of it. So I still cannot commit to a long-term personal programming project.
However, I have a little spare time left to code stuff I like at home. So I decided to practice my Ada a bit by translating some of Numerical Recipes' code snippets into Ada 2012, using my own coding style and a different name to avoid trademark lawsuits. I like it: it's fun and full of bite-sized programming tasks that I can easily start and stop whenever I like. It also teaches a lot of enlightening things about how seemingly mundane decisions in programming language design, such as array bounds, can profoundly affect day-to-day coding in that language.
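As a tiny sketch of my own (not code from the book) of what I mean by that: in Ada, the bounds travel with each array object, so the same routine happily accepts the 1-based indexing inherited from the book's FORTRAN roots as well as C-style 0-based indexing, whereas a C translation has to hard-code one convention everywhere.

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Bounds_Demo is

   --  The index range is part of each array object, so one unconstrained
   --  type can hold 0-based, 1-based or any other indexing scheme.
   type Vector is array (Integer range <>) of Float;

   --  Written against 'Range, this works whatever the caller's bounds are.
   function Sum (V : Vector) return Float is
      Result : Float := 0.0;
   begin
      for I in V'Range loop
         Result := Result + V (I);
      end loop;
      return Result;
   end Sum;

   One_Based  : constant Vector (1 .. 3) := (1.0, 2.0, 3.0);
   Zero_Based : constant Vector (0 .. 2) := (1.0, 2.0, 3.0);

begin
   Put_Line (Float'Image (Sum (One_Based)));   --  6.0
   Put_Line (Float'Image (Sum (Zero_Based)));  --  6.0 as well
end Bounds_Demo;
```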
Which leads me to today’s thoughts on heap memory allocation and why I think we’re really overusing it these days.
So, last November, I tried to resume blogging on OS-related matters, hoping that in time I'd find enough resources to resume implementation too. Unfortunately, I have now realized that I need to go in the opposite direction and take another break from this project.
Why? Because as a PhD student in my final year, I am extremely late in writing my thesis, and every work day starts to count. Meanwhile, outside of work, my life is getting busy again and I need more time to enjoy it (and get some sleep). Even though TOSP still matters a lot to me as a project, it cannot beat those more pressing concerns for now…
Anyhow, dear readers, it’s been a fun ride trying to get this project rolling for the second time. See you on my next attempt, perhaps!
Imagine that one day, as you arrive at the office, you find a mysterious muscular man at the entrance, performing something that looks like an ID check. Mumbling about this world's pandemic tendency towards security psychosis, you search through your bag for your ID card or driver's license. But as you show it to him, he says that he does not recognize it as valid.
Instead, he directs you towards the services of Don Corleone, Inc., which sells *certified* IDs (basically a cheap copy of your ID with a shiny stamp on it) for a hefty price, and says that he won't let you enter without one of these.
Obviously, this scenario would be illegal in most countries, and you would be within your rights to call the police. The state is the only legitimate authority trusted with the power to issue ID documents, and when it is democratically elected and subject to the scrutiny of millions of citizens, one can expect that it won't abuse its ID-issuing powers for fun and profit.
Yet for some reason, this is exactly the kind of protection racket that we deal with daily as we connect to the World Wide Web over HTTPS and have to interact with the current-generation Public Key Infrastructure (PKI), which is based on the concept of Certification Authorities (CAs). And as I will elaborate, it gets worse. But first, let's discuss why we are putting up with it.
As discussed in the last post, the purpose of authentication, using mechanisms such as passwords, is to ensure that the person at the keyboard really is the user she claims to be, by testing her knowledge of some secret data. This is typically necessary in situations where users are not around to check that no one physically tampers with their computers, such as when a computer is left powered off, sleeping, or crunching numbers on its own with the screen locked.
Obviously, due to their sensitive nature, authentication secrets should be used sparingly: the more often they are used, the more likely they are to be intercepted by an attacker intent on identity theft, with potentially disastrous consequences. Yet I don't know about you, but in a typical week, the two system dialogs I interact with most often on my computers are these two fellows:
Both are dialogs with a simplistic appearance that any user application could replicate, and both ask me, under a vague justification (“make changes”, “modify essential parts of [my] system”), to input a sensitive authentication credential that grants full access to my system.
To Microsoft’s credit, their own implementation is a bit better: when set up properly, the dialog does not necessarily require a password to be typed in, appears less frequently, and can have a quite distinctive appearance (translucent black background) that is hard for malicious software to replicate. But I still think that these privilege elevation dialogs, and the habit they give users of entering their (administrative) password whenever they are asked to, without a clear justification, are a bit of a security issue.
Moving on in the present series on TOSP’s stance on security matters, I would like to set file access control and process sandboxing aside for a bit and discuss a somewhat higher-level, but equally important, concept: authentication.
For those new to the term, here’s a short reminder: when you log into an operating system, much like on any other computer service, the operating system checks that you are the user you claim to be. It does this by having you prove that you possess a piece of information that only you should know, such as a password. This process is called user authentication.
In this post, I will discuss the various options which the modern world offers for authentication, their respective advantages and drawbacks, and the role an operating system could play not only in authenticating its own users, but also in authenticating them to third-party services.
Ever since people decided, around the UNIX days, that file access needed to be controlled, all mainstream operating systems have featured file access control policies that follow the same basic principles.
Every user of a computer has a unique numerical identifier (UID) and belongs to one or more groups, each with a similarly unique identifier (GID). Somewhere up in the sky there is a “super-user”, called root or Administrator, who can bypass file access control entirely whenever the need arises. Processes run under the responsibility of a given user and group, and files are also owned by a user and a group, by default those of the process that created them.
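To make the decision rule concrete, here is a minimal sketch in Ada of the classic owner/group/other check for read access only. The type and function names are mine, and real kernels layer much more on top of this (write and execute bits, ACLs, setuid, and so on):

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Access_Check_Demo is

   type User_Id  is new Natural;
   type Group_Id is new Natural;

   type Permission_Set is record
      Owner_Read, Group_Read, Other_Read : Boolean;
   end record;

   --  The classic cascade: super-user, then owner, then group, then everyone else.
   function May_Read (File_Owner  : User_Id;
                      File_Group  : Group_Id;
                      Mode        : Permission_Set;
                      Process_UID : User_Id;
                      Process_GID : Group_Id) return Boolean is
   begin
      if Process_UID = 0 then
         return True;                     --  root bypasses the check entirely
      elsif Process_UID = File_Owner then
         return Mode.Owner_Read;
      elsif Process_GID = File_Group then
         return Mode.Group_Read;
      else
         return Mode.Other_Read;          --  "everyone else"
      end if;
   end May_Read;

   --  A file readable by its owner and its group, but not by others
   Mode : constant Permission_Set := (Owner_Read => True,
                                      Group_Read => True,
                                      Other_Read => False);

begin
   Put_Line (Boolean'Image (May_Read (File_Owner  => 1000,
                                      File_Group  => 1000,
                                      Mode        => Mode,
                                      Process_UID => 1001,
                                      Process_GID => 1000)));  --  TRUE, via the group
end Access_Check_Demo;
```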
In this post, I will discuss why this security model has, in my view, outlived its usefulness for general-purpose personal computing, and why I believe that process capabilities would be a better fit in this area.