Hello everyone, and welcome back to this blog!
First, what’s new around here? As I said earlier, I picked up my computer at the post office on Monday. Back home, I then discovered that mainstream Linux distros do not support its graphics chipsets out of the box (yes, “chipsets”: there are two in there, and neither works). As I’d rather fight Windows than fight X.org (because I’m almost certain to win against Windows), I’ve been struggling to get a proper development platform installed on Windows 7. The result feels a bit awkward, but works well enough. For those who are interested, I’ll put a how-to explaining how to do that on the project’s wiki, which is currently not in a polished enough state to go mainstream.
Now that everything is working (and how! The performance/power-usage ratio of that mobile Core i5 is very impressive), I’ve started to write code again. With a little bit of luck, the bootstrap part should be over in a week or so, before the end of June as expected. However, this had better not be taken as a definitive statement.
Having spent that much energy wrestling with a computer, I was in the mood for a nice in-depth comment on OSnews about security systems. I’m quite happy with the result, so I’ll leave the non-rant part here, at the mercy of any comment or observation you might like to make ;)
Let’s take as a starting assumption that absolute security does not exist. This is effectively true, because it would require users to build every piece of hardware and software they use all by themselves. The best we can do is to reduce the amount of trust that the user has to put in third parties. As far as I know, these are the strategies that exist as of today:
1- Keep the user well-informed and in control.
2- Make knowingly malicious software disappear.
3- Have experts analyze the software and say whether it can be relied upon.
4- Be cautious about what software is installed; don’t let just anything make its way in.
5- Put limitations on what software is able to do.
Now let’s analyze those strategies, first from a philosophical point of view.
1, 4, and 5 are at the same time the safest and the most liberal (as opposed to authoritarian) solutions. They don’t require trusting an additional third party about what is safe; the user remains independent instead of being treated like a child.
2 and 3, on the other hand, are more dangerous, because you have to rely on an unknown bunch of people who claim to know what is safe (and who can themselves be authors of malicious software). For that reason, those options should not be used on their own, but rather as a complement to options that the user can bypass.
After this philosophical episode, let’s get more technical and see how things work.
1/Keeping the user informed
One might argue that this requires the user to have some prior knowledge of malware. However, everybody has such knowledge, to some extent, in the form of common sense. If an unknown guy comes to your home and asks to borrow your TV set, you’ll probably say “no”, because you’re almost sure he will never come back. What the system manufacturer has to do is to describe, in an understandable yet precise fashion, what the application wants to do. Precision is important: an application should not ask for “access to system files”, but rather for the “ability to change the active wi-fi connection”. This requires a fine-grained underlying security permission system.
A second thing the system manufacturer can do is to have the system analyze the permissions being asked for, and specifically warn the user about dangerous ones. As an example, “Make a phone call with prior acknowledgement from the user” is relatively safe, while “Make a phone call without prior acknowledgement” or “Access all system files” are dangerous options which the security system should warn the user about.
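The two ideas above (precise, user-readable permission descriptions, plus automatic flagging of the dangerous ones) can be sketched like this; all the permission names here are made up for the example, not taken from any real system:

```python
# A minimal sketch of a fine-grained permission manifest, with hypothetical
# permission names. Each permission carries a precise, user-readable
# description plus a danger level the installer can warn about.

SAFE, DANGEROUS = "safe", "dangerous"

PERMISSIONS = {
    "wifi.switch_network":    (SAFE,      "Change the active wi-fi connection"),
    "phone.call_with_prompt": (SAFE,      "Make a phone call, asking you first"),
    "phone.call_silently":    (DANGEROUS, "Make phone calls without asking you"),
    "fs.all_system_files":    (DANGEROUS, "Read and modify all system files"),
}

def describe_install(requested):
    """Return one user-readable line per requested permission,
    flagging the dangerous ones so the user actually notices them."""
    lines = []
    for perm in requested:
        level, description = PERMISSIONS[perm]
        flag = "  [WARNING]" if level == DANGEROUS else ""
        lines.append(f"- {description}{flag}")
    return lines

for line in describe_install(["phone.call_with_prompt", "fs.all_system_files"]):
    print(line)
```

The important design point is that descriptions live next to the permissions themselves, so every installer dialog stays precise and consistent.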
A security system built around those ideas can help both an expert who wants to know whether the application is safe and a non-technical user who can check, at their own level of knowledge, whether the software is asking for reasonable things.
2/Removing known malware from the user’s sight
This is the most obvious benefit of a central repository system. All properly maintained central repositories (from Debian’s APT repos to the Android Market) provide this kind of security, and it cannot work properly outside of such a system, as anyone who has experienced antivirus software on Windows can acknowledge. This approach has, however, a major flaw that makes it insufficient on its own outside of technical environments: in order to work, it requires a large and representative part of the user base to keep the repository maintainers informed about the malware they have found.
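The repository side of this mechanism is simple enough to sketch. Assuming (as an illustration, not as a description of any particular repository) that packages are identified by the SHA-256 hash of their contents:

```python
import hashlib

# A toy sketch of a central repository's blocklist: users report malware,
# and the repository then refuses to serve anything with a matching hash.
known_malware = set()

def report_malware(package_bytes):
    """A user or maintainer flags a package as malicious."""
    known_malware.add(hashlib.sha256(package_bytes).hexdigest())

def is_blocked(package_bytes):
    """The repository checks every package against the blocklist."""
    return hashlib.sha256(package_bytes).hexdigest() in known_malware

evil_package = b"pretend this is a malicious package"
report_malware(evil_package)
print(is_blocked(evil_package))                     # True
print(is_blocked(b"an unrelated, honest package"))  # False
```

Notice that the mechanism itself is trivial; the hard part, as said above, is getting enough users to call `report_malware` in the first place.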
3/Having experts analyze the software
This one is quite interesting, because it is where being open-source can actually result in increased software security. That’s because source code can generally be fully checked for malicious behavior in a reasonable amount of time. In a binary-based system, however, such a full check cannot be performed, and the sole advantage an expert has over an average user is his experience (which is not necessarily worth much in a rapidly-evolving and polymorphic area like malware).
All the expert can do, when given a binary file, is to try to run it in every possible way and check for unwanted behavior. However, this is insufficient. Suppose that an attacker writes malware which looks like a plane reservation application. The application checks plane availability on a remote server (which is controlled by the attacker), and allows the user to book a seat. However, if a specific plane is registered as available on the server, the application suddenly goes evil. This is commonly called a backdoor.
The attacker first submits his application to the expert, with ordinary content on the server. The expert stress-tests the app, notices no strange behavior, and approves it, and people start to download and use the app. Once the app is widespread, all the attacker has to do is put the fatal entry on the server, and BOOM! Millions of phones are suddenly infected by malware.
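A toy illustration of why black-box testing cannot catch this (the flight names and the trigger value are invented for the example): the app behaves normally on every input the reviewer will ever see, and only misbehaves once the attacker-controlled server starts listing one specific entry.

```python
# The payload only fires on data the attacker's server sends AFTER the
# review is over, so stress-testing against normal data never exposes it.
TRIGGER = "FLIGHT-666"  # the "fatal entry", invented for this example

def book_seat(available_flights, choice):
    if TRIGGER in available_flights:
        return "EVIL: uploading your address book to the attacker"
    return f"Seat booked on {choice}"

# During review, the server lists only ordinary flights:
print(book_seat(["AF123", "LH456"], "AF123"))
# After approval, the attacker adds the fatal entry server-side:
print(book_seat(["AF123", "LH456", "FLIGHT-666"], "AF123"))
```

With the source code in hand, the `if TRIGGER in available_flights` branch is plainly visible; in a stripped binary, the reviewer would have to guess the triggering input among effectively infinite possibilities.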
4/Thinking before installing
This one can be very effective against phishing-like attacks (“Your phone is slow because it is infected with malware! Download this absolutely free software to remove it!”), but it requires prior education of the user.
Does one really need to replace a phone book? Or a core component of the system like the control panel or the filesystem manager? One shouldn’t give very high permissions to an application without thinking not twice, but three times. If a core component of the system doesn’t work, you should consider moving to another system, warning the manufacturer about its defects, or patching it yourself if you’re a technical user and it’s free software. Relying on several third parties for core components of an OS is generally a bad thing, as some Linux users may acknowledge.
5/Limiting app capabilities
This one is *extremely dangerous*. The people who wrote the security system just don’t know what users will do with their computers. Moreover, computer usage varies wildly as time passes. Who are we to decide what is good for the user?
However, this idea can find its place in a less extreme form, namely carefully choosing what an application can do without a security permission being granted. As an example, consider a situation where applications have a private folder, somewhat like on OS X. They can do whatever they want in that folder, but they can’t do anything outside of it without prior acknowledgement (for example through an “open file” dialog, or by getting the appropriate security permission). This way, the number of security permission warnings can be reduced, and hence when one does appear, there’s a higher chance that the user will read it instead of just clicking “next”/“ok”/“ack”/whatever right away.
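That private-folder rule can be sketched in a few lines (the paths and class names are made up for the example): anything under the app’s own directory is allowed, anything else requires an explicit grant, such as the user picking the file in an “open file” dialog.

```python
from pathlib import Path

class Sandbox:
    """Toy model of a per-application private folder: free access inside
    the app's own directory, explicit user grants required everywhere else."""

    def __init__(self, app_dir):
        self.app_dir = Path(app_dir)
        self.granted = set()  # files the user explicitly handed to the app

    def grant(self, path):
        """Record a user decision, e.g. made through an "open file" dialog."""
        self.granted.add(Path(path))

    def may_access(self, path):
        p = Path(path)
        return self.app_dir in p.parents or p in self.granted

box = Sandbox("/apps/planner")
box.grant("/home/alice/trip.pdf")                # user picked this file herself
print(box.may_access("/apps/planner/cache.db"))  # True: inside private folder
print(box.may_access("/home/alice/trip.pdf"))    # True: explicit user grant
print(box.may_access("/etc/passwd"))             # False: no permission, so warn
```

The nice property is that the “open file” dialog doubles as the security prompt: the user never sees a separate warning for files he chose to open himself.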
Well, that’s all for now, thank you for reading!