Tuesday, June 14, 2005

Building a Reactive Immune System for Software Services

Keywords: self-healing computing, computer security, automated patching, zero-day attacks, patch creation

I got excited seeing the title of this article, but Building a Reactive Immune System for Software Services doesn't really contain any immunology. There's no immune system involved, per se; it's just a self-protecting system of sorts.

The idea is quite clever. First, the bug must be found using some extra instrumentation. Then, once the fault has been pinpointed, the code is recompiled so that the problem section can be run in an emulator. The emulator makes a backup of the program state before executing any code, and if, for example, a buffer overflow occurs (the emulator checks for memory faults), the program is reverted to the pre-fault state. Thus, when it returns to non-emulated code, there will be an error, since some code has not been executed, but the hope is that this error is something the rest of the application can handle. The emulator makes intelligent guesses about return values using some heuristics: -1 if the return type is int, 0 if it is unsigned int, and some cleverness with pointers so that NULLs won't be dereferenced.

I wasn't too sure if blocking out random sections of code was really all that viable, but it turns out that servers really are pretty robust that way. They tested with Apache (where would academia be without open source software to play with?) and tried applying their technique to 154 functions. In 139 of the 154 cases, their tests showed that the altered Apache did not crash, and often all of the pages were served. Results for BIND and sshd were similar. Tests with actual attacks showed that rather than crashing, the servers could continue execution and serve other requests. It certainly sounds promising for servers that need high availability!

The performance impact for this selective emulation is not too severe, but it does require access to the source code to compile in the patch, and there needs to be some fairly heavy instrumentation to determine where the patch should be put. There is also some risk that the emulation will open up new security flaws -- imagine what would happen if the emulated function was required for input validation.

But overall, I found this pretty interesting. I'd been working on a somewhat related idea for a term project, and the talk given by Dr. Keromytis about STEM has gotten me thinking that what I thought was a fun but not-terribly-viable idea might actually be worth pursuing.

I'm hoping to look at all the papers cited in the related works section of this one, but for those looking for a little bit more right now, I direct you to Val's summary of the failure-oblivious computing work. Neat stuff!
