Does making the kernel harder make making the kernel harder?

Abstract

The Linux Kernel Hardening Project is making significant strides in reducing vulnerabilities and in increasing the effort required to exploit the vulnerabilities that remain. Much of what has been implemented is obviously valuable, but sometimes the benefit is more subtle. How does the introduction of refcount_t make the kernel more secure, and by how much? What value is there in removing variable-length arrays? Casey Schaufler, a (really) long-time kernel developer, will explain why some of these changes provide significantly greater value than might be apparent to the casual observer. He will also discuss the cost of kernel hardening in terms of development overhead, code churn and performance.
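
For readers who have not met it, refcount_t is the kernel's hardened reference-count type: unlike a raw atomic_t counter, it saturates instead of wrapping when it overflows. Below is a minimal sketch of the usage pattern, with a hypothetical struct session object and illustrative session_get()/session_put() helpers (not code from the talk):

    #include <linux/refcount.h>
    #include <linux/slab.h>

    struct session {
            refcount_t refs;
            /* ... payload ... */
    };

    static struct session *session_get(struct session *s)
    {
            /*
             * refcount_inc() saturates (with a WARN) rather than
             * wrapping to zero the way atomic_inc() would, so an
             * attacker who forces an overflow cannot trigger a
             * premature free.
             */
            refcount_inc(&s->refs);
            return s;
    }

    static void session_put(struct session *s)
    {
            /* Free the object only when the last reference drops. */
            if (refcount_dec_and_test(&s->refs))
                    kfree(s);
    }

A saturated counter can never fall back to zero, so an overflowed object leaks instead of being freed while references to it remain: the likely use-after-free exploit becomes, at worst, a small memory leak.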

Presented by

    Casey Schaufler

    Casey Schaufler has been developing operating systems since the 1970s, starting with the first commercial UNIX port to a 32-bit machine. He has worked on device drivers, filesystems, databases, toolchains and debuggers. He started working on system security in the 1980s and was the lead architect and developer for trusted systems at Silicon Graphics. In security he has implemented mandatory access controls, audit systems, access control lists, multi-level window systems and networking. He created a UNIX system with an unprivileged root user, and was heavily involved in the development of a rational release process. Casey is the author and maintainer of the Smack Linux security module and is currently working to make security modules fully stackable. He is employed in Intel's Open Source Technology Center. He lives 30 kilometers south of San Francisco and 100 meters from the Pacific Ocean.