A security researcher ought to know one thing very well: risk. Risk is key to much of what we do in security. Risk informs what we focus on. Risk informs our decisions. Risk may well be the single most important factor in security.

What these security researchers did was fail to weigh risk in a few areas. They thought about their impact on the Linux kernel codebase when submitting flawed bug fixes, and I believe them when they say they didn’t intend harm to the Linux kernel. But they failed to consider, or underestimated, the risk they took in performing this research without weighing its ethical impact. They say they were studying the process and not the people, but the people are part of the process. Kernel maintainers are humans, and in the open source world they are often volunteers. These researchers experimented on them without consent, which is unethical.

They also failed to consider the risk to their institution’s reputation. As Greg stated, University of Minnesota researchers submitted bad patches, published research admitting they had knowingly submitted bad patches, and then continued to submit bad patches. As a result, their institution has been banned from submitting future patches, and most of their past patches are being reverted. They can no longer be trusted, so the Linux kernel maintainers are doing what they can to minimize the risk.

I find this particular incident interesting for a number of reasons, but what stands out most to me is the complete disregard for any risk outside of their tunnel-vision focus on the patches themselves. They could have chosen any of a million open source projects to prove their point; instead, they chose to infiltrate and harm the Linux kernel. However much they tried to minimize the risk within the patches, they took an enormous risk in attacking such a vital project, all for the sake of a point that wasn’t worth it. These researchers may well find themselves looking for work outside academia and security because of the risk they failed to account for.