The Negatives of (False) Positives
by Jennifer Lapell
[email protected]
last updated June 17, 2000



Imagine your doctor calling you up one day, out of the blue. "I'm sorry to tell you this," he says, "but you've only got three months to live." You're shocked, but eventually, you accept it and set about putting your affairs in order, booking the world tour, everything you'd ever meant to do.

The next week, the doctor calls again. "I'm sorry to tell you this..." You wonder what the bad news could be. "...but your wife has only got three months to live." At this point, you could start to get suspicious. But maybe there's no other doctor around for miles and you're feeling lazy, so you believe him. You tell your wife to put her affairs in order, too, and to come see Europe with you.

But then, the next week, the doctor calls again, with the news that your brother and sister and all your cousins have also only got three months left. They go and check with their family doctors, and they're all fine, so you start to wonder about his original diagnosis. You also go to get a second opinion, and guess what? You're fine; so is your wife.

The next week, when the doctor calls, you hang up on him. You've learned your lesson; whenever he tries to call back, you hang up again.

Now, think about how many times you've closed one of those annoying "Are you sure?" dialog boxes without reading it. You hit OK to make it go away, because you've seen it a million times before and it never meant anything special those other million times, did it?

Do Stricter Rules Equal Tighter Security?

It is a classic truth in computer science that "the rarer a warning is, the more likely it is to be noticed." Particularly in a GUI-based operating system, the more common a warning is, the more likely the user is to want to swat it away like a mosquito on the monitor screen. So the crucial question for security systems administrators is this: Are you hanging up on your virus checking or firewall software?

Software vendors promote the common misconception that they can tighten security by adding stricter rules to virus checking and other security software such as firewalls. By stepping up the level of reporting, their brochures and websites announce proudly, we can make users aware of even the smallest changes, which could be the first stages of a bigger problem.

Nice idea, in theory. Except that behind every one of those warnings sits a real user who has to sift through everything the security software generates. Fred Cohen, a pioneer in the field of computer security research, points out that "as the number of user decisions grows, the tendency to use default responses increases, until eventually, the user simply defaults everything and attacks become undifferentiable from normal operation."

Imagine now that it's Monday morning. A vigilant systems administrator arrives at work. It's her job to sift through the entire warning log from the weekend. But she's tired and annoyed from her commute and frazzled with all the other tasks on her plate that day, so maybe she skims over the warning log instead of looking at each item in detail. She just wants the messages to go away. What she doesn't notice is that this time there has been an actual breach. If she doesn't catch the problem now, it could go unnoticed for quite some time; long enough for the virus to do a lot of damage.

We could pretend that this was just the administrator's fault, but in reality, software that "cries wolf" is one of the biggest stumbling blocks in the security industry today. What good can come of giving system administrators weapons that inevitably blow up in their faces?

Catching Imaginary Wolves And Missing The Real Ones

Computer science has borrowed terminology from medicine to describe this and the other common type of error made by anti-virus and other security software. In medicine, a lab test for the presence of an organism or substance will yield a positive value if whatever the lab is testing for is present. The result will be called negative if it is not present. But there is always a margin for error, and these errors are called either "false positives" or "false negatives." A false positive result means that the substance or organism is not present, but the lab test says it is. A false negative result means that the test has not detected something that is actually present.
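The same four outcomes apply to any yes/no test, whether the question is "is the organism present?" or "is this file infected?" Here is a minimal sketch in Python (the function name and the sample input are invented purely for illustration) mapping the ground truth and a scanner's verdict onto the four categories:

    # Minimal illustration of the four possible outcomes of any yes/no test.
    # The function name and example values are invented for illustration only.

    def classify_outcome(actually_infected: bool, scanner_says_infected: bool) -> str:
        """Map the ground truth and the scanner's verdict to an outcome type."""
        if actually_infected and scanner_says_infected:
            return "true positive"    # the virus is there, and we caught it
        if not actually_infected and not scanner_says_infected:
            return "true negative"    # nothing there, and the scanner stayed quiet
        if not actually_infected and scanner_says_infected:
            return "false positive"   # crying wolf over a clean system
        return "false negative"       # a real infection slips past unnoticed

    # A clean file that the scanner flags anyway:
    print(classify_outcome(actually_infected=False, scanner_says_infected=True))
    # prints "false positive"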

In computer applications, a false negative (sometimes called a Type II error) is usually treated as the graver of the two scenarios. A false negative means you do have a virus, but your virus checker doesn't know about it and can't warn you. You feel secure, because you've got the latest and greatest virus checker installed, but even the strongest house of cards doesn't stand a chance if it's built on an active faultline.

False positives (also known as "Type I errors") are usually considered more or less benign. Your virus checker isn't missing anything, so you're safe, right? The worst-case scenario most users envision is that it warns you a few times about viruses that don't exist. No big deal.

No big deal unless, perhaps, you miss the crucial call where the doctor is telling the truth -- or trying to, but can't get a word in edgewise before you hang up on him. "But it's true!" the little boy in the story calls to the villagers. "It's true! This time, there really is a wolf!"

The issue of how to ensure that the real wolves get caught, while filtering out the imaginary ones, has been a hot topic since before computer science even existed as a formal discipline. In 1948, Claude Shannon published his classic paper "A Mathematical Theory of Communication," examining the problems involved in separating signal from noise so that intelligible communications can be recovered. Engineers since then have been refining his work and even applying it in other areas, such as human genetics. In the area of computers, though, there is no such thing as the perfectly reliable virus checker, only a negotiated truce between false positives and false negatives.

Finding a balance between too many and not enough warnings is a tricky and unstable business. And in assessing the risks involved on either side, we may be fooling ourselves -- and allowing software vendors to fool us -- with the twin perceptions that false positives are our friend and that "noisy" security equals tight security.
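The instability of that balance is easy to demonstrate with a toy model. In the Python sketch below (the "suspiciousness" scores and infection labels are invented purely for illustration), moving the alert threshold never eliminates errors; it only trades false positives for false negatives and back again:

    # Toy demonstration of the false-positive / false-negative trade-off.
    # The scores and infection labels below are invented for illustration.

    # (suspiciousness score, actually_infected) for a handful of imaginary files
    files = [
        (0.10, False), (0.35, False), (0.55, False), (0.62, False),  # clean files
        (0.58, True), (0.80, True), (0.95, True),                    # infected files
    ]

    def count_errors(threshold):
        """Return (false positives, false negatives) when alerting at or above threshold."""
        false_positives = sum(1 for score, infected in files
                              if score >= threshold and not infected)
        false_negatives = sum(1 for score, infected in files
                              if score < threshold and infected)
        return false_positives, false_negatives

    for threshold in (0.3, 0.6, 0.9):
        fp, fn = count_errors(threshold)
        print(f"threshold {threshold:.1f}: {fp} false positives, {fn} false negatives")

    # threshold 0.3: 3 false positives, 0 false negatives  (the log fills with noise)
    # threshold 0.6: 1 false positives, 1 false negatives  (errors on both sides)
    # threshold 0.9: 0 false positives, 2 false negatives  (real infections slip through)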

Fixing It When It Ain't Broken

The trend in virus checking software is increasingly towards programs which seek out "virus-like" behaviour. Manufacturers claim this gives their software the ability to detect viruses that did not exist at the time the program was written. Although Cohen and others have proven that catching every unknown virus is a theoretically unsolvable problem, in the real world these virus checkers do have the ability to pick up on potential problems on their own "initiative," so to speak. You'd think that would be a good thing.
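To see how such heuristics can misfire, consider the deliberately naive sketch below (the marker strings and the sample "file" are made up, and real products use far more sophisticated analysis). Any rule broad enough to catch virus-like behaviour will eventually match something perfectly benign:

    # A deliberately naive "behaviour-based" scanner, for illustration only.
    # Any marker broad enough to catch virus-LIKE behaviour will eventually
    # also match legitimate software, producing a false positive.

    SUSPICIOUS_MARKERS = [
        "writeprocessmemory",   # code injection looks virus-like...
        "self-replicate",
        "format c:",
    ]

    def looks_infected(file_contents):
        """Flag a file if it contains anything on our crude marker list."""
        lowered = file_contents.lower()
        return any(marker in lowered for marker in SUSPICIOUS_MARKERS)

    # A legitimate disk utility's README that merely documents the command it wraps:
    benign_readme = "This tool is a safer front end to 'format c:' for rebuilding drives."
    print(looks_infected(benign_readme))   # prints True -- a false positive on clean software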

Unfortunately, the prevalence of "intelligent" virus checkers can backfire, with costly and damaging repercussions for software developers and others in the industry. Imagine a virus checker, working optimally as designed, finding and analyzing potential viruses, and reporting those it determines represent a serious problem.

We have seen already that administrators can quickly be deluged by error messages, but let us suppose in this case that the computer operator is wide-awake that day, and notices a positive result. Now what? What happens if the user actually pays attention and tries to fix the problem? If there actually is a virus, and the operator finds it and cleans it up, what we have is a happy ending. But what if there is no virus, and the system administrator assumes that, because her software reports one, it must be hiding, lurking somewhere just out of sight?

Think of your doctor again. What if the doctor calls to let you know you've got three months to live, and you trust him; you believe the false positive. You sell off your house and all your belongings, and sign up for that world tour you've always dreamed of. At some point, you're going to discover that the doctor was wrong and that you've got no assets left.

Sounds ridiculous? It may seem that way until you've spent a substantial chunk of time and money, in the words of one software developer, "trying to fix things that aren't broken." That's the point when companies at the receiving end of a false positive start looking around for somebody to blame.

Brian Connolly*, a Boston-area Web consultant, is bitter, recalling how his company very nearly got burned by an "intelligent" virus checker. Prior to his current position, Connolly owned a CD-ROM firm, and at one point, a CD distributed by his company contained software which happened to trigger false positive messages from one of the leading virus checkers.

In this litigious industry, any perceived damage is potential lawsuit territory, and a disgruntled end-user sent Connolly's firm a demand letter for "damages" to their company's system, plus the time they took investigating the alleged problem. Connolly says the amount could have easily reached US$1 million or more.

Fortunately, Connolly had ample legal ground to fight the allegations. His lawyer retaliated, alleging "damage to a business relationship" and many other charges. Connolly's lawyer also made it clear that "once the court established that there was no virus, which there wasn't -- he [the end-user] would be sued into oblivion."

Taking False Positives as Gospel

If the prospect of having to shell out money -- for a problem his firm didn't cause -- isn't daunting enough, Connolly points out that it could have been much worse. Had the incident occurred today, he claims, his company would have stood to lose more than just big bucks. With the spread of Internet access, he says that "chances are high that a trusting user would take the false positive as gospel, post defamatory info on Usenet, and kill the software firm's chances of success."

From a business perspective, it's clear that this experience was terrifying, even without the Internet to make it worse. "Ninety percent of my sales was through four or five distributors, and this end-user was in contact with the distributor from whom this traveled. The ramifications of a [potential] lawsuit with a dealer were scary."

Now, multiply that potential damage by the number of individuals on the Internet who would read an e-mail warning about a virus in a software package -- and not bother to read the follow-up statements reassuring the public that the "virus" was in fact a flaw in the virus checker itself.

At their most benign, false positives may be just a nuisance. But Connolly's case shows the false positive at its worst. In that form, it's easy to see that false positives not only waste valuable security resources; they can end up costing developers and distributors real money and, more significantly, their business reputations.

Connolly's final word, reflecting on this episode? Smiling, he comments mildly, "I'm sure glad I got out of the industry." The rest of us, meanwhile, have to stay back, fighting in the trenches for better virus protection. And sometimes, the enemy is not the virus itself, but the software we use to combat it.

* Name has been changed at this source's request.


