Software Bugs Can Be Lethal

When his dishwasher acts up and won't stop beeping, Jeff Seigle turns it off and then on, just as he does when his computer crashes. Same with the exercise machines at his gym and his CD player.

"Now I think of resetting appliances, not just computers," says Seigle, a software developer in Vienna, Va.

Malfunctions caused by bizarre and frustrating glitches are becoming harder and harder to escape now that software controls everything from stoves to cell phones, trains, cars and power plants.

Yet computer code could be a lot more reliable - if only the industry were more willing to make it so, experts say. And many believe it would help if software makers were held accountable for sloppy programming.

Bad code can be more than costly. Sometimes it's lethal.

  • A poorly programmed ground-based altitude warning system was partly responsible for the 1997 Korean Air crash in Guam that killed 228 people.
  • Faulty software in anti-lock brakes forced the recall of 39,000 trucks and tractors and 6,000 school buses in 2000.
  • The $165 million Mars Polar Lander probe was destroyed in its final descent to the planet in 1999, probably because its software shut the engines off 100 feet above the surface.

Of course, more deaths are caused by human error than by bad software, and modern society would be unthinkable without Web servers, word processors and autopilots.

But software is so useful that people tolerate it even when its quality falls short.

Last year, a study commissioned by the National Institute of Standards and Technology found that software errors cost the U.S. economy about $59.5 billion annually, or about 0.6 percent of the gross domestic product. More than half the costs are borne by software users, the rest by developers and vendors.
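
As a quick back-of-the-envelope check on those figures, the sketch below recomputes the relationship between the dollar total and the GDP share; the implied GDP value and the even user/vendor split are assumptions derived from the article's own numbers, not figures quoted from the NIST study.

    # Rough consistency check of the NIST figures cited above (Python).
    # The implied GDP and the user share are assumptions inferred from the
    # article's numbers, not data taken from the study itself.
    annual_cost = 59.5e9      # $59.5 billion in annual software-error costs
    share_of_gdp = 0.006      # about 0.6 percent of GDP

    implied_gdp = annual_cost / share_of_gdp
    print(f"Implied U.S. GDP: ${implied_gdp / 1e12:.1f} trillion")   # about $9.9 trillion

    users_share = annual_cost * 0.5   # "more than half" is borne by users
    print(f"Cost borne by users: more than ${users_share / 1e9:.1f} billion")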

Most software is thrown together with insufficient testing, says Peter Neumann, principal scientist at SRI International's Computer Science Laboratory in Menlo Park, Calif.

"The idea that we depend on something that's inherently untrustworthy is very frightening," he says.

When Neumann's group worked with NASA on software for the space shuttle, developers were so careful about bugs that they produced just three lines of code per day, an unthinkably slow pace in an industry where a major application may have a million lines of code.

Developers say defects stem from several sources: software complexity, commercial pressure to bring products out quickly, the industry's lack of liability for defects, and poor work methods.

Programmers typically spend half their time writing code and the other half looking for errors and fixing them.

That approach may have worked in the infancy of computers, when programs were small, says Watts Humphrey, former director of programming quality at IBM Corp., now with Carnegie Mellon University's Software Engineering Institute. But as demands on software balloon, he says, the size of programs seems to double every year and a half - just like microprocessor speeds.

Most programs in testing have five to 10 defects per 1,000 lines of code, or up to 10,000 bugs in a million-line program. It would take 50 people a full year to find all those bugs, Humphrey says.
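
As a rough illustration of Humphrey's arithmetic, the short sketch below multiplies the quoted defect density by a million-line program and converts the result into person-years; the 200-bugs-per-person-year rate is an assumption inferred from his "50 people, one year" estimate, not a published figure.

    # Back-of-the-envelope sketch of Humphrey's defect arithmetic (Python).
    # All constants are assumptions taken from the figures quoted above.
    DEFECTS_PER_KLOC = (5, 10)       # typical defects per 1,000 lines during testing
    PROGRAM_SIZE_LOC = 1_000_000     # a "major application"
    BUGS_PER_PERSON_YEAR = 200       # assumed rate implied by "50 people, one year"

    low = DEFECTS_PER_KLOC[0] * PROGRAM_SIZE_LOC // 1000    # 5,000 defects
    high = DEFECTS_PER_KLOC[1] * PROGRAM_SIZE_LOC // 1000   # 10,000 defects
    print(f"Expected defects: {low:,} to {high:,}")

    person_years = high / BUGS_PER_PERSON_YEAR              # 10,000 / 200 = 50
    print(f"Person-years to find them all: {person_years:.0f}")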

Consequently, Humphrey teaches engineers to plan and pay attention to details early, and reject aggressive deadlines.

Echoing such ideas, Microsoft Corp.'s Trustworthy Computing initiative held up coding for 10 weeks last year to teach employees to spend "more time in planning stages and thinking about quality," says Microsoft vice president S. Somasegar.

Windows Server 2003, now being released, is the first software product affected by the initiative, Somasegar says. Its launch was delayed by a year.

"It took a much longer time because we did the right thing on security and reliability," Somasegar says. "We hope our customers will see a huge improvement."

Unfortunately, Microsoft customers won't know how well the software works until they've tried it. That's something the Sustainable Computing Consortium wants to remedy.

The problem, says consortium director Bill Guttman, is that unlike other engineers, programmers have no way of measuring the reliability of their designs.

"It always takes us by surprise when the rocket blows up or the ATM goes down," Guttman says.

The consortium wants to create automated tools that analyze software and rate its reliability.

But others say bugs would be greatly reduced if software makers were held legally responsible for defects.

"Software is being treated in a way that no other consumer products are," said Barbara Simons, former president of the Association for Computing Machinery. "We all know that you can't produce 100 percent bug-free software. But to go to the other extreme, and say that software makers should have no liability whatsoever, strikes me as absurd."

Software developers are hard to sue for shoddy products because regulators have been afraid to rein in what was, for a long time, the nation's fastest-growing industry, said Cem Kaner, a professor of software engineering at the Florida Institute of Technology.

Microsoft contends that setting standards could stifle innovation, and the cost of litigation and damages could mean more expensive software.

But Kaner favors making companies liable only for bugs not disclosed to customers, and for limited damages.

"If we are not going to make manufacturers stand behind their products, we could at least force them to give enough info to make appropriate buying choices," Kaner says.

If software makers haven't done the best job, consumers are hardly blameless. We have long favored flashy products over reliable ones.

"That's what we pay for," Guttman says. "We say: 'Give me the phone that takes the picture. Don't give me wireless security!'"

By Peter Svensson
