Fortunately, you don't have to slog through a statistical swamp to learn what's bugging customers. Small numbers can reveal big clues. Take a lesson from Autodesk, the company that Bartz led with great success before she took the helm at Yahoo, where she's so far had less success.
Autodesk, based in San Rafael, California, sells an array of high-end tools to architects and other design professionals. In the course of updating its flagship AutoCAD software, Autodesk embarked on a "usability study." In exchange for $25 gift certificates, product researchers observed and interviewed 11 customers performing the same task: find the download on Autodesk.com, download it, then install and launch the software. That's right: 11 customers furnished enough information about AutoCAD to warrant changes affecting all customers. Autodesk recently repaired a download button that didn't look clickable enough and an error message that puzzled even some company experts -- "ADR Not Empty."
Basing major product changes on input from just a few users may seem too scant a sample for meaningful findings. But it's not. Erin Bradner, a user researcher at Autodesk, finds great value in small-sample usability testing. Armed with a Ph.D. in human-computer interaction and 15 years of experience, she insists that small samples reliably produce good ideas with a robust return on investment. Five users satisfy the time-tested minimum sample for usability tests, Bradner wrote in a recent blog post for Autodesk. Typical samples engage up to 20 users.
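The five-user minimum Bradner cites rests on a well-known rule of thumb in usability research: if a problem trips up a proportion p of all users, then n testers will catch it at least once with probability 1 - (1 - p)^n. A minimal sketch (the 31% figure below is a number often cited in the usability literature, not one from this article):

```python
def detection_probability(p: float, n: int) -> float:
    """Chance that at least one of n testers hits a problem
    that affects a proportion p of all users."""
    return 1 - (1 - p) ** n

# For a problem affecting ~31% of users, five testers
# catch it about 84% of the time.
for n in (1, 3, 5, 10):
    print(n, round(detection_probability(0.31, n), 2))
```

The curve flattens quickly, which is why adding testers past a handful buys little: most of what a tenth tester would find, the first five already did.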
To bolster confidence in a tiny sample, Autodesk hired Jeff Sauro, who runs a website called measuringusability.com. Sauro is a statistics entrepreneur with a master's degree in learning, design, and technology from Stanford University. His website asserts that he quantifies the user experience through the statistical analysis of human behavior.
"[Autodesk] watched 11 people and saw the problem in 3 of them," says Sauro. "Using some simple statistics called confidence intervals, you can be 95% confident that between 9% and 52% of all users would be likely to encounter the problem." In other words, 11 customers furnished statistically sound evidence that nearly one AutoCAD customer in ten -- and possibly one in two -- would hit the same problem.
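Sauro's arithmetic can be reproduced with a standard confidence interval for a binomial proportion. A sketch using the Wilson score interval, one common method for small samples (the article doesn't say which method Sauro used, so the endpoints differ slightly from his 9% and 52%):

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a binomial proportion."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# 3 of 11 testers hit the problem: with 95% confidence, the
# interval runs from roughly 10% to 57% of all users.
low, high = wilson_interval(3, 11)
print(f"{low:.0%} to {high:.0%}")
```

The exact endpoints shift a few points depending on the method (Wilson, adjusted Wald, exact), but the takeaway is the same: even 11 observations bound the problem's prevalence well away from zero.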
Small isn't always best, of course. Large data sets still shed light where small numbers cannot, especially when measuring what works well; small samples excel at exposing where trouble lurks. That matters to software designers, toolmakers, retailers, car manufacturers, and homebuilders -- any business whose unhappy customers will bolt to competitors that swoop in at the first sign of dissatisfaction. If you're not minding your customers, rivals will before you can say, "How'd we miss that?" Small-sample user testing is one efficient way to stay a step ahead. When problems show up in small samples, they usually signal big problems.
Have an example? Please share it.
S.L. Mintz covers finance and investment strategy and was a writer of the best-selling Financial Crisis Inquiry Commission Report.