The Bottom Line Of Google Book Search

Last Updated Sep 9, 2009 1:59 PM EDT

Yesterday was the filing deadline set by Denny Chin, the federal judge overseeing the Google Book Search case, so over the last few weeks the court has received letters, briefs, and other documents from hundreds of parties from around the world.

It seems like just about everyone has a view on Google's attempt to build what is often described as a giant digital library and/or online bookstore, though Google officials themselves have quietly indicated that what they are actually trying to do is to enhance the reach of Google Search.

The details of this case are extremely messy, and I've tried to parse them before (see the related links below). Today, I'd prefer to stay at a certain distance from all the noise and try to capture one signal. And that signal is this: it probably doesn't really matter which way the judge rules.

Why? Opponents of the proposed settlement of the class action suit over Google's massive effort to scan the past 86 years of books are focused on issues like the low quality of the metadata, or the likelihood that Google will end up with an effective e-monopoly over most out-of-print books. But in my view, this whole effort is far more likely to come down to Fair Use.

With the antitrust division of the U.S. Department of Justice due to weigh in with Judge Chin later this month, I'm assuming that whatever decision the judge issues will provide a layer of regulatory and legal oversight of Google's ability to achieve a monopoly over e-books. And one nice thing about monopolies is that they are relatively easy to regulate!

None of that messy competition that makes government intervention in a competitive marketplace so dicey.

(Not to miss out on the action, the judiciary committee of the U.S. House of Representatives is holding a hearing on "competition and commerce in digital books" on Thursday.)

As for the metadata-quality issue, this is really only a reflection of the chaos that already exists in the physical book world. Despite centuries of book production, and the efforts of legions of brilliant librarians to organize all of the vital information about who published what when, anyone who's ever spent time in the stacks knows it's still a crap shoot whether you'll find what you are looking for.

The nice thing is you usually find something better along the way. And that's exactly like online browsing.

What we can expect out of all this is a continuing scrum about the details, but an effective reality that Google Book Search will provide a great browsing bonanza for researchers of all kinds, especially people in academia and the media industry.

By bringing so many "lost" volumes back to life, the search giant is seriously upgrading the ability of all of us to access the pre-digital knowledge currently locked up in academic libraries such as that at my alma mater, the University of Michigan -- which, BTW, was an early and eager partner with Google on its book scanning effort.

Google says it has already scanned around ten million books, with many more millions still to go. While everyone else debates the merits of this work, Google can display -- and is displaying -- the titles and abstracts from these works under its understanding of Fair Use. And by extending keyword search through these digital files, Google lets you access content from inside the books via its normal search tool.

To anyone who craves learning about new things, especially from our collective past, this is an informational goldmine. That it will also be good for Google's bottom line is implicit; this is a smart company that delivers a high rate of return to its shareholders quarter after quarter.

Therefore, however the judge rules in this case, Google stands to benefit from its decision years back to scan first, and answer questions later.

Thanks to Tamara Baltar.
Sept. 1 Google's Chief Engineer Explains the Book Search Initiative "Daniel J. Clancy, PhD, is the Engineering Director for the Google Book Search Project, and he agreed to share with Bnet some of the details about how the company is approaching the problems inherent in a project of such magnitude, including the probability of errors in the scanning process..."
Aug. 26 Google Offers Free Downloads of a Million Books
Aug. 5 The Google Book Search Case -- for Dummies

  • David Weir

    David Weir is a veteran journalist who has worked at Rolling Stone, California, Mother Jones, Business 2.0, SunDance, the Stanford Social Innovation Review, MyWire, 7x7, and the Center for Investigative Reporting, which he cofounded in 1977. He's also been a content executive at KQED, Wired Digital, and Excite@Home. David has published hundreds of articles and three books, including "Raising Hell: How the Center for Investigative Reporting Gets Its Story," and has been teaching journalism for more than 20 years at U.C. Berkeley, San Francisco State University, and Stanford.