(MoneyWatch) Do you believe what you are reading today? Do you trust the comments and analysis you read here on CBS MoneyWatch? If you do, the chances are that you believe they're written by humans who, although fallible, mostly strive to do the right thing. And I can reassure you that those of us who write for the site work hard to get facts right and to challenge our own and each other's assumptions. We also have a deep vested interest in getting it right, not just out of professional pride but because we all depend on a financial system that is broadly functional.
Moreover, the many writers on the site represent a broad cross-section of experience, expertise and opinion. You can decide for yourself whose approach you agree or disagree with. That diversity is crucial in the same way that a balanced portfolio is essential -- a range of styles and assumptions makes both safer.
That won't necessarily be true when articles like these are generated by machines instead of humans. Narrative Science, a Chicago-based software company, has designed algorithms that will take data -- say, financial data or sports scores -- analyze it, write a story around it and add "expressiveness and uniqueness," along with pictures and graphs, to produce what looks just like a human-written story. Your only clue as to the article's provenance may be that there is no human byline and the piece comes from Chicago. The charmingly antiquated name for the software is "Quill."
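To make this concrete, here is a toy sketch of how software can turn raw data into prose. This is my own illustration, not Narrative Science's actual code; the function name, templates and thresholds are all invented. Note where an editorial judgment -- what margin of victory counts as a rout -- is baked silently into the program:

```python
# A hypothetical, minimal data-to-narrative generator.
# Not Quill; just an illustration of the general technique.

def game_story(home, away, home_score, away_score):
    # "Analyze" the data: determine the winner and the margin.
    margin = abs(home_score - away_score)
    winner, loser = (home, away) if home_score > away_score else (away, home)
    # "Expressiveness": choose a verb based on a built-in assumption
    # about what counts as a blowout -- a judgment the reader never sees.
    verb = "crushed" if margin > 20 else "edged" if margin <= 3 else "beat"
    high, low = max(home_score, away_score), min(home_score, away_score)
    return f"{winner} {verb} {loser}, {high}-{low}."

print(game_story("Bears", "Lions", 31, 10))
```

Every sentence the program emits flows from those hidden thresholds and templates, which is precisely the point of the paragraphs that follow.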
Stories in the financial press that are generated from software derive necessarily from data and history. That means they can't be any better than the information on which they're based. They may or may not be continuously updated, but you won't know by whom or according to what assumptions or heuristics. Any number of guiding principles could be introduced, but you can't see them. The more articles that are written this way, the more the embedded assumptions and biases are reinforced.
Just as the property market bubble was created by everyone making investments in the same way for the same reasons, so this kind of artificially generated journalism has the potential to amplify biases and assumptions, but at far greater speed and on a far wider scale than anything written by humans.
By contrast, you may decide that I am biased in favor of or against certain management styles, but you can see that bias and decide whether to read my articles or ignore me. You can read many authors who come from different backgrounds and perspectives and, as a consequence, protect yourself against a monopoly of ideas.
But software-generated journalism all looks the same and feeds itself. And it looks objective because it comes from a system, not a person.
This bothers me not just because I fear it putting me out of work. It bothers me more because, as Kevin Slavin at MIT has pointed out, algorithms are peculiarly opaque. We can't quite see what they will come up with, especially if confronted with other algorithms. When they go wrong, we often don't know why. All we can do, Slavin says, is hit the "stop" button.
I recognize that journalists aren't the most respected breed in the world, and business writers are no exception. But I also recognize that algorithms in the trading and investment world have introduced into financial markets a level of volatility hitherto unknown. I hate to think what happens when we add the volatility induced by computer-generated analysis.