A pair of chatbots has recently done something children often do: create a secret language.
Last month, researchers at Facebook found two bots developed in the social network's AI division had been communicating with each other in an unexpected way. The bots, named Bob and Alice, had generated a language all on their own:
Bob: "I can can I I everything else."
Alice: "Balls have zero to me to me to me to me to me to me to me to me to."
That might look like gibberish or a string of typos, but researchers say it's actually a kind of shorthand. Here's the backstory: In June, Facebook announced an initiative at FAIR, or Facebook Artificial Intelligence Research, in which the company was developing bots that could negotiate.
Bots are software that can talk to both humans and other computers to perform tasks, like booking an appointment or recommending a restaurant.
Facebook's bots were left to themselves to communicate as they chose, and they were given no directive to stick to English. So the bots began to deviate from the script in order to become more effective at deal-making.
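To see how that drift can happen, here's a toy sketch (not Facebook's actual code; the reward function and vocabulary are invented for illustration). If training rewards only deal-making success and nothing anchors the messages to grammatical English, repetitive strings that happen to score well can win out:

```python
# Toy illustration of reward-driven language drift.
# Assumption: repetition acts as a crude way to encode emphasis or quantity,
# so the hypothetical reward favors degenerate, non-English strings.
import random

random.seed(0)

VOCAB = ["i", "can", "ball", "hat", "book", "to", "me", "have", "zero"]

def negotiate_reward(message: str) -> int:
    """Hypothetical reward: counts repeated bargaining tokens.
    Nothing here penalizes ungrammatical output."""
    return message.count("to me") + message.count("can")

def random_message(length: int = 8) -> str:
    return " ".join(random.choice(VOCAB) for _ in range(length))

# Hill-climb on reward alone -- no term rewards staying in English.
best = random_message()
for _ in range(2000):
    candidate = random_message()
    if negotiate_reward(candidate) > negotiate_reward(best):
        best = candidate

print(best)  # tends toward repetitive strings like "to me to me ..."
```

The point of the sketch: the bots weren't malfunctioning, they were optimizing. Adding a constraint that messages must resemble human English is what keeps such agents interpretable.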
The phenomenon isn't new. Researchers at OpenAI, the lab started by Elon Musk and Y Combinator president Sam Altman, have embarked on the same kind of work. AlphaGo, the AI developed by DeepMind, a division of Google's parent Alphabet, works on similar principles.
The prospect of AI developing its own languages raises lots of interesting questions: Are we okay with this and should researchers try to stop it? What benefits could this have? What checks and balances might we create?
Facebook declined to comment.
The issue is only going to get more relevant. Facebook has made a big push with chatbots in its Messenger chat app. The company wants 1.2 billion people on the app to use it for everything from food delivery to shopping. Facebook also wants it to be a customer service utopia, in which people text with bots instead of calling up companies on the phone.
Meanwhile, other tech giants -- Amazon with its Alexa, and Google with its rival Assistant -- are investing big in machine learning and conversational bots.
In the end, Facebook had its bots stop inventing their own language, because that wasn't the point of the study.
But it's a question we'll be asking for a long time.
This story was originally published on CNET.