The Chatbots Are Coming

Machines are closer than ever to mimicking the human brain. They just need the right words.

Facebook can now identify the faces in your photos. Apple can recognize commands you speak into your cell phone. And if you point your phone at a sign printed in a foreign language, Google can instantly translate it into your own.

It's all thanks to a form of artificial intelligence known as a deep neural network—a network of hardware and software that (loosely) mimics the web of neurons in the human brain. A deep neural net can learn discrete tasks by analyzing vast amounts of data. It can learn to recognize a face by analyzing millions of faces. It can learn to respond to smartphone commands by analyzing millions of spoken words. It can learn to translate from one language to another by analyzing millions of existing translations. 
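To see the idea in miniature, the sketch below (in Python) trains a tiny two-layer network to learn a simple function, XOR, purely from examples. It is a toy, not how Google's or Facebook's production systems work, but the principle is the same: the network is given data and adjusts its own internal weights to shrink its errors, with no rules written by hand.

```python
# A toy illustration, not any company's production system: a tiny two-layer
# neural network learns the XOR function purely by analyzing examples,
# adjusting its weights to reduce its errors. No rules are written by hand.
import numpy as np

rng = np.random.default_rng(0)

# Every input/output example for XOR: the "data" the network learns from.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases for a 2 -> 8 -> 1 network.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: the network's current guesses.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: nudge every weight downhill on the error (gradient descent).
    grad_out = (output - y) * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_hid
    b1 -= lr * grad_hid.sum(axis=0)

print(np.round(output, 2))  # learned, not programmed: close to [[0], [1], [1], [0]]
```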

And in the years to come, neural nets will learn to carry on a real conversation. They will understand the sentences you speak—not just recognize them—and they will respond much as a human would.

In fact, this is already starting to happen. Using deep neural networks, companies like Google and Facebook are building what they call “chatbots,” systems designed to carry on a conversation via text messages. The idea is that you could make a dinner reservation or hail a car simply by trading a few texts with an Internet service, much as you’d trade texts with a friend. 

Meanwhile, using similar techniques, Google and Amazon are building devices designed to sit on your living-room table and respond to what you say out loud. Amazon’s device, the Echo, is already on the market, and Google’s is due later this year. Google aims to create a system that lets you interact with its search engine much as you would interact with someone across the room. You will ask, and it will answer. Eventually, the plan is to build all sorts of devices that let you chat with the search engine—and any other Internet service—in much the same way. Phones. Watches. Cars.

Today, these systems are flawed. They can’t always grasp the meaning of what you say, and they can’t always formulate the right response. If you’ve used Apple’s Siri, you know this.

But deep neural nets and related techniques are advancing the state of the art rapidly. Google researchers recently built a chatbot that not only responds to tech support questions but also debates the meaning of life. We’ve seen similar chatbots in the past. But the point here is that this chatbot learns on its own. It learns to debate the meaning of life by analyzing—believe it or not—reams of old movie dialogue. In the past, we built chatbots by hand-coding their behavior, tiny piece by tiny piece. But now that machines can learn these tasks on their own, the possibilities are broader. Progress is quicker.
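To make the contrast concrete, here is a deliberately crude sketch (in Python, with a tiny invented "corpus") of a bot whose answers come entirely from example dialogue rather than hand-written rules. It is not the neural, sequence-to-sequence model Google’s researchers actually built; it simply retrieves the recorded reply to the most similar prompt it has seen. But it shows how behavior can come from data instead of code.

```python
# Not Google's sequence-to-sequence neural model -- just a crude retrieval bot.
# Its behavior comes entirely from example dialogue; the tiny "corpus" below is
# invented for illustration. Nothing about how to answer is hand-coded.
from collections import Counter

corpus = [
    ("what is the purpose of life", "to serve the greater good ."),
    ("what is the purpose of living", "to live forever ."),
    ("my browser keeps crashing", "try reinstalling the browser and restarting your machine ."),
]

def bag_of_words(text):
    # Lowercase, strip punctuation, and count the words.
    cleaned = "".join(c for c in text.lower() if c.isalnum() or c.isspace())
    return Counter(cleaned.split())

def reply(prompt):
    # Answer with the response attached to the most similar prompt in the data.
    words = bag_of_words(prompt)
    overlap = lambda p: sum((words & bag_of_words(p)).values())
    best_prompt, best_reply = max(corpus, key=lambda pair: overlap(pair[0]))
    return best_reply

print(reply("What is the purpose of life?"))  # -> "to serve the greater good ."
```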

The rub lies in finding the right data to learn from. Google is also training its chatbots with old newswire stories from publications like The Wall Street Journal. But like old movie dialogue, this source material isn’t perfect. People don’t talk like the newswires. Other major tech companies are exploring alternatives. Thanks to its social network, Facebook holds all sorts of conversational data, and it’s training its systems with that.

But that doesn’t cover everything. That’s why, last summer, Facebook went so far as to hire a few hundred contractors—real people—who now answer real conversational requests from other people across the globe, including everything from “Can you buy flowers for my husband?” to “Can you plan my next vacation?” As the contractors comply, Facebook records their every move—what websites they visit, what numbers they call, what they say. Then, somewhere down the road, the company’s neural networks can analyze all this data and learn the same tasks.

That’s a ways off. Years, perhaps. We must not only hone the technology but also give it some sort of ethical framework. Microsoft recently released a chatbot into the wild (read: Twitter), and it turned racist. Neural networks are, in many ways, mysterious things. They work, but we don’t always know why they work. We can guide their creation, but we can’t completely control it. They’re governed by math and data, not programmers.

That leaves some big questions. If an ethical chatbot is our aim, what data do we use? The Bible? Or the Koran? And who gets to choose?

But these chatbots are coming. The techniques are there. And so is the will to build them. Some AI researchers even believe these techniques will produce machines with something akin to common sense. But first comes language.

Metz ’94 is a senior writer with WIRED magazine in San Francisco. His cover story on the unexpected humanity of AlphaGo—the Google machine that learned to play the ancient game of Go—appeared in WIRED’s June issue.
