By now most of us have learned first-hand that human-AI relations need some work. Amazon’s Alexa personal assistant is plugged into one of the world’s largest online stores and can pull information from Wikipedia. But can it help you when you yell out what in the moment feels like a simple request to play that summer banger you just heard, starting at the catchy chorus? “I’m sorry, I don’t understand the question.”
All supposedly smart helpers, including Apple’s Siri and Google’s prosaically named Google Assistant, are capable of frustrating feats of what can feel like artificial stupidity. It’s one reason Google is starting a new research push to understand and improve relations between humans and AI. PAIR, short for the People + AI Research initiative, was announced today and will be led by two experts in data visualization, Fernanda Viégas and Martin Wattenberg. One thing they hope to do is create a toolkit of techniques and ideas for designing AI systems less prone to disappointing or surprising us humans.
Virtual assistants get infuriating when they fail to do something we expect to be within their capabilities. Viégas says that she’s interested in studying how people form expectations about what such systems can and can’t do—and how virtual assistants themselves might be designed to nudge us toward only asking things that won’t lead to disappointment. “One of the research questions is how do you reset a user’s expectations on the fly when they’re interacting with a virtual assistant,” she says.
Viégas and Wattenberg will lead PAIR out of the Mountain View-based Google Brain AI research group, where they previously worked mostly on developing tools that let researchers and engineers peek inside the workings of machine learning systems. PAIR will see them continue that work, and the project today released two open source tools that help engineers understand the data they’re feeding into machine learning systems. But the new initiative will also work on making artificial intelligence more transparent to people who aren’t experts in the technology.
The deep learning algorithms that have lately proved so useful for analyzing our personal data or diagnosing diseases from medical imaging also have a reputation for being what researchers dub “black boxes,” meaning it can be difficult to see why a system spat out a particular decision, such as a diagnosis.
That’s a problem as such software gets closer to being used in life-or-death situations in the clinic or on our roads inside autonomous vehicles. “The doctor needs to have some sense of what’s happening and why they got a recommendation or prediction,” Viégas says. That might mean creating ways for diagnostic software to highlight the pieces of a scan that influenced its recommendation or having it write explanations in text.
Google’s project comes at a time of increasing attention on the human consequences of AI. Today the Ethics and Governance of Artificial Intelligence Fund, with backers including the Knight Foundation and LinkedIn cofounder Reid Hoffman, announced $7.6 million in grants to civil society organizations to study the changes AI might cause in areas such as labor markets and criminal justice systems.
Like those new projects, Google says much of PAIR’s work will take place out in the open. MIT and Harvard professors Hal Abelson and Brendan Meade will collaborate with PAIR on how AI can enhance education and science, for example. But working on making humans and AI more compatible also has clear business benefits for what CEO Sundar Pichai describes as an “AI first” company.
If PAIR can help AI integrate more smoothly into industries like healthcare, it could bring new customers to Google’s AI-centric cloud business, for example. Viégas says she also wants to work closely with Google’s product teams, such as the one behind the Google Assistant. Such a collaboration could be lucrative if it keeps people more engaged with the product, which acts as a gateway to the company’s broader services and underlying ad business. PAIR has a shot at not only helping advance society’s understanding of what happens when humans and AI collide, but also boosting Google’s bottom line.