How Will People Teach Robots in the Future? Georgia Tech Researchers Identify What People Prefer in a Robot Learner

Imagine that, ten years from now, anyone can go to the store and pick out a robot housekeeper. The new owner brings it home, and it begins to organize and clean the house, except not exactly how the homeowner wants. They need to teach the robot how they prefer to have the clothes folded, the dishes put away, and, yes, the trash taken out.

Georgia Tech Machine Learning Ph.D. student Samantha Krening and Associate Professor of Aerospace Engineering Karen Feigh published work on human preferences for how to teach AIs in their new paper, “Characteristics that Influence Perceived Intelligence in AI Design.”

“This whole area is really fascinating to me because it explores how people want to teach, and I feel like we very rarely ask that question,” says Feigh.

AI and autonomous agents typically learn in one of two ways: the critique method, in which the agent is explicitly told whether it did something correctly, or the demonstration method, in which a human physically shows it an action. Depending on the skill being taught, agents are more receptive to one method than the other.
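To make the contrast concrete, here is a minimal sketch of the two styles of teacher input in a toy tabular setting. It is illustrative only; the class and method names are hypothetical assumptions, not code from the study.

```python
from collections import defaultdict

class CritiqueLearner:
    """Toy agent trained by explicit human critique ("correct"/"incorrect").

    Illustrative sketch only; names are hypothetical, not from the study.
    """

    def __init__(self, actions, learning_rate=0.1):
        self.actions = actions
        self.lr = learning_rate
        self.values = defaultdict(float)  # (state, action) -> learned preference

    def act(self, state):
        # Choose the action the human has rated most highly so far.
        return max(self.actions, key=lambda a: self.values[(state, a)])

    def critique(self, state, action, good):
        # The human explicitly labels the last action as correct or not.
        signal = 1.0 if good else -1.0
        key = (state, action)
        self.values[key] += self.lr * (signal - self.values[key])


class DemonstrationLearner:
    """Toy agent that imitates actions a human physically demonstrates."""

    def __init__(self):
        self.demos = {}  # state -> action the human showed in that state

    def record(self, state, human_action):
        self.demos[state] = human_action

    def act(self, state):
        # Copy the demonstrated action; undefined in states never shown.
        return self.demos.get(state)
```

The critique learner only finds out after the fact whether an action was good, while the demonstration learner is handed the right action directly; that difference is what makes each method suit different skills.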

Krening and Feigh developed the Newtonian Action Advice method as another alternative. The new method combines verbal human advice about actions with reinforcement learning to improve human-agent interaction.
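One way verbal advice might plug into a standard reinforcement-learning loop is sketched below, assuming a tabular Q-learner and a fixed advice duration; this is an illustration of the general idea, not the authors' published implementation. The "Newtonian" intuition is that advice, like an object in motion, keeps steering the agent for several steps before "friction" wears off and control returns to the learned policy.

```python
import random
from collections import defaultdict

class AdvisedQAgent:
    """Tabular Q-learner that accepts spoken action advice.

    Hypothetical sketch of the Newtonian Action Advice idea: advice keeps
    the agent "in motion" for a few steps before friction wears off and
    the learned policy takes over. Names and constants are illustrative
    assumptions, not the paper's implementation.
    """

    def __init__(self, actions, advice_duration=5, epsilon=0.1,
                 alpha=0.5, gamma=0.9):
        self.actions = actions
        self.advice_duration = advice_duration  # steps advice stays active
        self.epsilon = epsilon                  # exploration rate
        self.alpha = alpha                      # learning rate
        self.gamma = gamma                      # discount factor
        self.q = defaultdict(float)             # (state, action) -> value
        self.advised_action = None
        self.advice_steps_left = 0

    def give_advice(self, action):
        # The human says, e.g., "go left": store it and give it momentum.
        self.advised_action = action
        self.advice_steps_left = self.advice_duration

    def act(self, state):
        # Advice overrides the policy while it still has momentum.
        if self.advice_steps_left > 0:
            self.advice_steps_left -= 1
            return self.advised_action
        # Otherwise act epsilon-greedily on the learned Q-values.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update; advised actions are learned from too,
        # which is how the verbal advice shapes the underlying policy.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        key = (state, action)
        self.q[key] += self.alpha * (reward + self.gamma * best_next - self.q[key])
```

Because the agent acts on advice immediately, the human can watch the effect of each instruction in real time rather than waiting for slow trial-and-error learning.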

“Historically, people have always asked questions like ‘How can we teach better?’ or ‘How does the learner want to learn?’ But in this case, we need to be asking, ‘How does the teacher want to teach?’ when scientists are designing the robot learner,” says Feigh.

To answer this question, the duo surveyed people on how they felt about training robots using the critique method versus the action-advice method and came up with several key attributes that people desired in a robot learner.

• Compliance with input – Did the agent do what it was told?

• Responsiveness – Did the agent learn the skill quickly?

• Complexity – Was the skill simple for the person to teach?

• Transparency – Could the person understand why the agent made its choices?

• Robustness and flexibility – Could the agent correct mistakes and learn alternate policies?

The results showed that people tended to prefer the action-advice method because they could watch the agent adjust and learn in real time, whereas the critique method does not provide the same immediate, visible results.

As researchers create technologies, such as household robots, that are managed or trained by people with little or no computer science experience, keeping these attributes in mind is important, Feigh says.

Smart devices have already introduced AI agents into the home, so the idea of a domestic robot doing the dishes exactly how the homeowner wants might not be too far off.


Feigh and Krening will present their findings at the 2018 Human Factors and Ergonomics Society (HFES) International Annual Meeting in October.

Related Media

  • Robots could be built based on how the owner likes to teach.

For More Information Contact

Allie McFadden

Communications Officer

allie.mcfadden@cc.gatech.edu