The secret to a good robot teacher.
Unfortunately, it appears the same goes for cognitive-training programmes. Lumos Labs, the company behind Lumosity, one of the leading programmes in this area, agreed to pay $2 million (Dh7.34 million) to settle charges by the Federal Trade Commission that it misled customers with claims that Lumosity improved people's performance in school and at work.
In our view, the problem stems partly from the fact that the designers of these technologies rely on an erroneous set of assumptions about how the mind learns. Yes, the human brain is an amazing information processor, but it evolved to take in, analyse and store information in a specific way: through social interaction. For millennia, the environments in which we learned best were social ones. It was through other people's testimony, or through interactive discourse and exploration with them, that we learned facts about our world and new ways of solving problems. And it's precisely because of this history that we can expect the mind to be socially tuned, meaning that it should rely on and incorporate social cues to facilitate learning.
When it comes to most educational technology, this insight has been ignored. Even those technologies that make use of virtual agents or videos of human speakers lack the give-and-take that defines true social interaction, where the verbal and nonverbal cues of one party are dynamically responsive to those of the other.
To investigate the role such social cues might play in learning from technology, we recently conducted a study with 4- to 7-year-old children from schools in Boston. The children listened to a story read by a robot that looked like a cute plush creature, with an animated face capable of emotional expressions and eye and mouth movements. For half the children, the robot made use of these capabilities, responding to events in the story and to the children's answers to its questions in a manner that expressed typical social and emotional cues. For the other children, the robot was "flat": It told the same story, but did not produce the expected social and emotional cues or respond to the children with them.
As the children listened to the story, we measured their engagement and attention using automated software to track facial, head and eye movements. To gauge their understanding and use of the new vocabulary words embedded in the story, we had the children retell the story to a puppet both immediately afterward and again after a four- to six-week delay.
As we detail in a recent issue of the journal Frontiers in Human Neuroscience, the children's learning and engagement were heightened in the presence of appropriate social cues. Among those children who recalled and correctly used at least one of the target vocabulary words during the immediate retelling of the story, the total number used was greater for those who listened to the expressive robot than for those who listened to the flat one. Moreover, children who interacted with the expressive robot showed greater levels of concentration and engagement during the listening task.
But perhaps the biggest effects were seen in long-term retention. When the children returned weeks later to retell the story, those who had initially heard it from the flat robot showed a decrease in the length and detail of their retold story, whereas those who heard it from the expressive robot retained the information they had heard. Put simply, children were not only more attentive to and motivated by a socially expressive robot, but they also processed what they learned from it more deeply.
Of course, there's more to learning than just listening and remembering. There's also the issue of authority: Whom should you seek knowledge from? Here again, social cues can play an important role. In a different experiment published last year in the journal Topics in Cognitive Science, we had two robots tell 3- to 5-year-old children facts about novel (made-up) animals. This time, one of the robots behaved in a "socially contingent" manner while it talked, expressing cues in a way that was appropriate and responsive to children's utterances and behaviours; the other did not, expressing similar cues but in a way that was fairly random.
Toward the end of the experiment, a new animal appeared, affording kids the opportunity to ask questions and learn about it. Here, 82 per cent of the children chose to seek information about the new animal from the properly expressive robot as opposed to its partner. What's more, even when both robots offered information about the new animal, the children were significantly more likely to believe the information from the expressive one.
Notably, it's not that the children liked the expressive robot more. They didn't; we asked. Rather, it's that the presence of social cues made the expressive robot, and therefore its information, seem more reliable and trustworthy.
The upshot of these findings is clear. If we want to use technology to help people learn, we have to provide information in the way the human mind evolved to receive it. We have to speak the mind's language, and that includes the language not only of information but also of social cues. Failing to do so will continue to artificially limit the gains that educational technology promises to offer.
- New York Times News Service
David DeSteno is a professor of psychology at Northeastern University. Cynthia Breazeal is an associate professor of media arts and science at the MIT Media Lab. Paul Harris is a professor of education at Harvard.
© Al Nisr Publishing LLC 2017. All rights reserved. Provided by SyndiGate Media Inc. (Syndigate.info).