Children more likely to disclose mental health issues to a robot, study shows

Some children disclosed problems to the robot that they had not shared with their parents

6th September 2022
“We think that when the robot is child-sized it’s easier to relate to the robot as a peer.” Professor Hatice Gunes, head of the affective intelligence and robotics laboratory, University of Cambridge

Children are more likely to disclose their true feelings to a robot than to an adult, a University of Cambridge study has found.

In the study, 28 children aged eight to 13 took part in a one-to-one 45-minute session with the robot, called Nao. The robot, which is 60cm tall and designed to resemble a human, has a child’s voice. Beginning each session with a fist-bump and a friendly chat to break the ice, Nao then asked questions about the child’s happy and sad memories over the previous week, administered a questionnaire on feelings and mood, and then a second questionnaire used to diagnose anxiety, panic disorder and low mood. A parent or guardian, along with members of the research team, observed from an adjacent room.

The study found that the children felt more comfortable confiding in the robot than when responding to mental health assessments with their parents. Children whose responses on traditional questionnaires suggested they could be experiencing mental wellbeing problems gave more strongly negative responses when answering the same questions with the robot. In some cases they disclosed information that they had not previously shared.

“There are times when traditional methods aren’t able to catch mental wellbeing lapses in children, as sometimes the changes are incredibly subtle,” said Nida Itrat Abbasi, the study’s first author.

Robots could be used to screen for mental health problems

Children may view the robot as a “confidant”, allowing them to divulge their true feelings and experiences, the researchers said. Watching the session through a mirrored window, one parent told researchers they had not realised their child was struggling until hearing them respond to the robot’s questions.

“We think that when the robot is child-sized it’s easier to relate to the robot as a peer,” said Professor Hatice Gunes, who leads the affective intelligence and robotics laboratory at Cambridge. By contrast, she said, children might respond to parents or psychologists with “what they think is expected of them rather than what they think is true”.

Gunes added: “After I became a mother, I was much more interested in how children express themselves as they grow, and how that might overlap with my work in robotics. Children are quite tactile, and they’re drawn to technology. If they’re using a screen-based tool, they’re withdrawn from the physical world. But robots are perfect because they’re in the physical world – they’re more interactive, so the children are more engaged.”

In future, said Gunes, robots could be used in schools to screen children for mental health problems, allowing children to receive support at an earlier stage.

Prof Farshid Amirabdollahian, an expert in human-robotic interaction at the University of Hertfordshire, who was not involved in the work, told the Guardian there was growing evidence to support the use of robots in supporting mental healthcare provision. “Children tend to show a very positive attitude to interactive technologies,” he said. “We don’t want robots to replace people but they seem to be very good tools for breaking the ice.”

The researchers say that they hope to expand their survey in future, by including more participants and following them over time. They are also investigating whether similar results could be achieved if children interact with the robot via video chat.

FCC Insight

Mental health problems are on the rise amongst children and young people, but it can be very hard to persuade children to open up and talk about their mental wellbeing. The idea of using robots to engage children is innovative and this small study from University of Cambridge researchers shows encouraging results. Similarly, there have been suggestions that adults feel more comfortable talking to an AI-based chatbot than to another human being.

The study does raise ethical issues – it is not clear whether children understood that their answers were not confidential – but it has highlighted a promising area for future research. Indeed, agency, capacity to consent and understanding of the tools involved are critical considerations in many areas of mental health and technology.