June 21, 2022 – Researchers at MIT recently announced that they trained an artificial neural network to perform facial emotion recognition, work that could one day help people with autism better recognize the emotions expressed in faces.
For many people, it is easy to recognize the emotions expressed on other people’s faces. A smile can mean happiness, while a frown can indicate anger. People with autism, however, often have a harder time with this task. It’s unclear why, but new research, published June 15 in The Journal of Neuroscience, shines a light on the inner workings of the brain to suggest an answer, and it does so by using artificial intelligence (AI) to model the computation in our heads.
MIT researcher Kohitij Kar, who works in the lab of MIT professor James DiCarlo, hopes to find the answer. (DiCarlo, the Peter de Florez Professor in the Department of Brain and Cognitive Sciences, is a Fellow of the McGovern Institute for Brain Research and director of MIT’s Quest for Intelligence.)
Kar began by looking at data provided by two other researchers: Shuo Wang of Washington University in St. Louis and Ralph Adolphs of Caltech. In one experiment, they showed pictures of faces to autistic adults and to neurotypical controls. The images had been generated by software to vary on a spectrum from fear to joy, and participants quickly judged whether the faces represented happiness. Compared to controls, adults with autism needed higher levels of happiness in a face before reporting it as happy.
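As an illustration of what that threshold shift means in practice, here is a minimal Python sketch using made-up numbers rather than the study’s actual data: fit a logistic “psychometric” curve to each group’s responses and compare the morph level at which each group reports “happy” half the time.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, threshold, slope):
    """Probability of reporting 'happy' at morph level x (0 = fear, 1 = joy)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

levels = np.linspace(0, 1, 9)  # morph levels from fear to joy
# Hypothetical response rates, for illustration only:
p_control = np.array([0.02, 0.05, 0.15, 0.40, 0.70, 0.88, 0.95, 0.98, 0.99])
p_autism  = np.array([0.01, 0.02, 0.06, 0.15, 0.35, 0.60, 0.80, 0.92, 0.97])

popt_c, _ = curve_fit(psychometric, levels, p_control, p0=[0.5, 10.0])
popt_a, _ = curve_fit(psychometric, levels, p_autism,  p0=[0.5, 10.0])
print(f"50% 'happy' point, controls: {popt_c[0]:.2f}")
print(f"50% 'happy' point, autism:   {popt_a[0]:.2f}")
# A higher 50% point means a face must look happier before it gets reported as happy.
```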
Kar, who is also a member of the Center for Brains, Minds and Machines, trained an artificial neural network, a complex mathematical function inspired by the architecture of the brain, to perform this same task of determining whether images of faces were happy. His research suggests that sensory neural connections in autistic adults may be “noisy” or inefficient. For a more in-depth breakdown of the nature of Kar’s research, read the full MIT blog post here.
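The MIT post does not publish Kar’s model code, but the basic setup can be sketched in a few lines of Python. The network below is a toy stand-in, not Kar’s actual architecture, and the optional Gaussian noise on intermediate features is just one simple way to play with the “noisy connections” idea.

```python
import torch
import torch.nn as nn

class HappyClassifier(nn.Module):
    """Toy binary classifier: is this face happy? (Illustrative, not Kar's model.)"""
    def __init__(self, noise_std=0.0):
        super().__init__()
        self.noise_std = noise_std  # > 0 loosely models "noisy" sensory signals
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 1))

    def forward(self, x):  # x: (batch, 3, 64, 64) face images
        h = self.features(x)
        if self.noise_std > 0:
            h = h + torch.randn_like(h) * self.noise_std
        return torch.sigmoid(self.head(h))  # probability the face is happy

model = HappyClassifier(noise_std=0.5)
prob_happy = model(torch.rand(1, 3, 64, 64))
```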
So where does something like augmented reality (AR) tie into all of this? Well, based on his research, Kar thinks these computer models of visual processing could have several uses in the future.
“I think facial emotion recognition is just the tip of the iceberg,” said Kar, who thinks these visual processing models could also be used to select or even generate diagnostic content. For example, artificial intelligence could be used to generate content (such as films and educational materials) that optimally engages children and adults with autism.
Based on the research, these computer models could potentially be used to modify facial and other relevant pixels in augmented reality glasses, changing what people with autism see, perhaps exaggerating happiness levels (or other emotions) on people’s faces to help them better recognize those emotions. According to MIT, work around augmented reality is something Kar plans to pursue in the future.
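In code, the core of that idea is just a blend between the live face and a happier rendering of it. The sketch below assumes some face-morphing model has already produced that happier rendering (as the stimulus-generation software in the study did); the glasses would then display the blended result.

```python
import numpy as np

def exaggerate_happiness(face: np.ndarray, happier_render: np.ndarray,
                         gain: float = 0.5) -> np.ndarray:
    """Blend the live face `gain` of the way toward a happier rendering of it."""
    face_f = face.astype(float)
    blended = face_f + gain * (happier_render.astype(float) - face_f)
    return np.clip(blended, 0, 255).astype(np.uint8)

# Toy usage with random pixels standing in for real camera frames:
face = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
happier = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
display_frame = exaggerate_happiness(face, happier, gain=0.7)
```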
Even in a simpler form, AR glasses with facial emotion recognition software installed might be able to detect people’s emotions and overlay text prompts, giving the autistic people wearing them descriptions of the likely emotional state of whoever they are interacting with. Ultimately, the work helps validate the usefulness of computational models, especially image-processing neural networks, according to Kar.
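As a down-to-earth illustration, the text-prompt version described above is easy to prototype on a desktop today. The sketch below uses OpenCV’s stock face detector plus a placeholder classify_emotion function, a stand-in for whatever recognition model the glasses would actually run, to draw a label next to each detected face.

```python
import cv2
import numpy as np

def classify_emotion(face_crop: np.ndarray) -> str:
    # Placeholder: a real system would run an emotion-recognition model here.
    return "likely happy"

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def annotate_frame(frame: np.ndarray) -> np.ndarray:
    """Detect faces and overlay a text prompt describing each one's emotion."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        label = classify_emotion(frame[y:y + h, x:x + w])
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, max(y - 8, 12)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return frame
```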
To read the full MIT blog post, click here.
Image credit: MIT