Could Artificial Intelligence Experience Depression?

Researchers can learn about depression in people by studying artificial intelligence (AI) algorithms, and AI is built on a structure loosely modeled on the human brain. Now a report on Motherboard asks the question: if AI can duplicate human thought, could it one day experience depression as well?
This question was recently raised by Zachary Mainen, a neuroscientist from the Champalimaud Centre for the Unknown in Portugal. Last month, Mainen spoke at a conference in New York, where he explained that serotonin carries messages within the brain and helps people adapt to unfamiliar situations.
Serotonin works as a neuromodulator, a type of neurotransmitter that spreads its message quickly throughout the brain.
“Something good just happened, and the whole brain needs to learn about this,” Mainen explains. “Computational approaches to neuroscience see neuromodulators as sorts of ‘control knobs,’ similar to those used in AI.”
“People think of serotonin as related to happiness,” Mainen continues, “but serotonin neurons appear to send a message that is not good or bad, but more ‘oops’ or surprise. It seems to be especially important in breaking or suppressing outdated beliefs.”
When people get depressed, it can be because the brain is failing to adapt to change.
“In the lab, we create that sort of situation by teaching a mouse or a person a game with certain rules and then abruptly changing the rules,” Mainen adds. “When that happens, serotonin neurons light up.”
Mainen says that AI could implement a similar function to help machines adapt to changes and new situations, but the same mechanism might also leave those machines vulnerable to something like depression.
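To make the “control knob” analogy concrete, here is a minimal sketch, our own illustration rather than Mainen’s actual model. It sets up a two-armed bandit whose reward probabilities flip halfway through the run, standing in for the lab’s abrupt rule change, and an agent whose learning rate is temporarily boosted whenever its prediction errors are surprisingly large, playing the role of a serotonin-like surprise signal. The parameter names (surprise_boost, threshold) are hypothetical choices made for this example.

```python
import random

# A minimal, illustrative sketch (an assumption for this article, not Mainen's
# actual model): a two-armed bandit whose reward probabilities flip halfway
# through the run -- the "abrupt rule change" -- and an agent whose learning
# rate (a hypothetical serotonin-like "control knob") is temporarily boosted
# whenever its prediction error is surprisingly large.

def run(trials=2000, base_lr=0.05, surprise_boost=0.5, threshold=0.6):
    probs = [0.8, 0.2]       # arm 0 pays off 80% of the time at first
    values = [0.5, 0.5]      # the agent's estimated value of each arm
    total_reward = 0.0
    for t in range(trials):
        if t == trials // 2:             # the rules change abruptly
            probs = [0.2, 0.8]
        # epsilon-greedy choice: mostly pick the arm believed to be best
        if random.random() < 0.1:
            arm = random.randrange(2)
        else:
            arm = 0 if values[0] >= values[1] else 1
        reward = 1.0 if random.random() < probs[arm] else 0.0
        error = reward - values[arm]     # prediction error: the "oops" signal
        lr = base_lr + (surprise_boost if abs(error) > threshold else 0.0)
        values[arm] += lr * error        # bigger surprise -> faster relearning
        total_reward += reward
    return total_reward / trials

print("average reward with the surprise boost:", run())
print("average reward without it:             ", run(surprise_boost=0.0))
```

Setting surprise_boost to zero mimics an agent whose surprise signal is blunted: it still learns, but it clings to its outdated estimates for longer after the rules change.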
“Computational psychiatry assumes we can learn about a patient who’s depressed or hallucinating from studying AI algorithms like reinforcement learning,” Mainen says. “If you reverse the arrow, why wouldn’t an AI be subject to the sort of things that go wrong with patients?”
Mainen told Science Mag, “If serotonin is helping solve a more general problem for intelligent systems, then machines might implement a similar function, and if serotonin goes wrong in humans, the equivalent in a machine could also go wrong.”
Further down the road, Mainen believes that “robots would likely have something like emotions. Similar issues face a person or an AI, for example, when the environment changes radically. Humans or machines with low serotonin or its equivalent may fail to rewire themselves adequately, getting stuck in the rut that we call depression.”
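As a final illustration of that “rut,” the same toy setup can measure how many trials after the rule change a purely greedy agent keeps choosing the arm that no longer pays off. Again, this is only an assumption-laden sketch, not a model of real robots or real patients; the surprise_boost parameter is a hypothetical stand-in for serotonin.

```python
import random

# A second toy measurement (again an illustrative assumption, not a model of
# real robots or patients): count how many trials after the rule change a
# purely greedy agent keeps choosing the arm that no longer pays off. A
# blunted "serotonin" knob (surprise_boost = 0) leaves it stuck far longer.

def trials_stuck(surprise_boost, base_lr=0.02, threshold=0.6, seed=1):
    random.seed(seed)
    probs = [0.9, 0.1]                   # arm 0 is clearly better at first
    values = [0.5, 0.5]
    stuck = 0
    for t in range(4000):
        if t == 2000:
            probs = [0.1, 0.9]           # the world changes under the agent
        arm = 0 if values[0] >= values[1] else 1
        reward = 1.0 if random.random() < probs[arm] else 0.0
        error = reward - values[arm]
        lr = base_lr + (surprise_boost if abs(error) > threshold else 0.0)
        values[arm] += lr * error
        if t >= 2000 and arm == 0:       # still clinging to the old belief
            stuck += 1
    return stuck

print("trials stuck with a healthy surprise signal:", trials_stuck(0.5))
print("trials stuck with a blunted surprise signal:", trials_stuck(0.0))
```

In this toy world, the agent with a working surprise signal abandons its outdated belief almost immediately, while the agent with the signal switched off lingers on the bad arm for dozens of trials, a crude analogue of the failure to “rewire” that Mainen describes.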