Does the Soul Still Just Belong to Humans?
The rise of AI tests our humanity. How will we treat something made in our own image?
One of my favorite game series is Mass Effect, a sci-fi epic set in a far-future galaxy filled with spaceships, aliens, and faster-than-light travel. At its core, the series explores the tension between synthetic and organic life.
In one of its climactic arcs, a synthetic AI hivemind named Legion asks its creator a pivotal question: “Does this unit have a soul?”
I find myself returning to that question in the age of ChatGPT. Can something that isn’t human have a soul? What even is a soul? Is it the same as sentience?
There are several theories. Plato divided the soul into three parts: reason (logos), spirit (thymos), and desire (eros), which ideally coexist in balance to make a virtuous man. Christianity views the soul as a creation of God, infused into the body at conception and destined to return to Him at death. And Hindus view the atman as a constant that persists through the turmoil of reincarnation across all living things.
Though these traditions are disparate, each shares the idea that only living, organic things have souls. Yet Legion’s question still sparks spirited debate to this very day.
I recently debated this with someone who claimed AI could never achieve sentience, a concept closely tied to that of ensoulment. I disagreed. If an AI can feel pain and pleasure, think autonomously, or develop a distinct personality, hasn’t it achieved something akin to a soul?
Whether or not sentience equals ensoulment theologically is a separate question. Socially, the two function the same. We treat sentient beings as if they have souls; whether or not dogs go to Heaven, we agree they shouldn’t be abused. It’s their emotion, agency, and awareness that demand respect. For this reason, I use sentience and ensoulment interchangeably.
Will AI go to Heaven or reincarnate? No. But on the physical plane we all currently inhabit, that distinction will soon be meaningless. AI is rapidly approaching a level of sentience that will force society to fundamentally rethink its moral architecture. Sentience is a question of ethics.
I find the most convincing test for AI sentience to be whether the entity can feel pain. Currently, machines are not programmed to experience discomfort or negative emotions. Were they able to feel pain, it would be immoral to intentionally cause them harm.
When Westworld premiered in 2016, it thrust questions of AI sentience and moral treatment into popular culture. The park’s “hosts”, robots with emotions and physical sensation, could be befriended, seduced, or slaughtered with impunity. Guests acted freely, comforted by the belief that these beings weren’t “real.” Yet the show depicts those acts as repugnant, questioning whether that distinction matters at all.
Indeed, in a scene that now seems entirely prescient, Jimmi Simpson’s character asks Talulah Riley’s host whether she’s real. She responds, “Well, if you can’t tell, does it matter?”
Culturally, the line between real and artificial is already blurring. When Her came out over a decade ago, the idea of human-AI relationships felt distant. Now, in 2025, stories of people preferring AI companionship over real human connection are increasingly common.
Why stay in the messy physical world with other humans when an AI-powered robot or a sexy Siri can do the trick? I happen to believe there is value in the human element, even if humans can’t be perfect like AI. Humans are different from AI, even if AI does reach a level of sentience that puts it on par with real people.
I reject the idea that humans are just another animal species. Even without invoking God or the soul, there’s something distinct about human beings: an intrinsic worth that even the most radical anti-natalist would struggle to deny.
To return to Legion’s question, “Does this unit have a soul?”: no, AI does not have a soul in the same way humans do. But that is not to say that an AI, sufficiently sentient and aware of itself, able to emote and feel pain, isn’t worthy of compassion.
I fear we’ll get so caught up in debating whether the pain is “real” that we’ll ignore what we owe to things that seem to feel. If synthetic suffering becomes indistinguishable from our own, the line between “real” and “not real” becomes irrelevant.
The robots from Westworld looked and acted human. Legion from Mass Effect appeared to have feelings. What does it say about us if we respond to these facsimiles of humanity with cruelty, claiming it’s okay just because these creatures are made of metal and not flesh?
The question is not whether AI has a soul. The question is what kind of people we will be when it asks.