Note: Today I am pleased to publish this guest article by Cameron Fathauer. His book Saving the Subject is available from Amazon.
Artificial intelligence (“AI”) is a groundbreaking innovation, but does it come at the cost of human value? Mustafa Suleyman, the CEO of Microsoft AI, told the world in his April 2024 TED Talk to view AI as a new kind of species. There’s the human species and now there’s the “digital species,” Suleyman said with a smile.
AI is efficient, tireless and, in some ways, “omniscient,” having access to just about every piece of knowledge available. Humans, by contrast, seem almost inadequate. None of us can compete with it—not even collectively.
Faced with such a force, it’s easy to feel like we don’t measure up, especially if our value is reduced to what we produce. But aren’t we more than just creators of output? What truly defines the human subject when machines seem capable of doing so much more?
We could approach these questions from the perspective of pure philosophy, diving into abstract ideas about what makes us human. But I’ve learned that philosophy without experience is like a compass without direction—it points somewhere, but it doesn’t show you how to get there.
Nine years ago, my journey took a sharp turn after a tragic accident left me questioning everything about myself. Depression, self-harm, and a complete loss of identity followed, but so did profound truths I had to fight to uncover.
In this article, I’ll weave those lessons into a conversation about artificial intelligence, combining insights from the mind and the heart to address the fears we face today. These ideas come from my new memoir, Saving the Subject: How I Found You When I Almost Lost Me.
What I Thought I Lost
It was September 18, 2015, when my head shattered the windshield of a speeding Lexus and dented its roof. The driver and passenger were left covered in tufts of my hair and bloody shards of glass. My body landed in a neighbor’s front yard while blood seeped from the right side of my skull.
My neighbors thought I’d been shot, given the loudness of the collision. Emergency medical services got to me quickly, and I was transported by helicopter from my hometown of Columbus, Indiana, to a trauma hospital in Indianapolis, where I was diagnosed with a third-degree diffuse axonal brain injury down to my brain stem. I remained in a noninduced coma for approximately two and a half weeks.
When I woke up, my entire world had changed. I had to relearn how to walk, talk, and even recall the name of my fiancée, who is holding my hand in the above photo.
Yet, the most profound loss wasn’t physical or cognitive—it was my sense of self. As I wrote in my journal years ago:
This is your forever life, Cameron. The way you feel today—depressed, dragged, undone—is the way you will always feel. Those strange looks? They’ll keep coming. Those close friends? They’ll keep running. Those goals of yours? They’ll need rewriting.
My reality had been thoroughly undressed. Naked I felt, and ashamed I was. And in that raw state, I wrestled with questions I had never thought to ask before: Who am I if I have nothing left to offer? Where does my value come from if I can’t do what I used to do?
These questions didn’t just haunt me—they began to shape me. Losing so much forced me to confront the source of my worth. I had to find an identity that could not be taken from me. With the rise of AI, society appears to be looking for this identity, too.
What We’ve Always Been
In her 1958 book The Human Condition, German-Jewish philosopher Hannah Arendt observed that humans often define themselves by what they do. She referred to this as vita activa, the “active life.” Arendt avoided defining what a human is, believing the question to be unanswerable by humans themselves. She said self-definition would be like “jumping over our own shadows.”[1] This is why questions of identity and purpose can’t be fully answered by culture or technology. The problem of human definition can only be resolved through supernatural revelation. If there’s a chance that God has spoken about what we are, then we owe it to ourselves to seek out that answer.
I believed in the God of the Bible before I was hit by that car. Yet my suffering and subsequent questions about identity drew me closer to the scriptures. I focused intently on Genesis 1–2, and how God created the first humans. Here’s what stood out to me in my recovery.
First, the significance of humanity being the last act of creation. There were seven days in the biblical creation account: the first six were formative, and on the seventh, God rested. On the sixth day God made the first humans, Adam and Eve (Genesis 1:26–31). Many have said that this signifies human dignity and importance: that the stage was set up for humankind, the pinnacle of creation.
While I do not dispute that notion, I think there is a plainer truth to be gathered from this order of operations. The fact that the human is the last act of creation signifies less his importance than his dependence. Unlike AI, which depends entirely on human programming for its function and purpose, our identity isn’t something we achieve—it’s something we can only receive from God.
Second, the significance of God not telling Adam to name the animals. The text says God brought the animals to him “to see what he would call them” (Genesis 2:19–20). Why did Adam not need an instruction here? Because Adam was already named by God when he was made: imago dei, the image of God (Genesis 1:26–27). In naming the animals, Adam turned other living creatures into his objects, making them “his.” The only subjective part of the animal is that it is subject to the human.
AI, too, is subject to humanity. We create and name these technologies, shaping their purpose and function. Yet unlike Adam, AI is not a being that defines itself—it is entirely dependent on us to program, categorize, and give it meaning. While Adam’s naming of the animals flowed from the identity God had already given him, AI can never possess such an identity. It can mimic, process, and produce, but it cannot define or be defined in the way humanity already is.
This brings us to a final and crucial point: the significance of the imago dei. The naming of creation was left to the human because God had already named him—His. Just as writing your name on something can mark your ownership of it, so the imago dei marks God’s ownership of the human. But this doesn’t just identify humans as God’s creation—it reflects our unique capacity for a relationship with him. And it’s through this relationship that we have true identity and worth.
AI, by contrast, bears no such mark. It exists not as a subject capable of relationship but as an object entirely owned and defined by its human creators. AI can be programmed to replicate human behavior or solve complex problems, but it cannot transcend its programming or form relationships in the way humans can. This distinction underscores the unbridgeable gap between humans and machines: AI will always lack the divine imprint that gives humanity its potential for supernatural connection.
Our Enduring Worth
AI cannot replace your value, just as a brain injury couldn’t replace mine, because we are far more than what we can do or produce; we are what we have always been: God’s. AI processes data; humans process meaning. AI remembers facts; humans cherish relationships. AI has immense knowledge; humans have the capacity for faith, love, and hope.
ChatGPT can’t take that from you. Trauma can’t take that from you. No one can take that from you—not even you. How can I be so sure? Because our humanity did not belong to us in the first place. We are all mere recipients of this existence: “What do you have that you have not received?” (1 Corinthians 4:7).
AI may have access to all the information in recorded history, but it lacks access to a relationship with God. As the psalmist wrote, “This I know: that God is for me” (Psalm 56:9). To have the Creator of the universe on your side is a privilege no technology can claim. In fact, it could be the most human claim available.
If you’ve ever wrestled with questions about your worth or purpose, I explore these ideas in greater depth in Saving the Subject, weaving together my personal journey and the truths that helped me rediscover what it means to be human.
After surviving a severe brain injury, Cameron Fathauer went from studying theology at The Southern Baptist Theological Seminary to earning a law degree at Indiana University, becoming a father to triplets, and practicing law in Kentucky and Indiana. He recounts this journey in Saving the Subject: How I Found You When I Almost Lost Me, a multi-dimensional memoir blending personal narrative, theology, and creative expressions. He lives in Southern Indiana with his wife, Chelsea, and their four children. They are members of Immanuel Baptist Church in Louisville.
[1] Arendt, Hannah. The Human Condition. Chicago: University of Chicago Press, 1958, p. 10.