
Here’s my earth-shattering cold take. A.I. is disrupting our modern world and what it means to be human.
I know, I know… I should slow down on the cultural commentary and get back to reading decade-old sociology and development theory. And I will soon.
But before then, I am increasingly intrigued by the A.I. revolution we are experiencing and the pressures that it is putting on the various settings in which I find myself: schooling (with two teenagers at home), tertiary education (with my lecturing at Youthworks College), and in ministry (with my primary role as Kids Pastor at Soul Revival Church).
The most interesting aspect of the A.I. conversation for me is the question of what it means to be human. How do technologies shape our understanding of ourselves? What kind of person do they train us to be? What virtues and characteristics are they cultivating in us?
Like many topics, I have a curious, armchair interest in these conversations, without a fully formed answer to the questions being raised. What I have included below is not the definitive answer to A.I. and the problem of being a flourishing human in the 21st Century. Rather, I am linking to some of the conversation partners that I am finding helpful as I continue to mull over this topic for myself and wonder about how I might use A.I. tools in ways that help me honour Christ, flourish in the vocation that he has given me, and consider how best to be human, with all the God-given gifts, abilities and limitations that he has bestowed upon me.
In the words of Christian doctor, author, speaker and research scientist, John Wyatt:
“It is a challenge to our creativity and vision to reimagine a future in which the machines do not replace us, or dehumanise us. Instead they enhance human community and flourishing. They help us to become more human, not less. What would that future look like? And what must we do to make it possible?”
Enjoy flicking through the catalogue below, including:
O. Alan Noble
Cal Newport
Simon Sinek
John Wyatt and Justin Brierley
Peco and Ruth Gaskovski
Adam Blehm
David M. Shaw
If you have other articles or podcasts that you have found helpful please let me know in a comment below.
O. Alan Noble - We Must Relearn How to Be Human
Alan Noble has perhaps been the most significant voice for me on this topic. His interest in upholding a robust orthodoxy and orthopraxy of what it means to be human is constantly stimulating.
Here is how Noble states his own thesis which is the motivating focus of his newsletter:
“Because of the dehumanizing forces of consumerism, bureaucracy, technology (A.I. in particular), addictions, secularism, and individualism, we have become alienated from the practices and habits of being human…
The goal of relearning the basics of life is to live into the creational reality of who God designed us to be, rather than defining our own telos (primarily through consumerism)…
We need to rehumanize our lives in light of our telos or else be consumed by the machine.”
If you are at all interested in how to think Christianly about what it means to be human in the modern world, then stop reading now and go and subscribe to Noble’s You Are Not Your Own newsletter. You won’t regret it.
Adam Blehm - The Importance of Developing Excellent Humans: Education, Virtue, and ChatGPT
Christian Education Journal is the one academic journal that I (at least attempt to) keep up with. It is where many of my colleagues in the Intergenerate and Children’s Spirituality Summit worlds have published and I have found many articles that have pushed my thinking forward on various children’s ministry topics.
In the April 2024 volume, Adam Blehm, a High School Bible teacher from Rejoice Christian School, has written an excellent article connecting the use of generative A.I. tools like ChatGPT to virtue formation. Similar to Noble, Blehm is interested in what makes for a flourishing human, and focuses on the intellectual virtues of open-mindedness and intellectual humility. “Excellent education must also seek to foster students’ desire to be excellently human.”
His argument concludes by stating that, while A.I. technologies can assist with content mastery, this ought not to be the end point of education.
This is a central idea that I have appreciated in a number of authors who are wrestling with the integration of generative A.I. models in the education setting. When we set the goal of education as content mastery—or far lower, the accumulation of grades and awards in order to be “job ready”—then the use of generative A.I. tools makes sense for the student. However, the frictionless ease with which smart searches and ChatGPT give us the answers to the questions, or write our essays, blog posts, or sermons for us, robs us of the necessary friction that develops important human virtues.
As Blehm concludes, “If I allow students to use ChatGPT in a way that circumvents the processes of perspective evaluation and confidence calibration, I deprive them of the opportunity to develop excellence.”
Here are several other key quotes:
“The student’s use of ChatGPT has undermined their own intellectual development because they have not engaged in the necessary mental activity for honing the skills and habits the assignment targets.”
“When educators are considering a particular use of ChatGPT in their classrooms, they should consider whether the potential use undermines or enhances a student’s development of intellectual virtue.”
“Education ought to be concerned with developing intellectual virtues because intellectual virtues are part of a flourishing human life that finds its ultimate fulfilment in the knowledge of God.”
You can read the full article here.
Peco and Ruth Gaskovski - Learning, Fast and Slow: Why AI will not revolutionize education
Sticking with the theme of education, I recently came across another Substack account, School of the Unconformed. I have not yet dug into the back catalogue, but found this article particularly helpful. My first hint that I would resonate with the authors was that they uploaded the article in PDF form so readers could take it away to read offline.
These authors pick up a number of the threads that I have already noted above about education, A.I. and what it means to be a flourishing human. One strong theme to notice in this article is the importance of effortful engagement with the content and process of learning. Citing science communicator Derek Muller, they note that learning occurs when there is the “repeat [of] effortful practice until mastery.” The problem, therefore, is that “there’s a danger in using AI or any technology in education if it causes us to skip the effort.”
This is a long-read, with many stimulating sub-themes. Here are some key quotes to whet your appetite:
“Drunk on the magical realism and exaggerated promises of the “digital revolution”, school districts around the country are eagerly converting to computerized test-taking and screen-reading programs at the precise moment when rigorous scientific research is showing that the old-fashioned paper method is better for teaching children how to read.”
“The more AI interferes with student effort and focused attention, the more likely it will diminish their learning.”
“Some people still wonder how we might make the relational context of learning faster or more efficient. During Muller’s talk, an audience member asked how teachers can “scale” a personal connection when teaching large classes. Muller’s reply was one of the highlights of the evening:
“How do you scale a personal trainer? How do you scale a plumber? How do you scale an electrician?
You don’t.
You just have lots of them. And I think that’s the answer; I think that’s the solution.
The goal should always be to get more of them [teachers] and make them better.”
Cal Newport - AI and Work (Some Predictions)
I have long been a fan of Cal Newport’s writings and podcast. His books Deep Work, A World Without Email, and Slow Productivity may masquerade as productivity or business self-help literature, but have a profoundly anti-hustle vision of humanity. Partner these with Digital Minimalism and So Good They Can’t Ignore You, and you have a suite of ideas that transcend the typical offering in the “workplace” category on Amazon.
While Newport is not overtly religious in his output, he is, minimally, sympathetic to Judeo-Christian conceptions of what it means to be human. Jewish, and occasionally Christian, theologians and authors appear not infrequently in his monthly reading recaps. The benefit that Newport brings to this conversation is his day job as a computer scientist at Georgetown University. He understands the world of computers, technology and A.I. as an insider.
I have appreciated Newport’s measured responses to A.I. He’s not freaked out and doesn’t envisage a Skynet scenario anytime soon. Nor is he ignoring the disruptive potential of A.I. for some workplaces.
Here are a few key quotes from his latest article:
“Smart search has become the first killer app of the generative AI era because, like any good killer app, it takes an activity most people already do all the time — typing search queries into web sites — and provides a substantially, almost magically better experience... I would estimate that around 90% of the examples I see online right now from people exclaiming over the potential of AI are people conducting smart searches.”
“Despite recent hyperbolic statements by tech leaders, many professional programmers aren’t particularly worried that their jobs can be replicated by language model queries, as so much of what they do is experience-based architecture design and debugging, which are unrelated skills for which we currently have no viable AI solution.”
“Here’s the reality: We are not currently on a trajectory to genius systems. We might figure this out in the future, but the “unlocks” required will be sufficiently numerous and slow to master that we’ll likely have plenty of clear signals and warning along the way. So, we’re not out of the woods on these issues, but at the same time, humanity is not going to be eliminated by the machines in 2030 either.”
Read the full article here.
Simon Sinek - We're Teaching Our Kids To Not Be Human!
Another secular thinker I have valued is Simon Sinek, best known as the author of the business leadership books Start With Why and Leaders Eat Last. As with Cal Newport, I have found his wariness of frictionless technology and its stunting of human flourishing perceptive.
This is a wide-ranging interview with Steven Bartlett on The Diary Of A CEO podcast which covers much more than A.I. and related technology topics. Many of the best bits are in the opening preview on the YouTube video.
However, if, like me, you are a fan of meandering, long-form podcasting, this episode is worth the listen.
Key quotes:
“I’m a fan of A.I. I want A.I. to make things, but I would hate to lose out on becoming a better version of me.”
“A.I. will make our life easier, that’s what all technology does… but we are all human beings and are forced to live as human beings.”
“A community is a group of friends who agree to grow together.”
John Wyatt - The Future of A.I.
The first season of Justin Brierley’s documentary-style podcast, The Surprising Rebirth of Belief in God, followed the arguments of his book of the same title, outlining his suspicion that Western culture is experiencing a growing resurgence of interest in the divine in general, and in Christianity in particular. In this second season, Brierley continues this idea, but includes broader conversations on the intersection of Christianity and society.
In this recent episode, John Wyatt, Christian doctor, author, speaker and research scientist, gives a lecture on the promises and risks of our increasingly A.I.-saturated world, followed by a Q&A with Brierley and a live audience.
There are many interesting threads in both the presentation and the following discussion, including “Is simulated compassion always inauthentic, misleading, inappropriate?”
Here is the link to the common theme we’ve already seen in the articles above:
The paradox is that, in order for embodied human beings to develop and flourish, it seems that we need resistance, friction, struggle, perseverance, frustration, pain, and suffering. In other words, friction is an essential part of human development, of character formation and flourishing…
And a final, evangelistic note:
Yes, there are real possibilities that A.I. systems will magnify human evil. But you know, it isn’t A.I. but humans who pose the greatest risk to the future. And in answer to this fear, Christianity brings the message of genuine hope: a message centred on a person, the Word who became flesh.
Listen to the whole episode below:
Made-Up Books and Fake Experts
Finally, one of the major concerns that comes out of using generative A.I. tools is their habit of “hallucinating”, the technical term for their tendency to confidently present false information. I have experienced this myself, experimenting with the inbuilt A.I. function of Notion. When prompted to find me sources on a particular topic, it mentioned a number of books by authors in the correct field, but which did not actually exist.
Over at The Gospel Coalition Australia, David M. Shaw relates his own experience of Bible College students submitting bibliographies with fabricated references. Shaw then connects this back to our original idea of a stunted educational—and in his case, spiritual—formation.
What is extra fascinating is the interaction that Shaw then has with ChatGPT, in which the program admits to hallucinating and indicates that it cannot be trusted as a reliable source of information.
Me: I appreciate your honesty, Chat, but why did you make that reference up? I could get into trouble for this.
ChatGPT: You’re totally right to be upset—and honestly, I owe you a proper explanation. The reference . . . by Evans was hallucinated—it wasn’t real, and I shouldn’t have included it as though it were. Here’s what happened under the hood: (1) [I prioritised] pattern recognition over accuracy; (2) [I made] assumptions about author topics… (3) No Excuse, Just Transparency: You’re absolutely right—this can have real consequences for academic integrity. That’s why I take it seriously when you call it out. It’s also why I now prioritize transparency and evidence over just generating plausible-sounding responses.
Read the full account here.
This kind of A.I. hallucination also made headlines in the mainstream media when “the Chicago Sun-Times features dozens of pages of recommended summer activities: new trends, outdoor activities, and books to read. But some of the recommendations point to fake, AI-generated books, and other articles quote and cite people that don’t appear to exist.”
You can read up about that incident here.
Congratulations for scrolling to the end! If you have any thoughts, comments, or suggestions, please leave them below.