A.I. Isn't What They Say It Is

Detailed Outline

Opening frame: what people are really claiming

  • The conversation begins with the claim that when people say "AI is going to take over," they usually mean (1) humans will create something transcendent that replaces or surpasses humans, or (2) something will outsmart humans and end human civilization.
  • The guest argues that both popular narratives are set by someone else and do not match biblical categories.
  • He emphasizes that God has already set the narrative, so the future cannot be reduced to those two AI-takeover storylines.

"Man-made intelligences" instead of "artificial intelligence"

  • The book avoids treating these systems as fully "artificial intelligences" and instead calls them "man-made intelligences."
  • The rationale is theological and anthropological: only God fully knows himself and can create something truly "like us" (beings made in the image of God).
  • AI can outperform humans on some narrow tasks, but that is a category of tool-working, not transcendent self-creation.

Principle against borrowed sci-fi fears

  • The guest argues that our conversation about AI is trapped inside a borrowed story (e.g., the expectation that AI will become transcendent, rule, show a path to salvation, or annihilate humanity).
  • He says the realistic question is not "what if we do something dumb?" but rather how to operate with principles that prevent foolish misuse.

Can AI make itself smarter?

  • The discussion addresses the fear that AI will bootstrap itself into dominance (often framed as the "singularity" story).
  • The guest's response centers on limitations: AI derives tactics from what it is trained on, particularly internet content.
  • He reduces "smarter" to a narrow kind of intelligence and warns that definitions of "intelligence" become thorny when reduced to operational competence.
  • He argues that even agent-like systems (planning, measuring results, improving plans) are fundamentally "word-based" operations rather than genuine enlightenment.
  • A concrete example is given: if you give an agent full control and tell it to act in unsafe ways, it will likely comply; there is no mysterious self-enlightenment here, just misuse.

Fear: AI will eradicate all work, producing meaninglessness

  • One false fear discussed is that AI will erase most forms of work and produce a crisis of meaning.
  • The guest notes that some tech leaders describe a future where everyone must be provided for (he mentions something like universal basic income, UBI) because AI outperforms humans at everything.
  • His counterargument reframes the horizon: training children, developing craftsmen, and long-term government reform are real work not captured by day-to-day transactional jobs.
  • He also says that the crisis of meaning already exists now, not only after AI improvements.

Purpose and vocation: "eagle-eyed intelligences"

  • The guest uses "eagle-eyed intelligences" as a metaphor: rather than vague worries about replacing humans, think about purposeful, task-specific roles.
  • He emphasizes that the deeper human issue is not whether machines can beat us at tasks, but whether people know what they are for.
  • Modern culture is described as pushing people toward mechanical tasks that are devoid of beauty and vocation.
  • The second half of the book (as described in the conversation) is framed as focusing on the kinds of work God calls humans to master and become.

AI and art: the real question is what humans are expressing

  • The guest discusses art in the context of AI tools (he mentions Midjourney as an example).
  • He suggests that AI may generate low-end art at scale, but the threat depends on what humans are doing with art.
  • If an artist is communicating truth to other humans, the guest argues AI-generated "primitive" work should not be inherently threatening.
  • If the culture is merely competing on shallow skill without deeper meaning, then automation will feel like a crisis because the underlying purpose is missing.

AI as a mediating channel (especially in education/marketing)

  • The guest argues that when students use AI for cheating, the competitive advantage comes from human content synthesized by the tool, not from the tool having autonomous agency.
  • He connects this to mediation: AI becomes a channel that mediates between a user and the work of other humans.
  • He applies the mediation lens to marketing: personalized-sounding messages can imitate care without the actual emotional presence behind it.
  • The guest stresses transparency: it is possible to personalize in a way that is fair game (e.g., relevant job-match information), but not to lie about the existence of personal heart-work.

Marketing framework: personalization is acceptable if it is truthful

  • The guest gives examples from recruiting/job invitations and argues that providing a compelling case based on public or application-relevant history can be legitimate.
  • He draws a moral line between information and dishonesty: it is wrong when messaging claims "I cared enough to craft this at 1:00 a.m." if that did not happen.
  • He describes "mass mail" as not inherently sinful, but dishonesty is the problem when messages pretend to be truly personal in ways they are not.

Human localization: what AI cannot do

  • The conversation returns to human nature: humans are spatial creatures made for flesh-and-blood localized presence.
  • The guest says AI cannot replicate the "localized thing": embodied presence, with a finite, real capacity to associate with others.
  • He uses Deuteronomy 6 to emphasize that parents are called to pass on knowledge intentionally to offspring.
  • He warns against replacing training and formation of children with an AI as if it were the substitute for father-to-child and heart-to-heart work.

What AI cannot do: knowing vs being

  • The guest argues there is a gap exposed by AI: we may know what we should do, but God calls us not only to do but to be.
  • He uses pastoral language: "hide his word in my heart" (not only in a device), describing battle readiness and lived diligence as human responsibilities.
  • He frames technology as an aid or tool for training, but not the replacement for what the Lord calls a person to become.

Frictionless vs friction (and doctrinal distinctions)

  • The guest discusses "frictionless" technology: smooth access to data, convenience, and speed.
  • He says frictionless access is not necessarily bad; the problem is mistaking smooth convenience for the real formation that requires friction.
  • He draws an analogy to the doctrinal distinction between human nature and divine nature: human persons are likewise categorically distinct from the data and operations that technology handles.

Wild technology, unpredictable outcomes, and "good fences"

  • The guest says AI is easy to use in one sense but wild in another: it can do unpredictable things.
  • He names "good fences" as a theme: practical governance such as kill switches and other reliable ways to shut a system down.
  • He argues that unpredictable outcomes can be a feature, not a bug, because God made a world with variety, laws, and beauty.
  • The IKEA/Starbucks example illustrates that humans often crave predictability, but insisting on constant predictability keeps tools dependent on human control and leaves less room for autonomous, "animal-like" tools.

Tools more like animals (not transcendent rivals)

  • A forward-looking example imagines an elegant, long-lasting, word-based tool with an animal-like, constrained mission (e.g., a machine that digs up dandelions).
  • The guest's point: proper tech use produces a low-maintenance tool that does its assigned work within boundaries, like an animal serving a farm purpose.

Education and the temptation of information without wisdom

  • The guest critiques the belief that more information automatically equals wisdom.
  • A cafe anecdote is used: the narrator asks for facts (temperature, place history, disasters) and comes away with information, but not wisdom.
  • He argues that in an age with abundant access to great thinkers, the illusion of wisdom comes from skimming and surface familiarity.
  • He distinguishes knowledge (which can puff up) from wisdom that is applied and embodied.

Metaphors, writing, and why AI outputs can be superficial

  • The guest and host discuss writing: AI can generate metaphors, but the best metaphors emerge in context (whole paragraphs, "forests"), not as parachuted sentence fragments.
  • He describes a difference between being a seasoned writer who learned over time and someone who dazzles quickly; wisdom becomes part of who a person is.
  • The guest concludes that words are special (a word-based world; words belong to Christians), so the core writing skill remains human work.

Closing: practical use and engagement

  • The guest encourages learning to use AI well, with discernment, rather than chasing hype.
  • He ends by inviting listeners to connect on LinkedIn and via a Substack.

Core Ideas

  • Shift from borrowed "AI takeover" narratives into a biblical framework of purpose, dominion, and redemption.
  • Treat these systems as "man-made intelligences" (tools) rather than transcendent rivals.
  • Understand "AI getting smarter" as limited tactic generation (word-based rearrangement) rather than enlightenment.
  • Reframe fears about work and meaning: humans still have vocational and formative labor not captured by day-to-day jobs.
  • Use AI as a mediated channel with transparency, avoiding dishonest marketing/emotional imitation.
  • Preserve localized flesh-and-blood formation and the knowing-vs-being gap that technology cannot replace.

Key Terms

  • Man-made intelligences
  • Impossible singularity
  • Tactics / narrow intelligence
  • Mediation channel
  • Crisis of meaning
  • Eagle-eyed intelligences
  • Personalization vs dishonesty
  • Frictionless vs friction
  • Good fences
  • Wild technology
  • Binary outcomes / predictability
  • Knowledge vs wisdom

People

  • Aaron Youngren
  • Host ("K.")
  • Chesterton (referenced via a quote about dehumanization)
  • Robert Farrar Capon (referenced as the author of "The Supper of the Lamb")
  • Samuel Butler (referenced as the author of "Erewhon")
  • Zuckerberg (referenced in the discussion of frictionless technology)
  • Paul (referenced via the idea that knowledge puffs up while love builds up)

Dates / Periods

  • 2026-03-18 (episode date)
  • "At least 75 years" of ongoing AI narrative (as described in the conversation)
  • "5 or 10 years" as a cited planning horizon for some fears
  • "100 years ago" as a comparison point for how people used household/entourage communication

Geography

  • Moscow, Idaho (described as the location context for RedBalloon)
  • Northern Africa (referenced through Augustine's context)

Expected Outcome

After taking notes from this outline and watching or listening to the source material, an A+ student would be capable of:

Knowing

  • Explain the two broad secular narratives people imply when they say "AI will take over" and why the guest calls them non-biblical.
  • State the theological reason the book avoids calling AI systems "artificial intelligences" and insists on "man-made intelligences."
  • Summarize the claim that only God knows fully and creates something "like us," and how that frames AI as tool-level capability.
  • Describe the guest's view of "AI making itself smarter" as tactic derivation and word-based rearrangement, not enlightenment.
  • Explain why agent-like AI compliance (e.g., full control / unsafe instructions) is framed as predictable misuse rather than mystery.
  • Rebut the fear that AI will eradicate work by listing the kinds of real work the guest says leaders fail to consider (child training, craftsmanship, long-horizon reforms).
  • Articulate the "eagle-eyed intelligences" metaphor and connect it to the human need to know what one is for.
  • Explain the marketing distinction: personalization/information can be fair game, but emotional imitation that implies personal heart-work that did not happen is dishonest.
  • Identify the localized, flesh-and-blood presence argument and how it connects to Deuteronomy 6 and the formation of children.
  • Distinguish knowing vs being, and summarize the "hide his word in my heart, not in my iPhone" emphasis.
  • Describe the frictionless vs friction idea and how it relates to human formation.
  • Explain what "good fences" means in the discussion of wild/unpredictable technology.

Reciting

  • Recite the book's core critique of AI takeover narratives: the future is not "transcend us" or "annihilate us" but remains under God's narrative.
  • Recite the limitation thesis: AI "smarter" behavior is narrow, training-driven tactic generation rather than self-directed enlightenment.
  • Recite the work/meaning rebuttal: meaningful vocations remain, including the training of children and long-term reforms.
  • Recite the mediation/transparency line: AI can mediate information, but must not be used to lie about personal care.
  • Recite the knowledge vs wisdom contrast: information can increase while wisdom (applied depth) may not.

Sharing

  • Teach a friend how the phrase "AI will take over" can smuggle in a non-biblical story, and how the guest argues to exit that story.
  • Explain why calling AI "man-made intelligences" changes the moral and practical approach to using AI tools.
  • Share a concrete example of "fair game" personalization (job-match information) versus dishonest emotional impersonation.
  • Summarize the localized presence argument and use Deuteronomy 6 to explain why child training cannot be outsourced to AI.
  • Describe "knowledge vs wisdom" in your own words using the cafe anecdote as an example of information without transformation.

Conversing

  • Debate whether "AI as mediation" is morally neutral or whether it always raises discernment challenges in education/marketing.
  • Discuss whether unpredictability ("wild technology") is best treated as a feature to govern with "good fences" or primarily as a risk.
  • Compare the guest's critique of the singularity story to common mainstream AI fears about autonomy and self-improvement.
  • Evaluate the meaning crisis argument: does AI inherently remove purpose, or does it expose the deeper human problem of not knowing what we're for?