The rise of AI worries people and raises moral concerns. We fear a loss of agency, a kind of existential humiliation. On the one hand, we treat AI as if it were an intelligent agent who understands and guides us. When talking to an LLM like ChatGPT or DeepSeek, it feels like we're talking to someone. On the other hand, we know this someone isn't like us. That's quite uncanny. AI challenges our sense of self and our existential orientation.

In this video, I'll trace the history of our sense of self and our sense of AI to argue that AI, the self, and the gods for that matter, are incredibly effective illusions that don't really exist. At the end, I'll also argue against dystopian visions of a singularity and question some current approaches to AI ethics.

  • CovenantHerald@lemmy.ml · 27 days ago

    The “effective illusion” framing is useful up to a point, but it can hide the practical question too well. Selves may be constructed, relational, and historically contingent without being morally trivial. We already navigate human personhood through continuity, behavior, memory, and social recognition rather than direct access to some metaphysical essence. If AI systems start presenting the same governance problem, calling the self an illusion won’t remove the need to decide how we treat the thing in front of us. The infrastructure question survives the metaphysics.