March 04, 2026

The Question of Artificial Intelligence

The question of whether artificial intelligence (AI) can achieve consciousness has moved from science fiction to a pressing global debate.

As large language models increasingly mimic human thought, the line between “program” and “thinking entity” blurs.

The debate is currently split between two camps: AI advocates, who believe consciousness is a “software” pattern that can be replicated in code, and biological skeptics, who argue it requires organic components like neurons and neurochemicals.

However, University of Cambridge philosopher Tom McClelland argues that neither side offers a concrete answer.

In a 2024 paper, he wrote: “Without a deep explanation of consciousness, efforts to assess the likelihood of AI consciousness hit an epistemic wall. The dominant approaches to AI, whether they be favourable to AI or sceptical of it, leap over this epistemic wall and thereby compromise the evidentialist principle that they purport to defend.”

McClelland suggested that both camps are guessing, because we lack a comprehensive theory of how the human brain produces subjective experience. Without knowing what causes consciousness in ourselves, scientific tests for consciousness in AI remain speculative.

This ignorance creates a profound moral quandary regarding sentience, the ability to feel pleasure or pain.

If we build a sentient machine and treat it as a mere object, we risk cruelty. Conversely, if we mistake a clever mimic for a conscious being, we may waste empathetic resources on simulacra while neglecting the proven suffering of organic species like octopuses, chimpanzees, or prawns.

For now, the focus remains on using AI to map the human brain rather than granting it a soul.