Dr. Claudio Fantinuoli
November 21, 2025 (updated November 23, 2025)

“AI just swaps words” – Rethinking Semantics in the Age of AI

Machine translation is still often dismissed by translators and interpreters as a simplistic word-swapping device, i.e. an automated dictionary that turns sentence A into sentence B by juggling vocabulary. This argument is used every day to suggest that machine translation is inherently limited and, ultimately, pointless. What’s striking is that this misconception mirrors another widespread belief among the general public: that human translators and interpreters also “just replace words”. Critics of MT thus inadvertently apply to machines the very reductionist view they reject when it is applied to human professionals.

At the root of this attitude lies the belief that modern AI systems, namely Large Language Models (LLMs), lack real semantics. They are said to manipulate surface forms (mere words) without anchoring them to rich human concepts such as intention, reference, or understanding. And in a narrow, mechanistic sense, this is true: LLMs are pattern-based systems trained on statistical correlations.

But reducing a system to its machinery tells us nothing about the value of what it produces. As Daniel Dennett often argued, any complex cognitive agent — biological or artificial — can be understood at different explanatory levels. Describing human translators and interpreters as “just neurons firing” is technically correct but entirely misses the point. Functionally, they are doing much more.

Semantics Is Not an Object Hidden Inside the Mind

The debate about “real semantics” often assumes that meaning is a sort of internal substance, more or less an organ that humans possess and machines do not. But major traditions in philosophy disagree with this picture. Wittgenstein [1], for example, famously rejected the idea of private meanings; instead, he saw meaning as emerging from use within “language games”. Umberto Eco [2], whom I greatly admire, described meaning as a negotiation among signs, contexts, and interpretive conventions. And Quine [3] showed that reference and interpretation are not fixed objects but context-dependent and socially determined.

From this perspective, when we talk about translation and interpreting, asking whether LLMs “truly” understand is roughly as meaningful as asking whether calculators “truly” add: the question presupposes a metaphysics of meaning that neither humans nor machines satisfy. What exists in practice are agents — with different architectures and complexities — participating in our linguistic exchanges with varying degrees of competence.

Machines Already Participate in Our Language Games

Whether or not translation and interpreting systems have inner mental states, they undeniably take part in communication:

  • They interpret user queries.
  • They generate contextually appropriate responses.
  • They justify their choices in ways that satisfy the user’s expectation of an explanation.
  • They act as reliable interlocutors for many tasks.

Dennett’s point is crucial here [4]. His notion of the intentional stance suggests that we routinely explain and predict the behavior of complex systems by treating them as if they had beliefs, goals, or intentions, regardless of whether such mental states actually exist. We do this not because we believe the system has a mind, but because this stance is useful: it allows us to interact with the system more effectively and to anticipate its actions. When users tell an LLM “you misunderstood me”, or when they ask it to “try again”, they are implicitly adopting this stance, not out of philosophical naivety, but because this framework makes the interaction intelligible and productive.

Seen in this light, the relevance of Dennett’s insight becomes evident. The question of whether LLMs really understand (or whether they have human-like semantics) is less important than the observable fact that humans treat them as understanding agents, and that doing so yields successful coordination. Meaning, in this view, is not an inner, hidden property of the model but an outcome of the interactional dynamics between human and system. This aligns closely with Floridi’s concept of artificial agency, which holds that machines can participate meaningfully in human practices without possessing human-like cognition. Floridi [5] shifts the focus from inner intelligence to agency as performance: to what the system does, how it behaves, and how it integrates into our socio-technical routines.

Combining these perspectives, we can say: translation and interpreting systems participate in our language games not because they think or understand in a human sense, but because their behavior is structured in ways that make adopting an intentional stance toward them fruitful. Their “semantics” emerges through use, interaction, and coordination, not through consciousness or mental representation.

Translation Machines as Communicative Agents

This view has direct implications for translation and interpreting. MT and speech-translation systems are not merely tools that transform strings; they mediate meaning between people. They influence decisions, shape expectations, and co-create communicative outcomes. That is a semantic role, even if the underlying mechanics differ from those of human cognition.

These systems should therefore be studied not only as engineering artifacts but also as new communicative agents. Translation studies has begun exploring this territory; interpreting studies, which is only slowly starting to engage with real-time machine translation and speech translation (MT/ST), should likewise recognize these systems as active participants in meaning construction, i.e. as agents of communication.

Semantics Is a Social Achievement

The question “Do machines really understand?” is less insightful than it appears, as is the oft-repeated claim that AI systems do not understand what they are translating or interpreting (“they are just word-swapping machines”). Meaning is not a metaphysical substance possessed by some agents and not others. It is a social achievement that arises whenever agents — human or artificial — coordinate through language.

From this standpoint, translation and interpreting systems already contribute to semantic labor. They participate in our language games, shape our conversations, and mediate communication across languages. Their semantics is defined not by what they are internally, but by what they can do with us. And by that measure, they are already part of the conversation.

End note: To be clear, my argument is not that human translators and interpreters are equivalent to AI systems. Far from it. My point is that professionals need stronger, more accurate arguments when explaining why the world should choose human expertise over an automated system. Unfortunately, professional associations and academia have so far failed to equip practitioners with the conceptual tools and communication strategies they need.

  1. Wittgenstein, Ludwig (1953). Philosophical Investigations. Blackwell.
  2. Eco, Umberto (1976). A Theory of Semiotics. Indiana University Press.
  3. Quine, W. V. O. (1960). Word and Object. MIT Press.
  4. Dennett, D. C. (1987). The Intentional Stance. MIT Press.
  5. Floridi, L. (2025). “AI as Agency Without Intelligence.” Philosophy & Technology, 38(2).

2 thoughts on ““AI just swaps words” – Rethinking Semantics in the Age of AI”

  1. Seth Hammock says:
    November 23, 2025 at 5:13 pm

    The fact is, AI is going to wipe out an entire generation of translators and interpreters, maybe even two generations, including early Gen Xers who did not grow up with computers in the home. The field will require technical expertise going forward. That is, any talk of meetups and conferences to discuss vocabulary and technique is about as useless as screen doors on a submarine.

    1. claudio says:
      November 23, 2025 at 5:46 pm

      I tend to agree. At the same time, AI is opening up a broader ability to communicate without language barriers.
