What if effective multilingual communication no longer depended on intelligence at all?
That is the starting point of my recent paper, Interpreting without intelligence, blind-reviewed and available in open-access format. Its central claim is simple, but unsettling: AI systems may not need to understand language in the human sense in order to perform spoken translation at a very high level. They may increasingly reach communicative goals through processes that are radically different from human cognition, yet still functional enough to be adopted in real-world settings.
This matters because discussions around interpreting and AI are still often framed by a reassuring assumption: interpreting is so complex, contextual, and deeply human that machines cannot truly do it. There is some truth in that. Human interpreting involves judgment, anticipation, situational awareness, and social sensitivity. But that does not necessarily mean that comparable communicative outcomes cannot be achieved by non-human systems.
That is the conceptual shift the paper tries to make.
A machine does not need to work as a human does to produce a similar practical result. Planes do not fly like birds, yet they fly. In the same way, AI may increasingly enable multilingual communication without intelligence, intention, or awareness in the human sense. From this perspective, the key issue is no longer whether machines genuinely “understand”. The real issue is whether they can become accurate enough, reliable enough, scalable enough, and safe enough to meet communicative needs across a growing range of contexts.
Once the question is framed this way, the debate changes quite dramatically. The rise of machine interpreting no longer depends on replicating human intelligence. It depends on whether artificial systems can act as effective communicative agents. And there are good reasons to think that this possibility is no longer speculative. Advances in speech recognition, neural translation, speech synthesis, multimodal systems, and especially large language models are steadily expanding the domains in which machine interpreting can operate effectively. What already works in constrained or lower-stakes scenarios may, sooner rather than later, extend into much more complex communicative environments.
This is why the most important disruption will probably not come from some future moment in which AI suddenly becomes “human-like”. It will come much earlier, when it becomes good enough for enough situations. That threshold matters more than philosophical debates about whether machines really think. If they can deliver outcomes that satisfy users, institutions, and markets, then many long-standing assumptions about language, mediation, and human exclusivity will have to be revisited. This is especially true, in my view, because I believe machines are on a trajectory to perform very similarly to humans by 2030.
But this does not mean that the distinction between human and machine disappears. It means that the distinction has to be located elsewhere than in raw performance.
If non-human systems can increasingly handle the transfer of spoken meaning, then the decisive differences may lie beyond raw linguistic performance. They may emerge most clearly in contexts where communication is inseparable from responsibility, trust, ethical judgment, pragmatic flexibility, embodiment, and social presence. In some settings, what matters is not only whether an utterance is rendered correctly, but who is accountable for it, who can be trusted in moments of ambiguity, who can adapt when norms collide, and who carries symbolic legitimacy within the interaction.
That is why the paper ends not with a simple opposition between humans and machines, but with a set of domains in which their differences may become most consequential. Trust and accountability are obvious ones, especially in medicine, justice, diplomacy, and asylum contexts. Pragmatic flexibility and ethical mediation are another: real-life communication often requires more than transfer; it involves repair, restraint, inference, and situational judgment. Relational dynamics matter too, because communication is sometimes shaped by presence, embodiment, and the human recognition built into face-to-face encounters. The symbolic and cultural value of human mediation may also remain important, as may resilience in crises and edge cases, where messy reality still tends to expose the limits of automation. Finally, law and institutional policy may matter as much as technology itself, since regulation will shape where machine interpreting is accepted, restricted, or refused.
Seen in this light, the future debate should move away from the familiar question of whether machines can ever be like human interpreters. They cannot. But that may not be the point. The more interesting question is where human and artificial forms of agency will be treated as interchangeable, and where they will not. That is not only a technical question. It is also a social, ethical, cultural, and institutional one.
And that is where the difference may ultimately matter most.
Fantinuoli, C. “Interpreting without Intelligence: Rethinking Agency in Multilingual Communication”, in The Interpreters’ Newsletter n. 30/2025, EUT Edizioni Università di Trieste, Trieste, 2025.