What if effective multilingual communication no longer depended on intelligence at all?
That is the starting point of my recent paper, Interpreting without intelligence, blind-reviewed and available in open-access format. Its central claim is simple, but unsettling: AI systems may not need to understand language in the human sense in order to perform spoken translation at a very high level. They may increasingly reach communicative goals through processes that are radically different from human cognition, yet still functional enough to be adopted in real-world settings.
This matters because discussions around interpreting and AI are still often framed by a reassuring assumption: interpreting is so complex, contextual, and deeply human that machines cannot truly do it. There is some truth in that. Human interpreting involves judgment, anticipation, situational awareness, and social sensitivity. But that does not necessarily mean that comparable communicative outcomes cannot be achieved by non-human systems.
That is the conceptual shift the paper tries to make.
A machine does not need to work as a human does to produce a similar practical result. Planes do not fly like birds, yet they fly. In the same way, AI may increasingly enable multilingual communication without intelligence, intention, or awareness in the human sense. From this perspective, the key issue is no longer whether machines genuinely “understand”, whatever we take human-like understanding to be. The real issue is whether they can become accurate enough, reliable enough, scalable enough, and safe enough to meet communicative needs across a growing range of contexts.
Once the question is framed this way, the debate changes quite dramatically. The rise of machine interpreting no longer depends on replicating human intelligence. It depends on whether artificial systems can act as effective communicative agents. And there are good reasons to think that this possibility is no longer speculative, and that imitating humans is not required to achieve this goal. Advances in speech recognition, neural translation, speech synthesis, multimodal systems, and especially large language models are steadily expanding the domains in which machine interpreting can operate effectively. What already works in constrained or lower-stakes scenarios may, sooner rather than later, extend into much more complex communicative environments, with results that, seen from the outside, i.e. from the product, are similar or in some instances even superior to human performance.
This is why the most important disruption, the moment in which highly capable machines will approximate expert performance, will probably not come from some future moment in which AI suddenly becomes “human-like”. It will come much earlier, when it becomes good enough for enough situations. That threshold matters more than philosophical debates about whether machines really think, understand, and the like. It is a purely pragmatic question. If they can deliver outcomes that satisfy users, institutions, and markets, if they can meet the communicative needs we expect an expert in the field to serve, then the impact of this technology will be radical. I have anticipated that machines are on a trajectory to perform very similarly to humans by 2030. The timeline might be wrong, but the plausibility of the outcome probably is not.
This prospect does not mean that the distinction between human and machine disappears. Since highly capable machines will reach the same translational goal by different means, and in our context without possessing human-like intelligence, the distinction between the two has to be located elsewhere, not in their material performances.
In other words, if non-human systems can increasingly handle the transfer of spoken meaning, then the decisive differences may lie beyond raw linguistic and cultural performance. They may emerge most clearly in contexts where communication is inseparable from responsibility, trust, ethical judgment, pragmatic flexibility, embodiment, and social presence, to name just a few. In some settings, what matters is not only whether an utterance is rendered correctly, but who is accountable for it, who can be trusted in moments of ambiguity, who can adapt when norms collide, and who carries symbolic legitimacy within the interaction. Such features are not fixed; they are negotiated within societies, in flux, shaped by conventions, laws, and world views.
That is why the paper ends not with a simple opposition between humans and machines, but with a set of domains in which their differences may become most consequential also in the relatively marginal areas of translation and interpreting. Trust and accountability are obvious ones, especially in medicine, justice, diplomacy, and asylum contexts. Pragmatic flexibility and ethical mediation are another: real-life communication often requires more than transfer; it involves repair, restraint, inference, and situational judgment. Relational dynamics matter too, because communication is sometimes shaped by presence, embodiment, and the human recognition built into face-to-face encounters. The symbolic and cultural value of human vs artificial mediation may also remain important, as may resilience in crises and edge cases, where messy reality still tends to expose the limits of a particular agent. Finally, law and institutional policy may matter as much as technology itself, since regulation will shape where machines are accepted, restricted, or refused.
Seen in this light, the future debate should move away from the familiar question of whether machines can ever be like human interpreters. They cannot. But that may not be the point. They are different, though perhaps soon equally capable, agents of translation. The more interesting question is where human and artificial forms of agency will be treated as interchangeable, and where they will not. That is not only a technical question. It is also a social, ethical, cultural, and institutional one.
And that is where the difference may ultimately matter most.
Fantinuoli, C. “Interpreting without Intelligence: Rethinking Agency in Multilingual Communication”, in: The Interpreters’ Newsletter, n. 30/2025, EUT Edizioni Università di Trieste, Trieste, 2025.