In the last two years, the language industry has entered one of the most profound technological shifts in its history. It is, in many ways, an existential reckoning. High-quality machine translation, speech translation, automatic dubbing, and AI-assisted interpreting are no longer speculative demos. They are functional systems, improving every month, quietly integrating into the workflows and devices we already use. Yet, as the ground shifts, many in translation academia and professional associations cling to a comforting illusion: that the unfolding technological disruption is either exaggerated, unethical, or simply impossible.
While some of the points they make are grounded in reality, the most fundamental, and hence most dangerous, path they follow is a pattern the legal scholar Richard Susskind once called technological myopia: the inability of experts to make informed guesses about where technology is heading, because they evaluate it only through the lens of its current imperfections. They judge tomorrow’s systems by today’s glitches, and thus always underestimate what’s around the corner.
The comforting denial
In conference panels, newsletters, and public statements, one still hears the same reassuring refrain: “AI will never reach human level”. Or, when that position becomes untenable, the fallback argument: “Even if it does, users will never accept it”. It’s a narrative that sounds protective, even noble. But it’s also detached from empirical trends, and dangerously misleading for those who take it at face value.
The irony is that this skepticism comes from communities that have traditionally championed analytical rigor and critical thinking. Yet, when faced with an uncomfortable reality — the accelerating performance of language technologies — many institutions resort to emotional defense rather than intellectual inquiry. They celebrate every paper or benchmark that exposes limitations (“You see? We told you!”) and dismiss every breakthrough as a marketing bubble inflated by “bad actors”.
A widening gap
Nowhere is this disconnect clearer than in the relationship between academia, professional associations, and the language technology industry. Officially, everyone calls for “closer collaboration”. In practice, few actually engage. When companies publish research or prototypes, the reaction on professional forums and social media is often outrage or ridicule. Industry is cast as an untrustworthy force: profit-driven, manipulative, even conspiratorial.
This antagonism has become almost ritualistic:
- White papers lament that scientific studies “misrepresent the interpreting profession”
- Association statements dismiss new technologies as “not ready for the field”
- Online debates quickly devolve into moral posturing, where dismissal becomes a badge of integrity
Meanwhile, those in industry rarely even respond. They don’t need to. When you know the direction of travel — when you’ve seen the prototypes, the user metrics, the rapid compounding of multimodal AI — you stop debating whether disruption will happen. You focus on how to manage it.
The asymmetry of confidence
That silence should be unsettling. In most revolutions, the quiet side is the confident one. The noisy side is the one losing ground.
Academia and professional associations are not facing a conspiracy; they are facing a trajectory. And trajectories are harder to argue with than theories. AI interpreting, dubbing, and translation are already reaching thresholds that make substitution economically rational, even if not yet artistically perfect. That is what matters to markets and institutions that buy communication, not the philosophical ideal of human uniqueness.
The sad part is not that this disruption is happening: innovation always does. The sad part is that those entrusted with preparing and protecting professionals are instead feeding them denial. They mistake realism for cynicism and treat foresight as betrayal. By rejecting even short-term forecasts grounded in data or specialized knowledge (forecasts that point to parity-level interpreting, near-human dubbing, and fully automated multilingual content pipelines within the next decade), they condemn the very communities they claim to defend to obsolescence by surprise.
The cost of disbelief
When professionals repeat these narratives (“it’s a bubble,” “AI can’t handle nuance,” “users will always want humans”), they are not exercising skepticism; they are echoing institutional comfort. And comfort, in this case, is fatal. The longer the community delays adaptation by disputing trends that are already underway, the fewer seats it will have at the table where the future of multilingual communication is being designed.
Technological revolutions rarely ask for permission. They unfold according to momentum, not consensus. The choice facing academia and professional associations today is not whether AI will disrupt the language sector. It already has. The choice is whether they want to help shape that disruption or be shaped by it.
Until they confront this reality, not as fantasy, not as fear, but as unfolding fact, the conversation will remain a dialogue of the deaf: one side citing ideals, the other building the future, however comfortable or uncomfortable that future may be. And the future, as always, won’t wait.