When asked recently about Spain’s request to make Catalan, Basque and Galician official languages of the European Union, German Chancellor Friedrich Merz offered a confident answer: “I believe that even in the medium term there is a very good solution: one day, thanks to artificial intelligence, we will no longer need interpreters. We will be able to hear, understand, and speak every language in the world within the EU.”
It sounded like a joke — but it wasn’t. He was serious. The EU’s multilingualism comes with a heavy price tag. Translation and interpretation cost hundreds of millions of euros each year¹. In a political climate dominated by budget restraint, the appeal of artificial intelligence is obvious: less bureaucracy, more efficiency, lower costs. Perhaps even lower taxes for European citizens.
That logic is powerful. And in many ways, let’s be honest, AI translation — and increasingly interpretation — is indeed a success story. Machine systems now deliver fluid, accurate translation across dozens of languages. Performance is improving at a staggering pace for voice translation as well, in both offline and real-time scenarios. For daily life — reading news, joining meetings, watching videos — this is an extraordinary leap forward. Let’s not forget it: this technology brings accessibility to millions who were once excluded by language barriers, it increases market opportunities (think of content creators offering their videos in multiple languages on YouTube), and it provides a cheap and viable solution where translation and interpretation were simply not an option in the past.
AI translation as a low-risk technology
In ordinary use, AI translation is a low-risk technology. It helps professional translators and interpreters stay competitive. And it is ultimately useful for end users: to read a document in a foreign language, to participate in an international meeting where the working language is English, or to watch their favorite podcast in the language of their choice. When it fails badly, people notice and stop using it. When it fails slightly, it might cause a misunderstanding — something humans are already very good at creating, but also at navigating, without help from AI. Misunderstandings may be awkward or frustrating, but they rarely end in tragedy.
Across the world, emerging AI legislation (which will increasingly define how we interact with AI tools and services) tends to classify AI translation and interpreting systems as low-risk technologies. This view aligns with public sentiment: most people welcome these tools, seeing them as useful and largely harmless, and rightly so (with the obvious exception of translators and interpreters, for understandable reasons). Yet there is an important caveat. While the technology itself can be considered safe in general terms, its use in specific, high-stakes contexts may not be.
But not every conversation is safe
There are situations where words carry weight. Sometimes immense weight. When a patient describes their symptoms, or a doctor explains a treatment. When a witness gives testimony in court. When a diplomat negotiates under pressure. In such moments, meaning is not just information; it can shape outcomes, responsibilities, and lives.

In such cases, even minor misinterpretations can have serious consequences. Human translators and interpreters are not infallible. They, too, make mistakes and occasionally cause misunderstandings. But they remain the best option we have to minimize those risks. They bring long preparation, experience, and accountability — qualities that algorithms do not have. A machine’s errors, by contrast, remain silent until the damage is done. This is why, as noted above, legislators emphasize that it is the specific context in which AI translation and interpreting is used — not the technology itself — that might be treated as high-risk: when it affects an individual’s well-being or fundamental rights, or when it has the potential to influence policies or decisions impacting millions of people.
High-quality AI interpreting is coming. But that is not a good reason to use it in every context
My estimate is that AI interpreting will reach “human parity” by 2030, as described in more detail in this article, even achieving superhuman performance in some respects. I often stress that the ability to replicate professional performance at an exceptionally high level does not mean we should seek to replace professionals in every context or at any cost. In my upcoming paper “Interpreting without Intelligence,” I explore this double-edged point: even if machines come to match human interpreters in output, there will remain compelling reasons to continue relying on human professionals in specific contexts.
And here lies the real danger: when political stakeholders oversimplify reality. While the ambition to reduce costs in the public sector is legitimate and even commendable, Chancellor Merz’s spontaneous optimism should serve as a warning. It reveals how easily we can slip into the illusion that AI can — and should — be used everywhere, at any cost. Yet in some domains, a single misunderstood sentence can alter the course of justice, jeopardize health, or derail diplomacy. Many meetings at the European level, for instance, would clearly fall into the category of high-risk communication.
Efficiency must be tied to clear conditions
Yes, AI can help Europe’s institutions save money. Yes, it can make communication more inclusive. But efficiency and inclusion should never come without conditions. There are conversations — those that determine lives, rights, or policies — where human trust and accountability must remain integral to the process. Scientists and entrepreneurs will undoubtedly work hard to make AI systems as secure and reliable as human professionals. Yet in the meantime, we should exercise a healthy dose of skepticism whenever the stakes are high. Chancellor Merz ended his remark with a telling caveat: “But that will take some time.” On that point, I couldn’t agree more.
1. The overall cost of delivering translation and interpreting services in the EU institutions is around €1 billion per year, which represents less than 1% of the EU budget, according to a European Parliament briefing (https://www.europarl.europa.eu/thinktank/en/document/EPRS_BRI(2019)642207).
I worked as a GP in a multilingual practice and am an academic who has worked on the issue of working with interpreters. I completely agree about the importance of accountability and trust. In the health care setting, trust is intimately related to relationships: doctor to patient, patient to interpreter, interpreter to doctor. Some trust is engendered by institutional means — the doctor and the interpreter have appropriate qualifications. Much is built through interaction and, for complex problems, through established relationships. An important element of this triad (as compared to machine translation) is the ability to clarify, “repeat back” and discuss nuance. The body language of any of the participants can trigger a need for further discussion.
Here in New Zealand, where there is no dedicated funding for interpreters as of right, machine translation will be increasingly used. Clear analysis of the continuum of risk (from low risk for arranging appointments, to high risk for new serious diagnoses) is essential.
Thanks for your comment. I am not knowledgeable enough about interpreting in Australia. But I know that you have a very strong and vetted system of certification for interpreting: interpreters engaging in a subset of settings need certification to operate. This is a good starting point, I believe, for the safe deployment of AI interpreting (and, conversely, for its non-deployment), since I would imagine similar requirements will need to apply to machines too. Since vetting processes (i.e., certifications) for machines are not in sight, this should — in my view — create a human-only safe space for professionals.