Dr. Claudio Fantinuoli
July 27, 2025 (updated July 28, 2025)

If Quality Is Contextual, Then AI May Be Better Equipped Than We Think

As Director for Interpretation at the European Parliament, Alison Graves offers thoughtful reflections in this video on the evolving notion of quality in conference interpreting. In her presentation, which focuses on human interpretation, she moves away from rigid, perfectionist definitions and supposedly objective notions of quality, emphasizing instead the contextual, listener-centered nature of interpreting. Her arguments open an important window, perhaps unintentionally, onto what I believe will become central to future discussions on AI interpreting, to the extent that her very points could easily be used as a case in favor of its adoption.

First of all, I agree with all the good points raised by the speaker. They are thoughtful and reflect a deep understanding of what truly matters in interpreting. However, I take issue with one of the conclusions she draws in her talk, namely that “AI won’t replace us unless we let it” (see here for why I do not agree with framing the topic as “replacement yes/no”). My issue is that this statement is presented as a natural consequence of the arguments made throughout her speech, yet in my view it does not fully align with the reflections she offers on the evolving nature of quality in interpreting. While I firmly believe that professional interpreters will continue to have a role even when more capable machines enter the scene (and even if those machines eventually deliver performance comparable to humans), her concluding statement struck me as particularly counterintuitive.

In fact, her reflections on quality, if applied to a future in which machines can demonstrate high-quality interpreting (what I call the what-if-MI scenario), could just as well be read as a case for the broader adoption of AI interpreting. These are precisely the kinds of arguments that might soon support an affirmative answer to a question many in the profession still avoid publicly (though often acknowledge privately): Could AI interpreting, in the not-so-distant future, genuinely meet the needs of a significant number of clients?

Let me focus on a few of her points that I believe are particularly relevant in this context:

  • The Concept of Quality is Elusive: interpreters and managers frequently refer to quality but rarely define what it means.
  • Interpreters Struggle to Define Quality Clearly: in interviews, interpreters often define quality in terms of accuracy, completeness, and fidelity to the speaker’s message; however, there is no agreed threshold for what counts as a “significant” lack of quality.
  • Quality must be Listener-Centered: quality is in the ear of the listener and interpreting must be judged from the listener’s perspective, not the interpreter’s.
  • Understanding User Expectations: many users don’t require perfect interpretation: they want presence, accessibility, and participation.
  • What Complaints Tell Us: most complaints are not about accuracy, but about service availability.
  • Accreditation and the “Benchmark” Illusion: accreditation is essential to ensure basic skills. But “benchmark” quality is subjective and contextual.
  • Fluency Affects Perceived Accuracy: fluency shapes how accurate an interpretation is perceived to be, sometimes even more than actual accuracy itself.

My point is that these arguments — while clearly articulated in defense of human interpreting — could just as well be read as outlining the conditions under which AI interpreting might thrive. If we reframe them slightly, they start to resemble proper value propositions for AI-driven solutions:

  • If quality is elusive and hard to define, then rigid benchmarks may not be the most meaningful way to evaluate interpreting performance. This opens the door for outcome-based assessments where AI might be “good enough” for certain tasks and clients.
  • If even human interpreters struggle to define quality in objective and shared terms, then insisting on a single universal standard to which AI must conform becomes questionable. Instead, we might focus on fitness for purpose: a performance level where AI might soon be increasingly competitive.
  • If quality is listener-centered, then the central question becomes not whether the output is perfect, but whether it serves the listener’s needs. It is not interpreters, with their view of what makes an interpretation good, who will judge its value. It is the listeners. And their criteria appear to be far less stringent than those of interpreters [1].
  • If many users prioritize accessibility and inclusion over perfection, then scalable, on-demand AI solutions could be more attractive than limited human availability, especially in under-resourced languages or time-sensitive contexts.
  • If most complaints are about service availability, not fidelity, then AI’s ability to be “always on” and “everywhere” may be its strongest asset, not its linguistic perfection.
  • If accreditation is essential for baseline competence but benchmark quality is subjective, then human excellence and AI adequacy might coexist within a broader, more layered ecosystem of multilingual communication services.
  • If fluency affects perceived accuracy, then AI systems are well positioned to please their users. Synthetic voices are increasingly smooth, expressive, and natural-sounding, in many cases even pleasant and persuasive, while machine translation (soon powered by Large Language Models) is known precisely for its tendency to prioritize fluency over fidelity.

In a nutshell, my point is that Graves’ arguments are very pertinent, but that they can be read not only as a defense of human-centered interpreting, as she seems to imply, but also as a roadmap for why AI interpreting is likely to gain traction in the future. What we might take from her arguments is that, as more capable systems emerge, users’ perception of their utility might grow significantly, and the interpreter-centered view of what constitutes good performance, which currently dominates the debate, may gradually lose its centrality.

None of this diminishes the complexity or value of human interpreting. Nonetheless, it points toward a future in which the burden of proof may reverse. If interpreting is ultimately about facilitating understanding in a specific moment, for a specific audience, and if Graves’ points and observations are correct (as I believe they are), then AI interpreting might be better equipped to meet users’ needs than many currently assume [2], at least under the assumption of systems that perform better than today’s (the what-if-MI scenario).

In conclusion, rather than shielding ourselves with absolutes, the interpreting profession would benefit from engaging more openly with the reality that the ground is shifting. The question is no longer if AI interpreters will become part of the multilingual communication landscape — they already are — but how and where they will be used, and how users will perceive and evaluate their utility in light of their own goals. Those goals may not align with perfection or the other ideals that interpreters rightly hold dear. By fostering a more nuanced discussion — one that recognizes AI systems as emerging agents of multilingual communication — all stakeholders can help shape the integration of these technologies while safeguarding a meaningful and resilient role for human interpreters in this new era.

  1. Talking about evaluation: as Graves rightly points out, it is the user’s perspective and satisfaction, not the interpreter’s, that truly count. This has far-reaching consequences for how we evaluate increasingly capable AI interpreting systems.
  2. It is also worth reminding ourselves that the performance we are seeing today represents the worst these systems will ever be.
