If it’s true that questions are nearly as important as answers, then our first paradigm shift in grasping the profound changes unfolding in the field of interpreting should be to reframe the question itself.¹ Instead of asking, Will AI replace interpreters?, we should be asking, Can AI match human performance in interpreting? Only by reframing the question, and reasoning about the answers it brings, can we move beyond a binary, confrontational, and interpreter-centered perspective toward a more nuanced exploration of AI’s capabilities, its limitations, and the profession’s challenge to reposition itself in this evolving landscape.
The significance of this exercise lies not in the answer itself but in the questions that follow from answering this new question positively or, conversely, negatively. The original question, by contrast, stifles any fruitful discussion of the future of interpreting, leaves little room for actionable conclusions, and, more critically, frames the reality at stake in a pre-determined way. We should not forget that word choice does a great deal of framing, both of the speaker’s standpoint and of the audience being addressed. And frames are never neutral: they largely pre-shape the stance one is likely to take. For instance, machine interpreting is often framed within the LSP/interpreter community as a technology meant to “replace” interpreters, while outside that bubble it is more commonly seen as a technology designed to “support” people.
The new question, Can AI match human performance in interpreting?, on the other hand, allows, in my opinion, for more subtle and interesting discussions. Whether you personally answer it positively or negatively is not the point (only time will tell). I am convinced, however, that the thought experiment sparked by this question will bring up issues that truly matter to the community.
For my part, my answer is cautiously clear: yes, AI will be able to match human performance in the main key features that define what interpreting is. I recognize that many in the interpreting community are naturally resistant to the notion that machines could achieve human-level performance. However, my involvement in observing progress and in developing these systems, combined with my understanding of the nuances of oral communication and interpreting, leads me to this conclusion. While this is not the place to explain, from either a technical or an interpreting-oriented perspective, why I firmly believe this is the case (see my blog post on 10 Things I Learned about AI and Humans for some hints), it should suffice to note that AI is currently advancing rapidly in all the key features that make up the ability to interpret: language understanding, reasoning, grounding in communicative interaction, emotional decoding and reproduction, multimodality, cultural comprehension, and cultural transfer, all capabilities traditionally thought to be uniquely human. While there is considerable debate in the scientific community about how to define each of these concepts, the progress is unfolding before our eyes. It would be a big mistake to underestimate this trend.
For the record, it must be said that while we are seeing massive progress in each of these individual, fundamental areas, we are only at the beginning of the journey to seamlessly integrate them into the higher-level task of interpreting. Much of this is therefore still speculation. In the short term, however, the significant advances in the individual components, what I call the features of interpreting, will eventually converge, raising the performance of AI interpreters to a level that is, at the very least, astonishingly similar to that of humans. Note that my words are chosen carefully: I speak of the “main” key features of interpreting and of performance “similar” to that of humans. Nuances are important.
Whether it takes two, five, ten, or twenty years, this tipping point will come, and my suggestion is that the entire debate on the future of interpreting, within the profession, academia, and beyond, should be grounded in this assumption. Answering the question Can AI match human performance in interpreting?, whether positively, as I do, or negatively, as others might, allows us to shift the focus of the debate toward the opportunities and risks that will really count for the future of the field. Taking a further step, the best way to frame the entire discussion, in my opinion, becomes:
In an era where highly capable machines can perform at a level comparable to humans, will there still be a role for interpreters, and if so, what does that role look like, and why?
By asking ourselves this question, we open the door to a series of important and thought-provoking discussions, questions that must be explored, studied, and debated. More importantly, this process creates the opportunity to develop practical, actionable insights that will benefit both users of interpreting services and the interpreters themselves. These are some of the first questions that come to my mind:
- How do we define and measure comparability between human and AI performance? And more importantly: which humans are we referring to? A layperson acting as an interpreter, the average professional interpreter, or a highly qualified, highly experienced professional?
- AI and humans are inherently different; each might excel in different areas/features when it comes to interpreting. What are these areas, and how will they impact interpreting performance?
- Will these differences really matter? And, if yes, to whom?
- If machines can perform at a comparable level, what role will human expertise play? What unique value can it bring? And who will cherish this value?
- What role will trust play?
- What role will (the lack of) embodiment play?
- What role will certifications have?
- How will the market evolve and how many professional interpreters will be needed? In which segments? And what are the consequences for the training of those interpreters?
But more importantly: what other, perhaps even more essential questions should we be asking?
So the real overarching proposition moving forward should not be to ask whether AI will replace interpreters, but when and why stakeholders will prefer humans over similarly performing machines. I believe there will be compelling reasons for this, but identifying them is no easy task. The various stakeholders (interpreters, buyers, regulators) must start thinking critically about this shift now, so they are prepared when AI reaches what I would call, for lack of a better term, the Singularity in Interpreting. Oversimplified answers will no longer serve. If there is to be space for professional interpreters even in the presence of highly capable machines, as I believe there will be, that space needs to be clearly identified.
A final word of caution: it is difficult, perhaps impossible, to predict exactly when this tipping point will arrive. But I am convinced it will, and it seems close enough that the community should start paying serious attention now. Even if you remain skeptical (for whatever reason, and you are fully entitled to be), approaching the topic through these questions will help people better understand the transformation underway and better position the field of interpreting. That is surely more useful than reducing the discussion to a simplistic binary: “Will AI replace interpreters?”
1. See this interesting paper on the relationship between reframing questions and research outcomes: https://brill.com/display/book/edcoll/9789087909086/BP000003.xml