Dr. Claudio Fantinuoli
November 12, 2025

What Lies Beyond Meta and Translated’s Advances in Supporting Low-Resource Languages

Two recent announcements — Meta’s Omnilingual ASR and Translated’s Lara 200 Languages — remind us that progress in AI-driven language technology is far from plateauing. Together, they demonstrate how automatic speech recognition and large language models for translation tasks, the two core components of current machine interpreting systems, are being extended to an impressive range of low-resource languages while simultaneously improving output quality.

Meta’s Omnilingual ASR can recognize speech in over 1,600 languages, using techniques that generalize to thousands more. It represents a decisive leap toward linguistic inclusivity and low-resource language coverage. Translated’s Lara Think model, meanwhile, extends to 200 languages and integrates reasoning to improve translation quality by about 40% in human evaluations. These developments strengthen the foundational layer of many language-related applications, and hence of machine interpreting: the seamless conversion of spoken content to text, its translation, and its eventual re-voicing across languages.
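That three-stage cascade can be sketched as a minimal pipeline. To be clear, every class and function name below is a hypothetical stand-in for illustration, not the actual Meta or Translated API:

```python
# Minimal sketch of a cascaded machine-interpreting pipeline: ASR -> MT -> TTS.
# All components are hypothetical placeholders, not real model interfaces.

from dataclasses import dataclass

@dataclass
class Utterance:
    text: str
    lang: str

class ASRModel:
    """Stand-in for a multilingual speech recognizer (an Omnilingual-ASR-style model)."""
    def transcribe(self, audio: bytes, lang: str) -> Utterance:
        # A real model would decode the audio; the sketch returns a fixed string.
        return Utterance(text="hello world", lang=lang)

class TranslationModel:
    """Stand-in for an LLM-based translator (a Lara-style model)."""
    def translate(self, utt: Utterance, target_lang: str) -> Utterance:
        lexicon = {"hello world": "hallo Welt"}  # toy lexicon for the sketch
        return Utterance(text=lexicon.get(utt.text, utt.text), lang=target_lang)

class TTSModel:
    """Stand-in for a synthesizer that re-voices the translation."""
    def synthesize(self, utt: Utterance) -> bytes:
        return utt.text.encode("utf-8")  # placeholder "audio"

def interpret(audio: bytes, src: str, tgt: str) -> bytes:
    """Chain the three stages: speech -> transcript -> translation -> speech."""
    asr, mt, tts = ASRModel(), TranslationModel(), TTSModel()
    transcript = asr.transcribe(audio, src)
    translation = mt.translate(transcript, tgt)
    return tts.synthesize(translation)

print(interpret(b"...", "en", "de"))  # b'hallo Welt'
```

The sketch makes the article’s point concrete: improving either the ASR or the translation component independently raises the quality of the whole cascade.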

Figure: Meta Omnilingual ASR support for low-resource languages

From Core Models to Real-World Interpreting

Yet ASR and MT are only part of the equation. True machine interpreting requires handling real-time constraints, speaker diarisation, turn-taking, and contextual adaptation. The progress seen in Meta and Translated’s models reduces core error rates and widens coverage, but it also exposes new “extension points”:

  • adapting to specialized domains and glossaries,
  • handling noisy, low-resource, or multilingual environments,
  • integrating non-verbal and contextual cues, and
  • creating hybrid human-machine workflows suitable for professional contexts.

These are not theoretical challenges, but elements that define the next frontier of applied research and innovation in speech-to-speech translation. Much work is still to be done.
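The first extension point, adapting to specialized domains and glossaries, is often approximated in practice by enforcing client terminology on top of the core model’s output. A toy post-editing step (all names and the glossary are invented for illustration) might look like:

```python
# Toy glossary enforcement: replace generic wording in a machine translation
# with client-approved domain terminology after the MT step.

def apply_glossary(translation: str, glossary: dict[str, str]) -> str:
    """Replace each generic term with its approved domain-specific term."""
    for generic, approved in glossary.items():
        translation = translation.replace(generic, approved)
    return translation

# Hypothetical medical glossary supplied by a client.
medical_glossary = {"heart attack": "myocardial infarction"}

raw = "The patient suffered a heart attack last year."
print(apply_glossary(raw, medical_glossary))
# The patient suffered a myocardial infarction last year.
```

Real systems use more robust techniques, such as constrained decoding or prompt-level terminology injection, but the principle is the same: the core model’s coverage is general, and domain fidelity is layered on top.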

Beyond Silicon Valley

Significantly, these advances are not confined to Silicon Valley. Translated’s achievements, developed in Rome, show that high-impact innovation in AI translation is also emerging from Europe, often within stricter data-governance frameworks and multilingual realities that make it difficult to innovate at the same pace as the rest of the world. The open-source release of Meta’s ASR further empowers European research groups and startups to build on global progress rather than depend solely on proprietary systems.

Figure: Translated’s Lara quality improvement gains

While Europe’s more conservative attitude towards AI might slow down innovation, it can also create distinctive strengths. For Italy and Europe, this means a unique opportunity: to couple technical excellence with ethical and linguistic diversity, ensuring that AI interpreting technologies evolve within environments that value accountability, transparency, and professional standards.

A Realistic Optimism

The message from these two announcements is clear: even within the current AI paradigm, there remains vast potential for improvement. The next breakthroughs may not come from a new form of “intelligence”, but from better integration, reasoning, and adaptation within existing architectures, together with the willingness to move in that direction, as both companies have shown.

For those of us working at the intersection of interpreting and AI, this is encouraging. The building blocks are stronger than ever; now is the time to turn them into systems that are not only powerful but also trustworthy, inclusive, and human-centered.

Technical Summary — Meta’s Omnilingual ASR

Meta’s open-source Omnilingual ASR suite offers pre-trained models covering over 1,600 languages, released publicly under an open, non-commercial license (CC BY-NC 4.0). The models and training code are freely accessible on GitHub via Meta’s fairseq2 framework. Several model sizes are available, ranging from lightweight variants that can run on consumer-grade GPUs and modern CPUs to larger versions optimized for research clusters. This open release lowers the entry barrier for developers and academic groups worldwide, enabling the integration of high-quality, multilingual speech recognition into local or on-device machine-interpreting systems, without dependence on proprietary cloud APIs.

Technical Summary — Translated’s Lara

Translated’s Lara 200 and Lara Think models extend neural machine translation to 200 languages, integrating a reasoning-based architecture designed to enhance contextual understanding and explainability. Unlike many research models, Lara is production-ready and deployed across Translated’s enterprise platform, available through APIs for large-scale or embedded use. While not open-source, its commercial licensing supports integration into professional and institutional workflows, combining high linguistic coverage with human-in-the-loop post-editing options. By emphasizing transparency, domain adaptability, and multilingual scalability, Lara represents a practical advancement of LLM-based translation within the current AI paradigm.


Claudio Fantinuoli is professor, innovator and consultant for language technologies applied to voice and translation. He founded InterpretBank, the best known AI-tool for professional interpreters, and developed one of the first commercial-grade machine interpreting systems.
