
Speech recognition for all: Carnegie Mellon's pioneering project to reach 2,000 languages

Carnegie Mellon University's project paves the way for potentially 2,000 languages to benefit from automatic speech recognition, marrying technological advancements with cultural preservation.

The linguistic diversity across the globe is vast, yet a significant portion of the world’s languages remain untouched by modern speech recognition technologies. Researchers at Carnegie Mellon University have set forth on a groundbreaking path to bridge this divide. Below we explore this promising project aimed at bringing automatic speech recognition to a broader spectrum of languages.

The current gap in language technologies

Although between 7,000 and 8,000 languages are spoken worldwide, only a fraction benefit from contemporary language technologies such as voice-to-text transcription, automatic captioning, instantaneous translation, and voice recognition. With most effort centred on widely spoken languages, many others are left without the advantages of modern technology. The Carnegie Mellon University project seeks to expand the reach of automatic speech recognition tools from roughly 200 languages today to as many as 2,000.

Bridging the divide with new models

A team at Carnegie Mellon University’s Language Technologies Institute (LTI) is spearheading this initiative. They are focused on simplifying the data requirements needed for speech recognition models, aiming to construct an inclusive language model that can cater to diverse linguistic needs. Their recent work titled “ASR2K: Speech Recognition for Around 2,000 Languages Without Audio” was presented at Interspeech 2022 in South Korea.

Reimagining data requirements

Most speech recognition models are trained on paired audio and text data. While text is widely available for many languages, transcribed audio is not, and that gap has been a stumbling block. The LTI team seeks to remove this obstacle by concentrating on linguistic elements common across languages.

Instead of relying on phonemes, the abstract sound units specific to each language, the team works with phones, the physical sounds of speech themselves. Because phones are shared across languages, the model can reuse the same underlying units rather than requiring a separate model for every language, sharply reducing the effort needed to support each new one. The sketch below illustrates the idea.
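The gist can be shown with a small sketch. This is not the CMU team's code: the language codes, phoneme-to-phone mappings, and data structures are purely hypothetical, and only the underlying idea, mapping language-specific phonemes onto a shared phone inventory so one acoustic model can serve many languages, comes from the project description.

```python
# Illustrative sketch only: each language's phonemes are projected onto a
# shared, universal phone inventory, so a single phone-level acoustic model
# can serve languages it was never trained on. All mappings are hypothetical.

# Hypothetical language-specific phoneme -> universal phone mappings.
PHONEME_TO_PHONE = {
    "spa": {"b": "b", "v": "b", "r": "r"},   # Spanish: /b/ and /v/ collapse to [b]
    "eng": {"b": "b", "v": "v", "r": "ɹ"},   # English keeps the /b/-/v/ contrast
}

def phonemes_to_phones(lang: str, phonemes: list[str]) -> list[str]:
    """Project a language-specific phoneme sequence onto shared phones."""
    mapping = PHONEME_TO_PHONE[lang]
    return [mapping.get(p, p) for p in phonemes]

# A phone-level model trained on high-resource languages can then score these
# shared units for a low-resource language with little or no audio of its own.
print(phonemes_to_phones("spa", ["b", "v", "r"]))  # ['b', 'b', 'r']
print(phonemes_to_phones("eng", ["b", "v", "r"]))  # ['b', 'v', 'ɹ']
```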

The phylogenetic approach

Pairing the model with a phylogenetic tree, a map of how languages are related, lets the team approximate speech models for languages with no audio data by borrowing from their closest relatives. This strategy has pushed the project's scope from the realm of 100 or 200 languages to a potential 2,000.
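To make the tree idea concrete, here is a minimal sketch of how a zero-resource language might borrow statistics from its relatives, weighting closer languages more heavily. The language names, distances, and phone priors are invented for illustration; the published ASR2K system is considerably more sophisticated.

```python
# Illustrative sketch only: approximate parameters for a language with no
# audio by combining its relatives' models, weighted by inverse tree distance.
# Languages, distances, and "phone priors" below are all hypothetical.

# Hypothetical tree distances from the zero-resource target language to
# relatives that do have trained phone-level models.
TREE_DISTANCE = {"lang_A": 1.0, "lang_B": 2.0, "lang_C": 4.0}

# Hypothetical per-language parameters (here, phone prior probabilities).
PHONE_PRIORS = {
    "lang_A": {"a": 0.50, "t": 0.30, "k": 0.20},
    "lang_B": {"a": 0.40, "t": 0.40, "k": 0.20},
    "lang_C": {"a": 0.30, "t": 0.30, "k": 0.40},
}

def approximate_priors(distances: dict[str, float],
                       priors: dict[str, dict[str, float]]) -> dict[str, float]:
    """Weight each relative by inverse tree distance and average its priors."""
    weights = {lang: 1.0 / d for lang, d in distances.items()}
    total = sum(weights.values())
    approx: dict[str, float] = {}
    for lang, w in weights.items():
        for phone, p in priors[lang].items():
            approx[phone] = approx.get(phone, 0.0) + (w / total) * p
    return approx

# Closer relatives (smaller tree distance) dominate the approximation.
print(approximate_priors(TREE_DISTANCE, PHONE_PRIORS))
```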

Impact and future prospects

Although still in its early stages, the research has shown promising signs, improving existing language approximation tools by 5%. The LTI team's aspiration extends beyond merely expanding the reach of speech recognition; their work also represents a move towards cultural preservation. Each language embodies a unique cultural story, which makes linguistic preservation paramount. As the model matures, technologies such as VoIP phone systems may adopt these expanded language capabilities, making global communication more inclusive and more sensitive to linguistic diversity.

Conclusion

Carnegie Mellon University’s project is an ambitious step towards inclusive global communication. By working to simplify the data requirements and harnessing the power of shared linguistic elements, they are paving the way for potentially 2,000 languages to benefit from automatic speech recognition tools.

Not merely a technological achievement, this effort resonates with the deeper significance of preserving languages and their inherent cultural values. It offers a glimpse into a future where technological advancements marry cultural preservation, inspiring a new direction in language technologies.

Vey Law

Vey Law is a reporter at Breakthrough.
