
Human + Machine Translation: A TSH Colloquium with Dan Liebling

The Translation Studies Hub hosted its second colloquium event of the year on October 30, featuring a lecture from Daniel Liebling on the relationship between human and machine translation systems. As a Staff Engineering Manager at Google Research, Liebling works on human language technologies, including speech recognition systems and translation engines. During his talk, he took on the role of historian, tracing the development of machine translation systems from their ambitious beginnings in the 1940s to contemporary goals for neural machine translation.

The concept of automatic translation was first proposed in 1949, when scientists became interested in the idea that machines could learn to “speak” many different languages if they were simply given a formal structure of the fundamental features of language. Early forays into translation engineering were inspired by US imperial interests during the Cold War. By 1954, Georgetown University and IBM had built a computer that could translate Russian into English. At the time, it was an impressive accomplishment: the machine held a dictionary of 250 words and was capable of looking up individual words, ignoring those it did not know. In total, it had 9,000 bytes of memory. (For comparison, the average smartphone now holds 14 million times that amount.)
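To make that early approach concrete, here is a toy sketch in Python of word-by-word dictionary lookup that skips unknown words. The vocabulary below is invented for illustration and is not the actual Georgetown-IBM dictionary.

```python
# Toy illustration of 1950s-style word-by-word dictionary lookup.
# The entries below are invented examples, not the historical 250-word dictionary.
RU_EN_DICTIONARY = {
    "мы": "we",
    "передаем": "transmit",
    "мысли": "thoughts",
    "посредством": "by means of",
    "речи": "speech",
}

def translate_word_by_word(sentence: str) -> str:
    """Look up each word individually; ignore words not in the dictionary."""
    words = sentence.lower().split()
    translated = [RU_EN_DICTIONARY[w] for w in words if w in RU_EN_DICTIONARY]
    return " ".join(translated)

print(translate_word_by_word("Мы передаем мысли посредством речи"))
# -> we transmit thoughts by means of speech
```

As the sketch makes plain, nothing here attends to grammar or context; each word is swapped out in isolation, which is exactly the limitation later systems set out to overcome.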

In the US, the CIA drove machine translation projects for several years, until the government concluded in the mid-1960s that flawed computational systems were unnecessary for work that human beings could do with far more skill and accuracy. Nevertheless, development of translation engines continued internationally and eventually made its way into the homes of everyday consumers. For example, the 1980s Canadian innovation METEO translated weather bulletins between English and French and was the first translation system to run on a personal computer. METEO remained in popular use until 2001.

As access to the internet became an increasingly normal part of everyday life in the mid-1990s, so did access to online translation systems. Many of us remember the humble beginnings of websites like Babelfish, which used a translation engine called SYSTRAN to perform instant translations across 36 language pairs. Today, the predominant web translation system is Google Translate, which launched in 2006 using SYSTRAN but now relies exclusively on Google’s own technologies.

Ongoing work on machine translation is now focused on neural machine translation. Neural translation systems are designed to find patterns across languages and to produce translations that capture the appropriate context and meaning of a sentence. A word-for-word translation falls flat if it doesn’t make sense; neural translation systems attempt to remedy the frequent awkwardness of machine translations by learning to speak with accuracy as well as fluency.
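As a rough illustration of how accessible neural translation has become, here is a minimal sketch that translates a whole sentence with an openly available neural model. It assumes the Hugging Face transformers library and the public “Helsinki-NLP/opus-mt-en-fr” MarianMT model; this is an illustration only, not the system Google uses.

```python
# Minimal sketch: translate a full sentence with an open-source neural model.
# Assumes: pip install transformers sentencepiece torch
from transformers import pipeline

# A publicly available English-to-French MarianMT model, used here for illustration.
translator = pipeline("translation_en_to_fr", model="Helsinki-NLP/opus-mt-en-fr")

# The model generates the whole target sentence, conditioning on the full
# source sentence rather than looking up words one at a time.
result = translator("The spirit is willing, but the flesh is weak.")
print(result[0]["translation_text"])
```

Unlike the word-by-word lookup sketched earlier, the neural model weighs each output word against the entire source sentence, which is what lets it handle context and idiom rather than producing a literal gloss.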

As exciting as these innovations are, none of them will render human beings obsolete. Liebling enthusiastically reminds us that the human is always at the center of science. The complexity of language, its artfulness and inconsistency, is something that AI is still striving to master.

Don’t miss our next colloquium with Professor Vicente Rafael on Friday, December 4. Click here to register and learn more.
