Updated: Aug 15
The iConference in March 2023 will not be the first to be multilingual, but it will be the first to accept papers in three languages in addition to English: Chinese, Spanish, and Portuguese. All of these languages are among the world’s top ten most widely spoken first or second languages. Submissions to the iSchools Doctoral Dissertation Award may also be in the original language of the work, though the review process requires a 10-page English-language summary. This means that the quality of machine translation could be an issue in reviewing papers and doctoral dissertations, as well as in reading them after the conference.
Machine translation is a complex process that involves the cultural connotations of words as well as their dictionary meanings. Closely related languages translate more reliably than languages with few common roots. Machine translation has long been the subject of scholarly analysis. A recent paper by Irene Rivera-Trigueros of the Universidad de Granada “… focused on the specialised literature produced by translation experts, linguists, and specialists in related fields …”.¹ She found that Google was the most used machine translation service.
Another study, by Han, Jones, and Smeaton (2021), discusses quality assessment and notes that machine translation “outputs are still far from reaching human parity”.² The authors also examine human judgement factors: “Human assessors are asked to determine whether the translation is good English without reference to the correct translation. Fluency evaluation determines whether a sentence is well-formed and fluent in context.” ²
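To make the automatic side of this assessment concrete, the core of metrics such as BLEU is clipped n-gram precision: the fraction of word sequences in a machine translation that also appear in a human reference translation. The following is a minimal illustrative sketch, not any system’s actual scoring code; the function name and example sentences are our own.

```python
from collections import Counter

def ngram_precision(reference, candidate, n=2):
    """Clipped n-gram precision: the share of the candidate's n-grams
    that also occur in the reference, with each n-gram's count clipped
    to its count in the reference (the building block of BLEU)."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    ref = ngrams(reference.split(), n)
    cand = ngrams(candidate.split(), n)
    if not cand:
        return 0.0
    overlap = sum(min(count, ref[g]) for g, count in cand.items())
    return overlap / sum(cand.values())

reference = "the cat sat on the mat"
print(ngram_precision(reference, "the cat sat on the mat", n=2))  # 1.0: identical
print(ngram_precision(reference, "a cat lay on a mat", n=1))      # 0.5: half the words match
```

Note what such a metric cannot see: a translation scoring well on n-gram overlap may still miss the register or rhetorical force of the original, which is why the study above pairs automatic metrics with human fluency judgements.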
The German-based translation service DeepL claims significantly greater accuracy than its major competitors.³ Even so, accuracy does not guarantee that a translation carries the same persuasive power as the original. A machine translation may get all the facts right yet still miss nuances of rhetoric. An awareness of these limitations is important when reviewing and reading translated work.
1: Rivera-Trigueros, Irene. 2022. “Machine Translation Systems and Quality Assessment: A Systematic Review”. Language Resources & Evaluation 56: 593–619. https://doi.org/10.1007/s10579-021-09537-5.
2: Han, Lifeng, Gareth J. F. Jones, and Alan F. Smeaton. 2021. “Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods”. arXiv. http://arxiv.org/abs/2105.03311.
3: “Why DeepL?” n.d. Accessed 23 August 2022. https://www.deepl.com/en/whydeepl.