O’Speak Version 1.0: A New Tool to Measure Segmental Pronunciation Features

Authors

  • Widya Ratna Kusumaningrum, Universitas Tidar
  • Boris Ramadhika, Universitas Tidar
  • Rolisda Yosintha, Universitas Tidar

DOI:

https://doi.org/10.31002/metathesis.v8i1.494

Keywords:

automated pronunciation evaluation, segmental features, human rating, o’speak

Abstract

The rapid advancement of technology has made it possible to integrate technology into L2 pronunciation assessment. While the investigation of L2 pronunciation is considered vital in English Language Teaching, the assessment of pronunciation has received the least attention. This study discusses the roles and impacts of O’Speak version 1.0 as an automated pronunciation assessment tool and compares it with human ratings in assessing L2 segmental pronunciation features produced by Indonesian learners of English. It aims to pilot an Android-based pronunciation test, O’Speak, developed using Feuerstein’s Mediated Learning Experience principles. Under a quasi-experimental research design, the study ran an independent two-sample t-test involving 50 participants. The results showed no statistically significant difference between O’Speak scores and human ratings in the segmental pronunciation assessment, indicating that the new tool performs on par with human raters. The study also identified several caveats in human rating that may account for this equivalence, including teaching experience, the halo effect, and rating experience.
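As a rough illustration of the statistical comparison described in the abstract, the sketch below runs an independent two-sample t-test on two sets of scores (automated tool vs. human raters) using SciPy. The variable names and score values are hypothetical placeholders, not the study's data; Welch's variant is used here only as one reasonable choice.

```python
# Minimal sketch of an independent two-sample t-test comparing automated
# scores with human ratings. The arrays below are hypothetical placeholders.
import numpy as np
from scipy import stats

# Hypothetical pronunciation scores on a 0-100 scale
ospeak_scores = np.array([78, 82, 75, 90, 68, 85, 80, 77, 73, 88])
human_ratings = np.array([80, 79, 74, 88, 70, 83, 82, 76, 75, 86])

# Two-sided independent t-test (Welch's version, no equal-variance assumption)
t_stat, p_value = stats.ttest_ind(ospeak_scores, human_ratings, equal_var=False)

print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# A p-value above the chosen alpha (e.g., 0.05) would indicate no statistically
# significant difference between the two sets of scores.
```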

Published

2024-07-09

How to Cite

Kusumaningrum, W. R., Ramadhika, B., & Yosintha, R. (2024). O’Speak Version 1.0: A New Tool to Measure Segmental Pronunciation Features. Metathesis: Journal of English Language, Literature, and Teaching, 8(1), 101–112. https://doi.org/10.31002/metathesis.v8i1.494