Assessing Language Proficiency through AI Chatbot-Based Evaluations
Abstract
This study investigates the effectiveness of AI chatbots for assessing language proficiency. It employs a mixed-methods approach, combining quantitative analysis of chatbot evaluation scores with qualitative analysis of participant feedback. The AI chatbot-based evaluations were conducted with a sample of language learners from the State University of Surabaya and covered several linguistic dimensions: grammar, vocabulary, pronunciation, and fluency. The results show that the chatbot-based evaluations yield accurate and consistent assessments of language proficiency. A strong correlation between the chatbot evaluation scores and standardized language proficiency scores indicates the reliability of the chatbot assessments. Participants also reported positive experiences, highlighting the usefulness of the instant feedback: the chatbot identified language errors and offered targeted suggestions for improvement, supporting the language learning process. These findings support the effectiveness and potential of AI chatbot-based evaluations for assessing language proficiency. Integrating chatbots into language education can provide valuable tools for both learners and educators, offering objective assessments and personalized feedback.