
Auto-scoring program for open-ended questions developed

Posted September 21, 2017 08:32

Updated September 21, 2017 08:44


While there have been continuous demands to introduce open-ended questions on tests such as Korea's national scholastic aptitude test, the difficulty of scoring them has hampered improvements in testing methods. Against this backdrop, a technology has now been developed to score open-ended questions on large-scale examinations.

According to the Korea Institute for Curriculum and Evaluation on Wednesday, it recently acquired four patents related to scoring open-ended answers through the development of its "auto-scoring program for Korean supply-type items."

Korean is known as one of the trickiest languages for a machine to process: similar meanings can be expressed with different postpositions or endings, word order is relatively free, and words are frequently omitted. "By leveraging our accumulated language-processing technology and machine learning-based automatic categorization, the program can precisely score answers of up to one sentence," the institute explained.
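The institute has not disclosed the internals of its patented program, but the general approach it describes, categorizing free-text answers with a trained model, can be sketched with off-the-shelf tools. The Python example below is a hypothetical illustration, not the institute's system: the toy question, the graded Korean answers, and the scikit-learn pipeline are all assumptions, with character n-grams standing in for Korean-specific processing because particles and flexible word order make whole-word matching unreliable.

    # A minimal sketch of machine learning-based answer categorization.
    # This is NOT the institute's patented program; the toy data and the
    # scikit-learn pipeline are illustrative assumptions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical human-graded answers to one question,
    # "What is photosynthesis?" (1 = correct, 0 = incorrect).
    answers = [
        "광합성은 빛을 이용해 양분을 만든다",   # makes nutrients using light
        "빛 에너지로 양분을 만드는 과정이다",   # making nutrients with light energy
        "식물이 호흡하는 과정이다",             # process of plant respiration (wrong)
        "물을 뿌리로 흡수하는 것",              # absorbing water through roots (wrong)
    ]
    labels = [1, 1, 0, 0]

    # Character n-grams tolerate Korean's particles and free word order
    # better than whole-word features would.
    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
        LogisticRegression(),
    )
    model.fit(answers, labels)

    # Categorize a new, unseen answer; with this toy data it should fall
    # on the "correct" side, as it shares n-grams with correct examples.
    print(model.predict(["빛을 받아 양분을 만드는 작용"]))

A real system of the kind the article describes would need far larger sets of human-graded answers and Korean morphological analysis, both of which this sketch omits.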

When it was used to score sample answers from the 2016 national scholastic test, the auto-scoring program showed higher accuracy than human scorers. When multiple scorers used the program to mark questions requiring word- or phrase-level answers, their results agreed at 100 percent on the Korean and science tests and at 99.6 percent on the social studies test.

When multiple scorers marked answers manually, however, their agreement fell to 97.5 percent on the Korean test and 99.3 percent on the science test. The lower consistency is attributable to human limitations: scorers tire when marking large numbers of answers and find it difficult to remain objective. "Once an answer-marking method using tablet PCs is introduced, the auto-scoring program can be used for the national scholastic test as well," said Noh Eun-hee, a researcher at the institute.



Deok-Young Yu firedy@donga.com