Relationship Between Testing Time and Score in CBT

Hideo Hirose, Hiroshima Institute of Technology
Keywords: time duration to solve, problem difficulty, online testing, correct answer rate, item response theory

Abstract

By examining the relationship between the number of correct answers and the time that students spend taking tests, we have found three typical patterns. The pattern of time spent on a test for each number of correct answers depends on the difficulty of the questions. For easy problems, some smart students solve them in little time, while students with low academic ability need much more time. For moderately difficult problems, every student requires a similar amount of time. For difficult problems, many students tend to use the full pre-specified time, but some students with low ability may give up on the problems soon.
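
As an illustration of the kind of analysis described above (a sketch, not the author's actual procedure), the following Python snippet groups response-time records by the number of correct answers and summarizes the time distribution for each group; the data file name and column names are assumptions made for the example.

```python
# Illustrative sketch only: group response times by number of correct
# answers to look for the three patterns described in the abstract.
# The file "test_records.csv" and its columns (score, time_sec) are
# hypothetical assumptions, not the paper's actual data layout.
import csv
from collections import defaultdict
from statistics import mean, median

times_by_score = defaultdict(list)  # score -> list of time durations (s)

with open("test_records.csv", newline="") as f:
    for row in csv.DictReader(f):
        times_by_score[int(row["score"])].append(float(row["time_sec"]))

# For each number of correct answers, summarize the spread of times.
# Easy tests: wide spread (fast high-ability, slow low-ability students).
# Moderate tests: narrow spread. Difficult tests: mass near the time
# limit, plus a cluster of early give-ups.
for score in sorted(times_by_score):
    t = times_by_score[score]
    print(f"score={score:2d}  n={len(t):4d}  "
          f"mean={mean(t):7.1f}s  median={median(t):7.1f}s  "
          f"min={min(t):7.1f}s  max={max(t):7.1f}s")
```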

Published
2019-05-31