Key Factor Not to Drop Out is to Attend Lectures

  • Hideo Hirose, Hiroshima Institute of Technology
Keywords: learning check testing, placement test, follow-up program, item response theory, multiple linear regression, term examination

Abstract

To identify key factors in preventing dropout using learning analytics, we augmented the learning check testing ability scores obtained at each lecture with accumulated data such as the number of successes in learning check testing and the number of attendances at follow-up program classes. We then found key factors strongly related to students at risk. They are the following. 1) Badly failed students (term examination score range 0-39) tend to be absent from regular classes, fail the learning check testing even when they do attend, and are very reluctant to attend follow-up program classes. 2) Successful students (term examination score range 60-100) attend classes and obtain good scores in every learning check testing. 3) Students who failed, but not badly (term examination score range 40-59), show features of both the 0-39 group and the 60-100 group. Therefore, attending lectures is crucial to avoid dropping out. Students who failed the learning check testing in more than half of all testing occasions almost always failed the term examination, which could lead to dropout. Conversely, students who passed the learning check testing in more than two-thirds of all testing occasions obtained better scores in the term examination.
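The pass-rate thresholds reported above (failing more than half of the learning check tests vs. passing more than two-thirds of them) can be sketched as a simple classification rule. This is an illustrative sketch only, not the authors' analysis code; the student records, the function name `risk_category`, and the category labels are hypothetical.

```python
# Illustrative sketch of the abstract's pass-rate thresholds.
# Data and labels are hypothetical, not taken from the study.

def risk_category(passes: int, total_tests: int) -> str:
    """Classify a student by learning-check-testing pass rate.

    The abstract reports that failing more than half of all tests is
    strongly associated with failing the term examination, while passing
    more than two-thirds is associated with better term-exam scores.
    """
    rate = passes / total_tests
    if rate < 0.5:        # failed more than half of the tests
        return "at risk"
    elif rate > 2 / 3:    # passed more than two-thirds of the tests
        return "likely successful"
    else:
        return "intermediate"

# Hypothetical students: (tests passed, total tests held)
students = {"A": (2, 10), "B": (8, 10), "C": (6, 10)}
for name, (passes, total) in students.items():
    print(name, risk_category(passes, total))
```

In the paper's terms, the "at risk" branch corresponds to the 0-39 term-examination group and the "likely successful" branch to the 60-100 group; the intermediate band mirrors the 40-59 group, which showed features of both.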

References

R. de Ayala, The Theory and Practice of Item Response Theory. Guilford Press, 2009.

N. Elouazizi, Critical Factors in Data Governance for Learning Analytics, Journal of Learning Analytics, 1, 2014, pp. 211-222.

D. Gasevic, S. Dawson, and G. Siemens, Let's not forget: Learning analytics are about learning, TechTrends, 59, 2015, pp. 64-71.

R. Hambleton, H. Swaminathan, and H. J. Rogers, Fundamentals of Item Response Theory. Sage Publications, 1991.

H. Hirose, Meticulous Learning Follow-up Systems for Undergraduate Students Using the Online Item Response Theory, 5th International Conference on Learning Technologies and Learning Environments, 2016, pp. 427-432.

H. Hirose, M. Takatou, Y. Yamauchi, T. Taniguchi, T. Honda, F. Kubo, M. Imaoka, and T. Koyama, Questions and Answers Database Construction for Adaptive Online IRT Testing Systems: Analysis Course and Linear Algebra Course, 5th International Conference on Learning Technologies and Learning Environments, 2016, pp. 433-438.

H. Hirose, Learning Analytics to Adaptive Online IRT Testing Systems “Ai Arutte” Harmonized with University Textbooks, 5th International Conference on Learning Technologies and Learning Environments, 2016, pp. 439-444.

H. Hirose, M. Takatou, Y. Yamauchi, T. Taniguchi, F. Kubo, M. Imaoka, and T. Koyama, Rediscovery of Initial Habituation Importance Learned from Analytics of Learning Check Testing in Mathematics for Undergraduate Students, 6th International Conference on Learning Technologies and Learning Environments, 2017, pp. 482-486.

H. Hirose, Dually Adaptive Online IRT Testing System, Bulletin of Informatics and Cybernetics Research Association of Statistical Sciences, 48, 2016, pp. 1-17.

H. Hirose, Difference Between Successful and Failed Students Learned from Analytics of Weekly Learning Check Testing, Information Engineering Express, Vol. 4, No. 1, 2018, pp. 11-21.

H. Hirose, A Large Scale Testing System for Learning Assistance and Its Learning Analytics, Proceedings of the Institute of Statistical Mathematics, Vol. 66, No. 1, 2018, pp. 79-96.

W. J. van der Linden and R. K. Hambleton, Handbook of Modern Item Response Theory. Springer, 1996.

T. Sakumura and H. Hirose, Bias Reduction of Abilities for Adaptive Online IRT Testing Systems, International Journal of Smart Computing and Artificial Intelligence (IJSCAI), 1, 2017, pp. 57-70.

G. Siemens and D. Gasevic, Guest Editorial - Learning and Knowledge Analytics, Educational Technology & Society, 15, 2012, pp. 1-2.

Y. Tokusada and H. Hirose, Evaluation of Abilities by Grouping for Small IRT Testing Systems, 5th International Conference on Learning Technologies and Learning Environments, 2016, pp. 445-449.

R. J. Waddington, S. Nam, S. Lonn, and S. D. Teasley, Improving Early Warning Systems with Categorized Course Resource Usage, Journal of Learning Analytics, 3, 2016, pp. 263-290.

A. F. Wise and D. W. Shaffer, Why Theory Matters More than Ever in the Age of Big Data, Journal of Learning Analytics, 2, 2015, pp. 5-13.

The R Project for Statistical Computing, https://www.r-project.org/about.html

Published
2019-05-31
Section
Technical Papers (Information and Communication Technology)