Our paper titled "Experience: Learner Analytics Data Quality for an eTextbook System" has been accepted for publication in the ACM Journal of Data and Information Quality (JDIQ).
We present lessons learned related to data collection and analysis from five years of experience with the eTextbook system OpenDSA.
The use of such cyberlearning systems is expanding rapidly in both formal and informal educational settings.
While the precise issues in any such project are idiosyncratic, depending on the data collection technology and the goals of the project, certain types of data collection problems are common.
We begin by describing the nature of the data transmitted between the student’s client machine and the database server, and our initial database schema for storing interaction log data.
We describe many problems that we encountered, with the nature of the problems categorized as syntactic-level data collection issues, issues with relating events to users, or issues with tracking users over time.
Relating events to users and tracking the time spent on tasks are both prerequisites to converting syntactic-level interaction streams to semantic-level behavior needed for higher-order analysis of the data.
Finally, we describe changes made to our database schema that helped to resolve many of the issues that we had encountered.
These changes help to advance our ultimate goal of encouraging a change from ineffective learning behavior by students to more productive behavior.
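To make the kind of schema discussed above concrete, here is a minimal sketch of an interaction-log table using Python's built-in sqlite3 module. The table and field names (`interaction_events`, `session_id`, `payload`, and so on) are illustrative assumptions, not OpenDSA's actual schema; the point is that relating events to users and tracking time on task requires stable user and session identifiers alongside each raw event.

```python
import sqlite3

# Hypothetical interaction-log schema; field names are illustrative
# assumptions, not OpenDSA's actual database design.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE interaction_events (
    event_id     INTEGER PRIMARY KEY,
    user_id      TEXT NOT NULL,   -- stable learner identifier
    session_id   TEXT NOT NULL,   -- groups events for time-on-task analysis
    exercise_id  TEXT,            -- which exercise or visualization
    event_type   TEXT NOT NULL,   -- e.g. 'answer', 'hint', 'slide-step'
    event_time   TEXT NOT NULL,   -- server-side UTC timestamp
    payload      TEXT             -- raw JSON sent by the client
)
""")

# Record one hypothetical event from a student's client.
conn.execute(
    "INSERT INTO interaction_events "
    "(user_id, session_id, exercise_id, event_type, event_time, payload) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    ("student42", "s1", "quicksort-pro", "answer",
     "2024-01-15T10:30:00Z", '{"correct": true}'),
)
row = conn.execute(
    "SELECT user_id, event_type FROM interaction_events").fetchone()
print(row)  # ('student42', 'answer')
```

Keeping the raw client payload alongside normalized identifier columns lets later analyses re-derive semantic-level behavior even if the event vocabulary changes.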
Title: Investigating Difficult Topics in a Data Structures Course Using Item Response Theory and Logged Data Analysis (paper 31)
We present an analysis of log data from a semester's use of the OpenDSA eTextbook system with the goal of determining the most difficult course topics in a data structures course. While experienced instructors can identify which topics students most struggle with, this often comes only after much time and effort, and does not provide the real-time analysis that might benefit an intelligent tutoring system. Our factors included the fraction of wrong answers given by students, results from Item Response Theory, and the rate of model answer and hint use by students. We grouped exercises by topic covered to yield a list of topics associated with the harder exercises. We found that a majority of these exercises were related to algorithm analysis topics. We compared our results to responses given by a sample of experienced instructors, and found that the automated results match the expert opinions reasonably well. We investigated reasons that might explain the over-representation of algorithm analysis among the difficult topics, and hypothesize that visualizations might help to better present this material.
Our paper titled "Exploring students learning behavior with an interactive eTextbook in Computer Science Courses" has been accepted for publication in the journal Computers in Human Behavior.
We present empirical findings from using an interactive electronic textbook (eTextbook) system named OpenDSA to teach sophomore- and junior-level Computer Science (CS) courses. The web-based eTextbook infrastructure allows us to collect large amounts of data that provide detailed information about students' study behavior. In particular, we were interested in whether students would attempt to manipulate the electronic resources so as to receive credit without genuinely working through the materials. We found that a majority of students do not read the text. On the other hand, we found evidence that students voluntarily complete additional exercises (after obtaining credit for completion) as a study aid prior to exams. We determined that visualization use was fairly high, even when credit for completion was not offered. Skipping to the end of slideshows was more common when credit for completion was offered, but also occurred when it was not. We also measured CS students' use of mobile devices for learning. Students generally did not associate their mobile devices with studying: the only time they accessed OpenDSA from a mobile device was for a quick look-up, and never for in-depth study.
The Digital Library Research Laboratory (DLRL) at Virginia Tech recently published the fourth and last of a series of books on Digital Libraries and the 5S (Societies, Scenarios, Spaces, Structures, Streams) approach. Drs. Edward Fox and Jonathan Leidig edited the book, and Morgan & Claypool published it. The fourth book focuses on digital library applications, and can be found here. I co-authored the chapter on digital library applications in education with Yinlin Chen.
Digital libraries for educational resources present many challenges, including the participation and collaboration of community members. Monika Akbar et al. addressed this problem by proposing DL 2.0, a model that integrates social knowledge into digital library content. But as educators move toward mixing and matching educational content to create compound learning objects, we need a whole new family of digital library services to interact with such resources. In the case of OpenDSA, visualizations and interactive exercises can be indexed as stand-alone resources in AlgoViz. But tutorials are created on demand and embed stand-alone visualizations and exercises. We should index tutorials as well, and make them available to educational digital library portals. This raises the question: should we index the raw source files or the HTML output?
Our paper describing the OpenDSA system architecture, and the design goals that led to the present version of the system, will be published in the Science of Computer Programming journal special issue on "software development concerns in the e-Learning domain".
Overview of the OpenDSA architecture.