Video games have assumed an important place in our daily lives. This has led to an increasing interest in the use of games for non-entertainment purposes, introducing the concept of Serious Games (SGs). In particular, SGs are being explored because of their potential to provide reliable assessments, but also because they can measure competences that would be difficult to measure using traditional forms of assessment. However, one of the key issues is that the assessment machinery has to be designed specifically for each game, which increases the time and effort required to design and implement Game-Based Assessments (GBAs). In this research, we introduce a novel approach to develop interoperable GBAs by: (1) designing and creating an ontology that can standardize the GBA area; (2) conducting a validation study on literature metrics to replicate them and designing novel metrics using data from different SGs; (3) conducting a case study that illustrates how our approach can be used in a real-life scenario with real data. Our results confirm that the designed ontology can be used to effectively perform GBAs, along with the metrics replicated and designed in the system. We expect our work to address the current limitations regarding GBA interoperability, thus allowing the deployment of Game-Based Assessments as a Service (GBAaaS).
During the last few years, there has been increasing attention paid to serious games (SGs), which are games used for non-entertainment purposes. SGs offer the potential for more valid and reliable assessments compared to traditional methods such as paper-and-pencil tests. However, the incorporation of assessment features into SGs is still in its early stages, requiring specific design efforts for each game and adding significant time to the design of Game-based Assessments (GBAs). In this research, we present a novel framework that performs interoperable GBAs by: (a) integrating a common GBA ontology model to process RDF data; (b) developing in-game metrics to infer useful information and assess learners; (c) integrating a service API to provide an easy way to interact with the framework. We then validate our approach through performance evaluation and two use cases, demonstrating its effectiveness in real-world scenarios with large-scale datasets. Our results show that the developed framework achieves excellent performance, replicating metrics from previous literature. We anticipate that our work will help alleviate current limitations in the field and facilitate the deployment of GBAs as a Service.
Over the last decade, there has been a large amount of research on technology-enhanced learning (TEL), including the exploration of sensor-based technologies. This research area has seen significant contributions from various conferences, including the European Conference on Technology-Enhanced Learning (EC-TEL). In this research, we present a comprehensive analysis that aims to identify and understand the evolving topics in the TEL area and their implications in defining the future of education. To achieve this, we use a novel methodology that combines a text-analytics-driven topic analysis and a social network analysis following an open science approach. We collected a comprehensive corpus of 477 papers from the last decade of the EC-TEL conference (including full and short papers), parsed them automatically, and used the extracted text to find the main topics and collaborative networks across papers. Our analysis focused on the following three main objectives: (1) Discovering the main topics of the conference based on paper keywords and topic modeling using the full text of the manuscripts. (2) Discovering the evolution of said topics over the last ten years of the conference. (3) Discovering how papers and authors from the conference have interacted over the years from a network perspective. Specifically, we used Python and the PdfToText library to parse and extract the text and author keywords from the corpus. Moreover, we employed the Gensim library's Latent Dirichlet Allocation (LDA) topic modeling to discover the primary topics from the last decade. Finally, the Gephi and NetworkX libraries were used to create co-authorship and citation networks. Our findings provide valuable insights into the latest trends and developments in educational technology, underlining the critical role of sensor-driven technologies in leading innovation and shaping the future of this area.
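The co-authorship network construction described above can be sketched in plain Python before handing the weighted edge list to NetworkX or Gephi for visualization. A minimal sketch with hypothetical toy data (the author names below are assumptions for illustration, not from the actual EC-TEL corpus):

```python
from collections import Counter
from itertools import combinations

# Hypothetical toy corpus: each paper is a list of author names.
papers = [
    ["Garcia", "Lopez", "Kim"],
    ["Garcia", "Kim"],
    ["Lopez", "Chen"],
]

# Build a weighted co-authorship edge list: an edge (a, b) gains
# weight 1 each time a and b co-author a paper.
edges = Counter()
for authors in papers:
    for a, b in combinations(sorted(authors), 2):
        edges[(a, b)] += 1

print(edges[("Garcia", "Kim")])  # Garcia and Kim co-authored 2 papers
```

Each `(author, author, weight)` triple can then be loaded directly as a weighted graph (e.g. with `networkx.Graph.add_weighted_edges_from`) to compute communities and centrality.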
Technology has become an essential part of our everyday life, and its use in educational environments keeps growing. In addition, games are one of the most popular activities across cultures and ages, and there is ample evidence that supports the benefits of using games for assessment. This field is commonly known as game-based assessment (GBA), which refers to the use of games to assess learners' competencies, skills, or knowledge. This paper analyzes the current status of the GBA field by performing the first systematic literature review on empirical GBA studies. It is based on 65 research papers that used digital GBAs to determine: (1) the context where the study has been applied; (2) the primary purpose; (3) the domain of the game used; (4) game/tool availability; (5) the size of the data sample; (6) the computational methods and algorithms applied; (7) the targeted stakeholders of the study; and (8) what limitations and challenges are reported by authors. Based on the categories established and our analysis, the findings suggest that GBAs are mainly used in K-16 education and for assessment purposes, and that most GBAs focus on assessing STEM content, and cognitive and soft skills. Furthermore, the current limitations indicate that future GBA research would benefit from the use of bigger data samples and more specialized algorithms. Based on our results, we discuss current trends in the field and open challenges (including replication and validation problems), providing recommendations for the future research agenda of the GBA field.
Games are increasingly being recognized as valuable tools for learning. In addition, they are also being explored for their potential to provide valid and reliable assessments, as they make it possible to create authentic and engaging assessment contexts through interactive and immersive environments. However, there are challenges to enable Game-based Assessment (GBA) at scale, including the need for interoperability between assessment models and machinery, and the complexity of managing and processing large amounts of data generated by users' interaction with games. In this study, we propose a novel approach that combines the use of ontologies and Big Data technologies for developing interoperable GBAs. The architecture enables assessments to be performed using data from different games, and we also designed and implemented a service API that facilitates the Game-Based Assessment as a Service (GBAaaS) paradigm. GBAaaS simplifies the GBA development process and enables its adoption at scale, making it a promising approach for future developments in this field.
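As a rough illustration of how gameplay data from different games might be expressed against a shared ontology, consider the following Turtle fragment. The vocabulary is entirely hypothetical (the `gba:` namespace, class names, and property names are assumptions for illustration, not the paper's actual ontology):

```turtle
@prefix gba: <http://example.org/gba#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

# A gameplay session from one game, described with a shared GBA vocabulary
gba:session42 a gba:GameSession ;
    gba:fromGame  gba:Shadowspect ;
    gba:player    gba:learner7 ;
    gba:hasEvent  gba:event1 .

gba:event1 a gba:PuzzleCompleted ;
    gba:timestamp "2023-05-01T10:15:00Z"^^xsd:dateTime ;
    gba:attempts  3 .
```

Because every game emits events against the same vocabulary, a single assessment metric (e.g. a SPARQL query counting attempts per puzzle) can run unchanged over data from any compliant game.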
The recent pandemic has changed the way we see education. During recent years, Massive Open Online Course (MOOC) providers, such as Coursera or edX, have reported millions of new users signing up on their platforms. Though online review systems are standard among many verticals, no standardized or fully decentralized review systems exist in the MOOC ecosystem. In this vein, we believe that there is an opportunity to leverage available open MOOC reviews in order to build simpler and more transparent reviewing systems, allowing users to really identify the best courses out there. Specifically, in our research we analyze 2.4 million reviews (the largest MOOC review dataset used to date) from five different platforms in order to determine the following: (1) if the numeric ratings provide discriminant information to learners, (2) if NLP-driven sentiment analysis on textual reviews could provide valuable information to learners, (3) if we can leverage NLP-driven topic finding techniques to infer themes that could be important for learners, and (4) if we can use these models to effectively characterize MOOCs based on the open reviews. Results show that numeric ratings are clearly biased (63% of them are 5-star ratings), and the topic modeling reveals some interesting topics related to course advertisements, the real applicability, or the difficulty of the different courses.
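The rating-bias check described above boils down to computing the share of 5-star ratings in the distribution. A minimal sketch with toy data (the real study analyzes 2.4 million reviews; this list is a hypothetical stand-in):

```python
from collections import Counter

# Hypothetical sample of numeric review ratings on a 1-5 scale.
ratings = [5, 5, 5, 4, 5, 3, 5, 4, 5, 1]

# Rating distribution and the fraction of 5-star reviews.
dist = Counter(ratings)
five_star_share = dist[5] / len(ratings)

print(f"{five_star_share:.0%} of ratings are 5 stars")  # 60% in this toy sample
```

If most of the probability mass sits on a single value, as the paper reports for real platforms, the numeric rating carries little discriminant information for learners.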
Nowadays, the use of technology is continuously increasing, making a significant impact in almost every area, including education. New areas have gained much popularity in recent years in educational technology (EdTech), such as Massive Open Online Courses (MOOCs) or computer-supported collaborative learning. In addition, research and interest in this area have also been growing over the years. The quantity of research and scientific publications in EdTech is constantly increasing, and trying to analyze and extract information from a set of research papers is often a very time-consuming task. To ease this process and address these limitations, we present Fontana, a framework that can quickly perform trend and social network analysis using any corpus of documents and its metadata. Specifically, the framework can: 1) Discover the latest trends given any corpus of documents, using Natural Language Processing (NLP) analysis and keywords (bibliometric approach); 2) Discover the evolution of the trends previously identified over the years; 3) Discover the primary authors and papers, along with hidden relationships between existing communities. To test its functionality, we evaluated the framework using a corpus of papers from the EdTech research field. We also followed an open science methodology, making the entire framework available on the Open Science Framework (OSF), easy to access and use. The case study successfully proved the capabilities of the framework, revealing some of the most frequent topics in the area, such as “EDM,” “learning analytics,” or “collaborative learning.” We expect our work to help identify trends and patterns in the EdTech area, using natural language processing and social network analysis to objectively process large amounts of research.
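The bibliometric trend-evolution step described above (objective 2) amounts to counting keyword frequencies per year. A minimal sketch with hypothetical metadata (the years and keywords below are toy values, not Fontana's actual corpus):

```python
from collections import Counter, defaultdict

# Hypothetical corpus metadata: (publication year, author keywords) per paper.
corpus = [
    (2019, ["MOOC", "learning analytics"]),
    (2020, ["learning analytics", "EDM"]),
    (2020, ["MOOC"]),
    (2021, ["learning analytics", "collaborative learning"]),
]

# Keyword frequency per year: the bibliometric view of trend evolution.
trend = defaultdict(Counter)
for year, keywords in corpus:
    trend[year].update(keywords)

# How often "learning analytics" appears each year
print({year: counts["learning analytics"] for year, counts in sorted(trend.items())})
```

Plotting each keyword's yearly counts (or their share of that year's papers, to normalize for corpus growth) then shows which topics are rising or fading.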
The Handbook of Research on Digital-Based Assessment and Innovative Practices in Education identifies digital tools and applications for effective assessment of learning, shares various models of digital-based assessment in education, and considers best pedagogical practices for assessment in education. Covering a range of topics such as formative assessments, design thinking, virtual reality, and equity, this major reference work is crucial for educational technologists, instructional designers, policymakers, administrators, faculty, researchers, academicians, scholars, practitioners, instructors, and students.
Over the last decade, we have seen a large amount of research being performed in technology-enhanced learning. Within this area, the use of digital assessment has been gaining a lot of popularity. This work aims to identify the main topics in this area, proposing a new methodology based on a text-analytics- and bibliometrics-driven approach, using the metadata and full text of papers from the last 15 years. The analysis in this work is focused on three objectives: 1) discover the main topics based on topic modeling and keyword analysis, 2) discover the evolution of said topics over the last 15 years of research, and 3) discover the primary authors and papers, along with hidden relationships between existing communities.
Games have become one of the most popular activities across cultures and ages. There is ample evidence that supports the benefits of using games for learning and assessment. However, incorporating game activities as part of the curriculum in schools remains limited. Among the barriers to broader adoption in classrooms are the lack of actionable assessment data, the fact that teachers often do not have a clear sense of how students are interacting with the game, and uncertainty about whether the gameplay is leading to productive learning. To address this gap, we seek to provide sequence and process mining metrics to teachers that are easily interpretable and actionable. More specifically, we build our work on top of Shadowspect, a three-dimensional geometry game that has been developed to measure geometry skills as well as other cognitive and noncognitive skills. We use data from its implementation across schools in the U.S. to implement two sequence and process mining metrics in an interactive dashboard for teachers. The final objective is to enable teachers to understand the sequence of actions and common errors of students using Shadowspect so they can better understand the process, make proper assessments, and conduct personalized interventions when appropriate.
Over the last decade, we have seen a large amount of research being performed in technology-enhanced learning. The European Conference on Technology-enhanced Learning (EC-TEL) is one of the conferences with the longest trajectory in this area. The goal of this paper is to provide an overview of the last ten years of the conference. We collected all papers from the last ten years of the conference, along with their metadata, and used their keywords to identify the most important ones across the papers. We also parsed the papers’ full text automatically and used it to extract information about this year’s conference topic. These results will shed some light on the latest trends and evolution of EC-TEL.
Technology has become an integral part of our everyday life, and its use in educational environments keeps growing. Additionally, video games are one of the most popular mediums across cultures and ages. There is ample evidence that supports the benefits of using games for learning and assessment, and educators are mainly supportive of using games in classrooms. However, we do not usually find educational games within classroom activities. One of the main problems is that teachers report difficulties in knowing how their students are using the game, so they cannot properly analyze the effect of the activity and the interaction of students. To support teachers, educational games should incorporate learning analytics to transform the data generated by students while playing into useful information, presented in a friendly and understandable way. For this work, we build upon Shadowspect, a 3D geometry puzzle game that has been used by teachers in a group of schools in the US. We use learning analytics techniques to generate a set of metrics implemented in a live dashboard that aims to help teachers understand students’ interaction with Shadowspect. We describe in detail the multidisciplinary design process that we followed to generate the metrics and the dashboard. Finally, we also provide use cases that exemplify how teachers can use the dashboard to understand the global progress of their class and each of their students at an individual level, in order to intervene, adapt their classes, and provide personalized feedback when appropriate.
Games have become one of the most popular mediums across cultures and ages, and the use of educational games is growing. There is ample evidence that supports the benefits of using games for learning and assessment. However, we do not usually find games incorporated into educational environments. One of the main problems teachers face is knowing how students are interacting with the game, as they cannot properly analyze the effect of the activity on the students. To address this issue, we can use the data generated by students' interaction with such educational games to analyze their sequences and errors, transforming raw data into meaningful sequences that are interpretable and actionable for teachers. In this study we use a data collection from our game Shadowspect and implement learning analytics with process and sequence mining techniques to generate two metrics that aim to help teachers make proper assessments and better understand the process.
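The sequence-mining idea above can be sketched by counting action bigrams across student sessions; frequent pairs reveal the common paths (and repeated error loops) students follow. A minimal sketch with hypothetical toy data (the action names below are stand-ins, not Shadowspect's actual event log):

```python
from collections import Counter

# Hypothetical event log: ordered actions per student session.
sessions = {
    "s1": ["rotate", "move", "snapshot", "submit"],
    "s2": ["rotate", "rotate", "move", "submit"],
    "s3": ["move", "snapshot", "submit"],
}

# Count consecutive action pairs (bigrams) across all sessions.
bigrams = Counter()
for actions in sessions.values():
    bigrams.update(zip(actions, actions[1:]))

print(bigrams.most_common(3))
```

Aggregating these pair counts over a whole class turns raw click-stream data into an interpretable picture of typical solution processes, which can then be surfaced in a teacher dashboard.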
Video games have become one of the most popular mediums across cultures and ages. There is ample evidence that supports the benefits of using games for learning and assessment, and educators are largely supportive of using games in classrooms. However, the implementation of educational games as part of the curriculum and classroom practices has been rather scarce. One of the main barriers teachers face is knowing how their students are using the game, which prevents them from properly analyzing the effect of the activity and the interaction of students. Therefore, to help teachers fully leverage the potential benefits of games in classrooms and make data-based decisions, educational games should incorporate learning analytics, transforming the click-stream data generated from gameplay into meaningful metrics and presenting visualizations of those metrics so that teachers receive the information in an interactive and friendly way. For this work, we use data collected in a case study where teachers used the Shadowspect geometry puzzle game in their classrooms. We apply learning analytics techniques to generate a series of metrics and visualizations that seek to help teachers understand the interaction of students with the game. In this way, teachers can be more aware of the global progress of the class and of each of their students at an individual level, and intervene and adapt their classes when necessary.