LASI22 Workshops & Tutorials

We are pleased to present a great LASI22 program that includes 10 Workshops and 10 Tutorials!

This will be our second virtual LASI! To accommodate our international community, this year's LASI will span a full five days, but each day will be much shorter than on our regular schedule for face-to-face LASIs.

This year, LASI participants will take a deep dive into ONE workshop of 6 hours total, spread across multiple days. In addition, each participant will take part in two 2-hour tutorials, providing a flavor of a range of topics. Each workshop and tutorial will have synchronous and asynchronous components.

Each individual can choose one workshop and two tutorials. The organizing committee will strive to have all participants attend their top choices, but depending on attendance and the popularity of certain sessions, your choices are not guaranteed. Register early to ensure your spot in your top choices!

Should you have any questions related to registration or general LASI questions, please reach out to us at info@solaresearch.org.


W1. Learning Analytics, Learning Science and Self-Regulated Learning

Title: Learning Analytics, Learning Science and Self-Regulated Learning

Description:

Self-regulating learners are learning scientists. They theorize about how to adapt learning activities to make learning more effective, easier, and more satisfying. They try out tweaks and major shifts to their learning, take up new learning skills, and investigate the effects when they apply those skills. Unlike well-funded labs in which highly trained learning scientists use state-of-the-art technologies to gather and analyze data, learners “in the wild” have very meager resources for pursuing such self-directed, personally focused learning science when N=me.

Enter nStudy, an online suite of tools. Learners use nStudy to study content in any subject area and develop learning artifacts, e.g., notes, essays, and concept maps. As they work, nStudy gathers fine-grained, time-stamped data about which information learners operate on and which cognitive operations they apply to information. These data are raw materials for generating highly descriptive learning analytics.

Deftly mixing learning science with intuitions, learning analytics can be designed to help learners (a) track actual learning activities, (b) conceptualize hypotheses about how learning might be adapted and with what effect(s), (c) test hypotheses using N=me data describing how thoroughly and accurately they implemented new study tactics, and (d) develop a long-term, sustainable program of personal research on productively self-regulating learning.
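
To make this concrete, here is a minimal sketch of the kind of descriptive, N=me analytic such trace data afford. The event names and fields below are hypothetical illustrations, not nStudy's actual logging schema.

```python
import pandas as pd

# Hypothetical time-stamped clickstream: one row per learning event.
events = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2022-06-01 09:00", "2022-06-01 09:04",
        "2022-06-01 09:30", "2022-06-02 10:15",
    ]),
    "artifact": ["note:asteroids", "note:asteroids",
                 "essay:draft1", "note:asteroids"],
    "operation": ["create", "edit", "edit", "reopen"],
})

# Two simple N=me descriptives: how many operations touch each artifact,
# and how much time separates successive operations on the same artifact.
ops_per_artifact = events.groupby("artifact")["operation"].count()
gaps = events.sort_values("timestamp").groupby("artifact")["timestamp"].diff()
print(ops_per_artifact)
print(gaps.dropna())
```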

Activities

  1. Participants are introduced to and play with nStudy.
  2. Learning science models and theory are presented and discussed to conceptualize how clickstream data
    a. describe observable learning events (e.g., searching for a note about asteroids and re-opening it) and
    b. provide foundations for inferences about theoretical constructs (e.g., judging how well material about asteroids has been learned and what class of information a learner judges is missing).
  3. Data nStudy gathers are identified and analyzed in terms of
    a. logged clickstream data and
    b. interpretative trace data.
  4. Working groups
    a. design learning analytics grounded in clickstream data generated within nStudy or supplemental software, and
    b. explore theoretical grounds for interpreting trace data as guidance for learning.

    Scope for analytics can range from one study session focusing on one task to study sessions spanning an entire course involving a variety of learning activities (e.g., reading an assignment, preparing a presentation, researching a term paper, reviewing for an exam, etc.).

  5. Read-outs of working group products and whole-group discussions explore factors affecting whether learning analytics are “serviceable.” What can learners understand about learning analytics? What helps learners use learning analytics to self-regulate learning? Why might learners want to use learning analytics to self-regulate learning?

Target Audience

People interested in (a) developing or extending understandings of theories, research methods and instrumentation in learning science; and (b) wrestling with challenges in designing learning analytics for self-regulating learners.

Prerequisites

The workshop is designed for newbies and veterans in learning science, learning analytics and/or quantitative methods.

Takeaways

  1. Perspectives on self-regulated learning as a program of personally focused (N=me) learning science.
  2. Understanding how trace data link to sectors in learning science: cognition, metacognition and motivation.
  3. Guidelines for designing serviceable learning analytics to support self-regulated learning.
  4. First steps toward establishing collaborations for continuing work on learning analytics for self-regulating learners.

Preparations

  1. Read: Winne, P. H. (in press). Modeling self-regulated learning as learners doing learning science: How trace data and learning analytics help develop skills for self-regulated learning. Metacognition and Learning. https://1sfu-my.sharepoint.com/:b:/g/personal/winne_sfu_ca/EVZY2uZVCPlNqh4YAbd2qRUBGc1QbHDA5a6ogyIh4C4TIA?e=EtbdZ4
  2. Read: Winne, P. H., Teng, K., Chang, D., Lin, M. P-C., Marzouk, Z., Nesbit, J. C., Patzak, A., Raković, M., Samadi, D., & Vytasek, J. (2019). nStudy: Software for learning analytics about processes for self-regulated learning. Journal of Learning Analytics, 6, 95-106. Available from: https://learning-analytics.info/index.php/JLA/article/view/6194/7179
  3. Read: Vytasek, J. M., Patzak, A., & Winne, P. H. (2020). Analytics for student engagement. In M. Virvou, E. Alepis, G. A. Tsihrintzis, & L. C. Jain (Eds.), Machine learning paradigms (pp. 23-48). New York, NY: Springer. Request from the authors if not available through your library or professional services.
  4. Bring a laptop with the Google Chrome browser installed. You will be guided to install the nStudy extension to Chrome.

Instructors: Dr. Phil Winne, Distinguished SFU Professor of Education; Faculty of Education, Simon Fraser University

Dr. Jovita Vytasek, Learning Strategist; Faculty of Educational Support and Development, Kwantlen Polytechnic University 

W2. Introduction to Machine Learning for Learning Analytics

Title: Introduction to Machine Learning for Learning Analytics

Description:

This hands-on workshop will provide a rigorous introduction to machine learning for learning analytics practitioners. Participants will learn the workflow associated with building machine learning models for learning analytics. Key Python libraries include pandas, NumPy, Matplotlib, seaborn, scikit-learn, statsmodels, and TensorFlow.

The workshop will consist of three parts:

  • data science and statistics
  • machine learning, including supervised and unsupervised learning
  • deep learning

Data sets and methodologies will emphasize current problems and challenges in learning analytics.

Prerequisites: 

Participants should know basic Python. A self-contained basic Python course (approximately 5 hours) will be made available to participants on May 1st. Participants are highly encouraged to take the online course or complete the self-assessments to evaluate their level of proficiency.

The basic Python course will also contain a brief introduction to the Jupyter Notebook environment. The Jupyter Notebook is an open-source web application for creating and sharing documents with live code, equations, visualizations and narrative text. The workshop will make extensive use of Jupyter Notebooks.

Activities:

Proficiency in machine learning requires skill sets in computation, mathematics, and statistics. Because machine learning can be a very heavy lift for novices, our approach emphasizes a three-step learning process: concept, theory, and practice. The concept step presents the intuition; the theory step elaborates the intuition formally; the practice step synthesizes the learning with live code. Each machine learning module will culminate in a learning analytics case study. Participants will work individually as well as in teams on the learning analytics case studies.

Target Audience:

Anyone interested in building machine learning models for learning analytics. The workshop will also emphasize statistical methods for evaluating efficacy of models and interventions.

Syllabus:

Python Prerequisite

  • Basic Python
  • Special Topics in Intermediate Python

Part I. Data Science and Statistics Fundamentals

  • Data Operations with Pandas
  • Data Visualization with Matplotlib and Seaborn
  • Basic Descriptive Statistics
  • p-values
  • Confidence Intervals
  • Effect Sizes

Case Study: Measuring efficacy: Is an educational technology product or intervention effective?
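
As a taste of the Part I case study, the following sketch compares simulated post-test scores for students who did and did not receive an intervention, reporting a p-value and an effect size (Cohen's d). All data and numbers here are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
treatment = rng.normal(75, 10, 60)   # simulated scores with the intervention
control = rng.normal(70, 10, 60)     # simulated scores without it

# Two-sample t-test: is the difference in means statistically reliable?
t, p = stats.ttest_ind(treatment, control)

# Cohen's d with pooled standard deviation (equal group sizes).
pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = (treatment.mean() - control.mean()) / pooled_sd

print(f"t = {t:.2f}, p = {p:.4f}, Cohen's d = {cohens_d:.2f}")
```
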
Part II: Machine Learning
Supervised and Unsupervised Learning

  • Machine Learning Overview
  • Linear Regression (regression)
  • K Nearest Neighbor (classification)
  • Logistic Regression (classification)
  • K Means Clustering (unsupervised learning)

Model Evaluation

  • Test/Train Split; Bias-Variance Tradeoff; K-Fold Cross-Validation
  • Evaluating Regression Models: R²
  • Evaluating Classification Models: Confusion Matrix

Case Study: Build and evaluate a learning analytics machine learning model using one of the techniques above.
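
A minimal sketch of what such a model-building and evaluation loop can look like with scikit-learn, using invented activity features to predict a hypothetical at-risk flag:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))   # e.g., scaled logins and forum posts (invented)
y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, 200) > 0).astype(int)

# Hold out a test set so evaluation reflects generalization, not memorization.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(f"accuracy: {accuracy_score(y_test, y_pred):.2f}")
```
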
Part III: Deep Learning

  • Deep Learning Overview
  • Neurons
  • Network Layers
  • Forward Propagation
  • Loss and Cost Function
  • Backward Propagation

Case Study: Build and evaluate a learning analytics deep learning model.
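
The following sketch shows the Part III building blocks in miniature with TensorFlow: layers of neurons, forward propagation, a loss function, and backpropagation (handled automatically during fitting). All data are simulated.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 4)).astype("float32")
y = (X.sum(axis=1) > 0).astype("float32")   # toy binary target

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),     # hidden layer of neurons
    tf.keras.layers.Dense(1, activation="sigmoid"),  # output neuron
])
# Loss/cost function and optimizer; fit() runs forward propagation and
# backpropagation on each batch.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=20, verbose=0)
print(model.evaluate(X, y, verbose=0))   # [loss, accuracy]
```
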
Preparation
Although not required, the following book can be used for reference and preparation: Practical AI for Business Leaders, Product Managers, and Entrepreneurs, by Alfred Essa and Shirin Mojarad.

The book will be available for purchase on April 4th, 2022.

Instructors: Al Essa & Lalitha Agnihotri

W3. Multimodal Learning Analytics for In-Person and Remote Collaboration

Title: Multimodal Learning Analytics for In-Person and Remote Collaboration

Description: Learning does not occur only in Learning Management Systems or digital tools. It often happens in face-to-face, hands-on, unbounded and analog learning settings such as classrooms and labs. Multimodal Learning Analytics (MMLA) emphasizes the analysis of natural, rich modalities of communication during situated learning activities, including students’ speech, writing, and nonverbal interaction (e.g., movements, gestures, facial expressions, gaze, biometrics, etc.). A primary objective of multimodal learning analytics is to analyze coherent signal, activity, and lexical patterns to understand the learning process and provide feedback to its participants in order to improve the learning experience. This workshop is a gentle introduction to this new approach to Learning Analytics: its promises, its challenges, its tools and methodologies. In the spirit of MMLA, the workshop will include a hands-on learning experience analyzing different types of signals captured from real environments.

The workshop organizers will be emphasizing strategies, techniques, and constructs related to collaboration analytics.

Activities:

  • A brief introduction to Multimodal Learning Analytics
  • Review and testing of different sensors and analysis techniques
  • The capture of a learning activity
  • Multimodal analysis of recordings
  • Designing a feedback dashboard
  • Final discussion

Target Audience:

Researchers who already have some experience with traditional Learning Analytics and want to expand their capabilities to physical learning spaces.

Takeaways:

  • Knowing the current state of the art of Multimodal Learning Analytics
  • Being able to select sensors to capture learning activities
  • Being able to select analysis techniques to extract multimodal features
  • Being able to fuse different multimodal features to estimate learning constructs
  • Being able to provide feedback to the participants of the learning activities

Prerequisite Skills/Experience: Basic statistics, programming in R.

Advanced preparation: Students should read Chapter 11 (Multimodal Learning Analytics) from the Handbook of Learning Analytics (https://www.solaresearch.org/hla-17/hla17-chapter11/) that is openly available.


Instructors: Xavier Ochoa, New York University 

W4. Effective dashboard design and data visualisation for LA

Title: Effective dashboard design and data visualisation for LA

Description: Using visualisations for communication purposes is very common. In the educational field, individual visualisations or even whole dashboards are often used to show the results of data analysis in an easily digestible form, e.g. to learners or teachers. Often, however, what was intended to be communicated by a visualisation and how it is then interpreted by people differs. Similarly, dashboards are often built without a clear purpose or reason, but simply because the data is available. In this workshop, we will look at principles and guidelines of data visualisation and work on a structured approach to the why, what and how of effective dashboard design for learning analytics. Participants will be introduced to guidelines of dashboard design, will analyse and interpret examples of LA dashboards, and will design their own dashboard mockups.

Activities:

  • Participants will be introduced to the world of data visualisation and will get to see dashboard examples from the field of LA as well as other fields.
  • Based on the analysis of examples and mock-ups, principles and guidelines will be formulated on how to design effective LA dashboards.
  • In small groups, participants will then design a learning analytics dashboard for a given learning context: they will first explore educational problems (i.e. what problem do they want to solve with a dashboard, how can it be grounded in theory and practice) and then identify relevant information and data to work on the problem. Based on this, participants will draft dashboard mock-ups using the principles. Finally, they will prioritise design features and sketch evaluation criteria and plans.

Target Audience:

Anyone interested in the visualisation of learning data and the design of learning analytics dashboards, from students to teachers to practitioners to educational institution managers. An understanding of what learning analytics is, how and where it can be used and who its stakeholders are is beneficial.

Takeaways:

  • Learning the process of designing dashboards in educational contexts
  • Understanding of principles and guidelines for dashboard design and data visualisation
  • Getting a glimpse into the art of storytelling with data
  • Understanding the importance of grounding dashboard designs in theory and practice

Advanced preparation:

  • Reading list coming soon

Instructors: Maren Scheffel,
Ruhr University Bochum

Ioana Jivet, DIPF – Leibniz Institute for Research and Information in Education / Goethe University Frankfurt

W5. Temporal and sequential analysis for learning analytics

Title: Temporal and sequential analysis for learning analytics

Description:

Data in learning analytics research (e.g. SIS, clickstream, log files) are often rich in temporal features that allow us to explore dynamic changes in learning behavior at different time granularities (e.g. seconds, days, weeks, semesters). This workshop will introduce participants to several common temporal/sequential analysis methods and techniques using R. During the workshop, we will cover some basic concepts in temporal analysis (i.e. sequences, trends, stationarity, seasonality, autocorrelation). Next, we will go through some techniques to explore and visualize temporal data. Participants will learn and apply two types of temporal models: a) explanatory models using statistical techniques, such as sequence analysis and ARIMA, and b) predictive models using modern machine learning techniques, such as long short-term memory (LSTM) networks. Finally, we will brainstorm and discuss some applications of temporal analysis in educational research.
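
The workshop itself works in R; purely as an illustration of the same ideas, the sketch below uses Python's statsmodels to inspect autocorrelation in a simulated weekly activity series and fit a simple ARIMA model. The series and parameters are invented.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(0)
# Simulated counts of weekly LMS logins for one student over a semester.
weeks = pd.date_range("2022-01-10", periods=15, freq="W")
logins = pd.Series(10 + np.cumsum(rng.normal(0, 1, 15)), index=weeks)

print(acf(logins, nlags=5))                    # autocorrelation structure
model = ARIMA(logins, order=(1, 0, 0)).fit()   # AR(1) explanatory model
print(model.summary())
```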

Activities:

  • Learn the foundation and intuition of temporal/sequential analysis
  • Apply temporal/sequential analysis on educational datasets using R
  • Discuss in groups how to use temporal analysis to answer research questions in education

Target Audience:

This workshop is designed for anyone interested in temporal/sequential analysis. No experience in temporal and sequential analysis is required. To get the best learning experience, participants should familiarize themselves with basic statistics and machine learning concepts (e.g. regression, variance, autocorrelation, classification, cross-validation, overfitting).  

Preparations:

  1. Bring a laptop with RStudio installed. More information on the packages will be provided before the workshop
  2. Special issues worth checking out:

Knight, S., Friend Wise, A., & Chen, B. (2017). Time for Change: Why Learning Analytics Needs Temporal Analysis. Journal of Learning Analytics, 4(3), 7–17. https://doi.org/10.18608/jla.2017.43.2

Chen, B., Knight, S., & Wise, A. F. (2018). Critical Issues in Designing and Implementing Temporal Analytics. Journal of Learning Analytics, 5(1), 1–9. https://doi.org/10.18608/jla.2018.53.1

Instructor: Quan Nguyen, University of Michigan

W6. Natural Language Processing for Learning Analysis

Title: Natural Language Processing for Learning Analysis

Description:

The field of education widely adopts textual documents for different applications, such as assessment, communication in online platforms, reading material, and feedback provision. Natural Language Processing (NLP) techniques are therefore a key resource for researchers in Learning Analytics (LA). This workshop will introduce NLP methods and techniques, with practical examples in the Python programming language. More specifically, the workshop will focus on text classification and clustering, the main methods used in LA, providing details about each step of NLP processing, including preprocessing, feature extraction, classification and evaluation. Moreover, the workshop will emphasise the feature extraction step, detailing an extensive number of features used in NLP, encompassing: (i) traditional content-based features (the bag-of-words approach), (ii) linguistic features (text coherence, syntactic complexity, lexical diversity), and (iii) information extraction approaches (topic modelling, keyword extraction). Finally, we will explore future NLP trends in education, such as deep learning and epistemic network analysis.
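
As a preview of these steps, here is a minimal scikit-learn sketch covering preprocessing, bag-of-words (TF-IDF) feature extraction, classification and prediction. The tiny labelled corpus is invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "I am confused about the assignment deadline",
    "Great explanation, the example really helped",
    "The lecture video will not load on my browser",
    "Thanks, this clarified the concept for me",
]
labels = ["question", "feedback", "question", "feedback"]

clf = make_pipeline(
    TfidfVectorizer(lowercase=True, stop_words="english"),  # preprocessing + features
    LogisticRegression(),                                   # classification
)
clf.fit(docs, labels)
print(clf.predict(["Why does my quiz score not appear?"]))
```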

Activities

  1. Participants will be introduced to NLP techniques;
  2. Practical activities and tutorials will be provided using a Jupyter notebook environment;
  3. Participants will be divided into small groups to develop an NLP project during the workshop.

Target Audience

Anyone interested in understanding the introductory concepts of natural language processing techniques and their application to education.

Prerequisites

Workshop participants should have basic programming experience in Python. It is also desirable that participants have basic knowledge of machine learning concepts (e.g., classification, clustering, performance metrics). No prior experience with natural language processing techniques is expected.

Takeaways

After the workshop, the participants should: 

  1. Have a broad view of the potential applications in the field of LA where NLP methods could provide relevant information about learners and the learning environment.
  2. Be able to apply natural language processing methods and techniques to analyse different kinds of textual resources.

Preparations

Suggestions, not required:

  1. Ferreira Mello, R., André, M., Pinheiro, A., Costa, E., & Romero, C. (2019). Text mining in education. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 9(6), e1332.
  2. Kovanović, V., Joksimović, S., Gašević, D., Hatala, M., & Siemens, G. (2015). Content analytics: The definition, scope, and an overview of published research. Handbook of Learning Analytics, 77-92.
  3. McNamara, D. S., Allen, L., Crossley, S., Dascalu, M., & Perret, C. A. (2017). Natural language processing and learning analytics. Handbook of learning analytics, 93-104.
  4. Shaffer, D., & Ruis, A. (2017). Epistemic network analysis: A worked example of theory-based learning analytics. Handbook of learning analytics.

Required:

  1. Bring a laptop and try to use Google Colab (https://colab.research.google.com/) beforehand.

Instructor: Rafael Ferreira Mello, Universidade Federal Rural de Pernambuco (Brazil)

W7. Envisioning Responsible Learning Analytics in 2030 – Implications for Design and Educational Practice

Title: Envisioning Responsible Learning Analytics in 2030 – Implications for Design and Educational Practice

Description:

Despite its increasing importance in LA, the term “responsible” remains an umbrella term used differently and for different goals. It is now time for reflection and agenda-setting. This workshop will offer participants a creative, reflective, and engaging space to discuss how responsible (i.e., accountability) and response-able (i.e., duty to care) interventions can be effectively and carefully considered in LA design and educational practices.

Activities

Inspired by a human-centered and participatory design in LA, participants in this workshop will:

  • map and discuss different stakeholders’ concerns when designing and deploying LA tools,
  • identify roles and tasks in the LA educational data chain,
  • apply speculative design methods like fictive scenarios and design fiction to identify and reflect on ethical and moral implications of emerging technologies for education,
  • assess the value of speculative design methods for responsible LA.

This workshop is justified by the need to build upon relevant HCI knowledge and speculative design methods so LA researchers and practitioners can reflect on the present by envisioning (possible, probable, preferable, and plausible) futures for responsible LA tools and practices at scale.

In particular, speculative design methods can be used to communicate ideas within teams or projects, reflect on epistemic, social, cultural, and political values and provoke uncomfortable but necessary conversations about taken-for-granted understandings of student learning, student-teachers/institutions relationships, and pedagogical models underpinning the LA design.

Goals

Capitalizing on past workshops on human-centered design, ethics, and LA, this workshop has the following goals:

  • to introduce the workshop’s participants to selected speculative design methods,
  • to test social science fiction and design fiction to discuss responsible human-centered design approaches to LA,
  • to raise awareness of potential ethical and moral dilemmas,
  • to envision responsible design and deployment best practices for LA,
  • to open up a space for research collaboration and professional networking.

Target Audience

Researchers and practitioners

Takeaways

Expected outcomes:

  • a set of scenarios and design fictions crafted by the participants, raising concerns and potential solutions for enabling and ensuring responsible LA,
  • a surfaced diversity of responsible learning analytics approaches, and reflection on the role of speculative design methods within the LA community moving forward.

Such outcomes will inform the LASI and LA communities about LA stakeholders’ concerns and preferred design choices related to the design of responsible LA tools.

Organizers: The Responsible Learning Analytics SIG

Instructors: Teresa Cerratto Pargman, Olga Viberg, and Cormac McGrath

W8. Grounding learning analytics feedback in feedback literacy – Introduction to OnTask

Title: Grounding learning analytics feedback in feedback literacy – Introduction to OnTask

Description

Feedback has long been recognised as critically important for learners’ growth and performance. Research has established that feedback that is timely, actionable, and personalised to the individual’s learning needs is particularly effective for improving students’ learning. However, contemporary education presents significant challenges: large enrollments, an increasing shift to online learning, and high workloads, all of which are barriers to instructors providing their students with personalised feedback in their teaching and learning contexts.

Furthermore, contemporary understandings of feedback frame it as a process rather than a product. Understood in this way, feedback is only effective when learners take an active role in the feedback process. This highlights the importance of feedback literacy on the part of both teachers and students: for teachers, feedback literacy is demonstrated in the way they design for feedback at the macro-, meso-, and micro-levels; for students, it is demonstrated in the way they understand their role in feedback processes, make sense of their feedback, and enact it.

Within the last decade, research in learning analytics has advanced to a stage where many automated feedback systems have been developed as a solution to scaling personalised feedback. However, while many automated systems and tools facilitate the delivery of timely feedback tailored to students’ learning progress, care needs to be taken to ensure that such feedback is designed with student feedback literacy in mind. Without this consideration, feedback merely becomes a product, and is rendered ineffective when students do not act on it.

The objectives of this workshop are to introduce participants to OnTask, a personalised feedback system based on learning analytics, and to consider how to use this tool effectively in their own contexts, in a way that both demonstrates teacher feedback literacy and fosters student feedback literacy.
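
OnTask itself is introduced hands-on in the workshop; the sketch below only illustrates the underlying idea of rule-based personalised feedback, where conditions over student data columns select message text. The column names, thresholds, and messages are invented, not OnTask's actual interface.

```python
import pandas as pd

# Hypothetical per-student data, e.g. exported from an LMS gradebook.
students = pd.DataFrame({
    "name": ["Ana", "Ben", "Chloe"],
    "quiz_avg": [45, 78, 92],
    "videos_watched": [2, 9, 10],
})

def feedback(row):
    """Compose a personalised message from simple rules over the data."""
    parts = [f"Hi {row['name']},"]
    if row["quiz_avg"] < 50:
        parts.append("your quiz average suggests revisiting the week 3 material.")
    else:
        parts.append("your quizzes are on track.")
    if row["videos_watched"] < 5:
        parts.append("Catching up on the lecture videos would help too.")
    return " ".join(parts)

for message in students.apply(feedback, axis=1):
    print(message)
```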

Activities

  1. Reflections on feedback practices in relation to feedback literacy
  2. Discussions on research in learning analytics and feedback
  3. Planning for timely, personalised feedback within a learning design
  4. Introduction to OnTask – a hands-on session
  5. Discussion of considerations for implementing learning analytics feedback processes

Target audience

Educators and researchers who are interested in personalised and scalable feedback and how to design feedback using learning analytics approaches in alignment with effective feedback principles.

Prerequisites

No prior experience with learning analytics systems is required. Some experience with feedback in a teaching context will be helpful.

Takeaways

  1. An appreciation of feedback literacy in designing for feedback
  2. Being able to use the basic functions of OnTask to prepare personalised feedback messages

Preparation

Readings will be supplied prior to the workshop.

Instructors: Lisa-Angelique Lim, University of Technology Sydney & Yi-Shan Tsai, Monash University

W9. Human Centered Learning Analytics

Title: Human Centered Learning Analytics

Description:

(Mis)understandings of real-world users, stakeholders, contexts, and routines can make or break Learning Analytics (LA) tools and systems. However, the extent to which existing human-centered design methods, processes, and tools are suited to address such human and societal factors in the context of LA remains under-explored by our community. In response, the term human-centered learning analytics (HCLA) was recently coined to refer to the subcommunity of LA researchers and practitioners interested in bringing the body of knowledge and practice from design communities, such as participatory design and co-design, into data-intensive educational contexts. Building on the growing interest in designing LA systems with stakeholders, this workshop seeks to sustain that momentum and share insights around the contributions that Human-Centered Design theory and practice make to Learning Analytics system conception, design, implementation, and evaluation.

Activities

  1. Overview of HCLA. In the first part of the workshop, we will present a number of processes, frameworks and examples for engaging in participatory and co-design processes with students, faculty or administrators, emphasizing both opportunities and challenges. Special attention will be paid to the relationship between human wellbeing and HCLA.
  2. Human-Centered Design challenge. The second part of the workshop is a collaborative design challenge. Participants will engage in creating a research design plan by using human-centered methods and tools. Small groups will be presented with a design challenge and asked to work together to create a human-centered design project to address the problem.
  3. Sharing and guided critique. Groups will share their plan for the design challenges and report out the decisions they made and any tensions or challenges that arose. We will provide feedback and compare and contrast strategies to the design challenge and identify ways to improve the plan.
  4. Reflection and discussion. Finally, we will have a discussion based on the experience co-designing the human-centered plans. We expect that this will lead to a discussion of the pros and cons of human-centered design techniques, what needs to be adapted to fit LA purposes and the differences of meaning of human-centered design for different people.

Target Audience:

Anyone with or without experience in human-centered design is invited to participate and learn how human-centered design methods can be applied to learning analytics contexts.

Takeaways: 

  1. Define human-centered methodology within the learning analytics field.
  2. Understanding how to apply the process, methods, and tools in HCLA considering a human wellbeing perspective.
  3. Initial design for an HCLA design project.

Preparation and Pre-requisites: 
Read: Buckingham Shum, S., Ferguson, R., & Martinez-Maldonado, R. (2019). Human-Centred Learning Analytics. Journal of Learning Analytics, 6(2), 1-9.
Materials: Access to Google Suite

Workshop Leader Names & Affiliations: 
Yannis Dimitriadis, Universidad de Valladolid, Spain
Fabio Campos, New York University, USA
Juan Pablo Sarmiento, New York University, USA
LuEttaMae Lawrence, University of California, Irvine, USA
Alejandra Martínez-Monés, Universidad de Valladolid, Spain
Khadija El Aadmi Laamech, Universitat Pompeu Fabra, Spain
Patricia Santos, Universitat Pompeu Fabra, Spain
Carla Barreiros, Graz University of Technology, Austria

W10. Toward Trusted Learning Analytics

Title: Toward Trusted Learning Analytics

Description

Learning analytics, as a field of research and practice, is currently positioned at the intersection of two adverse realities. Recent technological advances allow for unprecedented data collection possibilities, in terms of both quantity and quality (Joksimović et al., 2019). However, ethical and privacy concerns related to the utilization of available data represent a critical issue that needs to be addressed to enable the full proliferation of learning analytics. The pertinence of this issue can be observed in recent examples and in events that coincided with the emergence of learning analytics. Specifically, the ideas behind the former educational technology company inBloom Inc. align almost perfectly with the goals outlined in the learning analytics manifesto. Nevertheless, despite enormous funding and political support, inBloom failed to gain public trust, ending in a backlash over “inBloom’s intended use of student data, surfacing concerns over privacy and protection” (Bulger et al., 2017, p. 4). More recent events involving Facebook and Cambridge Analytica, numerous data breach scandals resulting in billions of dollars in damages and fines, and failures to use data ethically do not help raise trust in data and analytics in general, or in learning analytics in particular.

The aim of this workshop is to demonstrate our recent work on developing privacy-preserving learning analytics. This goes beyond just anonymization as we also account for re-identification risk based on the uniqueness of individuals’ attributes. We will discuss a variety of methods that provide measurable and provable mitigation mechanisms for maintaining learners’ privacy. We will also show that applying these mitigation solutions to the data will not prevent us from achieving our utility goals with LA.
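
As a generic illustration of the uniqueness-based re-identification risk mentioned above (not the workshop's actual toolbox), the following sketch checks how many records are unique on a set of quasi-identifiers, the quantity that k-anonymity bounds. The columns and data are invented.

```python
import pandas as pd

# Hypothetical de-identified student records.
df = pd.DataFrame({
    "program": ["CS", "CS", "EDU", "EDU", "CS"],
    "year":    [1, 1, 2, 2, 3],
    "gender":  ["F", "M", "F", "F", "M"],
})

# Quasi-identifiers: attributes that, combined, may single someone out.
quasi_identifiers = ["program", "year", "gender"]
group_sizes = df.groupby(quasi_identifiers)["program"].transform("size")

# Records in a group of size 1 are unique on these attributes, hence at
# high re-identification risk; k-anonymity requires every group size >= k.
print(f"unique records: {(group_sizes == 1).mean():.0%}")
print(f"smallest group (k): {group_sizes.min()}")
```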

This workshop aims at contributing to the discourse of developing privacy-enhanced learning analytics. Participants will have an opportunity to explore in practice a learning analytics toolbox developed based on the “privacy by design” principles, incorporating some of those novel algorithms.

Activities

  1. Participants will be introduced to privacy-enhanced learning analytics.
  2. Participants will have an opportunity to explore in practice a learning analytics toolbox developed based on the “privacy by design” principles.
  3. Participants will be invited to engage in guided discussions around the emerging challenges related to data privacy.

Target Audience

The target audience can be very broad and includes anyone interested in developing learning analytics models, using and sharing learners’ data or contributing to the discourse on developing privacy-enhanced learning analytics. 

Takeaways

  1. The level of data acquisition and handling raises many ethical issues, security and data privacy concerns (Pardo & Siemens, 2014). It is important that we do not compromise learners’ privacy and security to achieve learning analytics goals, as beneficial as they might be to the learners.
  2. It is equally important to ensure that we can still derive useful insights and results with learning analytics even when using privacy-protected data.

Preparations

  1. Read: Joksimović S., Marshall R., Rakotoarivelo T., Ladjal D., Zhan C., Pardo A. (2022) Privacy-Driven Learning Analytics. In: McKay E. (eds) Manage Your Own Learning Analytics. Smart Innovation, Systems and Technologies, vol 261. Springer, Cham. https://doi.org/10.1007/978-3-030-86316-6_1
  2. Bring a laptop

Instructors:

  • Dr Srecko Joksimovic, University of South Australia
  • Dr Djazia Ladjal, Practera
  • Dr Thierry Rakotoarivelo, Data61
  • Alison Li, Practera
  • Dr Chen Zhan, University of South Australia

T1: Introduction to Learning Analytics

Title: Introduction to Learning Analytics

Description:

This tutorial is designed for everyone with an interest in increasing the impact of their learning analytics research. The tutorial will begin with a short introduction to the field and to the learning analytics community. It will go on to identify significant challenges that learning analytics needs to address, and factors that should be taken into account when implementing analytics, including ethical considerations related to development and implementation. As a participant, you’ll have opportunities to relate these challenges to your own work, and to consider how your research is situated in the field. You’ll be encouraged to reflect on how your work aligns with the learning analytics cycle, how it contributes to the evidence base in the field, and ways in which you can structure your work to increase its impact.

Activities:

The tutorial will include opportunities to share ideas and experiences using Google Docs and Padlet or similar openly accessible online tools.

Target Audience:

All are welcome

Advanced preparation:

None

Instructor: Rebecca Ferguson, Open University UK

T2: Introduction to Writing Analytics

Title: Introduction to Writing Analytics

Description:

Writing Analytics is a subfield of Learning Analytics that focuses on how learners generate written texts and on ways to optimise that process. It uses natural language processing (NLP) and machine learning technologies to analyse texts, and can be used to provide automated formative feedback on student writing. An example tool that provides instant automated feedback on writing is ‘AcaWriter’, developed by the Connected Intelligence Centre, University of Technology Sydney. The tool captures specific rhetorically salient structures in student writing and provides feedback for improvement aligned with the learning context. In this tutorial, participants will be introduced to the affordances of writing analytics using learning designs and tools such as AcaWriter.
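
As a toy illustration of the general idea, and emphatically not AcaWriter's actual NLP pipeline, the sketch below matches invented rhetorical cue phrases in a sentence and attaches invented feedback messages.

```python
import re

# Hypothetical cue-phrase patterns mapped to feedback text.
CUES = {
    r"\bin contrast\b|\bhowever\b": "Contrast move found: good critical framing.",
    r"\bthis study\b|\bwe argue\b": "Position move found: your stance is explicit.",
}

def toy_feedback(text):
    """Return feedback for each cue pattern found in the text."""
    notes = [msg for pattern, msg in CUES.items()
             if re.search(pattern, text, flags=re.IGNORECASE)]
    return notes or ["No rhetorical moves detected: consider signalling "
                     "your argument more explicitly."]

for note in toy_feedback("However, we argue that feedback timing matters."):
    print(note)
```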

Activities:

  • An overview of writing analytics techniques with examples of their educational applications for a basic understanding of the topic.
  • An introduction to AcaWriter and the design of automated feedback for classroom contexts.
  • Group activity: Designing writing feedback for tools
  • Technical hands-on: Automated Feedback Prototype in Python notebook (Colab)

Target Audience:

All researchers, educators and students interested in learning about writing analytics and automated writing feedback can attend. Prior technical knowledge is useful for the hands-on activity part of the session, but not required.

Outcomes:

1. Understanding of writing analytics and automated feedback affordances

2. Experience in examining writing data and tools for automated analysis

3. Designs for feedback to support student writing improvement and technical implementation of automated feedback.

Preparation:

1. A Google account for Colab login (to use the automated feedback Python notebook).

2. Read: Knight, S., Shibani, A., Abel, S., Gibson, A., Ryan, P., Sutton, N., Wight, R., Lucas, C., Sandor, A., Kitto, K., Liu, M., Mogarkar, R., & Buckingham Shum, S. (2020) AcaWriter: A Learning Analytics Tool for Formative Feedback on Academic Writing. Journal of Writing Research, 12, 1, 141-186. https://doi.org/10.17239/jowr-2020.12.01.06 (Open Access)


Instructors: Shibani Antonette, University of Technology Sydney & Andrew Gibson, Queensland University of Technology

T3: Scaling Learning Analytics—A people-centred Approach

Title: Scaling Learning Analytics—A people-centred Approach

Description:

Learning analytics promises to provide useful insights into learning and teaching, thereby supporting decision-making and enhancing learning experiences. However, the adoption of learning analytics is often found to be sporadic rather than systematic at an institutional scale. This tutorial introduces the audience to the social factors associated with learning analytics adoption, e.g., political contexts, data culture, and multi-stakeholder views. While the examples presented in the tutorial will be drawn from cases in higher education, the adoption frameworks introduced are applicable to other contexts, and participants will be encouraged to contribute perspectives from contexts beyond higher education. The tutorial will enable participants to take first steps towards scaling learning analytics by considering different needs and priorities among key stakeholders through policy and strategy formation.

Activities

  1. Presentations and discussion of the social factors that influence institutional adoption of learning analytics.
  2. Using the SHEILA framework to develop a people-centred approach to learning analytics adoption.

Target Audience

Anyone interested in exploring the social complexities of learning analytics and approaches to scale learning analytics at an institutional level.

Takeaways

  1. Understanding the disparities among different stakeholders regarding drivers and concerns about learning analytics.
  2. A draft of institutional policy or strategy for learning analytics

Preparations

Relevant reading:

Tsai, Y. S., Whitelock-Wainwright, A., & Gašević, D. (2021). More than figures on your laptop: (Dis)trustful implementation of learning analytics. Journal of Learning Analytics, 8(3), 81-100. https://doi.org/10.18608/jla.2021.7379

Tsai, Y. S., Rates, D., Moreno-Marcos, P. M., Muñoz-Merino, P. J., Jivet, I., Scheffel, M., … & Gašević, D. (2020). Learning analytics in European higher education–trends and barriers. Computers & Education, 103933. https://doi.org/10.1016/j.compedu.2020.103933

Author accepted manuscript is available here: http://yi-shan-tsai.com/wp-content/uploads/2020/05/Accepted_manuscript.pdf

Tsai, Y. S., Moreno-Marcos, P. M., Jivet, I., Scheffel, M., Tammets, K., Kollom, K., & Gašević, D. (2018). The SHEILA framework: Informing institutional strategies and policy processes of learning analytics. Journal of Learning Analytics, 5(3), 5-20. https://doi.org/10.18608/jla.2018.53.2


Instructor: Yi-Shan Tsai,
Department of Human-Centred Computing, Faculty of Information Technology, Monash University

T4: Power to the User: Using Participatory Design to develop Learning Analytics tools

Title: Power to the User: Using Participatory Design to develop Learning Analytics tools

Description
How often do Learning Analytics tools disappoint developers and users? As designers and researchers, we tend to have a clear idea of how we want our LA tools to change the lives of faculty, administrators and students. However, when they are rolled out, we are often shocked by the reality of users who have a different idea of what the tools may be for, fail to see how they fit their practice, or use them in unforeseen, sometimes even negative, ways. For decades, Participatory Design processes have been used in the field of human-computer interaction to address some of these challenges. Involving stakeholders in design processes can empower them and render tools that are adjusted to their needs, routines and values.

In this tutorial, we will present a number of processes, frameworks and examples for engaging in participatory design processes with students, faculty or administrators, emphasizing both opportunities and challenges. Participants will work with a current or past project to develop the first steps of an outline for a participatory design process.

We (Fabio and Juan Pablo) created this workshop based on our past experiences working within LA projects.

Activities

  • Participants will be introduced to the concept of Participatory Design and the reasoning (practical and ethical) behind the implementation of this design methodology.
  • Participants will be introduced, in a very hands-on way, to different processes, frameworks, and examples for Participatory Design.
  • We will discuss key elements of designing experiences that lend themselves to participation of stakeholders, discussing issues of power, shared knowledge, authentic participation, and designing for creativity.
  • Participants will use an example to begin developing an outline for a participatory design process for LA. They will:
    • Choose a design process and decide where stakeholders should be involved.
    • Identify potential risks of stakeholder involvement and co-design solutions to these challenges.
    • Reflect on the potential impact of participatory design of LA tools.

Target Audience
Beginners to human-centered design who are interested in developing Learning Analytics tools or implementations using participatory processes that involve stakeholders.

Takeaways
Participants will:

  • Identify reasons for the use of Participatory Design in Learning Analytics tool development.
  • Learn about Human Centered Design and Participatory Design.
  • Take first steps to designing a participatory process applicable to their own projects.

Preparation and Pre-requisites
  • Ideally, participants should have a specific project (either ongoing, past or future) that can be used as a scenario to work on and redesign using participatory practices. A scenario will be provided for new researchers and designers.
  • For context, we have provided a recent paper from LAK2020 to read before class, and will send a worksheet to complete before the session.

Instructors: Juan Pablo Sarmiento & Fabio Campos,
New York University

T5: Writing and publishing a rigorous learning analytics article

Title: Writing and publishing a rigorous learning analytics article

Description:

This interactive session will focus on developing rigorous learning analytics manuscripts for publication in top tier publications (including but not limited to the Journal of Learning Analytics). Topics will encompass the full breadth of quality criteria, from issues of technical reporting to situating contributions in the literature and knowledge base of the field. Common pitfalls and ways to address them will be discussed. While targeted at manuscript development, the session will be of use to reviewers as well.

Activities:

Participants will have the opportunity to:

  1. Learn the main quality and rigor criteria that will make your learning analytics paper publishable 
  2. Review learning analytics manuscripts using the JLA review criteria to assess their relevance, theoretical grounding, technical soundness and contribution to the field
  3. Evaluate their own manuscripts-in-progress using the JLA criteria, working in small groups to identify strengths and areas for improvement 
  4. Discuss tips from the JLA editors on getting your paper published in the journal

Target Audience:

Those interested in developing publishable learning analytics contributions. In addition, the session will have value for those interested in reviewing or editing a Special Section of the journal.

Takeaways:

  1. Knowledge of the main quality and rigor criteria for getting learning analytics research published
  2. Ideas for how to develop impactful learning analytics papers that make their contribution clear 
  3. Understanding of how JLA assesses submissions based on key policies including the focus and scope of the journal

Advanced preparation:

Required

  1. Read JLA focus and scope https://learning-analytics.info/index.php/JLA/about/editorialPolicies#focusAndScope 
  2. Assess a provided learning analytics manuscript for publishable qualities (following JLA reviewers guidelines)
  3. Bring a draft paper, or something you submitted to the JLA or LAK (accepted or otherwise), that you’re happy to share and discuss

Recommended

  1. Read recent JLA editorials:
    1. When Are Learning Analytics Ready and What Are They Ready For? (2018)
    2. Fostering An Impactful Field of Learning Analytics (2019)
    3. Learning Analytics Impact: Critical Conversations on Relevance and Social Responsibility (2020)

Instructors: JLA Editorial Team

 

T6: An Introduction to Methodologies in Learning Analytics

Title: An Introduction to Methodologies in Learning Analytics

Description:

Learning analytics is a bridge discipline, drawing on methods from education and learning sciences, computing and data science, psychology, statistics, linguistics, etc. Newcomers to the field can be easily overwhelmed. They might, however, be comforted to know that critical thinking and reasoning can go a long way in compensating for a shortage of technical proficiency, which will come with time and practice. In this tutorial, we explore the structure of learning analytics arguments and outline some best practices from the framing of research questions to selection of methods to collaboration on and communication of analytic results.

Activities

Participants will:

  1. Design a research study (in 5 minutes!) using the Heilmeier catechism.
  2. Explore data sets to identify types of variables, missingness, outliers, and other irregularities (see the sketch after this list).
  3. Discuss levels of analytics from description to prediction, explanation, and causal inference.
  4. Identify research questions and methods in published LAK/JLA papers and enter results into the MLA Airtable.
  5. Learn to use R Markdown notebooks (and GitHub) to produce (and collaborate on) reproducible research.
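
For activity 2, a minimal sketch of such data screening might look like this (shown in Python with pandas purely for illustration; the tutorial itself works in R), with a simulated data frame and deliberately injected problems:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "score": rng.normal(70, 10, 100),
    "clicks": rng.poisson(30, 100).astype(float),
})
df.loc[::10, "clicks"] = np.nan   # inject missingness
df.loc[5, "score"] = 250          # inject an outlier

print(df.dtypes)                  # types of variables
print(df.isna().mean())           # share of missing values per column

# Flag outliers with the interquartile-range rule.
q1, q3 = df["score"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = df[(df["score"] < q1 - 1.5 * iqr) | (df["score"] > q3 + 1.5 * iqr)]
print(outliers)
```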

Target Audience:

This tutorial is for anyone just getting started in learning analytics.

Takeaways:

  1. A high-level view of the methods and methodology of learning analytics
  2. A close look at a few published examples
  3. A framework for planning learning analytics explorations
  4. A minimal toolbox for communicating and collaborating effectively.

Advanced preparation:

  1. Software and technology:
    1. Bring a laptop with R and RStudio installed
    2. (optional) Create a GitHub account, verify a working installation of git on your laptop, and install Github Desktop.
  2. Read:
    1. Bergner, Y., Gray, G., & Lang, C. (2018). What does methodology mean for learning analytics? Journal of Learning Analytics, 5(2), 1-8. https://epress.lib.uts.edu.au/index.php/JLA/article/view/6164
    2. (optional) Bergner, Y. (2017). Measurement and its uses in learning analytics. Handbook of learning analytics, 35-48. https://www.solaresearch.org/wp-content/uploads/2017/05/chapter3.pdf


Instructor: Yoav Bergner, New York University

 

T7: Hierarchical Cluster Analysis Heatmaps in R

Title: Hierarchical Cluster Analysis Heatmaps in R

Description:

Throughout learning analytics and much of education research, researchers and practitioners are faced with the task of describing and understanding high dimensionality data, such as clickstream logfile data with hundreds to thousands of rows (students) and columns (variables), yet have few tools to visualize and describe their datasets. Hierarchical Cluster Analysis (HCA) heatmaps are a recently applied technique in learning analytics to visualize and understand high dimensionality data, and importantly include visualizations of relationships and clustering both for the rows (students) and the columns (variables) in your dataset.

HCA heatmaps have been used extensively across multiple fields of research. For example, in molecular biology and cancer research, HCA heatmaps are used to display thousands of patients’ tumors across tens of thousands of different genes, visualizing gene transcript level and activity, correlated to tumor malignancy, to pinpoint the most important genes to target for cancer interventions and treatments.

In comparison to partitioning techniques such as k-means clustering, in which the number of clusters must be provided a priori, hierarchical cluster analysis is unsupervised and builds the clusters from the data provided, and, importantly for education research, is highly robust to missing data. When combined with heatmap visualization techniques, HCA heatmaps provide a useful means to visualize and describe a complex dataset. Recently, HCA heatmaps have been applied to visualizing student clickstream logfile patterns, visually identifying interesting correlations across clusters of students and variables. A central aspect of HCA heatmaps is their ability to cluster and visualize the relationships across a dataset while displaying each datapoint for each individual, without aggregating the data to summary statistics, thus “seeing” each individual across an entire dataset, in context with others who have the most similar data patterns.
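
The tutorial works in R with the ComplexHeatmap package; for readers who want a quick feel for the technique in Python, seaborn's clustermap builds the same kind of hierarchically clustered heatmap. The student-by-variable data below are simulated and the column names invented.

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
# Rows = students, columns = clickstream variables (invented names).
data = pd.DataFrame(rng.normal(size=(50, 6)),
                    columns=["logins", "videos", "forum", "quiz",
                             "assignment", "wiki"])

# Ward linkage on z-scored columns; dendrograms cluster both the rows
# (students) and the columns (variables), as described above.
sns.clustermap(data, method="ward", metric="euclidean", z_score=1, cmap="vlag")
plt.show()
```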

Activities:

  1. Participants will be introduced to the research on cluster analysis, hierarchical cluster analysis, and HCA heatmaps, especially as it is applied in education research and learning analytics.
  2. Participants will generate HCA heatmaps using code provided in R. Participants can bring their own data, or a dataset will be provided.

Target Audience:

Researchers and practitioners who are looking for innovative ways to visualize and describe high dimensionality data.

Takeaways:

  1. Understanding the different types of cluster analysis and how they compare for use in education research, especially k-means versus hierarchical cluster analysis.
  2. Understanding how heatmaps can be used to visualize high dimensionality data.
  3. Create an HCA heatmap in R to visualize a dataset.

Advanced preparation:

  1. Read:
    1. Bowers, A.J. (2010) Analyzing the Longitudinal K-12 Grading Histories of Entire Cohorts of Students: Grades, Data Driven Decision Making, Dropping Out and Hierarchical Cluster Analysis. Practical Assessment, Research & Evaluation (PARE), 15(7), 1-18. http://pareonline.net/pdf/v15n7.pdf
    2. Lee, J., Recker, M., Bowers, A.J., Yuan, M. (2016). Hierarchical Cluster Analysis Heatmaps and Pattern Analysis: An Approach for Visualizing Learning Management System Interaction Data. Presented at the annual International Conference on Educational Data Mining (EDM), Raleigh, NC: June 2016. http://www.educationaldatamining.org/EDM2016/proceedings/paper_34.pdf
    3. Skim the online reference book for the ComplexHeatmap package: Zuguang Gu (2019). ComplexHeatmap Complete Reference. https://jokergoo.github.io/ComplexHeatmap-reference/book/
  2. Bring a laptop with the latest R and RStudio installed. During the session we will install all needed packages and walk through examples.


Instructor: Alex Bowers, Teachers College, Columbia University

 

T8: Systematizing Analytics with Serious Games

Title: Systematizing Analytics with Serious Games

Description:

Serious Games (SGs) have already proven their advantages in different educational environments. By combining SGs with Game Analytics (GA) techniques, we can further improve the lifecycle of serious games, from design to deployment, using a data-driven approach to make more effective SGs and, therefore, foster their large-scale adoption. With Game Learning Analytics (GLA), the goal is to obtain an evidence-based methodology, based on in-game user interaction data, that can provide insight into the game-based educational experience, promoting aspects such as better assessment of players’ learning processes.

The current barrier to obtaining such a methodology is that GA and GLA are usually done from scratch and ad hoc for each SG. In this workshop, we will approach the systematization of the GLA process, from in-game user interaction data acquisition to data analysis. One of the key open issues in learning analytics with SGs is the standardization of the data collected. To address this aspect, we use the Experience API Profile for Serious Games: a profile developed to align with the most common data collected in the serious games domain. Using the xAPI-SG profile allows the production of default analyses for SGs and simplifies the creation of ad-hoc analyses and the sharing of GLA data. These analyses can be performed using common frameworks usually applied in learning analytics (e.g. Jupyter notebooks with Python).
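
For orientation, the sketch below shows the general shape of an xAPI statement that an instrumented serious game might emit. The verb URI is a standard ADL example; the actor, object and result fields are invented and not necessarily the exact xAPI-SG profile vocabulary.

```python
import json

# One xAPI statement: who (actor) did what (verb) to what (object).
statement = {
    "actor": {"name": "player-042", "mbox": "mailto:player042@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://example.com/games/demo/level-1",  # hypothetical game level
        "definition": {"name": {"en-US": "Level 1"}},
    },
    "result": {"score": {"scaled": 0.8}, "success": True},
    "timestamp": "2022-06-15T10:30:00Z",
}
print(json.dumps(statement, indent=2))
```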

Following this process with a simple SG, we will illustrate different purposes of GLA and how different stakeholders (e.g. designers, developers, researchers) can benefit from the application of systematized GLA.

Activities

  1. Introduction to Game Learning Analytics: from serious game design to implementation, evaluation and deployment
  2. The xAPI Serious Games Application Profile: the standard to collect interaction data from serious games
  3. From xAPI-SG data to analyzable data
  4. Default and ad-hoc analytics for serious games
  5. Hands-on workshop: small working groups will relate analyses to game design and deployment in a simple serious game, or work with pre-existing serious game user-interaction data in xAPI-SG

Target Audience

Anyone interested in applying learning analytics techniques to serious games, for different purposes including assessment. Python and Jupyter notebooks knowledge is beneficial but not a strong requirement.

Takeaways

  1. Perspective on how learning analytics for serious games can be systematized using standards.
  2. Understanding how in-game user interaction data can be used to obtain more effective SGs.
  3. Systematization of the data collection for other people with previous experience in data analysis.

Preparations or references

Cristina Alonso-Fernández, Antonio Calvo-Morata, Manuel Freire, Iván Martínez-Ortiz, Baltasar Fernández-Manjón (2019): Applications of data science to game learning analytics data: a systematic literature review. Computers & Education, Volume 141, November 2019, 103612

Cristina Alonso-Fernández, Ana Rus Cano, Antonio Calvo-Morata, Manuel Freire, Iván Martínez-Ortiz, Baltasar Fernández-Manjón (2019): Lessons learned applying learning analytics to assess serious games. Computers in Human Behavior, Volume 99, October 2019, Pages 301-309

Manuel Freire, Ángel Serrano-Laguna, Borja Manero, Iván Martínez-Ortiz, Pablo Moreno-Ger, Baltasar Fernández-Manjón (2016): Game Learning Analytics: Learning Analytics for Serious Games. In Learning, Design, and Technology (pp. 1–29). Cham: Springer International Publishing.

Ángel Serrano-Laguna, Iván Martínez-Ortiz, Jason Haag, Damon Regan, Andy Johnson, Baltasar Fernández-Manjón (2017): Applying standards to systematize learning analytics in serious games. Computer Standards & Interfaces 50 (2017) 116–123

Cristina Alonso-Fernández, Antonio Calvo-Morata, Manuel Freire, Iván Martínez-Ortiz, Baltasar Fernández-Manjón (2021, in press). Data science meets standardized game learning analytics. IEEE EDUCON Conference.

Preparations

We will use online-accessible analysis tools for part 5 of the tutorial; in particular, Jupyter notebooks and ObservableHQ.

Instructors:

Baltasar Fernández-Manjón, Iván Martínez-Ortiz, Manuel Freire-Morán

eUCM research group

T9: Computational Approaches to Analysing Transitions in Learning

Title: Computational Approaches to Analysing Transitions in Learning

Description
Transitions are an integral part of our learning. We move in and out of educational settings, across workplace contexts and career roles, transferring and readjusting our knowledge, skills, and self-perceptions. When we transition from one context to another, we experience processes of change and the need to adapt.

We define transitions as shifts in one's knowledge, skills, and attitudes; they occur during learning processes, triggered by a change in context that requires individuals to adapt. As lived experiences, transitions can be difficult and exert an emotional and psychological toll, since changes in individuals' knowledge, skills, and identity are influenced by dynamic and non-trivial interactions among a wide range of factors. Individuals experiencing transitions need support as they adapt and restructure their knowledge and practices to new contexts and expectations. Learning analytics (LA) has offered novel ways to understand and support learning. To provide similar support for individuals who transition across contexts, LA needs to extend its current theoretical lenses and methods and embrace new ones.

After introducing the notion of transitions and highlighting the role and relevance of transitions in learning, this tutorial will focus on an investigation of methodologies that could be used in LA to support individuals as they undergo transitions. We explain why new approaches are needed and offer examples of methods that add to our understanding of transitions as a phenomenon characterised by dynamic and non-linear processes that interact with one another at multiple levels and time scales.
Activities
[Part 1]

Topics:

  • Define transitions in learning.
  • Introduce key properties of transitions and what those properties imply in terms of research / analytics methods.

Activity (individual or group task):

  • Participants will be asked to identify the kinds of transitions they might be interested in studying and to think of methods (from the repertoire known to them) they would choose for such an analysis.
[Part 2]

Topics:

  • Limitations of current LA methods in analysing transitions.
  • An overview of complex systems (CS) approaches that allow for examining and understanding properties of transitions (i.e., dynamics, causality, emergence, multi-level, and temporal processes).
  • Showcase of up to three CS methods that allow for capturing properties of transitions (a toy illustration of one such method follows this list).
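The tutorial does not fix the methods in advance, but as a toy illustration of how temporal dynamics can be made computable, the sketch below estimates first-order transition probabilities from coded sequences of learning states. The states, sequences, and coding scheme are entirely hypothetical and stand in for whichever CS method is showcased.

    # Toy sketch (hypothetical states and data): estimate first-order
    # transition probabilities P(next state | current state) from coded
    # sequences, a simple entry point to analysing temporal dynamics.
    from collections import Counter

    sequences = [
        ["explore", "practice", "reflect", "practice"],
        ["explore", "explore", "practice", "reflect"],
    ]

    counts = Counter()
    for seq in sequences:
        counts.update(zip(seq, seq[1:]))  # count adjacent state pairs

    states = sorted({s for seq in sequences for s in seq})
    for src in states:
        total = sum(counts[(src, dst)] for dst in states)
        if total == 0:
            continue  # src never appears as a non-final state
        probs = {dst: counts[(src, dst)] / total for dst in states}
        print(src, probs)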

Activity (individual or group task):

  • Participants will be asked to return to the transitions they identified in Part 1 and to consider whether and how those transitions could be analysed using one or more of the presented CS methods.
[Part 3]

Topics:

  • Research directions opened by the CS approach to transitions
  • Challenges associated with the outlined research directions, including but not limited to: i) data collection and integration; ii) availability of adequate software support; iii) proper theoretical grounding to enable interpretation of the results.

Activity (group work, different groups discussing different aspects):

  • Groups brainstorm research objectives and questions opened up by what they have learned about the notion of transitions and the CS approach.
  • Groups discuss challenges associated with the proposed lines of research; depending on the number of groups, each group may cover several challenges, or there may be one group per identified challenge so that each is discussed more thoroughly.

Target Audience

  • PhD students exploring new directions
  • Experienced researchers interested in alternative methodologies to approach their research questions

Takeaways
After attending this tutorial, learners will be able to:

  1. Describe the concept of transitions in learning and recognise the importance of studying transitions in different learning contexts
  2. Detail the limitations of current methodologies and the need to examine dynamics and causality to understand transitions in learning
  3. Describe methodologies that could be used to understand and evaluate transitions in learning
  4. Identify research directions and questions relevant to understanding and facilitating transitions in learning

Preparation and Pre-requisites
No prerequisites; anyone who is interested is welcome to attend.

Instructors:

Jelena Jovanovic (University of Belgrade) and Sasha Poquet (Learning Planet Institute)

T10: Introduction to Bias Research in Learning Analytics

Title: Introduction to Bias Research in Learning Analytics

Topic

As predictive models are deployed across many aspects of learning analytics, it has become increasingly important to identify potential biases in them. Recent work on bias in learning analytics offers insights into technical and humanistic approaches to bias research that can be valuable for designers, developers, and practitioners. In this tutorial, I will begin by establishing why bias is a serious problem for learning analytics that needs to be addressed immediately to ensure equitable student outcomes. Then, I will present an overview of the theoretical perspectives and technical approaches currently used to identify and mitigate bias in predictive modeling. Lastly, I will juxtapose these approaches with a humanistic view of the problem of bias to highlight the pitfalls of conceptualizing this issue predominantly as a technical problem. I hope to provide both a practical and a critical introduction to bias research in learning analytics.
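To give a concrete flavour of the technical side, one common family of checks compares a model's error rates across demographic groups. The sketch below is a hedged illustration with fabricated toy data, not the tutorial's materials; the groups, labels, and predictor are hypothetical.

    # Hedged sketch (toy data): compare per-group false negative rates of a
    # hypothetical "course completion" predictor; large gaps across groups
    # are one common signal of predictive bias.
    import pandas as pd

    df = pd.DataFrame({
        "group":     ["A", "A", "A", "B", "B", "B"],
        "actual":    [1,   1,   0,   1,   1,   0],  # 1 = completed course
        "predicted": [1,   0,   0,   1,   1,   1],
    })

    def false_negative_rate(sub):
        positives = sub[sub.actual == 1]
        return (positives.predicted == 0).mean() if len(positives) else float("nan")

    print(df.groupby("group").apply(false_negative_rate))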

Activities

Participants will be introduced to the theory and practice surrounding bias research in learning analytics, with a particular focus on predictive modeling. 

Target Audience

Designers, developers, researchers, and practitioners of learning analytics who are interested in developing an understanding of bias.

Takeaways

  1. Initial understanding of bias and its implications.
  2. Practical knowledge of current approaches used to address bias in predictive modeling. 
  3. More open questions than answers on the complex issue of bias that future research in learning analytics needs to tackle.

Preparations

Read: 

  1. Baker, R. S., & Hawn, A. (2021). Algorithmic bias in education. International Journal of Artificial Intelligence in Education, 1-41. https://edarxiv.org/pbmvz/download?format=pdf 
  2. (Optional) Holstein, K., & Doroudi, S. (2021). Equity and Artificial Intelligence in Education: Will "AIEd" Amplify or Alleviate Inequities in Education? arXiv preprint arXiv:2104.12920. https://arxiv.org/pdf/2104.12920
  3. (Optional) Madaio, M., Blodgett, S. L., Mayfield, E., & Dixon-Román, E. (2021). Confronting Structural Inequities in AI for Education. arXiv preprint arXiv:2105.08847. https://arxiv.org/pdf/2105.08847 

Instructor: Shamya Karumbaiah, University of Wisconsin-Madison

 