Projects

Development-Focused

InsightXR: Scalable AI Analytics for Immersive Learning

InsightXR is a cutting-edge AI-powered analytics platform designed to enhance immersive learning experiences by providing scalable, data-informed behavioral insights without relying on traditional log mining. By leveraging GPT Vision and multimodal AI analysis, InsightXR dynamically interprets learner interactions, gestures, and engagement patterns in extended reality (XR) environments, enabling adaptive feedback and personalized learning trajectories.

GitHub page:

Collaborators: David Awoyemi

  • Log-Free, Scalable AI Analytics: Unlike conventional learning analytics systems that rely on log data, InsightXR harnesses real-time, vision-based AI processing to track learner engagement, cognitive load, and interaction patterns in immersive spaces.
  • Vision-Informed Behavioral Insights: Utilizing GPT Vision, InsightXR detects micro-behaviors, gaze shifts, hand movements, and posture dynamics to infer learner intent, engagement, and problem-solving strategies.
  • Scalable & Efficient Data Processing: By eliminating the need for log mining and manual data annotation, InsightXR ensures efficient, scalable deployment across diverse educational settings, from K-12 to higher education and workforce training.
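
As a rough illustration of this log-free pipeline, the sketch below sends a single captured XR frame to a vision-capable model and asks for structured engagement annotations. It assumes the OpenAI Python SDK and the gpt-4o model; the prompt, frame path, and output fields are illustrative placeholders rather than InsightXR's actual implementation.

```python
# Minimal sketch: send one captured XR frame to a vision-capable model and
# request structured engagement annotations. Assumes the OpenAI Python SDK;
# the prompt, frame path, and JSON fields are illustrative only.
import base64
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def annotate_frame(frame_path: str) -> dict:
    """Return hypothetical engagement annotations for a single XR frame."""
    with open(frame_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": (
                    "Describe the learner's gaze direction, hand activity, and "
                    "posture in this XR frame. Reply as JSON with keys "
                    "'gaze', 'hands', 'posture', and 'engagement_estimate' (0-1)."
                )},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return json.loads(response.choices[0].message.content)

# Example: annotations = annotate_frame("frames/learner_0142.jpg")
```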

Teacher Analytics – Multimodal Assessment

This project explores the integration of multimodal learning analytics (MMLA) in teacher assessment and professional development. Traditional teacher assessments primarily rely on classroom observations, student feedback, and self-reported reflections. However, such methods often lack real-time, data-driven insights into teaching practices. This project aims to incorporate multimodal data sources, including speech patterns, physiological responses, eye-tracking, and classroom interactions, to develop a more holistic and objective assessment framework for evaluating teacher effectiveness.

Project Page:

Collaborators: Unggi Lee

  • Combine verbal, non-verbal, physiological, and behavioral data to analyze teaching practices.
  • Integrate speech-to-text analysis, facial expression tracking, eye-tracking, and gesture recognition to assess teacher engagement and responsiveness (see the sketch after this list).
  • Provide real-time analytics dashboards with personalized feedback on teaching strategies.
  • Support self-reflection and peer coaching through AI-assisted data visualization.
  • Correlate teacher behaviors with student engagement metrics (e.g., gaze attention, participation rates).
  • Develop predictive models for effective pedagogical strategies based on multimodal classroom data.
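
The sketch below illustrates the data-fusion idea: hypothetical per-minute streams (speech rate, gaze-on-students ratio, gesture counts) are aligned on a shared timestamp and combined into a naive composite engagement index. The column names, file layout, and equal weighting are assumptions for illustration, not the project's actual schema.

```python
# Sketch: align hypothetical multimodal streams on a shared timestamp and
# compute a naive composite engagement index. Column names, weights, and
# file layout are illustrative assumptions.
import pandas as pd

def composite_engagement(speech_csv: str, gaze_csv: str, gesture_csv: str) -> pd.DataFrame:
    speech = pd.read_csv(speech_csv, parse_dates=["timestamp"])    # e.g., words_per_min
    gaze = pd.read_csv(gaze_csv, parse_dates=["timestamp"])        # e.g., gaze_on_students
    gesture = pd.read_csv(gesture_csv, parse_dates=["timestamp"])  # e.g., gesture_count

    # Resample each stream to one-minute bins so the modalities line up.
    streams = []
    for df, col in [(speech, "words_per_min"),
                    (gaze, "gaze_on_students"),
                    (gesture, "gesture_count")]:
        streams.append(df.set_index("timestamp")[col].resample("1min").mean())

    merged = pd.concat(streams, axis=1).dropna()

    # Min-max normalize each modality, then average with equal (assumed) weights.
    normalized = (merged - merged.min()) / (merged.max() - merged.min())
    merged["engagement_index"] = normalized.mean(axis=1)
    return merged

# Example: composite_engagement("speech.csv", "gaze.csv", "gesture.csv")
```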

AI-Powered Teacher Simulation with Immersive Analytics

This project develops an AI-driven, immersive teacher training and simulation platform that supports preservice teachers through real-time, adaptive feedback and immersive analytics. By integrating AI-powered teacher roleplay, interactive scenarios, and multimodal analytics, it helps educators develop instructional strategies, classroom management skills, and pedagogical decision-making in a risk-free, scalable virtual environment.

Project Page 1: GEAR

Project Page 2: TeacherGen@i

Collaborators: Unggi Lee, Jieun Lim, Sumin Hong, Taeyeon Eom, Juno Hwang, Hyojong Sohn

  • AI-Powered Teacher Simulation: Preservice teachers engage in interactive roleplay scenarios where AI-generated students react dynamically to different teaching strategies, classroom situations, and instructional decisions.
  • Immersive Learning Analytics: The system tracks verbal and non-verbal interactions, questioning techniques, engagement levels, and adaptive teaching strategies, providing actionable insights for growth.
  • Real-Time Adaptive Feedback: AI analyzes teacher responses, instructional methods, and student engagement to offer personalized coaching and recommendations for improvement.
  • Scalable, Data-Informed Teacher Development: Unlike traditional practicum experiences that are resource-intensive and inconsistent, TeacherGen@i provides scalable, high-fidelity practice sessions that adapt to individual learning needs.
  • Flexible and Context-Specific Scenarios: Users can practice a range of teaching situations, including differentiated instruction, handling disruptions, facilitating discussions, and implementing AI-driven lesson plans.
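
A minimal sketch of the roleplay loop, assuming an OpenAI-style chat API: an AI "student" persona responds turn by turn to the preservice teacher's instructional moves. The persona prompt and model name are illustrative placeholders, not the platform's actual configuration.

```python
# Sketch: an AI "student" persona that reacts turn by turn to a preservice
# teacher's instructional moves. The persona prompt and model name are
# illustrative assumptions, not the platform's actual configuration.
from openai import OpenAI

client = OpenAI()

STUDENT_PERSONA = (
    "You are a 7th-grade student who is easily distracted and unsure about "
    "fractions. Respond in character to the teacher's last message, showing "
    "realistic confusion, engagement, or off-task behavior as appropriate."
)

def run_roleplay():
    history = [{"role": "system", "content": STUDENT_PERSONA}]
    while True:
        teacher_turn = input("Teacher: ")
        if teacher_turn.strip().lower() == "quit":
            break
        history.append({"role": "user", "content": teacher_turn})
        reply = client.chat.completions.create(model="gpt-4o", messages=history)
        student_turn = reply.choices[0].message.content
        history.append({"role": "assistant", "content": student_turn})
        print(f"Student: {student_turn}")

if __name__ == "__main__":
    run_roleplay()
```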

Small Language Model Design Tailored to AI Ethics Education and Learning Experience Design

As AI systems become more embedded in education, ethical concerns surrounding AI bias, transparency, and responsible use must be addressed through AI Ethics Education. This project focuses on designing a Small Language Model (SLM) tailored for AI ethics education and Learning Experience Design (LXD) to enhance interactive, contextual, and pedagogically grounded learning experiences. Unlike large-scale generalist AI models, this SLM will be fine-tuned for educational dialogue, ethical case analysis, and adaptive learning experiences to support ethical reasoning and AI literacy.

GitHub page:

Collaborators:

  • Train an SLM specifically for AI ethics discourse, integrating real-world ethical dilemmas, AI governance frameworks, and ethical AI design principles (a minimal fine-tuning sketch follows this list).
  • Ensure the model is optimized for educational use cases, focusing on transparency, explainability, and controlled outputs.
  • Design interactive learning experiences using SLM-driven chatbots, simulations, and case-based reasoning exercises.
  • Personalize learning pathways by adapting content to learners’ ethical reasoning and decision-making progress.
  • Ensure ethical AI design and model alignment with responsible AI principles: develop bias-mitigation techniques for SLM-based ethics learning environments and align model behavior with fairness, accountability, and transparency (FAccT).
  • Pilot and evaluate SLM effectiveness in educational settings: deploy the model in higher education AI ethics courses and professional development workshops, and measure learning outcomes through student engagement, critical thinking development, and ethical decision-making improvements.
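
A minimal fine-tuning sketch using Hugging Face Transformers. The base checkpoint (gpt2 as a stand-in for the chosen SLM), the dataset file, and the hyperparameters are assumptions for illustration, not the project's final training recipe.

```python
# Sketch: supervised fine-tuning of a small causal language model on AI-ethics
# dialogue data with Hugging Face Transformers. The base checkpoint, dataset
# file, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "gpt2"  # placeholder small model; swap in the chosen SLM checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Assumed format: a JSONL file where each record has a "text" field containing
# one ethics-dialogue example (case description, question, target response).
dataset = load_dataset("json", data_files="ethics_dialogues.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="slm-ethics", num_train_epochs=3,
                           per_device_train_batch_size=4, learning_rate=5e-5),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```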

ETHOBOT: AI-Powered Ethical Learning Assistant

ETHOBOT is an AI-driven, all-purpose learning assistant designed to support students and educators in exploring AI ethics education, ethical principles, and dilemma ideation. Acting as an adaptive and flexible AI agent, ETHOBOT provides personalized insights, case-based learning, and interactive dialogue to facilitate ethical reasoning and decision-making in AI-related contexts.

GitHub page:

Collaborators: Sumin Hong, Hyeongjong Han, Jihyun Rho, Laura J. McNeill

  • AI-Powered Ethics Companion: ETHOBOT assists learners by providing real-time explanations, examples, and structured guidance on AI ethics topics, from bias and fairness to privacy and accountability.
  • Dynamic Dilemma Ideation: The AI agent helps students generate, explore, and debate ethical dilemmas related to AI development and deployment, fostering critical thinking and ethical reasoning.
  • Flexible, Context-Aware Support: ETHOBOT adapts to different learning needs, contexts, and instructional goals, functioning as both a tutor (providing structured content) and a collaborator (engaging in ethical discussions).
  • Scalable and Multi-Purpose: Designed for diverse educational environments, ETHOBOT supports classroom discussions, self-guided study, professional development, and research applications in AI ethics education.
  • Insights-Driven Learning Analytics: The system tracks patterns in ethical reasoning, engagement, and misconceptions, offering data-informed feedback to educators and learners.
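
To illustrate the insights-driven analytics, the sketch below defines a hypothetical session record for one dilemma-ideation exchange and a naive keyword-based misconception tagger. Field names and misconception categories are assumptions, not ETHOBOT's actual schema; a deployed system would use a trained classifier rather than keyword matching.

```python
# Sketch: a hypothetical session record for one dilemma-ideation exchange and
# a naive keyword-based misconception tagger. Field names and categories are
# illustrative assumptions, not ETHOBOT's actual analytics schema.
from dataclasses import dataclass, field
from datetime import datetime

# Toy lookup of common misconception cues; a real system would use a
# classifier rather than keyword matching.
MISCONCEPTION_CUES = {
    "algorithms are neutral": "assumes_algorithmic_neutrality",
    "more data always fixes bias": "assumes_data_quantity_fixes_bias",
    "ai decides on its own": "overattributes_agency_to_ai",
}

@dataclass
class DilemmaTurn:
    learner_id: str
    dilemma_topic: str           # e.g., "facial recognition in schools"
    learner_response: str
    timestamp: datetime = field(default_factory=datetime.now)
    misconception_tags: list[str] = field(default_factory=list)

def tag_misconceptions(turn: DilemmaTurn) -> DilemmaTurn:
    """Attach any matching misconception tags to the learner's response."""
    text = turn.learner_response.lower()
    turn.misconception_tags = [tag for cue, tag in MISCONCEPTION_CUES.items()
                               if cue in text]
    return turn

# Example:
turn = DilemmaTurn("s01", "AI surveillance",
                   "I think algorithms are neutral, so the system is fair.")
print(tag_misconceptions(turn).misconception_tags)
# -> ['assumes_algorithmic_neutrality']
```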

Cybersecurity Minds: AI-Enhanced Microlearning Game for Cyber Ethics and STEM Education

Cybersecurity Minds is an interactive microlearning game designed to teach students core concepts of cybersecurity, AI ethics, and STEM reasoning through immersive, scenario-based missions. Blending gamified learning with evidence-centered design, the project aims to cultivate digital literacy, critical thinking, and ethical decision-making among Gen Z learners.

GitHub Page: (Coming Soon)
Collaborators: Jewoong Moon

Cybersecurity Minds engages learners with fast-paced micro-missions involving codebreaking, password logic, ethical dilemmas, and network protection tasks. Each challenge is designed to deliver an “aha moment,” fusing digital problem-solving with math, science, and ethics education. Through simulated interactions and ethical prompts, learners encounter real-world scenarios—from data privacy breaches to AI surveillance dilemmas—while developing structured responses based on ethical reasoning and accountability. Each mission is a self-contained 3–5 minute learning experience following the principles of Evidence-Centered Design (ECD). Learners receive immediate feedback, scaffolded explanations, and performance-based insights.

Cybersecurity concepts are grounded in core STEM topics:

  • Math: Combinatorics, pattern logic, encryption
  • Science: Signal transmission, human-AI perception differences
  • Technology & Engineering: Secure systems design, network topology, AI fairness

The game architecture tracks learner decisions and misconceptions, enabling analytics on reasoning patterns, ethical perspectives, and progress. Educators and researchers can use this data to support reflection and instructional design.
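
To make the codebreaking and decision-tracking ideas concrete, here is a toy Caesar-cipher micro-mission checker that scores an attempt and returns an event record for the analytics log. The mission content and logged fields are illustrative assumptions, not the game's actual implementation.

```python
# Sketch: a toy Caesar-cipher codebreaking micro-mission that checks the
# learner's answer and records the attempt for later analytics. Mission
# content and logged fields are illustrative assumptions.
import time

def caesar_decrypt(ciphertext: str, shift: int) -> str:
    """Shift letters backwards by `shift`, leaving other characters alone."""
    decrypted = []
    for ch in ciphertext:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            decrypted.append(chr((ord(ch) - base - shift) % 26 + base))
        else:
            decrypted.append(ch)
    return "".join(decrypted)

def run_mission(ciphertext: str, shift: int, attempt: str, learner_id: str) -> dict:
    """Score one attempt and return an event record for the analytics log."""
    answer = caesar_decrypt(ciphertext, shift)
    correct = attempt.strip().lower() == answer.lower()
    return {
        "learner_id": learner_id,
        "mission": "caesar_codebreaking",
        "correct": correct,
        "attempt": attempt,
        "expected": answer,
        "timestamp": time.time(),
    }

# Example: the plaintext "Protect your data" shifted by 3.
event = run_mission("Surwhfw brxu gdwd", 3, "protect your data", "s42")
print(event["correct"])  # -> True
```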

LEXBOT: AI Coaching Chatbot for Learner Experience Design (LXD)

LEXBOT is an AI-powered coaching chatbot designed to support prospective instructional designers in applying Learner Experience Design (LXD) principles to their digital learning projects.
Originally developed for a summer course on software technology, LEXBOT helps students critically reflect on their e-learning prototypes—especially those developed using Adobe XD—without relying on intensive readings or teaching assistant support.

While many instructional design programs still follow conventional frameworks (e.g., ADDIE), LEXBOT introduces a more experience-driven, learner-centered perspective. It offers real-time guidance through interactive dialogue, encouraging designers to consider motivation, flow, accessibility, and emotional engagement from the learner’s point of view.

🔍 How LEXBOT Works

LEXBOT features five distinct interactive modes:

  • 🎯 Goal Check Mode: Align your learning activity with both cognitive and experiential objectives.
  • 🧠 Reflection Mode: Think critically about your design choices—what will the learner feel, notice, or struggle with?
  • 🧍 Persona Simulation Mode: See your design through the lens of a specific learner, such as a visually impaired student.
  • 📘 Learn LXD Mode: Access micro-lessons on LXD theory, frameworks, and practical tips—one concept at a time.
  • 🛠️ Co-Design Mode: Get creative support to expand, iterate, or reframe your idea based on UX and learning principles.

Each mode is grounded in research on reflective practice, learner-centered design, and adaptive scaffolding (e.g., Tawfik et al., 2021; Floor, 2021; Schmidt et al., 2020). LEXBOT encourages self-paced exploration and just-in-time feedback, effectively serving as a virtual design studio coach.

It also captures interaction data, including design questions, reflections, and decisions, to support both learner growth and instructional research. These analytics can be used to assess how students engage with LXD principles and where design misconceptions arise.
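
A minimal sketch of how the five modes could be routed to mode-specific system prompts, assuming an OpenAI-style chat API. The prompt wording, mode keys, and model name are illustrative, not LEXBOT's actual configuration.

```python
# Sketch: routing a designer's request to one of LEXBOT's five modes by
# swapping the system prompt. Prompt wording, mode keys, and model name are
# illustrative assumptions, not LEXBOT's actual configuration.
from openai import OpenAI

client = OpenAI()

MODE_PROMPTS = {
    "goal_check": "Help the designer align this learning activity with cognitive and experiential objectives.",
    "reflection": "Ask probing questions about what the learner will feel, notice, or struggle with in this design.",
    "persona_simulation": "Respond as a specific learner persona (e.g., a visually impaired student) experiencing the design.",
    "learn_lxd": "Teach one LXD concept at a time with a short micro-lesson and a practical tip.",
    "co_design": "Offer concrete ideas to expand, iterate, or reframe the design using UX and learning principles.",
}

def coach(mode: str, designer_message: str) -> str:
    """Return one coaching turn for the selected mode."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": MODE_PROMPTS[mode]},
            {"role": "user", "content": designer_message},
        ],
    )
    return response.choices[0].message.content

# Example:
# print(coach("persona_simulation", "My module uses color-coded progress bars only."))
```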