Wearable Sensor-Based Few-Shot Continual Learning on Hand Gestures for Motor-Impaired Individuals via Latent Embedding Exploitation
Abstract
Cite
Text
Bin Rafiq et al. "Wearable Sensor-Based Few-Shot Continual Learning on Hand Gestures for Motor-Impaired Individuals via Latent Embedding Exploitation." International Joint Conference on Artificial Intelligence, 2024. doi:10.24963/ijcai.2024/823
Markdown
[Bin Rafiq et al. "Wearable Sensor-Based Few-Shot Continual Learning on Hand Gestures for Motor-Impaired Individuals via Latent Embedding Exploitation." International Joint Conference on Artificial Intelligence, 2024.](https://mlanthology.org/ijcai/2024/rafiq2024ijcai-wearable/) doi:10.24963/ijcai.2024/823
BibTeX
@inproceedings{rafiq2024ijcai-wearable,
title = {{Wearable Sensor-Based Few-Shot Continual Learning on Hand Gestures for Motor-Impaired Individuals via Latent Embedding Exploitation}},
author = {Bin Rafiq, Riyad and Shi, Weishi and Albert, Mark V.},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2024},
pages = {7438--7446},
doi = {10.24963/ijcai.2024/823},
url = {https://mlanthology.org/ijcai/2024/rafiq2024ijcai-wearable/}
}