TY - JOUR
T1 - Brain-Inspired Continual Learning
T2 - Robust Feature Distillation and Re-Consolidation for Class Incremental Learning
AU - Khan, Hikmat
AU - Bouaynaya, Nidhal Carla
AU - Rasool, Ghulam
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
AB - Artificial intelligence and neuroscience have a long and intertwined history. Advances in neuroscience research have significantly influenced the development of artificial intelligence systems with the potential to retain knowledge much as humans do. Building on foundational insights from neuroscience and existing research in adversarial and continual learning, we introduce a novel framework comprising two key concepts: feature distillation and re-consolidation. The framework distills continual learning (CL) robust features and rehearses them while learning the next task, replicating the mammalian brain's process of consolidating memories by rehearsing distilled versions of waking experiences. Furthermore, the framework emulates the mammalian brain's mechanism of memory re-consolidation, in which novel experiences influence the assimilation of previous experiences via feature re-consolidation. This process incorporates the CL model's new understanding, acquired after learning the current task, into the CL-robust samples of the previous task(s) to mitigate catastrophic forgetting. The proposed framework, called Robust Rehearsal, circumvents a limitation of existing CL frameworks: their reliance on pre-trained Oracle CL models to pre-distill CL-robustified datasets for training subsequent CL models. Extensive experiments on three datasets, CIFAR10, CIFAR100, and a real-world helicopter attitude dataset, demonstrate that CL models trained with Robust Rehearsal outperform their baseline counterparts. A further series of experiments varying memory size and the number of tasks shows that baseline methods employing Robust Rehearsal outperform those trained without it. Lastly, to shed light on the existence of diverse features, we explore how various optimization objectives, spanning joint, continual, and adversarial learning, affect feature learning in deep neural networks. Our findings indicate that the optimization objective dictates feature learning, which in turn plays a vital role in model performance; this observation further underscores the importance of rehearsing CL-robust samples in alleviating catastrophic forgetting. Our experiments suggest that closely following neuroscience insights can help develop CL approaches that mitigate the long-standing challenge of catastrophic forgetting.
UR - http://www.scopus.com/inward/record.url?scp=85186077578&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85186077578&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2024.3369488
DO - 10.1109/ACCESS.2024.3369488
M3 - Article
AN - SCOPUS:85186077578
SN - 2169-3536
VL - 12
SP - 34054
EP - 34073
JO - IEEE Access
JF - IEEE Access
ER -