TY - JOUR
T1 - Incremental learning model inspired in Rehearsal for deep convolutional networks
AU - Muñoz, David
AU - Narváez, Camilo
AU - Cobos, Carlos
AU - Mendoza, Martha
AU - Herrera, Francisco
JO - Knowledge-Based Systems
VL - 208
SP - 106460
PY - 2020
DA - 2020/11/15/
SN - 0950-7051
DO - https://doi.org/10.1016/j.knosys.2020.106460
UR - http://www.sciencedirect.com/science/article/pii/S095070512030589X
KW - Artificial Neural Network
KW - Deep Learning
KW - Deep convolutional networks
KW - Rehearsal
KW - Incremental learning
AB - In Deep Learning, properly training a model with data of sufficient quantity and quality is crucial to achieving good performance. In some tasks, however, the necessary data is not available all at once and only becomes available over time; in such cases, incremental learning is used to train the model correctly. An open problem remains in the form of the stability–plasticity dilemma: how to incrementally train a model so that it responds well to new data (plasticity) while also retaining previous knowledge (stability). In this paper, an incremental learning model inspired by Rehearsal (the recall of past memories based on a subset of data), named CRIF, is proposed, and two instances of the framework are evaluated: one using random selection of representative samples (Naive Incremental Learning, NIL), the other using the Crowding Distance and Best vs. Second Best metrics in conjunction for this task (RILBC). Experiments were performed on five datasets (MNIST, Fashion-MNIST, CIFAR-10, Caltech 101, and Tiny ImageNet) in two different incremental scenarios: a strictly class-incremental scenario, and a pseudo class-incremental scenario with unbalanced data. On Caltech 101, Transfer Learning was used, and in this scenario, as well as on the other four datasets, the proposed method NIL achieved better results on most of the quality metrics than comparison algorithms such as RMSProp Inc (baseline) and iCaRL (a state-of-the-art proposal), and also outperformed the other proposed method, RILBC. NIL also requires less time to achieve these results.
ER - 