InfLoRA: Interference-Free Low-Rank Adaptation for Continual Learning
Abstract
Continual learning requires the model to learn multiple tasks sequentially. In continual learning, the model should possess the ability to maintain its performance on old tasks (stability) and the ability to adapt to new tasks continuously (plasticity). Recently, parameter-efficient fine-tuning (PEFT), which involves freezing a pre-trained model and injecting a small number of learnable parameters to adapt to downstream tasks, has gained increasing popularity in continual learning. Although existing continual learning methods based on PEFT have demonstrated superior performance compared to those not based on PEFT, most of them do not consider how to eliminate the interference of the new task on the old tasks, which inhibits the model from making a good trade-off between stability and plasticity. In this work, we propose a new PEFT method, called interference-free low-rank adaptation (InfLoRA), for continual learning. InfLoRA injects a small number of parameters to reparameterize the pre-trained weights and shows that fine-tuning these injected parameters is equivalent to fine-tuning the pre-trained weights within a subspace. Furthermore, InfLoRA designs this subspace to eliminate the interference of the new task on the old tasks, making a good trade-off between stability and plasticity. Experimental results show that InfLoRA outperforms existing state-of-the-art continual learning methods on multiple datasets. Code is available at https://github.com/liangyanshuo/InfLoRA.
1 Introduction
Continual learning requires the model to learn multiple tasks sequentially [33]. To achieve continual learning, the model should possess two essential abilities, including the ability to keep its performance on the old tasks (stability) and the ability to adapt to the new tasks continuously (plasticity) [33]. Furthermore, two different scenarios are often considered in continual learning, including task-incremental scenario [32] and class-incremental scenario [41]. Task-incremental scenario allows the model to get task identities during inference. On the contrary, class-incremental scenario does not allow the model to get task identities during inference, making the model learn to distinguish all the classes across all the tasks.
Recently, parameter-efficient fine-tuning (PEFT) [16, 15, 18], which involves freezing a pre-trained model and injecting a small number of learnable parameters to adapt to downstream tasks, has gained increasing popularity in continual learning [44, 38, 12], especially in the class-incremental scenario. More specifically, existing continual learning methods based on PEFT [21, 43] inject the learnable parameters into a pre-trained model using some popular PEFT methods such as prompt-tuning [25] or low-rank adaptation (LoRA) [16]. Subsequently, these methods freeze the pre-trained weights and sequentially fine-tune the injected parameters on multiple tasks throughout the continual learning process.
Although continual learning methods based on PEFT have demonstrated superior performance compared to those not based on PEFT [44], most of them do not consider how to eliminate the interference of the new task on the old tasks, which inhibits the model from making a good trade-off between stability and plasticity. Specifically, when learning a new task, existing continual learning methods based on PEFT either reuse the previously learned parameters to adapt to the new task [44, 12] or randomly expand some parameters first and then adapt to the new task [38, 43, 42]. During this process, the interference of the new task on the old tasks exists due to the shared parameters between new and old tasks, which means fine-tuning a pre-trained model on a new task may interfere with the model’s performance on the old tasks. As a result, it is hard for the model to make a good trade-off between stability and plasticity.
In this work, we propose a new PEFT method, called interference-free low-rank adaptation (InfLoRA), for continual learning. The contributions of this work are listed as follows:
- InfLoRA injects a small number of parameters to reparameterize the pre-trained weights and shows that fine-tuning these injected parameters is equivalent to fine-tuning the pre-trained weights within a subspace.
- InfLoRA designs this subspace to eliminate the interference of the new task on the old tasks, making a good trade-off between stability and plasticity.
- Experimental results show that InfLoRA outperforms existing state-of-the-art continual learning methods on multiple datasets.
2 Related Work and Preliminaries
2.1 Related Work
Parameter-Efficient Fine-Tuning Parameter-efficient fine-tuning (PEFT) methods freeze a pre-trained model and inject a small number of learnable parameters to adapt to downstream tasks. In this way, PEFT methods reduce the inefficiency of full fine-tuning methods which fine-tune all the parameters of a pre-trained model to learn downstream tasks. For example, Adapter [15] adds small modules in different layers of Transformers and only tunes these added modules to adapt to downstream tasks. Prompt-tuning [25] and Prefix-tuning [27] insert a set of learnable tokens into the input of the Transformer layers and only tune these tokens to adapt to downstream tasks. Low-rank adaptation (LoRA) [16] reparameterizes the pre-trained weights with low-rank branches and only tunes these branches to adapt to the downstream tasks. Although these methods tune much fewer learnable parameters than full fine-tuning, they always show comparable or even superior performance compared with full fine-tuning [45, 11, 16, 31]. Early PEFT methods focus on natural language processing (NLP). Recently, PEFT methods have also been proposed for computer vision (CV). For example, visual prompt tuning (VPT) [18] and AdaptFormer [6] apply prompt-tuning and Adapter techniques to CV tasks, respectively. Both of them exhibit comparable performance to full fine-tuning.
Continual Learning Early continual learning was usually considered in the context of learning from scratch. Three types of continual learning methods are proposed, including regularization-based methods [46, 20, 1, 23], memory-based methods [2, 7, 3, 39, 28], and expansion-based methods [35, 17, 26]. Regularization-based methods employ a penalty loss (regularization) to prevent important parameters of old tasks from changing too much. Memory-based methods maintain a memory buffer to store information about old tasks. Expansion-based methods dynamically expand the model’s architecture for each new task.
Recently, with the advancements of pre-trained models [13, 10, 9], using pre-trained models for continual learning has gained increasing popularity. Some continual learning methods fully fine-tune the pre-trained models [4, 49], which has been shown to be inefficient. Other methods explore PEFT methods in continual learning. For instance, some existing continual learning methods [38, 44, 21, 43] introduce prompt-tuning in continual learning, achieving much higher performance than previous methods that learn from scratch, especially in the class-incremental scenario. The method in [12] introduces a framework in continual learning that can be combined with many existing PEFT methods, such as prompt-tuning, LoRA and Adapter. However, none of these methods considers how to eliminate the interference of the new task on the old tasks, which inhibits the model from making a good trade-off between stability and plasticity.
2.2 Preliminaries
We first introduce low-rank adaptation (LoRA) [16], a popular PEFT method related to our method. Then, we give the problem definition for continual learning.
Low-Rank Adaptation LoRA [16] is one of the most popular PEFT methods. It assumes that the changes of parameters lie in a low-rank space when the model is fully fine-tuned on a downstream task. Specifically, for a linear layer with input dimension $d_I$ and output dimension $d_O$, we represent its weight with $W \in \mathbb{R}^{d_O \times d_I}$. Then, LoRA reparameterizes the pre-trained weight $W$ by expanding a branch with two matrices, $A \in \mathbb{R}^{d_O \times r}$ and $B \in \mathbb{R}^{r \times d_I}$. Typically, the rank $r$ is much smaller than the input dimension $d_I$ and output dimension $d_O$, making $A$ a dimensionality increasing matrix and $B$ a dimensionality reduction matrix. Finally, LoRA modifies the forward propagation in this linear layer as $\boldsymbol{e} = (W + AB)\boldsymbol{h}$. Here, $\boldsymbol{h}$ and $\boldsymbol{e}$ denote the input and output of this layer, respectively. LoRA initializes $A$ as $\boldsymbol{0}$ and initializes $B$ using a Gaussian distribution. During the learning of the downstream tasks, LoRA freezes the pre-trained weight $W$ and only fine-tunes the parameters $A$ and $B$.
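To make the reparameterization concrete, the following PyTorch sketch shows a linear layer with a LoRA branch; the class name, the rank value and the initialization scale are illustrative assumptions rather than details taken from the official LoRA implementation.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """A linear layer reparameterized with a LoRA branch: e = (W + A B) h."""

    def __init__(self, d_in: int, d_out: int, r: int = 10):
        super().__init__()
        # Pre-trained weight W is frozen during downstream fine-tuning.
        self.W = nn.Parameter(torch.randn(d_out, d_in), requires_grad=False)
        # B (dimensionality reduction matrix) is initialized from a Gaussian distribution.
        self.B = nn.Parameter(torch.randn(r, d_in) * 0.01)
        # A (dimensionality increasing matrix) is initialized to zero, so A B = 0 at the start.
        self.A = nn.Parameter(torch.zeros(d_out, r))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h has shape (batch, d_in); the branch adds (A B) h to the frozen path.
        return h @ (self.W + self.A @ self.B).t()
```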
Problem Definition In continual learning, there is a sequence of tasks with different distributions. We define the task sequence as $\mathcal{D} = \{\mathcal{D}_1, \dots, \mathcal{D}_T\}$, where the $t$-th task $\mathcal{D}_t = \{(\boldsymbol{x}_{t,i}, y_{t,i})\}_{i=1}^{n_t}$. Here, $\boldsymbol{x}_{t,i}$ denotes an input sample and $y_{t,i}$ denotes its label. The objective of continual learning is to train a model sequentially on these tasks and ensure that the model performs well on all of them.
We follow existing continual learning methods [43, 44] based on PEFT and assume the model is a pre-trained Vision Transformer (ViT) [10]. Specifically, assume the model is $h_{\boldsymbol{\Phi}}(f_{\boldsymbol{\Theta}}(\cdot))$, where $h_{\boldsymbol{\Phi}}(\cdot)$ is the classifier with parameters $\boldsymbol{\Phi}$ and $f_{\boldsymbol{\Theta}}(\cdot)$ is the pre-trained ViT backbone with pre-trained parameters $\boldsymbol{\Theta}$. Similar to existing work [43], our focus is primarily on the class-incremental scenario, where task identities are unknown during inference. Furthermore, we concentrate on the exemplar-free setting [43, 51], where no historical data can be fetched for rehearsal.
3 Methodology
Figure 1 (a) illustrates the architecture of our InfLoRA within a linear layer. Before learning the $t$-th new task, our InfLoRA expands a LoRA-like branch, which includes a dimensionality reduction matrix $B_t \in \mathbb{R}^{r \times d_I}$ and a dimensionality increasing matrix $A_t \in \mathbb{R}^{d_O \times r}$. Then, the forward propagation of this linear layer is modified as
$$\boldsymbol{e} = \Big(W + \sum_{j=1}^{t} A_j B_j\Big)\boldsymbol{h} = W_{t-1}\boldsymbol{h} + A_t B_t \boldsymbol{h}. \tag{1}$$
Here, $W_{t-1} = W + \sum_{j=1}^{t-1} A_j B_j$. Similar to LoRA, our InfLoRA also initializes the dimensionality increasing matrix $A_t$ as $\boldsymbol{0}$. However, different from LoRA, which employs a Gaussian distribution to initialize the dimensionality reduction matrix, our InfLoRA designs the dimensionality reduction matrix $B_t$ before learning the $t$-th task. During the learning of the $t$-th task, InfLoRA fine-tunes $A_t$ to learn the new task while keeping the pre-trained weight $W$, all the old branches $\{(A_j, B_j)\}_{j=1}^{t-1}$ and the matrix $B_t$ frozen. After learning the $t$-th task, for any given test sample belonging to the learned tasks, the model uses $W_t = W + \sum_{j=1}^{t} A_j B_j$ and (1) to infer its label. This design ensures that our method is compatible with the class-incremental scenario where task identities are unknown during inference.
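A minimal PyTorch sketch of such a branch is given below; the class and argument names are our own and the code is not taken from the released InfLoRA implementation. Only $A_t$ receives gradients, while the merged weight $W_{t-1}$ and the designed $B_t$ stay frozen.

```python
import torch
import torch.nn as nn


class InfLoRABranchLinear(nn.Module):
    """Sketch of an InfLoRA linear layer for task t: e = W_{t-1} h + A_t B_t h."""

    def __init__(self, W_prev: torch.Tensor, B_t: torch.Tensor):
        super().__init__()
        d_out, _ = W_prev.shape
        r = B_t.shape[0]
        # W_{t-1} already contains the pre-trained weight plus all old branches.
        self.W_prev = nn.Parameter(W_prev, requires_grad=False)
        # B_t is designed before training on task t and then kept frozen.
        self.B_t = nn.Parameter(B_t, requires_grad=False)
        # A_t is initialized to zero and is the only trainable tensor.
        self.A_t = nn.Parameter(torch.zeros(d_out, r))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the new branch A_t (B_t h).
        return h @ self.W_prev.t() + (h @ self.B_t.t()) @ self.A_t.t()
```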
In the following subsections, we first build the relationship between our InfLoRA and the method that fine-tunes the pre-trained weight. Specifically, we show that fine-tuning the parameters $A_t$ is equivalent to fine-tuning the pre-trained weights within a subspace spanned by the rows of $B_t$. Note that $B_t$ is designed before learning the $t$-th task, making this subspace pre-designed. Then, building upon this relationship, we introduce how our InfLoRA designs this subspace to eliminate the interference of the new task on the old tasks and make a good trade-off between stability and plasticity.
3.1 Relationship between InfLoRA and Fine-Tuning the Pre-Trained Weight
When the $t$-th task arrives and our method has expanded a new branch, the forward propagation in this layer can be represented by (1). At this time, we can prove the following proposition:
Proposition 1.
When learning the $t$-th task with forward propagation represented by (1), fine-tuning $A_t$ is equivalent to fine-tuning the pre-trained weight $W$ within the subspace $\operatorname{span}\{\boldsymbol{b}_1^t, \dots, \boldsymbol{b}_r^t\}$. Here, $\boldsymbol{b}_i^t$ ($1 \le i \le r$) denotes the $i$-th row vector of $B_t$.
Proof.
When tuning the pre-trained weight $W$ to learn the $t$-th task, we can compute the gradient of $W$ based on the chain rule:
$$\frac{\partial \mathcal{L}_t}{\partial W} = \frac{\partial \mathcal{L}_t}{\partial \boldsymbol{e}} \boldsymbol{h}^{\mathrm{T}}. \tag{2}$$
Here, $\mathcal{L}_t$ denotes the loss function. At this time, the change of $W$ can be denoted as $\Delta W = -\alpha \frac{\partial \mathcal{L}_t}{\partial W}$, where $\alpha$ is the learning rate. Then, we can compute the change of the composed matrix $W_t = W + \sum_{j=1}^{t} A_j B_j$:
$$\Delta_W W_t = \Big(W + \Delta W + \sum_{j=1}^{t} A_j B_j\Big) - \Big(W + \sum_{j=1}^{t} A_j B_j\Big) = \Delta W. \tag{3}$$
Here, we use $\Delta_W W_t$ to denote the change of the composed matrix $W_t$ caused by the change of $W$.
Similarly, when tuning the expanded weight $A_t$, we can get the gradient of $A_t$ based on the chain rule:
$$\frac{\partial \mathcal{L}_t}{\partial A_t} = \frac{\partial \mathcal{L}_t}{\partial \boldsymbol{e}} (B_t \boldsymbol{h})^{\mathrm{T}} = \frac{\partial \mathcal{L}_t}{\partial W} B_t^{\mathrm{T}}. \tag{4}$$
At this time, the change of $A_t$ can be denoted as $\Delta A_t = -\alpha \frac{\partial \mathcal{L}_t}{\partial A_t}$. Then, we can compute the change of the composed matrix $W_t$:
$$\Delta_{A_t} W_t = \Big(W + \sum_{j=1}^{t-1} A_j B_j + (A_t + \Delta A_t) B_t\Big) - \Big(W + \sum_{j=1}^{t} A_j B_j\Big) = \Delta A_t B_t = -\alpha \frac{\partial \mathcal{L}_t}{\partial A_t} B_t = -\alpha \frac{\partial \mathcal{L}_t}{\partial W} B_t^{\mathrm{T}} B_t = \Delta W B_t^{\mathrm{T}} B_t. \tag{5}$$
Here, we use $\Delta_{A_t} W_t$ to denote the change of the composed matrix $W_t$ caused by the change of $A_t$. The fourth equality in (5) holds because of (4). The fifth equality in (5) holds because of (2). (5) shows that $\Delta_{A_t} W_t$ is equal to $\Delta W$ multiplied by the projection matrix $B_t^{\mathrm{T}} B_t$. Since $B_t^{\mathrm{T}} B_t$ projects each row vector of $\Delta W$ into the subspace $\operatorname{span}\{\boldsymbol{b}_1^t, \dots, \boldsymbol{b}_r^t\}$, Proposition 1 holds. ∎
Proposition 1 has demonstrated that using our InfLoRA to train the model is equivalent to directly fine-tuning the pre-trained weight $W$ within the subspace $\operatorname{span}\{\boldsymbol{b}_1^t, \dots, \boldsymbol{b}_r^t\}$. Therefore, before learning the $t$-th task, we can design the matrix $B_t$ such that learning the $t$-th task in the subspace $\operatorname{span}\{\boldsymbol{b}_1^t, \dots, \boldsymbol{b}_r^t\}$ will not interfere with the performance of the model on the old tasks.
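The equivalence stated in Proposition 1 can be checked numerically. The NumPy sketch below uses random matrices and a random loss gradient to confirm that the update induced by tuning $A_t$ equals $\Delta W$ projected onto the row space of $B_t$; all names and sizes are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 12, 3

W = rng.normal(size=(d_out, d_in))
h = rng.normal(size=(d_in, 1))
B = np.linalg.qr(rng.normal(size=(d_in, r)))[0].T   # rows of B are orthonormal
g_e = rng.normal(size=(d_out, 1))                   # dL/de for some loss
lr = 0.1

# Gradient step directly on the composed weight (eqs. (2)-(3)).
dW = -lr * g_e @ h.T
delta_W_Wt = dW

# Gradient step on A_t only (eqs. (4)-(5)).
dA = -lr * (g_e @ h.T) @ B.T
delta_A_Wt = dA @ B

P = B.T @ B                                         # projects rows onto span{b_1,...,b_r}
assert np.allclose(delta_A_Wt, delta_W_Wt @ P)      # Proposition 1
assert np.allclose(delta_A_Wt @ (np.eye(d_in) - P), 0.0)  # update stays in the subspace
```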
3.2 Eliminating the Interference of the New Task on the Old Tasks
We first introduce the desired characteristics that InfLoRA aims to let the subspace $\operatorname{span}\{\boldsymbol{b}_1^t, \dots, \boldsymbol{b}_r^t\}$ have. With these characteristics, InfLoRA can eliminate the interference of the new task on the old tasks and make a good trade-off between stability and plasticity. Then, we introduce how to design the dimensionality reduction matrix $B_t$ so that the subspace $\operatorname{span}\{\boldsymbol{b}_1^t, \dots, \boldsymbol{b}_r^t\}$ has these characteristics.
3.2.1 Desired Characteristics
First, InfLoRA aims to make the subspace $\operatorname{span}\{\boldsymbol{b}_1^t, \dots, \boldsymbol{b}_r^t\}$ orthogonal to the gradients of all the old tasks. In this way, according to Proposition 1, the update of InfLoRA, which can be represented as $\Delta_{A_t} W_t = \Delta W B_t^{\mathrm{T}} B_t$, will also be orthogonal to the gradients of the old tasks. Note that the idea of making the update for the new task orthogonal to the gradients of the old tasks to eliminate the interference of the new task on the old tasks has been proposed in many existing continual learning methods [36, 30]. However, all these existing methods are designed for continual learning from scratch, involving updating all parameters of the model, which is incompatible with the setting in PEFT. On the contrary, our method is a PEFT method, which only tunes the parameters in $A_t$.
Besides eliminating the interference of the new task on the old tasks, our InfLoRA further makes the subspace $\operatorname{span}\{\boldsymbol{b}_1^t, \dots, \boldsymbol{b}_r^t\}$ lie in a subspace that the gradient of the new task lies in, so as to make a good trade-off between stability and plasticity. Specifically, existing work [19] has shown that during fine-tuning, the weight increments of a pre-trained ViT exhibit redundancy in terms of weight rank. Therefore, the gradients of the new task lie in a low-dimensional subspace. Our method makes $\operatorname{span}\{\boldsymbol{b}_1^t, \dots, \boldsymbol{b}_r^t\}$ not only orthogonal to the gradients of the old tasks but also lie in the subspace in which the gradients of the new task lie. By doing so, our method makes the model focus on the new task while eliminating the interference of the new task on the old tasks, thereby making a good trade-off between stability and plasticity. The ablation study in Section 4.2 verifies the effectiveness of these two characteristics.
3.2.2 Designing Dimensionality Reduction Matrix
InfLoRA first approximates the gradient space of the new task and the old tasks. Here, we use $\mathcal{N}_t$ to represent the gradient space of the new task approximated by InfLoRA. Similarly, we use $\mathcal{M}_{t-1}$ to represent the gradient space of the previous old tasks approximated by InfLoRA. We also use $\mathcal{M}_{t-1}^{\perp}$ to denote the residual gradient space, which is orthogonal to the space $\mathcal{M}_{t-1}$. Then, in order to meet the characteristics described in Section 3.2.1, InfLoRA ensures that each row of $B_t$ lies in $\mathcal{N}_t \cap \mathcal{M}_{t-1}^{\perp}$. In other words, InfLoRA makes $\operatorname{span}\{\boldsymbol{b}_1^t, \dots, \boldsymbol{b}_r^t\} \subseteq \mathcal{N}_t \cap \mathcal{M}_{t-1}^{\perp}$.
Existing works [36, 29] have shown that the gradient update of the linear layer lies in the span of the inputs. Please refer to supplementary material for a detailed explanation of this proposition. Therefore, InfLoRA uses the input matrix of the new task to approximate the gradient space of the new task. Specifically, InfLoRA computes the input matrix $H_t$ of the new task, with each column of $H_t$ representing an input vector of the $t$-th task. Then, InfLoRA considers $\mathcal{N}_t$ as the subspace spanned by the columns of the matrix $H_t$.
However, InfLoRA cannot use the input matrix of the old tasks to approximate the gradient space of the old tasks since the data from the old tasks is not available when the model learns the new tasks. Instead, existing methods such as gradient projection memory (GPM) [36] and dual gradient projection memory (DualGPM) [29] can learn a matrix to preserve information about the gradients of the old tasks. InfLoRA incorporates DualGPM to preserve gradient information. With the assistance of DualGPM, the model can learn either a matrix $M_{t-1} \in \mathbb{R}^{d_I \times k_{t-1}}$ or a matrix $M_{t-1}^{\perp} \in \mathbb{R}^{d_I \times (d_I - k_{t-1})}$. Here, the columns of $M_{t-1}$ form the orthonormal bases of $\mathcal{M}_{t-1}$ and the columns of $M_{t-1}^{\perp}$ form the orthonormal bases of $\mathcal{M}_{t-1}^{\perp}$. $k_{t-1}$ denotes the dimension of $\mathcal{M}_{t-1}$. For detailed information of how DualGPM maintains the orthonormal bases $M_{t-1}$ or $M_{t-1}^{\perp}$, please refer to supplementary material or the original paper [29].
After approximating the gradient space of the new task and the old tasks, InfLoRA gets the component of $H_t$ which lies in $\mathcal{M}_{t-1}^{\perp}$. Specifically, when the model maintains $M_{t-1}$, InfLoRA performs the operation
$$\hat{H}_t = H_t - M_{t-1} M_{t-1}^{\mathrm{T}} H_t. \tag{6}$$
Similarly, when the model maintains $M_{t-1}^{\perp}$, InfLoRA performs the operation
$$\hat{H}_t = M_{t-1}^{\perp} (M_{t-1}^{\perp})^{\mathrm{T}} H_t. \tag{7}$$
Note that when $t = 1$, $\mathcal{M}_0$ is a null space and $\hat{H}_1 = H_1$. Obviously, each column of $\hat{H}_t$ lies in $\mathcal{M}_{t-1}^{\perp}$. However, since $\hat{H}_t$ and $B_t$ have different shapes, InfLoRA cannot directly define $B_t$ as $\hat{H}_t^{\mathrm{T}}$. Note that $r$ is much smaller than the number of columns of $\hat{H}_t$, so InfLoRA uses the principal components of $\hat{H}_t$ to set $B_t$. Specifically, singular value decomposition (SVD) is performed on $\hat{H}_t = U_t \Sigma_t V_t^{\mathrm{T}}$. Then, InfLoRA designs $B_t$ by
$$B_t = (U_t^{\mathrm{T}})_{[1:r,:]}. \tag{8}$$
Here, $(U_t^{\mathrm{T}})_{[1:r,:]}$ denotes the rows of $U_t^{\mathrm{T}}$ corresponding to the top-$r$ singular values. Figure 1 (b) illustrates the pipeline of designing the matrix $B_t$.
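A minimal NumPy sketch of this design step is given below, assuming the model currently maintains the basis matrix $M_{t-1}$ of the old-task gradient space; the function name and the handling of the first task are our own choices.

```python
from typing import Optional

import numpy as np


def design_B(H_t: np.ndarray, M_prev: Optional[np.ndarray], r: int) -> np.ndarray:
    """Design B_t from the new-task input matrix H_t (d_in x n), following eqs. (6)-(8).

    M_prev holds orthonormal bases of the old-task gradient space (d_in x k),
    or None for the first task.
    """
    if M_prev is None:
        H_hat = H_t                               # t = 1: nothing to remove
    else:
        H_hat = H_t - M_prev @ (M_prev.T @ H_t)   # eq. (6): remove the part lying in M_{t-1}
    # eq. (8): the top-r left singular vectors of H_hat become the rows of B_t.
    U, _, _ = np.linalg.svd(H_hat, full_matrices=False)
    return U[:, :r].T                             # shape (r, d_in), orthonormal rows
```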
Note that DualGPM expands the subspace $\mathcal{M}_t$ and reduces the subspace $\mathcal{M}_t^{\perp}$ when the number of tasks increases. Since InfLoRA constrains the update of the model within the subspace $\mathcal{N}_t \cap \mathcal{M}_{t-1}^{\perp}$, the space for learning the new task reduces when the number of tasks increases. However, by adjusting the approximation error of the gradient for the old tasks, DualGPM can expand $\mathcal{M}_t$ slowly and reduce $\mathcal{M}_t^{\perp}$ slowly. Therefore, the constraints imposed by InfLoRA do not excessively affect the model’s learning of new tasks. Please refer to supplementary material for a detailed explanation.
3.3 Whole Process of InfLoRA
Algorithm 1 outlines the whole process of InfLoRA in continual learning. When the $t$-th new task arrives, InfLoRA first designs $B_t$ through (8) and expands a new branch. Then, InfLoRA learns the $t$-th task by fine-tuning the newly expanded branch. Please note that, based on empirical findings from existing methods [38, 12], we employ the local cross-entropy (CE) loss as the learning objective, as it usually performs better than the global CE loss in continual learning methods based on PEFT. The local CE loss is the CE loss constrained to the classes of the current new task, which can be denoted as
$$\mathcal{L}(\mathcal{D}_t) = \sum_{(\boldsymbol{x}, y) \in \mathcal{D}_t} \mathcal{L}_{\mathrm{CE}}\big(\operatorname{mask}(h_{\boldsymbol{\Phi}}(f_{\boldsymbol{\Theta}}(\boldsymbol{x}))), y\big). \tag{9}$$
Here, $\operatorname{mask}(\cdot)$ is a function that filters out the logits of the old classes and $\mathcal{L}_{\mathrm{CE}}$ denotes the standard CE loss. After learning the $t$-th new task, InfLoRA follows DualGPM to preserve the information about the gradient of the $t$-th task.
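A possible implementation of the local CE loss is sketched below in PyTorch; the convention of passing the current task's class indices as a sorted tensor is our own assumption, not part of the original method description.

```python
import torch
import torch.nn.functional as F


def local_cross_entropy(logits: torch.Tensor, targets: torch.Tensor,
                        task_classes: torch.Tensor) -> torch.Tensor:
    """CE restricted to the classes of the current task (the 'mask' in eq. (9)).

    logits: (batch, num_all_classes); targets: global class indices;
    task_classes: sorted 1-D tensor with the class indices of the current task.
    """
    # Keep only the logits belonging to the current task's classes.
    local_logits = logits[:, task_classes]
    # Remap the global labels to positions inside task_classes.
    local_targets = torch.searchsorted(task_classes, targets)
    return F.cross_entropy(local_logits, local_targets)
```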
Note that the branch corresponding to the $t$-th task will be frozen once the model has learned the $t$-th task. Since the expanded branches are linear transformations, we can integrate the old branches into the pre-trained weight to reduce the expanded parameters. Specifically, after learning the first task, InfLoRA integrates the first branch into the pre-trained weight and obtains the weight $W_1 = W + A_1 B_1$. Before learning the $t$-th new task ($t > 1$), InfLoRA maintains the weight $W_{t-1} = W + \sum_{j=1}^{t-1} A_j B_j$. After learning the $t$-th task, InfLoRA integrates the $t$-th branch into $W_{t-1}$ and obtains $W_t = W_{t-1} + A_t B_t$. In this way, the parameters in $A_j$ and $B_j$ ($j < t$) do not need to be maintained in the learning of subsequent tasks. Therefore, during the whole learning process, the number of parameters expanded by InfLoRA equals the number of parameters in a single branch. Since a single branch contains $r d_I + r d_O$ parameters, the number of parameters expanded by InfLoRA is $r(d_I + d_O)$ all the time.
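The merging step itself is a single matrix addition, as in the following sketch (names are ours, not taken from the released code):

```python
import torch


@torch.no_grad()
def merge_branch(W_prev: torch.Tensor, A_t: torch.Tensor, B_t: torch.Tensor) -> torch.Tensor:
    """Return W_t = W_{t-1} + A_t B_t; A_t and B_t can be discarded afterwards."""
    return W_prev + A_t @ B_t
```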
Tasks | 5 | 10 | 20 | |||
---|---|---|---|---|---|---|
Method | $ACC_5$ (↑) | $\overline{ACC}_5$ (↑) | $ACC_{10}$ (↑) | $\overline{ACC}_{10}$ (↑) | $ACC_{20}$ (↑) | $\overline{ACC}_{20}$ (↑)
joint | - | - | - | |||
sequential | ||||||
L2P [44] | ||||||
DualPrompt [43] | ||||||
CODA-P [38] | ||||||
C-LoRA [37] | ||||||
LAE [12] | ||||||
InfLoRA-b5 | ||||||
InfLoRA | 77.52 ± 0.37 | 82.01 ± 0.12 | 75.65 ± 0.14 | 80.82 ± 0.24 | 71.01 ± 0.45 | 77.28 ± 0.45
4 Experiments
4.1 Experimental Settings
Datasets and Evaluation Metric Similar to existing continual learning methods [12, 44] based on PEFT, we use ImageNet-R [14], CIFAR100 [24], and DomainNet [34] to train and evaluate the models. ImageNet-R is generated through artistic processing of 200 classes from ImageNet [8]. This dataset is introduced to continual learning by existing work [43] and has become a standard benchmark for continual learning methods based on PEFT. CIFAR100 is a dataset commonly used in existing continual learning works. DomainNet contains 345 classes and is introduced by some existing works [38, 42] for continual learning. Following existing continual learning work [38], we split ImageNet-R into 5, 10, and 20 tasks, with each task containing 40, 20, and 10 classes, respectively. We split CIFAR100 into 10 tasks, and each task contains 10 classes. We split DomainNet into 5 tasks, and each task contains 69 classes.
Following existing continual learning methods [12, 44], we evaluate the performance of the model through two popular metrics, including the final accuracy $ACC_T$ and the averaged accuracy $\overline{ACC}_T = \frac{1}{T}\sum_{i=1}^{T} ACC_i$, where $T$ denotes the total number of tasks and $ACC_i$ is defined as
$$ACC_i = \frac{1}{i}\sum_{j=1}^{i} a_{i,j}. \tag{10}$$
Here, $a_{i,j}$ denotes the accuracy of the $j$-th task once the model has learned the $i$-th task.
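Given the accuracy matrix $a_{i,j}$, both metrics can be computed with a few lines of NumPy; the helper below is our own illustration of eq. (10), not code from the paper.

```python
import numpy as np


def final_and_averaged_accuracy(acc: np.ndarray) -> tuple:
    """acc[i, j] = accuracy on task j after learning task i (j <= i), in percent.

    Returns (ACC_T, averaged ACC over all steps)."""
    T = acc.shape[0]
    acc_per_step = np.array([acc[i, : i + 1].mean() for i in range(T)])  # ACC_i, eq. (10)
    return float(acc_per_step[-1]), float(acc_per_step.mean())
```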
Baselines We compare our InfLoRA with state-of-the-art continual learning methods based on PEFT, including learn to prompt (L2P) [44], DualPrompt [43], continual decomposed attention-based prompt (CODA-P) [38], learning accumulation ensemble (LAE) [12], continual low-rank adaptation (C-LoRA) [37]. For LAE, we implement it with LoRA [16]. Following existing works [38, 12], we also include two methods without continual learning, joint and sequential, in the comparison. Here, joint denotes the method that learns all the tasks jointly, while sequential denotes the method that learns all the tasks sequentially without any operation to overcome the forgetting of the model. The accuracy of joint can be treated as the accuracy upper-bound and the accuracy of sequential can be treated as the accuracy lower-bound.
Architecture and Training Details We follow existing works [12, 43] to perform experiments. Specifically, we use the ViT-B/16 backbone [10] supervised pre-trained on ImageNet 21K as the pre-trained model.
For all the methods, we follow existing works [38, 44, 12] and use the Adam [22] optimizer with running averages of gradient and its square ($\beta_1 = 0.9$, $\beta_2 = 0.999$). Each task is trained for 50 epochs on ImageNet-R, 20 epochs on CIFAR100 and 5 epochs on DomainNet. The batch size is set to 128 for all the experiments. Since our InfLoRA shares a similar architecture to LoRA, we follow existing work [12] and insert the architecture of our InfLoRA into the key and value projections of the attention module. Furthermore, the existing method DualPrompt [43] treats the inserted blocks as hyperparameters and searches for the best positions for its prompts. On the contrary, we insert the architecture of InfLoRA into all the Transformer blocks to avoid searching. We also implement a variant of our method, which inserts InfLoRA modules only into the bottom 5 Transformer blocks, like the existing methods DualPrompt and CODA-P. We call this variant InfLoRA-b5. As for the hyperparameter $r$, we determine its value through a grid search on a validation dataset.
Tasks | CIFAR100 | DomainNet | ||
---|---|---|---|---|
Method | $ACC_{10}$ (↑) | $\overline{ACC}_{10}$ (↑) | $ACC_5$ (↑) | $\overline{ACC}_5$ (↑)
joint | - | - | ||
sequential | ||||
L2P [44] | ||||
DualPrompt [43] | ||||
CODA-P [38] | ||||
C-LoRA [37] | ||||
LAE [12] | ||||
InfLoRA-b5 | 87.06 ± 0.25 | | |
InfLoRA | 91.70 ± 0.32 | | 74.53 ± 0.23 | 79.57 ± 0.57
4.2 Experimental Results
Accuracy Table 1 shows the results of different methods on ImageNet-R with different numbers of tasks. Table 2 shows the results of different methods on CIFAR100 and DomainNet. We can find that our methods InfLoRA and InfLoRA-b5 outperform existing continual learning methods.
Figure 2 shows the variation of the accuracy of different continual learning methods on ImageNet-R and CIFAR100. We can find that our method outperforms existing methods not only at the end of the learning but also throughout the whole learning process. This indicates that our InfLoRA eliminates the interference of the new task on the old tasks and thus the accuracy of our method decreases more slowly than that of other methods.
Analysis of Expanded Parameters Figure 3 shows the number of expanded parameters and the accuracy of different methods on ImageNet-R and CIFAR100. For L2P, DualPrompt and CODA-P, their expanded parameters are included in the added prompts and corresponding keys. For LAE, its expanded parameters are the inserted LoRA modules and an additional copy. For C-LoRA, its expanded parameters are the inserted LoRA modules. For our method, the expanded parameters are $A_t$ and $B_t$. The details of computing the number of expanded parameters for different methods are given in supplementary material. We can find that CODA-P and C-LoRA expand far more parameters than other methods. Furthermore, our methods InfLoRA and InfLoRA-b5 expand comparable parameters to L2P, DualPrompt and LAE but perform better than these methods.
Tasks | 5 | 10 | 20 | |||
---|---|---|---|---|---|---|
Method | $ACC_5$ (↑) | $\overline{ACC}_5$ (↑) | $ACC_{10}$ (↑) | $\overline{ACC}_{10}$ (↑) | $ACC_{20}$ (↑) | $\overline{ACC}_{20}$ (↑)
Random $B_t$ | | | | | |
$\mathcal{N}_t$ | | | | | |
$\mathcal{M}_{t-1}^{\perp}$ | | | | | |
$\mathcal{N}_t \cap \mathcal{M}_{t-1}^{\perp}$ (InfLoRA) | 77.52 ± 0.37 | 82.01 ± 0.12 | 75.65 ± 0.14 | 80.82 ± 0.24 | 71.01 ± 0.45 | 77.28 ± 0.45
Ablation Study We perform experiments to verify the effectiveness of designing the dimensionality reduction matrix $B_t$ by (8). Specifically, we explore three different variants for designing $B_t$. The first variant designs $B_t$ randomly using a Gaussian distribution. We call this variant ‘Random $B_t$’. The second variant discards the operation in (6) or (7) and directly sets $\hat{H}_t = H_t$. Through this way, this variant ensures that each row of $B_t$ lies in $\mathcal{N}_t$ while ignoring $\mathcal{M}_{t-1}^{\perp}$. We call this variant ‘$\mathcal{N}_t$’. The third variant does not compute the input matrix $H_t$ but initializes $\hat{H}_t$ using a Gaussian distribution before applying the operation in (6) or (7). In this way, this variant ensures that each row of $B_t$ lies in $\mathcal{M}_{t-1}^{\perp}$ while ignoring $\mathcal{N}_t$. We call this variant ‘$\mathcal{M}_{t-1}^{\perp}$’. Since our method focuses on both $\mathcal{N}_t$ and $\mathcal{M}_{t-1}^{\perp}$, we use ‘$\mathcal{N}_t \cap \mathcal{M}_{t-1}^{\perp}$’ to represent our method.
Table 3 shows the results of our method and its variants. We can find that all these variants fail to perform as well as our method. To further demonstrate the performance of different variants, we show the relative accuracy of different tasks after the model learns them all in Figure 4. Here, relative accuracy is the accuracy of different variants minus the accuracy of our InfLoRA. Note that the last task is the new task, and the other tasks are old tasks in Figure 4. As we can see, ‘Random $B_t$’ and ‘$\mathcal{N}_t$’ outperform ‘$\mathcal{M}_{t-1}^{\perp}$’ on the new task but show much lower accuracy than ‘$\mathcal{M}_{t-1}^{\perp}$’ and our InfLoRA on the old tasks. This means these two variants fail to eliminate the interference of the new task on the old tasks, making the model suffer from low stability. On the contrary, ‘$\mathcal{M}_{t-1}^{\perp}$’ shows the lowest performance on the new task. This means ‘$\mathcal{M}_{t-1}^{\perp}$’ ignores the plasticity of the model. Our method outperforms all the variants on most of the tasks. This shows that our method can eliminate the interference of the new task on the old tasks and make a better trade-off between stability and plasticity than these variants.
Varying the Pre-Trained Model We also follow the existing method [40] and perform experiments using a ViT-B/16 pre-trained with two different self-supervised methods, including DINO [5] and iBOT [50]. All experimental settings, except for the choice of the pre-trained model, are kept consistent with the details outlined in Section 4.1.
Pre-trained Model | Method | $ACC_T$ (↑) | $\overline{ACC}_T$ (↑)
---|---|---|---|
DINO-1k | L2P [44] | ||
DualPrompt [43] | |||
CODA-P [38] | |||
C-LoRA [37] | |||
LAE [12] | |||
InfLoRA-b5 | |||
InfLoRA | 68.31 ± 0.28 | 76.15 ± 0.05 |
iBOT-1k | L2P [44] | ||
DualPrompt [43] | |||
CODA-P [38] | |||
C-LoRA [37] | |||
LAE [12] | |||
InfLoRA-b5 | |||
InfLoRA | 71.84 ± 0.09 | 78.29 ± 0.09
Table 4 shows the results of different methods on ImageNet-R when using various pre-trained models. Comparing these results to those in Table 1, we can find that the performance of all methods utilizing self-supervised pre-trained models is lower than the performance of the corresponding methods using supervised pre-trained models. However, our methods still outperform all other methods.
Combining with Classifier Alignment Slow learner with classifier alignment (SLCA) [48] utilizes feature statistics to align classifiers, demonstrating superior performance compared to methods without aligned classifiers. Our InfLoRA can be combined with classifier alignment (CA) to get better performance. Specifically, after learning the $t$-th task with parameters $A_t$ and $B_t$ and loss (9), we collect the features $\{f_{\boldsymbol{\Theta}}(\boldsymbol{x}) \mid (\boldsymbol{x}, y) \in \mathcal{D}_t\}$ of the $t$-th task. Here, $f_{\boldsymbol{\Theta}}(\boldsymbol{x})$ denotes the features extracted by the backbone $f_{\boldsymbol{\Theta}}(\cdot)$. Then, the mean and covariance of the features for each class are computed and saved. After that, for each class $c$ the model has seen during continual learning, samples are sampled from the Gaussian distribution $\mathcal{N}(\boldsymbol{\mu}_c, \Sigma_c)$. Here, $\boldsymbol{\mu}_c$ and $\Sigma_c$ denote the mean and covariance of the class $c$. Finally, we align the classifier using standard cross-entropy and these samples. The details of this experiment are given in supplementary material.
Table 5 shows that our method InfLoRA+CA outperforms SLCA. Note that SLCA tunes all the parameters of the model while our method InfLoRA only tunes the parameters in $A_t$. Therefore, our InfLoRA+CA is much more efficient than SLCA.
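A rough sketch of the alignment stage is given below in PyTorch; it is our own simplification of SLCA-style classifier alignment, and the sampling count, epoch number and learning rate are illustrative values rather than the settings used in the paper.

```python
import torch
import torch.nn.functional as F


def align_classifier(classifier: torch.nn.Linear, stats: dict,
                     n_samples: int = 256, epochs: int = 5, lr: float = 0.01) -> None:
    """Re-train the classifier on features sampled from per-class Gaussians.

    stats maps class id -> (mean vector, covariance matrix) collected after each task.
    A small ridge may be added to each covariance to keep it positive definite.
    """
    optimizer = torch.optim.SGD(classifier.parameters(), lr=lr)
    for _ in range(epochs):
        feats, labels = [], []
        for cls, (mu, cov) in stats.items():
            dist = torch.distributions.MultivariateNormal(mu, covariance_matrix=cov)
            feats.append(dist.sample((n_samples,)))
            labels.append(torch.full((n_samples,), cls, dtype=torch.long))
        feats, labels = torch.cat(feats), torch.cat(labels)
        loss = F.cross_entropy(classifier(feats), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```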
5 Conclusion
In this work, we propose a new method, called interference-free low-rank adaptation (InfLoRA), for continual learning. InfLoRA injects a small number of parameters to reparameterize the pre-trained weights and shows that fine-tuning these injected parameters is equivalent to fine-tuning the pre-trained weights within a subspace. Furthermore, InfLoRA designs this subspace to eliminate the interference of the new task on the old tasks, making a good trade-off between stability and plasticity. Experimental results show that InfLoRA outperforms existing state-of-the-art continual learning methods on multiple datasets.
Acknowledgment
This work is supported by NSFC (No.62192783), National Key R&D Program of China (No.2020YFA0713901), and Fundamental Research Funds for the Central Universities (No.020214380108).
References
- Aljundi et al. [2018] Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. Memory aware synapses: Learning what (not) to forget. In Proceedings of the European Conference on Computer Vision, pages 139–154, 2018.
- Aljundi et al. [2019a] Rahaf Aljundi, Eugene Belilovsky, Tinne Tuytelaars, Laurent Charlin, Massimo Caccia, Min Lin, and Lucas Page-Caccia. Online continual learning with maximal interfered retrieval. In Advances in Neural Information Processing Systems, pages 11849–11860, 2019a.
- Aljundi et al. [2019b] Rahaf Aljundi, Min Lin, Baptiste Goujaud, and Yoshua Bengio. Gradient based sample selection for online continual learning. In Advances in Neural Information Processing Systems, pages 11816–11825, 2019b.
- Boschini et al. [2022] Matteo Boschini, Lorenzo Bonicelli, Angelo Porrello, Giovanni Bellitto, Matteo Pennisi, Simone Palazzo, Concetto Spampinato, and Simone Calderara. Transfer without forgetting. In Proceedings of the European Conference on Computer Vision, pages 692–709, 2022.
- Caron et al. [2021] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9650–9660, 2021.
- Chen et al. [2022] Shoufa Chen, Chongjian Ge, Zhan Tong, Jiangliu Wang, Yibing Song, Jue Wang, and Ping Luo. Adaptformer: Adapting vision transformers for scalable visual recognition. Advances in Neural Information Processing Systems, pages 16664–16678, 2022.
- Chrysakis and Moens [2020] Aristotelis Chrysakis and Marie-Francine Moens. Online continual learning from imbalanced data. In Proceedings of the International Conference on Machine Learning, pages 1952–1961, 2020.
- Deng et al. [2009] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 248–255, 2009.
- Devlin et al. [2019] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics, pages 4171–4186, 2019.
- Dosovitskiy et al. [2021] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021.
- Fu et al. [2022] Chin-Lun Fu, Zih-Ching Chen, Yun-Ru Lee, and Hung-Yi Lee. Adapterbias: Parameter-efficient token-dependent representation shift for adapters in nlp tasks. In Findings of the Association for Computational Linguistics, pages 2608–2621, 2022.
- Gao et al. [2023] Qiankun Gao, Chen Zhao, Yifan Sun, Teng Xi, Gang Zhang, Bernard Ghanem, and Jian Zhang. A unified continual learning framework with general parameter-efficient tuning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11449–11459, 2023.
- He et al. [2022] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16000–16009, 2022.
- Hendrycks et al. [2021] Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, et al. The many faces of robustness: A critical analysis of out-of-distribution generalization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8340–8349, 2021.
- Houlsby et al. [2019] Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for nlp. In Proceedings of the International Conference on Machine Learning, pages 2790–2799, 2019.
- Hu et al. [2022] Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. In International Conference on Learning Representations, 2022.
- Hung et al. [2019] Steven C. Y. Hung, Cheng-Hao Tu, Cheng-En Wu, Chien-Hung Chen, Yi-Ming Chan, and Chu-Song Chen. Compacting, picking and growing for unforgetting continual learning. In Advances in Neural Information Processing Systems, pages 13647–13657, 2019.
- Jia et al. [2022] Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge J. Belongie, Bharath Hariharan, and Ser-Nam Lim. Visual prompt tuning. In Proceedings of the European Conference on Computer Vision, pages 709–727, 2022.
- Jie and Deng [2023] Shibo Jie and Zhi-Hong Deng. Fact: Factor-tuning for lightweight adaptation on vision transformer. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1060–1068, 2023.
- Jung et al. [2020] Sangwon Jung, Hongjoon Ahn, Sungmin Cha, and Taesup Moon. Continual learning with node-importance based adaptive group sparse regularization. Advances in Neural Information Processing Systems, pages 3647–3658, 2020.
- Khan et al. [2023] Muhammad Gul Zain Ali Khan, Muhammad Ferjad Naeem, Luc Van Gool, Didier Stricker, Federico Tombari, and Muhammad Zeshan Afzal. Introducing language guidance in prompt-based continual learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11463–11473, 2023.
- Kingma and Ba [2014] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
- Kirkpatrick et al. [2017] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, pages 3521–3526, 2017.
- Krizhevsky [2009] Alex Krizhevsky. Learning multiple layers of features from tiny images. Master’s thesis, University of Toronto, 2009.
- Lester et al. [2021] Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 3045–3059, 2021.
- Li et al. [2019] Xilai Li, Yingbo Zhou, Tianfu Wu, Richard Socher, and Caiming Xiong. Learn to grow: A continual structure learning framework for overcoming catastrophic forgetting. In Proceedings of the International Conference on Machine Learning, pages 3925–3934, 2019.
- Li and Liang [2021] Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 4582–4597, 2021.
- Liang and Li [2023a] Yan-Shuo Liang and Wu-Jun Li. Loss decoupling for task-agnostic continual learning. In Advances in Neural Information Processing Systems, 2023a.
- Liang and Li [2023b] Yan-Shuo Liang and Wu-Jun Li. Adaptive plasticity improvement for continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7816–7825, 2023b.
- Lin et al. [2022] Sen Lin, Li Yang, Deliang Fan, and Junshan Zhang. Trgp: Trust region gradient projection for continual learning. In International Conference on Learning Representations, 2022.
- Mahabadi et al. [2021] Rabeeh Karimi Mahabadi, James Henderson, and Sebastian Ruder. Compacter: Efficient low-rank hypercomplex adapter layers. In Advances in Neural Information Processing Systems, pages 1022–1035, 2021.
- Masana et al. [2021] Marc Masana, Joost Van de Weijer, Bartłomiej Twardowski, et al. On the importance of cross-task features for class-incremental learning. arXiv preprint arXiv:2106.11930, 2021.
- Parisi et al. [2019] German I Parisi, Ronald Kemker, Jose L Part, Christopher Kanan, and Stefan Wermter. Continual lifelong learning with neural networks: A review. Neural Networks, pages 54–71, 2019.
- Peng et al. [2019] Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1406–1415, 2019.
- Rusu et al. [2016] Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016.
- Saha et al. [2021] Gobinda Saha, Isha Garg, and Kaushik Roy. Gradient projection memory for continual learning. In International Conference on Learning Representations, 2021.
- Smith et al. [2023a] James Seale Smith, Yen-Chang Hsu, Lingyu Zhang, Ting Hua, Zsolt Kira, Yilin Shen, and Hongxia Jin. Continual diffusion: Continual customization of text-to-image diffusion with c-lora. CoRR, 2023a.
- Smith et al. [2023b] James Seale Smith, Leonid Karlinsky, Vyshnavi Gutta, Paola Cascante-Bonilla, Donghyun Kim, Assaf Arbelle, Rameswar Panda, Rogerio Feris, and Zsolt Kira. Coda-prompt: Continual decomposed attention-based prompting for rehearsal-free continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11909–11919, 2023b.
- Sun et al. [2022] Qing Sun, Fan Lyu, Fanhua Shang, Wei Feng, and Liang Wan. Exploring example influence in continual learning. Advances in Neural Information Processing Systems, pages 27075–27086, 2022.
- Wang et al. [2023a] Liyuan Wang, Jingyi Xie, Xingxing Zhang, Mingyi Huang, Hang Su, and Jun Zhu. Hierarchical decomposition of prompt-based continual learning: Rethinking obscured sub-optimality. arXiv preprint arXiv:2310.07234, 2023a.
- Wang et al. [2023b] Liyuan Wang, Xingxing Zhang, Hang Su, and Jun Zhu. A comprehensive survey of continual learning: Theory, method and application. arXiv preprint arXiv:2302.00487, 2023b.
- Wang et al. [2022a] Yabin Wang, Zhiwu Huang, and Xiaopeng Hong. S-prompts learning with pre-trained transformers: An occam’s razor for domain incremental learning. Advances in Neural Information Processing Systems, pages 5682–5695, 2022a.
- Wang et al. [2022b] Zifeng Wang, Zizhao Zhang, Sayna Ebrahimi, Ruoxi Sun, Han Zhang, Chen-Yu Lee, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer G. Dy, and Tomas Pfister. Dualprompt: Complementary prompting for rehearsal-free continual learning. In Proceedings of the European Conference on Computer Vision, pages 631–648, 2022b.
- Wang et al. [2022c] Zifeng Wang, Zizhao Zhang, Chen-Yu Lee, Han Zhang, Ruoxi Sun, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, and Tomas Pfister. Learning to prompt for continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 139–149, 2022c.
- Zaken et al. [2022] Elad Ben Zaken, Yoav Goldberg, and Shauli Ravfogel. Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Short Papers), pages 1–9, 2022.
- Zenke et al. [2017] Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence. In Proceedings of the International Conference on Machine Learning, pages 3987–3995, 2017.
- Zhang et al. [2021] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning (still) requires rethinking generalization. Communications of the ACM, pages 107–115, 2021.
- Zhang et al. [2023] Gengwei Zhang, Liyuan Wang, Guoliang Kang, Ling Chen, and Yunchao Wei. SLCA: slow learner with classifier alignment for continual learning on a pre-trained model. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 19091–19101, 2023.
- Zheng et al. [2023] Zangwei Zheng, Mingyuan Ma, Kai Wang, Ziheng Qin, Xiangyu Yue, and Yang You. Preventing zero-shot transfer degradation in continual learning of vision-language models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 19068–19079, 2023.
- Zhou et al. [2022] Jinghao Zhou, Chen Wei, Huiyu Wang, Wei Shen, Cihang Xie, Alan Yuille, and Tao Kong. Image bert pre-training with online tokenizer. In International Conference on Learning Representations, 2022.
- Zhu et al. [2021] Fei Zhu, Xu-Yao Zhang, Chuang Wang, Fei Yin, and Cheng-Lin Liu. Prototype augmentation and self-supervision for incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5871–5880, 2021.
Supplementary Material
A Details of GPM and DualGPM
GPM and DualGPM are established on the fact that the gradient updates lie in the span of input data points [47].
For a linear layer, we denote its forward propagation as
$$\boldsymbol{e} = W \boldsymbol{h}, \tag{11}$$
where $\boldsymbol{e} \in \mathbb{R}^{d_O}$, $\boldsymbol{h} \in \mathbb{R}^{d_I}$, and $W \in \mathbb{R}^{d_O \times d_I}$. $d_I$ and $d_O$ denote the input and output dimension, respectively. We further denote the loss function as $\mathcal{L}$. Through the chain rule, we can get the gradient of $W$:
$$\frac{\partial \mathcal{L}}{\partial W} = \frac{\partial \mathcal{L}}{\partial \boldsymbol{e}} \boldsymbol{h}^{\mathrm{T}}. \tag{12}$$
Here, $\frac{\partial \mathcal{L}}{\partial \boldsymbol{e}}$ denotes the vector $[\frac{\partial \mathcal{L}}{\partial e_1}, \dots, \frac{\partial \mathcal{L}}{\partial e_{d_O}}]^{\mathrm{T}}$. Through (12), we can find that each row of $\frac{\partial \mathcal{L}}{\partial W}$ can be represented as the input $\boldsymbol{h}^{\mathrm{T}}$ multiplied by a real value ($\frac{\partial \mathcal{L}}{\partial e_i}$ for the $i$-th row). Therefore, in the linear layer, each row of the gradient lies in the span of the input.
A.1 Gradient Projection Memory
GPM learns a subspace $\mathcal{M}_t$ with orthogonal bases $M_t$ to approximate the gradient space of the old tasks. Here, the columns of $M_t$ contribute a set of orthogonal bases in $\mathcal{M}_t$. GPM expands the bases of $\mathcal{M}_{t-1}$ to the bases of $\mathcal{M}_t$ after learning the $t$-th new task. Specifically, GPM computes the input matrix $H_t$ such that each column of $H_t$ represents an input of this layer. Then, the part of $H_t$ that already lies in $\mathcal{M}_{t-1}$ is removed by
$$\hat{H}_t = H_t - M_{t-1} M_{t-1}^{\mathrm{T}} H_t = H_t - H_{t,\text{proj}}. \tag{13}$$
Please note that when $t = 1$, $\mathcal{M}_0$ is a null space and hence $H_{1,\text{proj}}$ is a zero matrix. After that, singular value decomposition (SVD) is performed on $\hat{H}_t = U_t \Sigma_t V_t^{\mathrm{T}}$. Then, $k$ new orthogonal bases are chosen from the columns of $U_t$ for a minimum of $k$ satisfying the following criteria for the given threshold $\epsilon$:
$$\|(\hat{H}_t)_k\|_F^2 + \|H_{t,\text{proj}}\|_F^2 \ge \epsilon \|H_t\|_F^2. \tag{14}$$
Here, $(\hat{H}_t)_k$ denotes the components of $\hat{H}_t$ that correspond to the top-$k$ singular values. Then, the subspace $\mathcal{M}_t$ is obtained with the bases $M_t = [M_{t-1}, (U_t)_{[:,1:k]}]$.
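A compact NumPy sketch of this expansion step is given below; variable names and the handling of the first task are our own choices.

```python
import numpy as np


def expand_bases(M_prev: np.ndarray, H_t: np.ndarray, eps: float) -> np.ndarray:
    """Expand the orthonormal bases M_{t-1} (d x k) using the new-task inputs H_t (d x n)."""
    # eq. (13): remove the part of H_t that already lies in span(M_{t-1}).
    H_proj = M_prev @ (M_prev.T @ H_t) if M_prev.size else np.zeros_like(H_t)
    H_hat = H_t - H_proj
    U, S, _ = np.linalg.svd(H_hat, full_matrices=False)
    total = np.linalg.norm(H_t) ** 2
    kept = np.linalg.norm(H_proj) ** 2
    k = 0
    while kept < eps * total and k < len(S):   # minimum k satisfying eq. (14)
        kept += S[k] ** 2
        k += 1
    # The new columns are orthogonal to M_{t-1} because H_hat is.
    return np.concatenate([M_prev, U[:, :k]], axis=1) if M_prev.size else U[:, :k]
```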
A.2 Dual Gradient Projection Memory
Different from GPM, which learns a subspace $\mathcal{M}_t$ with orthogonal bases $M_t$ to approximate the gradient space of the old tasks, DualGPM either learns a subspace $\mathcal{M}_t$ with orthogonal bases $M_t$ to approximate the gradient space of the old tasks, or learns a subspace $\mathcal{M}_t^{\perp}$ with orthogonal bases $M_t^{\perp}$ to approximate the orthogonal complement of the gradient space of the old tasks.
DualGPM decides whether to keep $M_t$ or $M_t^{\perp}$ in memory according to $\dim(\mathcal{M}_t)$ and $\dim(\mathcal{M}_t^{\perp})$. Specifically, during the learning of the first several tasks, $\dim(\mathcal{M}_t) \le \dim(\mathcal{M}_t^{\perp})$. At this time, DualGPM maintains $M_t$, and expands $\mathcal{M}_{t-1}$ to $\mathcal{M}_t$ after each task. When $\dim(\mathcal{M}_t)$ increases and exceeds $\dim(\mathcal{M}_t^{\perp})$, DualGPM obtains $M_t^{\perp}$ through some transformations on $M_t$. After that, DualGPM only maintains $M_t^{\perp}$ in memory, and reduces $\mathcal{M}_t^{\perp}$ after each task. Through this way, the number of bases kept for each layer is $\min\{\dim(\mathcal{M}_t), \dim(\mathcal{M}_t^{\perp})\}$.
There are three key problems in DualGPM: expanding the bases of $\mathcal{M}_t$, obtaining the bases of $\mathcal{M}_t^{\perp}$ through the bases of $\mathcal{M}_t$, and reducing the bases of $\mathcal{M}_t^{\perp}$.
Expanding the Bases of $\mathcal{M}_t$
The expansion of $\mathcal{M}_t$ is the same as that in GPM.
Transforming $M_t$ to $M_t^{\perp}$
DualGPM transforms $M_t$ to $M_t^{\perp}$ by performing SVD on the matrix $M_t M_t^{\mathrm{T}}$. Specifically, let $M_t M_t^{\mathrm{T}} = U \Sigma U^{\mathrm{T}}$; the column vectors of $U$ which correspond to the zero singular values form a set of orthogonal bases of $\mathcal{M}_t^{\perp}$. Please refer to the paper of DualGPM [29] for this explanation.
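A short NumPy sketch of this transformation (the function name is ours):

```python
import numpy as np


def complement_bases(M: np.ndarray) -> np.ndarray:
    """Given orthonormal bases M (d x k) of the old-task gradient space,
    return orthonormal bases of its orthogonal complement (d x (d - k))."""
    d, k = M.shape
    # Eigenvectors of M M^T with (numerically) zero singular values span
    # the orthogonal complement of span(M).
    U, _, _ = np.linalg.svd(M @ M.T)
    return U[:, k:]
```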
Reducing the Bases of $\mathcal{M}_t^{\perp}$
DualGPM reduces the space $\mathcal{M}_{t-1}^{\perp}$ by removing the part of $\mathcal{M}_{t-1}^{\perp}$ which contains the gradient of the $t$-th task. Specifically, DualGPM first computes the input matrix $H_t$. Then, the part of $H_t$ which lies in $\mathcal{M}_{t-1}^{\perp}$ can be computed through
$$\hat{H}_t^{\perp} = M_{t-1}^{\perp} (M_{t-1}^{\perp})^{\mathrm{T}} H_t. \tag{15}$$
After that, SVD is performed on $\hat{H}_t^{\perp} = U_t^{\perp} \Sigma_t^{\perp} (V_t^{\perp})^{\mathrm{T}}$. Then, $k$ new orthogonal bases are chosen from the columns of $U_t^{\perp}$ for a maximum of $k$ satisfying the following criteria for the given threshold $\epsilon$ (the same $\epsilon$ as in (14)):
$$\|(\hat{H}_t^{\perp})_k\|_F^2 + \|H_{t,\text{proj}}\|_F^2 \le \epsilon \|H_t\|_F^2. \tag{16}$$
Let $Z = (U_t^{\perp})_{[:,1:k]}$ and $\mathcal{Z} = \operatorname{span}\{\boldsymbol{z}_1, \dots, \boldsymbol{z}_k\}$. Here, $\mathcal{Z}$ is the subspace of $\mathcal{M}_{t-1}^{\perp}$ that contains the gradient of the $t$-th task. DualGPM removes $\mathcal{Z}$ from $\mathcal{M}_{t-1}^{\perp}$ to get $\mathcal{M}_t^{\perp}$. Specifically, let $\hat{M}_{t-1}^{\perp} = M_{t-1}^{\perp} - Z Z^{\mathrm{T}} M_{t-1}^{\perp}$. DualGPM performs the second SVD on $\hat{M}_{t-1}^{\perp} = \tilde{U} \tilde{\Sigma} \tilde{V}^{\mathrm{T}}$. The columns of $\tilde{U}$ which correspond to the non-zero singular values form the bases $M_t^{\perp}$. Please refer to the paper of DualGPM [29] for this explanation.
A.3 Approximation Error in DualGPM
DualGPM either learns a subspace $\mathcal{M}_t$ to approximate the gradient space of the old tasks or learns a subspace $\mathcal{M}_t^{\perp}$ to represent the orthogonal complement of the gradient space of the old tasks. From Section A.2, we can find that the approximation error is related to the hyperparameter $\epsilon$ in (14) and (16). Specifically, when the value of $\epsilon$ in (14) and (16) increases, the approximation error decreases. As a result, the dimension of the subspace $\mathcal{M}_t$ becomes larger, while the dimension of $\mathcal{M}_t^{\perp}$ becomes smaller. Note that our InfLoRA constrains the update of the model to lie within the subspace $\mathcal{N}_t \cap \mathcal{M}_{t-1}^{\perp}$. Therefore, we can adjust the value of $\epsilon$ to adjust the space for learning the new task. Here, for all the experiments, we set
(17)
where $t$ denotes the task id and $T$ denotes the total number of tasks. In other words, we gradually increase the value of $\epsilon$ as the number of tasks increases throughout the whole learning process. Table 6 shows the setting of $\epsilon$ in our InfLoRA.
Figure 5 illustrates the variation of the dimension of the subspace $\mathcal{N}_t \cap \mathcal{M}_{t-1}^{\perp}$ in different Transformer layers of ViT-B/16. We can find that the dimension of this subspace in different Transformer layers of ViT-B/16 is always much larger than zero, which means the space for learning the new task always exists throughout the whole learning process.
Methods | Hyper-Parameters |
---|---|
L2P | lr: 0.001 (ImageNet-R, DomainNet, CIFAR100) |
: 1 (ImageNet-R, DomainNet, CIFAR100) | |
: 30 (ImageNet-R, DomainNet, CIFAR100) | |
: 20 (ImageNet-R, DomainNet, CIFAR100) | |
DualPrompt | lr: 0.001 (ImageNet-R, DomainNet, CIFAR100) |
: 3 (ImageNet-R, DomainNet, CIFAR100) | |
: 2 (ImageNet-R, DomainNet, CIFAR100) | |
: 20 (ImageNet-R, DomainNet, CIFAR100) | |
: 6 (ImageNet-R, DomainNet, CIFAR100) | |
CODA-P | lr: 0.001 (ImageNet-R, DomainNet, CIFAR100) |
: 5 (ImageNet-R, DomainNet, CIFAR100) | |
: 100 (ImageNet-R, DomainNet, CIFAR100) | |
: 8 (ImageNet-R, DomainNet, CIFAR100) | |
LAE | lr: 0.001 (ImageNet-R, DomainNet, CIFAR100) |
: 5 (ImageNet-R, DomainNet, CIFAR100) | |
C-LoRA | lr: 0.001 (ImageNet-R, DomainNet, CIFAR100) |
: 64 (ImageNet-R, DomainNet, CIFAR100) | |
: 0.5 (ImageNet-R, DomainNet, CIFAR100) | |
InfLoRA-b5 | lr: 0.001 (CIFAR100), 0.0005 (ImageNet-R, DomainNet) |
$r$: 10 (ImageNet-R, CIFAR100), 20 (DomainNet) |
: (ImageNet-R), (CIFAR100, DomainNet) | |
InfLoRA | lr: 0.0005 (ImageNet-R, DomainNet, CIFAR100) |
$r$: 10 (ImageNet-R, DomainNet, CIFAR100) |
: (ImageNet-R), (CIFAR100, DomainNet) |
B More Experimental Details
B.1 Training Details
For all the methods in all the experiments except for the comparison with SLCA, the batch size is set to 128 to follow many existing continual learning methods based on PEFT [38, 40]. Hyperparameters for different methods are selected based on the experimental settings in existing works [38, 44, 12] or through hyperparameter search. For example, Adam is used as the optimizer with running averages of gradient and its square ($\beta_1 = 0.9$, $\beta_2 = 0.999$). The learning rate is searched among [5e-4, 1e-3, 2e-3, 1e-2] for all the methods through the validation sets we split from the training sets. For the hyperparameter $r$ in our InfLoRA, we search it among [1, 5, 10, 20, 30] through the validation sets we split from the training sets. Table 6 shows the hyperparameters of different methods.
When compared with SLCA, our method is combined with classifier alignment (CA). At this time, we follow SLCA to train the expanded LoRA branches and classifiers using the SGD optimizer. Each task is trained for 50 epochs on ImageNet-R, 20 epochs on CIFAR100 and 5 epochs on DomainNet. The batch size is set to 128.
B.2 Expanded Parameters
For L2P [44], the expanded parameters consist of the inserted prompts and their corresponding keys. Let $d$ denote the embedding dimension, $l_p$ denote the prompt length, $n_p$ denote the number of prompts, and $n_l$ denote the number of layers in which prompts are inserted. To compute the total number of expanded parameters, the formula used is $n_l \times n_p \times (l_p \times d + d)$.
For DualPrompt [43], the expanded parameters also consist of the inserted prompts and corresponding keys. However, DualPrompt contains expert prompts and shared prompts. Let $d$ denote the embedding dimension, $T$ denote the number of tasks, $l_e$ denote the expert prompt length, $l_s$ denote the shared prompt length, $n_e$ denote the number of layers in which expert prompts are inserted and $n_s$ denote the number of layers in which shared prompts are inserted. To compute the total number of expanded parameters, the formula used is $T \times n_e \times l_e \times d + n_s \times l_s \times d + T \times d$.
For CODA-Prompt [38], the expanded parameters consist of the inserted prompts, corresponding keys and attention parameters. Let $d$ denote the embedding dimension, $l_p$ denote the prompt length, $n_p$ denote the number of prompts, and $n_l$ denote the number of layers in which prompts are inserted. To compute the total number of expanded parameters, the formula used is $n_l \times n_p \times (l_p \times d + d + d)$.
For LAE [12], we implement it with LoRA. Therefore, the expanded parameters in this method consist of the inserted LoRA modules and the corresponding ensemble modules. Let $d$ denote the embedding dimension, $r$ denote the rank, and $n_l$ denote the number of layers in which LoRA modules are inserted. Since LAE inserts LoRA modules into the key and value projections in multi-head attention, the number of expanded parameters is $2 \times 2 \times (r \times d + d \times r) \times n_l$, where the first factor of 2 accounts for the additional ensemble copy.
For C-LoRA [37], the expanded parameters in this method consist of the inserted LoRA modules. Let $d$ denote the embedding dimension, $r$ denote the rank, and $n_l$ denote the number of layers in which LoRA modules are inserted. Since C-LoRA inserts LoRA modules into the query, key and value projections in multi-head attention, the number of expanded parameters is $3 \times (r \times d + d \times r) \times n_l$.
For our methods, since we integrate the branches of the old tasks when the model learns a new task, the number of expanded parameters equals the number of parameters in a single branch. Let $d$ denote the embedding dimension, $r$ denote the rank, and $n_l$ denote the number of layers in which our InfLoRA modules are inserted. Since we also insert InfLoRA modules into the key and value projections in multi-head attention, the number of expanded parameters is $2 \times (r \times d + d \times r) \times n_l$.
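Under the formula above and a ViT-B/16 configuration (embedding dimension 768, $r = 10$, InfLoRA modules in all 12 Transformer blocks), the count can be reproduced with the small helper below; the helper itself is our own illustration.

```python
def inflora_expanded_params(d: int = 768, r: int = 10, n_layers: int = 12) -> int:
    """Parameters of a single InfLoRA branch inserted into the key and value
    projections of every Transformer block: 2 * (r*d + d*r) per block."""
    per_projection = r * d + d * r        # B_t (r x d) plus A_t (d x r)
    return 2 * per_projection * n_layers  # key and value projections


print(inflora_expanded_params())  # 368640 for ViT-B/16 with r = 10
```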
C More Experimental Results
Tasks | 5 | 10 | 20 | |||
---|---|---|---|---|---|---|
Method | $ACC_5$ (↑) | $\overline{ACC}_5$ (↑) | $ACC_{10}$ (↑) | $\overline{ACC}_{10}$ (↑) | $ACC_{20}$ (↑) | $\overline{ACC}_{20}$ (↑)
SeqLoRA | ||||||
HiDe-Prompt [40] | ||||||
InfLoRA | 77.52 ± 0.37 | 82.01 ± 0.12 | 75.65 ± 0.14 | 80.82 ± 0.24 | 71.01 ± 0.45 | 77.28 ± 0.45
Method | $ACC_5$ (↑) | $\overline{ACC}_5$ (↑)
SeqLoRA | ||
HiDe-Prompt [40] | ||
InfLoRA | 74.53 ± 0.23 | 79.57 ± 0.57
Pre-trained Model | Method | $ACC_T$ (↑) | $\overline{ACC}_T$ (↑)
---|---|---|---|
DINO-1k | SeqLoRA | ||
HiDe-Prompt [40] | |||
InfLoRA | 68.31 ± 0.28 | 76.15 ± 0.05 |
iBOT-1k | SeqLoRA | ||
HiDe-Prompt [40] | |||
InfLoRA | 71.84 ± 0.09 | 78.29 ± 0.09
C.1 Compare with More Methods
We compare with SeqLoRA, which initializes LoRA modules and fine-tunes these modules on multiple tasks sequentially without any operation to overcome forgetting. The results are given in Table 7, Table 8 and Table 9. We can find that our method outperforms this method.
A recent continual learning PEFT method, hierarchical decomposition prompt (HiDe-Prompt) [40], proposes to perform continual learning hierarchically. This method maintains a set of task-specific prompts for each task and contains two stages during training and inference. Specifically, given an input sample, HiDe-Prompt infers the prompt index and then uses the corresponding prompt to infer its label. We also compare our method with this method, and the results are also given in Table 7, Table 8 and Table 9. We can find that our method outperforms this method. Furthermore, this method shows comparable performance to our method in terms of final accuracy $ACC_T$ on ImageNet-R. However, there is a notable gap between this method and our method in terms of averaged accuracy $\overline{ACC}_T$. Note that averaged accuracy $\overline{ACC}_T$ is more important than final accuracy $ACC_T$ since it represents the performance of the model over the whole learning process.
C.2 Hyperparameter Analysis
We perform the hyperparameter analysis for our method InfLoRA. There are two specific hyperparameters in our method InfLoRA. The first hyperparameter is $r$, which controls the expanded parameters in InfLoRA. The second hyperparameter is $\epsilon$, which is not a specific hyperparameter of our InfLoRA but a hyperparameter introduced by DualGPM. This hyperparameter controls the components maintained in the matrices $M_t$ and $M_t^{\perp}$.
Figure 6 shows the results of our method with different values of $r$ or $\epsilon$. We can find that the performance of InfLoRA increases first and then decreases with the increase of $r$ and $\epsilon$.
C.3 Domain Incremental Setting
InfLoRA can be extended to the domain incremental setting. Specifically, DomainNet contains six domains and InfLoRA can learn on these domains sequentially. Table 10 shows that InfLoRA outperforms other baselines.
C.4 Inference Efficiency
Existing methods often involve multiple forward propagations through the pre-trained backbone. Specifically, prompt-based continual learning methods, including L2P, DualPrompt, and CODA-P, require an extra forward propagation to generate instance-specific prompts. LAE requires an extra forward propagation for ensembling. In contrast, our InfLoRA only requires a single forward propagation through the pre-trained backbone. Figure 7 provides a comparison of the time consumed by different methods during inference. We can find that our method consistently outperforms existing methods in terms of time efficiency.