Renate: Automatic Neural Networks Retraining and Continual Learning in Python#
Renate is a Python package for the automatic retraining of neural network models. It uses advanced continual learning and lifelong learning algorithms to achieve this purpose. The implementation is based on PyTorch and Lightning for deep learning, and on Syne Tune for hyperparameter optimization.
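To make the PyTorch connection concrete, below is a minimal sketch of how a model can be exposed to Renate by wrapping a small torch network in a RenateModule, the model interface described in Renate's documentation. The import path and constructor arguments are assumptions based on that documentation and may differ between versions.

# Hypothetical sketch: exposing a small PyTorch classifier to Renate.
# Import path and constructor arguments are assumptions based on the docs.
import torch

from renate.models import RenateModule  # assumed import path


class MLPClassifier(RenateModule):
    """A small multi-layer perceptron that Renate can retrain."""

    def __init__(self, num_inputs: int = 784, num_classes: int = 10) -> None:
        # The constructor arguments are assumed to be recorded so that the
        # model can be re-instantiated when a retraining job reloads its state.
        super().__init__(
            constructor_arguments={
                "num_inputs": num_inputs,
                "num_classes": num_classes,
            },
            loss_fn=torch.nn.CrossEntropyLoss(),
        )
        self._net = torch.nn.Sequential(
            torch.nn.Linear(num_inputs, 128),
            torch.nn.ReLU(),
            torch.nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self._net(x)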
Quick links#
Install Renate with
pip install renate
or look at these instructions. Examples for local training and training on Amazon SageMaker are also available.
Who needs Renate?#
In many applications, data becomes available over time, and retraining from scratch for every new batch of data is prohibitively expensive. In these cases, we would like to use the new batch of data to update our previous model at limited cost. Unfortunately, since data in different chunks is not sampled from the same distribution, simply fine-tuning the old model leads to problems such as catastrophic forgetting. The algorithms in Renate help mitigate the negative impact of forgetting and improve overall model performance.
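As a concrete illustration, the sketch below updates a previously trained model on a new chunk of data using a replay-based updater. It assumes the run_training_job entry point and the config-file convention (a user-provided renate_config.py defining model_fn and data_module_fn) described in Renate's documentation; paths and argument names are illustrative and may differ between versions.

# Hypothetical sketch: updating an existing model on a new chunk of data.
# Entry point, argument names and paths are assumptions, not a verbatim API.
from renate.training import run_training_job  # assumed import path

if __name__ == "__main__":
    run_training_job(
        config_file="renate_config.py",  # assumed to define model_fn/data_module_fn
        updater="ER",                    # replay-based updater to mitigate forgetting
        max_epochs=20,
        chunk_id=1,                      # index of the new batch of data
        input_state_url="./state/",      # state saved by the previous update
        output_state_url="./state_updated/",
        config_space={"optimizer": "SGD", "learning_rate": 0.05},  # fixed hyperparameters
        mode="max",
        metric="val_accuracy",
        backend="local",                 # "sagemaker" would run the job in the cloud
    )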
Renate also offers hyperparameter optimization (HPO), a functionality that can significantly impact model performance when the model is continuously updated. To this end, Renate employs Syne Tune under the hood and offers advanced HPO methods such as multi-fidelity algorithms (ASHA) and transfer-learning algorithms (useful for speeding up re-tuning).
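To make this concrete, the sketch below replaces the fixed hyperparameters of the previous example with Syne Tune search spaces and requests a multi-fidelity scheduler. The scheduler name and the search-space keys are assumptions based on the Renate and Syne Tune documentation.

# Hypothetical sketch: tuning hyperparameters while updating the model.
# The "asha" scheduler name and the search-space keys are assumptions.
from syne_tune.config_space import choice, loguniform  # Syne Tune search spaces

from renate.training import run_training_job  # assumed import path

config_space = {
    "optimizer": "SGD",
    "learning_rate": loguniform(1e-4, 1e-1),  # tuned on a log scale
    "batch_size": choice([32, 64, 128]),
}

if __name__ == "__main__":
    run_training_job(
        config_file="renate_config.py",
        updater="ER",
        max_epochs=20,
        chunk_id=1,
        input_state_url="./state/",
        output_state_url="./state_updated/",
        config_space=config_space,
        mode="max",
        metric="val_accuracy",
        scheduler="asha",  # multi-fidelity HPO via Syne Tune
        backend="local",
    )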
Key features#
- Easy to scale and run in the cloud
- Designed for real-world retraining pipelines
- Advanced HPO functionalities available out-of-the-box
- Open for experimentation
Resources#
Cite Renate#
@misc{renate2023,
  title = {Renate: A Library for Real-World Continual Learning},
  author = {Martin Wistuba and Martin Ferianc and Lukas Balles and Cedric Archambeau and Giovanni Zappella},
  year = {2023},
  eprint = {2304.12067},
  archivePrefix = {arXiv},
  primaryClass = {cs.LG}
}
What are you looking for?#
- Installation Instructions
pip install renate
If you did not find what you were looking for, open an issue and we will do our best to improve the documentation.