Renate provides a feature to simulate continual learning offline. Given a dataset, Renate can split it into smaller parts and simulate the behavior of updating your model on a regular basis. Among other things, Renate supports you in evaluating different optimizers, hyperparameters, and the expected performance on your historical data or on public benchmark data. At the core of this feature is the function execute_experiment_job(). Readers familiar with the function run_training_job() will find its usage very intuitive.
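To make the idea concrete, the following is a minimal, library-agnostic sketch of what "splitting a dataset and updating a model on a regular basis" means. The chunking helper and the toy model below are illustrative assumptions for this sketch, not Renate's actual implementation or API.

```python
def split_into_chunks(dataset, num_chunks):
    """Split a dataset into num_chunks roughly equal, consecutive parts.

    Illustrative helper, not part of Renate's API.
    """
    chunk_size, remainder = divmod(len(dataset), num_chunks)
    chunks, start = [], 0
    for i in range(num_chunks):
        # Distribute any remainder over the first chunks.
        end = start + chunk_size + (1 if i < remainder else 0)
        chunks.append(dataset[start:end])
        start = end
    return chunks


class RunningMeanModel:
    """Toy stand-in for a model: tracks the mean of all data seen so far."""

    def __init__(self):
        self.total, self.count = 0.0, 0

    def update(self, chunk):
        # One "regular model update" on the newly arrived data chunk.
        self.total += sum(chunk)
        self.count += len(chunk)

    @property
    def mean(self):
        return self.total / self.count


data = list(range(10))  # stand-in for a real dataset
model = RunningMeanModel()
for chunk in split_into_chunks(data, num_chunks=3):
    model.update(chunk)  # simulate one scheduled update per chunk

print(model.mean)
```

In an actual experiment, execute_experiment_job() takes care of this loop for you, additionally evaluating the model after every update so you can inspect performance over time.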

Renate’s benchmarking functionality may require additional dependencies. Please install them via

pip install "Renate[benchmark]"

In the following sections, we discuss how this interface can be used to run experiments on Renate benchmarks as well as on custom benchmarks.