
Conversation

AndreaCossu
Collaborator

This builds on #493 to implement the forward transfer metric.

The main problem is computing the model's accuracy at random initialization on all experiences. This is now done by adding a periodic_eval call to the BaseStrategy before training starts. The user is expected to:

  1. Enable periodic evaluation.
  2. Pass the entire test stream to the eval_streams parameter of the strategy.train() call (see the usage sketch below). This is the recommended behavior in any case, because otherwise the evaluation plugin will raise a warning or an error, depending on its configuration.
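For context, forward transfer as defined in Lopez-Paz & Ranzato (2017) compares the accuracy on an experience measured before training on it against the accuracy obtained at random initialization, which is why the pre-training evaluation pass is needed. Below is a minimal usage sketch, not a definitive example: the forward_transfer_metrics helper name and its arguments are assumed by analogy with accuracy_metrics / forgetting_metrics, and eval_every / eval_streams follow the Avalanche strategy API at the time of this PR, so exact names may change once the API is refined.

```python
# Minimal usage sketch (assumed API, see note above).
from torch.nn import CrossEntropyLoss
from torch.optim import SGD

from avalanche.benchmarks.classic import SplitMNIST
from avalanche.evaluation.metrics import accuracy_metrics, forward_transfer_metrics
from avalanche.logging import InteractiveLogger
from avalanche.models import SimpleMLP
from avalanche.training.plugins import EvaluationPlugin
from avalanche.training.strategies import Naive

benchmark = SplitMNIST(n_experiences=5)
model = SimpleMLP(num_classes=benchmark.n_classes)

eval_plugin = EvaluationPlugin(
    accuracy_metrics(experience=True, stream=True),
    forward_transfer_metrics(experience=True, stream=True),
    loggers=[InteractiveLogger()],
)

strategy = Naive(
    model,
    SGD(model.parameters(), lr=0.001),
    CrossEntropyLoss(),
    train_mb_size=128,
    train_epochs=1,
    eval_mb_size=128,
    evaluator=eval_plugin,
    eval_every=1,  # (1) enable periodic evaluation
)

for experience in benchmark.train_stream:
    # (2) pass the whole test stream, so that accuracy at random
    # initialization (and after each experience) is available to the metric
    strategy.train(experience, eval_streams=[benchmark.test_stream])
    strategy.eval(benchmark.test_stream)
```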

This is not the only way to implement forward transfer. For example, the forward transfer metric could automatically call an eval on the entire test stream, to avoid relying too much on user calls. We can discuss other approaches.

In case you think of a better way to proceed, I will wait a few days before moving on to fully testing this implementation.

This closes #212

@AndreaCossu AndreaCossu marked this pull request as draft July 12, 2021 13:10
@coveralls

coveralls commented Jul 12, 2021

Pull Request Test Coverage Report for Build 1090448588

  • 170 of 187 (90.91%) changed or added relevant lines in 7 files are covered.
  • 3 unchanged lines in 2 files lost coverage.
  • Overall coverage increased (+0.2%) to 82.328%

Changes Missing Coverage | Covered Lines | Changed/Added Lines | %
avalanche/training/plugins/evaluation.py | 5 | 10 | 50.0%
avalanche/evaluation/metrics/forward_transfer.py | 137 | 149 | 91.95%

Files with Coverage Reduction | New Missed Lines | %
avalanche/training/strategies/base_strategy.py | 1 | 96.8%
avalanche/evaluation/metrics/forgetting_bwt.py | 2 | 91.26%

Totals Coverage Status
Change from base Build 1075424779: +0.2%
Covered Lines: 10468
Relevant Lines: 12715

💛 - Coveralls

@vlomonaco vlomonaco requested a review from AntonioCarta July 12, 2021 16:54
@vlomonaco
Member

Nice @AndreaCossu! Can you also add an example for this? Thanks :)

@AndreaCossu
Collaborator Author

I will add forward transfer to the example about the evaluation plugin, and I will provide a usage example directly in the API docs once we refine it.

@AndreaCossu
Collaborator Author

@AntonioCarta what do you think about also calling periodic_eval before training starts? I think this is the main change related to the training module.

@AntonioCarta
Collaborator

> @AntonioCarta what do you think about also calling periodic_eval before training starts? I think this is the main change related to the training module.

I think it's OK for now, but it's an ugly hack. Everyone gets a performance hit because of this. I also don't think it completely solves the problem with metrics: different metrics will need different evaluation loops, and we will have to adjust it again.

@AndreaCossu AndreaCossu marked this pull request as ready for review August 2, 2021 13:23
@AndreaCossu AndreaCossu merged commit f0482f8 into ContinualAI:master Aug 2, 2021
@AndreaCossu AndreaCossu deleted the fwt branch August 4, 2021 09:39
Successfully merging this pull request may close these issues:
Add (Lopez-Paz, 2017) Metrics: ACC, FWT & BWT