This builds on #493 to implement the forward transfer metric.
The problem is mainly about computing the model's accuracy at random initialization on all experiences. This is now done by adding a `periodic_eval` call to the `BaseStrategy` before training. The user is expected to:

- enable periodic eval;
- pass the entire test stream to the `eval_streams` parameter of the `strategy.train()` call (see the sketch below). This is the recommended behavior in any case, because otherwise the evaluation plugin will raise a warning or an error, depending on its configuration.
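For reference, here is a minimal sketch of that usage pattern. It assumes the Avalanche API around the time of this PR (`SplitMNIST`, `Naive`, `EvaluationPlugin`, `forward_transfer_metrics`, and the `eval_every` constructor argument); exact names, import paths, and defaults may differ across versions, so treat it as illustrative rather than definitive.

```python
# Minimal usage sketch, assuming the Avalanche API at the time of this PR;
# names such as `eval_every` and import paths may differ in other versions.
from torch.nn import CrossEntropyLoss
from torch.optim import SGD

from avalanche.benchmarks.classic import SplitMNIST
from avalanche.evaluation.metrics import accuracy_metrics, forward_transfer_metrics
from avalanche.logging import InteractiveLogger
from avalanche.models import SimpleMLP
from avalanche.training.plugins import EvaluationPlugin
from avalanche.training.strategies import Naive

benchmark = SplitMNIST(n_experiences=5)
model = SimpleMLP(num_classes=10)

# Metrics are collected by the evaluation plugin attached to the strategy.
eval_plugin = EvaluationPlugin(
    accuracy_metrics(experience=True, stream=True),
    forward_transfer_metrics(experience=True, stream=True),
    loggers=[InteractiveLogger()],
)

strategy = Naive(
    model,
    SGD(model.parameters(), lr=0.001, momentum=0.9),
    CrossEntropyLoss(),
    train_epochs=1,
    evaluator=eval_plugin,
    eval_every=0,  # enable periodic eval; with this PR it also runs once
                   # before training, recording random-init accuracy
)

for experience in benchmark.train_stream:
    # Pass the whole test stream: the pre-training eval needs accuracy on
    # *future* experiences to compute forward transfer.
    strategy.train(experience, eval_streams=[benchmark.test_stream])
```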
This is not the only way to implement forward transfer. For example, the forward transfer metric could automatically call `eval` on the entire test stream, to avoid relying too much on user calls. We can discuss other ways.
In case you think of a better way to proceed, I will wait a few days before moving on to fully testing this implementation.
I will add forward transfer to the example about the evaluation plugin, and I will provide a usage example directly in the API doc once we refine it.
@AntonioCarta what do you think about the `periodic_eval` call also happening before training starts? I think this is the main change related to the training module.
I think it's OK for now, but it's an ugly hack: everyone gets a performance hit because of this. I also don't think it completely solves the problem with metrics; different metrics will need different evaluation loops, and we will have to adjust this again.
This closes #212