Labels
bug, docs, help wanted, lightningcli, ver: 1.9.x
Description
Bug description
When using LightningCLI with --lr_scheduler pytorch_lightning.cli.ReduceLROnPlateau --lr_scheduler.monitor=epoch, the learning rate does not change during training, even when progress has stalled. I've checked this with the learning rate monitor across 10 different experiments, and the learning rate never changes.
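For reference, the intended effect of these flags is roughly equivalent to the following hand-written configure_optimizers (a sketch for illustration only, not the CLI's actual implementation; self.learning_rate is assumed to be defined on the module):

import torch

def configure_optimizers(self):
    # Sketch: optimizer plus a ReduceLROnPlateau scheduler watching "epoch",
    # mirroring --lr_scheduler ... --lr_scheduler.monitor=epoch
    optimizer = torch.optim.Adam(self.parameters(), lr=self.learning_rate)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)
    return {
        "optimizer": optimizer,
        "lr_scheduler": {"scheduler": scheduler, "monitor": "epoch"},
    }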
What version are you seeing the problem on?
v1.9
How to reproduce the bug
Use LightningCLI with --lr_scheduler pytorch_lightning.cli.ReduceLROnPlateau --lr_scheduler.monitor=epoch together with a configure_optimizers that returns only the optimizer:

def configure_optimizers(self):
    return torch.optim.Adam(self.parameters(), lr=self.learning_rate)
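A minimal self-contained script along these lines (the module and dataset names below are made up for illustration; only the CLI flags and the configure_optimizers above come from this report):

import torch
from torch.utils.data import DataLoader, Dataset
from pytorch_lightning import LightningModule
from pytorch_lightning.cli import LightningCLI


class RandomDataset(Dataset):
    # Random fixed-size dataset, just enough to drive the training loop.
    def __init__(self, size: int = 64, length: int = 256):
        self.data = torch.randn(length, size)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]


class DemoModule(LightningModule):
    def __init__(self, learning_rate: float = 1e-3):
        super().__init__()
        self.learning_rate = learning_rate
        self.layer = torch.nn.Linear(64, 2)

    def training_step(self, batch, batch_idx):
        loss = self.layer(batch).sum()
        self.log("train_loss", loss)
        return loss

    def train_dataloader(self):
        return DataLoader(RandomDataset(), batch_size=32)

    # Returns only the optimizer; the scheduler is supplied via the CLI flags.
    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.learning_rate)


if __name__ == "__main__":
    LightningCLI(DemoModule)

Run it with, for example: python repro.py fit --lr_scheduler pytorch_lightning.cli.ReduceLROnPlateau --lr_scheduler.monitor=epoch and track the learning rate with a LearningRateMonitor callback.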
Error messages and logs


Environment
- PyTorch Lightning Version: 1.9.0
- PyTorch Version: 1.10.0
- Python version: 3.8.16
- OS: Linux
- GPU models and configuration: V100
- How you installed Lightning: pip
More info
I've seen it happen in version 1.9.0 but it's quite possible it applies to master as well.