STYLE: Fix lightning deprecation warnings
I'm fixing a few issues that would prevent upgrading the pytorch-lightning package. Trainer.lr_schedulers is being replaced with Trainer.lr_scheduler_configs. In future versions, trainer.logger will return only the first logger when multiple loggers are configured, so I'm iterating over trainer.loggers instead. And LightningLoggerBase.close is being replaced with LightningLoggerBase.finalize.
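A minimal sketch of the new call sites, assuming a pytorch-lightning Trainer that has finished fitting; the helper name log_lr_and_finalize is hypothetical and not part of this commit:

from pytorch_lightning import Trainer

def log_lr_and_finalize(trainer: Trainer) -> None:
    # lr_scheduler_configs replaces lr_schedulers: each entry is an
    # LRSchedulerConfig whose scheduler is an attribute, not a dict key.
    last_lr = trainer.lr_scheduler_configs[0].scheduler.get_last_lr()[0]
    print(f"Last learning rate: {last_lr}")
    # trainer.loggers replaces trainer.logger for multi-logger setups,
    # and finalize(status) replaces close().
    for logger in trainer.loggers:
        logger.finalize("success")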

Closes microsoft#751
dkarhi committed May 19, 2023
1 parent 2877002 commit cc7d754
Showing 2 changed files with 3 additions and 2 deletions.
2 changes: 1 addition & 1 deletion InnerEye/ML/lightning_base.py
@@ -395,5 +395,5 @@ def write_loss(self, is_training: bool, loss: torch.Tensor) -> None:
         assert isinstance(self.trainer, Trainer)
         self.log_on_epoch(MetricType.LOSS, loss, is_training)
         if is_training:
-            learning_rate = self.trainer.lr_schedulers[0]['scheduler'].get_last_lr()[0]
+            learning_rate = self.trainer.lr_scheduler_configs[0].scheduler.get_last_lr()[0]  # type: ignore
             self.log_on_epoch(MetricType.LEARNING_RATE, learning_rate, is_training)
3 changes: 2 additions & 1 deletion InnerEye/ML/model_training.py
@@ -266,7 +266,8 @@ def model_train(checkpoint_path: Optional[Path],
     logging.info("Starting training")

     trainer.fit(lightning_model, datamodule=data_module)
-    trainer.logger.close()  # type: ignore
+    for logger in trainer.loggers:
+        logger.finalize("success")

     world_size = getattr(trainer, "world_size", 0)
     is_azureml_run = not is_offline_run_context(RUN_CONTEXT)
