Closed
Labels
module: optimizer (Related to torch.optim)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Description
🐛 Bug
Hi,
I think there is a small indentation mistake in optim/adamw.py at line 110:
```python
F.adamw(params_with_grad,
        grads,
        exp_avgs,
        exp_avg_sqs,
        max_exp_avg_sqs,
        state_steps,
        amsgrad,
        beta1,
        beta2,
        group['lr'],
        group['weight_decay'],
        group['eps'])
```
At line 110, I think this call should be indented one more level (one tab), so that it executes inside the enclosing loop rather than only once after it.
I hit this bug while using the mmdetection toolbox.
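To illustrate why the off-by-one indentation matters, here is a minimal sketch (not the actual PyTorch source, and with no torch dependency) of an optimizer-style `step` that iterates over parameter groups. The names `step_buggy`, `step_fixed`, and the toy group dicts are hypothetical, invented for this example; the point is that a call dedented out of the `for group in param_groups` loop runs only once, using whatever values the loop variables last held:

```python
def step_buggy(param_groups):
    """Sketch of the bug: the update call is one indent level too shallow."""
    updates = []
    for group in param_groups:
        params_with_grad = [p for p in group["params"]]
    # BUG: dedented out of the loop -- runs once, with the *last* group's
    # params_with_grad and lr, so earlier groups are never updated.
    updates.append((group["lr"], params_with_grad))
    return updates


def step_fixed(param_groups):
    """Sketch of the fix: the update call is inside the per-group loop."""
    updates = []
    for group in param_groups:
        params_with_grad = [p for p in group["params"]]
        # Correct: inside the loop, so it runs once per parameter group.
        updates.append((group["lr"], params_with_grad))
    return updates


groups = [{"lr": 0.1, "params": [1, 2]}, {"lr": 0.01, "params": [3]}]
print(step_buggy(groups))  # only the last group is "updated"
print(step_fixed(groups))  # every group is "updated"
```

With two parameter groups, the buggy version produces a single update for the last group only, while the fixed version produces one update per group, which matches the reporter's suggestion of adding one indent level at line 110.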
cc @vincentqb