Error in optim/adamw.py #55740

@caodoanh2001

Description

🐛 Bug

Hi,

I think there is a small indentation mistake in optim/adamw.py at line 110.

    F.adamw(params_with_grad,
            grads,
            exp_avgs,
            exp_avg_sqs,
            max_exp_avg_sqs,
            state_steps,
            amsgrad,
            beta1,
            beta2,
            group['lr'],
            group['weight_decay'],
            group['eps'])

At line 110, I think the indentation of this call should be increased by one tab.
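
Since the call reads group['lr'], group['weight_decay'], and group['eps'], it presumably has to run once per parameter group, i.e. inside the for group in self.param_groups: loop. Here is a minimal sketch of the structure I would expect; the loop and the gathering code are paraphrased from the usual layout of torch.optim optimizers, not copied from the file:

    # Sketch of the expected structure of AdamW.step(); an excerpt,
    # not the actual file contents.
    for group in self.param_groups:
        params_with_grad = []
        grads = []
        exp_avgs = []
        exp_avg_sqs = []
        max_exp_avg_sqs = []
        state_steps = []
        beta1, beta2 = group['betas']
        amsgrad = group['amsgrad']
        # ... the lists above are filled in here, one entry per
        # parameter in group['params'] that has a gradient ...

        # One tab deeper than reported, so the update runs once per
        # group with that group's own hyperparameters.
        F.adamw(params_with_grad,
                grads,
                exp_avgs,
                exp_avg_sqs,
                max_exp_avg_sqs,
                state_steps,
                amsgrad,
                beta1,
                beta2,
                group['lr'],
                group['weight_decay'],
                group['eps'])

If the call really sits one level too far out, it would only execute after the last group finishes, silently applying that group's hyperparameters to all parameters.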

I ran into this bug while using the mmdetection toolbox.

cc @vincentqb

Labels: module: optimizer, triaged
