{"author":"chenyu","author_email":"chenyu@fastmail.com","author_time":1714070548,"commit_time":1714070548,"committer":"GitHub","committer_email":"noreply@github.com","hash":"5ae252ae8384d37f60e770d45170e35818709fea","message":"use at least float32 for optim.lr (#4297)\n\n* use at least float32 for optim.lr\r\n\r\nWhen doing mixed-precision training (float32 weights, default_float=half), still use float32 to store lr.\r\nIt would have been upcast later in the actual weight update anyway, but precision would already have been lost.\r\nThis improved ResNet convergence significantly.\r\n\r\n* undo type annotation","parents":["6f792b727b2d80edb939c8208ee92baef027a734"],"tree_hash":"34b70397ca86d1b8a6ef72c2602b9dfac3afad8d"}