Optimizers
Bases: BaseContainer
A container for model optimizers.
Source code in pytorch_adapt/containers/optimizers.py (lines 9–72)
__init__(*args, multipliers=None, **kwargs)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `*args` |  |  | `()` |
| `multipliers` |  | A dictionary mapping from optimizer name to lr multiplier. Each optimizer will have its learning rate multiplied by the corresponding value. | `None` |
| `**kwargs` |  |  | `{}` |
Source code in pytorch_adapt/containers/optimizers.py (lines 14–27)
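For orientation, here is a hedged construction sketch. It assumes the usual pytorch_adapt pattern of passing an `(optimizer class, kwargs)` tuple and then populating the container from a `Models` container via `create_with`; the model names, learning rate, and multiplier value are illustrative:

```python
import torch

from pytorch_adapt.containers import Models, Optimizers

# Illustrative models; the names "G" and "C" are placeholders.
models = Models({"G": torch.nn.Linear(10, 10), "C": torch.nn.Linear(10, 2)})

# One Adam optimizer is created per model. With multipliers,
# the "C" optimizer gets lr = 1e-4 * 10, while "G" keeps lr = 1e-4.
optimizers = Optimizers(
    (torch.optim.Adam, {"lr": 1e-4}),
    multipliers={"C": 10},
)
optimizers.create_with(models)
```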
step()
Calls `.step()` on all optimizers.
Source code in pytorch_adapt/containers/optimizers.py (lines 43–48)
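Since the container behaves like a mapping from optimizer name to optimizer (an assumption based on its `BaseContainer` base), `step()` amounts to a fan-out loop, roughly:

```python
# Rough sketch of the fan-out behavior, assuming a dict-like container.
for name, optimizer in optimizers.items():
    optimizer.step()  # apply each optimizer's parameter update
```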
zero_back_step(loss, keys=None)
Zeros gradients, backpropagates the loss, and updates model weights.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `loss` |  | The loss on which `.backward()` is called. | required |
| `keys` | `List[str]` | The subset of optimizers on which to call `.zero_grad()` and `.step()`. | `None` |
Source code in pytorch_adapt/containers/optimizers.py (lines 61–72)
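A hedged usage sketch, reusing the construction pattern from above and assuming dict-style access to the `Models` container; the forward pass and loss are illustrative:

```python
import torch

from pytorch_adapt.containers import Models, Optimizers

models = Models({"G": torch.nn.Linear(10, 2)})
optimizers = Optimizers((torch.optim.SGD, {"lr": 0.1}))
optimizers.create_with(models)

loss = models["G"](torch.randn(8, 10)).sum()  # illustrative loss
# One call replaces zero_grad(), loss.backward(), and step().
# keys=["G"] restricts the call to that optimizer; keys=None means all.
optimizers.zero_back_step(loss, keys=["G"])
```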
zero_grad()
Calls `.zero_grad()` on all optimizers.
Source code in pytorch_adapt/containers/optimizers.py (lines 50–55)
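For comparison with zero_back_step, a sketch of the manual three-call sequence it wraps, using `zero_grad()` and `step()` directly (assuming `optimizers` and `loss` as in the sketch above):

```python
# Expanded form of optimizers.zero_back_step(loss), applied to all optimizers:
optimizers.zero_grad()  # clear stale gradients on every optimizer
loss.backward()         # backpropagate to compute fresh gradients
optimizers.step()       # apply each optimizer's update
```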