# optimizer

## OptimizerHook

Bases: `BaseHook`
1. Executes the wrapped hook
2. Zeros all gradients
3. Backpropagates the loss
4. Steps the optimizer
Source code in `pytorch_adapt/hooks/optimizer.py`
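To make the four steps concrete, here is a minimal sketch that wraps a classification-loss hook in an `OptimizerHook`. The single-dict call convention (`hook({**models, **data})` returning `(outputs, losses)`) and the `src_imgs`/`src_labels` input keys follow other pytorch_adapt examples; treat them as assumptions rather than guarantees of this page:

```python
# Minimal sketch: OptimizerHook wrapping a loss-computing hook.
# The dict-based call convention and the source-domain input keys
# (src_imgs, src_labels) are assumed from other pytorch_adapt examples.
import torch
from pytorch_adapt.hooks import CLossHook, OptimizerHook

G = torch.nn.Linear(784, 128)  # feature extractor
C = torch.nn.Linear(128, 10)   # classifier
optimizers = [
    torch.optim.SGD(list(G.parameters()) + list(C.parameters()), lr=0.1)
]

# Each call executes CLossHook, zeros gradients,
# backpropagates the weighted loss, and steps the optimizer.
hook = OptimizerHook(CLossHook(), optimizers)

data = {
    "src_imgs": torch.randn(32, 784),
    "src_labels": torch.randint(0, 10, (32,)),
}
outputs, losses = hook({"G": G, "C": C, **data})
```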
### `__init__(hook, optimizers, weighter=None, reducer=None, **kwargs)`
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `hook` | `BaseHook` | The hook that computes the losses. | required |
| `optimizers` | `Union[List[torch.optim.Optimizer], List[str]]` | Either a list of optimizers that will be used to update model weights, or a list of optimizer names. If it's the latter, then the optimizers must be passed into the hook as one of the inputs. | required |
| `weighter` | `BaseWeighter` | Weights the returned losses and outputs a single value on which `.backward()` is called. If `None`, it defaults to `MeanWeighter`. | `None` |
| `reducer` | `BaseReducer` | A hook that reduces any unreduced losses to a single value. If `None`, it defaults to `MeanReducer`. | `None` |
Source code in `pytorch_adapt/hooks/optimizer.py`
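Below is a sketch of the two accepted forms of `optimizers`, plus a custom `weighter`. The `MeanWeighter(scale=...)` signature and the `"C_opt"` input key are assumptions chosen for illustration; this page only specifies the argument types:

```python
import torch
from pytorch_adapt.hooks import CLossHook, OptimizerHook
from pytorch_adapt.weighters import MeanWeighter

C = torch.nn.Linear(128, 10)
opt = torch.optim.Adam(C.parameters(), lr=1e-4)

# Form 1: optimizer objects, plus a weighter that scales the combined
# loss before backpropagation (the scale kwarg is an assumption).
hook = OptimizerHook(CLossHook(), [opt], weighter=MeanWeighter(scale=0.5))

# Form 2: optimizer names. The actual optimizer objects must then be
# passed into the hook as inputs under these names at call time.
named_hook = OptimizerHook(CLossHook(), ["C_opt"])
```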
## SummaryHook

Bases: `BaseHook`
Repackages losses into a dictionary format useful for logging. This should be used only at the very end of each iteration, i.e. it should be the last sub-hook in a ChainHook.
Source code in `pytorch_adapt/hooks/optimizer.py`
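A sketch of the intended placement: the `SummaryHook` sits at the end of a `ChainHook`, holding references to the optimizer hooks whose losses it repackages. The `"c_opt"` key is an arbitrary label chosen for this example:

```python
import torch
from pytorch_adapt.hooks import ChainHook, CLossHook, OptimizerHook, SummaryHook

G, C = torch.nn.Linear(784, 128), torch.nn.Linear(128, 10)
optimizers = [torch.optim.Adam(list(G.parameters()) + list(C.parameters()))]

opt_hook = OptimizerHook(CLossHook(), optimizers)

# SummaryHook runs last, after the optimizer hook has computed
# its losses for the current iteration.
hook = ChainHook(opt_hook, SummaryHook({"c_opt": opt_hook}))
```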
### `__init__(optimizers, **kwargs)`
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `optimizers` | `Dict[str, OptimizerHook]` | A dictionary of optimizer hooks. The losses computed inside these hooks will be packaged into nested dictionaries. | required |
Source code in `pytorch_adapt/hooks/optimizer.py`
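As a sketch of the nesting: each key of the dictionary becomes an outer key of the repackaged losses. The `"g_opt"`/`"c_opt"` labels are arbitrary, and the loss values shown are illustrative:

```python
import torch
from pytorch_adapt.hooks import CLossHook, OptimizerHook, SummaryHook

G, C = torch.nn.Linear(784, 128), torch.nn.Linear(128, 10)
g_opt = OptimizerHook(CLossHook(), [torch.optim.Adam(G.parameters())])
c_opt = OptimizerHook(CLossHook(), [torch.optim.Adam(C.parameters())])

# The dict keys become the outer keys of the repackaged losses,
# so logging might receive something shaped like (values illustrative):
# {"g_opt": {"c_loss": 0.72}, "c_opt": {"c_loss": 0.31}}
summary = SummaryHook({"g_opt": g_opt, "c_opt": c_opt})
```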