
gradient_reversal

GradientReversal

Bases: torch.nn.Module

Implementation of the gradient reversal layer described in Domain-Adversarial Training of Neural Networks, which 'leaves the input unchanged during forward propagation and reverses the gradient by multiplying it by a negative scalar during backpropagation.'
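The internal `_GradientReversal` autograd function used by this layer is not shown in this excerpt. As an illustrative sketch (assuming the standard `torch.autograd.Function` pattern; the class name below is hypothetical, not the library's), the described behavior can be implemented like this:

```python
import torch


class _GradientReversalSketch(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -weight in backward."""

    @staticmethod
    def forward(ctx, x, weight):
        ctx.save_for_backward(weight)
        return x.clone()  # leave the input unchanged

    @staticmethod
    def backward(ctx, grad_output):
        (weight,) = ctx.saved_tensors
        # reverse the gradient by multiplying by a negative scalar
        return -weight * grad_output, None  # no gradient w.r.t. weight


x = torch.ones(3, requires_grad=True)
y = _GradientReversalSketch.apply(x, torch.tensor([2.0]))
print(torch.equal(y, x))  # True: forward is the identity
y.sum().backward()
print(x.grad)  # tensor([-2., -2., -2.]): grad of 1 per element, scaled by -2
```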

Source code in pytorch_adapt\layers\gradient_reversal.py
class GradientReversal(torch.nn.Module):
    """
    Implementation of the gradient reversal layer described in
    [Domain-Adversarial Training of Neural Networks](https://arxiv.org/abs/1505.07818),
    which 'leaves the input unchanged during forward propagation
    and reverses the gradient by multiplying it
    by a negative scalar during backpropagation.'
    """

    def __init__(self, weight: float = 1.0):
        """
        Arguments:
        weight: The gradients will be multiplied by ```-weight```
            during the backward pass.
        """
        super().__init__()
        self.register_buffer("weight", torch.tensor([weight]))
        pml_cf.add_to_recordable_attributes(self, "weight")

    def update_weight(self, new_weight):
        self.weight[0] = new_weight

    def forward(self, x):
        """"""
        return _GradientReversal.apply(x, pml_cf.to_device(self.weight, x))

    def extra_repr(self):
        """"""
        return c_f.extra_repr(self, ["weight"])
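To show why the reversal is useful in domain-adversarial training, here is a hedged, self-contained sketch (plain PyTorch, with a minimal stand-in for `GradientReversal` at `weight = 1.0`, not the library's implementation): placing the reversal between an encoder and a domain classifier means that minimizing the domain loss updates the classifier normally while pushing the encoder in the opposite direction, i.e. toward domain-confusing features.

```python
import torch


class RevGrad(torch.autograd.Function):
    """Minimal stand-in for the reversal layer, with weight fixed at 1.0."""

    @staticmethod
    def forward(ctx, x):
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output


torch.manual_seed(0)
encoder = torch.nn.Linear(4, 4)
domain_clf = torch.nn.Linear(4, 1)
x = torch.randn(8, 4)

# Domain loss with reversal: the encoder receives negated gradients.
loss = domain_clf(RevGrad.apply(encoder(x))).mean()
loss.backward()
grad_with_reversal = encoder.weight.grad.clone()

# Same computation without reversal, for comparison.
encoder.zero_grad()
loss = domain_clf(encoder(x)).mean()
loss.backward()
grad_without = encoder.weight.grad.clone()

# The encoder's gradient is exactly negated by the reversal layer.
print(torch.allclose(grad_with_reversal, -grad_without))  # True
```

The domain classifier's own gradients are unaffected, since the reversal acts only on gradients flowing back past it toward the encoder.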

__init__(weight=1.0)

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `weight` | `float` | The gradients will be multiplied by `-weight` during the backward pass. | `1.0` |

Source code in pytorch_adapt\layers\gradient_reversal.py
def __init__(self, weight: float = 1.0):
    """
    Arguments:
        weight: The gradients will be multiplied by ```-weight```
            during the backward pass.
    """
    super().__init__()
    self.register_buffer("weight", torch.tensor([weight]))
    pml_cf.add_to_recordable_attributes(self, "weight")