
adaptive_feature_norm

AdaptiveFeatureNorm

Bases: torch.nn.Module

Implementation of the loss in Larger Norm More Transferable: An Adaptive Feature Norm Approach for Unsupervised Domain Adaptation (https://arxiv.org/abs/1811.07456). Encourages features to gradually have larger and larger L2 norms.
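A rough usage sketch is below. The feature extractor, batch shapes, and the way the losses are combined are illustrative assumptions, not part of this class; the import path is assumed from the package's layers module.

import torch
from pytorch_adapt.layers import AdaptiveFeatureNorm

# hypothetical feature extractor producing 256-dim features
G = torch.nn.Linear(784, 256)
feature_norm_loss = AdaptiveFeatureNorm(step_size=1)

src_imgs = torch.randn(32, 784)  # labeled source batch (illustrative)
tgt_imgs = torch.randn(32, 784)  # unlabeled target batch (illustrative)

# the norm loss is typically applied to both source and target features,
# alongside the usual classification loss on the source domain
loss = feature_norm_loss(G(src_imgs)) + feature_norm_loss(G(tgt_imgs))
loss.backward()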

Source code in pytorch_adapt\layers\adaptive_feature_norm.py
class AdaptiveFeatureNorm(torch.nn.Module):
    """
    Implementation of the loss in
    [Larger Norm More Transferable:
    An Adaptive Feature Norm Approach for
    Unsupervised Domain Adaptation](https://arxiv.org/abs/1811.07456).
    Encourages features to gradually have larger and larger L2 norms.
    """

    def __init__(self, step_size: float = 1):
        """
        Arguments:
            step_size: The desired increase in L2 norm at each iteration.
                Note that the value of the loss will always be equal to
                ```step_size``` squared, because the goal is always to make
                the L2 norm ```step_size``` larger than whatever the current
                L2 norm is.
        """
        super().__init__()
        self.step_size = step_size

    def forward(self, x):
        """"""
        l2_norm = x.norm(p=2, dim=1)
        radius = l2_norm.detach() + self.step_size
        return torch.mean((l2_norm - radius) ** 2)

    def extra_repr(self):
        """"""
        return c_f.extra_repr(self, ["step_size"])

__init__(step_size=1)

Parameters:

step_size (float, default: 1)
    The desired increase in L2 norm at each iteration. Note that the value
    of the loss will always be equal to step_size squared, because the goal
    is always to make the L2 norm step_size larger than whatever the current
    L2 norm is.
Source code in pytorch_adapt\layers\adaptive_feature_norm.py
def __init__(self, step_size: float = 1):
    """
    Arguments:
        step_size: The desired increase in L2 norm at each iteration.
            Note that the value of the loss will always be equal to
            ```step_size``` squared, because the goal is always to make
            the L2 norm ```step_size``` larger than whatever the current
            L2 norm is.
    """
    super().__init__()
    self.step_size = step_size
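
As a quick sanity check of the point above (illustrative numbers; the import path is assumed from the package's layers module): because radius is built from the detached norm, the loss value is a constant, but its gradient still pushes every feature's norm upward.

import torch
from pytorch_adapt.layers import AdaptiveFeatureNorm

loss_fn = AdaptiveFeatureNorm(step_size=2)
x = torch.randn(8, 16, requires_grad=True)
norms_before = x.norm(p=2, dim=1).detach().clone()

loss = loss_fn(x)
print(loss.item())  # always step_size ** 2 (about 4.0 here), regardless of x

loss.backward()
with torch.no_grad():
    x -= 0.1 * x.grad  # one gradient descent step
print((x.norm(p=2, dim=1) > norms_before).all())  # tensor(True)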

L2PreservedDropout

Bases: torch.nn.Module

Implementation of the dropout layer described in Larger Norm More Transferable: An Adaptive Feature Norm Approach for Unsupervised Domain Adaptation (https://arxiv.org/abs/1811.07456). Regular dropout preserves the L1 norm of features, whereas this layer preserves the L2 norm.
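A quick way to see the difference is sketched below (pure torch, illustrative numbers): standard dropout scales surviving elements by 1 / (1 - p), which preserves the L1 norm on average but inflates the L2 norm; multiplying the result by sqrt(1 - p), as this layer does, restores the L2 norm on average instead.

import math
import torch
import torch.nn.functional as F

p = 0.5
x = torch.ones(100000)

# standard dropout: survivors are scaled by 1 / (1 - p),
# so the L1 norm is preserved on average but the L2 norm grows
standard = F.dropout(x, p=p, training=True)
print(standard.abs().sum() / x.abs().sum())  # ~1.0
print(standard.norm(p=2) / x.norm(p=2))      # ~1.41, i.e. sqrt(1 / (1 - p))

# rescaling by sqrt(1 - p) brings the L2 norm back
l2_preserved = standard * math.sqrt(1 - p)
print(l2_preserved.norm(p=2) / x.norm(p=2))  # ~1.0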

Source code in pytorch_adapt\layers\adaptive_feature_norm.py
class L2PreservedDropout(torch.nn.Module):
    """
    Implementation of the dropout layer described in
    [Larger Norm More Transferable:
    An Adaptive Feature Norm Approach for
    Unsupervised Domain Adaptation](https://arxiv.org/abs/1811.07456).
    Regular dropout preserves the L1 norm of features, whereas this
    layer preserves the L2 norm.
    """

    def __init__(self, p: float = 0.5, inplace: bool = False):
        """
        Arguments:
            p: probability of an element to be zeroed
            inplace: if set to True, will do this operation in-place
        """
        super().__init__()
        self.dropout = torch.nn.Dropout(p=p, inplace=inplace)
        self.scale = math.sqrt(1 - p)

    def forward(self, x):
        """"""
        x = self.dropout(x)
        if self.training:
            return x * self.scale
        return x

__init__(p=0.5, inplace=False)

Parameters:

p (float, default: 0.5)
    probability of an element to be zeroed

inplace (bool, default: False)
    if set to True, will do this operation in-place
Source code in pytorch_adapt\layers\adaptive_feature_norm.py
def __init__(self, p: float = 0.5, inplace: bool = False):
    """
    Arguments:
        p: probability of an element to be zeroed
        inplace: if set to True, will do this operation in-place
    """
    super().__init__()
    self.dropout = torch.nn.Dropout(p=p, inplace=inplace)
    self.scale = math.sqrt(1 - p)
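
Note on the sqrt(1 - p) factor: torch.nn.Dropout already multiplies surviving elements by 1 / (1 - p), so the extra factor makes the net scale 1 / sqrt(1 - p), which compensates exactly for the fraction of elements that were zeroed and keeps the squared L2 norm unchanged in expectation. A minimal check, assuming the layer is importable from the package's layers module:

import torch
from pytorch_adapt.layers import L2PreservedDropout

layer = L2PreservedDropout(p=0.5)
x = torch.randn(64, 2048)

layer.train()
y = layer(x)
# squared L2 norm is preserved on average during training
print((y.norm(p=2) ** 2) / (x.norm(p=2) ** 2))  # ~1.0

layer.eval()
# at evaluation time dropout is a no-op and no rescaling is applied
print(torch.equal(layer(x), x))  # True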