entropy_weights

EntropyWeights

Bases: torch.nn.Module

Implementation of the entropy weighting described in [Conditional Adversarial Domain Adaptation](https://arxiv.org/abs/1705.10667). Computes the entropy x of each row of the input and returns 1 + exp(-x). This can be used to weight losses so that the most confidently scored samples (lowest entropy) receive the highest weights.
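
For intuition, here is a minimal sketch of the weighting formula itself (not the library's implementation, which additionally normalizes the weights): a confident row ends up with a larger weight than an uncertain one, and the resulting weights can then multiply per-sample losses before reduction.

```python
import torch

logits = torch.tensor([[5.0, 0.0, 0.0],   # confident row -> low entropy
                       [1.0, 1.0, 1.0]])  # uncertain row -> high entropy
preds = torch.softmax(logits, dim=1)
x = -torch.sum(preds * torch.log(preds), dim=1)  # per-row entropy
weights = 1 + torch.exp(-x)
# weights[0] ~ 1.92 > weights[1] ~ 1.33: the confident row is weighted more.
```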

Source code in pytorch_adapt\layers\entropy_weights.py
class EntropyWeights(torch.nn.Module):
    """
    Implementation of entropy weighting described in
    [Conditional Adversarial Domain Adaptation](https://arxiv.org/abs/1705.10667).
    Computes the entropy (```x```) per row of the input, and returns
    ```1+exp(-x)```.
    This can be used to weight losses, such that the most
    confidently scored samples have a higher weighting.
    """

    def __init__(
        self,
        after_softmax: bool = False,
        normalizer: Callable[[torch.Tensor], torch.Tensor] = None,
    ):
        """
        Arguments:
            after_softmax: If ```True```, then the rows of the input are assumed to
                already have softmax applied to them.
            normalizer: A callable for normalizing
                (e.g. min-max normalization) the weights.
                If ```None```, then sum normalization is used.
        """
        super().__init__()
        self.after_softmax = after_softmax
        self.normalizer = c_f.default(normalizer, SumNormalizer, {})

    def forward(self, logits: torch.Tensor) -> torch.Tensor:
        """
        Arguments:
            logits: Raw logits if ```self.after_softmax``` is False.
                Otherwise each row should be predictions that sum up to 1.
        """
        return entropy_weights(logits, self.after_softmax, self.normalizer)

    def extra_repr(self):
        """"""
        return c_f.extra_repr(self, ["after_softmax"])

__init__(after_softmax=False, normalizer=None)

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| after_softmax | bool | If True, then the rows of the input are assumed to already have softmax applied to them. | False |
| normalizer | Callable[[torch.Tensor], torch.Tensor] | A callable for normalizing (e.g. min-max normalization) the weights. If None, then sum normalization is used. | None |
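
For example, any callable mapping a tensor of weights to a tensor of weights can be passed as the normalizer. Below is a sketch with a hypothetical min-max normalizer; the import path follows the source file shown below, so adjust it if your install differs.

```python
import torch
from pytorch_adapt.layers import EntropyWeights

# Hypothetical normalizer: rescale weights to [0, 1] via min-max normalization.
def min_max(w: torch.Tensor) -> torch.Tensor:
    return (w - w.min()) / (w.max() - w.min())

fn = EntropyWeights(normalizer=min_max)
weights = fn(torch.randn(32, 10))  # raw logits; after_softmax defaults to False
```
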
Source code in pytorch_adapt\layers\entropy_weights.py
def __init__(
    self,
    after_softmax: bool = False,
    normalizer: Callable[[torch.Tensor], torch.Tensor] = None,
):
    """
    Arguments:
        after_softmax: If ```True```, then the rows of the input are assumed to
            already have softmax applied to them.
        normalizer: A callable for normalizing
            (e.g. min-max normalization) the weights.
            If ```None```, then sum normalization is used.
    """
    super().__init__()
    self.after_softmax = after_softmax
    self.normalizer = c_f.default(normalizer, SumNormalizer, {})

forward(logits)

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| logits | torch.Tensor | Raw logits if self.after_softmax is False. Otherwise each row should be predictions that sum up to 1. | required |
Source code in pytorch_adapt\layers\entropy_weights.py
def forward(self, logits: torch.Tensor) -> torch.Tensor:
    """
    Arguments:
        logits: Raw logits if ```self.after_softmax``` is False.
            Otherwise each row should be predictions that sum up to 1.
    """
    return entropy_weights(logits, self.after_softmax, self.normalizer)
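
As a quick sanity check (a sketch, assuming the import path from the source header above), passing raw logits with the default settings should match passing already-softmaxed rows with after_softmax=True:

```python
import torch
from pytorch_adapt.layers import EntropyWeights

logits = torch.randn(8, 5)

# Default: softmax is applied internally to the raw logits.
w1 = EntropyWeights()(logits)

# after_softmax=True: rows are treated as probabilities that sum to 1.
w2 = EntropyWeights(after_softmax=True)(torch.softmax(logits, dim=1))

assert torch.allclose(w1, w2)
```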