# adaptive_feature_norm

## AdaptiveFeatureNorm

Bases: `torch.nn.Module`
Implementation of the loss from *Larger Norm More Transferable: An Adaptive Feature Norm Approach for Unsupervised Domain Adaptation* ([arXiv:1811.07456](https://arxiv.org/abs/1811.07456)). Encourages the L2 norms of features to grow steadily during training.
Source code in `pytorch_adapt/layers/adaptive_feature_norm.py`
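The core computation can be sketched as follows. This is a minimal sketch of the paper's stepwise norm loss, not necessarily the library's exact source; the class name `AdaptiveFeatureNormSketch` is illustrative only:

```python
import torch


class AdaptiveFeatureNormSketch(torch.nn.Module):
    # Illustrative sketch of the adaptive feature norm loss.
    def __init__(self, step_size: float = 1):
        super().__init__()
        self.step_size = step_size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Per-sample L2 norm of the features, shape (batch_size,)
        l2_norm = x.norm(p=2, dim=1)
        # Target norm: the current norm, treated as a constant
        # (detached), plus step_size. The target recedes as norms grow.
        radius = l2_norm.detach() + self.step_size
        # Penalize the gap between the current norm and the moving target
        return torch.mean((l2_norm - radius) ** 2)
```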
### `__init__(step_size=1)`
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `step_size` | `float` | The desired increase in L2 norm at each iteration. Note that the numeric value of the loss stays the same at every iteration; only its gradient drives the norm increase. | `1` |
Source code in `pytorch_adapt/layers/adaptive_feature_norm.py`
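A usage sketch, assuming the class is importable as `pytorch_adapt.layers.AdaptiveFeatureNorm`; the tensor shapes are made up for illustration:

```python
import torch
from pytorch_adapt.layers import AdaptiveFeatureNorm

loss_fn = AdaptiveFeatureNorm(step_size=1)

# Hypothetical feature batch: 32 samples, 128-dimensional features
features = torch.randn(32, 128, requires_grad=True)

loss = loss_fn(features)
loss.backward()
# The loss value itself is constant, but its gradient pushes each
# sample's L2 norm toward its previous value plus step_size.
```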
## L2PreservedDropout

Bases: `torch.nn.Module`
Implementation of the dropout layer described in *Larger Norm More Transferable: An Adaptive Feature Norm Approach for Unsupervised Domain Adaptation* ([arXiv:1811.07456](https://arxiv.org/abs/1811.07456)). Regular dropout preserves the expected L1 norm of features, whereas this layer preserves the expected L2 norm.
Source code in `pytorch_adapt/layers/adaptive_feature_norm.py`
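The reason standard dropout breaks the L2 norm: inverted dropout divides surviving elements by `1 - p`, which keeps `E[x_i]` unchanged but inflates `E[x_i^2]` by a factor of `1 / (1 - p)`. Multiplying the output by `sqrt(1 - p)` cancels that inflation, so the expected squared L2 norm matches the input's. A minimal sketch under that assumption, not necessarily the library's exact source:

```python
import math

import torch


class L2PreservedDropoutSketch(torch.nn.Module):
    # Illustrative sketch: standard dropout rescaled so that the
    # expected squared L2 norm of the features is preserved.
    def __init__(self, p: float = 0.5, inplace: bool = False):
        super().__init__()
        self.dropout = torch.nn.Dropout(p=p, inplace=inplace)
        # sqrt(1 - p) undoes the 1 / (1 - p) inflation of E[x_i^2]
        # caused by inverted dropout's rescaling of survivors.
        self.scale = math.sqrt(1 - p)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.dropout(x)
        # Dropout is active only in training mode, so the corrective
        # scaling is applied only then as well.
        return x * self.scale if self.training else x
```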
### `__init__(p=0.5, inplace=False)`
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `p` | `float` | Probability of an element to be zeroed. | `0.5` |
| `inplace` | `bool` | If set to `True`, the operation is done in-place. | `False` |
Source code in `pytorch_adapt/layers/adaptive_feature_norm.py`
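A usage sketch comparing norms before and after the layer, assuming the class is importable as `pytorch_adapt.layers.L2PreservedDropout`; the shapes are made up for illustration:

```python
import torch
from pytorch_adapt.layers import L2PreservedDropout

dropout = L2PreservedDropout(p=0.5)
dropout.train()  # dropout only has an effect in training mode

# Hypothetical feature batch: 32 samples, 128-dimensional features
features = torch.randn(32, 128)
out = dropout(features)

# In expectation, the squared L2 norm of the output matches the
# input's; standard dropout would preserve the expected L1 norm instead.
print(features.norm(p=2, dim=1).mean(), out.norm(p=2, dim=1).mean())
```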