pyiqa.losses
Package Contents
Classes
CharbonnierLoss | Charbonnier loss (one variant of Robust L1Loss, a differentiable variant of L1Loss).
L1Loss | L1 (mean absolute error, MAE) loss.
MSELoss | MSE (L2) loss.
WeightedTVLoss | Weighted TV loss.
EMDLoss | EMD (earth mover distance) loss.
PLCCLoss | PLCC loss, induced from Pearson’s Linear Correlation Coefficient.
NiNLoss | NiN (Norm in Norm) loss.
- class pyiqa.losses.CharbonnierLoss(loss_weight=1.0, reduction='mean', eps=1e-12)[source]
Bases:
torch.nn.Module
Charbonnier loss (one variant of Robust L1Loss, a differentiable variant of L1Loss).
- Described in “Deep Laplacian Pyramid Networks for Fast and Accurate
Super-Resolution”.
- Args:
loss_weight (float): Loss weight for Charbonnier loss. Default: 1.0.
reduction (str): Specifies the reduction to apply to the output. Supported choices are ‘none’ | ‘mean’ | ‘sum’. Default: ‘mean’.
eps (float): A value used to control the curvature near zero. Default: 1e-12.
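The Charbonnier penalty is sqrt((pred - target)^2 + eps), which behaves like L2 near zero (with curvature set by eps) and like L1 for large residuals. A minimal usage sketch, assuming the usual (pred, target) call signature; the tensors below are hypothetical:

    import torch
    from pyiqa.losses import CharbonnierLoss

    pred = torch.rand(4, 3, 64, 64)    # hypothetical prediction batch
    target = torch.rand(4, 3, 64, 64)  # hypothetical ground-truth batch

    criterion = CharbonnierLoss(loss_weight=1.0, reduction='mean', eps=1e-12)
    loss = criterion(pred, target)

    # With reduction='mean', this should match the element-wise Charbonnier
    # penalty averaged over all elements (up to the loss_weight factor):
    manual = torch.sqrt((pred - target) ** 2 + 1e-12).mean()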
- class pyiqa.losses.L1Loss(loss_weight=1.0, reduction='mean')[source]
Bases:
torch.nn.Module
L1 (mean absolute error, MAE) loss.
- Args:
loss_weight (float): Loss weight for L1 loss. Default: 1.0.
reduction (str): Specifies the reduction to apply to the output. Supported choices are ‘none’ | ‘mean’ | ‘sum’. Default: ‘mean’.
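Usage is the same across the simple pixel losses; a minimal sketch with hypothetical tensors (MSELoss below exposes the identical interface):

    import torch
    from pyiqa.losses import L1Loss

    pred = torch.rand(4, 3, 32, 32)    # hypothetical prediction
    target = torch.rand(4, 3, 32, 32)  # hypothetical ground truth

    # reduction='mean' returns a scalar; reduction='none' keeps the
    # element-wise error map (same shape as pred).
    loss_mean = L1Loss(loss_weight=1.0, reduction='mean')(pred, target)
    loss_map = L1Loss(reduction='none')(pred, target)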
- class pyiqa.losses.MSELoss(loss_weight=1.0, reduction='mean')[source]
Bases:
torch.nn.Module
MSE (L2) loss.
- Args:
loss_weight (float): Loss weight for MSE loss. Default: 1.0.
reduction (str): Specifies the reduction to apply to the output. Supported choices are ‘none’ | ‘mean’ | ‘sum’. Default: ‘mean’.
- class pyiqa.losses.WeightedTVLoss(loss_weight=1.0, reduction='mean')[source]
Bases:
L1Loss
Weighted TV loss.
- Args:
loss_weight (float): Loss weight. Default: 1.0.
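Total variation penalizes differences between neighboring pixels, and the weighted variant masks those differences with a per-pixel weight map. A rough sketch of the idea; the weight keyword and the exact cropping are assumptions, not confirmed by this page:

    import torch
    from pyiqa.losses import WeightedTVLoss

    pred = torch.rand(4, 3, 64, 64)    # hypothetical image batch
    weight = torch.ones(4, 1, 64, 64)  # hypothetical per-pixel weight map

    tv_loss = WeightedTVLoss(loss_weight=1.0)(pred, weight=weight)

    # Conceptually: L1 differences between vertical and horizontal
    # neighbors, each masked by the (correspondingly cropped) weights.
    y_diff = (pred[:, :, :-1, :] - pred[:, :, 1:, :]).abs()
    x_diff = (pred[:, :, :, :-1] - pred[:, :, :, 1:]).abs()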
- class pyiqa.losses.EMDLoss(loss_weight=1.0, r=2, reduction='mean')[source]
Bases:
torch.nn.Module
EMD (earth mover distance) loss.
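EMD compares two discrete score distributions (e.g. the 10-bin histograms of NIMA-style models) through their cumulative distributions, with r controlling the norm on the CDF difference. A sketch of one common formulation, assuming the loss takes an estimated and a target distribution per sample; inputs are hypothetical and the exact reduction may differ:

    import torch
    from pyiqa.losses import EMDLoss

    # Hypothetical batch of 4 samples, each a 10-bin score distribution.
    pred_dist = torch.softmax(torch.randn(4, 10), dim=1)
    target_dist = torch.softmax(torch.randn(4, 10), dim=1)

    loss = EMDLoss(loss_weight=1.0, r=2)(pred_dist, target_dist)

    # Common formulation with r=2: the r-norm of the CDF difference per
    # sample, averaged over the batch.
    cdf_diff = torch.cumsum(pred_dist, dim=1) - torch.cumsum(target_dist, dim=1)
    manual = cdf_diff.abs().pow(2).mean(dim=1).sqrt().mean()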
- class pyiqa.losses.PLCCLoss(loss_weight=1.0)[source]
Bases:
torch.nn.Module
PLCC loss, induced from Pearson’s Linear Correlation Coefficient.
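PLCC measures the linear correlation between predicted and subjective scores, so a correlation-induced loss is minimized when that correlation is maximal. A sketch of one common formulation (1 - PLCC over centered score vectors); this is an illustration, not necessarily the exact expression used here:

    import torch
    from pyiqa.losses import PLCCLoss

    pred = torch.randn(16)    # hypothetical predicted quality scores
    target = torch.randn(16)  # hypothetical subjective scores (e.g. MOS)

    loss = PLCCLoss(loss_weight=1.0)(pred, target)

    # Illustration: center both vectors, then penalize low correlation.
    p = pred - pred.mean()
    t = target - target.mean()
    plcc = (p * t).sum() / (p.norm() * t.norm() + 1e-8)
    manual = 1 - plcc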
- class pyiqa.losses.NiNLoss(loss_weight=1.0, p=1, q=2)[source]
Bases:
torch.nn.Module
NiN (Norm in Norm) loss.
Reference:
Dingquan Li, Tingting Jiang, and Ming Jiang. Norm-in-Norm Loss with Faster Convergence and Better Performance for Image Quality Assessment. ACM MM 2020.
This loss can be described as: l1_norm(normalize(pred - pred_mean), normalize(target - target_mean)). A sketch follows below.
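Following the description above, a rough sketch of the computation with the defaults p=1, q=2: center each score vector, normalize it by its q-norm, then take the p-norm of the difference. The tensors are hypothetical, and any extra scaling in the implementation is not reflected here:

    import torch
    from pyiqa.losses import NiNLoss

    pred = torch.randn(16)    # hypothetical predicted scores
    target = torch.randn(16)  # hypothetical subjective scores

    loss = NiNLoss(loss_weight=1.0, p=1, q=2)(pred, target)

    # Sketch of the described formula with p=1, q=2:
    p_hat = pred - pred.mean()
    t_hat = target - target.mean()
    p_hat = p_hat / (p_hat.norm(p=2) + 1e-8)  # normalize by q-norm
    t_hat = t_hat / (t_hat.norm(p=2) + 1e-8)
    manual = (p_hat - t_hat).abs().sum()      # l1_norm (p-norm distance)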