kornia.losses¶
Reconstruction¶
- kornia.losses.ssim_loss(img1, img2, window_size, max_val=1.0, eps=1e-12, reduction='mean', padding='same')¶
Function that computes a loss based on the SSIM measurement.
The loss, or Structural Dissimilarity (DSSIM), is described as:
\[\text{loss}(x, y) = \frac{1 - \text{SSIM}(x, y)}{2}\]
See ssim() for details about SSIM.
- Parameters:
img1 (Tensor) – the first input image with shape \((B, C, H, W)\).
img2 (Tensor) – the second input image with shape \((B, C, H, W)\).
window_size (int) – the size of the gaussian kernel to smooth the images.
max_val (float, optional) – the dynamic range of the images. Default: 1.0
eps (float, optional) – small value for numerical stability when dividing. Default: 1e-12
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: "mean"
padding (str, optional) – 'same' | 'valid'. Whether to only use the “valid” convolution area to compute SSIM, to match the MATLAB implementation of the original SSIM paper. Default: "same"
- Return type: Tensor
- Returns:
The loss based on the ssim index.
Examples
>>> input1 = torch.rand(1, 4, 5, 5)
>>> input2 = torch.rand(1, 4, 5, 5)
>>> loss = ssim_loss(input1, input2, 5)
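The unreduced per-pixel DSSIM map can also be inspected; a minimal sketch, assuming that with padding='same' and reduction='none' the output keeps the input shape \((B, C, H, W)\):
>>> loss_map = ssim_loss(input1, input2, 5, reduction='none')  # per-pixel DSSIM map, expected shape (1, 4, 5, 5)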
- kornia.losses.ssim3d_loss(img1, img2, window_size, max_val=1.0, eps=1e-12, reduction='mean', padding='same')¶
Function that computes a loss based on the SSIM measurement.
The loss, or Structural Dissimilarity (DSSIM), is described as:
\[\text{loss}(x, y) = \frac{1 - \text{SSIM}(x, y)}{2}\]
See ssim() for details about SSIM.
- Parameters:
img1 (Tensor) – the first input image with shape \((B, C, D, H, W)\).
img2 (Tensor) – the second input image with shape \((B, C, D, H, W)\).
window_size (int) – the size of the gaussian kernel to smooth the images.
max_val (float, optional) – the dynamic range of the images. Default: 1.0
eps (float, optional) – small value for numerical stability when dividing. Default: 1e-12
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: "mean"
padding (str, optional) – 'same' | 'valid'. Whether to only use the “valid” convolution area to compute SSIM, to match the MATLAB implementation of the original SSIM paper. Default: "same"
- Return type: Tensor
- Returns:
The loss based on the ssim index.
Examples
>>> input1 = torch.rand(1, 4, 5, 5, 5)
>>> input2 = torch.rand(1, 4, 5, 5, 5)
>>> loss = ssim3d_loss(input1, input2, 5)
- kornia.losses.psnr_loss(image, target, max_val)¶
Function that computes the PSNR loss.
The loss is computed as follows:
\[\text{loss} = -\text{psnr}(x, y)\]
See psnr() for details about PSNR.
- Parameters:
image (Tensor) – the input image with arbitrary shape \((*)\).
target (Tensor) – the target image with the same shape as image.
max_val (float) – the maximum value in the image tensor (dynamic range of the images).
- Return type: Tensor
- Returns:
the computed loss as a scalar.
Examples
>>> ones = torch.ones(1)
>>> psnr_loss(ones, 1.2 * ones, 2.)  # 10 * log(4/((1.2-1)**2)) / log(10)
tensor(-20.0000)
- kornia.losses.total_variation(img, reduction='sum')¶
Function that computes Total Variation according to [1].
- Parameters:
img (Tensor) – the input image with shape \((*, H, W)\).
reduction (str, optional) – Specifies the reduction to apply to the output: 'mean' | 'sum'. 'mean': the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: "sum"
- Return type: Tensor
- Returns:
a tensor with shape \((*,)\).
Examples
>>> total_variation(torch.ones(4, 4))
tensor(0.)
>>> total_variation(torch.ones(2, 5, 3, 4, 4)).shape
torch.Size([2, 5, 3])
Note
See a working example here. Total Variation is formulated with summation; however, this is not resolution invariant, so reduction='mean' was added as an optional reduction method (see the sketch after this entry).
- Reference:
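A short sketch contrasting the two reductions, under the behavior described in the note above ('sum' accumulates over all pixels and therefore scales with resolution, while 'mean' normalizes by the number of elements):
>>> img = torch.rand(1, 3, 16, 16)
>>> tv_sum = total_variation(img)                     # default 'sum', resolution dependent
>>> tv_mean = total_variation(img, reduction='mean')  # normalized by the element count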
- kornia.losses.inverse_depth_smoothness_loss(idepth, image)¶
Criterion that computes image-aware inverse depth smoothness loss.
\[\text{loss} = \left | \partial_x d_{ij} \right | e^{-\left \| \partial_x I_{ij} \right \|} + \left | \partial_y d_{ij} \right | e^{-\left \| \partial_y I_{ij} \right \|}\]
- Parameters:
idepth (Tensor) – tensor with the inverse depth with shape \((N, 1, H, W)\).
image (Tensor) – tensor with the input image with shape \((N, 3, H, W)\).
- Return type: Tensor
- Returns:
a scalar with the computed loss.
Examples
>>> idepth = torch.rand(1, 1, 4, 5)
>>> image = torch.rand(1, 3, 4, 5)
>>> loss = inverse_depth_smoothness_loss(idepth, image)
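Since the result is a scalar, it can be used directly as a regularization term; a minimal sketch showing gradients flowing back to the inverse depth:
>>> idepth = torch.rand(1, 1, 4, 5, requires_grad=True)
>>> image = torch.rand(1, 3, 4, 5)
>>> loss = inverse_depth_smoothness_loss(idepth, image)
>>> loss.backward()  # gradients flow back to the inverse depth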
- kornia.losses.charbonnier_loss(img1, img2, reduction='none')¶
Criterion that computes the Charbonnier [2] (aka. L1-L2 [3]) loss.
According to [1], we compute the Charbonnier loss as follows:
\[\text{loss}(x, y) = \sqrt{(x - y)^{2} + 1} - 1\]
- Where:
\(x\) is the prediction.
\(y\) is the target to be regressed to.
- Reference:
[1] https://arxiv.org/pdf/1701.03077.pdf
[2] https://ieeexplore.ieee.org/document/413553
[3] https://hal.inria.fr/inria-00074015/document
[4] https://arxiv.org/pdf/1712.05927.pdf
Note
This implementation follows the formulation by Barron [1]. Other works utilize a slightly different implementation (see [4]).
- Parameters:
img1 (Tensor) – the predicted tensor with shape \((*)\).
img2 (Tensor) – the target tensor with the same shape as img1.
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied (default), 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: "none"
- Return type: Tensor
- Returns:
a scalar with the computed loss.
Example
>>> img1 = torch.randn(2, 3, 32, 32, requires_grad=True)
>>> img2 = torch.randn(2, 3, 32, 32)
>>> output = charbonnier_loss(img1, img2, reduction="sum")
>>> output.backward()
- kornia.losses.welsch_loss(img1, img2, reduction='none')¶
Criterion that computes the Welsch [2] (aka. Leclerc [3]) loss.
According to [1], we compute the Welsch loss as follows:
\[\text{WL}(x, y) = 1 - \exp\left(-\frac{1}{2} (x - y)^{2}\right)\]
- Where:
\(x\) is the prediction.
\(y\) is the target to be regressed to.
- Reference:
[1] https://arxiv.org/pdf/1701.03077.pdf
[2] https://www.tandfonline.com/doi/abs/10.1080/03610917808812083
[3] https://link.springer.com/article/10.1007/BF00054839
- Parameters:
img1 (Tensor) – the predicted tensor with shape \((*)\).
img2 (Tensor) – the target tensor with the same shape as img1.
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied (default), 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: "none"
- Return type: Tensor
- Returns:
a scalar with the computed loss.
Example
>>> img1 = torch.randn(2, 3, 32, 32, requires_grad=True)
>>> img2 = torch.randn(2, 3, 32, 32)
>>> output = welsch_loss(img1, img2, reduction="mean")
>>> output.backward()
- kornia.losses.cauchy_loss(img1, img2, reduction='none')¶
Criterion that computes the Cauchy [2] (aka. Lorentzian) loss.
According to [1], we compute the Cauchy loss as follows:
\[\text{loss}(x, y) = \log\left(\frac{1}{2} (x - y)^{2} + 1\right)\]
- Where:
\(x\) is the prediction.
\(y\) is the target to be regressed to.
- Reference:
[1] https://arxiv.org/pdf/1701.03077.pdf
[2] https://files.is.tue.mpg.de/black/papers/cviu.63.1.1996.pdf
- Parameters:
img1 (Tensor) – the predicted tensor with shape \((*)\).
img2 (Tensor) – the target tensor with the same shape as img1.
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied (default), 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: "none"
- Return type: Tensor
- Returns:
a scalar with the computed loss.
Example
>>> img1 = torch.randn(2, 3, 32, 32, requires_grad=True)
>>> img2 = torch.randn(2, 3, 32, 32)
>>> output = cauchy_loss(img1, img2, reduction="mean")
>>> output.backward()
- kornia.losses.geman_mcclure_loss(img1, img2, reduction='none')¶
Criterion that computes the Geman-McClure loss [2].
According to [1], we compute the Geman-McClure loss as follows:
\[\text{loss}(x, y) = \frac{2 (x - y)^{2}}{(x - y)^{2} + 4}\]
- Where:
\(x\) is the prediction.
\(y\) is the target to be regressed to.
- Reference:
[1] https://arxiv.org/pdf/1701.03077.pdf
[2] Bayesian image analysis: An application to single photon emission tomography, Geman and McClure, 1985
- Parameters:
img1 (Tensor) – the predicted tensor with shape \((*)\).
img2 (Tensor) – the target tensor with the same shape as img1.
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied (default), 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: "none"
- Return type: Tensor
- Returns:
a scalar with the computed loss.
Example
>>> img1 = torch.randn(2, 3, 32, 32, requires_grad=True)
>>> img2 = torch.randn(2, 3, 32, 32)
>>> output = geman_mcclure_loss(img1, img2, reduction="mean")
>>> output.backward()
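To see how the four robust penalties above treat the same residual differently, here is a small sketch that evaluates each one at a residual of 3 (the values in the comments follow directly from the formulas above, assuming the functions implement them as written):
>>> x = torch.full((1,), 3.0)   # prediction
>>> y = torch.zeros(1)          # target, so the residual x - y is 3
>>> l_charbonnier = charbonnier_loss(x, y)    # sqrt(3**2 + 1) - 1 ≈ 2.162
>>> l_welsch = welsch_loss(x, y)              # 1 - exp(-0.5 * 3**2) ≈ 0.989
>>> l_cauchy = cauchy_loss(x, y)              # log(0.5 * 3**2 + 1) ≈ 1.705
>>> l_geman = geman_mcclure_loss(x, y)        # 2 * 3**2 / (3**2 + 4) ≈ 1.385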
- class kornia.losses.SSIMLoss(window_size, max_val=1.0, eps=1e-12, reduction='mean', padding='same')¶
Create a criterion that computes a loss based on the SSIM measurement.
The loss, or Structural Dissimilarity (DSSIM), is described as:
\[\text{loss}(x, y) = \frac{1 - \text{SSIM}(x, y)}{2}\]
See ssim_loss() for details about SSIM.
- Parameters:
window_size (int) – the size of the gaussian kernel to smooth the images.
max_val (float, optional) – the dynamic range of the images. Default: 1.0
eps (float, optional) – small value for numerical stability when dividing. Default: 1e-12
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: "mean"
padding (str, optional) – 'same' | 'valid'. Whether to only use the “valid” convolution area to compute SSIM, to match the MATLAB implementation of the original SSIM paper. Default: "same"
- Returns:
The loss based on the ssim index.
Examples
>>> input1 = torch.rand(1, 4, 5, 5)
>>> input2 = torch.rand(1, 4, 5, 5)
>>> criterion = SSIMLoss(5)
>>> loss = criterion(input1, input2)
- class kornia.losses.SSIM3DLoss(window_size, max_val=1.0, eps=1e-12, reduction='mean', padding='same')¶
Create a criterion that computes a loss based on the SSIM measurement.
The loss, or Structural Dissimilarity (DSSIM), is described as:
\[\text{loss}(x, y) = \frac{1 - \text{SSIM}(x, y)}{2}\]
See ssim_loss() for details about SSIM.
- Parameters:
window_size (int) – the size of the gaussian kernel to smooth the images.
max_val (float, optional) – the dynamic range of the images. Default: 1.0
eps (float, optional) – small value for numerical stability when dividing. Default: 1e-12
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: "mean"
padding (str, optional) – 'same' | 'valid'. Whether to only use the “valid” convolution area to compute SSIM, to match the MATLAB implementation of the original SSIM paper. Default: "same"
- Returns:
The loss based on the ssim index.
Examples
>>> input1 = torch.rand(1, 4, 5, 5, 5)
>>> input2 = torch.rand(1, 4, 5, 5, 5)
>>> criterion = SSIM3DLoss(5)
>>> loss = criterion(input1, input2)
- class kornia.losses.MS_SSIMLoss(sigmas=[0.5, 1.0, 2.0, 4.0, 8.0], data_range=1.0, K=(0.01, 0.03), alpha=0.025, compensation=200.0, reduction='mean')¶
Creates a criterion that computes the MS-SSIM + L1 loss.
According to [1], we compute the MS-SSIM + L1 loss as follows:
\[\text{loss}(x, y) = \alpha \cdot \mathcal{L_{MSSIM}}(x,y) + (1 - \alpha) \cdot G_\alpha \cdot \mathcal{L_1}(x,y)\]
- Where:
\(\alpha\) is the weight parameter.
\(x\) and \(y\) are the reconstructed and true reference images.
\(\mathcal{L_{MSSIM}}\) is the MS-SSIM loss.
\(G_\alpha\) is the sigma values for computing multi-scale SSIM.
\(\mathcal{L_1}\) is the L1 loss.
- Reference:
- Parameters:
sigmas (list[float], optional) – gaussian sigma values. Default: [0.5, 1.0, 2.0, 4.0, 8.0]
data_range (float, optional) – the range of the images. Default: 1.0
K (tuple[float, float], optional) – k values. Default: (0.01, 0.03)
alpha (float, optional) – specifies the alpha value. Default: 0.025
compensation (float, optional) – specifies the scaling coefficient. Default: 200.0
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: "mean"
- Returns:
The computed loss.
- Shape:
Input1: \((N, C, H, W)\).
Input2: \((N, C, H, W)\).
Output: \((N, H, W)\) or scalar if reduction is set to 'mean' or 'sum'.
Examples
>>> input1 = torch.rand(1, 3, 5, 5)
>>> input2 = torch.rand(1, 3, 5, 5)
>>> criterion = kornia.losses.MS_SSIMLoss()
>>> loss = criterion(input1, input2)
- class kornia.losses.TotalVariation(*args, **kwargs)¶
Compute the Total Variation according to [1].
- Shape:
Input: \((*, H, W)\).
Output: \((*,)\).
Examples
>>> tv = TotalVariation()
>>> output = tv(torch.ones((2, 3, 4, 4), requires_grad=True))
>>> output.data
tensor([[0., 0., 0.],
        [0., 0., 0.]])
>>> output.sum().backward()  # grad can be implicitly created only for scalar outputs
- Reference:
- class kornia.losses.PSNRLoss(max_val)¶
Create a criterion that calculates the PSNR loss.
The loss is computed as follows:
\[\text{loss} = -\text{psnr}(x, y)\]
See psnr() for details about PSNR.
- Parameters:
max_val (float) – the maximum value in the image tensor.
- Shape:
Image: arbitrary dimensional tensor \((*)\).
Target: arbitrary dimensional tensor \((*)\) same shape as image.
Output: a scalar.
Examples
>>> ones = torch.ones(1)
>>> criterion = PSNRLoss(2.)
>>> criterion(ones, 1.2 * ones)  # 10 * log(4/((1.2-1)**2)) / log(10)
tensor(-20.0000)
- class kornia.losses.InverseDepthSmoothnessLoss(*args, **kwargs)¶
Criterion that computes image-aware inverse depth smoothness loss.
\[\text{loss} = \left | \partial_x d_{ij} \right | e^{-\left \| \partial_x I_{ij} \right \|} + \left | \partial_y d_{ij} \right | e^{-\left \| \partial_y I_{ij} \right \|}\]
- Shape:
Inverse Depth: \((N, 1, H, W)\)
Image: \((N, 3, H, W)\)
Output: scalar
Examples
>>> idepth = torch.rand(1, 1, 4, 5)
>>> image = torch.rand(1, 3, 4, 5)
>>> smooth = InverseDepthSmoothnessLoss()
>>> loss = smooth(idepth, image)
- class kornia.losses.CharbonnierLoss(reduction='none')¶
Criterion that computes the Charbonnier [2] (aka. L1-L2 [3]) loss.
According to [1], we compute the Charbonnier loss as follows:
\[\text{loss}(x, y) = \sqrt{(x - y)^{2} + 1} - 1\]
- Where:
\(x\) is the prediction.
\(y\) is the target to be regressed to.
- Reference:
[1] https://arxiv.org/pdf/1701.03077.pdf
[2] https://ieeexplore.ieee.org/document/413553
[3] https://hal.inria.fr/inria-00074015/document
[4] https://arxiv.org/pdf/1712.05927.pdf
Note
This implementation follows the formulation by Barron [1]. Other works utilize a slightly different implementation (see [4]).
- Parameters:
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied (default), 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: "none"
- Shape:
img1: the predicted tensor with shape \((*)\).
img2: the target tensor with the same shape as img1.
Example
>>> criterion = CharbonnierLoss(reduction="mean")
>>> img1 = torch.randn(2, 3, 32, 2107, requires_grad=True)
>>> img2 = torch.randn(2, 3, 32, 2107)
>>> output = criterion(img1, img2)
>>> output.backward()
- class kornia.losses.WelschLoss(reduction='none')¶
Criterion that computes the Welsch [2] (aka. Leclerc [3]) loss.
According to [1], we compute the Welsch loss as follows:
\[\text{WL}(x, y) = 1 - \exp\left(-\frac{1}{2} (x - y)^{2}\right)\]
- Where:
\(x\) is the prediction.
\(y\) is the target to be regressed to.
- Reference:
[1] https://arxiv.org/pdf/1701.03077.pdf
[2] https://www.tandfonline.com/doi/abs/10.1080/03610917808812083
[3] https://link.springer.com/article/10.1007/BF00054839
- Parameters:
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied (default), 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: "none"
- Shape:
img1: the predicted tensor with shape \((*)\).
img2: the target tensor with the same shape as img1.
Example
>>> criterion = WelschLoss(reduction="mean")
>>> img1 = torch.randn(2, 3, 32, 1904, requires_grad=True)
>>> img2 = torch.randn(2, 3, 32, 1904)
>>> output = criterion(img1, img2)
>>> output.backward()
- class kornia.losses.CauchyLoss(reduction='none')¶
Criterion that computes the Cauchy [2] (aka. Lorentzian) loss.
According to [1], we compute the Cauchy loss as follows:
\[\text{loss}(x, y) = \log\left(\frac{1}{2} (x - y)^{2} + 1\right)\]
- Where:
\(x\) is the prediction.
\(y\) is the target to be regressed to.
- Reference:
[1] https://arxiv.org/pdf/1701.03077.pdf
[2] https://files.is.tue.mpg.de/black/papers/cviu.63.1.1996.pdf
- Parameters:
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied (default), 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: "none"
- Shape:
img1: the predicted tensor with shape \((*)\).
img2: the target tensor with the same shape as img1.
Example
>>> criterion = CauchyLoss(reduction="mean")
>>> img1 = torch.randn(2, 3, 32, 2107, requires_grad=True)
>>> img2 = torch.randn(2, 3, 32, 2107)
>>> output = criterion(img1, img2)
>>> output.backward()
- class kornia.losses.GemanMcclureLoss(reduction='none')¶
Criterion that computes the Geman-McClure loss [2].
According to [1], we compute the Geman-McClure loss as follows:
\[\text{loss}(x, y) = \frac{2 (x - y)^{2}}{(x - y)^{2} + 4}\]
- Where:
\(x\) is the prediction.
\(y\) is the target to be regressed to.
- Reference:
[1] https://arxiv.org/pdf/1701.03077.pdf
[2] Bayesian image analysis: An application to single photon emission tomography, Geman and McClure, 1985
- Parameters:
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied (default), 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: "none"
- Shape:
img1: the predicted tensor with shape \((*)\).
img2: the target tensor with the same shape as img1.
Example
>>> criterion = GemanMcclureLoss(reduction="mean")
>>> img1 = torch.randn(2, 3, 32, 2107, requires_grad=True)
>>> img2 = torch.randn(2, 3, 32, 2107)
>>> output = criterion(img1, img2)
>>> output.backward()
Semantic Segmentation¶
- kornia.losses.binary_focal_loss_with_logits(pred, target, alpha=0.25, gamma=2.0, reduction='none', pos_weight=None, weight=None, ignore_index=-100)¶
Criterion that computes Binary Focal loss.
According to [LGG+18], the Focal loss is computed as follows:
\[\text{FL}(p_t) = -\alpha_t (1 - p_t)^{\gamma} \, \log(p_t)\]
- Where:
\(p_t\) is the model’s estimated probability for each class.
- Parameters:
pred (Tensor) – logits tensor with shape \((N, C, *)\) where C = number of classes.
target (Tensor) – labels tensor with the same shape as pred \((N, C, *)\), where each value is between 0 and 1.
alpha (Optional[float], optional) – Weighting factor \(\alpha \in [0, 1]\). Default: 0.25
gamma (float, optional) – Focusing parameter \(\gamma >= 0\). Default: 2.0
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: "none"
pos_weight (Optional[Tensor], optional) – a weight of positive examples with shape \((num\_of\_classes,)\). It is possible to trade off recall and precision by adding weights to positive examples. Default: None
weight (Optional[Tensor], optional) – weights for classes with shape \((num\_of\_classes,)\). Default: None
ignore_index (Optional[int], optional) – labels with this value are ignored in the loss computation. Default: -100
- Return type: Tensor
- Returns:
the computed loss.
Examples
>>> C = 3  # num_classes
>>> pred = torch.randn(1, C, 5, requires_grad=True)
>>> target = torch.randint(2, (1, C, 5))
>>> kwargs = {"alpha": 0.25, "gamma": 2.0, "reduction": 'mean'}
>>> output = binary_focal_loss_with_logits(pred, target, **kwargs)
>>> output.backward()
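When positive examples are rare, a per-class pos_weight (shape \((num\_of\_classes,)\), as documented above) can be used to trade off recall and precision; a minimal sketch reusing the tensors from the example:
>>> pos_weight = torch.full((C,), 2.0)  # up-weight positive examples of every class
>>> output = binary_focal_loss_with_logits(pred, target, pos_weight=pos_weight, reduction='mean')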
- kornia.losses.focal_loss(pred, target, alpha, gamma=2.0, reduction='none', weight=None, ignore_index=-100)¶
Criterion that computes Focal loss.
According to [LGG+18], the Focal loss is computed as follows:
\[\text{FL}(p_t) = -\alpha_t (1 - p_t)^{\gamma} \, \log(p_t)\]
- Where:
\(p_t\) is the model’s estimated probability for each class.
- Parameters:
pred (Tensor) – logits tensor with shape \((N, C, *)\) where C = number of classes.
target (Tensor) – labels tensor with shape \((N, *)\) where each value is an integer representing the correct classification, \(target[i] \in [0, C)\).
alpha (Optional[float]) – Weighting factor \(\alpha \in [0, 1]\).
gamma (float, optional) – Focusing parameter \(\gamma >= 0\). Default: 2.0
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: "none"
weight (Optional[Tensor], optional) – weights for classes with shape \((num\_of\_classes,)\). Default: None
ignore_index (Optional[int], optional) – labels with this value are ignored in the loss computation. Default: -100
- Return type: Tensor
- Returns:
the computed loss.
Example
>>> C = 5  # num_classes
>>> pred = torch.randn(1, C, 3, 5, requires_grad=True)
>>> target = torch.randint(C, (1, 3, 5))
>>> kwargs = {"alpha": 0.5, "gamma": 2.0, "reduction": 'mean'}
>>> output = focal_loss(pred, target, **kwargs)
>>> output.backward()
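Positions carrying a sentinel label can be excluded from the loss via ignore_index; a minimal sketch reusing the tensors from the example (the sentinel value -100 matches the documented default):
>>> target_ignored = target.clone()
>>> target_ignored[0, 0, 0] = -100  # this position is excluded from the loss
>>> output = focal_loss(pred, target_ignored, alpha=0.5, gamma=2.0, reduction='mean', ignore_index=-100)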
- kornia.losses.dice_loss(pred, target, average='micro', eps=1e-8, weight=None, ignore_index=-100)¶
Criterion that computes Sørensen-Dice Coefficient loss.
According to [1], we compute the Sørensen-Dice Coefficient as follows:
\[\text{Dice}(x, class) = \frac{2 |X \cap Y|}{|X| + |Y|}\]
- Where:
\(X\) expects to be the scores of each class.
\(Y\) expects to be the one-hot tensor with the class labels.
The loss is finally computed as:
\[\text{loss}(x, class) = 1 - \text{Dice}(x, class)\]
- Parameters:
pred (Tensor) – logits tensor with shape \((N, C, H, W)\) where C = number of classes.
target (Tensor) – labels tensor with shape \((N, H, W)\) where each value is \(0 ≤ targets[i] ≤ C-1\).
average (str, optional) – Reduction applied in multi-class scenario: 'micro' [default]: calculate the loss across all classes; 'macro': calculate the loss for each class separately and average the metrics across classes. Default: "micro"
eps (float, optional) – Scalar to enforce numerical stability. Default: 1e-8
weight (Optional[Tensor], optional) – weights for classes with shape \((num\_of\_classes,)\). Default: None
ignore_index (Optional[int], optional) – labels with this value are ignored in the loss computation. Default: -100
- Return type: Tensor
- Returns:
One-element tensor of the computed loss.
Example
>>> N = 5  # num_classes
>>> pred = torch.randn(1, N, 3, 5, requires_grad=True)
>>> target = torch.empty(1, 3, 5, dtype=torch.long).random_(N)
>>> output = dice_loss(pred, target)
>>> output.backward()
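The 'macro' average with per-class weights (shape \((num\_of\_classes,)\), as documented above) can be sketched as follows, reusing the tensors from the example:
>>> class_weights = torch.ones(N)  # uniform per-class weights, shape (N,)
>>> output = dice_loss(pred, target, average="macro", weight=class_weights)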
- kornia.losses.tversky_loss(pred, target, alpha, beta, eps=1e-8, ignore_index=-100)¶
Criterion that computes Tversky Coefficient loss.
According to [SEG17], we compute the Tversky Coefficient as follows:
\[\text{S}(P, G, \alpha; \beta) = \frac{|PG|}{|PG| + \alpha |P \setminus G| + \beta |G \setminus P|}\]
- Where:
\(P\) and \(G\) are the predicted and ground truth binary labels.
\(\alpha\) and \(\beta\) control the magnitude of the penalties for FPs and FNs, respectively.
Note
\(\alpha = \beta = 0.5\) => dice coeff
\(\alpha = \beta = 1\) => tanimoto coeff
\(\alpha + \beta = 1\) => F beta coeff
- Parameters:
pred (Tensor) – logits tensor with shape \((N, C, H, W)\) where C = number of classes.
target (Tensor) – labels tensor with shape \((N, H, W)\) where each value is \(0 ≤ targets[i] ≤ C-1\).
alpha (float) – the first coefficient in the denominator.
beta (float) – the second coefficient in the denominator.
eps (float, optional) – scalar for numerical stability. Default: 1e-8
ignore_index (Optional[int], optional) – labels with this value are ignored in the loss computation. Default: -100
- Return type: Tensor
- Returns:
the computed loss.
Example
>>> N = 5  # num_classes
>>> pred = torch.randn(1, N, 3, 5, requires_grad=True)
>>> target = torch.empty(1, 3, 5, dtype=torch.long).random_(N)
>>> output = tversky_loss(pred, target, alpha=0.5, beta=0.5)
>>> output.backward()
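As the note above points out, \(\alpha = \beta = 0.5\) reduces the Tversky coefficient to the Dice coefficient, so the result should closely track dice_loss on the same inputs (up to eps and averaging details, which may differ between the two implementations); a quick sanity sketch:
>>> t_loss = tversky_loss(pred, target, alpha=0.5, beta=0.5)
>>> d_loss = dice_loss(pred, target)  # expected to be close to t_loss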
- kornia.losses.lovasz_hinge_loss(pred, target)¶
Criterion that computes a surrogate binary intersection-over-union (IoU) loss.
According to [2], we compute the IoU as follows:
\[\text{IoU}(x, class) = \frac{|X \cap Y|}{|X \cup Y|}\]
[1] approximates this formula with a surrogate, which is fully differentiable.
- Where:
\(X\) expects to be the scores of each class.
\(Y\) expects to be the binary tensor with the class labels.
The loss is finally computed as:
\[\text{loss}(x, class) = 1 - \text{IoU}(x, class)\]
Note
This loss function only supports binary labels. For multi-class labels please use the Lovasz-Softmax loss.
- Parameters:
pred (Tensor) – logits tensor with shape \((N, 1, H, W)\).
target (Tensor) – labels tensor with shape \((N, H, W)\) with binary values.
- Return type: Tensor
- Returns:
a scalar with the computed loss.
Example
>>> N = 1  # num_classes
>>> pred = torch.randn(1, N, 3, 5, requires_grad=True)
>>> target = torch.empty(1, 3, 5, dtype=torch.long).random_(N)
>>> output = lovasz_hinge_loss(pred, target)
>>> output.backward()
- kornia.losses.lovasz_softmax_loss(pred, target, weight=None)¶
Criterion that computes a surrogate multi-class intersection-over-union (IoU) loss.
According to [1], we compute the IoU as follows:
\[\text{IoU}(x, class) = \frac{|X \cap Y|}{|X \cup Y|}\]
[1] approximates this formula with a surrogate, which is fully differentiable.
- Where:
\(X\) expects to be the scores of each class.
\(Y\) expects to be the long tensor with the class labels.
The loss is finally computed as:
\[\text{loss}(x, class) = 1 - \text{IoU}(x, class)\]
- Reference:
Note
This loss function only supports multi-class (C > 1) labels. For binary labels please use the Lovasz-Hinge loss.
- Parameters:
pred (Tensor) – logits tensor with shape \((N, C, H, W)\) where C = number of classes (C > 1).
target (Tensor) – labels tensor with shape \((N, H, W)\) where each value is \(0 ≤ targets[i] ≤ C-1\).
weight (Optional[Tensor], optional) – weights for classes with shape \((num\_of\_classes,)\). Default: None
- Return type: Tensor
- Returns:
a scalar with the computed loss.
Example
>>> N = 5  # num_classes
>>> pred = torch.randn(1, N, 3, 5, requires_grad=True)
>>> target = torch.empty(1, 3, 5, dtype=torch.long).random_(N)
>>> output = lovasz_softmax_loss(pred, target)
>>> output.backward()
- class kornia.losses.BinaryFocalLossWithLogits(alpha, gamma=2.0, reduction='none', pos_weight=None, weight=None, ignore_index=-100)¶
Criterion that computes Binary Focal loss.
According to [LGG+18], the Focal loss is computed as follows:
\[\text{FL}(p_t) = -\alpha_t (1 - p_t)^{\gamma} \, \log(p_t)\]
- Where:
\(p_t\) is the model’s estimated probability for each class.
- Parameters:
alpha (Optional[float]) – Weighting factor \(\alpha \in [0, 1]\).
gamma (float, optional) – Focusing parameter \(\gamma >= 0\). Default: 2.0
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: "none"
pos_weight (Optional[Tensor], optional) – a weight of positive examples with shape \((num\_of\_classes,)\). It is possible to trade off recall and precision by adding weights to positive examples. Default: None
weight (Optional[Tensor], optional) – weights for classes with shape \((num\_of\_classes,)\). Default: None
ignore_index (Optional[int], optional) – labels with this value are ignored in the loss computation. Default: -100
- Shape:
Pred: \((N, C, *)\) where C = number of classes.
Target: the same shape as Pred \((N, C, *)\) where each value is between 0 and 1.
Examples
>>> C = 3  # num_classes
>>> pred = torch.randn(1, C, 5, requires_grad=True)
>>> target = torch.randint(2, (1, C, 5))
>>> kwargs = {"alpha": 0.25, "gamma": 2.0, "reduction": 'mean'}
>>> criterion = BinaryFocalLossWithLogits(**kwargs)
>>> output = criterion(pred, target)
>>> output.backward()
- class kornia.losses.DiceLoss(average='micro', eps=1e-8, weight=None, ignore_index=-100)¶
Criterion that computes Sørensen-Dice Coefficient loss.
According to [1], we compute the Sørensen-Dice Coefficient as follows:
\[\text{Dice}(x, class) = \frac{2 |X \cap Y|}{|X| + |Y|}\]
- Where:
\(X\) expects to be the scores of each class.
\(Y\) expects to be the one-hot tensor with the class labels.
The loss is finally computed as:
\[\text{loss}(x, class) = 1 - \text{Dice}(x, class)\]
- Parameters:
average (str, optional) – Reduction applied in multi-class scenario: 'micro' [default]: calculate the loss across all classes; 'macro': calculate the loss for each class separately and average the metrics across classes. Default: "micro"
eps (float, optional) – Scalar to enforce numerical stability. Default: 1e-8
weight (Optional[Tensor], optional) – weights for classes with shape \((num\_of\_classes,)\). Default: None
ignore_index (Optional[int], optional) – labels with this value are ignored in the loss computation. Default: -100
- Shape:
Pred: \((N, C, H, W)\) where C = number of classes.
Target: \((N, H, W)\) where each value is \(0 ≤ targets[i] ≤ C-1\).
Example
>>> N = 5  # num_classes
>>> criterion = DiceLoss()
>>> pred = torch.randn(1, N, 3, 5, requires_grad=True)
>>> target = torch.empty(1, 3, 5, dtype=torch.long).random_(N)
>>> output = criterion(pred, target)
>>> output.backward()
- class kornia.losses.TverskyLoss(alpha, beta, eps=1e-8, ignore_index=-100)¶
Criterion that computes Tversky Coefficient loss.
According to [SEG17], we compute the Tversky Coefficient as follows:
\[\text{S}(P, G, \alpha; \beta) = \frac{|PG|}{|PG| + \alpha |P \setminus G| + \beta |G \setminus P|}\]
- Where:
\(P\) and \(G\) are the predicted and ground truth binary labels.
\(\alpha\) and \(\beta\) control the magnitude of the penalties for FPs and FNs, respectively.
Note
\(\alpha = \beta = 0.5\) => dice coeff
\(\alpha = \beta = 1\) => tanimoto coeff
\(\alpha + \beta = 1\) => F beta coeff
- Parameters:
alpha (float) – the first coefficient in the denominator.
beta (float) – the second coefficient in the denominator.
eps (float, optional) – scalar for numerical stability. Default: 1e-8
ignore_index (Optional[int], optional) – labels with this value are ignored in the loss computation. Default: -100
- Shape:
Pred: \((N, C, H, W)\) where C = number of classes.
Target: \((N, H, W)\) where each value is \(0 ≤ targets[i] ≤ C-1\).
Examples
>>> N = 5  # num_classes
>>> criterion = TverskyLoss(alpha=0.5, beta=0.5)
>>> pred = torch.randn(1, N, 3, 5, requires_grad=True)
>>> target = torch.empty(1, 3, 5, dtype=torch.long).random_(N)
>>> output = criterion(pred, target)
>>> output.backward()
- class kornia.losses.FocalLoss(alpha, gamma=2.0, reduction='none', weight=None, ignore_index=-100)¶
Criterion that computes Focal loss.
According to [LGG+18], the Focal loss is computed as follows:
\[\text{FL}(p_t) = -\alpha_t (1 - p_t)^{\gamma} \, \log(p_t)\]
- Where:
\(p_t\) is the model’s estimated probability for each class.
- Parameters:
alpha (Optional[float]) – Weighting factor \(\alpha \in [0, 1]\).
gamma (float, optional) – Focusing parameter \(\gamma >= 0\). Default: 2.0
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: "none"
weight (Optional[Tensor], optional) – weights for classes with shape \((num\_of\_classes,)\). Default: None
ignore_index (Optional[int], optional) – labels with this value are ignored in the loss computation. Default: -100
- Shape:
Pred: \((N, C, *)\) where C = number of classes.
Target: \((N, *)\) where each value is an integer representing correct classification \(target[i] \in [0, C)\).
Example
>>> C = 5  # num_classes
>>> pred = torch.randn(1, C, 3, 5, requires_grad=True)
>>> target = torch.randint(C, (1, 3, 5))
>>> kwargs = {"alpha": 0.5, "gamma": 2.0, "reduction": 'mean'}
>>> criterion = FocalLoss(**kwargs)
>>> output = criterion(pred, target)
>>> output.backward()
- class kornia.losses.LovaszHingeLoss¶
Criterion that computes a surrogate binary intersection-over-union (IoU) loss.
According to [2], we compute the IoU as follows:
\[\text{IoU}(x, class) = \frac{|X \cap Y|}{|X \cup Y|}\]
[1] approximates this formula with a surrogate, which is fully differentiable.
- Where:
\(X\) expects to be the scores of each class.
\(Y\) expects to be the binary tensor with the class labels.
The loss is finally computed as:
\[\text{loss}(x, class) = 1 - \text{IoU}(x, class)\]
Note
This loss function only supports binary labels. For multi-class labels please use the Lovasz-Softmax loss.
- Parameters:
pred – logits tensor with shape \((N, 1, H, W)\).
target – labels tensor with shape \((N, H, W)\) with binary values.
- Returns:
a scalar with the computed loss.
Example
>>> N = 1  # num_classes
>>> criterion = LovaszHingeLoss()
>>> pred = torch.randn(1, N, 3, 5, requires_grad=True)
>>> target = torch.empty(1, 3, 5, dtype=torch.long).random_(N)
>>> output = criterion(pred, target)
>>> output.backward()
- class kornia.losses.LovaszSoftmaxLoss(weight=None)¶
Criterion that computes a surrogate multi-class intersection-over-union (IoU) loss.
According to [1], we compute the IoU as follows:
\[\text{IoU}(x, class) = \frac{|X \cap Y|}{|X \cup Y|}\]
[1] approximates this formula with a surrogate, which is fully differentiable.
- Where:
\(X\) expects to be the scores of each class.
\(Y\) expects to be the long tensor with the class labels.
The loss is finally computed as:
\[\text{loss}(x, class) = 1 - \text{IoU}(x, class)\]
- Reference:
Note
This loss function only supports multi-class (C > 1) labels. For binary labels please use the Lovasz-Hinge loss.
- Parameters:
weight (Optional[Tensor], optional) – weights for classes with shape \((num\_of\_classes,)\). Default: None
- Returns:
a scalar with the computed loss.
Example
>>> N = 5  # num_classes
>>> criterion = LovaszSoftmaxLoss()
>>> pred = torch.randn(1, N, 3, 5, requires_grad=True)
>>> target = torch.empty(1, 3, 5, dtype=torch.long).random_(N)
>>> output = criterion(pred, target)
>>> output.backward()
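A per-class weight vector can optionally be supplied; a minimal sketch, assuming weight is a 1-D tensor with one entry per class:
>>> weighted = LovaszSoftmaxLoss(weight=torch.ones(N))  # uniform per-class weights
>>> output = weighted(pred, target)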
Distributions¶
- kornia.losses.js_div_loss_2d(pred, target, reduction='mean')¶
Calculate the Jensen-Shannon divergence loss between heatmaps.
- Parameters:
pred (Tensor) – the input tensor with shape \((B, N, H, W)\).
target (Tensor) – the target tensor with shape \((B, N, H, W)\).
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: "mean"
- Return type: Tensor
Examples
>>> pred = torch.full((1, 1, 2, 4), 0.125)
>>> loss = js_div_loss_2d(pred, pred)
>>> loss.item()
0.0
- kornia.losses.kl_div_loss_2d(pred, target, reduction='mean')¶
Calculate the Kullback-Leibler divergence loss between heatmaps.
- Parameters:
pred (Tensor) – the input tensor with shape \((B, N, H, W)\).
target (Tensor) – the target tensor with shape \((B, N, H, W)\).
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: "mean"
- Return type: Tensor
Examples
>>> pred = torch.full((1, 1, 2, 4), 0.125)
>>> loss = kl_div_loss_2d(pred, pred)
>>> loss.item()
0.0
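Both divergences operate on heatmaps of shape \((B, N, H, W)\); a small sketch comparing two different heatmaps, where each map is normalized over its spatial locations (the softmax normalization here is an illustrative assumption, not a requirement stated above):
>>> logits1 = torch.randn(1, 2, 4, 4)
>>> logits2 = torch.randn(1, 2, 4, 4)
>>> p = logits1.flatten(2).softmax(-1).view_as(logits1)  # spatially normalized heatmaps
>>> q = logits2.flatten(2).softmax(-1).view_as(logits2)
>>> loss_js = js_div_loss_2d(p, q)
>>> loss_kl = kl_div_loss_2d(p, q)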
Morphology¶
- class kornia.losses.HausdorffERLoss(alpha=2.0, k=10, reduction='mean')¶
Binary Hausdorff loss based on morphological erosion.
Hausdorff Distance (HD) loss measures the maximum distance from a predicted segmentation boundary to the nearest ground-truth edge pixel. For two segmentation point sets X and Y, the one-sided HD from X to Y is defined as:
\[hd(X,Y) = \max_{x \in X} \min_{y \in Y} ||x - y||_2\]
Furthermore, the bidirectional HD is:
\[HD(X,Y) = \max(hd(X, Y), hd(Y, X))\]
This is a Hausdorff Distance (HD) loss based on morphological erosion, which provides a differentiable approximation of the Hausdorff distance, as stated in [KS19]. The code is refactored on top of an existing implementation.
- Parameters:
alpha (float, optional) – controls the erosion rate in each iteration. Default: 2.0
k (int, optional) – the number of iterations of erosion. Default: 10
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the weighted mean of the output is taken, 'sum': the output will be summed. Default: "mean"
Examples
>>> hdloss = HausdorffERLoss()
>>> input = torch.randn(5, 3, 20, 20)
>>> target = (torch.rand(5, 1, 20, 20) * 2).long()
>>> res = hdloss(input, target)
- class kornia.losses.HausdorffERLoss3D(alpha=2.0, k=10, reduction='mean')¶
Binary 3D Hausdorff loss based on morphological erosion.
Hausdorff Distance (HD) loss measures the maximum distance from a predicted segmentation boundary to the nearest ground-truth edge pixel. For two segmentation point sets X and Y, the one-sided HD from X to Y is defined as:
\[hd(X,Y) = \max_{x \in X} \min_{y \in Y} ||x - y||_2\]
Furthermore, the bidirectional HD is:
\[HD(X,Y) = \max(hd(X, Y), hd(Y, X))\]
This is a 3D Hausdorff Distance (HD) loss based on morphological erosion, which provides a differentiable approximation of the Hausdorff distance, as stated in [KS19]. The code is refactored on top of an existing implementation.
- Parameters:
alpha (float, optional) – controls the erosion rate in each iteration. Default: 2.0
k (int, optional) – the number of iterations of erosion. Default: 10
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the weighted mean of the output is taken, 'sum': the output will be summed. Default: "mean"
Examples
>>> hdloss = HausdorffERLoss3D()
>>> input = torch.randn(5, 3, 20, 20, 20)
>>> target = (torch.rand(5, 1, 20, 20, 20) * 2).long()
>>> res = hdloss(input, target)