# kornia.losses#

## Reconstruction#

kornia.losses.ssim_loss(img1, img2, window_size, max_val=1.0, eps=1e-12, reduction='mean', padding='same')[source]#

Function that computes a loss based on the SSIM measurement.

The loss, or structural dissimilarity (DSSIM), is described as:

$\text{loss}(x, y) = \frac{1 - \text{SSIM}(x, y)}{2}$

See ssim() for details about SSIM.

Parameters
• img1 (Tensor) – the first input image with shape $$(B, C, H, W)$$.

• img2 (Tensor) – the second input image with shape $$(B, C, H, W)$$.

• window_size (int) – the size of the gaussian kernel to smooth the images.

• max_val (float, optional) – the dynamic range of the images. Default: 1.0

• eps (float, optional) – Small value for numerical stability when dividing. Default: 1e-12

• reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: 'mean'

• padding (str, optional) – 'same' | 'valid'. 'valid' uses only the “valid” convolution area to compute SSIM, matching the MATLAB implementation of the original SSIM paper. Default: 'same'

Return type

Tensor

Returns

The loss based on the SSIM index.

Examples

>>> input1 = torch.rand(1, 4, 5, 5)
>>> input2 = torch.rand(1, 4, 5, 5)
>>> loss = ssim_loss(input1, input2, 5)
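
A minimal sketch of two edge cases, assuming the default padding='same' preserves the spatial shape under reduction='none' (illustrative only):

>>> import torch
>>> from kornia.losses import ssim_loss
>>> img1 = torch.rand(1, 4, 5, 5)
>>> # identical images give a loss of (almost) zero
>>> ssim_loss(img1, img1.clone(), window_size=5).item() < 1e-6
True
>>> # reduction='none' keeps the per-pixel dissimilarity map
>>> ssim_loss(img1, torch.rand(1, 4, 5, 5), 5, reduction='none').shape
torch.Size([1, 4, 5, 5])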

kornia.losses.psnr_loss(input, target, max_val)[source]#

Function that computes the PSNR loss.

The loss is computed as follows:

$\text{loss} = -\text{psnr}(x, y)$

See psnr() for details about PSNR.

Parameters
• input (Tensor) – the input tensor with arbitrary shape $$(*)$$.

• target (Tensor) – the target tensor with the same shape as the input.

• max_val (float) – the maximum value in the input tensor.
Return type

Tensor

Returns

the computed loss as a scalar.

Examples

>>> ones = torch.ones(1)
>>> psnr_loss(ones, 1.2 * ones, 2.) # -(10 * log(4 / (1.2 - 1)**2) / log(10))
tensor(-20.0000)
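
The doctest value can be re-derived with plain PyTorch from the PSNR definition; a minimal sketch:

>>> import torch
>>> x = torch.ones(1)
>>> y = 1.2 * x
>>> mse = torch.mean((x - y) ** 2)
>>> psnr = 10.0 * torch.log10(2.0 ** 2 / mse)  # max_val = 2.
>>> torch.allclose(-psnr, torch.tensor(-20.), atol=1e-4)
True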

kornia.losses.total_variation(img)[source]#

Function that computes Total Variation according to [1].

Parameters

img (Tensor) – the input image with shape $$(N, C, H, W)$$ or $$(C, H, W)$$.

Return type

Tensor

Returns

a scalar with the computed loss.

Examples

>>> total_variation(torch.ones(3, 4, 4))
tensor(0.)
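
For intuition, anisotropic total variation can be written out by hand as the summed absolute differences between neighboring pixels; an illustrative sketch (kornia's exact reduction may differ):

>>> import torch
>>> def tv_manual(img):
...     # sum of absolute differences along height and width
...     dh = (img[..., 1:, :] - img[..., :-1, :]).abs().sum()
...     dw = (img[..., :, 1:] - img[..., :, :-1]).abs().sum()
...     return dh + dw
>>> tv_manual(torch.ones(3, 4, 4))
tensor(0.)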


kornia.losses.inverse_depth_smoothness_loss(idepth, image)[source]#

Criterion that computes image-aware inverse depth smoothness loss.

$\text{loss} = \left | \partial_x d_{ij} \right | e^{-\left \| \partial_x I_{ij} \right \|} + \left | \partial_y d_{ij} \right | e^{-\left \| \partial_y I_{ij} \right \|}$
Parameters
• idepth (Tensor) – tensor with the inverse depth with shape $$(N, 1, H, W)$$.

• image (Tensor) – tensor with the input image with shape $$(N, 3, H, W)$$.
Return type

Tensor

Returns

a scalar with the computed loss.

Examples

>>> idepth = torch.rand(1, 1, 4, 5)
>>> image = torch.rand(1, 3, 4, 5)
>>> loss = inverse_depth_smoothness_loss(idepth, image)
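
For intuition, the horizontal term of the formula can be written out directly, averaging the image gradient over channels; an illustrative sketch (the library's exact normalization may differ):

>>> import torch
>>> idepth = torch.rand(1, 1, 4, 5)
>>> image = torch.rand(1, 3, 4, 5)
>>> # |dx d| * exp(-||dx I||), with the image gradient averaged over channels
>>> dd_dx = (idepth[..., :, 1:] - idepth[..., :, :-1]).abs()
>>> di_dx = (image[..., :, 1:] - image[..., :, :-1]).abs().mean(dim=1, keepdim=True)
>>> term_x = (dd_dx * torch.exp(-di_dx)).mean()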

class kornia.losses.SSIMLoss(window_size, max_val=1.0, eps=1e-12, reduction='mean', padding='same')[source]#

Create a criterion that computes a loss based on the SSIM measurement.

The loss, or structural dissimilarity (DSSIM), is described as:

$\text{loss}(x, y) = \frac{1 - \text{SSIM}(x, y)}{2}$

See ssim_loss() for details about SSIM.

Parameters
• window_size (int) – the size of the gaussian kernel to smooth the images.

• max_val (float, optional) – the dynamic range of the images. Default: 1.0

• eps (float, optional) – Small value for numerical stability when dividing. Default: 1e-12

• reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: 'mean'

• padding (str, optional) – 'same' | 'valid'. 'valid' uses only the “valid” convolution area to compute SSIM, matching the MATLAB implementation of the original SSIM paper. Default: 'same'

Returns

The loss based on the SSIM index.

Examples

>>> input1 = torch.rand(1, 4, 5, 5)
>>> input2 = torch.rand(1, 4, 5, 5)
>>> criterion = SSIMLoss(5)
>>> loss = criterion(input1, input2)
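
Like the functional form, the criterion is differentiable and can drive optimization; a minimal sketch:

>>> import torch
>>> from kornia.losses import SSIMLoss
>>> pred = torch.rand(1, 4, 5, 5, requires_grad=True)
>>> target = torch.rand(1, 4, 5, 5)
>>> loss = SSIMLoss(5)(pred, target)
>>> loss.backward()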

class kornia.losses.MS_SSIMLoss(sigmas=[0.5, 1.0, 2.0, 4.0, 8.0], data_range=1.0, K=(0.01, 0.03), alpha=0.025, compensation=200.0, reduction='mean')[source]#

Create a criterion that computes the MS-SSIM + L1 loss.

According to [1], we compute the MS-SSIM + L1 loss as follows:

$\text{loss}(x, y) = \alpha \cdot \mathcal{L_{MSSIM}}(x,y)+(1 - \alpha) \cdot G_\alpha \cdot \mathcal{L_1}(x,y)$
Where:
• $$\alpha$$ is the weight parameter.

• $$x$$ and $$y$$ are the reconstructed and true reference images.

• $$\mathcal{L_{MSSIM}}$$ is the MS-SSIM loss.

• $$G_\alpha$$ are the Gaussian weights derived from the sigma values used for multi-scale SSIM.

• $$\mathcal{L_1}$$ is the L1 loss.

Parameters
• sigmas (List[float], optional) – gaussian sigma values. Default: [0.5, 1.0, 2.0, 4.0, 8.0]

• data_range (float, optional) – the range of the images. Default: 1.0

• K (Tuple[float, float], optional) – k values. Default: (0.01, 0.03)

• alpha (float, optional) – specifies the alpha value. Default: 0.025

• compensation (float, optional) – specifies the scaling coefficient. Default: 200.0

• reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: 'mean'

Returns

The computed loss.

Shape:
• Input1: $$(N, C, H, W)$$.

• Input2: $$(N, C, H, W)$$.

• Output: $$(N, H, W)$$ or scalar if reduction is set to 'mean' or 'sum'.

Examples

>>> input1 = torch.rand(1, 3, 5, 5)
>>> input2 = torch.rand(1, 3, 5, 5)
>>> criterion = kornia.losses.MS_SSIMLoss()
>>> loss = criterion(input1, input2)

class kornia.losses.TotalVariation[source]#

Compute the Total Variation according to [1].

Shape:
• Input: $$(N, C, H, W)$$ or $$(C, H, W)$$.

• Output: $$(N,)$$ or scalar.

Examples

>>> tv = TotalVariation()
>>> output = tv(torch.ones((2, 3, 4, 4), requires_grad=True))
>>> output.data
tensor([0., 0.])
>>> output.sum().backward()  # grad can be implicitly created only for scalar outputs
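
A common use is as a smoothness regularizer on an image being optimized; a minimal sketch with an assumed weight of 1e-4:

>>> import torch
>>> import torch.nn.functional as F
>>> from kornia.losses import TotalVariation
>>> img = torch.rand(1, 3, 8, 8, requires_grad=True)
>>> target = torch.rand(1, 3, 8, 8)
>>> tv = TotalVariation()
>>> # data term plus TV smoothness penalty
>>> loss = F.mse_loss(img, target) + 1e-4 * tv(img).sum()
>>> loss.backward()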

class kornia.losses.PSNRLoss(max_val)[source]#

Create a criterion that calculates the PSNR loss.

The loss is computed as follows:

$\text{loss} = -\text{psnr}(x, y)$

See psnr() for details about PSNR.

Parameters

max_val (float) – The maximum value in the input tensor.

Shape:
• Input: arbitrary dimensional tensor $$(*)$$.

• Target: arbitrary dimensional tensor $$(*)$$ same shape as input.

• Output: a scalar.

Examples

>>> ones = torch.ones(1)
>>> criterion = PSNRLoss(2.)
>>> criterion(ones, 1.2 * ones) # -(10 * log(4 / (1.2 - 1)**2) / log(10))
tensor(-20.0000)

class kornia.losses.InverseDepthSmoothnessLoss[source]#

Criterion that computes image-aware inverse depth smoothness loss.

$\text{loss} = \left | \partial_x d_{ij} \right | e^{-\left \| \partial_x I_{ij} \right \|} + \left | \partial_y d_{ij} \right | e^{-\left \| \partial_y I_{ij} \right \|}$
Shape:
• Inverse Depth: $$(N, 1, H, W)$$

• Image: $$(N, 3, H, W)$$

• Output: scalar

Examples

>>> idepth = torch.rand(1, 1, 4, 5)
>>> image = torch.rand(1, 3, 4, 5)
>>> smooth = InverseDepthSmoothnessLoss()
>>> loss = smooth(idepth, image)


## Semantic Segmentation#

kornia.losses.binary_focal_loss_with_logits(input, target, alpha=0.25, gamma=2.0, reduction='none', eps=None, pos_weight=None)[source]#

Function that computes Binary Focal loss.

$\text{FL}(p_t) = -\alpha_t (1 - p_t)^{\gamma} \, \text{log}(p_t)$
where:
• $$p_t$$ is the model’s estimated probability for each class.

Parameters
• input (Tensor) – input data tensor of arbitrary shape.

• target (Tensor) – the target tensor with shape matching input.

• alpha (float, optional) – Weighting factor for the rare class $$\alpha \in [0, 1]$$. Default: 0.25

• gamma (float, optional) – Focusing parameter $$\gamma >= 0$$. Default: 2.0

• reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: 'none'

• eps (Optional[float], optional) – Deprecated: scalar for numerical stability when dividing. This is no longer used. Default: None

• pos_weight (Optional[Tensor], optional) – a weight of positive examples. It’s possible to trade off recall and precision by adding weights to positive examples. Must be a vector with length equal to the number of classes. Default: None

Return type

Tensor

Returns

the computed loss.

Examples

>>> kwargs = {"alpha": 0.25, "gamma": 2.0, "reduction": 'mean'}
>>> logits = torch.tensor([[[6.325]],[[5.26]],[[87.49]]])
>>> labels = torch.tensor([[[1.]],[[1.]],[[0.]]])
>>> binary_focal_loss_with_logits(logits, labels, **kwargs)
tensor(21.8725)
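
For intuition, the positive-class term of the formula can be written out by hand; a sketch assuming alpha=0.25 and gamma=2 (the library also handles the negative-class term and the reduction):

>>> import torch
>>> logit = torch.tensor([6.325])
>>> p = torch.sigmoid(logit)
>>> # positive-class term: -alpha * (1 - p)^gamma * log(p)
>>> fl_pos = -0.25 * (1.0 - p) ** 2 * torch.log(p)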

kornia.losses.focal_loss(input, target, alpha, gamma=2.0, reduction='none', eps=None)[source]#

Criterion that computes Focal loss.

According to [LGG+18], the Focal loss is computed as follows:

$\text{FL}(p_t) = -\alpha_t (1 - p_t)^{\gamma} \, \text{log}(p_t)$
Where:
• $$p_t$$ is the model’s estimated probability for each class.

Parameters
• input (Tensor) – logits tensor with shape $$(N, C, *)$$ where C = number of classes.

• target (Tensor) – labels tensor with shape $$(N, *)$$ where each value is $$0 ≤ targets[i] ≤ C−1$$.

• alpha (float) – Weighting factor $$\alpha \in [0, 1]$$.

• gamma (float, optional) – Focusing parameter $$\gamma >= 0$$. Default: 2.0

• reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: 'none'

• eps (Optional[float], optional) – Deprecated: scalar to enforce numerical stability. This is no longer used. Default: None

Return type

Tensor

Returns

the computed loss.

Example

>>> N = 5  # num_classes
>>> input = torch.randn(1, N, 3, 5, requires_grad=True)
>>> target = torch.empty(1, 3, 5, dtype=torch.long).random_(N)
>>> output = focal_loss(input, target, alpha=0.5, gamma=2.0, reduction='mean')
>>> output.backward()
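
For intuition, the same quantity can be hand-rolled from log-softmax probabilities; a sketch assuming alpha=0.5, gamma=2.0, and mean reduction (kornia's exact alpha weighting may differ):

>>> import torch
>>> import torch.nn.functional as F
>>> N = 5
>>> input = torch.randn(2, N)
>>> target = torch.randint(N, (2,))
>>> # log p_t for the true class of each sample
>>> logpt = F.log_softmax(input, dim=1).gather(1, target[:, None]).squeeze(1)
>>> fl = -0.5 * (1.0 - logpt.exp()) ** 2 * logpt  # alpha=0.5, gamma=2.0
>>> loss = fl.mean()  # reduction='mean'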

kornia.losses.dice_loss(input, target, eps=1e-08)[source]#

Criterion that computes Sørensen-Dice Coefficient loss.

According to [1], we compute the Sørensen-Dice Coefficient as follows:

$\text{Dice}(x, class) = \frac{2 |X \cap Y|}{|X| + |Y|}$
Where:
• $$X$$ is expected to be the scores of each class.

• $$Y$$ is expected to be the one-hot tensor with the class labels.

The loss is then computed as:

$\text{loss}(x, class) = 1 - \text{Dice}(x, class)$
Parameters
• input (Tensor) – logits tensor with shape $$(N, C, H, W)$$ where C = number of classes.

• target (Tensor) – labels tensor with shape $$(N, H, W)$$ where each value is $$0 ≤ targets[i] ≤ C−1$$.

• eps (float, optional) – Scalar to enforce numerical stability. Default: 1e-08

Return type

Tensor

Returns

the computed loss.

Example

>>> N = 5  # num_classes
>>> input = torch.randn(1, N, 3, 5, requires_grad=True)
>>> target = torch.empty(1, 3, 5, dtype=torch.long).random_(N)
>>> output = dice_loss(input, target)
>>> output.backward()
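
The formula above can also be written out directly with softmax scores and one-hot labels; an illustrative sketch (kornia's exact averaging over classes and batch may differ):

>>> import torch
>>> import torch.nn.functional as F
>>> N = 5
>>> input = torch.randn(1, N, 3, 5)
>>> target = torch.randint(N, (1, 3, 5))
>>> probs = F.softmax(input, dim=1)
>>> one_hot = F.one_hot(target, N).permute(0, 3, 1, 2).float()
>>> # per-class intersection and cardinality over the spatial dims
>>> inter = (probs * one_hot).sum(dim=(2, 3))
>>> dice = 2.0 * inter / (probs.sum(dim=(2, 3)) + one_hot.sum(dim=(2, 3)) + 1e-8)
>>> loss = 1.0 - dice.mean()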

kornia.losses.tversky_loss(input, target, alpha, beta, eps=1e-08)[source]#

Criterion that computes Tversky Coefficient loss.

According to [SEG17], we compute the Tversky Coefficient as follows:

$\text{S}(P, G, \alpha; \beta) = \frac{|PG|}{|PG| + \alpha |P \setminus G| + \beta |G \setminus P|}$
Where:
• $$P$$ and $$G$$ are the predicted and ground truth binary labels.

• $$\alpha$$ and $$\beta$$ control the magnitude of the penalties for FPs and FNs, respectively.

Note

• $$\alpha = \beta = 0.5$$ => Dice coefficient

• $$\alpha = \beta = 1$$ => Tanimoto coefficient

• $$\alpha + \beta = 1$$ => F-beta coefficient

Parameters
• input (Tensor) – logits tensor with shape $$(N, C, H, W)$$ where C = number of classes.

• target (Tensor) – labels tensor with shape $$(N, H, W)$$ where each value is $$0 ≤ targets[i] ≤ C−1$$.

• alpha (float) – weight of the false positive penalty (see the formula above).

• beta (float) – weight of the false negative penalty (see the formula above).

• eps (float, optional) – Scalar to enforce numerical stability. Default: 1e-08
Return type

Tensor

Returns

the computed loss.

Example

>>> N = 5  # num_classes
>>> input = torch.randn(1, N, 3, 5, requires_grad=True)
>>> target = torch.empty(1, 3, 5, dtype=torch.long).random_(N)
>>> output = tversky_loss(input, target, alpha=0.5, beta=0.5)
>>> output.backward()
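
As the note above says, Tversky with $$\alpha = \beta = 0.5$$ reduces to the Dice coefficient, since the denominator becomes $$(|P| + |G|) / 2$$; a quick sketch to check this numerically:

>>> import torch
>>> from kornia.losses import dice_loss, tversky_loss
>>> input = torch.randn(1, 5, 3, 5)
>>> target = torch.randint(5, (1, 3, 5))
>>> t = tversky_loss(input, target, alpha=0.5, beta=0.5)
>>> d = dice_loss(input, target)  # should agree with `t` up to numerical tolerance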

kornia.losses.lovasz_hinge_loss(pred, target)[source]#

Criterion that computes a surrogate binary intersection-over-union (IoU) loss.

According to [2], we compute the IoU as follows:

$\text{IoU}(x, class) = \frac{|X \cap Y|}{|X \cup Y|}$

[1] approximates this formula with a surrogate that is fully differentiable.

Where:
• $$X$$ is expected to be the scores of each class.

• $$Y$$ is expected to be the binary tensor with the class labels.

The loss is then computed as:

$\text{loss}(x, class) = 1 - \text{IoU}(x, class)$
Note

This loss function only supports binary labels. For multi-class labels please use the Lovasz-Softmax loss.

Parameters
• pred (Tensor) – logits tensor with shape $$(N, 1, H, W)$$.

• target (Tensor) – labels tensor with shape $$(N, H, W)$$ with binary values.

Return type

Tensor

Returns

a scalar with the computed loss.

Example

>>> N = 1  # num_classes
>>> pred = torch.randn(1, N, 3, 5, requires_grad=True)
>>> target = torch.empty(1, 3, 5, dtype=torch.long).random_(N)
>>> output = lovasz_hinge_loss(pred, target)
>>> output.backward()

kornia.losses.lovasz_softmax_loss(pred, target)[source]#

Criterion that computes a surrogate multi-class intersection-over-union (IoU) loss.

According to [1], we compute the IoU as follows:

$\text{IoU}(x, class) = \frac{|X \cap Y|}{|X \cup Y|}$

[1] approximates this formula with a surrogate that is fully differentiable.

Where:
• $$X$$ is expected to be the scores of each class.

• $$Y$$ is expected to be the binary tensor with the class labels.

The loss is then computed as:

$\text{loss}(x, class) = 1 - \text{IoU}(x, class)$
Note

This loss function only supports multi-class (C > 1) labels. For binary labels please use the Lovasz-Hinge loss.

Parameters
• pred (Tensor) – logits tensor with shape $$(N, C, H, W)$$ where C = number of classes > 1.

• target (Tensor) – labels tensor with shape $$(N, H, W)$$ where each value is $$0 ≤ targets[i] ≤ C−1$$.

Return type

Tensor

Returns

a scalar with the computed loss.

Example

>>> N = 5  # num_classes
>>> pred = torch.randn(1, N, 3, 5, requires_grad=True)
>>> target = torch.empty(1, 3, 5, dtype=torch.long).random_(N)
>>> output = lovasz_softmax_loss(pred, target)
>>> output.backward()

class kornia.losses.BinaryFocalLossWithLogits(alpha, gamma=2.0, reduction='none', pos_weight=None)[source]#

Criterion that computes Binary Focal loss.

According to [LGG+18], the Focal loss is computed as follows:

$\text{FL}(p_t) = -\alpha_t (1 - p_t)^{\gamma} \, \text{log}(p_t)$
where:
• $$p_t$$ is the model’s estimated probability for each class.

Parameters
• alpha (float) – Weighting factor for the rare class $$\alpha \in [0, 1]$$.

• gamma (float, optional) – Focusing parameter $$\gamma >= 0$$. Default: 2.0

• reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: 'none'

• pos_weight (Optional[Tensor], optional) – a weight of positive examples. It’s possible to trade off recall and precision by adding weights to positive examples. Must be a vector with length equal to the number of classes. Default: None

Shape:
• Input: $$(N, *)$$.

• Target: $$(N, *)$$.

Examples

>>> kwargs = {"alpha": 0.25, "gamma": 2.0, "reduction": 'mean'}
>>> loss = BinaryFocalLossWithLogits(**kwargs)
>>> input = torch.randn(1, 3, 5, requires_grad=True)
>>> target = torch.empty(1, 3, 5, dtype=torch.long).random_(2)
>>> output = loss(input, target)
>>> output.backward()

class kornia.losses.DiceLoss(eps=1e-08)[source]#

Criterion that computes Sørensen-Dice Coefficient loss.

According to [1], we compute the Sørensen-Dice Coefficient as follows:

$\text{Dice}(x, class) = \frac{2 |X \cap Y|}{|X| + |Y|}$
Where:
• $$X$$ is expected to be the scores of each class.

• $$Y$$ is expected to be the one-hot tensor with the class labels.

The loss is then computed as:

$\text{loss}(x, class) = 1 - \text{Dice}(x, class)$
Parameters

eps (float, optional) – Scalar to enforce numerical stability. Default: 1e-08

Shape:
• Input: $$(N, C, H, W)$$ where C = number of classes.

• Target: $$(N, H, W)$$ where each value is $$0 ≤ targets[i] ≤ C−1$$.

Example

>>> N = 5  # num_classes
>>> criterion = DiceLoss()
>>> input = torch.randn(1, N, 3, 5, requires_grad=True)
>>> target = torch.empty(1, 3, 5, dtype=torch.long).random_(N)
>>> output = criterion(input, target)
>>> output.backward()

class kornia.losses.TverskyLoss(alpha, beta, eps=1e-08)[source]#

Criterion that computes Tversky Coefficient loss.

According to [SEG17], we compute the Tversky Coefficient as follows:

$\text{S}(P, G, \alpha; \beta) = \frac{|PG|}{|PG| + \alpha |P \setminus G| + \beta |G \setminus P|}$
Where:
• $$P$$ and $$G$$ are the predicted and ground truth binary labels.

• $$\alpha$$ and $$\beta$$ control the magnitude of the penalties for FPs and FNs, respectively.

Note

• $$\alpha = \beta = 0.5$$ => Dice coefficient

• $$\alpha = \beta = 1$$ => Tanimoto coefficient

• $$\alpha + \beta = 1$$ => F-beta coefficient

Parameters
• alpha (float) – weight of the false positive penalty (see the formula above).

• beta (float) – weight of the false negative penalty (see the formula above).

• eps (float, optional) – Scalar to enforce numerical stability. Default: 1e-08
Shape:
• Input: $$(N, C, H, W)$$ where C = number of classes.

• Target: $$(N, H, W)$$ where each value is $$0 ≤ targets[i] ≤ C−1$$.

Examples

>>> N = 5  # num_classes
>>> criterion = TverskyLoss(alpha=0.5, beta=0.5)
>>> input = torch.randn(1, N, 3, 5, requires_grad=True)
>>> target = torch.empty(1, 3, 5, dtype=torch.long).random_(N)
>>> output = criterion(input, target)
>>> output.backward()

class kornia.losses.FocalLoss(alpha, gamma=2.0, reduction='none', eps=None)[source]#

Criterion that computes Focal loss.

According to [LGG+18], the Focal loss is computed as follows:

$\text{FL}(p_t) = -\alpha_t (1 - p_t)^{\gamma} \, \text{log}(p_t)$
Where:
• $$p_t$$ is the model’s estimated probability for each class.

Parameters
• alpha (float) – Weighting factor $$\alpha \in [0, 1]$$.

• gamma (float, optional) – Focusing parameter $$\gamma >= 0$$. Default: 2.0

• reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: 'none'

• eps (Optional[float], optional) – Deprecated: scalar to enforce numerical stability. This is no longer used. Default: None

Shape:
• Input: $$(N, C, *)$$ where C = number of classes.

• Target: $$(N, *)$$ where each value is $$0 ≤ targets[i] ≤ C−1$$.

Example

>>> N = 5  # num_classes
>>> kwargs = {"alpha": 0.5, "gamma": 2.0, "reduction": 'mean'}
>>> criterion = FocalLoss(**kwargs)
>>> input = torch.randn(1, N, 3, 5, requires_grad=True)
>>> target = torch.empty(1, 3, 5, dtype=torch.long).random_(N)
>>> output = criterion(input, target)
>>> output.backward()

class kornia.losses.LovaszHingeLoss[source]#

Criterion that computes a surrogate binary intersection-over-union (IoU) loss.

According to [2], we compute the IoU as follows:

$\text{IoU}(x, class) = \frac{|X \cap Y|}{|X \cup Y|}$

[1] approximates this formula with a surrogate that is fully differentiable.

Where:
• $$X$$ is expected to be the scores of each class.

• $$Y$$ is expected to be the binary tensor with the class labels.

The loss is then computed as:

$\text{loss}(x, class) = 1 - \text{IoU}(x, class)$
Note

This loss function only supports binary labels. For multi-class labels please use the Lovasz-Softmax loss.

Parameters
• pred – logits tensor with shape $$(N, 1, H, W)$$.

• target – labels tensor with shape $$(N, H, W)$$ with binary values.

Returns

a scalar with the computed loss.

Example

>>> N = 1  # num_classes
>>> criterion = LovaszHingeLoss()
>>> pred = torch.randn(1, N, 3, 5, requires_grad=True)
>>> target = torch.empty(1, 3, 5, dtype=torch.long).random_(N)
>>> output = criterion(pred, target)
>>> output.backward()

class kornia.losses.LovaszSoftmaxLoss[source]#

Criterion that computes a surrogate multi-class intersection-over-union (IoU) loss.

According to [1], we compute the IoU as follows:

$\text{IoU}(x, class) = \frac{|X \cap Y|}{|X \cup Y|}$

[1] approximates this formula with a surrogate that is fully differentiable.

Where:
• $$X$$ is expected to be the scores of each class.

• $$Y$$ is expected to be the binary tensor with the class labels.

The loss is then computed as:

$\text{loss}(x, class) = 1 - \text{IoU}(x, class)$
Note

This loss function only supports multi-class (C > 1) labels. For binary labels please use the Lovasz-Hinge loss.

Parameters
• pred – logits tensor with shape $$(N, C, H, W)$$ where C = number of classes > 1.

• target – labels tensor with shape $$(N, H, W)$$ where each value is $$0 ≤ targets[i] ≤ C−1$$.

Returns

a scalar with the computed loss.

Example

>>> N = 5  # num_classes
>>> criterion = LovaszSoftmaxLoss()
>>> pred = torch.randn(1, N, 3, 5, requires_grad=True)
>>> target = torch.empty(1, 3, 5, dtype=torch.long).random_(N)
>>> output = criterion(pred, target)
>>> output.backward()


## Distributions#

kornia.losses.js_div_loss_2d(input, target, reduction='mean')[source]#

Calculate the Jensen-Shannon divergence loss between heatmaps.

Parameters
• input (Tensor) – the input tensor with shape $$(B, N, H, W)$$.

• target (Tensor) – the target tensor with shape $$(B, N, H, W)$$.

• reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: 'mean'

Examples

>>> input = torch.full((1, 1, 2, 4), 0.125)
>>> loss = js_div_loss_2d(input, input)
>>> loss.item()
0.0
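
The Jensen-Shannon divergence is the symmetrized KL divergence against the mixture distribution; a hand-rolled sketch on softmax-normalized heatmaps (illustrative only; the library's reduction may differ):

>>> import torch
>>> def norm_heatmap(x):
...     # flatten H*W, softmax to a probability distribution, reshape back
...     B, N, H, W = x.shape
...     return torch.softmax(x.view(B, N, -1), dim=-1).view(B, N, H, W)
>>> p = norm_heatmap(torch.randn(1, 1, 2, 4))
>>> q = norm_heatmap(torch.randn(1, 1, 2, 4))
>>> m = 0.5 * (p + q)
>>> def kl(a, b):
...     return (a * (a / b).log()).sum()
>>> js = 0.5 * kl(p, m) + 0.5 * kl(q, m)  # symmetric and bounded, unlike plain KL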

kornia.losses.kl_div_loss_2d(input, target, reduction='mean')[source]#

Calculate the Kullback-Leibler divergence loss between heatmaps.

Parameters
• input (Tensor) – the input tensor with shape $$(B, N, H, W)$$.

• target (Tensor) – the target tensor with shape $$(B, N, H, W)$$.

• reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: 'mean'

Examples

>>> input = torch.full((1, 1, 2, 4), 0.125)
>>> loss = kl_div_loss_2d(input, input)
>>> loss.item()
0.0


## Morphology#

class kornia.losses.HausdorffERLoss(alpha=2.0, k=10, reduction='mean')[source]#

Binary Hausdorff loss based on morphological erosion.

Hausdorff Distance loss measures the maximum distance of a predicted segmentation boundary to the nearest ground-truth edge pixel. For two segmentation point sets X and Y, the one-sided HD from X to Y is defined as:

$hd(X,Y) = \max_{x \in X} \min_{y \in Y}||x - y||_2$

Furthermore, the bidirectional HD is:

$HD(X,Y) = \max(hd(X, Y), hd(Y, X))$

This is a Hausdorff Distance (HD) loss based on morphological erosion, which provides a differentiable approximation of the Hausdorff distance, as stated in [KS19]. The code is refactored on top of an existing open-source implementation.

Parameters
• alpha (float, optional) – controls the erosion rate in each iteration. Default: 2.0

• k (int, optional) – the number of iterations of erosion. Default: 10

• reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the weighted mean of the output is taken, 'sum': the output will be summed. Default: 'mean'

Examples

>>> hdloss = HausdorffERLoss()
>>> input = torch.randn(5, 3, 20, 20)
>>> target = (torch.rand(5, 1, 20, 20) * 2).long()
>>> res = hdloss(input, target)
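
Since the erosion-based formulation is differentiable, the loss can drive optimization; a minimal sketch, assuming the default 'mean' reduction yields a scalar:

>>> import torch
>>> from kornia.losses import HausdorffERLoss
>>> input = torch.randn(5, 3, 20, 20, requires_grad=True)
>>> target = (torch.rand(5, 1, 20, 20) * 2).long()
>>> loss = HausdorffERLoss()(input, target)
>>> loss.backward()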

class kornia.losses.HausdorffERLoss3D(alpha=2.0, k=10, reduction='mean')[source]#

Binary 3D Hausdorff loss based on morphological erosion.

Hausdorff Distance loss measures the maximum distance of a predicted segmentation boundary to the nearest ground-truth edge pixel. For two segmentation point sets X and Y, the one-sided HD from X to Y is defined as:

$hd(X,Y) = \max_{x \in X} \min_{y \in Y}||x - y||_2$

Furthermore, the bidirectional HD is:

$HD(X,Y) = \max(hd(X, Y), hd(Y, X))$

This is a 3D Hausdorff Distance (HD) loss based on morphological erosion, which provides a differentiable approximation of the Hausdorff distance, as stated in [KS19]. The code is refactored on top of an existing open-source implementation.

Parameters
• alpha (float, optional) – controls the erosion rate in each iteration. Default: 2.0

• k (int, optional) – the number of iterations of erosion. Default: 10

• reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the weighted mean of the output is taken, 'sum': the output will be summed. Default: 'mean'

Examples

>>> hdloss = HausdorffERLoss3D()
>>> input = torch.randn(5, 3, 20, 20, 20)
>>> target = (torch.rand(5, 1, 20, 20, 20) * 2).long()
>>> res = hdloss(input, target)