# kornia.metrics#

Module containing metrics for training networks.

## Classification#

kornia.metrics.accuracy(input, target, topk=(1,))[source]#

Computes the accuracy over the k top predictions for the specified values of k.

Parameters
• input (Tensor) – the input tensor with the logits to evaluate.

• target (Tensor) – the tensor containing the ground truth.

• topk (optional) – the expected topk ranking. Default: (1,)

Example

>>> logits = torch.tensor([[0, 1, 0]])
>>> target = torch.tensor([[1]])
>>> accuracy(logits, target)
[tensor(100.)]
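
The top-k logic can be sketched in plain Python without kornia: for each row of logits, a prediction counts as correct if the true label is among the k largest logits. The helper name `topk_accuracy` below is illustrative, not the library API.

```python
def topk_accuracy(logits, target, topk=(1,)):
    """Sketch of top-k accuracy: percentage of rows whose true label
    appears among the k highest-scoring classes."""
    results = []
    for k in topk:
        correct = 0
        for row, label in zip(logits, target):
            # indices of the k largest logits in this row
            ranked = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
            if label in ranked:
                correct += 1
        results.append(100.0 * correct / len(logits))
    return results

print(topk_accuracy([[0.0, 1.0, 0.0]], [1]))  # [100.0]
```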

Return type

List[Tensor]

## Segmentation#

kornia.metrics.confusion_matrix(input, target, num_classes, normalized=False)[source]#

Compute confusion matrix to evaluate the accuracy of a classification.

Parameters
• input (Tensor) – tensor with estimated targets returned by a classifier. The shape can be $$(B, *)$$ and must contain integer values between 0 and K-1.

• target (Tensor) – tensor with ground truth (correct) target values. The shape can be $$(B, *)$$ and must contain integer class indices between 0 and K-1.

• num_classes (int) – total possible number of classes in target.

• normalized (bool, optional) – whether to return the confusion matrix normalized. Default: False

Return type

Tensor

Returns

a tensor containing the confusion matrix with shape $$(B, K, K)$$ where K is the number of classes.

Example

>>> logits = torch.tensor([[0, 1, 0]])
>>> target = torch.tensor([[0, 1, 0]])
>>> confusion_matrix(logits, target, num_classes=3)
tensor([[[2., 0., 0.],
         [0., 1., 0.],
         [0., 0., 0.]]])
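
The counting behind this example can be reproduced in plain Python: cell (t, p) of each per-sample matrix counts elements whose true class is t and predicted class is p. This is a sketch of the computation, not the kornia implementation.

```python
def confusion_matrix_counts(pred, target, num_classes):
    """Sketch: one K x K count matrix per batch element, where
    cm[t][p] counts elements with true class t predicted as p."""
    batch = []
    for p_row, t_row in zip(pred, target):
        cm = [[0] * num_classes for _ in range(num_classes)]
        for p, t in zip(p_row, t_row):
            cm[t][p] += 1
        batch.append(cm)
    return batch

print(confusion_matrix_counts([[0, 1, 0]], [[0, 1, 0]], 3))
# [[[2, 0, 0], [0, 1, 0], [0, 0, 0]]]
```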

kornia.metrics.mean_iou(input, target, num_classes, eps=1e-06)[source]#

Calculate mean Intersection-Over-Union (mIOU).

The function internally computes the confusion matrix.

Parameters
• input (Tensor) – tensor with estimated targets returned by a classifier. The shape can be $$(B, *)$$ and must contain integer values between 0 and K-1.

• target (Tensor) – tensor with ground truth (correct) target values. The shape can be $$(B, *)$$ and must contain integer class indices between 0 and K-1.

• num_classes (int) – total possible number of classes in target.

Return type

Tensor

Returns

a tensor representing the mean intersection-over union with shape $$(B, K)$$ where K is the number of classes.

Example

>>> logits = torch.tensor([[0, 1, 0]])
>>> target = torch.tensor([[0, 1, 0]])
>>> mean_iou(logits, target, num_classes=3)
tensor([[1., 1., 1.]])
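
Per-class IoU follows directly from the confusion matrix: for class k, IoU = diagonal / (row sum + column sum − diagonal). A plain-Python sketch, with classes absent from both prediction and target scored 1.0 to match the example above:

```python
def iou_from_confusion_matrix(cm):
    """Sketch: per-class IoU from a K x K confusion matrix cm,
    where cm[t][p] counts true class t predicted as p."""
    num_classes = len(cm)
    ious = []
    for k in range(num_classes):
        tp = cm[k][k]                                          # correctly labelled as k
        actual = sum(cm[k])                                    # all elements truly k
        predicted = sum(cm[i][k] for i in range(num_classes))  # all predicted as k
        union = actual + predicted - tp
        # score classes absent from both sides as 1.0, as in the example output
        ious.append(tp / union if union > 0 else 1.0)
    return ious

print(iou_from_confusion_matrix([[2, 0, 0], [0, 1, 0], [0, 0, 0]]))  # [1.0, 1.0, 1.0]
```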


## Detection#

kornia.metrics.mean_average_precision(pred_boxes, pred_labels, pred_scores, gt_boxes, gt_labels, n_classes, threshold=0.5)[source]#

Calculate the Mean Average Precision (mAP) of detected objects.

Code altered from https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Object-Detection/blob/master/utils.py#L271. Background class (0 index) is excluded.

Parameters
Return type
Returns

the mean average precision (mAP) and a dictionary mapping each class to its average precision.

Examples

>>> boxes, labels, scores = torch.tensor([[100, 50, 150, 100.]]), torch.tensor([1]), torch.tensor([.7])
>>> gt_boxes, gt_labels = torch.tensor([[100, 50, 150, 100.]]), torch.tensor([1])
>>> mean_average_precision([boxes], [labels], [scores], [gt_boxes], [gt_labels], 2)
(tensor(1.), {1: 1.0})
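
The core of per-class average precision can be sketched as follows: sort a class's detections by descending score, flag each as a true or false positive by IoU matching against the ground truth, then accumulate the area under the precision-recall curve. The helper below uses simple rectangle (all-point) accumulation; the exact interpolation in the referenced implementation may differ.

```python
def average_precision(is_tp, n_gt):
    """Sketch of single-class AP: is_tp flags detections sorted by
    descending score; n_gt is the number of ground-truth boxes."""
    ap, tp, prev_recall = 0.0, 0, 0.0
    for n_seen, hit in enumerate(is_tp, start=1):
        tp += hit
        recall = tp / n_gt
        precision = tp / n_seen
        ap += (recall - prev_recall) * precision  # rectangle under the PR curve
        prev_recall = recall
    return ap

print(average_precision([True], n_gt=1))  # 1.0 -- one detection, one match
```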

kornia.metrics.mean_iou_bbox(boxes_1, boxes_2)[source]#

Compute the IoU of the cartesian product of two sets of boxes.

Each box in each set shall be (x1, y1, x2, y2).

Parameters
Return type

Tensor

Returns

a tensor in dimensions $$(B1, B2)$$, representing the IoU of each of the boxes in set 1 with respect to each of the boxes in set 2.

Example

>>> boxes_1 = torch.tensor([[40, 40, 60, 60], [30, 40, 50, 60]])
>>> boxes_2 = torch.tensor([[40, 50, 60, 70], [30, 40, 40, 50]])
>>> mean_iou_bbox(boxes_1, boxes_2)
tensor([[0.3333, 0.0000],
        [0.1429, 0.2500]])
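
For a single pair of (x1, y1, x2, y2) boxes, the IoU is the intersection area over the union area. A plain-Python sketch that reproduces the first entry of the example:

```python
def box_iou(a, b):
    """Sketch: IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])  # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(round(box_iou([40, 40, 60, 60], [40, 50, 60, 70]), 4))  # 0.3333
```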


## Image Quality#

kornia.metrics.psnr(input, target, max_val)[source]#

Compute the PSNR between two images.

PSNR is the Peak Signal-to-Noise Ratio, a logarithmic measure based on the mean squared error. Given an m x n image, the PSNR is:

$\text{PSNR} = 10 \log_{10} \bigg(\frac{\text{MAX}_I^2}{\text{MSE}(I,T)}\bigg)$

where

$\text{MSE}(I,T) = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1} [I(i,j) - T(i,j)]^2$

and $$\text{MAX}_I$$ is the maximum possible input value (e.g. for floating point images $$\text{MAX}_I=1$$).

Parameters
• input (Tensor) – the input image with arbitrary shape $$(*)$$.

• target (Tensor) – the target image with arbitrary shape $$(*)$$.

• max_val (float) – The maximum value in the input tensor.

Return type

Tensor

Returns

the computed PSNR as a scalar tensor.

Examples

>>> ones = torch.ones(1)
>>> psnr(ones, 1.2 * ones, 2.) # 10 * log(4/((1.2-1)**2)) / log(10)
tensor(20.0000)

Reference:

https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Definition
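
The two formulas combine into a few lines. A pure-Python sketch (with `math.log10` in place of the torch ops) that reproduces the example above:

```python
import math

def psnr_scalar(pred, target, max_val):
    """Sketch: PSNR = 10 * log10(MAX_I^2 / MSE) over flat sequences."""
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    return 10.0 * math.log10(max_val ** 2 / mse)

print(round(psnr_scalar([1.0], [1.2], 2.0), 4))  # 20.0
```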

kornia.metrics.ssim(img1, img2, window_size, max_val=1.0, eps=1e-12, padding='same')[source]#

Function that computes the Structural Similarity (SSIM) index map between two images.

Measures the SSIM index between each element in the input x and target y.

The index can be described as:

$\text{SSIM}(x, y) = \frac{(2\mu_x\mu_y+c_1)(2\sigma_{xy}+c_2)} {(\mu_x^2+\mu_y^2+c_1)(\sigma_x^2+\sigma_y^2+c_2)}$
where:
• $$c_1=(k_1 L)^2$$ and $$c_2=(k_2 L)^2$$ are two variables to stabilize the division with a weak denominator.

• $$L$$ is the dynamic range of the pixel-values (typically this is $$2^{\#\text{bits per pixel}}-1$$).

Parameters
• img1 (Tensor) – the first input image with shape $$(B, C, H, W)$$.

• img2 (Tensor) – the second input image with shape $$(B, C, H, W)$$.

• window_size (int) – the size of the gaussian kernel to smooth the images.

• max_val (float, optional) – the dynamic range of the images. Default: 1.0

• eps (float, optional) – small value for numerical stability when dividing. Default: 1e-12

• padding (str, optional) – 'same' | 'valid'. If 'valid', only the “valid” convolution area is used to compute SSIM, matching the MATLAB implementation of the original SSIM paper. Default: 'same'

Return type

Tensor

Returns

The ssim index map with shape $$(B, C, H, W)$$.

Examples

>>> input1 = torch.rand(1, 4, 5, 5)
>>> input2 = torch.rand(1, 4, 5, 5)
>>> ssim_map = ssim(input1, input2, 5)  # 1x4x5x5
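
Ignoring the Gaussian windowing (which computes the statistics locally), the SSIM formula itself reduces to a few moment computations. A global, single-value sketch in plain Python, with `c1 = (k1*L)^2` and `c2 = (k2*L)^2` as in the formula above:

```python
def ssim_global(x, y, max_val=1.0, k1=0.01, k2=0.03):
    """Sketch: one global SSIM value over flat sequences (no windowing)."""
    n = len(x)
    mu_x, mu_y = sum(x) / n, sum(y) / n
    var_x = sum((v - mu_x) * (v - mu_x) for v in x) / n
    var_y = sum((v - mu_y) * (v - mu_y) for v in y) / n
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / n
    c1, c2 = (k1 * max_val) ** 2, (k2 * max_val) ** 2
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x * mu_x + mu_y * mu_y + c1) * (var_x + var_y + c2)
    return num / den

print(ssim_global([0.2, 0.4, 0.6], [0.2, 0.4, 0.6]))  # identical inputs give 1.0
```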


class kornia.metrics.SSIM(window_size, max_val=1.0, eps=1e-12, padding='same')[source]#

Create a module that computes the Structural Similarity (SSIM) index between two images.

Measures the SSIM index between each element in the input x and target y.

The index can be described as:

$\text{SSIM}(x, y) = \frac{(2\mu_x\mu_y+c_1)(2\sigma_{xy}+c_2)} {(\mu_x^2+\mu_y^2+c_1)(\sigma_x^2+\sigma_y^2+c_2)}$
where:
• $$c_1=(k_1 L)^2$$ and $$c_2=(k_2 L)^2$$ are two variables to stabilize the division with a weak denominator.

• $$L$$ is the dynamic range of the pixel-values (typically this is $$2^{\#\text{bits per pixel}}-1$$).

Parameters
• window_size (int) – the size of the gaussian kernel to smooth the images.

• max_val (float, optional) – the dynamic range of the images. Default: 1.0

• eps (float, optional) – small value for numerical stability when dividing. Default: 1e-12

• padding (str, optional) – 'same' | 'valid'. If 'valid', only the “valid” convolution area is used to compute SSIM, matching the MATLAB implementation of the original SSIM paper. Default: 'same'

Shape:
• Input: $$(B, C, H, W)$$.

• Target: $$(B, C, H, W)$$.

• Output: $$(B, C, H, W)$$.

Examples

>>> input1 = torch.rand(1, 4, 5, 5)
>>> input2 = torch.rand(1, 4, 5, 5)
>>> ssim = SSIM(5)
>>> ssim_map = ssim(input1, input2)  # 1x4x5x5


## Monitoring#

class kornia.metrics.AverageMeter[source]#

Computes and stores the average and current value.

Example

>>> stats = AverageMeter()
>>> acc1 = torch.tensor(0.99) # coming from K.metrics.accuracy
>>> stats.update(acc1, n=1)  # where n is batch size usually
>>> stats.avg
tensor(0.9900)
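
The behavior can be sketched in a few lines of plain Python: `update(val, n)` accumulates a weighted sum and count, and the average divides them. The class name and the use of a property here are illustrative; the kornia class stores `avg` as a plain attribute.

```python
class RunningAverage:
    """Sketch of AverageMeter: keeps the latest value and a running average."""
    def __init__(self):
        self.val = 0.0   # most recent value
        self.sum = 0.0   # weighted sum of all values seen
        self.count = 0   # total weight (e.g. number of samples)

    def update(self, val, n=1):
        self.val = val
        self.sum += val * n
        self.count += n

    @property
    def avg(self):
        return self.sum / self.count if self.count else 0.0

stats = RunningAverage()
stats.update(0.99, n=1)
print(stats.avg)  # 0.99
```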