# kornia.augmentation

The classes in this section perform various data augmentation operations.

class RandomHorizontalFlip(p: float = 0.5, return_transform: bool = False)[source]

Horizontally flip a tensor image or a batch of tensor images randomly with a given probability. The input should be a tensor of shape (C, H, W) or a batch of tensors (*, C, H, W). If the input is a tuple, it is assumed that the first element contains the aforementioned tensors and the second the corresponding transformation matrix that has been applied to them. In that case the module will horizontally flip the tensors and concatenate the corresponding transformation matrix to the previous one. This is especially useful when using this functionality as part of an nn.Sequential module.

Parameters
• p (float) – probability of the image being flipped. Default value is 0.5

• return_transform (bool) – if True, return the matrix describing the transformation applied to each input tensor. If False and the input is a tuple, the applied transformation won't be concatenated.

Examples

```python
>>> input = torch.tensor([[[[0., 0., 0.],
...                         [0., 0., 0.],
...                         [0., 1., 1.]]]])
>>> seq = nn.Sequential(kornia.augmentation.RandomHorizontalFlip(p=1.0, return_transform=True),
...                     kornia.augmentation.RandomHorizontalFlip(p=1.0, return_transform=True))
>>> seq(input)
(tensor([[0., 0., 0.],
         [0., 0., 0.],
         [0., 1., 1.]]),
 tensor([[[1., 0., 0.],
          [0., 1., 0.],
          [0., 0., 1.]]]))
```

class RandomVerticalFlip(p: float = 0.5, return_transform: bool = False)[source]

Vertically flip a tensor image or a batch of tensor images randomly with a given probability. The input should be a tensor of shape (C, H, W) or a batch of tensors (*, C, H, W). If the input is a tuple, it is assumed that the first element contains the aforementioned tensors and the second the corresponding transformation matrix that has been applied to them. In that case the module will vertically flip the tensors and concatenate the corresponding transformation matrix to the previous one. This is especially useful when using this functionality as part of an nn.Sequential module.

Parameters
• p (float) – probability of the image being flipped. Default value is 0.5

• return_transform (bool) – if True, return the matrix describing the transformation applied to each input tensor. If False and the input is a tuple, the applied transformation won't be concatenated.

Examples

```python
>>> input = torch.tensor([[[[0., 0., 0.],
...                         [0., 0., 0.],
...                         [0., 1., 1.]]]])
>>> seq = nn.Sequential(kornia.augmentation.RandomVerticalFlip(p=1.0, return_transform=True))
>>> seq(input)
(tensor([[0., 1., 1.],
         [0., 0., 0.],
         [0., 0., 0.]]),
 tensor([[[1., 0., 0.],
          [0., -1., 3.],
          [0., 0., 1.]]]))
```

class RandomRectangleErasing(erase_scale_range: Tuple[float, float], aspect_ratio_range: Tuple[float, float])[source]

Erases a randomly selected rectangle in each image of the batch, setting its values to zero. The rectangle has an area equal to the original image area multiplied by a value uniformly sampled from the range [erase_scale_range[0], erase_scale_range[1]), and an aspect ratio sampled from [aspect_ratio_range[0], aspect_ratio_range[1]).

Parameters
• erase_scale_range (Tuple[float, float]) – range of proportion of erased area against input image.

• aspect_ratio_range (Tuple[float, float]) – range of aspect ratio of erased area.

Examples

```python
>>> inputs = torch.ones(1, 1, 3, 3)
>>> rec_er = kornia.augmentation.RandomRectangleErasing((.4, .8), (.3, 1/.3))
>>> rec_er(inputs)
tensor([[[[1., 0., 0.],
          [1., 0., 0.],
          [1., 0., 0.]]]])
```

class RandomGrayscale(p: float = 0.5, return_transform: bool = False)[source]

Random grayscale transformation, applied with probability p.

Parameters
• p (float) – probability of the image to be transformed to grayscale. Default value is 0.5

• return_transform (bool) – if True, return the matrix describing the transformation applied to each input tensor. If False and the input is a tuple, the applied transformation won't be concatenated.

class RandomAffine(degrees: Union[float, Tuple[float, float]], translate: Optional[Tuple[float, float]] = None, scale: Optional[Tuple[float, float]] = None, shear: Union[float, Tuple[float, float], None] = None, return_transform: bool = False)[source]

Random affine transformation of the image keeping center invariant.

Parameters
• degrees (float or tuple) – range of degrees to select from. If degrees is a number instead of a sequence like (min, max), the range of degrees will be (-degrees, +degrees). Set to 0 to deactivate rotations.

• translate (tuple, optional) – tuple of maximum absolute fractions for horizontal and vertical translations. For example, with translate=(a, b) the horizontal shift is randomly sampled in the range -img_width * a < dx < img_width * a and the vertical shift in the range -img_height * b < dy < img_height * b. Will not translate by default.

• scale (tuple, optional) – scaling factor interval, e.g. (a, b); the scale is randomly sampled from the range a <= scale <= b. Will keep the original scale by default.

• shear (sequence or float, optional) – range of degrees to select from. If shear is a number, a shear parallel to the x axis in the range (-shear, +shear) will be applied. If shear is a tuple or list of 2 values, a shear parallel to the x axis in the range (shear[0], shear[1]) will be applied. If shear is a tuple or list of 4 values, an x-axis shear in (shear[0], shear[1]) and a y-axis shear in (shear[2], shear[3]) will be applied. Will not apply shear by default.

• return_transform (bool) – if True, return the matrix describing the transformation applied to each input tensor. Default: False.

Examples

```python
>>> input = torch.rand(2, 3, 224, 224)
>>> my_fcn = kornia.augmentation.RandomAffine((-15., 20.), return_transform=True)
>>> out, transform = my_fcn(input)  # 2x3x224x224 / 2x3x3
```

class RandomPerspective(distortion_scale: float = 0.5, p: float = 0.5, return_transform: bool = False)[source]

Performs a perspective transformation of the given torch.Tensor randomly with a given probability.

Parameters
• p (float) – probability of the image being perspectively transformed. Default value is 0.5.

• distortion_scale (float) – controls the degree of distortion and ranges from 0 to 1. Default value is 0.5.

• return_transform (bool) – if True, return the matrix describing the transformation applied to each input tensor. Default: False.

class RandomRotation(degrees: Union[torch.Tensor, float, Tuple[float, float], List[float]] = 45.0, return_transform: bool = False)[source]

Rotate a tensor image or a batch of tensor images by a random amount of degrees. The input should be a tensor of shape (C, H, W) or a batch of tensors (*, C, H, W). If the input is a tuple, it is assumed that the first element contains the aforementioned tensors and the second the corresponding transformation matrix that has been applied to them. In that case the module will rotate the tensors and concatenate the corresponding transformation matrix to the previous one. This is especially useful when using this functionality as part of an nn.Sequential module.

Parameters
• degrees (sequence or float or tensor) – range of degrees to select from. If degrees is a number, the range of degrees to select from will be (-degrees, +degrees).

• return_transform (bool) – if True, return the matrix describing the transformation applied to each input tensor. If False and the input is a tuple, the applied transformation won't be concatenated.

Examples

```python
>>> input = torch.tensor([[[[10., 0., 0.],
...                         [0., 4.5, 4.],
...                         [0., 1., 1.]]]])
>>> seq = nn.Sequential(kornia.augmentation.RandomRotation(degrees=90.0, return_transform=True))
>>> seq(input)
(tensor([[[0.0000e+00, 8.8409e-02, 9.8243e+00],
          [9.9131e-01, 4.5000e+00, 1.7524e-04],
          [9.9121e-01, 3.9735e+00, 3.5140e-02]]]),
 tensor([[[ 0.0088, -1.0000,  1.9911],
          [ 1.0000,  0.0088, -0.0088],
          [ 0.0000,  0.0000,  1.0000]]]))
```

class ColorJitter(brightness: Union[torch.Tensor, float, Tuple[float, float], List[float]] = 0.0, contrast: Union[torch.Tensor, float, Tuple[float, float], List[float]] = 0.0, saturation: Union[torch.Tensor, float, Tuple[float, float], List[float]] = 0.0, hue: Union[torch.Tensor, float, Tuple[float, float], List[float]] = 0.0, return_transform: bool = False)[source]

Change the brightness, contrast, saturation and hue randomly given tensor image or a batch of tensor images.

Input should be a tensor of shape (C, H, W) or a batch of tensors $$(*, C, H, W)$$.

Parameters
• brightness (float or tuple) – Default value is 0

• contrast (float or tuple) – Default value is 0

• saturation (float or tuple) – Default value is 0

• hue (float or tuple) – Default value is 0

• return_transform (bool) – if True, return the matrix describing the transformation applied to each input tensor. If False and the input is a tuple, the applied transformation won't be concatenated.

class CenterCrop(size: Union[int, Tuple[int, int]], return_transform: bool = False)[source]

Crops the given torch.Tensor at the center.

Parameters
• size (int or tuple) – desired output size of the crop. If size is an int instead of a sequence like (h, w), a square crop (size, size) is made.

• return_transform (bool) – if True, return the matrix describing the transformation applied to each input tensor. Default: False.

class RandomCrop(size: Tuple[int, int], padding: Union[int, Tuple[int, int], Tuple[int, int, int, int], None] = None, pad_if_needed: Optional[bool] = False, fill: int = 0, padding_mode: str = 'constant', return_transform: bool = False)[source]

Random crop of a given size.

Parameters
• size (tuple) – desired output size of the crop, like (h, w).

• padding (int or sequence, optional) – optional padding on each border of the image. Default is None, i.e. no padding. If a sequence of length 4 is provided, it is used to pad the left, top, right, and bottom borders respectively. If a sequence of length 2 is provided, it is used to pad the left/right and top/bottom borders, respectively.
• pad_if_needed (boolean) – It will pad the image if smaller than the desired size to avoid raising an exception. Since cropping is done after padding, the padding seems to be done at a random offset.

• fill – Pixel fill value for constant fill. Default is 0. If a tuple of length 3, it is used to fill R, G, B channels respectively. This value is only used when the padding_mode is constant

• padding_mode – Type of padding. Should be: constant, edge, reflect or symmetric. Default is constant.

• return_transform (bool) – if True, return the matrix describing the transformation applied to each input tensor. If False and the input is a tuple, the applied transformation won't be concatenated.

class RandomResizedCrop(size: Tuple[int, int], scale=(1.0, 1.0), ratio=(1.0, 1.0), interpolation=None, return_transform: bool = False)[source]

Random crop of a given size, resizing the cropped patch to another size.

Parameters
• size (Tuple[int, int]) – expected output size of each edge.

• scale – range of the size of the cropped area with respect to the original size.

• ratio – range of the aspect ratio of the cropped area with respect to the original aspect ratio.

• interpolation – Default: PIL.Image.BILINEAR.

• return_transform (bool) – if True, return the matrix describing the transformation applied to each input tensor. If False and the input is a tuple, the applied transformation won't be concatenated.

color_jitter(input: torch.Tensor, brightness: Union[torch.Tensor, float, Tuple[float, float], List[float]] = 0.0, contrast: Union[torch.Tensor, float, Tuple[float, float], List[float]] = 0.0, saturation: Union[torch.Tensor, float, Tuple[float, float], List[float]] = 0.0, hue: Union[torch.Tensor, float, Tuple[float, float], List[float]] = 0.0, return_transform: bool = False) → Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]][source]

Generate params and apply operation on input tensor.

See _random_color_jitter_gen() for details. See _apply_color_jitter() for details.

random_affine(input: torch.Tensor, degrees: Union[float, Tuple[float, float]], translate: Optional[Tuple[float, float]] = None, scale: Optional[Tuple[float, float]] = None, shear: Union[float, Tuple[float, float], None] = None, return_transform: bool = False) → Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]][source]

Random affine transformation of the image keeping the center invariant.

See _random_affine_gen() for details. See _apply_affine() for details.

random_grayscale(input: torch.Tensor, p: float = 0.5, return_transform: bool = False)[source]

Generate params and apply operation on input tensor.

See _random_prob_gen() for details. See _apply_grayscale() for details.

random_hflip(input: torch.Tensor, p: float = 0.5, return_transform: bool = False) → Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]][source]

Generate params and apply operation on input tensor.

See _random_prob_gen() for details. See _apply_hflip() for details.

random_perspective(input: torch.Tensor, distortion_scale: float = 0.5, p: float = 0.5, return_transform: bool = False) → Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]][source]

Performs a perspective transformation of the given torch.Tensor randomly with a given probability.

See _random_perspective_gen() for details. See _apply_perspective() for details.

random_rectangle_erase(images: torch.Tensor, erase_scale_range: Tuple[float, float], aspect_ratio_range: Tuple[float, float]) → torch.Tensor[source]

Function that erases a randomly selected rectangle in each image of the batch, setting its values to zero. The rectangle has an area equal to the original image area multiplied by a value uniformly sampled from the range [erase_scale_range[0], erase_scale_range[1]), and an aspect ratio sampled from [aspect_ratio_range[0], aspect_ratio_range[1]).

Parameters
• images (torch.Tensor) – input images.

• erase_scale_range (Tuple[float, float]) – range of proportion of erased area against input image.

• aspect_ratio_range (Tuple[float, float]) – range of aspect ratio of erased area.

random_rotation(input: torch.Tensor, degrees: Union[torch.Tensor, float, Tuple[float, float], List[float]], return_transform: bool = False) → Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]][source]

Generate params and apply operation on input tensor.

See _random_rotation_gen() for details. See apply_rotation() for details.

random_vflip(input: torch.Tensor, p: float = 0.5, return_transform: bool = False) → Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]][source]

Generate params and apply operation on input tensor.

See _random_prob_gen() for details. See _apply_vflip() for details.