kornia.geometry.warp

class HomographyWarper(height: int, width: int, mode: str = 'bilinear', padding_mode: str = 'zeros', normalized_coordinates: bool = True, align_corners: bool = False)[source]

Warp tensors by homographies.

\[X_{dst} = H_{src}^{\{dst\}} * X_{src}\]
Parameters
  • height (int) – The height of the destination tensor.

  • width (int) – The width of the destination tensor.

  • mode (str) – interpolation mode to calculate output values ‘bilinear’ | ‘nearest’. Default: ‘bilinear’.

  • padding_mode (str) – padding mode for outside grid values ‘zeros’ | ‘border’ | ‘reflection’. Default: ‘zeros’.

  • normalized_coordinates (bool) – whether to use a grid with normalized coordinates. Default: True.

  • align_corners (bool) – interpolation flag. Default: False. See https://pytorch.org/docs/stable/nn.functional.html#torch.nn.functional.interpolate for details.

forward(patch_src: torch.Tensor, src_homo_dst: Optional[torch.Tensor] = None) → torch.Tensor[source]

Warp a tensor from the source frame into the reference frame.

Parameters
  • patch_src (torch.Tensor) – The tensor to warp.

  • src_homo_dst (torch.Tensor, optional) – The homography or stack of homographies from destination to source. The homography assumes normalized coordinates [-1, 1] if normalized_coordinates is True. Default: None.

Returns

Patch sampled at locations from source to destination.

Return type

torch.Tensor

Shape:
  • Input: \((N, C, H, W)\) and \((N, 3, 3)\)

  • Output: \((N, C, H, W)\)

Example

>>> import torch
>>> import kornia
>>> input = torch.rand(1, 3, 32, 32)
>>> homography = torch.eye(3).view(1, 3, 3)
>>> warper = kornia.HomographyWarper(32, 32)
>>> # without precomputing the warp
>>> output = warper(input, homography)  # NxCxHxW
>>> # precomputing the warp
>>> warper.precompute_warp_grid(homography)
>>> output = warper(input)  # NxCxHxW
precompute_warp_grid(src_homo_dst: torch.Tensor) → None[source]

Compute and store internally the transformations of the points.

Useful when the same homography/homographies are reused.

Parameters

src_homo_dst (torch.Tensor) – Homography or homographies (stacked) to transform all points in the grid. Shape of the homography has to be \((1, 3, 3)\) or \((N, 1, 3, 3)\). The homography assumes normalized coordinates [-1, 1] if normalized_coordinates is True.

class DepthWarper(pinhole_dst: kornia.geometry.camera.pinhole.PinholeCamera, height: int, width: int, mode: str = 'bilinear', padding_mode: str = 'zeros', align_corners: bool = True)[source]

Warps a patch by depth.

\[P_{src}^{\{dst\}} = K_{dst} * T_{src}^{\{dst\}}\]
\[I_{src} = \omega(I_{dst}, P_{src}^{\{dst\}}, D_{src})\]
Parameters
  • pinhole_dst (PinholeCamera) – the pinhole camera model for the destination frame.

  • height (int) – the height of the image to warp.

  • width (int) – the width of the image to warp.

  • mode (str) – interpolation mode to calculate output values ‘bilinear’ | ‘nearest’. Default: ‘bilinear’.

  • padding_mode (str) – padding mode for outside grid values ‘zeros’ | ‘border’ | ‘reflection’. Default: ‘zeros’.

  • align_corners (bool) – interpolation flag. Default: True. See https://pytorch.org/docs/stable/nn.functional.html#torch.nn.functional.interpolate for details.

compute_projection_matrix(pinhole_src: kornia.geometry.camera.pinhole.PinholeCamera) → kornia.geometry.warp.depth_warper.DepthWarper[source]

Compute the projection matrix from the source to the destination frame.

compute_subpixel_step() → torch.Tensor[source]

Compute the required inverse depth step to achieve sub-pixel accurate sampling of the depth cost volume, per camera.

Szeliski, Richard, and Daniel Scharstein. “Symmetric sub-pixel stereo matching.” European Conference on Computer Vision. Springer Berlin Heidelberg, 2002.
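
A minimal usage sketch, mirroring the forward example below and assuming the projection matrix has already been set via compute_projection_matrix:

>>> warper = kornia.DepthWarper(pinhole_dst, height, width)
>>> warper.compute_projection_matrix(pinhole_src)
>>> step = warper.compute_subpixel_step()  # inverse depth step per camera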

forward(depth_src: torch.Tensor, patch_dst: torch.Tensor) → torch.Tensor[source]

Warp a tensor from the destination frame to the reference frame, given the depth in the reference frame.

Parameters
  • depth_src (torch.Tensor) – the depth in the reference frame. The tensor must have a shape \((B, 1, H, W)\).

  • patch_dst (torch.Tensor) – the patch in the destination frame. The tensor must have a shape \((B, C, H, W)\).

Returns

the warped patch from destination frame to reference.

Return type

torch.Tensor

Shape:
  • Output: \((N, C, H, W)\) where C = number of channels.

Example

>>> # pinhole camera models
>>> pinhole_dst = kornia.PinholeCamera(...)
>>> pinhole_src = kornia.PinholeCamera(...)
>>> # create the depth warper, compute the projection matrix
>>> warper = kornia.DepthWarper(pinhole_dst, height, width)
>>> warper.compute_projection_matrix(pinhole_src)
>>> # warp the destination frame to reference by depth
>>> depth_src = torch.ones(1, 1, 32, 32)  # Nx1xHxW
>>> image_dst = torch.rand(1, 3, 32, 32)  # NxCxHxW
>>> image_src = warper(depth_src, image_dst)  # NxCxHxW
warp_grid(depth_src: torch.Tensor) → torch.Tensor[source]

Compute a grid for warping, given the depth from the reference pinhole camera.

The function compute_projection_matrix has to be called beforehand so that the relative projection matrices, which encode the relative pose and the intrinsics between the reference and a non-reference camera, are already precomputed.
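
A hedged sketch of calling warp_grid directly, reusing the placeholder pinhole cameras from the example above; the \((B, H, W, 2)\) sampling-grid shape is an assumption consistent with the class output shape:

>>> warper = kornia.DepthWarper(pinhole_dst, height, width)
>>> warper.compute_projection_matrix(pinhole_src)
>>> depth_src = torch.ones(1, 1, height, width)  # Bx1xHxW
>>> grid = warper.warp_grid(depth_src)  # BxHxWx2 grid of sampling locations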

homography_warp(patch_src: torch.Tensor, src_homo_dst: torch.Tensor, dsize: Tuple[int, int], mode: str = 'bilinear', padding_mode: str = 'zeros', align_corners: bool = False, normalized_coordinates: bool = True) → torch.Tensor[source]

Warp image patches or tensors by normalized 2D homographies.

See HomographyWarper for details.

Parameters
  • patch_src (torch.Tensor) – The image or tensor to warp, taken from the source frame, of shape \((N, C, H, W)\).

  • src_homo_dst (torch.Tensor) – The homography or stack of homographies from destination to source of shape \((N, 3, 3)\).

  • dsize (Tuple[int, int]) – The height and width of the image to warp.

  • mode (str) – interpolation mode to calculate output values ‘bilinear’ | ‘nearest’. Default: ‘bilinear’.

  • padding_mode (str) – padding mode for outside grid values ‘zeros’ | ‘border’ | ‘reflection’. Default: ‘zeros’.

  • align_corners (bool) – interpolation flag. Default: False. See https://pytorch.org/docs/stable/nn.functional.html#torch.nn.functional.interpolate for details.

  • normalized_coordinates (bool) – Whether the homography assumes [-1, 1] normalized coordinates or not.

Returns

Patch sampled at locations from source to destination.

Return type

torch.Tensor

Example

>>> import torch
>>> import kornia
>>> input = torch.rand(1, 3, 32, 32)
>>> homography = torch.eye(3).view(1, 3, 3)
>>> output = kornia.homography_warp(input, homography, (32, 32))
depth_warp(pinhole_dst: kornia.geometry.camera.pinhole.PinholeCamera, pinhole_src: kornia.geometry.camera.pinhole.PinholeCamera, depth_src: torch.Tensor, patch_dst: torch.Tensor, height: int, width: int, align_corners: bool = True)[source]

Function that warps a tensor from destination frame to reference given the depth in the reference frame.

See DepthWarper for details.

Example

>>> # pinhole camera models
>>> pinhole_dst = kornia.PinholeCamera(...)
>>> pinhole_src = kornia.PinholeCamera(...)
>>> # warp the destination frame to reference by depth
>>> depth_src = torch.ones(1, 1, 32, 32)  # Nx1xHxW
>>> image_dst = torch.rand(1, 3, 32, 32)  # NxCxHxW
>>> image_src = kornia.depth_warp(pinhole_dst, pinhole_src,
...     depth_src, image_dst, height, width)  # NxCxHxW
warp_grid(grid: torch.Tensor, src_homo_dst: torch.Tensor) → torch.Tensor[source]

Compute the coordinate grid warped by the homography/homographies.

Parameters
  • grid (torch.Tensor) – the unwarped coordinate grid of shape \((1, H, W, 2)\).

  • src_homo_dst (torch.Tensor) – Homography or homographies (stacked) to transform all points in the grid. Shape of the homography has to be \((1, 3, 3)\) or \((N, 1, 3, 3)\).

Returns

the transformed grid of shape \((N, H, W, 2)\).

Return type

torch.Tensor
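
A minimal sketch of a direct call, assuming warp_grid is importable from kornia.geometry.warp as listed on this page and that kornia.utils.create_meshgrid is used to build the normalized input grid:

>>> grid = kornia.utils.create_meshgrid(32, 32)  # 1xHxWx2, normalized coordinates
>>> homography = torch.eye(3).view(1, 3, 3)
>>> grid_warped = kornia.geometry.warp.warp_grid(grid, homography)  # 1xHxWx2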

normalize_homography(dst_pix_trans_src_pix: torch.Tensor, dsize_src: Tuple[int, int], dsize_dst: Tuple[int, int]) → torch.Tensor[source]

Normalize a given homography in pixels to [-1, 1].

Parameters
  • dst_pix_trans_src_pix (torch.Tensor) – homography/ies from source to destination to be normalized, of shape \((B, 3, 3)\).

  • dsize_src (tuple) – size of the source image (height, width).

  • dsize_dst (tuple) – size of the destination image (height, width).

Returns

the normalized homography of shape \((B, 3, 3)\).

Return type

torch.Tensor
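
A hedged example, assuming normalize_homography is importable from kornia.geometry.warp as listed on this page; the identity homography is purely illustrative:

>>> homo_pix = torch.eye(3).view(1, 3, 3)  # Bx3x3 homography in pixel coordinates
>>> homo_norm = kornia.geometry.warp.normalize_homography(homo_pix, (32, 32), (32, 32))
>>> # homo_norm is Bx3x3 and maps [-1, 1] source coordinates to [-1, 1] destination coordinates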

normal_transform_pixel(height: int, width: int) → torch.Tensor[source]

Compute the normalization matrix from image size in pixels to [-1, 1].

Parameters
  • height (int) – image height.

  • width (int) – image width.

Returns

normalized transform with shape \((1, 3, 3)\).

Return type

torch.Tensor
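
A short sketch of the expected behavior, assuming the usual convention that pixel (0, 0) maps to (-1, -1) and pixel (width - 1, height - 1) maps to (1, 1):

>>> tr = kornia.geometry.warp.normal_transform_pixel(32, 32)  # 1x3x3
>>> corners = torch.tensor([[[0., 0., 1.], [31., 31., 1.]]])  # 1x2x3 homogeneous pixel points
>>> corners_norm = (tr @ corners.transpose(1, 2)).transpose(1, 2)  # approx. (-1, -1, 1) and (1, 1, 1)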

normal_transform_pixel3d(depth: int, height: int, width: int) → torch.Tensor[source]

Compute the normalization matrix from volume size (depth, height, width) in pixels to [-1, 1].

Parameters
  • depth (int) – image depth.

  • height (int) – image height.

  • width (int) – image width.

Returns

normalized transform with shape \((1, 4, 4)\).

Return type

torch.Tensor
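
A minimal sketch, assuming the same convention extended to a volume and an (x, y, z) point ordering:

>>> tr3d = kornia.geometry.warp.normal_transform_pixel3d(8, 32, 32)  # 1x4x4
>>> # assumed to map (x, y, z) = (0, 0, 0) to (-1, -1, -1) and (31, 31, 7) to (1, 1, 1)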