# kornia.geometry.depth

depth_to_3d(depth: torch.Tensor, camera_matrix: torch.Tensor) → torch.Tensor[source]

Compute a 3D point per pixel given its depth value and the camera intrinsics.

Parameters
• depth (torch.Tensor) – image tensor containing a depth value per pixel.

• camera_matrix (torch.Tensor) – tensor containing the camera intrinsics.

Shape:
• Input: $$(B, 1, H, W)$$ and $$(B, 3, 3)$$

• Output: $$(B, 3, H, W)$$

Returns

tensor with a 3d point per pixel of the same resolution as the input.

Return type

torch.Tensor
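As an illustration of the unprojection this function performs, here is a minimal NumPy sketch for a single unbatched depth map (the function name and the single-image layout are simplifications for this example; the actual kornia implementation operates on batched torch tensors):

```python
import numpy as np

def unproject_depth(depth: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Back-project a (H, W) depth map to (3, H, W) camera-frame points."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))       # pixel coordinate grid
    pix = np.stack([u, v, np.ones_like(u)], axis=0)      # (3, H, W) homogeneous pixels
    rays = np.linalg.inv(K) @ pix.reshape(3, -1)         # normalized camera rays
    points = rays * depth.reshape(1, -1)                 # scale each ray by its depth
    return points.reshape(3, H, W)

# Hypothetical pinhole intrinsics: fx = fy = 100, principal point at (2, 2)
K = np.array([[100.0, 0.0, 2.0],
              [0.0, 100.0, 2.0],
              [0.0, 0.0, 1.0]])
depth = np.full((4, 4), 5.0)                             # constant 5 m depth
pts = unproject_depth(depth, K)
print(pts.shape)  # (3, 4, 4)
```

For a constant depth map the z-coordinate of every 3D point equals the depth value, and the point under the principal point lies on the optical axis (x = y = 0).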

depth_to_normals(depth: torch.Tensor, camera_matrix: torch.Tensor) → torch.Tensor[source]

Compute the surface normal per pixel.

Parameters
• depth (torch.Tensor) – image tensor containing a depth value per pixel.

• camera_matrix (torch.Tensor) – tensor containing the camera intrinsics.

Shape:
• Input: $$(B, 1, H, W)$$ and $$(B, 3, 3)$$

• Output: $$(B, 3, H, W)$$

Returns

tensor with a surface normal vector per pixel of the same resolution as the input.

Return type

torch.Tensor
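One common way to derive normals from depth, sketched below in NumPy for a single unbatched image, is to back-project the depth map to 3D points, take finite-difference tangent vectors along the image axes, and cross them (the function name is hypothetical and the actual kornia implementation, which works on batched torch tensors, may compute the gradients differently):

```python
import numpy as np

def depth_to_normals_sketch(depth: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Estimate (3, H, W) unit surface normals from a (H, W) depth map."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1)
    pts = ((np.linalg.inv(K) @ pix) * depth.reshape(1, -1)).reshape(3, H, W)
    # Finite-difference tangent vectors along image x and y
    dx = np.gradient(pts, axis=2)          # (3, H, W)
    dy = np.gradient(pts, axis=1)          # (3, H, W)
    n = np.cross(dx, dy, axis=0)           # normal = cross product of tangents
    norm = np.linalg.norm(n, axis=0, keepdims=True)
    return n / np.clip(norm, 1e-8, None)   # normalize to unit length

K = np.array([[100.0, 0.0, 2.0],
              [0.0, 100.0, 2.0],
              [0.0, 0.0, 1.0]])
normals = depth_to_normals_sketch(np.full((4, 4), 5.0), K)
print(normals.shape)  # (3, 4, 4)
```

A constant depth map is a fronto-parallel plane, so every normal should point along the optical axis.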

warp_frame_depth(image_src: torch.Tensor, depth_dst: torch.Tensor, src_trans_dst: torch.Tensor, camera_matrix: torch.Tensor) → torch.Tensor[source]

Warp a tensor from the source frame to the destination frame using the depth in the destination frame.

Compute 3D points from the depth map, transform them with the given transformation matrix, then project the point cloud back onto the image plane.

Parameters
• image_src (torch.Tensor) – image tensor in the source frame with shape (BxDxHxW).

• depth_dst (torch.Tensor) – depth tensor in the destination frame with shape (Bx1xHxW).

• src_trans_dst (torch.Tensor) – transformation matrix from destination to source with shape (Bx4x4).

• camera_matrix (torch.Tensor) – tensor containing the camera intrinsics with shape (Bx3x3).

Returns

the warped tensor in the source frame with shape (BxDxHxW).

Return type

torch.Tensor
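The core of the warp is computing, for each destination pixel, the source-image coordinate at which to sample. A minimal NumPy sketch of that grid computation for a single unbatched image follows (the function name is hypothetical; kornia additionally samples `image_src` at these coordinates, e.g. bilinearly, and works on batched torch tensors):

```python
import numpy as np

def warp_grid_from_depth(depth_dst: np.ndarray,
                         src_trans_dst: np.ndarray,
                         K: np.ndarray) -> np.ndarray:
    """Per-pixel (u, v) source-image sampling coordinates from destination depth."""
    H, W = depth_dst.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1)
    # Back-project destination pixels to 3D points in the destination frame
    pts_dst = (np.linalg.inv(K) @ pix) * depth_dst.reshape(1, -1)
    # Move the points into the source frame with the 4x4 rigid transform
    pts_h = np.vstack([pts_dst, np.ones((1, pts_dst.shape[1]))])  # homogeneous (4, N)
    pts_src = (src_trans_dst @ pts_h)[:3]
    # Project onto the source image plane (perspective division by z)
    proj = K @ pts_src
    uv = proj[:2] / np.clip(proj[2:], 1e-8, None)
    return uv.reshape(2, H, W)

K = np.array([[100.0, 0.0, 2.0],
              [0.0, 100.0, 2.0],
              [0.0, 0.0, 1.0]])
grid = warp_grid_from_depth(np.full((4, 4), 5.0), np.eye(4), K)
print(grid.shape)  # (2, 4, 4)
```

With an identity transform the sampling grid reduces to the pixel grid itself, i.e. the warp is a no-op, which is a handy sanity check.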