kornia.geometry.depth¶
- kornia.geometry.depth.depth_from_disparity(disparity, baseline, focal)¶
Compute depth from disparity.
- Parameters:
disparity (Tensor) – tensor containing the disparity with shape \((*, H, W)\).
baseline (float | Tensor) – distance between the two camera centers.
focal (float | Tensor) – focal length of the camera.
- Return type:
Tensor
- Returns:
Depth map of the shape \((*, H, W)\).
Example
>>> disparity = torch.rand(4, 1, 4, 4)
>>> baseline = torch.rand(1)
>>> focal = torch.rand(1)
>>> depth_from_disparity(disparity, baseline, focal).shape
torch.Size([4, 1, 4, 4])
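Under the usual stereo model the three quantities are related by depth = focal * baseline / disparity. The short sketch below illustrates that relation with concrete values; the exact formula (possibly up to a small stabilising epsilon) is an assumption about the implementation:
>>> import torch
>>> from kornia.geometry.depth import depth_from_disparity
>>> baseline, focal = torch.tensor([0.1]), torch.tensor([500.0])  # 0.1 m baseline, 500 px focal length
>>> disparity = torch.full((1, 1, 4, 4), 25.0)                    # 25 px disparity everywhere
>>> depth = depth_from_disparity(disparity, baseline, focal)      # expected ~ 500 * 0.1 / 25 = 2.0 m
>>> depth.shape
torch.Size([1, 1, 4, 4])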
- kornia.geometry.depth.depth_to_3d(depth, camera_matrix, normalize_points=False)¶
Compute a 3d point per pixel given its depth value and the camera intrinsics.
Note
An alternative implementation, depth_to_3d_v2, does not require the creation of a meshgrid. In the future, only that implementation will be supported.
- Parameters:
depth (Tensor) – image tensor containing a depth value per pixel with shape \((B, 1, H, W)\).
camera_matrix (Tensor) – tensor containing the camera intrinsics with shape \((B, 3, 3)\).
normalize_points (bool, optional) – whether to normalise the pointcloud. This must be set to True when the depth is represented as the Euclidean ray length from the camera position. Default: False
- Return type:
Tensor
- Returns:
tensor with a 3d point per pixel of the same resolution as the input \((B, 3, H, W)\).
Example
>>> depth = torch.rand(1, 1, 4, 4)
>>> K = torch.eye(3)[None]
>>> depth_to_3d(depth, K).shape
torch.Size([1, 3, 4, 4])
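The unprojection behind this function is the standard pinhole model: a pixel (u, v) with depth d and intrinsics fx, fy, cx, cy maps to X = (u - cx) d / fx, Y = (v - cy) d / fy, Z = d. A small sketch of that relation for one pixel; the exact arithmetic is an assumption about the implementation:
>>> import torch
>>> from kornia.geometry.depth import depth_to_3d
>>> K = torch.tensor([[[2.0, 0.0, 1.0],
...                    [0.0, 2.0, 1.0],
...                    [0.0, 0.0, 1.0]]])            # fx = fy = 2, cx = cy = 1
>>> depth = torch.full((1, 1, 4, 4), 3.0)            # constant depth of 3.0
>>> points = depth_to_3d(depth, K)                   # (B, 3, H, W), channels are (X, Y, Z)
>>> expected = torch.tensor([(3 - 1) * 3.0 / 2, (2 - 1) * 3.0 / 2, 3.0])
>>> ok = torch.allclose(points[0, :, 2, 3], expected)  # pixel (u, v) = (3, 2); expected True under this model
>>> points.shape
torch.Size([1, 3, 4, 4])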
- kornia.geometry.depth.depth_to_3d_v2(depth, camera_matrix, normalize_points=False, xyz_grid=None)¶
Compute a 3d point per pixel given its depth value and the camera intrinsics.
Note
This is an alternative implementation of kornia.geometry.depth.depth_to_3d() that does not require the creation of a meshgrid.
- Parameters:
depth (Tensor) – image tensor containing a depth value per pixel with shape \((*, H, W)\).
camera_matrix (Tensor) – tensor containing the camera intrinsics with shape \((*, 3, 3)\).
normalize_points (bool, optional) – whether to normalise the pointcloud. This must be set to True when the depth is represented as the Euclidean ray length from the camera position. Default: False
xyz_grid (Tensor, optional) – pre-computed unprojected grid with shape \((*, H, W, 3)\), e.g. from kornia.geometry.depth.unproject_meshgrid(), to avoid rebuilding it on every call. Default: None
- Return type:
Tensor
- Returns:
tensor with a 3d point per pixel of the same resolution as the input \((*, H, W, 3)\).
Example
>>> depth = torch.rand(4, 4)
>>> K = torch.eye(3)
>>> depth_to_3d_v2(depth, K).shape
torch.Size([4, 4, 3])
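Unlike depth_to_3d(), the depth here carries no explicit channel dimension and the xyz coordinates come last, so arbitrary leading batch dimensions (the * in the shapes above) pass straight through, as sketched below under the documented shape conventions:
>>> import torch
>>> from kornia.geometry.depth import depth_to_3d_v2
>>> depth = torch.rand(2, 5, 4, 4)                 # (*, H, W) with * = (2, 5)
>>> K = torch.eye(3).repeat(2, 5, 1, 1)            # (*, 3, 3), shared intrinsics
>>> depth_to_3d_v2(depth, K).shape
torch.Size([2, 5, 4, 4, 3])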
- kornia.geometry.depth.unproject_meshgrid(height, width, camera_matrix, normalize_points=False, device=None, dtype=None)¶
Unproject the image meshgrid (one 3d point per pixel) given the camera intrinsics.
Tip
This function should be used in conjunction with kornia.geometry.depth.depth_to_3d_v2() to cache the meshgrid computation when warping multiple frames with the same camera intrinsics.
- Parameters:
height (int) – height of the image grid.
width (int) – width of the image grid.
camera_matrix (Tensor) – tensor containing the camera intrinsics with shape \((*, 3, 3)\).
normalize_points (bool, optional) – whether to normalise the pointcloud. This must be set to True when the depth is represented as the Euclidean ray length from the camera position. Default: False
device (optional) – device on which to create the grid. Default: None
dtype (optional) – dtype of the created grid. Default: None
- Return type:
Tensor
- Returns:
tensor with a 3d point per pixel with shape \((*, H, W, 3)\).
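A sketch of the caching pattern described in the tip, assuming the cached grid is passed to kornia.geometry.depth.depth_to_3d_v2() through its xyz_grid argument:
>>> import torch
>>> from kornia.geometry.depth import depth_to_3d_v2, unproject_meshgrid
>>> K = torch.eye(3)                                 # intrinsics shared by all frames
>>> grid = unproject_meshgrid(4, 4, K)               # unproject the pixel grid once
>>> frames = [torch.rand(4, 4) for _ in range(3)]    # several depth frames, same camera
>>> clouds = [depth_to_3d_v2(d, K, xyz_grid=grid) for d in frames]
>>> clouds[0].shape
torch.Size([4, 4, 3])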
- kornia.geometry.depth.depth_to_normals(depth, camera_matrix, normalize_points=False)¶
Compute the surface normal per pixel.
- Parameters:
depth (Tensor) – image tensor containing a depth value per pixel with shape \((B, 1, H, W)\).
camera_matrix (Tensor) – tensor containing the camera intrinsics with shape \((B, 3, 3)\).
normalize_points (bool, optional) – whether to normalize the pointcloud. This must be set to True when the depth is represented as the Euclidean ray length from the camera position. Default: False
- Return type:
Tensor
- Returns:
tensor with a surface normal vector per pixel of the same resolution as the input \((B, 3, H, W)\).
Example
>>> depth = torch.rand(1, 1, 4, 4)
>>> K = torch.eye(3)[None]
>>> depth_to_normals(depth, K).shape
torch.Size([1, 3, 4, 4])
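Surface normals are commonly derived by unprojecting the depth to a point cloud and crossing its spatial derivatives. The rough sketch below uses plain forward differences to illustrate the idea; it is a reference, not necessarily kornia's exact scheme:
>>> import torch
>>> from kornia.geometry.depth import depth_to_3d
>>> depth, K = torch.rand(1, 1, 4, 4), torch.eye(3)[None]
>>> points = depth_to_3d(depth, K)                               # (B, 3, H, W) point cloud
>>> du, dv = torch.zeros_like(points), torch.zeros_like(points)
>>> du[..., :, :-1] = points[..., :, 1:] - points[..., :, :-1]   # derivative along width (u)
>>> dv[..., :-1, :] = points[..., 1:, :] - points[..., :-1, :]   # derivative along height (v)
>>> normals = torch.nn.functional.normalize(torch.cross(du, dv, dim=1), dim=1, eps=1e-6)
>>> normals.shape
torch.Size([1, 3, 4, 4])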
- kornia.geometry.depth.depth_from_plane_equation(plane_normals, plane_offsets, points_uv, camera_matrix, eps=1e-8)¶
Compute depth values from plane equations and pixel coordinates.
- Parameters:
plane_normals (Tensor) – Plane normal vectors of shape (B, 3).
plane_offsets (Tensor) – Plane offsets of shape (B, 1).
points_uv (Tensor) – Pixel coordinates of shape (B, N, 2).
camera_matrix (Tensor) – Camera intrinsic matrix of shape (B, 3, 3).
eps (float, optional) – Small constant to avoid division by zero. Default: 1e-8
- Returns:
Computed depth values at the given pixels, shape (B, N).
- Return type:
Tensor
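The relation behind this function is the ray-plane intersection: for a plane n·X = d and a pixel ray r = K⁻¹ (u, v, 1)ᵀ, the depth along the ray is z = d / (n·r). The sketch below reproduces that derivation on the documented shapes using only torch; it is a reference under the usual pinhole conventions, and the library's exact epsilon handling is an assumption:
>>> import torch
>>> B, N = 2, 5
>>> plane_normals = torch.nn.functional.normalize(torch.rand(B, 3), dim=-1)   # (B, 3), plane normals n
>>> plane_offsets = torch.rand(B, 1) + 1.0                                    # (B, 1), offsets d with n . X = d
>>> points_uv = torch.rand(B, N, 2) * 4                                       # (B, N, 2) pixel coordinates
>>> K = torch.eye(3).repeat(B, 1, 1)                                          # (B, 3, 3) intrinsics
>>> ones = torch.ones(B, N, 1)
>>> rays = torch.cat([points_uv, ones], dim=-1) @ torch.linalg.inv(K).transpose(1, 2)  # (B, N, 3), K^-1 [u, v, 1]
>>> depth = plane_offsets / ((rays * plane_normals[:, None, :]).sum(-1) + 1e-8)        # (B, N), z = d / (n . r)
>>> depth.shape
torch.Size([2, 5])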
- kornia.geometry.depth.warp_frame_depth(image_src, depth_dst, src_trans_dst, camera_matrix, normalize_points=False)¶
Warp a tensor from a source frame to a destination frame using the depth in the destination frame.
Compute 3d points from the depth, transform them using the given transformation, then project the point cloud to an image plane.
- Parameters:
image_src (Tensor) – image tensor in the source frame with shape \((B, D, H, W)\).
depth_dst (Tensor) – depth tensor in the destination frame with shape \((B, 1, H, W)\).
src_trans_dst (Tensor) – transformation matrix from destination to source with shape \((B, 4, 4)\).
camera_matrix (Tensor) – tensor containing the camera intrinsics with shape \((B, 3, 3)\).
normalize_points (bool, optional) – whether to normalize the pointcloud. This must be set to True when the depth is represented as the Euclidean ray length from the camera position. Default: False
- Return type:
Tensor
- Returns:
the warped tensor in the source frame with shape \((B, D, H, W)\).
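This entry ships no doctest, so here is a minimal shape-level sketch using an identity destination-to-source transform (illustrative values; with the identity transform the output is expected to essentially reproduce the source image):
>>> import torch
>>> from kornia.geometry.depth import warp_frame_depth
>>> image_src = torch.rand(1, 3, 4, 4)       # (B, D, H, W) source image
>>> depth_dst = torch.ones(1, 1, 4, 4)       # (B, 1, H, W) destination depth
>>> src_trans_dst = torch.eye(4)[None]       # (B, 4, 4) identity: frames coincide
>>> K = torch.tensor([[[1.0, 0.0, 2.0],
...                    [0.0, 1.0, 2.0],
...                    [0.0, 0.0, 1.0]]])    # (B, 3, 3) simple intrinsics
>>> warp_frame_depth(image_src, depth_dst, src_trans_dst, K).shape
torch.Size([1, 3, 4, 4])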