Customized Integration
You can prototype a new RGB-D volumetric reconstruction algorithm with additional properties (e.g. semantic labels) while maintaining reasonable performance. An example can be found at examples/python/t_reconstruction_system/integrate_custom.py.
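For instance, a custom property can be declared when the voxel block grid is constructed. The following is a minimal sketch, not part of the example script, that appends a per-voxel label attribute to the usual TSDF properties; the label dtype and channel count are illustrative assumptions:
import open3d as o3d
import open3d.core as o3c

device = o3c.Device('CUDA:0')  # fall back to 'CPU:0' if CUDA is unavailable
# Hypothetical setup: a 1-channel Int32 'label' attribute is added
# alongside the standard tsdf/weight/color attributes.
vbg = o3d.t.geometry.VoxelBlockGrid(
    attr_names=('tsdf', 'weight', 'color', 'label'),
    attr_dtypes=(o3c.float32, o3c.float32, o3c.float32, o3c.int32),
    attr_channels=((1), (1), (3), (1)),
    voxel_size=3.0 / 512,
    block_resolution=16,
    block_count=50000,
    device=device)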
Activation
The frustum block selection remains the same, but we then manually activate these blocks and obtain their buffer indices in the hash map (see /tutorial/core/hashmap.ipynb):
# examples/python/t_reconstruction_system/integrate_custom.py
# Get active frustum block coordinates from input
frustum_block_coords = vbg.compute_unique_block_coordinates(
    depth, intrinsic, extrinsic, config.depth_scale, config.depth_max)
# Activate them in the underlying hash map (may have been inserted)
vbg.hashmap().activate(frustum_block_coords)

# Find buf indices in the underlying engine
buf_indices, masks = vbg.hashmap().find(frustum_block_coords)
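Here all queried blocks have just been activated, so every lookup should succeed; when that is not guaranteed, the returned masks can be used to keep only the successful queries (a defensive sketch, not a line from the example script):
# Keep only the buffer indices whose lookups succeeded; masks is a Bool tensor.
buf_indices = buf_indices[masks]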
Voxel Indices
We can then unroll the voxel indices of these blocks into a flattened array, along with their corresponding voxel coordinates.
# examples/python/t_reconstruction_system/integrate_custom.py
voxel_coords, voxel_indices = vbg.voxel_coordinates_and_flattened_indices(
    buf_indices)
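For reference, with N activated blocks and the default block resolution of 16, the call above yields N x 16^3 voxel entries: voxel_coords holds their Float32 world-space coordinates, and voxel_indices the flattened Int64 indices into the attribute buffers. A quick sanity check (illustrative, not part of the example script):
# Each activated block contributes block_resolution**3 voxels (16**3 = 4096).
print(voxel_coords.shape)   # per-voxel Float32 world coordinates, 3 channels
print(voxel_indices.shape)  # flattened Int64 indices into the attribute buffers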
This completes the preparation. We can now perform customized geometry transformations through the Tensor interface, in the same fashion as we would in NumPy or PyTorch.
Geometry transformation
We first transform the voxel coordinates to the frame’s coordinate system, project them to the image space, and filter out-of-bound correspondences:
# examples/python/t_reconstruction_system/integrate_custom.py
extrinsic_dev = extrinsic.to(device, o3c.float32)
xyz = extrinsic_dev[:3, :3] @ voxel_coords.T() + extrinsic_dev[:3, 3:]

intrinsic_dev = intrinsic.to(device, o3c.float32)
uvd = intrinsic_dev @ xyz
d = uvd[2]
u = (uvd[0] / d).round().to(o3c.int64)
v = (uvd[1] / d).round().to(o3c.int64)
# Timing instrumentation from the example script
o3d.core.cuda.synchronize()
end = time.time()

start = time.time()
mask_proj = (d > 0) & (u >= 0) & (v >= 0) & (u < depth.columns) & (
    v < depth.rows)

v_proj = v[mask_proj]
u_proj = u[mask_proj]
d_proj = d[mask_proj]
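In equation form, the snippet above applies the standard pinhole camera model to each voxel center (a restatement of the code, written in LaTeX):
\[
\mathbf{x}_c = R\,\mathbf{x}_w + \mathbf{t}, \qquad
\begin{pmatrix} u' \\ v' \\ d \end{pmatrix} = K \mathbf{x}_c, \qquad
u = \operatorname{round}(u'/d), \quad
v = \operatorname{round}(v'/d)
\]
where R and t are the rotation and translation of the extrinsic matrix, K is the camera intrinsic, and candidates with d <= 0 or (u, v) outside the image bounds are masked out.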
Customized integration
With the data association, we can now perform the integration. In this example, we show the conventional TSDF integration written in vectorized Python code:

- Read the associated RGB-D properties from the color/depth images at the associated (u, v) indices;
- Read the voxels from the voxel buffer arrays (vbg.attribute) at the masked voxel_indices;
- Perform in-place modification.
# examples/python/t_reconstruction_system/integrate_custom.py
depth_readings = depth.as_tensor()[v_proj, u_proj, 0].to(
    o3c.float32) / config.depth_scale
sdf = depth_readings - d_proj

mask_inlier = (depth_readings > 0) \
    & (depth_readings < config.depth_max) \
    & (sdf >= -trunc)

sdf[sdf >= trunc] = trunc
sdf = sdf / trunc
weight = vbg.attribute('weight').reshape((-1, 1))
tsdf = vbg.attribute('tsdf').reshape((-1, 1))

valid_voxel_indices = voxel_indices[mask_proj][mask_inlier]
w = weight[valid_voxel_indices]
wp = w + 1

tsdf[valid_voxel_indices] \
    = (tsdf[valid_voxel_indices] * w +
       sdf[mask_inlier].reshape(w.shape)) / wp
if config.integrate_color:
    color = o3d.t.io.read_image(color_file_names[i]).to(device)
    color_readings = color.as_tensor()[v_proj, u_proj].to(o3c.float32)

    color = vbg.attribute('color').reshape((-1, 3))
    color[valid_voxel_indices] \
        = (color[valid_voxel_indices] * w +
           color_readings[mask_inlier]) / wp
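As an illustration of a custom property, a per-voxel semantic label could be updated in the same place. The following is a hypothetical sketch, not part of integrate_custom.py: it assumes vbg was constructed with a 1-channel Int32 'label' attribute and that label_image is a segmentation image aligned with the depth frame:
# Hypothetical: read labels at the associated pixels and write them in-place.
label = vbg.attribute('label').reshape((-1, 1))
label_readings = label_image.as_tensor()[v_proj, u_proj].to(o3c.int32)
# A simple (assumed) update rule: keep the most recent observation.
label[valid_voxel_indices] = label_readings[mask_inlier].reshape((-1, 1))
More elaborate fusion (e.g. per-class counters or probability averaging) would follow the same masked, vectorized pattern as the TSDF update above.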
You may follow the example and adapt it to your customized properties. Open3D supports conversion from and to PyTorch tensors without any memory copy; see /tutorial/core/tensor.ipynb#PyTorch-I/O-with-DLPack-memory-map. This can be used to leverage PyTorch's capabilities such as automatic differentiation and other operators.
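As a minimal sketch of that zero-copy interop (following the linked tensor tutorial):
import torch
import torch.utils.dlpack
import open3d.core as o3c

o3c_tensor = o3c.Tensor([1.0, 2.0, 3.0])
# Open3D -> PyTorch, sharing the same memory.
th_tensor = torch.utils.dlpack.from_dlpack(o3c_tensor.to_dlpack())
th_tensor[0] = 100.0  # the change is visible from the Open3D tensor as well
# PyTorch -> Open3D, still without a copy.
o3c_again = o3c.Tensor.from_dlpack(torch.utils.dlpack.to_dlpack(th_tensor))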