Register fragments

Once the fragments of the scene are created, the next step is to align them in a global space.

Input arguments

This script runs with python run_system.py [config] --register. In [config], ["path_dataset"] should have a subfolder fragments that stores the fragments as .ply files and a pose graph for each fragment as a .json file.
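
For reference, the snippet below sketches the configuration keys read by the code excerpts in this section. The values, including the template paths, are illustrative placeholders rather than authoritative defaults.

# Illustrative subset of the [config] keys used by register_fragments.py.
config = {
    "path_dataset": "dataset/",  # must contain the fragments/ subfolder
    "voxel_size": 0.05,  # drives downsampling and distance thresholds
    "template_fragment_posegraph_optimized": "fragments/fragment_optimized_%03d.json",
    "template_global_posegraph": "scene/global_registration.json",
    "template_global_posegraph_optimized": "scene/global_registration_optimized.json",
    "global_registration": "ransac",  # or "fgr"
    "preference_loop_closure_registration": 5.0,
    "python_multi_threading": True,
    "debug_mode": False,
}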

The main function runs make_posegraph_for_scene and optimize_posegraph_for_scene. The first function performs pairwise registration between fragments; the second performs multiway registration.
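
The sketch below shows how these two steps might be chained. It assumes the example directory is on the Python path and uses a plain glob in place of the file-listing helper used by the actual script.

import glob
from os.path import join

from register_fragments import make_posegraph_for_scene
from optimize_posegraph import optimize_posegraph_for_scene


def run_registration(config):
    # Fragments written by the "Make fragments" stage.
    ply_file_names = sorted(
        glob.glob(join(config["path_dataset"], "fragments", "*.ply")))
    # Pairwise registration of all fragment pairs -> scene pose graph.
    make_posegraph_for_scene(ply_file_names, config)
    # Multiway registration: optimize and prune the scene pose graph.
    optimize_posegraph_for_scene(config["path_dataset"], config)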

Preprocess point cloud

# examples/python/reconstruction_system/register_fragments.py

def preprocess_point_cloud(pcd, config):
    voxel_size = config["voxel_size"]
    pcd_down = pcd.voxel_down_sample(voxel_size)
    pcd_down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 2.0,
                                             max_nn=30))
    pcd_fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd_down,
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 5.0,
                                             max_nn=100))
    return (pcd_down, pcd_fpfh)

This function downsamples a point cloud to make it sparser and regularly distributed. Normals and FPFH features are precomputed. See /tutorial/geometry/pointcloud.ipynb#voxel-downsampling, /tutorial/geometry/pointcloud.ipynb#vertex-normal-estimation, and /tutorial/pipelines/global_registration.ipynb#extract-geometric-feature for more details.
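
As a usage sketch, assuming the config dictionary above and a fragment file name that is purely illustrative, preprocessing a single fragment looks like this:

import open3d as o3d

# Load one fragment produced by the previous stage (path is illustrative).
pcd = o3d.io.read_point_cloud("dataset/fragments/fragment_000.ply")

# Downsample and precompute normals + FPFH features for registration.
pcd_down, pcd_fpfh = preprocess_point_cloud(pcd, config)
print(pcd_down)  # downsampled point cloud
print(pcd_fpfh.data.shape)  # (33, N): one 33-dim FPFH descriptor per point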

Compute initial registration

# examples/python/reconstruction_system/register_fragments.py

def compute_initial_registration(s, t, source_down, target_down, source_fpfh,
                                 target_fpfh, path_dataset, config):

    if t == s + 1:  # odometry case
        print("Using RGBD odometry")
        pose_graph_frag = o3d.io.read_pose_graph(
            join(path_dataset,
                 config["template_fragment_posegraph_optimized"] % s))
        n_nodes = len(pose_graph_frag.nodes)
        transformation_init = np.linalg.inv(pose_graph_frag.nodes[n_nodes -
                                                                  1].pose)
        (transformation, information) = \
                multiscale_icp(source_down, target_down,
                [config["voxel_size"]], [50], config, transformation_init)
    else:  # loop closure case
        (success, transformation,
         information) = register_point_cloud_fpfh(source_down, target_down,
                                                  source_fpfh, target_fpfh,
                                                  config)
        if not success:
            print("No reasonable solution. Skip this pair")
            return (False, np.identity(4), np.zeros((6, 6)))
    print(transformation)

    if config["debug_mode"]:
        draw_registration_result(source_down, target_down, transformation)
    return (True, transformation, information)

This function computes a rough alignment between two fragments. If the fragments are neighboring fragments, the rough alignment is determined by aggregating the RGBD odometry obtained from Make fragments. Otherwise, register_point_cloud_fpfh is called to perform global registration. Note that global registration is less reliable according to [Choi2015].
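
A hedged usage sketch, assuming both fragments have already been loaded and passed through preprocess_point_cloud (variable names are illustrative):

# Rough alignment of fragment pair (s, t).
s, t = 0, 3  # a non-adjacent pair, so the loop closure branch is taken
(success, transformation, information) = compute_initial_registration(
    s, t, source_down, target_down, source_fpfh, target_fpfh,
    config["path_dataset"], config)
if success:
    print("Initial transformation for pair (%d, %d):" % (s, t))
    print(transformation)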

Pairwise global registration

# examples/python/reconstruction_system/register_fragments.py

def register_point_cloud_fpfh(source, target, source_fpfh, target_fpfh, config):
    o3d.utility.set_verbosity_level(o3d.utility.VerbosityLevel.Debug)
    distance_threshold = config["voxel_size"] * 1.4
    if config["global_registration"] == "fgr":
        result = o3d.pipelines.registration.registration_fast_based_on_feature_matching(
            source, target, source_fpfh, target_fpfh,
            o3d.pipelines.registration.FastGlobalRegistrationOption(
                maximum_correspondence_distance=distance_threshold))
    if config["global_registration"] == "ransac":
        # Fallback to preset parameters that work better
        result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
            source, target, source_fpfh, target_fpfh, False, distance_threshold,
            o3d.pipelines.registration.TransformationEstimationPointToPoint(
                False), 4,
            [
                o3d.pipelines.registration.
                CorrespondenceCheckerBasedOnEdgeLength(0.9),
                o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(
                    distance_threshold)
            ],
            o3d.pipelines.registration.RANSACConvergenceCriteria(
                1000000, 0.999))
    if (result.transformation.trace() == 4.0):
        return (False, np.identity(4), np.zeros((6, 6)))
    information = o3d.pipelines.registration.get_information_matrix_from_point_clouds(
        source, target, distance_threshold, result.transformation)
    if information[5, 5] / min(len(source.points), len(target.points)) < 0.3:
        return (False, np.identity(4), np.zeros((6, 6)))
    return (True, result.transformation, information)

This function uses /tutorial/pipelines/global_registration.ipynb#RANSAC or /tutorial/pipelines/global_registration.ipynb#fast-global-registration for pairwise global registration.
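
The rejection test at the end of register_point_cloud_fpfh can be read as an overlap check: element [5, 5] of the information matrix counts the correspondences found within the distance threshold, so dividing it by the size of the smaller cloud approximates the overlap ratio, and pairs below 30% overlap are discarded. The helper below is a hypothetical restatement of that check in isolation:

def has_enough_overlap(source, target, transformation, distance_threshold,
                       min_overlap_ratio=0.3):
    # information[5, 5] equals the number of correspondences within
    # distance_threshold under the given transformation.
    information = o3d.pipelines.registration.get_information_matrix_from_point_clouds(
        source, target, distance_threshold, transformation)
    n_points = min(len(source.points), len(target.points))
    return information[5, 5] / n_points >= min_overlap_ratio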

Multiway registration

# examples/python/reconstruction_system/register_fragments.py

def update_posegraph_for_scene(s, t, transformation, information, odometry,
                               pose_graph):
    if t == s + 1:  # odometry case
        odometry = np.dot(transformation, odometry)
        odometry_inv = np.linalg.inv(odometry)
        pose_graph.nodes.append(
            o3d.pipelines.registration.PoseGraphNode(odometry_inv))
        pose_graph.edges.append(
            o3d.pipelines.registration.PoseGraphEdge(s,
                                                     t,
                                                     transformation,
                                                     information,
                                                     uncertain=False))
    else:  # loop closure case
        pose_graph.edges.append(
            o3d.pipelines.registration.PoseGraphEdge(s,
                                                     t,
                                                     transformation,
                                                     information,
                                                     uncertain=True))
    return (odometry, pose_graph)

This script uses the technique demonstrated in /tutorial/pipelines/multiway_registration.ipynb. The function update_posegraph_for_scene builds a pose graph for multiway registration of all fragments. Each graph node represents a fragment and its pose, which transforms the fragment geometry into the global space.
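
To make the node/edge convention concrete, here is a minimal standalone sketch with three fragments and placeholder transforms; only the uncertain flag distinguishes odometry edges from loop closure edges:

import numpy as np
import open3d as o3d

pose_graph = o3d.pipelines.registration.PoseGraph()
# One node per fragment; identity poses stand in for real odometry.
for _ in range(3):
    pose_graph.nodes.append(
        o3d.pipelines.registration.PoseGraphNode(np.identity(4)))
# Consecutive fragments: odometry edges, treated as reliable.
for s in range(2):
    pose_graph.edges.append(
        o3d.pipelines.registration.PoseGraphEdge(s, s + 1, np.identity(4),
                                                 np.identity(6),
                                                 uncertain=False))
# Non-consecutive fragments: loop closure edge, may be pruned later.
pose_graph.edges.append(
    o3d.pipelines.registration.PoseGraphEdge(0, 2, np.identity(4),
                                             np.identity(6), uncertain=True))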

Once a pose graph is built, the function optimize_posegraph_for_scene is called for multiway registration.

# examples/python/reconstruction_system/optimize_posegraph.py

def optimize_posegraph_for_scene(path_dataset, config):
    pose_graph_name = join(path_dataset, config["template_global_posegraph"])
    pose_graph_optimized_name = join(
        path_dataset, config["template_global_posegraph_optimized"])
    run_posegraph_optimization(pose_graph_name, pose_graph_optimized_name,
            max_correspondence_distance = config["voxel_size"] * 1.4,
            preference_loop_closure = \
            config["preference_loop_closure_registration"])
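
run_posegraph_optimization is defined in the same file; the sketch below approximates what it does with Open3D's global optimization API. The edge_prune_threshold value is illustrative rather than taken from the script.

def run_posegraph_optimization(pose_graph_name, pose_graph_optimized_name,
                               max_correspondence_distance,
                               preference_loop_closure):
    # Read the scene pose graph, optimize it with Levenberg-Marquardt,
    # and write the optimized graph back to disk.
    pose_graph = o3d.io.read_pose_graph(pose_graph_name)
    method = o3d.pipelines.registration.GlobalOptimizationLevenbergMarquardt()
    criteria = o3d.pipelines.registration.GlobalOptimizationConvergenceCriteria()
    option = o3d.pipelines.registration.GlobalOptimizationOption(
        max_correspondence_distance=max_correspondence_distance,
        edge_prune_threshold=0.25,
        preference_loop_closure=preference_loop_closure,
        reference_node=0)
    o3d.pipelines.registration.global_optimization(pose_graph, method, criteria,
                                                   option)
    o3d.io.write_pose_graph(pose_graph_optimized_name, pose_graph)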

Main registration loop

The function make_posegraph_for_scene below calls all the functions introduced above. The main workflow is: pairwise global registration -> multiway registration.

# examples/python/reconstruction_system/register_fragments.py

def make_posegraph_for_scene(ply_file_names, config):
    pose_graph = o3d.pipelines.registration.PoseGraph()
    odometry = np.identity(4)
    pose_graph.nodes.append(o3d.pipelines.registration.PoseGraphNode(odometry))

    n_files = len(ply_file_names)
    matching_results = {}
    for s in range(n_files):
        for t in range(s + 1, n_files):
            matching_results[s * n_files + t] = matching_result(s, t)

    if config["python_multi_threading"] == True:
        from joblib import Parallel, delayed
        import multiprocessing
        import subprocess
        MAX_THREAD = min(multiprocessing.cpu_count(),
                         max(len(matching_results), 1))
        results = Parallel(n_jobs=MAX_THREAD)(delayed(
            register_point_cloud_pair)(ply_file_names, matching_results[r].s,
                                       matching_results[r].t, config)
                                              for r in matching_results)
        for i, r in enumerate(matching_results):
            matching_results[r].success = results[i][0]
            matching_results[r].transformation = results[i][1]
            matching_results[r].information = results[i][2]
    else:
        for r in matching_results:
            (matching_results[r].success, matching_results[r].transformation,
                    matching_results[r].information) = \
                    register_point_cloud_pair(ply_file_names,
                    matching_results[r].s, matching_results[r].t, config)

    for r in matching_results:
        if matching_results[r].success:
            (odometry, pose_graph) = update_posegraph_for_scene(
                matching_results[r].s, matching_results[r].t,
                matching_results[r].transformation,
                matching_results[r].information, odometry, pose_graph)
    o3d.io.write_pose_graph(
        join(config["path_dataset"], config["template_global_posegraph"]),
        pose_graph)
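
To sanity-check the result, the written scene pose graph can be inspected directly; the node and edge counts should match the optimization log shown under Results below.

import open3d as o3d
from os.path import join

pose_graph = o3d.io.read_pose_graph(
    join(config["path_dataset"], config["template_global_posegraph"]))
n_loop_closures = sum(1 for e in pose_graph.edges if e.uncertain)
print("nodes:", len(pose_graph.nodes))  # one per fragment
print("edges:", len(pose_graph.edges))  # odometry + loop closure edges
print("loop closure edges:", n_loop_closures)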

Results

The pose graph optimization outputs the following messages:

[GlobalOptimizationLM] Optimizing PoseGraph having 14 nodes and 42 edges.
Line process weight : 55.885667
[Initial     ] residual : 7.791139e+04, lambda : 1.205976e+00
[Iteration 00] residual : 6.094275e+02, valid edges : 22, time : 0.001 sec.
[Iteration 01] residual : 4.526879e+02, valid edges : 22, time : 0.000 sec.
[Iteration 02] residual : 4.515039e+02, valid edges : 22, time : 0.000 sec.
[Iteration 03] residual : 4.514832e+02, valid edges : 22, time : 0.000 sec.
[Iteration 04] residual : 4.514825e+02, valid edges : 22, time : 0.000 sec.
Current_residual - new_residual < 1.000000e-06 * current_residual
[GlobalOptimizationLM] total time : 0.003 sec.
[GlobalOptimizationLM] Optimizing PoseGraph having 14 nodes and 35 edges.
Line process weight : 60.762800
[Initial     ] residual : 6.336097e+01, lambda : 1.324043e+00
[Iteration 00] residual : 6.334147e+01, valid edges : 22, time : 0.000 sec.
[Iteration 01] residual : 6.334138e+01, valid edges : 22, time : 0.000 sec.
Current_residual - new_residual < 1.000000e-06 * current_residual
[GlobalOptimizationLM] total time : 0.001 sec.
CompensateReferencePoseGraphNode : reference : 0

There are 14 fragments and 42 matched pairs among them. The first pass of pose graph optimization identifies 7 loop closure edges as false positives. After they are pruned, pose graph optimization runs again on the remaining 35 edges to achieve tight alignment.