A review of algorithms for filtering the 3D point cloud
Abstract
In recent years, the 3D point cloud has gained increasing attention as a new representation for objects.
However, the raw point cloud is often noisy and contains outliers.
Therefore, it is crucial to remove the noise and outliers from the point cloud while preserving its features, in particular fine details.
This paper presents a comprehensive analysis of the state-of-the-art methods for filtering point clouds.
The existing methods are categorized into seven classes based on their common and salient traits.
An experimental evaluation is also performed to demonstrate the robustness, effectiveness and computational efficiency of several methods widely used in practice.
1. Introduction
The 3D point cloud [1–3], a new primitive representation for objects, has become increasingly prevalent in many research fields [2], such as object recognition [4] and reconstruction [5,6], owing to its simplicity, flexibility and powerful representation capability.
In contrast to triangle meshes, a point cloud does not require storing or maintaining polygonal-mesh connectivity [7] or topological consistency [8].
Processing and manipulating point clouds can therefore achieve better performance at lower overhead.
These prominent advantages have made point cloud processing a hot research topic.
The rapid development of low-cost sensors, such as the Kinect [9–11] and time-of-flight cameras [5,12], makes it easy for a growing community to acquire point clouds.
The point clouds acquired with these sensors, however, inevitably suffer from noise contamination and contain outliers [13,14], due to the limitations of the sensors [5], the inherent noise of the acquisition device [15], and the lighting or reflective nature of surfaces or artifacts in the scene [16].
Therefore, it is necessary to perform filtering operations on raw point clouds to obtain accurate point clouds that are suitable for further processing.
In recent years, although a large number of methods contributing to 3D filtering have been proposed, most are devised for meshes and only a few approaches operate directly on point clouds.
In addition, there is no survey paper giving an insightful analysis of these filtering methods for point cloud.
Compared with the existing literature, the main contributions of this work are as follows: (i) To the best of our knowledge, this is the first review paper in the literature that focuses on algorithms for filtering 3D point clouds.
(ii) This paper provides readers with a comprehensive review of the state-of-the-art methods covered in earlier work.
(iii) A comparative summary of the traits of these methods is given in table form.
(iv) This paper carries out experiments comparing the performance of several widely used methods.
The remainder of this paper is organized as follows.
Section 2 presents an overview of filtering approaches for 3D point cloud.
Experimental results and discussion are then presented in Section 3.
Conclusions are drawn in Section 4.
2. Methods for filtering point cloud
Filtering is an area of intensive research and a crucial step in the processing pipeline of a wide range of applications.
The main filtering approaches for 3D point clouds can be categorized into the following seven groups, of which four (statistical-based, neighborhood-based, projection-based and PDEs-based filtering) follow the taxonomy of [17].
2.1. Statistical-based filtering techniques
In the context of point cloud filtering, many techniques adapt statistical concepts to the nature of the point cloud.
Schall et al. [18] filtered point cloud using a kernel based clustering approach.
They first accumulated local likelihoods $L_i$, computed at every point $p_i$, to define a likelihood function $L$ modeling the probability distribution of the noisy point cloud.
Next, they moved the points to positions of high probability using an iterative scheme motivated by the mean-shift technique, thereby smoothing the point cloud.
This method achieves effectiveness in filtering and robustness in outlier detection.
However, sharp features are not treated with emphasis in their method.
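For concreteness, the following Python sketch illustrates the core of such a mean-shift smoothing step: each point is iteratively moved toward a nearby mode of a kernel density estimate. This is only a simplified illustration, not Schall et al.'s implementation; the Gaussian kernel, the `bandwidth` value and the iteration count are assumed parameters.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_shift_smooth(points, bandwidth=0.05, n_iters=5):
    """Move each point toward a local mode of a Gaussian kernel density."""
    tree = cKDTree(points)
    smoothed = points.copy()
    for _ in range(n_iters):
        shifted = np.empty_like(smoothed)
        for i, p in enumerate(smoothed):
            # Neighbors within the kernel support contribute to the local likelihood.
            idx = tree.query_ball_point(p, r=3.0 * bandwidth)
            if not idx:
                shifted[i] = p
                continue
            nbrs = points[idx]
            w = np.exp(-np.sum((nbrs - p) ** 2, axis=1) / (2 * bandwidth ** 2))
            # Mean-shift update: weighted average of the contributing neighbors.
            shifted[i] = (w[:, None] * nbrs).sum(axis=0) / w.sum()
        smoothed = shifted
    return smoothed
```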
Narváez et al. [15] proposed a new weighted variant of the principal component analysis method for denoising point clouds, which assigns weighting factors in inverse proportion to each point's share of the summed distances to the mean.
These factors and the weighted mean are then used to estimate a weighted covariance matrix.
An eigen-analysis of this matrix yields, at each point, a fitting plane, spanned by the eigenvectors corresponding to the two largest eigenvalues, and a normal vector to this plane oriented along the eigenvector corresponding to the smallest eigenvalue.
These variations make the algorithm robust to noise and outliers.
In addition, the operator $p' = p + t\,\mathbf{n}_p$ [19] is applied to shift the mean along the normal direction to preserve shape features.
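A rough sketch of this weighted-PCA idea is given below. The inverse-distance weighting is one plausible reading of the description above, not Narváez et al.'s exact formulation.

```python
import numpy as np

def weighted_pca_denoise(p, neighbors):
    """Denoise one point by projecting it onto a weighted-PCA fitting plane."""
    mean = neighbors.mean(axis=0)
    d = np.linalg.norm(neighbors - mean, axis=1)
    w = 1.0 / (d + 1e-12)                  # farther from the mean -> smaller weight (assumed)
    w_mean = (w[:, None] * neighbors).sum(axis=0) / w.sum()
    diffs = neighbors - w_mean
    cov = (w[:, None] * diffs).T @ diffs / w.sum()   # weighted covariance matrix
    _, eigvec = np.linalg.eigh(cov)        # eigenvalues in ascending order
    normal = eigvec[:, 0]                  # smallest eigenvalue -> plane normal
    t = np.dot(w_mean - p, normal)         # signed distance from p to the plane
    return p + t * normal                  # the operator p' = p + t * n_p
```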
Jenke et al. [20] first employed Bayesian statistics for denoising point cloud.
They defined a measurement model $P(D|S)$, which specifies the probability distribution of the estimated point cloud $S$ agreeing with the measured data $D$.
Then, they defined three prior probabilities (density priors, smoothness priors and priors for sharp features) to form $P(S) = \frac{1}{Z} P_{density}(S)\, P_{smooth}(S)\, P_{discrete}(S) \cdot w(S)$, where $Z$ is a normalization constant.
Finally, they maximized the posterior $P(S|D)$ to remove noise while preserving features:
$$S_{MAP} = \arg\max_S P(S|D) = \arg\max_S P(D|S)\, P(S) \quad (1)$$
Kalogerakis et al. [21] provided a robust statistical framework to filter point clouds.
In their framework, an Iteratively Reweighted Least Squares (IRLS) approach estimates the curvature tensor and assigns weights to samples at each iteration to refine the neighborhood around every point.
The computed curvatures and the final statistical weights are then used to correct the normals.
The robustly estimated curvatures and normals drive outlier rejection and denoise the point cloud in a feature-preserving manner based on a global energy minimization process.
Avron et al. [22] introduced an L1-sparsity paradigm for denoising point clouds.
Firstly, a re-weighted L1 minimization process is used to restore point orientations.
Then, point position is reconstructed by assuming a local planarity criterion so as to preserve shape features.
Although this method can achieve reasonable results, points on an edge are sometimes not faithfully recovered [23].
Meanwhile, since L0 yields a sparser solution than L1, Sun et al. provided an L0 minimization method that operates directly on the point cloud: a similar L0 optimization procedure is applied to estimate normals, and the points are then repositioned along the normal directions to better maintain sharp features (see Fig. 1).
Orts-Escolano et al. [24] first used a 3D filtering and downsampling technique based on a Growing Neural Gas (GNG) network [25–27].
A growth process produces a GNG network that represents the raw point cloud using a set of 3D neurons and the interconnections among them.
Fig. 2. GNG representation of a point cloud, originally shown in [4].
This method can preserve the topology of point cloud and deal with outliers in point cloud.
Therefore, a point cloud filtered using GNG can improve keypoint detection performance [28] and yield better recognition results [4,29].
The method produces a GNG network mapped onto the point cloud (shown in Fig. 2).
2.2. Neighborhood-based filtering techniques
Neighborhood-based filtering techniques determine the filtered position of a point using similarity measures between the point and its neighborhood, which strongly influence the efficiency and effectiveness of the filtering approach [17].
As described in the following methods, the similarity can be defined by positions of points, normals or regions.
The bilateral filter, originally introduced by Tomasi and Manduchi [30], is an edge-preserving [31] smoothing filter that has been extended to 3D mesh denoising [32–34].
However, these methods require a mesh generation process, which itself suffers from noise [35].
To tackle this problem, the bilateral filter has been applied directly to point clouds [6,36–38] based on point positions and intensities.
Here $w_s$ and $w_r$ are the spatial and range weights, respectively:
$$w_s = \exp\left(-\frac{(i-x)^2 + (j-y)^2}{2\sigma_s^2}\right) \quad (2)$$
$$w_r = \exp\left(-\frac{(I(i,j) - I(x,y))^2}{2\sigma_r^2}\right) \quad (3)$$
where $(i,j)$ is a neighbor of $(x,y)$, $I(i,j)$ denotes the intensity at $(i,j)$, and $\sigma_s$ and $\sigma_r$ are the standard deviations of the Gaussian functions.
To reduce time complexity, Xu et al. [39] replaced the range weight of the bilateral filter with the binary function (4), achieving better performance:
$$w_r = \begin{cases} 1 & |I(x,y) - I(i,j)| \le 3,\ I(x,y) \ne 0 \\ 0 & |I(x,y) - I(i,j)| > 3,\ I(x,y) \ne 0 \end{cases} \quad (4)$$
However, this kind of filter only handles point clouds that contain intensity components.
As a consequence, the normal [35,40–43], one of the important attributes of a point cloud, is considered in bilateral filters whose weight is defined as a function of the spatial location and the normal information of points, as in Eq. (5):
$$w = f[d(p,q)] \times g[c(\mathbf{n}_p, \mathbf{n}_q)] \quad (5)$$
where $f$ and $g$ are Gaussian functions with standard deviations $\sigma_f$ and $\sigma_g$.
Here $d(p,q)$ is a distance (e.g. the Euclidean distance) between a point $p$ and its neighbor $q$, and $c(\mathbf{n}_p, \mathbf{n}_q)$ captures the normal relation at points $p$ and $q$, for instance the inner product $\langle \mathbf{n}_p, \mathbf{n}_q \rangle$ [35], $\|(p-q) \cdot \mathbf{n}_p \mathbf{n}_q\|$ [40], $(\mathbf{n}_p - \mathbf{n}_q)^2$ [41] or $\mathbf{n}_p \cdot (\mathbf{n}_p - \mathbf{n}_q)$ [42] (see Fig. 3).
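The following sketch shows how a normal-based bilateral filter of the form of Eq. (5) can be realized: each point is displaced along its normal by a weighted average of its neighbors' offsets. It is a minimal illustration, assuming precomputed unit normals and illustrative values for $\sigma_f$, $\sigma_g$ and the neighborhood size.

```python
import numpy as np
from scipy.spatial import cKDTree

def bilateral_filter_points(points, normals, sigma_f=0.05, sigma_g=0.3, k=20):
    """Normal-based bilateral filtering in the spirit of Eq. (5)."""
    tree = cKDTree(points)
    out = points.copy()
    for i, (p, n) in enumerate(zip(points, normals)):
        _, idx = tree.query(p, k=k)
        delta, weight = 0.0, 0.0
        for j in idx:
            q = points[j]
            d = np.linalg.norm(p - q)          # spatial distance d(p, q)
            h = np.dot(n, q - p)               # neighbor offset along the normal
            f = np.exp(-d ** 2 / (2 * sigma_f ** 2))   # spatial weight f
            g = np.exp(-h ** 2 / (2 * sigma_g ** 2))   # normal-based weight g
            delta += f * g * h
            weight += f * g
        out[i] = p + (delta / max(weight, 1e-12)) * n  # move along the normal
    return out
```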
Motivated by the mean shift filtering for images, Hu et al. [44] formulated a 3D mean shift based anisotropic filter algorithm by taking the vertex normal, curvature as well as position into account.
According to the similar modes of points produced via mean shift procedure, they designed a cluster scheme to provide an adaptive neighbor searching method to determine the neighborhoods of each point.
Finally, they applied a trilateral filtering to the adaptive neighborhoods to denoise the point cloud whilst preserving features.
Based on a similar idea to bilateral filtering, Schall et al. [17] first extended the non-local means method, proposed by Buades et al. [45] for image filtering, to 3D point clouds.
They introduced a new non-local similarity measure $\Phi_s$ which takes the local square neighborhoods around points into account:
$$\Phi_s(p_i, p_j) = e^{-Sim(p_i, p_j)^2 / s^2}, \qquad Sim(p_i, p_j) = \frac{1}{\|O\|} \sum_{o \in O} \left| (p_{i+o} - p_{j+o}) \cdot d_i \right|^2,$$
where $p_j$ is a point of the neighborhood $N(p_i)$ and $o$ denotes the offset between the center point and an arbitrary neighborhood point.
$d_i$ is the displacement direction of point $p_i$, which can be either an estimated normal or the line of sight of the camera, depending on the type of noise.
This method yields a more accurate filtering result and possesses a better feature preservation characteristic.
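A loose sketch of this non-local idea is shown below: each point's displacement along its normal is averaged over points with similar local neighborhoods. The "sorted heights" descriptor is a simplification standing in for the structured offsets $O$ of the original measure, and all parameters are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def nonlocal_means_points(points, normals, k=16, s=0.02):
    """Non-local-means style smoothing: weight neighbors by patch similarity."""
    tree = cKDTree(points)
    _, knn = tree.query(points, k=k)
    # Descriptor of each local patch: sorted neighbor heights over the tangent plane.
    desc = np.sort(
        np.einsum('ij,ikj->ik', normals, points[knn] - points[:, None, :]), axis=1)
    out = points.copy()
    for i in range(len(points)):
        cand = knn[i]                                 # candidate similar patches
        sim = np.linalg.norm(desc[cand] - desc[i], axis=1)
        w = np.exp(-(sim / s) ** 2)                   # similarity weight, like Phi_s
        h = (points[cand] - points[i]) @ normals[i]   # heights along d_i
        out[i] = points[i] + (w @ h) / w.sum() * normals[i]
    return out
```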
Huhle et al. [12] took color information and intra-patch similarity into account to propose a robust non-local means filter.
This method first detects outliers in the input data and then performs a feature-preserving smoothing processing.
Experimental results show that this denoising algorithm exhibits superior performance.
Wang et al. [46] presented an effective algorithm consisting of two steps: outlier filtering and noise smoothing.
They designed a connectivity-based approach, based on the relative deviation between the local neighborhood and the average local neighborhood, to remove sparse outliers.
After this process, a clustering-based scheme further eliminates small-cluster outliers.
Furthermore, they performed normal filtering by iteratively updating the weighted normal of each point.
Point positions are then updated to match the filtered normals.
Experimental results show that this method obtains satisfactory results.
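The sparse-outlier stage can be pictured with the short sketch below, which discards points whose mean neighbor distance deviates strongly from the global average. This is a simplification of Wang et al.'s connectivity criterion; `k` and the threshold `alpha` are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_sparse_outliers(points, k=10, alpha=2.0):
    """Drop points whose local neighborhood is abnormally sparse."""
    tree = cKDTree(points)
    d, _ = tree.query(points, k=k + 1)     # column 0 is the point itself
    mean_d = d[:, 1:].mean(axis=1)         # average distance to the k neighbors
    mu, sigma = mean_d.mean(), mean_d.std()
    keep = mean_d < mu + alpha * sigma     # relative-deviation test
    return points[keep]
```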
2.3. Projection-based filtering approaches
Projection-based filtering approaches adjust the position of each point in a point cloud via different projection strategies to filter point cloud.
Inspired by the concept of the L1 median [47–50] from statistics, Lipman et al. [51] introduced a parameterization-free projection operator, namely the Locally Optimal Projection (LOP) operator [52].
The rationale of this method is to iteratively project a subset of the input point cloud onto this point cloud to reduce noise and outliers.
However, if the input point cloud is highly non-uniform, projection by LOP tends to produce a non-uniform distribution, which is undesirable for shape feature preservation and normal estimation.
Huang et al. [49] incorporated locally adaptive density weights for each point into LOP, yielding WLOP (see Fig. 4), which produces an evenly distributed point set.
An important parameter controlling the amount of smoothing in LOP is $h$, the support size of the weight function $\Theta$: too large a value of $h$ causes shrinkage of the point cloud, while too small a value has little denoising effect.
To solve this problem, Ye et al. [53] projected only the z coordinates of the point set; x and y are then calculated through re-projection.
This method not only achieves smoothness and preserves geometric structure, but also removes isolated outlier points.
To address the problems mentioned above, Liao et al. [54] integrated a feature-preservation weight $\theta_r(x) = e^{-(\mathbf{n}_i^T (p_i - q_j))^2 / \sigma_p^2}$, which penalizes large variations in geometric similarity, into the term $E_1$ of the L1 median, while leaving the repulsion term $E_2$ unchanged, to form the feature-preserving locally optimal projection (FLOP).
Since LOP is an isotropic operator, Huang et al. [55] replaced $\theta(\|p_i - q_j\|)$ in the term $E_1$ with
$$\Theta(\mathbf{n}_i, p_i - q_j) = e^{-(\mathbf{n}_i^T (p_i - q_j))^2 / \sigma_p^2} \quad (6)$$
where $\mathbf{n}_i$ denotes the normal at point $p_i$, estimated from an anisotropic neighborhood and then smoothed using bilateral normal smoothing.
This anisotropic LOP achieves better preservation of sharp features.
Since the above methods require high computational effort, Preiner et al. [56] designed an efficient variant of the locally optimal projection operator.
They employed the Gaussian Mixture Model formed by regularizing the hierarchical EM algorithm to cluster points to represent the point cloud’s density.
Then, they redefined and evaluated the attractive force to obtain the continuous LOP (CLOP).
Finally, CLOP was applied to the Gaussians instead of points to achieve speedups.
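To fix ideas, the sketch below shows the attraction/repulsion structure shared by LOP and its variants: an L1-median-flavoured term pulls each projected point toward the noisy data, while a repulsion term keeps the projected set spread out. The kernel, the weights and the parameters `h` and `mu` are simplified assumptions, not the published operators.

```python
import numpy as np
from scipy.spatial import cKDTree

def lop_iterate(data, subset, h=0.05, mu=0.4, n_iters=10):
    """Simplified LOP-style projection of `subset` onto the noisy `data`."""
    theta = lambda r: np.exp(-(2.0 * r / h) ** 2)   # fast-decaying weight function
    x = subset.copy()
    tree = cKDTree(data)
    for _ in range(n_iters):
        new_x = x.copy()
        for i, xi in enumerate(x):
            idx = tree.query_ball_point(xi, r=h)
            if not idx:
                continue
            p = data[idx]
            r = np.linalg.norm(p - xi, axis=1) + 1e-12
            a = theta(r) / r                          # attraction weights (E1-like)
            attract = (a[:, None] * p).sum(0) / a.sum()
            d = xi - x                                # directions away from the others
            rr = np.linalg.norm(d, axis=1)
            mask = (rr > 1e-12) & (rr < h)
            if mask.any():
                b = theta(rr[mask]) / rr[mask]        # repulsion weights (E2-like)
                repel = (b[:, None] * d[mask]).sum(0) / b.sum()
            else:
                repel = 0.0
            new_x[i] = attract + mu * repel
        x = new_x
    return x
```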
Many approaches build upon Levin's work on Moving Least Squares (MLS) [57–59].
Alexa et al. handled the noise problem by iteratively projecting points onto the MLS surface [19,60], defined by two steps (see Fig. 5).
The first step finds a local reference plane $H = \{x \in \mathbb{R}^3 \mid \langle \mathbf{n}, x \rangle - D = 0\}$ by locally minimizing $\sum_{p \in P} (\langle \mathbf{n}, p \rangle - D)^2 e^{-\|p - q\|^2 / h^2}$, where $q$ is the projection of $r$ onto $H$ and $h$ is a global scale factor controlling the degree of smoothing.
A local polynomial approximation with respect to the reference domain is then computed (see Fig. 6).
However, for irregularly sampled point clouds it is difficult to find a suitable $h$.
Pauly et al. [61] collected the k nearest neighbors and dynamically chose $h = r/3$ to ensure that the points in the k-neighborhood contribute to the least-squares optimization.
The drawback of MLS is that the process involves a non-linear optimization to find the reference plane [8], which is computationally expensive.
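A common simplification of the plane-finding step replaces the non-linear minimization with an iterated weighted plane fit, as sketched below; the full method additionally fits a local polynomial over the reference plane, which is omitted here, and `h` is an assumed scale.

```python
import numpy as np

def mls_project(r, neighbors, h=0.05, n_iters=3):
    """Approximate MLS projection: iteratively fit and project onto a weighted plane."""
    q = r.copy()
    for _ in range(n_iters):
        w = np.exp(-np.sum((neighbors - q) ** 2, axis=1) / h ** 2)
        c = (w[:, None] * neighbors).sum(0) / w.sum()   # weighted centroid
        diffs = neighbors - c
        cov = (w[:, None] * diffs).T @ diffs
        n = np.linalg.eigh(cov)[1][:, 0]                # reference plane normal
        q = r - np.dot(r - c, n) * n                    # project r onto the plane
    return q
```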
Alexa and Adamson [62] proposed a simpler projection procedure, in which local reference planes are defined by a weighted average position at $x$,
$$a(x) = \frac{\sum_i \theta(\|x - p_i\|)\, p_i}{\sum_i \theta(\|x - p_i\|)},$$
and a normal computed as a weighted average of the input normals $\mathbf{n}_i$,
$$\mathbf{n}(x) = \frac{\sum_i \theta(\|x - p_i\|)\, \mathbf{n}_i}{\left\| \sum_i \theta(\|x - p_i\|)\, \mathbf{n}_i \right\|}.$$
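This simpler procedure translates almost directly into code, as the hedged sketch below shows; the weight function and the fixed iteration count are assumptions.

```python
import numpy as np

def simple_projection(x, pts, nrm, h=0.05, n_iters=5):
    """Alexa-Adamson style projection onto the plane defined by a(x) and n(x)."""
    theta = lambda r2: np.exp(-r2 / h ** 2)
    for _ in range(n_iters):
        w = theta(np.sum((pts - x) ** 2, axis=1))
        a = (w[:, None] * pts).sum(0) / w.sum()   # weighted average position a(x)
        n = (w[:, None] * nrm).sum(0)
        n /= np.linalg.norm(n)                    # averaged unit normal n(x)
        x = x - np.dot(n, x - a) * n              # move x onto the local plane
    return x
```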
Amenta and Kil [63] presented an explicit definition of the MLS surface in terms of the critical points of an energy function $e_{MLS}$ along lines determined by a vector field $\mathbf{n}(x)$, and gave a simple procedure for producing points of the MLS surface, which was not considered in Levin's papers.
In fact, MLS is a low-pass filter, which can smooth out sharp features of the point cloud.
To counter this, based on an M-estimator procedure and the moving least squares method, Mederos et al. [64] proposed a new smoothing operator $Q(r) = r + t_r \mathbf{n}_r$, computed by an effective numerical optimization procedure to preserve salient features, where $\mathbf{n}_r$ and $t_r$ are determined by fitting a plane $H$ in the neighborhood of the point $r$.
Fleishman et al. [65] introduced a robust moving least squares technique based on forward-search paradigm to deal with noise, outliers and sharp features.
Their method classifies the neighborhood of a point into subsets of outlier-free smooth regions of the surface, and then applies the moving least squares projection to the points.
However, this method requires very dense point clouds and is time-consuming.
Adamson and Alexa [66] adopted cell complexes to preserve shape features by decomposing the object into cells of different dimensions, although this decomposition demands effort from the user.
From the above approaches, it can be seen that the plane fitting operation is unstable in regions of high curvature if the sampling rate is below a threshold.
Guennebaud and Gross [67] thus performed algebraic sphere fitting to improve stability where planar MLS fails.
Fua et al. [68] fitted a local quadric patch to a neighborhood of every point.
They then iteratively smoothed the raw point clouds by using these estimated surfaces.
In order to deal with outliers, a metric was defined to measure whether or not two points belong to the same local surface.
This process is curvature-preserving and does not smooth out relevant features.
Wang et al. [69] used the mean shift clustering and adaptive scale sampling consensus to compute the best tangent planes for all points so as to detect and remove outliers.
Subsequently, they dichotomized the feature and non-feature points and denoised them, respectively, by projecting them onto the corresponding quadratic surfaces fitted using their neighborhoods, which were determined based on the normal-based region growing technique.
2.4. Signal processing based methods
Signal processing methodology can also be extended to point cloud filtering.
Based on Taubin's method [70], which applied Laplacian operators to filter meshes, Linsen [71] developed a filtering operator with three properties: locality, non-shrinkage and the inclusion of geometric aspects.
Pauly et al. [72] introduced a discrete approximation of the Laplacian for point clouds.
To handle shrinkage, their method approximates the local volume change and compensates for it by displacing the neighbors of a point $\mathbf{p}$ by $(\mathbf{p} - \mathbf{p}')/k$, where $\mathbf{p}' = \mathbf{p} + \lambda \Delta \mathbf{p}$ and $\Delta \mathbf{p}$ is the discrete approximation of the Laplacian at $\mathbf{p}$.
This method, however, can smooth out features and lead to vertex drift.
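The displacement-and-compensation scheme can be written compactly, as in the sketch below, which uses the simple umbrella (uniform k-nearest-neighbor) Laplacian as an assumed discretization.

```python
import numpy as np
from scipy.spatial import cKDTree

def laplacian_smooth(points, lam=0.5, k=8):
    """One Laplacian smoothing step with volume-compensating back-displacement."""
    tree = cKDTree(points)
    _, knn = tree.query(points, k=k + 1)
    nbrs = knn[:, 1:]                              # drop the point itself
    lap = points[nbrs].mean(axis=1) - points       # umbrella Laplacian of each point
    new_pts = points + lam * lap                   # p' = p + lambda * Laplacian(p)
    comp = np.zeros_like(points)
    for i in range(len(points)):
        comp[nbrs[i]] += (points[i] - new_pts[i]) / k   # distribute (p - p') / k
    return new_pts + comp
```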
Motivated by Fourier transformation, Pauly and Gross [73] used spectral processing to filter point cloud.
They applied a discrete Fourier transform to obtain a spectral decomposition of point cloud.
The elaborate filtering operation was then carried out by manipulating the frequency spectrum via the Wiener filter.
Rosman et al. [74] first constructed a collaborative patch $\mathbf{P}_i$ for each patch $\hat{P}_i$ from a set of similar patches, based on a certain distance measure; their algorithm then proceeds in two phases.
Phase I prevents shrinkage using the eigenfunctions of the Laplace–Beltrami operator of the collaborative patch, followed by Wiener filtering based on the denoised estimate from phase I.
2.5. PDEs-based filtering technique
PDEs (Partial Differential Equations), among the most important tools in computer vision and computer graphics, have been successfully applied to many tasks such as image or mesh denoising.
PDE-based techniques for filtering point clouds can be considered an extension of those developed for triangular meshes.
By assembling local finite element matrices constructed over small point neighborhoods into a single matrix, Clarenz et al. [75] presented a framework for processing point clouds via PDEs.
They discretized and solved an anisotropic geometric diffusion equation for filtering point cloud.
This method can smooth out noise while preserving or even enhancing features.
Extending Taubin's work [76], Lange and Polthier [77] took directional curvatures, principal curvatures and the Weingarten map into account to obtain an anisotropic geometric mean curvature flow method for filtering point clouds.
First, they used directional curvatures to produce a Weingarten map by discretizing an integral formula.
Next, they computed the eigenvalues and eigenvectors corresponding to the principal curvatures and principal directions, respectively.
Finally, they defined an anisotropic Laplacian for mean curvature flow technique by applying curvature information to modify Laplacian.
This method can detect features such as edges but requires the edge quotients to be chosen manually.
Based on covariance analysis and a constructed directional curvature, Xiao et al. [78] proposed an anisotropic curvature flow approach.
The corresponding equation is
$$\frac{\partial u}{\partial t} = g\left(-k\mathbf{n}\right) + (1-g)(I - u),$$
where $G_u$ is a Gaussian kernel and $k$ denotes the directional curvature.
The terms $g\left(\left|k_H(G_u \times u)\right|\right)$ and $1-g$ act as a moderated selector of the diffusion process, and $I - u$ is a forcing term to eliminate shrinkage.
Experimental results show that their method achieves a better performance in noise removal, feature preservation, anti-shrinking and anti-point drifting.
To extend image processing methods to 3D point clouds, Lozes et al. [79] constructed weighted arbitrary graphs representing the point cloud, which take local neighborhood information into account.
The PDE framework, including the $p$-Laplacian operator [80] and PDE-based morphological operators, was then transposed onto these arbitrary graphs (that is, PDEs on weighted graphs [81]) to filter point clouds.
Because PDE-based techniques must compute partial differential properties, filtering point clouds with these methods is time-consuming.
2.6. Hybrid filtering technique
Hybrid strategies usually use two or more filtering techniques together to deal with the raw point clouds.
Liu et al. [82] developed an iterative framework to process point cloud.
They first applied a Weighted Locally Optimal Projection (WLOP) operator to efficiently filter out noise.
Then, they used a mean-shift-based outlier removal operator to detect and eliminate outliers.
However, the limitations of their algorithm are its difficulty in recovering sharp features and its computational costs.
Zaman et al. [16] proposed a density-based approach for denoising point cloud.
First, they used a particle swarm optimization technique to select the bandwidth for kernel density estimation (KDE).
A mean-shift clustering algorithm was then applied to the KDE to remove outliers.
Finally, the noise in the remaining points was reduced by applying a bilateral filter.
The results show that this method is robust on highly noisy point clouds.
2.7. Other methods
There are many other methods used for filtering point cloud.
Szeliski and Tonnesen [83] developed oriented particles (point clouds).
Each point, having a position and its own local coordinate frame, defined both a normal and a local tangent plane of the represented surface.
They devised a set of potential functions to group these points into surface-like arrangements.
Their method could shape the surface by moving points around.
The Voxel Grid (VG) filtering method first defines a 3D voxel grid (a set of 3D boxes in space) over the point cloud.
Then, within each voxel, one point is chosen to approximate all the points that lie in that voxel.
Normally, either the centroid of these points or the center of the voxel is used as the approximation.
The former is slower than the latter, but it represents the underlying surface more accurately.
The VG method usually leads to geometric information loss.
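For reference, a minimal centroid-based voxel grid filter looks as follows; the leaf size is an assumed parameter.

```python
import numpy as np

def voxel_grid_filter(points, leaf=0.01):
    """Replace all points falling into one voxel by their centroid."""
    keys = np.floor((points - points.min(0)) / leaf).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)  # point -> voxel id
    sums = np.zeros((inv.max() + 1, 3))
    counts = np.zeros(inv.max() + 1)
    np.add.at(sums, inv, points)     # accumulate coordinates per voxel
    np.add.at(counts, inv, 1)
    return sums / counts[:, None]    # centroid of each occupied voxel
```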
The quadtree-based filtering method creates a quadtree data structure to organize the point cloud.
This representation speeds up neighborhood search, which benefits the efficiency of filtering algorithms.
Filtering strategies or methods (such as neighborhood-based or projection-based approaches) are then applied to filter the point cloud.
Digne et al. [84] introduced a similarity-based filter to denoise point clouds.
They decomposed the input point cloud into a smooth part $S_{smooth}$, obtained via mean curvature motion, and a high-frequency term $\vec{V}$.
A non-local strategy, filtering the high frequency based on the L2-distance similarity between local descriptors computed at each point, was then used to denoise $\vec{V}$ and achieve shape denoising.
Nevertheless, this algorithm relies on point density and cannot differentiate texture from structured noise.
Table 1 presents a summary of the filtering methods in terms of noise removal, feature preservation, outlier removal and other performance.
These methods are listed chronologically by year of publication.
3. Experimental results and discussion
After a comprehensive analysis of the major 3D point cloud filtering algorithms, we now proceed to conduct experiments aiming at comparing and evaluating the performance of selected point cloud filtering methods.
3.1. Selected methods
In our experiments, we select seven different filtering methods that are widely cited and used for comparison.
These algorithms have already been briefly introduced in Section 2 and include: the Voxel Grid filter (VG), the Normal-based Bilateral Filter (NBF), Moving Least Squares (MLS), Robust Moving Least Squares (RMLS), the Weighted Locally Optimal Projection (WLOP), Edge-Aware Resampling (EAR) and L0 minimization (L0).
These filtering algorithms are tested on different point cloud models corrupted with Gaussian noise to evaluate corresponding performance.
VG, NBF and MLS are implemented using the Point Cloud Library (PCL), Version 1.7.2; WLOP and EAR were implemented in C++, while the others (namely RMLS and L0) were programmed in MATLAB R2016a.
The experiments are carried out on a PC with an Intel(R) Core(TM) i7-4790 CPU @ 3.60 GHz and 16 GB of memory.
3.2. Efficiency
A comprehensive evaluation of the considered filtering algorithms is given with respect to computational efficiency.
We calculate the running time required by each method to filter a model of a point cloud.
Since the number of points is one of the main factors that would affect the computational time, models with various numbers of points are chosen to test the performance of these methods.
3.3. Measures
To evaluate the quality of the results, two error metrics, $\delta$ and $D_{mean}$, are used.
$\delta$ is the average angle between the ground truth point normals and the resulting point normals.
$D_{mean}$ is the average distance from the resulting points to the corresponding ground truth points.
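Assuming a one-to-one correspondence between result points and ground truth points (the actual experiments may instead use nearest-neighbor correspondences), the two metrics can be computed as follows.

```python
import numpy as np

def error_metrics(gt_pts, gt_nrm, res_pts, res_nrm):
    """Return (delta, D_mean) for corresponding point/normal arrays."""
    # Unsigned angle between unit normals; abs() ignores normal orientation.
    cos = np.clip(np.abs(np.sum(gt_nrm * res_nrm, axis=1)), 0.0, 1.0)
    delta = np.degrees(np.arccos(cos)).mean()                 # mean angle (degrees)
    d_mean = np.linalg.norm(res_pts - gt_pts, axis=1).mean()  # mean distance
    return delta, d_mean
```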
3.4. Results and discussion
Table 2 gives the running time of different methods over different point cloud models with different numbers of points.
Several observations can be made about these algorithms.
First, the voxel grid approach is the most efficient filter, being nearly two to three orders of magnitude faster than the others.
Second, it is worth pointing out that when the number of points in a model is below 200,000, NBF achieves almost the same efficiency as MLS, while as the number of points increases, NBF becomes very time-consuming.
RMLS and WLOP are, on the whole, two to three times slower than MLS and NBF.
Finally, due to their strategies for normal estimation, normal filtering and point updating, L0 and EAR are the most computationally expensive algorithms.
The error metrics for different methods are presented in Table 3.
According to the error metrics, there is an obvious difference between the ground truth point clouds and those filtered by VG, indicating that VG does not produce the expected results.
The filtering effectiveness of NBF is, to some extent, similar to that of VG, apart from on the Armadillo model.
In addition, for most point cloud models, EAR and L0 achieve better filtering performance with relatively lower $\delta$ and $D_{mean}$.
Fig. 7 shows the filtered results of these methods on the bowl model with Gaussian noise ($\sigma = 0.005$).
It can be seen that the bowl is still noisy after filtering by NBF and VG, while MLS, WLOP and EAR produce good visual results.
Experimental results tested on sofa models with Gaussian noise (𝜎 = 0.01) are illustrated in Fig. 8.
WLOP, EAR and L0 preserve shape features better than the other methods.
As can be seen from Fig. 9, EAR and L0 yield better results than the others in terms of noise removal and feature preservation.
Overall, for real-time systems operating on point clouds with large numbers of points, VG is the recommended choice with regard to computational efficiency, although its filtering effect is unsatisfactory.
In contrast, EAR and L0 demonstrate sufficient performance in terms of noise removal and feature preservation, yielding relatively accurate models for further processing (e.g. object recognition).
It has to be noted, however, that both are computationally expensive.
MLS can be considered a relatively good trade-off, providing a balance between running efficiency and filtering effectiveness.
4. Conclusion
This paper has presented a comprehensive review of the state-of-the-art algorithms for filtering 3D point clouds.
Although some research on point cloud filtering already exists, filtering raw point clouds, as a crucial step of the point cloud processing pipeline, remains a challenging task.
A brief discussion of future research directions is presented as follows.
(1) Combination of color and geometric information: For point clouds, especially those containing color information, methods based purely on color or purely on geometric attributes cannot work well.
Hence, it is expected to combine the color and geometric information in the filtering process to further increase the performance of a filtering scheme.
(2) Time complexity reduction: Point clouds contain large numbers of points, sometimes up to hundreds of thousands or even millions, so computation on them is time-consuming.
It is necessary to develop filtering technologies that process point clouds effectively while reducing time complexity.
(3) Filtering on point cloud sequences: Object recognition from point cloud sequences is likely to become an important future research direction, and filtering point cloud sequences will help to improve the performance and accuracy of object recognition.