Poisson Manifold Reconstruction — Beyond Co‐dimension One

Screened Poisson Surface Reconstruction creates 2D surfaces from sets of oriented points in 3D (and can be extended to co‐dimension one surfaces in arbitrary dimensions). In this work we generalize the technique to manifolds of co‐dimension larger than one. The reconstruction problem consists of finding a vector‐valued function whose zero set approximates the input points. We argue that the right extension of screened Poisson Surface Reconstruction is based on exterior products: the orientation of the point samples is encoded as the exterior product of the local normal frame. The goal is to find a set of scalar functions such that the exterior product of their gradients matches the exterior products prescribed by the input points. We show that this setup reduces to the standard formulation for co‐dimension 1, and leads to more challenging multi‐quadratic optimization problems in higher co‐dimension. We explicitly treat the case of co‐dimension 2, i.e., curves in 3D and 2D surfaces in 4D. We show that the resulting bi‐quadratic problem can be relaxed to a set of quadratic problems in two variables and that the solution can be made effective and efficient by leveraging a hierarchical approach.


Introduction
A central task in geometry processing is the reconstruction of a surface from point samples. Most of the work in this area considers two-dimensional surfaces embedded in 3D. Among different ways to define a surface from point samples (see Section 2), level-set techniques and, in particular, (screened) Poisson Surface Reconstruction [KBH06,KH13] have proven to be particularly resilient to common sampling artifacts such as noise, outliers, and missing data. The reason for this robustness is that they treat the reconstruction problem globally (see Section 3 for details).
The aim of this work is to generalize the class of Poisson Surface Reconstruction methods to oriented d-dimensional manifolds embedded in an n-dimensional space with co-dimension d̄ = n−d > 1. The input is a sampling S of the manifold, consisting of the locations of the samples and (oriented) frames spanning the d̄-dimensional normal space. The output is a vector-valued function F : R^n → R^{n−d} that represents the manifold as a level-set, i.e.

M_S = { x ∈ R^n : F(x) = 0 }.   (1)

As in screened Poisson Surface Reconstruction, we solve for the coordinate functions of F whose gradients span the normal frames prescribed at the sample points. This poses several challenges.
When d̄ > 1, there is a continuum of frames that can be used to represent the same oriented d̄-dimensional normal space at a point. As the reconstructed manifold should only depend on the oriented normal space at the samples, we require a representation that is agnostic to the particular choice of frame.
For a similar reason, there can be a continuum of solutions F whose gradients span the normal space at the sample points. Often, this reflects the fact that the underlying optimization is non-convex, making it hard to find solutions that avoid unwanted local minima.
Key contributions Our main contributions are choices in representation and optimization that lead to a practical extension of screened Poisson Surface Reconstruction to higher co-dimension, overcoming the mentioned challenges:
• We argue that the right mathematical tool to deal with the multiplicity of frames is exterior algebra (see Section 4 for a brief introduction): we encode the local normal space of the manifold as an exterior product, making the representation independent of the particular choice of frame.
• We show that using the exterior product (Section 5), the problem of solving for the coordinate functions of F reduces to minimizing a multi-quadratic energy. The continuous approach can be discretized using a finite basis, and we specifically develop a discretization for the case d̄ = 2 (Section 6).
• We propose a hierarchical approach for minimizing the energy that smooths out the energy landscape at coarser resolutions, keeping the optimization from getting trapped in local minima and consistently returning the same solution, regardless of initial guess (Section 7).
We demonstrate the efficacy of our approach in reconstructing curves in 3D and surfaces in 4D, analyzing run-time performance, convergence, and stability in the presence of noise (Section 8). We conclude by offering some discussion and proposing directions for future work (Section 9).

Related Work
For the vast field of curve reconstruction in the plane and surface reconstruction in 3D, we refer the reader to existing overview literature [OPP*21, BTS*14, Dey06]. We defer a discussion of reconstructing a surface as the level set of a signed scalar function, i.e., d̄ = 1, to the next section. We are unaware of work that extends this approach to the intersection of level-sets (d̄ > 1). Here we focus on reconstruction approaches that have been extended to manifolds in arbitrary dimension and co-dimension.
Subsets of simplicial complexes A classic approach to curve reconstruction in the plane is based on first triangulating the set of points and then selecting an appropriate subset of the edges [KSO04]. For nicely sampled curves one observes that the Voronoi cells of the points are elongated in the direction normal to the curve [AB99]. While many techniques based on this observation in 2D fail to generalize to 3D or higher dimension, the general observation still holds and allows detecting the local dimension of the tangent space in higher dimension and co-dimension [DGGZ02, ACSTD07]. Combining this idea with modifying the weights in a weighted Delaunay triangulation [Aur87, dG-MMD14] to remove slivers [CDE*00] leads to a first algorithm with guarantees [CDR05]. By the authors' own admission it is not practical "mainly because it requires a very dense and noise-free sample", but also because of the high computational complexity.
An idea that keeps the complexity mostly dependent on the dimension of the manifold, and to a lesser extent the ambient space, is to compute the Delaunay triangulation in the tangent planes of the points [BF04]. While this could be used for reconstruction directly [Fre02], the combination with the above ideas on sliver removal leads to a practical algorithm with guarantees [BG10]. Notably, it is only linear in the ambient dimension (but quadratic in the number of samples).
All of the above require clean and dense samples. Some tolerance to noise can be achieved by appropriately selecting a subset of the samples, for example using topological persistence [CO08].
Projection operator / ridges of density function A popular approach in computer graphics and geometric modeling for defining a surface from surface samples is to use a projection operator: the operator is defined by repeatedly projecting any point in space onto a locally estimated tangent plane. The process can be carefully constructed so that it satisfies the properties of a projection [Lev04] and the surface is defined as the set of stationary points. This idea was first used for curve reconstruction from points [Lee00], and later for surfaces in 3D [ABCO*03]. It generalizes directly to arbitrary dimension and co-dimension [AA06, SL20]. For well-behaved manifolds and dense enough sampling the projection can be constructed so that it guarantees homeomorphic reconstruction [CC19]. Likewise, tolerance to some noise can be proven [FIK*18]. Relying on a projection operator and only locally establishing tangent- and/or normal-spaces alleviates the problem that not all manifolds admit globally smooth frames, and even supports reconstruction of non-orientable manifolds. In contrast, globally smooth reconstruction potentially deals better with outliers and missing data, and the signed implicit function significantly simplifies iso-surfacing the zero level-set.
In machine learning, related surface definitions have been used [DG03], albeit from a different mathematical view-point: the point set is considered to be a discrete sample of a probability distribution and the manifold may then be defined as the ridge of an appropriately chosen density function [OE11,GPPVW14].
Parametric curves and surfaces A common theme in CAGD is fitting a spline curve or surface to given point data. This is done essentially by defining a distance between the parametric curve or surface to the samples and then optimizing the control points to minimize this distance measure. For curves this has been done in application domains using the idea of active contour models [CnRT * 00, SKG15]. The idea has been extended to 3D [PL03,PLH02] and it seems that it could also be used in higher dimensions and co-dimension.

Background: Implicit Surface Reconstruction
There are different strategies for reconstructing a surface from point samples. We focus on methods that first approximate the input data with a level-set of a scalar function and then extract a typically piecewise linear approximation of the level-set as an explicit surface representation. One approach has been to represent the surface implicitly by a (possibly truncated) signed distance function [HDD * 92, CL96, CBC * 01, MPS08, NIH * 11, CT11]. While this has advantages in some applications, for the reconstruction problem it has been observed that it is useful to ask only that the function has non-vanishing gradients close to the surface. This avoids spurious surface components and is the central idea of screened Poisson Surface Reconstruction [KH13].
Screened Poisson Surface Reconstruction Given a set of oriented points S ⊂ R^3 × R^3, where each sample s = (s.p, s.n) is described by its position in R^3 and the associated (possibly area-weighted) outward-facing normal, the screened Poisson Surface Reconstruction algorithm proceeds in three steps.
First, a sampling density ρ : R^3 → R_{≥0} is estimated and the oriented samples are extended to a target vector-field V_S : R^3 → R^3 by "splatting" the samples' normals:

V_S(q) = ∑_{s∈S} κ_{s.p}(q) · s.n,   (2)

with κ_p a splatting kernel (weighted inversely to the density ρ).
Next, the implicit function f : R^3 → R is found by minimizing the combination of gradient-fitting and screening energies:

E(f) = ‖∇f − V_S‖² + α · ∑_{s∈S} f(s.p)².   (3)

Finally, given the function f : R^3 → R, the reconstructed manifold M_S is obtained by extracting the zero level-set:

M_S = { x ∈ R^3 : f(x) = 0 }.

Choosing a (finite) function basis, this reduces to solving a linear system Ax = b where the system matrix A encodes the discrete Laplacian combined with the mass matrix defined by the samples, the vector b describes the divergence of V_S, and the solution vector x gives the coefficients of the implicit function in the basis. Choosing a B-spline basis, the system is solved efficiently using a hierarchical solver [BHM00]. Iso-surfacing can be done using standard techniques [LC87, SW04, KKDH07] to obtain a triangle mesh approximating the zero level-set of f.
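As a concrete illustration, the following toy sketch applies the same splat-then-solve pipeline in 1D, reconstructing an interval from two oriented samples. The grid resolution, screening weight, and the sign convention for the splatted directions are choices of this sketch, not taken from the paper.

```python
import numpy as np

# 1D toy version of (screened) Poisson reconstruction, for illustration
# only: reconstruct the interval [0.3, 0.7] from two oriented samples.
# Convention assumed here: splatted directions point toward increasing f.
N, h, alpha = 201, 1.0 / 200, 10.0
x = np.linspace(0.0, 1.0, N)
samples = [(0.3, +1.0), (0.7, -1.0)]  # (position, splatted direction)

# Finite-difference gradient: maps node values to cell-center slopes.
D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N)))[:-1] / h

# Splat each sample's direction into its containing cell (target field V).
V = np.zeros(N - 1)
for p, n in samples:
    V[min(int(p / h), N - 2)] += n

# Screening matrix: penalizes the value of f at the sample positions.
E = np.zeros((len(samples), N))
for r, (p, _) in enumerate(samples):
    E[r, round(p / h)] = 1.0

# Normal equations of  h*||D f - V||^2 + alpha*||E f||^2.
A = h * D.T @ D + alpha * E.T @ E
b = h * D.T @ V
f = np.linalg.solve(A, b)

# Zero crossings of f approximate the boundary of the interval.
crossings = x[:-1][np.sign(f[:-1]) != np.sign(f[1:])]
print(crossings)
```

The screening term is what makes the system non-singular here: without it, f would only be determined up to an additive constant.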

Exterior Products
In co-dimension d̄ = 1, we can represent the d-dimensional tangent space at a point uniquely in terms of its outward-facing normal. In a similar way, for co-dimension d̄ ≥ 2, we would like to represent the d-dimensional tangent space uniquely in terms of the d̄-dimensional normal space, without explicitly choosing a basis for the d̄-dimensional normal space. Our desiderata are: (1) The representation should be linear, and (2) it should uniquely encode the oriented normal frame together with its measure (area, volume, etc.).
We begin by reviewing the necessary concepts for co-dimension 2, i.e., two vectors spanning the normal space, and generalize later. We focus our review on the case that the vector space is R^n, with canonical basis (e_1, …, e_n), where the i-th coefficient of e_i equals 1 and all other coefficients equal zero.
Exterior product of two vectors The right concept for our purpose is the exterior product. Given two vectors u, v ∈ R^n, the exterior product u ∧ v represents the vector area of the parallelogram spanned by u and v in R^n. It satisfies the following properties: it is anti-symmetric, u ∧ v = −v ∧ u (so, in particular, u ∧ u = 0), and it is linear in each of its two arguments. It follows that exterior products can be represented in a vector space with basis e_i ∧ e_j, i < j. Letting u = ∑_i u_i e_i and v = ∑_i v_i e_i, we have:

u ∧ v = ∑_{i<j} (u_i v_j − u_j v_i) e_i ∧ e_j.

Geometrically, we can think of the coefficients u_i v_j − u_j v_i in this basis as the (signed) areas of the shadows of the parallelogram on the planes spanned by e_i, e_j. An algebraic view leads to a convenient implementation: noting that the outer product uv^⊤ is bilinear, taking the difference u ∧ v ≡ uv^⊤ − vu^⊤ yields the anti-symmetry in the form of a skew-symmetric matrix.
Constructing the exterior product of (µv + νu) and (µ′v + ν′u) results in (µν′ − νµ′)(uv^⊤ − vu^⊤). Noting that the scalar term is the determinant of the matrix (µ ν; µ′ ν′), the matrix acting on the vectors u and v by right-multiplication, shows the invariance to the group of transformations that map the plane spanned by the frame onto itself and preserve the signed area: the matrix group SL(2, R). As before, a basis can be constructed as e_i e_j^⊤ − e_j e_i^⊤ with i < j, spanning all R^{n×n} skew-symmetric matrices. The basis has n(n−1)/2 elements. As desired, we can easily perform computations on this space as the skew-symmetric matrices form a vector space. However, while every frame with signed area can be uniquely encoded as a skew-symmetric matrix, not every skew-symmetric matrix is the exterior product of two vectors. In fact, the non-zero exterior products are precisely the skew-symmetric matrices of rank 2.
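The skew-symmetric-matrix view is easy to exercise numerically. A minimal sketch (the specific vectors and the SL(2) matrix below are arbitrary choices):

```python
import numpy as np

# The exterior product u ∧ v represented as the skew-symmetric matrix
# u v^T - v u^T, following the algebraic view described in the text.
def wedge(u, v):
    return np.outer(u, v) - np.outer(v, u)

u = np.array([1.0, 0.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 1.0, 0.0])
W = wedge(u, v)

# Anti-symmetry, and rank 2 (the non-zero exterior products are exactly
# the rank-2 skew-symmetric matrices).
assert np.allclose(W, -W.T)
assert np.linalg.matrix_rank(W) == 2

# Invariance under SL(2): re-combining the frame by a matrix with unit
# determinant leaves the exterior product unchanged.
A = np.array([[2.0, 1.0], [1.0, 1.0]])  # det = 1
u2 = A[0, 0] * u + A[0, 1] * v
v2 = A[1, 0] * u + A[1, 1] * v
assert np.allclose(wedge(u2, v2), W)
```

In general, re-combining (u, v) by any 2×2 matrix A scales the exterior product by det A, which is what the determinant computation in the text captures.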
Linearization One of the challenges in working with the exterior product is that it is bi-linear in the input, making quadratic energies in the exterior product difficult to optimize. An obvious linearization results from fixing one of the two vectors and considering the energy as a function of the other one.
Exterior product spaces All of the concepts above extend naturally to co-dimension d̄ > 2. The geometric picture is that we encode the oriented vector measure of a parallelepiped spanned by d̄ vectors v_1, …, v_d̄ ∈ R^n as the signed measures of the shadows onto d̄-dimensional linear subspaces spanned by d̄-tuples of canonical basis vectors. We denote this space as ⋀^d̄ R^n. It is spanned by the basis e_{i_1} ∧ ··· ∧ e_{i_d̄}, with i_1 < … < i_d̄.
Anti-symmetry now means that if the positions of any two elements in the product are transposed, the sign changes. To explain what this entails we use a multi-dimensional index i = (i_1, …, i_d̄). A permutation π of the indices can be encoded as an orthogonal matrix in {0, 1}^{d̄×d̄}, and sign π is the determinant of this matrix. Intuitively, sign π equals +1 if the number of transpositions in π is even, otherwise it is −1. With this notation, the anti-symmetry is characterized by the property that for any permutation π, we have:

v_{π(1)} ∧ ··· ∧ v_{π(d̄)} = sign π · (v_1 ∧ ··· ∧ v_d̄).

The algebraic construction now requires tensor products and d̄-th order tensors (generalizations of square matrices with more than two indices). The tensor product v_1 ⊗ ··· ⊗ v_d̄ is multilinear and generates all possible d̄-wise products of the entries of the vectors. We can represent the exterior product of d̄ vectors as an anti-symmetric d̄-th order tensor, similar to the representation by skew-symmetric matrices, as

v_1 ∧ ··· ∧ v_d̄ ≡ ∑_π sign π · v_{π(1)} ⊗ ··· ⊗ v_{π(d̄)}.   (8)

In practice, it suffices to store only the entries with multi-indices i for which i_1 < … < i_d̄. Entries with a repeated index i_j = i_k are necessarily zero because of anti-symmetry. This means it suffices to store (n choose d̄) coefficients.
As in the two-dimensional case, the space of anti-symmetric d̄-th order tensors uniquely encodes the d̄-frames with signed measure, sometimes referred to as d̄-blades, but also contains elements that have no corresponding d̄-frame.
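A small sketch of the coefficient storage described above. It uses the equivalent "shadows" view: the coefficient for a k-tuple of basis directions is the determinant of the corresponding k × k minor (the example vectors are arbitrary choices):

```python
import numpy as np
from itertools import combinations

# Coefficients of v_1 ∧ ... ∧ v_k in the basis e_{i1} ∧ ... ∧ e_{ik},
# i1 < ... < ik: each coefficient is the determinant of the k x k minor
# formed by rows (i1, ..., ik). A sketch; stores C(n, k) numbers.
def wedge_coeffs(vectors):
    V = np.column_stack(vectors)  # n x k matrix of the frame
    n, k = V.shape
    return {
        idx: np.linalg.det(V[list(idx), :])
        for idx in combinations(range(n), k)
    }

v1 = np.array([1.0, 0.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0, 0.0])
v3 = np.array([0.0, 0.0, 1.0, 1.0])
w = wedge_coeffs([v1, v2, v3])
print(w[(0, 1, 2)], w[(0, 1, 3)])  # the two non-zero "shadows"
```

For n = 4, k = 3 this stores C(4, 3) = 4 coefficients; repeated-index entries never appear because the index tuples are strictly increasing.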

Exterior product fields
We remark that the above discussion extends directly to vector-fields on R^n by applying the exterior product point-wise. For example, given vector-fields v⃗_1, …, v⃗_d̄ : R^n → R^n, the exterior product defines an anti-symmetric tensor field on R^n:

(v⃗_1 ∧ ··· ∧ v⃗_d̄)(q) = v⃗_1(q) ∧ ··· ∧ v⃗_d̄(q).

Generalizing Screened Poisson Surface Reconstruction
In this section we describe the generalization of screened Poisson Surface Reconstruction (sPSR) to the reconstruction of co-dimension d̄ ≥ 2 manifolds. For simplicity, we focus on the case d̄ = 2, though our discussion extends to higher co-dimension. Figure 1 gives a visualization of our process.
To simplify notation, we overload the operator ⟨·, ·⟩ (and correspondingly ∥·∥²) so that for scalar-fields, vector-fields, and skew-symmetric matrix fields, it denotes the integral over R^n of the product, the dot-product, and the Frobenius dot-product (respectively).

Overview of the approach
As in sPSR, we start with oriented samples S ⊂ R^n × (R^n × R^n), where each sample is described by its position in n-dimensional space and two vectors describing the (weighted) oriented normal space at that point. Our goal is to solve for a vector-valued function F = (f_1, f_2) : R^n → R² with the property that at every sample s = (s.p, s.n_1, s.n_2) the gradients ∇f_1|_{s.p} and ∇f_2|_{s.p} describe the same normal space as the vectors s.n_1 and s.n_2. In the language of exterior products, we want ∇f_1|_{s.p} ∧ ∇f_2|_{s.p} = s.n_1 ∧ s.n_2.
To achieve this, we proceed as in sPSR, estimating the sampling density ρ : R^n → R_{≥0} and extending the samples' normals to a target skew-symmetric matrix field V_S : R^n → ⋀² R^n by "splatting" the samples' skew-symmetric matrices:

V_S(q) = ∑_{s∈S} κ_{s.p}(q) · s.n_1 ∧ s.n_2,

with κ_p a splatting kernel (weighted inversely to the density ρ).
Then we solve for the function F = (f_1, f_2) minimizing the combination of skew-symmetric-matrix-fitting and screening energies:

E(F) = ‖∇f_1 ∧ ∇f_2 − V_S‖² + α · ∑_{s∈S} ‖F(s.p)‖².   (10)

Finally, given the function F : R^n → R², the reconstructed manifold M_S is obtained by extracting the zero level-set:

M_S = { x ∈ R^n : F(x) = 0 }.

Feasibility Before proceeding, it is natural to ask: does such a function F exist at all? Clearly, for such an F to exist, the manifold must have a trivial normal bundle. For manifolds of dimension d ≥ 4 this is known to be more restrictive than orientability (one such example being complex projective 2-space which, by Whitney's Embedding Theorem [GP10], can be embedded in R^n for n large enough). For manifolds of dimension d = 1 and d = 3, [Kir89, Sec. VIII, Thm. 1] shows that this is equivalent to orientability, and the same appears to hold for d = 2. However, in the general case, there exist closed manifolds embedded in R^n with trivial normal bundles that cannot be realized as level-sets of an implicit function [AK85].
It is known that if M has a trivial normal bundle, such an F can be constructed locally (i.e. in a tubular neighborhood) [Lee12, Ex. 10-18] and can be extended to a function F : R^n → S^{n−d} (e.g. using the Pontryagin-Thom Collapse Map) [Mil65, Ch. 7, Thm. C]. Furthermore, Milnor's construction of a function F : R^n → S^{n−d} can be transformed into a function F : R^n → R^{n−d} whose zero set contains M. We leverage this in Section 5.4, using the local sampling density to select the connected components of F^{−1}(0) comprising the reconstruction.
Finally, we note that while our approach assumes that the solution M has a trivial normal bundle, the use of the exterior product to describe the normal space means we do not require a trivialization to be given. This is in contrast to [Lee12,Mil65] that use the framing to define the function in the tubular neighborhood.

Optimization
The challenge in minimizing the energy in Equation 10 is that the exterior product ∇f_1 ∧ ∇f_2 is bi-linear in the functions f_1 and f_2, making the fitting energy term bi-quadratic. Thus, minimization requires solving a cubic system in f_1 and f_2. We address this by alternately linearizing. Concretely, fixing f_2, the map taking a scalar field g on R^n to the skew-symmetric matrix field ∇g ∧ ∇f_2 is linear, so the energy in f_1 is quadratic and can be minimized by solving a linear system. Analogously, fixing f_1 gives a quadratic energy in f_2.
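The alternating scheme can be illustrated on a toy bi-quadratic model problem. Here the outer product x·yᵀ stands in for the exterior product of gradients, and closed-form least-squares updates stand in for the paper's linear solves; these are stand-ins for illustration, not the actual discretization:

```python
import numpy as np

# Alternating linearization for a bi-quadratic fitting energy:
# minimize ||x y^T - c||^2, a toy stand-in for ||∇f1 ∧ ∇f2 - V_S||^2.
rng = np.random.default_rng(0)
c = np.outer(rng.normal(size=4), rng.normal(size=4))  # rank-1 target

x, y = rng.normal(size=4), rng.normal(size=4)
for _ in range(50):
    # Fixing y, E(x) = ||x y^T - c||^2 is quadratic in x: solve in closed
    # form (the normal equations reduce to a scalar division here).
    x = c @ y / (y @ y)
    # Analogously, fixing x gives a quadratic energy in y.
    y = c.T @ x / (x @ x)

residual = np.linalg.norm(np.outer(x, y) - c)
print(residual)  # (near) zero for a rank-1 target
```

For this toy problem the alternation is essentially a power iteration and converges immediately; the paper's energy landscape is far less benign, which is what motivates the hierarchical strategy of Section 7.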

Informal remark
We note that the linearization of the energy has an elegant geometric interpretation. Assume the idealized situation in which f 2 is the indicator function of some co-dimension 1 surface Ω passing through the co-dimension 2 manifold M S . Consider the contribution of points q ∈ R n to the energy E( f 1 ).
q ∉ Ω: In this case V_S(q) vanishes because q is away from the samples, and ∇f_1 ∧ ∇f_2 vanishes regardless of the value of f_1 because f_2 is constant away from Ω. That is, points q ∉ Ω do not contribute to the energy, allowing us to replace the domain of integration in the energy:

E(f_1) = ∫_Ω ‖(∇f_1 ∧ ∇f_2 − V_S)(q)‖² dq.

q ∈ Ω: In this case consider the projection π_q : T_q R^n → T_q Ω, taking vectors in R^n to their projection onto the tangent space of Ω at q. Decomposing the gradient of f_1 as:

∇f_1 = π_q(∇f_1) + (∇f_1 − π_q(∇f_1)),

and noting that the second term is parallel to ∇f_2, we get:

∇f_1 ∧ ∇f_2 = π_q(∇f_1) ∧ ∇f_2.

Letting N⃗ : Ω → TΩ be the (unique) tangent vector field on Ω such that N⃗ ∧ ∇f_2 = V_S, we can re-write the energy as:

E(f_1) = ∫_Ω ‖(π_q(∇f_1) − N⃗(q)) ∧ ∇f_2(q)‖² dq.

Noting that N⃗ vanishes for points q ∉ M_S (since V_S vanishes there) and, up to scale, N⃗ is the normal to M_S within Ω (since the norms of ∇f_2 and V_S should be constant on M_S), the linearization seeks the function f_1 whose projected gradient is the vector field that vanishes away from M_S and equals the normal of M_S within Ω. That is, the linearization solves a generalized version of the Poisson Reconstruction problem in which the samples S live on a manifold Ω, rather than in Euclidean space.

Regularization
Note that if we can find a solution F : R^n → R² for which the energy vanishes, then for any M ∈ SL(2, R) the function F · M will also have vanishing energy. This is because (1) the exterior product of the gradients is invariant to linear re-combinations of the coordinate functions with unit determinant, i.e. those that preserve the skew-symmetric matrix, and (2) the screening term will continue to vanish for all linear re-combinations.
A natural way to select between the multiplicity of solutions is to prefer those functions F whose coordinate functions' gradients are orthogonal and of equal length. Incorporating this directly into the energy is challenging, as it would add quartic terms into the optimization.
Instead, we propose incorporating the Dirichlet energy, often used as a smoothness regularizer, defining the modified energy as the sum of the energy in Equation 10 and a Dirichlet regularizer:

Ẽ(F) = E(F) + β · (‖∇f_1‖² + ‖∇f_2‖²).

As the Dirichlet term is quadratic in F, this does not make the system fundamentally more difficult to solve.
We note that while the addition of the Dirichlet energy does introduce a preference for a function F = ( f 1 , f 2 ) whose coefficients' gradients are perpendicular and equal length, there remains an ambiguity due to the action of SO(2), similar to the ambiguity when defining smooth frames along curves [BWR * 08].
Intuition The idea behind this approach can be understood by considering the cross-product in R³ (formally the dual of the exterior product). Given a vector n ∈ R³, consider asking for a pair of vectors v_1, v_2 ∈ R³ whose cross-product equals n. There exist many such pairs of vectors and, to narrow the solution space, we consider the constrained optimization problem of minimizing ‖v_1‖² + ‖v_2‖² subject to v_1 × v_2 = n. Using the fact that ∥n∥ = ∥v_1∥ · ∥v_2∥ · sin(α), where α is the angle between v_1 and v_2, we get:

‖v_1‖² + ‖v_2‖² ≥ 2 · ‖v_1‖ · ‖v_2‖ = 2‖n‖ / sin(α) ≥ 2‖n‖.

This is minimized when sin(α) = 1 and ∥v_1∥ = ∥v_2∥ = √∥n∥.
That is, when the vectors are orthogonal and equal length.
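A quick numeric check of this claim (the vectors below are arbitrary choices):

```python
import numpy as np

# Among pairs with v1 × v2 = n, the orthogonal, equal-length pair has
# ||v1|| = ||v2|| = sqrt(||n||) and minimizes ||v1||^2 + ||v2||^2 = 2||n||.
n = np.array([0.0, 0.0, 4.0])
bound = 2 * np.linalg.norm(n)  # = 8

# The orthogonal, equal-length pair achieving the bound.
v1 = np.array([2.0, 0.0, 0.0])
v2 = np.array([0.0, 2.0, 0.0])
assert np.allclose(np.cross(v1, v2), n)
assert np.isclose(v1 @ v1 + v2 @ v2, bound)

# A skewed pair with the same cross product costs strictly more.
w1 = np.array([4.0, 0.0, 0.0])
w2 = np.array([1.0, 1.0, 0.0])
assert np.allclose(np.cross(w1, w2), n)
assert w1 @ w1 + w2 @ w2 > bound
```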
Relation to sPSR The Dirichlet energy term is not necessary in sPSR because there the gradient is explicitly constrained to match V S . In contrast, for higher co-dimensions the constraint is formulated only on the exterior product of the gradients.

Trimming the level-set
One of the challenges for co-dimension d̄ ≥ 2 is that the zero level-set of F may be a superset of the manifold M, with spurious geometric components outside a tubular neighborhood of M. (In the case of co-dimension d̄ = 1, sPSR solves for the indicator function of M, which does not have this problem.) Since the reconstruction process estimates the sampling density

ρ : R^n → R_{≥0}, we address this problem by using ρ to trim the level-set, discarding connected components of the level-set over which the (maximum) sampling density is low.

[Figure: reconstruction of a ( ,7) torus knot. The zero level-set is shown on the left, visualized as the union of dilated curves drawn in gray (components for which some part is near a region of high sampling density) and curves drawn in black (components for which no part is near a region of high sampling density). Looking at the level-sets of f_1 and f_2 individually (right two columns) it is not surprising that the spurious curves drawn in black appear. The target curve is highly oscillatory, so we expect the functions f_1 and f_2 to exhibit similar twisting to ensure that they intersect along the curve.]
This challenge further motivates the need for f_1 and f_2 to have equal-length and orthogonal gradients. If the gradients are orthogonal and equal length, then for a point p ∈ M on the manifold, any unit normal direction n, and small ε, we have:

‖F(p + ε · n)‖ = ε · ‖∇f_1|_p‖ + O(ε²) = ε · ‖∇f_2|_p‖ + O(ε²).

This means that as we (locally) move off the manifold at p, we can expect the deviation of F from zero to grow quickly and independently of the direction in which we move, so that the spurious geometry should not be too close to the target manifold.

General co-dimension
While this section focused on the particular case of d̄ = 2, the discussion generalizes directly to higher co-dimension.
Constructing the target anti-symmetric tensor field Given an oriented point set S ⊂ R^n × (R^n × ··· × R^n), we can construct the anti-symmetric tensor field V_S : R^n → ⋀^d̄ R^n by setting:

V_S(q) = ∑_{s∈S} κ_{s.p}(q) · s.n_1 ∧ ··· ∧ s.n_d̄.
Defining the energy As above, the goal can be stated as solving for the function F = (f_1, …, f_d̄) : R^n → R^d̄ minimizing:

E(F) = ‖∇f_1 ∧ ··· ∧ ∇f_d̄ − V_S‖² + α · ∑_{s∈S} ‖F(s.p)‖² + ···,

with the ellipses denoting the Dirichlet regularization energy.
Optimizing As above, this energy is multi-quadratic in the coordinate functions of F. Thus, it can be optimized by alternately fixing all but one coordinate function f i , so that the energy becomes quadratic in f i , reducing the optimization to a linear solve.
Complexity of the solve Noting that the anti-symmetric tensor field ∇f_1 ∧ ··· ∧ ∇f_d̄ is multi-linear in the coordinate functions, the energy E(F) is a polynomial of degree 2d̄ in the coordinate functions. Thus, finding the minimizer of E requires solving a system of polynomial equations of degree 2d̄ − 1, so that the problem becomes more difficult as the co-dimension is increased.
The case of d̄ = 1 For sPSR, the co-dimension is d̄ = 1, so that ⋀¹ R^n ≅ R^n and the exterior product acts as the identity. In this case the above expressions for the target vector-field and the energy reduce to those in Equations 2 and 3, demonstrating that the proposed approach generalizes sPSR. In particular, we have 2d̄ − 1 = 1, corroborating the established result that sPSR reduces surface reconstruction to the solution of a linear system of equations. We also note that in the case of co-dimension one, orientable manifolds necessarily have trivial normal bundles, so the requirement that the normal bundle be trivial does not introduce additional constraints.

Discretization and Solution
We next proceed to a discussion of discretization. We focus on the general problem in this section and discuss our implementation using the B-spline basis in Section 7.

Basis
To discretize the energy in Equation 12 we need to define a discrete space of real-valued functions F (for representing the coordinate functions f_1 and f_2) and a discrete space of skew-symmetric matrix fields W (for representing the constraints defined by V_S). We do this by choosing a basis {φ_i}_{i=1}^N to span F and then defining the basis spanning W in terms of the {φ_i}.
In particular, given a basis {φ_i}_{i=1}^N of (weakly differentiable) functions, we define the skew-symmetric matrix fields:

ω^{(i,j)} = ∇φ_i ∧ ∇φ_j.

By anti-symmetry, ω^{(i,j)} = −ω^{(j,i)}, so we need only consider functions ω^{(i,j)} with i < j. Furthermore, we only need to consider indices (i, j) for which the supports of φ_i and φ_j overlap (otherwise ω^{(i,j)} is everywhere zero). Thus, we define:

I = { (i, j) | i < j, supp(φ_i) ∩ supp(φ_j) ≠ ∅ },

and take {ω_i}_{i∈I} to be the basis spanning W.
The advantage of this formulation is that the exterior product of the gradients of two functions in F resides in W:

∇(∑_i a_i φ_i) ∧ ∇(∑_j b_j φ_j) = ∑_{(i,j)} a_i · b_j · ω^{(i,j)} ∈ W.

For simplicity, we will refer to f = (f_1, …, f_N)^⊤ ∈ R^N equivalently as the coefficients of the function ∑_i f_i · φ_i and as the function itself. Similarly, we will refer to w ∈ R^{|I|} as both the coefficients of a skew-symmetric matrix field and the field itself.

Mass and stiffness matrices
Given the bases {φ_i}_{i=1}^N and {ω_i}_{i∈I}, we obtain the mass and stiffness matrices, M_F, S_F ∈ R^{N×N} and M_W ∈ R^{|I|×|I|}, by integrating the products of basis functions (and their gradients):

(M_F)_{ij} = ⟨φ_i, φ_j⟩,  (S_F)_{ij} = ⟨∇φ_i, ∇φ_j⟩,  (M_W)_{ij} = ⟨ω_i, ω_j⟩.

Additionally, given the samples S, we set M̃_F ∈ R^{N×N} to be the mass matrix defined by the samples:

(M̃_F)_{ij} = ∑_{s∈S} φ_i(s.p) · φ_j(s.p).

The exterior product
Given F = (f_1, f_2) ∈ R^{N×2}, we denote by W_F ∈ R^{|I|} the exterior product of the gradients of f_1 and f_2:

(W_F)_{(i,j)} = f_1^i · f_2^j − f_1^j · f_2^i,  (i, j) ∈ I,

where f_1^i denotes the i-th coefficient of f_1. Lastly, we denote by [F]_1 ∈ R^{|I|×N} (resp. [F]_2 ∈ R^{|I|×N}) the matrix taking a function f ∈ R^N and returning the exterior product of the gradient of f_1 with the gradient of f (resp. the gradient of f with the gradient of f_2):

[F]_1 · f = W_{(f_1, f)},  [F]_2 · f = W_{(f, f_2)}.

A sign is required in the entries of these matrices to account for the position of the gradient of f within the exterior product.
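A sketch of the discrete exterior product and its linearization. The overlap pattern below is hypothetical (neighbouring basis functions overlap), standing in for the actual B-spline supports:

```python
import numpy as np

# Discrete exterior product of two coefficient vectors over an index set
# I = {(i, j) : i < j, supports overlap}; here we assume basis functions
# i and j overlap iff |i - j| <= 1 (a stand-in overlap pattern).
N = 5
I = [(i, j) for i in range(N) for j in range(i + 1, N) if j - i <= 1]

def W(f1, f2):
    # Coefficient (i, j): f1_i * f2_j - f1_j * f2_i.
    return np.array([f1[i] * f2[j] - f1[j] * f2[i] for (i, j) in I])

def F1(f1):
    # The linear map [F]_1 : f -> W(f1, f), obtained by fixing f1; note
    # the signs accounting for the slot f occupies in the product.
    M = np.zeros((len(I), N))
    for r, (i, j) in enumerate(I):
        M[r, j] += f1[i]
        M[r, i] -= f1[j]
    return M

f1, f2 = np.arange(1.0, N + 1), np.ones(N)
assert np.allclose(F1(f1) @ f2, W(f1, f2))  # linearization is consistent
assert np.allclose(W(f1, f1), 0.0)          # anti-symmetry: f ∧ f = 0
```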

Prolongation
Next, we consider the case of nested real-valued function spaces, F̂ ⊂ F, with bases {φ̂_1, …, φ̂_N̂} and {φ_1, …, φ_N}, for which the nesting is given in terms of a prolongation matrix P_F ∈ R^{N×N̂}. We show that despite the transition from a linear to a multilinear system, the classical Galerkin approach carries over in a straightforward manner (when solving in a strictly coarse-to-fine fashion).
As above, the basis for scalar fields {φ̂_i}_{i=1}^{N̂} defines a basis for skew-symmetric matrix fields {ω̂_i}_{i∈Î}. Furthermore, due to the bilinear relationship between the bases, the associated space of skew-symmetric matrix fields Ŵ is contained in W, with the nesting relationship given by the prolongation matrix P_W ∈ R^{|I|×|Î|}. Formally, this leverages the fact that for any linear map P : F̂ → F there is a canonical multilinear map ⊗^k P : ⊗^k F̂ → ⊗^k F. In what follows, we will overload notation, dropping the superscripts F and W in the mass, stiffness, and prolongation matrices when the distinction between F and W is contextually clear.
Observation For all F̂ = (f̂_1, f̂_2) ∈ R^{N̂×2}, prolonging and then taking the exterior product of the gradients is equivalent to taking the exterior product of the gradients and then prolonging:

W_{(P_F · f̂_1, P_F · f̂_2)} = P_W · W_{(f̂_1, f̂_2)}.   (13)

Discretized energy
Given the target skew-symmetric matrix field V_S, the problem of solving for the coefficient vectors f_1, f_2 ∈ R^N becomes the problem of minimizing:

E(F) = W_F^⊤ · M_W · W_F − 2 · b^⊤ · W_F + α · (f_1^⊤ · M̃ · f_1 + f_2^⊤ · M̃ · f_2) + β · (f_1^⊤ · S · f_1 + f_2^⊤ · S · f_2),   (14)

where b ∈ R^{|I|} is the dual representation of the target skew-symmetric matrix field:

b_i = ⟨ω_i, V_S⟩.

We note that although ∥V_S∥² is not represented in our discretization (the target skew-symmetric field V_S need not reside in W), its value does not affect the optimization since it is independent of F.
Hierarchical system Given nested function spaces F̂ ⊂ F, we define the energy at the coarser resolution in the usual way, prolonging to the finer resolution and evaluating Equation 14:

Ê(F̂) = E(P · F̂).

Leveraging the commutativity of the prolongation with the exterior product (Equation 13) and the fact that the multilinear mass matrix still satisfies the Galerkin property (that the coarse mass matrix is M̂ = P^⊤ · M · P), it follows that:

Ê(F̂) = W_F̂^⊤ · (P_W^⊤ · M_W · P_W) · W_F̂ − 2 · (P_W^⊤ · b)^⊤ · W_F̂ + ···.

Comparing to Equation 14, we see that this is the energy we would have computed at the coarse resolution if we defined the coarse constraint vector as the restriction of the finer one, b̂ = P^⊤ · b. That is, the Galerkin formulation designed for the linear setting extends naturally to the multi-linear setting.
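The Galerkin bookkeeping can be checked on a quadratic model problem; the matrices below are random stand-ins for the mass and prolongation matrices:

```python
import numpy as np

# Galerkin sanity check on a quadratic model energy: evaluating the fine
# energy on prolonged coefficients equals the coarse energy built from
# M_hat = P^T M P and b_hat = P^T b.
rng = np.random.default_rng(1)
Nf, Nc = 8, 4
P = rng.normal(size=(Nf, Nc))              # stand-in prolongation
A = rng.normal(size=(Nf, Nf))
M = A.T @ A                                # stand-in SPD mass matrix
b = rng.normal(size=Nf)                    # stand-in dual constraints

xc = rng.normal(size=Nc)                   # coarse coefficients
fine = (P @ xc) @ M @ (P @ xc) - 2 * b @ (P @ xc)
coarse = xc @ (P.T @ M @ P) @ xc - 2 * (P.T @ b) @ xc
assert np.isclose(fine, coarse)
```

The same identity, applied to the fitting term through Equation 13, is what lets the constraints b be restricted once to the coarsest level before relaxation begins.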

Relaxing the system
We leverage the fact that fixing f_1 (resp. f_2) the energy becomes quadratic in f_2 (resp. f_1). For example, fixing f_1, the energy in f_2, up to a constant term, becomes:

E(f_2) = f_2^⊤ · ([F]_1^⊤ · M_W · [F]_1 + α · M̃ + β · S) · f_2 − 2 · ([F]_1^⊤ · b)^⊤ · f_2,

giving the optimal f_2 as the solution of:

([F]_1^⊤ · M_W · [F]_1 + α · M̃ + β · S) · f_2 = [F]_1^⊤ · b.

Algorithm 1 summarizes our approach for refining the estimate of the solution F = (f_1, f_2).

Hierarchical relaxation
Given nested spaces of scalar fields F_0 ⊂ ··· ⊂ F_L with associated prolongation matrices, we define corresponding nested spaces of skew-symmetric matrix fields W_0 ⊂ ··· ⊂ W_L with their associated prolongation matrices. Then, we relax the system in a coarse-to-fine manner: first restricting the dual constraints b to the coarsest resolution, then proceeding as in the upstroke of the standard (linear) multigrid algorithm [BHM00], relaxing at the current level and up-sampling the estimated solution to serve as an initial guess for the next finer level, until the system is relaxed at the finest level. Algorithm 2 summarizes our approach for finding the minimizer of the energy.

14:   P_F^ℓ ← ScalarProlongation(2^ℓ)
15:   F^{ℓ+1} ← P_F^ℓ · F^ℓ
16:   return F^L

As the energy is bi-quadratic, the implementation of a full multigrid solver (i.e. fine-to-coarse, in addition to coarse-to-fine) is more challenging. We show in the Supplemental that this too can be done, allowing for the implementation of standard multi-cycle V-, W-, and F-cycle solvers.

Choosing a basis
We discretize the space of scalar fields using the first-order B-spline basis indexed over the corners of a regular grid. We choose this basis for two reasons.
1. The regularity of the grid facilitates indexing and enables all of the linear operators (with the exception of the mass matrix M defined by the samples) to be defined using a stencil, reducing the cost of system set-up.
2. The basis supports a hierarchical representation, with the space of scalar fields defined over a grid of resolution R × · · · × R nesting within the space of scalar fields defined over a grid of resolution 2R × · · · × 2R.

Denoting by i = (i_1, . . . , i_d) (resp. j = (j_1, . . . , j_d)) the coordinates of a corner of a 2R × · · · × 2R (resp. R × · · · × R) grid, the prolongation matrix is defined as:
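For concreteness, here is a sketch of this prolongation in numpy; the 1D stencil places weight 1 on the coincident fine corner and 1/2 on its two neighbors, and the multi-dimensional matrix is the tensor (Kronecker) product of 1D factors. The function names are ours, not the paper's:

```python
import numpy as np

def prolongation_1d(R):
    """Prolongation from a grid with R+1 corners to one with 2R+1 corners
    for first-order B-splines: coarse corner j contributes with weight 1 to
    fine corner 2j and with weight 1/2 to fine corners 2j-1 and 2j+1."""
    P = np.zeros((2 * R + 1, R + 1))
    for j in range(R + 1):
        P[2 * j, j] = 1.0
        if 2 * j - 1 >= 0:
            P[2 * j - 1, j] = 0.5
        if 2 * j + 1 <= 2 * R:
            P[2 * j + 1, j] = 0.5
    return P

def prolongation_nd(R, dims):
    """Tensor-product prolongation for a dims-dimensional grid."""
    P = prolongation_1d(R)
    out = P
    for _ in range(dims - 1):
        out = np.kron(out, P)
    return out
```

As a sanity check, prolonging a constant coarse function reproduces the constant on the fine grid (the weights form a partition of unity).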

Defining the dual constraints
Given the samples, we define the target skew-symmetric matrix field V_S by first multi-linearly splatting the samples' skew-symmetric matrices into a grid. That is, for each s ∈ S we find the grid cell containing s.p and distribute the value s.n_1 ∧ s.n_2 to the corners of the cell using the standard multi-linear interpolation weights. Then we perform two passes of smoothing, replacing the value at each corner with the average of the values in its one-ring. Finally, we set V_S to be the piecewise-constant skew-symmetric matrix field whose value within a cell is the average of the values at the cell's corners. Thus, considering each cell in turn, the computation of the dual constraints b ∈ R^{|I|} reduces to the integration of multi-quadratic polynomials (the dot-product of ω_i with a constant skew-symmetric matrix) over the grid cells.
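A minimal numpy sketch of the multi-linear splatting step, omitting the two smoothing passes and flattening the skew-symmetric matrices to plain vectors; the domain is assumed to be [0,1]^d:

```python
import numpy as np

def splat(points, values, R):
    """Multi-linearly splat per-sample values into the corners of an
    R x ... x R grid over [0, 1]^d: find the cell containing each sample and
    distribute its value to the 2^d cell corners with multi-linear weights."""
    d = points.shape[1]
    grid = np.zeros((R + 1,) * d + values.shape[1:])
    for p, v in zip(points, values):
        c = np.minimum((p * R).astype(int), R - 1)   # cell containing the sample
        t = p * R - c                                # local coordinates in the cell
        for corner in np.ndindex(*(2,) * d):
            w = np.prod([t[k] if corner[k] else 1.0 - t[k] for k in range(d)])
            grid[tuple(c + np.array(corner))] += w * v
    return grid
```

Since the 2^d corner weights sum to one, splatting conserves the total splatted value.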

Accounting for non-uniform sampling
The splatting of samples used to define V_S takes sampling density into account [KBH06]. We compute a piecewise-constant density estimator ρ as above: for each sample s ∈ S we splat a value of 1 into the corners of the grid, perform two passes of one-ring averaging, and set the value in a cell to the average of the corners' values. In constructing the target skew-symmetric matrix field, we rescale s.n_1 ∧ s.n_2 to have Frobenius norm equal to 1/ρ(s.p), an estimate of the d-dimensional volume associated with sample s, before splatting the exterior product into the grid.
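As a simple stand-in for the splat-and-smooth density estimator, per-sample weights 1/ρ(s.p) could be computed by counting samples per cell; this is our illustrative simplification, not the paper's exact estimator:

```python
import numpy as np

def sample_weights(points, R):
    """Per-sample weight 1/rho(s.p), with rho estimated as the number of
    samples falling in the sample's grid cell (domain assumed [0, 1]^d).
    A coarser stand-in for the splat-and-smooth density estimator."""
    cells = np.minimum((points * R).astype(int), R - 1)
    counts = {}
    for c in map(tuple, cells):
        counts[c] = counts.get(c, 0) + 1
    return np.array([1.0 / counts[tuple(c)] for c in cells])
```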

Solving the system
We solve for F using the hierarchical approach described in Algorithm 2. In Line 12 we invoke Algorithm 1 to alternately update the coordinate functions within a level. A naïve implementation would use a direct solver. We could leverage the hierarchy, using a standard V-cycle solver [BHM00] to efficiently approximate the solutions of the linear systems. However, we have observed that even this is not necessary as we already have a good initial guess for F ℓ obtained by prolonging the estimated solution from level ℓ − 1 (Line 15). Thus, for a given level 0 ≤ ℓ ≤ L, the relaxation in Algorithm 1 can be done by performing a small number of Gauss-Seidel iterations. (We accelerate the computation by using multi-coloring to parallelize the relaxations.)
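The idea of multi-colored Gauss-Seidel can be illustrated on a generic linear system: with a proper coloring, unknowns of one color do not couple, so each half-sweep can run in parallel (vectorized here). This is an illustration, not the paper's grid-specific coloring:

```python
import numpy as np

def two_color_gauss_seidel(A, b, x, sweeps=4):
    """Gauss-Seidel relaxation with a red/black two-coloring of the unknowns
    (even and odd indices). For matrices with no coupling inside a color,
    e.g. tridiagonal ones, this is exact red-black Gauss-Seidel."""
    n = len(b)
    colors = [np.arange(0, n, 2), np.arange(1, n, 2)]
    D = np.diag(A)
    for _ in range(sweeps):
        for idx in colors:
            # x_i <- (b_i - sum_{j != i} A_ij x_j) / A_ii, for all i of one color
            r = b[idx] - A[idx] @ x + D[idx] * x[idx]
            x[idx] = r / D[idx]
    return x
```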

Extracting the level-set
Extracting the level-set from the regular grid is more complex than in the standard Marching Cubes case [LC87, KKDH07] because the basis functions are multi-linear within a grid cell. We resolve this by using a piecewise-linear approximation: partitioning grid cells into simplices, sampling the original function at the vertices of the simplices, and linearly interpolating within a simplex. Our recursive strategy is an implementation of the generalization of an algorithm for 2-surfaces in 4 dimensions [WB96]. It is possible to resolve the recursion [Min03]; and in higher dimensions it would be beneficial to use triangulations with fewer simplices [BKW21]. We found that our approach was sufficiently efficient to explore the results with negligible computation times.

Simplicial decomposition
To partition each grid cell we proceed by induction on cell dimension. For dimensions k = 0 and k = 1 nothing needs to be done, since the k-dimensional cells of the grid are already k-dimensional simplices. Starting with dimension k = 2, we introduce an additional vertex at the center of each k-dimensional cell and connect the previously obtained (k − 1)-dimensional boundary simplices to the center. For example, with k = 2 we add the center of each 2-dimensional face to the grid and then triangulate the face by connecting its boundary edges/simplices to the face center, obtaining four triangles. Similarly, for each 3D cube, we introduce the center and then iterate over the 2D faces, connecting the four previously obtained triangles of each face to the cube's center to get 24 tetrahedra.
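The induction can be sketched directly in Python: vertices are coordinate tuples, cells are unit cubes, and each facet's decomposition is coned to the cell center. This is our sketch of the construction described above:

```python
def decompose_cube(k):
    """Inductively decompose the unit k-cube into k-simplices: edges stay a
    single simplex; for k >= 2, decompose each of the 2k facets recursively
    and cone the resulting (k-1)-simplices to the cube's center."""
    if k == 1:
        return [[(0.0,), (1.0,)]]
    simplices = []
    center = (0.5,) * k
    for axis in range(k):                 # a facet fixes one coordinate...
        for value in (0.0, 1.0):          # ...to either 0 or 1
            for s in decompose_cube(k - 1):
                lifted = [p[:axis] + (value,) + p[axis:] for p in s]
                simplices.append(lifted + [center])
    return simplices
```

decompose_cube(2) yields the four triangles of a square and decompose_cube(3) the 24 tetrahedra of a cube.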

Marching simplices
Having a piecewise-linear representation of the functions over a simplicial complex we obtain the level-set by inducting on the dimension of the simplices.
Base case k = d̄: We iterate over all k-simplices (triangles) in the simplicial complex, embed each in R^d̄ using the functions' values at the vertices, and test whether the embedded simplex contains the origin. If it does, we introduce an iso-vertex at the location on the simplex that maps to the origin.
Induction step: For each (k + 1)-dimensional simplex, we iterate over its k-dimensional faces, merging the associated (k − d̄)-dimensional level-set simplices into (k − d̄ + 1)-dimensional level-set simplices. As the intersection of the level-set with a simplex is convex, merging can be done by choosing a vertex on one (k − d̄)-dimensional level-set simplex and connecting it to all (k − d̄)-dimensional level-set simplices that do not contain that vertex.
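For co-dimension d̄ = 2, the base case amounts to testing whether a triangle, embedded in R² by the values of F at its vertices, contains the origin; the barycentric coordinates of the origin then give the iso-vertex. A numpy sketch, in our own formulation:

```python
import numpy as np

def iso_vertex(verts, vals):
    """Base case of marching simplices for co-dimension 2: 'verts' holds the
    three vertices of a triangle, 'vals' the values of F = (f1, f2) at those
    vertices. If the linear interpolant's image contains the origin, return
    the pre-image of the origin; otherwise return None."""
    A = np.vstack([np.asarray(vals, float).T, np.ones(3)])  # barycentric system
    try:
        bary = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))
    except np.linalg.LinAlgError:
        return None                      # degenerate embedding
    if np.any(bary < 0.0):
        return None                      # origin lies outside the triangle
    return bary @ np.asarray(verts, float)
```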

Results and Evaluation
We now assess the empirical behavior of our solver. We start by considering results when reconstructing curves in 3D and surfaces in 4D. Next, we analyze the convergence of the solver. We conclude by examining the robustness in the presence of non-uniform sampling and noise. Source code implementing the reconstruction of co-dimension two manifolds has been made publicly available at https://github.com/mkazhdan/ExteriorPoissonRecon.
With the exception of the curve in Figure 1, point samples are obtained by uniformly randomly sampling parameter space (with 1024 samples for curves and 1024 × 1024 samples for surfaces), and visualizations show the entire zero level-set, including connected components over which the sampling density is low (shown in black). Curves are reconstructed at a resolution of 64 × 64 × 64; surfaces at a resolution of 32 × 32 × 32 × 32. Connected components are removed when all points on the component have a sampling density of less than two samples per cell.

Curve and surface reconstruction
Visualizations of various (p, q) torus-knots are shown in Figure 3 and several Hopf tori are visualized (after stereographic projection from the 3-sphere into Euclidean 3-space) in Figure 4.
For the curves, our method correctly reconstructs the geometry even when the curves consist of multiple disconnected components (gcd(p, q) > 1) and are knotted (p, q ≠ 1). While spurious level-set components are generated for the more complex knots, they are far from the input samples and are easily trimmed off.
For the tori we visualize the Gaussian curvature, computed per vertex in 4D as the angular deficit divided by the vertex area. We expect the curvature to be zero because the target surfaces are intrinsically flat. In general, we find this corroborated in our visualizations, though the tori exhibit more curvature variation as the number of lobes and the frequency of the pre-image curve on the sphere increase, likely due to the low reconstruction resolution.

Computational complexity
Using a hierarchical solver with a fixed number of relaxations per level, the cost of the solve is linear in the number of non-zero entries in the system matrix. As we use a first-order B-spline basis to represent scalar functions, the number of non-zeros is proportional to the number of basis functions. Thus, using a resolution of R in d dimensions, the complexity (both temporal and spatial) of reconstructing the geometry is O(R d ). Table 1 shows the empirical performance for reconstructions in 3D and 4D at different resolutions, which matches the theoretical complexity.

Solver performance
Unlike sPSR, the case of co-dimension larger than one requires solving a multi-quadratic system of equations. We examine the convergence of the solver by considering its behavior with respect to different optimization schemes and its stability with respect to choice of initial guess.

Convergence
We reduce the minimization of a multi-quadratic energy to the solution of linear systems by alternately locking one coordinate function and solving for the other. As the energy is not quadratic, the updating of the coordinate functions requires multiple passes (parameter K in Algorithm 2).
While a naïve approach would only iterate at the finest resolution, we found this to be ineffective for two reasons. First, as with generic iterative methods such as Jacobi relaxation, convergence tends to be slow, with the local updates requiring many iterations before the constraints at one point affect the solution at a point far away. More significantly, since the energy we minimize is not convex, naïve iteration can leave the solution trapped at local minima.
We argue that both problems can be addressed by leveraging the hierarchical structure of the B-spline basis. As with traditional multigrid, this improves convergence because iterations at coarser resolutions have a more global effect. More significantly, the down-sampling of the system to coarser resolutions smooths out the energy landscape, allowing the solver to find a good solution without getting trapped in local minima with narrow basins of convergence. (While this does not ensure that we find the global minimum, it encourages the solution to be stable, a property we validate below.) As the solution at a coarser level provides a good initial guess for the solution at the next finer level, we argue that the full power of a linear solver is not necessary and the finer solution can be obtained with a few passes of Gauss-Seidel relaxation. To this end we compare three implementations.
• Single-solve: Solve for the coordinate functions at the finest level of the hierarchy.
• Multi-solve: Processing the levels of the hierarchy in a coarse-to-fine manner, solve for the coordinate functions and prolong them to the next level.
• Multi-relax: Processing the levels of the hierarchy in a coarse-to-fine manner, relax the coordinate functions and prolong them to the next level.
The first two implementations require solving linear systems. As a direct solver would be too expensive, we take advantage of the multi-resolution hierarchy, using a standard V-cycle solver instead.
Examining the non-hierarchical solution (Single-solve), we find that alternating only at the finest resolution does not converge efficiently and tends to converge to a solution with a larger energy (i.e. it gets trapped at a local minimum). In contrast, both hierarchical solutions converge at roughly the same rate. As expected, relaxing the linearized problem with a V-cycle solver (Multi-solve) converges in slightly fewer iterations than relaxing the linearized problem only within a single level (Multi-relax).
We also note that the hierarchical implementation of Multi-solve results in a good initial guess for the V-cycle solver, so that convergence does not require a large number of cycles. In contrast, for Single-solve, for which there is no initial guess, convergence improves as the number of V-cycles is increased.
Considering the energy as a function of time (right), we find a significant speed-up when using the Multi-relax implementation.
While there is an overhead to using a V-cycle solver, we believe that the increased computational cost is mostly due to the fact that the construction of the linear systems in Algorithm 1 requires multiplying and summing large matrices, a problem exacerbated by the difficulty of parallelizing these operations for sparse matrices.
We also explored gradient-descent and (quasi-)Newton methods. However, we found the behavior to be similar to that of the Single-solve approach: the methods required many iterations and would only converge to a local minimum that did not, in general, provide a good reconstruction. It is only by integrating a hierarchical approach that we were able to consistently obtain high-quality results.

Stability
As the energy is not convex, the solution we obtain may depend on the choice of the random initialization (Line 6, Algorithm 2). To assess this, we reconstructed the (8, 9) torus-knot using twenty different initial guesses and computed the Hausdorff distance between each pair of reconstructed geometries. The distance never exceeded 0.11 voxels, indicating that all runs converged to essentially the same solution. (The maximum distance was even smaller for the Clifford torus, likely due to the simplicity of the geometry.)
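The stability measure is the symmetric Hausdorff distance between reconstructions, which for point sets (e.g. mesh vertices) can be computed directly; a brute-force numpy sketch:

```python
import numpy as np

def hausdorff(P, Q):
    """Symmetric Hausdorff distance between two point sets: the larger of
    the two directed distances max_p min_q |p - q| and max_q min_p |q - p|."""
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())
```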

Robustness
We evaluate the behavior of our solver with respect to non-uniform sampling and different types of noise.

Non-uniform sampling
Throughout the paper, results are shown with samples generated by uniformly randomly sampling parameter space. To evaluate the performance of our reconstruction algorithm in the presence of non-uniform sampling we partitioned the embedding space with a hyperplane. Then, for a given candidate sample (uniformly randomly sampled in parameter space), we randomly decide whether to keep the sample, with a probability depending on which side of the hyperplane the sample lies. Figure 6 shows visualizations of the point samples generated for a curve consisting of two linked circles (top) and the associated reconstructions (bottom) for different ratios of samples on the two sides of the hyperplane. Even with a significant change in sampling density across the hyperplane our method correctly reconstructs the linked circles, without generating spurious geometry.
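The thinning procedure can be sketched as follows; the function name and the hyperplane parameterization are our own:

```python
import numpy as np

def thin_by_halfspace(points, normal, offset, keep_prob, rng):
    """Non-uniform thinning: keep every sample on the positive side of the
    hyperplane {x : <normal, x> = offset}; keep samples on the negative side
    only with probability keep_prob."""
    positive = points @ normal - offset > 0
    keep = positive | (rng.random(len(points)) < keep_prob)
    return points[keep]
```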

Positional and directional noise
To assess the robustness of our method in the presence of positional noise, we displaced each sample's position by a random 3D offset drawn from a normal distribution with standard deviation σ_p (in units of grid cell width). For directional noise, we rotated each sample's two normals using (separate) random rotations with angles drawn from a normal distribution with standard deviation σ_α. A similar resilience to positional noise is observed when reconstructing surfaces in 4D. From left to right, the inset shows the stereographic projections of 10,000 noisy samples from a 3-lobed Hopf torus, the computed Tangential Delaunay Complex, and our reconstruction at resolution 16 × 16 × 16 × 16. The standard deviation of the positional noise corresponds to approximately half a voxel width. Because it is interpolatory, the Tangential Delaunay Complex reproduces the noise, outputting a non-manifold mesh for which almost 80% of the (projected) faces self-intersect. In contrast, our method generates a smooth, manifold mesh without any self-intersections. (As stereographic projection is not isometric, the samples are not uniform when mapped to 3D.)
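The noise model can be sketched as follows, using Rodrigues' rotation formula about a uniformly random axis; the data layout (plain arrays for positions and the two normals) is our own, not the paper's:

```python
import numpy as np

def perturb(points, n1, n2, sigma_p, sigma_a, rng):
    """Positional noise: add an isotropic Gaussian offset (std sigma_p) to
    each position. Directional noise: rotate each normal, independently,
    about a random axis by an angle drawn from N(0, sigma_a)."""
    points = points + rng.normal(0.0, sigma_p, points.shape)

    def rotate(normals):
        out = np.empty_like(normals)
        for i, v in enumerate(normals):
            axis = rng.normal(size=3)
            axis /= np.linalg.norm(axis)
            a = rng.normal(0.0, sigma_a)
            # Rodrigues' formula: rotate v about 'axis' by angle a
            out[i] = (v * np.cos(a) + np.cross(axis, v) * np.sin(a)
                      + axis * np.dot(axis, v) * (1.0 - np.cos(a)))
        return out

    return points, rotate(n1), rotate(n2)
```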

Spot noise
Finally, we evaluate the robustness of our method in the presence of spot noise by randomly synthesizing samples within the samples' bounding box. Figure 9 shows the results of one such experiment for the reconstruction of the two linked circles. As the figure shows, our approach correctly reconstructs the geometry even as the number of noisy samples matches the number of curve samples, with spurious geometric components generated away from the samples removed using the density estimator. Only in the cases of extreme noise (right two columns) is spurious geometry not trimmed off, due to high sampling density away from the curve.

Discussion
We presented an extension of screened Poisson Surface Reconstruction, supporting the reconstruction of manifolds in general dimension and co-dimension. In doing so, we have shown that sPSR can be understood as the simplest instance of a broader class of variational problems that reduce manifold reconstruction to the solution of a polynomial system of equations, with polynomial degree growing with co-dimension. We have demonstrated that the approach works in co-dimension d̄ = 2, robustly reconstructing complex curves in 3D and surfaces in 4D.
As our approach solves for an R^d̄-valued map whose zero level-set is the desired manifold, it is restricted to reconstructing manifolds with trivial normal bundles. However, knowing that a manifold has a trivial normal bundle does not specify what that trivialization is. Our approach takes in an anti-symmetric tensor at each sample point, describing the orientation of the manifold without prescribing the trivialization itself. As such, in solving for the implicit function, we implicitly select a trivialization of the normal bundle. In this context, one can view the incorporation of the Dirichlet energy as biasing the reconstruction towards the selection of a smooth trivialization.

Limitations: Trimming
As discussed in Section 5.1, given a closed manifold with trivial normal bundle, we are only guaranteed the existence of an implicit function F : R^n → R^(n−d) containing the manifold M as a connected subset of its zero level-set. The implicit function F may have additional level-set components away from the target manifold, necessitating trimming.
The trimming value should depend on the (maximum) expected sampling density, which itself depends on the number of samples and the resolution of the grid. In our experiments we assume that (for each connected component of the manifold) there are at least two samples falling within the same grid cell and set the parameter to 2. This correctly removes all spurious geometry for results in Figures 1-8 and most of the results in Figure 9.

Limitations: Dirichlet Regularization
While empirically we have found that the regularizer in Equation 12 improves the quality of reconstruction, in principle it could preclude the generation of a good solution. This could be the case when the manifold exhibits complex twisting, requiring the coordinate functions of F to change quickly as well, a property in opposition to Dirichlet regularization.