In this section, we present our method for mapping imaging data to an arbitrary computational grid. The approach establishes an efficient, automatic, geometric relationship between a source 3D image (IM) and an arbitrary target unstructured grid (UG) of polyhedra. Here, we assume that the target grid and image occupy approximately the same geometrical space; that is, any linear or nonlinear warping necessary to bring image and grid into approximate alignment has been performed upstream of the mapping. We do not assume that the target grid is completely embedded or ‘surrounded’ by the image, so the approach is applicable to images that represent a subset of the grid geometry or vice versa. Clearly, there can be no assumption that the image and target grids exactly or even closely share a boundary. Geometry in an image is implicit, whereas the geometry of a computational grid is explicit. Typically, Marching Cubes is applied to the image to extract the boundaries of the object, resulting in a staircased geometry. Although more advanced methods can produce a relatively smooth boundary from the image, construction of the target grid will typically include several iterations of mesh adaptivity and smoothing. Thus, even in the best case, there will be a difference between the boundaries of IM and UG on the order of the dimensions of one voxel. These differences are to be expected.
In brief, the approach consists of computing a sparse matrix of intersection volumes $V_{ij}$ between the jth voxel of the image and the ith element of the computational mesh in $\mathcal{O}(N)$ time, where N is the number of nonzero $V_{ij}$. To efficiently find the local voxel neighborhood of a given mesh element, we first identify, one element at a time, those voxels that occupy the same physical space as the minimum bounding box of the polyhedron. This neighborhood is winnowed to just those voxels that actually intersect the polyhedron by comparing the element coordinates against the planar half spaces represented by the faces of each voxel in the neighborhood. For each voxel that does intersect the mesh element, the polyhedron of intersection between the voxel half spaces and the mesh element is efficiently computed, and its volume is added to $V_{ij}$. For images that hold multiple fields, for example, components of velocity or fiber angle, this operation needs to be performed only once. Thereafter, with the sparse matrix of overlap volumes known, a sparse matrix–vector multiply can be rapidly performed once for each field. It is important to note that the algorithm requires no a priori mapping between the image and the grid. With general biomedical geometries, such an a priori mapping would be clearly impractical, if not impossible.
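The pipeline above can be sketched as follows. This is a simplified illustration, not the paper's implementation: the mesh elements are taken to be axis-aligned boxes so that the voxel–element intersection volume has a closed form (the paper handles general polyhedra via half-space clipping), and all function names are our own. It shows the two stages: building the sparse overlap-volume matrix once via a bounding-box index "hash", then applying it as a sparse matrix–vector product, once per field.

```python
# Sketch (assumed names, simplified geometry): overlap-volume matrix for
# axis-aligned box elements over a uniform voxel grid, then a sparse
# matrix-vector apply per field.
import math

def build_overlap(elements, origin, spacing, dims):
    """elements: list of ((xmin,ymin,zmin),(xmax,ymax,zmax)) boxes.
    Returns {element i: {flattened voxel index j: intersection volume}}."""
    V = {}
    for i, (lo, hi) in enumerate(elements):
        # "hash": voxel index range covered by the element's bounding box
        i0 = [max(0, math.floor((lo[a] - origin[a]) / spacing[a])) for a in range(3)]
        i1 = [min(dims[a] - 1, math.floor((hi[a] - origin[a]) / spacing[a])) for a in range(3)]
        row = {}
        for ix in range(i0[0], i1[0] + 1):
            for iy in range(i0[1], i1[1] + 1):
                for iz in range(i0[2], i1[2] + 1):
                    vlo = (origin[0] + ix * spacing[0],
                           origin[1] + iy * spacing[1],
                           origin[2] + iz * spacing[2])
                    vhi = tuple(vlo[a] + spacing[a] for a in range(3))
                    # closed-form box-box intersection volume
                    vol = 1.0
                    for a in range(3):
                        vol *= max(0.0, min(hi[a], vhi[a]) - max(lo[a], vlo[a]))
                    if vol > 0.0:
                        row[ix + dims[0] * (iy + dims[1] * iz)] = vol
        V[i] = row
    return V

def map_field(V, f_im):
    """Volume-weighted average of voxel values per element; the normalizer
    equals |UG_i| when the element is fully embedded in the image."""
    f_ug = []
    for i in sorted(V):
        w = sum(V[i].values())
        f_ug.append(sum(vol * f_im[j] for j, vol in V[i].items()) / w if w else 0.0)
    return f_ug
```

With the overlap matrix in hand, mapping a second field (say, another velocity component) is just another call to `map_field`, which is the point of separating the two stages.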
Let $f_{IM}(x)$ be the imaging-based field, where $\Omega_{IM}$ is the imaging domain defined by the voxels $IM_j$. For now, let us assume that f is a piecewise constant field of floating point values. Special treatment is required when dealing with integer-valued fields such as cell type, which will be discussed next. We wish to create a similar piecewise constant field, $f_{UG}(x)$, on the domain $\Omega_{UG}$, defined by the polyhedra of the computational grid. In general, we are only interested in a subset of IM, roughly corresponding to the geometry of UG. That subset, which we denote the preimage, is of course unknown at the outset. In the presentation, for simplicity, we drop any explicit reference to the subset and let IM stand for the preimage. Let $\chi^{IM}_j(x)$ be the characteristic function for voxel $IM_j$, which is defined by

$$\chi^{IM}_j(x) = \begin{cases} 1, & x \in IM_j \\ 0, & \text{otherwise.} \end{cases} \qquad (1)$$
Similarly, let $\chi^{UG}_i(x)$ be the characteristic function for the computational element $UG_i$. In this notation, $f_{IM}(x) = \sum_j f^{IM}_j \chi^{IM}_j(x)$ and $f_{UG}(x) = \sum_i f^{UG}_i \chi^{UG}_i(x)$. This conveniently defines $f_{IM}(x) \equiv 0$, $x \notin \Omega_{IM}$ and $f_{UG}(x) \equiv 0$, $x \notin \Omega_{UG}$. For our map to be exactly conservative,

$$\int_{\Omega} f_{UG}(x)\,dx = \int_{\Omega} f_{IM}(x)\,dx. \qquad (2)$$
Clearly, if $f_{UG}(x) = f_{IM}(x)\ \forall x$, this would be satisfied. However, this equality will never hold exactly because an image has a stepped, voxelated boundary, whereas that of the unstructured grid will typically be smooth. Nevertheless, we can force $f_{UG} = f_{IM}$ in the weak sense by requiring

$$\int_{\Omega} f_{UG}(x)\,\chi^{UG}_i(x)\,dx = \int_{\Omega} f_{IM}(x)\,\chi^{UG}_i(x)\,dx, \quad 1 \leqslant i \leqslant n_{UG}, \qquad (3)$$
where the $n_{UG}$ equations fix the $n_{UG}$ unknown coefficients $f^{UG}_i$. From the previous equation, we have

$$f^{UG}_i\,|UG_i| = \sum_j f^{IM}_j\,|UG_i \cap IM_j|,$$
where $|IM_j|$ denotes the volume of $IM_j$ (which are all the same), and similarly, $|UG_i|$ denotes the volume of $UG_i$ (which in general are not all the same), and $|UG_i \cap IM_j|$ denotes the volume of the intersection of polyhedron $UG_i$ with voxel $IM_j$. This implies that we should set

$$f^{UG}_i = \frac{1}{|UG_i|} \sum_j f^{IM}_j\,|UG_i \cap IM_j| \qquad (4)$$
to satisfy Equation (3) for all $UG_i$. In addition, summing Equation (3) over $1 \leqslant i \leqslant n_{UG}$ implies Equation (2). As long as UG is completely embedded in IM (which is a starting assumption because we are letting IM stand for the preimage), and as long as the volumes of all $UG_i$ can be exactly computed (as will be the case if all faces of $UG_i\ \forall i$ are planar), Equation (4) will be exact: it will not only be conservative but will also accurately preserve the bounds on the field.
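As a toy numerical check (our own example, not from the paper), the conservation and bound-preservation properties of Equation (4) can be verified for a single element fully covered by three voxels with assumed overlap volumes:

```python
# Toy check of Equation (4) for one fully embedded element UG_0.
overlaps = {0: 0.25, 1: 0.5, 2: 0.25}   # assumed |UG_0 ∩ IM_j|
f_im = {0: 1.0, 1: 2.0, 2: 4.0}          # assumed voxel values f_j^IM
vol_ug = sum(overlaps.values())          # = |UG_0| when fully embedded

# Equation (4): volume-weighted average over the intersecting voxels
f_ug = sum(f_im[j] * v for j, v in overlaps.items()) / vol_ug

# conservative: the element's integral equals the sum of voxel contributions
assert abs(f_ug * vol_ug - sum(f_im[j] * v for j, v in overlaps.items())) < 1e-12
# bound-preserving: the mapped value lies within the contributing voxel range
assert min(f_im.values()) <= f_ug <= max(f_im.values())
```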
For integer-valued fields such as cell type, a volume-weighted average would produce meaningless intermediate values, so we instead assign each element the dominant label:

$$f^{UG}_i = \underset{y}{\arg\max} \sum_{j \in f^{-1}(y)} |UG_i \cap IM_j|.$$

Here, $f^{-1}(y)$ denotes the preimage of y, which is the set of voxels j such that $f^{IM}_j = y$. So, $f^{UG}_i$ is set to be the value y whose preimage maximally intersects with the ith element of the target mesh.
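This label variant can be sketched as a volume-weighted vote (function and variable names are ours; the overlap row is assumed to come from the same sparse matrix used for floating point fields):

```python
# Sketch of the integer-field rule: pick the label y whose voxels account
# for the largest share of the overlap volume with element i, instead of
# averaging label values.
from collections import defaultdict

def map_labels(overlap_row, labels):
    """overlap_row: {voxel j: |UG_i ∩ IM_j|}; labels: {voxel j: int label}.
    Returns the label with the largest accumulated overlap volume."""
    vote = defaultdict(float)
    for j, vol in overlap_row.items():
        vote[labels[j]] += vol          # sum volume over the preimage of each y
    return max(vote, key=vote.get)      # argmax over y
```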
2.3 Parallel implementation
Because Algorithm 1 considers one element at a time from UG, and because Algorithm 2 generates a list of potential candidates in IM from a simple hash operation, the overall approach can clearly be made parallel. The structure of the overall algorithm suggests a possible nested approach, with the main loop in Algorithm 1 as the outer level and the nested loop in Algorithm 2, annotated with an OpenMP pragma, as the inner level. In this article, for simplicity, we will only show the latter of these two, as indicated in Algorithm 2. We chose OpenMP for the parallel implementation rather than message passing because the application is likely to be run on multicore workstations and because, as we will show, the execution time for practical-sized problems is reasonable. However, it is clear that more aggressive attempts at parallelism are possible.