Parameterization, Feature Extraction and Binary Encoding for the Visualization of Tree‐Like Structures

The study of vascular structures, using medical 3D models, is an active field of research. Illustrative visualizations have been applied to this domain in multiple ways. Researchers made the geometric properties of vasculature more comprehensible and augmented the surface with representations of multivariate clinical data. Techniques that go beyond the application of colour-maps or simple shading approaches require a surface parameterization, that is, texture coordinates, in order to overcome locality. When extracting 3D models, the computation of texture coordinates on the mesh is not always part of the data processing pipeline. We combine existing techniques into a simple parameterization approach that is suitable for tree-like structures. The parameterization is done w.r.t. a pre-defined source vertex. For this, we present an automatic algorithm that detects the tree root. The parameterization is partly done in screen-space and recomputed per frame. However, the screen-space computation comes with beneficial properties that are not present in object-space approaches. We show how the resulting texture coordinates can be used for varying hatching, contour parameterization, display of decals, as additional depth cues and for feature extraction. A further post-processing step based on the parameterization allows for a segmentation of the structure and a visualization of its tree topology.


Introduction
The visualization community actively works on techniques for the display of vascular 3D models. The motivation for this is based on the clinical relevance for surgical planning and the guidance of interventions. Different imaging modalities [LSBP18] and also physical simulations [OJMN*18] contribute to the generation of data that users want to visualize along with medical volume or surface data. However, visual information channels are limited, and therefore new techniques emerge that aim to ease the perception and comprehension of task-oriented data [PBC*16]. This also plays an important role in medical education [PS18], where illustrative techniques are used to highlight certain data features or to guide the attention of the viewer. Understanding the spatial structure of a given 3D object is an important aspect, especially if the user looks at its 2D projection on a common computer monitor. In this case, depth cues are missing, which then have to be encoded in another way. Another broad topic is the visualization of blood flow [MVPL18]. In this area, users are interested in spatial data inside a blood vessel, but also want to obtain information about surface-related aspects. To encode the multitude of available data, advanced (illustrative) visualization techniques can be applied [LVPI18]. We can generally state that advanced techniques require parametric guidance on the rendered surface data. This may be texture coordinates (to display textures or patterns) or tangent vector fields (to guide pattern orientation or streamline generation). Unfortunately, such parameterizations are not always available from the medical data acquisition pipeline. Furthermore, texture coordinates can be generated in various ways, exposing different advantages and disadvantages.
In this work, we introduce a technique to generate texture coordinates that is suitable for the processing of tubular, tree-like structures, for example, blood vessel surface data. Because our algorithm is designed for this restricted target morphology, we can take advantage of it and arrive at a very simple approach, utilizing existing and more generally designed algorithms. The initialization of the algorithm can be automated by a method that detects the end-points and root of a tree structure. Additionally, the resulting parameterization can be used for the extraction of branch locations of the mesh. We also show how we can use the output of our algorithm as an additional depth cue for a scene and for the parameterization of contours. In this way, our approach can be used to improve structure and depth perception, and to apply illustrative rendering techniques in order to encode multivariate data. The parameterization can be used as input for a segmentation algorithm that subdivides the vasculature at branching points. The resulting segments are arranged in a binary tree hierarchy that enables further visualization applications. Thus, our main contributions are as follows:
- a simple approach to generate texture coordinates on tree-like structures;
- an automatic, parameter-free branch and end-point detection algorithm;
- as an extension to [LL18], a segmentation that results in a binary tree and can be used for the visualization of the vascular tree topology, together with a user survey that underlines the feasibility of our technique.
The results are brought to use in several examples.

Related Work
The parameterization of surface meshes has been researched for a long time [FH05]. In the past, this field was highly motivated by the topic of remeshing algorithms [BLP*13]. For remeshing purposes, it is usually sufficient to come up with parameterizations that are locally bijective. Globally, 1-to-n mappings from the parameter space to the domain may exist (as in a periodic function). For the display of textures or decals on a surface, this can be a disadvantage, because different locations on the surface cannot be sufficiently distinguished in the parameter space. However, such periodic texture coordinates can be useful in the visualization domain, as shown by Knöppel et al. [KCPS15] and Lichtenberg et al. [LSHL18]. Other techniques aim to find a 1-to-1 mapping between the parameter and domain space. An important one to mention in this context is the one by Kälberer et al. [KNP11]. They cut open a given mesh to obtain disc topology and then apply an iterative integration of texture coordinates. Then, discontinuities at the cut seams are removed by a repair operation. While their approach yields reasonable results in the sense of the pure mapping, the algorithm itself relies on three complex steps, and its iterative nature prevents it from scaling well with the mesh size. In our approach, the units in parameter space and domain space are approximately equal, which can be of advantage when mapping textures or patterns w.r.t. the object size. Another possibility, proposed by Rocha et al. [RASS17], is to calculate texture coordinates locally in screen-space. They sample the surface and render spheres for each sample to activate the fragment shader. Texture coordinates can then be approximated for the activated pixels in order to draw small decals on the surface. These decals are used to represent multivariate data values.
As mentioned in the introduction, the spatial perception on 2D monitors is restricted due to missing depth cues. Consequently, a range of publications can be found in the literature that address this problem, using different kinds of artificial or nature-inspired ways to encode depth. Kersten-Oertel et al. [KOCC14] provide a good reference for the performance of different depth cues in the vascular domain. Using the colour channel, the pseudo chroma-depth proposed by Ropinski et al. [RSH06] employs aspects of natural perception to intuitively map depth to a red-to-blue colour scale. Applying pseudo chroma-depth to a whole mesh, however, impairs the ability to use other shading techniques to convey structure or to display additional information on the surface itself. To cope with this issue, Behrendt et al. [BBPS17] proposed to apply pseudo chroma-depth only to the contour region of a mesh to make space for supplementary data. Apart from colouring or texturing a given surface to convey data, additional geometry can be added as glyphs to an existing scene [ROP11]. Then, one has to decide how many glyphs should be placed, and where, in order to achieve a clean and informative result. Later, we will present a simple method to extract branch locations and vessel end-points from a vascular mesh. Being structurally significant locations, these could be used for glyph placement, as done by Lichtenberg et al. [LHL17]. With their work, they followed up on previous methods by Lawonn et al. [LLPH15, LLH17], who used additional geometry in a scene to improve the spatial perception. The problem of spatial perception can be avoided altogether when transferring the visualized data to the 2D domain, as surveyed by Kreiser et al. [KMM*18].
When colour is used to encode data on a surface, a trade-off has to be made. Either the colourmap is disturbed by an additional shading, or geometric features are not perceivable due to missing shading. A workaround is then to use hatching strokes, which do not interfere with the colourmap but are still able to depict the geometry. Hatching along a 3D surface was first performed by Hertzmann and Zorin [HZ00], who computed integral lines along principal curvature directions. Praun et al. [PHWF01] followed with a texture-based approach. Further insights into this illustrative rendering area can be obtained from the survey by Lawonn et al. [LVPI18], while an example of illustrative rendering in the context of vascular models is given by Ritter et al. [RHD*06].
Scalar functions that are defined on a surface have been used for shape analysis. The Reeb Graph (RG) emerged from Morse theory [Ree46] and has developed into a tool for 3D shape topology analysis [SK91, PSBM07]. The RG captures topological information of a surface based on the level sets of a continuous scalar function defined on the surface. The Contour Tree (CT) [BR63] is a special case of the RG that does not allow cycles; that is, a binary tree is constructed. We use an implementation by Carr et al. [CSA01] to capture the topology of our tree structures.

Method
Our goal for this work is to obtain texture coordinates T ∈ R^2 for the visualization of a tubular and tree-like mesh M. The two dimensions of this set are denoted by (U, V) ∈ T. Important for our work here are the Geodesics in Heat (GiH) method by Crane et al. [CWW13] and the Jump Flood Algorithm (JFA) by Rong and Tan [RT06]. These two algorithms have a very general domain of application and provide the basis for our parameterization of U and V. Further, we use the resulting U parameter for a segmentation based on a CT, for which we use the algorithm by Carr et al. [CSA01]. First, we obtain an augmented CT Ŷ that contains a node for every vertex in M. A segmentation of Ŷ yields the CT Y that contains an edge for every segment obtained from Ŷ. More details on the CTs follow in Section 3.2.

Parameterization
This section covers the computation of the (U, V ) coordinates for M. Our approach is split into an object-space and a screen-space part. The rationale for this will be clarified in the respective paragraphs that follow.

Object-space: U
We intend to compute U as a continuously increasing scalar field across the mesh. The source of U (where U = 0) is a pre-defined vertex v_s, which should be placed at the root (i.e. the source of blood flow) of a given vessel to achieve the most intuitive results. We provide an automatic root-point detection algorithm that will be described later in this section. Our approach is now to define a vector field Z, so that ∇U = Z (i.e. the gradient of U equals Z).
Hence, the quality of U is determined by the quality and orientation of Z. To define Z, we first compute the directions of minimal curvature C for each triangle of M by applying the method presented by Rusinkiewicz [Rus04]. The orientation of individual elements c_i ∈ C is ambiguous (i.e. c_i ∼ −c_i). To resolve the ambiguity, we compute the geodesic distance G of each vertex to v_s, using the GiH method [CWW13]. The GiH utilizes a relationship between heat transport from the physics domain and distance, achieving quick approximations or even exactly computed geodesic distances on arbitrary topologies through differential operators. Then, ∇G is computed as the gradient of G for each triangle. We then obtain Z element-wise as

z_i = sign(⟨c_i, g_i⟩) · c_i,    (1)

where g_i ∈ ∇G. By incorporating C, we make sure that Z describes the geometry of M well. Using ∇G, we force Z to have a single source at v_s. Finally, we smooth Z to remove noise and high-frequency features of the vector field. We do an element-wise Laplacian smoothing in the dual graph of Z (i.e. per vertex). The result is then lifted back to each triangle, while keeping Z in the tangent space of M. Now, we can put Z into an equation system to compute U, such that ∇U = Z. Applying the divergence on both sides yields the Poisson equation

ΔU = ∇ · Z.    (2)

Solving Equation (2) in a least squares sense results in the coordinate U. As a last step, we rescale U to the range [0, max(G)]. This brings U into relation with spatial distances along the surface of M. Figure 1 shows an example result for U. It can be observed that U increases strictly monotonically away from v_s. Generally, we can compute the distances w.r.t. an arbitrary point, but the results are most intuitive if a vessel end-point is chosen as the source.
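The two core steps above, orienting the curvature directions by the geodesic gradient and integrating the oriented field in a least-squares sense, can be sketched as follows. This is a minimal graph stand-in for the mesh Poisson solve, not the actual cotangent-Laplacian implementation; all function names are hypothetical:

```python
import numpy as np

def orient(C, G_grad):
    # Resolve the sign ambiguity of the curvature directions c_i (Eq. 1):
    # flip c_i so that it points along the geodesic gradient g_i.
    return np.array([c if np.dot(c, g) >= 0 else -c
                     for c, g in zip(C, G_grad)])

def integrate_scalar(edges, z, n, source=0):
    # Least-squares integration of target edge gradients z on a graph,
    # a stand-in for solving the Poisson system div grad U = div Z:
    # each edge (i, j) contributes the constraint U[j] - U[i] = z_ij.
    A = np.zeros((len(edges) + 1, n))
    b = np.zeros(len(edges) + 1)
    for row, ((i, j), z_ij) in enumerate(zip(edges, z)):
        A[row, i], A[row, j] = -1.0, 1.0
        b[row] = z_ij
    A[-1, source] = 1.0              # pin U at the source vertex to 0
    U, *_ = np.linalg.lstsq(A, b, rcond=None)
    return U - U[source]
```

On a mesh, the constraints would be assembled per triangle with cotangent weights; the least-squares structure is the same.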
We have to note that setting U = G works fine for very straight structures. In these cases, the geodesic distances sufficiently capture the mesh structure. This is not true for more convoluted vasculature, which brings up the necessity to incorporate C. Figure 2 shows the final U compared to an anticipated geodesic between two points on the surface. The geodesic in this convoluted section would not properly cover the geometry. The singularities found in U can be used for the extraction of structural features, as described in Section 4.1.
Note that the assignment of U could be described as a semi-global parameterization. It is global in the sense that a set of isolines with unique U -values can be found on the shortest path from v s to any given vertex. However, isolines of a certain U -value are not unique on paths to multiple vertices that are found on different branches of the tree-structure. This means that we can identify unique, individual isolines per branch segment, but not for the whole mesh.

Object-space: Root-point
Instead of defining v_s manually, we provide an automatic approach to find the root of an input mesh. The algorithm consists of two steps: 1. Identify all vessel end-points. 2. From the given end-points, find the root. While previous approaches [LLH17, LHL17] presented methods to find end-points, we propose a faster approach to identify these points. Lawonn et al. [LLH17] determined the shape index of all points, then applied the Otsu method to extract regions around the end-points, and finally applied a shrinking to obtain one end-point per region. Lichtenberg et al. [LHL17] improved the technique by incorporating topological information via geodesic distances and graph analysis. Nevertheless, their approach requires some pre-defined parameters that may depend on the input mesh, making it less robust. For the first step of our method, we apply a thinning to the entire mesh, following the approach by Au et al. [ATC*08]. Their method is based on an iterative shrinking determined by

[ W_L L ; W_H ] V' = [ 0 ; W_H V ],    (3)

where L is the Laplace operator of the mesh and W_L and W_H are diagonal weighting matrices that change per iteration. For every iteration, Equation (3) is solved in a least squares sense, resulting in the contracted mesh shown in Figure 3 (centre). Afterwards, we iterate over every point p_i and create a sphere B_i with radius r = k·ē, where ē is the average edge length of the thinned mesh. The factor k = 3 was chosen empirically to set r well above the average edge length in the thinned mesh. This ensures that enough sample points are captured in the following step. We extract all points p_ij on the contracted mesh that lie inside the radius of each sphere B_i (see Figure 3, right). Finally, we take the vector n_i pointing from the centroid of the captured points p_ij towards p_i. If all points p_ij lie in the same half space created by the plane with the origin p_i and normal vector n_i, then p_i is an end-point. Otherwise, it is not an end-point (see Figure 3, right). With this, a parameter-free detection of end-points is possible.
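The half-space test for end-point candidates can be sketched as follows. The centroid-based construction of n_i is an assumption made for illustration, and the function name is hypothetical:

```python
import numpy as np

def is_end_point(p_i, samples, r):
    # samples: points of the contracted mesh; keep those inside the
    # sphere B_i around p_i with radius r (= k times mean edge length).
    pts = [q for q in samples if np.linalg.norm(q - p_i) <= r]
    if not pts:
        return False
    # Assumed form of n_i: direction from the centroid of the captured
    # points towards p_i.
    n_i = p_i - np.mean(pts, axis=0)
    # p_i is an end-point iff every captured point lies in the same
    # half-space, i.e. behind the plane through p_i with normal n_i.
    return all(np.dot(q - p_i, n_i) <= 0.0 for q in pts)
```

At a vessel tip, all captured skeleton points lie to one side of p_i, so the test succeeds; along a tube, points lie on both sides and the test fails.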
A root-point can be found as the end-point where the absolute mean curvature is minimal, that is, where the approximated vessel diameter is maximal.

Screen-space: V
The task of obtaining the V coordinate in object-space is conceptually more involved. In an optimal scenario, the gradient of V would be orthogonal to the gradient of U, and therefore be aligned with the direction of maximal curvature in our scenario. This means that V evolves along the circumference of the vessel structure, which will result in discontinuous seams where minimum and maximum values of V meet. Previous work by Kälberer et al. [KNP11] has tackled this problem by repairing the parameterization in order to match seams with integer values. Another approach has been proposed by Ray et al. [RLL*06] and followed up by Lichtenberg et al. [LSHL18], who used periodic functions to avoid seams. Their methods, however, are more complex to implement and yield parameterizations with isolines that are only unique within a single periodic interval.
Instead of treating seams and singularities of V in object-space, we utilize the JFA to compute V in screen-space. This choice removes the necessity to treat seams in the parameterization, because we are only looking at the flat projection of the mesh. Furthermore, the algorithm is relatively easy to implement and executes in a few milliseconds on modern consumer graphics hardware. The drawback is that V has to be recomputed each time the projection of the mesh changes.
In our approach, the JFA is used to compute the minimum distance of each pixel to the contour of a mesh M. Thus, V will always be zero at locations next to contours, regardless of the rotation of the camera or M, and increase away from contours (see Figure 4). The JFA is a method that works on grid structures and is capable of distributing the content of a node to all other nodes in log2(n) iterations. If we render to a view port of resolution (x, y), then n = max(x, y). As described in their paper, the JFA [RT06] can be used to assign each pixel to a Voronoi cell of a set of seed pixels. For each pixel, the final result contains the minimal distance of the pixel to its referred seed pixel. In our case, the contour of a mesh is rendered and the resulting pixels are used as the set of seed pixels. Hence, after running the JFA, each pixel contains its minimal distance to the contour. A first result is shown in Figure 5 (left), where thin black strokes represent isovalues of V. Here, two overlapping vessel segments are shown and the result is not optimal. Some pixels that belong to the segment in the background (horizontal) are parameterized based on the distance to the contour of the foreground (vertical) segment. We address this with a single modification: as an additional parameter, we store the depth of the pixels in camera space. Now, pixels that are not part of the contour are only allowed to reference contour pixels that have a higher depth value. Assuming that the 3D model has no self-intersections, this modification makes sure that a pixel cannot reference parts of the contour that belong to a closer, overlapping vessel segment. The result of this change can be seen in Figure 5 (right). Note that more complex overlaps result in less meaningful approximations of V, which we will address in Section 6. Finally, we rescale V so that (U, V) is isotropic:

V ← V · d_p,

where d_p is a local (per pixel) approximation of the distance of two points in world space (i.e. on the actual surface of M), represented by two neighbouring pixels. Therefore, units of V are mapped to world space as well. A combined example showing the results of the object- and screen-space parameterization is shown in Figure 6.
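The flooding idea can be sketched on the CPU as follows. The GPU version instead runs one full-screen pass per halving step; this push-style variant and all names are our own simplification:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def jump_flood(w, h, seeds):
    # seeds: (x, y) contour pixels. Propagates the closest-seed
    # reference across the grid in O(log n) halving steps and returns
    # each pixel's distance to its referenced seed (the contour).
    ref = {s: s for s in seeds}          # pixel -> best seed so far
    step = 1 << max(w, h).bit_length()
    while step >= 1:
        nxt = dict(ref)
        for p, s in list(ref.items()):
            for dx in (-step, 0, step):
                for dy in (-step, 0, step):
                    q = (p[0] + dx, p[1] + dy)
                    if 0 <= q[0] < w and 0 <= q[1] < h:
                        if q not in nxt or dist(q, s) < dist(q, nxt[q]):
                            nxt[q] = s
        ref = nxt
        step //= 2
    return {p: dist(p, s) for p, s in ref.items()}
```

The depth modification described above would simply add a comparison of the candidate seed's stored depth before accepting it.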

Parameterization of remaining pixels
The results shown so far only include V for pixels representing the surface of M. However, since the JFA is executed on the GPU for all pixels, we also obtain a parameterization for pixels that are otherwise discarded (i.e. the background pixels). Section 4.2 shows how this can be utilized.

Graph generation and segmentation
The isolines of a certain U-value are not bound to form a connected component. This is the case when isolines of a certain value are found on different branches of the vascular tree. This ambiguity can be resolved by constructing a CT Y based on U. We use the algorithm proposed by Carr et al. [CSA01] to do so on a surface mesh. For this, the U-values associated with individual vertices have to be unique across the mesh M. If this is not the case, small perturbations of duplicate values can be applied to solve the issue. The CT represents the topological changes of the level sets of U (i.e. the connected isolines). The nodes of the CT are associated with vertices of M that represent local minima, local maxima or saddle points of U. These points are also known as critical points from Morse theory [MSW63]. As shown in Figure 7, the topology of the level sets for a certain scalar value changes at critical points (purple). Following the height field h (see Figure 7) from its minimal to maximal value, a level set is created at the minimum h_min, splits into two sets at a saddle point h_2 and vanishes at the two local maxima h_4 and h_max. These critical points appear as nodes in the CT Y (Figure 7, right). Figure 7 (centre) depicts the idea of the augmented CT Ŷ, which is the primary output of the algorithm [CSA01]. The augmented CT Ŷ contains critical nodes (purple) with degree equal to one or three, as well as regular nodes (white) with degree equal to two. In practice, all vertices of an input mesh that are not critical points are represented by regular nodes. For clarity, Figure 7 (right) contains only one example regular node per edge. Each regular node resides on the path between two critical nodes. The scalar value of a regular node will always be between the scalars of the two associated critical nodes.
We use Ŷ to trace the paths emanating from each critical point, beginning at the lowest U value, that is, the vessel root, and process the remaining points in order of increasing U (or increasing h as in Figure 7). In this way, we find all nodes (i.e. vertices) that lie on a path i between two critical nodes and aggregate these in a segment with index i. The terminating critical node of each path is also included in the segment. With this, a segment represents a connected space of associated vertices with U_max ≥ U > U_min, where U_max and U_min are the U values of the two critical points. The node representing the root vertex (i.e. the vertex with the global minimum of U) is a special case because it has no incident edge from a node with a lower U value. It is inserted into the segment emanating from itself, that is, the root segment. Segments and their associated nodes and edges are depicted in Figure 8 (left) with an orange underlay. If we refer to a segment i of Ŷ, we refer to all vertices of M that were aggregated into that segment. By removing all regular nodes from Ŷ, we obtain Y. The reduce operation maintains the connectivity of the tree: removing a node with neighbours A and B creates a new edge between A and B. Thus, a segment i of Y includes a critical node i and its incident edge. Finally, a segment i of Y is a representative for all vertices in M that are associated with the segment i in Ŷ. Instead of referring to individual nodes or edges of Y and Ŷ, we will refer to these segments throughout the paper. Since Y is a binary tree, we obtain n − 1 segments, if n is the number of critical points.
The binary structure of the tree further allows us to augment each segment with a binary code B_i that represents the path to reach that segment from the tree root. The root is represented by a single bit set to 1. The transition to the first/second child segment is denoted with a bit set to 1/0, as depicted in Figure 8 (left). Consecutive bits are appended to the right. The length of the binary code is equal to the layer l(B_i) that the segment occupies within the tree, with l(B_i) = a_i + 1, where a_i is the number of ancestor segments of segment i. Hence, B_i ∈ {0, 1}^l(B_i). Based on that binary code, a simple bit-shift operation can be implemented to find out whether a segment is an ancestor or descendant of another segment. The example in Figure 8 (right) shows two binary codes B_1 and B_2, with l(B_1) = 2 and l(B_2) = 5. We then shift the binary code B_2 associated with the higher layer (l(B_2) > l(B_1)) by k right-shift operations, where k = |l(B_1) − l(B_2)| is the layer difference, that is, three in the example. This aligns the highest set bits. If the codes of B_1 and B_2 are now equal, the segment of the lower layer (B_1) is an ancestor of the segment in the higher layer (B_2). Otherwise, the two segments are not found in the same sub-tree. We will address the utility of the binary code and the bit-shift operation in Section 4.
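The bit-shift ancestor test can be sketched directly. Storing the codes as integers and the helper name are implementation choices of this sketch:

```python
def is_ancestor(b1, l1, b2, l2):
    # b1, b2: binary codes as integers; l1, l2: their layers
    # (= code lengths). Right-shift the deeper code by the layer
    # difference k so the highest set bits align; the shallower
    # segment is an ancestor iff the aligned codes are equal.
    if l1 == l2:
        return False
    if l1 > l2:                      # make (b1, l1) the shallower one
        b1, l1, b2, l2 = b2, l2, b1, l1
    return (b2 >> (l2 - l1)) == b1
```

For the paper's example with l(B_1) = 2 and l(B_2) = 5, the deeper code is shifted right by k = 3 before comparison.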

Applications
This section describes various applications of the (U, V ) parameters that we obtained in Section 3. We provide a detection of branch locations and vessel end-points, as well as a hatching technique and the rendering of parameterized contours and depth cues. These techniques may be useful for, or inspire, a range of visualization approaches in the context of vascular visualization.

Branch and end-points
We can use U as obtained in Section 3.1 to extract branch locations as well as vessel end-points. For this, we look at the one-ring of each vertex. It can be observed that vessel end-points are found at local maxima of U and branch points are found in saddle-shaped configurations of U . This is due to the use of ∇G in Section 3.1, which creates a sink in the vector field Z at local maxima of G. The points found here may be used for the placement of glyph objects, as in the work by Lichtenberg et al. [LHL17], since the vessel end-points and branch locations are structurally significant features.
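A one-ring classification along these lines might look as follows. The sign-change criterion for saddles is a standard Morse-style heuristic assumed here, not quoted from the text:

```python
def classify_vertex(u_center, u_ring):
    # u_ring: U values of the one-ring neighbours in cyclic order.
    # All neighbours lower -> local maximum of U (vessel end-point);
    # >= 4 sign changes of U - u_center around the closed ring ->
    # saddle configuration (branch location).
    diffs = [u - u_center for u in u_ring]
    if all(d < 0 for d in diffs):
        return 'end-point'
    signs = [d >= 0 for d in diffs]
    changes = sum(signs[i] != signs[i - 1] for i in range(len(signs)))
    return 'branch' if changes >= 4 else 'regular'
```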

Using V to enhance depth perception
In Section 3.1, we mentioned that the JFA also assigns V values to pixels that are not part of the surface representation (i.e. background pixels). Thus, background pixels also refer to their closest contour pixel. In order to incorporate the depth information of the contour pixels, we can modify the distance function used for the background pixels' JFA iterations. Instead of searching for the contour pixel with the minimal Euclidean distance d_c, we incorporate the depth, for example, d_c ← d_c · d^a, and determine the minimum of that term to obtain V. Here, d is the depth of a contour pixel and a controls the impact of the depth difference. With a = 3, we achieve a depth-aware parameterization of the background pixels as shown in Figure 9. The result appears similar to the Void Space Surfaces by Kreiser et al. [KHR18]. The difference is, however, that we do not extend the depth information into the background pixels with a smart interpolation approach. Our combination of depth and distance information results in distinguishable regions that refer to individual branch regions. The boundaries of these regions are visible through the abruptly changing isolines. This can, for example, be used to draw outlines or glow effects around a given structure. Figure 10 shows the effect, where a larger glow radius (in screen-space) refers to less distance to the viewer. In this way, one can easily obtain information about the global orientation of a visualized structure.
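The modified distance term d_c · d^a can be sketched as a per-pixel selection rule (brute force over the contour pixels for clarity; inside the JFA this comparison replaces the plain Euclidean one):

```python
def reference_contour_pixel(pixel, contour, a=3.0):
    # contour: list of ((x, y), depth) tuples for the contour pixels.
    # A background pixel references the seed minimising d_c * d**a
    # instead of the plain Euclidean distance d_c, so that contours
    # closer to the viewer (smaller depth d) win over distant ones.
    def score(seed):
        (sx, sy), depth = seed
        d_c = ((pixel[0] - sx) ** 2 + (pixel[1] - sy) ** 2) ** 0.5
        return d_c * depth ** a
    return min(contour, key=score)
```

With a = 0 the rule degenerates to the ordinary nearest-contour assignment.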

Hatching
Here, we describe an approach to achieve varying and overlapping hatching strokes using the (U, V) coordinates. At first, we subdivide the range of U into b disjoint bins:

b_i = ⌊ f · (U_i + o) ⌋,    û_i = f · (U_i + o) − b_i,

where o is an offset to U and f controls the number (i.e. frequency) of bins. Usually, f is set to a rather large number, as it represents the number of hatching strokes within each integer interval of U. The offset o can be arbitrarily chosen, as a constant or as a function of V, for example.
Thus, each point i on the surface is assigned an integer value b_i and a bin-wise mapping û_i of U to the range [0,1]. We can then obtain an intensity value to draw smooth strokes at the centre of each bin along the isovalues of U, where w_i controls the width of the stroke. A maximum stroke length l_i can be applied by using V, where s controls the falloff of the stroke intensity. An initial rendering of equally shaped strokes is shown in Figure 11 (left). However, to achieve a visually appealing hatching, it is important to avoid repetitive patterns of strokes. We address this by feeding b_i into a pseudo-random generator that yields a value r_i in the range [0,1]. By modifying a random seed based on b_i, multiple different random distributions r_ij can be generated for each b_i. This can be used to modify the stroke length l_i ← l_i · a · (2r_i1 − 1) (see Figure 11, centre, left) and the stroke width w_i ← r_i2 (see Figure 11, centre, right). Finally, multiple sets of hatching strokes can be drawn with an overlap by modifying the offset o. This offset can also be calculated in dependence of V to achieve tilted strokes (see Figure 11, right). As the offset to U affects the calculation of r_ij, locally varying parameters for w_i and l_i are obtained. A whole vessel data set rendered with this technique is shown in Figure 10. Figure 12 shows how the hatching variability can be used to encode scalar information. In the illustration, parts of the surface to the left are rendered with uniform strokes; moving towards the right, the strokes are varied more and more. In the context of clinical applications, this representation could be used to encode the distance to a reference object (e.g. a tumour or surgical instrument).
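The binning and the per-bin pseudo-random values could be sketched as follows; the exact binning formula and the integer seeding scheme are assumptions of this sketch:

```python
import math
import random

def stroke_bin(u, f=40.0, o=0.0):
    # Assumed binning: integer bin index b_i and the bin-local
    # coordinate u_hat in [0, 1) along U.
    t = f * (u + o)
    b = math.floor(t)
    return b, t - b

def bin_randoms(b, count=2):
    # Seeding a generator with the bin index (plus a per-use offset)
    # yields reproducible distributions r_i1, r_i2, ... per bin,
    # which can drive the stroke length and width variations.
    return [random.Random(b * 1000 + j).random() for j in range(count)]
```

Because the values depend only on b_i, every fragment inside the same bin draws the same varied stroke.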

Contour parameterization
If a visualization is used to encode multiple scalar fields, the number of available information channels is often quickly exhausted. In this case, one can use additional geometry to encode information. Due to the rather thin, tubular structure of vessels, the contour of the structure is a viable option for this. The smaller the diameter of a vessel segment, the better information on the contour can represent the data of the affiliated vessel segment. For example, this has previously been done by Behrendt et al. [BBPS17], who encoded the depth of vasculature by colouring its contour using pseudo chroma-depth. Instead of defining the contour on the surface itself (e.g. by using a Fresnel approximation), we aim to create additional geometry at locations of the rendered mesh where the surface normal is orthogonal to the view direction. The vertices of the generated faces adopt the U values of the generating surface. Additionally, V values can be created based on the width of the contour in world space, and therefore their scale matches the scale of V as computed in Section 3.1. The V-value of a point on the contour is then its distance to the generating surface. We can then display small texture patches on the contour as shown in Figure 13 (top). In this example, the general direction of blood flow is depicted by an arrow decal at the vessel boundary. In Figure 14, we use U to draw a dashed contour, which could be applied in focus-and-context applications. Alternatively, the contour can simply be coloured w.r.t. the pseudo chroma-depth spectrum as in Figure 15, leaving the surface free for other encodings. This addresses an issue which has been tackled by Behrendt et al. [BBPS17]. They mixed the colour coding of a scalar field and pseudo chroma-depth using a Fresnel term. A drawback of their method is that for small vessel segments neither the depth nor the scalar encoding is accurately perceivable.
With our method, we are able to draw contour margins of invariant size on the mesh (see Figure 15). Hence, for thin vessels only the contour colour remains, which is still not optimal but should be preferred over a hard-to-read mixture of colour scales. Additionally, we omit the shading of the object in this example, in order to properly retain the colour scale. Geometric features are instead depicted by the hatching strokes.

Binary tree colouring
The binary code that is associated with each segment of Y can be used for an automatic colouring of the vasculature that highlights individual sub-trees. We compute a vector H with one element per tree segment i:

H_i = Σ_{j=0}^{n_i − 1} δ(B_i, j) · 2^{−(j+1)},

where δ(B_i, j) returns 1 if the j-th bit in B_i is set to 1 and −1 otherwise. Note that the indexing into δ(B_i, j) is 0-based. The resulting H can be used as input to a colour map. Further, each H_i always lies in the range (0, 1), and the higher bits, that is, the bits associated with segments close to the tree root, have the highest impact on the result. Therefore, branches that occur in higher levels of Y can be better distinguished than branches that occur at a deeper level. For instance, the binary code 100 results in the sum 0.5 − 0.25 − 0.125 = 0.125. In our example (see Figure 16, right), we employ H as the hue value in HSV colour space. We additionally calculate the saturation S based on the layer n_i of each segment, except for the root segment:

S_i = s_min + (s_max − s_min) · ((d − n_i)/(d − 1))^p, (11)

where d is the maximum layer of Y, that is, the maximum length of all segment binary codes, and p = 0.8 controls how quickly S falls off between the maximum s_max and minimum s_min saturation values. With Equation (11), the leaf segments of Y are assigned a saturation of s_min, while the two child segments of the root segment are assigned a saturation of s_max. A graph plot can then be generated by deriving 2D coordinates for each node i from the H_i and S_i of the associated segment i. We plot this graph on an HSV colour disc with a constant value of 1, as in Figure 16 (left). The saturation is inverted, that is, the saturation decreases from the centre to the edge. This graph representation can be used for a colour-guided integration with the 3D visualization (Figure 16, right). As we did not define H and S for the root segment, we place it in the centre of the HSV disc and draw it in plain white in the 3D representation.
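The hue and saturation assignment can be sketched as follows. The bit weighting reproduces the worked example above; the concrete s_min and s_max bounds are assumed example values, as the text leaves them to the implementation:

```python
def hue(code: str) -> float:
    """H_i: signed, binary-weighted sum over the bits of a segment code.
    Bit 0 is the root-side bit, so early branches dominate the hue."""
    return sum((1 if b == '1' else -1) * 2.0 ** -(j + 1)
               for j, b in enumerate(code))

def saturation(layer: int, d: int, s_min: float = 0.2,
               s_max: float = 1.0, p: float = 0.8) -> float:
    """Falls off from s_max (children of the root, layer 1) to s_min
    (leaf segments at the maximum layer d)."""
    return s_min + (s_max - s_min) * ((d - layer) / (d - 1)) ** p
```

For the code 100 this reproduces the sum 0.5 − 0.25 − 0.125 = 0.125 from the text.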
The centre of Figure 16 shows how colour values are interpolated across each segment in 3D, according to the HSV graph representation. The inverted saturation in the overview plot allows the graph to be drawn in a more intuitive manner, with the more meaningful branches located at the centre in a more salient, saturated colour. The 2D plot is also an indicator of the balance of the branch topology. The magenta area in Figure 16 (left) is occupied by only a single end-point, revealing that the data set contains an early branch with no further ramifications.
Another application for the binary code is to determine the lineage (i.e. ancestors and descendants) of a selected segment, as depicted in Figure 17. This kind of visualization may come in handy in the context of liver resections, where the segments affected by a cut are highlighted.
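Because each child code extends its parent's code by one bit, lineage queries reduce to prefix tests on the binary codes. A sketch under that assumption, with codes given as bit strings:

```python
def is_ancestor(a: str, b: str) -> bool:
    """Segment a is an ancestor of segment b iff b's code strictly
    extends a's code."""
    return len(b) > len(a) and b.startswith(a)

def lineage(codes, selected):
    """All segments on root-to-leaf paths through the selected segment,
    i.e. its ancestors, itself and its descendants."""
    return [c for c in codes
            if c == selected or is_ancestor(c, selected)
            or is_ancestor(selected, c)]
```

For a resection scenario, highlighting `lineage(codes, cut_segment)` marks every segment affected by a cut through that segment.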

Implementation
In this section, we describe details of our implementation.

Offline calculation of U
Since U is defined in object-space, it only has to be calculated once. Therefore, we implemented the calculation of U in Matlab (the code is provided with the original publication [LL18]). In order to solve Equation (2), we calculate the minimal curvature direction C and the gradient of the geodesic distances ∇G . We then use a Matlab solver to obtain the results.
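Equation (2) is not reproduced in this section, but the structure of the solve is a standard linear least-squares problem: find per-vertex values U whose differences along edges match a prescribed guidance field (built from C and ∇G), with U pinned at the source vertex. A generic sketch of that pattern; the graph, target values and pinning below are illustrative and not the paper's exact system:

```python
import numpy as np

def solve_scalar_field(n_verts, edges, target_diffs, source=0):
    """Least-squares solve: U[j] - U[i] ~ target for each edge (i, j),
    with U[source] pinned to 0 to fix the free constant."""
    rows, rhs = [], []
    for (i, j), d in zip(edges, target_diffs):
        row = np.zeros(n_verts)
        row[i], row[j] = -1.0, 1.0   # finite difference along the edge
        rows.append(row)
        rhs.append(d)
    pin = np.zeros(n_verts)          # anchor the source (root) vertex
    pin[source] = 1.0
    rows.append(pin)
    rhs.append(0.0)
    u, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return u
```

On a path graph with unit target differences, this recovers U increasing linearly away from the source.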

Online calculation of V
The evaluation of V is done anew for every frame. First, the input mesh M is rendered in a preparation pass. Here, we generate a mask that distinguishes between background, contour and surface pixels (see Figure 18, left, top) and a depth texture (see Figure 18, left, bottom) that is used to implement the depth awareness of the JFA, as depicted earlier in Figure 5. From then on, we use two buffers that are read from and written to in each JFA iteration k. After each iteration, the read and write buffers are swapped. The JFA finds, for each surface pixel, the distance to the closest contour pixel. We modified the original algorithm only such that a surface pixel skips contour pixels that have a lower depth value than the surface pixel itself (see the shader provided with the original publication [LL18]). For more details on the JFA, we refer to the original paper by Rong and Tan [RT06]. After the JFA has terminated, we can read from the most recent write buffer and use V as a texture coordinate.
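The depth-aware lookup can be illustrated with a pure-Python jump flooding pass on a small pixel grid. The mask labels ('C' contour, 'S' surface, 'B' background) and the convention that a lower depth value means closer to the camera are assumptions of this sketch:

```python
import math

def jfa_depth_aware(mask, depth):
    """For each surface pixel, find the closest contour pixel, skipping
    contour pixels that lie in front of (lower depth than) the surface."""
    h, w = len(mask), len(mask[0])
    # Seed: every contour pixel refers to itself.
    nearest = [[(x, y) if mask[y][x] == 'C' else None
                for x in range(w)] for y in range(h)]
    step = 1
    while step * 2 < max(w, h):
        step *= 2
    while step >= 1:                       # halve the jump each iteration
        out = [row[:] for row in nearest]  # swapped read/write buffers
        for y in range(h):
            for x in range(w):
                if mask[y][x] == 'C':
                    continue
                for dy in (-step, 0, step):
                    for dx in (-step, 0, step):
                        nx, ny = x + dx, y + dy
                        if not (0 <= nx < w and 0 <= ny < h):
                            continue
                        cand = nearest[ny][nx]
                        if cand is None:
                            continue
                        cx, cy = cand
                        # Depth awareness: a surface pixel ignores
                        # contour pixels in front of it.
                        if mask[y][x] == 'S' and depth[cy][cx] < depth[y][x]:
                            continue
                        best = out[y][x]
                        if best is None or (math.dist((x, y), cand)
                                            < math.dist((x, y), best)):
                            out[y][x] = cand
        nearest = out
        step //= 2
    return nearest  # V = distance from (x, y) to nearest[y][x]
```

On a one-row grid with contours at both ends, lowering the depth of the left contour pixel makes the surface pixels snap to the right contour instead.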

Root-point detection
As an optional pre-processing step, we provide a method to find the root end-point of a vessel. The detection is done in Matlab, using our own implementation of the thinning algorithm proposed by Au et al. [ATC*08] and an iterative search over each vertex as described in Section 3.1 (the code is provided in the supplementary material).

CT and binary code computation
The CT computation and binary code generation are implemented in Matlab (see supplementary materials). To store the binary code as, for example, an Int8 data type, the code is padded with zeros to the left in order to fill the data type. For example, the code 10011 is stored as 00010011, that is, a value of 19. The leading zeros cannot introduce ambiguities, because the bit representing the root segment is always set to 1. To load the binary code into our prototype application, we stored each code B_i in three Int32 variables along with the code length n_i. Therefore, our prototype can handle binary codes with a length of up to 96 bits. The bit-shift then has to be done across the three 32-bit values. For the implementation of the bit-shift, we refer to the shader sample provided in the supplementary materials.
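The padded storage and the cross-word shift can be sketched as follows; the word order (most significant first) and a shift amount below 32 are assumptions of this sketch:

```python
MASK32 = 0xFFFFFFFF

def store_code(code: str) -> int:
    """Left-padding with zeros is implicit when parsing the bit string;
    the leading root bit (always 1) keeps the code length unambiguous."""
    return int(code, 2)

def shl96(words, k):
    """Left-shift a 96-bit value held in three 32-bit words (most
    significant first) by k < 32 bits, carrying bits across words."""
    hi, mid, lo = words
    if k == 0:
        return [hi, mid, lo]
    return [((hi << k) | (mid >> (32 - k))) & MASK32,   # receives carry from mid
            ((mid << k) | (lo >> (32 - k))) & MASK32,   # receives carry from lo
            (lo << k) & MASK32]
```

The same carry pattern applies in the shader, where each Int32 is shifted individually and OR-ed with the spill-over bits of its less significant neighbour.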

Performance
Our performance tests were executed on a desktop computer with a 4.00 GHz i7-6400 processor, a GTX-1070 GPU and 16 GB RAM. We applied the U computation to seven different vascular trees, ranging from 16k triangles (0.11 s) to 200k triangles (1.9 s). On average, the computation executed in 0.54 s, but the timings do not scale linearly with the number of triangles, that is, the execution time depends on the vascular structure as well. All timings are given in Table 1 (second column). The computation of V was found to be independent of the mesh size. As shown in Table 2, our prototype implementation is able to obtain V in full HD resolution in real time. The frame rates are given as ranges. Here, the lower bound of the range is the performance when we zoom very close to the mesh. This means that more pixels have to be treated in the assembly of the final images and the frame rate decreases. The upper bound is what we measured for a lower zoom factor (as found in Figures 10 and 12). The JFA execution time does not change in these cases, since all pixels are always processed. The mesh shown in Figure 3 contains 30k vertices, and the computation of the end-points including the root-point detection took ∼9.94 s. This is twice as fast as the method by Lichtenberg et al. [LHL17]. An overview of the timings is given in Table 3. We observed that our new approach detects end-points more faithfully, while the results also depend on the choice of the detection sphere radius determined by k (recall Section 3, paragraph about root-point detection). A larger radius yields fewer end-points, detecting only the more protruding branches, as compared in Figure 19. The proportions of the individual steps are 32% for the thinning, 60% for the sphere tests and 8% for the curvature test. For a mesh containing 100k vertices, our algorithm took ∼86 s, while the reference approach took 128 s.
In particular, the iterative processing of vertices for the sphere test (see Figure 3, right) could be accelerated by a parallel implementation.
The construction of the contour graph (Section 3.2) highly depends on the number of vertices to be processed and is therefore not suited for very large meshes, unless pre-computation times can be neglected. The overall timings for the CT construction are given in Table 1 (third column).

Contour generation
The contours that we used for the contour parameterization in Section 4.4 and for the contour mask (see Figure 18, left, top) are generated in the geometry shader. We compute s_i = ⟨n_i, v_cam⟩, where n_i is the normal of vertex i in a triangle and v_cam is the current view direction. Then, we search for zero crossings of s between two vertices i and j. If we find zero crossings on two edges of a triangle, we generate a quad with two vertices at the crossing points and two vertices that are extruded by a distance d w.r.t. the interpolated normal directions at the crossings. In the same way as the normal direction, we can interpolate U from the generating triangle. The V coordinate is then determined by setting V = 0 for the vertices at the crossing points and V = d for the extruded vertices. This yields U coordinates that fit the generating mesh, and V coordinates that are scaled according to the result of the JFA.
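The edge test performed in the geometry shader can be sketched on the CPU. The s values are the per-vertex dot products ⟨n_i, v_cam⟩, and the interpolation parameter t recovers the crossing point and its U coordinate:

```python
def contour_crossings(tri_s):
    """Return (edge, t) pairs for triangle edges where s changes sign;
    t is the linear-interpolation parameter of the zero crossing."""
    hits = []
    for i, j in ((0, 1), (1, 2), (2, 0)):
        si, sj = tri_s[i], tri_s[j]
        if si * sj >= 0:                 # no sign change on this edge
            continue
        hits.append(((i, j), si / (si - sj)))
    return hits

def lerp(a, b, t):
    """Interpolate a per-vertex attribute (position, normal or U)."""
    return [x + t * (y - x) for x, y in zip(a, b)]
```

If exactly two edges report a crossing, the quad is built from the two interpolated points (V = 0) and their copies extruded by d along the interpolated normals (V = d).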

Discussion
We presented an algorithm to compute texture coordinates for tubular, tree-like structures. The input is a user-defined source vertex or a vessel root-point detected by our proposed method. While techniques like that of Kälberer et al. [KNP11] exist, our parameterization is simpler to implement and computationally less costly. We were also able to show that the U coordinate can be used to find branch points of the vessel. Another advantage over existing parameterization methods is that the background pixels are also addressed, allowing depth cues as in Figure 10. This is due to the screen-space computation of V, but at the cost of recomputing V for each frame. Nonetheless, our algorithm performs in real time on modern consumer hardware. The properties of V can further be used for a camera-oriented display of decals on a mesh. On cylindrical parts of a structure (i.e. away from branch- and end-points), we find that the gradients of U and V are mostly orthogonal and therefore suitable for the display of patterns and textures. The CT algorithm yields a binary tree representation of the input structure. This restriction can be seen as a limitation, because meshes with a genus other than zero cannot be represented faithfully. Nonetheless, meshes with handles are robustly processed, with the handles simply being ignored by the employed algorithm [CSA01].
Figure 20: Close-up of an aneurysm data set. Our approach has issues with the spherical shape, introducing high distortion in V.

User survey
In order to underline the feasibility of our contributions in practice, we handed a questionnaire to five experts. The group of participants consisted of experts with a medical background (2), a visualization background (2) and a combined background (1). Their experience ranged from graduate status to experts with 15 years of experience. The questionnaire explained our anticipated applications described in Sections 4.1 (C1), 4.2 (C2), 4.3 (C3), 4.4 (C4) and 4.5 (C5, combined 2D and 3D tree view as in Figure 16; C6, lineage view as in Figure 17). The participants were then asked to select the applications that they think would fit the following general scenarios: expert-patient communication (S1), expert-expert communication (S2), enhanced shape perception (S3), resection planning (S4) and structure analysis (S5). The experts were also able to comment on their decisions. The sum of selections for each application-scenario combination is shown in Table 4. We can observe that each combination was selected at least once, except for the tree visualization (C5, C6) in the shape perception scenario (S3). Most promising are the applications based on the contour parameterization (C4) and the tree lineage visualization (C6), each with 19 votes. The least potential, considering the given scenarios, is attributed to the end-point visualization (C1) with 11 votes. However, an expert suggested that it might still be feasible for the discussion of resection volumes and the detection of end-points that are affected by a resection. The enhanced depth perception (C2) was mentioned to be useful for highlighting risk vessels as well. The application of hatching (C3) is predominantly seen in the area of expert-patient communication, in order to explain interventions at an abstract level. The visualization of decals (blood flow) based on the contour parameterization (C4) is probably suited for expert-patient communication, because of its intuitiveness and level of abstraction.
The 2D depiction of the vasculature (C5) might support the analysis of complex branching patterns during anatomical studies or facilitate the communication among both experts and patients because of its level of abstraction. Finally, the lineage visualization (C6) has the potential to enhance the planning of a resection by structural analysis and to communicate the results to a patient.

Limitations
The parameterization approach presented in this paper is simpler than existing approaches and yields good results for tubular structures. It is tailored to vessel trees in the sense that pixels that represent points on a vessel segment are relatively close to the contour pixels associated with the same segment. Therefore, the V approximation works well. This also holds for the modification that we made to the JFA, where pixels may only refer to contour pixels that have a higher depth value. The shading that is usually applied to a mesh is also a factor. The locations where V is locally maximal (i.e. the centreline of the projected contour as in Figure 4) are often locations of specular highlights. Hence, these peak locations of V, where the gradient of V is undefined, can be hidden by the shading. Owing to these susceptibilities, the parameterization quality decreases as the mesh morphology becomes less tubular. We show a stippling pattern applied to an aneurysm in Figure 20. It can be observed that part of the parameterization is highly distorted. This is due to an improper computation of V, as the associated pixels refer to wrong parts of the contour. This behaviour may change with a small rotation of the object, when other parts of the contour become visible or hidden. Therefore, frame coherence is also an issue.
We described in Section 3.1 that we smooth the vector-field Z in order to remove noise and high-frequency features. While this is a crucial step to obtain a proper and smooth U , it can also lead to smoothing out geometric features of the mesh. Figure 21 shows a close-up of a small branch segment, which is predominantly ignored in U .
Our root-point detection algorithm relies on two main assumptions: First, we assume that vessel end-points are cap-shaped, and second, we assume that the diameter of the root is larger than any other diameter. This means that we are likely to detect the head of an aneurysm, if it is larger than the healthy parts of the vessel.
The 2D graph representation of the vasculature as in Figure 16 (left) is currently generated by a straightforward computation of hue and saturation values based on the segments' binary codes. While this underlines the balance of Y, only a subset of the colour map is used (see Figure 22, left). Therefore, a further optimization of the colour assignment may be worthwhile. Another option is to colour only user-selected sub-trees, as in Figure 22 (centre, right).

Outlook
As discussed, the proposed method is tailored to tubular meshes. However, we hope that the examples shown in this paper are able to support or inspire future work in this domain. In our opinion, one can build on this work in two different ways. The first would be to further improve the methodical details and implementation. Here, the frame coherence should be addressed. We are also optimistic that the U computation can be done in real time as well, which might open further interactive application scenarios. As a screen-space approach is independent of the mesh size, this may have the potential to overcome the size limit of the method by Lichtenberg et al. [LSHL18]. The second direction would be to use the current method and integrate it into more sophisticated medical tasks. Using different visual channels and additional geometry (i.e. the contour), multiple scalar fields can be independently visualized with our approach. However, the effectiveness of different combinations in a real-world task needs to be evaluated. In conclusion, we are confident that particular aspects of the proposed technique are feasible in various scenarios and that the technique therefore creates a basis for further research.