Extended‐volume image‐derived models of coronary microcirculation

Recent advances in tissue clearing and high-throughput imaging have enabled the acquisition of extended-volume microvasculature images at submicron resolution. The objective of this study was to extract information from images of this type by integrating a sequence of 3D image processing steps applied to terabyte-scale datasets.

This approach requires realistic information about coronary vascular anatomy. [4][5][6] While the 3D topology of the coronary vascular network can be acquired in individual hearts for vessels on the order of 100 μm diameter using a variety of imaging modalities, [7][8][9][10][11] corresponding information for the microvasculature is less readily available. Computational models of vascular organization at this scale have been synthesized, 12 often using representative morphometric data from polymer casts. 7,13 More recently, such casts have been reconstructed in 3D, with the complete porcine coronary circulation captured at a voxel dimension of 25 μm using episcopic fluorescent imaging with a cryomicrotome. 11 Microcirculatory units in much smaller human heart specimens have been reassembled at much higher resolution from serial photomicrographs. 10 However, to model coronary blood flow at the microcirculatory level, micron-scale resolution is needed across relatively large tissue regions.
New optical clearing techniques [14][15][16] reduce light scattering and absorption in fixed tissue and enable fluorescence imaging at much greater specimen depths than was previously possible. With fast, sensitive video cameras now available, high-resolution images can be acquired successively through extended specimen dimensions. These methods have been used to reconstruct 3D neural circuits throughout optically cleared murine brains 17 employing selective plane illumination microscopy (SPIM).
Our laboratory has used a similar approach to image cleared heart specimens at a lateral resolution <1 μm with a purpose-built line-scanning confocal microscope. 18 As part of this study, these advances have allowed us to image the coronary circulation at micron scale throughout ~0.6-mm-thick rat ventricular short-axis slices. However, it remains necessary to extract the 3D topology and representative dimensions of all vascular components from such images in order to develop usable network models of the coronary circulation. 19 This requires a sequence of signal processing steps, including 3D image enhancement and segmentation, centerline tracking, and network formation, applied to terabyte-scale datasets.
In this paper, we describe a fully automated, chunk-based image processing pipeline designed to meet these objectives by exploiting the efficiency offered by parallel computing. We outline our chunk-based architecture, the methods used for enhancing and segmenting 3D image data, and the techniques employed to extract 3D network models of coronary microvasculature with vessel diameters up to 15 μm.
We believe these methods provide a novel, computationally feasible framework for constructing unique image-based 3D models of extended coronary circulation.

Short-axis sections ~0.6 mm thick (see Figure 1A) were cut with a vibrating microtome (Vibrotome 1000 Plus; Leica Biosystems), and optimal refractive index matching was achieved by cycles of immersion in graded concentrations of Ce3D. 16,20 A 3D reconstruction of the dataset used here is presented in Figure 1B; the dataset occupies 700 Gigabytes.

| Image acquisition
To define representative vascular network models from extended 3D images of this kind, we had to extract the topology and dimensions of all vascular components and reduce them to their most compact form. Key features of the processing pipeline used here are shown schematically in Figure 1C. For efficient handling of data on this scale, it was necessary to subdivide the image into small blocks called chunks that could be processed separately using parallel computing and then reassembled. Each chunk was filtered to enhance vessel-like structures and minimize noise introduced during imaging. An image of the circulatory network was then extracted by segmenting vascular components. These data were further reduced by extracting centerline and radius values of the vessels.

| Chunk-based processing
Raw image arrays were converted into a chunked-array file format called Zarr, 21 with each chunk compressed and stored in a separate file. The Zarr format preserved the spatial registration of each chunk, eliminating the need to load entire images into memory. Because standard image viewers do not support Zarr files, we used Napari 22 for visualization.
Parallel processing was enabled through the Python library Dask. 23 Zarr chunks were read from the disk, passed through the parallel processor, and the results were then stored on the disk as Zarr files.
Processing chunks individually generated discontinuities at chunk edges; these were eliminated by incorporating overlap between adjacent chunks.
Chunk size was a major determinant of parallel processing efficiency: smaller chunks resulted in many disk operations and increased processing time, while larger chunks consumed more memory and reduced the number of parallel processes. We found that a chunk size of 256 × 256 × 300 voxels was optimal for the dataset used here, with an overlap of 32 voxels between each chunk.
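The chunked read-process-write pattern described above can be sketched with Dask's `map_overlap`, which shares a halo of voxels between neighboring chunks so that filters remain seamless at chunk borders. This is a minimal sketch: the array shape, the no-op `enhance` filter, and the reduced chunk/overlap sizes are illustrative stand-ins for the 256 × 256 × 300 chunks and 32-voxel overlap used in the study.

```python
import dask.array as da
import numpy as np

def enhance(block):
    # Stand-in for a per-chunk filter (deconvolution, diffusion, ...).
    return block.astype(np.float32)

# In the pipeline the volume would be read with da.from_zarr("image.zarr");
# a small random array keeps this sketch self-contained.
vol = da.random.random((128, 128, 150), chunks=(64, 64, 75))

# depth=8 shares an 8-voxel halo between chunks (32 voxels in the study),
# so filters see neighboring data and chunk borders remain seamless.
result = vol.map_overlap(enhance, depth=8, boundary="reflect")

out = result.compute()
print(out.shape, out.dtype)
```

Writing `result` back with `da.to_zarr` would complete the read-process-write cycle described in the text.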

| Image enhancement
For each chunk, raw image data were enhanced as follows. First, the impulse response function of the imaging system, also known as the point spread function (PSF), was measured using 200-nm microbeads and used to deconvolve the raw image data employing the iterative Richardson-Lucy (RL) method. 24,25 The Python-Microscopy 26 library was used for PSF characterization, and the RL deconvolution algorithm included in the library was parallelized. The improvement achieved with deconvolution was estimated by comparing successive intensity line profiles, and ~50 iterations produced optimal deblurring. Following deconvolution, we applied a multiscale anisotropic diffusion filter, 27 which smooths intensity along vessel boundaries, enhancing resolution in capillaries where labeling is least dense.
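As an illustration of the deconvolution step, the sketch below applies scikit-image's Richardson-Lucy implementation to a synthetic line blurred by a Gaussian PSF. The Gaussian kernel and reduced iteration count are stand-ins for the bead-measured PSF and the ~50 iterations used in the study; the parallelized Python-Microscopy routine is not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy

# Synthetic "vessel": a bright line blurred by a Gaussian stand-in PSF.
truth = np.zeros((64, 64))
truth[32, 10:54] = 1.0
blurred = gaussian_filter(truth, sigma=2)

# Discrete PSF kernel matching the blur applied above.
psf = np.zeros((15, 15))
psf[7, 7] = 1.0
psf = gaussian_filter(psf, sigma=2)
psf /= psf.sum()

# Iteration count passed positionally for compatibility across
# scikit-image versions (the keyword was renamed over time).
restored = richardson_lucy(blurred, psf, 25, clip=False)
print(restored.shape)
```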
We used intensity normalization to offset spatial variation in image characteristics across extended specimen dimensions. This was performed independently for each chunk, and a linear interpolation-based correction was used to ensure a uniform intensity distribution across the image volume without discontinuities at the interfaces between chunks. Normalization of the intensity I_{i,j,k} at each voxel within a chunk was based on the mean (μ_l) and standard deviation (σ_l) of that chunk and the 26 chunks immediately adjacent to it. Contributions from each were weighted by the distance of the voxel from the centroids of all 27 chunks involved. The normalization process is represented by

Î_{i,j,k} = Σ_{l=1}^{27} W_{i,j,k,l} (I_{i,j,k} − μ_l) / σ_l,

where W_{i,j,k,l} is a 256 × 256 × 300 × 27 element weight matrix.
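A minimal numerical sketch of this distance-weighted normalization, simplified to one spatial dimension and three chunks (all values here are hypothetical; the pipeline weights over the 27 chunks surrounding each voxel):

```python
import numpy as np

rng = np.random.default_rng(1)

# Three hypothetical "chunks" along one axis, each with its own mean
# and standard deviation, and their centroid positions.
chunks = [rng.normal(loc=m, scale=s, size=100) for m, s in [(10, 2), (20, 4), (15, 3)]]
centroids = np.array([50.0, 150.0, 250.0])

def normalize_voxel(value, position):
    # Inverse-distance weights to each chunk centroid, summing to 1
    # (the pipeline's W matrix stores such per-voxel weights).
    w = 1.0 / (np.abs(centroids - position) + 1e-6)
    w /= w.sum()
    mu = np.array([c.mean() for c in chunks])
    sigma = np.array([c.std() for c in chunks])
    # Weighted z-score: sum_l w_l * (I - mu_l) / sigma_l
    return float(np.sum(w * (value - mu) / sigma))

print(round(normalize_voxel(12.0, 60.0), 3))
```

A voxel located exactly at a chunk centroid is normalized almost entirely by that chunk's statistics, which is what removes discontinuities at chunk interfaces.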

| Segmentation
A well-established active-contour technique 28 was used to segment the microcirculatory network. This technique requires an initial mask placed near the object to be segmented, and we used the Frangi filter 29 for this purpose: the Frangi vesselness measure was binarized to generate masks for the active contour. With this initialization, 10 iterations were sufficient for the contour to converge to the optimal segmentation.
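The initialization scheme can be sketched with scikit-image, using morphological Chan-Vese as a stand-in for the active-contour technique cited in the text; the synthetic image and binarization threshold below are illustrative only.

```python
import numpy as np
from skimage.filters import frangi
from skimage.segmentation import morphological_chan_vese

# Synthetic image with one bright "vessel" line.
img = np.zeros((64, 64))
img[30:33, 8:56] = 1.0

# Frangi vesselness, binarized to seed the active contour.
vesselness = frangi(img, black_ridges=False)
init_mask = vesselness > 0.5 * vesselness.max()

# 10 iterations were sufficient for convergence in the study
# (iteration count passed positionally for version compatibility).
seg = morphological_chan_vese(img, 10, init_level_set=init_mask)
print(seg.shape)
```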

| Centerline extraction
Centerlines of vessels segmented in each chunk were extracted using a modified form of the voxel-coding method introduced by Zhou and Toga. 30 This scheme employs a minimum distance transform from specified seed points to generate fields at all voxels in the segmented volume. Connected vascular network components and their center points were identified using these fields, and connecting these points provided a piecewise-linear representation of vessel centerlines.
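The voxel-coding idea can be sketched as a breadth-first wave from a seed over the segmented mask, assigning each foreground voxel its geodesic distance; center points are then read off per distance level. This sketch is reduced to 2D with a single straight hypothetical "vessel" for clarity.

```python
import numpy as np
from collections import deque

# Segmented mask containing one horizontal "vessel".
mask = np.zeros((7, 20), dtype=bool)
mask[3, :] = True
seed = (3, 0)

# Breadth-first wave: dist holds the geodesic distance from the seed.
dist = np.full(mask.shape, -1, dtype=int)
dist[seed] = 0
queue = deque([seed])
while queue:
    y, x = queue.popleft()
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ny, nx = y + dy, x + dx
        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                and mask[ny, nx] and dist[ny, nx] < 0):
            dist[ny, nx] = dist[y, x] + 1
            queue.append((ny, nx))

# Center points: centroid of voxels at each distance level; connecting
# them yields a piecewise-linear centerline.
centers = [np.argwhere(dist == d).mean(axis=0) for d in range(dist.max() + 1)]
print(dist.max(), len(centers))  # 19 20
```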

| Graph representation of vascular network morphology
The network information extracted above was incorporated into an undirected graph structure consisting of nodes (vessel center points) and edges representing the connections between them. The 3D coordinates of the nodes and the orientation, length, and mean radius of each edge were added to the graph (Tables 1 and 2).
Edge radius was estimated using the ray-burst algorithm, 31 which enabled independent parallel processing and captured radius variation in capillaries. A template consisting of N = 100 rays normal to the unit vector n⃗ was positioned along each edge of the vessel graph by aligning n⃗ with the edge orientation. The template was rotated, and the edge radius was estimated as the median length of rays cast to the vessel boundary.
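A hedged sketch of this ray-burst measurement on a synthetic binary cylinder: rays in the plane normal to the (here z-aligned) edge direction are marched outward until they leave the segmented volume, and the radius is taken as the median ray length. The ray count follows the text; the marching step and test volume are illustrative assumptions.

```python
import numpy as np

# Binary volume containing a cylinder of radius 5 along the z axis.
zz, yy, xx = np.mgrid[0:32, 0:32, 0:32]
seg = (yy - 16) ** 2 + (xx - 16) ** 2 <= 5 ** 2

def ray_burst_radius(seg, point, n_rays=100, step=0.25):
    # Cast n_rays in the plane normal to the z-aligned edge direction.
    angles = np.linspace(0, 2 * np.pi, n_rays, endpoint=False)
    lengths = []
    for a in angles:
        r = 0.0
        while True:
            y = int(round(point[1] + r * np.sin(a)))
            x = int(round(point[2] + r * np.cos(a)))
            inside = (0 <= y < seg.shape[1] and 0 <= x < seg.shape[2]
                      and seg[point[0], y, x])
            if not inside:   # ray has left the segmented vessel
                break
            r += step
        lengths.append(r)
    return float(np.median(lengths))

print(ray_burst_radius(seg, (16, 16, 16)))
```

The median is robust to individual rays that escape through small segmentation defects, which is why it is preferred over the mean here.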

| Graph reduction
The vessel graph was simplified by removing nodes that were not branches or terminals. A graph search was started from midnodes (2 edges attached) and propagated until either a terminal (1 edge only) or a branch (>2 edges) was encountered. A new edge was added between these boundary nodes, while midnodes and associated edges were removed. The start and end points of this process are represented in Figure 2B,C, respectively.
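The reduction step can be sketched with networkx on a toy graph (a hypothetical stand-in for the vessel graph): midnodes of degree 2 are removed and replaced by a direct edge between the bounding terminal/branch nodes.

```python
import networkx as nx

# Toy vessel graph: terminal 0 - mid 1 - mid 2 - branch 3 - terminals 4, 5.
g = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4), (3, 5)])

def reduce_graph(g):
    g = g.copy()
    for n in [n for n in g.nodes if g.degree(n) == 2]:
        nbrs = list(g.neighbors(n))
        if len(nbrs) != 2:       # degree changed by earlier removals
            continue
        g.add_edge(*nbrs)        # bridge the two neighbors...
        g.remove_node(n)         # ...and drop the midnode
    return g

reduced = reduce_graph(g)
print(sorted(reduced.nodes))  # [0, 3, 4, 5]
```

In the pipeline the lengths and radii of the removed edges would be accumulated onto the new edge; attribute handling is omitted from this sketch.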

| Merging graphs
Because the simplified vessel graph in each chunk was processed separately, vessel segments near the boundaries were divided, and contiguous segments in adjacent chunks had to be merged to assemble a graph for the complete image volume. We did this by combining graph data from adjacent chunks with an overlap that ensured all vessel segments spanning chunks were captured. It was also necessary at this stage to convert the local coordinates used in each chunk into the global coordinates of the complete image volume.

Figure 3 illustrates key steps in the processes of image enhancement and segmentation. In the 2D subsection (136 × 136 μm) of a raw image volume of the labeled coronary circulation shown in Figure 3A, background blur is removed by deconvolution and denoising while the topology of foreground vessels is preserved and sharpened (Figure 3B). Energy-based active contour segmentation (Figure 3C) identifies vessel boundaries and stratifies the voxels within them.
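The chunk-merging step described at the start of this section can be sketched with networkx: per-chunk graphs keyed by local voxel coordinates are relabeled into global coordinates using each chunk's offset, after which nodes in the overlap region coincide and the graphs can be composed. The coordinates and offsets below are hypothetical.

```python
import networkx as nx

def to_global(g, offset):
    # Convert local chunk coordinates to global image coordinates.
    mapping = {n: tuple(c + o for c, o in zip(n, offset)) for n in g.nodes}
    return nx.relabel_nodes(g, mapping)

# Two per-chunk graphs sharing one vessel point in their overlap region.
g1 = nx.Graph([((0, 0, 0), (0, 0, 30))])    # chunk with offset (0, 0, 0)
g2 = nx.Graph([((0, 0, -2), (0, 0, 10))])   # chunk with offset (0, 0, 32)

merged = nx.compose(to_global(g1, (0, 0, 0)), to_global(g2, (0, 0, 32)))
print(merged.number_of_nodes(), merged.number_of_edges())  # 3 2
```

Because the shared point maps to the same global coordinate in both chunks, `compose` fuses it into a single node, joining the two vessel segments.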

| Validation of image acquisition and image processing
The accuracy of segmentation was quantified as follows. Blood vessels in a small deconvolved image block (512 × 512 × 127 μm) from the lateral midwall region of a normal rat heart were segmented manually and annotated using the ImageJ Segmentation Editor. 32 Anatomically unrealistic structures, including holes in the vessel surface and disconnected vessels, were manually restored. The initial deconvolved block was also segmented using our active contours method, and the Dice score 33 between manual and automatic segmentation was 0.9212.
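For reference, the Dice score used here is 2|A ∩ B| / (|A| + |B|); a minimal sketch on a hypothetical pair of small binary masks:

```python
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    # Dice = 2 * |A intersect B| / (|A| + |B|)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

manual = np.zeros((8, 8), dtype=bool)
manual[2:6, 2:6] = True   # 16 voxels
auto = np.zeros((8, 8), dtype=bool)
auto[3:7, 2:6] = True     # 16 voxels; 12 overlap with `manual`
print(dice(manual, auto))  # 0.75
```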
The precision of vascular radius estimation was validated using synthetically generated helical structures with known radii. A vessel graph was generated for each helix, and the radius estimated at each node was compared with the known value.

TABLE 1 Node parameters (columns: Parameter, Description).

| Demonstration of the analysis pipeline
The utility of our analysis pipeline is demonstrated here by quantifying the morphology of the microvasculature throughout the short-axis ventricular section (13 × 11 × 0.6 mm) presented in Figure 1.
Image acquisition took ~8 h, and the image volume was subdivided into 2600 chunks. We used high-performance computers in our laboratory to process these datasets; specifications are given in Table 3. The segmented region spanning epicardium to endocardium (the highlighted region in Figure 1B) is shown in Figure 4.

Parts of the vessel graphs from three isotropic blocks (256 μm sides) from the subepicardial, midwall, and subendocardial regions are also shown. Morphological data for the complete short-axis ring are presented in Figure 5, and the vascular network identified consists of ~1.5 × 10⁶ vessel segments. Figure 5A,B show that microvessel lengths in the rat coronary microvasculature vary from 6 to 300 μm, but their distribution is heavily skewed toward shorter lengths, with the mode at 16.5 μm. Approximately 15% of vessel segments are shorter than 16.5 μm, ~47% are 16.5-50 μm long, and 25% are 50-100 μm long. In contrast, vessel diameters range from 3 to 15 μm and have an approximately normal distribution with a mean of 6.5 ± 2 μm.
Finally, Figure 5C confirms that microvessels exhibit a characteristic regional variation in orientation, rotating progressively across the wall from around 90° with respect to the XY plane adjacent to the outer epicardial surface, through 0° in the midwall, and back toward 90° at the inner endocardial surface.
This pattern is relatively uniform across the walls of the right and left ventricles.

| DISCUSSION
As described and demonstrated above, we have developed a comprehensive image-processing pipeline to extract vascular networks from very large microscopy image volumes in which the microvasculature has been labeled with fluorescent probes. Our work integrates and extends a suite of 3D image processing steps.

FIGURE 4 Segmented vessels in an extracted region ranging from epicardium to endocardium (the highlighted yellow region in Figure 1B) of dimensions 0.5 × 3.7 × 0.6 mm; colors indicate depth. Parts of three 256 μm isotropic vessel graphs from the epicardium, midwall, and endocardium are shown from the highlighted regions.

These methods will also enable the investigation of biophysical mechanisms that underlie blood flow distribution throughout the heart wall using computer models.
Detailed image-based reconstructions of the coronary microvasculature have, to date, been limited to small regions, 11 and more complete data have only been acquired at a low resolution. 10 We have extracted the microvascular network throughout a short-axis slice of a rat heart, and Table 4 provides a comparison with the scale and resolution of previous studies.
The representative results presented here indicate that coronary microvessels have a mean diameter of 6.5 ± 2 μm. Capillaries and terminal venules were the predominant vascular components identified in this study, and our results are consistent with capillary cross-sectional dimensions reported in previous morphological studies in rat 34 and pig 7 hearts. It is difficult to find a basis for comparing the distributions of vessel segment length presented here with data reported elsewhere. 6,7,11 Our findings indicate that a significant fraction of microvessels are branches or anastomoses and that most capillaries do not extend over the full length of cardiomyocytes (generally on the order of 80-120 μm). Finally, the transmural distribution of microvessel orientation in the short-axis ventricular ring presented in Figure 5C matches corresponding findings for myofiber orientation. 35,36 This is hardly surprising, since coronary capillaries are known to lie in the extracellular space between adjacent, tightly packed ventricular myocytes.
Vessel radii were estimated using the segmented volume, and the method was validated with a synthetically generated vessel network. While this supports the accuracy of the computational process, other factors could have affected the results reported here.
These include shrinkage during tissue processing, imaging artifacts, deconvolution, binarization, the stochastic nature of centerline extraction, and the chunking process. We sought to minimize these errors wherever possible, but as part of future work, independent fiducial markers could be acquired to confirm the absolute accuracy of our approach.
A variety of image processing toolkits is available for medical image processing and segmentation, including FIJI, 32 Icy, 37 MATLAB, ITK, 38 MIA, 39 Python packages, and other online tools.
However, as stated above, the volume of the image datasets studied is so large that parallel computing is necessary to ensure that the volume of data can be handled via a given workflow. Because existing toolkits were not optimized for this purpose or did not deal with our specific tasks, we developed the pipeline described above from scratch.
In addition to the length, radius, and orientation of vessels, we have also included vessel density and structure tensor analysis in the developed toolkit. Furthermore, this toolkit allows for easy integration of different algorithms, making it a versatile tool for processing various types of tissue data. One of the key advantages of this toolkit is its core library, which operates independently of the parallel library. This means that any function designed to work on a small block or chunk can easily be applied to a terabyte-scale dataset.

Two issues had to be addressed to achieve effective chunk-based processing. First, edge effects tend to be introduced at chunk boundaries because the support of convolution operations extends beyond the chunk border.
Second, although chunk processing is performed independently, global operations must be homogenized, and this requires data sharing.
Intensity normalization provides an example of this second problem, because intensity distributions must be consistent across chunks without discontinuities at the interfaces between them. Overlaps between chunks were generated efficiently, ensuring that filter outputs were preserved at chunk borders without additional data transfer. We used the RL algorithm to deconvolve the raw image volume with a PSF measured for the imaging system. The deconvolution end point must be estimated so that iteration stops before the result deteriorates.

TABLE 4 Coronary circulation studies.
Despite ongoing debate, the stopping criteria of such iterative deconvolution techniques remain an open problem. 40 In this study, we compared the line profiles at every 10 iterations of the deconvolution and found that RL performed well with 50 iterations.
We intend to use the described imaging pipeline to reconstruct a complete coronary vascular network by stitching together graphs extracted from serial short-axis sections in individual rat hearts and are confident that our described approach will enable this to be achieved. At present, however, the deformation of the separate sections during tissue clearing has prevented us from processing a full heart. To achieve this goal, it will be necessary to embed and stabilize sections prior to processing to enable the registration of adjacent image volumes.
Because the endothelial label "filled" only a small fraction of the coronary vessels, the extracted vessel network covered only capillaries and a portion of the arterioles and venules. We did not automatically distinguish between arterioles, capillaries, and venules.
However, visually, venules typically differed significantly from arterioles, presenting as flattened tubes, while arterioles had circular cross sections. Image segmentation proved more difficult in larger "unfilled" blood vessels (transmural arteries and veins) than in microvasculature. To segment these blood vessels, we used a two-stage semi-automatic process in which the endothelial surface was identified before the binarization of the complete vessel. However, this proved time-consuming, and the process could be made more efficient as part of future work by filling coronary conduit vessels (with fluorescent gelatine, for instance) immediately after perfusing the coronary circulation with lectin.

| PERSPECTIVES
This paper presents a computational pipeline that allows detailed microvascular networks to be extracted from large fluorescence microscopy datasets. The novel tools developed for this purpose will enable a more comprehensive analysis of vascular morphology in the normal heart and in heart disease than has previously been possible.
The network representations obtained will enable the investigation of biophysical mechanisms that determine regional oxygen delivery in the heart using realistic computer models. The approaches de-

ACKNOWLEDGMENTS
This work was supported by Marsden Fund (Royal Society of New Zealand Te Apārangi) grants UOA1620 and UOA1920. The image acquisition system used in this study was funded in part by the Leducq Foundation Transatlantic Network Programme Grant RHYTHM.

CONFLICT OF INTEREST STATEMENT
There are no conflicts of interest reported by any of the authors.

DATA AVAILABILITY STATEMENT
Network data of three 256 μm isotropic blocks from epicardium, midwall, and endocardium capillaries and the code for the developed image processing pipeline are publicly available at: https://github.com/vibujithan/vasculature. Raw image data are available upon request.