STAR: Visual Computing in Materials Science

Visual computing has become highly attractive for boosting research endeavors in the materials science domain. Using visual computing, a multitude of different phenomena may now be studied at various scales, in different dimensions, or using different modalities, which was simply impossible before. Visual computing techniques provide novel insights for understanding complex material systems of interest, as demonstrated by a strongly rising number of new approaches publishing novel techniques for materials analysis and simulation.


Introduction
During the past decades, a clear trend has formed in industry of constantly driving research towards tailored materials for new, cost-efficient, function-oriented, highly integrated, and light-weight components with previously impossible specifications. Industries such as health care, agriculture, construction, packaging, sports equipment, automotive, aeronautics, environmental protection, and others thus increasingly adopt these tailored materials to stay ahead of the competition. The understanding, discovery, design, and use of (new) materials as well as material systems are integral parts of materials science and are permanently driven to new frontiers. Ambitious projects boost the development of novel materials for future high-quality components. Outstanding activities are found, for example, in the manufacturing of wall components for future fusion reactors as presented by Kim et al. [KJS*14], the design of novel anode materials for energy storage in batteries as presented by Gyulassy et al. [GKLW16], or the analysis of advanced composite components for aeronautic and automotive applications as presented by Bhattacharya et al. [BHA*15]. These projects all share the fact that only a detailed understanding of the material systems of interest ensures meeting the application-specific targets. Their targets are highly diverse and range from new materials withstanding temperatures of more than 15 million degrees Celsius, over deciding whether carbon nanospheres will be the anode material for future lithium-ion-based batteries, to developing carbon fiber reinforced composite materials for the fuselage of a new generation of airplanes. For knowledge discovery, simulations of material systems are also of high importance when designing novel, outstanding materials. Especially simulations of the material systems in their targeted use and environments are gaining momentum: An earlier approach by Laevsky et al.
[LTM01] provides an interactive visualization and computational steering tool for interactive numerical simulations of a glass pressing process. Patkar and Chaudhuri [PC13] investigate the mechanics of wetting porous solid objects by computing fluid flows through porous media. Another technique presented by Gyulassy et al. [GKLW16] simulates the synthesis and ion diffusion of battery materials. In all these areas, visual computing generates new and previously unknown insights, supporting material scientists in understanding the material's inner structures and its behavior while in use. Besides these introductory achievements, the potential impact of visual computing is found in virtually any aspect of materials science.

Figure 1: The individual objects on the left side are combined into a Mean Object (MObject), with a cut through it on the right side. The blue core of the object depicts a high probability of voxels belonging to the MObject. This core is surrounded by the yellow medium-probability layer, which together with the outermost low-probability layer forms the uncertainty cloud. © 2013 IEEE. Reprinted, with permission, from [RGK*13].

To demonstrate the interrelation of visual computing and materials science, consider how detailed visual analysis of a material's inner structures feeds simulations with much more precise models. The enhanced precision of the simulation allows producing components which fulfill the target specifications with less material (cheaper) and are even lighter (more economic). Especially in aeronautic applications this is a strong standing demand. Furthermore, these new visual analysis functionalities allow classifying pores, inclusions, or voids as critical or uncritical defects based on a wide variety of characteristics. Therefore, non-destructive testing (NDT) engineers may now safely decide whether an $80,000 aeronautic component may fly or needs to be scrapped. This example impressively shows how visual computing enables solving important materials science challenges and how it opens previously unattainable insights into complex materials and their behavior in use.
Aside from this example, an increasing number of materials science projects make use of visual computing, and especially visualization or visual analysis approaches, in their research. In the literature, however, a huge gap is still observable regarding reviews analyzing the body of work at the intersection of those two domains. Apart from earlier reviews on specific niches, such as the visualization of industrial computed tomography data for non-destructive testing as presented by Huang et al. [HMMW03], data fusion for non-destructive evaluation as presented by Liu et al. [LFK*07], or dimensional metrology using computed tomography as presented by Kruth et al. [KBC*11], to date there is no comprehensive survey available giving insight into this growing area of research. In this work our main goal is to close this gap. By analyzing visual computing approaches for materials science in a structured literature review and by integrating our own experience, we shed light on how both areas profit from each other. To analyze the body of existing work, we follow a structured literature research similar to Sedlmair et al. [SHB*14] in their conceptual framework on visual parameter space analysis, Beyer et al. [BHP15] regarding GPU-based large-scale volume visualization, and Alsallakh et al. [AMA*14] in terms of visualizing sets and set-typed data. We first analyze the relevant literature regarding its high-level tasks as well as the used testing or simulation techniques. We then review the data characteristics, the visualization techniques used, and the suggested interaction concepts. We see the main contributions of our work in the following points:
1. Review and classification of the current body of literature at the intersection of visual computing and materials sciences,
2. Extraction of a cumulative matrix demonstrating the intersections of both fields,
3. Analysis of the application areas as well as their high-level visual computing, visual analysis, and visualization tasks for materials sciences,
4. Investigation of data acquisition methods regarding the characteristics of the different inputs vs. direct and derived outputs,
5. Discussion of the used visual metaphors as well as why they are preferred over others,
6. Discussion of the used interaction concepts as well as the corresponding analysis workflows,
7. Identification of open research challenges to guide future research endeavors in this area.

Method
This state-of-the-art report is based on a structured literature review at the intersection of visual computing and materials sciences. The related literature was compiled, reviewed, and clustered by two core annotators, i.e., the authors of this report, with continuous feedback from two domain experts in materials science as well as three experts in visual computing. These external advisors were continuously provided with concepts and drafts of the submission over a period of more than a year and asked for their feedback on the report, the material systems, the tasks, and the relevant testing and simulation techniques. In addition, the future challenges were discussed in a round of specialists from visual analysis, non-destructive testing, and materials science at a recent workshop. Regarding the core annotators, one of them is an expert who has been active in research at the intersection of visual computing and materials sciences for more than 12 years at the time of writing this report. He is experienced in both domains, visual computing as well as materials sciences. The second author is a promising junior researcher also working at the intersection of visual computing and materials science. For our state-of-the-art report, the related literature at the intersection of visual computing and materials sciences was screened, yielding a total of 241 research papers considered as potentially interesting: We started off with this initial set of contributions on the one hand from top-level visualization, visual analysis, and visual computing conferences and journals (e.g., IEEE Transactions on Visualization and Computer Graphics, Computer Graphics Forum, etc.), and on the other hand from top-level materials testing, materials simulation, and materials sciences publications (e.g., Journal of Materials Science, Journal of Nondestructive Evaluation, etc.), which showed interrelations of both fields.
In the next step we further extended our review to smaller conferences and venues, as well as niche topics in both areas. For the classification of the full set of papers we applied a hierarchical classification scheme: The complete set of papers was initially classified regarding relevance on a scale of zero to five stars, from no relevance to core relevance. Core relevant contributions are required to reach a materials science research target through extensive use of visual computing approaches. Papers of this category include novel visual computing techniques for materials science as well as applications or adaptations of existing visual computing techniques within a specific materials sciences domain. Contributions which integrate visual computing with limited novelty for solving a materials science task are considered as relevant papers. Papers classified with passing relevance focus mainly on either of the two domains and hardly touch the other one. As not relevant we classified contributions which do not demonstrate the intersection of the two domains. This process led to 88 contributions, classified from relevant to core relevant, which are discussed in this report. The second-level review targeted a representative overview of all papers in the form of a matrix. We thus investigated the set of 88 papers in detail using an open coding process and manually registered our findings in a dynamically growing and continuously adapted matrix of features based on the following main categories: application, high-/low-level tasks, material system, data acquisition and characteristics, and visual computing / visual analysis aspects. To ensure a common understanding, continuous discussions amongst the annotators were carried out at least on a weekly basis during the filling of the feature matrix.
In these discussions the main categories were also further subdivided and refined into lower levels, in order to ensure that no information is lost. After all relevant papers had been encoded in the feature matrix, a manual clustering of the subcategories was carried out as the next step: Subcategories of lower interest, showing only limited contributions, applications, or knowledge discovery for our matrix, were discussed and consequently removed if only passing knowledge discovery was identified. The clustering was iterated until descriptive subcategories were found. The results of our literature research were integrated into a cumulative matrix which correlates the analysis tasks with the used visualization and interaction techniques. At the corresponding intersections we integrated the respective application areas, as seen in Figure 2.

Visual Computing in Materials Science
Visual computing, and especially visualization as well as visual analysis, has become highly attractive for generating new, previously unattainable insights for materials science by studying a multitude of different phenomena at multiple scales, in different dimensions, or using different modalities. In the following sections, we define materials science and its various material systems. We further define visual computing and review the tasks to be solved, the simulation and testing techniques, as well as the used visualization and interaction techniques. During the iterative process of analyzing the body of literature, we classified and clustered the relevant related work in a matrix revealing the different interrelations of both fields. Figure 2 outlines all relations in detail.

Definition of Materials Science
Materials science involves the subareas of understanding, discovery, design and use of (new) materials as well as material systems.
As outlined by Adrian Sutton [Sut] from the University of Oxford, it is difficult to find a comprehensive definition of materials science which covers all subtle niches of the field. Therefore, materials science is best defined by its core areas and tasks, e.g., as found in the description of Springer's Journal of Materials Science. According to their definition, materials science includes: "... techniques for studying the relationship between structure, properties, and uses of materials. The subjects are seen from international and interdisciplinary perspectives covering areas including metals, ceramics, glasses, polymers, electrical materials, composite materials, fibers, nano structured materials, nano composites, and biological and biomedical materials ..." [Spr] This description outlines the broad spectrum of materials science, its diversity, as well as its core relevant domains. For all areas of understanding, discovering, designing, and using materials, a primary objective of materials science is found in gaining a profound knowledge of the material's properties (material analysis) as well as the material's performance characteristics (material simulation). Visual computing holds a large potential in retrieving and visualizing complex materials characteristics for materials scientists. For example, the properties of advanced materials are determined by material analysis techniques and visualized in order to form a mental image of regional distributions, critical areas, or features of interest. Furthermore, regarding their performance, (new) materials are increasingly modeled and simulated within their targeted usage scenario. To make sure that the generated simulation results correspond to reality, they are subsequently verified by means of destructive (DT) and non-destructive testing (NDT) techniques using visual computing (see also Straumit et al. [SLW15]).
While non-destructive testing enables reusing the same specimen for tests with supplemental techniques, destructive techniques destroy the specimen during evaluation.

Material Systems
To provide an overview of promising material systems as well as the structures and features of interest, we introduce in the following paragraphs material systems for which visual computing techniques are highly relevant: Composite Materials are found in a wide variety of industries, ranging from leisure, sports, and electronics to the automotive, aeronautic, and space industries. Advanced composites in particular are regarded as materials of the future in many industries, due to their characteristics tailored to the target application area. They are composed of individual components to fulfill the targeted requirements in terms of strength, stiffness, function-orientation, and light weight. Composite materials feature a base matrix material, which forms the component and holds the reinforcement components in place. The reinforcements carry the loads of the component while in use, resulting in a material system of superior behavior compared to conventional materials. The main part of the reviewed literature in the area of composite materials centers around fiber reinforced materials. For example, carbon fiber reinforced polymers (CFRPs) show a low weight but high specific stiffness and high specific strength. CFRPs are also cost effective for the properties they deliver, as stated by Bhattacharya et al. [BHA*15]. CFRP is thus a promising candidate material for a large variety of new automotive, aeronautic, and space components. Besides CFRPs there are many other fiber reinforced composites in industrial use. Glass fiber reinforced composites (GFRPs) integrate glass fibers as reinforcements and allow the manufacturing of cheap and robust injection molded parts, as indicated by Kastner et al. [KPHF08]. GFRPs are thus found, e.g., in housings of electronic or automotive components.
Fiber reinforced ceramic matrix composites (CMCs) withstand high temperatures and show a high resistance to thermal shocks, which opens application areas of this system for heat shields or ceramic disc brake systems. Further reinforcement components are found in steel fibers, as discussed by Fritz et al. [FHG*09] and Westenberger et al. [WEL12], and in biological fibers (e.g., wood-based fibers as presented by Tran et al. [TDD*12]).
Polymer Materials, especially polymer foams, are present in many advanced industrial applications due to their advantageous characteristics: Polymer foams feature stiff, strong, as well as compressible cellular cores and have thus attracted significant interest for energy absorption applications. As their cellular structure shows a stochastic behavior, polymer foams require a detailed structural analysis of their foam network. Moreover, regarding the simulation of polymer foams under load, the respective mechanical properties of this material system are typically not linear but hyper-elastic, as described by Patterson et al. [PCH*15]. Even in the manufacturing of polymer foams visual computing is required, observing the plastic foaming processes under shear stress as presented by Wong et al. [WP12]. Another example of new polymer materials in need of detailed investigations are semi-crystalline polymers. As discussed by Tabatabaei et al.
[TBNP14], the properties and morphology of semi-crystalline polymers need to be continuously analyzed during the crystallization process in order to reach the targeted application properties, which are mainly defined by the crystallite structures. In general, polymers are found in important material systems for future applications. For this reason this field is a highly active subarea of research inside materials science, which is especially in need of visual computing supported studies.
Ceramic and Glass Materials are typically based on well-known subcomponent materials. As these subcomponent materials are well known, visual computing supported investigation is mainly required in this area for analyzing highly specialized material subsystems. For example, using micro powder injection molding, complex shaped ceramic or metallic parts are produced, which show a special need for visual computing supported analysis. Another example is given in Weber et al.'s work [WRR*11], who investigated the powder-binder separation using synchrotron-based microtomography and 3D image analysis. Furthermore, fused silica glass is used in the work of Galvin et al. [GGB01] for photolithography or for first surface mirrors in telescopes. The authors analyzed surface strain and polishing artifacts using scanning electron microscopy. The determination of quantitative material properties is of high relevance in order to advance the material for future applications.
Metals and Alloys are still widely used in industrial applications, leisure, automotive, and aeronautics. In automotive and even in aeronautic applications, a renaissance of these conventional materials is currently being observed in the form of novel high-strength, low-density steels or aluminum alloys for lightweight construction. For example, Al2024 is such an alloy, which is increasingly used in the automotive and aeronautic industry because of its low density and high damage tolerance. Typically, metallic materials are processed in various manufacturing steps such as forging, drawing, or rolling in order to tailor their characteristics to the application's requirements. As the material's micro structure and related characteristics mainly define the overall properties of the metal or the alloy, it is crucial that these features are analyzed and controlled in manufacturing. To achieve this target, visual computing methods are used by Bhimavarapu et al. [BMBN10] to explore the compressive deformation behavior of Al2024. They encode the power density as well as the instability of the alloy in 2D and 4D maps. Ductile cast iron, as a further example of this area, provides high strength, high ductility, and high fatigue strength, depending on the micro structures of the graphite particles inside the material. These graphite micro structures are analyzed and quantified according to form and size parameters in visual analysis tools as presented by Fritz et al. [FHG*09]. In addition to the micro structure of the graphite particles, they also investigate steel fibers using directional sphere histograms encoding their orientation in the material system. Another highly interesting material system in this area is found in AlSiC, a metal matrix composite consisting of an aluminium matrix with silicon carbide particles. While AlSiC composites offer the high thermal conductivity of a metal, they facilitate maintaining the low thermal expansion of a ceramic. Reh et al.
[RAK*15] presented a graph-based technique for tracking those particles and other features of interest over time, i.e., over the various steps of a heating/cooling process.
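To give a flavor of the directional sphere histograms used for encoding fiber and particle orientations, the sketch below bins unit orientation vectors by azimuth and inclination on the upper half-sphere, identifying antipodal directions since a fiber axis has no sign. This is a simplified illustration under our own binning assumptions, not the exact method of Fritz et al.:

```python
import math

def orientation_histogram(directions, n_azimuth=8, n_inclination=4):
    """Bin unit fiber-orientation vectors on the half-sphere
    (a simplified sketch of a directional sphere histogram).
    Antipodal directions are identified."""
    hist = [[0] * n_azimuth for _ in range(n_inclination)]
    for x, y, z in directions:
        if z < 0:  # fold onto the upper half-sphere
            x, y, z = -x, -y, -z
        inclination = math.acos(max(-1.0, min(1.0, z)))    # 0 .. pi/2
        azimuth = math.atan2(y, x) % (2 * math.pi)         # 0 .. 2*pi
        i = min(int(inclination / (math.pi / 2) * n_inclination), n_inclination - 1)
        j = min(int(azimuth / (2 * math.pi) * n_azimuth), n_azimuth - 1)
        hist[i][j] += 1
    return hist
```

The resulting 2D bin counts can then be mapped onto a sphere glyph for display; equal-area binning would be a natural refinement.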
Construction and Building Materials as well as the corresponding material systems are typically referred to as well-known materials, which have been used for ages in similar applications. Despite their seeming simplicity, construction materials often contain many different components featuring complex shapes. For example, steel fiber reinforced concrete allows generating tailored material systems for high-tech applications such as the construction of tunnels. An investigation of steel fibers in such sprayed concrete was presented by Fritz et al. [FHG*09]. In contrast to the material systems described before, these materials are typically crafted for direct application in order to fulfill a specific purpose, without further processing or assembly. The application areas of construction and building materials and material systems are widespread and range from cultural heritage applications (Li et al. [LZS16]), civil engineering (Kim et al. [KHK*12] and La et al. [LGKN15]), private buildings and construction (Ham et al. [HGF14]) to electronic devices (Protopopov et al. [PD00]), printed circuit boards (PCBs) (Cicchiani et al. [CHSW08]), or even micro-electromechanical systems (Fisher et al. [FH08]).
Biological and Biomedical Materials are used as reinforcement components for new composites. For example, plant-based fibers, such as the wood-based fibers used by Tran et al. [TDD*12] or the hemp fibers utilized by Placet et al. [PMF*14], are currently being tested for various applications as promising reinforcements. In contrast, Hu et al. [HRN*03] explore biological materials as such. The authors propose an approach for the segmentation of the meiotic spindle within a mouse egg from Confocal Laser Scanning Microscope (CLSM) volume data. Their method is based on expectation and standard deviation values of voxels, extracted using a Weibull probabilistic framework for segmentation. A further application field of biological and biomedical materials deals with copying the structure of biological materials as well as their characteristics to inorganic materials.
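As a rough illustration of how a Weibull probabilistic framework can drive voxel classification, the sketch below labels voxels by comparing Weibull likelihoods under a foreground and a background intensity model. The exact method of Hu et al. is more involved, and the shape/scale parameters here are purely hypothetical:

```python
import math

def weibull_pdf(x, shape, scale):
    """Weibull probability density with shape k and scale lambda."""
    if x <= 0:
        return 0.0
    z = x / scale
    return (shape / scale) * z ** (shape - 1) * math.exp(-z ** shape)

def classify_voxels(intensities, fg_params, bg_params):
    """Label each voxel 1 (feature) or 0 (background) by comparing the
    Weibull likelihoods of two hypothetical intensity models."""
    labels = []
    for v in intensities:
        fg = weibull_pdf(v, *fg_params)
        bg = weibull_pdf(v, *bg_params)
        labels.append(1 if fg > bg else 0)
    return labels
```

In practice the model parameters would be estimated from training regions or local neighborhoods rather than fixed a priori.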

Definition of Visual Computing
Visual computing integrates computer science disciplines dealing with the acquisition, the analysis, and the synthesis of (visual) data using computing resources, in applications such as industrial quality control, medical image and data analysis, robotics, multimedia systems, computer games, etc. Among others, visual computing thus covers aspects of image processing, visualization, computer graphics, computer vision, virtual and augmented reality, pattern recognition, machine learning, as well as human-computer interaction. Furthermore, visual computing is also strongly related to other science domains such as mathematics, physics, and the cognitive sciences. A core aspect of visual computing is found in visualization and visual analysis. As defined in Munzner's book "Visualization Analysis and Design" [Mun14], computer-based visualization systems provide visual representations of datasets, which are designed to help people carry out their tasks more effectively. The book further indicates that visualization is especially beneficial when there is a need to augment human capabilities rather than replace people with computational decision-making methods. This characteristic of augmenting human capabilities perfectly covers the visual computing supported materials sciences tasks discussed in section 4.4.

Tasks of Visual Computing Supported Materials Sciences
The analysis and simulation of the material systems discussed in section 4.2 pose very different tasks and challenges for visual computing. In the following section we shed light on the respective tasks and consider them in more detail. Generally, the main challenges revolve around two large topics: the simulation of material systems and the analysis of material systems.
Simulation of Material Systems is highly important for designing, manufacturing, processing, and evaluating novel material systems in their application areas. Tailored visualization and visual analysis are required to solve the domain's research questions. For the simulation of material systems we identified the following three major tasks for visual computing, aside from the actual computation of the simulation: • Exploration and Visualization of Finite Element (FE) Simulations is an important task facilitated by visual computing to explore the system of interest in its target application. Aside from materials sciences, finite element models and finite element simulations are frequently used in computer graphics, robotics, special effects, or virtual reality [LB15]. Typically, FE data is visualized using surface-based approaches. This is mainly due to the nature of the non-uniform grids typically used in FE simulations, as well as the surface-based representations used for modeling the simulated domain. More flexibility is found in volumetric visualization techniques, which support the interactive exploration of the complete data. To render these datasets using ray-casting, uniformly distributed sample points are required along each viewing ray. Bock et al. introduced an approach [BSL*12] which transforms the uniformly distributed sample points into the material space of each cell using a coherency-based method, decoupling expensive world-to-material space transformations from the rendering process. Even for higher-order FE models the authors achieved frame rates which allow for interactive data exploration. Also earlier approaches, as presented by Laevsky et al. [LTM01], focused on the exploration of FE simulations using interactive steering of the simulation and monitoring of the simulation's outputs. The application area of this work is found in glass being pressed in a mold, modeling glass as a Newtonian fluid.
The goal of this approach, aside from understanding the pressing process, is to obtain optimal shapes of new molds by a detailed evaluation of the simulation results and a feedback loop of the generated results back into the FE simulation process. • Analysis and Visualization of Flow Simulations in porous materials builds on topological descriptors of the underlying material of interest. In contrast to a local exploration of flow trajectories interacting with the solid structures, the authors of one such framework target extracting the pore network for estimating the fluid flow using topological descriptors. The porosity is used to finally compute the permeability of the porous media. With their framework the authors facilitate characterizing synthetic material phantoms composed of packed spheres as well as 3D high-resolution X-ray computed tomography data regarding porosity. Patkar and Chaudhuri [PC13] focus on porous solid media interacting with fluids, such as cloth getting wet from a fluid jet or porous stones absorbing water. They make use of smoothed particle hydrodynamics for modeling the fluid dynamics within the specimen. In a three-stage approach, they first model the fluid absorption of the object, then the transport of fluid inside, and finally the dripping of extra fluid from oversaturated parts. They visualize the generated results in animations of the 3D domain.
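The step from porosity to permeability mentioned above is commonly carried out with an analytic model. The Kozeny-Carman relation below is one widespread choice for granular media, shown here only as a hedged sketch: the surveyed framework may use a different estimator, and the characteristic grain diameter is an assumed input.

```python
def kozeny_carman_permeability(porosity, grain_diameter):
    """Estimate permeability k [m^2] of a granular porous medium from
    porosity phi and a characteristic grain diameter d [m], using the
    classic Kozeny-Carman relation:
        k = phi^3 * d^2 / (180 * (1 - phi)^2)
    """
    if not 0.0 < porosity < 1.0:
        raise ValueError("porosity must lie in (0, 1)")
    return (porosity ** 3 * grain_diameter ** 2) / (180.0 * (1.0 - porosity) ** 2)
```

As expected from the formula, permeability grows steeply with porosity; for pore networks extracted from XCT data, topology-aware estimators typically replace the single grain-diameter parameter.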
• Analysis and Visualization of Molecular Dynamics Simulations is especially important for understanding and conditioning novel materials. Molecular dynamics simulation is a widely used technique to analyze material characteristics as well as structural changes under external forces. Gyulassy et al.
[GDN*07] present an application which uses molecular dynamics to simulate particle impacts at various scales: In the nanometer range, particles of several thousands of atoms may be used to smooth or condition surfaces. In micro and macroscopic scales, this kind of simulation is used to mimic impact damage as well as to evaluate potential materials to compensate for it. To understand the behavior and the interactions of the underlying materials, the authors carried out molecular dynamics simulations which model the impact of solid grain material on low-density foam. The domain specialists were interested in how the impact craters form, for two reasons: (1) how the structure around a crater changes, and (2) how quantitative values such as the overall porosity are affected. Gyulassy et al. introduced methods for the construction of distance fields which are topologically clean, in order to extract, characterize, and visualize relevant filament structures in the porous material. In a recent work, Gyulassy et al.
[GKLW16] employed and extended their findings to simulate the synthesis and ion diffusion of battery materials using large-scale molecular dynamics simulations. Their technique shows how visual analysis is used to support domain specialists in the investigation and selection of novel anode materials for future batteries. Other approaches address the sheer data sizes generated by large-scale molecular dynamics simulations. Frey et al. [FSG*11] proposed a technique based on loose capacity-constrained Voronoi diagrams, which allows replacing the huge number of simulated particles by a small set of representatives. These representatives are required to capture the main characteristics of the particle density and to exhibit coherency over time, for creating visualizations which reflect both the particle distribution and the geometric structure of the original data. The authors demonstrate their method on molecular simulation datasets of laser ablation from solid aluminum, of compressed argon surrounded by vacuum, as well as of colliding liquid methane and ethane droplets.
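To give a flavor of how a large particle set can be condensed into a few density-capturing representatives, here is a deliberately simplified stand-in: plain Lloyd-style k-means rather than the loose capacity-constrained Voronoi diagrams of Frey et al., and without their temporal-coherency constraint.

```python
def kmeans_representatives(points, k, iters=20):
    """Replace a large particle set by k representative points
    (a much-simplified stand-in for loose capacity-constrained
    Voronoi diagrams): plain Lloyd iterations, with centers seeded
    deterministically from the first k input points."""
    centers = [list(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each particle to its nearest representative
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        for j, members in enumerate(clusters):
            if members:  # move representative to the cluster centroid
                centers[j] = [sum(c) / len(members) for c in zip(*members)]
    return centers
```

For two well-separated particle clouds, the two returned representatives settle on the cloud centroids; a capacity constraint would additionally force each representative to stand for an equal share of particles.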
Material Analysis denotes the second main area in materials science which makes extensive use of visual computing and thus profits from recent techniques. For this area we identified the following tasks, in which visual computing is of core relevance: • Feature Extraction and Quantification is the most important task in destructive and non-destructive testing. The primary goal, aside from the plain extraction and identification of individual features, is found in the investigation of their distribution throughout the specimen as well as their individual properties. As typically the number of features is high and the characteristics of each feature may exceed 25 or more properties, visual analysis techniques are required to explore the generated data. A lot of approaches facilitating feature extraction and quantification make use of segmentation techniques in order to extract the features of interest in 3D, as presented by Hu et al. [HRN*03]. Other approaches analyze the effect of fiber orientation, stress state, and notch radius on the impact properties of glass fiber reinforced polymer samples. Also, Gusenbauer et al. [GRKK14] extract and quantify non-metallic inclusions in steel in order to investigate their correlation with the breakage behavior in fatigue tests. • Defect and Damage Analysis is a further core task: One line of work images samples using phase contrast X-ray computed tomography (XCT) and investigates alkali-silica reactions regarding the progressive dissolution of reactive aggregate together with a deposition of gel, during and after the reaction. During the aging process the specimens deform and develop microcracks in the structure. The results were compared to traditional 2D techniques such as optical microscopy and SEM. The main scope of interest was to find a method which allows the rendering of microstructural features in the specimen. The results are presented in 2D slice images as well as 3D renderings of the segmented and color-coded phases.
Further application areas of damage analysis are found in the manufacturing of electronic devices or the construction of buildings, which suffer from similar damage types. For example, faults within printed circuit boards (PCBs) and their components are investigated by Cicchiani et al. [CHSW08]. The main goal of their work was to obtain a positive visual confirmation of whether a specific failure occurred in the PCB. Mayer et al.
[MLK * 08] aim to characterize and classify indications of defects in concrete buildings and even go a step further: They target the prevention of faults by estimating cracks as well as the crack growth of concrete materials. Other types of damage are found in corrosion and delamination, which are topics of research by La et al. [LGKN15] and Yashiro et al. [YTT07]. • Dimensional Measurements, in contrast to the previous tasks, centers around the traceable dimensioning and tolerancing of measurement features such as straightness, evenness, cylindricity, etc., which is essential for industrial quality control. The practitioners need to know if a specimen fulfills the required internal and external standards. Using conventional tactile or optical coordinate measurement machines, mainly plots of points, lines or surfaces are evaluated, which reflect the applied measurement strategy together with actual and nominal values as well as tolerance bands. In recent years much more information became available when X-ray computed tomography (XCT) was introduced for industrial metrology purposes. Using XCT, the uncertainty of the generated data may be estimated at every spatial position of the data, as the surfaces and interfaces characterizing a measurement feature are only implicitly given in the scanned attenuation coefficients. Information on the quality of the transition from air to material or from material to material is prevalent in the XCT data, which may be affected by artifacts and other irregularities. While earlier approaches aim to reduce and remove artifacts, as presented by Heinzl et al. [HKG07], more recent techniques evaluate the datasets' uncertainty. For example, Amirkhanov et al.
[AHK * 13] integrate information on the uncertainty of a measurement feature as context information in commonly used visual metaphors of dimensional measurements, which is strongly related to the next task: • Uncertainty Quantification and Visualization builds upon the extracted (measurement) features and targets the determination of the underlying uncertainty budgets: Data acquisition and evaluation typically introduce various types of uncertainty stemming from the environmental conditions, the specimen, the measurement system, the analysis pipeline and other influences. The domain specialists need to quantify and visualize the uncertainty of their measurements in order to avoid wrong assumptions being made about the data, as stated by Amirkhanov et al. [AHK * 13]. As shown by Schonfeld et al. [SB15], a lot of research effort has been put into finding and verifying local quality measures for standard phantoms. An example of such a measure is the local quality value as presented by Flessner et al. [FMHH15], analyzing the volume data in the proximity of an extracted surface/interface point of a measurement feature. The task of uncertainty quantification and analysis was identified as a separate task because of its high relevance in both the application and the visualization domain. • Optimization of (parts of) a workflow is another task frequently found in visualization and visual analysis for materials science. Typically, methods in this area put their focus on optimizing a specific aspect of the simulation, the data acquisition or the data evaluation pipeline. Some targets in this area are, for example, to increase the signal-to-noise ratio, to optimize the measurement or testing parameters in order to utilize them for new materials, or simply to improve the visual representation of the underlying data.
In terms of optimizing the production process of materials itself, this task is also strongly related to the exploration and visualization of finite element simulations of a material or component as presented by Laevsky et al. [LTM01]. • Risk Analysis marks the final task we identified. For this task, mainly cultural heritage applications are of interest, such as analyzing the risk cultural heritage sites are facing because of environmental or other influences. Qian et al.
[QSCZ16] propose a visual analysis technique for the assessment of the risk of deterioration, which focuses on matching the major needs of the domain with the objectives of deterioration assessment. They encode the risks of deterioration in spatial views, integrating techniques of visual analytics for analyzing the risk level. A similar visual analysis method is provided by Li et al. [LZS16], investigating the deterioration risk of mural paintings using site-level visualization based on a circle packing layout, a chord diagram tool and a heat map encoding the overall disruption risk.
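The actual/nominal comparison with tolerance bands described under the Dimensional Measurements task can be sketched as follows; the function and parameter names are hypothetical and the symmetric tolerance band is an assumption:

```python
import numpy as np

def tolerance_check(actual, nominal, tol):
    """Compare measured against nominal values using a symmetric
    tolerance band, as in conventional dimensional measurement plots.
    Returns the per-point deviations and a pass/fail verdict.
    (Illustrative sketch; real standards define richer tolerances.)"""
    deviation = np.asarray(actual, dtype=float) - np.asarray(nominal, dtype=float)
    within = np.abs(deviation) <= tol
    return deviation, bool(within.all())

# e.g. three diameter measurements against a nominal of 10.0 mm
# with a tolerance band of +/- 0.05 mm
dev, ok = tolerance_check([10.02, 9.98, 10.04], [10.0, 10.0, 10.0], 0.05)
```

In practice the deviations would feed the familiar visual metaphors (actual/nominal plots with tolerance bands) rather than a single boolean.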
In the next section we review the different methods in materials science used to solve these challenges. While the FE, CFD and other simulations as well as the corresponding models for the material systems of interest are generated similarly as for applications outside materials science, and as simulations similarly return fields of scalar, vector and tensor data, we do not consider the simulation techniques themselves any further here but refer the reader to indicative papers by Laevsky et al. [LTM01] for FE simulation, Ushizima et al. [UMW * 12] for CFD simulation, and Gyulassy et al. [GKLW16] for molecular dynamics simulation, as well as to the tasks identified in section 4.4. In the next section we put our focus on material analysis, the respective testing techniques as well as the generated data, which are very different from each other regarding their principles and capabilities. The characteristics of the testing techniques mainly determine which visualization technique is used to solve the tasks as described.

Figure 3: (b) XCT slice image of the specimen in Figure 3a: XCT did not detect the thinnest delaminations due to resolution constraints. Despite this fact, such slice images or 3D reconstructions are used to find all kinds of defects, e.g., cracks, voids, inclusions, etc., in the specimen. (c) Ultrasonic testing image (C-scan) of the specimen in Figure 3a: Delamination defects are clearly visible, as the back-reflection intensity of the C-scan is reduced by any defect at any depth. (d) Thermography image visualizing temperatures obtained with transmissive, pulsed thermography of the specimen in Figure 3a: All five defects have been detected. Blurring in thermography images is a direct consequence of the heat of the delamination defects spreading within the specimen.

Testing Techniques using Visual Computing supported Materials Analysis
The computationally supported study of materials is based on digital data extracted from the materials of interest. This process of data generation is also referred to as materials testing, which is subdivided into two major categories: destructive (DT) and non-destructive testing (NDT). Destructive methods, as the name indicates, destroy or modify the specimen, its structure or features of interest during the testing procedure in order to gain insight into the material as well as to extract its properties. For a number of reasons, such as economic, functional or manufacturing issues, especially non-destructive techniques have become increasingly important in materials science. While the larger body of work in materials testing is currently found in the domain of non-destructive testing, destructive testing techniques are also prevalent in materials science. The techniques explained in the following sections are highly dependent on visualization and visual analysis for data evaluation, which is the reason why we put our focus on these methods. Besides the techniques themselves we give an overview of the data generated and derived. Most of these techniques are explained on a sample introduced by Amenabar et al. [AMLA * 11], which is shown in Figure 3a. In their work they performed a comparison between active thermography, ultrasonic testing, XCT and shearography for the inspection of delamination in wind turbine blades, which gives an intuitive example for each method regarding the generated data as well as typically used visualization techniques.
3D X-ray Computed Tomography (XCT) originated from medical applications and was adopted for NDT because of its ability to describe both inner and outer structures in detail, in a short time and without destroying the sample. The principle of XCT is based on three main components: the X-ray source, the detector and the rotary plate on which the sample is placed. For an XCT scan, a series of 2D penetration images is taken of the sample, recording individual attenuation images from different angles, typically along a 360 degree circular trajectory. Visual computing was a key factor for the success of XCT in industrial applications. By means of reconstruction algorithms, 3D volumetric images containing scalars encoding the spatial attenuation are computed and rendered from the recorded series of 2D penetration images. Using grating-based phase contrast XCT, even multichannel data is generated, integrating scalars on attenuation, phase contrast and darkfield information at each spatial position of the reconstructed area. Slicing techniques facilitate detailed analyses of the generated data and rendering algorithms allow visualizing the 3D datasets. While most manufacturers of industrial XCT systems deliver their devices together with standard reconstruction and simple visualization tools, the fields of computational reconstruction as well as visual analysis of industrial XCT data are highly active areas of research. The plain 3D attenuation XCT data is used as a basis to derive further information such as isolines and surfaces, complex data regarding features as well as their quantified properties, or vector and tensor data in stress and strain fields; the dimensionality may even be increased to also encode time in the analysis of dynamic processes. Starting with the simulation of XCT scans for novel components and materials regarding the setup and the optimization of the scanning protocol as presented by Reiter et al.
[RHS * 11], or the optimization of XCT scan positions as presented by Amirkhanov et al. [AHRG10], visual computing already supports advanced scan planning. For reducing artifacts due to strongly changing penetration lengths or attenuation coefficients during an XCT scan, Amirkhanov et al.
[AHR * 11] introduced a method which accounts for the different materials present, reconstructs them individually, and fuses them into the final dataset. The major part of the work on visual computing in XCT centers around visualization and visual analysis for non-destructive testing or metrology applications. Regarding non-destructive testing, for example, the aeronautics industry requires tools for the robust extraction of pores in fiber reinforced composite structures as presented by Heinzl et al. [HWR * 14]. For metrology purposes, surface models are typically extracted from the XCT data in order to compare the geometry of a specimen with its CAD model. Earlier approaches as presented by Heinzl et al. [HKKG06] focus on the extraction and comparison of surface models to CAD. More recent approaches, e.g., as presented by Flessner et al. [FMHH14], evaluate and visualize the quality of the determined surface models.

Ultrasonic Testing (UT) utilizes sound waves of short wavelength and high frequency to test the wave propagation within the specimens under investigation. The wave propagation through the specimen varies depending on the material as well as through interfaces and gaps in the material. Often, the specimen is immersed in water or other liquids to improve the coupling of the ultrasonic wave with the specimen, especially when the specimen's surface is rough, and to provide a uniform contact with the specimen. In the field of non-destructive testing, 2D ultrasonic techniques are state of the art, generating 2D scalar data representations encoding amplitude, brightness and other modes of the propagation of the ultrasonic waves in the specimen. While XCT for the acquisition of 3D data has been quickly adopted in non-destructive testing, 3D ultrasonic testing shows less momentum in materials science and only a few papers exist which utilize ultrasonic wave propagation data for 3D visualization.
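The penetration images recorded during an XCT scan follow the Beer-Lambert attenuation law, which can be sketched for a single detector row as follows; this is an illustrative forward model only, and the names are assumptions:

```python
import numpy as np

def attenuation_projection(mu, i0=1.0, step=1.0):
    """Beer-Lambert forward model of one XCT detector row:
    the recorded intensity of each ray is I = I0 * exp(-integral of mu),
    approximated here by a sum over samples of the attenuation
    coefficient mu along the ray. (Illustrative sketch.)"""
    line_integral = mu.sum(axis=1) * step
    return i0 * np.exp(-line_integral)

# two rays: one through air (mu = 0), one through material
mu = np.array([[0.0, 0.0, 0.0],
               [0.2, 0.5, 0.2]])
intensity = attenuation_projection(mu)
# the material ray is attenuated more strongly than the air ray
```

Reconstruction algorithms invert exactly this model: from many such projections at different angles they recover the spatial distribution of mu as the scalar volume discussed above.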
For 3D ultrasonic testing, a matrix array probe is used to scan specimens volumetrically using rotational or fan scan patterns. Sun et al. [SGW * 14] indicated that visual analysis of 3D UT data dramatically speeds up the data analysis, as the practitioners do not have to investigate the two-dimensional images frame by frame. An overview of how to generate 3D data from phased array ultrasonic testing techniques was presented by Kitazawa et al. [KKB * 09]. Here, the C-scan of the back-reflected intensity was taken (a specific UT mode besides the A(amplitude)-mode, the B(brightness)-mode and others, which is formed in a plane normal to a B-mode image), as it nicely reveals the defects in the specimen. Thermography (IR) denotes a family of different methods which utilize thermal energy to digitize a specimen for gaining non-destructive testing data. In terms of the measurement principle, for active thermography the specimen is heated either from the outside through a heat lamp or a flash, or heat is generated inside the specimen through mechanical deformation. In contrast, in passive thermography no heat is induced and the specimen is scanned in its current condition. Passive thermography is, e.g., used to test a specimen under stress in a production environment. To analyze the thermal response, typically the infrared emission is used (infrared thermography). This infrared emission is usually encoded in 2D scalar grey scale images or using color maps encoding the thermal response. In 3D, the infrared emission data may also be visualized on the surface of the specimen as well as in subsurface regions as presented by Maldague and Marinetti [MM96]. An example image of an investigation using thermography is shown in Figure 3d as presented by Amenabar et al. [AMLA * 11]: The thermography of the specimen in Figure 3a was taken using transmissive, pulsed thermography and all five different defects could be detected.
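The C-scan mode mentioned above can be approximated from a 3D UT amplitude volume as a depth-gated maximum projection. This is a common construction and a simplification; the back-reflection intensity image used by Amenabar et al. differs in detail, and the names here are assumptions:

```python
import numpy as np

def c_scan(volume, gate=None):
    """Derive a C-scan image from a 3D UT amplitude volume laid out as
    (x, y, depth): the maximum amplitude within an optional depth gate
    is projected onto the scan plane. (Simplified illustration.)"""
    lo, hi = gate if gate else (0, volume.shape[2])
    return volume[:, :, lo:hi].max(axis=2)

# a single reflector at depth 5 shows up at lateral position (1, 2)
vol = np.zeros((4, 4, 10))
vol[1, 2, 5] = 0.8
img = c_scan(vol, gate=(3, 8))
```

Gating the projection to a depth range is what lets practitioners isolate reflections from a layer of interest instead of the whole thickness.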
As outlined by the authors, no temperature differences between the different defect types could be observed that would give clues regarding their thickness and depth. Therefore, the different depths and thicknesses of the defects could not be distinguished.
Optical Coherence Tomography (OCT) utilizes light to provide cross-sectional tomographic images of the microstructures of materials under test. Just as echography does with sound, an OCT system measures backscattered and backreflected light to distinguish between different layers and structures in the specimen. The utilization of light improves the resolution compared to echography by 10-100 times but reduces the penetration depth down to 1/10. OCT's main application area is found in biomedical imaging. Despite the aforementioned limitations, and because it is a real-time, high-resolution and non-destructive technique, OCT has also been utilized in materials science in recent years. Currently, there are two main sub-techniques: Time-domain OCT (TDOCT) and Fourier-domain OCT (FDOCT). For material investigation FDOCT is the more suitable one due to its improved detection capabilities as stated by Placet et al. [PMF * 14]. For rendering OCT images, typically color-coded 2D images are used which encode the strength of the backscattered signal. An example for OCT data is provided by Duncan et al. [DBR98] in Figure 4a, where subsurface samples were taken in intervals of 10 µm down to a depth of 140 µm and combined to produce a 3D visualization. Scanning Electron Microscopy (SEM) uses an electron beam to scan the surface of a specimen in a specific pattern in order to image the signals of the electrons hitting the specimen's surface. SEM generates 2D scalar data, which are often colored to focus on features of interest. The data generated in the different types of SEMs may encode the signals of secondary electrons, reflected or back-scattered electrons, photons of characteristic X-rays or light (cathodoluminescence), absorbed current (specimen current) and transmitted electrons.
SEM thus also facilitates a characterization regarding the chemical composition of the analyzed materials besides the structural analysis, so the surface morphology and the chemical properties of a specimen can be derived in detail. To further increase the resolution of the images and therefore to gain a better insight into the specimen of interest, transmission electron microscopy (TEM) is increasingly used as stated by Bender et al. [BDM * 10]. Sanderson et al.
[SLKL02] use SEM to investigate 2D plus time data on the fouling and cleaning process of membranes during filtration. They further correlate ultrasonic time domain reflectometry analysis with the fouling layer of a membrane. Figure 4b shows images gathered with SEM at different scales and different states of fouling. SEM is also used together with various material removal techniques such as micro-grinding, chemical etching, or focused ion beam (FIB) milling, in order to destructively investigate a material of interest in 3D. For example, in combination with chemical etching, SEM is utilized to generate complete 3D images of the specimen as described by Lanzagorta et al. [LKS * 98]. Using these material removal techniques the sample is destroyed, with the primary target to remove thin layers of the material and image the revealed cross section using SEM again in order to finally obtain a 3D representation of the specimen. In addition, photogrammetric methods are used to generate 3D representations of the surface of the specimen using a low number of images which show the specimen slightly tilted.
Terahertz Testing (THz), aside from its prominent applications in airport security, is less known for its use in materials science. THz in 3D mainly covers data acquisition using continuous-wave terahertz computed tomography. This technique has emerged because of its useful property of facilitating the imaging of transparent objects. For several techniques such as 3D-XCT it is difficult to test samples with low attenuation inner structures, e.g., the inner structure of transparent materials such as plastics, wood or paper, due to an increased noise level preventing robust analyses of the features of interest. Terahertz testing overcomes these limitations because of the low absorption and large penetration depths of the transmitted THz waves, which allows increasing the local contrast. This highly desirable property opens new ways for visual material analysis as well as new challenges for visualization as stated by Balacey et al. [BRP * 16]. An example provided by Jansen et al. [JWP * 10] evaluates drilled holes in a plastic sample (see Figure 4c). Regarding data, THz testing generates spectral data as well as scalar data, e.g., encoding intervals in the frequency domain.
Apart from the aforementioned methods for the analysis of material systems, many other destructive and non-destructive techniques exist which are actively used in materials science. For example, hyperspectral imaging is used for inspecting artifacts from geology and cultural heritage. To discuss all remaining techniques in detail would go beyond the scope of this paper. As the contributions using other techniques play a minor role in materials science, are on the border to other research areas such as geology, or currently make very limited or no use of visual computing supported materials study according to our findings, these techniques are considered out of scope for this paper and therefore not discussed any further.

Data Types
An important categorization for visual computing in materials science is based on the characteristics of the input, output and derived data. For this categorization we adapted the types defined by Shneiderman [Shn96] to our domain. A similar categorization can also be found in Munzner's book "Visualization Analysis and Design" [Mun14]. As 1-dimensional data is typically only used for extracting quantitative derived data, e.g., the overall porosity or properties of a feature in the volume, 1D data is not extensively used in visual computing and thus left out of the following consideration.
2-dimensional data types are represented by 2D spatial images of testing results in many papers, e.g., in Malzbender et al. [MSM13], who use 2D images of ceramic materials to investigate crack propagation, Galvin et al. [GGB01], who image and investigate surface strain of glass surfaces at a nanometer scale, or Tanaka et al. [TKH13] who analyze the behavior of hydrogen diffusion and desorption in duplex stainless steel and Fe-30% Ni alloys with visualizations of grain boundary diffusion. 2D data representations are also used as simulation results, e.g., encoding pressure in a glass pressing simulation [LTM01].
3-dimensional data is one of the most frequently used data types in the visual computing supported materials science literature. The reason is found in the 3D nature of the specimens as well as their features. 3D data types are often derived and reconstructed from many 2D images. Placet et al. [PMF * 14], for example, applied multiple methods such as optical coherence tomography and focused ion beam milling to reconstruct 3D data. Focused ion beam milling together with scanning electron microscopy was used by Bender et al. [BDM * 10] to investigate milling strategies for the structural characterization of through silicon vias. In the work of Weber et al. [WRR * 11], 3D reconstructions of synchrotron-based X-ray tomography data were used to investigate micropowder injection molding in order to optimize the molding process for achieving high dimensional accuracy. For ultrasonic testing, Kitazawa et al. [KKB * 09] proposed a method to gather 3D data from the specimen which is based on phased array ultrasonic testing.
Temporal data allows insight into ongoing processes in materials science. The time component may be very discrete, as is the case in the work of Patterson et al. [PCH * 15], where synchrotron X-ray tomographic imaging is used during the compression of polymeric materials. Their goal is to analyze the mechanical properties of materials, which are important for predicting lifetime performance, damage pathways and stress recovery. Due to the required relaxation phase of the material of up to 10 minutes, which is necessary as the residual motion of the material would otherwise blur the images, the data over time is highly discrete. In contrast to this example, the data may also be continuous, which allows an interactive real-time visualization of a simulation as presented by Laevsky et al. [LTM01]. They use a tool that operates on data streams which are directly visualized. These data streams are fed into a fluid simulation to optimize the process of glass pressing. The data is of course still discrete as it is processed by a computer, but the notion of continuity comes from the fact that it can be visualized in real time.
Multi-dimensional data in materials science often arises through the investigation of material properties from data generated using more than one testing method or by deriving additional data. Multi-dimensional data is used both in material analysis and in simulations, where many different properties are computed. Ota et al. [OKTM05] use 3D-XCT to measure flows induced by shock waves in tubes for computational fluid dynamics. They extract quantities such as velocity, density, pressure and temperature in order to investigate the flow in the underlying materials. An example for derived data can be found in the work of Bhimavarapu et al. [BMBN10], where stress, strain, strain rate and temperature are obtained, combined and derived by various testing techniques for the investigation of alloys.
Trees and tree structures are at the moment rarely used for solving problems in materials science applications. Whereas simulation tools frequently use trees to set up the scenery of the spatial domain, we encountered only a single usage of trees in the area of analyzing segmentation ensembles. Fröhler et al. [FMH16] use trees as a basis for clustering and navigating through similar segmentation masks. The tree in their application helps to structure the segmented datasets and to stepwise explore deeper layers of similar segmentation masks.
Networks inside materials are often of interest as a means of abstraction for complex linked features (e.g., open pore networks) or as renderings in the spatial domain (e.g., closed foam networks). Due to the intrinsic nature of the specimens, these networks influence the specimen's mechanical characteristics. Networks are typically not explicitly given but need to be derived via visual computing methods. Ushizima et al. [UMW * 12] use synchrotron X-ray computed microtomography as well as geometric and topological descriptors to derive pore networks and pore microstructures for estimating the permeability of porous media. In their work pore networks are represented by a graph, in which edges define the possible flow inside the material. These edges are assigned a weight and augmented with their connectivity to the top of the stack, so that no flow is present in dead-end edges. Rey et al. [RMF07] present a new node-nested Galerkin multigrid method for metal forging simulations. This method operates on 3D meshes, which are represented as networks, where nodes define the points in space and are assigned properties such as pressure or velocity. The edges of the graph define how the points of the network are spatially connected. The spatial relationship of the nodes is of interest, as neighboring nodes interact with each other and overall material properties can be defined.
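The pore-network representation described for the work of Ushizima et al. can be sketched as a weighted graph in which edges carry flow only if they connect to the top of the stack. The crisp reachability criterion used here is a simplifying assumption, not their exact construction:

```python
from collections import deque

def flow_edges(edges, top_nodes):
    """Pore-network sketch: pores are nodes, weighted edges are
    possible flow paths. An edge is kept only if both endpoints are
    connected to the top of the stack, so isolated dead-end branches
    carry no flow. (Simplified criterion, an assumption.)"""
    adj = {}
    for a, b, w in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    # breadth-first search from all top nodes
    seen = set(top_nodes)
    queue = deque(top_nodes)
    while queue:
        n = queue.popleft()
        for m in adj.get(n, []):
            if m not in seen:
                seen.add(m)
                queue.append(m)
    return [(a, b, w) for a, b, w in edges if a in seen and b in seen]

# top pore 0 connects to pores 1 and 2; pores 3-4 form a dead branch
net = [(0, 1, 0.5), (1, 2, 0.3), (3, 4, 0.9)]
kept = flow_edges(net, [0])
```

The edge weights would model the flow capacity of each pore throat when estimating permeability from such a graph.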

Visual Representations in Visual Computing supported Materials Analysis
In this section, we describe our findings concerning visualization techniques and visual metaphors used in the domain of visual computing for materials science. With regard to the tasks as discussed in section 4.4, as well as the discussion of the data generated and derived, we subdivide the visualization techniques and visual metaphors into the following categories with respect to the domain their respective analysis task targets: Visual Representations for Spatial Data are inherently bound to the nature of the spatial domain the specimens of interest exist in. Many such visualization and visual analysis techniques are thus found in the areas of simulating material systems as well as material analysis. Gyulassy et al. [GKLW16] use spatial visualizations of large-scale molecular dynamics simulations for evaluating simulated graphite nanosphere battery materials. The authors study the diffusion characteristics of (lithium) ions or other diffusers by employing a topological analysis of the distance function of carbon rings, and construct explicit triangulations to represent the carbon rings in the material. The carbon rings are classified as blocking or non-blocking and visualized using color coding (see Figure 5). The blue patches are part of rings with a valence of six or less, which block diffusion, while all the other patches belong to rings with a valence between seven and ten, permitting diffusion. These patches are clustered along the exterior or along the principal axes. This allows focusing on the defects and deriving scientific findings, such as that defect rings occur in the neighbourhood of other defect rings, as there are many large components of defect patches. Further applications of simulation in combination with spatial data are found in the estimation of the permeability of porous media as presented by Ushizima et al. [UMW * 12], who provided augmented topological descriptors for analyzing pore networks.
They presented 3D visualization techniques for analyzing pockets and extracting flow graphs in simulated and real world datasets of materials science. Malik et al. [MHG09] utilize techniques adapted from 2D graphical representations such as box plots and transfer them to the 3D domain, e.g., as overlays over a spatial visualization or in a magic lens showing regions of interest. This concept yields the visualization of quantitative data to compare datasets, in their case a CAD model of the specimen with a volumetric dataset scanned by 3D-XCT. Overlays of box plots can be used to compare datasets visually and to provide a fast overview. The main purpose of this technique is thus found in variance comparison as well as in monitoring production tolerances. Such tools are not only used in comparative visualization, but also for the investigation of the deterioration risk of mural paintings. Li et al. [LZS16] use an overlay technique based on glyphs for risk analysis of ancient frescoes. The glyphs are used to visualize the risk type, the risk area size, and the position and orientation of the risk area. The generated panorama view of the object, for which the corresponding risks should be determined, helps the domain specialists to semi-automatically get a comprehensive view of the object. Furthermore, Wu et al. [WT10] use overlays to visualize the simulated curvature flow (planar geometric heat flow) in materials science applications on a triangulated surface of a planar geometric object. The specific application areas mentioned in this work are found in physical simulations and materials science, where the flow plays an important role for the topology-adaptive front propagation with a curvature-dependent speed. As seen in the previous approaches, additional information in spatial visualizations is often color encoded on the features of interest. Allerstorfer et al.
[AHKG10] use transfer function based color coding and opacity mapping to visualize the uncertainty of surfaces and interfaces of materials scanned with 3D-XCT. More uncertain regions are rendered more transparently. They also apply color-coded isolines as an overlay to visualize uncertainty inside the specimen. Labeling in hyperspectral imagery to distinguish between different materials in 3D is presented by Kim et al. [KHK * 12]. Hyperspectral imaging is used for inspecting artifacts from geology and cultural heritage. The data is visualized in three dimensions, labeling different materials with different spectra and therefore different properties. Such an encoding easily allows the investigation of a material's distribution in a specimen. Function plots such as power dissipation maps and instability maps are used by Bhimavarapu et al. [BMBN10] for the Al 2024 alloy, in order to encode the efficiency and the instability in the 3D space of temperature, strain rate and strain in process maps. With this type of visualization the complete deformation behavior of a specimen can be visualized at once, as the relation between a varying strain and the efficiency or the instability is shown. Takatsubo et al. [TWM * 08] use laser ultrasonic testing for visualizing the propagation of ultrasonic waves on a 3D specimen in order to gain insight into defects as well as their position: If scattering of the waves is observed, some irregularity is contained in the specimen. An experienced practitioner gains the required information on the defect from the wave propagation images. To support the visual inspection, post-processing is often applied to these images. For example, a maximum amplitude image is generated to view slits in a specimen. Such techniques are especially useful if the testing method only supplies 2D images of a 3D object in order to form a 3D mental model of the specimen, e.g., 2D visualizations of surfaces which are not smooth or planar.
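The uncertainty-to-transparency mapping of Allerstorfer et al. can be sketched as a simple transfer function in which more uncertain samples get lower opacity; the linear ramps and names here are assumptions:

```python
import numpy as np

def uncertainty_rgba(scalars, uncertainty):
    """Transfer-function sketch: map normalized scalars to grey values
    and render more uncertain regions more transparently by setting
    opacity = 1 - uncertainty. (Linear ramps are an assumption; real
    transfer functions are usually user-editable curves.)"""
    s = np.clip(np.asarray(scalars, dtype=float), 0.0, 1.0)
    u = np.clip(np.asarray(uncertainty, dtype=float), 0.0, 1.0)
    # RGBA per sample: grey from the scalar, alpha from the uncertainty
    return np.stack([s, s, s, 1.0 - u], axis=-1)

# a certain and a highly uncertain sample
rgba = uncertainty_rgba([0.2, 0.9], [0.0, 0.8])
# the second, uncertain sample receives a low opacity
```

In a volume renderer this per-sample RGBA would feed the compositing step, so uncertain surface regions visually fade out.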
Visual Representations for Spatiotemporal Data target the exploration of the temporal domain in the context of spatial data. Spatiotemporal data analysis in materials science is attracting increasing attention, as the respective methods for data acquisition and processing allow for comprehensive studies yielding new, previously unattainable insights. In the domain of simulation, visualization techniques are used to show the change of material systems in use within their target application. Earlier approaches as presented by Laevsky et al. [LTM01] interactively visualize deformation simulations. The primary target is to visualize the deformation of glass over time under the effect of heat in order to optimize glass pressing. The visual metaphors used are 3D animations showing the changing form of the glass materials as well as 2D animations encoding the glass velocity magnitude and the pressure in a slice of interest. In the work of Frey et al. [FSG * 11] complex particle simulations are visualized by clustering in Voronoi diagrams. They reduce the visual clutter of a huge number of simulated particles, while keeping the information on particle distribution and geometric structure. To visualize the change of the particles over time, the authors use animations and visualize calculated path lines for small subsets of particles (see Figure 6). Besides lower memory consumption and less visual clutter, this approach improves rendering performance and allows for a more comprehensive view of the simulation. Spatiotemporal data visualization also integrates analyses of dynamic processes, i.e., ongoing processes, which are evaluated over time, under load, or under changing environmental conditions. In order to analyze these kinds of dynamic processes using (interrupted) in-situ testing, most approaches use spatiotemporal data (i.e., 2D or 3D spatial image data plus time as an additional dimension).
An approach exploring dynamic heating/cooling processes of metal composites as well as drying processes of wood was presented by Reh et al. [RAK * 15]. The authors introduced Fuzzy Tracking Graphs, which allow users to follow the creation, continuation, split, merge, and dissipation of features in a graph-based representation as they evolve during the ongoing process. In an Event Explorer the extracted features of each step of the (interrupted) in-situ test may be further investigated regarding the individual feature properties at a timestep of interest. Another example for 2D porosity investigation was developed by Tran et al. [TDD * 12]. In their work, the mechanical properties of wood fiber networks are investigated using loading tests and XCT imaging in order to visualize strain over time. Using heat maps visualizing the normalized strain field, they superimpose additional information on 2D slice images, showing the local thickness of pores in the wood network, in order to visually inspect the mechanical behavior and the correlations between the presence of wood, the porosity of wood, and strain. Further approaches generate lists of 3D spatial visualizations at key events. Key events may denote the introduction of cracks in the specimen, a maximum deformation, or simply given points in time, to gain insight into the process. Patterson et al. [PCH * 15] visualize and investigate cellular materials under strain with in-situ X-ray synchrotron tomography to obtain the mechanical properties of the material, which are important for predicting lifetime performance, damage pathways, and stress recovery. In order to explore the damage mechanisms in composite materials such as glass fiber reinforced polymers under increasing load, Amirkhanov et al. [AAS * 16] presented a tool for the analysis of 4D-XCT data.
Aside from the extraction and classification of the corresponding defects in each step of the in-situ test into matrix fractures, fiber/matrix debondings, fiber pull-outs, and fiber fractures, various exploration techniques are proposed to highlight the defect regions in the context of the XCT data. For example, Defect Density Maps serve as an overview of the defect distributions in 2D and 3D as well as for visualizing the final fracture region, which is not explicitly given but rather estimated from the extracted defects in the region. Another technique presented by Malik et al. [MHG10] allows studying the evolution of ongoing processes as well as ensembles of XCT data for performing parameter studies using comparative visualization techniques. Extending the idea of the checkerboard visualization, which renders the first dataset on the black tiles and the second dataset on the white tiles, the authors propose space-filling hexagons as the basis for their 2D multi-image view. The hexagons show a circle in the middle encoding the reference dataset. Around this circle the hexagon is divided into multiple sectors, showing all other datasets in comparison. For the comparison, the encoding may be switched from greyscale images (original data), to differences in grey values encoded in color, to the homogeneity of the considered region as a binary image. In microscopy it is often required to view the microstructure of a surface from slightly different perspectives. Since microscopes produce only orthographic images, several additional steps are needed to obtain perspective images. Levoy et al. [LNA * 06] propose a method to achieve this goal. By inserting an array of microlenses into the optical train of a conventional microscope, so that light fields can be captured, a perspective visualization may be computed in real-time. The gathered data can also be used to compute images with varying focus, so that the microstructure of the surface may be investigated in detail.
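The Defect Density Map idea described above — aggregating extracted defects into a spatial density overview — can be sketched roughly as follows. The normalized defect coordinates and the grid resolution are assumptions for illustration only; the actual maps of Amirkhanov et al. operate on full 4D-XCT data:

```python
import numpy as np

def defect_density_map(defect_positions, shape=(8, 8)):
    """Accumulate 3D defect centers into a 2D density map by
    projecting along the z-axis (binning x/y into a grid).
    Positions are assumed normalized to [0, 1)."""
    grid = np.zeros(shape, dtype=int)
    for x, y, _z in defect_positions:
        ix = min(int(x * shape[0]), shape[0] - 1)
        iy = min(int(y * shape[1]), shape[1] - 1)
        grid[ix, iy] += 1
    return grid

# Two nearby defects cluster in one cell; a third lies far away.
defects = [(0.1, 0.1, 0.3), (0.12, 0.11, 0.8), (0.9, 0.9, 0.5)]
density = defect_density_map(defects)
```

Rendering such a grid as a heat map overlay immediately highlights where defects cluster, which is the kind of estimate underlying the final fracture-region visualization.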
Visual Representations for Quantitative and Derived Data are required to provide additional insights into data which may not be obvious in the spatial or spatiotemporal domain alone. Methods in this area cover a large number of domains, ranging from feature quantification and geometric tolerancing to network analyses. Therefore, examples for this category can be found both in material simulation and material analysis. In the domain of material simulation, Zobel et al. [ZSS15] propose a tool for domain specialists to verify if and where the design of a new fiber reinforced polymer component might fail, in order to improve it in additional design loops. The authors use feature-based tensor visualization to evaluate simulations of respective fiber reinforced polymer samples for virtual prototyping. Their tool presents simple and efficient overviews of the stress and strain in the complete object and introduces a new type of glyph encoding the desired fiber directions and the fiber orientation tensor. These glyphs are based on superquadrics encoding the orientations of the fibers. Furthermore, the tool uses colors to indicate if the fiber orientation may cause failures. Finally, the admissible fiber directions at specific regions of the specimen may be shown in the data. Quantitative analysis of features in materials is of increasing interest. A tool facilitating the quantitative analysis of fibers in XCT scans of fiber reinforced materials was presented by Weissenböck et al. [WAL * 14]. After extracting quantitative information on the fibers (e.g., length, orientation, start and end point), the authors use parallel coordinates as well as a scatter plot matrix for exploring and clustering fibers based on their characteristics. In order to ensure maximum flexibility in terms of exploration and visual analysis, all views are linked with each other as well as with the representation in the spatial domain. Fritz et al.
[FHG * 09] and Westenberger et al. [WEL12] also evaluated fibers in XCT scans of fiber reinforced materials. After extracting each fiber and quantifying the fibers regarding their orientation in space, these approaches use a sphere or a hemisphere as a visual metaphor to encode fiber orientation distributions of fiber reinforced materials: The sphere/hemisphere allows mapping the orientations of the straight fibers as angles on the sphere/hemisphere. Exploiting the symmetry of the orientations simplifies the visualization from a sphere to a hemisphere, which is then azimuthally projected onto a 2D plane. This is an intuitive and innovative visualization of fiber orientation distributions, as it avoids the visual clutter which direct volume rendering of the fibers would introduce. Network visualizations were also adapted for use in materials science, e.g., for the analysis of rocks as presented by Grau et al. [GVTA10]. In the spatial domain, it is often hard to determine which nodes are near to and which are far from the viewer. To solve this problem, a size may be assigned to the nodes, rendering them smaller if they are far away. This is not always possible, as node size is typically already used to encode derived properties of the material, such as pore volume. To solve this issue, color coding with a legend is used to encode depth. Color coding of the nodes can also be used to provide information about the distance, e.g., from a selected node, or to visualize which nodes are connected to a node. The authors use their network visualization technique to solve domain-specific tasks such as finding the shortest path between two pores (see Figure 7). In the left image, out-of-scope pores are ghosted. In the middle image, these pores are cut away and the maximum distance is longer. In the right image, opacity attenuation in relation to the distance is used.
This also allows simulating fluid extrusions or intrusions through the structure, and the visualization therefore has an additional benefit. The Fuzzy Tracking Graphs introduced by Reh et al. [RAK * 15] visualize the evolution of features and corresponding events over time. This representative example of how complex processes can be simplified using 2D visualization techniques ensures a fast and comprehensive visual analysis for practitioners. Wireframe visualizations are used by Wang et al. [WNG06] in order to derive and visualize cloud-like structures, such as plasmapauses (the boundary of a plasmasphere), from large image data, which are in motion and deform governed by magnetic field properties. Aside from being cheap to render, wireframes as a visual metaphor allow viewers to intuitively follow the structure in motion, which is only reasonable if a perspective rendering is used or if front- and back-faces are easily discernible. In addition, the 3D visualization of derived functions is important in materials science: Data aggregations are used by Reh et al. [RGK * 13] for computing MObjects, which are introduced as the mean object of a large number of features of interest such as pores, cracks, particles, voids, or fibers in the data. The authors compute these mean objects in order to integrate them as precise average structures for finite element simulation purposes. Their approach is to first segment the features of interest, align them according to their centers of gravity, aggregate all pore voxels, and normalize all voxels in the created mean object to 1. The normalization to 1 allows for interpreting the mean object's data as the voxels' probability of belonging to the mean object, which the authors denote as an uncertainty cloud (see Figure 1).
For quantitative visualization, simpler 2D visualization techniques such as charts, plots, scatterplots, histograms, etc., are also widely used in materials science for understanding complex data. As a large number of papers builds on these simpler visualization techniques, we only indicate the following examples: Bhimavarapu et al. [BMBN10], in their approach analyzing the compressive deformation behavior of the Al 2024 alloy, use function plots with level lines to visualize the relation between extracted strain rates and flow strain. Froehler et al. [FMH16] use histograms as supportive techniques to explore the frequency of the parameters and derived outputs used in segmentation pipelines in materials science. As the histograms are linked to filter sliders, a quick exploration of the data is facilitated.
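Returning to the fiber orientation distributions of Fritz et al. and Westenberger et al., the hemisphere mapping with azimuthal projection can be sketched as follows. The equidistant projection variant is an assumption; the cited works may use a different azimuthal projection:

```python
import math

def project_orientation(v):
    """Map a fiber direction to the upper hemisphere (orientations are
    symmetric, so v and -v are identical) and project it azimuthally
    onto the 2D plane: radius ~ polar angle, direction ~ azimuth."""
    x, y, z = v
    if z < 0:                      # exploit the v = -v symmetry
        x, y, z = -x, -y, -z
    n = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / n, y / n, z / n
    theta = math.acos(z)           # polar angle in [0, pi/2]
    phi = math.atan2(y, x)         # azimuth
    r = theta / (math.pi / 2)      # equidistant: hemisphere rim -> r = 1
    return (r * math.cos(phi), r * math.sin(phi))

# A fiber along +z maps to the center, one in the x/y plane to the rim.
center = project_orientation((0, 0, 1))
rim = project_orientation((1, 0, 0))
```

Plotting many fibers this way yields a single clutter-free disk image of the whole orientation distribution, which is exactly what the sphere/hemisphere metaphor achieves over direct volume rendering.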

Interaction Techniques
As a result of the increasing complexity of data in materials science as well as the growing demands regarding their analysis, passive visualization techniques in particular have reached their limits. For that reason, interactive visualization methods are increasingly utilized to support the visual analysis process in materials science. For several challenges, even interactive steering is employed to influence the data generation process. We thus reviewed and analyzed the interaction techniques used in visual computing supported materials science and present our results in the following sections. Regarding the categorization of the interaction techniques, we first used the categorization as proposed by Yi et al. [YaKSJ07]. When coding the interaction techniques from our set of relevant literature, it turned out that this categorization was in some respects too coarse and in others too fine-grained for visual computing in materials science. As a consequence, we focus our review on those interaction techniques which are highly relevant in visual computing based materials science and combine the proposed categories from Yi et al. [YaKSJ07] with those described by Kosara et al. [KHG03]: Explore and Reconfigure addresses the three basic interaction techniques of translating, rotating, and zooming into data, which are used in materials science for spatial, spatiotemporal, and quantitative and derived data visual analysis approaches as well as for interactive steering. All three basic interaction techniques fit in the category 'Explore' as defined in Yi's classification, while translate and rotate may also be categorized as 'Reconfigure'. 'Zoom', especially a semantic zoom, fits in the category 'Abstract/Elaborate'. Despite the different approaches regarding categorization, the three basic interaction techniques of translate, rotate, and zoom are frequently used in visualization systems.
Especially in the exploration and analysis of spatial data, these interaction techniques often appear in the literature on visual computing supported materials science, as they are easy to implement but provide huge benefits in data exploration. For example, Zobel et al. [ZSS15] use translation, rotation, and zoom in order to explore if and where the design of a new fiber reinforced polymer component might fail.
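The three basic interactions compose naturally into a single homogeneous view transform, which is why they are so cheap to implement. This is a generic 2D sketch, not code from any of the cited systems:

```python
import math

def view_matrix(tx=0.0, ty=0.0, angle=0.0, zoom=1.0):
    """Compose translate, rotate, and zoom into one 2D homogeneous
    view matrix (applied as: zoom * rotate first, then translate)."""
    c, s = math.cos(angle), math.sin(angle)
    return [[zoom * c, -zoom * s, tx],
            [zoom * s,  zoom * c, ty],
            [0.0,       0.0,      1.0]]

def apply(m, p):
    """Apply a homogeneous 2D view matrix to a point (x, y)."""
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# Zoom by 2 and shift right by 1: point (1, 0) lands at (3, 0).
p = apply(view_matrix(tx=1.0, zoom=2.0), (1.0, 0.0))
```

In 3D systems the same idea holds with 4x4 matrices, and a semantic zoom additionally swaps the representation at certain zoom levels.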
Linking and Brushing uses multiple views showing different aspects of the same data, which are connected (linked with each other) in order to interactively manipulate them all together. The term brushing refers to the categories 'Select' and 'Filter', and the term linking to the category 'Connect' from Yi's classification. Either in a single view or in all views, items may be selected, rotated, translated, zoomed, or filtered by text, sliders, or other means. As all views are linked, all views are updated automatically once a change occurs in one of the views, in order to show the selected data in context. In their Porosity Analyzer tool for the evaluation of segmentation pipelines, Weissenböck et al. [WAG * 16] combined scatterplot matrices encoding the characteristics of pores with 3D spatial views of the data. With the scatter plot matrix, the masks as generated by the corresponding segmentation pipelines are evaluated regarding their input and output parameters. Results of individual segmentation-pipeline runs are selected by brushing in the corresponding scatter plot and linked to 2D slice views and 3D renderings of aggregated segmentation masks and statistical contour renderings. Qian et al.
[QSCZ16] use linking and brushing to support conservators and site managers in the deterioration risk assessment of cultural heritage. Their approach allows brushing of exogenous factors such as humidity, light, or wind, as well as endogenous factors which cause deterioration. The brushed characteristics are displayed with linked box plots, scatter plots and other charts in order to provide an overview graph for decision making in a visual analysis process.
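The core linking-and-brushing mechanics — a selection brushed in one view propagating to all linked views — can be sketched generically. The pore properties and view dictionaries are illustrative stand-ins, not the data model of any cited tool:

```python
def brush(items, key, lo, hi):
    """Select the indices of items whose property `key` lies in [lo, hi]."""
    return {i for i, it in enumerate(items) if lo <= it[key] <= hi}

def link(selection, *views):
    """Propagate one brushed selection to every linked view."""
    for view in views:
        view["highlight"] = set(selection)

pores = [{"volume": 2.0, "sphericity": 0.9},
         {"volume": 8.0, "sphericity": 0.4},
         {"volume": 5.0, "sphericity": 0.7}]
scatter, slice_view = {"highlight": set()}, {"highlight": set()}

sel = brush(pores, "volume", 4.0, 10.0)   # brush the large pores
link(sel, scatter, slice_view)            # both views update together
```

Because every view shares one selection set, the brushed pores appear highlighted simultaneously in the scatter plot and in the spatial slice view, showing the selection in context.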
Focus Plus Context as defined in Kosara's classification helps to visualize large datasets which cannot be displayed or rendered at once as a whole. In a visualization system using focus plus context, the users select a subset of the data and put the focus of the analysis on this subset. The subset is then displayed in more detail, providing additional visual metaphors, e.g., renderings of derived data. As context information, the complete dataset is typically provided out of focus, e.g., with abstracted or reduced detail. There are different methods to achieve this effect. For example, the data around the focus can be geometrically distorted. Li et al. [LZS16] use focus plus context to provide users with details (focus) and an overview as a heat map (context) describing the spatial distribution of the overall disruption risk for the deterioration risk assessment of ancient frescoes. Various applications of this technique are seen in the method of Grau et al. [GVTA10], supporting the exploration of porous structures using illustrative visualization. In their proposed graph-based visualization of porous media, the user can select individual pores by clicking or within a given radius, as well as pores not reachable from the outside, at the boundary, or at the surface of a given model. This selection defines the focus. All other pores and the solid around those pores are rendered as context information. In their application the selected pores are rendered opaque and highlighted with color, while context pores as well as the solid material are rendered using a ghosting effect. To overcome the problem that context pores and solid overlap focus pores, they additionally use a cut-away technique for interfering pores and solid areas, which allows full focus but reduces the context in the viewing direction.
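A minimal sketch of the focus-plus-context opacity assignment (the ghosting effect): focus pores stay opaque, context pores receive a low ghost opacity. The alpha values are arbitrary illustrative choices:

```python
def ghosting_opacities(n_pores, focus, focus_alpha=1.0, ghost_alpha=0.15):
    """Focus-plus-context rendering: pores in the focus set stay fully
    opaque, all context pores are 'ghosted' with a low opacity."""
    return [focus_alpha if i in focus else ghost_alpha
            for i in range(n_pores)]

# Pores 1 and 3 are selected (focus); 0 and 2 become context.
alphas = ghosting_opacities(4, focus={1, 3})
```

A cut-away technique, as used by Grau et al., goes one step further and sets the opacity of interfering context geometry in the viewing direction to zero.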
Filter is defined in both Yi's and Kosara's classifications and describes the process of searching in a dataset in order to reduce the displayed data. This can be achieved via free-text forms in the case of textual or derived datasets. Another approach makes use of sliders to find all values which feature a property within a specific range. In the workflow introduced by Fritz et al. [FHG * 09], the user can filter fibers of a fiber reinforced material by their angles and other characteristics. This way, only fibers within the filter range are visible for the evaluation. Qian et al. [QSCZ16] implemented filtering as drop-down menus for constraining areas of interest, endogenous risk factors, and exogenous factors. These filtering techniques can be combined for refinement, which demonstrates another option for using filters. A combination of multiple filter types allows for an advanced interactive search facilitating complex and interactive queries.
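Range-slider filtering as described above amounts to a conjunctive range query per characteristic. The fiber properties and slider settings below are illustrative assumptions:

```python
def filter_fibers(fibers, ranges):
    """Keep only fibers whose every characteristic lies inside the
    corresponding slider range; combining several ranges yields a
    conjunctive interactive query."""
    def ok(f):
        return all(lo <= f[k] <= hi for k, (lo, hi) in ranges.items())
    return [f for f in fibers if ok(f)]

fibers = [{"phi": 10.0, "length": 120.0},
          {"phi": 80.0, "length": 40.0},
          {"phi": 15.0, "length": 30.0}]

# Slider settings: near-horizontal fibers (phi <= 30) longer than 50.
visible = filter_fibers(fibers, {"phi": (0.0, 30.0),
                                 "length": (50.0, 500.0)})
```

Each slider adjustment simply re-evaluates the query, so only fibers within all active ranges remain visible.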
Aside from these interaction techniques, other techniques are also used in visual computing for materials science, such as 'Abstract/Elaborate' in the form of glyphs as in the method presented by Zobel et al. [ZSS15], or 'Reconfigure' as presented by Reh et al. [RPK * 12] to identify suitable viewing angles. As these interaction techniques play a minor role in the current body of literature, they are not considered any further in this report.

Interactive Steering
Interactive steering makes use of visual representations of spatial and spatiotemporal data as well as of representations of quantitative and derived data. It also employs the interaction techniques explained in Section 4.8. As interactive steering is considered in materials science to achieve the greatest benefits for a materials science problem, we also review this area from the materials science perspective: At the moment, interactive steering of complex simulation and analysis techniques is still a sparsely used method. However, in the presented approaches it already generates substantial benefits in materials science. Interactive steering adjusts parameters of the data acquisition process while the acquisition process is still ongoing. It provides means to review intermediate results and uses the extracted insights in order to steer the data acquisition process. This principle helps to explore complex phenomena without the need for a computationally expensive exploration of the phenomenon's complete parameter space. Interactive steering has the advantage that long-running simulations may be corrected and steered if they generate suboptimal results. These advantages, however, are offset by the fact that the corresponding data acquisition is costly, and the interaction may thus end up in a tedious process. The tool of Laevsky et al. [LTM01] employs a visual steering system combining distributed computation and visualization of glass pressing simulations. The authors provide a tool enabling users to control the underlying simulation. Intermediate visualization results, such as 2D color-coded renderings of the pressure in the form, act as a steering and visualization front-end in order to derive parameters for a new simulation run. The interaction with the simulation is facilitated using a graphical network visualization of the dataflow.
The corresponding modules in the network process data streams and represent functions such as the computation of gradients or iso-surface extraction. Each module features its own parameters, which can be manipulated using GUI elements. The results can be visualized in 2D or 3D animations, allowing an interactive exploration of the simulation. A very different approach is presented by Martin et al. [MTGG11] for performing deformation simulations with example-based materials. In their approach, the user applies pressure and strain to simulations of flexible materials. For example, a gummy bear is pressed with a spoon and the deformation may be followed interactively. The materials are represented using surface models. This allows users to adapt the underlying material parameters according to the visualized deformation. Despite this work being primarily focused on the creation of art, the authors also describe secondary applications in the domain of materials science regarding the design of new materials. Such techniques may accelerate the creation of new materials and support material scientists in their daily work.
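The interactive-steering principle — review intermediate results, adjust parameters, continue the costly process — can be condensed into a generic loop. The toy quality model below stands in for a real simulation or scanner and is purely illustrative:

```python
def steer(run_step, assess, adjust, params, target, max_iters=20):
    """Generic steering loop: run one step of a costly process, assess
    the intermediate result, and let the (simulated) user adjust the
    parameters until the target quality is reached."""
    for _ in range(max_iters):
        result = run_step(params)
        quality = assess(result)
        if quality >= target:
            return params, quality
        params = adjust(params, quality)
    return params, quality

# Toy stand-in: "quality" grows with an exposure-like parameter.
run_step = lambda p: p["exposure"]
assess = lambda r: min(r / 10.0, 1.0)
adjust = lambda p, q: {"exposure": p["exposure"] + 2.0}

params, quality = steer(run_step, assess, adjust,
                        {"exposure": 2.0}, target=0.8)
```

In a real steering system the `assess` and `adjust` steps are performed by a human inspecting intermediate visualizations, which is exactly what makes the loop interactive rather than a closed optimization.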

Challenges for Future Work
Though a whole body of work already exists in visual analysis for materials science, much larger challenges remain to be solved in the future. We combined our findings with the results of a discussion round on the topic of visual analysis for materials science, which took place between specialists from the domains of visual analysis, visualization, non-destructive testing, and materials science at a recent workshop. To solve these challenges, new collaborations and projects are strongly encouraged, integrating end users from materials science, specialists in materials analysis, and experts in visualization and visual analysis, in order to boost visual computing supported materials science. Our findings are summarized below, separated into high-level and low-level challenges:

High Level Challenges
Regarding high-level challenges, the most pressing tasks to be solved are found in the following points: • The Integrated Visual Analysis Challenge was identified as standard visualization tools are not enough to explore the generated materials science data in detail. What is required are integrated visual analysis tools which are tailored to a specific application area of interest and which guide users in their investigations. Using linked views and other interaction concepts, these tools are required to combine all data domains using meaningful and easy to understand visualization techniques. Especially for the analysis of dynamic processes, where spatial and temporal data are evaluated, e.g., when materials are being tested under load or in different environmental conditions, these kinds of tools are highly anticipated. Only this concept allows users to make the most of all the available data. • The Quantitative Data Visualization Challenge centers around the design and implementation of tailored visual analysis systems for extracting and analyzing derived data, e.g., as computed from extracted features over spatial, temporal, or even higher-dimensional data domains. Therefore, feature extraction techniques, e.g., for the extraction of flow paths through porous media, segmentation techniques, e.g., for the extraction of voids in the data, and even clustering techniques, e.g., for finding interesting feature classes, are required as prerequisites for the targeted visual analysis. As the quantification may easily yield 25 or more properties computed per feature, clustering techniques allow grouping the features of interest into feature classes. These feature classes may then be statistically evaluated in order to visualize the properties of the individual features as well as the properties of the different classes. Particularly, techniques from information visualization will be of interest for solving this challenge.
• The Visual Debugger Challenge is an idea which uses visual analysis to remove errors in the parametrization of a simulation or a data acquisition process, in order to finally improve results. Similarly to a debugger in computer programming, which tests and identifies errors in the code and provides hints to improve it, a visual debugger in visual computing for materials science should show the following characteristics: It should indicate errors and identify wrongly used algorithms in the analysis. Such a tool should also identify wrongly used and incorrect parameters, which either show no or very limited benefit or lead to erroneous final results. Furthermore, it should give directions on how to improve a targeted analysis and suggest suitable algorithms or pipelines for specific tasks. • The Interactive Steering Challenge uses visual analysis tools to steer ongoing simulation or data acquisition systems. In this challenge, a visual analysis system should monitor a costly process and give directions in order to continuously refine its results. The main goal here is to find a reasonable solution to the problem of interest. For example, in the material analysis domain this could be a system which provides settings for data acquisition based on the achieved image quality. If the image quality no longer fulfills the target requirements, the system influences all degrees of freedom of the data acquisition in order to enhance image quality again. The same holds for the materials simulation domain. Visual analysis can help to steer target material properties in a specific application environment by predicting tendencies of costly simulation runs, e.g., using cheaper surrogate models.

Low Level Challenges
Whereas the previous section focused on specific visual analysis challenges in materials science, the following points outline more general challenges of the domain, which will help to drive collaborations as well as research in the respective areas: • The Common Data Repository Challenge came up in discussions between the communities of visual analysis, visualization, non-destructive testing, and materials science, as the generated data is typically proprietary to specific end users, who do not want to share corporate secrets with competitors. Therefore, an archive of disclosed data should be created, similar to the "digimorph project" (www.digimorph.org), a dynamic archive of high-resolution XCT data of biological specimens. This materials science archive should contain spatial data, spatiotemporal data, and derived data, together with descriptions of the sample as well as the analysis task. • The Visual Analysis Consulting Challenge is a request of material testing and simulation specialists, who would like to provide their end users in materials science with visual analysis methods and contacts in order to solve their problems. In their daily routine, they typically cannot do so, as their services end with providing the required data. Therefore, consulting services, software frameworks, and methods are required which allow the end users in materials science to make the most of their data.

Conclusions
In this work we have presented the state of the art regarding visual computing for materials science. This is the first concise overview of current research activities within this emerging field. After reviewing the definition of materials science and of material systems exploiting visual computing, we analyzed the high-level visual computing, visual analysis, and visualization tasks for materials science, as well as the testing techniques which provide the data for the respective analyses. We reviewed the data characteristics, the visualization techniques and visual metaphors used, as well as the interaction concepts employed. Our review showed that about half of all relevant literature still mainly uses passive visualization techniques. In these approaches, simple visualization techniques such as the plain output of the measured raw data or the extraction of a plot, a histogram, or even a binary value are sufficient to answer a number of problems in materials science, e.g., whether a material system is qualified for a specific application. Interactive visualization becomes a requirement if the input data dimensionality reaches or exceeds 2D. If input data is 3D, interactive techniques are required to explore the data in detail. For higher dataset dimensions, e.g., the exploration of derived values, the employed interaction concepts influence the quality of the data exploration results in materials science. Interactive visual steering of costly data evaluations or simulations is required by materials science in the sense of visual debuggers. These visual debuggers should support domain specialists regarding data acquisition as well as simulation processes. At the moment, despite promising the greatest benefits, these kinds of systems are rarely seen.
We finally concluded our report with the identification of open research challenges based on our observations, in order to guide future research endeavors in this area. We hope that our work will be the basis for establishing further fruitful collaborations across the domains in order to boost research in all materials science domains as well as in visual analysis and visualization.