Neural Networks Push the Limits of Luminescence Lifetime Nanosensing

Luminescence lifetime‐based sensing is ideally suited to monitor biological systems due to its minimal invasiveness and remote working principle. Yet, its applicability is limited in conditions of low signal‐to‐noise ratio (SNR) induced by, e.g., short exposure times and presence of opaque tissues. Herein this limitation is overcome by applying a U‐shaped convolutional neural network (U‐NET) to improve luminescence lifetime estimation under conditions of extremely low SNR. Specifically, the prowess of the U‐NET is showcased in the context of luminescence lifetime thermometry, achieving more precise thermal readouts using Ag2S nanothermometers. Compared to traditional analysis methods of decay curve fitting and integration, the U‐NET can extract average lifetimes more precisely and consistently regardless of the SNR value. The improvement achieved in the sensing performance using the U‐NET is demonstrated with two experiments characterized by extreme measurement conditions: thermal monitoring of free‐falling droplets, and monitoring of thermal transients in suspended droplets through an opaque medium. These results broaden the applicability of luminescence lifetime‐based sensing in fields including in vivo experimentation and microfluidics, while, hopefully, spurring further research on the implementation of machine learning (ML) in luminescence sensing.


Introduction
Sensing approaches based on luminescent molecular probes and nanoprobes are uniquely attractive, especially in the biological context, because of their minimally invasive nature and remote working principle. [1,2] These methods provide a readout from the analysis of the change of a luminescence feature (e.g., absolute intensity, ratio between intensities, peak position, luminescence lifetime). In particular, luminescence sensing based on changes in the luminescence lifetime has become increasingly popular [3] for the detection of molecular interactions, [4,5] the identification of biochemical species, [6] and the monitoring of quantities like pH [7] and temperature. [8,9] While the performance of lifetime-based approaches is not affected by spectral distortions, [10] they face other limitations that can cripple their precision and/or accuracy. One major challenge is to reliably extract information from decay curves characterized by a low signal-to-noise ratio (SNR). Indeed, the uncertainty (δτ) associated with the lifetime (τ) extracted from a decay curve is higher in conditions of low SNR. The corresponding uncertainty (δM) on the sensed magnitude (M) is concomitantly high, given that, in first approximation, it is related to the lifetime uncertainty as follows [11]:

δM = (1/S_r) · (δτ/τ)    (1)

where S_r is the relative sensitivity calculated from the calibration curve (τ vs M) as follows:

S_r = (1/τ) · |∂τ/∂M|    (2)

To minimize δM, it is therefore necessary to take actions to minimize the uncertainty (δτ) associated with the determination of the lifetime. For example, the exposure time can be increased to improve the quality (i.e., the SNR) of the decay curves, but at the cost of a higher temporal uncertainty. In in vitro bioimaging, where the goal is to extract information from living cells, this could preclude the possibility of monitoring fast events, such as ion channel opening and rapid local temperature variations. [12,13]

Decay curves with higher SNR can also be obtained by collecting the signal from a larger monitored area or by increasing the concentration of luminescent nanosensors, yet at the cost of losing spatial resolution and possibly introducing more marked interferences, respectively. A strategy should therefore be developed to overcome this tradeoff.
Machine learning (ML) algorithms have recently been applied in luminescence thermometry. [19-24] The application of these algorithms ensures high spatiotemporal resolution and increases the reliability of thermometric approaches. Recently, we also showcased the use of dimensionality-reduction approaches in luminescence thermometry to automate the selection of the spectral thermometric parameters that ensure the most precise thermal readout. [25] Given the recent successes of ML algorithms in luminescence sensing, we asked ourselves: could an ML algorithm make luminescence lifetime-based thermometers reliable regardless of the noise level (i.e., the SNR value)?
In this work, we answer this question, showing that a class of convolutional neural networks (CNNs) can be used to improve the performance of lifetime-based sensing approaches in conditions of low SNR. To showcase the potential of this strategy, we chose Ag2S semiconductor nanocrystals (SNCs) acting as luminescent nanothermometers. Yet, the results are of general applicability, regardless of the luminescent (nano)sensor and the sensed quantity. Since CNN algorithms are used chiefly for image processing and the identification of patterns, [26-28] the decay curves are first transformed into suitable images. The implemented CNN with a U-NET structure can learn the relevant features of noisy images (i.e., curves) and provide more precise lifetime estimations in situations of low SNR. We demonstrate the validity of the developed data treatment under adverse measurement conditions (i.e., short exposure times and high tissue-induced photon extinction) and show that the proposed method outperforms traditional decay-curve analysis approaches in terms of precision. Even more importantly, the use of the CNN ensures that the sensor calibration (generally performed in favorable conditions of high SNR) can be reliably applied to data obtained under markedly different experimental conditions, which are usually more challenging and hence associated with lower SNR values.

Signal-to-Noise Ratio and its Effect on Luminescence Lifetime Estimation
To showcase the effect that lowering the SNR has on the estimation of the luminescence lifetime (and, by extension, on the performance of lifetime-based sensing approaches), we recorded the luminescence decay of Ag2S SNCs dispersed in water (Figure 1a; characterization provided in Section S1, Supporting Information) at 25 °C with different SNR values. The SNR (I/δI) was calculated as the ratio between the maximum intensity (I) and the standard deviation of the noise (δI) extracted over a time window that contained no fluorescence signal (i.e., the last 30% of the curve; see Section S2, Supporting Information). The noise level was varied by decreasing the exposure time from 60 to 0.01 s while keeping the other experimental conditions constant: excitation power density (0.5 mW cm−2), Ag2S SNC concentration (0.5 mg mL−1), and luminescence collection system. Five representative decay curves, obtained with SNRs of 77, 52, 39, 24, and 15, are plotted in Figure 1b. Though all the curves in Figure 1b describe the exact same decay process, the impact of the different levels of noise on the estimation of the luminescence lifetime is markedly different.
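As an illustration of this definition, the SNR of a recorded decay curve can be estimated in a few lines; the function name and the synthetic dataset below are illustrative, not the code used in this work:

```python
import numpy as np

def estimate_snr(decay, noise_fraction=0.3):
    """SNR = peak intensity / standard deviation of the noise, the latter
    estimated over the signal-free tail (here, the last 30% of the curve)."""
    tail = decay[int(len(decay) * (1 - noise_fraction)):]
    return decay.max() / tail.std()

# Synthetic decay: tau = 200 ns sampled in 0.5 ns bins, plus Gaussian noise
rng = np.random.default_rng(0)
t = np.arange(0, 2000, 0.5)                       # ns
noisy = 100 * np.exp(-t / 200) + rng.normal(0, 2, t.size)
print(estimate_snr(noisy))                        # close to 100/2 = 50
```

Because the tail of the curve carries essentially no signal, its standard deviation is a direct estimate of the noise floor.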
When the SNR value decreases, a higher relative uncertainty is associated with the estimated average luminescence lifetime. This trend is observed using the two traditional strategies for extracting this value: fitting to an exponential function and integrating the decay curve (Figure 1c; the fitting and integration details are explained in Sections S3 and S4 of the Supporting Information). The mathematical expression used for fitting the curves is the following:

I(t) = I0 · exp[−(t/τ)^β] + b    (3)

where I0 is the intensity at the decay's start, b is the background level, β is the stretching exponent, which takes values between 0 and 1, and τ is the characteristic decay time. This function was preferred over single or multiple exponentials because no clear components are observed in the decay curves, as is generally the case with Ag2S SNCs. [29] To fit the decay curves, we explored two approaches: the maximum likelihood estimator (hereafter, MaxLik) and the weighted least-squares estimator (hereafter, WLSE). The key distinction between them lies in how they handle the variability of the data points. While WLSE assigns different weights to individual data points to downweight more uncertain observations, MaxLik aims to find the parameters that maximize the likelihood of observing the given data under a specific model, taking into account the entire likelihood distribution. However, since both methods yielded strikingly similar results (Section S4, Supporting Information), the subsequent discussion is based only on the performance of the MaxLik technique. The need to employ MaxLik arises because, particularly for low photon counts (i.e., for decay curves with low SNRs), the distribution of the counts significantly departs from a Gaussian, which is the fundamental assumption underpinning the majority of least-squares estimators. MaxLik addresses this issue by determining the likelihood that a specific set of parameters corresponds to the experimentally observed photon detection distribution. In other words, instead of minimizing:

χ² = Σ_i (m_i − m̂_i)² / m̂_i    (4)

one minimizes [30,31]:

−2 ln L = 2 Σ_i [ m̂_i − m_i − m_i ln(m̂_i/m_i) ]    (5)

where m_i and m̂_i denote, respectively, the measured and expected (i.e., predicted by the chosen model) number of photons falling into the i-th detection channel. After finding the values of τ and β, the mean luminescence lifetime is defined as the first moment of Equation (3), i.e.:

τ_m = (τ/β) · Γ(1/β)    (6)

For the integration method, the histogram of detected photons is analyzed and the following equation is used to determine the lifetime [32]:

τ_m = Σ_i (t_i − t_p)(N_i − N∞) / Σ_i (N_i − N∞)    (7)

where t_p is the instant when the maximum number of photons is detected, N_i is the number of photons detected in time channel i, and N∞ is the background level. Both procedures allow the characterization of the decay dynamics without the need to normalize the curve, thus avoiding the introduction of artifacts in the calculations. More details can be found in Section S3 (Supporting Information).
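A minimal sketch of such a Poisson maximum-likelihood fit of the stretched-exponential model, using a generic optimizer; the synthetic dataset, starting values, and optimizer settings below are illustrative assumptions, not the settings used in this work:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gamma

def neg_log_likelihood(params, t, counts):
    """Poisson negative log-likelihood (up to a constant) for the
    stretched-exponential model m(t) = I0*exp(-(t/tau)**beta) + b."""
    i0, tau, beta, b = params
    model = i0 * np.exp(-(t / tau) ** beta) + b
    model = np.clip(model, 1e-12, None)           # guard against log(<=0)
    return np.sum(model - counts * np.log(model))

# Synthetic decay: tau = 200 ns, beta = 0.8, 0.5 ns bins (illustrative values)
rng = np.random.default_rng(1)
t = np.arange(0, 2000, 0.5)
counts = rng.poisson(500 * np.exp(-(t / 200) ** 0.8) + 2)

res = minimize(neg_log_likelihood, x0=[400.0, 150.0, 0.9, 1.0],
               args=(t, counts), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
i0, tau, beta, b = res.x
tau_mean = (tau / beta) * gamma(1 / beta)         # first moment, as in Eq. (6)
```

For these ground-truth parameters the mean lifetime is (200/0.8)·Γ(1.25) ≈ 227 ns, which the fit should recover to within a few percent.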

The Effect of the SNR Value on the Performance of Lifetime-Based Sensing
The trend observed in Figure 1c is worrying in the context of luminescence lifetime-based sensing, and specifically thermometry, because a higher uncertainty in the estimation of the lifetime translates into a higher uncertainty in the temperature readout. The thermal uncertainty (or thermal resolution), δT, of a thermometric approach based on the luminescence lifetime is given by:

δT = (1/S_r) · (δτ/τ)    (8)

This is a first-order approximation of a Taylor expansion, valid only if the uncertainty on the thermal readout depends solely on the uncertainty in the estimation of the lifetime. [33] As seen from Figure 1c, higher δτ/τ values are found when the SNR (I/δI) takes on lower values. Therefore:

δT = δT(I/δI)    (9)

The central role played by the thermal uncertainty in the evaluation of the performance of a thermometric (but really any sensing) approach has recently been echoed by several researchers. [34,35] Equation (9) highlights that δT is a function of the SNR and, as such, of the specific experimental conditions employed to perform the calibration. Such conditions are optimized to ensure a sufficiently high SNR, which in turn guarantees precise lifetime estimations. However, during actual measurements, the conditions are generally not the same as those used for the calibration (e.g., an opaque medium can be present in the optical path), and hence lower SNR values are encountered. This leads to a less precise thermal readout, and thus the δT value obtained from the calibration should be regarded as a lower limit that is hardly achievable during measurements.
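Plugging illustrative numbers (not values from this work) into Equation (8) shows how directly the relative lifetime uncertainty propagates to the thermal readout:

```python
# Equation (8): delta_T = (1/S_r) * (delta_tau / tau)
s_r = 0.01            # relative sensitivity, 1% per degree C (assumed value)
rel_tau_unc = 0.02    # delta_tau / tau, 2% (assumed value)
delta_t = rel_tau_unc / s_r
print(delta_t)        # 2.0 degrees C
```

Doubling δτ/τ (e.g., by halving the SNR-limited precision) doubles δT at a fixed sensitivity, which is why noise suppression pays off one-to-one in thermal resolution.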
To demonstrate the significance of this consideration in luminescence thermometry, we calibrated Ag2S SNCs as lifetime-based nanothermometers under ten different SNRs. For each pair of temperature and SNR (the latter controlled by adjusting the exposure time at each temperature point), fifty decay curves were recorded. In the explored temperature range (18-38 °C), the lifetime of Ag2S SNCs is expected to decrease by 30 ns. The decay curves collected at the different temperatures and SNR values were fitted (Equation (3)) or integrated (Equation (7)) to extract the calibration curves (average lifetime vs temperature). Representative calibration curves are reported in Figure 2a,b. As expected, the error bars increase considerably when lowering the SNR. Hence, an approach that mitigates the negative impact of low SNR values on the precision of the lifetime-based thermal readout is needed. It is important to note that the thermal dependence of the lifetimes obtained through Equation (7) differs from that of the lifetimes obtained via a MaxLik fit. This discrepancy arises because, when fitting the dataset with a stretched exponential, it is possible to independently determine the values of τ and β and subsequently deduce the mean relaxation time. In contrast, when utilizing the integration method, one can only derive the ratio between this mean relaxation time and the second moment of the decay (as outlined in Section S3, Supporting Information). In both scenarios, however, the parameters are suitable for thermometric applications and serve for the ensuing comparison.
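The calibration step can be sketched as a simple polynomial fit of the average lifetime versus temperature, from which the relative sensitivity of Equation (2) follows; the data points below are hypothetical, not the measured ones:

```python
import numpy as np

# Hypothetical calibration data: average lifetime (ns) vs temperature (degC)
temps = np.array([18.0, 22.0, 26.0, 30.0, 34.0, 38.0])
taus = np.array([230.0, 224.0, 218.0, 211.0, 205.0, 200.0])

# Quadratic calibration curve (the functional form used later in this work)
calib = np.poly1d(np.polyfit(temps, taus, deg=2))

# Relative sensitivity S_r = |d tau / dT| / tau, evaluated at 25 degC
s_r = abs(calib.deriv()(25.0)) / calib(25.0)
```

Numerically inverting `calib` then converts a measured average lifetime back into a temperature readout.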

Elimination of Bias and Mitigation of Uncertainty Using the U-NET
Opportunely, and thanks to the expansion of ML concepts, we now have access to a plethora of analysis methods. [36,37] Among them are the so-called U-shaped CNNs (also known as U-NETs), [38-41] which are particularly effective at removing noise from images. This latter aspect makes U-NETs good candidates for solving the issue at hand: overcoming the impact of noise (i.e., of low SNR values) on the performance of lifetime-based luminescence sensing. Yet, the implementation of U-NETs in this context is not straightforward, because their potential is more easily harnessed when they are applied to images rather than decay curves. While it is possible to modify the structure of the U-NET to accommodate 1D datasets as input, such an adaptation significantly constrains the algorithm's capabilities (as verified in Section S5, Supporting Information). We thus converted each decay curve into an image by mapping it onto a square spiral (more details in the Experimental Section), with the beginning and the end of the curve at the center and at the outermost part of the spiral, respectively (Figure 3a).
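A minimal sketch of such a square-spiral mapping; the exact winding direction and padding conventions used in this work are not specified, so the details below are assumptions:

```python
import numpy as np

def square_spiral_image(signal):
    """Map a 1D decay curve onto a square spiral image: the first sample
    sits at the center and successive samples wind outward."""
    n = int(np.ceil(np.sqrt(len(signal))))
    if n % 2 == 0:                       # an odd side length keeps the
        n += 1                           # outward spiral inside the grid
    img = np.zeros((n, n))
    x = y = n // 2                       # start at the center pixel
    dx, dy = 1, 0                        # initial direction: to the right
    seg_len, steps, turns = 1, 0, 0
    for value in signal:
        img[y, x] = value
        x, y = x + dx, y + dy
        steps += 1
        if steps == seg_len:             # end of the current spiral segment
            steps = 0
            dx, dy = -dy, dx             # 90-degree turn
            turns += 1
            if turns % 2 == 0:           # segment length grows every 2 turns
                seg_len += 1
    return img

img = square_spiral_image(np.arange(1, 10))  # 9 samples -> 3x3 image
print(img[1, 1])                             # 1.0 (first sample at center)
```

The early, high-intensity part of the decay ends up concentrated near the center of the image, while the noisy tail wraps around the border.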
This mapping allowed each decay curve to be converted into an image, which can be fed into a U-NET algorithm (Figure 3b). While a thorough description of U-NETs is beyond the scope of this manuscript, it is key to mention that they trade spatial information for feature determination. Simply put, during the contraction path (Figure 3b), the algorithm uses an operation (max pooling) that groups neighboring pixels into one and assigns to it the maximum among the values of the grouped pixels (Figure S5, Supporting Information). This operation is repeated several times and results in the contraction of the image into a vector.
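The max-pool operation itself is straightforward; below is a generic 2×2 version over a small array, an illustration rather than the implementation used here:

```python
import numpy as np

def max_pool_2x2(img):
    """One max-pooling step of the contraction path: each 2x2 block of
    pixels is replaced by a single pixel holding the block's maximum."""
    h, w = img.shape
    return (img[:h - h % 2, :w - w % 2]
            .reshape(h // 2, 2, w // 2, 2)
            .max(axis=(1, 3)))

img = np.array([[1, 2, 5, 6],
                [3, 4, 7, 8],
                [9, 1, 2, 3],
                [4, 5, 6, 7]])
print(max_pool_2x2(img))
# [[4 8]
#  [9 7]]
```

Each application halves the image side, which is how repeated pooling eventually contracts the image into a compact feature vector.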
As such, the disposition of the pixels, and hence the sampling pattern, matters greatly for the performance of a U-NET. [39] For this reason, other 2D time-sampling patterns were explored in this study, but all resulted in poorer performance compared to the spiral (Section S5 and Figure S7, Supporting Information). To train the U-NET, the set of images corresponding to the decay curves with an SNR of 300 (the highest one) was set as the ground truth, while those obtained with lower SNRs were used as input. From the resulting set of 10 500 associations between noisy (SNR < 300) and clean (SNR = 300) signals, 8 400 were randomly selected for training, i.e., to adjust the weights and biases of the U-NET. The remaining 2 100 were set apart for validation, i.e., for tuning the hyperparameters of the U-NET, like the number of hidden layers (Figure S8, Supporting Information). Stabilization of the learning curve was achieved after 70 epochs of training (Section S6, Supporting Information). This approach enabled the U-NET to learn how to suppress noise in the images of the decay curves.
To obtain the value of τ from the image after noise suppression (Figure 4a), the greyscale sum (GSS) of the output image, i.e., the sum of the grey values of all its pixels, was multiplied by the bin size of the time sampling (0.5 ns in our case; further discussion can be found in the Supporting Information). By fitting the thermal dependence of τ to a quadratic function, a thermometric method was established. It is worth emphasizing that the GSS-to-τ conversion step is not strictly necessary: the GSS can be used directly as the thermometric parameter, but we decided to retrieve the value of τ for a more direct comparison with the more classical approaches of signal integration and fitting. The calibration curves obtained for high and low SNR values (77 and 15, respectively) using the U-NET approach (Figure 4b) show a greatly reduced uncertainty at low SNR compared to that observed when fitting or integrating the decay curves. The improvement in the analysis of low-SNR luminescence decay curves can be gauged by comparing how the relative uncertainty δτ/τ changes with temperature and with the SNR. To do so, we studied the thermal dependence of δτ(SNR = 15)/τ(SNR = 77). The results show that, under the explored conditions, the U-NET significantly improves the precision of the lifetime determination (Figure 4c).
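The GSS-to-τ conversion relies on the fact that, for a normalized, background-free decay, the sum of all pixel values times the bin width approximates the integral of the decay, which equals the average lifetime. A quick numerical check, with illustrative numbers:

```python
import numpy as np

tau, dt = 200.0, 0.5                # lifetime (ns) and bin size (ns)
t = np.arange(0, 20 * tau, dt)
decay = np.exp(-t / tau)            # normalized, noise-free decay curve

gss = decay.sum()                   # greyscale sum of the (spiral) image
print(gss * dt)                     # close to tau = 200 ns
```

The spiral mapping does not change this sum, since it only rearranges the same intensity values in space.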
Altogether, these results showcase how the problem of having to consider different calibrations for different SNR levels can be circumvented, freeing the analysis from artifacts introduced by varying experimental conditions. Clearly, the risk of introducing mathematical artifacts and unwanted bias is high with ML, so care should be exercised in verifying, with proper quality checks, that the observed improvement is real.

Testing the Performance of the CNNs
The superiority of the proposed U-NET-based approach and the absence of overfitting (i.e., a situation in which a model captures the noise and peculiarities of one dataset, rather than the underlying patterns and trends, and ends up performing poorly on unseen data) were demonstrated with two experiments simulating challenging real-life scenarios.
The first experiment entailed measuring the temperature of several droplets falling in rapid succession (10 droplets per second) from a syringe filled with a dispersion of Ag2S SNCs kept at a constant temperature (Figure 5a). This setup reproduces the situation of rapid thermal events occurring on a small spatiotemporal scale. The volume of each droplet was ≈25 mm³. The recording of the decay curve took place after the droplet had fallen by 5 mm. It was estimated that the droplet crossed the excitation laser path for ≈17.5 ms (Figure 5b), which results in a reduction of the acquisition time and hence of the SNR of the decay curves. To demonstrate the advantages of employing our U-NET-supported thermometric approach, the measurements were performed setting the temperature of the dispersion in the syringe to 25 and then 35 °C. At least 20 droplets were monitored for each cycle, and two decay curves were acquired and analyzed per droplet with an exposure time of 6 ms. The thermal readouts obtained for each set using the three approaches (area, fit, and U-NET; Figure 5c) are ostensibly different. One can visually infer that the U-NET-based method provides less scattered data and tends to agree better with the expected temperature values (solid lines in Figure 5c). Regarding the last point, since it only took ≈17.5 ms for the droplet to travel 5 mm, it is reasonable to expect that the temperature of the droplet when it passes in front of the laser does not differ much from that of the dispersion in the syringe. The uncertainty in the thermal readout (δT, calculated as the standard deviation of the datapoints at the two temperatures) was between 2 and 10 times lower for the U-NET-based approach (1.02 and 1.96 °C, respectively) than for the approaches based on the integration (8.26 and 10.58 °C, respectively) and fitting (2.70 and 3.46 °C, respectively) of the luminescence decay curves (Figure 5c).
The second experiment investigated how the performance of luminescence thermometry can be improved when measuring through a scattering medium such as a biological tissue. To do so, the thermal relaxation of a fixed droplet of an Ag2S SNC dispersion was recorded through tissue phantoms of different thicknesses (Figure 6a), simulating the challenging conditions of subcutaneous in vivo thermal monitoring using low doses of nanothermometers. The droplet was heated via irradiation with a continuous-wave 1450 nm laser, and the acquisition of the decay curves was started as soon as the heating laser was turned off. The thickness of the phantom was varied from 0 to 3 mm in steps of 1 mm. In this situation, the SNR decreases with increasing phantom thickness due to the photon scattering and absorption events taking place along the optical detection path. Also in this experiment, the use of the U-NET-based analysis afforded less scattered thermal readouts than the fitting and integration procedures, especially with the thicker phantoms (Figure 6b).
To quantify the level of precision in each of these scenarios, the average of the absolute difference between each thermal relaxation curve and its smoothed version (obtained by applying a Savitzky-Golay filter) was calculated. Figure 6c reveals that the U-NET-based analysis outperformed the traditional methods, resulting in a maximum uncertainty of 1.16 °C for the thickest tissue (0.22 °C in the absence of a phantom). Compared with the values provided by the traditional methods (fit, 1.43 °C; area, 5.49 °C), this constitutes a 1.3- and 4-fold improvement in precision, respectively. Such an improvement could mark the difference between meaningfully monitoring, for example, the onset and progression of heating and cooling events in the brain, where processes like seizures, ischemia, and drug consumption can induce slight but measurable temperature changes. [42]
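This precision metric can be sketched as follows; the filter window, polynomial order, and synthetic relaxation curve are assumptions for illustration, not the settings used in this work:

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic thermal relaxation: exponential return toward room temperature
rng = np.random.default_rng(2)
t = np.linspace(0.0, 60.0, 300)                  # time (s)
relaxation = 25.0 + 10.0 * np.exp(-t / 15.0)     # degC, illustrative
noisy = relaxation + rng.normal(0.0, 0.5, t.size)  # noisy thermal readout

# Precision = mean absolute deviation from the Savitzky-Golay-smoothed curve
smoothed = savgol_filter(noisy, window_length=21, polyorder=3)
precision = np.mean(np.abs(noisy - smoothed))
```

Because the smoothed curve tracks the slow thermal trend, the residual essentially measures the readout scatter, i.e., the precision of the method.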

Conclusion
In this work, we have demonstrated the effectiveness of a U-shaped CNN (U-NET) in improving the performance of lifetime-based luminescence sensing. We chose luminescence thermometry as a case study and addressed the challenge of the increased uncertainty in lifetime estimation (i.e., in the thermal readout) as the SNR decreases. To do so, Ag2S SNCs were selected as representative luminescent nanoprobes due to their brightness and lifetime-based thermal sensitivity. By employing a U-NET CNN, we mitigated the issues posed by the Poisson errors generally associated with low-SNR measurements when using the more traditional decay-curve analysis approaches based on curve fitting and integration. Specifically, we achieved precise lifetime estimation also in conditions of noisy signals.
The superiority of our U-NET-supported thermometric approach and the absence of overfitting in conditions of low SNR were confirmed through two experiments. The first involved measuring the temperature of falling droplets, while the second focused on analyzing the thermal dynamics of suspended droplets through several layers of tissue phantom. Analysis of the datasets obtained from these proof-of-concept experiments revealed a significant improvement in precision when using the U-NET approach, particularly at low SNR values.
The proposed approach opens the door to more reliable monitoring, via lifetime-based thermometry, of biological events when the recorded signal is of low intensity due to experimental limitations such as poor nanothermometer brightness, reduced nanothermometer concentration, the presence of opaque media in the optical path, or a reduced excitation power density used to avoid biological tissue damage. Beyond biological applications, U-NET-supported lifetime sensing also holds potential for microfluidics, where the temperature measurement of moving droplets could enhance the understanding of microscale thermodynamics and enable more precise control of chemical reactions.
We believe we are just scratching the surface of what ML approaches combined with luminescence sensing can achieve. ML algorithms are expected to open new, exciting avenues in the simultaneous sensing of several magnitudes and in the identification of (combinations of) sensing parameters that remain reliable despite changes in external conditions. The development of pre-packaged, ready-to-implement solutions for dataset analysis will set the stage for exciting developments in the monitoring of magnitudes like temperature and pressure in challenging conditions and environments.

Experimental Section
Acquisition of Luminescence Decay Curves: Luminescence decay curves were recorded using a TimeHarp 260 time-correlated single-photon counting (TCSPC) module (PicoQuant) coupled with a photomultiplier tube (PMT) detector (H10330C, Hamamatsu). Samples were excited using a pulsed 630 nm laser.

Values of Signal-to-Noise Ratio Selected for Calibration and Training: In the calibration procedure, a range of signal-to-noise ratio (SNR) values was carefully selected to evaluate and compare the different approaches. Specifically, SNR values of 15, 20, 24, 31, 39, 49, 52, 62, and 77 were utilized for this purpose. These SNR values allowed a comprehensive assessment of the performance of the various methods. The SNR value of 300 was reserved as the ground truth during the training of the U-NET model, ensuring a reliable and accurate baseline for the analysis.
Spiral Mapping of Luminescence Lifetime Decay Curves: The spiral mapping was selected because it compresses the temporal information of the signal into a spatial structure that the U-NET can better exploit. This spatial structure, in turn, helped the neural network to more effectively learn and remove the noise from the signal. As shown in Section S5 (Supporting Information), simply arranging the intensities into columns and rows, on the other hand, did not provide as much structural information, which led to a lower performance of the denoising neural network.

Figure 1. Effect of the quality of luminescence decay curves on the estimation of the lifetime. a) Schematic representation of the experimental setup used for the measurement of the luminescence decay curves of Ag2S SNCs. b) Luminescence decay curves obtained under different SNR values (left) and violin representation of the data distribution measured over the last 30% of each decay curve (right). The white dot is the median, the grey box encompasses the 25-75% range of the data, the lines represent the 1.5 IQR (interquartile range), and the kernel density estimates are modelled as Poisson functions. c) Dependence on the SNR of the relative uncertainty on the luminescence lifetime extracted with the MaxLik fitting (fit) and integrated-signal (area) approaches. Circles and squares are datapoints obtained from fitting or integrating 100 normalized decay curves for each SNR value.

Figure 2. Thermometer calibration at different SNRs. a) Calibration datasets (decay curves as a function of temperature) for SNR values of 77 and 15. Only five representative decay curves are plotted. b) Calibration curves obtained from the analysis of the datasets in (a) via the classical curve-analysis approaches of integration (area, left) and exponential fitting (fit, right).

Figure 3. Proposed U-NET for a better estimation of luminescence lifetimes. a) Conversion of decay curves into images by applying a square spiral mapping. b) Flowchart representing the architecture and use of the selected U-NET. Several pairs of images, each consisting of a noisy and a clean signal, are provided to train the neural network, whose architecture consists of a contractive and an expansive path. While the contractive path trades the spatially distributed temporal information for feature determination, the expansive path combines feature and temporal information through a sequence of up-convolutions and concatenations with high-resolution features obtained from the contractive path. By minimizing the loss between the actual clean signal and the one provided by the U-NET, an optimal performance is achieved.

Figure 4. The U-NET improves the precision of the thermometric approach. a) Scheme showing the steps involved in the analysis of the decay curves to obtain the luminescence lifetime value (τ) for the three decay-curve analysis approaches investigated. b) Calibration curves obtained using the trained U-NET for SNR values of 77 (blue circles) and 15 (green diamonds). The values of the GSS are also reported as an additional y-axis on the right. c) Relative uncertainty in the lifetime value at varying SNR as a function of temperature for the three approaches investigated (area, yellow squares; fit, orange circles; U-NET, magenta triangles). The data used for the area and fit approaches are the ones reported in Figure 2b. Subscripts 15 and 77 indicate the SNR value at which the parameter was obtained. δτ is the uncertainty associated with each average lifetime value (τ).

Figure 5. Validating the U-NET-based approach with the thermal readout of a fast event. a) Schematic representation of the experiment. A droplet falls from a syringe containing a dispersion of Ag2S SNCs at a set temperature. The droplet is illuminated by a 630 nm pulsed laser while in free fall. The luminescence decay curves are recorded for further analysis. b) Photographs of a representative droplet during its free fall. The dashed red lines are included as a guide to the eye, indicating the direction of the laser. c) Thermal readout obtained using the different decay-curve analysis approaches. The U-NET is the one providing the least scattered data and most closely matching the temperature of the dispersion reservoir in the syringe (indicated by horizontal black lines: 25 and 35 °C). The datasets are represented using violin plots, where the white dot is the median, the grey box encompasses the 25-75% range of the data, the lines represent the 1.5 IQR (interquartile range), and the kernel density estimates are modeled using the Kernel Smooth function in Origin.

Figure 6. Validation of the U-NET-based approach with a measurement under scattering tissue. a) Schematic representation of the experiment performed. A droplet is left hanging at the tip of a syringe and is illuminated with a 1450 nm laser to induce heating. Immediately after turning off the heating laser, a pulsed 630 nm laser illuminates the droplet to probe its temperature. Tissue phantoms of different thicknesses are placed in front of the detector to naturally reduce the SNR and mimic the conditions usually found in biological applications. The decay curves are detected for further analysis. b) Thermal relaxation profiles acquired by analyzing the luminescence signal, detected after crossing different thicknesses of tissue, via integration of the area under the curve, fitting, or application of the U-NET. c) Dependence of the thermal uncertainty of the readouts on the phantom thickness (top) and relative improvement in precision using the U-NET versus the area and fit approaches (bottom).