Relative contributions of galactic cosmic rays and lunar proton “albedo” to dose and dose rates near the Moon
Harlan E. Spence,
Institute for the Study of Earth, Oceans, and Space, University of New Hampshire, Durham, New Hampshire, USA
Corresponding author: H. E. Spence, Institute for the Study of Earth, Oceans, and Space, University of New Hampshire, Morse Hall, Room 306, 8 College Rd., Durham, NH 03824-3525, USA. (Harlan.firstname.lastname@example.org)
 We use validated radiation transport models of the Cosmic Ray Telescope for the Effects of Radiation instrument and its response to both primary galactic cosmic rays (GCR) and secondary radiation, including lunar protons released through nuclear evaporation, to estimate their relative contributions to the total dose rate in silicon (372 μGy/d) and the dose equivalent rate at the skin (2.88 mSv/d). Near the Moon, we show that GCR accounts for ~91.4% of the total absorbed dose, with GCR protons accounting for ~42.8%, GCR alpha particles for ~18.5%, and GCR heavy ions for ~30.1%. The remaining ~8.6% of the dose at Lunar Reconnaissance Orbiter altitudes (~50 km) arises from secondary lunar species, primarily “albedo” protons (3.1%) and electrons (2.2%). Other lunar nuclear evaporation species contributing to the dose rate are positrons (1.5%), gammas (1.1%), and neutrons (0.7%). Relative contributions of these same species to the total dose equivalent rate at the skin, a quantity of more direct biological relevance, favor those with comparatively high quality factors. Consequently, the primary GCR heavy ion components dominate the estimated skin dose equivalent. Finally, we note that although the Moon blocks approximately half of the sky, thus essentially halving the absorbed dose rate near the Moon relative to deep space, the secondary radiation created by the presence of the Moon adds back a small, but measurable, absorbed dose (~8%) that can and should now be accounted for quantitatively in radiation risk assessments at the Moon and other similar exploration targets.
 Launched serendipitously during the unusually prolonged sunspot minimum that separated solar cycles 23 and 24, the Lunar Reconnaissance Orbiter (LRO) spacecraft [Chin et al., 2007] orbits the Moon while immersed in powerful fluxes of galactic cosmic rays (GCR) still declining from space age highs reached in the late 2009 to early 2010 interval [Mewaldt et al., 2010; Schwadron et al., 2010]. The Cosmic Ray Telescope for the Effects of Radiation (CRaTER) instrument [Spence et al., 2010] on LRO was designed to study the effects of these GCR in the lunar radiation environment. CRaTER makes direct measurements of the ionizing radiation while in low lunar orbit (~50 km altitude), a source known to be responsible both for failures of electronics parts on robotic spacecraft (e.g., single-event upsets of computer memory) and for biological risks to astronauts in deep space (e.g., DNA damage and elevated cancer probability).
 GCR intensities in the inner solar system vary in antiphase with the solar cycle. The Sun's changing magnetic field and solar wind outflow over a solar cycle lead to comprehensive changes in the interplanetary medium, not only close to the Sun but even out to the very edge of our heliosphere, the boundary separating the solar system from the interstellar medium. These conditions in the interplanetary medium modulate the entry of GCR through the heliosphere and into the inner solar system such that GCR intensities at the Earth's orbit minimize near solar maximum and maximize near solar minimum. By most measures, the Sun slowly emerged from an extreme solar minimum of historic significance: the minimum separating cycles 23 and 24 was both deep and prolonged, unprecedented in the space age, as were the commensurate highs in GCR intensities.
 The CRaTER instrument, launched on LRO into this unusually deep and enduring solar minimum period, thereby had the fortuitous opportunity to measure the effects of biologically relevant radiation during a space-age-era “worst-case” scenario. GCR and other species (secondary lunar radiation and solar energetic particles) penetrate the instrument, and their energy deposition is recorded in solid-state detectors (SSDs) as they traverse the telescope. We use CRaTER measurements to estimate linear energy transfer (LET), measured in keV/µm, after passing through carefully chosen amounts of material, including thin aluminum shields and volumes of tissue equivalent plastic (TEP), to mimic conditions needed to assess shielding as well as to quantify the radiation dose relevant for astronaut safety (e.g., eye lens dose and blood-forming organ dose) at the Moon or at other exploration destinations.
Case et al. [2013, this issue] provided the first definitive LET spectra deduced from CRaTER, appropriate for deep solar minimum and, thus, maximal GCR conditions. Zeitlin et al. studied the evolution of the radiation field and LET spectra within the CRaTER instrument, particularly focusing on the physics of GCR heavy ions, and with important implications for radiation shielding. Porter et al. compared the measured CRaTER LET spectra with spectra predicted using the High-Energy Transport Code for Human Exploration and Development in Space model and the relevant input GCR spectrum. Finally, Looper et al. [2013, this issue] validated a sophisticated Geant4 (Geometry and Tracking) model [Allison et al., 2006] of the CRaTER instrument by comparisons of model LET predictions of CRaTER's response to both GCR primaries and lunar secondaries. In this paper, we use the validated model results of Looper et al. to estimate the relative contributions of GCR primary and secondary sources to the radiation environment near the Moon. We specifically restrict our attention to periods when solar activity is absent and, hence, solar protons are not a relevant factor in dose and dose rate; Joyce et al. [2013, this issue] explored the radiation environment during such impulsive solar active periods.
 The earliest energetic particle observations near the Moon predate the Apollo era. Lin used measurements from Geiger-Mueller tubes and an ion chamber on board Explorer 35 to explore lunar influences on energetic particle fluxes. Explorer 35 orbited the Moon in a highly elliptical orbit, with a periselene of 1.46 lunar radii (RL), an aposelene of approximately 5.4 RL, and an orbital period of ~11.5 h. With this orbit, which brought the Explorer 35 energetic particle sensors periodically closer to and then farther from the Moon, Lin established the ways in which the Moon's solid body “shadowed” energetic particles.
 Lin's seminal paper demonstrated that >15 MeV cosmic ray proton fluxes [see Lin, 1968, Figure 1] vary according to relative proximity to the Moon. Specifically, Lin showed quantitatively that GCR proton fluxes are reduced in proportion to the fraction of the sky subtended by the Moon. With Explorer 35's periselene altitude of ~800 km, the Moon filled approximately 1.69 sr of the full sky, representing ~13% of all of deep space from which galactic cosmic rays arrive essentially isotropically. Cosmic ray intensities at periselene show this commensurate reduction, with a return to higher fluxes at higher altitudes where the Moon's solid angle decreases. Subsequent studies by Van Allen and Ness and then by Reiff affirmed the essence of Lin's initial conclusions of energetic proton variations near the Moon caused by its absorbing effects. Most recently, data from the microdosimeter component of the CRaTER instrument on LRO have also been used to quantify this same shadowing effect of the Moon on cosmic ray particle intensities [Mazur et al., 2011].
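Lin's geometric shadowing argument reduces to the solid angle of a spherical cap. The minimal sketch below, which assumes a mean lunar radius of 1737.4 km (a value not given in the text), reproduces the quoted Explorer 35 numbers:

```python
import math

def moon_sky_fraction(altitude_km, r_moon_km=1737.4):
    """Solid angle subtended by the Moon at a given altitude, and the
    fraction of the full 4*pi sr sky that the Moon blocks."""
    # Half-angle of the lunar disk as seen from the spacecraft
    theta = math.asin(r_moon_km / (r_moon_km + altitude_km))
    # Solid angle of a spherical cap: Omega = 2*pi*(1 - cos(theta))
    omega = 2.0 * math.pi * (1.0 - math.cos(theta))
    return omega, omega / (4.0 * math.pi)

# Explorer 35 periselene (~800 km): ~1.7 sr, ~13% of the sky
omega_e35, frac_e35 = moon_sky_fraction(800.0)

# LRO mapping orbit (~50 km): the Moon blocks a far larger fraction
omega_lro, frac_lro = moon_sky_fraction(50.0)
```

At 50 km altitude the blocked fraction approaches the half-sky limit reached at the surface, which underlies the near-halving of the GCR dose rate relative to deep space discussed later in this paper.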
 With insight and considerable foresight, Lin commented that while the observed reduction in >15 MeV GCR proton fluxes near the Moon approximately followed that predicted from simple geometric “shadowing” arguments, the “calculation does not take into account the cosmic-ray lunar albedo contribution to the count rate.” As with Earth's atmosphere, the interactions of primary cosmic rays with the lunar regolith lead to production of secondary by-products through nuclear interactions. It is in this context that Lin used the term “albedo” to refer to those secondary products that move outward from the lunar surface owing to the interactions with GCR. Hereinafter, we use the term albedo in this same manner, rather than the more standard astronomical use of the term for reflected light. Descriptions of the physics of the nuclear process producing this sort of energetic particle albedo, a process termed nuclear evaporation, may be found in the seminal works of Bethe and Weisskopf and in later publications such as Le Couteur and Beard and McLellan.
Schrader and Martina first described how albedo X-ray photons could be used to learn about the composition of the first meter or so of the lunar regolith. Lingenfelter extended this work to show how albedo neutrons could be used to determine the hydrogen content of the regolith by mapping the spatial distribution of flux variations; their work serves as the basis of all subsequent lunar neutron mapping experiments as first prescribed by Feldman et al. Several subsequent missions such as Lunar Prospector [Binder, 1998] and Lunar Reconnaissance Orbiter [Chin et al., 2007] used albedo neutrons to explore the inferred presence of water by appealing to the role that hydrogen plays in mediating neutron escape from the lunar subsurface [e.g., Feldman et al., 1998a, 1998b; Lawrence et al., 2006; Mitrofanov et al., 2010]. Other studies have used models to predict the radiation environment near the Moon, including lunar secondaries in addition to neutrons and gamma rays [Adams et al., 2007; Aikens et al., 2011].
 During the development of the LRO mission, Spence et al. noted that CRaTER should indeed be able to detect albedo protons coming from the lunar surface. However, they anticipated that off-nadir measurements would be required for optimal detection of this population, arguing that there would be a far greater yield of forward-scattered secondaries to LRO altitudes when the viewing geometry was toward the Moon's limb. In these geometries, GCR primaries would create more favorable grazing incidence conditions. Under such viewing, one might expect the equivalent of “limb brightening,” an atmospheric physics phenomenon, but produced here for different reasons. Spence et al. noted that this was considered a secondary science goal of the CRaTER investigation, given that the mission was designed for zenith-pointing geometries for which a lower yield of nuclear evaporation products is expected.
 Despite the less favorable viewing conditions experienced in the main mapping portions of LRO's mission, Wilson et al. identified unique, definitive signatures of lunar albedo protons and used them to create the first map of the Moon imaged with protons produced by nuclear evaporation. In contrast with albedo neutrons, which vary with near-surface composition [e.g., Feldman et al., 1998a, 1998b], i.e., hydrogen content, Wilson et al. were only able to say that the Moon was featureless in proton albedo, at least to within the statistical uncertainties of the data accumulated to that point. Their work established a limit on the degree to which the proton albedo can be used to produce scientifically useful maps. Future proton albedo maps, based on the many additional years of observations and improved statistics beyond the initial Wilson et al. discovery map, may indeed reveal statistically significant features that could, in turn, be associated with spatial localization of materials in the lunar regolith producing different albedo yields.
 In related work, Looper et al. further quantified our understanding of the proton albedo based on modeling. They used a comprehensive physical description of the CRaTER instrument, including detailed geometries of the housing and detectors and their material properties, along with Geant4 to quantify the instrument response to ionizing radiation. Geant4 is a model that calculates the physical interactions of energetic particles in matter, including the resultant ionizing radiation and its transport. Looper et al. used Geant4 to quantify how the simulated CRaTER instrument responds to two ionizing radiation sources: (1) spectra of primary GCR ions consistent with those seen during the LRO mission and (2) spectra of secondary particles coming from the Moon as a result of the primary GCR ions.
Looper et al. created the secondary population by modeling the nuclear interactions of a thick slab of matter possessing the material properties of lunar regolith with spectra of GCR protons and alpha particles illuminating the slab isotropically. They then tracked the generated secondaries coming upward from the surface as a function of angle to produce a spectrum of secondary particles at LRO altitudes (50 km), properly accounting for the decay of some of the unstable secondary particles during transit. The Looper et al. work carefully compared the observed and simulated CRaTER responses to these two sources. The excellent quantitative agreement not only yields important insights into the various physical features seen in the CRaTER observations but also provides definitive validation of the Geant4 model of CRaTER and its interaction with GCR and lunar secondaries.
 In this paper, we use the validated Looper et al. simulations of the CRaTER instrument response to GCR and lunar secondaries to compute the dose and dose rate near the Moon. An advantage of this approach is that the validated model allows us to separate the relative contributions of the many primary and secondary ionizing radiation sources to dose and dose rate that are otherwise partly or completely inseparable in the observations. This data-constrained modeling capability provides new insights into the environment near the Moon, and it should help optimize shielding strategies needed to reduce radiation risks [Zeitlin et al., 2013] and to better understand space weathering [Schwadron et al., 2012; Jordan et al., 2013] and other effects, including deep dielectric charging.
3 CRaTER Measurements
 The CRaTER instrument measures the energy spectrum of ionizing radiation near the Moon using solid-state detectors (SSDs) sandwiching two pieces of TEP. CRaTER employs a bidirectional telescope to measure the energy losses of penetrating particles in three thin-thick pairs of SSDs (see Case et al. [2013, Figure A1] for the telescope geometry). Thin detectors are odd numbered (D1, D3, and D5), and their thick detector pairs are even numbered (D2, D4, and D6). Thin (thick) detectors are designed to record particles with high (low) LET at three locations within the telescope. During normal operations, the D1-D2 detector pair is directed toward zenith, while the D5-D6 detector pair is directed toward nadir (lunar center).
 Each detector operates independently through six separate electronic chains, identifying all ionizing radiation events above a detection threshold in each detector and producing shaped electronic pulse heights related through calibration to deposited energy. When any one electronic chain identifies a pulse height above the threshold, then its pulse height (deposited ionizing energy) and the remaining five detector pulse heights (and hence energies) are determined for that ionizing radiation event. CRaTER's primary data product thus comprises a time-tagged series of energy deposits in each of the six detectors whenever any single detector registers an ionizing event above a settable threshold.
 The particles which deposit energy in CRaTER's SSDs possess high energies and move at substantial fractions of the speed of light. (Low-energy particles such as those in the solar wind and its suprathermal tail do not penetrate the aluminum covers that surround the silicon detectors.) Incident particles which enter the field of view pass along the principal axis of the CRaTER instrument and produce observable energy deposits in one or multiple detectors. However, given the comparatively small spatial separations between detector pairs (typically only several centimeters) along with the high particle speed, any multiple detections for a single ionizing radiation traversal occur within an extremely short time interval (nanoseconds or less). Because the detector electronics take far longer to process signals (microseconds), multiple detections for a single event are, for all practical purposes, recorded simultaneously. We therefore cannot use timing of events between detector pairs to infer particle directionality, an approach used in some lower-energy time-of-flight detection schemes.
 Instead, we appeal to the physics of ionizing radiation energy loss in matter to infer directionality in another way. The Bethe formula describes how a particle loses energy through ionization as it passes through matter. Particles moving at high energies lose fractionally little energy, but they lose more and more energy as they slow and may eventually stop in the matter they traverse. Within CRaTER, we can thus establish ionizing radiation directionality in a statistical sense by exploring energy loss in detector pairs, particularly those pairs separated by an amount of intervening matter that slows the particles substantially (i.e., between D2 and D4 in the zenith direction and between D4 and D6 in the nadir direction).
 Accordingly, CRaTER coincidence measurements are used to infer incident particle directionality. Figure 1 illustrates a two-dimensional histogram of energy deposits in D4 versus energy deposits in D6, for all times when there were coincident signals in both detectors above their threshold energies, but with no detection also in D2 [see also Looper et al., 2013, Figure 2]. This figure shows all such coincident detections occurring during 160 days in late 2009 through early 2010, bracketing the peak of GCR fluxes around solar minimum.
 For any single event that is detectable in both D4 and D6, it is challenging to establish uniquely the incident particle's direction. However, as Figure 1 shows, particle directionality can be determined in a statistical sense because particles traversing the telescope in different directions trace out distinct and separable patterns. Particles coming from zenith leave more energy in the D6 detector, where they are slowing and depositing more energy than they did when traversing D4. Conversely, protons coming upward from the Moon leave increasingly more energy in D4 while leaving a lower, nearly constant energy in D6. Wilson et al. described the technique for directionality identification by appealing to these unique multiple coincidence “fingerprints.” In our work, however, we appeal directly to modeling results for which particle directionality is known a priori, noting that these models have been validated through comparison with observations, including albedo particles identified using the Wilson et al. technique.
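The statistical directionality argument can be illustrated with a toy classifier. This is not the Wilson et al. fingerprint technique, which works on the full two-dimensional histogram; it only encodes the basic asymmetry: for a D4-D6 coincidence with no D2 hit, a zenith-going particle is slowing as it travels D4 to D6 and tends to deposit more energy in D6, while a nadir-going albedo particle tends to deposit more in D4.

```python
def infer_direction(e_d4_mev, e_d6_mev):
    """Toy single-event classifier for a D4/D6 coincidence (no D2 hit).

    Zenith-going particles slow as they move D4 -> D6 and so deposit
    more energy in D6; nadir-going (albedo) particles do the reverse.
    Real events separate cleanly only in a statistical, population
    sense, not event by event.
    """
    if e_d6_mev > e_d4_mev:
        return "zenith"   # consistent with a primary GCR particle
    if e_d4_mev > e_d6_mev:
        return "nadir"    # consistent with a lunar albedo particle
    return "ambiguous"
```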
4 Dose Rate Estimates
 Figure 1 demonstrates how CRaTER measurements are used to infer incident particle directionality. The data in Figure 1 include 160 days of CRaTER observations, spanning the period from when LRO first entered its mapping orbit on 16 September 2009 through 6 March 2010, just before the first Forbush decrease of the LRO mission (see Looper et al. for more details). Figure 2 is the equivalent D4-D6 crossplot but, this time, based on the Geant4 simulation of Looper et al. The Looper et al. model used a single GCR spectrum chosen to represent the deep solar minimum conditions which bracketed the peak GCR fluxes. Clearly, the simulated response bears a very close resemblance to the observations, reproducing the signature characteristics associated with the primary GCR protons and heavy ions coming from zenith, as well as the albedo protons and other secondary radiation coming from nadir.
 Figure 3 quantifies the comparison and establishes a strong validation of the modeling results. Whereas the observations cannot easily distinguish between the various constituents, such accounting is easily accomplished in the model output. The figure (see also Looper et al. [2013, Figure 4] for details) shows the simulated LET spectrum for the primary GCR populations of protons, alphas, and all heavy ions combined (colored dashed curves) for the detector pair without the coincidence requirements of Figures 1 and 2. The albedo populations are also shown (solid curves); each albedo component provides a significant fraction of the total, at least in certain portions of the LET range. We sum the model components to arrive at an integrated simulated spectrum (solid black curve). The simulated prediction demonstrates remarkable agreement with the actual observed LET spectrum (solid pink curve), matching spectral shapes and features as well as intensities spanning over 5 orders of magnitude. Subtle differences between the model-derived LET spectra and the observed one may be interesting but are negligible in the context of the following dose and dose rate calculations.
 Having demonstrated the high fidelity of the Looper et al. model, we next use their Geant4-derived LET spectra to estimate the radiation dose and dose rates of the primary GCR and secondary albedo sources. Absorbed dose has the SI unit of gray (Gy), where 1 Gy = 100 rad, and is a measure of the energy (joules) deposited per unit mass (kg) in matter. We focus on the energy deposits made in the D5-D6 detector pair situated just behind the nadir aluminum shield to quantify the dose and dose rate at shallow depths behind a thin wall of metal (i.e., an estimate of skin dose behind an astronaut's space suit). We choose the nadir direction as it is most sensitive to the lunar albedo. As noted in Spence et al., CRaTER's nadir detector shield is composed of an aluminum alloy (6061-T6) with a mass density of 2700 kg/m3 (or in cgs units, 2.70 g/cm3) and a thickness of 810.3 µm, representing a shield with a column density of ~2.2 kg/m2 (more commonly expressed in cgs units as ~0.22 g/cm2). For comparison, the equivalent aluminum column density of a space suit's fabric, ~3.0 kg/m2 (~0.30 g/cm2), is comparable [Wilson et al., 2006] to the thin wall shielding the nadir-viewing D5-D6 detectors.
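As a check on the shield numbers, the column (areal) density is simply the mass density times the thickness; this minimal sketch uses the shield properties given in the text:

```python
def areal_density_g_cm2(density_g_cm3, thickness_um):
    """Column (areal) density of a slab shield: rho * t, in g/cm^2."""
    return density_g_cm3 * thickness_um * 1.0e-4  # convert um to cm

# CRaTER nadir shield: 6061-T6 aluminum, 2.70 g/cm^3, 810.3 um thick
shield_g_cm2 = areal_density_g_cm2(2.70, 810.3)  # ~0.22 g/cm^2 (~2.2 kg/m^2)
```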
 The total dose rate (in silicon) estimated from the Geant4 simulation of CRaTER's D5-D6 detector combination is 372 μGy/d, or an annual dose of 0.136 Gy. These values are the daily dose rate and equivalent annual dose for modeled conditions representative of the period bracketing the recent solar minimum. Joyce et al. used CRaTER measurements to compute the total dose rate in the zenith detector pair (D1-D2) over the course of the mission. The data-derived dose rate maximized in the same late 2009 to early 2010 interval with a value of ~320 μGy/d [Joyce et al., 2013]. One possible explanation for the lower D1-D2 dose rate compared to the D5-D6 dose rate is the lunar albedo source, which does not reach D1-D2. The LRO dose rate is also comparable to the dose rate in silicon of 332 μGy/d reported by Zeitlin et al., measured by the Mars Science Laboratory (MSL) while in its cruise phase in interplanetary space during solar quiet conditions between late 2011 and summer 2012. Differences between the LRO and MSL dose estimates are expected for several reasons.
 Since LRO is near the Moon, which shadows the GCR, CRaTER's measured dose rate should be reduced compared to that of deep space. However, as noted above, because of its proximity to the Moon, CRaTER experiences an albedo source absent in deep space, which enhances its dose. Given that the Moon was closer to the Sun than MSL was, on average, during its cruise from Earth to Mars, we might also expect a small difference in GCR intensity at the two spacecraft owing to the radial gradient of GCR with distance from the Sun. Also, the simulated CRaTER doses are representative of the deepest minimum phase of the solar cycle, when GCR intensities were at their absolute maximum, compared to later times in the cycle during MSL's cruise phase, when the GCR intensity had subsided considerably from space age highs. Finally, the CRaTER dose estimates are behind minimal shielding compared to even the lightly shielded estimates of Zeitlin et al. These several competing factors likely account for the observed difference.
 Figure 4 shows a pie chart by species of the absorbed dose rate (in silicon) as a percentage of the individual contributions to the total dose rate. The GCR protons dominate the dose rate, accounting for nearly 43% of the total. The GCR alphas are the next largest single contributor, delivering 18.5% of the dose rate. Collectively, all other heavier GCR ions add another ~30%. Together, the GCR alone account for over 90% of the total absorbed dose rate; the remaining fraction (~8.6%) comes from the albedo populations.
 The bar histogram in Figure 4 further subdivides the albedo portion. Albedo protons contribute more than a third of the albedo absorbed dose rate in silicon, and electrons approximately one quarter. The remaining dose rate is delivered by a combination of albedo positrons, gammas, and neutrons, in order of decreasing fraction. Note that the albedo neutrons constitute less than 1% of the total absorbed dose rate in silicon (and total dose) at LRO altitudes.
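The albedo subdivision quoted above follows directly from the species percentages given earlier (values rounded as in the text):

```python
# Species contributions to the total absorbed dose rate in silicon (%),
# as quoted in the text for LRO altitudes
albedo_pct = {"protons": 3.1, "electrons": 2.2, "positrons": 1.5,
              "gammas": 1.1, "neutrons": 0.7}
total_albedo_pct = sum(albedo_pct.values())  # ~8.6% of the total dose rate

# Fraction of the albedo-only dose rate carried by each species:
# protons carry just over a third, electrons roughly a quarter
albedo_fractions = {k: v / total_albedo_pct for k, v in albedo_pct.items()}
```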
5 Dose Equivalent Rate Estimates in Skin
 We next use the same modeled and validated LET spectra to estimate the dose equivalent rate in skin, a quantity of more direct biological relevance. To do so, we must first relate the dose rate measured (or, more accurately, modeled) in silicon to the dose rate in water in order to convert between what is measured by CRaTER's solid-state (silicon) detectors and what the dose would have been in water, taken as an approximation of human tissue. For this conversion, we use the approach of Benton et al. to convert between different forms of LET. The Benton et al. conversion is based on earlier measured range-energy relationships of ions in water and in silicon [Henke and Benton, 1967; Benton and Henke, 1969], yielding a functional relationship between the LET measured in Si and the inferred LET in H2O. We use the relationship provided in Benton et al. [2010, equation 3] for conversion between the measured LET in Si and that inferred in H2O: log10(LET_H2O) = −0.2902 + 1.025 log10(LET_Si).
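The Benton et al. [2010, equation 3] silicon-to-water conversion can be sketched as:

```python
import math

def let_si_to_water(let_si_kev_um):
    """Convert LET measured in silicon to the inferred LET in water
    using Benton et al. [2010, equation 3]:
    log10(LET_H2O) = -0.2902 + 1.025 * log10(LET_Si)."""
    return 10.0 ** (-0.2902 + 1.025 * math.log10(let_si_kev_um))
```

For example, an LET of 10 keV/µm in silicon maps to roughly 5.4 keV/µm in water.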
 That intermediate conversion yields estimates of dose and dose rate in water, but still in the fundamental physics units of gray. In order to estimate the biological radiation risks of that absorbed dose in water, we go one step further by calculating what is referred to as the dose equivalent, a quantity that can be estimated for any particular human organ or tissue type (i.e., skin, liver, lung, etc.). Dose equivalent is expressed in the SI units of sieverts (Sv), where 1 Sv = 100 rem, and is related to absorbed dose through an LET-dependent factor known as the quality factor, Q. We estimate the total dose equivalent at the skin by integrating, over the full range of LET, the product of the differential LET spectrum for each species and the LET-dependent quality factors. In this work, we adopt quality factors as specified in the International Commission on Radiological Protection (ICRP) Report 60, QICRP60.
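The QICRP60 quality factor is defined in ICRP Publication 60 as a piecewise function of unrestricted LET in water, L (keV/µm): Q = 1 for L < 10, Q = 0.32L − 2.2 for 10 ≤ L ≤ 100, and Q = 300/√L for L > 100. A minimal implementation:

```python
import math

def q_icrp60(let_water_kev_um):
    """ICRP Publication 60 quality factor Q(L) for unrestricted
    LET in water, L, in keV/um."""
    L = let_water_kev_um
    if L < 10.0:
        return 1.0
    if L <= 100.0:
        return 0.32 * L - 2.2
    return 300.0 / math.sqrt(L)
```

Q rises steeply through the heavy ion LET range, which is why the GCR heavy ions dominate the skin dose equivalent.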
Schwadron et al. [2013, Figure 1] in this special issue reproduced graphically this LET-dependent form of QICRP60, described in ICRP Publications 60 and 92 [International Commission on Radiological Protection (ICRP), 1991, 2003] and, most recently, in ICRP Publication 123 [ICRP, 2013].
 We use the LET conversion and LET-dependent quality factors to estimate the dose equivalent rates (mSv/d) at the skin from the absorbed dose rates (mGy/d) in silicon using the modeled response of the CRaTER D5-D6 detector pair. For the time interval modeled near the space age peak in GCR intensity, the total dose equivalent rate at the skin is, on average, ~2.88 mSv/d, or an annual dose equivalent at the skin of ~1.05 Sv. For context, this represents ~2 times the occupational limit for terrestrial radiation workers and approximately one third the annual exposure limit for astronauts. Our value compares well with recent estimates by Cucinotta et al. and Zeitlin et al., especially given that their studies considered different combinations of location, time in the solar cycle, and amount of shielding than ours. It is worth noting that we have estimated the annual skin dose equivalent behind a very small amount of shielding (equivalent to a space suit's fabric). Clearly, more effective shielding strategies would be employed while on the Moon or in interplanetary space to reduce skin dose risks to well below NASA limits.
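The context figures quoted above can be checked arithmetically. Note that the reference limits used here (a 500 mSv/yr skin equivalent-dose limit for terrestrial radiation workers and a 3 Sv annual skin limit for astronauts) are assumed values for illustration, not stated in the text:

```python
daily_mSv = 2.88                          # modeled skin dose equivalent rate
annual_Sv = daily_mSv * 365.25 / 1000.0   # ~1.05 Sv/yr at the skin

# Assumed reference skin limits (not given in the text)
worker_limit_Sv = 0.5      # terrestrial radiation worker, per year
astronaut_limit_Sv = 3.0   # astronaut annual skin limit

ratio_worker = annual_Sv / worker_limit_Sv        # ~2x the worker limit
ratio_astronaut = annual_Sv / astronaut_limit_Sv  # ~one third of the limit
```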
 Finally, we note that because dose equivalent scales so strongly with high values of Q, and since at relativistic velocities LET scales approximately with the square of the particle charge, the dose equivalent at the skin is dominated by the heavy ions, a well-known result. Whereas the light ions contribute moderately to absorbed dose and dose rate, they contribute far less substantially to dose equivalent and dose equivalent rate. Furthermore, because the albedo populations generally have inherently low Q values, their collective contribution to the dose equivalent rate (and dose equivalent) at the skin at LRO altitudes is even less significant than their contribution to absorbed dose.
6 Conclusions and Summary
 We use the validated model developed by Looper et al., which simulates both the primary GCR and secondary albedo ionizing radiation sources experienced by the LRO CRaTER instrument, to estimate absorbed dose and dose equivalent rates near the Moon. During the period near the last solar minimum, when GCR intensities were greatest, the total dose rate in silicon at LRO altitudes is estimated to be 372 μGy/d, or an equivalent annual dose of 0.136 Gy; the total dose equivalent rate at the skin is estimated to be ~2.88 mSv/d, or an annual dose equivalent at the skin of ~1.05 Sv. Furthermore, we show that GCR accounts for ~91.4% of the total absorbed dose, with GCR protons accounting for ~42.8%, GCR alpha particles for ~18.5%, and GCR heavy ions for ~30.1%. The remaining ~8.6% of the dose at LRO altitudes (~50 km) arises from secondary lunar species, primarily albedo protons (3.1%) and electrons (2.2%). Other lunar nuclear evaporation species contributing to the dose rate are positrons (1.5%), gammas (1.1%), and neutrons (0.7%). Relative contributions of these same species to the total dose equivalent rate at the skin, the quantity of direct biological relevance, overwhelmingly favor those with comparatively high quality factors, i.e., the heavier ions. Consequently, the heavy ion GCR components increasingly dominate the dose equivalent at the skin, with the albedo components as comparatively negligible contributors.
Finally, we note that when considering the lunar radiation environment, although the Moon blocks approximately half the sky, thus essentially halving the absorbed dose rate near the Moon relative to deep space, the secondary radiation created by the presence of the Moon adds back a small, but measurable, amount (~8%) of absorbed dose that can and should now be accounted for quantitatively in radiation risk assessments at the Moon and other similar exploration targets that are solid, airless bodies (such as other planetary moons, including icy moons, and asteroids).
 We thank all CRaTER and LRO team members whose dedication, skills, and labor made this experiment and mission possible. This work was funded by the NASA Science Mission Directorate under contract NNG05EB92C.