Background: ‘Lower colour metrics’ describes the laws of colour mixture as manifest in trichromatic colour space and best known in its two-dimensional projection, the chromaticity diagram. ‘Higher colour metrics’ describes how distance in this colour space translates into perceptual difference. It is higher in the sense that it builds on the fundamentals of lower colour metrics.
Methods: A historical account is given of the development of higher colour metrics, with many ups and downs, since Helmholtz started it at the end of the 19th Century.
Results: Despite long periods of silence, Helmholtz’s basic ideas have survived through successive extensions of the model, which could also account for seemingly paradoxical effects of luminance and saturation on colour discrimination.
Conclusion: The subject, presently at a low tide of interest, deserves renewed attention from colour vision researchers.
A few years ago Mollon1 wrote a fascinating chapter on the history of the trichromatic basis of colour vision in the new edition of The Science of Colour. Fascinating as a history, but as ‘past’ history, because now virtually all disagreements seem to have disappeared. There is consensus about trichromacy, its basis in three types of cone receptors and their action spectra. The fact that the International Commission on Illumination (CIE) is now engaged in defining a standard fundamental colour observer2 in terms of an agreed set of these action spectra speaks volumes. A subsequent challenge in colour metrics, which Mollon did not touch on, is the problem of what Schrödinger3 has called ‘higher colour metrics’: the problem of how to understand the non-uniformity of the colour mixture diagram. To formulate it differently (Figure 1), why are there such large local differences in size, shape and orientation of the MacAdam4 ellipses representing just perceptible colour differences?
This paper concentrates on that question. The presentation of just perceptible colour differences in the chromaticity diagram, however familiar, is poorly suited to a scientific discussion in terms of mechanisms of colour vision. The representation of colours by the rather artificial (x,y) notation was chosen by the CIE for its convenient pictorial overview, with white in the centre and blue, green and red far apart in the corners. Model calculations were therefore done in terms of the excitation levels of the three cone systems: L (long wavelength sensitive), M (middle wavelength sensitive) and S (short wavelength sensitive). I will return to the familiar CIE chromaticity diagram only to verify model predictions.
The basic cue to the question of why the MacAdam ellipses vary so much in size, shape and orientation was given by Helmholtz in his last two papers on vision,5,6 at the end of the 19th Century. However, his line of thinking, though still continued, has never really taken root. One cannot find a trace of a reference to this theoretical concept in Smith and Pokorny’s7 chapter on colour discrimination in The Science of Colour, in Brainard’s8 subsequent chapter on colour differences or in its counterpart by Eskew, McLellan and Giulianini9 in another recent book on colour vision, Color Vision: From Genes to Perception. In view of the major advances in neural research techniques, it is understandable that the interest of colour vision scientists has drifted away from simple happenings at receptor level to complex neural processing. Quantitative understanding, however, has come much more from the line of thinking started by Helmholtz. Therefore, it seems worthwhile to present a retrospective that offers insights as fascinating as those in Mollon’s review of lower colour metrics. In this paper, I will try to induce the reader to follow me along the sometimes seemingly unconnected and often crooked pathways that so often mark scientific progress, to end at the present state of affairs in higher colour metrics. To help the reader keep track of where we are, Figure 2 gives an outline with an indication of the sections in this paper.
This paper is a personal account in more than one way. In the first place, it is a historical review, including my own contribution to the subject. It is written in the hope that others might be willing to pick up where our generation has left off. Second, it is a tale illustrated partly with some direct or indirect oral history, the author having had the good luck to have been born on a continent where at that time ‘things happened’. Finally, it is a personal view on a subject that, apparently, does not (yet) belong to the scientific canon. Before starting, it is fitting to refer to the still valuable survey of Stiles10 on the same subject a third of a century ago, though with a quite different angle of incidence.
EMERGENCE OF THE ENERGY CONCEPT AND THE MINIMUM PERCEPTIBLE
We start with the young Helmholtz, when still an active military physician. He spent his ‘lost’ hours on hospital night watch studying the reach of the then just emerging concept of conservation of energy, both in inanimate nature and in living organisms. The result of his studies was a beautiful booklet Über die Erhaltung der Kraft,11 published in 1847, which was to become the keystone to that conservation law. A few years earlier, Robert Mayer12 and Joule13 had laid the basis by establishing the mechanical equivalent of heat but Helmholtz gave the equivalence concept its broader significance by widening it to the domains of mechanics, heat, electricity, radiation, chemistry and even plant and animal metabolism. Soon after, the not yet crystallised term ‘energy’ became a generally accepted basic concept in science, opening completely new fields of scientific research. One of these was the development of ever more sensitive instruments to convert radiant energy into electrical current: the bolometer, the thermopile, the photoelectric cell et cetera. In fact, Mollon1 hardly paid attention to the philosophy behind and the development of these sensors, which were crucial for the development of photometry and colorimetry and thereby for the elaboration of the Young-Helmholtz trichromatic colour vision model.
Another consequence was that sooner or later someone should become intrigued by the question of how sensitive our own radiant to neural energy converter, the retina, is in comparison with the newly developed radiometers. In 1905 Zwaardemaker,14 Donders’s successor in physiology, published a survey paper on that subject. He concluded that the retina, able to detect some 10⁻¹⁷ J under favourable circumstances, was the winner by far. As a physiologist, he could be content but he could not attach special significance to this finding, which was just another number, though very small indeed. Only when the existence of discrete radiant energy particles (quanta or photons), hesitantly suggested by Planck15 a few years before, was proved to be a reality by Einstein16 in that very same year did this 10⁻¹⁷ J become an interesting finding. It translated into some 25 photons and, after accounting for light losses between cornea and receptors, it became clear that only a very few photons sufficed to evoke an impression of light. The ultimate consequence was that only one photon sufficed to excite a receptor: the retina was not just a relative winner, it was an absolute winner. Even in the future, other instruments could not be expected to surpass the retina. Nowadays, we know this finding to be of great interest in terms of signal detection in photon or thermal noise but in those days apparently nobody realised this consequence. Zwaardemaker’s17 final conclusion that the retina needs only two quanta to produce a light sensation got completely lost as just another not too interesting curiosity in science. Further on we will pick up this theme again but first we will go back to Helmholtz, whom we left when he generalised the validity of the energy conservation law to the whole inanimate and animate domain.
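The translation from energy to photon numbers is simple arithmetic; the following sketch assumes a wavelength of about 510 nm, an illustrative choice near the peak of scotopic sensitivity (not a value taken from Zwaardemaker).

```python
# How many photons carry Zwaardemaker's 1e-17 J of visible light?
h = 6.626e-34            # Planck constant, J*s
c = 2.998e8              # speed of light, m/s
wavelength = 510e-9      # metres; illustrative choice near scotopic peak
photon_energy = h * c / wavelength    # about 3.9e-19 J per photon
n_photons = 1e-17 / photon_energy     # on the order of 25 photons
print(n_photons)
```

With roughly half to two-thirds of the incident photons lost between cornea and receptors, the step from ‘some 25 photons at the cornea’ to ‘a very few effective photons’ follows directly.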
FROM LOWER TO HIGHER COLOUR METRICS: THE HELMHOLTZ LINE ELEMENT
This energy conservation law, which at the same time was a well-circumscribed definition of the concept of energy, enabled Helmholtz to pick up Young’s three-receptor view of trichromacy and to convert it into a quantitative model, complete with receptor action spectra. Well, it was not entirely complete. Interestingly, Helmholtz never quantified the intensity ordinate in his graphical plots. It remained the task of others to gradually fill up Helmholtz’s grand view with solid experimental data. That holds even more for Helmholtz’s ideas on higher colour metrics. Long before MacAdam,4 Helmholtz5,6 had realised that colour space, as defined by the colour mixture laws, could not be uniform in the sense that equal distances meant equal perceptual differences. He was aware that it was the rate of change in the ratio of the excitations of the three cone systems that determined colour discrimination. Moreover, he was familiar with Riemann’s18 work on non-Euclidean geometry. In his final work on physiological optics, he applied that knowledge to define ‘a line element in colour space’. This essentially describes the future MacAdam ellipses, which together define the experimental local metrics in the colour triangle. In Helmholtz’s view small perceptual distances ds in colour space should be defined (in modern terminology) by

ds² = (dL/JND L)² + (dM/JND M)² + (dS/JND S)²    (1)
with L, M, S the excitation levels of the respective cone systems and ‘JND’ meaning ‘just noticeable difference’. Phrased a bit more transparently, the small differences in excitation levels (dL, dM, dS) first have to be weighted according to their JNDs and then quadratically added, as if these JNDs were intrinsic signal uncertainties. Helmholtz assumed the three receptor systems to behave similarly, according to Weber’s law, with accuracies proportional to the respective cone excitation levels, that is, JND L = σL, JND M = σM, JND S = σS:

ds² = (1/σ²) [(dL/L)² + (dM/M)² + (dS/S)²]    (2)
On the basis of this line element, in fact the elastic yardstick to measure small, near-threshold distances in colour space, Helmholtz could calculate the spectral course of wavelength discrimination as

δλ = σ / √[(d lnLλ/dλ)² + (d lnMλ/dλ)² + (d lnSλ/dλ)²]    (3)
a formula that reflects Helmholtz’s above-mentioned notion that it is the rate of change with wavelength in the three cone system excitation levels that determines wavelength discrimination. Discrimination is optimal where the fundamental sensitivities cross steeply and deteriorates at the spectral flanks, where the fundamental sensitivities tend to a parallel course.
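The Weber-weighted quadratic summation of the Helmholtz line element is easily made concrete. In the sketch below the Weber fraction σ = 0.02 is an illustrative value, not one fitted to data.

```python
from math import sqrt

def ds_helmholtz(cones, d_cones, sigma=0.02):
    """Helmholtz line element: each cone difference is weighted by its
    Weber-law JND (sigma * excitation) and the results added quadratically.
    sigma = 0.02 is an illustrative Weber fraction, not a fitted value."""
    return sqrt(sum((d / (sigma * x)) ** 2
                    for x, d in zip(cones, d_cones)))

# A 1 per cent change in a single cone signal alone is half a JND here
d1 = ds_helmholtz((1.0, 1.0, 1.0), (0.01, 0.0, 0.0))
# Weber's law: scaling excitations and differences together changes nothing
d2 = ds_helmholtz((10.0, 10.0, 10.0), (0.1, 0.0, 0.0))
```

The second call illustrates why this metric is ‘elastic’: only relative differences count, so the same relative step is the same perceptual distance at any luminance.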
Unfortunately, Helmholtz was too far ahead of his time, both in the mastery of mathematics needed to convince his readers and in the availability of experimental data to reliably verify his predictions. His successors considered this line element theory, which had been given a prominent place in the second edition of Helmholtz’s famous handbook,19 a blemish on the record of their venerated teacher and therefore skipped the corresponding chapter from the third, posthumous edition, an act that Stiles10 adduced ‘as a reminder to all editors of the hazards of pruning great works’. As a result, Helmholtz’s work on this theme was almost forgotten; almost, because occasionally someone did rediscover this real treasure.
The first of these was Schrödinger,3 the famous founder of quantum mechanics. He noted that Helmholtz’s line element, though a basically correct construction, could not be entirely correct, because one of its consequences was that lines of equal colour (‘isohues’) and lines of equal brightness (‘isophotes’) were not mutually perpendicular (‘orthogonal’). One can easily see that by rewriting Equation (2) as

ds² = (1/σ²) [(d lnL)² + (d lnM)² + (d lnS)²]    (4)
which says that the colour space is Euclidean on a (lnL, lnM, lnS) basis. In the dimensionally reduced (L,S) plot of Figure 3, which is a representation of the colour space of deuteranopes, it is immediately clear that isophotes (L + S = constant) are far from orthogonal to isohues (L : S = constant).
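That failure of orthogonality can be verified numerically: under the Helmholtz metric the angle between two directions follows from a Weber-weighted inner product. The sketch below works in the deuteranope (L,S) plane with arbitrary illustrative excitation values.

```python
from math import sqrt

def dot(u, v, L, S):
    """Inner product under the Helmholtz metric ds^2 = (dL/L)^2 + (dS/S)^2."""
    return u[0] * v[0] / L**2 + u[1] * v[1] / S**2

def cosine(u, v, L, S):
    return dot(u, v, L, S) / sqrt(dot(u, u, L, S) * dot(v, v, L, S))

isophote = (1.0, -1.0)             # tangent to L + S = constant
L, S = 3.0, 1.0                    # an off-white chromaticity (illustrative)
isohue = (L, S)                    # tangent to L : S = constant (radial)
c_off = cosine(isophote, isohue, L, S)            # clearly non-zero
c_white = cosine(isophote, (1.0, 1.0), 1.0, 1.0)  # zero on the white axis
```

Only on the achromatic axis (L = S) do isophote and isohue directions come out orthogonal; everywhere else the cosine is distinctly non-zero, which is Schrödinger’s objection in miniature.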
This was no mere academic objection in terms of mathematical elegance, because as a consequence it predicts that small differences in colour might be compensated by small differences in luminance. If, to avoid this paradox, one sticks to the orthogonality requirement and consequently defines constant luminance by L·M·S = constant (interrupted line in Figure 3), this understandably leads to erroneous consequences for the shape of the photopic luminosity function, Vλ. Substituting L = Iλ · Lλ, et cetera (indicating that, with monochromatic light, the excitation of a cone system is equal to the product of spectral intensity and spectral sensitivity) in L·M·S = constant, we obtain

Vλ = constant · (Lλ · Mλ · Sλ)^1/3    (5)
As a matter of fact, Vλ in this Helmholtz reconstruction shows a camel-back shape (the exaggeration is Schrödinger’s), clearly deviating from the experimental course (Figure 4).
Schrödinger discovered a way out by postulating a different version of the line element

ds² = [1/(L + M + S)] · [(dL)²/L + (dM)²/M + (dS)²/S]    (6)
a construction that implies a two-stage JND mechanism: first at the receptor level (√L, √M, √S) and then at a higher level of neural processing, where L, M and S have already combined into the luminance signal L+M+S. With this line element, Schrödinger circumvented the orthogonality problem without actually changing too much in the predictions on colour discrimination, and produced a neatly additive Vλ (Figure 4). The calculations are based on present-day views on cone primaries.20
Whereas Helmholtz’s line element had a firm functional basis in Weber’s law, Schrödinger’s was a purely mathematical concoction. By defining the part in square brackets, he attained the desired orthogonality between isohues and isophotes, and by introducing the calibration factor 1/Luminance, he managed to comply with Weber’s law: in the dimensionally reduced (√L, √S) diagram of Figure 5, the expansion of the JND circles just keeps pace with the divergence of the isohues.
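How the 1/Luminance factor buys Weber’s law can be shown in a few lines: scaling stimulus and difference together leaves the Schrödinger distance unchanged. The sketch uses unit cone weights, a simplification of Schrödinger’s construction.

```python
from math import sqrt

def ds_schrodinger(L, M, S, dL, dM, dS):
    """Schrodinger's line element, simplified to unit cone weights:
    square-root (photon-like) terms calibrated by 1/Luminance."""
    return sqrt((dL**2 / L + dM**2 / M + dS**2 / S) / (L + M + S))

d1 = ds_schrodinger(1.0, 1.0, 1.0, 0.03, 0.0, 0.0)
d2 = ds_schrodinger(100.0, 100.0, 100.0, 3.0, 0.0, 0.0)  # scaled 100-fold
# d1 == d2: the same relative difference is the same perceptual distance
```

The bracketed part alone grows as √Luminance under proportional scaling; dividing by Luminance removes exactly that growth, which is the ‘calibration’ the text describes.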
Contemporary colour scientists must have realised the artificiality of Schrödinger’s line element (‘it worked but do not ask why’) and Schrödinger’s strongly theoretical work has drawn even less attention than Helmholtz’s original paper. In retrospect, with its introduction of the square root construction, it is a historically interesting study. For the moment, a different pathway has to be followed.
THE PHOTOCHEMICAL BASIS OF VISION
At the time that Zwaardemaker discovered that the minimum perceptible was close to one photon, visual scientists were more fascinated by the unravelling of the biochemical excitation mechanism in the photoreceptors. Under the inspiring guidance of Selig Hecht, a new ‘hype’ started: interpreting a great variety of visual functions (acuity, contrast sensitivity, flicker fusion et cetera) in terms of a photochemical equilibrium between the photopigments and their photolytic products. In retrospect, this theoretical concept could not but fail. With only one photon sufficing to trigger a receptor, it does not make sense to speak of a chemical equilibrium. Apparently, Hecht and his coworkers were not acquainted with Zwaardemaker’s finding, so their erroneous frame of thinking dominated the vision research scene for some two decades with, admittedly, a wealth of valuable experimental data, collected to substantiate their faulty theories. The photochemical equilibrium theory died a silent death after Jahn’s two comprehensive papers21,22 in 1946. Curiously, in 1941 the same Hecht, together with his coworkers Shlaer and Pirenne,23 rediscovered that only a few photons sufficed to trigger a visual response. How they reached this conclusion will be the subject of the next section. There is no indication that they realised that this finding would undermine most of their earlier interpretations.
IDEAL SIGNAL IN NOISE DETECTION
Hecht’s work on the minimum perceptible and the almost simultaneous but entirely separate work by Van der Velden24 (it was wartime and contacts were broken) were based on an approach different from Zwaardemaker’s. Zwaardemaker measured incident energy and thus had to correct for postulated but uncertain losses. Hecht, Shlaer and Pirenne23 and Van der Velden24 could circumvent this problem by starting from the knowledge that the minimum perceptible should be counted in terms of a very limited number of photons, N ± √N = N (1 ± 1/√N), and that the intrinsic inaccuracy ±√N in N should result in an intrinsic inaccuracy in the visual threshold. Therefore, they determined the intrinsic ‘relative’ inaccuracy in the visual threshold and interpreted that as ±1/√N. It is not our aim to further discuss the minimum perceptible problems. Suffice it to say that all authors, often using sophisticated refinements in their approach, agreed on 2 ≤ N < 10, that is, N is rather small but certainly more than one. As two photons, even when coming from a point source, cannot be expected to hit the same receptor, the retina must use coincidence circuits, a well-known means to suppress noise. Despite differences in opinion on the exact value of the minimum perceptible, there was consensus that the visual system acts as a sophisticated signal-in-noise detector. That conclusion gave the finding the wider significance that was absent in Zwaardemaker’s time.
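The ±√N reasoning is easy to reproduce with a simulated photon counter. The sketch below uses Knuth’s standard Poisson sampler; the quantum count N = 6 is an arbitrary illustrative value within the 2 ≤ N < 10 range, not one of the historical estimates.

```python
import random

def poisson(mean, rng):
    """Knuth's Poisson sampler, adequate for small means."""
    limit = 2.718281828459045 ** (-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(42)
N = 6                                  # illustrative quantum count
samples = [poisson(N, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
spread = (sum((x - mean) ** 2 for x in samples) / len(samples)) ** 0.5
relative = spread / mean               # close to 1/sqrt(N), ~0.41 for N = 6
```

Reading the logic in reverse is exactly what Hecht, Shlaer and Pirenne and Van der Velden did: measure the relative spread of the threshold and infer N from 1/√N.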
It was not by accident that this signal-in-noise approach emerged as an interesting research theme around 1940. Whereas at the end of the 19th Century Zwaardemaker’s interest in comparing the performance of the eye with that of physical sensors was raised by the competition in physics to make increasingly sensitive sensors, the interest of investigators in the 1940s was triggered by the competition in physics to improve the signal-to-noise ratio of detectors. An amusing story, told to me by my teacher Burger, may illustrate this. In 1925, Moll and Burger25 were working on a thermo-relay, a highly sensitive method of signal amplification. The signal from a thermocouple was fed into a mirror-galvanometer and the light beam deflected from the galvanometer was directed towards a second thermocouple. Any small signal from the first thermocouple would be registered by reading the deflection of a second galvanometer. The light spot reading this second galvanometer would not stabilise, even when the investigators did their experiments at night to avoid vibrations due to traffic and at neap tide to minimise the effects of the dash of the waves on the coast. Only then did they come to realise that they had reached the limit of Brownian motion.a
The roots of the new interest in the minimum perceptible are most evident in Van der Velden’s26 work, which started as a comparative study of what he called ‘essential sensitivity’ (in terms of just detectable signal-in-noise, in contrast to Zwaardemaker’s ‘absolute sensitivity’) in various light sensors. Like Zwaardemaker, he wanted to include a comparison with the eye and, like Hecht a few years before, he was not aware of Zwaardemaker’s earlier work; he ended up with exactly the same two-quanta conclusion. This time the seed did not fall on barren rocks. Subsequent studies, in particular in the Netherlands, proved that, more generally, the functioning of the eye could be seen as that of a near-perfect signal-in-noise detector. One example (Figure 6, left) illustrates the improvement in contrast sensitivity with light level at sub-Weber levels, as determined by Van den Brink and Bouman.27
This so-called De Vries-Rose law (De Vries28; Rose29) is entirely in agreement with expectations for ideal signal-in-noise detectors. Similarly, Van der Horst and Bouman30 could prove that colour discrimination improved, over a long luminance stretch, with the square root of luminance (Figure 6, right). Below, we will see that this behaviour is in agreement with the laws for a perfect signal-in-noise detector.
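The square-root law follows in one line from ideal photon-noise detection: an increment is seen once it exceeds a fixed multiple of the Poisson fluctuation √N, so the contrast threshold falls as 1/√N. In the sketch the criterion k = 2 is an arbitrary reliability factor, not an empirical constant.

```python
def contrast_threshold(N, k=2.0):
    """Ideal photon-noise detector: an increment is detected once it
    exceeds k standard deviations of the photon noise sqrt(N), so the
    just-detectable contrast is k/sqrt(N). k = 2 is illustrative."""
    return k * N**0.5 / N

# Tenfold more light lowers the contrast threshold by sqrt(10): De Vries-Rose
ratio = contrast_threshold(100.0) / contrast_threshold(1000.0)
```

Equivalently, contrast sensitivity grows as √N, which is the rising branch in Figure 6 (left).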
THE STILES LINE ELEMENT
In his studies on colour vision, Stiles had discovered that, in contrast to Helmholtz’s assumption, the three cone systems were not equal in their signal processing accuracy. His experimental findings indicated that their Weber fractions were different according to

JND L = σL · L,  JND M = σM · M,  JND S = σS · S,  with σL < σM << σS
He realised that this should have an impact on the line element as defined by Helmholtz on the basis of equal Weber fractions. He correspondingly adapted31 the Helmholtz line element to

ds² = (dL/σL L)² + (dM/σM M)² + (dS/σS S)²    (7)
and obtained fairly satisfactory results (Figure 7, left), though with his characteristic prudence, Stiles noted ‘certain main features of the experimental results are correctly reproduced but discrepancies with Wright and MacAdam’s measurements of general colour limens may indicate that the modified element ignores some factors that are operative in their experiments’.31
His calculated MacAdam ellipses were a bit too plump but, in retrospect, this has to be attributed to his choice of the receptor primaries rather than to the form of his line element. More importantly, the incompatibility with the orthogonality between brightness and hue, as signalled by Schrödinger, virtually disappeared as a problem. Repeating the derivation of Vλ (equation 5) produces

Vλ = constant · Lλ^a · Mλ^b · Sλ^c,  with a : b : c = 1/σL² : 1/σM² : 1/σS² and a + b + c = 1    (8)
the different exponents reflecting the different weights of the three cone systems. This Vλ has a course that hardly deviates from the experimental findings (Figure 4), mainly due to the reduced weight of the S-system and the great spectral overlap between Lλ and Mλ.
History repeated itself. Even though Stiles’s line element was a firm step forward and without serious competition as a theoretical predictor of colour discrimination, it did not draw substantial attention among those who should have been interested most. Developers of formulae for industrial colour difference continued to produce a dubious wealth of empirical formulae with no theoretical basis32 (see further on).
PHOTON NOISE MODEL APPLIED TO COLOUR DISCRIMINATION
The fact that the luminance dependence of wavelength discrimination so nicely complies with De Vries-Rose behaviour (Figure 6) brought Bouman and Walraven33 in the late 1950s to investigate the consequences for the spectral dependence of wavelength discrimination. Though they did not formulate their results in terms of a line element, they can be reformulated in those terms. As the photon catch in a receptor system X is X ± √X due to the intrinsic photon noise, JND X = √X, so that

ds² = (dL)²/L + (dM)²/M + (dS)²/S    (9)
This means that colour space can be made Euclidean by plotting it in terms of (√L, √M, √S). In the same way as Stiles calculated the MacAdam ellipses with his line element, Equation (7), Walraven33 used a variation of Equation (9) (Figure 7, right). The match between theory and experiment is even better than with the Stiles line element. Later research,34 discussed in the next section, showed that an improvement of the fit to the experimental results could not be expected, in view of the spread between the various successive sets of ‘MacAdam’ ellipses obtained in only slightly different experimental conditions. The somewhat better fit with the Walraven line element should be attributed mainly to the use of a slightly different set of fundamental spectral sensitivity functions.b
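That the space becomes Euclidean on a square-root basis is easy to check numerically: since d(√X) = dX/(2√X), the photon-noise distance for a small colour step equals twice the Euclidean distance between the two colours in (√L, √M, √S) coordinates. The cone values below are arbitrary illustrative numbers.

```python
from math import sqrt

def ds_photon(L, M, S, dL, dM, dS):
    """Bouman-Walraven line element with JND X = sqrt(X)."""
    return sqrt(dL**2 / L + dM**2 / M + dS**2 / S)

p = (4.0, 9.0, 1.0)                       # illustrative cone excitations
q = (4.04, 9.06, 1.01)                    # a small colour step away
d = tuple(b - a for a, b in zip(p, q))
ds = ds_photon(*p, *d)
euclid = sqrt(sum((sqrt(b) - sqrt(a)) ** 2 for a, b in zip(p, q)))
# For small steps, ds is very nearly 2 * euclid: Euclidean on a sqrt basis
```

The factor 2 is a fixed scale and therefore immaterial for the geometry; only the square-root transformation matters.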
It is worth drawing attention to the conspicuous similarity between the Bouman and Walraven (Equation 9) and the Schrödinger (Equation 6) line elements. Their only difference is the 1/Luminance weighting factor, which was introduced by Schrödinger to comply with Weber’s law and which Bouman and Walraven, restricting themselves to De Vries-Rose light levels, did not need. Consequently, their predicted functions for relative wavelength discrimination do not differ. It is a matter of historic caprice that, by accepting the quantum nature of light, Walraven33 could describe colour discrimination just like Schrödinger, the later founder of quantum mechanics. Apparently, at that time Schrödinger did not recognise this underlying quantum basis.
EXTENDED PHOTON NOISE MODEL
It is evident that De Vries-Rose behaviour, with photon noise as the sole limiting factor in visual discrimination, should come to an end once the signal load of the neural channels (in terms of spikes per second) approaches its maximum capacity. Then either passive saturation or, more realistically, active mechanisms of neural gain control to delay saturation35 may be expected to become the limiting factor. This is not the place to discuss these mechanisms and it may suffice here to indicate that the expression JND X = √X should be extended with higher order terms

(JND X)² = X (1 + αX + βX² + …)    (10)
I will treat this introduction of higher order terms as just a mathematical operation but these extra terms can be attributed to real mechanisms. The first term, ‘saturation’, was introduced to account for the effect of the refractory period τ on the neural excitation, by which the 1:1 relation between quantum catches and spikes gets lost. The second term, ‘supersaturation’, was introduced to account for the neural noise, στ, in the length of the refractory period.36 Within the context of this paper, I will restrict the discussion to the first order term, αX. The corresponding line element reads

ds² = (dL)²/[L(1 + αL L)] + (dM)²/[M(1 + αM M)] + (dS)²/[S(1 + αS S)]    (11)
By comparing Equations (11) and (7), it will be clear that, in the limit, this extended fluctuation line element and the Stiles line element can be made to fuse on the basis of an appropriate choice of the α’s

ds² = (dL)²/[L(1 + σL² L)] + (dM)²/[M(1 + σM² M)] + (dS)²/[S(1 + σS² S)]    (12)
a version of which, with slightly different numerical values, was introduced by Vos and Walraven.34 Formulated this way, the Stiles Weber fractions get a deeper significance. This is illustrated in Figure 8, which tells us that the Weber branches apparently branch off from their common De Vries-Rose stem at different excitation levels.
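The branching of Figure 8 follows directly from the extended JND expression: with the first-order saturation term included, (JND X)² = X(1 + αX), the relative threshold falls as 1/√X at low excitation (De Vries-Rose) and levels off at √α (Weber). In the sketch α = 10⁻⁴ is an illustrative saturation parameter, not a fitted one.

```python
def jnd(X, alpha=1e-4):
    """Extended photon-noise JND, first-order saturation term only.
    alpha = 1e-4 is an illustrative saturation parameter."""
    return (X * (1.0 + alpha * X)) ** 0.5

# Low excitation: De Vries-Rose behaviour, JND ~ sqrt(X)
low = jnd(100.0) / 100.0 ** 0.5                  # close to 1
# High excitation: Weber behaviour, JND ~ sqrt(alpha) * X
high = jnd(1e8) / (1e-4 ** 0.5 * 1e8)            # close to 1
# The branch-off occurs near X ~ 1/alpha, where the two terms are comparable
```

A receptor system with a larger α (fewer receptors sharing the load) branches off its De Vries-Rose stem at a lower excitation level, which is the interpretation the text attaches to the different Stiles fractions.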
In line with evidence from other sources, the easiest way to interpret this is to assume that the S-system is subserved by 20 times fewer receptors than the M-system and the L-system by some 60 per cent more. Of course, this 32 : 20 : 1 ratio can only be a rough average, as we know that individuals can differ significantly.37 As this paper is about colour metrics and not about receptor mosaics, we will not pursue this aspect and refer the reader to the original publication.38
The conclusion of the foregoing is that the Stiles line element, Equation (7), and the Bouman and Walraven line element, Equation (9), are not competing constructions but can be interpreted as different versions of the Vos/Walraven fluctuation line element, Equation (12), each valid in its own luminance domain. It can be remarked that the link between the Stiles and the Bouman/Walraven line elements, and the possibility of interpreting the Stiles line element in terms of a non-ideal photon sensor, were already noted by Trabka.39
THE KÖNIG-DIETERICI ANOMALY
In 1884, König and Dieterici40 described a curious anomaly in wavelength discrimination at around 460 nm, shown in Figure 9 in the form of later data by McCree41 and of a recent redetermination by Mollon and Estévez42 with more sophisticated techniques.
It will be clear that this regression in a very restricted wavelength domain does not fit the line element description developed so far. In the past, first Bouman and Walraven43 and later Vos and Walraven34 tried to find a satisfactory explanation in terms of an early supersaturation of the S-cone system, by using the β term in Equation (10). Now we can skip those explanations because, thanks to a suggestion by Mollon and Estévez,42 a better interpretation can be given. Essentially, this comes down to postulating a failure of the opponency weighting systems when the two input signals (S and L+M for the Y/B opponency channel) are far out of balance, which is typically the case only at high excitation levels around 460 nm. Mollon and Estévez42 did not quantify their view; recently, however, Vos and Walraven (paper in preparation) have elaborated such a post-receptoral response compression model to quantify this new mechanistic aspect. The result was that the König-Dieterici anomaly could be satisfactorily described (drawn line in Figure 9). It introduces a further level of non-linear processing in the balance layer, from L/M to the red versus green signal and, by analogy, from (L+M)/S to the yellow versus blue signal and from LUM1/LUM2 to the brightness contrast signal (Figure 10).
We conclude that the Mollon-Estévez42 suggestion to attribute a more significant role in colour discrimination to the opponent processing stage can be seen as a welcome improvement of our original colour vision scheme. Moreover, it recognises the view, due to Le Grand44 and further advanced by Krauskopf, Wu and Farell,45 that the existence of so-called cardinal directions in colour induction experiments should be interpreted in terms of opponent processing. This paper is about colour metrics, so we did not further specify the way in which L, M and S combine to a luminance signal. For a discussion on that, we refer to a paper by Vos, Estévez and Walraven.46
Summarising, this modified version of the line element theory of colour discrimination describes all the characteristic elements known from experimental studies on colour discrimination: the shapes, sizes and orientationsc of the MacAdam ellipses, the humps and bumps in the spectral course of wavelength discrimination, the √Luminance improvement in wavelength discrimination, the deflection thereof towards Weber behaviour and the anomalous changes in the wavelength discrimination pattern signalled by Mollon and Estévez.42
OTHER DESCRIPTIONS OF COLOUR DISCRIMINATION
So far, I have concentrated on the development of the line element concept without paying attention to alternative concepts and descriptions. It is fitting to redress that imbalance. At the 1971 AIC Helmholtz memorial symposium on Color Metrics,32 a whole range of colour difference formulae passed in review, each of them stemming from another source of interest. Whereas line elements were typically developed by vision researchers, colour difference formulae typically stem from the paint industry. In that industry, it is of paramount importance to reproduce paints within well-defined tolerance limits. Having little affinity with colour vision theory, paint engineers have developed a wealth of colour difference formulae, based mainly on trial and error, with the CIE 1931 X,Y,Z-system as underlying colour description,47 without making a choice of receptor primaries. As we have seen how critical this choice is, much more so than the very form of the line element, it comes as no surprise that these trial and error productions have led to such a manifold of colour difference formulae. Moreover, it is interesting to note that Brainard8 more or less inadvertently showed (in his Figure 5.6) the inadequacy of this type of description. We have to mention, though, that the observation conditions for surfaces with which the paint industry is confronted are often far removed from the experimental conditions in colour vision research. Therefore, in his recently published book on these problems, Kuehni48 correctly remarked that we have to realise that ‘various data sets developed at different times vary considerably for generally unknown reasons and are described optimally by different formulas’. For all those reasons, it may suffice here to mention, for the sake of completeness, the existence of these colour difference formulae and not further discuss their merits in comparison with the results of the line element approach.
Suffice it to state that within limited colour domains, they all try to describe rather than to explain.
In addition, there are the colour atlases, of which the Munsell49 system may be regarded as the prototype; the numerous commercial colour fans can be regarded as reduced derivatives. They are not based on a formula but simply present a multitude of hardware colour samples, claiming that together these form a perceptually uniform colour space with mutual distances near JND level. The Munsell system, as a representative example, does not use the CIE XYZ-system but rests on an empirical hue/value/chroma ordering principle. It has proven useful, albeit without much theoretical significance.
Completely different is the earlier-mentioned Le Grand-Krauskopf approach sketched by Eskew, McLellan and Giulianini9 in their survey chapter on chromatic detection and discrimination in Gegenfurtner and Sharpe's book on colour vision. The existence of a Helmholtzian line element approach is not even mentioned, and the chapter starts more or less provocatively with: 'The modern history of the study of chromatic discrimination begins with the work of Yves Le Grand (1949)'; an implicitly idiosyncratic definition of 'modern', rather than a truistic statement. They claim to give a model description, though a concrete sketch of that model is not presented. It is clear from the text that in their view, colour discrimination is essentially seated in the opponent R/G and Y/B colour mediating channels. That their description of wavelength discrimination shows a rather poor fit to the experimental course is dismissed as a fault of the experiments: 'from our perspective, the classic wavelength discrimination experiment is a poor one'. Moreover, and even in contradiction with this dismissal, they add that the discrepancy between theory and experiment can be considered a sign that second-site desensitisation might improve the data fit. The result of such an intervention is not shown.
I conclude that successful competing models do not really exist. The clearly convenient atlases and the colour difference formulae do not claim to be a model, and the model approach described by Eskew, McLellan and Giulianini9 does not produce a successful description. Moreover, they ignore the photon noise limitations, which should be incorporated as preceding the physiological processing. In addition, none of these systems describing colour differences mentions the changes in colour discrimination with luminance.
In the extended photon noise model, the line element concepts of Helmholtz, Schrödinger and Stiles have been integrated under one umbrella. Helmholtz formulated higher colour metrics in terms of a line element based on the receptor primaries Lλ, Mλ and Sλ. Schrödinger added the requirement of orthogonality, resulting in the appearance of √Lλ, √Mλ and √Sλ in the line element. Stiles introduced different Weber fractions for the three receptor systems, which reconciled the Helmholtz concept with the orthogonality requirement. Finally, the Vos/Walraven fluctuation line element integrated these three predecessors into one unifying mechanistic model, based on photon noise as a primary limiting factor in colour discrimination. At low luminances it behaves as a 'De Vries-Rose' version of the Schrödinger line element, but at normal daylight luminances it behaves as the Stiles line element. In addition, the König-Dieterici anomaly could be attributed to off-balance failure of the opponent channels, as suggested by Mollon and Estévez.42 The resulting line element description is represented by the model sketch of Figure 10, which may be considered an improvement of an earlier version.50 Essentially, it tells us that colour and brightness discrimination are primarily determined by physics: the intrinsic uncertainty produced by the quantum noise in the receptor input plus the spectral properties of the receptor pigments. Discrimination is further limited, mainly in level, by saturation effects in the receptor output channels and modified in spectral properties by saturation effects in the opponent channels. The model does not deny neural processing; rather, it indicates where neural processing may further limit colour and brightness discrimination.
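For the reader who wishes to see the lineage in formula form, the three line elements can be written out as follows. This is a sketch assembled from the standard literature forms rather than a quotation from the present text; the weights a, b, c and the Weber fractions w_L, w_M, w_S are illustrative placeholders, not the original authors' numerical values.

```latex
% Helmholtz (1891): Weber's law applied to each receptor system separately
\[
  ds^2 = \left(\frac{dL}{L}\right)^2
       + \left(\frac{dM}{M}\right)^2
       + \left(\frac{dS}{S}\right)^2
\]

% Schroedinger (1920): weighting chosen so that brightness is orthogonal
% to chromaticity. Since (dL)^2/L = (2\,d\sqrt{L})^2, the coordinates
% \sqrt{L}, \sqrt{M}, \sqrt{S} render the metric Euclidean, whence the
% square roots mentioned in the text.
\[
  ds^2 = \frac{1}{aL + bM + cS}
         \left( a\,\frac{(dL)^2}{L}
              + b\,\frac{(dM)^2}{M}
              + c\,\frac{(dS)^2}{S} \right)
\]

% Stiles (1946), high-luminance limit: a separate Weber fraction w for
% each cone type
\[
  ds^2 = \left(\frac{dL}{w_L\,L}\right)^2
       + \left(\frac{dM}{w_M\,M}\right)^2
       + \left(\frac{dS}{w_S\,S}\right)^2
\]
```

The fluctuation line element then interpolates between the square-root (De Vries-Rose) regime at low luminance and the Weber (Stiles) regime at daylight luminance, consistent with the behaviour described above.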
This further neural processing seems to be at an early stage of quantification. Of course, beyond the retinal opponency stage other levels of neural processing occur, accounting for more integral colour percepts. Quantification of these is even more remote, but fascinating thoughts on this subject were recently formulated by Billock and Tsou.51
The road pattern of Figure 2 depicts a criss-cross of ideas and tryouts. It starts from Newton's observation that the one-dimensional optical spectrum bends into a two-dimensional perceptual colour circle, a notion that finally led to the famous Young-Helmholtz trichromatic colour vision theory. It continues via Helmholtz's Weber-inspired line element to Stiles's modified version of it. The notion that quantum noise should be considered the yardstick of visual discrimination led to the fluctuation line element, a construction that was widened in this paper by adopting the suggestion of Mollon and Estévez42 of non-linear balance processing in the opponent channels. The result is a zone-fluctuation model that gives a good description of colour and brightness discrimination as a function of luminance and colour, plus the virtual addition of luminance. It shows that neural processing preserves, amazingly well, the physical information present at the receptor level.
To conclude, I quote (with assent) Stiles's closing words in his 1971 address to the AIC Driebergen Helmholtz Memorial Symposium on Color Metrics:10 'The impression I have is that, in parallel with the excellent and intensive studies on colour discrimination under conditions which are of practical importance, there is still room for more primitive investigations, under perhaps extreme and diverse experimental conditions of no particular applicational interest, which may enlarge the scope of the line-element idea.'
Interestingly, in the mentioned publication they still attribute the residual vibrations to ‘probably micro-seismic perturbations’.
Attention is directed to the blue corner ellipse, which seems to be the exception to the rule: its orientation deviates markedly from the experimental data. A possible cause is described in the next section.
The blue corner ellipse seems to be the odd one out; this might be due to the König-Dieterici anomaly, as it lies close to the 460 nm point on the spectral locus. A further elaboration has to wait for more accurate data on the luminance dependence of the JND ellipses in that region.