Mechanisms of Cognitive Development: Domain-General Learning or Domain-Specific Constraints?

Vladimir M. Sloutsky

Correspondence should be sent to Vladimir M. Sloutsky, Center for Cognitive Science, 208C Ohio Stadium East, 1961 Tuttle Park Place, Ohio State University, Columbus, OH 43210. E-mail: sloutsky.1@osu.edu

The issue of how people acquire knowledge in the course of individual development has fascinated researchers for thousands of years. Perhaps the earliest recorded effort to put forth a theoretical account belongs to Plato, who famously advocated the idea that knowledge of many abstract categories (e.g., “equivalence”) is innate. Although Plato argued with contemporaries who held that knowledge has an empirical basis, it was the British empiricists who most forcefully advanced that idea, with John Locke offering the famous “tabula rasa” argument.

The first comprehensive psychological treatment of the problem of knowledge acquisition was offered by Piaget (1954), who suggested that knowledge emerges as a result of interactions between individuals and their environments. This was a radical departure from both extreme nativism and extreme empiricism. However, these ideas, as well as those of empiricist-minded behaviorists, fell short of providing a viable account of many human abilities, most notably language acquisition.

This inability prompted Chomsky (1980) to argue that language cannot be acquired from the available linguistic input alone, because that input does not contain enough information to enable the learner to recover a particular grammar while ruling out alternatives. Therefore, some knowledge of language must be innate to enable fast, efficient, and invariable language learning under conditions of impoverished linguistic input. This argument, known as the Poverty of the Stimulus argument, has subsequently been generalized to perceptual, lexical, and conceptual development. The 1990 Special Issue of Cognitive Science is an example of such generalization.

The current Special Issue on the mechanisms of cognitive development has arrived exactly 20 years after the first Special Issue. In the introduction to the 1990 Special Issue of Cognitive Science, Rochel Gelman stated:

Experience is indeterminant or inadequate for the inductions that children draw from it in that, even under quite optimistic assumptions about the nature and extent of the experiences relevant to a given induction, the experience is not, in and of itself, sufficient to justify, let alone compel, the induction universally drawn from it in the course of development. For example, there is nothing in the environment that supports a child’s conclusion that the integers never end. (R. Gelman, 1990, p. 4)

If input is too impoverished to constrain possible inductions and to license the concepts that we have, the constraints must come from somewhere. It has been proposed that these constraints are internal—they come from the organism in the form of knowledge of “core” domains, skeletal principles, biases, or conceptual assumptions. To be useful in solving the indeterminacy problem, these constraints have to be (a) top-down, with higher levels of abstraction appearing prior to lower levels (i.e., elements of an abstract structure must guide processing of specific instances); (b) a priori (i.e., these constraints have to precede learning rather than being a consequence of learning); and (c) domain-specific (because generalizations in the domain of number differ drastically from those in the domain of biology, the principles guiding these generalizations should differ as well).

Formally, the Poverty of the Stimulus argument has the following structure: If (a) correct generalizations require many constraints and (b) the environment provides few, then (c) the constraints enabling correct generalizations do not come from the environment. While this argument is formally valid, its premise (b) and its conclusion (c) are questionable. Most important, do we know that the environment truly provides few constraints? And how do we know that?
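One rough propositional rendering may make this structure explicit (the symbols below are introduced purely for illustration and are not a formalization given by the authors): let G stand for “learners reliably draw the correct generalizations,” C_env for “the environment supplies the constraints those generalizations require,” and C_int for “the needed constraints come from within the organism.”

\[
\frac{\; G \rightarrow (C_{\mathrm{env}} \lor C_{\mathrm{int}}) \qquad \neg\, C_{\mathrm{env}} \qquad G \;}{\therefore\; C_{\mathrm{int}}}
\]

On this rendering, the conclusion follows only if the premise \(\neg C_{\mathrm{env}}\) is granted; asking whether the environment truly provides few constraints is asking whether that premise is true.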

The research featured in this Special Issue proposes an alternative way of meeting the challenge of understanding cognitive development. Instead of assuming top-down, a priori, domain-specific constraints, this research tries to understand how domain-general learning mechanisms may enable the acquisition of knowledge by an organism functioning in an information-rich environment.

Chater and Christiansen (2010) focus on language learning and evolution and propose two critical ideas: (a) that language adapted to biological machinery that existed prior to the emergence of language and (b) the notion of “C-induction.” First, they argue that there is no credible account of how a richly structured, domain-specific, innate Universal Grammar could have evolved. They suggest that the solution to the logical problem of language evolution requires abandoning the notion of a domain-specific and innate Universal Grammar. Second, Chater and Christiansen (2010) offer a critical distinction between “natural” and “cultural” induction (i.e., N-induction and C-induction). N-induction involves the ability to understand the natural world, whereas C-induction involves the ability to coordinate with other people. They argue that the problem of language acquisition has traditionally been misconstrued as a solution to an extremely difficult N-induction problem (i.e., the discovery of abstract syntax); according to the authors, however, it should be construed as a much easier problem of C-induction. Instead of inducing an arbitrary set of constraints (i.e., the problem of N-induction), individuals simply have to make the same guesses as everyone else. Crucially, C-induction is made easier because other people have the same biases as the learner and because language has been shaped by cultural evolution to fit those exact biases. Chater and Christiansen (2010) further suggest that the same line of argumentation is likely to extend to other kinds of development for which the learning of a culturally mediated system of knowledge is important.

Johnson’s (2010) paper focuses on perceptual development. Perception has been at the center of our attempts to understand the sources and origins of knowledge: How do people parse cluttered and occluded visual experience into separable objects? Does this ability develop over time through experience and learning, or is it based on some form of a priori knowledge (e.g., knowledge of objects)? In contrast to those advocating innate knowledge of objects, Johnson (2010) argues that there is no need to posit such innate knowledge. In his view, although some components of object perception (e.g., edge detection) may emerge from prenatal development (or even prenatal learning), other major components (e.g., perception of objects over occlusion) develop postnatally. According to Johnson’s (2010) developmental proposal, perception of occluded objects initially requires support from multiple features, including the size of the occluding gap, the alignment of edges, and common motion. In the course of development, infants learn to perceive occluded objects independently of these features.

Kemp, Goodman, and Tenenbaum (2010) discuss how people learn about causal systems and generalize this knowledge to new situations. In particular, having learned that drug D has side effect E in person P, the learner may eventually generalize this knowledge to conclude that drugs have side effects on people. How is this learning achieved? One possible way of solving this problem is for the learner to have a highly constrained hypothesis space, specific to each knowledge domain. This idea has been at the heart of nativist proposals arguing for innate sets of constraints specific to certain domains of knowledge. Although Kemp et al. (2010) agree that constraints are important for learning, they propose that these constraints do not have to be a priori—children can learn inductive constraints in some domains—and that these constraints subsequently support rapid learning within those domains. They develop and test a computational model of causal learning, demonstrating that constraints can be acquired and later used to facilitate learning of new causal structures. The critical idea is that when learners first encounter a new inductive task, their hypothesis space with respect to this task may be relatively broad and unconstrained. However, after experiencing several induction problems from that family, they induce a schema, or a set of abstract principles describing the structure of tasks in the family. These abstract principles constrain the hypotheses that learners apply to subsequent problems from the same family and allow them to solve these problems given just a handful of relevant observations.
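The flavor of this idea can be conveyed with a deliberately simplified sketch. The Python toy below is not Kemp et al.'s (2010) model (their account is a hierarchical Bayesian one over structured hypothesis spaces); it is a minimal Beta-Bernoulli analogue showing how a schema abstracted from earlier causal-learning problems lets a learner draw a strong conclusion about a new problem from only two observations. All function names and numbers are hypothetical.

# A minimal sketch, not Kemp et al.'s (2010) actual model: abstract
# constraints ("schemas") induced from earlier causal-learning problems
# can make learning about a new problem fast. All names are illustrative.

def fit_schema(past_rates, concentration=10.0):
    """Summarize earlier systems as a Beta(a, b) prior, the 'schema'.
    past_rates: how often the cause produced its effect in systems
    the learner has already mastered."""
    mean_rate = sum(past_rates) / len(past_rates)
    return mean_rate * concentration, (1.0 - mean_rate) * concentration

def predict_new_system(a, b, successes, trials):
    """Posterior mean probability that the cause produces the effect in a
    NEW system after only a handful of observations."""
    return (a + successes) / (a + b + trials)

# Earlier problems from the same family: the cause almost always worked.
schema_a, schema_b = fit_schema([0.90, 0.85, 0.95])

# New system: just 2 observations, both showing the effect.
with_schema = predict_new_system(schema_a, schema_b, successes=2, trials=2)
# The same 2 observations under a flat Beta(1, 1) prior (no schema).
without_schema = predict_new_system(1.0, 1.0, successes=2, trials=2)

print(f"with learned schema: {with_schema:.2f}")    # about 0.92
print(f"without a schema:    {without_schema:.2f}") # 0.75

In this toy, the schema-equipped learner commits to the generalization after two observations, whereas the schema-free learner remains much less certain, which is the sense in which acquired constraints support rapid learning.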

Sloutsky’s (2010) paper discusses the development of concepts. It has been widely acknowledged that concepts allow nontrivial generalizations (e.g., that plants and animals are alive) and that concepts support reasoning. How do people acquire concepts? And given that generalizations are constrained (people generalize the property of being alive to garden flowers, but not to plastic flowers), where do these constraints come from? Unlike proposals arguing for a priori constraints, Sloutsky’s (2010) proposal attempts to link conceptual development to a more general ability to form perceptual categories, an ability that has an early developmental onset and is present across a wide variety of species. Sloutsky (2010) argues that conceptual development progresses from simple perceptual grouping to highly abstract scientific concepts. This proposal has four parts. First, it is argued that categories in the world differ in their structure. Second, there might be different learning systems (subserved by different brain mechanisms) that evolved to learn categories of differing structures. Third, these systems follow different maturational courses, which affects how categories of different structures are learned in the course of development. And finally, the interaction of these components may result in the developmental transition from perceptual groupings to more abstract concepts.

Smith, Colunga, and Yoshida (2010) consider the role of attention in acquiring knowledge. They note that in her introduction to the 1990 Special Issue of Cognitive Science, Rochel Gelman asked, “How is it that our young attend to inputs that will support the development of concepts they share with their elders?” Gelman’s analysis suggested that the problem cannot be solved without some form of innate knowledge (e.g., “skeletal principles”) that guides learning in particular domains. Smith et al. (2010) give a different answer to this question. They suggest that the so-called knowledge domains are marked by multiple cue-outcome correlations that in turn correlate with context cues (e.g., the context of word learning may differ from the context of spatial orientation). In the course of learning, children learn to allocate attention to bundles of predictive cues in a given context (this process is called attentional learning). The outcome of this process has the appearance of domain specificity—children learn to differentially allocate attention to different cues in different contexts. In short, Smith et al. (2010) present an account of how domain-general processes (e.g., attentional learning) may give rise to behaviors that have the appearance of domain-specific knowledge.
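As an illustration of how such contextual tuning of attention could arise from a single domain-general rule, consider the following toy simulation. It is not Smith et al.'s (2010) model; it simply applies an ordinary delta rule to per-context cue weights, and the contexts, cues, and predictive structure are invented for the example.

# A toy sketch (not Smith et al.'s model): one error-driven learning rule,
# applied in different contexts, ends up weighting different cues, which
# looks like domain-specific attention. All names are illustrative.

import random

random.seed(0)

CONTEXTS = ["naming", "navigation"]
CUES = ["shape", "color", "location"]

def make_example(context):
    """One learning episode: binary cue values and an outcome predicted by
    shape in the naming context and by location in the navigation context."""
    cue_values = {cue: random.choice([0.0, 1.0]) for cue in CUES}
    predictive = "shape" if context == "naming" else "location"
    return cue_values, cue_values[predictive]

def train(n_episodes=5000, lr=0.05):
    """Delta-rule learning of per-context cue weights (the 'attention')."""
    weights = {c: {cue: 0.0 for cue in CUES} for c in CONTEXTS}
    for _ in range(n_episodes):
        context = random.choice(CONTEXTS)
        cue_values, outcome = make_example(context)
        prediction = sum(weights[context][cue] * cue_values[cue] for cue in CUES)
        error = outcome - prediction
        for cue in CUES:
            weights[context][cue] += lr * error * cue_values[cue]
    return weights

learned = train()
for context in CONTEXTS:
    print(context, {cue: round(w, 2) for cue, w in learned[context].items()})

After training, the shape weight approaches 1.0 in the naming context and the location weight approaches 1.0 in the navigation context, so the learner behaves as if it had domain-specific knowledge even though only one general learning rule was at work.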

One set of competencies appearing as a “knowledge domain,” or even as a dedicated module, is spatial cognition. Young children as well as a variety of nonhuman species have been found to exhibit sensitivity to spatial information, prompting some researchers to propose the existence of a dedicated and encapsulated geometric module. Twyman and Newcombe (2010) consider reasons to doubt the existence of this geometric module and offer a different account of the development of spatial abilities. This account is based on the idea of adaptive cue combination originally proposed by Newcombe and Huttenlocher (2006). According to the proposal, although some biological predispositions for processing spatial information may exist, fully fledged representation and processing of spatial information emerge through interactions with and feedback from the environment. As a result, multiple sources of spatial and nonspatial information are integrated into a nonmodular, unified representation. Information that is high in salience, reliability, familiarity, and certainty, and low in variability, is given priority over other sources of information. In contrast to modularity proposals, the adaptive combination view holds that experience affects which cues enter the combination and, as a consequence, the resulting representation. In particular, cues that lead to adaptive behaviors are more likely to be used again in the future, whereas cues that lead to maladaptive behaviors are less likely to be used. This position offers a clear view of how spatial abilities emerge and change in the course of development.
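A minimal sketch can illustrate the arithmetic of weighting cues by their reliability. The snippet below is not Newcombe and Huttenlocher's (2006) model; it is a standard inverse-variance (precision-weighted) combination of two hypothetical location cues, with all names and numbers invented for the example. In the adaptive combination view, the reliabilities themselves would be tuned by experience and feedback.

# A toy illustration (not the authors' model) of reliability-weighted cue
# combination: each cue provides a location estimate and a variance, and
# less variable cues get more weight in the combined estimate.

def combine_cues(estimates):
    """estimates: list of (location_estimate, variance) pairs, one per cue
    (e.g., room geometry, a landmark, dead reckoning)."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    combined = sum(w * est for (est, _), w in zip(estimates, weights)) / total
    return combined, 1.0 / total

# Hypothetical cues: geometry is reliable here, the landmark less so.
geometry = (2.0, 0.25)   # estimated position 2.0, low variance
landmark = (3.0, 1.00)   # estimated position 3.0, higher variance
position, variance = combine_cues([geometry, landmark])
print(f"combined estimate: {position:.2f} (variance {variance:.2f})")
# Prints 2.20 (variance 0.20): the estimate sits closer to the reliable
# geometric cue; if experience made the landmark more reliable, its
# weight, and hence its influence, would grow.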

In addition to discussing the papers appearing in this Special Issue of Cognitive Science, Goldstone and Landy (2010) offer their own view on the problem. They start with the simple observation that the idea of “skeletal principles” does not obviate the need for developmental explanations, because skeletal structures themselves are subject to growth and development. Goldstone and Landy (2010) illustrate this idea with many systems (neural networks being but one example) whose internal structure is shaped by the nature of their input. They conclude that the field of cognitive development has witnessed a major shift since the 1990 publication of the Special Issue of Cognitive Science—the field has moved from delineating specific constraints in domains such as language, motion, quantitative reasoning, social perception, and navigation to explicating mechanisms of how some of these constraints may emerge. Goldstone and Landy (2010) suggest that a new challenge for the study of cognitive development is to understand how general learning processes can give rise to learned domains, dimensions, categories, and contexts.

This Special Issue is the result of efforts by many individuals. First, the authors deserve thanks for their willingness to write the papers and to subject their work to a standard, rigorous peer-review process that required multiple revisions. The anonymous reviewers who read the papers (and then the revisions) also deserve appreciation. Special thanks go to Art Markman (the Executive Editor of Cognitive Science) and Caroline Verdier (the Managing Editor of Cognitive Science), who encouraged, supported, and guided the authors through the challenging process of putting together this Special Issue.

The collection of papers featured in this Special Issue focuses on the same topic as the 1990 Special Issue. However, the current set of papers offers solutions that differ from those offered in 1990. Whereas the main argument in the 1990 issue was for domain-specific constraints considered to be the starting point of development, the current set attempts to understand how constraints emerge in the course of learning and development. Although particular accounts of how such constraints emerge from domain-general processes may (and most likely will) change over time, the approach itself represents a substantial paradigm shift. Time will tell how successful this approach will be in answering the challenging questions of cognitive development.

Acknowledgments

Writing of this manuscript was supported by grants from the NSF (REC 0208103), from the Institute of Education Sciences, U.S. Department of Education (R305H050125), and from NIH (R01HD056105) to Vladimir M. Sloutsky.
