AI and Cognitive Science: The Past and Next 30 Years

Author

Kenneth D. Forbus

Correspondence should be sent to Kenneth D. Forbus, EECS Department, Northwestern University, 2133 Sheridan Road, Evanston, IL 60201. E-mail: forbus@northwestern.edu

Abstract

Artificial Intelligence (AI) is a core area of Cognitive Science, yet today few AI researchers attend the Cognitive Science Society meetings. This essay examines why, how AI has changed over the last 30 years, and some emerging areas of mutual interest where AI and the Society could travel together over the next 30 years, if they choose.

1. The computational hypothesis

For many decades, science had one major formal modeling language: differential equations. Developing differential equations took over 350 years, moving from intuitive, informal arguments to precise, formal proofs only at the end of the 19th century. Differential equations provided accurate numerical predictions and concise ways to state theories in many realms. There were always lacunae; the study of nonlinear dynamics and chaos theory started back in the 19th century. But something interesting and radical happened in the 1930s and 1940s: The idea of computation as a modeling language was developed. This is not the instrumental use of computation, for example, as a way of calculating approximate solutions to differential equations. The radical idea was the use of computation as a formal way to model process and “how to” knowledge, also known as procedural knowledge. Computation started being seen as a language that could be used in expressing scientific theories. Artificial Intelligence (AI) was the first field to be founded using this idea, in 1956.

The fundamental hypothesis of Artificial Intelligence is that computation is a useful way to model minds. What kind of computation? That remains an open question, although many constraints are becoming clearer. This hypothesis does not rule out explanations using differential equations, as computational models can contain them. The crucial point is that the language of computation is richer than the language of differential equations. Computation as a formal modeling language for cognition is a revolutionary notion, and subsequent progress in the field has proven its value, as discussed below.

The second field to adopt this idea was Cognitive Science, in 1978. There were (and are) many fields that study minds, each bringing valuable tools and perspectives. What was lacking was a common language into which their ideas could be recast, so that insights from different fields could be combined into shared theories. Cognitive Science was founded on the idea that computation would be that common language. Computation provides new tools for exploring theories of cognition, creating a new form of simulation that can be used to explain existing data and predict new findings. One need only look at the early proceedings, and the first issue of Cognitive Science, to see this.

These two fields are probably not the last to be making this intellectual bet. In current biology, there are signs of the same thinking emerging. Traditional differential equations models are being replaced with computational models, which provide more perspicuous accounts of phenomena in genetic regulatory networks and transcription processes.

2. Where did AI go?

Given this history, it is not surprising that AI had a major role in our Society at the beginning. The early CogSci and AI conferences were often co-located and always coordinated, as the degree of overlap in attendees was high. But by 2001, the CogSci organizers scheduled it directly opposite IJCAI¹ on a different continent, arguing that there was not enough overlap to worry about.

What happened? I see three reasons for this split:

1. Scientific excitement elsewhere: The scientific goals of AI are not identical with Cognitive Science. AI seeks to understand intelligence in general, with humans being a special case. Many AI researchers truly do not care about modeling humans. Their scientific bet is that directly studying the computation required for intelligent behavior will yield better insights into the nature of intelligence. There is historical precedent: Airplanes were created by a careful study of how aerodynamics worked, not by studying the details of birds. The deepest insights on how birds fly came ultimately from applying aerodynamic principles discovered while trying to create airplanes (Ford & Hayes, 1998). So the best way to understand how minds work may well be to build them, testing each part carefully to see what needs to be involved, and what constraints hold on systems of parts to achieve particular forms of intelligent behavior. This is an open, empirical question.

Artificial Intelligence is thriving. This can be seen from the explosion of conferences and meetings: A cursory survey reveals over 27 AI-related meetings, and the number is growing. Of these, only four (including the Society’s meeting) are Cognitive Science meetings.

2. Financial seduction: AI researchers face plenty of nonscientific temptations. I believe it is safe to say that more AI researchers have used their work to become multimillionaires than any other area of Cognitive Science. Consider some current examples of AI applications:

  • Internet companies: Search engines are heavy consumers of AI technology, as are recommender systems, which are used by Web vendors to help customers find what they want. Constraint solvers are used offline by travel companies like Orbitz, so that online searches are fast.
  • Computer gaming: AI is essential in modern games, ranging from on-screen opponents/collaborators, to dynamic tuning to improve the player’s experience, to editors that let players create complex 3D objects and animations, as in Spore.
  • Natural interaction: Speech recognition and handwriting recognition are now embedded in Microsoft’s operating system. Spoken dialogue systems are becoming ubiquitous for customer service and search on cell phones.
  • Knowledge management and data mining: Large knowledge bases are being used to integrate and organize knowledge in a variety of fields. Medical researchers at the Cleveland Clinic, for example, use a natural language system from Cycorp to access and integrate information across heterogeneous databases.
  • Mobile robots: The descendants of Rod Brooks’ insects are now in many homes, vacuuming and scrubbing the floors (e.g., Roomba and its variants). Teleoperated machines are sent into dangerous areas, to look for explosives and to draw fire.

This surprises some people: the last they heard, there was an “AI Winter.” But the AI Winter was about the first generation of AI companies in the 1980s, not about the science. As the examples above illustrate, AI has been successful in a wide range of enterprises.

3. Dismissive attitude towards AI within parts of the Society: Multidisciplinary work requires mutual understanding and appreciation of differences. Unfortunately, one of the best strategies for getting noticed is to declare a revolution, and that everything earlier must now be rejected. Connectionism, situated cognition, embodied cognition, and dynamical systems have all used this tactic. It is useful in founding a new field by setting up new societies and conferences—it was used in setting up AI originally—but it is counterproductive in a multidisciplinary field. There are far more reasonable people than unreasonable in all of these areas, but the unreasonable can do considerable damage. Many AI researchers have had papers rejected from CogSci simply because they used symbolic modeling techniques, or did not run their own human-subjects experiments. If reviewing is not perceived to be done on a paper’s merits, it is unsurprising that researchers will leave for other venues.

To summarize, AI may have gone away from this Society, but it has not gone away at all. Quite the contrary, as we’ll see next.

3. Important trends in AI

Here is a summary of what has been happening in AI over the last 30 years.

3.1. Symbolic systems have successfully scaled up

Satisfiability (SAT) solvers, planners, and schedulers now handle large-scale problems involving over a million variables and several million constraints. They are routinely used in manufacturing, scheduling, diagnosis, and other applications. The Cyc project, which set out in 1984 to build a large-scale common sense knowledge base, is alive and well: ResearchCyc is available for free to the research community, and OpenCyc is freely available on SourceForge. Symbolic representations are heavily used in modeling human reasoning, planning, problem solving, and conceptual learning, as well as modeling higher-level linguistic phenomena, including natural language semantics, dialogue, and discourse.
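To make concrete what a SAT solver does, here is a minimal sketch of the DPLL procedure at the heart of modern solvers. It is illustrative only; production solvers add clause learning, watched literals, and restarts to reach the million-variable scale mentioned above.

    # Minimal DPLL satisfiability sketch (illustrative only; production
    # solvers add clause learning, watched literals, and restarts).
    # A formula is a list of clauses; a clause is a list of nonzero
    # integers, where -3 means "variable 3 is false" (DIMACS convention).
    def dpll(clauses, assignment=None):
        if assignment is None:
            assignment = {}
        # Simplify: drop satisfied clauses, prune falsified literals.
        simplified = []
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue  # clause already satisfied
            rest = [l for l in clause if abs(l) not in assignment]
            if not rest:
                return None  # empty clause: contradiction
            simplified.append(rest)
        if not simplified:
            return assignment  # every clause satisfied
        # Unit propagation: a one-literal clause forces an assignment.
        for clause in simplified:
            if len(clause) == 1:
                l = clause[0]
                return dpll(simplified, {**assignment, abs(l): l > 0})
        # Otherwise branch on the first unassigned variable.
        v = abs(simplified[0][0])
        for value in (True, False):
            result = dpll(simplified, {**assignment, v: value})
            if result is not None:
                return result
        return None

    # (A or B) and (not A or not B): satisfiable, e.g., A true, B false.
    print(dpll([[1, 2], [-1, -2]]))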

3.2. Learning is everywhere

Statistical machine learning techniques, such as support vector machines, reinforcement learning, and inductive logic programming, are now widely used across all areas of AI. Transfer learning is receiving more attention: For example, Peter Stone’s group used their version of the Structure-Mapping Engine to transfer soccer strategies learned via reinforcement learning on one set of robots to another set of robots (Liu & Stone, 2006). Relational learning is the current frontier; while much can be done with classifiers, broadening the expressiveness of what can be learned is crucial for many tasks, as well as for capturing the range of human learning.
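As a toy illustration of the reinforcement-learning side, the sketch below runs tabular Q-learning on a five-cell corridor. This is my own minimal example, not the robot-soccer system just cited; the agent learns to move right to collect the reward at the end.

    # Toy tabular Q-learning sketch: a 1-D corridor of five cells, with
    # reward 1 for reaching the rightmost cell. Illustrative only.
    import random

    N_STATES, ACTIONS = 5, (-1, +1)         # move left or move right
    ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def greedy(s):
        # Best action for state s, breaking ties randomly.
        best = max(Q[(s, a)] for a in ACTIONS)
        return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

    for episode in range(500):
        s = 0
        while s != N_STATES - 1:
            # Epsilon-greedy action selection.
            a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # Standard Q-learning update.
            best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
            Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
            s = s2

    # The learned policy should choose +1 (move right) in every cell.
    print([greedy(s) for s in range(N_STATES - 1)])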

3.3. Combining logic and statistics

Figuring out the best ways to combine symbolic and statistical methods for reasoning and learning is currently a hot topic. Some integration strategies are straightforward and routinely used, for example, using probabilities to focus/prune search in reasoning. Others seek deeper integration, for example, Markov Logic Nets (Richardson & Domingos, 2006) and Bayesian Logic (Milch et al., 2007). Whether tightly integrated schemes can scale to realistic reasoning is currently an open question.
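A tiny worked example conveys the flavor of this integration (my own toy, not the cited systems): in the Markov-logic style, each possible world is scored by the exponentiated, weighted count of satisfied ground formulas, and probabilities come from normalizing those scores.

    # Toy Markov-logic-style scoring sketch (illustrative; not the cited
    # systems). One weighted rule over two hypothetical constants:
    #     weight 1.5 :  Smokes(x) -> Cancer(x)
    # A world assigns True/False to every ground atom; its probability is
    # proportional to exp(weight * number of satisfied ground rules).
    import itertools, math

    PEOPLE, W = ["anna", "bob"], 1.5
    ATOMS = [(pred, p) for pred in ("Smokes", "Cancer") for p in PEOPLE]

    def score(world):
        satisfied = sum(1 for p in PEOPLE
                        if (not world[("Smokes", p)]) or world[("Cancer", p)])
        return math.exp(W * satisfied)

    worlds = [dict(zip(ATOMS, values))
              for values in itertools.product([False, True], repeat=len(ATOMS))]

    # P(Cancer(anna) | Smokes(anna)), summing scores over possible worlds.
    num = sum(score(w) for w in worlds
              if w[("Smokes", "anna")] and w[("Cancer", "anna")])
    den = sum(score(w) for w in worlds if w[("Smokes", "anna")])
    print(round(num / den, 3))   # about 0.818

Note that the rule is soft: worlds that violate it are merely downweighted, not forbidden, which is how these formalisms blend logical structure with statistical uncertainty.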

3.4. AI and the Web

The Web has fueled new AI research in two ways. First, a problem bedeviling early natural language researchers was, in essence, “Who’s going to type all that stuff in?” The Web now provides corpora for many types of experiments. The second important impact for AI has been the rise of the Semantic Web, which uses simple knowledge representations to organize, find, and combine information. The Semantic Web, according to Sir Tim Berners-Lee,² is the future of the Web—and there is now evidence that he could be right. Biologists, for example, are embracing the Semantic Web as a way to help them organize the massive amounts of data they generate, and to speed scientific progress by sharing workflows. Web-scale knowledge representation leads to interesting new research questions: The EU’s Large Knowledge Collider (LarKC) project³ is particularly interesting here, as they are exploring cognitively inspired reasoning approaches, for flexibility and scaling.
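For readers unfamiliar with the substrate, Semantic Web data ultimately reduces to subject–predicate–object triples. The toy in-memory version below (with hypothetical data and names; real systems use RDF triple stores queried with SPARQL) shows the kind of pattern-based retrieval involved.

    # Toy sketch of Semantic-Web-style data: subject-predicate-object
    # triples plus wildcard pattern matching. The data and names are
    # hypothetical; real systems use RDF triple stores and SPARQL.
    triples = {
        ("GeneA", "regulates", "GeneB"),
        ("GeneB", "expressedIn", "liver"),
        ("GeneA", "expressedIn", "liver"),
    }

    def match(s=None, p=None, o=None):
        # None acts as a wildcard in the pattern.
        return [t for t in triples
                if s in (None, t[0]) and p in (None, t[1]) and o in (None, t[2])]

    print(match(p="expressedIn", o="liver"))  # everything expressed in liver
    print(match(s="GeneA"))                   # everything known about GeneA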

3.5. Integrated intelligent systems

One of the most exciting frontiers is combining techniques from multiple areas of AI to build systems that achieve broader slices of intelligent behavior. For example, King et al. (2009) describe a robot scientist that automatically generated hypotheses about a biological system and then conducted the experiments to test them, leading to a new discovery in biology.

3.6. Physically grounded AI

Using physical sensors and actuators is also receiving considerable attention. Unfortunately, today’s mechanical actuators are expensive, fussy, and frail compared to biological systems. Moreover, today’s robotic sensors are far cruder than biological sensors. Nevertheless, hardy pioneers have created interesting systems that perform well in the world, as illustrated by the winning of the DARPA Grand Challenge road race in only its second year.

4. Parallel but diverging paths

In some ways AI and Cognitive Science remain fellow travelers, while in others they have diverged considerably. Let us examine the 2008 AAAI proceedings as a representative sample, for concreteness.

There are many shared affections, albeit with different foci: Interest in natural language has surged at the Society’s conference in recent years, and it represents about 12% of the papers in the 2008 AAAI proceedings. But three-quarters of those papers were in the AI and the Web track, which is not cognitively oriented. Bayesian techniques are currently popular in this Society and were mentioned in about 24% of the AAAI papers. Logic, by contrast, was mentioned in about 35% of the AAAI papers, yet papers using logic are rare at the Society’s meetings.

A number of topics that are well represented at the Society’s conference are virtually absent at AAAI. Neural nets, still popular in CogSci, are a prime example. Only six papers (2%) in AAAI08 mentioned neural nets. This may seem surprising given the ubiquity of statistical learning in AI today. Neural nets were popular a decade or two ago, but they have been supplanted by other statistical learning techniques. Cognitive architectures, a mainstay in CogSci, receive little attention at AAAI. Only five papers (2%) involved cognitive architectures in 2008: Three used SOAR, one used ACT-R, and one used ICARUS. Situated cognition never had much of a presence at AAAI, and I could find no mention of it in the 2008 proceedings. Embodied cognition fares only slightly better: one paper, or 0.3%, mentions it, despite the strong interest in physically embodied AI.

5. The why of AI’s trajectory

What accounts for these changes? A major factor has been the elimination of many resource constraints: Computing power, representational resources, algorithmic/software resources, and data resources. Let us examine each in turn.

5.1. Computing resources

Computing power shapes what is possible in AI research. When Evans (1968) did his pioneering ANALOGY program, he used punch cards. The program was divided into two phases, because otherwise it would not fit on the IBM mainframe. As Table 1 indicates, things were better by the 1970s. These figures are for a mainframe that was so expensive that typically one was shared by an entire university; only a handful of laboratories had their own. One cannot buy a cell phone today with a CPU this slow. Today’s workstations, which cost three orders of magnitude less, are three orders of magnitude faster and have three to four orders of magnitude more RAM. (Slow but vast external storage has grown even faster.) By using a cluster, you can harness 10–10⁵ such machines in your systems.

Table 1
Computing power, then and now

                1970s Mainframe    2009 Workstation    Scale up
Speed           25 MHz             3 GHz               1,200
RAM             <1.2 MB            2–8 GB              18,000
No. of users    10–25              1                   —

This has completely changed the scale of what can be done in AI research. For example, in one of its early experiments in 2007, Powerset automatically parsed the entire Wikipedia into a semantic representation in just 2 days.

5.2. Representation resources

Bobrow’s (1968) STUDENT system, which solved algebra word problems, had 52 facts in its knowledge base. By the 1980s, researchers were working with knowledge bases that ranged from 100 to a few thousand facts. Thanks to sustained long-term efforts, the picture has now completely changed: Large-scale representation resources exist that researchers can simply pick up and use. Currently, the most popular are as follows:

  • WordNet (Miller, 1995): WordNet contains on the order of 155,000 words, whose meanings are characterized by clustering into 10⁵ synsets.
  • VerbNet (Kipper, Korhonen, Ryant, & Palmer, 2006): VerbNet contains over 5,300 verb lemmas.
  • FrameNet (Baker, Fillmore, & Lowe, 1998): FrameNet has 10,000 entries for verbs, nouns, and adverbs, using Fillmore’s Frame Semantics.
  • OpenCyc/ResearchCyc (Lenat, 1995): The Cyc knowledge base systems contain on the order of 10⁶ facts, which can be used with their reasoning engine or extracted for use in other reasoners.

WordNet is perhaps the most widely used resource in natural language research today. VerbNet is seeing wider usage, given the growing research interest in semantics. ResearchCyc and OpenCyc have been used by a variety of researchers, both as components in larger systems and as sources of knowledge for representation and reasoning experiments.
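Getting started with these resources is now trivial. For instance, assuming the NLTK library is installed and its WordNet corpus has been downloaded (via nltk.download('wordnet')), a few lines suffice to explore synsets:

    # Querying WordNet through NLTK (assumes nltk is installed and the
    # wordnet corpus has been fetched with nltk.download('wordnet')).
    from nltk.corpus import wordnet as wn

    # The first few senses of "bank", with their glosses.
    for synset in wn.synsets("bank")[:3]:
        print(synset.name(), "-", synset.definition())

    # Hypernyms: what kind of thing is a dog?
    dog = wn.synsets("dog")[0]
    print([s.name() for s in dog.hypernyms()])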

These resources have changed the nature of what research can be done. There are still many investigations where small sets of simple hand-coded representations are reasonable to use. But now there are fewer excuses for not testing one’s ideas at larger scales.

We are at the beginning of a positive feedback loop. These representation resources are fueling efforts to accumulate yet larger bodies of formally represented knowledge, through learning by reading, vision and robotics, and interacting with people via sketching and games. In the near future, it would not be surprising to see knowledge bases reach 10⁷–10⁹ formally represented facts. This will be a qualitative change: The hand-generated resources above focus on very general knowledge, to make each hour of human labor count the most. The new techniques for capturing knowledge are focusing more on specifics, fleshing out at least some of the stuff of experience. That in turn leads to some intriguing new modeling possibilities, which we return to below.

5.3. Algorithmic/software resources

Artificial Intelligence researchers are benefiting from the ability to build upon the software of others. A cornucopia of systems, modules, toolkits, and frameworks is available. These enable researchers to run experiments and to build new kinds of systems without starting from scratch. Some popular examples⁴ include the WEKA toolkit in machine learning, the Collins parser, the SHOP hierarchical task network planner, and the OpenCyc reasoning system.

Here, too, positive feedback loops are forming. For instance, CogSketch is a publicly available sketch understanding system being developed as a cognitive simulation, as a means of gathering data in behavioral experiments, and as a platform for sketch-based educational software. These three goals feed on each other: for education, one needs the ability to assess sketches in a human-like manner to provide feedback, for example. CogSketch incorporates knowledge from OpenCyc and a model of qualitative spatial reasoning, as well as models of analogical matching and retrieval.

5.4. Data resources

Humans swim in a sea of data, with sensory systems that are marvels compared to non-biological systems. Even so, available technology is improving. In the 1970s, the MIT AI lab had one camera for years, which cost about $100K in 1970 dollars. It was lower quality than today’s cell phone cameras. Image processing remains extraordinarily demanding, which is a bottleneck. Massively multicore CPUs and GPUs may finally change this. Mobile robots are becoming commodities, but sensing for manipulation and compliant limbs are still far from mammalian capabilities. Many of these problems are problems of materials and energy storage, which, unlike computing, are not improving exponentially. Therefore, it is hard to predict how fast progress will be in this area.

As noted above, the Web has changed the nature of computational linguistics by providing vast quantities of materials, ranging from carefully written online books to the rough-and-tumble cacophony of the blogosphere. There is text galore, and making sense of it all is an exciting challenge, as outlined below.

6. The next 30 years

The next 30 years are going to be extremely exciting for AI researchers. This period will see programs that approach—and possibly reach—human-level artificial intelligence. By that I mean software organisms that operate flexibly in a world, communicating and working with us via language and other modalities, learning continually as they operate. My bet is that such systems will be made possible by insights from cognitive science more broadly, but others are placing quite different bets.

From a cognitive science perspective, this will happen by creating larger-scale cognitive simulations, a practice I call macromodeling. Most current cognitive simulations focus on one process in isolation. Inputs are all hand generated, and outputs are hand evaluated. Although such simulations can be useful for modeling a local phenomenon, they often do not scale to larger phenomena: They do not deal with data beyond a narrow range, nor can they be used as components in a larger model. The goal of macromodeling is to capture broader swaths of an organism’s behavior.⁵ Macromodeling focuses on larger units of analysis, where most of the inputs to constituent simulation models are automatically generated and their outputs are used by other parts of the larger-scale model. Next we consider three examples of macromodeling: learning by reading, social robotics, and modeling conceptual change. None of these efforts would have been possible 10 years ago.

6.1. Learning by reading

One way people learn about the world is by reading. Modeling this ability requires integrating natural language, knowledge representation, and machine learning. Three projects exemplify progress in this area. Learning Reader (Forbus et al., 2007) uses texts with simplified syntax, similar to what children might receive, focusing on learning about the world from reading. It ruminates on what it has read, asking itself questions offline, leading to better performance. TextRunner (Banko & Etzioni, 2008) focuses on accumulating statements that are akin to binary predicates, with word clusters instead of predicates or arguments. This provides a more targeted form of Web search and a simple form of question answering, for example, one can ask “<> loves Obama” and get back answers like “Hollywood” and “Ben Affleck.” TextLearner (Matuszek et al., 2005) uses its existing knowledge about types of entities (e.g., political figures) to detect gaps in its knowledge. It fills those gaps by using natural language generation to formulate Web search queries, which it then combines through a refinement process, ultimately presenting English glosses of what it has learned for vetting by human editors.
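To convey the flavor of the extraction side, here is a deliberately crude sketch in the spirit of the “<> loves Obama” example. The real TextRunner uses a trained extractor over large Web corpora; this toy only handles “X verb Y.” sentences with a fixed verb list, and every name in it is hypothetical.

    # Crude caricature of open information extraction (illustrative only;
    # TextRunner itself uses a trained extractor over large Web corpora).
    # Pull (arg1, relation, arg2) tuples out of simple "X verb Y." sentences.
    import re

    PATTERN = re.compile(r"([A-Z][\w ]*?) (loves|founded|acquired) ([A-Z][\w ]*?)\.")
    text = ("Hollywood loves Obama. Ben Affleck loves Obama. "
            "Google acquired YouTube.")

    tuples = PATTERN.findall(text)
    print(tuples)

    # A query in the spirit of "<> loves Obama":
    print([a1 for (a1, rel, a2) in tuples if rel == "loves" and a2 == "Obama"])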

6.2. Social robotics

One way to study how people interact is to create robots that serve as interlocutors (Breazeal & Scassellati, 2002). With robots, one can conduct ablation studies without moral hazard, and gather data on what capacities are necessary for particular kinds of interactions. For example, self-recognition of a robot’s body parts, either directly or in a mirror, can be learned via constructing dynamic Bayesian models (Gold & Scassellati, 2009). Softbots in virtual environments, provided by today’s computer games, provide a valuable laboratory setting for studying how language is grounded in perception and action (Gorniak & Roy, 2007). Moreover, it is now possible to create systems that combine vision, speech, and dialog modeling that can interact with multiple participants, in real time, in natural settings (Bohus & Horvitz, 2009), opening up vast frontiers.
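The flavor of such models can be conveyed with a toy Bayesian update, a sketch in the spirit of (not a reimplementation of) the self-recognition work: the robot tracks whether a visual region moves when and only when its motors are active, and updates its belief that the region is part of its own body. The likelihood numbers below are assumptions chosen for illustration.

    # Toy Bayesian self-recognition sketch; the likelihoods are invented
    # for illustration. Hypothesis "self": the region moves almost exactly
    # when the motors are active. Hypothesis "other": it moves independently.
    P_MOVE = {  # P(region moves | motors on?, hypothesis)
        ("self", True): 0.95, ("self", False): 0.05,
        ("other", True): 0.50, ("other", False): 0.50,
    }

    def update(p_self, motors_on, moved):
        # One Bayes update of P(self) from a (motor, motion) observation.
        def likelihood(h):
            return P_MOVE[(h, motors_on)] if moved else 1 - P_MOVE[(h, motors_on)]
        num = likelihood("self") * p_self
        return num / (num + likelihood("other") * (1 - p_self))

    p = 0.5  # uninformative prior
    # The region is observed to move exactly when the motors are on.
    for motors_on in [True, False, True, True, False]:
        p = update(p, motors_on, moved=motors_on)
    print(round(p, 3))  # about 0.961: probably the robot's own body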

6.3. Modeling conceptual change

One of the deepest mysteries of human cognition is how we create powerful conceptual models of the world by experiencing the world and our culture (Carey, 2009; Nersessian, 2008). Computational models of conceptual change (Esposito, Semeraro, Fanizzi, & Ferilli, 2000; Neri, Saitta, & Tiberghien, 1997) and scientific discovery (Thagard, 1993) have been hampered by two difficulties. First, hand generation of stimuli can lead to tailorability issues. Second, the number of stimuli that need to be encoded for plausible testing is daunting. These remain difficult challenges. Ideally, encoding would be fully automatic, based on vision and language, and models could conduct their own experiments in the physical world. This ideal remains far off, but now there are two promising approximations. The first is to use virtual worlds to simplify the domains, encoding, and manipulation. The second is to use semiautomatic encoding methods on real-world materials. For example, as progress in learning by reading shows, reliable understanding can be achieved for simplified English. Creating simplified English from natural texts can be done more easily, and with less tailorability, than hand encoding formal representations. Similarly, progress in sketch understanding (Forbus, Usher, Lovett, Lockwood, & Wetzel, 2008) enables visual and spatial stimuli to be more easily encoded. Sketches and language can be used to define “comic strips” that provide high-level descriptions of behaviors for simulations of concept learning and conceptual change (Friedman & Forbus, 2009). Such descriptions are easier to produce and have reduced tailorability.

7. Conclusion

Many AI researchers believe that understanding how people and other animals work provides valuable clues towards understanding intelligence, in all of its potential forms. And many researchers in other areas of cognitive science find value in computational models. Symbolic AI still provides the best way to model complex knowledge structures and the processes that operate over them to achieve capabilities that are central to human cognition. AI now combines symbolic methods with statistical methods, exploring how to reason and learn with vast amounts of data, ranging from sensors and robots to the World Wide Web. It is unfortunate that there is so little interaction currently between the Society and the AI community. Here are some concrete suggestions for improving the situation:

  • Treat computational and representational requirements of tasks as an important form of evidence. Such evidence can be just as valid and valuable as constraints from behavioral or biological research.
  • Adopt broader criteria for what constitutes human data. As cognitive psychology has come to dominate the Society, review criteria have narrowed. Not every paper needs to contain a new human-subjects experiment, nor must papers be limited to phenomena introduced by cognitive psychologists. Laboratory experiments are not required to determine that people can learn by reading, for example. As Cassimatis, Bello, and Langley (2008) argued, one measure of simulations must be raw ability: whether they can handle, even in principle, the tasks an organism faces. While detailed modeling of behavioral laboratory data remains a useful approach, a variety of other valid methods for testing simulations exist, including human-normed tests, panels of judges, and examining misconceptions.

Our Society may have founded the field of Cognitive Science, but it is no longer co-extensive with it. This is a cause for celebration: The idea has taken hold. However, it also means that there are now other options for AI researchers interested in Cognitive Science. Given that computation as a theoretical language for cognition is at the heart of the Cognitive Science enterprise, it is vital for our continued health as a Society in the next 30 years to rebuild the deep connection with Artificial Intelligence that helped get us off to such a strong start.

Footnotes

¹ International Joint Conferences on Artificial Intelligence, a major AI conference.

² The inventor of the World Wide Web.

³ http://www.larkc.eu/

⁴ No URLs are included because they can be found by straightforward Web searches.

⁵ The integration constraint (Forbus, 2001).

Acknowledgments

Thanks to Dedre Gentner, Michael Witbrock, Kate Lockwood, Martha Palmer, and Scott Friedman for helpful comments.
