### 1. Introduction


The question of how semantic knowledge is acquired, organized, and ultimately used in language processing and understanding has been a topic of great debate in cognitive science. This is hardly surprising, as the ability to retrieve and manipulate meaning influences many cognitive tasks that extend far beyond language processing. Examples include memory retrieval (Deese, 1959; Raaijmakers & Shiffrin, 1981), categorization (Estes, 1994; Nosofsky, 1984, 1986), problem solving (Holyoak & Koh, 1987; Ross, 1987, 1989), reasoning (Heit & Rubinstein, 1994; Rips, 1975), and learning (Gentner, 1989; Ross, 1984).

Previous accounts of semantic representation fall under three broad families, namely semantic networks, feature-based models, and semantic spaces. (For a fuller account of the different approaches and issues involved, we refer the interested reader to Markman, 1998.) Semantic networks (Collins & Quillian, 1969) represent concepts as nodes in a graph. Edges in the graph denote semantic relationships between concepts (e.g., dog is-a mammal, dog has tail) and word meaning is expressed by the number and type of connections to other words. In this framework, word similarity is a function of path length—semantically related words are expected to have shorter paths between them (e.g., *poodle* will be more similar to *dog* than to *animal*). Semantic networks constitute a somewhat idealized representation that abstracts away from real-world usage—they are traditionally hand-coded by modelers who decide a priori which relationships are most relevant in representing meaning. More recent work (Steyvers & Tenenbaum, 2005) creates a semantic network from word association norms (Nelson, McEvoy, & Schreiber, 1999); however, these can only represent a small fraction of the vocabulary of an adult speaker.

An alternative to semantic networks is the idea that word meaning can be described in terms of feature lists (Smith & Medin, 1981). Theories tend to differ with respect to their definition of features. In many cases these are created manually by the modeler (e.g., Hinton & Shallice, 1991). In other cases, the features are obtained by asking native speakers to generate attributes they consider important in describing the meaning of a word (e.g., Andrews, Vigliocco, & Vinson, 2009; McRae, de Sa, & Seidenberg, 1997). This allows the representation of each word by a distribution of numerical values over the feature set. Admittedly, norming studies have the potential to reveal which dimensions of meaning are psychologically salient. However, a number of difficulties arise when working with such data (Murphy & Medin, 1985; Sloman & Rips, 1998). For example, the number and types of attributes generated can vary substantially as a function of the amount of time devoted to each word. There are many degrees of freedom in the way that responses are coded and analyzed. And multiple subjects are required to create a representation for each word, which in practice limits elicitation studies to a small lexicon.

A third popular tradition of studying semantic representation has been driven by the assumption that word meaning can be learned from the linguistic environment. Words that are similar in meaning, for example, *boat* and *ship*, tend to occur in contexts of similar words, such as *sail*, *sea*, *sailor*, and so on. *Semantic space* models capture meaning *quantitatively* in terms of simple co-occurrence statistics. Words are represented as vectors in a high-dimensional space, where each component corresponds to some co-occurring contextual element. The latter can be words themselves (Lund & Burgess, 1996), larger linguistic units such as paragraphs or documents (Landauer & Dumais, 1997), or even more complex linguistic representations such as *n*-grams (Jones & Mewhort, 2007) and the argument slots of predicates (Grefenstette, 1994; Lin, 1998; Padó & Lapata, 2007). The advantage of taking such a geometric approach is that the similarity of word meanings can be easily quantified by measuring the distance between their vectors, or the cosine of the angle between them. A simplified example of a two-dimensional semantic space is shown in Fig. 1 (semantic spaces usually have hundreds of dimensions).
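As a concrete illustration of the geometric approach, the sketch below builds a toy two-dimensional space (in the spirit of Fig. 1) and compares words by the cosine of the angle between their vectors. The words, contexts, and counts are all invented:

```python
import math

# Invented co-occurrence counts over two hypothetical context words
# ("sea" and "road"); real spaces have hundreds of dimensions.
vectors = {
    "boat": [42.0, 3.0],
    "ship": [38.0, 5.0],
    "car":  [2.0, 40.0],
}

def cosine(u, v):
    """Cosine of the angle between vectors u and v."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Words occurring in similar contexts end up closer in the space.
assert cosine(vectors["boat"], vectors["ship"]) > cosine(vectors["boat"], vectors["car"])
```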

There are a number of well-known semantic space models in the literature. For example, the Hyperspace Analog to Language model (HAL, Lund & Burgess, 1996) represents each word by a vector in which each element corresponds to a weighted co-occurrence value of that word with some other word. Latent Semantic Analysis (LSA, Landauer & Dumais, 1997) also derives a high-dimensional semantic space for words, but it relies on co-occurrence information between words and the passages they occur in. LSA constructs a word–document co-occurrence matrix from a large document collection. Matrix decomposition techniques are usually applied to reduce the dimensionality of the original matrix, thereby rendering it more informative. The dimensionality reduction allows words with similar meaning to have similar vector representations, even if they never co-occurred in the same document.
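The LSA construction just described can be sketched on a toy word–document matrix; the counts are invented, and NumPy's singular value decomposition stands in for the matrix decomposition step:

```python
import numpy as np

# Invented word-document counts (rows: words, columns: documents).
words = ["boat", "ship", "sea", "car"]
X = np.array([
    [2, 0, 0],   # boat: occurs only in document 0
    [0, 2, 0],   # ship: occurs only in document 1
    [1, 1, 0],   # sea:  occurs in documents 0 and 1
    [0, 0, 3],   # car:  occurs only in document 2
], dtype=float)

# Truncated SVD: keep only the k largest singular values.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
word_vectors = U[:, :k] * s[:k]   # reduced word representations

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# "boat" and "ship" never co-occur in the same document, yet their reduced
# vectors are similar because both co-occur with "sea".
i, j, l = words.index("boat"), words.index("ship"), words.index("car")
print(cos(word_vectors[i], word_vectors[j]), cos(word_vectors[i], word_vectors[l]))
```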

Probabilistic topic models (Blei, Ng, & Jordan, 2003; Griffiths, Steyvers, & Tenenbaum, 2007) offer an alternative to semantic spaces based on the assumption that words observed in a corpus manifest some latent structure linked to topics. These models are similar in spirit to LSA: they also operate on large corpora and derive a reduced-dimensionality description of words and documents. Crucially, words are not represented as points in a high-dimensional space but as a probability distribution over a set of topics (corresponding to coarse-grained senses). Each topic is a probability distribution over words, and the content of the topic is reflected in the words to which it assigns high probability. Topic models are *generative*; they specify a probabilistic procedure by which documents can be generated. Thus, to make a new document, one first chooses a distribution over topics. Then for each word in that document, one chooses a topic at random according to this distribution and selects a word from that topic. Under this framework, the problem of meaning representation is expressed as one of statistical inference: Given some data—words in a corpus—infer the latent structure from which it was generated.
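The generative procedure can be sketched directly; the two topics, their word distributions, and the document's topic distribution below are all invented for illustration:

```python
import random

random.seed(0)

# Two invented topics, each a probability distribution over a tiny vocabulary.
topics = {
    "finance": {"money": 0.5, "bank": 0.4, "river": 0.1},
    "nature":  {"river": 0.5, "bank": 0.3, "water": 0.2},
}

def sample(dist):
    """Draw one item from a {item: probability} distribution."""
    r, acc = random.random(), 0.0
    for item, p in dist.items():
        acc += p
        if r < acc:
            return item
    return item

def generate_document(topic_dist, length):
    """The generative story: pick a topic per word, then a word from that topic."""
    doc = []
    for _ in range(length):
        topic = sample(topic_dist)
        doc.append(sample(topics[topic]))
    return doc

# A document whose topic distribution leans heavily toward "finance".
print(generate_document({"finance": 0.8, "nature": 0.2}, 10))
```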

Semantic space models (and the related topic models) have been successful at simulating a wide range of psycholinguistic phenomena, including semantic priming (Griffiths, Steyvers, & Tenenbaum, 2007; Landauer & Dumais, 1997; Lund & Burgess, 1996), discourse comprehension (Foltz, Kintsch, & Landauer, 1998; Landauer & Dumais, 1997), word categorization (Laham, 2000), judgments of essay quality (Landauer, Laham, Rehder, & Schreiner, 1997), synonymy tests (Griffiths et al., 2007; Landauer & Dumais, 1997) such as those included in the Test of English as a Foreign Language (TOEFL), reading times (Griffiths et al., 2007; McDonald, 2000), and judgments of semantic similarity (McDonald, 2000) and association (Denhière & Lemaire, 2004; Griffiths et al., 2007).

Despite their widespread use, these models are typically directed at representing words in isolation, and methods for constructing representations for phrases or sentences have received little attention in the literature. However, it is well known that linguistic structures are *compositional* (simpler elements are combined to form more complex ones). For example, morphemes are combined into words, words into phrases, and phrases into sentences. It is also reasonable to assume that the meaning of sentences is composed of the meanings of individual words or phrases. Much experimental evidence also suggests that semantic similarity is more complex than simply a relation between isolated words. For example, Duffy, Henderson, and Morris (1989) showed that priming of sentence terminal words was dependent not simply on individual preceding words but on their combination, and Morris (1994) later demonstrated that this priming also showed dependencies on the syntactic relations in the preceding context. Additional evidence comes from experiments where target words in sentences are compared with target words in lists or scrambled sentences. Changes in the temporal order of words in a sentence decrease the strength of the related priming effect (Foss, 1982; Masson, 1986; O'Seaghdha, 1989; Simpson, Peterson, Casteel, & Burgess, 1989). For example, Simpson et al. (1989) found relatedness priming effects for words embedded in grammatical sentences (*The auto accident drew a large crowd of people*) but not for words in scrambled stimuli (*Accident of large the drew auto crowd a people*). These findings highlight the role of syntactic structure in modulating priming behavior. They also suggest that models of semantic similarity should ideally handle the combination of semantic content in a syntactically aware manner.

Composition operations can be naturally accounted for within logic-based semantic frameworks (Montague, 1974). Frege's principle of compositionality states that the meaning of a complete sentence must be explained in terms of the meanings of its subsentential parts, including those of its singular terms. In other words, each syntactic operation of a formal language should have a corresponding semantic operation. Problematically, representations in terms of logical formulas are not well suited to modeling similarity quantitatively (as they are based on discrete symbols). On the other hand, semantic space models can naturally measure similarity but are not compositional. In fact, the commonest method for combining the vectors is to average them. While vector averaging has been effective in some applications such as essay grading (Landauer & Dumais, 1997) and coherence assessment (Foltz et al., 1998), it is unfortunately insensitive to word order, and more generally syntactic structure, giving the same representation to any constructions that happen to share the same vocabulary. This is illustrated in the example below taken from Landauer et al. (1997). Sentences (1-a) and (1-b) contain exactly the same set of words, but their meaning is entirely different.

(1) a. It was not the sales manager who hit the bottle that day, but the office worker with the serious drinking problem.

b. That day the office manager, who was drinking, hit the problem sales worker with the bottle, but it was not serious.
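Vector averaging's insensitivity to word order is easy to demonstrate; the two-component word vectors below are invented:

```python
# Invented word vectors; averaging is the composition function.
vecs = {"john": [1.0, 0.0], "loves": [0.5, 0.5], "mary": [0.0, 1.0]}

def average(sentence):
    """Average the vectors of the words in a sentence."""
    words = sentence.lower().split()
    dims = len(next(iter(vecs.values())))
    return [sum(vecs[w][i] for w in words) / len(words) for i in range(dims)]

# Different meanings, identical representations:
assert average("John loves Mary") == average("Mary loves John")
```

The same collapse occurs for any two constructions sharing the same vocabulary, which is precisely the problem sentences (1-a) and (1-b) illustrate.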

The relative paucity of compositional models in the semantic space literature is in marked contrast to work in the connectionist tradition, where much effort has been devoted to the problem of combining or *binding* high-dimensional representations. The construction of higher level structures from low-level ones is fundamental not only to language but to many aspects of human cognition such as analogy retrieval and processing (Eliasmith & Thagard, 2001; Plate, 2000), memory (Kanerva, 1988), and problem solving (Ross, 1989). Indeed, the issue of how to represent compositional structure in neural networks has been a matter of great controversy (Fodor & Pylyshyn, 1988). While neural networks can readily represent single distinct objects, in the case of multiple objects there are fundamental difficulties in keeping track of which features are bound to which objects. For the hierarchical structure of natural language this binding problem becomes particularly acute. For example, simplistic approaches to handling sentences such as *John loves Mary* and *Mary loves John* typically fail to make valid representations in one of two ways. Either there is a failure to distinguish between these two structures because the network fails to keep track of the fact that *John* is subject in one and object in the other, or there is a failure to recognize that both structures involve the same participants because *John* as a subject has a distinct representation from *John* as an object. The literature is littered with solutions to the binding problem (for a detailed overview, see the following section). These include tensor products (Smolensky, 1990), recursive distributed representations (RAAMs, Pollack, 1990), spatter codes (Kanerva, 1988), holographic reduced representations (Plate, 1995), and convolution (Metcalfe, 1990).

In this article, we attempt to bridge the gap in the literature by developing models of semantic composition that can represent the meaning of word combinations as opposed to individual words. Our models are narrower in scope compared with those developed in earlier connectionist work. Our vectors represent words; they are high-dimensional but relatively structured, and every component corresponds to a predefined context in which the words are found. We take it as a defining property of the vectors we consider that the values of their components are derived from event frequencies such as the number of times a given word appears in a given context (Turney & Pantel, 2010).^{1} Having this in mind, we present a general framework for vector-based composition that allows us to consider different classes of models. Specifically, we formulate composition as a function of two vectors and introduce models based on addition and multiplication. We also investigate how the choice of the underlying semantic representation interacts with the choice of composition function by comparing a spatial model that represents words as vectors in a high-dimensional space against a probabilistic model that represents words as topic distributions. We assess the performance of these models directly on a similarity task. We elicit similarity ratings for pairs of adjective–noun, noun–noun, and verb–object constructions and examine the strength of the relationship between similarity ratings and the predictions of our models.
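As a preview of the framework, the additive and multiplicative composition functions just mentioned can be sketched with invented component values:

```python
# Invented context counts for two constituent words.
u = [2.0, 0.0]   # hypothetical vector for the first constituent
v = [1.0, 3.0]   # hypothetical vector for the second constituent

additive = [a + b for a, b in zip(u, v)]         # p_i = u_i + v_i
multiplicative = [a * b for a, b in zip(u, v)]   # p_i = u_i * v_i

print(additive)        # [3.0, 3.0]
print(multiplicative)  # [2.0, 0.0]
```

Note the qualitative difference: addition pools the contexts of both constituents, whereas component-wise multiplication acts like an intersection, zeroing any component that either constituent lacks.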

In the remainder, we review previous research on semantic composition and vector binding models. Next, we describe our modeling framework, present our elicitation experiments, and discuss our results.

### 2. Composition


Compositionality allows languages to construct complex meanings from combinations of simpler elements. This property is often captured in the following principle: The meaning of a whole is a function of the meaning of the parts (Partee, 1995, p. 313). Therefore, whatever approach we take to modeling semantics, representing the meanings of complex structures will involve modeling the way in which meanings combine. Let us express the composition of two constituents, **u** and **v**, in terms of a function acting on those constituents:

- (1) **p** = *f*(**u**, **v**), where **p** denotes the representation of the composed phrase.

Partee (1995, p. 313) suggests a further refinement of the above principle taking the role of syntax into account: The meaning of a whole is a function of the meaning of the parts and of the way they are syntactically combined. We thus modify the composition function in Eq. 1 to account for the fact that there is a syntactic relation *R* between constituents **u** and **v**:

- (2) **p** = *f*(**u**, **v**, *R*)

Unfortunately, even this formulation may not be fully adequate. Lakoff (1977, p. 239), for example, suggests that the meaning of the whole is greater than the meaning of the parts. The implication here is that language users are bringing more to the problem of constructing complex meanings than simply the meaning of the parts and their syntactic relations. This additional information includes both knowledge about the language itself and also knowledge about the real world. Thus, a full understanding of the compositional process involves an account of how novel interpretations are integrated with existing knowledge. Again, the composition function needs to be augmented to include an additional argument, *K*, representing any knowledge utilized by the compositional process:

- (3) **p** = *f*(**u**, **v**, *R*, *K*)

The difficulty in defining compositionality is highlighted by Frege (1884) himself, who cautions never to ask for the meaning of a word in isolation but only in the context of a statement. In other words, it seems that the meaning of the whole is constructed from its parts, and the meaning of the parts is derived from the whole. Moreover, compositionality is a matter of degree rather than a binary notion. Linguistic structures range from fully compositional (e.g., *black hair*), to partly compositional, syntactically fixed expressions (e.g., *take advantage*), in which the constituents can still be assigned separate meanings, to noncompositional idioms (e.g., *kick the bucket*) or multiword expressions (e.g., *by and large*), whose meaning cannot be distributed across their constituents (Nunberg, Sag, & Wasow, 1994).

Despite the foundational nature of compositionality to language, there are significant obstacles to understanding what exactly it is and how it operates. Most significantly, there is the fundamental difficulty of specifying what sort of ‘‘function of the meanings of the parts’’ is involved in semantic composition (Partee, 2004, p. 153). Fodor and Pylyshyn (1988) attempt to characterize this function by appealing to the notion of *systematicity*. They argue that the ability to understand some sentences is intrinsically connected to the ability to understand certain others. For example, no one who understands *John loves Mary* fails to understand *Mary loves John*. Therefore, the semantic content of a sentence is systematically related to the content of its constituents and the ability to recombine these according to a set of rules. In other words, if one understands some sentence and the rules that govern its construction, one can understand a different sentence made up of the same elements according to the same set of rules. In a related proposal, Holyoak and Hummel (2000) claim that in combining parts to form a whole, the parts remain independent and maintain their identities. This entails that *John* has the same independent meaning in both *John loves the girl* and *The boy hates John*.

Aside from the philosophical difficulties in precisely determining what systematicity means in practice (Doumas & Hummel, 2005; Pullum & Scholz, 2007; Spenader & Blutner, 2007), it is worth noting that semantic transparency, the idea that words have meanings which remain unaffected by their context, contradicts Frege's (1884) claim that words only have definite meanings in context. Consider, for example, the adjective *good* whose meaning is modified by the context in which it occurs. The sentences *John is a good neighbor* and *John is a lawyer* do not imply *John is a good lawyer*. In fact, we might expect that some of the attributes of a good lawyer are incompatible with being a good neighbor, such as nit-picking over details, or not giving an inch unless required by law. More generally, the claims of Fodor and Pylyshyn (1988) and Holyoak and Hummel (2000) arise from a preconception of cognition as being essentially symbolic in character. While it is true that the concatenation of any two symbols (e.g., *g* and *l*) will compose into an expression (e.g., *gl*), within which both symbols maintain their identities, we cannot always assume that the meaning of a phrase is derived by simply concatenating the meaning of its constituents. Although the phrase *good lawyer* is constructed by concatenating the symbols *good* and *lawyer*, the meaning of *good* will vary depending on the nouns it modifies.

Interestingly, Pinker (1994, p. 84) discusses the types of functions that are *not* involved in semantic composition while comparing languages, which he describes as *discrete combinatorial systems*, against blending systems. He argues that languages construct an unlimited number of completely distinct combinations with an infinite range of properties. This is made possible by creating novel, complex meanings which go beyond those of the individual elements. By contrast, for a blending system the properties of the combination lie between the properties of its elements, which are lost in the average or mixture. To give a concrete example, a *brown cow* does not identify a concept intermediate between *brown* and *cow* (Kako, 1999, p. 2). Thus, composition based on averaging or blending would produce greater generality rather than greater specificity.

#### 2.1. Logic-based view

Within symbolic logic, compositionality is accounted for elegantly by assuming a tight correspondence between syntactic expressions and semantic form (Blackburn & Bos, 2005; Montague, 1974). In this tradition, the meaning of a phrase or sentence is its truth conditions, which are expressed in terms of truth relative to a model.^{2} In classical Montague grammar, there is a uniform semantic type for each syntactic category (e.g., sentences express propositions; nouns and adjectives express properties of entities; verbs express properties of events). Most lexical meanings are left unanalyzed and treated as primitive. In this framework, the proper noun *John* is represented by the logical symbol *JOHN* denoting a specific entity, whereas a verb like *wrote* is represented by a function from entities to propositions, expressed in lambda calculus as *λ**x*.*WROTE*(*x*). Applying this function to the entity *JOHN* yields the logical formula *WROTE*(*JOHN*) as a representation of the sentence *John wrote*. It is worth noting that the entity and predicate within this formula are represented symbolically, and that the connection between a symbol and its meaning is an arbitrary matter of convention.
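A loose programmatic analogue of this function application, with the logical symbols from the example rendered as strings and tuples (purely illustrative):

```python
# JOHN is an unanalyzed symbolic constant; the verb is a function from
# entities to (symbolic) propositions, mirroring λx.WROTE(x).
JOHN = "JOHN"

def wrote(x):
    return ("WROTE", x)

# Function application yields the representation of "John wrote".
assert wrote(JOHN) == ("WROTE", "JOHN")
```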

On the one hand, the symbolic nature of logical representations is advantageous as it allows composition to be carried out syntactically. The laws of deductive logic in particular can be defined as syntactic processes which act irrespective of the meanings of the symbols involved. On the other hand, abstracting away from the actual meanings may not be fully adequate for modeling semantic composition. For example, adjective–noun phrases are represented in terms of predicate conjunction: *male lawyer* corresponds to *λ**x*.*MALE*(*x*) ∧ *LAWYER*(*x*). This approach cannot, however, handle the context-sensitive adjectives discussed above. *John is a good lawyer* is not equivalent to the conjunction of *John is good* and *John is a lawyer*. More generally, modeling semantic composition means modeling the way in which meanings combine, and this requires that words have representations which are richer than single, arbitrary symbols.

#### 2.2. Connectionism

Connectionist models of cognition (see among others Elman et al., 1996; Rumelhart, McClelland, & the PDP Research Group, 1986) can be seen as a response to the limitations of traditional symbolic models. The key premise here is that knowledge is represented not as discrete symbols that enter into symbolic expressions, but as patterns of activation distributed over many processing elements. These representations are distributed in the sense that any single concept is represented as a pattern, that is, vector, of activation over many elements (nodes or units) that are typically assumed to correspond to neurons or small collections of neurons.

Much effort in the literature has been invested in enhancing the representational capabilities of connectionist models with the means to combine a finite number of symbols into a much larger, possibly infinite, number of specific structures. The key property of symbolic representations that connectionist models attempt to emulate is their ability to bind one representation to another. The fundamental operation underlying binding in symbolic systems is the concatenation of symbols according to certain syntactic processes. And crucially the results of this operation can be broken down into their original constituents. Thus, connectionists have sought ways of constructing complex structures by binding one distributed representation to another in a manner that is reversible.

Smolensky (1990), for example, proposed the use of tensor products as a means of binding one vector to another to produce structured representations. The tensor product **u** ⊗ **v** is a matrix whose components are all the possible products *u*_{i}*v*_{j} of the components of vectors **u** and **v**. Fig. 2 illustrates the tensor product for two three-dimensional vectors (*u*_{1},*u*_{2},*u*_{3}) ⊗ (*v*_{1},*v*_{2},*v*_{3}). A major difficulty with tensor products is their dimensionality, which grows exponentially as more constituents are composed (the tensor product of an *m*-dimensional and an *n*-dimensional vector has *m* × *n* components, and each further binding multiplies the dimensionality again).
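The construction can be sketched with NumPy, whose `outer` function computes exactly this matrix of pairwise products:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# Tensor product u ⊗ v: entry (i, j) is u_i * v_j.
T = np.outer(u, v)

print(T.shape)   # (3, 3): an m-by-n matrix, not an n-dimensional vector
```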

To overcome this problem, other techniques have been proposed in which the binding of two vectors results in a vector which has the same dimensionality as its components. Holographic reduced representations (Plate, 1991) are one implementation of this idea where the tensor product is projected onto the space of the original vectors, thus avoiding any dimensionality increase. The projection is defined in terms of *circular convolution*, a mathematical function that compresses the tensor product of two vectors. The compression is achieved by summing along the transdiagonal elements of the tensor product. Noisy versions of the original vectors can be recovered by means of *circular correlation*, which is the approximate inverse of circular convolution. The success of circular correlation crucially depends on the components of the *n*-dimensional vectors **u** and **v** being real numbers, randomly distributed with mean 0 and variance 1/*n*. Binary spatter codes (Kanerva, 1988, 2009) are a particularly simple form of holographic reduced representation. Typically, these are random bit strings, that is, binary vectors of high dimensionality (e.g., *N* = 10,000). Compositional representations are synthesized from parts or chunks. Chunks are combined by binding, which amounts to taking the exclusive or (XOR) of two vectors. Here, only the transdiagonal elements of the tensor product of two vectors are kept and the rest are discarded.
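Both operations can be sketched with the fast Fourier transform, a standard way of computing circular convolution; the vectors below are drawn with mean 0 and variance 1/*n*, as the text requires:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1024
u = rng.normal(0.0, np.sqrt(1.0 / n), n)
v = rng.normal(0.0, np.sqrt(1.0 / n), n)

def cconv(a, b):
    """Circular convolution: compresses the tensor product of a and b."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):
    """Circular correlation: the approximate inverse of circular convolution."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

bound = cconv(u, v)        # bind u to v; dimensionality stays n
v_noisy = ccorr(u, bound)  # recover a noisy version of v

# The recovered vector resembles v far more than an unrelated random one,
# which is why a clean-up memory can find the original.
print(cos(v_noisy, v), cos(v_noisy, rng.normal(0.0, np.sqrt(1.0 / n), n)))
```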

From a computational perspective, both spatter codes and holographic reduced representations can be implemented efficiently^{3} and the dimensionality of the resulting vector does not change. The downside is that operations like circular convolution are a form of lossy compression that introduces noise into the representation. To retrieve the original vectors from their bindings, a *clean-up memory* process is usually employed where the noisy vector is compared with all component vectors in order to find the closest one.

Tensors and their relatives can indeed represent relations (e.g., *love*(*x*,*y*)) and role–filler bindings (e.g., in *loves(John, Mary)* the *lover* role is bound to *John* and the *beloved* role is bound to *Mary*) in a distributed fashion. However, Holyoak and Hummel (2000) claim that this form of binding violates role–filler independence. In a truly compositional system, complex structures gain meaning from the simpler parts from which they are formed *and* the simpler components remain independent, that is, preserve their meaning (Doumas & Hummel, 2005; Doumas, Hummel, & Sandhofer, 2008). Doumas and Hummel (2005) propose a model of role–filler binding based on synchrony of neural firing. Vectors representing relational roles fire in synchrony with vectors representing their fillers and out of synchrony with other role–filler bindings. These ideas are best captured in LISA, a neural network that implements symbolic structures in terms of distributed representations. Crucially, words and relations are represented by features (e.g., *human*, *adult*, *male*) which, albeit more informative than binary vectors, raise issues regarding their provenance and the scalability of the models based on them (see the discussion in the Introduction).

#### 2.3. Semantic spaces

The idea of representing word meaning in a geometrical space dates back to Osgood, Suci, and Tannenbaum (1957), who used elicited similarity judgments to construct semantic spaces. Subjects rated concepts on a series of scales whose endpoints represented polar opposites (e.g., *happy*–*sad*); these ratings were further processed with factor analysis, a dimensionality reduction technique, to uncover latent semantic structure. In this study, meaning representations were derived *directly* from psychological data, thereby allowing the analysis of differences across subjects. Unfortunately, multiple subject ratings are required to create a representation for each word, which in practice limits the semantic space to a small number of words.

Building on this work and the well-known vector space model in information retrieval (Deerwester, Dumais, Landauer, Furnas, & Harshman, 1990; Salton, Wong, & Yang, 1975), more recent semantic space models, such as LSA (Landauer & Dumais, 1997) and HAL (Lund & Burgess, 1996), overcome this limitation by constructing semantic representations *indirectly* from real language corpora. A variety of such models have been proposed and evaluated in the literature. Despite their differences, they are all based on the same premise: Words occurring within similar contexts are semantically similar (Harris, 1968). Semantic space models extract from a corpus a set of counts representing the occurrences of a target word *t* in the specific context *c* of choice and then map these counts into the components of a vector in some space. For example, Bullinaria and Levy (2007) consider a range of component types, the simplest being to transform the raw frequencies into conditional probabilities, *p*(*c*_{i} | *t*). They also consider components based on functions of these probabilities, such as the ratio of the conditional probability of the context to its overall probability, or the point-wise mutual information between context and target. An issue here concerns the number of components the vectors should have, or which contexts should be used in constructing the vectors. Often, the most frequent contexts are used, as rarer contexts yield unreliable counts. Dimensionality reduction techniques can also be used to project high-dimensional vectors onto a lower dimensional space (Blei et al., 2003; Hofmann, 2001; Landauer & Dumais, 1997).
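The two simplest component types mentioned above can be sketched from invented counts for a hypothetical target word:

```python
import math

# Invented counts: co-occurrences of the target t = "boat" with three
# contexts, and the overall corpus frequency of each context.
counts = {"sail": 50, "sea": 30, "the": 920}
context_totals = {"sail": 100, "sea": 120, "the": 50000}
total = sum(context_totals.values())
target_total = sum(counts.values())

# Component type 1: conditional probabilities p(c_i | t).
p_c_given_t = {c: nc / target_total for c, nc in counts.items()}

# Component type 2: point-wise mutual information,
# PMI(c, t) = log( p(c | t) / p(c) ).
pmi = {c: math.log(p_c_given_t[c] / (context_totals[c] / total)) for c in counts}

# A frequent but uninformative context ("the") has a high raw count yet a
# near-zero PMI, while "sail" is strongly associated with the target.
print(p_c_given_t)
print(pmi)
```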

Semantic space models resemble the representations used in the connectionist literature. Words are represented as vectors and their meaning is distributed across many dimensions. Crucially, the vector components are neither binary nor randomly distributed. They correspond to co-occurrence counts, and it is assumed that differences in meaning arise from differences in the distribution of these counts across contexts. That is not to say that high-dimensional randomly distributed representations are incompatible with semantic spaces. Kanerva, Kristoferson, and Holst (2000) propose the use of random indexing as an alternative to the computationally costly singular value decomposition employed in LSA. The procedure also builds a word–document co-occurrence matrix, except that each document no longer has its own column. Instead, it is assigned a small number of columns at random (the document's random index). Thus, each time a word occurs in the document, the document's random index vector is added to the row corresponding to that word.
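Random indexing can be sketched as follows; the documents are invented, and the sparse ternary index vectors are one common choice of random index (implementations differ in such details):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 100   # reduced dimensionality, far below the number of documents

def random_index(n_nonzero=4):
    """A sparse random index vector with a few +1/-1 entries."""
    v = np.zeros(dim)
    positions = rng.choice(dim, size=n_nonzero, replace=False)
    v[positions] = rng.choice([-1.0, 1.0], size=n_nonzero)
    return v

documents = [["boat", "sail", "sea"], ["ship", "sail", "sea"], ["car", "road"]]
doc_vectors = [random_index() for _ in documents]

# Each time a word occurs in a document, add that document's random index
# to the word's row; no full word-by-document matrix is ever built.
word_vectors = {}
for doc, dvec in zip(documents, doc_vectors):
    for w in doc:
        word_vectors.setdefault(w, np.zeros(dim))
        word_vectors[w] = word_vectors[w] + dvec
```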

Random vectors have also been employed in an attempt to address a commonly raised criticism against semantic space models, namely that they are inherently agnostic to the linguistic structure of the contexts in which a target word occurs. In other words, most of these models treat these contexts as a structureless bag of words. Jones and Mewhort (2007) propose a model that makes use of the linear order of words in a context. Their model represents words by high-dimensional holographic vectors. Each word is assigned a random^{4} *environmental* vector. Contextual information is stored in a lexical vector, which is computed with the aid of the environmental vectors. Specifically, a word's lexical vector is the superposition of the environmental vectors corresponding to its co-occurring words in a sentence. Order information is the sum of all *n*-grams that include the target word. The *n*-grams are encoded with the aid of a place-holder environmental vector Φ and circular convolution (Plate, 1995). The order vector is finally added to the lexical vector to jointly represent structural and contextual information. Despite the fact that these vectors contain information about multiword structures in the contexts of target words, they are, nonetheless, still fundamentally representations of individual isolated target words. Circular convolution is only used to bind environmental vectors, which, being random, contain no semantic information. To make a useful semantic representation of a target word, the vectors representing its contexts are summed over, producing a vector which is no longer random and for which circular convolution is no longer optimal.

Sahlgren, Holst, and Kanerva (2008) provide an alternative to convolution by showing that order information can also be captured by permuting the vector coordinates. Other models implement more sophisticated versions of context that go beyond the bag-of-words model, without, however, resorting to random vectors. For example, they do so by defining context in terms of syntactic dependencies (Grefenstette, 1994; Lin, 1998; Padó & Lapata, 2007) or by taking into account relational information about how roles and fillers combine to create specific factual knowledge (Dennis, 2007).
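
The permutation idea can be sketched in a few lines of Python. The environmental vectors below are hypothetical toy values; the point is only that rotating each neighbour's vector by its signed distance from the target encodes position, so that different word orders yield different order vectors:

```python
def permute(vec, k):
    """Rotate coordinates k places (the permutation Pi^k, modulo the length)."""
    k %= len(vec)
    return vec[-k:] + vec[:-k]

def order_vector(target, sentence, env):
    """Sum neighbours' environmental vectors, each rotated by its signed
    distance from the target, so that word position is encoded as rotation."""
    t = sentence.index(target)
    dim = len(next(iter(env.values())))
    out = [0] * dim
    for pos, word in enumerate(sentence):
        if pos == t:
            continue
        out = [a + b for a, b in zip(out, permute(env[word], pos - t))]
    return out

# hypothetical environmental vectors
env = {"the": [1, 0, 0, 0], "dog": [0, 1, 0, 0], "ran": [0, 0, 1, 0]}
forward  = order_vector("dog", ["the", "dog", "ran"], env)
backward = order_vector("dog", ["ran", "dog", "the"], env)
```

Unlike a bag-of-words sum, `forward` and `backward` differ even though the same three words are involved.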

So far the discussion has centered on the creation of semantic representations for individual words. As mentioned earlier, the composition of vector-based semantic representations has received relatively little attention. An alternative is not to compose at all but rather create semantic representations of phrases in addition to words. If a phrase is frequent enough, then it can be treated as a single target unit, and a representation of its occurrences across a range of contexts can be constructed in the same manner as described above. Baldwin, Bannard, Tanaka, and Widdows (2003) apply this method to model the decomposability of multiword expressions such as noun compounds and phrasal verbs. Taking a similar approach, Bannard, Baldwin, and Lascarides (2003) develop a vector space model for representing the meaning of verb–particle constructions. In the limit, such an approach is unlikely to work as semantic representations for constructions that go beyond two words will be extremely sparse.

A different type of semantic space is proposed in Lin and Pantel (2001) (see also Turney and Pantel, 2010). They create a *pair–pattern* co-occurrence matrix, where row vectors correspond to pairs of words (e.g., *mason*:*stone*, *carpenter*:*wood*) and column vectors to patterns attested with these pairs (e.g., *X works with Y*, *X cuts Y*). A pattern-based semantic space allows the measurement of pattern similarity (e.g., *X solves Y* is similar to *Y is solved by X* or *X found a solution to Y*) as well as the similarity of semantic relations between word pairs (e.g., *mason*:*stone* shares the same semantic relation with *carpenter*:*wood*). Approaches based on pair–pattern matrices are not compositional; they capture the meaning of word pairs and clauses as a whole, without modeling their constituent parts.
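
A pair–pattern matrix is straightforward to accumulate from (X, pattern, Y) triples; the triples below are invented for illustration. Relational similarity between word pairs then falls out of the patterns their rows share:

```python
from collections import defaultdict

def pair_pattern_matrix(triples):
    """Rows index word pairs X:Y; columns index the patterns linking them."""
    matrix = defaultdict(lambda: defaultdict(int))
    for x, pattern, y in triples:
        matrix[(x, y)][pattern] += 1
    return matrix

triples = [                               # invented example triples
    ("mason", "X works with Y", "stone"),
    ("mason", "X cuts Y", "stone"),
    ("carpenter", "X works with Y", "wood"),
]
m = pair_pattern_matrix(triples)
```

Here *mason*:*stone* and *carpenter*:*wood* share the pattern *X works with Y*, which is exactly the signal a pair–pattern space uses; note that the phrase itself is never decomposed into its parts.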

Vector addition or averaging (which are equivalent under the cosine measure) is the most common form of vector combination (Foltz et al., 1998; Landauer & Dumais, 1997). However, vector addition is not a suitable model of composition for at least two reasons. Firstly, it is insensitive to syntax and word order. As vector addition is commutative, it is essentially a bag-of-words model of composition: It assigns the same representation to any sentence containing the same constituents irrespective of their syntactic relations. However, there is ample empirical evidence that syntactic relations across and within sentences are crucial for sentence and discourse processing (Neville, Nichol, Barss, Forster, & Garrett, 1991; West & Stanovich, 1986). Secondly, addition simply blends together the content of all words involved to produce something in between them all. Ideally, we would like a model of semantic composition that generates novel meanings by selecting and modifying particular aspects of the constituents participating in the composition. Kintsch (2001) attempts to achieve this in his predication algorithm by modeling how the meaning of a predicate (e.g., *run*) varies depending on the arguments it operates upon (e.g., *the horse ran* vs. *the color ran*). The idea is to add not only the vectors representing the predicate and its argument but also the neighbors associated with both of them. The neighbors, Kintsch argues, can strengthen features of the predicate that are appropriate for the argument of the predication.

Tensor products have been recently proposed as an alternative to vector addition (Aerts & Czachor, 2004; Clark & Pulman, 2007; Widdows, 2008). However, as illustrated in Fig. 2, these representations grow exponentially as more vectors are combined. This fact undermines not only their tractability in an artificial computational setting but also their plausibility as models of human concept combination. Interestingly, Clark, Coecke, and Sadrzadeh (2008) try to construct a tensor product-based model of vector composition which makes an explicit connection to models of linguistic composition. In particular, they show how vector-based semantics can be unified with a compositional theory of grammatical types. Central to their approach is the association of each grammatical type with a particular rank of tensor. So, for example, if we take nouns as being associated with simple vectors, then an adjective as a noun modifier would be associated with a matrix, that is, a vector transformation. Clark et al. (2008) do not suggest concrete methods for constructing or estimating the various tensors involved in their model. Instead, they are more interested in its formal properties and do not report any empirical tests of this approach.

Unfortunately, comparisons across vector composition models have been few and far between. The merits of different approaches are illustrated with special purpose examples, and large-scale evaluations are uniformly absent. For instance, Kintsch (2001) demonstrates how his own composition algorithm works intuitively on a few hand-selected examples but does not provide a comprehensive test set (for a criticism of Kintsch's 2001 evaluation standards, see Frank, Koppen, Noordman, & Vonk, 2008). In a similar vein, Widdows (2008) explores the potential of vector product operations for modeling compositional phenomena in natural language, again on a small number of hand-picked examples.

Our work goes beyond these isolated proposals; we present a framework for vector composition which allows us to explore a range of potential composition functions, their properties, and relations. Under this framework, we reconceptualize existing composition models as well as introduce novel ones. Our experiments make use of conventional semantic vectors built from co-occurrence data. However, our compositional models are not tied to a specific representation and could be used with the holographic vectors proposed in Jones and Mewhort (2007) or with random indexing; however, we leave this to future work. Within the general framework of co-occurrence-based models we investigate how the choice of semantic representation interacts with the choice of composition model. Specifically, we compare a spatial model that represents words as vectors in a high-dimensional space against a probabilistic model (based on Latent Dirichlet Allocation, LDA) that represents words as topic distributions. We compare these models empirically on a phrase similarity task, using a rigorous evaluation methodology.

### 3. Composition models


Our aim is to construct vector representations for phrases and sentences. We assume that constituents are represented by vectors which subsequently combine in some way to produce a new vector. It is worth emphasizing that the problem of combining semantic vectors to make a representation of a multiword phrase is different from the problem of how to incorporate information *about* multiword contexts into a distributional representation for a single target word. Whereas Jones and Mewhort (2007) test their model's ability to memorize the linear structure of contexts in terms of predicting a target word correctly given a context, our composition models will be evaluated in terms of their ability to model semantic properties of simple phrases.

In this study we focus on small phrases, consisting of a head and a modifier or complement, which form the building blocks of larger units. If we cannot model the composition of basic phrases, there is little hope that we can construct compositional representations for sentences or even documents (we return to this issue in our Discussion section). Thus, given a phrase such as *practical difficulty* and the vectors **u** and **v** representing the constituents *practical* and *difficulty*, respectively, we wish to produce a representation **p** of the whole phrase. Hypothetical vectors for these constituents are illustrated in Fig. 3. This simplified semantic space^{5} will serve to illustrate examples of the composition functions we consider in this paper.

In our earlier discussion, we defined **p**, the composition of vectors **u** and **v**, representing a pair of words which stand in some syntactic relation *R*, given some background knowledge *K* as:

- (4)  **p** = *f*(**u**, **v**, *R*, *K*)

The expression above defines a wide class of composition functions. To derive specific models from this general framework requires the identification of appropriate constraints that narrow the space of functions being considered. To begin with, we will ignore *K* so as to explore what can be achieved in the absence of any background or world knowledge. While background knowledge undoubtedly contributes to the compositional process, and resources like WordNet (Fellbaum, 1998) may be used to provide this information, from a methodological perspective it is preferable to understand the fundamental processes of how representations are composed before trying to understand the interaction between existing representations and those under construction. As far as the syntactic relation *R* is concerned, we can proceed by investigating one such relation at a time, thus removing any explicit dependence on *R*, but allowing the possibility that we identify distinct composition functions for distinct syntactic relations.

Another particularly useful constraint is to assume that **p** lies in the same space as **u** and **v**. This essentially means that all syntactic types have the same dimensionality. The simplification may be too restrictive as it assumes that verbs, nouns, and adjectives are sufficiently similar to be represented in the same space. Clark et al. (2008) suggest a scheme in which the structure of a representation depends on its syntactic type, such that, for example, if nouns are represented by plain vectors then adjectives, as modifiers of nouns, are represented by matrices. More generally, we may question whether representations in a fixed space are flexible enough to cover the full expressivity of language. Intuitively, sentences are more complex than individual phrases, and this should be reflected in the representation of their meaning. In restricting all representations to a space of fixed dimensions, we are implicitly imposing a limit on the complexity of structures which can be fully represented. Nevertheless, the restriction renders the composition problem computationally feasible. We can use a single method for constructing representations, rather than different methods for different syntactic types. In particular, constructing a vector of *n* elements is easier than constructing a matrix of *n*^{2} elements. Moreover, our composition and similarity functions only have to apply to a single space, rather than a set of spaces of varying dimensions.

Given these simplifying assumptions, we can now begin to identify specific mathematical types of functions. For example, if we wish to work with linear composition functions, there are two ways to achieve this. We may assume that **p** is a linear function of the Cartesian product of **u** and **v**, giving an additive class of composition functions:

- (5)  **p** = **Au** + **Bv**

where **A** and **B** are matrices which determine the contributions made by **u** and **v** to **p**.

Or we can assume that **p** is a linear function of the tensor product of **u** and **v**, giving a multiplicative class of composition functions:

- (6)  **p** = **Cuv**

where **C** is a tensor of rank 3, which projects the tensor product of **u** and **v** onto the space of **p**. (For readers unfamiliar with vector and tensor algebra, we provide greater detail in the Supporting Information.)

Linearity is very often a useful assumption because it constrains the problem considerably. However, this usually means that the solution arrived at is an approximation to some other, nonlinear, structure. Going beyond the linear class of multiplicative functions, we will also consider some functions which are quadratic in **u**, having the general form:

- (7)  **p** = **Duuv**

where **D** is now a rank 4 tensor, which projects the product **uuv** onto the space of **p**.

Within the additive model class (Eq. 5), the simplest composition function is vector addition:

- (8)  **p** = **u** + **v**

Thus, according to Eq. 8, the representation of the phrase would be obtained by summing the two vectors representing *practical* and *difficulty* (see Fig. 3) component by component. This model assumes that composition is a symmetric function of the constituents; in other words, the order of constituents essentially makes no difference. While this might be reasonable for certain structures, a list perhaps, a model of composition based on syntactic structure requires some way of differentiating the contributions of each constituent.
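
In code, additive composition and its symmetry are immediate. The six-dimensional vectors below are hypothetical stand-ins for the Fig. 3 vectors, chosen so that **practical**·**practical** = 156 and **practical**·**difficulty** = 96, the dot products quoted later in the text:

```python
def add(u, v):
    """Additive composition (Eq. 8): p_i = u_i + v_i."""
    return [a + b for a, b in zip(u, v)]

# hypothetical stand-ins for the Fig. 3 vectors
practical  = [0, 6, 2, 10, 4, 0]
difficulty = [1, 8, 4, 4, 0, 6]

p = add(practical, difficulty)
assert p == add(difficulty, practical)  # commutative: word order is lost
```

The assertion makes the symmetry explicit: *practical difficulty* and *difficulty practical* receive identical representations.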

Kintsch (2001) attempts to model the composition of a predicate with its argument in a manner that distinguishes the role of these constituents, making use of the lexicon of semantic representations to identify the features of each constituent relevant to their combination. Specifically, he represents the composition in terms of a sum of predicate, argument, and a number of neighbors of the predicate:

- (9)  **p** = **u** + **v** + ∑_{i} **n**_{i}

Considerable latitude is allowed in selecting the appropriate neighbors. Kintsch (2001) considers only the *m* most similar neighbors to the predicate, from which he subsequently selects *k*, those most similar to its argument. Thus, if in the composition of *practical* with *difficulty* the chosen neighbor is *problem*, then the vector for *problem* is added to those of the two constituents to produce the representation of the phrase.
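
The selection procedure can be made concrete with the following sketch. The lexicon, the distractor word *banana*, and the settings *m* = 2, *k* = 1 are all invented for illustration; Kintsch (2001) works with far larger neighborhoods drawn from an LSA space:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def predication(pred, arg, lexicon, m=2, k=1):
    """Kintsch-style predication: p = pred + arg + selected neighbours of pred."""
    u, v = lexicon[pred], lexicon[arg]
    # the m neighbours most similar to the predicate ...
    pool = sorted((w for w in lexicon if w not in (pred, arg)),
                  key=lambda w: cosine(lexicon[w], u), reverse=True)[:m]
    # ... of which the k most similar to the argument are kept
    chosen = sorted(pool, key=lambda w: cosine(lexicon[w], v), reverse=True)[:k]
    p = [a + b for a, b in zip(u, v)]
    for w in chosen:
        p = [a + b for a, b in zip(p, lexicon[w])]
    return p

lexicon = {                            # hypothetical vectors
    "practical":  [0, 6, 2, 10, 4, 0],
    "difficulty": [1, 8, 4, 4, 0, 6],
    "problem":    [2, 15, 7, 9, 1, 0],
    "banana":     [9, 0, 1, 0, 0, 9],
}
p = predication("practical", "difficulty", lexicon)
```

With these toy values the neighbor *problem* (similar to both constituents) is selected over *banana*, so the result blends three vectors rather than two.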

This composition model draws inspiration from the construction–integration model (Kintsch, 1988), which was originally based on symbolic representations, and introduces a dependence on syntax by distinguishing the predicate from its argument. In this process, the selection of relevant neighbors for the predicate plays a role similar to the integration of a representation with existing background knowledge in the original construction–integration model. Here, background knowledge takes the form of the lexicon from which the neighbors are drawn.

So far, we have considered solely additive composition models. These models blend together the content of the constituents being composed. The contribution of **u** in Eq. 8 is unaffected by its relation with **v**. It might be preferable to scale each component of **u** with its relevance to **v**, namely to pick out the content of each representation that is relevant to their combination. This can be achieved by using a multiplicative function instead:

- (12)  **p** = **u** ⊙ **v**

where the symbol ⊙ represents multiplication of the corresponding components:

- (13)  *p*_{i} = *u*_{i} · *v*_{i}

For this model, our example vectors would combine by multiplying their corresponding components, so that only the dimensions on which both *practical* and *difficulty* have nonzero values contribute to the result.
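
A minimal sketch of component-wise composition, again using hypothetical stand-ins for the Fig. 3 vectors:

```python
def multiply(u, v):
    """Component-wise multiplicative composition (Eqs. 12-13): p_i = u_i * v_i."""
    return [a * b for a, b in zip(u, v)]

# hypothetical stand-ins for the Fig. 3 vectors
practical  = [0, 6, 2, 10, 4, 0]
difficulty = [1, 8, 4, 4, 0, 6]

p = multiply(practical, difficulty)
# dimensions where either constituent is zero are zeroed out of the result
```

Components on which either constituent is zero vanish, which is how this model "picks out" the content relevant to both words; like addition, however, it remains symmetric.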

Note that the multiplicative function in Eq. 12 is still a symmetric function and thus does not take word order or syntax into account. However, Eq. 12 is a particular instance of the more general class of multiplicative functions (Eq. 6), which allows the specification of asymmetric syntax-sensitive functions. For example, the tensor product is an instance of this class with **C** being the identity matrix.

- (14)  **p** = **u** ⊗ **v**

where the symbol ⊗ stands for the operation of taking all pairwise products of the components of **u** and **v**:

- (15)  *p*_{ij} = *u*_{i} · *v*_{j}

Thus, the tensor product representation of *practical difficulty* is:

- (16)  the matrix whose (*i*, *j*)th entry is the product of the *i*th component of **practical** and the *j*th component of **difficulty**

Circular convolution is also a member of this class:

- (17)  **p** = **u** ⊛ **v**

where the symbol ⊛ stands for a compression of the tensor product based on summing along its transdiagonal elements:

- (18)  *p*_{i} = ∑_{j} *u*_{j} · *v*_{i−j}

Subscripts are interpreted modulo *n*, which gives the operation its circular nature. Circular convolution thus compresses the matrix in Eq. 16 back into a vector with the same dimensionality as its constituents.
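
Both operations are easy to state directly. The two-dimensional vectors below are arbitrary toy values; note how convolution sums the transdiagonals of the tensor product, with indices taken modulo *n*:

```python
def tensor(u, v):
    """Tensor product (Eq. 15): the matrix of all pairwise products u_i * v_j."""
    return [[a * b for b in v] for a in u]

def convolve(u, v):
    """Circular convolution (Eq. 18): p_i = sum_j u_j * v_{(i-j) mod n}."""
    n = len(u)
    return [sum(u[j] * v[(i - j) % n] for j in range(n)) for i in range(n)]

u, v = [1, 2], [3, 4]        # arbitrary toy vectors
T = tensor(u, v)             # [[3, 4], [6, 8]]
p = convolve(u, v)           # transdiagonal sums: [3 + 8, 4 + 6]
```

The tensor product of two *n*-dimensional vectors has *n*^{2} components, whereas convolution compresses it back to *n*, which is why convolution keeps composition in a fixed space at the cost of discarding information.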

One reason for choosing such multiplicative functions is that the magnitudes of **u** and **v** can only affect the magnitude of **p**, not its direction. By contrast, in additive models, the relative magnitudes of **u** and **v** can have a considerable effect on both the magnitude and direction of **p**. This can lead to difficulties when working with the cosine similarity measure, which is itself insensitive to the magnitudes of vectors. For example, if vector definitions are optimized by comparing the predictions from the cosine similarity measure to some gold standard, then it is the directions of the vectors which are optimized, not their magnitudes. Utilizing vector addition as the composition function makes the product of the composition dependent on an aspect of the vectors which has not been optimized, namely their magnitude. Multiplicative combinations avoid this problem, because effects of the magnitudes of the constituents only show up in the magnitude of the product, which has no effect on the cosine similarity measure.
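
This magnitude argument can be checked numerically. With hypothetical vectors, rescaling a constituent leaves the cosine of a multiplicative composition with any third vector unchanged, whereas the cosine of an additive composition shifts:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

add   = lambda u, v: [a + b for a, b in zip(u, v)]
mul   = lambda u, v: [a * b for a, b in zip(u, v)]
scale = lambda u, c: [c * a for a in u]

u = [0, 6, 2, 10, 4, 0]     # hypothetical vectors
v = [1, 8, 4, 4, 0, 6]
w = [2, 1, 0, 5, 3, 1]

# doubling u only rescales u (*) v, so the cosine with w is unaffected ...
mult_shift = abs(cosine(mul(scale(u, 2), v), w) - cosine(mul(u, v), w))
# ... but it changes the direction of u + v, so the cosine with w moves
add_shift = abs(cosine(add(scale(u, 2), v), w) - cosine(add(u, v), w))
```
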

The multiplicative class of functions also allows us to think of one representation as modifying the other. This idea is fundamental in logic-based semantic frameworks (Montague, 1974), where different syntactic structures are given different function types. To see how the vector **u** can be thought of as something that modifies **v**, consider the partial product of **C** with **u**, producing a matrix which we shall call **U**.

- (19)  **p** = **Cuv** = (**Cu**)**v** = **Uv**

Here, the composition function can be thought of as the action of a matrix, **U**, representing one constituent, on a vector, **v**, representing the other constituent. This is essentially Clark et al.’s (2008) approach to adjective–noun composition. In their scheme, nouns would be represented by vectors and adjectives by matrices which map the original noun representation to the modified representation. In our approach all syntactic types are simply represented by vectors; nevertheless, we can make use of their insight. Eq. 19 demonstrates how a multiplicative composition tensor, *C*, allows us to map a constituent vector, **u**, onto a matrix, **U**, while representing all words with vectors.

Putting the simple multiplicative model (see Eq. 12) into this form yields a matrix, **U**, whose off-diagonal elements are zero and whose diagonal elements are equal to the components of **u**.

- (20)  *U*_{ij} = *u*_{i} if *i* = *j*, and 0 otherwise

The action of this matrix on **v** is a type of dilation, in that it stretches and squeezes **v** in various directions. Specifically, **v** is scaled by a factor of *u*_{i} along the *i*th basis.

One drawback of this process is that its results are dependent on the basis used. Ideally, we would like to have a basis-independent composition, that is, one which is based solely on the geometry of **u** and **v**.^{7} One way to achieve basis independence is by dilating **v** along the direction of **u**, rather than along the basis directions. We thus decompose **v** into a component parallel to **u** and a component orthogonal to **u**, and then stretch the parallel component to modulate **v** to be more like **u**. Fig. 4 illustrates this decomposition of **v** where **x** is the parallel component and **y** is the orthogonal component. These two vectors can be expressed in terms of **u** and **v** as follows:

- (21)  **x** = ((**u** · **v**)/(**u** · **u**)) **u**

- (22)  **y** = **v** − ((**u** · **v**)/(**u** · **u**)) **u**

Thus, if we dilate **x** by a factor *λ*, while leaving **y** unchanged, we produce a modified vector, **v**′, which has been stretched to emphasize the contribution of **u**:

- (23)  **v**′ = *λ***x** + **y**

However, as the cosine similarity function is insensitive to the magnitudes of vectors, we can multiply this vector by any factor we like without essentially changing the model. In particular, multiplying through by **u**·**u** makes this expression easier to work with:

- (24)  **p** = (**u** · **u**) **v** + (*λ* − 1)(**u** · **v**) **u**

In order to apply this model to our example vectors, we must first calculate the dot products **practical** · **practical** = 156 and **practical** · **difficulty** = 96. Then, assuming *λ* is 2, Eq. 24 gives the composition as 156 times the *difficulty* vector plus 96 times the *practical* vector. This is now an asymmetric function of **u** and **v**, where **v** is stretched by a factor *λ* in the direction of **u**. However, it is also a more complex type of function, being quadratic in **u** (Eq. 7).
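
A sketch of the dilation model of Eq. 24, using hypothetical stand-in vectors constructed so that **u** · **u** = 156 and **u** · **v** = 96, the values quoted above:

```python
def dilate(u, v, lam=2):
    """Dilation composition (Eq. 24): p = (u.u) v + (lam - 1)(u.v) u."""
    uu = sum(a * a for a in u)
    uv = sum(a * b for a, b in zip(u, v))
    return [uu * b + (lam - 1) * uv * a for a, b in zip(u, v)]

practical  = [0, 6, 2, 10, 4, 0]   # hypothetical: practical . practical = 156
difficulty = [1, 8, 4, 4, 0, 6]    # hypothetical: practical . difficulty = 96

p = dilate(practical, difficulty, lam=2)   # 156 * difficulty + 96 * practical
```

Swapping the arguments gives a different result, so unlike addition or component-wise multiplication this composition is order sensitive.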

Again, we can think of the composition of **u** with **v**, for this function (Eq. 24), in terms of a matrix **U** which acts on **v**.

- (25)  **p** = **Uv**

- (26)  *U*_{ij} = (*λ* − 1) *u*_{i} *u*_{j} + (∑_{k} *u*_{k} *u*_{k}) *δ*_{ij}, with *δ*_{ij} = 1 if *i* = *j* and 0 otherwise

where *i*, *j*, and *k* range over the dimensions of the vector space.

The matrix **U** has one eigenvalue which is larger by a factor of *λ* than all the other eigenvalues, with the associated eigenvector being **u**. This corresponds to the fact that the action of this matrix on **v** is a dilation which stretches **v** differentially in the direction of **u**. Intuitively, this seems like an appropriate way to try to implement the idea that the action of combining two words can result in specific semantic aspects becoming more salient.
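
The eigenstructure claim can be verified directly from Eq. 26. The vector below is a hypothetical stand-in with **u** · **u** = 156; with *λ* = 2, **Uu** should equal 2 · 156 · **u**, while any vector orthogonal to **u** is scaled by only 156:

```python
def dilation_matrix(u, lam=2):
    """U_ij = (lam - 1) u_i u_j + (u.u) delta_ij  (Eq. 26)."""
    uu = sum(a * a for a in u)
    n = len(u)
    return [[(lam - 1) * u[i] * u[j] + (uu if i == j else 0)
             for j in range(n)] for i in range(n)]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

u = [0, 6, 2, 10, 4, 0]          # hypothetical vector; u.u = 156
U = dilation_matrix(u, lam=2)

y = [1, 0, 0, 0, 0, 0]           # orthogonal to u, since u[0] == 0
```
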

### 6. Discussion


In this paper we presented a framework for vector-based semantic composition. We formulated composition as a function of two vectors and introduced several models based on addition and multiplication. These models were applied to vectors corresponding to distinct meaning representations: a simple semantic space based on word co-occurrences and a topic-based model built using LDA. We compared the model predictions empirically on a phrase similarity task, using ratings elicited from native speakers. Overall, we observe that dilation models perform consistently well across semantic representations. A compositional model based on component-wise multiplication performs best on the simple semantic space, whereas additive models are preferable with LDA. Interestingly, we also find that the compositional approach to constructing representations outperforms a more direct noncompositional approach based on treating the phrases essentially as single lexical units. This is not entirely surprising as our materials were compiled so as to avoid a high degree of lexicalization. Such an approach may be better suited to modeling noncompositional structures that are lexicalized and frequently occurring (Baldwin et al., 2003; Bannard et al., 2003).

Despite this success, a significant weakness of many of the models considered here is their insensitivity to syntax. The multiplicative model, in particular, is symmetric, and thus makes no distinction between the constituents it combines. Yet, in spite of this, it is the strongest model for the simple semantic space. And although the weighted addition and dilation models differentiate between constituents, their dependence on syntax is rather limited, involving only a differential weighting of the contribution of each constituent. Perhaps more importantly, none of the representations could be said to have any internal structure. Thus, they cannot be broken down into parts which can be independently interpreted or operated upon. Symbolic representations, by contrast, build complex structures by, for example, binding predicates to arguments. In fact, it is often argued that however composition is implemented it must exhibit certain features characteristic of this symbolic binding (Fodor & Pylyshyn, 1988; Holyoak & Hummel, 2000).

Our results do not indicate that models which mimic symbolic binding (i.e., tensor products and circular convolution) are better than those that do not (at least for the phrase similarity task and the syntactic structures we examined). In particular, circular convolution is, across the board, the worst performing model. One issue in the application of circular convolution is that it is designed for use with random vectors, as opposed to the structured semantic vectors we assume here. A more significant issue, however, concerns symbol binding in general, which is somewhat distinct from semantic composition. In modeling the composition of an adjective with a noun, it is not enough to simply bind the representation of one to the representation of the other; we must instead model the interaction between their meanings and their integration to form a whole. Circular convolution is simply designed to allow a pair of vectors to be bound in a manner that allows the result to be decomposed into its original constituents at a later time. This may well be adequate as a model for syntactic operations on symbols, but, as our results show, it is not, by itself, enough to model the process of semantic composition. Nevertheless, we anticipate further improvements to our vector-based composition models will involve taking a more sophisticated approach to the structure of representations, in particular with regard to predicate–argument structures. Our results also suggest that assuming a single semantic representation may not be sufficient for all tasks. For instance, it is not guaranteed that the same highly structured representations appropriate for deductive inference will also provide a good model for semantic similarity. Semantics, covering such a wide range of cognitive phenomena, might well be expected to involve multiple systems and processes, which make use of quite distinct representations.

In this article, we have been concerned with modeling the similarity between simple phrases, consisting of heads and their dependents. We have thus avoided the important question of how vectors compose to create representations for larger phrases and sentences. It seems reasonable to assume that the composition process operates over syntactic representations such as binary parse trees. A sentence will typically consist of several composition operations, each applied to a pair of constituents **u** and **v**. Fig. 6 depicts this composition process for the sentence *practical difficulties slowed progress*. Initially, *practical* and *difficulties* are composed into **p**, and *slowed* and *progress* into **q**. The final sentence representation, **s**, is the composition of the pair of phrase representations **p** and **q**. Alternatively, composition may operate over dependency graphs representing words and their relationship to syntactic modifiers using directed edges (see the example in Fig. 7).
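
The recursive scheme of Fig. 6 can be sketched as follows. The two-dimensional lexicon is invented, and component-wise multiplication stands in for whatever composition function might be chosen at each node:

```python
def compose_tree(node, lexicon, combine):
    """Compose a binary parse tree bottom-up: leaves are words,
    internal nodes are (left, right) pairs of subtrees."""
    if isinstance(node, str):
        return lexicon[node]
    left, right = node
    return combine(compose_tree(left, lexicon, combine),
                   compose_tree(right, lexicon, combine))

multiply = lambda u, v: [a * b for a, b in zip(u, v)]

lexicon = {                       # hypothetical two-dimensional vectors
    "practical": [1, 2], "difficulties": [3, 4],
    "slowed":    [5, 6], "progress":     [7, 8],
}
# ((practical difficulties) (slowed progress))
s = compose_tree((("practical", "difficulties"), ("slowed", "progress")),
                 lexicon, multiply)
```

Because `combine` is a parameter, different functions (or different functions per syntactic relation) could be substituted at each node without changing the recursion.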

It is interesting then to consider which composition function would be best suited for representing sentences. For example, we could adopt different functions for different constructions. Our experiments show that the simple multiplicative model performs best at modeling adjective–noun and noun–noun combinations, whereas the dilation model is better for verb–object constructions. Alternatively, we could adopt a single composition function that applies uniformly across all syntactic relations. As discussed earlier, the simple multiplicative function is insensitive to syntax and word order. The dilation model, however, remedies this. It is also based on a multiplicative composition function but can take syntax into account by stretching one vector along the direction of another one (see Eq. 24).

Overall, we anticipate that more substantial correlations with human similarity judgments can be achieved by implementing more sophisticated models from within the framework outlined here. In particular, the general class of multiplicative models (see Eq. 6) appears to be a fruitful area to explore. Future directions include constraining the number of free parameters in linguistically plausible ways and scaling to larger data sets. The applications of the framework discussed here are many and varied. We intend to assess the potential of our composition models on context-sensitive semantic priming (Till, Mross, & Kintsch, 1988), inductive inference (Heit & Rubinstein, 1994), and analogical learning (Mangalath, Quesada, & Kintsch, 2004; Turney, 2006). Another interesting application concerns sentence processing and the extent to which the compositional models discussed here can explain reading times in eye-tracking corpora (Demberg & Keller, 2008; Pynte, New, & Kennedy, 2008).