Computing with cognitive spatial frames of reference in GIS

In everyday communication, people effortlessly translate between spatial cognitive frames of reference. For example, a tourist guide translates from a map (“the fountain is north‐west of the church”) into a cognitive frame for a tourist (“the fountain is in front of the church”). While different types of cognitive reference frames and their relevance for language cultures have been studied in considerable depth, we still lack adequate transformation models. In this article, we argue that transformations in current Geographic Information Systems (GIS) are inappropriate to this end. Appropriate transformation models need to go beyond point discretization to take into account vague transformations, in order to deal with forms, sizes, and vagueness of spatial relations relative to ground objects. We argue that neural fields should be used to denote fuzzy positions, directions, and sizes in a particular frame. We propose fuzzy vector spaces to approximate neural field behavior with affine transformations, including fuzzy translation, rotation, and scaling, in order to efficiently transform between different cognitive perspectives. We use an implementation in Haskell to describe a geographic map from the perspective of six well‐known cognitive frames of reference. Based on these findings, we give an outlook on the principles of a “neural GIS.”

different cognitive frame of reference (cf. Levinson, 2003; Majid, Bowerman, Kita, Haun, & Levinson, 2004): (a) "The fork is to the left of the spoon." In this description, the reference frame is centered on the spoon (ground object) and oriented by the speaker. (b) "The fork is in front of the spoon." In this description, an intrinsic frame of reference is used, which is centered on the spoon (ground object) and oriented by the spoon itself. When people use natural language expressions like "in front of" or "left of" to describe objects, they refer to a fuzzy location whose shape and size depend on whether the ground object is a church building or a spoon on a table (Levinson, 2003). That is, human speakers can express the same location from different perspectives and in a way that accounts not only for the inherent spatial vagueness, but also for the geometry of figure and ground objects (Spencer, Simmering, Schutte, & Schöner, 2007). Yet, neither the shape nor the size of the spoon allows us to precisely determine where "in front of the spoon" is.
Still, in everyday communication, people more or less effortlessly translate between these perspectives in order to understand what a speaker means (Levinson, 2003). While the different types of cognitive reference frames and their relevance for different language cultures have been studied in considerable depth (Majid et al., 2004), we still lack formal models that can be used to transform a geometric representation from one cognitive frame to another (Carlson, 1999; Logan & Sadler, 1996; Regier & Carlson, 2001; Tenbrink & Kuhn, 2011), and thus to approximate the location that a natural language expression refers to (Eschenbach, 1999). We suggest one reason for this is that current Geographical Information Systems (GIS) are based on reference systems with crisp geometric relations and precise transformations (Burrough & Frank, 1996), while cognitive reference frames require transformations that can take into account vague locations, translations, rotations, and scalings with respect to ground objects of arbitrary shapes.
In this article, we suggest that appropriate transformation models can be defined in terms of neural fields (Johnson, Spencer, & Schöner, 2008; Pouget, Deneve, & Duhamel, 2002), which have been used in neuroscience to represent approximate relative locations in terms of spatial arrays of firing neurons (Lipinski, Schneegans, Sandamirskaya, Spencer, & Schöner, 2012). Inside a neural field, location, distance, direction (angle), and other geometric properties are not a result of crisp measurement, but of the effects of firing neurons that add up at locations of interest, inside one field as well as between fields. By connecting neural fields, one can transform egocentric to allocentric locations, and absolute to relative ones, and vice versa. Furthermore, this can be done in a way that takes into account vague location, orientation, size, and shape of ground objects.
For the purpose of practical computing with locations over different frames of reference in a GIS, we propose mimicking the behavior of neural fields with fuzzy vector spaces (Katsaras & Liu, 1977;Lubczonok, 1990;Viertl & Hareter, 2006). This allows us to compute neural transformations in a tractable and deterministic way over objects, spatial locations, and relational templates, all of which are represented as fuzzy fields. Based on this method, we model six well-known types of cognitive spatial frames of reference (Frank, 1998;Levelt, 1996) in terms of corresponding transformations. We apply them to a geographic map example in order to illustrate how the same spatial configuration can be viewed from different cognitive perspectives. Our method is able to compute relative sizes of fuzzy locations like "near," taking into account the size, location, and shape of different ground objects, and it also gives us a way to measure the degree to which a spatial term applies to a situation.
The article is structured as follows: In Section 2, we explain the challenge and significance of cognitive reference frame transformations based on a map example and by reviewing prior work including neural fields. In Section 3, we introduce fuzzy vector spaces as a method to compute transformations. In Section 4, we show how to use this formalism to model well-known cognitive frames of reference. In Section 5, we discuss our implementation, its efficiency, and its effectiveness, by applying it to the map example. In Section 6, we compare our method with current GIS and provide an outlook on a neurally grounded GIS, before we conclude.

| CHALLENGES OF MODELING COGNITIVE SPATIAL TRANSFORMATIONS
In Johnson-Laird's treatment of mental models (Johnson-Laird, 1983), he concludes (p. 241) that a psychological theory of semantics cannot remain on the level of a propositional mental language ("mentalese") without risking the loss of reference, in particular, the loss of spatial reference.1 Instead, he suggests a procedural semantics, in which mental models are constructed not in the abstract, but in terms of concrete exemplars.
A cognitive frame of reference is an example of such an exemplar-based mental model. A cognitive transformation reinterprets space from a particular perceptual perspective, including both the necessary geometric detail as well as the approximate metric position of objects referenced by spatial terms. Positional detail is lacking in purely propositional models (e.g., in qualitative spatial reasoning; Freksa, 1991), because they abstract from metric properties and directions of Euclidean space. For instance, in an object-centered intrinsic coordinate system, the meaning of the coordinate origin is the object itself, and the meaning of the primary (north) axis is the direction in which the object "looks." This direction and the object are perceived by an observer whose perspective is aligned with another coordinate system, in which the primary axis denotes the observer's direction of view. The observer knows what "in front" means for this object, simply because he or she is able to perform a particular transformation (rotation, translation, scaling) to turn the observer-centric view into the object-centric one. However, even though Euclidean vector space would in principle allow for such transformations, it fails to account for the vagueness of locations and the dependence of spatial referencing on the shape and size of ground objects. The problem is that, for example, "near" the lake has a very different meaning from "near" the glass (Hahn, Fogliaroni, Frank, & Navratil, 2016), and that both are vague and thus cannot be represented by ordinary vectors. In other words, spatial reference-including vagueness and geometric detail-is a result of transforming vague locations using geometric features of discrete objects. Therefore, the key to modeling cognitive spatial reference lies in appropriate geometric transformation models.

| Transforming a geographic map
To illustrate the challenge, we consider cognitive transformations of the simple geographic scene depicted in Figure 2a. It involves an observer who is standing in front of a house and a tree (indicated by the star symbol). Note that these objects have very different sizes and geographic scales.
Alternative spatial cognitive perspectives on this scene can be obtained by expressing the location of the tree as a vector relative to a ground object, choosing among different spatial orientations for rotation (Frank, 1998). In an egocentric frame, ground object and observer coincide, while in an allocentric frame, they do not. In relative frames, the orientation is given in terms of an object-dependent vector, while in absolute frames, it is given by another (more or less universally available) geocentric ground. Thus, in the spatial scene, there is only one meaningful case of an egocentric relative frame, in which the tree is located to the front/right of the observer (Figure 2a). There are, however, three cases of allocentric relative frames: (a) oriented by the front of the ground object, a.k.a. intrinsic frame, locating the tree to the left/front of the house (Figure 2d); (b) oriented by the observer, a.k.a. deictic frame, locating the tree to the back/right of the house (Figure 2e); (c) oriented by the vector from the ground object to the observer, a.k.a. retinal frame,2 which locates the tree to the front/left of the house (Figure 2f). Only retinal frames allow observers to project their (left/right) sides onto the situation. For example, in Figure 3, the chair has an orientation based on its intrinsic frame, whereas the labels for the sides of the desk are chosen with respect to the observer's left and right hand. Thus, a retinal frame, even though allocentric, largely depends on an observer.
We conclude that it is these different parameters of vector operations which generate the space of possible cognitive transformations, and thus of different types of reference frames.3 Ordinary vector algebra as proposed by Frank (1998), however, does not seem to do the job, because it does not handle uncertainty and the dependencies of reference frames on the size and shape of objects. The kind of geometry brought to the task needs to inherently deal with vagueness and discretization. The transformation proposed by Frank (1998) involves a separate discretization step that always leads from extended geometric representations to discrete point vectors. However, the cognitive operations underlying such a step are unknown. Furthermore, the possibility of a vague geometric term as input to a transformation (and thus requiring the transformation itself to be vague) is not considered. Objects do not always have clear fronts and borders, and if we want to transform an absolute frame into an intrinsic frame, we need to make use of such vague parts in order to align the axis of the intrinsic frame.

FIGURE 2 The location of a tree with respect to an observer and a house using different cognitive reference frames (terminology adapted from Frank, 1998): (a) the tree is to the front/right of the observer (egocentric relative frame); (b) the tree is to the east of the observer (egocentric absolute frame); (c) the tree is to the south-east of the house (allocentric absolute frame); (d) the tree is to the left/front of the house (intrinsic frame); (e) the tree is to the back/right of the house (deictic frame); (f) the tree is to the front/left of the house (retinal frame)

FIGURE 3 Determining whether something is to the right/left (Levelt, 1996) from the perspective of an observer
The result of such a transformation will be a reference frame whose spatial extension in terms of the original frame will necessarily be vague itself. That is, the north axis in the intrinsic frame represents a vague direction in the absolute frame, pointing approximately in the direction of the front of the object (see Figure 4). And what is more, depending on the form and scale of a ground object, a spatial relation such as "in front of" needs to look different (see car versus house in Figure 4). In fact, one can consider spatial relations and spatial objects as "Gestalts" (i.e., stable fields of perceptual force; Lehar, 2003) that autonomously adapt to sensory input.

| Geometric formalisms for spatial reference
Even though Levinson (2003) regards frames of reference as a result of a mental and linguistically bounded construction, he has not made explicit how they might be constructed. According to him, the major difficulty lies in a cognitive representation that is required to be general/vague/propositional and at the same time precise and geometric (Levinson, 2003, p. 214). These difficulties may have led researchers in spatial cognition to avoid geometrical and constructive approaches from the outset. Qualitative spatial formalisms (Freksa, 1991) have proven to be useful for modeling spatial propositions and reasoning on a qualitative level, but do not account for geometric extensions of terms. Linguistic models of spatial frames of reference, on the other hand, are focused on the formation of language expressions (Tenbrink & Kuhn, 2011), not on geometry.
Yet, some notable exceptions do exist. Logan and Sadler (1996) initiated research on computational techniques for cognitive spatial referencing by accounting for spatial relations in terms of spatial templates (i.e., fuzzy regions of acceptability of a relation term, such as "above") that are aligned in a coordinate system. Building on this work, Regier and Carlson (2001) empirically investigated different kinds of computable spatial template functions, and proposed the Attentional Vector Sum (AVS) method to determine acceptable regions for relational terms. However, they left open how reference frame transformation could be done using this method. Spatial templates correspond to various qualitative projective (directional) and distance relations in spatial cognition, which have been investigated mainly for non-extended objects and not in the context of transformations (Clementini, 2013). Clementini (2013) defines transformations only on a qualitative spatial level (i.e., as mappings from 5-intersection to spatial relations) and only for crisp objects. Eschenbach's (1999) coordinate-free axiomatic specification of Levinson's spatial frames of reference and De Felice, Fogliaroni, and Wallgrün's (2011) geometric model of qualitative spatial relationships between extended objects account neither for spatial uncertainty nor for transformations. Matsakis, Wendling, and Ni (2010) model spatial relationships in terms of summarized cross-sections and force fields in a way very similar to the fuzzy vector approach taken in this article. However, their focus is not on cognitive reference frames and their transformations.

FIGURE 4 The vague direction of a house front and the vague location "in front of the house" in an absolute frame. Note that the vague location "in front of the car" is much smaller and thinner

| The advantage of neural field models
An entirely different approach is suggested by computational neuroscience. In a neural field (Johnson et al., 2008), synthetic neurons are arranged in a two-dimensional array, such that neurons next to each other exercise a lateral inhibition, while neurons between different fields are wired such that they excite each other (Simmering, Schutte, & Spencer, 2008). Neurons inside a field can store complex spatial scenes in terms of continuous levels of excitation, and thus provide a natural way to deal with fuzzy spatial information. Furthermore, based on the way neurons are wired, it becomes possible not only to simulate spatial memory effects (Simmering et al., 2008;Spencer et al., 2007), but also spatial transformations and reference frames. Neural field models (Lipinski et al., 2012;Pouget et al., 2002) can account for certain kinds of vague spatial transformations by connecting neural fields with transformation fields. For example, in Figure 5a, a transformation field is used to compute a head-centered angular direction in degrees (upper field, 0 means front, relative location indicated by red arrow) on a one-dimensional line, starting from an eye-centered field and an eye-position field (Deneve, Latham, & Pouget, 2001), all of which are spatially fuzzy. Transformations can, for example, be modeled in terms of basis functions. 4 Such models explain how humans are able to transform complex signals on their retinas into stable directions relative to their head or body positions without any higher-level reasoning involved.
The same mechanism can be used to compute transformations of positions. In Figure 5b, Lipinski et al. (2012) have wired two fields together to transform a fuzzy one-dimensional position (target field) into a fuzzy object-centered position, relative to a fuzzy reference position (reference field). In the transformation field, the effects of both fields are combined and generate a corresponding peak in the resulting field. The location of this peak shows that the target location is to the "left of" the reference field. In this way, the authors were able to automatically detect that, for example, a green pen on the table is, to a certain degree, "to the right and above" a red pen lying on the table. The degree to which this is the case corresponds to the firing of corresponding neurons. The trick underlying this method is that the projection takes into account all combinations of fuzzy target and reference positions to determine the relative fuzzy position in the object-centered field. So, each neuron in an input field influences each neuron in the resulting field (and vice versa) to a degree defined by the particular wiring. By summing up these influences, the whole system is able to assess the degree to which a qualitative spatial expression holds, such as "left," "right," "above," and "below."

FIGURE 5 Spatial reference frame transformation based on neural fields: (a) head-centered transformation of heading directions with basis functions (Deneve et al., 2001). With respect to the eye, the person heads left, and with respect to the head, the eye is positioned straight. Thus, the heading is to the left of the head; (b) transformation of a one-dimensional location (Target) relative to another location (Reference), resulting in an object-centered location "left of" the reference (Lipinski et al., 2012). In the object-centered field, the location of the reference is denoted by the center line in the middle
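The field-combination idea described above can be sketched in a few lines. The following is an illustrative sketch, not the authors' implementation (theirs is in Haskell, and real neural field models involve continuous dynamics and feedback): two fuzzy one-dimensional positions, encoded as activation arrays, are combined into an object-centered relative position by letting every cell pair contribute via a minimum and aggregating contributions via a maximum. Field sizes and widths are invented for illustration.

```python
from math import exp

def gaussian_field(size, center, width=2.0):
    """A fuzzy 1-D position: activation peaking at `center`."""
    return [exp(-((i - center) ** 2) / (2 * width ** 2)) for i in range(size)]

def relative_field(target, reference):
    """Object-centered field: rel[d] holds the degree to which the target
    sits at offset (d - len//2) from the reference. Every pair of cells
    contributes min(target, reference); contributions are combined with
    max, mimicking the all-combinations wiring described in the text."""
    n = len(target)
    half = n // 2
    rel = [0.0] * n
    for d in range(n):
        offset = d - half
        for r in range(n):
            t = r + offset
            if 0 <= t < n:
                rel[d] = max(rel[d], min(target[t], reference[r]))
    return rel

target = gaussian_field(41, 10)      # target position around cell 10
reference = gaussian_field(41, 25)   # reference position around cell 25
rel = relative_field(target, reference)
peak = rel.index(max(rel)) - 20      # offset of strongest activation
print(peak)                          # -15: the target is "left of" the reference
```

The peak offset is negative, so the system concludes, to the degree given by the activation level, that the target lies to the left of the reference, without any crisp coordinates being extracted along the way.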
The great advantage of neural field models over crisp geometries lies in getting rid of the necessity to "crispify" objects, and in the possibility of taking into account their size, shape, and inherent uncertainty when computing transformations.
How can we apply such a method to compute spatial configurations in a geographic map or GIS? First, fuzzy operations at least on the level of affine transformations (translation, rotation, scaling) are needed to account for vague shapes and sizes when computing with spatial terms, whereas only translations were simulated by Lipinski et al. (2012). Most importantly, however, neural simulations, like the ones discussed above, are computationally very expensive. They are non-deterministic, based on feedback and eigenstability among neurons, 5 and thus do not result in a tractable method of geoprocessing. In the next section, we propose simple, deterministic, and tractable operations which nevertheless capture essential aspects of neural transformations, without relying on feedback.

| FUZZY VECTOR SPACES AND FUZZY TRANSFORMATIONS
In this section, we apply fuzzy vector space theory to deal with the challenge of integrating vagueness and geometric shape into vector transformations. After explaining the mathematical principles, we define three types of fuzzy transformations which we later (Section 4) use as building blocks for modeling cognitive transformation processes. We illustrate fuzzy vector operations with images computed using the implementation described in Section 5.1.

| Fuzzy vector spaces
Fuzzy vector spaces were originally defined by Katsaras and Liu (1977), generalizing the degree of membership of an element in a fuzzy set (Zadeh, 1965), then further developed by Lubczonok (1990). A good introductory text is Viertl and Hareter (2006). In this article we largely follow Lubczonok's and Viertl's formulas and extend them by definitions of a fuzzy angle and rotation necessary for the transformation operations.
FIGURE 6 A two-dimensional fuzzy vector $\tilde V$ constructed from two one-dimensional fuzzy vectors $\tilde A, \tilde B$ (Viertl & Hareter, 2006), including a crisp vector $v$ with the membership values $\mu_{\tilde A}(v) = 1$ and $\mu_{\tilde B}(v) = 0$ for the fuzzy sets
A vector is fuzzy if it can take on any crisp/discrete vector value to the degree indicated by its fuzzy membership function. To distinguish a crisp vector from a fuzzy vector, a fuzzy vector is marked with a tilde (e.g., $\tilde V$).
Definition 1 In general, a fuzzy vector $\tilde V$ is defined as a pair $(V, \mu_{\tilde V})$, where $V$ is a vector space over some field and $\mu_{\tilde V}: V \to [0, 1]$ is a fuzzy membership function.
It is worth noting that a fuzzy vector can have any geometrical form, just like a field in physics. In particular, it is not constrained to point-like shapes but could be trapezoidal (see Figure 6) or wave-like. Thus, fuzzy vectors can be used not only to express uncertainty but also to create complex shapes (e.g., the shapes of spatial objects or of vague spatial relations).
Ordinary fuzzy operations (Zadeh, 1965) also apply to fuzzy vectors. Fuzzy set union (Equation 1) takes the maximum of membership degrees, whereas fuzzy set intersection (Equation 2) takes the minimum, written with the binary operators $\vee$ and $\wedge$, respectively:

$$\mu_{\tilde A \vee \tilde B}(v) = \max(\mu_{\tilde A}(v), \mu_{\tilde B}(v)) \qquad (1)$$

$$\mu_{\tilde A \wedge \tilde B}(v) = \min(\mu_{\tilde A}(v), \mu_{\tilde B}(v)) \qquad (2)$$

Furthermore, the average is given in Equation 3:

$$\mu_{\mathrm{avg}(\tilde A, \tilde B)}(v) = \tfrac{1}{2}\left(\mu_{\tilde A}(v) + \mu_{\tilde B}(v)\right) \qquad (3)$$

For example, the fuzzy intersection of $\tilde A$ and $\tilde B$ in Figure 6 yields a fuzzy value of $\mu_{\tilde A \wedge \tilde B}(v) = 0$ at the position $v$, in contrast to the fuzzy union $\mu_{\tilde A \vee \tilde B}(v)$, which results in membership value 1.
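On a discrete grid, these Zadeh operations are straightforward to compute. The following Python sketch (the article's own implementation is in Haskell; the representation as a dict from grid positions to membership degrees is our simplification) illustrates union and intersection of two fuzzy vectors:

```python
# A 2-D fuzzy vector as a dict: crisp grid position -> membership degree.
# Positions absent from the dict have membership 0.

def fuzzy_union(a, b):
    """Zadeh union: pointwise maximum of membership degrees."""
    return {v: max(a.get(v, 0.0), b.get(v, 0.0)) for v in set(a) | set(b)}

def fuzzy_intersection(a, b):
    """Zadeh intersection: pointwise minimum of membership degrees."""
    return {v: min(a.get(v, 0.0), b.get(v, 0.0)) for v in set(a) | set(b)}

# Two overlapping fuzzy vectors on a tiny grid.
A = {(0, 0): 1.0, (1, 0): 0.5}
B = {(1, 0): 0.8, (2, 0): 1.0}

print(fuzzy_union(A, B)[(1, 0)])         # 0.8: max(0.5, 0.8)
print(fuzzy_intersection(A, B)[(1, 0)])  # 0.5: min(0.5, 0.8)
print(fuzzy_intersection(A, B)[(0, 0)])  # 0.0: B has no membership there
```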
Further operations that summarize the distribution of fuzzy vectors are fuzzy thresholds and centroids.
For example, given a fuzzy vector, we can test whether it exceeds a given threshold value n as given in Definition 2.
Definition 2 Threshold test for a fuzzy vector $\tilde V$ given threshold $n \in [0, 1]$: the test succeeds if there is a crisp vector $v \in V$ with $\mu_{\tilde V}(v) \geq n$.

Another way to describe the distribution of fuzzy values is the mass center (centroid or weighted average) of a fuzzy vector $\tilde V$, given in Definition 3.

Definition 3 Centroid of a fuzzy vector $\tilde V$:
$$\mathrm{centroid}\,\tilde V := \frac{\sum_{v \in V} v \cdot \mu_{\tilde V}(v)}{\sum_{v \in V} \mu_{\tilde V}(v)}$$
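Both summary operations are easy to state on a discrete grid. This is a hedged sketch under the same dict representation as above (function names are ours, not the paper's): the threshold test asks whether any crisp vector reaches degree $n$, and the centroid is the membership-weighted mean position.

```python
def exceeds(fv, n):
    """Threshold test: does any crisp vector reach membership degree n?"""
    return any(mu >= n for mu in fv.values())

def centroid(fv):
    """Mass center: membership-weighted average of the crisp positions."""
    total = sum(fv.values())
    x = sum(v[0] * mu for v, mu in fv.items()) / total
    y = sum(v[1] * mu for v, mu in fv.items()) / total
    return (x, y)

V = {(0, 0): 1.0, (4, 0): 1.0, (2, 2): 0.5}
print(exceeds(V, 0.9))   # True: two positions have full membership
print(centroid(V))       # (2.0, 0.4): pulled toward the full-membership points
```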

| Fuzzy vector addition and subtraction
Fuzzy vector space theory provides operations which are as expressive as ordinary vector space operations. Actually, the crisp version of vector algebra is only a special case of the fuzzy version. And just as (crisp) vector geometry can be used to define new (crisp) coordinate systems from existing ones (e.g., by translating the origin), we use fuzzy operations to construct fuzzy spatial reference frames. However, in contrast to ordinary vector spaces, fuzzy vector operations have fuzzy inputs and generate fuzzy outputs. For instance, the meaning of the origin in a system transformed by subtraction is a fuzzy vector in the input system, not a point, and adding fuzzy vectors increases their fuzziness and adds up their shapes, as explained below.
Following a common approach, we define the addition of two fuzzy vectors $\tilde A$ and $\tilde B$ over a vector space $V$ as follows. The new membership function $\mu_{\tilde A + \tilde B}$ is given as (where indices $i$ and $j$ range over the set of vectors in the crisp vector space):

Definition 4 Fuzzy vector addition:
$$\mu_{\tilde A + \tilde B}(v) := \max_{v_i + v_j = v} \min\left(\mu_{\tilde A}(v_i), \mu_{\tilde B}(v_j)\right)$$

The idea is that the membership value $\mu_{\tilde A + \tilde B}$ of a given crisp vector $v$ with respect to the fuzzy vector sum $\tilde A + \tilde B$ is given as the union (maximum) of the intersections (minima) of the membership values of any two vectors whose sum yields $v$. Note that for a certain $v$, there are (infinitely) many binary vector sums which yield $v = v_i + v_j$. For each of these, we take the minimum of the membership values of the corresponding two vectors.6 Then we take the maximum of these minima over all equivalent sums in order to determine the value of the new membership function at the location $v$. Note that this value will depend on the smallest membership degree of the two crisp vectors building the sum, $v_i$, $v_j$; if their membership degrees are high in $\tilde A$ and $\tilde B$, this will also yield a high membership value for $v_i + v_j$ with respect to $\tilde A + \tilde B$. In this way, $\tilde A$ will be transferred to a new position determined relative to $\tilde B$, just as in the ordinary vector sum. However, in contrast to the ordinary vector sum, the fuzziness of $\tilde A$ and $\tilde B$ will increase the fuzziness of $\tilde A + \tilde B$. The new vector will also "take into account" the shape of the two fuzzy input vectors, meaning that the shape of the sum will be a sum of the shapes of both input vectors and not simply an intersection or union of their shapes. For example, in Figure 7, we added a "north-west" fuzzy vector (Figure 7b) to a spoon object (Figure 7a), yielding a fuzzy vector that highlights the place north-west of the spoon. We use fuzzy addition to add spatial templates to objects inside a cognitive reference frame.
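The max-min construction of Definition 4 can be computed directly on a discrete grid by enumerating all decompositions $v = v_i + v_j$. The sketch below (our illustration, not the paper's Haskell code; the "spoon" and "north-west" shapes are invented toy data in the spirit of Figure 7) implements exactly this:

```python
def fuzzy_add(a, b):
    """Definition 4 on a discrete grid: the membership of v in A + B is
    the max over all decompositions v = vi + vj of min(mu_A(vi), mu_B(vj))."""
    out = {}
    for vi, ma in a.items():
        for vj, mb in b.items():
            v = (vi[0] + vj[0], vi[1] + vj[1])
            out[v] = max(out.get(v, 0.0), min(ma, mb))
    return out

# A small "object" and a fuzzy "north-west" template (up-left offsets).
spoon = {(0, 0): 1.0, (1, 0): 1.0}
north_west = {(-1, 1): 1.0, (-2, 2): 0.6}

result = fuzzy_add(spoon, north_west)
print(result[(-1, 1)])  # 1.0: strongly north-west of the spoon's left end
print(result[(0, 1)])   # 1.0: north-west of the spoon's right end
print(result[(-2, 2)])  # 0.6: the weaker, farther part of the template
```

Note how the result covers the sum of both shapes: every part of the object is shifted by every part of the template, weighted by the smaller of the two membership degrees.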
Definition 5 Fuzzy vector subtraction:
$$\mu_{\tilde A - \tilde B}(v) := \max_{v_i - v_j = v} \min\left(\mu_{\tilde A}(v_i), \mu_{\tilde B}(v_j)\right)$$

Subtraction of fuzzy vectors can be used to express the location of a fuzzy vector relative to another vector, and thus to generate a new "object-relative" reference frame (compare Section 4.2). The new membership function is defined analogously to fuzzy addition, with differences $v = v_i - v_j$ taking the place of sums.

FIGURE 7 Fuzzy addition of a fuzzy "spoon" object (a) and a "north-west" fuzzy template (b), yielding "spoon" + "north-west" (c). In all pictures, the center point is the coordinate origin of the respective frame

| Fuzzy vector rotation
It is straightforward to rotate a fuzzy vector about a crisp angle simply by rotating the fuzzy set of vectors. However, inside a world of fuzzy things, we need to measure rotation angles too, based on fuzzy vectors rather than crisp ones. For fuzzy vector rotation, we therefore first define a fuzzy angle in terms of fuzzy vectors and then rotate a given fuzzy vector in a fuzzy way. We derive a fuzzy angle from a fuzzy vector as follows:

Definition 6 Fuzzy angle of a fuzzy vector $\tilde V$: A fuzzy angle $\tilde a$ of a vector $\tilde V$ is a fuzzy set $(A, \mu_{\tilde a})$ of angles $A = \{0, \ldots, 360\}$ whose membership function is defined by the maximum of $\tilde V$-membership values of those vectors which enclose a given angle with the primary axis.

This yields a distribution of membership across 360°, with those angles having the strongest membership that point in the direction of the fuzzy vector. For example, the fuzzy angle measured between the object in Figure 9a and the spoon object in Figure 9b is drawn with blue lines ending with dots in Figure 10.
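Definition 6 can be sketched on a discrete grid by binning each crisp vector's direction to the nearest degree and keeping the maximum membership per bin. This is our illustrative simplification (the paper does not specify the discretization; rounding to whole degrees is an assumption):

```python
from math import atan2, degrees

def fuzzy_angle(fv):
    """Fuzzy angle of a fuzzy vector: a fuzzy set over degrees 0..359 whose
    membership at angle a is the maximum membership among crisp vectors
    pointing (after rounding) in direction a from the primary (x) axis."""
    ang = {}
    for (x, y), mu in fv.items():
        if (x, y) == (0, 0):
            continue  # the zero vector has no direction
        a = round(degrees(atan2(y, x))) % 360
        ang[a] = max(ang.get(a, 0.0), mu)
    return ang

V = {(1, 0): 1.0, (1, 1): 0.5}  # mostly pointing along the primary axis
a = fuzzy_angle(V)
print(a[0])    # 1.0: the strongest direction
print(a[45])   # 0.5: a weaker diagonal component
```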
Given such a fuzzy angle $\tilde a$, the rotation operation of a fuzzy vector $\tilde V$ is defined by:

Definition 7 Fuzzy rotation of a fuzzy vector $\tilde V$ around fuzzy angle $\tilde a$:
$$\mu_{\mathrm{rot}(\tilde V, \tilde a)}(v) := \max_{\alpha \in A} \min\left(\mu_{\tilde a}(\alpha), \mu_{\tilde V}(R_{-\alpha}\,v)\right)$$
where $R_{-\alpha}$ rotates a crisp vector back by the angle $\alpha$.

| Fuzzy vector scaling and matching
In the operations discussed above, the shape of the result vector is a combination of the shapes of the inputs.
However, we can also directly compare two fuzzy vectors in terms of their shape and size. This allows us to do scaling without ever taking into account any crisp numbers.
In order to compare the shapes and sizes of two fuzzy vectors, the first operation to apply is to superimpose them onto each other in the center of the reference frame. We do this simply by subtracting a vector's centroid from the fuzzy vector itself.
Definition 8 Centering fuzzy vector $\tilde A$ at the origin of the reference frame: $\mathrm{center}\,\tilde A := \tilde A - \mathrm{centroid}\,\tilde A$.

Second, finding the scale difference between two fuzzy vectors is analogous to finding a factor $k$ which scales a given vector $v$ such that it yields another vector $v'$: $kv = v'$. Note that such an operation would correspond to building the quotient of two vectors, $k = \frac{v'}{v}$, which is not defined in crisp vector space.7 However, such an operation is defined in fuzzy vector space.
The fuzzy scale factor of a fuzzy vector with respect to another fuzzy vector is derived by exhaustively scaling one of them and fuzzy-intersecting the results with the other vector. This yields a high fuzzy value for those scale factors $k$ which best match the two fuzzy vectors. The fuzzy scale factor of a vector $\tilde A$ with respect to a vector $\tilde B$ is therefore a fuzzy set of (crisp) scale factors $k$. To avoid over-blurring, we turned the fuzzy scale factor into a crisp scale factor based on its center of mass8 (see Figure 11) and then applied this crisp factor to scale a fuzzy vector.
Definition 10 Scaling of fuzzy vector $\tilde A$ with fuzzy scale factor $\tilde k$:
$$\mu_{\tilde k \tilde A}(v) := \mu_{\tilde A}(v_j) \quad \text{with } (k)^{-1} v = v_j$$

Note that we need to rule out zero scale factors, because we cannot divide by zero and because fuzzy vectors scaled by zero collapse to the origin. Fuzzy scaling simply "shrinks" or "blows up" a vector if it is centered (Figure 12); otherwise, it also translates the fuzzy vector.
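The exhaustive matching idea behind the fuzzy scale factor, followed by applying the winning crisp factor, can be sketched as follows. This is an illustrative simplification (we score each candidate $k$ by the summed fuzzy intersection and pick the best one directly, rather than taking the center of mass of the full fuzzy factor as the text describes; grid snapping via rounding is also our assumption):

```python
def scaled(fv, k):
    """Apply a crisp scale factor k to a fuzzy vector on a grid,
    snapping scaled positions back to integer cells."""
    return {(round(x * k), round(y * k)): mu for (x, y), mu in fv.items()}

def match(a, b):
    """Score of a against b: summed fuzzy intersection (pointwise min)."""
    return sum(min(mu, b.get(v, 0.0)) for v, mu in a.items())

def best_scale(a, b, candidates):
    """Exhaustively scale a by each candidate factor and keep the one
    whose result best matches b."""
    return max(candidates, key=lambda k: match(scaled(a, k), b))

A = {(1, 0): 1.0, (-1, 0): 1.0}   # a small centered shape
B = {(3, 0): 1.0, (-3, 0): 1.0}   # the same shape, three times larger
k = best_scale(A, B, [0.5, 1, 2, 3, 4])
print(k)              # 3
print(scaled(A, k))   # {(3, 0): 1.0, (-3, 0): 1.0}
```

Because both shapes are centered, scaling only "blows up" $\tilde A$; an uncentered input would additionally be translated, as the text notes.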

Scaling a fuzzy vector by the scale difference between two other vectors is determined as follows.
Definition 11 Scaling vector $\tilde A$ with the fuzzy scale difference between $\tilde B$ and $\tilde C$:
$$\mathrm{scale}\ \tilde A\ \mathrm{as}\ \tilde B\ \mathrm{to}\ \tilde C := \left(\frac{\mathrm{center}\,\tilde C}{\mathrm{center}\,\tilde B}\right) \tilde A$$

Note that in this latter definition, we take the fuzzy scale factor from the centered vector $\tilde B$ to the centered $\tilde C$ in order to determine their difference in size, and use this factor to scale another vector $\tilde A$ which is not (necessarily) centered (compare Figure 12).
With these fuzzy versions of vector translation, rotation, and scaling, we are ready to compute affine transformations on fuzzy vectors, and thus to transform cognitive spatial frames of reference into each other.

| MODELING SPATIAL FRAMES OF REFERENCE WITH FUZZY TRANSFORMATIONS
Even though neural fields are inherently fuzzy, they provide enough geometric detail to take metric location, direction, shape, and size into account (cf. Spencer et al., 2007). To what extent are fuzzy vector transformations a good way of dealing with this problem in GIS? To answer this question, we first explain the similarity of neural fields and fuzzy vectors, and then propose a model of frames of reference in terms of fuzzy vectors.

| Neural fields and fuzzy vectors
From the perspective of GIS, everything essential we need to say about cognitive spatial transformations can be said in terms of fuzzy vectors. A state of activity of a neural field can be represented by a fuzzy vector, 9 while the fuzzy domains (i.e., the crisp vector space, or angle space, or the space of scales) correspond to arrays of neurons. Neural fields can be FIG URE 1 2 Fuzzy scaling. Note the (upper) dot above the (centered) egg in (a). Starting from (a), we first computed the fuzzy scale factor from this dot (5B in Definition 11) to the egg (5C in Definition 11), and then scaled the fuzzy "spoon" vectorÃ (Definition 11) (b) accordingly, resulting in (c) layered and can take on different semantics in time (Simmering et al., 2008). One field can represent an egocentric perceptual frame, others represent object-centered frames, while still others represent more abstract spatial relations, such as "in front." Furthermore, the semantics of the activity of a neural field is determined by the connections between individual neurons inside and between these fields (Pouget et al., 2002;Simmering et al., 2008). New reference frames are constructed by new connections between fields. We argue here that fuzzy vector operations capture an essential aspect of such wirings: every neural location (crisp vector) of the input field can influence every neural location (crisp vector) in the output field depending on their state of activity and the preferences of the connection. The connection preferences between neurons are reflected in the chosen algebraic relation between crisp vectors (expressed in terms of an addition, subtraction, rotation, or scaling), whereas the summed activity influence of neuron combinations on a target neuron are reflected by a sum function (such as the maximum or the product; Pouget & Sejnowski, 1997) over their fuzzy intersections. 
In doing so,10 we can safely ignore neural feedback, as long as our goal is not to model the eigenstability of neural patterns, such as in the perception of objects or in spatial memory. We assume here that reference frame transformations can be modeled without feedback, since the resulting patterns are more or less determined once the input patterns are known.

| Modeling frames of reference in terms of fuzzy vectors
A frame of reference gives the domain of a fuzzy vector a particular semantics, just as a coordinate reference system gives the domain of coordinates a particular meaning. That is, if a fuzzy vector is in a particular frame of reference, then the elements of the domain of that fuzzy vector denote locations inside that frame of reference, and these locations can be shared by different fuzzy vectors. For example, from the perspective of an observer, observed objects correspond to a set of fuzzy vectors in a particular egocentric reference frame. Equivalent crisp vectors in the domain of these fuzzy vectors correspond to equivalent egocentric locations. A new (e.g., object-centered) frame of reference can be obtained by applying a sequence of fuzzy transformations on known frames. We demonstrate in the following how this idea can be modeled with fuzzy vectors.
We conveniently start with the reference frame of an observer (ego) who perceives a scene from his or her perspective, that is, the field of view (FoVofEgo) of this observer. Perceived objects (Definition 12) correspond to fuzzy vectors11 (Õ_i) in this reference frame. One of these objects is the observer, a person who can perceive herself (Õ_0). We furthermore use a unary function f(·) on perceived objects, which retrieves their front direction (a fuzzy vector in the direction of the object front). Another function c(·) retrieves the tip of a compass needle that points north, given a vector denoting its origin. This function is used to measure cardinal directions. The latter two functions require perceptual or measurement processes and thus are input to, not output from, our method (compare Section 6).

Definition 12 Objects
Besides perceived objects, we also need to model spatial relations in the form of spatial templates (Logan & Sadler, 1996). Spatial templates can be used to determine some relation between a ground and a figure object. We assume that spatial templates are embodied and thus initiated in the egocentric reference frame (FoVofEgo) with the observer as ground. The figure is located by a fuzzy vector perceived from the view (and relative to the size) of the observer.
What turns such a template into a binary relation between arbitrary objects is simply a transformation from the observer to other ground objects. The simplest case is a template for a single figure object. The following binary templates stand for the relations infront, back, left, right, near, and far (compare Definition 13 and Figure 13).
Definition 13 SpatialTemplates_FoVofEgo = {F̃_infront, F̃_back, F̃_left, F̃_right, F̃_near, F̃_far}, where the F̃_i are figure slots in the field of view of the observer.
We can easily define combinations of these relations in terms of fuzzy intersections, denoted by the operator ∧. For example, the spatial relation nearfront (Definition 14) is the outcome of intersecting the spatial relations near and infront given in Definition 13, as illustrated in Figure 13c.
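As a toy sketch of such a template intersection (with the minimum as t-norm), using made-up membership values and the map representation of fuzzy vectors from Section 5.1:

```haskell
import qualified Data.Map.Strict as Map

type FuzzyVec = Map.Map (Int, Int) Double

-- Made-up "near" and "in front" templates on a tiny grid (observer at the
-- origin, front along the positive y axis); the values are illustrative only.
near, infront, nearfront :: FuzzyVec
near    = Map.fromList [((0, 1), 1.0), ((0, 2), 0.5), ((1, 1), 0.8)]
infront = Map.fromList [((0, 1), 1.0), ((0, 2), 1.0), ((-1, 1), 0.6)]

-- nearfront = near ∧ infront: pointwise minimum on the shared domain.
nearfront = Map.intersectionWith min near infront
```

Cells appearing in only one template drop out of the intersection, so nearfront is non-zero only where both near and infront are.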
Note that in this way spatial templates can simultaneously take into account any geometric properties, such as distance, angle, or complex shape, in order to define a spatial relation.
A recurring sequence of fuzzy vector operations is needed to transform into a new frame of reference (Definition 15). This formula is used to derive more specific transformations (Definitions 16, 17, 18, 19, 20, and 21) for all known types of frames of reference discussed in Section 2.1 (compare Figure 2).
In these definitions, fuzzy vector variables range over objects in an observer's field of view. Each operation transforms the figure object into the kind of frame indicated in its title. f(·) denotes an object's front and c(·) cardinal north. For all cases in which the ground object Õ_grd or the orientation origin (Ṽ_ortfrom) is the observer, the general scheme specializes, for example:

transform_ret Õ_grd Õ_fig = transform Õ_grd Õ_grd Õ_0 Õ_fig

FIGURE 13 Fuzzy spatial templates denoting a spatial relation between a ground object located in their origin and some figure object. Per default, this origin denotes the observer. "Near front" is an intersection of the "near" and the "in front" templates

| Scaling with respect to ground object and assessing the degree to which spatial relations apply

After transformation into a particular frame, a figure object can be compared with a spatial template in order to assess whether or not the corresponding relation between the ground object and the figure object holds in that frame (Figure 14), for example whether an object is intrinsically left of the observer. For this purpose, we first need to adjust the scale of the template according to the size of the ground object. For instance, when the ground object is a house, then "near" will need to be larger than in the case of the observer as ground. For this purpose, we scale all templates by the factor from the observer to the respective ground object, as in Definition 22.
Definition 22 Scaling of relational templates relative to ground:

scale Õ_grd F̃_tpl = scale (F̃_tpl) as (Õ_0) to Õ_grd

To check whether a figure object Õ_j satisfies a spatial template F̃_i,12 we combine Definitions 15 and 22 (compare the flowchart in Figure 14) to define another function (Definition 23). In this function, the transformed object Õ_j is fuzzy-intersected with the scaled fuzzy template F̃_i, and the result is compared to a threshold value n as in Definition 2. Using this function, the question "Is Andre near the front of a house?" can be answered by: satisfies_n Õ_Andre F̃_nearfront (transform_intr Õ_house) Õ_house. Moreover, it shows how reference frame selection could be done (Carlson, 1999), namely by simultaneously transforming an object into all frames and determining the one which best satisfies a spatial template.
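The core of such a satisfaction check (the threshold comparison of Definition 23) can be sketched as follows, assuming the map representation of fuzzy vectors from Section 5.1; `degree` and `satisfies` are hypothetical names, not the authors' code, and scaling and transformation are assumed to have been applied beforehand.

```haskell
import qualified Data.Map.Strict as Map

type FuzzyVec = Map.Map (Int, Int) Double

-- Degree to which a transformed figure object matches a scaled template:
-- the supremum of their pointwise fuzzy intersection (min as t-norm).
degree :: FuzzyVec -> FuzzyVec -> Double
degree fig tpl =
  let overlap = Map.intersectionWith min fig tpl
  in if Map.null overlap then 0 else maximum (Map.elems overlap)

-- The relation holds if the degree reaches the threshold n.
satisfies :: Double -> FuzzyVec -> FuzzyVec -> Bool
satisfies n fig tpl = degree fig tpl >= n
```

Computing `degree` against templates in all candidate frames, and picking the frame with the highest value, would realize the reference frame selection mentioned above.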

| COMPUTING COGNITIVE TRANSFORMATIONS OF A GEOGRAPHIC SCENE
In this section, we discuss our implementation and test it through an application to the example of Section 2.1.

| Implementation
We implemented a discrete version of our formalism in Haskell (https://www.haskell.org/), because its recursive functional syntax allows for translating the functional definitions of the previous sections in a straightforward manner. 13 In the type system of Haskell, we represent a fuzzy vector as a map data structure. 14 The key of the map is a crisp vector that maps to the membership value of the fuzzy vector. As the basic numeric data type we rely on doubles.
With this minimal set we define membership functions for one dimension and combine them into two dimensions for a simple geographic space. All fuzzy vector operations and reference frame transformations have been specified and computed as complex functions.
All fuzzy domains are discretized. For the discretization of vector space, we specified a two-dimensional minimum bounding rectangle and a stepsize, which represents spatial granularity: a small stepsize yields high resolution, a larger stepsize low resolution. In our test, we used a grid of size 40 × 40. For the discretization of scale factors, we divided the number line (from 0 to 10) into steps of 0.1, and similarly for the angle domain between 0 and 360°. All vectors have to fit into the raster spanned by the minimum bounding rectangle and stepsize. Partial function application allows handling errors that are introduced by this domain discretization. For this purpose, we implemented general error-tolerant functional definitions of binary operations on fuzzy vectors and reused them in each case.
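The discretization step can be sketched as sampling a continuous membership function at each grid cell; `discretize` is a hypothetical helper, not the authors' implementation, and the stepsize parameter plays the role of the spatial granularity described above.

```haskell
import qualified Data.Map.Strict as Map

type FuzzyVec = Map.Map (Int, Int) Double

-- Sample a continuous membership function mu onto a w x h grid with a given
-- stepsize; cell (i, j) is sampled at (i * step, j * step). A smaller step
-- means higher spatial resolution. Illustrative sketch only.
discretize :: Int -> Int -> Double -> ((Double, Double) -> Double) -> FuzzyVec
discretize w h step mu =
  Map.fromList
    [ ((i, j), mu (fromIntegral i * step, fromIntegral j * step))
    | i <- [0 .. w - 1], j <- [0 .. h - 1] ]
```

With w = h = 40, this produces the 40 × 40 raster used in the tests; a production version would additionally drop cells whose membership is (near) zero to keep the maps sparse.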
In general, computing the operations that we proposed in this article has at most a complexity of O(nm), where n is the size of one input fuzzy domain and m is the size of the output fuzzy domain. This is due to the fact that all binary fuzzy operations on two fuzzy sets have the form

μ_{Ã∗B̃}(x) = ⋁ {μ_Ã(x_i) ∧ μ_B̃(x_j) | x = x_i ∗ x_j},

where ∗ is bijective, so that the inverse ∗⁻¹ always exists. Thus, given an output element x, one of the two input fuzzy domain elements x_i or x_j can always be derived from the other: x ∗⁻¹ x_i = x_j.
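Such a binary fuzzy operation can be sketched directly from the extension-principle formula above, iterating only over the stored (non-zero) entries of both inputs and combining colliding output cells with the maximum; the names are illustrative, not the authors' code.

```haskell
import qualified Data.Map.Strict as Map

type Crisp = (Int, Int)
type FuzzyVec = Map.Map Crisp Double

-- Extension principle: mu_{A*B}(x) = max { min (mu_A xi) (mu_B xj) | x = xi * xj }.
-- The crisp relation op encodes the "connection preference"; colliding output
-- cells are combined with max, and min serves as the t-norm.
fuzzyOp :: (Crisp -> Crisp -> Crisp) -> FuzzyVec -> FuzzyVec -> FuzzyVec
fuzzyOp op a b =
  Map.fromListWith max
    [ (op xi xj, min mi mj)
    | (xi, mi) <- Map.toList a
    , (xj, mj) <- Map.toList b ]

-- Fuzzy translation as a special case: crisp vector addition as the relation.
fuzzyAdd :: FuzzyVec -> FuzzyVec -> FuzzyVec
fuzzyAdd = fuzzyOp (\(x1, y1) (x2, y2) -> (x1 + x2, y1 + y2))
```

Because only stored entries are visited, this sketch already realizes the first optimization discussed next: restricting computation to the non-zero domain elements.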
Our approach is tractable but not particularly efficient. Yet, it is straightforward to improve on this. First, computations can be restricted to the non-zero domain elements of an input domain, which scales well as long as the non-zero area does not grow with the considered extent. This can be realized with a spatial index over domain elements, restricting the search to those areas containing information. Second, if the fuzzy membership function is known, computations can furthermore be restricted to function parameters (Viertl & Hareter, 2006). Finally, since the computation for any given neuron is independent of its neighbors, it can be parallelized by partitioning the input space, yielding O((n/p) m), where p is the number of parallel processors. In our tests, we were able to compute all transformations at the chosen level of granularity in up to 30 minutes, allocating the fuzzy vector domain to four parallel processors. However, the algebraic matching of crisp vectors (the connection of "neurons") could be done in separate processes, ideally such that each individual algebraic connection is a separate process. This would correspond to the parallel firings of neurons in a preconfigured neural net, and would reduce complexity to linear time. The human brain seems able to achieve this, based on its massively parallel architecture (Merolla et al., 2014).

| Application to geographic map example
In Figure 15 we illustrate transformations computed in Haskell and corresponding to the frames in Figure 2, based on the definitions of frames of reference in Section 4.2. The fuzzy location of the tree can now be expressed relative to the positions, alignments, and sizes of the observer and the house, without any need for prior discretization. The coordinate systems in Figure 15 express different frames of reference, with the origin denoting either the observer (egocentric) or the house (allocentric), and the vertical/horizontal axes expressing either front-back/left-right or (absolute) north-south/west-east directions. Note that the size, shape, and location of the tree are transformed relative to the location, size, and shape of the ground objects. In Figure 15a, the large object in the upper left part denotes the house with its front, and to the right of it there is a tree. The observer is shown at the origin, and the primary axis points in the direction of view of the observer. In Figure 15b, the same scene is shown with the primary axis pointing to magnetic north (which was assumed to point to the corner of the house). In Figure 15c, the tree is shown with respect to the house as ground object and magnetic north. In Figures 15d-f, the tree is shown with respect to the house and different kinds of relative orientations (house front, observer front, house-observer direction).
Despite the influence of shape and fuzziness on transformed objects, qualitative spatial descriptions of these objects turn out to be relatively precise. From the "viewpoint" of the house, the tree is located in an enlarged fuzzy area, because the location of the house is spatially extended and the tree is "viewed" from the set of "house" locations.
Still, it is located exactly where we expect it to be according to the various cognitive reference frames: it is absolute "back-right" ("south-east") (Figure 15c), intrinsic "back-left" from the house (Figure 15d), but deictic "right" from it (Figure 15e), and retinal in the "front-left" corner (Figure 15f). This is also the outcome of computing the satisfaction of appropriately scaled spatial relational templates from Figure 13, based on Definitions 22 and 23, which additionally account for relative scale (see Figure 14). For allocentric (house-centered) transformations, we scaled each template according to the scale difference from observer to house. It then turns out that the tree is always "near" the house, but not "near" the observer (Hahn et al., 2016). This shows how our method allows us to compute with relative sizes of relations such as "near," depending on the ground object.

| OUTLOOK: TOWARD NEURAL GIS
A neural GIS built on the described techniques would represent space by cognitive frames of reference, in addition to ordinary geodetic coordinate reference frames 15 (Iliffe, 2000). A field in a neural GIS denotes phenomena, vague locations, directions, and sizes in a particular cognitive reference frame. Transforming and intersecting fields allows for checking to what degree spatial expressions apply. This idea is captured in the following basic design principles, each highlighting the differences from ordinary GIS.
1. Neural field principle. Unlike a layer in an ordinary GIS, a field in a neural GIS represents a single extended perceptual phenomenon. This can be either a spatial object, a spatial field (a spatially continuous phenomenon), a spatial relation (a spatial template), or a direction or size. A neural field consists of a neural domain (consisting of spatial values such as locations, angles, and sizes), whose elements are in some activity state (denoted by the fuzzy value).
There are no such things as points, lines, or polygons as in a vector GIS; however, spatial objects can be point-like, linear, or region-like, as in a raster GIS.
2. Fuzzy semantics principle. Unlike coordinates in a spatial reference system of ordinary GIS, the elements of a neural domain denote fuzzy positions, directions, and sizes defined relative to another field. That is, the meaning of each element of a field is another field. For example, the meaning of the origin in an object-centered reference frame is a field that represents an object, and the meaning of its vertical axis is a fuzzy angle.
3. Transformation provenance principle. Unlike spatial transformations in a GIS, which are usually reversible (Iliffe, 2000), fuzzy transformations are defined relative to an object and direction. Thus, they are not reversible without knowing this object and direction. For example, if the corresponding object field is lost, then reversing an object-centered transformation is impossible. For this reason, the semantics of a cognitive spatial reference frame depends on its provenance, and losing provenance means losing meaning.
4. Reference frame (equivalence) principle. A cognitive reference frame is defined as an equivalence class of neural fields generated by a given cognitive transformation (compare Definition 15). That is, each neural field generated by the same transformations (including identical ground objects and directions) is a member of the class of fields in the same reference frame. This is why transformation provenance counts: we only know the spatial reference of a neural field in so far as we know how it was generated.
5. Correspondence principle. Unlike overlay operations of layers in a GIS, fuzzy set intersections of neural fields determine correspondences between fields; in particular, the degree to which spatial objects coincide or spatial templates apply (see Figure 14). As in GIS overlays, correspondence checks between neural fields can only be performed for fields of a common cognitive reference frame class, since only then is the meaning of corresponding domain elements (≈ grid cells) comparable.
6. Language assertion principle. Unlike ordinary GIS (which are based on crisp geometric models and are less suitable for modeling natural language expressions), vague language assertions can be tested in a neural GIS based on the correspondence principle. The degree to which a spatial relation template matches the location of an object is taken as a model for determining the degree to which corresponding spatial terms apply. For this purpose, four steps need to be taken. First, language terms need to be translated into neural fields depending on their meaning.
For example, names and definite descriptions need to be translated into fuzzy objects, and relational expressions into fuzzy spatial templates. Second, if these neural fields are not in a common reference frame, then fields need to be translated into such a frame, using fuzzy transformations. Third, templates need to be scaled relative to ground objects. Our scaling approach is generally applicable whenever the size of a term is relative to the size of the ground objects involved, regardless of the scale level. And fourth, the degree of correspondence stands for the degree to which the assertion applies to the perceived situation (see Figure 14).
7. Spatial resolution and extent principle. Similar to map algebra and raster processing in ordinary GIS, a neural field element has a spatial resolution and a field has an extent, and in order to apply correspondence checks and transformations, elements and extents need to spatially coincide. If this is not the case, then a neural GIS needs translations and resamplings similar to a raster GIS. For example, if we add a small spatial template to an object field with a much larger extent, then the resulting neural field needs to be resampled at locations not covered by the template field. Note, however, that a neural field cannot be reduced to map algebra, since the latter does not involve vector operations, and also differs from vector GIS operations on cell geometries. Our approach rather corresponds to a new way of transforming raster maps relative to their cell attribute values as well as their geometries.
To realize a neural GIS, the methods outlined in this article need not only be implemented in an efficient way (as discussed in Section 5.1), but also be substantially extended. For example, topological and direction operators for objects need to be included. Topological operators allow distinguishing interiors, boundaries, and exteriors of extended objects (Egenhofer & Franzosa, 1991; Randell, Cui, & Cohn, 1992), similar to discrete (grid-based) topological models as proposed by Egenhofer and Sharma (1993), Winter and Frank (2000), and Roy and Stell (2002). In this way, it becomes possible to take boundaries into account when computing locations relative to ground objects. For example, the location "in front of the house" can more accurately be computed by extracting the border of the house and by adding a fuzzy spatial template to one side of it, namely the one that is facing toward its front. In this way, we prevent the expression "in front" from being evaluated from locations in the interior or back of the house, as was the case in our examples. 16 Furthermore, topological operators can also be used to denote interiors and exteriors of objects, and thus allow extension of spatial templates with the terms "inside" and "outside." Furthermore, we need methods to extract front directions from objects as well as cardinal directions, which go beyond the approaches discussed in this article. To model the full range of spatial expressions (like "under" and "above"), it becomes necessary to move to three-dimensional neural fields. Finally, it is an open question what data sources a neural GIS might best use, and how neural fields can best be associated with spatial and non-spatial language terms (see below). In particular, it is open how attribute (unary non-spatial) information can be handled inside a neural GIS.
Another important field of future work concerns empirical validation of the transformation method, and the setting up of a repository of tested spatial templates for spatial language terms. The templates we used in this article should be adapted in correspondence with empirical evidence. One way to do this is based on user studies in laboratories, as was done by Regier and Carlson (2001) and others. Note, however, that the purpose of a neural GIS is not to model cognitive processes as precisely as possible (for this purpose, neural field models from neuroscience are probably much more adequate), but rather to provide a useful approximation for purposes of geocomputation with language expressions in GIS applications. Another possibility for testing is therefore to extract place graphs from spatial descriptions in natural language texts (Khan, Vasardani, & Winter, 2013;Vasardani, Winter, & Richter, 2013), to georeference named places with gazetteers (as was done by Kim, Vasardani, & Winter, 2015), and unnamed places manually (to obtain a high-quality localization), and to check correspondences of transformations of unnamed places with their supposed locations using our method. In this way, we could test geometric situations against empirical knowledge contained in natural language descriptions of geographic space, which frequently contain spatial references (Derungs & Purves, 2014).

| CONCLUSIONS
We have proposed a novel method to compute with and transform cognitive spatial frames of reference, including their approximate geometric extensions, taking into account fuzzy locations, directions, shapes, and sizes of ground objects. The main idea is to model spatial phenomena (objects, angles, and directions, but also spatial relations) in terms of neural fields, in which the activity state of a neuron represents the degree to which a phenomenon is present or a relation applies, and in which transformations are a result of neurons interacting with each other across different fields.
Thus, it becomes possible to compute transformations without prior discretization, taking into account the spatial uncertainty and the shape and size of objects. Furthermore, the degree to which spatial assertions in natural language apply to a situation can be tested, provided that terms can be mapped to neural fields. To compute with neural fields in GIS, we have suggested fuzzy vector space theory, which captures essential aspects without relying on feedback loops. We proposed a number of transformation functions, showed how they can be used to define six well-known types of spatial cognitive reference frames, and tested them by transforming a geographic map example into six different cognitive perspectives. The result illustrates the potential of the method for spatially interpreting and testing natural language expressions such as "in front," "to the left," and "in the back of the house." Based on these results, we formulated the first principles of a neural GIS, effectively supporting human perspectives on space, based on transforming neural fields and checking for correspondences with spatial relations and objects. We have identified a number of future tasks, including the provision of topological and directional operations, models for different spatial expressions, the association of language terms with neural fields, as well as the empirical testing and building up of a repository of spatial templates that can be used for modeling language expressions.