High-frequency techniques are applied to analyze and optimize radio localization systems in indoor scenarios. The localization estimate of a mobile station is obtained using the fingerprint technique. The fingerprint of a point is a vector whose terms are the field strengths of the signal received from a network of fixed base stations. The location algorithm is based on minimizing the Euclidean distance between the fingerprint of the unknown point and the fingerprints of a mesh of reference points. Traditionally the fingerprints (stored in a database) of the reference points are obtained by measurement. Here, we propose obtaining these data by using deterministic Geometrical Theory of Diffraction (GTD) models. The model considers multiple bounces and the transmission through indoor walls and also the scattering from furniture in order to obtain a reliable database for the problem. Using the GTD model, the localization system is simulated in two indoor scenarios with different parameters. The proposed procedure based on a GTD model appears well suited for the design and optimization of localization systems tailored to specific scenarios and avoids large and costly measurement campaigns. Several results and conclusions are presented that illustrate these points.
 In this paper, we tackle the problem of localization using available RF-based Wireless Local Area Network (WLAN) infrastructures. Indoor positioning systems based on WIFI (Wireless Fidelity) use metrics based on the received signal strength (RSS) and/or the time-of-arrival (TOA) information of the radio channels available in WLAN interface devices [Bahl et al., 2000; Cisco Systems, Inc., 2006; Kaemarungsi, 2006; Kaemarungsi and Krishnamurthy, 2004]. The TOA metric is more accurate and reliable than an RSS-based metric, but it requires larger frequency bands and fails when the system cannot detect direct ray paths.
 Two methods are used in location systems based on RSS and/or TOA metrics: a distance-based method and a pattern-based method [Hatami, 2006; Kanaan et al., 2006]. The distance-based method estimates the distance between the device to be located and at least three base stations of the WIFI network, also known as Access Points (APs). The distance is derived from the RSS and/or the TOA. The distance-based method suffers large errors in many cases when the device to be located is not in the line of sight of one or more of the reference APs. In these cases, the RSS and the TOA from an AP exhibit nonlinear behavior due to errors caused by undetected direct paths (UDP) [Pahlavan et al., 2006]. In UDP cases, distance-based methods give location errors larger than 10 m. Pattern-based localization methods, also known as fingerprint techniques, compare the RSS and TOA measured by the device to be located with the values of these parameters at a grid of reference points. These values form the so-called radio map of the fingerprint technique. Location by fingerprint exploits the unique relationship between the RSS and/or TOA metrics of a set of received RF signals and a specific location. Currently, the fingerprint technique is one of the most effective and accurate indoor localization methods, with localization errors of a few meters or less.
 Traditionally, the radio-propagation database required by the fingerprint technique has been provided by costly, time-consuming measurement campaigns, which take into account all the factors that determine the propagation channel of a particular scenario. However, it is very cumbersome to test the performance of a localization system completely, and nearly impossible to optimize such a system, by measurements alone [Assad, 2007; Hatami, 2006]. On the other hand, the IEEE 802.11 statistical channel models [Erceg et al., 2004; Medbo and Berg, 1998; Medbo and Schramm, 1998] can be applied with minimal cost and effort to testbeds and to the optimization of WIFI-based localization systems, but these models do not take into account the particular layout of a scenario and are therefore less accurate than measurements. To overcome these difficulties, we can resort to the GTD, which for indoor scenarios can provide sufficiently accurate propagation models [Cátedra and Pérez-Arriaga, 1999; Saéz de Adana et al., 2000] with affordable computational cost when it is implemented with an advanced ray-tracing acceleration technique such as the one presented in this paper. These GTD propagation models provide a useful tool for WLAN design, scheduling and deployment for fingerprint-based localization applications. They can avoid the costly and time-consuming measurement campaigns needed to obtain the radio maps and give more reliable results than the statistical channel models. High-frequency techniques also allow for optimization of the fingerprint reference point grid by choosing the grid spacing that minimizes the location error [Hatami, 2006]. In addition, using these techniques, it is also possible to adjust parameters related to the APs, such as the number of APs, the location of each AP in the indoor scenario, the relative power levels of the APs, the antenna steering directions and the directivity of each AP.
 This paper is organized as follows. In section 2, we present a summary of the fingerprint technique and show how it can be implemented using numerical predictions based on GTD instead of expensive and long measurement campaigns. In section 3 we describe the main features of FASPRI, the GTD computer tool used in this work [Cátedra and Pérez-Arriaga, 1999; Saéz de Adana et al., 2000]. This tool incorporates an efficient treatment of multiple reflections and transmissions at the walls of the scenario and a heuristic approach based on Physical Optics (PO) for the treatment of obstacles that are not electrically large, such as furniture or columns, which are often present in indoor scenarios. A summary of the results obtained in two realistic indoor scenarios is presented in section 4. The results include an analysis of the main parameters of the localization algorithms and lead to some guidelines for optimal localization system design, as well as to the identification of the main causes of localization error. Finally, the paper ends with some important conclusions.
2. Analysis and Design of a Fingerprint Localization System Using GTD
 The deployment of fingerprint-based positioning systems using the proposed technique can be divided into two phases. First, the radio map, or RF fingerprint database, is obtained in a grid of reference points. Figure 1 shows an example of a reference grid. The vector of received signal power values from the APs at a particular position is called the location fingerprint of that point. The fingerprint at point O(x,y) due to m APs is the vector FPo, of dimension m, defined by

$$\mathbf{FP}_O = [P_1, P_2, \ldots, P_m] \qquad (1)$$

where [P1, P2,…, Pm] are the RSS values, measured in dBm, received at point O(x,y) from AP1, AP2,…, APm.
 The components of the fingerprint vector [P1, P2,…, Pm] are calculated by averaging the received power levels over a set of 15 points, 14 of which lie on a cube whose center is the point where the fingerprint is computed (Figure 2). For component Pk we have

$$P_k = \frac{1}{15}\sum_{l=1}^{15} P_{kl} \qquad (2)$$

where Pkl is the RSS from APk at point l of the set of 15 points indicated in Figure 2.
 This procedure is implemented in order to emulate the different positions and orientations that the mobile station can have and also to take into account the fast variation of the field strength due to fast fading. The average represents a statistical treatment of these random variations. The cube size that yields the smallest error in the location process is of the order of a wavelength. The optimal size was determined by considering three side lengths: λ/2, λ and 2λ. From our simulations, a cube of side λ gives the smallest standard deviation of the mean error in the localization process, and a mean error similar to that of a cube of side 2λ. However, a larger cube is not advisable when the spacing of the reference grid is small.
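As a rough illustration, the averaging described above can be sketched in Python. Here `predict_rss_dbm` is a hypothetical stand-in for the GTD field prediction of FASPRI (a toy log-distance model, used only so the sketch runs), and the layout of the 14 cube points (8 corners plus 6 face centers) is an assumption, since the exact sampling pattern is given only in Figure 2.

```python
import numpy as np

def predict_rss_dbm(point, ap_position):
    """Hypothetical stand-in for the GTD prediction: a toy log-distance model."""
    d = max(np.linalg.norm(np.asarray(point) - np.asarray(ap_position)), 0.5)
    return -40.0 - 35.0 * np.log10(d)

def cube_sample_points(center, side):
    """Center plus 14 points on a cube: 8 corners and 6 face centers (assumed layout)."""
    c = np.asarray(center, dtype=float)
    h = side / 2.0
    pts = [c]
    for sx in (-h, h):
        for sy in (-h, h):
            for sz in (-h, h):
                pts.append(c + [sx, sy, sz])          # 8 corners
    for axis in range(3):
        for s in (-h, h):
            offset = np.zeros(3)
            offset[axis] = s
            pts.append(c + offset)                    # 6 face centers
    return pts                                        # 15 points in total

def fingerprint(center, ap_positions, side):
    """Fingerprint vector: mean RSS over the 15-point cube, one entry per AP."""
    pts = cube_sample_points(center, side)
    return np.array([np.mean([predict_rss_dbm(p, ap) for p in pts])
                     for ap in ap_positions])

wavelength = 3e8 / 2.4e9                 # about 0.125 m at 2.4 GHz
aps = [(0.0, 0.0, 3.0), (10.0, 0.0, 3.0), (0.0, 10.0, 3.0)]
fp = fingerprint((4.0, 5.0, 1.5), aps, side=wavelength)
print(fp)                                # one averaged RSS value (dBm) per AP
```

In a real deployment the toy model would be replaced by the GTD prediction (or by measurements), but the averaging step itself is unchanged.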
 In this work the fingerprints at the reference grid points are obtained using the FASPRI code, a well-validated computer tool for 3-D GTD-based propagation analysis in indoor scenarios [Saéz de Adana et al., 2000]. In this way we avoid the need for expensive measurement campaigns. The indoor scenarios are modeled by 3-D flat-faceted CAD models obtained from the architectural designs of the buildings, with the standard degree of detail used in architecture [Cátedra and Pérez-Arriaga, 1999]. The code also takes as input the types of building materials, the locations of the APs and their radiation parameters (antenna steering direction, radiated power, frequency and radiation pattern). The number of points Nm in the reference grid strongly impacts the accuracy and reliability of the location process. The FASPRI tool takes into account ray coupling mechanisms formed by any combination of reflections and transmissions, as well as scattering from small obstacles, up to a prefixed order.
 In addition, the effect produced in the location process by the presence of a human observer carrying the mobile station, or standing near it, must be taken into account. This presence modifies the fingerprint values, causing an additional attenuation [Kaemarungsi and Krishnamurthy, 2004]. In order to incorporate this effect in the location process, we model the observer's body using a faceted model. In the second phase, the localization algorithm is applied to estimate the coordinates of the test points where the device to be located is placed.
 The estimate of the coordinates of each test point is obtained using the closest neighbour (CN) algorithm, which considers the Euclidean distance between the fingerprints at the test point and at the points of the reference grid [Bahl et al., 2000]. The estimated coordinates of a test point T(xe, ye, ze) are those of the reference grid point whose fingerprint has the minimum Euclidean distance to that of the test point. The Euclidean distance Dj between the test point T and point j of the reference grid is given by

$$D_j = \sqrt{\sum_{i=1}^{N_f} \left[ FP_j(i) - FP_T(i) \right]^2} \qquad (3)$$

where FPj(i) and FPT(i) are the i-th components of the fingerprint vectors of grid reference point j and test point T, respectively, and Nf is the dimension of the fingerprint vector. If j is the reference point for which Dj is minimum, then the estimated coordinates of the test point are given by

$$\mathbf{T}(x_e, y_e, z_e) = \mathbf{P}_j(x_j, y_j, z_j) \qquad (4)$$

where Pj(xj, yj, zj) is the position vector of grid reference point j.
 We can consider either all the APs of the WLAN (in this case Nf is the number of APs in this WLAN) or only a subset of APs (for instance, those which give the strongest field strength at the observation point). In the latter case, Nf is less than the total number of available APs. We will see in the results section that the number of APs considered in (3) has a strong impact on the localization accuracy.
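A minimal sketch of this CN step, including the option of keeping only the strongest Nf APs at the test point, could look as follows. Fingerprints are plain NumPy arrays of RSS values in dBm; all names and the toy radio map are illustrative, not taken from FASPRI.

```python
import numpy as np

def locate_cn(test_fp, ref_fps, ref_coords, n_strongest=None):
    """Closest-neighbour location: return the grid point whose fingerprint
    has minimum Euclidean distance to the test fingerprint."""
    test_fp = np.asarray(test_fp, dtype=float)
    ref_fps = np.asarray(ref_fps, dtype=float)
    if n_strongest is not None:
        # Keep only the APs that are strongest at the test point.
        keep = np.argsort(test_fp)[-n_strongest:]
        test_fp = test_fp[keep]
        ref_fps = ref_fps[:, keep]
    d = np.sqrt(((ref_fps - test_fp) ** 2).sum(axis=1))   # Euclidean distances
    j = int(np.argmin(d))
    return ref_coords[j], d[j]      # estimated coordinates = grid point j

# Toy radio map: three reference points, four APs.
ref_fps = [[-40, -55, -70, -80],
           [-60, -45, -65, -75],
           [-75, -70, -50, -60]]
ref_coords = [(0, 0, 1.5), (4, 0, 1.5), (8, 4, 1.5)]
coords, dist = locate_cn([-58, -47, -66, -74], ref_fps, ref_coords, n_strongest=3)
print(coords)   # → (4, 0, 1.5), the closest reference fingerprint
```

Restricting the comparison to the strongest APs simply slices the fingerprint vectors before the distance computation; the rest of the algorithm is unchanged.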
 A possible improvement of the location algorithm is to interpolate between the coordinates of the grid reference points with the smallest Euclidean distances [Prasithsangaree et al., 2002]. The estimate of the coordinates of the point where the mobile station is located, T(xe, ye, ze), is calculated by means of the expression

$$\mathbf{T}(x_e, y_e, z_e) = \frac{\sum_{j=1}^{N_c} (1/D_j)\,\mathbf{P}_j(x_j, y_j, z_j)}{\sum_{j=1}^{N_c} (1/D_j)} \qquad (5)$$

The summation in (5) extends over the Nc points of the reference grid that have the smallest Euclidean distances.
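The interpolated estimate can be sketched as a weighted average of the Nc best-matching grid points. The inverse-distance weights 1/Dj used here are one common choice (an assumption of this sketch); other weightings appear in the literature.

```python
import numpy as np

def locate_interp(test_fp, ref_fps, ref_coords, n_closest=3, eps=1e-9):
    """Weighted average of the n_closest reference points with the smallest
    Euclidean fingerprint distance, using 1/Dj weights."""
    test_fp = np.asarray(test_fp, dtype=float)
    ref_fps = np.asarray(ref_fps, dtype=float)
    ref_coords = np.asarray(ref_coords, dtype=float)
    d = np.sqrt(((ref_fps - test_fp) ** 2).sum(axis=1))   # Euclidean distances
    idx = np.argsort(d)[:n_closest]                       # Nc best matches
    w = 1.0 / (d[idx] + eps)                              # inverse-distance weights
    return (w[:, None] * ref_coords[idx]).sum(axis=0) / w.sum()

ref_fps = [[-40, -55, -70], [-60, -45, -65], [-75, -70, -50]]
ref_coords = [(0, 0, 1.5), (4, 0, 1.5), (8, 4, 1.5)]
est = locate_interp([-58, -47, -66], ref_fps, ref_coords, n_closest=2)
print(est)   # a point between the two best-matching grid positions
```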
3. GTD Computer Tool
 In the previous version of the FASPRI code, the AZB (Angular Z-Buffer) algorithm [Cátedra and Pérez-Arriaga, 1999; Saéz de Adana et al., 2000] was applied to compute indoor propagation while considering several effects (single, double and triple reflections and transmissions). This implementation reduces the CPU time. However, when the order of the effects increases or the geometric model is very complex, as usually happens in indoor scenarios, this approach becomes impracticable because the memory required to store the different AZB matrices is excessive. The order of a ray path is the total number of reflections, transmissions and diffractions. For instance, rays that suffer a double reflection, a double transmission or a reflection-transmission are of order 2.
 To solve this problem, a new iterative approach for computing nth-order effects between different flat surfaces has been developed. It is useful for computing reflections from, and transmissions through, electrically large facets. A new version of FASPRI has been developed [González et al., 2007; Lozano et al., 2007]. It includes the AZB and SVP (Space Partitioning) algorithms [Cátedra and Pérez-Arriaga, 1999] in an iterative scheme, together with the A* heuristic search method, to efficiently compute the contribution of higher-order reflections and transmissions in complex scenarios [Fujimoto et al., 2006; Russell and Norvig, 2003]. Using this technique, we obtain the M strongest rays (highest field intensities) joining the emitter and the receiver, considering all possible rays of order less than or equal to N. The electromagnetic kernel is based on a GO-GO-…-GO combination to calculate multiple reflections and transmissions between flat facets. The GO contributions are computed using image theory [Balanis, 1989].
 The AZB algorithm is used to determine the possible surfaces, which can produce an n-order reflection/transmission of rays coming from a given “active” surface. The space seen from the “active” surface is divided into several angular regions (anxels); the AZB matrix contains the information about the surfaces which are visible from this facet. The approach includes the back-face culling test and the painter's algorithm [Cátedra and Pérez-Arriaga, 1999]. The AZB matrix can be saved in a file, so that it only has to be calculated once for each indoor scenario.
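The anxel idea can be illustrated with a minimal sketch: facets visible from an "active" surface are binned by the angular region in which the direction to their centre falls. The angular resolution, the use of facet centres, and all names here are illustrative assumptions, not the FASPRI data structures.

```python
import math
from collections import defaultdict

def anxel_index(origin, point, n_az=36, n_el=18):
    """Index of the angular region (anxel) containing the direction origin -> point."""
    dx, dy, dz = (p - o for p, o in zip(point, origin))
    az = math.atan2(dy, dx) % (2 * math.pi)                  # azimuth in [0, 2*pi)
    el = math.acos(dz / math.sqrt(dx*dx + dy*dy + dz*dz))    # polar angle in [0, pi]
    i_az = min(int(az / (2 * math.pi) * n_az), n_az - 1)
    i_el = min(int(el / math.pi * n_el), n_el - 1)
    return i_az, i_el

def build_azb_matrix(active_centre, facet_centres):
    """Map each anxel to the facets whose centres fall inside it."""
    azb = defaultdict(list)
    for fid, centre in enumerate(facet_centres):
        azb[anxel_index(active_centre, centre)].append(fid)
    return azb

facets = [(5.0, 0.0, 0.0), (5.1, 0.2, 0.0), (0.0, 5.0, 0.0)]
azb = build_azb_matrix((0.0, 0.0, 0.0), facets)
# Facets 0 and 1 lie in nearly the same direction, so they share an anxel.
```

The payoff is that, given a radiation direction, only the facets stored in the corresponding anxel need to be examined, rather than every facet of the scene.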
 On the other hand, the SVP algorithm is used to find the potential occultations of the ray path segments [Cátedra and Pérez-Arriaga, 1999]. The SVP algorithm selects the surfaces that are located in certain voxels as potential occluding surfaces; it saves the shadowing test of a vast amount of surfaces of the model. An adaptive SVP algorithm has been developed based on splitting the parallelepiped that encloses the scene in nonuniform voxels. In this way, most of the voxels are created in the scene parts that have most of the surfaces. In the preprocessing stage, the surfaces are stored in the appropriate AZB and SVP matrices. Once the preprocessing is finished, the ray-tracing algorithm for calculating multiple iterations between facets is applied as follows:
 The candidate surfaces that contribute to the first iteration, the "active" surfaces, are selected from among all the facets of the scene by means of the back-face culling and shadowing tests [Cátedra and Pérez-Arriaga, 1999]. The shadowing test is accelerated by the SVP algorithm, as outlined above. For each of the "active" surfaces, a cost function is calculated. The cost function represents the potential strength of a ray that is reflected from or transmitted through the surface: it grows with the path length of the ray from the source point to the surface and includes the losses due to transmissions through any walls crossed before reaching the surface, so that a smaller cost corresponds to a potentially stronger ray. The "active" surfaces are stored in a list called OPENLIST, ordered by increasing value of the cost function.
 The surface with the smallest cost function is selected, which is the first one in OPENLIST, and its contribution to the total field at the observation point is computed using the Fresnel reflection or transmission coefficients [Balanis, 1989]. This surface is then removed from OPENLIST.
 If the number of bounces or transmissions is less than a prefixed maximum, the next step is to determine the facets that contribute significantly to the following iteration; they are selected from the AZB matrix of the "active" surface. Since the "active" surface radiates principally in certain directions, only a few surfaces need to be considered: the "passive" surfaces are those located in the anxels that contain these directions. A "passive" surface in this bounce becomes an "active" one in the next bounce. The cost function of these new "active" surfaces is then calculated, the surfaces are inserted into OPENLIST, and the list is reordered by increasing cost function value. The process is then repeated: the first surface in OPENLIST is taken and treated following the same procedure described above for the first "active" surface.
 The process finishes when all the elements have been analysed, (OPENLIST is empty), or when the maximum number of ray contributions we want to compute has been reached. Figure 3 shows a flowchart of the new algorithm implemented in FASPRI.
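The OPENLIST bookkeeping of the steps above can be sketched as a best-first search over a priority queue. The geometry, the AZB visibility lookup and the field computation are stubbed out (`visible_from` and `cost_of` are hypothetical callbacks); only the ordering and stopping logic of the search are shown.

```python
import heapq

def strongest_contributions(source_surfaces, visible_from, cost_of,
                            max_order=3, max_rays=30):
    """Best-first expansion: always process the surface with the smallest
    cost (i.e., the potentially strongest ray) first."""
    openlist = []                       # heap ordered by increasing cost
    for s in source_surfaces:
        heapq.heappush(openlist, (cost_of(s), s, 1))   # (cost, surface, order)
    contributions = []
    while openlist and len(contributions) < max_rays:
        cost, surface, order = heapq.heappop(openlist)  # cheapest = strongest
        contributions.append((surface, order, cost))    # field would be computed here
        if order < max_order:
            for nxt in visible_from(surface):           # AZB lookup in FASPRI
                heapq.heappush(openlist, (cost + cost_of(nxt), nxt, order + 1))
    return contributions

# Tiny illustrative scene: surface A illuminates B, which illuminates C.
graph = {"A": ["B"], "B": ["C"], "C": []}
costs = {"A": 1.0, "B": 2.0, "C": 5.0}
rays = strongest_contributions(["A"], lambda s: graph[s], costs.get,
                               max_order=3, max_rays=4)
print(rays)   # → [('A', 1, 1.0), ('B', 2, 3.0), ('C', 3, 8.0)]
```

The search stops exactly as described in the text: either the heap (OPENLIST) empties or the prefixed maximum number of ray contributions is reached.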
 Electrically small obstacles are treated in FASPRI using a heuristic approach based on PO. The response of the scatterer to any incident field (a direct field or a reflected/transmitted field of any order) is obtained by modeling the scatterer with a set of equivalent currents that depend on the magnitude, phase and incidence angle of the waves that reach it. Figure 4 shows the ray tracing for the 30 strongest contributions in an indoor scenario. The coupling through scattering at the column is among the 30 strongest coupling mechanisms.
4. Results
 We have analyzed two test cases: the Sota and Politecnica buildings. The Sota test case is an approximately square business building with dimensions 38 m × 36 m (Figure 5). This building was used in the validation of the FASPRI code [Cátedra and Pérez-Arriaga, 1999; Saéz de Adana et al., 2000]. The second test case, shown in Figure 6, is the floor plan of a university building. We have considered several WLAN deployments with different numbers of APs, antenna locations and antenna steering directions. In each case we assumed that all the APs radiate the same power at a frequency of 2.4 GHz. We conducted a statistical study of the mean squared value of the localization error, considering 26 points randomly distributed over the building floors.
 In general, the location error decreases as the number of antennas (or APs) increases, until a critical number is reached that strongly depends on the indoor geometry and on the locations and steering directions of the antennas. The best results were obtained when the location algorithm used only the APs that produced the highest received power levels at the grid reference points. Figure 7 shows, for the two test cases (with 16 and 12 APs deployed in the Sota and Politecnica buildings, respectively), the mean localization error as a function of the number of strongest APs, at the test and reference points, considered in the localization algorithm.
 In principle, the location error decreases as the number of points in the reference grid increases. However, large errors appear when many points of the reference grid have similar fingerprints. The best results are obtained when the grid spacing is approximately 4λ. Figure 7 shows the relation between the grid spacing (i.e., the number of fingerprints) and the mean location error.
 In order to reduce the errors in the coordinate estimates, interpolation using only the reference grid points with the smallest Euclidean distances has been analyzed. The results obtained in different tests show that, in general, interpolation does not reduce the location error (Figure 8). Therefore, for simplicity, we recommend assigning to the test point the coordinates of the reference grid point whose fingerprint has the minimum Euclidean distance from that of the test point. This conclusion holds in a statistical sense; for localization at a particular point, interpolation could improve the accuracy under some circumstances. The reason for this behavior may lie in the coherent nature of the field arriving at a point: the total field is the coherent sum of the fields of all the multipath components. If we move along the straight line connecting contiguous points of the reference grid, the RSS from an AP does not always change linearly; therefore, there is not always a linear relation between the RSS from an AP and the distance along such a line.
 We have also considered the increase in error due to unforeseen changes in the indoor scenario. It must be remarked that the fingerprints at the reference grid points are obtained, by a deterministic computer tool or by measurement, for a given scenario. If this scenario changes (e.g., the furniture is rearranged or people are present), the previously obtained fingerprints will no longer be accurate and their use will be a source of error. One of the most common changes is the presence of the human body carrying the mobile station. We modeled the human body by a vertical flat facet with a breadth of 1.0 m and a height of 1.8 m, such that any ray passing through the facet suffers an attenuation of 10 dB. We computed the fingerprints at the test points considering the presence of a facet in one of the four positions indicated in Figure 9 (only one facet at a time). We then applied the location algorithm, minimizing the Euclidean distances between the fingerprints obtained at the test points with the body-modeling facet and the fingerprints of the reference grid points obtained without it. As can be seen in Figure 10, the location accuracy was worst when the scenarios used for the acquisition of the fingerprints at the reference grid points and at the test points differed because of these unforeseen changes.
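A toy version of this body-blocking model can be sketched as follows: a vertical flat facet, 1.0 m wide and 1.8 m tall, adds 10 dB of loss to any ray segment crossing it. The segment-plane intersection test and the scenario values are illustrative assumptions, not the FASPRI implementation.

```python
import numpy as np

def crosses_facet(p0, p1, centre, normal, width=1.0, height=1.8):
    """True if the segment p0 -> p1 crosses a vertical rectangular facet
    standing on the floor, centred (in plan) at `centre` with unit `normal`."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    centre, normal = np.asarray(centre, float), np.asarray(normal, float)
    denom = np.dot(p1 - p0, normal)
    if abs(denom) < 1e-12:
        return False                          # ray parallel to the facet plane
    t = np.dot(centre - p0, normal) / denom
    if not (0.0 <= t <= 1.0):
        return False                          # plane crossed outside the segment
    hit = p0 + t * (p1 - p0)
    u = np.cross(normal, [0.0, 0.0, 1.0])     # horizontal axis of the facet
    in_width = abs(np.dot(hit - centre, u)) <= width / 2
    in_height = 0.0 <= hit[2] <= height       # facet stands on the floor
    return bool(in_width and in_height)

def rss_with_body(rss_dbm, p0, p1, facet_centre, facet_normal):
    """Apply the 10 dB body attenuation when the ray crosses the facet."""
    blocked = crosses_facet(p0, p1, facet_centre, facet_normal)
    return rss_dbm - 10.0 if blocked else rss_dbm

# AP at (0, 0, 2.5), receiver at (6, 0, 1.0), body facet midway across the path.
print(rss_with_body(-55.0, (0, 0, 2.5), (6, 0, 1.0), (3, 0, 0), (1, 0, 0)))
# → -65.0 (ray blocked by the body facet)
```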
 High-frequency techniques were applied to analyze and optimize radio localization systems in indoor scenarios. The localization estimate of a mobile station is obtained using the fingerprint technique. Statistical analyses of the location error have been presented for several parameters. Several conclusions have been reached: (1) localization does not improve with interpolation; (2) the accuracy increases when the algorithm considers only the terms of the fingerprint vector corresponding to the APs that give the strongest power at each point of the reference grid; (3) changes in the scenario due to furniture modifications or to the presence of people can drastically increase the error. From the last conclusion, it is clear that the localization technique must take small changes in the scenario into account. One way to do this is to use the APs of the WLAN themselves to detect these changes, by monitoring the changes in the power that each AP receives from the other APs. Future work by the authors will address this issue.
 This work has been supported in part by the Madrid Community Project S-0505/TIC/0255 and by the Spanish Department of Education and Science, project TEC2007-66164.