Radio Science

An optimization in computation time for the prediction of radio coverage zones

Abstract

[1] This paper presents a method which optimizes the computation time for radio coverage prediction, whatever propagation model is used. The principle consists in reducing, in comparison with classical techniques, the number of application points of the propagation model. The proposed method is based on a multiresolution analysis of signals measured at 1.8 GHz and on an electromagnetic analysis of the propagation environment. The method is compared with a classical technique and is evaluated in terms of reduction in computation time and of accuracy. Satisfactory results were obtained for microcells and small cells, and gains in computation time close to 3 and 80 were achieved for scalar and vectorial models, respectively. As 90% of the estimation errors on the received signal level are less than 2.2 dB, a high level of accuracy is also assured.

1. Introduction

[2] The past few years have seen a rapid growth in the mobile radio communication market, with the second and third generation systems. The increasing number of users has necessitated high-quality radio coverage prediction. During the deployment of such networks, performance models have been generated to predict radio-electric coverage of strategically located transmitters [Lee, 1993].

[3] It is helpful here to recall the definition of the radio-electric coverage zone of a transmitter: the geographical zone in which the received power of radio electric signals (of the given transmitter) has a mean magnitude higher than the threshold relied on for acceptable communication quality [Balanis, 1989]. The classical computation techniques of such zones use electromagnetic wave propagation models, which can be either vectorial [Athanasiadou and Nix, 2000; Escarieu et al., 2001] or scalar [Deygout, 1991]. Such models are applied to a set of fictive reception points, distributed with a constant spatial step in the studied geographical zone. This step is generally only a few meters in urban areas [Tan and Tan, 1996]. This method leads to large and sometimes prohibitive computation time requirements in complex geographical environments [Lee, 1990]. Approaches exist to reduce these prediction computation times [Hata, 1980; Okumura, 1968; Aveneau et al., 2000; Liang and Bertoni, 1998; Yun et al., 2000]: for example, some involve simplifying the propagation model used.

[4] The present study is both different from and complementary to previous approaches. The approach is different because it aims to reduce the number of application points of the propagation model, rather than to reduce its complexity. It is complementary in that it is independent of propagation models and of their optimizations.

[5] The proposed method is thus based on the following hypothesis: The signal received by a mobile phone presents variations induced by diverse electromagnetic phenomena (reflection, diffraction, etc.) of wave propagation. Each type of variation is due to a particular combination of these phenomena. By extending this principle to a geographical zone, the zone can be considered as a spatial partition in which each element is characterized by a homogeneous variation of the received signals, i.e., by an identical combination of electromagnetic interactions undergone by the waves.

[6] Drawing on this hypothesis, which is set out and examined in section 2, the aim is to reduce the computation time necessary for a coverage zone prediction by limiting the application of the propagation model to merely a few points in each element of such a partition. The verification of this hypothesis is based on two tools, implemented in sections 3 and 4. The first is electromagnetic analysis software which allows the identification of different regions of the studied geographical zone associated with particular mechanisms of wave propagation. The second tool is a multiresolution segmentation algorithm which detects ruptures in the mean power of measured signals at 1.8 GHz. The application of these techniques is then presented for the purpose of verifying the formulated hypothesis.

[7] Section 5 presents an optimized coverage zone prediction method for a radio transmitter. In particular, a justification is set out for the reduction in the number of application points per element of the model.

[8] The evaluation of the method in terms of computation time and accuracy is addressed in section 6. Two configurations are considered, i.e., small cells and microcells.

2. Hypothesis

[9] In the mobile phone radio communication context involving a transmitter and a receiver, the information propagates along multiple paths, as shown in Figure 1. Thus the received signal, at a given location, results from the combination of waves that have followed different paths.

Figure 1.

Multipath propagation.

[10] During its propagation, each wave can undergo a particular combination of electromagnetic interactions with the propagation environment [Parsons, 1992]. Classically, an electromagnetic interaction is viewed as either a reflection phenomenon involving surfaces or a diffraction phenomenon involving wedges encountered by the waves. A wave which does not undergo any interaction propagates in line of sight. Referring to the example of Figure 1, we note that the received signal is due to one line of sight wave, one reflected wave, one diffracted wave and one diffracted-then-reflected wave. Bearing in mind these notions of propagation and observations of radio electric signals during measurement campaigns [Pousset et al., 2003], the following hypothesis was formulated to reduce the time necessary for coverage zone prediction: on a mobile route, the significant changes in the mean level of the received signal are due to changes in the combinations of electromagnetic interactions (reflection, diffraction and line of sight) undergone by the received waves.

[11] By extending this hypothesis to a geographical zone, we suppose that the set of points for which the received signal is induced by the same combination of electromagnetic interactions defines a specific region. In such a region, the dynamic range and the mean level of the signal do not vary very much. It is thus deduced that the studied geographical zone is a spatial partition composed of different elements, each element exhibiting homogeneous behavior of the received signal. Moving from one element to another entails a change in the shape of the signal. Figure 2 illustrates this hypothesis with a schematic example.

Figure 2.

(a) Spatial partition and (b) schematic received signal on the measurement route.

[12] Considering the transmitter position illustrated in Figure 2a, the application of the physical laws associated with wave propagation [Tan and Tan, 1996] results in different regions appearing. Each of them is characterized by a combination of electromagnetic interactions: (1) simple diffraction, (2) visibility and (3) visibility plus reflection. A mobile receiver following the route shown in Figure 2a crosses different regions of the partition and thus receives the schematic signal indicated in Figure 2b, in accordance with the formulated hypothesis. Thus, for each region, a particular combination of electromagnetic interactions induces a specific behavior of the signal. Therefore the points P1a, P2a, P3a and P4a, corresponding to the region changes on the mobile route, are strongly correlated with the points P1s, P2s, P3s and P4s, indicating significant modifications of the received signal shape.

[13] From this hypothesis, in order to optimize the computation time of the coverage zone, it becomes possible to apply a propagation model using only a few points of each element. Thus, as the mean level does not vary greatly in this last case, we can extrapolate the result to the full element. The propagation model is consequently not applied according to a constant spatial step, in contrast with the classical technique.

[14] The heart of the proposed method can consequently be viewed as an electromagnetic analysis tool for analyzing the environment. It allows us to break down the studied environment into regions characterized by particular combinations of physical phenomena. More precisely, this analysis is applied during a hypothesis verification phase (section 4) and during the implementation of the method (section 5).

[15] However, another tool contributes to the verification of the hypothesis. This involves applying an analysis technique to the signals acquired on the predefined routes, which in turn allows significant ruptures to be detected.

3. Electromagnetic Analysis of the Environment

[16] The purpose of the electromagnetic analysis is to provide a spatial partition of the environment. Each element of this partition is characterized by a certain combination of interactions (diffraction, reflection and also line of sight wave). However, although wave propagation is a three-dimensional phenomenon, the electromagnetic analysis software has not been realized in three dimensions because of the difficulty involved. The choice was made to tend progressively toward this theoretically ideal solution through several versions of increasing complexity. Initially, a version producing a partition in a horizontal plane was developed, then a version based on a study in vertical planes and finally a version designed for a 2.5-D propagation study, whose result corresponds to a fusion of the horizontal and vertical partitions. Whatever the version considered, the partition is always calculated for a height of 1.5 m above the ground; this height corresponds to the mean height of a mobile receiver.

3.1. Horizontal Electromagnetic Analysis (2DH)

[17] As a general rule, the electromagnetic analysis software is based on the search for the optical boundaries created by the reflecting faces and the diffracting wedges of the obstacles comprising the environment. For a horizontal analysis, the reflection and diffraction phenomena are illustrated in Figures 3a and 3b.

Figure 3.

(a) Reflection zone and (b) diffraction zone.

[18] The building generates reflection zones when its faces are in line of sight of the transmitter (cf. Figure 3a). These zones are defined with respect to the laws of geometrical optics, which require the reflection angle to be equal to the incidence angle. The building also generates two diffraction zones at the wedges marked (1) and (2), as shown in Figure 3b. To determine the diffraction zones resulting from the wedges of a building, it is necessary to carry out a preliminary study. Theoretically, the diffracted waves fill all the free space [Keller, 1962; Kouyoumjian and Pathak, 1974]. Nevertheless, the magnitude of the diffracted waves decreases when the angle α increases (Figure 4). Acknowledging this, we define an angle αl as the perceptible limit of influence of the diffracted waves, in comparison with the line of sight wave. For an angle greater than αl, the diffracted waves are too attenuated and can be neglected.

Figure 4.

Limit of influence of the diffracted waves in relation to α.

[19] It has been shown [Pousset et al., 2003] that the angle αl is always less than 3 degrees for frequencies greater than 1 GHz, whatever the situation (distance, opening angle and nature of the wedges). This angle is very small in comparison with the dimensions of the environment and so can be considered as zero. In the electromagnetic analysis, diffraction is then taken into account only in the shadow zones of the obstacles.
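To make this geometric reasoning concrete, the following sketch classifies a receiver point with respect to a single rectangular building in the horizontal plane: a point is in line of sight if no building face blocks the transmitter-receiver segment, in a reflection zone if the specular image of the transmitter through a lit face can reach it, and otherwise in the diffraction (shadow) zone. This is only an illustration of the principle of Figure 3, not the laboratory software described in the text; all identifiers and the one-building simplification are purely illustrative.

```python
# Illustrative sketch of the horizontal (2DH) zone classification for a single
# rectangular building. Not the laboratory software described in the text.
from dataclasses import dataclass


@dataclass
class Building:
    xmin: float
    ymin: float
    xmax: float
    ymax: float

    def faces(self):
        a = (self.xmin, self.ymin)
        b = (self.xmax, self.ymin)
        c = (self.xmax, self.ymax)
        d = (self.xmin, self.ymax)
        return [(a, b), (b, c), (c, d), (d, a)]


def _ccw(p, q, r):
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])


def segments_cross(p1, p2, q1, q2):
    """True if segments p1p2 and q1q2 properly intersect."""
    d1, d2 = _ccw(q1, q2, p1), _ccw(q1, q2, p2)
    d3, d4 = _ccw(p1, p2, q1), _ccw(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0


def line_of_sight(tx, rx, building):
    """Line of sight if the Tx-Rx segment crosses no building face."""
    return not any(segments_cross(tx, rx, f1, f2) for f1, f2 in building.faces())


def in_reflection_zone(tx, rx, building):
    """Specular reflection on a face lit by Tx (geometrical optics: the image
    of Tx through the face line must reach Rx through that face)."""
    for f1, f2 in building.faces():
        mid = ((f1[0] + f2[0]) / 2, (f1[1] + f2[1]) / 2)
        if not line_of_sight(tx, mid, building):      # face not lit by Tx
            continue
        dx, dy = f2[0] - f1[0], f2[1] - f1[1]
        t = ((tx[0] - f1[0]) * dx + (tx[1] - f1[1]) * dy) / (dx * dx + dy * dy)
        foot = (f1[0] + t * dx, f1[1] + t * dy)
        image = (2 * foot[0] - tx[0], 2 * foot[1] - tx[1])
        if segments_cross(image, rx, f1, f2):
            return True
    return False


def classify(tx, rx, building):
    """Combination of interactions seen at rx (diffraction only in the shadow)."""
    tags = ["line of sight"] if line_of_sight(tx, rx, building) else ["1 diffraction"]
    if in_reflection_zone(tx, rx, building):
        tags.append("1 reflection")
    return " + ".join(tags)


if __name__ == "__main__":
    b = Building(20, 10, 40, 30)
    tx = (0.0, 0.0)
    for rx in [(60.0, 5.0), (30.0, 50.0), (50.0, 45.0)]:
        print(rx, "->", classify(tx, rx, b))
```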

[20] We can observe in Figure 3b the partial superposition of two diffraction zones. By making a parallel with a ray-tracing approach, it is seen that one receiver located in this superposition of zones would receive two once-diffracted paths.

[21] In the case of a building, considering one reflection and one diffraction, the electromagnetic analysis provides the result proposed in Figure 5. We can observe a spatial partition composed of different elements, the gray levels of shading indicating the nature of the associated interactions. Emphasis is again placed on the existence of elements defined by superposition of zones, such as the element corresponding to the intersection of a zone in the line of sight and a zone corresponding to one reflection.

Figure 5.

Different zones induced by a building for one diffraction and one reflection.

[22] The objective of the electromagnetic analysis is to provide a partition suited to a coverage computation, i.e., one which captures only the significant variations of the mean received level. Consequently, the final partition must be constituted only of elements reflecting such variations. Using a hierarchy of the physical phenomena according to the attenuation they bring about on the waves [Pousset et al., 2003], the element characterized by a line of sight and a reflection is ultimately considered as an element in line of sight: it corresponds to points receiving a direct path which is more energetic than the reflected path. Taking account of these aspects and after possible regrouping of contiguous zones characterized by similar attenuations, the final partition provided by the electromagnetic analysis is represented in Figure 6.

Figure 6.

Regrouping result.
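The hierarchy and regrouping step leading to Figure 6 can be sketched as follows: each raw element carries the set of interactions it receives, only the least attenuated mechanism is retained as its label, and contiguous elements ending up with the same label are merged. The ranking below (line of sight before reflection before diffraction) follows the attenuation argument of the text, but the data structures are illustrative only.

```python
# Illustrative sketch of the hierarchy/regrouping step of section 3.1.
# The ranking by increasing attenuation follows the argument of the text;
# the data structures are not those of the laboratory software.
RANK = {"LOS": 0, "reflection": 1, "diffraction": 2}


def dominant_label(combination):
    """Keep only the least attenuated mechanism, e.g. {'LOS', 'reflection'}
    is labeled 'LOS' because the direct path is more energetic."""
    return min(combination, key=RANK.__getitem__)


def regroup(elements):
    """Merge contiguous elements (e.g. along a route) sharing the same
    dominant label; returns a list of [label, number_of_elements] regions."""
    merged = []
    for combination in elements:
        label = dominant_label(combination)
        if merged and merged[-1][0] == label:
            merged[-1][1] += 1         # extend the previous region
        else:
            merged.append([label, 1])  # open a new region
    return merged


if __name__ == "__main__":
    raw = [{"LOS"}, {"LOS", "reflection"}, {"diffraction"}, {"diffraction"}]
    print(regroup(raw))   # [['LOS', 2], ['diffraction', 2]]
```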

3.2. Vertical Electromagnetic Analysis (2DV)

[23] The electromagnetic analysis applied to vertical planes is based on a similar approach to the above, but is slightly more difficult to implement. In the vertical version, the partition is also provided in a horizontal plane at 1.5 m above the ground. However, in this case the propagation analysis is made in a succession of vertical planes. Figure 7a shows a schematic example comprising buildings and deals with one diffraction. The information contained in each plane at 1.5 m above the ground is then extracted (Figure 7b) to constitute a partition similar to that obtained in the horizontal version (Figure 8). Then, the principle of interaction hierarchy, as well as the possible regrouping of contiguous elements, is applied to this partition, as for the horizontal version, in order to generate the final partition.

Figure 7.

(a) Regular scanning of the environment in a vertical plane and (b) electromagnetic analysis of plane 2 for one diffraction only.

Figure 8.

Generation of a partition from a vertical electromagnetic analysis.

3.3. 2.5-D Electromagnetic Analysis

[24] This version consists in associating the two previous analyses (horizontal and vertical). The result is always a partition presented in a horizontal plane at 1.5 m above the ground.

[25] The effective computation of this partition consists in combining the partitions obtained in the horizontal and vertical versions; this operation is also based on the principle of hierarchical organization of the interactions.
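A minimal sketch of that combination, assuming the two partitions are rasterized on the same 1.5 m-high grid and that the same attenuation hierarchy decides which label is kept at each point (the ranking and the labels are illustrative assumptions):

```python
# Sketch of the 2.5-D fusion: at every grid point, the less attenuated of the
# 2DH and 2DV labels is kept. Ranking and labels are illustrative assumptions.
RANK = {"LOS": 0, "reflection": 1, "diffraction": 2, "not_covered": 3}


def fuse(horizontal, vertical):
    """horizontal, vertical: 2-D lists of labels defined on the same grid."""
    return [[min(h, v, key=RANK.__getitem__) for h, v in zip(h_row, v_row)]
            for h_row, v_row in zip(horizontal, vertical)]


if __name__ == "__main__":
    h = [["LOS", "diffraction"], ["not_covered", "diffraction"]]
    v = [["LOS", "reflection"], ["diffraction", "not_covered"]]
    print(fuse(h, v))
    # [['LOS', 'reflection'], ['diffraction', 'diffraction']]
```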

3.4. Parameters of the Electromagnetic Analysis

[26] Whatever version is considered, the electromagnetic analysis software needs input data corresponding to geometrical and electrical characteristics of the environment. These are established in three dimensions by the “National Geographical Institute” in France. Moreover, the analysis is parameterized by the number and the nature of the electromagnetic interactions—diffraction and reflection—taken into account for the partition computation.

[27] These parameters are essential because they directly influence the resolution of the partition. Consideration of complex combinations of interactions leads to a partition comprising a high number of elements. As shown in Figure 2, the parameters are considered correct when the changes of mean level of the received signal on the route correspond to changes of partition elements. The values of these parameters are obtained during the verification of the hypothesis, as explained immediately below.

4. Hypothesis Verification

[28] The purpose of this section is to show the validity of the hypothesis forming the basis of the optimization method for coverage zone prediction. This hypothesis implies the existence of a very strong correlation between the significant variations in mean level of signals measured at 1.8 GHz, via a narrowband channel sounder developed by Rohde & Schwarz, and the radio wave propagation mechanisms. This calls for an attempt to establish the correspondence between the measured signal variations and the spatial partition obtained by electromagnetic analysis. To this end, we have implemented a signal segmentation technique that permits the identification of significant variations. This technique is presented in the following section.

4.1. Signal Segmentation

[29] The goal here is to detect mean level ruptures in a signal. The wavelet maxima constitute a tool which, through its properties, achieves this. This multiresolution analysis allows the study of a signal on different scales [Mallat, 1988], each of them corresponding to a particular frequency band. Moreover, any increase in scale is associated with a decrease in frequency. Thus, if we only take into account the low frequencies, we achieve a smoothing of the signal. The filter function used is called the scale function. The rapid variations, lost during the smoothing as the scale increases, are projected onto the complementary wavelet basis [Flandrin and Goncalves, 1993].

[30] The wavelet maxima approach is based on the wavelet transform of a signal s(t) ∈ L2(ℜ) (the space of finite energy signals) at the scale e and time t, defined by

$W_e^1 s(t) = (s \ast \psi_e^1)(t) = \int_{-\infty}^{+\infty} s(\tau)\,\psi_e^1(t-\tau)\,d\tau$,     (1)

where ψe1(t) is the dilation of the wavelet function ψ1(t) ∈ L2(ℜ) by the scaling factor e.

[31] Thus

$\psi_e^1(t) = \frac{1}{e}\,\psi^1\!\left(\frac{t}{e}\right)$.     (2)

Moreover, the wavelet function ψ1(t) is defined as the first derivative of the scale function ζ(t):

$\psi^1(t) = \frac{d\zeta(t)}{dt}$.     (3)

So

$\psi_e^1(t) = \frac{d\zeta_e(t)}{dt}$,     (4)

where ζe(t) = ζ(t/e) is the dilated scale function.

[32] Finally, equation (1) can be written

$W_e^1 s(t) = \left(s \ast \frac{d\zeta_e}{dt}\right)(t)$.     (5)

[33] Using the commutativity of the convolution and the derivation, relation (5) becomes

$W_e^1 s(t) = \frac{d}{dt}\,(s \ast \zeta_e)(t)$.     (6)

[34] In the present case, the signal to be segmented is not temporal but spatial. Thus relation (6) becomes

$W_e^1 s(x) = \frac{d}{dx}\,(s \ast \zeta_e)(x)$,     (7)

where x represents the curvilinear abscissa.

[35] Relation (7) can be interpreted as a derivation of the signal, followed by a smoothing realized by the ζe(x) function on the scale e. The rapid variations of s(x) disappear progressively as the scale becomes high. Thus the extrema are due to the significant variations in the mean level of the signal.

[36] Mallat and Zhong [1992] have shown that the local minima of ∣We1(x)∣ do not correspond to large variations in the signal, but to inflexion points of the s * ζe(x) function. However, the local maxima of ∣We1(x)∣, called wavelet maxima, reflect the signal discontinuities observed on the different scales.

[37] The studied signal being discrete, the wavelet transformation must also be discrete. To allow a rapid numerical computation, the scale must vary according to the dyadic sequence (2j)j∈Z. Moreover Meyer [1989] and Daubechies et al. [1991] have studied the conditions of redundancy, orthogonality, and stability of the wavelet dyadic transformation.

[38] Let $\psi_{2^j}^1(x)$ denote the dilation of the wavelet function ψ1(x) by the factor 2j:

$\psi_{2^j}^1(x) = \frac{1}{2^j}\,\psi^1\!\left(\frac{x}{2^j}\right)$.     (8)

[39] The dyadic wavelet transform of the signal s(x), at the scale 2j and at the curvilinear abscissa x, is then defined by the same convolution product, with the scale e restricted to the dyadic values:

$W_{2^j}^1 s(x) = (s \ast \psi_{2^j}^1)(x) = \frac{d}{dx}\,(s \ast \zeta_{2^j})(x)$.     (9)

[40] Consequently, the search for the significant variations of s(x) leads to the identification of the local maxima of ∣We1(x)∣ at the dyadic scales. Figure 9b presents the result of this transformation, i.e., the decomposition of a simulated signal (Figure 9a) on 11 scales.

Figure 9.

(a) Segmentation of a signal on (b) eleven scales.

[41] In concrete terms, this representation of the wavelet maxima contains the locations and values of We1(x) at each scale where ∣We1(x)∣ reaches a local maximum. In our application, we have to retain the scale that furnishes the maxima allowing us to segment the signal according to its significant variations in mean level. For the proposed signal, which presents four level discontinuities, the choice of the tenth scale permits identification of the five intervals of the signal; the eleventh retains only the two principal discontinuities. To obtain the positions of these ruptures of the signal from the wavelet maxima, it is necessary to trace the locations of the maxima back through the scales. Indeed, their locations are shifted by the transformation, which filters the signal and hence introduces a phase difference. This stage is called chaining [Carré et al., 2001].
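As an illustration of relations (7)-(9), the sketch below computes the transform of a discrete signal at one dyadic scale as the derivative of its smoothed version and extracts the local maxima of the modulus. A Gaussian kernel stands in for the scale function ζ and the 20% threshold is an arbitrary choice of this example; the published method uses the Mallat-Zhong framework and the chaining stage described above, which are not reproduced here.

```python
# Sketch of the wavelet maxima detection of section 4.1: smooth the signal with
# a dilated scale function, differentiate, and keep the local maxima of |W|.
# The Gaussian kernel and the 20% threshold are illustrative stand-ins.
import numpy as np


def wavelet_maxima(s, j):
    """Return (W, rupture indices) for the discrete signal s at scale 2**j."""
    scale = 2 ** j
    half = 4 * scale
    x = np.arange(-half, half + 1)
    zeta = np.exp(-0.5 * (x / scale) ** 2)            # dilated smoothing kernel
    zeta /= zeta.sum()
    padded = np.pad(s, half, mode="edge")             # avoid border artifacts
    smoothed = np.convolve(padded, zeta, mode="same")[half:-half]   # s * zeta
    w = np.gradient(smoothed)                         # d/dx (s * zeta), cf. (7)
    mod = np.abs(w)
    maxima = np.flatnonzero((mod[1:-1] > mod[:-2]) & (mod[1:-1] >= mod[2:])) + 1
    return w, maxima[mod[maxima] > 0.2 * mod.max()]   # keep significant maxima


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # piecewise-constant mean level plus fast fading, in the spirit of Figure 9a
    s = np.concatenate([np.full(200, -60.0), np.full(150, -75.0),
                        np.full(250, -68.0)]) + rng.normal(0, 3, 600)
    _, ruptures = wavelet_maxima(s, j=6)
    print("detected rupture positions (samples):", ruptures)
```

In the actual method, these maxima would be computed at every dyadic scale and chained back across scales, as described above, before being interpreted as interval limits.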

[42] As emphasized above, the choice of the scale enabling initialization of the chaining process is the essential parameter of this segmentation technique. If too small a scale is chosen, nonsignificant ruptures of the signal will be retained, and for some of these ruptures it will not be possible to link them to significant changes in the combinations of electromagnetic interactions. Conversely, if too high a scale is chosen, there is the risk of not taking into account significant modifications of the interaction combinations. A statistical study has led to the elaboration of a criterion for identifying the optimal scale for the application. It is based on the stability of the segmented intervals in comparison with the total spread of the signal [Combeau, 2004]. It is quantified by the quotient σi/σs (σi: standard deviation of the magnitude in the considered interval; σs: standard deviation of the magnitude of the entire signal).
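The scale-selection criterion can be read, schematically, as follows: for each candidate scale, segment the signal with the chained maxima of that scale, compute σi/σs for every interval, and retain the largest scale whose intervals all remain stable. The 0.5 threshold and the selection rule itself are assumptions of this sketch, not the calibrated values of Combeau [2004].

```python
# Illustrative reading of the sigma_i / sigma_s stability criterion. The 0.5
# threshold and the "largest stable scale" rule are assumptions of this sketch.
import numpy as np


def interval_stability(s, boundaries):
    """Return sigma_i / sigma_s for each interval delimited by `boundaries`
    (rupture positions strictly inside the signal)."""
    sigma_s = np.std(s)
    edges = [0] + sorted(boundaries) + [len(s)]
    return [np.std(s[a:b]) / sigma_s for a, b in zip(edges[:-1], edges[1:])]


def pick_scale(s, segmentations, threshold=0.5):
    """segmentations: {scale index j: rupture positions after chaining}.
    Keep the largest (coarsest) scale whose intervals are all stable."""
    stable = [j for j, bounds in segmentations.items()
              if all(r < threshold for r in interval_stability(s, bounds))]
    return max(stable) if stable else None
```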

4.2. Principle of the Hypothesis Verification

[43] The purpose of this section is to verify the validity of the hypothesis formulated in section 2, namely that there exists a very strong correlation between the slow variations of measured signals and the mechanisms of radio electric wave propagation. The technique used to validate this hypothesis consists in verifying the correspondence between the variations of the measured signal and the spatial partition. To this end, the segmentation algorithm (section 4.1) is applied to detect the changes in behavior of the measured signals.

[44] To present this technique on the basis of the flowchart proposed in Figure 10, a simple example is examined in detail. Let us consider the mobile route indicated by the white line in Figure 11a and the signal shown in Figure 12, at a frequency of 1.8 GHz. The studied environment is a suburban area.

Figure 10.

Flowchart showing the steps in the validation of the hypothesis.

Figure 11.

(a) Spatial partition and (b) zoom on the mobile route.

Figure 12.

Segmented signal.

[45] The segmentation algorithm, presented in section 4.1, identifies three intervals in this signal. Their limits are at the P1s and P2s points on the curvilinear abscissa, representing 65 m and 117 m (Figure 12).

[46] It is then necessary to identify the parameters of the electromagnetic analysis software (the number and nature of the electromagnetic interactions) enabling a correct partition for this segmentation. Figure 11a shows the partition of the studied environment provided by the electromagnetic analysis, with one diffraction and one reflection.

[47] For the studied mobile route, the results of this analysis do not correspond to those obtained by measured signal segmentation. The elements of the partition, crossed by the mobile on the route, are too many and the partition therefore too fine. It is thus necessary to reduce the number of electromagnetic interactions to consider.

[48] Figure 11b shows a zoom on the mobile route of the partition of Figure 11a, here computed for one diffraction and no reflection. It can be seen that the route crosses two elements of the partition: a zone in line of sight of the transmitter and a diffraction zone corresponding to the shadow zone of the building. The curvilinear abscissas of the two points P1a and P2a, derived from the electromagnetic analysis software, have values of 70 m and 120 m. Comparing, on the one hand, the abscissas of the points resulting from the segmentation, P1s and P2s, and, on the other, P1a and P2a, a difference of a few meters is noted. Nevertheless, this result is judged acceptable for two reasons. The first is related to the uncertainty of the mobile receiver position during the measurements. The second is the uncertainty, approximately three meters here, inherent in the terrain database. Thus, on this simple example, the hypothesis is verified: the detected ruptures in the measured signal correspond effectively to interaction changes when one diffraction and no reflection are chosen as parameters of the electromagnetic analysis software.

[49] For a complex case corresponding to a measured signal (Figure 13) in a microcell context in Paris (France), Table 1 presents the changes in the number of intervals constituting the mobile route.

Figure 13.

Measured signal (Paris).

Table 1. Variation in the Route Segment Number With Respect to the Interactions
Interactions | Number of Segments
0R1D | 65
0R2D | 124
0R3D | 142
0R4D | 149
1R4D | 200
2R4D | 220
2R2D | 213
3R3D | 243

[50] This number is obtained by treatment of the spatial partition according to the chosen interaction combination in the electromagnetic analysis. The reference is, as before, the number of intervals obtained by segmentation on 11 scales of the measured signal. In this case, the value is 203.

[51] Table 1 reveals that, for the studied microcell configuration, it is necessary to consider four diffractions and one reflection. The difference between the reference value and that obtained by the electromagnetic analysis is about 1%. Moreover, in this case, the positional difference of the segment extremities in the two approaches is, once more, only a few meters. As will be shown in section 6, this level of accuracy is sufficient for the application under consideration. Many signals in varied configurations have been treated in an analogous way, allowing the statistical verification of the hypothesis.

[52] It should be noted that it becomes possible to establish a learning stage aimed at determining, for a particular type of environment, the ratio "number of reflections/number of diffractions" which gives the optimal spatial partition in relation to many measured signals. Thus, for a geographical zone of a type already treated, measured signals are no longer needed to determine the parameters of the electromagnetic analysis: it is sufficient to use the parameters of the already-studied environment.

5. Effective Computation of the Coverage Zone

5.1. Principle

[53] After verification of the hypothesis, it is affirmed that the electromagnetic analysis software used generates spatial partitions comprising elements having small variations in the mean level of the received signal. The optimization of the coverage zone computation time consists in applying a propagation model to merely a few points per element of the spatial partition, then extrapolating the received level to the whole element. The process can be seen to follow the flowchart shown in Figure 14.

Figure 14.

Flowchart of the application phase.

[54] Thus, from a given type of environment and the associated optimal combination of interactions (number of reflections/number of diffractions), the optimal spatial partition is obtained using the electromagnetic analysis software. To implement this flowchart, we still have to determine the number of application points of the propagation model and their locations in each element of the partition.

5.2. Number of Points Per Element of the Partition

[55] The number of points is one of the key parameters here. It must be minimal in order to reduce the computation time in comparison with a classical prediction method, while still being sufficiently high to assure a good estimation of the received power in the element being analyzed.

[56] To identify this number, our principle consists in deriving the number of points needed to estimate, with acceptable accuracy, the median power of each segment of a signal segmented by the multiscale analysis (section 4.1). The estimation uncertainty of the median power has been fixed, somewhat arbitrarily, at 3 dB because this value leads to satisfactory results (see section 6). This estimation uncertainty is defined, for each segment of the signal, as the difference between the median power (in dBm) of the segment and that computed from a few uniformly distributed points of the same segment. The variation in the estimation error of the median power according to the number of points considered for the computation is presented in Figure 15.

Figure 15.

Variation in the estimation error according to the number of points considered per interval.

[57] This curve is the product of a statistical study based on a large number of signal segments, representing several tens of thousands of measurement points acquired in the environments studied in this article. It is clear that two points are enough to estimate the level of a signal segment with an error appreciably less than 3 dB. This type of result could be expected, since one of the principal properties of the wavelet maxima treatment is to ensure the stability of the signal within each interval. Moreover, during this study, it was shown that these two points must be placed uniformly within each interval in order to take into account possible nonsignificant variations of the signal.

[58] The same approach can be extended to the elements of a spatial partition. We consider two application points uniformly distributed in each element, in order to estimate the corresponding received power.
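The study behind Figure 15 can be reproduced in spirit with the short experiment below: the median power of a synthetic stable segment is compared with the value obtained from n uniformly spaced samples of the same segment. The fading statistics are invented for the purpose of the illustration and do not reproduce the measured data.

```python
# Sketch of the study behind Figure 15: error on the median power of a segment
# as a function of the number of uniformly placed points. The synthetic fading
# (Gaussian, 2.5 dB) is an assumption and does not reproduce the measurements.
import numpy as np


def median_power_error(segment_dbm, n_points):
    """|median of the whole segment - median of n uniformly spaced samples|."""
    idx = np.linspace(0, len(segment_dbm) - 1, n_points).round().astype(int)
    return abs(np.median(segment_dbm) - np.median(segment_dbm[idx]))


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    errors = {n: [] for n in (1, 2, 3, 5, 10)}
    for _ in range(500):
        segment = -70.0 + rng.normal(0.0, 2.5, 400)   # stable mean level + fading
        for n in errors:
            errors[n].append(median_power_error(segment, n))
    for n, e in errors.items():
        print(f"{n} point(s): mean estimation error = {np.mean(e):.2f} dB")
```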

5.3. Propagation Model and Extrapolation

[59] In accordance with the flowchart in Figure 14, after the application of the electromagnetic analysis on the studied geographical zone, we have at our disposal a partition in which each region is characterized by a combination of interactions. Figure 16a illustrates such a partition for the area of Paris in which the signal of Figure 13 was measured.

Figure 16.

(a) Spatial partition and (b) coverage zone.

[60] In order to predict the received power in this geographical zone, a propagation model is then applied to each pair of points per element of the previously identified partition. For each element, the average of these two estimates is then applied to the points of a regular meshing, as would be the case for a classical coverage prediction method. In this way we obtain an estimation of the coverage comparable to that furnished by a classical technique (Figure 16b). In Figure 16b, the variation of the received power is indicated by the gray scale.

[61] We have noted that the elements of the partition in line of sight of the transmitter are generally quite wide, as illustrated in Figure 16a. Consequently, a uniform extrapolation of the estimated mean level would imply a sizable error in prediction. To eliminate this, the free space model is applied in these regions at each point of the regular grid.
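Putting the steps of Figure 14 together, the prediction loop can be sketched as follows: for each element that is not in line of sight, an arbitrary propagation model is evaluated at the two selected points and the mean of the two estimates is assigned to every grid point of the element, while line-of-sight elements receive a point-by-point free space estimate. The element representation, the placeholder model and the 1.8 GHz free space formula are assumptions of this sketch, not the μG model or the laboratory software.

```python
# Sketch of the optimized coverage computation (flowchart of Figure 14).
# `model` stands for any scalar or vectorial propagation model; the crude
# placeholder below exists only so that the example runs.
import math

F_MHZ = 1800.0   # frequency assumed for the free space formula


def free_space_loss_db(d_m):
    """Free space path loss at F_MHZ for a distance d_m in meters."""
    d_km = max(d_m, 1.0) / 1000.0
    return 32.44 + 20.0 * math.log10(d_km) + 20.0 * math.log10(F_MHZ)


def placeholder_model(tx, p):
    """Stand-in for a real propagation model (returns a gain in dB)."""
    return -free_space_loss_db(math.dist(tx, p)) - 10.0   # arbitrary 10 dB excess loss


def predict_coverage(tx, elements, tx_power_dbm=30.0, model=placeholder_model):
    """elements: list of dicts {'los': bool, 'grid': [...], 'sample_points': [p1, p2]}.
    Returns {grid point: predicted received power in dBm}."""
    coverage = {}
    for elem in elements:
        if elem["los"]:
            # wide line-of-sight elements: free space applied point by point
            for p in elem["grid"]:
                coverage[p] = tx_power_dbm - free_space_loss_db(math.dist(tx, p))
        else:
            # two applications of the model, then extrapolation to the element
            estimates = [tx_power_dbm + model(tx, p) for p in elem["sample_points"]]
            mean_level = sum(estimates) / len(estimates)
            for p in elem["grid"]:
                coverage[p] = mean_level
    return coverage


if __name__ == "__main__":
    tx = (0.0, 0.0)
    elements = [
        {"los": True, "grid": [(10.0, 0.0), (50.0, 0.0)], "sample_points": []},
        {"los": False, "grid": [(60.0, 20.0), (65.0, 25.0), (70.0, 30.0)],
         "sample_points": [(60.0, 20.0), (70.0, 30.0)]},
    ]
    for point, level in predict_coverage(tx, elements).items():
        print(point, f"{level:.1f} dBm")
```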

6. Results

[62] This section sets out an evaluation of the performance of the optimization method, in terms of both accuracy and computation time. A compromise between coverage quality and significant gains in computation time is sought.

[63] The presented results relate to two configurations (microcells and small cells) and have been obtained with a scalar model called μG [Wiart et al., 1993]. For the two configurations, the three versions of our method (2DH, 2DV and 2.5-D) are evaluated. It should be noted that the method works for frequencies higher than 1 GHz, a constraint related to the way the diffraction phenomenon is taken into account: the transition zones around the optical boundaries must be very narrow if diffraction is to be addressed only in the shadow zones (cf. Figure 4). All the computation time results have been obtained with a 1.6 GHz processor.

6.1. Microcell Configuration

[64] This configuration corresponds to a transmitter height below the average height of roof tops in the Arc de Triomphe quarter of Paris. The frequency considered here is 1.8 GHz and Table 2 presents the findings resulting from this configuration.

Table 2. Performances for a Microcellular Configuration
Version | Interactions | ϕ 50%, dB | ϕ 90%, dB | Coverage Rate, % | Computation Time: Specific Treatments | Computation Time: Model | Computation Time: Total | Reduction of Number of Points, %
2DH | 0R1D | 1.15 | 3.52 | 5.13 | 9 s → 1 | t1 | 9 s + t1 | 98.7
2DH | 0R2D | 1.43 | 5.88 | 56.38 | 12 s → 1.33 | t2 | 12 s + t2 | 96.8
2DH | 0R3D | 1.65 | 5.93 | 72 | 20 s → 2.22 | t3 | 20 s + t3 | 92.3
2DH | 0R4D | 1.46 | 5.53 | 79.39 | 33 s → 3.66 | t4 | 33 s + t4 | 88.3
2DH | 1R4D | 0.96 | 2.19 | 81.71 | 2 min 4 s → 13.8 | t5 | 2 min 4 s + t5 | 83.3
2DH | 2R4D | 0.96 | 2.17 | 82.06 | 6 min 8 s → 40.9 | t6 | 6 min 8 s + t6 | 81.3
2DV | 0R4D | 0.92 | 3.3 | 60.75 | 37 s → 4.11 | t7 | 37 s + t7 | 88.5
2.5D | 0R4D | 1.08 | 4.55 | 84.24 | 54 s → 6 | t8 | 54 s + t8 | 86.4

[65] Initially, the quality of the obtained coverage is evaluated in relation to the combinations of interactions using three criteria. The first two are the values at 50% and 90% of the cumulative function ϕ(x) of the estimation errors, these errors being the differences between the powers calculated with the present method and those calculated with the classical approach, taken as the reference, at each point of the regular grid. Here, the meshing step conforms to that used by operators in urban areas, that is to say five meters. The third criterion is the coverage rate of the method, i.e., the percentage of points for which the method gives an estimate of the power. The richer the combination of interactions addressed in the electromagnetic analysis, the more the simulated waves are able to propagate into otherwise inaccessible areas.
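For completeness, the three criteria can be computed as sketched below from two coverage maps (the method and the classical reference) evaluated on the same regular grid, with NaN marking the grid points for which the method gives no estimate. Function names and the synthetic data are illustrative.

```python
# Sketch of the evaluation criteria of section 6: the 50% and 90% values of the
# cumulative error function phi, and the coverage rate of the method.
import numpy as np


def evaluate(method_dbm, reference_dbm):
    """Both arrays cover the same regular grid; NaN = point not covered."""
    covered = ~np.isnan(method_dbm)
    errors = np.abs(method_dbm[covered] - reference_dbm[covered])
    phi50, phi90 = np.percentile(errors, [50, 90])
    return phi50, phi90, 100.0 * covered.mean()


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    reference = rng.normal(-80.0, 6.0, 1000)          # classical prediction
    method = reference + rng.normal(0.0, 1.5, 1000)   # optimized prediction
    method[rng.random(1000) < 0.18] = np.nan          # 18% of points not covered
    phi50, phi90, rate = evaluate(method, reference)
    print(f"phi50 = {phi50:.2f} dB, phi90 = {phi90:.2f} dB, coverage = {rate:.1f} %")
```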

[66] Secondly, the rapidity of the method is evaluated. For each combination of interactions, the computation times of the method are presented, distinguishing the time used by the specific treatments (electromagnetic analysis and extrapolation) from that of the propagation model. However, the present method is independent of the propagation model. Therefore the best evaluation criterion is the reduction in the number of application points of the model obtained by our technique in comparison with the number required by the classical method. For a constant reduction factor, the gain in computation time is all the greater as the model is complex. The "specific treatments" column contains two sorts of data: the computation time itself and the multiplicative coefficient in relation to the reference time, the reference here being that of the 0R1D combination (9 s in 2DH). As an example, the combination 0R4D takes 3.66 times longer, i.e., 33 s. To evaluate the gain in computation time afforded by the method, the reference time is that used by a classical technique. Hence, for the 30601 fictive receivers of the regular meshing under consideration, the computing time is 11 min for the μG scalar model, and between one hour and several days for a three-dimensional (3-D) vectorial model, according to the combination of interactions.
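As a back-of-the-envelope check of these figures, the gain for the 1R4D line of Table 2 with the scalar model can be estimated as follows, under the simplifying assumption that the model cost is proportional to its number of application points; the result, a factor of about 3, is of the order of the gains quoted in the abstract and in the conclusion.

```python
# Rough gain estimate for the 1R4D combination of Table 2 with the scalar
# model, assuming the model cost scales linearly with its application points.
classical_time_s = 11 * 60        # muG on the full 30601-point regular grid
specific_time_s = 2 * 60 + 4      # electromagnetic analysis + extrapolation (2 min 4 s)
point_reduction = 0.833           # 83.3% fewer application points of the model
model_time_s = (1.0 - point_reduction) * classical_time_s
gain = classical_time_s / (specific_time_s + model_time_s)
print(f"estimated gain for the scalar model: {gain:.1f}")   # about 2.8, i.e. close to 3
```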

[67] For the 2DH version, the analysis of Table 2 underscores the very distinct increase in the coverage as the number of diffractions increases. At the same time, the accuracy hardly changes, except for the simple case of one diffraction; the method is therefore accurate in all the cases considered. However, to be operational, it must be applied with four diffractions. When reflections are added, it is observed that they do not increase the coverage rate but improve the accuracy. In concrete terms, this indicates that once a wave arrives in a street by diffraction, a reflection improves the prediction within this street through the creation of small reflection zones.

[68] Globally, we can note that the more complex the simulation, the more the accuracy and the coverage rate increase, to the detriment of the computation time. For example, between the simplest simulation (0R1D) and the most complex one (2R4D), the coverage rate improves by about 70%, the error decreases by about 30%, but the computation time is multiplied by 40.

[69] Giving priority to accuracy, it appears that the optimal combination of interactions is one reflection associated with four diffractions. On the one hand, we obtain a very satisfactory level of accuracy, because 90% of the errors of estimation of the received power are less than 2.2 dB. On the other hand, for a coverage rate close to 82%, the reduction in the number of application points of the model is close to 84%.

[70] Seeking instead to optimize the compromise between accuracy and computation time, the combination 0R4D is retained. This makes the treatment about four times more rapid, the negative aspect being an increase of about 3 dB in the estimation error at 90%.

[71] In both cases it is noted that the coverage rate falls a long way short of 100%. This is explained by the presence of several interior courtyards (Figure 17) which are not reached in the horizontal version, and which would demand a very large number of diffractions to be reached in the vertical version.

Figure 17.

Top view of the scene (buildings are gray and the ground is black).

[72] In terms of rapidity, the optimization in computation time is connected mainly to the reduction in the number of application points of the model used. Thus the gain in time is close to a factor of 4 for the scalar model and 80 for a vectorial model, in comparison with classical techniques.

[73] On the other hand, it is important to note that this last combination of interactions (0R4D) gives a very poor coverage rate for a vertical electromagnetic analysis. This result indicates that in a microcellular configuration, the wave propagation occurs essentially in the horizontal plane, in accordance with the literature. The 2.5-D electromagnetic analysis is thus no more useful than the horizontal version, unless interference phenomena between cells are to be studied. In that case, it is sufficient to make two coverage calculations for two different transmitter locations and to superpose the two results; the influence of vertical paths from neighboring cells is then taken into account.

6.2. Small Cell Configuration

[74] In contrast to the preceding configuration, the transmitter is situated on a roof top; the frequency remains close to 1.8 GHz. By a reasoning analogous to that used above, we may suppose that a vertical electromagnetic analysis is the more appropriate, since in such a configuration the significant waves propagate in vertical planes [Walfisch, 1988; Gonçaves, 2000]; this is confirmed by Table 3. It should be noted that, for a grid of 84000 fictive receivers and using the above μG model, the reference time for a classical technique is 15 min.

Table 3. Performances for a Small Cellular Configuration
Version | Interactions | ϕ 50%, dB | ϕ 90%, dB | Coverage Rate, % | Computation Time: Specific Treatments | Computation Time: Model | Computation Time: Total | Reduction of Number of Points, %
2DV | 0R1D | 1.24 | 4.81 | 29.23 | 1 min 18 s → 1 | t1 | 1 min 18 s + t1 | 97.04
2DV | 0R2D | 1.15 | 5.01 | 54.5 | 1 min 49 s → 1.4 | t2 | 1 min 49 s + t2 | 92.61
2DV | 0R3D | 1.1 | 4.91 | 73.41 | 2 min 31 s → 2.9 | t3 | 2 min 31 s + t3 | 89.32
2DV | 0R4D | 1.08 | 4.8 | 84.11 | 3 min 49 s → 2.9 | t4 | 3 min 49 s + t4 | 87.1
2DV | 1R1D | 1.17 | 4.68 | 36.12 | 1 min 50 s → 1.4 | t5 | 1 min 50 s + t5 | 95.35
2DV | 1R2D | 1.12 | 4.83 | 58.92 | 2 min 42 s → 2.1 | t6 | 2 min 42 s + t6 | 92
2DV | 1R3D | 1.12 | 4.84 | 76.2 | 4 min 18 s → 3.6 | t7 | 4 min 18 s + t7 | 88.97
2DV | 1R4D | 1.12 | 4.82 | 88.3 | 7 min 55 s → 6.1 | t8 | 7 min 55 s + t8 | 90.3
2DH | 0R4D | 2.35 | 9.17 | 32.48 | 1 min 5 s → 0.83 | t9 | 1 min 5 s + t9 | 95.76
2.5D | 0R4D | 1.15 | 5 | 85.2 | 5 min 12 s → 4 | t10 | 5 min 12 s + t10 | 88.32

[75] Only a study in vertical planes provides a sufficient coverage rate for an acceptable number of interactions. Concerning the influence of the interaction combinations on the estimation accuracy, the conclusions drawn in the previous section apply: (1) the greater the number of diffractions, the greater the surface coverage, and (2) the greater the number of reflections, the lower the estimation error.

[76] Overall, as for the previous configuration, the best simulation is obtained by considering four diffractions. This leads to a coverage of 85% and a reduction in the number of application points of 87%. The level of accuracy is high, since 90% of the errors are less than 4.8 dB. Nevertheless, the time required by the method is greater than in the microcellular configuration. This is explained by the fact that the studied zone is more densely urbanized and thus contains a greater number of building faces and wedges. Figure 18 illustrates the coverage zone computed with these parameters of the electromagnetic analysis.

Figure 18.

Visualization of the coverage zone.

7. Conclusion

[77] The work presented sets out to reduce the computation time necessary to predict a transmitter's coverage zone. The method, which is independent of the propagation model, is based on the reduction in the number of application points of the chosen model, when compared with a classical technique based on a regular grid. The developed method is based on the following hypothesis: the variations of measured signals are directly related to the physical phenomena influencing the wave during its propagation. Thus, for each significant variation of the signal, we observe a changing combination of interactions.

[78] The proposed technique is based on electromagnetic analysis software developed in the laboratory, which allows a geographical zone to be partitioned into elements characterized by the same interaction combination. There are three versions, the choice depending on whether the wave propagation is analyzed in two dimensions (horizontal or vertical) or in 2.5 dimensions.

[79] First, the proposed hypothesis was verified statistically, demonstrating the strong correlation between, on the one hand, the spatial partition achieved by the electromagnetic analysis and, on the other, the segmentation generated by the multiscale treatment of measured signals in different configurations.

[80] Secondly, the effective computation of a coverage zone was described. The method was based on minimizing the number of points considered. The objective was to estimate the mean power received by each element of the studied partition. The result which emerged indicated that the minimum number of points, per element, was two.

[81] The principle of the optimized prediction of coverage obviates the need to measure signals: it consists in applying the partitioning software, then launching the propagation model on two points per element of the obtained partition. This partition is determined with optimal input parameters (number of reflections and diffractions) which vary according to the type of configuration studied. The mean power calculated from the two estimates furnished by the propagation model is then applied to all the points of the regular meshing of the considered element. It should be noted that the elements in line of sight of the transmitter are treated differently: a free space propagation model is directly applied to the set of regular meshing points of these regions in order to minimize the errors that a uniform extrapolation would induce in these wide regions, where the received levels are high.

[82] The last part of the study presents the method's performance in terms of accuracy and gain in computation time for the two configurations: small cells and microcells. This evaluation was carried out by comparing the approach with the classical method based on a regular grid.

[83] For a microcellular configuration, it has been shown that an electromagnetic analysis in the horizontal plane is imperative. The achieved gain in computation time reaches a factor of three for a scalar model, while assuring a high level of accuracy, 90% of errors of estimation of the received level being less than 2.2 dB. This satisfactory result is explained by the very large (80%) reduction in the model's application points. This high value indicates that, by employing a complex model (e.g., a 3-D vectorial model), the gain in computation time is considerable. In fact it reaches a factor of 80. In the small cell context, where the vertical electromagnetic analysis is the one performing best, the accuracy is similar to that found in the previous case, the percentage reduction of the number of application points of the model here being close to 90%.

[84] In conclusion, this method allows, using any propagation model, transmitter coverage zones in varied configurations to be estimated. The principle of the method is to present the user with the choice of giving priority to computation time, accuracy, or a compromise between these two aspects.

Acknowledgments

[85] This work has benefited from the technical and financial support of France Telecom R&D under contract 01 1B323.
