• Open Access

Feature-based probabilistic texture blending with feature variations for terrains


John Ferraris, Feng Tian and Christos Gatzidis, Bournemouth University, Poole, UK.

E-mail: jferraris@bournemouth.ac.uk


The use of linear interpolation to blend terrain types with distinct features produces translucency artefacts that detract from the realism of the scene. The approach presented in this paper addresses the feature agnosticism of linear blending by distinguishing between features (bricks, cobble stones, etc.) and non-features (cement, mortar, etc.). Using the blend weights from Bloom's texture splatting, intermittent texture transitions are generated on the fly without the need for artistic intervention. Furthermore, feature shapes are modified dynamically to give the illusion of wear and tear, further reducing repetition and adding authenticity to the scene. The memory footprint is constant regardless of texture complexity, requiring nearly one-eighth of the texture memory of tile-based texture mapping. The scalability and diversity of our approach can be tailored to a wide range of hardware, and it can utilize textures of any size and shape, unconstrained by the grid layout and memory limitations of tile-based texture mapping. Copyright © 2012 John Wiley & Sons, Ltd.


1 Introduction

The rendering of large outdoor environments has been the subject of much research over the decades. The ability to generate and texture terrains rapidly, without excessive artistic input, is important when modelling landscapes for character animation. Contemporary rendering approaches have allowed for expansive terrains of greater detail than ever before. This in turn has placed a burden on artists to generate large quantities of terrain assets for use in applications such as computer games.

Procedural generation of such assets has been an area of interest because of its seemingly endless supply of varied content with little need for artistic input [1-3]. In this paper, we propose a novel approach to terrain texturing, which we call feature-based probabilistic texture blending (FBPTB). The approach addresses the feature agnosticism of linear blending and makes the distinction between features and non-features. In fact, FBPTB can generate a near-endless number of transitional variations in real time without any artistic intervention.

In the rest of the paper, we first review related work on terrain texturing in Section 2. An overview of FBPTB is given in Section 3, and its details are described in Section 4. Results are showcased in Section 5, followed by a comparison with existing techniques in Section 6. Finally, we conclude our paper and propose directions for future work in Section 7.


2 Related Work

Bloom [4] popularized an approach for texturing terrain meshes by linearly blending a small set of terrain textures according to blend weights. The linear blend, however, produces artefacts where distinct features appear with varying degrees of translucency whenever the blend weight is less than 100%. Hardy and Mc Roberts [5] directly address these shortcomings with blend maps; although the technique is efficient, the translucency artefacts of distinct features are only postponed and are still exhibited at transitions with low blend weights. The work of Zhang et al. [6] introduces the illusion of intermittent trail-off of features by generating a feature mask for the terrain, although close-up viewing exhibits the same artefacts as linear blending. Grundland et al. [7] propose a blending algorithm that can preserve colour, contrast or salience, with the latter reducing translucency in a similar manner to Hardy and Mc Roberts, although the artefacts are not removed entirely. FBPTB removes the translucency artefacts entirely by using a novel blending equation to ensure that distinct features are drawn with either full opacity or full translucency.

The use of Bloom blend weights contrasts with recent approaches that stream a single gigantic texture that spans the world in real time [8, 9]. Although such approaches provide the ultimate artistic control over the textured environment, they still require the texture content to be generated beforehand. For procedural or dynamically generated environments where artistic input is limited, FBPTB can produce convincing terrain transitions on the fly without the need for artistic intervention. We further break up the repetition of tiled textures by synthesizing the effects of wear and tear in real time, intermittently giving features unique shapes and details.

Lai et al. [10] present an offline approach to generating intermediate terrain textures that bridge the transitions between different terrain types. Although the technique produces aesthetically pleasing combinations from minimal user input, the amount of memory needed increases with texture resolution and the number of terrain types. The memory requirement of FBPTB is considerably lower, as the only assets required are the texture itself and an accompanying meta-texture that describes how the texture is composed.

Lefebvre and Neyret [11] propose a method for creating high-resolution, varied procedural textures by specifying a virtual texture that contains references to specific texture patterns that are used to generate the final texture on the fly. Neyret and Cani [12] built on this work by producing a wide variety of terrain transitions from a handful of specific triangular textures. However, the technique is limited to abrupt transitions, restricting the flexibility of the approach. FBPTB breaks up the abruptness of hard transitions by introducing stochasticity above and below the transition.

Using Wang tiles for tile-based texturing [13] has the potential to generate a near-infinite number of texture transitions, with Wei [14] evolving the idea further to take advantage of graphics processing units (GPUs). This approach, however, uses lookup tables that grow drastically with texture complexity and terrain size, to the point of being impractical for large terrain assets with numerous feature-rich textures. FBPTB does not rely on lookup tables and consumes only one-eighth of the memory of GPU-based Wang tiles whilst maintaining the performance of lookup table approaches.


3 Overview of FBPTB

We define terrain features as the areas of a texture that protrude through underlying terrain types with full opacity rather than being linearly blended according to Bloom texture weightings. The features include (but are not limited to) bricks, cobble stones and other salient details, whereas non-features are the areas of a texture that are not part of a feature, such as (but not limited to) mortar and cement. Non-features lend themselves to linear blending because they neither protrude from the surface nor contain distinct visual details. Figure 1 shows a sample texture (a) along with the isolated features (b).

Figure 1.

An example terrain texture (a) along with isolated features (b).

FBPTB builds on the general concept introduced by Ferraris et al. [15], which proposed a means of identifying distinct texture detail to be drawn with full opacity or translucency and doing so on a probabilistic basis. Figure 2 illustrates an overview of the FBPTB process. More details will be given in the next section. The key to our approach is to ensure that all texels of a given feature receive the same blend weight and thus are drawn (full opacity) or discarded (full translucency) together.

Figure 2.

Outline of FBPTB.

To deduce whether a feature is drawn, the blend weights used in Bloom texture splatting serve as the probability of a given feature appearing. For each feature, a random number is generated. If that number is less than or equal to the probability of the feature appearing, the feature is drawn with full opacity; otherwise, the feature is discarded, exposing the underlying texture detail. If the texture sample is that of a non-feature, we perform a standard linear blend using the Bloom blend weights.
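As a minimal illustration (in Python rather than shader code, with illustrative names), the per-feature probability test can be sketched as:

```python
import random

def feature_drawn(blend_weight, seed):
    """Decide whether a whole feature is drawn. Seeding the generator with
    the feature's shared seed guarantees that every child texel of that
    feature reaches the same draw/discard decision."""
    rng = random.Random(seed)
    r = rng.random()               # random value r in [0, 1)
    return r <= blend_weight       # draw with full opacity if r <= p
```

Because the decision depends only on the feature's shared seed and blend weight, every texel of the feature agrees on the outcome, which is what prevents partially translucent features.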


4 Details of FBPTB

4.1 Meta-Texture

Each texture to be probabilistically blended has an accompanying meta-texture that describes the parent/child relationship of every texel in the texture and the feature (if any) it belongs to. Each texel that lies within a feature is considered a child texel of that feature. To ensure that all child texel samples of a given feature receive the same weight and seed, a single texel per feature is nominated as the parent texel, whose weight and seed are shared among all other child texels of that feature. Any child texel may be nominated as the parent; we used the centroid texel, as it holds the average weight of the feature. The exceptions are features that are split along the texture boundary (such as the features along the perimeter of Figure 1). For these features, a parent texel is assigned to each split feature part and located on the boundary common to all parts (opposing boundaries are considered to be the same boundary).

Figure 3 illustrates the meta-texture generation process. The feature list contains the colour-coded list of features where each feature receives a unique colour; thus, all child texels of a given feature are of the same colour. Split features are considered separate features and thus receive their own colour. Non-feature texels are coloured black. The centroid list is a black-and-white image where the centroid texel for each feature is coloured white whereas all other texels are coloured black.
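A sketch of how per-feature centroids could be gathered from such a colour-coded feature list (illustrative Python, not the tool the authors used; integer feature ids stand in for the unique colours, with 0 for black non-feature texels):

```python
def feature_centroids(feature_list):
    """feature_list: 2D grid (rows of texels) of feature ids, 0 = non-feature.
    Returns {feature_id: (u, v)} with each centroid rounded to a texel."""
    sums = {}                                  # id -> (count, sum_u, sum_v)
    for v, row in enumerate(feature_list):
        for u, fid in enumerate(row):
            if fid == 0:
                continue                       # skip non-feature texels
            n, su, sv = sums.get(fid, (0, 0, 0))
            sums[fid] = (n + 1, su + u, sv + v)
    return {fid: (round(su / n), round(sv / n))
            for fid, (n, su, sv) in sums.items()}
```

The centroid texel found this way is then marked white in the centroid list, with all other texels black.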

Figure 3.

Overview of the meta-texture generation (the major U and V coordinates image has been exaggerated for clarity).

The centroid position coordinates are stored in the meta-texture in base 256: the red and green channels hold the 256^0 digits of the U and V coordinates, respectively (called the minor coordinates), whereas the blue channel stores two nibbles packed into a byte (U being the high nibble and V being the low nibble), with each nibble holding the 256^1 digit (called the major coordinates). This allows the centroid positions to be stored in three colour channels rather than four, freeing up the alpha channel for the feature mask.
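The channel layout described above can be modelled as follows (an illustrative Python sketch of the packing scheme; in practice these values would be written into the texture's byte channels):

```python
def pack_centroid(u, v):
    """Pack texel coordinates into three byte channels: red/green hold the
    minor (256^0) digits of U and V, and the blue channel packs the major
    (256^1) digits as two nibbles (U high, V low). Coordinates up to
    4095 (= 16 * 256 - 1) fit this scheme."""
    r, g = u % 256, v % 256                # minor coordinates
    b = ((u // 256) << 4) | (v // 256)     # major coordinates as nibbles
    return r, g, b

def unpack_centroid(r, g, b):
    """Invert pack_centroid, recovering the texel coordinates."""
    u = ((b >> 4) * 256) + r
    v = ((b & 0x0F) * 256) + g
    return u, v
```

Round-tripping a coordinate pair through both functions returns it unchanged, which is the property the shader-side decode relies on.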

To construct the meta-texture, the centroid list is parsed to gather the centroid positions of all features in the colour-coded feature list. The feature list is then parsed to populate the meta-texture. For black texels, the position of the texel being read is encoded in the corresponding texel of the meta-texture. For non-black feature texels, the centroid position for the associated feature is instead encoded.

4.2 Centroid Position

The meta-texture is sampled at the fragment's texture coordinates to obtain the centroid position of the sampled texel, giving the minor coordinate vector and the major coordinate vector, the latter unpacked from the blue channel's nibbles, both in the range [0,255]. The centroid vector is obtained by combining the major and minor coordinate vectors (scaling the major coordinates by 256) and expressing the result as a decimal fraction of the meta-texture dimensions, in the range [0,1].

4.3 Weight/Seed Texture Lookup

The blend weights and seeds are stored in a texture of the same dimensions as the terrain mesh. The coordinates used to perform the weight/seed texture lookup are calculated by truncating the fragment's texture coordinates to their integer components and adding them to the centroid position. The result is then reduced to the range [0,1] in proportion to the weight/seed texture dimensions. Sampling the weight/seed texture with the transformed coordinates yields the weight value p (which also serves as the probability) and the seed value s, from the red and blue channels, respectively.
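The coordinate transform can be sketched as follows (illustrative Python with assumed parameter names; the integer part of the tiled texture coordinate selects the tile, the centroid fraction the point within it):

```python
import math

def weight_seed_coords(tex_u, tex_v, cen_u, cen_v, ws_w, ws_h):
    """Map a (possibly tiled) texture coordinate plus the centroid fraction
    (cen_u, cen_v) to normalized weight/seed texture coordinates in [0, 1],
    given the weight/seed texture dimensions ws_w x ws_h."""
    u = (math.floor(tex_u) + cen_u) / ws_w   # truncate, add centroid, normalize
    v = (math.floor(tex_v) + cen_v) / ws_h
    return u, v
```

Because the centroid fraction is shared by all child texels of a feature, every texel of that feature samples the same weight and seed.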

4.4 Weighting Coefficients

A set of weighting coefficients can be introduced to the weighting equation to further shape and control the blending of feature and non-feature texture samples when p < 1.0. These coefficients, stored as a three-component vector, are the feature map, feature texel and non-feature texel coefficients, respectively. Coefficients greater than 1.0 sustain a given parameter, whereas coefficients in the range [0,1) dampen it (a value of 1.0 leaves the parameter untouched).

The feature map is a blurred version of the meta-texture's feature mask, stored in the alpha channel of the texture itself, and is used within the blending equation to taper the perimeter of features from full opacity to slight translucency, so that the edges of features are smoothed rather than sharp. The sampled feature map value, scaled by the feature map coefficient, smooths the edges of features when viewed up close.

The feature and non-feature coefficients are used to sustain or dampen features and non-features, respectively. These optional coefficients are stored in the weight/seed texture at each vertex to offer finer control over how sparsely or densely features and non-features appear on the terrain mesh.

4.5 Weighting Equation

The weighting equation below yields the blend weight w and uses the variables f, d, r and p, all within the range [0,1]: f is the optional feature map value sampled from the texture's alpha channel; d is the feature mask sampled from the meta-texture's alpha channel; and r is the random value obtained by sampling the noise texture using the seed value s as input. Areas of the terrain with a 0% blend weight should always fail the probability test; thus, the noise texture should contain values in the range (0,1].

\[
w \;=\; \operatorname{sgn}\!\bigl(p \, d \, H(p - r)\bigr)\, f \;+\; (1 - d)\, p \tag{1}
\]

The first part of the weighting equation accommodates feature texels and returns a value of either 0.0 or 1.0. If the probability or feature mask value is zero or if the random value is greater than the probability, the signum function sgn returns 0.0, causing the first part of the equation to null, allowing the second part to generate a non-feature weight.

The second part of the equation accommodates non-feature texels. If the feature mask value d is 1.0 (a feature texel), this side of the equation will null, allowing the left-hand side to generate a feature weight. For non-feature texels, the Bloom weight p is used to perform a linear blend in the range of [0,1] with the underlying texture.
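The behaviour of the two parts can be modelled as follows (a Python reconstruction of the behaviour described in this subsection, not the published shader implementation; the optional coefficients of Section 4.4 are omitted):

```python
def sgn(x):
    """Signum function: 0.0 for zero, +/-1.0 otherwise."""
    return 0.0 if x == 0.0 else (1.0 if x > 0.0 else -1.0)

def blend_weight(p, d, r, f=1.0):
    """Weight for one texture sample. p: Bloom blend weight (probability),
    d: feature mask, r: random value, f: optional feature-map taper.
    The feature part nulls when p or d is zero or r exceeds p; the
    non-feature part nulls when d is 1.0 and otherwise yields the
    linear Bloom weight."""
    passed = 1.0 if r <= p else 0.0            # probability test
    feature_part = sgn(p * d * passed) * f     # full opacity, tapered by f
    nonfeature_part = (1.0 - d) * p            # standard linear blend
    return feature_part + nonfeature_part
```

For a feature texel the result is either the tapered full-opacity weight or zero; for a non-feature texel it degenerates to the ordinary linear blend weight p.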

4.6 Correcting Minification and Anisotropic Distortion

As features are drawn or discarded dynamically, minification and anisotropic distortion artefacts can be exhibited when sections of the terrain are viewed at a distance or at oblique angles. The intensity of the aliasing depends on the type of texture and mesh; thus, the implementation of this step should be assessed on a case-by-case basis. For example, terrain meshes featuring large, flat planes will need to address the oblique artefacts, whereas textures with high frequencies of features and large draw distances will need to address the distance artefacts.

To solve these problems, we interpolate between the results of FBPTB and a standard linear blend. For oblique anti-aliasing, the y component of the view vector is multiplied by a coefficient, whereupon it is used as the interpolation value between the FBPTB and linear blend. We found that a value of 0.8 for the coefficient yielded satisfactory results. For minification anti-aliasing, the interpolation is performed using the interpolation value derived from the distance between the fragment position and view space origin.
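The interpolation step can be sketched as follows (illustrative Python; the fade direction and the clamping are assumptions consistent with the description, with view_y taken from a normalised view vector):

```python
def lerp(a, b, t):
    """Linear interpolation between a and b by t in [0, 1]."""
    return a + (b - a) * t

def oblique_corrected(fbptb_w, linear_w, view_y, k=0.8):
    """Fade from the FBPTB result toward a plain linear blend as the view
    becomes oblique. view_y is the y component of the view vector; k = 0.8
    is the coefficient reported to yield satisfactory results."""
    t = max(0.0, min(1.0, view_y * k))   # interpolation value
    return lerp(linear_w, fbptb_w, t)    # grazing angles favour linear blend
```

The minification case is identical, with the interpolation value derived from the fragment's distance to the view-space origin instead of view_y.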

4.7 Blending Equation

The previous work of Ferraris and Gatzidis [16] and Ferraris et al. [17] blended multiple textures in order of precedence, such that lower precedence textures were masked by higher precedence textures with a higher blend weight. With FBPTB, feature texels of lower precedence textures always take priority over higher precedence non-feature texels. This ensures that features will always be visible, even for lower precedence textures. Features of higher precedence textures take priority over features of lower precedence textures.

Equation (2) keeps track of whether any features have been drawn at precedence levels below the one currently being blended. A flag F is initialized to 0.0 and set to 1.0 whenever a feature texel is blended. As blending Equation (3) applies from precedence level 2 upwards (when mixing at level 1, level n − 1 does not exist), the feature flag is initially set to d1, the feature mask value of the lowest precedence texture. Subsequent precedence levels use the following equation to keep track of whether any features reside in preceding levels, where dn is the feature mask value of the texture at precedence level n.

\[
F_n \;=\; \max\bigl(F_{n-1},\, d_n\bigr), \qquad F_1 = d_1 \tag{2}
\]

Once the feature flag has been calculated, blending Equation (3) is executed for each precedence level to dictate how much of that level's texture is blended with the previous. The final blend weight b for the texture at precedence level n is calculated as follows, where wn is the weight for the texture at precedence level n (as calculated by the weighting equation) and H is the Heaviside step function.

\[
b_n \;=\; w_n \, H\bigl(d_n - F_{n-1}\bigr) \tag{3}
\]
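The per-level bookkeeping can be modelled as follows (an illustrative Python reconstruction from the prose around Equations (2) and (3), not the published shader code; level 0 is the lowest precedence):

```python
def blend_precedence(weights, masks):
    """Combine per-level weights across precedence levels. A flag tracks
    whether a feature texel resides at a lower level; a non-feature weight
    at a higher level is suppressed when a lower-level feature is present,
    so features always show through non-features."""
    def heaviside(x):
        return 1.0 if x >= 0.0 else 0.0
    flag = masks[0]                    # feature flag initialised from d1
    blends = [weights[0]]
    for w, d in zip(weights[1:], masks[1:]):
        blends.append(w * heaviside(d - flag))   # suppress non-feature over feature
        flag = max(flag, d)                      # remember features seen so far
    return blends
```

A higher-level feature (d = 1) is never suppressed, matching the rule that features of higher precedence textures take priority.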

4.8 Feature Variations

Feature variations dynamically introduce unique wear and tear to texture features when the probability of appearing lies below 100%. Executed prior to the weighting equation, they are achieved by using the seed and an additional random number at each vertex to sample the greyscale variation map (detailing various cracks, divots and holes) at a random point. This random sample is then used to modulate a given texture, darkening its colour. Furthermore, by modulating the texture's normal map with the variation normal map (generated from the variation map) and nullifying d where the variation map lies under a certain threshold, holes can be created and chunks removed, exposing any underlying texture information. Figure 4d illustrates such feature variations.

Figure 4.

Blending comparison of (a) linear blending, (b) blend maps, (c) tile-based texturing and (d) FBPTB.


5 Results

In this section, we will compare FBPTB with linear blending (used by Bloom [4]), blend maps [5] and tile-based texture mapping [14, 18] (where applicable).

Figure 4 shows an example of a tile texture blending from right (100%) to left (0%). With both FBPTB and tile-based blending, the tiles trail off intermittently rather than uniformly, but FBPTB additionally introduces deterioration in the form of cracks, chipped tiles and scratches, breaking up the uniformity of the transition. The blend map delivers a more convincing result than linear blending, with the dirt appearing in the gaps between the tiles, although compared with FBPTB the uniform nature of the blend would result in obvious texture repetition when blended over a large area of the terrain mesh. Note that the tile-based blends are restricted to such grid-like textures, whereas FBPTB can be used with any feature layout and also produces random variations that increase in frequency and prominence as the blend weight reduces.

The lookup table for the tile-based texturing was generated manually using the results from the FBPTB blend in order to determine which features were drawn and discarded. We found that the limitations of the grid layout and labour intensity of manually populating the lookup tables significantly impacted the workflow and flexibility of results. FBPTB suffers from none of these limitations as the intermittent pattern of features is generated automatically and our approach can be used with any textures with salient detail.

As tile-based texture mapping obtains the lookup table address implicitly from the fragment texture coordinates, the technique is restricted to features that are laid out uniformly on a grid, as shown in Figure 4c. In the following examples, we therefore do not compare it with our approach.

Figure 5 illustrates a particular terrain transition that cannot be reproduced properly using existing blending approaches. Both the tile and mosaic textures blend from 100% (right) to 0% (left). Whereas FBPTB can achieve this complex blend in a convincing manner with no artefacts, the blend map and linear blend simply cannot represent such a transition, as the two textures cannot be distinguished from one another, giving FBPTB a significant advantage.

Figure 5.

Blending comparison of (a) linear blending, (b) blend maps and (c) FBPTB.

Figure 6 shows close-up shots of a top-to-bottom blend with parallax mapping enabled, using a cobble stone texture blended with a grass underlay. We chose this particular texture because it represents a hypothetical ‘worst case’: features of unique shapes and sizes laid out in an irregular manner. For linear blending, the mixture of texture and underlay at the mid-point of the transition produces an artificial result with heavy translucency artefacts, as the cobble and grass cannot be distinguished from each other; the illusion of relief that the parallax effect should produce is undermined by these artefacts. Blend maps suffer from the translucency artefacts to a lesser degree, as the two textures can be distinguished, but, as with linear blending, the parallax effect is still lost where the artefacts are at their most prominent, as the blend weight approaches 0%.

Figure 6.

Close-up parallax shots of (a and b) linear blending, (c and d) blend maps and (e and f) FBPTB.

For FBPTB, none of these translucency artefacts are displayed, resulting in a far more convincing blend. Even at the mid-point of the transition, the stochastic nature of the probability blend breaks up the banding artefacts exhibited by linear blending and blend maps as they blend from top to bottom. Furthermore, the features with chunks removed from their corners and sides illustrate how convincing the feature variation process is when combined with parallax mapping, especially considering that all of the variations were generated dynamically with no artistic input.

Figure 7 depicts a terrain with three patches of cobble blended with a 50/50 mix of grass and cobble using linear blending, blend maps and FBPTB. For linear blending, the 50% mix of grass gives the cobbles an artificially dull appearance, and the translucency artefacts become more pronounced as the blend weight drops off from the centre towards the perimeter of the patch. The blend maps do not suffer from the dull appearance, although the shape of the circular brush used to paint the patch is revealed at the perimeter. This could be fixed by having the artist manually touch up the periphery of the patch to introduce irregularity, but in practice this will be limited by time and mesh resolution. FBPTB breaks up the uniform shape of the brush automatically and can be further fine-tuned using the weighting coefficients (either globally or per vertex).

Figure 7.

A terrain shot with examples of a 50/50 mix of cobble and grass using (left) linear blending, (middle) blend maps and (right) FBPTB.


6 Performance Comparison

We constructed a ‘worst case’ scenario by blending two textures, along with a base texture, across an entire mesh rendered with no optimizations, using our approach and the existing alternatives. For tile-based texture mapping, we extended the approach described by Wei [14] to draw or discard feature texels together in the same manner as FBPTB. The weighting algorithm was a simplified version of Equation (1), as illustrated below, where m is the binary result of sampling the lookup table:

\[
w \;=\; \operatorname{sgn}\!\bigl(m \, d\bigr)\, f \;+\; (1 - d)\, p \tag{4}
\]

The terrain mesh consisted of 512 × 512 quads (513 × 513 vertices) with a texture scale of 1.0. The texture sets used were the lowest common denominator that the tested approaches could support (as discussed in Section 5). The two textures blended algorithmically measured 8 × 8 and 10 × 10 features, respectively, whereas the base texture was mixed in with a standard linear blend. Three configurations were tested: a straight blend (no lighting or parallax effects), a normal mapped blend and a parallax blend. The viewport was filled entirely with blended fragments, and the hardware used was a Radeon Mobility 4600 Series GPU in a dual-core 1.5 GHz CPU laptop with 3 GB of RAM.

Table 1 details the results of the performance tests. The relative performance of the approaches is in line with their relative complexity. Once normal and parallax mapping were enabled, the extra overhead these techniques introduced reduced the relative difference in performance between the approaches. The four key benefits FBPTB offers over tile-based texture mapping are as follows: (i) considerably lower video memory overhead (tile-based texturing uses nearly eight times more texture memory); (ii) complete automation (compared with manually populating lookup tables with feature data); (iii) scalability and diversity; and (iv) most importantly, the ability to utilize textures of any size and shape, unconstrained by the grid layout and memory limitations of tile-based texture mapping.

Table 1. Performance comparison of different blending approaches.

Algorithm               | Straight blend†          | Normal mapping           | Normal + parallax mapping
                        | FPS    ms*    Memory‡    | FPS    ms*    Memory‡    | FPS    ms*    Memory‡
No texturing            | 385    0.156  n/a        | n/a    n/a    n/a        | n/a    n/a    n/a
Base texture only       | 313    0.192  1,024      | n/a    n/a    n/a        | n/a    n/a    n/a
Linear blending         | 264    0.227  3,586      | 215    0.279  5,634      | 203    0.296  5,634
Blend maps              | 263    0.228  3,586      | 214    0.280  5,634      | 199    0.302  5,634
Tile-based texturing    | 235    0.255  45,570     | 137    0.438  47,618     | 135    0.444  47,618
FBPTB with variations   | 219    0.274  7,172      | 127    0.472  9,220      | 125    0.480  9,220

* Frame time (in milliseconds).
† Straight blend between two terrain textures and a base texture with no other effects.
‡ Total texture memory usage (in kB).


7 Conclusion and Future Work

In this paper, we have proposed a novel approach that introduces intermittency and irregularity at transitions between terrain types that have distinct features. Our approach completely removes the translucency artefacts that exist in traditional Bloom texture mapping and can generate a near-endless number of transitional variations in real time without any artistic intervention. Compared with tile-based texture mapping, FBPTB uses considerably less memory to store texture data. Furthermore, our approach can handle any number of features at a constant overhead in terms of memory usage and algorithmic operations.

Currently, FBPTB only works with textures that contain salient details. Future work will involve expanding the technique to work with textures that do not contain distinct feature information, such as grass, mud and sand. Instead of using static feature masks, an elaboration of the feature variations aspect of our approach will be explored to generate unique shapes in real time in order to deliver splatters, clumps and pockets of non-feature textures at terrain transitions.


John Ferraris is a PhD researcher at the School of Design, Engineering & Computing at Bournemouth University (BU), UK. He received his BSc (Hons) in Computing at Bournemouth University in 2009. His research interests include real-time 3D graphics, terrains, lighting and texturing.

Feng Tian is an Associate Professor in the School of Design, Engineering and Computing (DEC) at Bournemouth University, UK. His research focuses on Computer Graphics, Computer Animation, NPR, etc. He has published over 50 papers in peer-reviewed international journals and conferences. Prior to joining Bournemouth University, he was an Assistant Professor in the School of Computer Engineering, Nanyang Technological University (NTU), Singapore.

Christos Gatzidis is a Senior Lecturer in Creative Technology at Bournemouth University, UK. Additionally, he is a Visiting Research Fellow at the School of Informatics, Department of Information Science, City University London, where he completed his PhD, titled ‘Evaluating non-photorealistic rendering for 3D urban models in the context of mobile navigation’. Furthermore, he has a Master of Arts in Computer Animation from Teesside University and a BSc in Computer Studies (Visualisation) from the University of Derby. He has contributed to several refereed conference, book and journal publications and is also a member of the advisory boards of three journals plus various international conference program committees.