Unmanned aircraft systems help to map aquatic vegetation

Authors


Abstract

Questions

Do high-resolution (sub-decimetre) aerial images taken with unmanned aircraft systems (UASs) allow a human interpreter to recognize aquatic plant species? Can UAS images be used to (1) produce vegetation maps at the species level; and (2) estimate species abundance?

Location

One river and two lake test sites in northern Sweden, middle boreal sub-zone.

Methods

At one lake and at the river site, we evaluated the accuracy with which aquatic plant species can be identified on printouts of UAS images (scale 1:800, resolution 5.6 cm). As assessment units we used homogeneous vegetation patches, referred to as vegetation stands, of one or more species. The accuracy assessment included calibration and validation based on field controls. At the river site, we produced a digital vegetation map based on a UAS orthoimage (geometrically corrected image mosaic) and the results of the species identification evaluation, applying visual image interpretation and manual mapping. At the other lake site, we assessed the abundance (four-grade scale) of the dominant Phragmites australis and produced a cover map.

Results

We identified the species composition of vegetation stands at the lake and the river site with an overall accuracy of 95.1% and 80.4%, respectively. It was feasible to produce a digital vegetation map, albeit with a slight reduction in detail compared to the species identification step. At the site for abundance assessment, P. australis covered 20% of the total lake surface area, and 70% of the covered area had cover ≤25%.

Conclusions

The tested UAS facilitates lake and river vegetation identification and mapping at the species level, as well as abundance estimates.

Nomenclature
Vascular plants: Karlsson (1997)

Mosses: Hallingbäck et al. (2006)

Abbreviations
GIS: geographic information system
GPS: global positioning system
PAMS: personal aerial mapping system
RPAS: remotely piloted aircraft system
UAS: unmanned aircraft system
UAV: unmanned aerial vehicle
VAT: value added tax

Introduction

Lake and river vegetation in and along aquatic systems, here referred to as aquatic and riparian vegetation, has important ecological and regulatory functions, including element cycling, trapping seeds and sediments, as well as serving as food, habitat and corridors (Pieczynska 1993; Tabacchi et al. 1998; Strayer 2010; O'Hare et al. 2011). Frequently, aquatic vegetation has been used as an indicator of environmental conditions in aquatic systems (Baattrup-Pedersen et al. 2001; Penning et al. 2008a,b). In Europe, aquatic vegetation serves as a quality element for the ecological status assessment of surface waters, according to the Water Framework Directive (EU 2000). For rivers, this assessment even includes the structure and condition of the riparian zone as part of the quality element ‘Morphological Conditions’.

To enhance our knowledge of complex vegetation-related processes in aquatic environments, it is crucial to assess plant occurrence and abundance at the species level. Commonly used field methods for sampling lake and river vegetation, such as transect methods (Baattrup-Pedersen et al. 2001; CEN 2007), are not only labour-intensive and restricted to small spatial scales, but may yield inconsistent results (Dudley et al. 2013) due to spatial variability (Spears et al. 2009), number of sampled transects (Leka & Kanninen 2003) and multiple observers (Staniszewski et al. 2006).

Given technical advances in platforms and sensors, from mono- to multi- and hyperspectral data collection (e.g. Edwards & Brown 1960; Howland 1980; Wang et al. 2007; Midwood & Chow-Fraser 2010), use of remote sensing has the potential to overcome many of the limitations of field methods. A variety of remote sensing approaches have been used for vegetation mapping in aquatic, riparian and wetland environments (reviewed in Silva et al. 2008; Adam et al. 2010). Most recent studies have used automated image classification, which is important for large-scale analyses. At the species level, automated classification of multispectral imagery has, for example, been successfully used in the identification of riparian tree/shrub species (Petersen et al. 2005; Dunford et al. 2009), single grass/herbaceous species (Gross et al. 1987; Andrew & Ustin 2009) and floating-leaved and emergent aquatic species (Marshall & Lee 1994; Valta-Hulkkonen et al. 2003a). The higher spectral resolution of hyperspectral imagery increases the potential for automated species discrimination (e.g. Hamada et al. 2007; Tian et al. 2010; Yang & Everitt 2010), but processing is time-consuming, even when small areas are covered (Adam et al. 2010). In a direct comparison, aquatic vegetation mapping based on visual interpretation yielded more species-specific information compared to automated classification (Valta-Hulkkonen et al. 2003b). Insufficient spatial resolution has been pointed out as a major limitation for species-level identification and mapping of aquatic, riparian and wetland vegetation (Muller 1997; Goetz 2006; Adam et al. 2010; Ashraf et al. 2010).

At present, the maximum spatial resolution of commercially available satellite images is about 0.5 m (Richards 2013). Manned aircraft platforms allow for imagery with higher spatial resolution, but are expensive to operate and limited by, for example, the need for flight permission, good weather conditions and infrastructure for take-off and landing. Recent developments in remote sensing systems using unmanned aerial vehicles (UAVs) as platforms offer new opportunities for vegetation surveying. Unmanned aircraft systems (UASs), also referred to as remotely piloted aircraft systems (RPASs), are not only cost-efficient (Lomax et al. 2005) but yield remote sensing data with sub-decimetre spatial resolution and high spatial accuracy (Rango et al. 2009; Bryson et al. 2010).

In our study, we evaluated the use of a UAS for surveying non-submerged aquatic (i.e. emergent and floating-leaved) and riparian vegetation in three boreal freshwater systems. The low spectral resolution of the off-the-shelf digital camera was compensated for by its high spatial resolution. We applied visual image interpretation to identify and map vegetation at the species level and to assess species abundance. In a first step, we evaluated whether, and with what accuracy, the UAS-derived images allow for vegetation identification at the species level. In a second step, we tested whether the production of digital vegetation maps based on UAS orthoimages is feasible. We discuss the effectiveness and accuracy of our approach in relation to other remote sensing and image classification methods.

Methods

Study sites

To evaluate the potential of the UAS, we chose three test sites in the middle boreal sub-zone (Sjörs 1999) of northern Sweden (Fig. 1).

Figure 1.

Geographic location of the study sites.

Site I: Lake Bälingsträsket

Bälingsträsket (65°37′N, 21°50′E) is located at the land-uplift coast of the Gulf of Bothnia. This natural lake has a surface area of 429 ha, a maximum depth of 6.5 m, and is classified as humic with a Secchi depth of 1 m, pH 7, and a total phosphorus concentration of 28 μg·L−1 (Ecke 2006). The vegetation occurring in Bälingsträsket is well known from earlier work (Lohammar 1938; Ecke 2006).

Site II: River Rakkurijoki

Rakkurijoki receives effluent from the tailings impoundment and process water clarification pond system of the Kiruna mine. Major discharges are nitrogen and phosphorus (Waaranperä 2009). The studied 1.4-km river stretch had a mean width of 16 m and was located between two lakes (Rakkurilompolo and Rakkurijärvi: 67°46′N, ~20°E) about 7 km downstream from the outlet of the clarification pond. In August 2009, the water body had a mean pH of 7.3 and mean total nitrogen and total phosphorus concentrations of 4.1 and 0.03 mg·L−1 (n = 5), respectively. The river is mainly surrounded by wetlands.

Site III: Lake Bruträsket

Bruträsket (64°49′N, 20°20′E) is a natural lake with a surface area of 36 ha and a maximum depth of 6.3 m (pH 7, total nitrogen 5.3 mg·L−1, total phosphorus 0.07 mg·L−1; Vestermark & Persson 2007). The site is located about 7 km downstream from the Gillervattnet tailings impoundment of a sulphide ore concentration plant in the city of Boliden. Vegetation at Bruträsket is dominated by Phragmites australis.

Image acquisition

Two sub-areas of site I (8.5 and 9.6 ha, respectively) were surveyed on 8 Aug 2007; site II was surveyed on 15 Jul 2009, and site III on 25 Aug 2009. Image acquisition spanned 1.5–5.0 hr per site, resulting in variation of the solar zenith angle. Weather conditions ranged from sunny to overcast, with light to strong winds at ground level.

Aerial surveying was conducted using the Personal Aerial Mapping System (PAMS), a miniature UAS developed by SmartPlanes Sweden AB (Skellefteå, Sweden). The PAMS consists of: (1) the SmartOne aircraft; (2) a ground control station; and (3) the SmartPlanes AerialMapper software for on-site production of image mosaics for quality control. A commercial image post-processing service for the production of sub-decimetre resolution orthoimages with high spatial accuracy was provided by GerMAP GmbH (Welzheim, Germany).

The SmartOne aircraft is a hand-launched flying wing (wingspan 1.2 m, Photo S1) equipped with an autopilot with a LEA-4 GPS module from u-blox (Thalwil, Switzerland) and six infrared thermopiles for pitch and roll control (MLX90247 from Melexis, Ieper, BE; for a general description of infrared-based horizon detection see Taylor et al. 2003). It has a take-off weight of 1.1–1.5 kg including 200–600 g of payload. The cruise speed is 13 m·s−1 with a maximum airtime of 40–90 min. The aircraft complies with the strict safety standards required by civil aviation authorities and has been approved for routine operations in unsegregated airspace (above 120 m) in Sweden since 2007.

The ground station consists of a laptop computer loaded with flight planning and control software, a radio module for the telemetry downlink and command uplink, and a remote-control transmitter for manual flight modes and emergency manual override. The equipment is transported in a backpack.

The camera used in this study was a Canon Ixus 70® digital compact camera (Canon Inc., Tokyo, Japan) with a 7 megapixel charge-coupled device sensor (5.715 × 4.293 mm), an image size of 3072 × 2304 pixels (columns × rows), a focal length of 5.8 mm, and an F-number of 2.8. The lens optical distortion was measured using PhotoModeler software (v. 5.2.3; Eos Systems Inc., Vancouver, BC, Canada), with the zoom setting for the widest possible angle and the focus locked at infinity.

We used a flying height of 150 m, which resulted in a ground sampling distance (pixel size) of 5.6 cm. The along- and across-track image overlap was set to 70%. The study sites were surveyed in flight blocks, typically of a size that can be covered in 10–20 min of flying time. Once the flight plan had been uploaded to the autopilot, the system had all the information required to complete the survey and return. One block consisted of 200–300 images, and the block size varied between 15 and 35 ha.
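As a rough sketch of the flight geometry these settings imply (not the authors' software), the reported 5.6 cm ground sampling distance, the 3072 × 2304 image size and the 70% along- and across-track overlap can be combined to estimate image footprint, exposure spacing and images per block:

```python
# Flight-geometry arithmetic implied by the reported survey settings
# (illustrative sketch only; values taken from the text above).
GSD = 0.056            # ground sampling distance in metres (reported)
COLS, ROWS = 3072, 2304
OVERLAP = 0.70         # along- and across-track overlap

footprint_across = GSD * COLS                      # footprint width (m)
footprint_along = GSD * ROWS                       # footprint length (m)
spacing_along = footprint_along * (1 - OVERLAP)    # distance between exposures (m)
spacing_across = footprint_across * (1 - OVERLAP)  # distance between flight lines (m)

# Net new ground per image (ha); a 25 ha block then needs roughly this many
# images, before allowing for turns and edge margins.
net_area_ha = spacing_along * spacing_across / 10_000
images_for_25_ha = 25 / net_area_ha

print(f"footprint: {footprint_across:.0f} m x {footprint_along:.0f} m")
print(f"exposure spacing: {spacing_along:.1f} m, line spacing: {spacing_across:.1f} m")
print(f"~{images_for_25_ha:.0f} images for a 25 ha block (before margins)")
```

The estimate of roughly 125 images per 25 ha is consistent with the reported 200–300 images per 15–35 ha block once turns and edge margins are included.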

For sites I and II, selected aerial images were printed at a scale of 1:800. Images were selected to entirely cover the two sub-areas of site I and a 720-m central river stretch of site II.

The image data sets for sites II and III were processed by GerMAP GmbH using software from Inpho (Stuttgart, Germany); first into high-resolution digital surface models and then into orthoimages using dense stereo matching techniques including the following steps: automatic tie-point selection, image matching, aerial triangulation, block adjustment, surface modelling, mosaicking and georeferencing. Image distortions were corrected for based on the measured camera calibration parameters. Due to the high overlap between images (both along and across track) the resulting orthoimages consisted almost entirely of close-to-nadir portions of the individual images. Internal height accuracy of the surface models was 8–9 cm, planar accuracy of the orthoimages was 4–5 cm, the surface models had a point spacing of 30 cm, and the orthoimages were produced with a ground sampling distance of 5 cm. The orthoimages were georeferenced to the Swedish national grid using ground control points identified in orthophotographs from the Swedish Land Survey, with a spatial resolution of 0.5 m.

Species identification

Non-submerged aquatic and riparian species were identified from littoral habitats at sites I and II. Vegetation stands, visually defined as homogeneous patches that deviated from surrounding vegetation patches in colour, texture and/or shape (exemplified in Fig. 2), were delineated (n = 535 stands at site I, n = 78 at site II) as assessment units (sensu Stehman 2009). Vegetation stands were selected to represent the natural variability of the lake or river being studied, i.e. we selected stands that potentially represented different species. Delineation of vegetation stands was performed by hand on paper printouts at a scale of 1:800, with a minimum mapping area of 0.02 m2. We randomly selected 312 (site I) and 32 (site II) stands for calibration, and 223 (site I) and 46 (site II) stands for validation. For calibration, the stands were surveyed following an approach similar to that of Howland (1980), i.e. the stands were located by landmark with the printed aerial images at hand. For site I, calibration comprised 219 single- and 93 multiple-species stands, with a total of 33 species. Nineteen of these 33 species were included in validation (see below and Appendix S1 for image interpretation key sensu Lillesand et al. 2008). Sphagnum species were treated at the genus level. Because of the similar shape, size and colour of their floating leaves, it was not possible to differentiate between Nuphar lutea and Nymphaea alba subsp. alba. For site II, calibration included 15 single- and 17 multiple-species stands, with a total of seven species (see Appendix S1 for image interpretation key). Salix species were treated at the genus level and included S. lapponum, S. hastata subsp. hastata, S. phylicifolia and S. myrsinifolia. For validation, the presence of plant species in all delineated stands was predicted based on the experience gained during calibration, and subsequently checked in the field.
In multiple-species stands, predictions were only treated as correct when they were correct for all species. Field visits to sites I and II took place on 10, 15 and 21 Aug 2007 and 4–6 Aug 2009, respectively.
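The random calibration/validation split described above can be sketched as follows (a minimal illustration, not the authors' procedure; the stand identifiers and seed are arbitrary):

```python
# Random split of the 535 delineated stands at site I into 312 calibration
# and 223 validation stands, as described in the text (illustrative only).
import random

random.seed(42)                          # arbitrary seed, for reproducibility
stand_ids = list(range(535))             # one id per delineated vegetation stand
calibration = set(random.sample(stand_ids, 312))
validation = [s for s in stand_ids if s not in calibration]
```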

Figure 2.

Example of vegetation stand delineation and species identification on a digital extract of a UAS image of site I. In practice, delineation was done by hand on paper printouts. Note that stands of Schoenoplectus lacustris (1–4) differ markedly in colour and texture.

Accuracy assessment

The accuracy of species identification was assessed using data from the validation stands. We calculated the Producer's accuracy (i.e. probability that a vegetation stand on the ground is correctly identified) and the User's accuracy (i.e. probability that a vegetation stand on the image is correctly identified) for each vegetation class, and the overall accuracy of the identification according to Congalton (1991).
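The Congalton (1991) measures used here can be computed directly from a confusion matrix; the function below is an illustrative sketch, and the 3 × 3 matrix is a made-up example, not data from this study:

```python
# Overall, Producer's and User's accuracy from a confusion matrix
# (rows: class as interpreted on the image; columns: class observed in the field).

def accuracies(matrix):
    """Return overall accuracy plus per-class Producer's and User's accuracy."""
    n = len(matrix)
    total = sum(sum(row) for row in matrix)
    correct = sum(matrix[i][i] for i in range(n))
    overall = correct / total
    col_sums = [sum(matrix[i][j] for i in range(n)) for j in range(n)]  # field totals
    row_sums = [sum(row) for row in matrix]                             # image totals
    # Producer's accuracy: fraction of field-observed stands correctly identified.
    producers = [matrix[j][j] / col_sums[j] if col_sums[j] else None for j in range(n)]
    # User's accuracy: fraction of image-interpreted stands that were correct.
    users = [matrix[i][i] / row_sums[i] if row_sums[i] else None for i in range(n)]
    return overall, producers, users

example = [
    [9, 1, 0],   # interpreted as class A, field-checked as A/B/C
    [0, 7, 1],
    [1, 0, 6],
]
overall, pa, ua = accuracies(example)
```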

Vegetation mapping

Vegetation mapping, i.e. digitizing the UAS orthoimages, was performed manually by a human interpreter in a GIS using ArcGIS software (v. 9.3; ESRI Inc., Redlands, CA, USA). At site II we mapped species composition of aquatic and riparian vegetation, and at site III vegetation cover of P. australis.

In a first step, at site II, the boundary of the watercourse was delineated (scale 1:300). Helophyte stands in the water (determined at a scale of 1:150) were treated as part of the watercourse. To include riparian vegetation affected by regular water flow, we added a 3-m buffer along each side of the watercourse. Vegetation mapping (scale 1:150) was restricted to the area within the outer boundaries of the buffer. Isolated vegetation stands surrounded by open water were included in the mapping when the largest diameter was ≥1 m.

At site III, P. australis formed large area single-species stands (observed during field visits on 12 and 16 Jul 2008). The abundance of P. australis was estimated by mapping four cover classes at a scale of 1:100: C1 (≤25%), C2 (26–50%), C3 (51–75%) and C4 (>75%). Isolated stands surrounded by open water were included when the largest diameter was ≥0.5 m.
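The four-grade cover scale can be expressed as a simple classification rule (an illustrative sketch, not the authors' GIS workflow):

```python
# Four-grade cover scale used for P. australis at site III:
# C1 (<=25%), C2 (26-50%), C3 (51-75%), C4 (>75%).

def cover_class(percent_cover):
    """Assign a cover percentage (0-100) to cover class C1-C4."""
    if not 0 <= percent_cover <= 100:
        raise ValueError("cover must be between 0 and 100%")
    if percent_cover <= 25:
        return "C1"
    if percent_cover <= 50:
        return "C2"
    if percent_cover <= 75:
        return "C3"
    return "C4"
```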

Results

Species identification

At site I, we identified in total 49 vegetation classes, mostly at the species level, with an overall accuracy of 95.1% (Table 1). A total of 20% of the classified stands consisted of more than one species. The Producer's/User's accuracy was higher for single-species stands (97.8%) than for multiple-species stands (84.4%; Table 1). In single-species stands, Alisma plantago-aquatica, Eleocharis palustris, N. lutea/N. alba subsp. alba, P. australis, Schoenoplectus lacustris, Sparganium emersum and Typha latifolia were misclassified in a total of eight of 178 classifications (Table 1). Multiple-species stands including Carex spp., E. palustris and S. emersum were more likely to be misclassified, although not frequently (Table 1). Sparganium emersum was misclassified and confused with various helophyte species (Table 2).

Table 1. Accuracy assessment of species identification at site I, including number of expected (E), observed (O) and correctly identified (C) vegetation stands, and Producer's (PA) and User's accuracy (UA).
Vegetation Class | E | O | C | PA [%] | UA [%]
Single-species stands
 Alisma plantago-aquatica | 1 | 0 | 0 | – | 0
 Calla palustris | 3 | 3 | 3 | 100 | 100
 Carex canescens | 1 | 1 | 1 | 100 | 100
 Carex rostrata | 22 | 22 | 22 | 100 | 100
 Comarum palustre | 1 | 1 | 1 | 100 | 100
 Eleocharis palustris | 1 | 3 | 1 | 33.3 | 100
 Equisetum fluviatile | 8 | 8 | 8 | 100 | 100
 Hippuris vulgaris | 2 | 2 | 2 | 100 | 100
 Menyanthes trifoliata | 13 | 13 | 13 | 100 | 100
 Nuphar lutea/Nymphaea alba subsp. alba | 18 | 17 | 17 | 100 | 94.4
 Phragmites australis | 8 | 7 | 7 | 100 | 87.5
 Polytrichum commune | 2 | 2 | 2 | 100 | 100
 Potamogeton natans | 3 | 3 | 3 | 100 | 100
 Salix lapponum | 1 | 1 | 1 | 100 | 100
 Salix phylicifolia × myrsinifolia | 2 | 2 | 2 | 100 | 100
 Schoenoplectus lacustris | 59 | 60 | 59 | 98.3 | 100
 Sparganium emersum | 30 | 29 | 29 | 100 | 96.7
 Sphagnum spp. | 3 | 3 | 3 | 100 | 100
 Typha latifolia | 0 | 1 | 0 | 0 | –
Subtotal, single-species stands | 178 | 178 | 174 | 97.8 | 97.8
Multiple-species stands
 Carex acuta, E. palustris | 0 | 1 | 0 | 0 | –
 C. palustris, C. canescens | 1 | 1 | 1 | 100 | 100
 C. palustris, Sphagnum spp. | 2 | 2 | 2 | 100 | 100
 C. rostrata, C. palustre | 1 | 2 | 1 | 50 | 100
 E. palustris, N. lutea/N. alba subsp. alba | 0 | 2 | 0 | 0 | –
 E. fluviatile, S. emersum | 2 | 2 | 2 | 100 | 100
 E. palustris, H. vulgaris | 1 | 1 | 1 | 100 | 100
 C. rostrata, M. trifoliata | 3 | 1 | 1 | 100 | 33.3
 M. trifoliata, S. lacustris | 1 | 1 | 1 | 100 | 100
 H. vulgaris, N. lutea/N. alba subsp. alba | 1 | 1 | 1 | 100 | 100
 N. lutea/N. alba subsp. alba, S. lacustris | 2 | 2 | 2 | 100 | 100
 N. lutea/N. alba subsp. alba, S. emersum | 4 | 5 | 4 | 80 | 100
 C. rostrata, P. australis | 1 | 1 | 1 | 100 | 100
 C. palustre, P. australis | 3 | 3 | 3 | 100 | 100
 S. lacustris, S. emersum | 2 | 2 | 2 | 100 | 100
 A. plantago-aquatica, S. emersum | 3 | 0 | 0 | – | 0
 C. palustre, Sphagnum spp. | 2 | 2 | 2 | 100 | 100
 C. palustris, C. canescens, C. palustre | 1 | 1 | 1 | 100 | 100
 C. palustris, C. rostrata, Sphagnum spp. | 2 | 2 | 2 | 100 | 100
 C. rostrata, P. australis, M. trifoliata | 1 | 1 | 1 | 100 | 100
 C. palustris, C. rostrata, C. palustre | 1 | 1 | 1 | 100 | 100
 C. rostrata, E. palustris, N. lutea/N. alba subsp. alba | 1 | 1 | 1 | 100 | 100
 C. rostrata, E. palustris, S. emersum | 0 | 2 | 0 | 0 | –
 C. rostrata, C. palustre, P. australis | 1 | 1 | 1 | 100 | 100
 E. fluviatile, N. lutea/N. alba subsp. alba, P. natans | 1 | 1 | 1 | 100 | 100
 E. fluviatile, P. natans, S. emersum | 1 | 0 | 0 | – | 0
 N. lutea/N. alba subsp. alba, P. natans, S. emersum | 1 | 0 | 0 | – | 0
 N. lutea/N. alba subsp. alba, S. lacustris, S. emersum | 1 | 1 | 1 | 100 | 100
 C. palustris, C. canescens, C. rostrata, Sphagnum spp. | 1 | 1 | 1 | 100 | 100
 C. palustris, C. canescens, C. rostrata, C. palustre | 4 | 4 | 4 | 100 | 100
Subtotal, multiple-species stands | 45 | 45 | 38 | 84.4 | 84.4
Total | 223 | 223 | 212 | 95.1 | 95.1
Table 2. Number of misidentifications at site I. For abbreviated genera see Table 1.
Should Be | Was Interpreted As | n
E. palustris | C. rostrata, M. trifoliata | 2
S. lacustris | N. lutea/N. alba subsp. alba | 1
T. latifolia | P. australis | 1
E. palustris, N. lutea/N. alba subsp. alba | N. lutea/N. alba subsp. alba, P. natans, S. emersum | 1
E. palustris, N. lutea/N. alba subsp. alba | A. plantago-aquatica, S. emersum | 1
C. rostrata, C. palustre | A. plantago-aquatica, S. emersum | 1
C. acuta, E. palustris | A. plantago-aquatica, S. emersum | 1
N. lutea/N. alba subsp. alba, S. emersum | A. plantago-aquatica | 1
C. rostrata, E. palustris, S. emersum | S. emersum | 1
C. rostrata, E. palustris, S. emersum | E. fluviatile, P. natans, S. emersum | 1

At site II, we identified in total 14 vegetation classes, mostly at the species level, with an overall accuracy of 80.4% (Table 3). A total of 33% of the classified stands consisted of more than one species, and the number of single-species stands was underestimated (Table 3). The User's accuracy of single-species stands was higher than that of multiple-species stands, while the inverse was found for Producer's accuracy (Table 3). Equisetum fluviatile was the species most often correctly identified (Table 3). Misclassifications typically consisted of the omission or incorrect inclusion of one species in multiple-species stands (Table 4). For example, Carex rostrata was repeatedly confused with Salix spp., mainly in multiple-species stands (Table 4).

Table 3. Accuracy assessment of species identification at site II, including number of expected (E), observed (O) and correctly identified (C) vegetation stands, and Producer's (PA) and User's accuracy (UA).
Vegetation Class | E | O | C | PA [%] | UA [%]
Single-species stands
 Betula pubescens subsp. czerepanovii | 1 | 1 | 1 | 100 | 100
 Carex nigra | 3 | 3 | 3 | 100 | 100
 Carex rostrata | 3 | 7 | 3 | 42.9 | 100
 Equisetum fluviatile | 9 | 9 | 9 | 100 | 100
 Menyanthes trifoliata | 0 | 1 | 0 | 0 | –
 Nuphar pumila | 8 | 10 | 8 | 80.0 | 100
 Salix spp. | 1 | 0 | 0 | – | 0
Subtotal, single-species stands | 25 | 31 | 24 | 77.4 | 96.0
Multiple-species stands
 C. nigra, C. rostrata | 2 | 1 | 1 | 100 | 50.0
 C. rostrata, N. pumila | 2 | 0 | 0 | – | 0
 C. rostrata, Salix spp. | 6 | 3 | 3 | 100 | 50.0
 C. nigra, E. fluviatile | 1 | 0 | 0 | – | 0
 C. rostrata, E. fluviatile | 3 | 2 | 2 | 100 | 66.7
 C. nigra, C. rostrata, Salix spp. | 5 | 7 | 5 | 71.4 | 100
 C. nigra, C. rostrata, Comarum palustre, Salix spp. | 2 | 2 | 2 | 100 | 100
Subtotal, multiple-species stands | 21 | 15 | 13 | 86.7 | 61.9
Total | 46 | 46 | 37 | 80.4 | 80.4
Table 4. Number of misidentifications at site II. For abbreviated genera see Table 3.
Should Be | Was Interpreted As | n
C. rostrata | C. rostrata, Salix spp. | 3
C. rostrata | Salix spp. | 1
M. trifoliata | C. nigra, E. fluviatile | 1
N. pumila | C. rostrata, N. pumila | 2
C. nigra, C. rostrata, Salix spp. | C. nigra, C. rostrata | 1
C. nigra, C. rostrata, Salix spp. | C. rostrata, E. fluviatile | 1

Vegetation mapping

Because mapping a continuous area was more labour-intensive than species identification of single stands, we simplified the vegetation classes at site II before mapping. Based on the results of the species identification evaluation, we included Betula pubescens subsp. czerepanovii, E. fluviatile and Nuphar pumila as single-species classes. Carex nigra was a frequently occurring species, typically forming single tufts of 1–2 m in diameter (corresponding to only ~1 cm on the screen). We therefore combined C. nigra with the surrounding vegetation into multiple-species classes. Menyanthes trifoliata was omitted due to low occurrence. Because Salix spp. occurred mainly as small single shrubs (<1 m high) in multiple-species stands with C. rostrata, from which they were difficult to distinguish on the orthoimage (similar colour and texture), these species were combined into one multiple-species class. Other multiple-species classes were: (1) C. nigra, C. rostrata and Salix spp., with rare occurrence of Comarum palustre; and (2) C. nigra, C. rostrata and E. fluviatile, with occasional occurrence of Salix spp.

Vegetation mapping showed that site II was dominated by three vegetation classes: (1) C. nigra, C. rostrata, Salix spp.; (2) C. nigra, C. rostrata, E. fluviatile; and (3) E. fluviatile, which covered 28%, 27% and 26% of the mapped area along the 1.4-km river stretch, respectively (Fig. 3). The manual mapping speed at site II was about 0.25 ha·hr−1.

Figure 3.

Unmanned aircraft systems orthoimage of site II (a), vegetation map of site II (b) and magnifications (c, d). For abbreviated genera see Table 3.

At site III, 20% of the lake surface area was covered by P. australis, corresponding to a total of 7.3 ha; 70%, 9%, 7% and 14% of this area belonged to C1, C2, C3 and C4, respectively. The manual mapping speed at site III was about 0.5 ha·hr−1.
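As a back-of-envelope consistency check of these figures (not part of the paper's workflow), the 36 ha lake area from the site description and the reported percentages combine as follows:

```python
# Consistency check of the site III results: 20% of the 36 ha lake is ~7.2 ha
# (reported as 7.3 ha after mapping), split 70/9/7/14% over classes C1-C4.
lake_ha = 36.0
covered_ha = 0.20 * lake_ha                      # fraction of the lake covered

shares = {"C1": 0.70, "C2": 0.09, "C3": 0.07, "C4": 0.14}
# Area per cover class (ha), using the mapped total of 7.3 ha.
class_areas = {c: round(7.3 * s, 2) for c, s in shares.items()}
print(covered_ha, class_areas)
```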

Discussion

Sub-decimetre resolution UAS images allowed for accurate identification of lake and river vegetation, mostly at the species level, including shrub and herbaceous riparian as well as non-submerged aquatic species. The Nymphaeaceae species that could not be discriminated due to their similar colour and shape are known to respond similarly along a eutrophication gradient (Penning et al. 2008a). This implies that the confusion is of minor importance, for example, when assessing the ecological status of a water body. Lake vegetation was easier to identify than river vegetation. At the river (site II), most of the vegetation formed a heterogeneous multiple-species band, while vegetation stands in the lake (site I) were more easily discriminated and often surrounded by open water. Our conservative validation criterion for multiple-species stands (i.e. correct classification only if all species were correctly identified) contributed to lower Producer's/User's accuracy for these stands compared to single-species stands at site I. Nevertheless, accuracy was >80%, indicating high reliability of classifications even for multiple-species stands, which are common in helophyte communities and especially in riparian zones. Misidentified single-species stands were mostly small (<1 m2), so that species-specific colour and texture characteristics could not be detected on the printouts. Misidentifications of multiple-species stands were mainly due to omission of single species with low cover, but also due to similar colour and texture of species from different genera. Manual vegetation mapping was feasible, both for species composition and cover, with a slight reduction in detail compared to the species identification step.

The flexibility of the UAS tested here (transportable and operable by a single person) permitted us to perform remote sensing at desired locations and times without being dependent on external image providers. In general, compared to manned aircraft platforms, UAVs allow for flights at low altitudes under the cloud cover, reduce safety risks to the pilot, minimize requirements for launch and recovery, and can easily be adapted to new technical developments. UAVs have been estimated to save up to two-thirds of the operating costs of manned aircraft (Lomax et al. 2005). The cost for purchasing a PAMS is about 24 000 € (excluding VAT, price from SmartPlanes Sweden AB, 2013); this is an upgraded version compared to that used in this study. Costs for orthoimage production vary from 150 to 250 € for an area of ca. 25 ha when using a commercial processing service. In ca. 2–5 hr processing time, image mosaics can also be produced with stand-alone software systems (included in the upgraded PAMS above) running on a high-end laptop.

The cost for image interpretation depends on the complexity of vegetation cover. For example, mapping the heterogeneous river vegetation took about twice as long per hectare compared to mapping single-species stands of different cover classes. Prior knowledge of species occurrence was necessary for correct species identification. This calibration step is also necessary for automated image classification, and hence fieldwork is still necessary for detailed vegetation mapping. For UAS images characterized by low spectral resolution and small pixel sizes, segmentation into objects (homogeneous image areas) prior to automated analysis, referred to as Object-Based Image Analysis (OBIA), is a robust way to deal with image heterogeneity in automated classification approaches (Blaschke & Strobl 2001; Dunford et al. 2009; Bryson et al. 2010; Laliberte & Rango 2011). The colour, texture and shape of these objects can then be considered in the automated classification (Laliberte & Rango 2009; Bryson et al. 2010). For example, automated mapping of shrubs, grasses, bare ground and litter in arid rangeland, divided into 18 plots of 0.5 ha each, resulted in User's accuracies of 95%, 33%, 75% and 95%, respectively (Laliberte & Rango 2011). Developing a set of classification rules for a single plot took 6 hr, while segmenting and classifying the remaining study area took 1.5 hr (in total 1.2 ha·hr−1; Laliberte & Rango 2011). This illustrates the effectiveness of automated image classification for large homogeneous areas, such as rangelands. However, at the species level, only one of five species was accurately classified (Laliberte & Rango 2011). In addition, aquatic and riparian vegetation in boreal systems usually occurs over more limited spatial extents and with greater heterogeneity than the studied rangeland vegetation. Such inherent heterogeneity impedes the general applicability of automated classification rule sets to other areas.
For example, Marshall & Lee (1994) found that a species can vary in appearance from one site to another due to differences in illumination (e.g. sun angle, haze, water glitter), wind exposure and plant development stage. In our study, individual stands of the same species differed significantly in colour and texture even in adjacent stands (e.g. S. lacustris; Fig. 2), probably due to differences in plant height. Taller stems are more likely to bend, resulting in a larger area exposed to the sun. Also, shading from taller plants in the vicinity altered the appearance of species. These within-species variations could cause problems for automated image classification.

Visual interpretation, as used in our study, is robust to within-species variation (see Fig. 2). The human interpreter considers a larger range of interpretation elements, including size, shape, shadow, colour, texture, pattern, location, and associations with the object's surroundings (Colwell 1960; Tempfli et al. 2009), and thus uses the full potential of the high spatial resolution image. Potential variation caused by distortion and off-nadir perspective could also be accounted for visually when identifying species on the paper printouts. An interpreter with broad experience of classifying vegetation stands from different systems, sampled under varying wind and sun conditions, is likely to make fewer misclassifications.

Because of high image overlap (70%) and use of external ground control points for georeferencing, the geometric accuracy and quality of the produced orthoimages were not affected by autopilot GPS accuracy or aircraft stability. The close-to-nadir perspective throughout the entire orthoimage reduced the angular variation and associated differences in appearance of vegetation compared to conventional aerial imagery. As orthoimages were assembled from different flights undertaken at different times of the day, the position and size of shadows could vary in different parts of the orthoimage. Also, the degree of cloud reflection in open water could vary between flights. This, however, did not hinder the vision-based manual mapping. High wind speeds can cause blurred images, either by moving the vegetation on the ground, or by accelerating the aircraft when blowing in flight direction. The latter was considered during flight planning by orientating the flight lines perpendicular to the main wind direction. The high image overlap allowed exclusion of individual blurred images, as well as images with high spectral reflection, from the data set. The main limitation of the UAS tested here compared to manned aircraft is the relatively small area of coverage, because operation is restricted to flying within the visual line of sight.

As manual mapping is time-consuming, our method is mainly suited for heterogeneously vegetated areas of limited spatial extent or for larger areas with more homogeneous vegetation. To further assess the potential of our method, future research should address direct comparisons with other remote sensing methods, e.g. the use of manned aircraft platforms and hyperspectral sensors. Another need is to develop automated classification programmes that can deal with the inherent complexity of sub-decimetre resolution images.

We conclude that UASs generating sub-decimetre resolution orthoimages offer great potential for lake and river vegetation identification and mapping at the species level, as well as for abundance estimates. Considering the accuracy of available state-of-the-art automated classification methods, we currently recommend visual image interpretation.

Acknowledgements

We thank four anonymous reviewers for valuable comments on the manuscript and R.K. Johnson for improving the English. This work was conducted as part of the European project ImpactMin (project number 244166 in FP7-ENV-2009-1) and the research programme WATERS, funded by the Swedish Environmental Protection Agency (Dnr 10/179) and the Swedish Agency for Marine and Water Management. Grants from Stiftelsen för Teknisk Vetenskaplig Forskning till Minne av J. Gust. Richert (grant no. PIAH/07:26) and the Swedish Governmental Agency for Innovation Systems (VINNOVA, P32060-1) supported the contributions of F. Ecke.
