Volume 12, Issue 11 p. 2184-2195
RESEARCH ARTICLE

A method for low-cost, low-impact insect tracking using retroreflective tags

Michael Thomas Smith (Corresponding Author)
Department of Computer Science, University of Sheffield, Sheffield, UK
Email: [email protected]

Michael Livingstone
Department of Landscape Architecture, University of Sheffield, Sheffield, UK

Richard Comont
Bumblebee Conservation Trust, Stirling, UK

First published: 06 August 2021

Handling Editor: Johan Kotze

Abstract

  1. Current methods for direct tracking of individual bee movement behaviour have several limitations. In particular, the weight and size of some types of electronic tag limit their use to larger species, and radar and other electronic systems are often large and very expensive to deploy. A tool is needed that complements these electronic-tag methods: one that is simple to use, low-cost, offers high spatial resolution and can be used with smaller insects.
  2. This paper presents a candidate method that uses retroreflective tags. These are detected using a camera with a global electronic shutter, with which we take photos with and without a flash. The tags can be detected by comparing these two photos. The small retroreflective tags are simple and lightweight, allowing many bees to be tagged at almost no cost and with little effect on their behaviour.
  3. We demonstrate this retroreflector-based tracking system (RTS) with a series of simple experiments: training and validation with a manually positioned tag; case studies of individual bees; tracking of multiple bees as they forage in a garden; use of real-time monitoring to enable easy reobservation in a simple floral preference experiment; and a brief 3D path-reconstruction experiment (integrating two devices). We found we could detect bees to a range of about 35 m with the current configuration.
  4. We envisage the system will be used in the future to increase detection rates in mark–reobservation studies, provide 3D flight path analysis and support automated long-term monitoring. In summary, this novel tracking method has advantages that complement those of electronic-tag tracking, which we believe will lead to new applications and areas of research.

1 INTRODUCTION

Tracking individual bees in the wild can provide ecologists and neuroethologists with valuable information, helping them, for example, to investigate foraging (Saville et al., 1997), search strategies (Reynolds et al., 2007), orientation flights (Capaldi et al., 2000) and other aspects of cognition (Capaldi & Dyer, 2020) and can help researchers find nests (Kennedy et al., 2018). Our ability to track and detect individuals as they forage and explore the landscape is fundamental to our understanding of bee behaviour and ecology. The aim of this work was to provide a simple, low-cost, low-impact method for precision tracking of bees in the field.

We briefly review current methods for insect tracking to motivate this paper's approach. We set aside indirect methods, such as ‘mark and recapture’: although the simplest, it is reportedly often biased by the location of the observers and suffers from very low detection rates due to the size of the foraging areas involved (Schaffer, 1997). We instead focus on direct tracking methods.

Azimuthally scanning harmonic radar has been used to track flying insects for over 20 years (e.g. Cant et al., 2005; Charvat et al., 2003; Milanesio et al., 2016; Osborne et al., 1999; Wolf et al., 2014) and has been vital for many discoveries, from forage search strategies (Reynolds et al., 2007) to orientation flights (Capaldi et al., 2000). These results were achieved thanks to its impressive range of several hundred metres (Osborne et al., 1996). See Riley and Smith (2002) for more technical details of its design. The technique's main shortcoming is that the equipment is very large, very expensive and bespoke, making it inaccessible to most researchers (O'Neal et al., 2004), particularly in low-income countries. Precision is typically of the order of several metres (Cant et al., 2005), which is not sufficient for some applications. Low-power, mobile, direction-only detectors (e.g. using the popular Recco Rescue System, http://www.recco.com/) have also been used, with ranges reported as 13 m (Lövei et al., 1997), 40 m (Langkilde & Alford, 2002), 50 m (Roland et al., 1996) or 60 m (Psychoudakis et al., 2008). On tag size: Osborne et al. (1996) were able to tag and track Apis mellifera with 12-mg tags, and Chapman et al. (2004) report that 1-mg tags have been used to track tsetse flies.

Very high-frequency (VHF) radiotransmitters have also been used to track bumblebees (and butterflies) over even larger ranges than harmonic radar (Fisher et al., 2020; Hagen et al., 2011), with the support of a light aircraft. This approach requires a battery-powered (200 mg) transmitter to be attached to the bee, leading to significant effects on bee behaviour (Hagen et al., 2011) and making it largely impossible to use with many insects. For comparison, the median weight of Bombus terrestris subsp. audax workers is only about 250 mg (Connolly & Moffat, 2015). Many of these electronic direct tracking methods require an antenna to be attached to the insect, making them difficult to use with smaller insects, or those which need to access narrow spaces (such as burrowing insects or those which enter sympetalous flowers).

Other methods: Radio-frequency identification (RFID) tags have also been used, although these typically require the insect to be within 3 cm of the sensor (Nunes-Silva et al., 2019). Another recent electronic-tag alternative harvests energy from the insect's wing beats; this still requires the insect to carry a somewhat heavy and complicated payload (Shearwood et al., 2017) and has a range similar to the method described in this paper. Others have used lidar to track untagged bees (Bender et al., 2003), although this requires a very carefully prepared site and does not allow individual targets to be tracked. Abdel-Raziq et al. (2021) suggest that the orientation of the sun, measured by an on-board sensor, could be used to infer the flight path by dead reckoning, although that paper only presents simulated data. Non-electronic tags have been used with other insects: chemiluminescent tags in dark environments (Spencer et al., 2017), and metal foil with metal detectors for hidden insects (Piper & Compton, 2002).

A few previous papers have used retroreflective tags to track insects: Rennison et al. (1958) used reflective paints to search for tsetse flies, at night, using a torch. Riley and Edwards (1994) experimented with retroreflective tags for green leafhoppers (Nephotettix spp.) in an unlit indoor test. Zanen and Cardé (1999) used them in a wind tunnel to track the orientation of gypsy moths. Recent work explores the potential to use retroreflectors (at very close range) to image the insect using lasers and galvanometer motors (Vo-Doan & Straw, 2002).

In this paper, we describe a low-cost method for detecting and tracking insects in the field using small retroreflective markers. Unlike the above-cited retroreflector papers, we use a camera with a global electronic shutter and a flash, enabling it to work in sunny conditions and over a greater range. The lightweight, simple tags mean there is likely to be less impact on bee behaviour than with the electronic tags mentioned earlier. The range is limited to about 35 m line-of-sight, so we do not envisage it being a complete replacement for harmonic radar and VHF radiotracking, but anticipate it can provide a very useful method in the ecologist's toolkit. For example, the imaging accuracy can allow the precise flower being visited to be identified. A few example applications include: tracking bees that are potentially too small for electronic tagging, supporting reobservation studies to increase reobservation rates, 3D flight path analysis, and automated long-term monitoring and tracking. We anticipate further applications to develop as the method matures.

2 MATERIALS AND METHODS

2.1 Overview

The retroreflector-based tracking system (RTS) was developed using commercially available components and is simple to construct. The on-board software required is provided as open-source Python modules. See Supporting Information for details.

Its operation is simple: A camera and flash are mounted on an elevated platform. Pairs of photos are taken, with and without the flash. A retroreflector attached to the bee reflects the flash's light back towards the camera. The non-flash photo is subtracted from the flash photo. A bright dot will remain on the photo after the subtraction, indicating where the retroreflector is. This bright dot can be followed if multiple photo pairs are taken. Various additional steps, described later, are required to reduce the false positive (FP) rate. This algorithm is run in real time on the system's on-board Raspberry Pi computer, and the results are both saved internally and made available via a web interface hosted on board, allowing the fieldworker to find the detected bee in real time (e.g. using a mobile phone's browser).

2.2 Bee capture and tagging

Over the course of the project, we successfully tagged over 100 bees, and found it of similar technical difficulty to tagging in conventional reobservation studies. After capture we used cold-induced narcosis to immobilise the bees for tagging (see Supporting Information for more details).

We used a retroreflector with a fabric substrate (from a hi-vis jacket) and cut it to approximately 8 × 4 mm (we adjust the size depending on the size of the insect). This weighed about 5 mg. Using tweezers we folded it in half and spread the ends of the two halves apart, so the reflector has a small ridge. This helped the RTS detect the bee from a wider range of angles. Using tweezers to hold the reflector by the ridge, we applied a small amount of cyanoacrylate adhesive (Loctite superglue) to the reflector and affixed it to the thorax of the insect. We cut small notches in each tag prior to attachment, to allow later unique re-identification of each bee by a human observer.

The species of wild bees were identified during tagging by the fieldworker (using Edwards & Jenner, 2018) and later confirmed (from photos) by the other authors.

2.3 The system

Several iterations of the RTS have been constructed. Here we describe the most advanced. The system's overall cost was about £820, of which £630 was the camera and lens (2020 prices). Figure S1a in Supporting Information illustrates the prototype used.

2.3.1 Computer

The system used a Raspberry Pi computer (Pi 4 model B, Raspberry Pi Foundation) to trigger the camera and flashes, and for processing and saving the camera output. It also ran a webserver to allow the user to control the system via wifi.

2.3.2 Camera and lens

The most important requirement for the camera (the monochrome 2064 × 1544 GCC2062 M, Smartek) was a global electronic shutter. This allowed very brief exposures (e.g. 25 μs), just covering the duration of the flash. A standard machine vision 2-megapixel lens was used (Kowa LM5JCM 2/3″ 5 mm F2.8 C-mount, Kowa American Corp.).

2.3.3 Flash

Two or four flashes were used (TT560, Neewer, set to 1/16 power), either configured to fire in unison or sequentially. In the former, the flash power was greater, thus potentially increasing the range of the system, but in the latter, the system could take more photos as the flashes could sequentially recharge.

2.4 Algorithm and software

2.4.1 Overview

To find a retroreflective tag in the photos requires some image processing. At its simplest, the algorithm needs to (a) align the flash and no-flash photos, (b) subtract the no-flash photo from the flash photo and (c) find the brightest point. This simple algorithm is fairly successful; however, FPs were frequently detected. For a reobservation study in which a true positive is relatively rare, the FP rate needs to be very low to avoid masking the rare true positive events.
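The core of this simple algorithm (steps b and c) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the project's released code, and assumes the two photos are already aligned greyscale arrays:

```python
import numpy as np

def find_tag_simple(flash, noflash):
    """Minimal sketch of the basic detection idea: subtract the
    no-flash photo from the flash photo and return the location and
    value of the brightest remaining pixel. Both inputs are aligned
    greyscale images as 2-D arrays."""
    diff = flash.astype(np.int32) - noflash.astype(np.int32)
    np.clip(diff, 0, None, out=diff)     # negative values carry no signal
    y, x = np.unravel_index(np.argmax(diff), diff.shape)
    return (y, x), diff[y, x]
```

In practice the alignment step (a) would be applied before the subtraction, and the FP-suppression stages described below are essential.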

We found FPs had a variety of causes. The most common were moving bright objects or features. These include the pattern of leaves and branches moving against the relatively bright sky, leading to bright dots appearing and disappearing regularly in the subtracted image. Similarly, reflective objects such as lamp posts, plant pots, etc., often had reflections which led to bright pixels in the subtracted image. Humans or other animals moving within the scene could also lead to differences between the flash/no-flash photos. We also found that occasional near-camera (out of focus) particles (possibly wind-carried seeds or pollen) were brightly illuminated by the flash.

A three-stage approach was taken to remove various types of FP event. First, rather than just subtract the non-flash photo, this photo is maximum-dilated (see Supporting Information) to ensure that small movements in the location of bright spots will still lead to them being cancelled in the flash photo. For example, a bright flower moving in the wind, or the dots of sky visible through a tree, can move between the two photos. The camera itself may also move slightly. By dilating the no-flash photo, these bright dots will still be subtracted from their counterparts in the flash photo.
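This first stage can be sketched with a standard grey-scale dilation; the 5 × 5 window below is an illustrative choice, not necessarily the value used by the RTS:

```python
import numpy as np
from scipy.ndimage import grey_dilation

def dilated_subtract(flash, noflash, k=5):
    """Subtract a maximum-dilated no-flash image from the flash image.
    Dilation replaces each no-flash pixel with the maximum over a
    k-by-k neighbourhood, so a bright spot that shifts by up to k//2
    pixels between the two exposures is still cancelled."""
    dilated = grey_dilation(noflash.astype(np.int32), size=(k, k))
    diff = flash.astype(np.int32) - dilated
    np.clip(diff, 0, None, out=diff)
    return diff
```

A bright flower that moves one pixel between exposures is cancelled, while a tag illuminated only in the flash photo survives the subtraction.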

Second, some objects such as litter, a swinging bird feeder, street signs or other reflective objects do produce a bright reflection, which will appear in the flash/no-flash subtracted image. We handle this by noting that foraging bees rarely remain in the same location for long. Thus by combining (finding the maximum of) previous flash/no-flash subtractions, and then subtracting this from our current flash/no-flash image, we remove all the stationary sources of reflection.

Third, some objects may still appear, even in this resulting image. Examples include motes of pollen or wind-transported seed, or other moving retroreflectors such as car number plates etc. The retroreflective tag is far smaller than a pixel so leads to a very concentrated bright ‘blob’ of just a few pixels. Most other FPs that remain at this point are typically considerably larger. We use a machine learning classifier to identify and remove these remaining FP targets.

2.4.2 Classifier training

To collect training and validation data, we visited two locations (and used several camera orientations) over 5 days. We used a retroreflector on the end of a 1-m bamboo cane. We took photos of the reflector at various distances up to 40 m, holding the reflector at different heights. We labelled 812 images. In each, we specified the location of the retroreflector (easily identified as we could visually inspect the image and find the fieldworker holding the cane). We also automatically found another 7,000 maximums in the difference images that were not the retroreflector. We then used six features (see below) associated with each of these points to train a (linear kernel) support vector machine (Chang & Lin, 2011) to distinguish between the two classes.
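A hedged sketch of this training step, using scikit-learn's linear-kernel SVM on synthetic stand-ins for the six patch features (the feature values, scaling and balanced class weighting below are our assumptions, not the paper's exact setup):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Synthetic stand-ins for the six patch features: true tags are
# modelled as compact bright blobs, false positives as dimmer,
# spatially broader maxima.
X_tag = rng.normal(loc=[200, 180, 20, 15, 30, 600], scale=20, size=(200, 6))
X_fp  = rng.normal(loc=[120, 150, 80, 90, 110, 900], scale=40, size=(1000, 6))
X = np.vstack([X_tag, X_fp])
y = np.concatenate([np.ones(200), np.zeros(1000)])

# Linear-kernel SVM, as in the paper; tags are much rarer than false
# positives, hence the balanced class weighting (our choice).
clf = make_pipeline(StandardScaler(),
                    SVC(kernel="linear", class_weight="balanced"))
clf.fit(X, y)
scores = clf.decision_function(X_tag)  # threshold chosen per application
```

The decision-function score plays the role of the per-candidate confidence mentioned in the algorithm's final step.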

2.4.3 The algorithm

The algorithm for detecting the tag in the photos consists of the following steps:
  1. Capture: Regularly (between every 0.25 and 6 s) a set, S_t, of photos is taken. These sets are indexed t = 1, …, T. We assume we are searching for the bee in the latest set, S_T. Within each set, S_t, there is one photo taken with the flash (F_t) and at least one without (N_t). These are typically all taken within 2 ms of each other. We typically take the no-flash photo(s) first (to ensure the flash is not lit). Later, during analysis, we found a single no-flash photo was sufficient.
  2. Alignment: A coarse or approximate alignment may be necessary within and between pairs of images if the RTS platform is not stable (e.g. on the tethered balloon). This can be done using a standard library (OpenCV); for translation-only alignment, we found that a convolution of a low-resolution version of the image provided very fast coarse alignment.
  3. Subtraction (for the current set, S_T): We maximum-dilate, d(·) (see Supporting Information), each of the non-flash photos, and subtract the result from the flash photo. The minimum pixel values, D, over these subtracted pairs are computed (where the min operator is per pixel), so for pixel p, D(p) = min_n [F_T(p) − d(N_T,n)(p)].
  4. Subtraction (for all previous sets, S_t, t < T): For each previous set we do not dilate either image; we simply subtract each of the non-flash photos from the flash photo. The maximum pixel values, M, over all these subtracted pairs, from all the previous sets, are computed (where the max operator is per pixel), so for pixel p, M(p) = max_{t<T, n} [F_t(p) − N_t,n(p)]. The maximum per set can be cached as it is used repeatedly.
  5. Overall subtraction: We apply the maximum-dilate operation to the maximum subtraction image, M, from the previous sets and subtract this from the current difference image to give our resulting search image, R = D − d(M).
  6. Finding candidate patch features: We now find the brightest points in the search image by iteratively locating the maximum pixel in R: p = argmax R. For each such maximum point we record six features:
     (a) the value of the maximum, R(p);
     (b) the value of the brightest pixel from the flash photo, F_T, in a small patch centred on p;
     (c) the background mean: the average of the patch surrounding p in R, excluding a small central square;
     (d) the maximum of wider surrounding points: the largest value among eight evenly spaced pixels placed on an outer square around p, in R;
     (e) the maximum of closer surrounding points: the largest value among eight evenly spaced pixels placed on an inner square around p, in R;
     (f) the centre sum: the sum of pixel p and its four neighbours, in R.
These simple features are chosen to distinguish true tags from other sources of maxima in the search image: non-tags typically have a greater spatial extent, which these heuristics detect. We then delete a 15 × 15 square centred on p and repeat the search for candidate patches. Typically, we generate 20 candidate patches.
  7. Finally, we use these six features to query a support vector machine classifier, previously trained on manually generated and labelled data. This gives a score/confidence for each candidate patch. We select a threshold depending on the application.
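The subtraction and candidate-search steps above can be sketched as follows, assuming a single no-flash photo per set. The variable names (search image R, running maximum M) are our own labels for the quantities in the steps, and the dilation window is illustrative:

```python
import numpy as np
from scipy.ndimage import grey_dilation

def search_image(flash, noflash, prev_max, k=5):
    """Subtraction steps for one set: D = flash - dilate(noflash);
    prev_max (M) is the per-pixel maximum of the previous sets'
    undilated flash/no-flash differences; the search image is
    R = D - dilate(M), clipped at zero."""
    D = flash.astype(np.int32) - grey_dilation(noflash.astype(np.int32),
                                               size=(k, k))
    R = D - grey_dilation(prev_max.astype(np.int32), size=(k, k))
    np.clip(R, 0, None, out=R)
    return R

def candidate_peaks(R, n_candidates=20, suppress=15):
    """Candidate search: iteratively take the brightest pixel, then
    blank a suppress-by-suppress square around it before repeating."""
    R = R.copy()
    half = suppress // 2
    peaks = []
    for _ in range(n_candidates):
        y, x = np.unravel_index(np.argmax(R), R.shape)
        peaks.append((int(y), int(x), int(R[y, x])))
        R[max(0, y - half):y + half + 1, max(0, x - half):x + half + 1] = 0
    return peaks
```

Each returned peak would then have its six features extracted and be scored by the trained classifier.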

2.5 Applications

To provide a demonstration of the RTS being used, we ran a series of simple experiments.

The first, a simple tracking experiment, recorded the time bees were detected around different flower patches in a garden (Section 3.3.1).

In the second, we investigate floral preference (Section 3.3.2). Individual floral preference is a well-known feature of Bombus sp. (e.g. Wilson & Stine, 1996). We wished to test whether individual bees have a preference for particular flowers in our study (compared to their species' average). When the system detected a bee, a fieldworker found and identified the individual bee (and the flowers it was foraging on). Contingency tables were constructed for each species, to test for forage preference at the level of individuals.

In the third experiment, a 3D flight path reconstruction experiment was conducted (Section 3.3.3). A single RTS can only resolve a tag's location to a ray that passes through the camera, with its direction determined by the tag's position in the photo. If the same bee is tracked simultaneously by two tracking systems placed at different locations, the flight path of the bee can be reconstructed by combining these series of rays. We provide more details of this approach in Supporting Information.
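Assuming each detection yields a camera centre and a direction vector, the combination step can be sketched as a least-squares intersection of two rays (the standard closest-point construction, not necessarily the paper's exact fitting procedure):

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Least-squares intersection of two observation rays: each RTS
    resolves the tag to a ray (camera centre c, direction d), and the
    bee's position is estimated as the midpoint of the shortest
    segment joining the two rays."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b          # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = c1 + s * d1               # closest point on ray 1
    p2 = c2 + t * d2               # closest point on ray 2
    return (p1 + p2) / 2
```

Applying this to each pair of simultaneous observations yields a sequence of 3D points, which can then be smoothed into a flight path.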

2.6 Field sites

For the evaluation phase, we used Site A (53°22′16.6″N, 1°30′49.7″W) and Site B (53°22′16.7″N, 1°30′45.1″W). These are both on a quiet suburban private road in Sheffield, UK, with a backdrop of buildings, trees, parked cars and plants. We used these with the aim of providing a relatively challenging environment for the RTS during the evaluation. The tracking system was mounted approximately 1.5 m above the ground at these locations. The case studies were conducted at: Site C, an experimental ornamental urban meadow in Sheffield (53°22′46.2″N, 1°26′09.7″W); Site D, a small wild-flower area in part of the University of Sheffield's landscaped campus (53°22′17.9″N, 1°30′26.1″W); Site E, a garden on the edge of the small, rural town of Minchinhampton, Gloucestershire, UK (51°42′34.8″N, 2°10′31.0″W); and Site F, a garden in the village of Chedworth, Gloucestershire, UK (51°47′51.2″N, 1°55′03.7″W). At Site D, the tracking system operated resting on a stepladder at the top of a bank, approximately 3 m above the height of the mini-meadow. At Site E, we placed the tracking system on a flat roof, so it was raised approximately 2.5 m above the ground. At Site F, it was placed on a stepladder (1.5 m above the ground).

The 3D reconstruction experiment, at Site G, was on a lawn at another part of the University of Sheffield campus, surrounded by trees and a nearby small allotment (53°22′21.5″N, 1°30′08.4″W). The two tracking systems were approximately 1 m above the ground.

3 RESULTS

3.1 Detection range evaluation

The Supporting Information includes both a calculation of the theoretical range and an empirical evaluation of tag detection at different distances, at Sites A and B. To summarise, we found that the RTS worked to approximately 35 m. Using all four flashes and a threshold that achieved a FP rate of zero (for the data collected), we detected 17 of 25 tags at 30 m and one of 18 tags at 39 m. These results are sensitive to the presence or absence of objects that generate FPs, the size and orientation of the reflector and the ambient daylight.

3.2 Case studies

Before looking at tracking, floral preference and 3D trajectories, it is worth looking at the detection of single, tagged bees, as these case studies provide simple, clear examples of the utility of the RTS. Further discussion and results, including the use of a tethered balloon platform, are in Supporting Information.

3.2.1 First field trial: Bumblebees reobserved in pine tree

Our first field trial (12:00–13:00 BST, 20 June 2020) with the current RTS was with a wild worker B. terrestris which we caught foraging in a mini-meadow area at Site D (Figure 1e). It was tagged (Figure 1a,b) and released. We monitored the wild-flower area at the site, with the tracking system, anticipating the bee would return to the forage. However, the system unexpectedly found it 3 m above the ground in a pine tree—33 m from the tracking system (Figure 1c,d).

(a) Tagging bees in the field using an icebox of salt/ice, a marking pot with mesh at both ends (*) and an electric fan (Δ). (b) Tagged Bombus terrestris. (c) The same bee having been detected in the pine tree. (d) Image from the tracking system, with the bee detected (#) and examples of other reflections: a lifebuoy cover (□) and the retroreflective material on a traffic cone (○). (e) The pine tree where the tagged bee was detected (+) and the tracking system (×)

Figure 1d illustrates reflections from other objects (a lifebuoy cover and the retroreflective material on a traffic cone). Although both of these objects reflected the flash, they were stationary, and so were cancelled by subtracting previous image pair differences (step 5 in Section 2.4.3). They would also have been removed by the classifier due to their extent and shape.

3.2.2 Megachile sp. and A. mellifera tagged and reobserved

In Sections 3.3.1 and 3.3.2, most of the bees tagged were Bombus spp.; however (at Site E), we did also find and tag a Megachile sp., foraging on a yellow Dahlia sp. (Figure 2b). The floral preference experiment below involved using the tracking system to quickly find the tagged bees and then manually uniquely identify the individuals. On two occasions, the tagged Megachile sp. was re-identified using the system, once on the same area of Dahlia it was originally foraging on (Figure 2d), and once on a nearby Verbena bonariensis. This case study demonstrated that the retroreflectors can be used on smaller bees than Bombus spp. At Site F, we tried tagging four A. mellifera, but with less success due to the species' relatively small thorax. However, using the RTS, we found one of them, 210 min later, foraging on the same patch of Echinops ritro on which we had originally caught it (Figure 2a,c).

Newly tagged Apis mellifera (a) and Megachile sp. (b). (c) The A. mellifera later found foraging on an Echinops ritro. (d) The Megachile sp. later found foraging on a Dahlia sp. flower

3.3 Applications

3.3.1 Tracking foraging bees

At Site E, we successfully tagged approximately 25 wild bees. They were caught and tagged largely at random, but with a slight preference for tagging a variety of species (see Section 3.3.2 for species list). The tagging took place on 2 August 2020 and 3 August 2020. We ran the RTS on 2 August 2020 (cumulatively for 66 min between 11:36 and 19:04 BST), 3 August 2020 (for 82 min between 17:07 and 18:29 BST) and 5 August 2020 (cumulatively for 55 min between 11:57 and 14:15 BST). Most of this time it was being used to locate bees for the floral preference study (see Section 3.3.2), but data were recorded over a cumulative time of 3 hr and 23 min. In that time the system stored 1,303 flash photo pairs and acquired approximately 418 confirmed detections. These were semi-autonomously merged into 36 tracks (median length seven detections; min = 1, max = 45). These are plotted in Figure 3. Some of the tagged bees will be responsible for several of these tracks.

The paths of bees detected using the retroreflector-based tracking system. Each line represents one track (some colours may appear similar but this does not imply the bee was the same). Foraging area extended from 8 to 15 m from the tracking system. The plants available for forage are labelled

Besides providing an example of how one can use the system for monitoring high-resolution foraging behaviour, we also used the raw data to investigate the time spent on various foraging resources. Importantly, this approach allows an automatic allocation of foraging time. We segmented the image into plant species. Figure 3 illustrates the plants available for forage and their locations. Table 1 shows the time spent on each plant type. We normalised by the visible area for each plant species. The results in the table suggest that there is a considerable preference for time spent foraging on Nepeta, Dahlia and Phlox.
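This normalisation can be illustrated with a small sketch: the counts below are taken from Table 1, but the visible patch areas (in pixels) are hypothetical placeholders, since the measured areas are not reproduced here:

```python
# Detection counts (from Table 1) and hypothetical visible patch areas;
# the normalised share is (count / area), rescaled to sum to 100%.
counts = {"Nepeta": 139, "Dahlia": 64, "Phlox": 52}
areas  = {"Nepeta": 9000, "Dahlia": 7200, "Phlox": 4000}

density = {p: counts[p] / areas[p] for p in counts}
total = sum(density.values())
normalised_pct = {p: 100 * density[p] / total for p in counts}
```

Dividing by area corrects for the fact that a large patch will accumulate more detections simply by occupying more of the image.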

TABLE 1. Number of times a tag was detected in proximity to each flower species over the 3 hr 23 min. Normalising is performed by dividing count by patch area in photo
Plant                 Detection events   Percentage   Normalised percentage
Clematis                      4              1                  2
Rosa                          2              1                  0
Salvia                        2              1                  1
Solidago                     10              3                  9
Crocosmia                     0              0                  0
Pelargonium                   1              0                  1
Rudbeckia                     0              0                  0
Alstroemeria                  2              1                  1
Acanthus                     15              5                 10
Phlox                        52             16                 22
Verbena bonariensis          13              4                  9
Campanula                     2              1                  1
Dahlia                       64             19                 15
Nepeta                      139             42                 26
Verbascum                    25              8                  3
Fuchsia                       0              0                  0

3.3.2 Floral preference

We ran a simple experiment over the 3 days to investigate floral preference (again using Site E). We left the system running over a cumulative 11 hr (intermittently from approximately 11:30 BST until 17:30 BST on each day). While it was running, the fieldworker was largely occupied by other activities; however, the system would alert him via a sound from the mobile phone web interface to the detection of a bee. The web interface would show a cross marking the recent location of the bee, allowing him to quickly find the bee and take photos (using a standard camera) for later unique identification (using the shape of the tag).

Overall, 19 tagged bees (nine B. terrestris, four B. hortorum, three B. pascuorum, one B. lucorum/terrestris, one B. lapidarius and one Megachile sp.), consisting of a mix of queens and workers, were reobserved and identified 50 times using this approach. For each reobservation, the unique bee and its forage were recorded. Not all detected bees were recorded: some left before the fieldworker reached their location, and sometimes multiple bees were detected simultaneously.

We grouped those uniquely identified by species of bee, and constructed contingency tables (bee ID vs. flower species, see Table 2). We excluded observations in which a previous observation for the same bee was within 10 min, to avoid correlations due to temporal proximity. We wished to test whether bees have individual preferences for particular flower species. If individuals had no floral preference, we would expect no interaction in the contingency table: the expected proportion of visits to a given flower should be the same for all unique bees (this was our null hypothesis). We note that in practice, preferences might be driven by caste, subspecies differences or temporary environmental differences. However, as a simple test to demonstrate the RTS, it should suffice.

TABLE 2. Contingency tables for individual bee flower visits for Bombus hortorum (upper table) and Bombus terrestris (lower table). Other species had fewer than four visits each
Bombus hortorum:
                          Bee Id.
                          #23   #30   #6   #7   Total
Fuchsia ‘Garden News’       1     0    0    0       1
Nepeta sp.                  0     0    2    1       3
Stachys byzantina           0     4    0    0       4
Total                       1     4    2    1       8

Bombus terrestris:
                          Bee Id.
                          #32   #1   #4   #36   #18   #21   #22   #27   #28   Total
Nepeta                      5    0    0     0     1     0     0     1     0       7
Salvia ‘Hot Lips’           0    0    0     0     0     0     0     0     4       4
Verbena bonariensis         0    0    0     0     0     1     0     0     0       1
Agapanthus africanus        1    1    0     0     0     0     0     0     0       2
Dahlia ‘Bishop of York’     2    0    0     1     1     3     2     0     0       9
Stachys byzantina           0    0    1     0     1     0     0     0     0       2
Total                       8    1    1     1     3     4     2     1     4      25

Clearly, the expected and observed numbers in the cells of these contingency tables are small, so the χ2 test is inappropriate. We instead used Fisher's exact test.3 We only had enough samples to perform this analysis for two species. We found for both Bombus hortorum (p < 0.01) and B. terrestris (p < 0.001) that there were significant individual preferences for particular forage species beyond those expected by chance (i.e. if all individuals of the same species had the same preference for each plant type).
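The analysis used the M × N Fisher's exact implementation of Noutahi (2018). A rough stand-in, useful when that implementation is unavailable, is a Monte Carlo permutation test of independence; this sketch is an illustrative approximation of our own, not the implementation used in the analysis:

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

def permutation_independence_test(table, n_perm=10000):
    """Monte Carlo test of row/column independence for an M x N
    contingency table of counts, using the chi-square statistic as
    the measure of association (no large-sample assumptions)."""
    table = np.asarray(table)
    # Expand the table back into one (row_label, col_label) pair per count.
    rows, cols = np.nonzero(table)
    counts = table[rows, cols]
    row_obs = np.repeat(rows, counts)
    col_obs = np.repeat(cols, counts)

    def stat(r, c):
        t = np.zeros_like(table)
        np.add.at(t, (r, c), 1)
        # chi2_contingency rejects all-zero rows/columns; drop them.
        t = t[t.sum(axis=1) > 0][:, t.sum(axis=0) > 0]
        return chi2_contingency(t)[0]

    observed = stat(row_obs, col_obs)
    hits = 0
    for _ in range(n_perm):
        # Shuffling the column labels simulates the null hypothesis
        # that every bee shares the same floral preference.
        if stat(row_obs, rng.permutation(col_obs)) >= observed:
            hits += 1
    # Add-one correction keeps the estimated p-value away from zero.
    return (hits + 1) / (n_perm + 1)
```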

3.3.3 3D flight trajectory

A final brief experiment used two RTSs to compute the flight path of a bee (Figure 4b), to demonstrate this as a possible application. We used a single B. terrestris subsp. audax commercial nest (Biobest, standard hive, no cotton wool, containing about 80 workers) and tagged workers using the retroreflective tags. We placed the nest at Site G and positioned two RTS units facing the nest, approximately 15 m from the nest and from each other. The nest had been open for a couple of days, so the workers had been foraging and had gained familiarity with the landscape. We started photographing with the two RTSs as we saw workers depart (Figure 4d). The bee we successfully tracked flew briefly north but turned south east (Figure 4a) and quickly accelerated (to a recorded speed of about 6 m s−1), gradually climbing to an approximate height of 2.5 m (Figure 4c) before leaving the field of view of one of the cameras. We failed to record other flights, mainly due to problems with the nest (possibly due to the absence of cotton wool). This should be considered a proof-of-concept experiment.
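The reconstruction step is not spelled out above. One standard approach, assuming each RTS has a known (surveyed) position and orientation so that each detection yields a bearing vector, is to take the midpoint of the common perpendicular between the two observation rays. This sketch is ours, not code from the bee_track repository:

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Closest point between two observation rays (midpoint of the
    common perpendicular). Each ray is a camera position p and a
    bearing direction d towards the detected tag. Fails (singular
    matrix) if the two rays are parallel."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve for scalar distances t1, t2 along each ray minimising
    # |(p1 + t1*d1) - (p2 + t2*d2)|.
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return ((p1 + t1 * d1) + (p2 + t2 * d2)) / 2
```

In practice the two rays rarely intersect exactly (pixel quantisation, timing offsets), which is why the midpoint of the shortest connecting segment is used rather than a literal intersection.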

FIGURE 4. Reconstructed 3D flight trajectory. (a) Top-down projection of the flight path. Orange line and crosses indicate the estimated observation locations; red line and circles indicate the fitted path, with markers each second; blue crosses indicate landmarks. (b) 3D perspective projection. Dashed red line and circles indicate estimated observation locations; blue line and circles indicate the fitted path, with markers each second; crosses and vertical lines indicate various landmarks; red lines indicate vectors centred on the two cameras, each associated with an observation. Ground plane grid squares are 1 × 1 m. (c) Height of the bee during flight. (d) Photo taken from the new Pi 4 tracking system. The researcher and nest are in the foreground; a tagged bee is illuminated and visible near the centre of the frame. The ‘small tree’ landmark is the tree to the right of the image.

4 DISCUSSION

Researchers have developed several methods to directly track individual bees. Most of these methods use an electronic tag, either producing an active radio signal or passively providing a radar reflection. Retroreflective tags have also been explored, typically in dark environments and/or at close range. The RTS described in this paper works at greater range and in daylight, using such tags, thanks to two innovations. First is the use of a camera flash combined with a global electronic shutter. This massively reduces the ambient light detected while also providing a very powerful light source. Second is the algorithm devised to remove false positives (FPs) by both comparing with previous photos and classifying using previous training data. The main disadvantage of the approach, compared to the radar and VHF tags, is the method's limited range of about 35 m. However, its advantages complement those methods: The system can provide very precise tracking data (in 2D or 3D), allowing, for example, visits to individual patches of flowers to be identified. It is low cost compared to the other methods. Finally, the tag itself is very small and simple, and does not require the aerial of the electronic devices. In our brief experiment, we found no evidence of behaviour change (although care must be taken if using cotton wool as insulation in an artificial hive). The majority of bee species in the United Kingdom would probably find the electronic tags too cumbersome. Given that we were able to tag and track an A. mellifera, it is likely that the retroreflective tag could be used with many other species of bee of around this size.

We found the initial experiment, tagging a foraging B. terrestris (Section 3.2.1) and using the RTS to later detect the bee, was particularly convincing as to the method's effectiveness. The pine tree would not have been a location we would have looked for the bee, if trying to reobserve during a transect. Even if the bee had returned to the wild-flower area, finding it manually would have been laborious and unreliable. The simplicity and effectiveness of the detection demonstrates the benefits the system can bring to reobservation studies in particular.

The tagging of A. mellifera and Megachile sp. in Section 3.2.2 demonstrates an important capability, as these are considerably smaller than the Bombus spp. tagged elsewhere. The smaller retroreflectors used do reduce detection range, but not as severely as one might expect. We speculate this is due to the A/d4 relationship in signal strength (the returned signal scales with tag area A but falls with the fourth power of distance d). For example, the tag on the A. mellifera probably has only 4 mm2 visible, compared to a typical tag on a Bombus sp. that might have up to 20 mm2 visible. In terms of simple signal strength, the reduced area is equivalent to a 33% reduction in range (e.g. if the RTS works to 30 m with the large tag, it will work to about 20 m with the smaller tag). The A. mellifera was detected approximately 16 m from the RTS, which fits this hypothesis.
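This scaling can be checked with a one-line calculation. Assuming the returned signal falls as tag area over the fourth power of distance (flash illumination falling as 1/d2 and the returned beam spreading as a further 1/d2), the maximum range scales as the fourth root of area:

```python
# If signal strength ~ area / distance^4, the maximum detection range
# scales as the fourth root of the tag area (an assumption consistent
# with the ~33% range reduction quoted in the text).
def scaled_range(base_range_m, base_area_mm2, new_area_mm2):
    return base_range_m * (new_area_mm2 / base_area_mm2) ** 0.25

# A 20 mm^2 tag detectable to 30 m implies a 4 mm^2 tag works to ~20 m:
# 30 * (4/20)^0.25 ≈ 20.1 m, i.e. roughly a 33% reduction.
reduced = scaled_range(30, 20, 4)
```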

Anecdotally, field studies are often limited by fieldworker time. Section 3.3.1 demonstrated the system's capability to perform detection and tracking automatically. To compare our results with anecdotal forage preference estimates, we asked the owner of the garden to recall the three plant species they felt were most popular with bumblebees over August 2020. They suggested Nepeta, Dahlia and Verbascum. This fits our results quite closely, although we observed Phlox rather than Verbascum. This suggests that the automatic monitoring results reflect plausible foraging activity.

To move beyond single case studies, we performed a simple floral preference experiment (Section 3.3.2) and demonstrated the automatic monitoring and tracking that the system can provide. We also demonstrated in a proof-of-concept experiment how one can combine data from two systems to infer the bee's flight path in 3D (Section 3.3.3).

There is a considerable range of experiments and research questions that could be supported with this technology; we mention just a few, then discuss future improvements to the underlying method.

4.1 Future experiments

First, the most obvious initial use is as part of a standard mark–reobservation study, to increase the detection rate. The system could also be left to run autonomously for long periods, further increasing the rate of reobservation. Goulson (2003) wrote that ‘In more typical, patchy landscapes, the mark–reobservation method seems to be of little use without a huge team of observers to search for bees’. We propose that a small number of these tracking systems deployed at different locations can play the role of this ‘huge team’.

Second, many bees only forage within 100–300 m of the nest (Gathmann & Tscharntke, 2002). Given the low cost of the tracking system, we envisage that one could monitor a useful portion of the foraging range of such doorstep foraging species, by distributing several tracking systems over much of the plausible area likely to be visited.

Third, we found the 3D path reconstruction was relatively simple to achieve with this system, and it is likely to be of considerable interest to neuroethology: it allows one to record initial learning flights in considerable detail and to investigate how modifications to the environment lead to changes in behaviour, allowing inferences to be made about the cognitive processes involved in the bee's navigation and perception. Ideally, more than two tracking systems should be used together, to further refine the path and to provide some redundancy in the data (see ‘Tracking in three dimensions’ on p. 420 in Dell et al., 2014; Straw et al., 2011).

Fourth, the RTS can remain in a fixed location, monitoring part of the landscape for considerable periods (with provision of electricity being its only limit). We suggest long-term floral preference studies could easily be conducted. We also noticed anecdotally that in our data we would often see several bees foraging at the same time, then have a period with none. We wonder if this is chance, or due to cyclic patterns in nectar depletion or possibly due to slight changes in the environment (temperature or wind). The high spatial resolution could also allow many individual flowers to be monitored for tagged bees, simultaneously.

Finally, the system's low cost, small size, ease of deployment and simplicity of use will give more researchers greater access to direct tracking. For example, early nest finding could be achieved by tagging a foraging queen and tracking her back to the nest. One might best achieve this with the harmonic radar or VHF radio tags, but access and use of these tools is quite limited. Anecdotally, we have noticed how queen bees, like workers, return on a similar path (or ‘bee line’) regularly to a forage patch. One could move or distribute the tracking system(s) along this path to eventually discover the nest.

4.2 Future improvements

We first note that some simple hardware improvements, such as mounting on a rotating platform (which we experimented with previously) and using a slightly telephoto lens and a Fresnel flash lens could considerably increase the area scanned, potentially to about 1 ha.

The next methodological improvement we are investigating is the potential for unique bee identification, by combining the retroreflectors with colour filters so that a colour camera can distinguish different tagged bees. Further refinement could be achieved by adding additional filters to the camera flashes. This would allow, for example, the floral preference study to be run without fieldworker intervention.

Although tagging bees individually with the retroreflective labels is relatively simple, it might also be possible to tag all the bees using retroreflective powder, applying the mass-marking technology described by Osborne et al. (2008).

The capacity to take four photos each second suggests that we should use the sequence of images to improve the detection and tracking accuracy (both in 2D and 3D). Future work should investigate the use of Bayesian approaches for such integration. A particle filter is probably an appropriate method as the dimensionality of the domain is low and the state space is nonlinear with a multimodal posterior. In 3D, the particle filter would be particularly effective as the likelihood function will consist of distributions along various vectors (each associated with a bright pixel in a photo).
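A minimal bootstrap particle filter of the kind suggested here can be sketched as follows. This is a 2D illustration with a constant-velocity motion model and a Gaussian detection likelihood; all parameter values and the data layout are illustrative assumptions, not tuned values from the system:

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(detections, n_particles=1000,
                    process_noise=0.5, obs_noise=2.0):
    """Minimal bootstrap particle filter for a 2D track.
    State per particle: [x, y, vx, vy]. Constant-velocity motion
    model with Gaussian process noise; Gaussian likelihood around
    each detection (a pixel flagged as a retroreflection).
    `detections` is a list of (x, y) positions, one per frame."""
    # Initialise particles around the first detection.
    parts = np.zeros((n_particles, 4))
    parts[:, :2] = detections[0] + rng.normal(0, obs_noise, (n_particles, 2))
    track = []
    for z in detections:
        # Predict: constant-velocity step plus process noise.
        parts[:, :2] += parts[:, 2:]
        parts += rng.normal(0, process_noise, parts.shape)
        # Weight: Gaussian likelihood of the detection given each particle.
        d2 = ((parts[:, :2] - z) ** 2).sum(axis=1)
        w = np.exp(-0.5 * d2 / obs_noise**2)
        w /= w.sum()
        # Resample (multinomial, for brevity; systematic resampling
        # would have lower variance).
        parts = parts[rng.choice(n_particles, n_particles, p=w)]
        track.append(parts[:, :2].mean(axis=0))
    return np.array(track)
```

In the 3D case described above, the Gaussian likelihood around a 2D detection would be replaced by a likelihood concentrated along each camera's observation ray, which the particle representation handles naturally.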

Finally, building on our earlier work using alternative platforms, we found that, as the tracking system is fairly lightweight, it can be used with drones, balloons and 10-m masts. Each of these has its own advantages; in particular, one might envisage combining this with work such as Le et al. (2017), tracking the tagged bee with an unmanned aerial vehicle (UAV), allowing the evaluation of landscape features as corridors or barriers to bumblebees.

5 CONCLUSIONS

We have presented a novel method for tracking bees using small, lightweight retroreflective tags, using low-cost equipment. This method complements electronic and indirect tracking tools, for example it allows tracking of smaller bees than harmonic radar or VHF radio. We anticipate that RTS will support a wide range of ecological research.

ACKNOWLEDGEMENTS

This project was part funded by the Sheffield University Socially Enterprising Researcher grant (2016, 2017) and the Eva Crane Trust (2019). The authors thank Jonny Sutton, for providing equipment and expertise during initial experimentation with the retroreflector tag approach.

CONFLICT OF INTEREST

The authors have no conflict of interest.

AUTHORS' CONTRIBUTIONS

M.T.S. conceived the idea for the tracking system and designed and built the system; M.L. assisted with data collection; R.C. provided useful insight into bee behaviour, tagging and species identification. All authors contributed critically to the drafts and gave final approval for publication.

PEER REVIEW

The peer review history for this article is available at https://publons.com/publon/10.1111/2041-210X.13699.

DATA AVAILABILITY STATEMENT

Experimental data collected during the experiment have been archived (Smith, 2021c) in the University of Sheffield's Research Data Catalogue and Repository4: https://doi.org/10.15131/shef.data.14650161. Code: Two Python modules have been written to collect and analyse the camera images in real time, on board the tracking system. The full code is available to download and use from https://github.com/lionfish0/bee_track (Smith, 2021b) and https://github.com/lionfish0/retrodetect (Smith, 2021a).

• 1 LIFE STOPVESPA (2019, p. 10) reported that it costs €100,000 to construct the harmonic radar system.
• 2 This can be made lighter if required by removing the fabric on the reverse of the reflector.
• 3 M × N contingency implementation thanks to Noutahi (2018), using Markov chain Monte Carlo (MCMC) sampling to compute the p-value.
• 4 https://www.sheffield.ac.uk/library/rdm/orda