Artificial neural networks are revolutionizing entomological research

The application of artificial intelligence (AI) in entomological research has gained significant attention in recent years. This review summarizes the current state of research on the potential of AI methods in various subfields of entomology, such as behavioural biology, biodiversity research, climate change research, pest management, and disease vector control. In some cases, AI-based species identification methods based on deep learning neural network models have been shown to outperform traditional morphological identification methods in terms of accuracy and speed. Behavioural biology research has been enhanced through the use of AI-based tracking systems that can classify insect behaviour and movement patterns. Habitat modelling has also been improved with the use of AI, allowing for the creation of more accurate models that can predict insect distribution and abundance. Climate change and biodiversity research have benefited from AI-driven tools that can analyse large datasets and predict the impact of environmental changes on insect populations. In agriculture, pest management has been revolutionized by AI methods, with the development of smart traps and monitoring systems that can detect and identify pest species in real time, enabling targeted control measures. Disease vector control has also been improved through the use of AI-based predictive models, which can identify areas at risk of disease transmission and aid in the development of effective control strategies. In conclusion, AI methods have the potential to revolutionize many aspects of entomological research. Future research should focus on developing more sophisticated AI tools and integrating them into entomological research to address even more complex ecological questions.

The ability of AI algorithms to identify patterns in data that are not readily visible or accessible to humans is a major advantage, leading to new insights and discoveries that were previously not possible (Angermueller et al., 2016; Eraslan et al., 2019). Currently, AI is transforming the way we live and work; however, its impact on entomological research is still underestimated.
Insects play a vital role in many ecosystems, from pollinating crops to serving as a food source for other animals. For example, many bird species depend on insects for their diet. Nevertheless, the importance of insect communities is often overlooked, which is a tragedy because many insect species face threats due to climate change, habitat loss, and invasive species. As a result, entomological research has become increasingly important in recent years, with scientists focusing on research questions in the fields of insect biology, behaviour, pest management, evolution, and ecology. AI has emerged as a powerful tool for analysing large datasets and making predictions about complex ecological systems. In the field of entomology, AI methods have the potential to revolutionize research in several areas, including behavioural biology, habitat modelling, behavioural ecology, evolutionary biology, climate change and biodiversity, pest management, and disease vector control (Amarathunga et al., 2021; Apasrawirote et al., 2022; Høye et al., 2021; Khan et al., 2022; Le et al., 2020; Tannous et al., 2023; Teixeira et al., 2023; Visalli et al., 2017; Wei et al., 2022). This review provides an overview of the current state of research on the potential of AI methods in entomological research. First, I describe three TensorFlow™ AI models that can be used for species identification. Then, I review recent developments in AI methods in various fields of entomology.
I also highlight the challenges and opportunities associated with the use of AI in entomology and provide recommendations for future research directions. This review aims to demonstrate the potential of AI methods for advancing our understanding of insects and their roles in our ecosystems. By harnessing the power of AI, we can make significant progress in addressing the challenges facing insect populations and in developing effective strategies for their conservation and management.

| Species identification
Insects are one of the most diverse and abundant groups of animals on the planet, with estimates of over 1 million species. Traditional methods of identifying insect species rely on morphological, genetic, or behavioural characteristics, and some of these methods can be time-consuming and are often error-prone. In recent years, AI methods have been developed to improve the accuracy and speed of insect identification in the lab and in the field, which is of key importance for many entomological research questions as well as for pest monitoring and control. For example, acoustic (e.g., Folliot et al., 2022) or computer vision methods based on convolutional neural networks (CNNs) can be used to identify and classify insect species more accurately than traditional methods based on morphological or behavioural characteristics (e.g., Tannous et al., 2023).
Different types of image datasets have been used to train CNNs, including museum collections, citizen science projects, and field surveys (Teixeira et al., 2023). A major challenge is the collection of high-quality images with accurate species labels to ensure the best possible performance of AI algorithms.

Obtaining a comprehensive and diverse dataset for training CNNs in insect species identification poses several significant challenges. These challenges are inherent to the nature of this complex task and the characteristics of insect datasets. In the following text, I list some key challenges:
• Species Diversity: Insects represent an incredibly diverse group with a vast number of species. Obtaining a dataset that adequately covers this diversity is challenging due to the sheer number of insect species worldwide.
• Imbalanced Distributions: Insect species are not evenly distributed in nature, and some species may be more prevalent than others. This can lead to imbalanced datasets where certain species have more training examples than others. Imbalanced datasets can bias the model towards the over-represented classes.
• Labeling Complexity: Accurate labeling of insect species can be a complex task, requiring expertise in entomology. Identifying and annotating large numbers of insect images with correct species labels can be time-consuming and may require collaboration with entomology experts.
• Variability in Life Stages: Insects undergo various life stages (e.g., egg, larva, pupa, and adult), and their appearances can differ significantly at each stage. A comprehensive dataset should include images representing different life stages to ensure the model's ability to generalize across these variations.
• Background and Environmental Variability: Insects are often found in diverse environments, and the background in images can vary significantly. Training a model that is robust to these variations is crucial for real-world applications, but collecting images that cover diverse backgrounds can be challenging.
• Intra-species Variability: Even within a single species, there can be considerable variability in terms of colour, size, and morphology. A comprehensive dataset should capture this intra-species variability to ensure that the model can recognize the same species under different conditions.
• Limited Publicly Available Datasets: The availability of publicly accessible and well-annotated insect datasets may be limited. Creating a large, diverse dataset often requires extensive resources and collaboration, making it challenging for individual researchers or organizations.
To address these challenges, researchers often collaborate with entomologists, leverage citizen science initiatives, and employ data augmentation techniques (see below). Additionally, transfer learning approaches, where models pretrained on large datasets are fine-tuned for insect identification, can be useful in scenarios with limited labelled data. Despite these challenges, the development of accurate and robust insect species identification models is crucial for applications in agriculture, biodiversity monitoring, and ecological research. Kasinathan et al. (2021), for example, applied such CNN classifiers, whose typical building blocks are the following:
• Pooling layers: Pooling layers are used to downsample the spatial dimensions of the input, reducing the computational complexity and focussing on the most important image information. Max pooling, for example, retains the maximum value in a local region, emphasizing the most activated features from the previous convolutional layers. Pooling also helps in creating a form of spatial hierarchy, where higher layers capture more abstract and generalized features.
• Convolutional neural networks typically consist of multiple convolutional and pooling layers stacked on top of each other. Each layer learns increasingly complex features. Early layers might capture simple patterns like edges, while deeper layers may recognize more complex structures or even entire objects. This hierarchical representation allows the network to understand and differentiate between intricate patterns by combining simpler features.
• Non-linearity activation functions: Activation functions, such as ReLU (Rectified Linear Unit), introduce non-linearities to the network by transforming the weighted sum of inputs at each node (neuron) of a network layer; ReLU, for example, passes positive values unchanged and sets negative values to zero. This non-linearity is crucial for the model to learn complex relationships and patterns in the data. It enables the network to approximate more intricate functions, making it capable of capturing the complex and non-linear nature of visual data.
• Fully connected layers: Towards the end of the network, fully connected layers combine the learned features to make predictions or classifications. These layers take into account the high-level features captured by earlier layers and use them to make decisions about the input.
• Training with backpropagation: CNNs are trained using backpropagation, an optimization algorithm that adjusts the weights of the network to minimize the difference between the predicted output and the actual target. The learning process allows the network to adapt and improve its feature extraction capabilities over time.
Another possibility to deal with a limited image dataset is to adopt a CNN pretrained on collection-based images, extend it to a new taxonomic group, and extract relevant features to classify insect species (Knyshov et al., 2021). This approach is called transfer learning and makes use of a pre-trained model as a starting point for training a new model on a different task. Using this method, Knyshov et al. (2021) demonstrated that even a rather small image dataset can be sufficient for precise species recognition. Valan et al. (2019) developed an effective method of CNN feature transfer, which achieves expert-level accuracy in taxonomic identification of insects with training sets of 100 images or less per category. These authors extracted a rich representation of intermediate- to high-level image features from the CNN architecture VGG16 pretrained on the ImageNet dataset. This information was submitted to a linear SVM classifier, which was trained on the target problem. In some AI applications, it makes sense to detect the location of different insects on an image with the help of an object detector before passing images to a classifier (e.g., Norman et al., 2022). This method is especially important if sticky traps are used together with an automated camera system for automated insect species classification.
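The feature-transfer recipe of Valan et al. (2019) — freeze a pretrained backbone, then train only a lightweight classifier on its features — can be sketched in plain NumPy. The "backbone" below is a stand-in random projection (an assumption for illustration; Valan et al. used VGG16 activations), and the head is a ridge-regression classifier rather than an SVM, for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def backbone(images, W):
    """Stand-in for a frozen pretrained CNN: a fixed random projection
    followed by ReLU (Valan et al. used VGG16 activations instead)."""
    return np.maximum(images @ W, 0.0)

# Tiny synthetic "image" dataset: two classes separated in pixel space.
X = np.vstack([rng.normal(0.0, 0.3, (50, 64)),
               rng.normal(1.0, 0.3, (50, 64))])
y = np.array([0] * 50 + [1] * 50)

Wb = rng.normal(size=(64, 32))          # frozen backbone weights stay fixed
feats = backbone(X, Wb)                 # extracted features, shape (100, 32)

# Lightweight head: ridge regression on one-hot labels
# (a linear SVM, as in Valan et al., would fill the same role).
onehot = np.eye(2)[y]
Wh = np.linalg.solve(feats.T @ feats + 1e-3 * np.eye(32), feats.T @ onehot)
pred = (feats @ Wh).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```

Only the small head is fitted; the backbone weights never change, which is why the approach works with a hundred or fewer labelled images per category.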
Several studies have demonstrated the effectiveness of AI-based insect identification in the field. For example, one study found that a CNN achieved an accuracy rate of 97% in identifying wild bee species (Buschbacher, Ahrens, et al., 2020), and others identified various pest species with similar precision (Johari et al., 2023; Liu et al., 2022).
Another study used an AI algorithm to classify ladybird beetles with an accuracy rate of 92% (Venegas et al., 2021). In some cases, researchers have used machine learning algorithms to analyse images of insects and identify species with greater accuracy than human experts (Júnior & Rieder, 2020; Høye et al., 2021; Xia et al., 2018). In summary, the performance of AI has significant implications for the field of entomology and can improve our understanding of insect diversity (Gerovichev et al., 2021) and species distribution (Ong & Hamid, 2022).

| Image-based insect identification
AI systems can be trained to recognize specific morphological features of insects by analysing large image datasets. By analysing more or less high-resolution images of insects, AI algorithms can identify key features and compare them with a database of known species (Apasrawirote et al., 2022; Buschbacher, Ahrens, et al., 2020; Gerovichev et al., 2021; Kaya et al., 2014, 2015; Le et al., 2020). Various camera systems are available for generating such images. To measure the rate of correct classification on unseen data, image datasets need to be split into training and test datasets. There are TensorFlow functions available for this task (see: https://www.tensorflow.org/datasets/splits), and often it is possible to assign images randomly to one of these categories. For example, the following scikit-learn command assigns 80% of the data to the training and 20% to the test dataset, whereby X refers to the image data and Y to their labels:

(trainX, testX, trainY, testY) = train_test_split(data, labels, test_size=0.2, random_state=42)

It is not always necessary to train a CNN from scratch because there are some pre-trained TensorFlow™ Keras models available for the identification of various insect species.

from tensorflow.keras.applications import EfficientNetB7
model = EfficientNetB7(include_top=True, weights='imagenet')
Inception V3 and other pre-trained network models such as EfficientNet, MobileNetV2, Xception, and VGG19 were adopted for the classification of the 1000 classes contained in the ImageNet dataset (Russakovsky et al., 2015). Yang et al. (2021) compared the deep learning networks Inception V3, VGG16_bn, and ResNet50 for classifying insect species. The experimental results showed that all three methods had high accuracy: Inception V3 reached 98.69%, VGG16_bn reached 97.80%, and ResNet50 reached 97.94%. Similarly, Ong and Hamid (2022) have shown that the InceptionV3 model has advantages over other models due to its high performance in distinguishing insect order and family. The InceptionV3 base model can be expanded to improve the recognition of common insect species like butterflies, dragonflies, grasshoppers, ladybirds, and mosquitoes (see https://www.kaggle.com/code/aryashah2k/insect-type-classification-inceptionv3).
EfficientNet was tested and compared with other CNNs by Tan and Le (2019). Monis et al. (2022) successfully used this CNN for the identification of various crop insects. Faster R-CNN and EfficientNet with appropriate pre-processing and data augmentation can lead to very accurate recognition of challenging insect classes, with accuracies from 81.27% to 99.1% (Deserno & Briassouli, 2021).
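Data augmentation of the kind mentioned above can be sketched with a few NumPy operations. This is only a minimal illustration; real pipelines (e.g., the augmentation layers in Keras) add shears, zooms, and colour jitter as well:

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(img, rng):
    """Return a randomly augmented copy of a (H, W) image array."""
    out = img.copy()
    if rng.random() < 0.5:
        out = np.fliplr(out)              # random horizontal flip
    k = rng.integers(0, 4)
    out = np.rot90(out, k)                # rotate by k * 90 degrees
    out = out * rng.uniform(0.8, 1.2)     # brightness jitter
    return np.clip(out, 0.0, 1.0)         # keep pixel values in [0, 1]

# Expand one labelled image into a batch of eight augmented variants.
img = rng.random((224, 224))
batch = np.stack([augment(img, rng) for _ in range(8)])
print(batch.shape)  # (8, 224, 224)
```

Because each variant keeps the original species label, augmentation multiplies the effective size of a small labelled dataset at no annotation cost.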
If pre-trained models are used, the accuracy of correct species identification may not be high enough for scientific purposes, because correctly labelled images of the target species are usually missing from the training data of such models. If a correctly labelled image dataset is available, the following deep-learning Keras model can be used to train a classifier for 10 image categories (further convolutional and pooling blocks can be stacked before the Flatten layer):

# Create the TensorFlow Keras model
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential

model = Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(224, 224, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
The model above combines convolutional steps with a defined kernel size and pooling layers to extract relevant features for image classification. The idea behind CNNs is to reduce the image information that is necessary for correct classification to a subset of non-linear image details, which makes data processing more efficient. As a consequence, it is often difficult to tell which image features a complex CNN uses for reliable object classification. In order to learn more about the image recognition of a trained CNN, one can use test images that only contain certain features that may be used for classification.
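One systematic way to run such feature probes is an occlusion map: slide a blank patch across a test image and record how far the classifier's score drops at each position. The scoring function below is a hypothetical stand-in for a trained CNN's class score (an assumption for illustration):

```python
import numpy as np

def occlusion_map(img, score_fn, patch=8, fill=0.0):
    """Slide a blank patch over the image and record the score drop at
    each position; large drops mark regions the classifier relies on."""
    base = score_fn(img)
    h, w = img.shape
    drops = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = img.copy()
            occluded[i:i + patch, j:j + patch] = fill
            drops[i // patch, j // patch] = base - score_fn(occluded)
    return drops

# Hypothetical classifier that keys on the mean brightness of the
# upper-left corner (a stand-in for a trained CNN's class score).
def score_fn(img):
    return img[:8, :8].mean()

img = np.ones((32, 32))
drops = occlusion_map(img, score_fn, patch=8)
print(drops.argmax())  # flattened index 0: the upper-left patch matters most
```

The map localizes diagnostic regions (e.g., a wing pattern) without access to the network's internals, which makes it a useful sanity check for trained classifiers.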

| Genetic species identification
An ANN method was used for classification and identification of Anopheles mosquito species based on internal transcribed spacer 2 (ITS2) data of ribosomal DNA sequences (Banerjee et al., 2008).
In this genetic approach, the authors compared two different multilayered feed-forward neural network architectures, termed multi-input single-output neural network (MISONN) and multi-input multi-output neural network (MIMONN). In this study, the bases A, C, T, and G at the network inputs were coded with binary values, and the species names at the network output were coded with real numbers.
The bases in the network input sequences were assigned as A = {1 0 0 0}, T = {0 1 0 0}, G = {0 0 1 0}, and C = {0 0 0 1}. This input assignment requires four times as many active nodes in the input layer of the network as there are bases in the genetic sequence. MISONN outperformed MIMONN regarding the accuracy of species identification. Recently, a powerful hierarchical artificial neural system (HANS) was proposed for genus classification and species identification in mosquitoes (Venkateswarlu et al., 2012).
HANS was also trained on ITS2 data of ribosomal DNA sequences. HANS is composed of two levels: the first level has a single network that serves as a genus classifier, and the second level has multiple networks that perform as species identifiers.
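The binary base encoding described above (A = {1 0 0 0}, T = {0 1 0 0}, G = {0 0 1 0}, C = {0 0 0 1}) can be sketched as follows; note how a sequence of n bases expands to 4n input nodes:

```python
import numpy as np

# One-hot scheme of Banerjee et al. (2008): each base maps to four
# input nodes, so a sequence of n bases needs 4 * n input nodes.
CODES = {'A': [1, 0, 0, 0], 'T': [0, 1, 0, 0],
         'G': [0, 0, 1, 0], 'C': [0, 0, 0, 1]}

def encode(seq):
    """Flatten an ITS2-like DNA sequence into a binary input vector."""
    return np.array([bit for base in seq for bit in CODES[base]])

x = encode("ATGC")
print(x)       # [1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1]
print(len(x))  # 16 = 4 bases x 4 input nodes
```

The resulting fixed-width binary vector is what the input layer of MISONN-style networks consumes, one node per bit.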
In some cases, morphological and genetic data are not available for insect species identification, and it may be helpful to analyse acoustic or vibratory signals that are related to distinct species.

| Sound-based species identification
Artificial intelligence systems can be trained to recognize species-specific sounds generated by insects. This is useful in identifying and classifying species that are difficult to observe visually, such as nocturnal insects or those that are too small to be easily seen. By analysing the unique sounds made by different species (Ferreira et al., 2023), AI algorithms can accurately identify and classify most of them even in the presence of moderate background noise (Ashurov et al., 2022; Høye et al., 2021). For example, Santiago et al. (2017) described the sound-based detection of pests in stored grains using an ANN that analyses Mel-frequency cepstral coefficients (MFCCs) for sound classification. Using MFCCs for acoustic feature extraction resulted in a test accuracy of more than 90%. Similarly, the detection of adults of 10 species belonging to six genera of common stored-grain insects was achieved with a mean average precision of 94.77% by extracting image features with the help of a CNN (Li, Zhou, et al., 2020).
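A minimal sketch of MFCC extraction illustrates the principle; the Hann window, the simplified triangular mel filterbank, and the DCT-II below are simplifying assumptions relative to what audio libraries such as librosa implement in full:

```python
import numpy as np

def mfcc(signal, sr, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    """Minimal MFCC sketch: framing, power spectrum, mel filterbank,
    log compression, then a DCT-II over the mel energies."""
    # Frame the signal and apply a Hann window.
    n_frames = 1 + (len(signal) - n_fft) // hop
    idx = np.arange(n_fft)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hanning(n_fft)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2

    # Triangular mel filterbank between 0 Hz and sr / 2.
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = inv(np.linspace(mel(0), mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fb[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising edge
        fb[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling edge
    logmel = np.log(power @ fb.T + 1e-10)

    # DCT-II to decorrelate the log-mel energies into cepstral coefficients.
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mels)))
    return logmel @ dct.T

sr = 16000
t = np.arange(sr) / sr                  # one second of audio
tone = np.sin(2 * np.pi * 4000 * t)     # stand-in for an insect sound
feats = mfcc(tone, sr)
print(feats.shape)                      # (n_frames, 13)
```

Each row is a 13-coefficient summary of one short frame; a classifier such as the ANN of Santiago et al. (2017) then operates on these vectors instead of the raw waveform.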
For sound data recording, various microphone systems are available. Every type of microphone has its own amplitude and directional sensitivity and determines both the signal-to-noise ratio and the quality of data recording. For more information about various microphone types and microphone characteristics, see: https://www.grasacoustics.com/products or https://www.bksv.com. If the task is the recording of a rather loud signal in the vicinity of the microphone, inexpensive tie (lavalier) microphones may be used as well (e.g., Sennheiser XS Lav USB-C).
Sound recordings can also be made with inexpensive handheld recorders and sound level meters (e.g., Zoom F2, Zoom H6, and PeakTech 8005) if the frequency range is located within the audible range of the human ear.

| Behavioural ecology
AI systems can be used to study the complex behaviours of insects in the context of foraging, mating, and social interactions, which are critical for understanding the ecological roles of insects in various ecosystems.

| Insect behaviour
Several scientific studies have demonstrated the potential of AI systems for tracking and monitoring the movements and pose of insects and other animals (Mathis et al., 2018; Mathis & Mathis, 2020; Ratnayake et al., 2021). Object tracking can be as accurate as marker-based tracking and is fast enough for closed-loop experiments, which is important for understanding the link between neural systems and behaviour. Usually, a sequence of video frames is used as input for a CNN for automatic object motion and/or pose analysis, a task belonging to computer vision. CNNs used for this purpose were often trained on a high number of scenes showing object motion in complex environments. This machine learning method has been used to estimate honeybee posture, distinguish between pollen-bearing and non-bearing honeybees (Sledevic, 2018), monitor interactions of honeybees in a hive (Boenisch et al., 2018), and monitor hive entries and exits (see: http://matpalm.com/blog/counting_bees/). A famous markerless pose and motion tracking tool is DeepLabCut (Mathis et al., 2018), which requires few data to match human object tracking performance and is applicable to a wide range of animals. AI-enabled video tracking was also used to study the gait dynamics of the fruit fly Drosophila melanogaster in a laboratory setting (Pereira et al., 2019). In this study, a training phase with as few as 100 frames resulted in 95% of peak performance. Fazzari et al. (2023) proposed a novel and simple automated method, extendable to other animals, for the creation of biohybrid intelligent sensing systems to be exploited in various ecological scenarios. In this study, the pose of the antennae of cockroaches was automatically tracked and evaluated using a CNN to relate the antennal pose response to chemical stimuli. Singh et al. (2023) trained artificial recurrent neural network agents using deep reinforcement learning to locate the source of simulated odour plumes that mimic features of plumes in a turbulent flow. This in silico approach was able to mimic odour tracking by flying insects. Folliot et al. (2022) developed and applied, for the first time, an acoustic survey to monitor pollination by insects and wood use by woodpeckers in a protected alpine forest in France.
These authors trained a CNN on spectrographic images of long-lasting outdoor sound recordings to automatically detect the sounds of flying insects' buzzing and woodpeckers' drumming as they forage and call. Another study used a combination of AI-enabled sensors and satellite imagery to monitor the population dynamics of desert locusts in West Africa (Gómez et al., 2021). Their results suggest that soil moisture data retrieved between 95 and 12 days before a sighting provided sufficient information to achieve acceptable predictive performance regarding possible outbreaks.

| Modelling insect behaviour
AI has enabled researchers to model and simulate insect behaviours in silico, providing a powerful tool for understanding insect ecology and predicting responses to environmental changes (Lichocki et al., 2012). Machine learning techniques have been used to analyse large datasets of insect behaviour, allowing researchers to create detailed models that accurately reflect the complex interactions between insects and their environment. For example, Ratnayake et al. (2021) described an elaborate insect-detection method in which video analysis software performs background subtraction and deep learning-based object detection to accurately and efficiently track the motion of single insects among a cluster of wildflowers.
Using a hybrid approach, it was possible to study honeybee foraging outdoors with a dataset that includes complex background detail, wind-blown foliage, and insects moving into and out of occlusion beneath leaves and among three-dimensional plant structures.
This hybrid algorithm combines a background subtraction method (KNN-based background/foreground segmentation provided in the OpenCV library) with the powerful deep learning-based object detection system YOLO9000, which can detect over 9000 object categories (Redmon & Farhadi, 2017). Such CNN-based AI models can be used to test hypotheses about how endangered insects, like bumblebees, respond to environmental changes, such as temperature, humidity, or food availability, and to predict the effects of such changes on insect populations (Martins et al., 2015). In this context, AI models can aid conservation efforts by predicting the potential negative impacts of environmental changes on insect populations.
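The background-subtraction half of such a hybrid pipeline can be sketched with a pixelwise median background (a simpler stand-in for OpenCV's KNN segmenter, which adapts its background model online):

```python
import numpy as np

def detect_moving_insect(frames, thresh=0.2):
    """Per-frame detection by median-background subtraction: subtract
    the pixelwise median background, threshold the absolute difference,
    and return the centroid of the foreground blob in each frame."""
    background = np.median(frames, axis=0)
    centroids = []
    for frame in frames:
        mask = np.abs(frame - background) > thresh
        ys, xs = np.nonzero(mask)
        centroids.append((ys.mean(), xs.mean()) if len(ys) else None)
    return centroids

# Synthetic clip: a bright 'insect' moving left to right over a grey scene.
rng = np.random.default_rng(1)
frames = np.full((10, 48, 48), 0.5) + rng.normal(0, 0.01, (10, 48, 48))
for k in range(10):
    frames[k, 20:23, 4 * k:4 * k + 3] = 1.0   # 3x3 insect at x = 4k
centroids = detect_moving_insect(frames)
print(centroids[0], centroids[9])
```

In the full hybrid of Ratnayake et al. (2021), such foreground candidates are handed to a deep object detector, which rejects wind-blown foliage and other non-insect motion.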

| Large data analysis
Among other machine learning methods, AI systems have become increasingly useful in analysing large datasets for the study of animal behaviour, based on the extraction of features recorded by various sensor types (Valletta et al., 2017). Machine learning algorithms based on CNNs can be trained to identify patterns in the movements of individual insects, even ones as small as Drosophila (Stern et al., 2015). Manoukis and Collier (2019) reviewed various computer vision techniques that enable researchers to track the positions of individual insects in videos over time. Using multicamera setups, stereo cameras, action cameras, and other optical systems, the measurement of various behavioural parameters such as speed, direction, and proximity to other individuals is possible but also generates large image datasets. These data can be used to train machine learning models to automatically recognize and classify different behaviours, such as walking, flying, oviposition, or social interactions (Mathis & Mathis, 2020; Ratnayake et al., 2021).
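From such tracked positions, simple kinematic features already support coarse behaviour labels. A minimal sketch follows; the threshold and the two labels are illustrative assumptions, whereas production systems feed many such features into a trained classifier:

```python
import numpy as np

def classify_motion(track, fps=30, walk_speed=2.0):
    """Label each step of an (n, 2) position track (in pixels) as
    'stationary' or 'walking' from its instantaneous speed."""
    steps = np.diff(track, axis=0)                # per-frame displacement
    speed = np.linalg.norm(steps, axis=1) * fps   # pixels per second
    return ['walking' if s > walk_speed else 'stationary' for s in speed]

# A track that sits still for five frames, then moves 1 px per frame.
track = np.array([[0, 0]] * 5 + [[i, 0] for i in range(1, 6)], dtype=float)
labels = classify_motion(track)
print(labels)
```

Adding turning angle, acceleration, and proximity to conspecifics as further features is what allows richer categories such as flying, oviposition, or social interaction to be separated.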
Recently, a multi-sensor network was proposed for the monitoring of biodiversity in times of climate change, which generates a large amount of data within a short period of time (Wägele et al., 2022).

| Robotics
Advancements in AI technology have paved the way for the development of robotic insects that mimic the behaviour of real insects. These robots can be used to study insect behaviour in controlled laboratory settings, providing insights into the ecological and biological systems of insects. The use of AI in the development of robotic insects enables researchers to create robots that closely mimic the movements and behaviour of real insects (Manoonpong et al., 2013; Saito et al., 2018). Insect-like robots can be programmed to navigate through complex environments by avoiding collisions (Balasubramanian et al., 2018), and in the future, these robots will also respond to environmental cues and interact with other insects (e.g., bees), providing a realistic agent for the study of insect behaviour (Stefanec et al., 2022). Therefore, robotic insects have the potential to revolutionize the field of entomology by providing researchers with a controlled and repeatable platform to study insect behaviour. By using robots that mimic the behaviour of real insects, researchers can create controlled experiments that allow them to test specific hypotheses about insect behaviour (da Silva Guerra et al., 2010). For example, researchers could use stationary robotic systems to study the behaviour of honey bees and other pollinators in response to different environmental conditions, such as changes in temperature or the presence of pesticides (Stefanec et al., 2022).

| Evolutionary biology
One way AI can be used in insect evolution research is through the analysis of genomic data. Protein-coding genes from all major insect orders and close relatives have already been used to describe the placement of taxa in a phylogenomic analysis (Misof et al., 2014).
With the advent of high-throughput DNA sequencing technologies, it is now possible to generate vast amounts of genomic data from insect species, which can be analysed using machine learning algorithms to identify patterns of genetic variation and infer evolutionary relationships among species. These methods can be used to study the phylogenetic relationships among different insect species, as well as to investigate the genetic basis of key evolutionary innovations such as the evolution of wings, colour patterns, and other morphological features.
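As a toy illustration of turning sequences into machine-learning inputs, k-mer frequency vectors can be computed as below; this is a standard, deliberately simplified representation, whereas real phylogenomic pipelines rely on alignments and model-based inference:

```python
import numpy as np
from itertools import product

def kmer_features(seq, k=3):
    """Count all 4**k possible k-mers in a DNA sequence; such fixed-
    length vectors are a common ML input for sequence comparison."""
    kmers = [''.join(p) for p in product('ACGT', repeat=k)]
    index = {kmer: i for i, kmer in enumerate(kmers)}
    counts = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        counts[index[seq[i:i + k]]] += 1
    return counts / max(len(seq) - k + 1, 1)   # normalized frequencies

a = kmer_features("ACGTACGTACGT")
b = kmer_features("ACGTACGTACGA")   # one base differs from a
c = kmer_features("GGGGGGGGGGGG")   # unrelated composition
# Cosine similarity as a crude relatedness score between sequences.
cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
print(cos(a, b) > cos(a, c))
```

Because every sequence maps to a vector of the same length, standard classifiers and clustering methods can operate on such features regardless of sequence length.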
AI algorithms have the potential to revolutionize the study of insect evolution by providing novel analytical tools that can assist in uncovering the mechanisms underlying speciation, the origin of novel traits, and the impact of environmental changes on insect diversity. Machine learning algorithms can analyse large-scale genomic and phenotypic datasets, providing insights into the evolutionary history of insect populations and the factors that drive their diversification. For example, a 2048-dimension feature vector was created in a deep learning approach that accurately predicts the mean elevation of moth species in a mountain region based on colour and shape features (Wu et al., 2019). In this study, images of moths taken at various elevations were fed into a trained ResNet model to extract image features that are related to abiotic and biotic factors. SVMs can also be used to study the genetic basis of trait variation in insect orders. This machine learning method revealed the 'genetic toolkit' for the division of labour and sociality in distantly related bee and wasp societies by identifying a set of 127 genes with consistent shared patterns of differential expression among the social phenotypes of all six species of bees and wasps (Favreau et al., 2023). Machine learning algorithms can also aid in the identification of genes associated with reproductive isolation, a critical component of speciation, and can assess the impact of hybridization on the evolution of insect species (e.g., Blischak). A recent study has demonstrated the potential of AI in studying insect evolution, including the identification of genes associated with adaptive traits in butterflies and the prediction of the impact of climate change on butterfly populations (Hoyal et al., 2019). To quantify phenotypic distances between Heliconius butterflies, a deep CNN was trained to classify museum photographs of Heliconius butterflies by subspecies, with 1500 of the 2468 total images used for CNN training and the remainder for testing. Image classification was performed using a 15-layer deep learning network called ButterflyNet. This study highlights the potential of AI in advancing our understanding of insect evolution and provides a roadmap for future research in this area.

| Climate change and biodiversity
AI methods can be used to estimate the effect of climate change on biodiversity by analysing large amounts of data from various sources, such as satellite imagery, climate models, genetic data, and ecological surveys. Such a big data approach requires more or less automated systems for data preparation, data labelling, classification, and prediction (Christin et al., 2019; Chunhui, 2020; Liu et al., 2010). Most of these tasks can be accomplished with the help of AI systems that have been trained on large public, curated reference databases. Rising global temperatures affect species interactions and insect community dynamics (Boukal et al., 2019). In this context, Robinet and Roques (2010) reviewed the key impacts of global warming on insect development and dispersal. In recent years, AI methods have been used to study the impacts of climate change on insect populations and their ecosystems (e.g., Gerovichev et al., 2021), with major implications for food security (Garrett et al., 2022; Subedi et al., 2023).
For example, researchers have created various distribution models to analyse large datasets of insect distribution and abundance in order to understand how changing climates will affect insect populations in the future. The key task in this context is to select an optimal species distribution model. For this purpose, "Maxent", a general-purpose habitat modelling algorithm, was developed for estimating probability distributions based on the principle of maximum entropy (Phillips et al., 2006; Xue et al., 2022). For example, the Maxent modelling technique was used to fit occurrence points and current climate data in order to model potential pine beetle distributions and forest vulnerability (Evangelista et al., 2011).
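The maximum-entropy principle behind Maxent can be sketched for a discrete habitat grid: choose cell probabilities P(x) proportional to exp(λ · f(x)) whose expected environmental features match their mean at the presence records. This toy version (plain gradient ascent, one climate feature, no regularization — all simplifying assumptions relative to the full algorithm) shows the idea:

```python
import numpy as np

def fit_maxent(features, presence_idx, lr=0.5, steps=500):
    """Toy Maxent: gradient ascent on the log-likelihood so that the
    model's expected features match the presence-site feature means."""
    lam = np.zeros(features.shape[1])
    target = features[presence_idx].mean(axis=0)
    for _ in range(steps):
        z = features @ lam
        z -= z.max()                          # numerical stability
        p = np.exp(z); p /= p.sum()           # P(x) over all grid cells
        lam += lr * (target - p @ features)   # moment-matching gradient
    return p

# 100 grid cells with one 'climate' feature; presences sit in warm cells.
rng = np.random.default_rng(7)
temperature = rng.uniform(0, 1, 100)
presence_idx = np.argsort(temperature)[-10:]   # the ten warmest cells
p = fit_maxent(temperature[:, None], presence_idx)
print(p[presence_idx].mean() > p.mean())       # warm cells score higher
```

The fitted cell probabilities serve as a relative habitat suitability map; the production Maxent software adds feature transformations and regularization on top of this same principle.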

| Predicting the habitat suitability
AI systems have emerged as powerful tools for analysing and predicting changes in habitat suitability for different species under various climate change scenarios (Choudhary et al., 2019; Garrett et al., 2022; Garzón et al., 2006; Guisan & Thuiller, 2005; Singh et al., 2022; Wu et al., 2019). These systems integrate complex environmental data, such as temperature, precipitation, and land-use change, to identify patterns and relationships that can inform conservation and management strategies (Yu et al., 2022). For instance, recent studies have used AI to infer the ancestral states of morphological traits and to identify the genetic basis of key adaptive traits, such as coloration (Wu et al., 2019). By modelling the interactions between species and their environment, AI methods can provide insights into the impacts of climate change on biodiversity and help identify areas that are most at risk for species loss (Habila et al., 2022; Silvestro et al., 2022).
One key advantage of AI systems is their ability to process and analyse vast amounts of data, including data from remote sensing platforms, such as satellites and drones (Ampatzidis et al., 2020; Jung et al., 2021). This allows researchers to generate detailed and accurate maps of habitat suitability for different species over large areas, which can help prioritize conservation efforts and inform land-use planning decisions (Hilbert, 2001). Additionally, AI algorithms can incorporate a range of climate change scenarios and assess the potential impacts on species distribution and abundance, allowing for more effective conservation strategies (Scoville et al., 2021; Silvestro et al., 2022; Zhang & Li, 2017). However, challenges remain in the development and deployment of AI methods for conservation purposes. One major challenge is the need for high-quality, standardized datasets that can be used to train and validate models. Despite this, AI systems show great promise in advancing our understanding of the impacts of climate change on biodiversity and informing conservation strategies to mitigate these impacts (Silvestro et al., 2022).

| Species distribution modelling
Climate change is having a significant impact on the distribution of species worldwide, leading to changes in species' ranges, interactions, and ultimately biodiversity loss. Machine learning algorithms have emerged as a powerful tool to predict how species' distributions will shift under future climate change scenarios (Choudhary et al., 2019; Guisan & Thuiller, 2005). By analysing large datasets of species occurrences and environmental variables, machine learning algorithms can generate accurate predictions of future species distributions based on climate projections (Yan et al., 2017; Yang et al., 2009). Moreover, AI can help researchers identify the species most vulnerable to climate change, allowing for prioritization of conservation efforts and management actions. For this purpose, citizen scientists, conservationists, and scientists are using AI-based mobile phone applications to record and analyse data collected by a large community. For example, eButterfly helps better understand the biological patterns of butterfly species diversity and how environmental conditions shape these patterns in space and time (Prudic et al., 2017). eButterfly has created, in collaboration with thousands of butterfly enthusiasts, a near real-time butterfly data resource producing tens of thousands of observations per year. Similarly, iNaturalist (Matheson, 2014) and other mobile applications have been developed for a similar purpose. All these phone apps use transfer learning on pre-trained CNNs for image-based object recognition.
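The transfer-learning recipe such apps rely on can be sketched as follows. The backbone choice, species count, and input size below are illustrative assumptions, and weights=None is used here only to avoid downloading the ImageNet weights that a real application would load.

```python
import tensorflow as tf

# Transfer-learning sketch: a CNN backbone with a new classification head for,
# say, 50 hypothetical butterfly species. A real app would pass
# weights='imagenet' to reuse features learned on ImageNet.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None)
base.trainable = False  # freeze the backbone; only the new head is trained

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(50, activation='softmax'),  # one unit per species
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
```

After compiling, the head is fitted on the community-collected, labelled photographs.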
Recent studies have shown that machine learning models can accurately predict changes in species distributions under climate change (Garrett et al., 2022). For example, Elith et al. (2006) used machine learning algorithms to model changes in bird species distributions in North America under climate change scenarios. They found that these models were more accurate than traditional statistical methods, highlighting the potential of AI for ecological modelling. AI can also help identify the species most vulnerable to climate change. By analysing species traits and environmental variables, machine learning algorithms can predict which species are most likely to experience range contractions or expansions under future climate scenarios (Tabor & Koch, 2021; Xue et al., 2022).

| Remote sensing and imagery analysis
Satellite imagery has become a valuable data source for monitoring changes in vegetation, land use, and land cover over time (Alqurashi & Kumar, 2013). Differences in the ratio between near-infrared and red reflectance in satellite image data indicate changes in the abundance of vegetation (termed the Normalized Difference Vegetation Index; Jin et al., 2013; Meraj et al., 2022). For example, this index can be related to the loss of vegetation caused by desert locust outbreaks (Geng et al., 2020). Climate change is affecting ecosystems in many ways, and identifying areas that need protection is of great importance for nature conservation (Boukal et al., 2019; Foden et al., 2019; Scoville et al., 2021; Silvestro et al., 2022). By analysing large amounts of data, trained AI systems can learn to identify subtle changes that might not be visible to the human eye and can be used to better understand how climate change is affecting ecosystems and to identify areas that need protection (Liu et al., 2010; Pettorelli et al., 2016; Rehman et al., 2023).
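The index itself is a simple band ratio, (NIR − red) / (NIR + red), and is easy to compute per pixel. The 2×2 reflectance arrays below are hypothetical values, not real satellite data.

```python
import numpy as np

# NDVI from near-infrared (NIR) and red reflectance bands; values range from
# -1 (no vegetation) to +1 (dense vegetation). Arrays are hypothetical.
nir = np.array([[0.5, 0.6],
                [0.2, 0.1]])
red = np.array([[0.1, 0.1],
                [0.2, 0.3]])

ndvi = (nir - red) / (nir + red)
vegetation_loss = ndvi < 0.2  # e.g., flag pixels possibly defoliated by locusts
```

Comparing NDVI maps from before and after an outbreak highlights where vegetation was lost.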

| Genetic approaches
By analysing genomic data, researchers can identify genetic changes that may help animal populations adapt to changing conditions (Passamonti et al., 2021). Several molecular genetic approaches have been used to identify adaptation-related genes. While genome-wide association studies (GWAS) use phenotypes related to adaptation, landscape genomic approaches use environmental variables as proxies for phenotypes. Other genomic approaches analyse the patterns of genomic diversity within and between populations and the level of admixture to identify selection signatures of adaptation (e.g., Blischak et al., 2020). In addition to informing conservation efforts, the use of AI systems to analyse genomic data can also provide valuable insights into the mechanisms of adaptation to climate change.
By identifying the specific genetic changes that are associated with adaptation, researchers can gain a deeper understanding of the underlying biological processes involved in these changes.
The information gained from such studies can be used to inform conservation efforts by identifying populations that may be at risk due to climate change and by identifying potential strategies for preserving these populations. For example, if a population of a particular insect species is found to have genetic variations that are associated with adaptation to warmer temperatures, ecologists may focus on preserving habitats with similar conditions in order to promote the survival of that population.
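One classic way to quantify genomic diversity within and between populations is Wright's fixation index (F_ST). The sketch below uses the textbook two-population formula with hypothetical allele frequencies; it illustrates the quantity such analyses build on, not any particular study's method.

```python
# Allele frequencies of one biallelic locus in two hypothetical insect
# populations (e.g., from a warm and a cool habitat).
p1, p2 = 0.9, 0.3

# Expected heterozygosity within each population (2p(1-p)) and their mean
h1 = 2 * p1 * (1 - p1)
h2 = 2 * p2 * (1 - p2)
hs = (h1 + h2) / 2

# Total expected heterozygosity from the pooled allele frequency
p_bar = (p1 + p2) / 2
ht = 2 * p_bar * (1 - p_bar)

# F_ST: high values flag candidate loci under divergent selection
fst = (ht - hs) / ht
```

Genome-wide scans compute this per locus; outlier loci are candidate selection signatures.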

| Image-based pest identification
AI systems have demonstrated remarkable success in recognizing and identifying objects in images. By training AI to recognize the visual characteristics of different insect pests, such as their colour, shape, and size, these systems can be used to analyse images of crops and ecosystems and identify pests that are present (Azfar et al., 2023; Chithambarathanu & Jeyakumar, 2023; de Telmo & Rieder, 2020; Domingues et al., 2022; Li et al., 2021; Li, Wang, et al., 2020; Lima et al., 2020; Liu et al., 2019; Partel et al., 2019; Zhao, Liu, et al., 2022; Zhao, Zhou, et al., 2022). The training process involves feeding the model a large number of images of different pests and allowing it to learn the patterns and features that distinguish them from one another. This is typically done using CNNs capable of learning complex patterns and features from visual data (Hassan et al., 2023; Jackulin & Murugavalli, 2022; Kuzuhara et al., 2020; Yang et al., 2023; Zhao, Liu, et al., 2022). Remarkably, even unmanned aerial vehicles (UAVs) can be used for remote image-based pest identification, which was demonstrated for soybean pests (Tetila et al., 2020). Using information such as temperature, humidity, rainfall, and wind speed, it was possible to model the infestation of crop pests with higher precision (Souza et al., 2017). In the context of effective pest control, Choudhary et al. (2019) emphasized the urgent need to link pest models and climate change projections for a better understanding of the outcomes of climate change-inflicted variations in future pest risk assessment (see also Rehman & Kumar, 2018). Climate change has been found to bring a number of changes in insect phenology, distribution, species interactions, and biodiversity (Renner & Zohner, 2018). Several studies have demonstrated the effectiveness of using AI methods for pest identification. For example, a recent study by Wang et al. (2022) trained a CNN to identify three common insect pests caught on sticky traps located in apple orchards on the basis of their colour and shape. The YOLO-Diseases and Pests Detection (YOLO-DPD) model achieved a recognition accuracy of over 90% in detecting lesions of three disease and pest species on rice canopy, demonstrating its potential as a CNN tool for pest management (Li et al., 2022). Another study tested five different CNNs to recognize multiple insect pests in images of cotton plants (Johari et al., 2023). The system achieved an accuracy of more than 95% in identifying the pests, outperforming traditional manual identification methods. Once an AI has been trained to recognize insect pests, transfer learning methods can be used to improve the identification of various pest species with damaging potential (Johari et al., 2023; Xing & Lee, 2022). Xing and Lee (2022) compared various pre-trained CNN models and found that VGG19 and the Regional Proposal Network (RPN) are rather accurate (>90% accuracy) in distinguishing the different insect pest species. This approach can help farmers and other stakeholders to quickly identify and respond to pest infestations, potentially reducing the use of harmful pesticides and increasing crop yields.

| IoT sensors
AI systems are also helpful in analysing data from distributed measurement equipment sensing temperature, humidity, sound, brightness, images, and video. By analysing these data, trained AI can identify patterns and anomalies that may be indicative of pest infestations (Abreu & van Deventer, 2022). Commonly, temperature and humidity sensors are used in agriculture to monitor the conditions that pests thrive in. Acoustic sensors in particular have great potential for pest detection. For instance, the sounds produced by rodents, insects, and other pests can be detected via acoustic sensors. AI methods can analyse these sounds to identify the type of pest present and its location (Dhanya et al., 2022). A future example of this technology may depend on a network of smart IoT outdoor sensors recording species-specific airborne sound or substrate-borne vibrations indicative of pest species. Sound will be analysed and classified using onboard AI embedded into hardware. After detecting pest-related signals, the position of the activated sensor will be transmitted to a hub with a connection to a mobile phone data network. A large network of cheap IoT sensors will make it possible to identify the location of pest infestations in real time and enable early measures for pest control, which is an important prerequisite for smart agriculture.
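A minimal sketch of what such onboard sound analysis could look like: flag a recording when the energy in a pest-specific frequency band exceeds a threshold. The 2 kHz "pest call", the band limits, and the threshold are all illustrative assumptions, not values from any deployed sensor.

```python
import numpy as np

# One second of synthetic audio: a 2 kHz stridulation-like tone plus noise.
fs = 8000                      # sampling rate in Hz
t = np.arange(fs) / fs
pest_call = 0.5 * np.sin(2 * np.pi * 2000 * t)
background = 0.05 * np.random.default_rng(1).normal(size=fs)
signal = pest_call + background

# Power spectrum and the fraction of power in the hypothetical pest band
power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
band = (freqs > 1800) & (freqs < 2200)
band_fraction = power[band].sum() / power.sum()

# Threshold decision that would trigger a transmission to the hub
pest_detected = band_fraction > 0.5
```

A deployed sensor would of course replace this fixed band test with a trained classifier, but the thresholding-on-spectral-features idea is the same.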
Cameras in combination with AI are also useful in detecting insects captured on sticky surfaces (yellow board insect traps, e.g., iSCOUT® from Pessl Instruments: https://metos.at/en/insect-monitoring/; see also Gerovichev et al., 2021). Trained AI can analyse these images to identify the type of pest present, which provides information about a possible pest outbreak. By analysing data from distributed sensors and other sources, AI can detect changes in environmental conditions that may increase the risk of pest infestations. This information can be used to develop early warning systems and implement preventative measures (Linaza et al., 2021).

| Predictive modelling
Pest infestations pose a significant threat to global agricultural production, leading to yield losses and increased use of chemical pesticides, which can have negative environmental impacts. Traditional methods of pest control rely on reactive measures after the pest has already caused damage. However, with the emergence of AI systems, it is now possible to develop predictive models that can forecast the spread of pest infestations and aid in the prevention or mitigation of their impact (e.g., Caselli & Petacchi, 2021). The development of such predictive models involves the integration of various data sources, including weather patterns, crop density, and the presence of other pests. The AI can analyse these factors and identify patterns that may indicate an increased risk of infestation (e.g., Toscano-Miranda et al., 2022). The model can then predict where and when these pests are likely to appear, allowing farmers and other stakeholders to take proactive measures to prevent or mitigate their impact.
One of the key benefits of using AI to develop predictive models for pest infestations is the ability to process large amounts of data quickly and accurately. With the use of machine learning algorithms, the system can learn from past data and improve its accuracy over time. This allows for more precise predictions and better-informed decisions about when to apply preventive measures, such as crop rotation, pesticide application, or the release of natural predators.
Another benefit of AI is the ability to customize the predictive model to specific regions or crops. By integrating local data sources and regional weather patterns, the AI can provide tailored predictions that are more accurate and useful to farmers and other stakeholders in that area. This can help reduce the overall use of pesticides and other chemicals, resulting in a more sustainable and environmentally friendly approach to pest management (smart agriculture).
AI can be used to model and predict the distribution of pest insect species based on factors such as temperature, rainfall, and habitat structure. For this purpose, the following steps need to be taken:
• Collect data: The first step is to gather data on target insect species, their distribution, and the various factors that affect their distribution. These data can be collected through field surveys, literature reviews, and data from remote sensing and satellite imagery.
• Data cleaning and preprocessing: Once the data have been collected, they need to be cleaned and preprocessed to remove errors, inconsistencies, or missing values.
• Feature engineering: After preprocessing, feature engineering involves selecting the most relevant features that affect the insect's distribution, such as temperature, rainfall, habitat quality, and land use.
• Select and train an AI model: Various AI models can be used for predicting the distribution of pest insect species, including decision trees, random forests, and neural networks. In the latter case, the AI model will be trained using the selected features and the distribution of the insect species.
• Model validation: After training, the model needs to be validated to ensure that it is accurate and can make reliable predictions. This can be done by comparing the predicted distribution with the actual distribution of the insect species.
• Deploy the model: Once the model is validated, it can be deployed to make predictions about the distribution of insect species based on the selected factors.
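The steps above can be sketched end to end. The survey data, the single "temperature" feature, and the decision-stump model below are all hypothetical stand-ins for a real collect–preprocess–train–validate–deploy workflow.

```python
import numpy as np

# Collect (synthetic) data: one engineered feature predicting pest presence.
rng = np.random.default_rng(42)
temperature = rng.uniform(10, 35, size=300)   # mean summer temperature per site
presence = (temperature > 24).astype(int)     # toy ground truth

# Split into training and validation sets
train_t, val_t = temperature[:200], temperature[200:]
train_y, val_y = presence[:200], presence[200:]

# "Train" the simplest model: the threshold maximizing training accuracy
# (a depth-1 decision tree, i.e., a decision stump).
thresholds = np.linspace(10, 35, 251)
accs = [np.mean((train_t > th).astype(int) == train_y) for th in thresholds]
best_th = thresholds[int(np.argmax(accs))]

# Validate on held-out data before deployment
val_acc = np.mean((val_t > best_th).astype(int) == val_y)

# Deploy: predict presence at two new (hypothetical) sites
predictions = (np.array([15.0, 30.0]) > best_th).astype(int)
```

Replacing the stump with a random forest or neural network changes only the training step; the surrounding workflow stays the same.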

| DISEASE VECTOR CONTROL
Insect-borne diseases such as malaria, dengue, Zika, and Lyme disease are a major public health concern, causing significant morbidity and mortality worldwide. While the use of insecticides, mosquito nets, and other traditional interventions has helped reduce the incidence of these diseases, there is a growing interest in the use of AI to predict and control their spread (Abeyrathna et al., 2019; Andrade et al., 2010; Bomfim et al., 2020; de Lima et al., 2022; Zeng et al., 2021). AI prediction models employ sophisticated algorithms to analyse extensive datasets containing information on insect-related diseases with the aim of identifying patterns and forecasting disease outbreaks (Akhtar et al., 2019; Eisen & Eisen, 2011; Sabir et al., 2021). These models process diverse data, including environmental factors like climate conditions and geographical features; social dynamics, such as travel patterns and population density; health-related data like reported cases; and medical records related to insect-borne diseases. Utilizing machine learning techniques like deep learning, these models discern intricate relationships within the data, learning from historical instances of insect-related disease outbreaks and adapting to new information. By integrating environmental, social, and health-related variables, these models contribute to the early detection of potential outbreaks and enable timely and accurate predictions. Usually, the data used for training and predictions are organized in large tables, which requires some normalization procedure to transform numerical data representing various variables into a standardized format. This can be done with the normalization function provided by the TensorFlow library. After transformation, variables will have the same mean and standard deviation and can be used as input for a sequential ANN.
The following Keras model can be used to predict a single event (outbreak or not) on the basis of various normalized input variables:

```python
normalizer = tf.keras.layers.Normalization(axis=-1)  # the last axis is assumed to be the feature dimension
normalizer.adapt(numeric_features)

model = tf.keras.Sequential([
    normalizer,
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(1),
])
```

Machine learning algorithms have been used to analyse various data and predict the likelihood of malaria transmission based on factors such as temperature, humidity, and vegetation cover (Parselia et al., 2019). Using a similar AI method, the weather and climate conditions favouring an outbreak of West Nile virus (transmitted by mosquitoes) can be predicted (Farooq et al., 2022). In another AI approach, a simple mosquito abundance prediction model showed high performance when temperature, wind speed, humidity, and precipitation were used as model inputs (Lee et al., 2016). AI-based prediction models can also be helpful for the early identification of outbreaks of dengue fever or Lyme disease (Chumachenko et al., 2022; Shashvat et al., 2019).
In addition to predicting disease outbreaks, AI can also be used to control the spread of insect-borne diseases. For example, automated mosquito traps can use AI algorithms to identify specific mosquito species that carry the pathogens causing dengue or malaria (Kaur et al., 2022). These traps use a combination of visual and chemical cues to lure mosquitoes, and AI methods can analyse the data from the traps to optimize their effectiveness (Awotunde et al., 2021; Santosh et al., 2020). AI can also be used to control the spread of insect-borne diseases through the use of techniques originally developed for the purpose of precision agriculture. By using IoT insect traps and drones or other technologies, the use of insecticides can be restricted to a small area, which reduces the amount of insecticide used while still effectively controlling disease-carrying insects.

| Predicting outbreaks
Predicting and preventing the spread of malaria still challenges health organizations in many countries, and climate change plays a crucial role in predicting future malaria and dengue outbreaks (Ren et al., 2016). This deadly disease is transmitted to humans through the bites of infected Anopheles mosquitoes, and AI can analyse various factors such as weather patterns, mosquito breeding patterns, and human population density to predict when and where an outbreak of malaria might occur (Chen et al., 2019).
One important factor that AI can analyse is weather patterns, as temperature and humidity levels can affect the breeding and survival of mosquitoes, which in turn affects the transmission of the disease. For instance, a study conducted in Thailand used AI-based models to analyse meteorological data and predict malaria outbreaks with about 70% accuracy (Kiang et al., 2006). Another study compared different AI-based models to predict dengue outbreaks in Vietnam based on environmental and meteorological data, such as temperature, humidity, rainfall, evaporation, and sunshine hours (Nguyen et al., 2022). These authors tested a long short-term memory (LSTM) network, which can be compiled and trained as follows:

```python
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate), loss="mse")
model.fit(X_train, Y_train, epochs=50, batch_size=32,
          validation_data=(X_test, Y_test), verbose=2)
```

In many use cases, it will be necessary to take a sequence of data points gathered at equal intervals, along with time-series parameters such as the length of the windows and the spacing between two windows, to produce batches of sub-timeseries inputs and targets sampled from the main timeseries. For this purpose, one can use the 'timeseries_dataset_from_array' function provided by the TensorFlow library.
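A minimal sketch of this windowing step follows; the series, the two-week window, and the 14-day prediction horizon are hypothetical stand-ins for real daily measurements and case counts.

```python
import numpy as np
import tensorflow as tf

# Window a daily series into (input window, target) pairs. The target for a
# window starting at day i is the value 14 days after the window start.
series = np.arange(100, dtype='float32')   # stand-in daily measurements
targets = series[14:]                      # aligned targets, 14 steps ahead

dataset = tf.keras.utils.timeseries_dataset_from_array(
    data=series[:-14],
    targets=targets,
    sequence_length=14,   # two-week input windows
    sequence_stride=1,    # windows start on consecutive days
    batch_size=8,
)

x_batch, y_batch = next(iter(dataset))
```

The first window contains days 0–13 and its target is the day-14 value; the resulting batches can be fed directly to an LSTM.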
In addition to weather patterns, AI can also analyse mosquito breeding patterns to predict the occurrence of malaria outbreaks.
Mosquitoes require stagnant water to lay their eggs, and thus, identifying and mapping areas with a high risk of mosquito breeding by means of remote sensing technology can be useful in predicting the spread of insect-transmitted diseases (Bui et al., 2019). A study conducted in Ethiopia used satellite imagery and AI-based models (an ANN-based cloud classification system) to identify and map potential mosquito breeding sites, such as irrigation canals and small ponds, and found that these data can accurately predict the risk of malaria transmission (Jiang et al., 2021). Highly relevant in this context, AI can analyse human population density to predict the occurrence of malaria outbreaks, because higher population densities can increase the risk of malaria transmission. For example, a study in Brazil used AI-based models to analyse demographic and environmental factors to predict the spatial distribution of malaria cases and found that population density was a significant predictor of malaria transmission (Barboza et al., 2022). The algorithms were able to accurately predict the spread of the disease several months in advance, allowing public health officials to take preventative measures. In summary, using AI approaches to predict dengue and malaria outbreaks has several advantages compared to traditional labour-intensive mosquito monitoring techniques that often make use of a high number of mosquito traps. Species identification of trapped insects is time-consuming and requires expert knowledge. In contrast, AI can analyse vast amounts of diverse data, including climate patterns, mosquito breeding habitats, and human mobility, allowing for a more comprehensive understanding of the factors influencing disease transmission. Machine learning algorithms such as LSTM networks can identify complex patterns and relationships within input data, enabling more accurate and timely predictions of potential outbreak hotspots. Additionally, AI models can adapt and improve over time as they learn from new data, enhancing
their predictive capabilities compared to static, rule-based approaches commonly used in traditional monitoring methods.

| Monitoring problematic insect populations
Malaria is a major public health concern in many parts of the world, with an estimated 229 million cases and 409,000 deaths in 2019 alone (World Health Organization report). Insecticide-treated bed nets and indoor residual spraying are effective measures for controlling the mosquito population, but these methods are not always feasible or sustainable in all settings. AI offers a promising new approach for monitoring mosquito populations and predicting areas where transmission of several problematic diseases, including Zika and dengue, is likely to occur (de Lima et al., 2022; Lorenz et al., 2015). One of the main challenges in monitoring mosquito populations is the high number of insects and the vast areas they inhabit. Traditional methods for monitoring mosquitoes, such as trapping and manual counting, are labour-intensive and time-consuming.
In contrast, AI can process large amounts of data quickly and accurately, making it well-suited for monitoring mosquito populations (Gutiérrez-López et al., 2022;Muñoz et al., 2020).

| CONCLUSION
In conclusion, the use of AI in entomology has provided valuable support to scientists in understanding the biology, evolution, and ecology of insects. AI-supported methods have facilitated the processing and analysis of vast amounts of data, enabling researchers to make significant advancements in the field of entomology. One of the significant contributions of AI is the ability to identify and classify insect species accurately. AI algorithms can analyse images of insects and automatically identify them, reducing the time and effort needed for manual identification. This has been particularly beneficial in studies of insect population dynamics, where accurate and efficient species identification is critical. Also in applied research, AI methods are used for monitoring and controlling 'pest species', which is of importance for predicting the outbreak of locust swarms and insect-transmitted diseases. AI methods have also been used to analyse complex ecological interactions between insects and their environment. By modelling insect behaviour and interactions, AI can predict the impact of various environmental factors on insect populations, enabling researchers to develop effective strategies for insect management and ecosystem protection, which is highly relevant with respect to global change. Moreover, AI has improved our understanding of insect genetics and evolution. By analysing vast amounts of genomic data, AI algorithms can identify genetic markers associated with particular traits, which opens new avenues for insect conservation efforts and pest control. As AI methods continue to advance, we can expect to see even more significant advancements in this field, with the potential to develop more effective strategies for insect management, pest control, and ecosystem protection.

ACKNOWLEDGEMENTS
Thanks to Google for providing machine learning models through TensorFlow.

CONFLICT OF INTEREST STATEMENT
The author declares to have no conflict of interest.
have tested several machine learning methods for automatic insect classification, including artificial neural networks (ANNs), support vector machines (SVM), k-nearest neighbours (KNN), naive Bayes (NB), and a CNN model. The authors have shown that an improved CNN model outperformed the other machine learning methods and achieved the highest classification rates of 91.5% and 90% for nine and 24 insect classes, respectively. CNNs are a type of deep learning model specifically designed for processing structured grid data, such as images. They have proven to be highly effective in tasks like image recognition, object detection, and image segmentation. The key to their success lies in their ability to automatically learn hierarchical representations of complex features from raw pixel values. How CNNs achieve this, and why they are capable of learning complex patterns and features from visual data, can be summarized as follows:
• Convolutional layers: CNNs use convolutional layers to detect local patterns or features in the input data. Convolution involves sliding a small filter (also called a kernel) over the input image and computing the dot product at each step. This operation allows the network to capture low-level features like edges, textures, and simple shapes.
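The sliding dot product can be illustrated without any deep learning library. The toy image and the vertical-edge kernel below are illustrative; a CNN learns such kernels from data instead of hand-coding them.

```python
import numpy as np

# A 4x4 toy image with a dark-to-bright vertical boundary between columns 1 and 2
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# A 2x2 filter that responds to vertical edges (dark left, bright right)
kernel = np.array([[-1, 1],
                   [-1, 1]], dtype=float)

# Slide the filter over the image and take the dot product at each position
out = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        out[i, j] = np.sum(image[i:i + 2, j:j + 2] * kernel)
```

The output is zero everywhere except the column straddling the boundary, which is exactly the "edge detected here" response a convolutional layer produces.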

For the training of CNNs, it is essential to use large, correctly labelled datasets to enhance classification performance. However, data of known species are often limited, and image datasets need to be augmented with the help of the following TensorFlow functions: tf.keras.layers.Rescaling, tf.keras.layers.RandomFlip, and tf.keras.layers.RandomRotation (for more information see: https://www.tensorflow.org/tutorials/images/data_augmentation). Using the augmented image dataset has the advantage that classification does not depend on the orientation of the object or the object's position in the image. To maximize the potential of a limited dataset, Goodwin et al. (2021) used real-time data augmentation, using the default parameters of the get_transforms function in the FastAI PyTorch library (random cropping, affine transform, symmetric warp, rotation, zoom, brightness, and contrast modifications, with the exception of the affine transform) to enhance the classification performance for 67 mosquito species.
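A minimal sketch of such an augmentation pipeline using the layers named above; the batch size and image dimensions are arbitrary, and the dummy batch stands in for real training images.

```python
import tensorflow as tf

# Augmentation pipeline: rescale pixel values to [0, 1], then apply random
# flips and rotations so that classification does not depend on object
# orientation in the image.
augment = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),
    tf.keras.layers.RandomRotation(0.2),  # rotate by up to ±0.2 * 2π radians
])

# Dummy batch of four 224x224 RGB images with pixel values in [0, 255]
images = tf.random.uniform((4, 224, 224, 3), maxval=255.0)
augmented = augment(images, training=True)  # random ops are active only in training mode
```

Placed as the first layers of a model (or applied in the tf.data pipeline), each epoch then sees a differently transformed version of every image.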
Various cameras can be used for acquiring images for datasets. Mobile phone cameras, USB-connected cameras, and high-resolution cameras are often used, whereas hyperspectral cameras and infrared cameras are less often used for image-based insect identification because morphological criteria for reliable insect recognition are better represented in normal RGB images. Most cameras have their specific image data format with respect to image dimensions, colour depth, and data type, which may create a problem for CNN networks expecting a defined data format. Therefore, it is often necessary to resize images before splitting data into training and test datasets. For this purpose, the following code can be used to resize images that are located in several directories, each containing images belonging to a certain category:

```python
import os
import random
import cv2

# initialize the data and labels
path = "path to image folders"
categories = ["list of folder names corresponding to image categories"]
imagePaths = []
for k, category in enumerate(categories):
    for f in os.listdir(path + category):
        imagePaths.append([path + category + '/' + f, k])
random.shuffle(imagePaths)

data = []
for imagePath in imagePaths:
    image = cv2.imread(imagePath[0])            # load the image
    image = cv2.resize(image, (WIDTH, HEIGHT))  # resize the image
    data.append(image)
```

Inception V3 as well as EfficientNet were trained on the 1000 image classes of the ImageNet dataset (https://www.image-net.org/update-mar-11-2021.php). The following code can be executed in Python to load pre-trained models such as InceptionV3 and EfficientNetB7 for image-based insect species identification (ResNet (He et al., 2019) can be loaded in a similar way):

```python
# Load the pre-trained InceptionV3 model
model = tf.keras.applications.InceptionV3(include_top=True, weights='imagenet')

# Load the pre-trained EfficientNetB7 model
model = tf.keras.applications.EfficientNetB7(include_top=True, weights='imagenet')
```
AI systems can be used to analyse genetic data from insects to classify insect species. This is useful when traditional morphological characteristics are difficult to differentiate or are not available. By comparing the genetic data of an unknown specimen to a database of known species, AI algorithms can accurately identify the species. The following Python code can be used to set up a TensorFlow™ Keras model for the identification of DNA sequences of variable length:

```python
import tensorflow as tf
from tensorflow import keras

# Define the model
model = keras.Sequential([
    keras.layers.InputLayer(input_shape=(None, 4)),  # DNA sequences of variable length with 4 nucleotides
    keras.layers.Conv1D(filters=32, kernel_size=10, activation='relu'),  # convolutional layer with 32 filters and kernel size of 10
    keras.layers.MaxPooling1D(pool_size=2),          # max pooling layer with pool size of 2
    keras.layers.Conv1D(filters=64, kernel_size=10, activation='relu'),  # second convolutional layer with 64 filters and kernel size of 10
    keras.layers.GlobalMaxPooling1D(),               # global max pooling layer
    keras.layers.Dense(64, activation='relu'),       # dense layer with 64 units and ReLU activation
    keras.layers.Dense(1, activation='sigmoid'),     # output layer with sigmoid activation
])

# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train the model on DNA sequence data
model.fit(...)
```

The classification of sound signals is usually based on the frequency content of a signal rather than on amplitude modulations. The idea of CNN-based sound classification is to train the network on images representing the frequency content over rather short time sequences. For this purpose, waveform data are transformed into a sonogram; the Short-Time Fourier Transform (STFT) and Mel-frequency cepstral coefficient (MFCC) transforms of sound signals are frequently used and are known to improve sound classification performance. The following
TensorFlow command can be used to transform a waveform into an STFT spectrogram:

```python
spectrogram = tf.signal.stft(waveform, frame_length=255, frame_step=128)
```

Acoustic equipment often picks up background noise, which may lead to poor classification results. In general, background noise degrades classification when the training dataset used for CNN networks does not include environmental noise from various sources (i.e., only clean recordings). One way to mitigate this problem is to mix audio recordings with various relevant noise recordings, thereby augmenting the training dataset and achieving better classification results. The following Python code can be executed to define a TensorFlow™ Keras model that can be trained to identify different sound signals based on their frequency characteristics (spectrograms):

```python
# Instantiate the `tf.keras.layers.Normalization` layer.
norm_layer = layers.Normalization()
# Fit the state of the layer to the spectrograms with `Normalization.adapt`.
norm_layer.adapt(data=train_spectrogram_ds.map(map_func=lambda spec, label: spec))

model = models.Sequential([
    layers.Input(shape=input_shape),
    # Downsample the input.
    layers.Resizing(32, 32),
    # Normalize.
    norm_layer,
    layers.Conv2D(32, 3, activation='relu'),
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Dropout(0.25),  # prevents overfitting
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(num_labels),
])
```

DeepLabCut can be used to track insect motion. It combines deep learning techniques based on ResNets, which are powerful for transfer learning, with tracking algorithms to automatically estimate and track the pose of objects in video sequences. The training phase involves teaching the model to recognize and locate key points, and the tracking phase involves applying this knowledge to follow the object's movement across frames.
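The noise-mixing augmentation described earlier can be sketched in a few lines of NumPy. This is a minimal illustration only: the function name `mix_with_noise` and the fixed signal-to-noise ratio (SNR) target are assumptions for the example, not part of any published pipeline.

```python
import numpy as np

def mix_with_noise(clean, noise, snr_db):
    """Mix a clean recording with background noise at a target
    signal-to-noise ratio (in dB) to augment a training set.
    Both inputs are 1-D float arrays at the same sampling rate."""
    # Loop (tile) or trim the noise so it matches the clean signal's length
    reps = int(np.ceil(len(clean) / len(noise)))
    noise = np.tile(noise, reps)[:len(clean)]
    # Scale the noise so that 10*log10(P_signal / P_noise) equals snr_db
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + scale * noise

# Toy example: a 440-Hz sine "song" mixed with white noise at 10 dB SNR
t = np.linspace(0, 1, 16000, endpoint=False)
clean = np.sin(2 * np.pi * 440 * t)
noise = np.random.default_rng(1).normal(size=8000)
augmented = mix_with_noise(clean, noise, snr_db=10.0)
```

Mixing each clean recording with several noise sources at a range of SNR values multiplies the size of the training set and exposes the CNN to the acoustic conditions it will encounter in the field.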
This network of Automated multisensor stations for the Monitoring of species Diversity (AMMODs) was proposed to pave the way towards a new generation of biodiversity assessment centres. It combines cutting-edge technologies with biodiversity informatics and expert systems that conserve expert knowledge. Each AMMOD station combines autonomous samplers for insects, pollen, and spores; audio recorders for vocalizing animals; sensors for volatile organic compounds emitted by plants (pVOCs); and camera traps for mammals and small invertebrates. Similarly, a set of four emerging tools and technologies (computer vision, acoustic monitoring, radar, and molecular methods) was proposed by van Klink et al. (2022) to enable unprecedented opportunities for insect ecology. Usually, cameras focus on a screen placed in the field, often in combination with traps (e.g., light traps: Hogeweg et al., 2019; sticky traps: Gerovichev et al., 2021; or pheromone traps: Yalcin, 2015). Trapped insects are usually identified by means of CNN-based computer vision methods. Despite all these efforts, the greatest challenges in applying these technologies are improving the algorithms required to discriminate species-specific signal patterns and completing the reference databases needed for AI network training.
et al. (2020) used a HyDe-CNN model to analyse image data representing phylogenomic data that encode coalescence times along the chromosome and phylogenetic relationships among the sampled species of Heliconius butterflies. With this AI method, it was possible to accurately perform model selection for hybridization scenarios across a wide range of parameters and to test various models of admixture and introgression.
Tetila et al. (2020) compared different deep learning architectures such as ResNet-50, Inception v3, Xception, VGG-19, and VGG-16 to classify various pest species that cause problems in soybean production. More recently, Gowthaman and Sankarganesh (…). Nguyen et al. (2022) used a long short-term memory (LSTM) model with and without attention mechanisms and compared it with a traditional CNN and a Transformer model. The LSTM was specifically designed to cope with time-ordered data, where nodes are connected as a directed graph along a temporal sequence. The Transformer model handles sequence data by using self-attention mechanisms to learn the complex dynamics of a time series. In the study by Nguyen et al. (2022), the LSTM model outperformed the CNN and the Transformer model because it was able to predict dengue outbreaks with higher precision. LSTM networks operate on time-series data by utilizing information from different time steps, allowing them to model complex temporal dependencies. An overview of how LSTM network models operate on time-series data is as follows:

• Sequential input: Time-series data are inherently sequential, where each data point is associated with a specific time index. LSTM networks take this sequential nature into account when processing the data.
• Input representation: Each time step in the time series corresponds to an input feature vector. The input at each time step is fed into the LSTM network. The feature vector may include information from the current time step as well as relevant historical information.
• Memory cells: LSTMs have memory cells that allow them to store and retrieve information over long periods. These memory cells are equipped with gating mechanisms that control the flow of information. The three main components of an LSTM cell are the input gate, forget gate, and output gate.
• Input gate: Decides which information from the current time step should be stored in the cell.
• Forget gate: Decides which information from the memory cell should be discarded.
• Output gate: Determines the output of the cell based on the current input and the memory cell content.
• Learning long-term dependencies: The ability of LSTM networks to retain information for extended periods makes them effective at learning long-term dependencies in time-series data. The gates in the LSTM cell allow the network to selectively update and utilize information from different time steps.
• Training: LSTMs are trained using backpropagation through time (BPTT), which is an extension of the backpropagation used for training feedforward neural networks. During training, the network learns to adjust the parameters of the gates to minimize the difference between its predicted output and the actual target output.
• Output prediction: The output of the LSTM network at each time step can be used for various tasks, such as regression (predicting a continuous value) or classification (predicting a category).
• Multiple layers and architectures: LSTMs are often stacked to form deep architectures, where the output of one LSTM layer serves as the input to the next. This enables the network to learn hierarchical representations of the time-series data.

The following Python code can be used to set up a simple LSTM model that learns temporal dependencies over 600 time steps to predict a disease outbreak (the original listing broke off at the compile call, so the optimizer, loss, and fit arguments shown here are assumptions):

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Load your dataset
data = pd.read_csv('your_dataset.csv')
# Assume your dataset has columns like 'temperature', 'humidity', 'rainfall', 'dengue_cases'
# Adjust these column names based on your actual dataset
features = data[['temperature', 'humidity', 'rainfall']].values
labels = data['dengue_cases'].values

# Normalize features
scaler = MinMaxScaler()
features_scaled = scaler.fit_transform(features)  # combines the fit and transform methods

# Create sequences of 600 time steps
sequence_length = 600
X, Y = [], []
for i in range(len(features_scaled) - sequence_length + 1):
    X.append(features_scaled[i:i + sequence_length])
    Y.append(labels[i + sequence_length - 1])
X = np.array(X)
Y = np.array(Y)

# Split the data into training and testing sets
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42)

# Build the LSTM model
model = Sequential()
model.add(LSTM(50, activation='relu', input_shape=(sequence_length, features_scaled.shape[1])))
model.add(Dense(1))

# Compile and train the model (optimizer and loss are assumptions; the original text was truncated)
model.compile(optimizer='adam', loss='mse')
model.fit(X_train, Y_train, epochs=20, batch_size=32, validation_data=(X_test, Y_test))
```
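The gate mechanics listed above can be made concrete with a single LSTM cell's forward step, written out in plain NumPy. This is a minimal sketch using the standard LSTM equations; the function name, stacked weight layout, and toy dimensions are illustrative assumptions, not taken from the original text.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step. W: (4*H, D), U: (4*H, H), b: (4*H,).
    Gate order in the stacked weights: input, forget, candidate, output."""
    H = h_prev.shape[0]
    z = W @ x_t + U @ h_prev + b
    i = sigmoid(z[0:H])          # input gate: what new information to store
    f = sigmoid(z[H:2 * H])      # forget gate: what to discard from the cell
    g = np.tanh(z[2 * H:3 * H])  # candidate cell content
    o = sigmoid(z[3 * H:4 * H])  # output gate: what to expose as hidden state
    c_t = f * c_prev + i * g     # update the memory cell
    h_t = o * np.tanh(c_t)       # compute the new hidden state
    return h_t, c_t

# Toy example: input dimension 3, hidden size 2, unrolled over 5 time steps
rng = np.random.default_rng(0)
D, H = 3, 2
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(5):
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
```

Because the cell state `c_t` is carried forward and only modified multiplicatively by the forget gate, information can persist across many time steps, which is exactly the long-term dependency behaviour exploited in the dengue-outbreak example above.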