Predictive assessment of ochratoxin A accumulation in grape juice-based medium by Aspergillus carbonarius using neural networks

Authors


Misericordia Jiménez, Departamento de Microbiología y Ecología, Facultad de Biología, Universidad de Valencia, Dr. Moliner 50, E-46100, Burjassot, Valencia, Spain.
E-mail: misericordia.jimenez@uv.es

Abstract

Aims:  To study the ability of multi-layer perceptron artificial neural networks (MLP-ANN) and radial-basis function networks (RBFNs) to predict ochratoxin A (OTA) concentration over time in grape-based cultures of Aspergillus carbonarius under different conditions of temperature, water activity (aw) and sub-inhibitory doses of the fungicide carbendazim.

Methods and Results:  A strain of A. carbonarius was cultured in a red grape juice-based medium. The input variables to the network were temperature (20–28°C), aw (0·94–0·98), carbendazim level (0–450 ng ml−1) and time (3–15 days after the lag phase). The output of the ANNs was OTA level determined by liquid chromatography. Three algorithms were comparatively tested for MLPs. The lowest error was obtained by MLP without validation. Performance decreased when hold-out validation was applied, but the risk of over-fitting was also lower. The best MLP architecture was determined. RBFNs provided similar performances but a substantially higher number of hidden nodes was needed.

Conclusions:  ANNs are useful to predict OTA level in grape juice cultures of A. carbonarius over a range of aw, temperature and carbendazim doses.

Significance and Impact of the Study:  This is a pioneering study on the application of ANNs to forecast OTA accumulation in food-based substrates. These models can be similarly applied to other mycotoxins and fungal species.

Introduction

Phytopathogenic fungi produce plant diseases and, consequently, yield reductions in crops worldwide, which give rise to important economic losses. These fungi may also negatively affect the quality and safety of plant-derived food and animal feed, undermining both consumer confidence and producer profitability. Safety problems are mainly due to secondary metabolites of fungi, particularly mycotoxins. Mycotoxins are compounds with toxic properties, causing acute and chronic effects collectively known as mycotoxicoses (Krogh 1978; Bennett and Klich 2003; Richard 2007). Many species of Fusarium, Aspergillus, Alternaria and Penicillium produce mycotoxins of concern in human and animal health (Abramson 1997; D’Mello and Macdonald 1997; Panigrahi 1997; Smith 1997). Ochratoxin A (OTA) is a mycotoxin produced by various species of Aspergillus and Penicillium, such as A. ochraceus, A. carbonarius, A. tubingensis, P. verrucosum and P. nordicum (Van der Merwe et al. 1965; Schlatter et al. 1996; Petzinger and Ziegler 2000; Larsen et al. 2001; Magan and Aldred 2005; Medina et al. 2005). OTA causes intestinal fragility, nephrotoxicity, immunosuppression, teratogenicity, carcinogenicity in male mice and rats, and cytotoxicity in hepatic cell lines, and induces iron deficiency anaemia (Lea et al. 1989; Bondy and Armstrong 1998; Castegnaro et al. 1998; Dirheimer 1998; JECFA 2001). The International Agency for Research on Cancer (IARC) classified this mycotoxin into group 2B as a possible human carcinogen (IARC 1993). This mycotoxin has been detected in human blood (Creppy et al. 1991; Burdaspal and Legarda 1998; Ueno et al. 1998; Peraica et al. 2001), food (mainly cereal grains, pork, poultry, beans, pulses, peanuts, dried fruits, bread and coffee) (Jørgensen 1998; Pittet and Royer 2002) and drinks (milk, beer and wine) (Skaug et al. 2001; Mateo et al. 2007).
Thus, great concern has arisen worldwide about this mycotoxin, and limits for its concentration in a variety of foods have been regulated (European Commission 2005). Aspergillus carbonarius has been reported to be mainly responsible for contamination of wine, grapes, grape juice and vine fruits with OTA (Magan and Aldred 2005; Medina et al. 2005). Many surveys point out that wine is often contaminated with OTA, but the level depends on factors such as geographical origin, vitivinicultural and oenological practices, class of wine, etc. There are differences between northern and southern European regions, and the problem mainly concerns wine-producing areas located in the Mediterranean basin, where roughly 40–60% of samples were found to contain OTA at levels in the range 0·01–15·6 μg l−1 (Battilani et al. 2006; Mateo et al. 2007). The maximum level of OTA in wine has been set at 2·0 μg kg−1 (European Commission 2005). This alcoholic beverage is considered the main vehicle of OTA intake in the human diet after cereals.

Wine is an important beverage in world trade. The OTA limits established for wines by the EU to protect consumer health pose a threat to wine exports. Thus, minimizing the OTA content of wine is a goal for winemakers. However, OTA is not produced in wine but in grapes in vineyards by fungi (mainly A. carbonarius), and the toxin is carried into wineries at harvest with contaminated, mouldy grapes. OTA is transmitted to grape juice at the mashing stage and then to wine, where its level remains roughly constant owing to the stability of the toxin (López de Cerain et al. 2002).

OTA production depends on factors such as temperature, water activity (aw), time, fungal strain and the presence of fungicides in the substrate where fungal growth occurs (Mitchell et al. 2004; Belli et al. 2004, 2006; Medina et al. 2007a,b; Tassou et al. 2007).

Carbendazim is a fungicide widely used to control fungi that cause plant diseases, and it has been applied in vineyards against infections produced by several fungi. At sub-inhibitory doses, this fungicide has been shown to enhance the accumulation of OTA by A. carbonarius in vitro (Medina et al. 2007a).

Prediction of the accumulation of mycotoxins in food is a challenging task because of the variety of factors influencing their production. Concerning OTA production by A. carbonarius, the most widely used forecasting models are response surfaces, which are plots of multiple linear equations obtained by regression from in vitro experimental data. The predictor variables are usually temperature and incubation time or aw (Belli et al. 2004; Tassou et al. 2007). The prediction error associated with these models is not usually reported. In a previous study (Medina et al. 2007a), the effect of carbendazim on OTA accumulation was assessed by means of a polynomial model, although the quality of fit was poor.

Artificial neural networks (ANNs) are highly interconnected network structures consisting of many simple processing elements that can perform many parallel computations for data processing (Hervás et al. 2001). In the field of predictive microbiology, they have been applied to develop models that can estimate growth parameters of bacteria or fungi (Hajmeer et al. 1997; Hervás et al. 2001; Jeyamkondan et al. 2001; Lou and Nakai 2001; García-Gimeno et al. 2002; Panagou et al. 2007), heat resistance of bacteria (Esnoz et al. 2006) or production of bacterial metabolites (Poirazi et al. 2007). Coupling of ANNs to electronic noses or gas chromatograph-mass spectrometers has found application in classifying cereal grains as mouldy or healthy, and in identifying spoilage fungi and bacteria (Gibson et al. 1997; Evans et al. 2000; Magan et al. 2003; Pavlou et al. 2004). Application of ANNs to predict the accumulation of mycotoxins in commodities and manufactured foods may be useful to prevent these toxins from entering the food chain. So far, no study on the ability of ANNs to forecast OTA accumulation in food or beverages has been reported.

The aim of this study was to improve the predictive capability of a previous polynomial model (Medina et al. 2007a) by exploring and comparing the potential ability of various ANN models to predict OTA accumulation over time by a strain of A. carbonarius in a grape-based medium at different temperatures, carbendazim concentrations and aw values. The design of predictive ANN models can be fundamental to forecasting toxin accumulation by fungi contaminating agricultural commodities under known environmental conditions.

Materials and methods

Fungal strain

An OTA-producing strain of Aspergillus carbonarius isolated from wine grapes was selected for the study. This strain is kept in the fungal collection of the Department of Microbiology and Ecology at the University of Valencia with reference Ac25.

Carbendazim solutions

The formulation containing carbendazim used in this study was Carzim® (50% w/v a.i.; Fitolux S.A., Madrid, Spain). A stock emulsion containing 100 mg of carbendazim l−1 was prepared by diluting the formulation in water. This emulsion was further diluted with water and homogenised. Appropriate volumes of carbendazim solution were added to the medium to provide the following levels: 0 (negative control), 50, 250, 350 and 450 ng ml−1. These doses were sub-inhibitory and allowed the fungus to grow and to produce the toxin (Medina et al. 2007a).

Cultures

In vitro assays to study the production and accumulation of OTA were carried out in a grape juice medium. Fresh grape juice was prepared by pressing red wine grapes (Vitis vinifera, Bobal variety). This medium, made from natural grapes, differs from the synthetic media used in other studies on OTA production by A. carbonarius (Belli et al. 2004, 2006; Marin et al. 2006; Mitchell et al. 2006; Tassou et al. 2007). No preservative was added. It was divided into three portions and each portion was modified by mixing 20% grape juice with 80% of a water/glycerol mixture of variable composition to provide aw values of 0·98, 0·96 or 0·94 in the final media (after addition of 2% w/v of agar). The pH was adjusted to 4·5. After agar addition, the media were autoclaved at 115°C for 30 min. Then, appropriate volumes of carbendazim solution were added at around 45°C to obtain the desired fungicide concentrations in the final medium, which were 0 (control), 50, 250, 350 and 450 ng ml−1. After vigorous shaking, the media were poured into Petri dishes. The fungal strain to be inoculated had been previously incubated on potato dextrose agar (7 days, 28°C) and this culture was used to prepare a spore suspension containing 1 × 106 conidia ml−1 in saline solution. An aliquot (2 μl) of this suspension was used to inoculate the grape juice-based media, which were statically incubated in closed chambers with beakers containing water-glycerol solutions providing the same aw (Llorens et al. 2004). The incubation temperatures were 20, 25 and 28°C, chosen on the basis of reports indicating that the optimum temperature for OTA production is about 20°C (Mitchell et al. 2004; Marin et al. 2006) and of the seasonal weather conditions during grape ripening and harvest in the crop areas. The lag phase for growth was taken as the time necessary to grow a colony with an average diameter equal to or slightly greater than 5 mm.

Ochratoxin A determination

Once the lag phase was reached, the level of OTA accumulated in the grape juice cultures was determined daily, from the 3rd up to the 15th day after the lag phase, in two Petri dishes given the same treatment. About 20 g of each fungal culture (substrate plus fungal biomass) was cut into small pieces and extracted with 50 ml of methanol (Sigma-Aldrich, Alcobendas, Spain) in an orbital shaker (110 rev min−1, 1 h, 25°C) in the dark. The extracts were filtered through 5–10 g of Celite 545 (Sigma-Aldrich). One ml of filtered extract was centrifuged at 1100 rev min−1 for 15 min. The supernatant was carefully transferred to an amber vial for LC analysis.

The LC system consisted of a Waters 600E system controller, a Millipore Waters 717 plus autosampler and a Waters 470 scanning fluorescence detector (Waters, Milford, MA, USA). Excitation and emission wavelengths were 330 and 460 nm, respectively. Chromatographic separation took place at 35°C in a C18 Phenomenex Gemini column (150 × 4·6 mm, 5 μm particle size) (Phenomenex, Macclesfield, UK), provided with a guard column of the same material. The mobile phase was acetonitrile-water-acetic acid (44 : 55 : 1, v/v/v) at a flow rate of 1 ml min−1. Concentrations of OTA were determined using a regression equation obtained with standard solutions of OTA run under the same conditions as the samples with the help of Millennium 4.0 software (Waters). Mean OTA recovery was 89% and the limit of detection (based on a signal-to-noise ratio of 3 : 1) was <0·01 μg OTA g−1 medium. Averaged results of duplicate measurements were used as single output data for ANN designs. Undetectable levels were considered as zero for computation purposes.

Neural network models

All values in the dataset obtained after the determination of OTA in the cultures were scaled between −1 and +1 and used to train and test different ANN models, which were evaluated with regard to their performance. Multilayer perceptrons (MLPs) with one and two hidden layers and radial-basis function networks (RBFNs) were assayed.
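The linear scaling of all variables to [−1, +1] can be sketched as follows; `scale_minmax` and `unscale_minmax` are illustrative names (in the MATLAB toolbox used in the study, `mapminmax` would normally perform this step), and the inverse mapping is what recovers predictions in the original OTA units:

```python
import numpy as np

def scale_minmax(x, lo=-1.0, hi=1.0):
    """Linearly map each column of x to [lo, hi].
    Returns the scaled data plus the per-column min/max needed to invert."""
    xmin, xmax = x.min(axis=0), x.max(axis=0)
    scaled = lo + (hi - lo) * (x - xmin) / (xmax - xmin)
    return scaled, xmin, xmax

def unscale_minmax(scaled, xmin, xmax, lo=-1.0, hi=1.0):
    """Invert scale_minmax to recover the original units (e.g. ng OTA g-1)."""
    return xmin + (scaled - lo) * (xmax - xmin) / (hi - lo)
```

The min/max of each column must be stored at training time so that the same transform can be applied to test samples and network outputs can be mapped back.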

The functions used to assess the prediction accuracy and goodness of the different models were the mean-square error (MSE), the root mean-square error (RMSE) and the standard error of prediction (SEP) (Hervás et al. 2001; Lou and Nakai 2001; Garcia-Gimeno et al. 2005). Training was optimized according to the Neural Network Toolbox for matlab 7.5 (The Mathworks Inc., Natick, MA, USA) default criterion, which assumes that the lower the MSE, the better the model simulates the data. The coefficient of determination (R2) was also calculated. Other indices related to the goodness of fit are the bias factor (Bf) and the accuracy factor (Af), developed initially by Ross (1996) for bacterial growth models and further applied by several authors to ANN models (García-Gimeno et al. 2005; Zurera-Cosano et al. 2005; Panagou et al. 2007; Panagou and Kodogiannis 2009). Both Bf and Af are equal to one in a perfect model. Bf indicates by how much, on average, a model over- or under-predicts the observed values, depending on whether it is >1 or <1. Af indicates by how much, on average, the predicted values differ from the observed data. Neither index can be computed when any observed value is zero (Zhao et al. 2001).
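These indices can be sketched as follows (`fit_indices` is an illustrative name): Bf and Af are computed from mean log ratios of predicted to observed values as in Ross (1996), SEP is taken as the RMSE expressed as a percentage of the mean observed value, and the log-ratio factors are left undefined whenever a zero occurs, matching the limitation noted above:

```python
import numpy as np

def fit_indices(obs, pred):
    """MSE, RMSE, SEP (%) and, where defined, Ross's bias and accuracy factors."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    mse = np.mean((obs - pred) ** 2)
    rmse = np.sqrt(mse)
    sep = 100.0 * rmse / obs.mean()          # standard error of prediction, %
    if np.any(obs <= 0) or np.any(pred <= 0):
        bf = af = None                       # log ratios undefined at zero
    else:
        r = np.log10(pred / obs)
        bf = 10 ** r.mean()                  # >1 over-, <1 under-prediction
        af = 10 ** np.abs(r).mean()          # average fold-difference
    return mse, rmse, sep, bf, af
```

Note that Bf can mask compensating errors (over- and under-predictions cancel in the mean log ratio), which is why Af is reported alongside it.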

The data set was split into a training data subset used to build the model, a validation subset used to determine the stopping point to avoid or minimise over-fitting (early stopping method using hold-out validation), and a test subset. The test subset is an independent set of examples, not previously shown to the ANN, and only used to confirm the generalisation ability (performance) of the selected network (Bishop 1995). Hold-out validation can be omitted and then the data set is divided into only two subsets (training and test). The two approaches were assayed in this work. The more accurately an ANN model can predict actual OTA levels that were omitted in the ANN training process, the better the model is. Negative output values were forced to be equal to zero because concentrations cannot be negative.

Single-layer perceptrons

Figure 1 shows a simple diagram of a MLP. The number of layers and the number of nodes in each layer define the architecture. By removing one hidden layer in Fig. 1, the topology corresponds to a single-layer perceptron. In our case, single-layer perceptrons were built using a layer with four input nodes, one hidden layer of n neurones and an output layer with one neurone. Their architecture can be denoted 4 : n : 1, where n is the number of neurones in the hidden layer.

Figure 1.

 Topology of a MLP ANN with an input layer of m inputs, two hidden layers containing n1 and n2 nodes, and an output layer that, in our case, delivers only one value (OTA accumulation). Weights and biases have been omitted in the plot to reduce the complexity. The hidden layers apply a hyperbolic tangent (tansig) function while the output layer response is linear.

The input layer sends information to the neurones in the hidden layer; the inputs are weighted, summed and a bias term is added. A non-linear activation function (usually a sigmoid) is applied to the result of this sum to provide the output. We used a hyperbolic tangent activation function because it usually speeds up learning. The outputs from the hidden neurones are sent to the layer of output neurones, where they are totalled to produce the final output (predicted value). Outputs are compared with the target values and error parameters are computed. In perceptrons, signals flow through the net from inputs to outputs in a forward direction (from left to right in Fig. 1), whereas error signals flow in the backward direction (Haykin 1999). In the backward pass, the error observed between the outputs and the target responses is used to adapt the synaptic weights, which were randomly assigned when training started. This process continues throughout training to minimize the error parameters, usually the MSE.

The input signals were the experimental values for temperature, aw, concentration of carbendazim and time. The output was the concentration of OTA in the culture medium. The number of nodes in the hidden layer (n) was varied from two up to 30, adding two nodes in each test. The value of MSE served as a primary tool to assess network performance. The neural network toolbox built into the matlab 7.5 package was used to design the models. The following algorithms were used: Levenberg-Marquardt (LM), Resilient Propagation (RP) and Bayesian Regularization (BR). For each value of n, models were systematically tested using the three algorithms, both without and with validation, which was performed by the hold-out method.
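The forward/backward cycle described above can be sketched in a minimal 4 : n : 1 network with a tanh ("tansig") hidden layer and a linear output. This is an illustrative sketch only: the data are synthetic stand-ins for the four scaled inputs and the scaled OTA output, and plain batch gradient descent stands in for the toolbox's LM/RP/BR training algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in data: four scaled inputs (temperature, aw,
# carbendazim dose, time) and one scaled output; not the study's real data.
X = rng.uniform(-1, 1, size=(200, 4))
y = np.tanh(X @ np.array([0.5, 1.0, 0.8, 1.2]))[:, None]

# 4 : n : 1 perceptron: tanh hidden layer, linear output layer.
n = 26
W1 = rng.normal(0.0, 0.5, (4, n)); b1 = np.zeros(n)
W2 = rng.normal(0.0, 0.5, (n, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)      # hidden-layer activations
    return h, h @ W2 + b2         # linear output (predicted OTA, scaled)

mse_init = float(np.mean((forward(X)[1] - y) ** 2))
lr = 0.1
for epoch in range(1000):
    h, out = forward(X)
    err = out - y                              # error signal (forward pass)
    # backward pass: gradients of the MSE with respect to weights and biases
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)         # back-propagated through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
mse_train = float(np.mean((forward(X)[1] - y) ** 2))
```

The same loop structure applies whatever the weight-update rule; LM, RP and BR differ only in how the backward-pass error is turned into a weight update.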

The total data set included 585 samples and was split into different subsets. Without hold-out validation, the training set included 500 randomly chosen samples and the test set the remaining 85 samples. In this case, training was carried out with 100 epochs or iterations, as this proved to be a good trade-off between processing time and quality of the estimation. With hold-out validation, training was stopped (early stopping) when the validation MSE reached a minimum value. The training, validation and test sets were composed of 500, 40 and 45 samples, respectively. The performance or cost function to minimize during training was the MSE for both LM and RP. For the BR algorithm, however, the cost function is modified to enhance generalisation and combines the MSE with the sum of squares of the network weights and biases, both multiplied by regularization parameters (MacKay 1992a,b).
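The random partitioning of the 585 samples can be sketched as follows; `random_split` is an illustrative helper, not part of the study's code:

```python
import numpy as np

def random_split(n_samples, sizes, rng):
    """Randomly partition sample indices into subsets of the given sizes,
    e.g. (500, 85) or (500, 40, 45); every sample lands in exactly one subset."""
    assert sum(sizes) == n_samples
    idx = rng.permutation(n_samples)
    return np.split(idx, np.cumsum(sizes)[:-1])

rng = np.random.default_rng(1)
# with hold-out validation: training / validation / test
train, val, test = random_split(585, (500, 40, 45), rng)
# without hold-out validation the same helper gives a two-way split:
train2, test2 = random_split(585, (500, 85), rng)
```

Returning index arrays rather than the data itself makes it easy to redraw the partition on every run, as the random sub-sampling scheme described below requires.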

For both approaches, 20 runs with different training/test sets or training/validation/test sets and different initial weights were performed for each architecture. Data set splitting was randomized in each run (random sub-sampling). The results were averaged to avoid or minimize any bias from the initial random choice of samples and weights. The goal was to obtain the lowest value of the MSE when the network was applied to the test subset (MSEtest). Usually, the RMSE follows a similar pattern, although slight differences can appear. This last parameter and the SEP have been used by some authors to evaluate the magnitude of the difference between the observed and predicted values of predictive models (Garcia-Gimeno et al. 2002, 2005; Panagou et al. 2007; Panagou and Kodogiannis 2009). Moreover, Bf and Af (Ross 1996) were applied as validation indices to the data sets.

Sensitivity analysis according to Goh (1995) was used to indicate the relative influence of each predictor variable on the output. If a variable has low relative importance, it can be omitted from the model without significant loss of performance.
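Goh's (1995) sensitivity analysis partitions the absolute connection weights among the inputs (Garson's algorithm); a simplified sketch for a single-hidden-layer network might look like this (function and argument names are illustrative):

```python
import numpy as np

def garson_importance(w_ih, w_ho):
    """Relative importance (%) of each input for a one-hidden-layer net,
    from the absolute input-hidden weights (w_ih, shape n_in x n_hid) and
    hidden-output weights (w_ho, shape n_hid) by Garson/Goh partitioning."""
    w_ih, w_ho = np.abs(w_ih), np.abs(w_ho)
    # share of each hidden node's output weight attributed to each input
    contrib = (w_ih / w_ih.sum(axis=0, keepdims=True)) * w_ho
    imp = contrib.sum(axis=1)
    return 100.0 * imp / imp.sum()
```

With four inputs of roughly equal influence, as reported below, each importance would come out near 25%.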

Multilayer perceptrons

The MLP feed-forward structure is considered the most widely used neural network paradigm and has proven nonlinear modelling capabilities. The knowledge of the network is stored in the weights connecting the artificial neurones (Panagou et al. 2007).

The architecture of two-layer perceptrons was 4 : n1 : n2 : 1, where ni is the number of neurones in the ith hidden layer (i = 1, 2). Not every possible combination of neurones in the hidden layers was tried. Instead, a selection was made by giving an arbitrary even value to n1 and two to five even values to n2. The n2 values ranged from two to n1 (n2 ≤ n1), with the total number of hidden neurones capped at 32 to keep the networks from becoming too large. We began with n1 = 10 and n2 = (n1, n1–2, n1–4,…), and continued with n1 = 12, 14, …20. For each combination (n1, n2), training was performed with the three previously indicated algorithms (RP, LM and BR) using either random sub-sampling or hold-out validation, as explained for single-layer perceptrons. The numbers of data in the subsets for training, validation and testing were the same as previously indicated for single-layer perceptrons. The objective was again to find the model with the lowest MSEtest.
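The (n1, n2) pairs explored can be enumerated as in the following sketch (`candidate_architectures` is an illustrative name); each pair would then be trained with the three algorithms, with and without hold-out validation:

```python
def candidate_architectures(n1_values=(10, 12, 14, 16, 18, 20), max_total=32):
    """Enumerate the (n1, n2) pairs tried for 4 : n1 : n2 : 1 perceptrons:
    even n2 from 2 up to n1, with n1 + n2 capped at max_total (32)."""
    combos = []
    for n1 in n1_values:
        for n2 in range(2, min(n1, max_total - n1) + 1, 2):
            combos.append((n1, n2))
    return combos
```

The cap explains the blank cells in Table 1: for n1 = 20, for instance, n2 stops at 12 because 20 + 14 would exceed 32 hidden neurones.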

Radial-basis function networks

This type of network consists of one layer of input nodes, one hidden radial-basis function layer and one linear output layer. The hidden layer, containing n neurones, computes the vector distance (or radius) between the hidden-layer weight vectors (which can be interpreted as the centres of the radial-basis functions of each neurone) and the input vectors. The resulting distances are multiplied by the hidden-layer biases of each neurone and then a RBF (usually, a Gaussian function) is applied to the result. This function is shaped by a parameter called the spread, which affects the sensitivity of the approximation by changing the width of the RBF. Spreads were set at 0·2, 0·4, 0·6, 0·8 and 1·0, and the best value was chosen on the basis of the minimum average MSEtraining and MSEtest for a RBFN with a high number of nodes. Spread values higher than 1·0 resulted in a decrease in performance. The number of hidden neurones, n, was varied in steps of five up to 80; moreover, a 200-node network was tried to test the effect of a very large number of neurones. The data set for training included 500 samples while the test data set included the 85 remaining samples. As in the case of MLPs, for each architecture the results of 20 runs were averaged, and in each run the 500 samples of the training set were randomly taken, the test set comprising the remainder. The same statistics as in the case of the perceptrons were calculated.
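A minimal Gaussian RBFN of the kind described can be sketched as follows. This is an assumption-laden simplification: centres are drawn from the training points and the output weights are fitted by least squares, whereas MATLAB's `newrb` adds centres incrementally; the bias b = 0·8326/spread mirrors the toolbox's `radbas` convention, which makes the response fall to 0·5 at a radius equal to the spread. The data are synthetic stand-ins, not the study's measurements:

```python
import numpy as np

def train_rbfn(X, y, n_hidden, spread, rng):
    """Minimal Gaussian RBF network: centres drawn from the training points,
    widths derived from the spread, linear output fitted by least squares."""
    centres = X[rng.choice(len(X), n_hidden, replace=False)]
    b = 0.8326 / spread
    def hidden(Xq):
        # distances from query points to every centre, then the Gaussian RBF
        d = np.linalg.norm(Xq[:, None, :] - centres[None, :, :], axis=2)
        return np.exp(-(b * d) ** 2)
    H = np.hstack([hidden(X), np.ones((len(X), 1))])   # append output bias
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return lambda Xq: np.hstack([hidden(Xq), np.ones((len(Xq), 1))]) @ w

# Surrogate data standing in for the 500 scaled training samples.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(500, 4))
y = np.sin(X @ np.array([1.0, 0.5, 0.8, 1.2]))
predict = train_rbfn(X, y, n_hidden=80, spread=1.0, rng=rng)
mse_train = float(np.mean((predict(X) - y) ** 2))
```

Because the hidden layer is fixed once the centres are chosen, only a linear problem remains, which is why RBFN training is typically faster than MLP training, as noted in the Results.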

Results

The levels of OTA in the cultures varied from undetectable (<0·01 ng g−1) to 5980 ng g−1. Figure 2 shows a general view of the data trend at 20°C, at the three aw values tested and at the five carbendazim levels. The maximum OTA level was attained on day 15 after the lag phase, in cultures incubated with 450 ng carbendazim ml−1 at 20°C and 0·98 aw. Toxin levels usually increased with time, although in some cases a plateau or even a decrease was observed after 7 days. The highest levels were found at 0·98 aw and decreased as aw decreased. The lowest assayed temperature (20°C) favoured OTA accumulation at the three aw values in the absence of carbendazim. In general, OTA accumulation was favoured at the tested carbendazim levels. Once the general trends of OTA accumulation were depicted, prediction of the OTA content was undertaken using ANN models in an attempt to obtain the best forecasting accuracy. All the variables that significantly affect OTA accumulation can be chosen as predictor variables in the course of ANN design.

Figure 2.

 Change of OTA levels (ng of OTA g−1 of culture, i.e. substrate plus fungal biomass) with time after the lag phase (model lines) in grape-based solid media inoculated with A. carbonarius and incubated at 20°C, at three aw values (0·98, 0·96 and 0·94) and five levels of carbendazim: 0 (control), 50, 250, 350 and 450 ng ml−1.

Single-layer perceptrons

Figure 3 shows the variation of MSEtest with the number of nodes or neurones (two to 30) in the hidden layer for single-layer perceptrons trained with the three algorithms, without and with validation by the hold-out method. When the BR algorithm without validation was used, the MSEtest values were generally lower and the variation was very smooth. The lowest value was 0·0029 and corresponded to 26 nodes. Slightly higher values were obtained when working with fewer (16–24) nodes. The RP algorithm gave the worst results in terms of MSEtest values. The LM algorithm without validation gave similar MSEtest values for a certain number of nodes (26), but sudden shifts were noticeable. When hold-out validation was applied, the lowest MSEtest (0·0043) was reached with the LM algorithm and n = 30. Validation usually increased the MSEtest values compared with those obtained without validation. However, without validation, over-fitting to the training data usually occurs. The RMSE values did not show exactly the same patterns, but the optimal architectures would not be very different if this statistic were used as the selection criterion. It is worth taking this similarity into account, as the RMSE has been employed in various reports to optimize ANN models in predictive microbiology (Lou and Nakai 2001; García-Gimeno et al. 2005; Zurera-Cosano et al. 2005; Esnoz et al. 2006; Panagou et al. 2007). The Bf and Af for the optimized 26-node single-layer perceptron (test set) were 0·977 and 1·210, respectively.

Figure 3.

 Variation of MSEtest with the number of nodes in the hidden layer obtained by training single-layer perceptron ANNs using three algorithms [Resilient Propagation (RP), Levenberg–Marquardt (LM) and Bayesian Regularization (BR)], without hold-out validation and with hold-out validation (denoted RP-V, LM-V and BR-V).

Sensitivity analysis performed according to Goh (1995) indicated that all four input variables have roughly the same relative importance (25% each) with slight variation depending on the particular ANN tested.

Multi-layer perceptrons

Table 1 lists the MSEtest values computed for MLPs with two hidden layers by averaging 20 independent runs. The lowest MSEtest value (0·0018) was obtained by a network with the architecture 4 : 18 : 12 : 1 trained by the BR algorithm without hold-out validation. Similar but slightly higher values were obtained with the architectures 4 : 14 : 12 : 1 and 4 : 20 : 10 : 1. Any of these networks could be applied with similar performance. Using hold-out validation, the BR algorithm also provided the network with the lowest MSEtest (0·0026; Tables 1 and 2), which was, however, higher than that obtained with the same architecture (4 : 20 : 6 : 1) when hold-out validation was omitted. Table 2 shows MSEtest and R2test values for a few of the best architectures. R2test generally increases as MSEtest decreases. The two other algorithms provided higher MSEtest values, especially the RP algorithm, which, in our case, was the worst choice of the three.

Table 1.   MSEtest values found with two-layer perceptron ANNs trained by RP, LM and BR algorithms without and with hold-out validation

Algorithm*  V†   n1‡    n2§
                        2       4       6       8       10      12      14      16
RP          No   10     0·0521  0·0403  0·0379  0·0334  0·0314
RP          Yes  10     0·1009  0·0761  0·0709  0·0674  0·0457
RP          No   12     0·0480  0·0353  0·0349  0·0315  0·0252  0·0291
RP          Yes  12     0·0848  0·0746  0·0632  0·0576  0·0824  0·0617
RP          No   14     0·0436  0·0327  0·0298  0·0267  0·0240  0·0220  0·0211
RP          Yes  14     0·0722  0·0560  0·0484  0·0575  0·0603  0·0533  0·0470
RP          No   16     0·0410  0·0348  0·0245  0·0262  0·0214  0·0247  0·0163  0·0222
RP          Yes  16     0·0939  0·1013  0·0592  0·0656  0·0296  0·0470  0·0495  0·0487
RP          No   18     0·0419  0·0291  0·0279  0·0200  0·0200  0·0187  0·0187
RP          Yes  18     0·0686  0·0716  0·0482  0·0526  0·0348  0·0340  0·0340
RP          No   20     0·0390  0·0303  0·0242  0·0221  0·0170  0·0196
RP          Yes  20     0·1415  0·0677  0·1388  0·0511  0·0300  0·0358
LM          No   10     0·0983  0·0948  0·0652  0·0702  0·0466
LM          Yes  10     0·0764  0·0619  0·0732  0·0104  0·0056
LM          No   12     0·1724  0·1491  0·0931  0·0217  0·0082  0·0525
LM          Yes  12     0·0739  0·0813  0·0291  0·0068  0·0189  0·0673
LM          No   14     0·1132  0·1025  0·0479  0·0889  0·0124  0·0181  0·0454
LM          Yes  14     0·1138  0·0919  0·0946  0·0691  0·0742  0·0205  0·0082
LM          No   16     0·0253  0·0714  0·0138  0·0451  0·0047  0·0422  0·0632  0·0093
LM          Yes  16     0·2354  0·0997  0·1032  0·0068  0·0136  0·0099  0·1157  0·0084
LM          No   18     0·1037  0·1112  0·0378  0·0176  0·0244  0·0167  0·0171
LM          Yes  18     0·1913  0·0873  0·0161  0·0106  0·0095  0·0080  0·0359
LM          No   20     0·1354  0·0806  0·0439  0·1422  0·1051  0·0456
LM          Yes  20     0·1551  0·1610  0·0851  0·0283  0·0044  0·0060
BR          No   10     0·0051  0·0036  0·0027  0·0025  0·0028
BR          Yes  10     0·0175  0·0072  0·0047  0·0090  0·0264
BR          No   12     0·0043  0·0029  0·0026  0·0026  0·0024  0·0023
BR          Yes  12     0·0109  0·0077  0·0079  0·0096  0·0068  0·0139
BR          No   14     0·0037  0·0027  0·0024  0·0031  0·0024  0·0019  0·0027
BR          Yes  14     0·0078  0·0053  0·0052  0·0116  0·0087  0·0096  0·0083
BR          No   16     0·0030  0·0025  0·0024  0·0025  0·0026  0·0023  0·0023  0·0030
BR          Yes  16     0·0080  0·0057  0·0082  0·0058  0·0089  0·0100  0·0127  0·0148
BR          No   18     0·0027  0·0025  0·0023  0·0030  0·0021  0·0018  0·0021
BR          Yes  18     0·0117  0·0038  0·0122  0·0056  0·0066  0·0144  0·0120
BR          No   20     0·0026  0·0026  0·0022  0·0031  0·0020  0·0025
BR          Yes  20     0·0071  0·0051  0·0028  0·0043  0·0109  0·0111

*RP: Resilient Propagation; LM: Levenberg–Marquardt; BR: Bayesian Regularization.
†V: validation (No: hold-out validation was not performed; Yes: hold-out validation).
‡n1 = number of nodes in the first hidden layer.
§n2 = number of nodes in the second hidden layer.
Table 2.   Some of the better two-layer perceptrons obtained using the minimum MSEtest value as optimization criterion

Algorithm*  Validation†  n1‡  n2‡  MSEtest  R2test
BR          No           18   12   0·0018   0·9982
BR          No           14   12   0·0019   0·9983
BR          No           20   10   0·0020   0·9980
BR          Yes          20    6   0·0028   0·9972
BR          Yes          18    4   0·0038   0·9969
BR          Yes          20    8   0·0043   0·9964
LM          No           16   10   0·0047   0·9957
LM          Yes          20   10   0·0044   0·9955
RP          No           16   14   0·0163   0·9859
RP          Yes          16   10   0·0296   0·9688

*BR: Bayesian Regularization; LM: Levenberg–Marquardt; RP: Resilient Propagation.
†No: hold-out validation was not performed; Yes: hold-out validation.
‡n1 and n2: number of nodes in the first and second hidden layer, respectively.

Figure 4 shows the regression line of predicted against observed OTA levels for the data set used to test the model with architecture 4 : 18 : 12 : 1, which gave the lowest MSEtest value among the MLPs trained without hold-out validation. The high R2 value (>0·999) indicates that the model provides an excellent fit between predicted and observed OTA levels. The slope and intercept fall within the 95% confidence intervals of the theoretical values of one and zero, respectively. Bf and Af were 0·987 and 1·075. These parameters can vary slightly between runs as a result of the random selection of the initial samples and weights. The test data set is excluded from network design calculations.

Figure 4.

 Predicted OTA levels against observed OTA levels obtained by a MLP ANN with n1 = 18 and n2 = 12 nodes, trained by the BR algorithm without hold-out validation for 85 samples not used to train the ANN. The slope and intercept, their standard deviations, R2, bias factor (Bf) and accuracy factor (Af) are shown. Different runs usually provide slightly different parameters.

In the same way, Fig. 5 shows the regression line of predicted against observed OTA levels for the data set used to test the MLP with architecture 4 : 20 : 6 : 1 trained by the BR algorithm (with hold-out validation) to predict OTA accumulation in the culture medium. Only the data points used to test the model (45 in this case) were included and, as before, the slope and intercept were not significantly different (P = 0·95) from the theoretical values corresponding to the equality line. The R2 value (>0·995) indicates a very acceptable performance of this MLP in the range 0–6 μg OTA g−1. Bf was slightly lower than one (0·992) and Af was 1·093. These validation parameters are both nearer to one than those obtained with the best single-layer perceptron.

Figure 5.

 Predicted OTA levels against observed OTA levels obtained by a MLP ANN with n1 = 20 and n2 = 6 nodes, trained by the BR algorithm with hold-out validation for the test dataset (45 samples).

Table 3 lists the means and standard deviations of the slope and intercept of the regression lines of predicted against observed OTA levels obtained by the ANNs with architectures 4 : 18 : 12 : 1 (BR, no hold-out validation) and 4 : 20 : 6 : 1 (BR, hold-out validation) when they were applied to the data sets used for training and test. Other listed parameters are the confidence limits of the means (P = 0·95) for slope and intercept, and the values of R2, Bf and Af. For both ANNs, the theoretical values for a perfect fit (one for slope and zero for intercept) lie within the confidence limits of the means in the case of the test data. When applied to the training data set, only the 95% confidence interval of the mean slope contains the theoretical value (one), while zero lies outside the 95% confidence interval of the mean intercept owing to the low standard deviation. In any case, these models can very accurately predict the levels of OTA accumulated in the tested substrates within the limits of the experimental variables.

Table 3.   Regression parameters for linear plots of predicted against observed OTA levels and bias (Bf) and accuracy factors (Af) obtained by the best two-layer perceptron ANNs without and with hold-out validation after five independent runs
ANN architecture | Algorithm/hold-out validation | Data used | Slope, mean ± SD | Slope, confidence limits of the mean (P = 0·95) | Intercept, mean ± SD | Intercept, confidence limits of the mean (P = 0·95) | R2 | Bf | Af
4 : 18 : 12 : 1 | BR*/No | 500 (training) | 0·99912 ± 0·00072 | 0·99823–1·00001 | 0·00057 ± 0·00028 | 0·000225–0·000915 | 0·9997 | 0·995 | 1·10
4 : 18 : 12 : 1 | BR/No | 85 (test) | 0·99902 ± 0·0075 | 0·98971–1·00833 | −0·0047 ± 0·0083 | −0·014993–0·005665 | 0·9984 | 0·972 | 1·13
4 : 20 : 6 : 1 | BR/Yes | 500 (training) | 0·9963 ± 0·0050 | 0·99015–1·0025 | 0·00113 ± 0·00037 | 0·00068–0·00159 | 0·9991 | 0·999 | 1·16
4 : 20 : 6 : 1 | BR/Yes | 45 (test) | 0·9861 ± 0·017 | 0·9655–1·00663 | 0·0094 ± 0·013 | −0·0072–0·0260 | 0·9970 | 0·980 | 1·17

*BR: Bayesian regularization.

Radial-basis function networks

The results of the different RBFNs proved worse than those attained with MLPs with a similar number of neurones. A spread value of 1 usually performed better than other values. The data set was split into 500 and 85 samples for training and testing, respectively. Twenty random combinations of data were computed and the results were averaged. The MSE values for both training and test decreased sharply when the hidden layer increased from five to 25 neurones. Further addition of nodes produced only a minor decrease in the MSE values (Fig. 6). The lowest MSEtraining and MSEtest values (0·0004 and 0·0025, respectively) were obtained by a 200-node RBFN. Training was more rapid than in the case of MLPs. For the training set, the values of Bf and Af were 1·016 and 1·097, respectively. When applied to the test set, Bf was similar (1·02), while Af was somewhat higher (1·18).

Figure 6.

 Variation of the MSEtraining and MSEtest with the number of nodes in the hidden layer for RBF networks. MSEtraining: solid line; MSEtest: dotted line.

Discussion

The fact that the OTA level varies in cultures once a plateau is reached has been observed under some treatment conditions (Belli et al. 2004). The different shape of the curve of OTA accumulation over time at 20°C and 0·96 aw (Fig. 2) may be related to a delayed toxin production rate with respect to 0·98 and 0·94 aw. It is known that secondary metabolites are produced under stress conditions unfavourable to fungal growth, and at these two aw values the growth rates of the fungus were lower than at 0·96 aw (Medina et al. 2007a).

Multifactor anova was carried out with Statgraphics 5.1 Plus Professional Edition (Statpoint Inc., Herndon, VA, USA) as previously reported (Medina et al. 2007a). All factors (temperature, aw, carbendazim level and time) significantly influenced OTA accumulation in the cultures (P < 0·005). Moreover, the aw × temperature and temperature × time interactions were also significant. Thus, temperature, aw and carbendazim dose influenced OTA accumulation over time in the grape-based cultures. An optimized multiple linear regression equation of the predictor variables explained about 77·4% of the variability of OTA concentration in the cultures (Medina et al. 2007a).

In general, our results concerning the influence of temperature and aw on OTA production agree with those of Mitchell et al. (2004) and Marín et al. (2006), who found that 15–20°C and 0·95–0·98 aw were optimum conditions for OTA production in synthetic grape medium (SGM) at pH 4·0–4·2 by A. carbonarius strains. Our results partially agree with those reported by Tassou et al. (2007), who also studied the influence of temperature and aw on growth and OTA production of two A. carbonarius strains from southern Greece using SGM, but at a lower pH (3·5). According to Tassou et al. (2007), 20°C was the optimum temperature for OTA production; however, they found that OTA production was higher at 0·93–0·96 aw than at 0·98 aw. Other authors have reported optimum OTA production at 0·99 aw (Belli et al. 2006; Leong et al. 2006). In general, at the tested carbendazim levels (sub-inhibitory doses), no substantial decrease of the growth rate of A. carbonarius was found in the cultures after the lag phase (1–7 days), but OTA accumulation was favoured (Medina et al. 2007a). Other fungicides have shown similar effects on OTA production (Battilani et al. 2003; Belli et al. 2006).

Amongst the tested ANN models, the network that provided the minimum error in estimating OTA accumulation by A. carbonarius in grape-based media under the studied conditions was an MLP with the architecture 4 : 18 : 12 : 1 trained without hold-out validation using the BR algorithm. A less complex structure (4 : 14 : 12 : 1) provided similar results in terms of MSEtest. The optimum single-layer perceptron had the structure 4 : 26 : 1 and was obtained by the BR algorithm without hold-out validation. The two other algorithms tested for comparison (LM and RP) provided higher MSEtest values, especially the latter. Therefore, the BR algorithm proved to be the best of the three compared here. Sensitivity analysis showed that the relative importance of the four input variables was similar, which is consistent with the significant influence of all variables on OTA accumulation shown by the anova (Medina et al. 2007a). This means that none of the four input variables can be removed to improve the models.
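The 4 : 18 : 12 : 1 topology described above can be sketched in Python with scikit-learn's `MLPRegressor`. Two caveats: scikit-learn does not implement Bayesian regularization (the BR algorithm of the study), so the L2 penalty `alpha` is only a rough analogue, and the data below are synthetic stand-ins spanning the same input ranges as the experiment:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)

# Synthetic stand-in data: 4 inputs (temperature, aw, carbendazim, time)
# drawn over the experimental ranges; the response is a toy function,
# not the study's measured OTA levels.
X = rng.uniform([20, 0.94, 0, 3], [28, 0.98, 450, 15], size=(585, 4))
y = 0.01 * X[:, 3] * (X[:, 1] - 0.93) * (28 - np.abs(X[:, 0] - 24))

# Inputs on heterogeneous scales are normalized before training
Xs = MinMaxScaler().fit_transform(X)

# 4 : 18 : 12 : 1 topology as in the study's best MLP; alpha stands in
# (imperfectly) for Bayesian regularization
net = MLPRegressor(hidden_layer_sizes=(18, 12), solver="lbfgs",
                   alpha=1e-3, max_iter=5000, random_state=0)
net.fit(Xs[:500], y[:500])                                  # 500 training samples
mse_test = float(np.mean((net.predict(Xs[500:]) - y[500:]) ** 2))  # 85 test samples
```

The 500/85 split mirrors the one used for the networks trained without hold-out validation; with hold-out validation, part of the 500 training samples would be set aside to stop training early.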

The MSEtest values obtained with hold-out validation were higher than those obtained without it. Validated models avoid the drawbacks of overtraining and, in spite of the higher MSE values, over-fitting is less likely and the reported performance is more realistic. In the case of the MLPs validated by the hold-out method, the best architecture (4 : 20 : 6 : 1) was provided by the BR algorithm. The R2 value obtained by linear regression of predicted against observed OTA levels was very high even for the test data set, which indicates that these networks can be very useful to predict OTA accumulation in these cultures. Concerning the optimized MLP, Bf and Af were close to unity, which indicates a good fit between observations and predictions, although they were always closer to one for the training set than for the test set, in agreement with Panagou and Kodogiannis (2009). A slight trend towards under-prediction is apparent from the Bf values (<1) calculated for the MLPs.

RBFN designs can attain performances similar to those of MLPs, although the number of hidden nodes must be much higher (n ≥ 200). Except for the 200-node RBFN, MSE values were higher than those found for the best MLPs assayed. In general, this way of modelling requires a very high number of neurones in the hidden layer to match the performances attained by validated MLPs with 26–30 hidden neurones. However, RBFNs were trained rapidly, usually orders of magnitude faster than MLPs, and they did not show local minima problems during training, in agreement with Panagou et al. (2007). For the 200-node RBFN, Bf was slightly higher than one, which indicates that it tends to over-predict OTA levels on average, in contrast to the best MLP models, which, on average, tend to under-predict. For this same RBFN, Af for the test data set was similar to those provided by the best MLPs.
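The speed advantage of RBFNs comes from the fact that, once the centres and spread are fixed, only the linear output layer is fitted, which is a least-squares problem rather than an iterative gradient search. A minimal sketch of this idea (a simplification: centres are drawn at random from the training data, whereas MATLAB's newrb, used in studies of this kind, adds them incrementally):

```python
import numpy as np

def rbf_design(X, centers, spread=1.0):
    """Gaussian radial-basis activations for every sample/centre pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * spread ** 2))

def train_rbfn(X, y, n_centers=25, spread=1.0, seed=0):
    """Pick random training points as centres and solve the linear
    output layer (weights + bias) by ordinary least squares."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    Phi = np.column_stack([rbf_design(X, centers, spread), np.ones(len(X))])
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, w

def predict_rbfn(X, centers, w, spread=1.0):
    Phi = np.column_stack([rbf_design(X, centers, spread), np.ones(len(X))])
    return Phi @ w

# Toy demonstration on synthetic data (not the study's data set)
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2
centers, w = train_rbfn(X, y, n_centers=50, spread=1.0)
mse = float(np.mean((predict_rbfn(X, centers, w) - y) ** 2))
```

The single least-squares solve has no local minima, which is consistent with the training behaviour reported above; the price, as in the experiments, is that many more hidden units are needed than in a comparable MLP.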

No ANN design to predict OTA levels in any commodity or culture medium has been reported previously, so direct comparison of our results is not possible; this is a pioneering work in this area. Improvement of the predictive ability of these designs in other matrices and with other strains is expected in further research. The prediction accuracy attained with the neural network models is better than that obtained with multiple linear regression models (Medina et al. 2007a). This is in agreement with recent reports in which both methodologies have been compared in food mycology (Panagou et al. 2007; Panagou and Kodogiannis 2009). Response surface models proposed by other authors for the prediction of OTA production by A. carbonarius (Belli et al. 2006; Tassou et al. 2007) used only two predictor variables (usually temperature and aw), and indices of prediction accuracy or precision were not reported.

The main objective of the present work on neural network applications in predictive mycology was to assess the ability of these models (MLP-ANN and RBFN) to predict as accurately as possible the accumulation of OTA over time in grape-based liquid cultures of A. carbonarius under a set of experimental conditions (temperature, aw and carbendazim concentration). The results show that accurate predictions are possible using these models. The final aim of this and future applications in this research area is to prevent this mycotoxin from accumulating in agricultural commodities. The results obtained point to ANNs as useful tools that should be fully explored in the field of food safety.

Acknowledgements

This work was supported by the Spanish ‘Ministerio de Educación y Ciencia’ (projects AGL-2004-07549-C05-02 and AGL2007-66416-C05-01 and a research grant) and the Valencian Government ‘Conselleria de Empresa, Universitat i Ciencia’ (project GV04B-111 and ACOMP/2007/155 and a research grant).
