The performance of three empirical models describing white bean yield loss (YL) caused by common ragweed competition was compared using field experiments at Staffa and Woodstock, Ontario, Canada, in 1991 and 1992. One model was based on both weed density and relative time of weed emergence; the other two described yield loss as a function of weed leaf area relative to that of the crop. The model based on both weed density and relative time of emergence described the data sets best. The predicted maximum yield loss (A) and the parameter for relative time of weed emergence (C) varied across locations and years, whereas the yield loss at low weed density (I) was relatively consistent across locations and years. Using thermal time (base temperature = 10 °C) rather than calendar days did not change the overall fit of the model but reduced the value of the parameter for relative time of weed emergence (C). The two-parameter leaf area model, which accounts for maximum yield loss (m), fitted the data better than the one-parameter model. The relative damage coefficient (q) varied with the time of leaf area assessment, location, and year. Values of q calculated from the relative leaf area growth rates of the crop and weed were similar to observed values.
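The abstract names the model parameters (A, I, C, q, m) but not the equations. A minimal sketch of plausible functional forms, assuming the standard rectangular-hyperbola and relative leaf area formulations from the weed-competition literature (the exact equations used in the study are an assumption here; YL, Lw, and m are expressed as fractions of 0-1):

```python
import math

def yl_density_emergence(N, T, A, I, C):
    """Rectangular-hyperbola model driven by weed density N and relative
    time of weed emergence T (positive T = weed emerged after the crop).
    A: maximum yield loss; I: yield loss per weed at low density;
    C: parameter scaling the effect of relative emergence time.
    This form is an assumption, not quoted from the study."""
    return I * N / (math.exp(C * T) + I * N / A)

def yl_leaf_area_one_param(Lw, q):
    """One-parameter model: Lw is weed leaf area relative to total
    (crop + weed) leaf area; q is the relative damage coefficient."""
    return q * Lw / (1.0 + (q - 1.0) * Lw)

def yl_leaf_area_two_param(Lw, q, m):
    """Two-parameter model adding m, the maximum yield loss as Lw -> 1."""
    return q * Lw / (1.0 + (q / m - 1.0) * Lw)
```

Under these forms, `yl_leaf_area_two_param(1.0, q, m)` returns m, and increasing T lowers the predicted loss in the density model, consistent with the parameter roles described in the abstract.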
The relationship between q and accumulated thermal time was linear but varied with location and year. As management tools, models based on relative leaf area have an advantage over models based on density and relative time of emergence: the level of weed infestation need only be assessed once, whereas density and emergence time require frequent observations. However, the difficulty of assessing both crop and weed leaf area quickly and accurately may limit the practical application of leaf area models. None of the empirical models accounted for year-to-year variation in environmental conditions.
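The thermal time against which q was regressed is typically accumulated as growing degree-days above the 10 °C base given in the abstract. A minimal sketch using the simple daily-averaging method (the study's exact accumulation method is an assumption):

```python
def thermal_time(daily_min, daily_max, base=10.0):
    """Accumulate growing degree-days above `base` (degrees C) from paired
    daily minimum and maximum temperatures, using the mean-temperature
    method; days whose mean falls below the base contribute zero."""
    total = 0.0
    for tmin, tmax in zip(daily_min, daily_max):
        total += max((tmin + tmax) / 2.0 - base, 0.0)
    return total
```

For example, two days with means of 12 °C and 16 °C contribute 2 + 6 = 8 degree-days above a 10 °C base.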