## Introduction

Stable isotope analysis (SIA) is a popular tool for analysing food webs, and mixing models are used to quantify the links between consumers and their dietary sources based on their stable isotopic signatures (Fry 2006; Layman *et al*. 2012). The numerous assumptions and methods of SIA necessitate an evaluation of a proposed mixing model before that model is run. It is assumed, for example, that every source in a mixing model contributes to the consumer's diet and that the model adequately explains the isotopic signature of every consumer. It is further assumed that an analysis has correctly estimated or applied: trophic enrichment factors, isotopic turnover rates, variance in source signatures and the aggregation of dietary sources to constrain model outputs (Cabana & Rasmussen 1996; Phillips, Newsome & Gregg 2005; Boecklen *et al*. 2011; Layman *et al*. 2012). Violations of these assumptions (such as a missing dietary source) have often been assessed using the ‘point-in-polygon’ approach (e.g. Benstead *et al*. 2006); that is, for mass balance to be established in a linear mixing model, a consumer's isotopic signature must lie within a polygon bounding the signatures of the sources (Phillips & Gregg 2003). If a consumer's signature falls outside this polygon, then no solution exists for that consumer, and one or more assumptions of the mixing model have been violated.
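The point-in-polygon test itself is straightforward to sketch. The following is a minimal illustration in δ13C–δ15N space using the standard ray-casting algorithm; all source and consumer coordinates are invented for the example and are not data from any study cited here:

```python
def point_in_polygon(point, polygon):
    """Ray-casting test: is `point` inside `polygon` (a list of (x, y) vertices)?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Toggle whenever an edge crosses the horizontal ray extending right of the point.
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

# Hypothetical mean source signatures (d13C, d15N), taken as polygon vertices.
sources = [(-30.0, 2.0), (-24.0, 8.0), (-18.0, 3.0)]

print(point_in_polygon((-24.0, 4.0), sources))  # True: mass balance is possible
print(point_in_polygon((-15.0, 9.0), sources))  # False: no solution exists
```

With more than three sources, the bounding polygon would be the convex hull of the source signatures rather than the raw vertex list.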

A recent development is the advent of Bayesian mixing models that formally incorporate the uncertainty of trophic enrichment factors (the consistent difference in an isotopic ratio between a predator and its prey) and of isotopic signatures (e.g. SIAR, Parnell *et al*. 2010; MixSIR, Moore & Semmens 2008). Modelling this uncertainty has created more powerful and realistic models (Layman *et al*. 2012), but it has also made model evaluation more difficult. This is because the source data are distributions, not average values, and there is no longer a discrete mixing polygon to assess for point-in-polygon (as exists in IsoSource, Phillips & Gregg 2003). The Bayesian mixing models will calculate source contributions even when a model is very unlikely to satisfy point-in-polygon for every consumer (Parnell *et al*. 2010). Points can be visually inspected in reference to confidence intervals or to the area enclosed in lines joining confidence ellipses (Hopkins & Ferguson 2012), but this assessment is largely qualitative, is not practical in three dimensions and may not accurately reflect the mixing space generated by frequentist sampling methods. A quantitative tool for point-in-polygon is needed to allow an *a priori* evaluation of these mixing models, by indicating when the data are unlikely to create the mixing geometry needed for a logical model.

Our method evaluates these models by generating a large number of possible mixing polygons with a Monte Carlo simulation, using the same uncertainty incorporated in the Bayesian mixing models, and testing each polygon for point-in-polygon (i.e. the ability to establish mass balance). The proportion of iterated polygons that satisfy the point-in-polygon assumption is calculated for each consumer and is interpreted as the frequentist probability that a consumer's isotopic signature can be explained by the proposed model. This probability provides a quantitative basis for consumer exclusion (any consumer outside the 95% mixing region, for example), for the correction of trophic enrichment factors (to ensure all consumers are within the 95% mixing region) or for outright model rejection. The mixing polygon simulation is demonstrated using an SIA of an Australian freshwater food web.
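The simulation loop described above can be sketched in a few dozen lines. This is a minimal illustration, not the published implementation: the function name `polygon_probability`, the independent bivariate-normal sampling of each source and all numeric inputs are assumptions made for the example.

```python
import numpy as np

def _convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(map(tuple, points))
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def _inside(point, polygon):
    """Ray-casting point-in-polygon test."""
    x, y = point
    inside = False
    for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

def polygon_probability(consumer, source_means, source_sds, n_iter=5000, seed=None):
    """Fraction of simulated mixing polygons that enclose the consumer's signature.

    source_means / source_sds hold one (d13C, d15N) pair per source, with trophic
    enrichment factors already added to the means and their uncertainty folded
    into the standard deviations.
    """
    rng = np.random.default_rng(seed)
    means = np.asarray(source_means, dtype=float)
    sds = np.asarray(source_sds, dtype=float)
    hits = 0
    for _ in range(n_iter):
        sampled = rng.normal(means, sds)  # one draw per source per iteration
        hull = _convex_hull(sampled)      # the iterated mixing polygon
        if len(hull) >= 3 and _inside(consumer, hull):
            hits += 1
    return hits / n_iter
```

Under this sketch, a consumer whose probability falls below 0.05 lies outside the 95% mixing region and would be a candidate for exclusion, or a signal that trophic enrichment factors or the source list need revisiting.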