Food fraud detection using explainable artificial intelligence

Recently, the global food supply chain has become increasingly complex and has grown in scale. From farm to fork, the performance of food-producing systems is influenced by significant changes in the environment, population and economy. These changes may increase food fraud and safety hazards and hence harm human health. Adopting artificial intelligence (AI) technology in the food supply chain is one strategy to reduce these hazards. Although the use of AI has been rising in numerous fields, such as precision nutrition, self-driving cars, precision agriculture, precision medicine and food safety, much of what AI systems do remains a black box due to their poor explainability. This study covers numerous use cases of food fraud risk prediction using explainable artificial intelligence (XAI) techniques, such as LIME, SHAP and WIT. We aimed to interpret the predictions of a machine learning model with the aid of these technologies. The case study was performed on a food fraud dataset built from adulteration/fraud notifications retrieved from the Rapid Alert System for Food and Feed and the economically motivated adulteration (EMA) database. A deep learning model was built on this dataset, and the XAI tools were investigated on the proposed model. Both the features and the shortcomings of the current XAI tools in the food fraud area are presented.


| INTRODUCTION
The world is becoming more interconnected every day, and consumers now purchase goods shipped from all over the world. The globalization of food production, processing and marketing has increased some food hazards, such as infectious diseases (Käferstein et al., 1997). This globalization manifests itself in many industries and involves numerous stakeholders; one crucial industry where its effects cannot be overlooked is the transportation of goods, particularly food. As such, people are generally concerned about food fraud and safety (Nogales et al., 2020). Food is one of the most traded commodities worldwide. The global food chain is advancing rapidly and becoming more complex as the world's population grows and markets become more accessible. These factors make food fraud and safety an increasingly important issue.
The availability of sufficient and healthy food is a crucial requirement for maintaining human health (Fukuda, 2015). From farm to fork, every step in the food production process poses a risk to consumer health, which creates the need for better safeguards. In response to these concerns, the European Commission declared rigorous food safety rules a top priority (Mutukumira & Jukes, 2003). To carry out this policy, the European Food Safety Authority (EFSA) collects scientific data and offers unbiased scientific advice on food-related risks. The Rapid Alert System for Food and Feed (RASFF) is made up of the EFSA, the national food safety agencies of the EU Member States, the European Commission, Norway, Liechtenstein, Iceland and Switzerland. The RASFF facilitates information sharing among its members and offers a service to guarantee that urgent notifications are handled properly to prevent food safety problems. Additionally, RASFF has an online platform, the RASFF Portal, that makes it easier to register incidents when health problems are discovered. Manual approaches to identifying these hazards take a lot of effort and are prone to mistakes; as a result, automated smart techniques must be used. AI techniques can be used to foresee these risks and significantly lessen their impact. In this study, we built a predictive model powered by AI and ran our experiments using a dataset generated from notifications entered into the RASFF system.
Machine learning (ML), a branch of artificial intelligence, enables software to predict outcomes more accurately without being explicitly programmed. ML algorithms predict new output values using historical data. Deep learning (DL), a branch of ML, uses algorithms modelled on biological neurons. Powerful processors, enormous datasets and adaptable software libraries have helped make DL one of the most widely used ML approaches today. DL is defined as models with numerous processing layers that develop representations of the data at various degrees of abstraction (LeCun et al., 2015). Modern AI-powered systems provide cutting-edge solutions to a wide range of challenging issues in areas like health care, nutrition, agriculture, energy and transportation that impact people's lives. These systems require little human involvement, and their error margin is small. Recently, DL algorithms have achieved the highest accuracy for challenging problems including face recognition, object identification and image segmentation. Although these algorithms provide incredibly precise responses, it is often challenging for people to understand how the machine arrived at a conclusion. Because of this, AI researchers have put a lot of effort into developing tools, processes and strategies that allow people to understand and trust the results of ML-based models. This research area is called explainable artificial intelligence (XAI). It is used to describe the reasons behind and mechanisms underlying the biases of an AI-based model (Arrieta et al., 2020).
It has become necessary to comprehend how AI-based models in such systems make decisions (Goodman & Flaxman, 2017). Knowing why a model generates a particular prediction may be just as important as knowing how accurate its forecasts are. Even experts, engineers and data scientists find it challenging to evaluate complex models such as ensembles (e.g., multiple classifier systems) or DL models (Lundberg & Lee, 2017).
The field of XAI includes a number of techniques and procedures that let users of ML-based models trust and understand their outputs. It is generally accepted that improving our understanding of a system can aid in resolving any potential issues it may have. As a result, XAI is crucial to the deployment of AI-based models. While simpler, production-rule-based AI systems, such as those that use if-then-else statements, can be clearly understood, models based on deep neural networks (DNNs) are too complex to be easily explained (Arrieta et al., 2020). DNNs are criticized for being black-box models because they have millions of parameters and multilayer nonlinear architectures (Castelvecchi, 2016). The interpretability of DL models is essential due to the large range of applications they are used in. Assuring the model's fairness, enhancing its resilience and guaranteeing that significant inputs drive the conclusion are three advantages of treating interpretability as a separate design consideration (Arrieta et al., 2020).
Given the aforementioned challenges, the goal of this research is to develop a DL model that predicts the type of food fraud and to increase the interpretability of the proposed DL model. The data utilized for this article were published in a previous study (Bouzembrak & Marvin, 2015) that used a probabilistic Bayesian network model. Because that model did not use DL approaches, the previous study did not concentrate on explainability. In this research, XAI tools had to be investigated because we wanted to boost the model's overall performance and address the explainability issues.
The contributions of this study are as follows:
1. This study is the first to use XAI methods on a real-world dataset related to the field of food fraud and safety.
2. The XAI tools' weaknesses and advantages have been emphasized. This will spur more study in this area and aid in the development of original XAI tools for the food safety domain.
3. Professionals in food fraud and safety can benefit from the findings of this study to enhance the food supply chain.
The following sections are arranged as follows: Section 2 provides background information and related work. The dataset's characteristics are explained in Section 3, along with the methods and techniques used for pre-processing and building DNNs. The XAI tools are explained in Section 4. Section 5 presents the results and Section 6 the discussion. The conclusion and future work are presented in Section 7.

| BACKGROUND AND RELATED WORK
This section presents DL tools first, followed by XAI tools.

| Deep learning
DL is a part of ML based on neural network models. Most of the important features are learned by a DL technique via the hidden-layer architecture. Each node in the network represents a different facet of the object of interest, and when those facets are combined, they make up the whole. The strength of each node's connection can be measured using the weight assigned to it, and the weights are adjusted as the model changes across iterations.
One key characteristic of DL algorithms is that they need a huge number of data points. As a result, the experiments to determine the optimal weights, which take place throughout the training of the network model, require a significant amount of processing resources. The accuracy of findings from DL algorithms can vary depending on the dataset. Moreover, because these models are black-box models by design, the end user cannot immediately see the connection between the input and the output. The explainability of these models must therefore be investigated using a variety of tools, methodologies and approaches.

| Explainable artificial intelligence tools
XAI is an emerging field that offers a number of methods for opening up the opaque nature of models based on ML or DL and creating explanations that are understandable to humans. Researchers in AI and ML are paying more attention to XAI as a result of the considerable advancement in the application of ML and DL techniques. To comprehend black-box models, researchers have created a variety of tools. Local Interpretable Model-Agnostic Explanations (LIME) (Ribeiro et al., 2016) takes any ML model as input and creates explanations of how features contribute to a prediction. SHapley Additive exPlanations (SHAP) (Lundberg & Lee, 2017) summarizes the model's output and identifies the key characteristics that influence the model's decisions. Deep Learning Important FeaTures (DeepLIFT) (Shrikumar et al., 2017) is a model-specific approach for explaining DL models. ELI5 (TeamHG-Memex, 2022) is used to explain several Python implementations of ML models. Skater (Skater, 2022) offers a method for comprehending the learning frameworks of various ML libraries. Machine Learning extensions (MLxtend) (Raschka, 2018) makes it easier to understand the choices made by ML models. InterpretML (Kaur et al., 2020), from Microsoft, helps users better understand ML models. Alibi (Getting Started, 2022) explains and provides information about ML models.
The What-If Tool (WIT) (Wexler et al., 2019) lets practitioners investigate, visualize and probe ML systems. In this work, we used the LIME, SHAP and WIT tools to forecast food fraud and observed their outputs. Significantly more studies have used the SHAP and LIME tools (Aldughayfiq et al., 2023; Dieber & Kirrane, 2022; Kuzlu et al., 2020; Lombardi et al., 2021; Parsa et al., 2020; Sahay et al., 2021; Wenbo et al., 2018). Additionally, a large technology company and undergraduate students investigated the WIT tool for testing purposes, and some significant results were found using this tool (Wexler et al., 2019). These three tools were therefore chosen for this research based on their potential, adaptability and suitability for this situation. Using the information they provide, we want to see the features that affect the DL model's prediction, the behaviour of the model depending on these features, and the characteristics of the dataset, so that we may build a better DL model for early warning and prediction of food safety issues.

| Local Interpretable Model-Agnostic Explanations
LIME is an approach that offers a local interpretable model by approximating any black-box ML model to elucidate each prediction (Ribeiro et al., 2016). This technique perturbs the initial data points and then obtains the associated predictions by feeding the new data points to the underlying model. Finally, LIME trains an interpretable model on these new data points, weighting them based on how close they are to the original point. Each original data point may then be explained using this explanation model. In Equation (1), $G$ represents the family of potentially interpretable models, $g \in G$ is the explanation model of instance $x$, $\Omega(g)$ is the complexity of the interpretable model $g$, the proximity measure $\pi_x$ defines the size of the neighbourhood around instance $x$, and $f$ is the complex model being explained:

$$\xi(x) = \operatorname*{arg\,min}_{g \in G} \; L(f, g, \pi_x) + \Omega(g) \tag{1}$$

The explanation model $g$ minimizes $L(f, g, \pi_x)$, which measures how close the explanation is to the prediction of the original model $f$, while keeping the complexity $\Omega(g)$ low enough to be interpretable.
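The procedure above can be sketched in a few lines: perturb the instance, query the black-box model, weight the perturbations by proximity, and fit a simple weighted linear surrogate. This is a minimal didactic sketch of the idea, not the LIME library itself; the toy black-box function, the Gaussian perturbation scale and the kernel width are stand-in assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_sketch(f, x, n_samples=500, kernel_width=0.75, rng=None):
    """Minimal LIME-style local surrogate for a black-box function f
    around instance x (1-D array). Returns the surrogate's coefficients."""
    rng = np.random.default_rng(rng)
    # Perturb the instance with Gaussian noise
    Z = x + rng.normal(scale=1.0, size=(n_samples, x.size))
    y = f(Z)                                  # black-box predictions
    # Exponential kernel: closer perturbations receive larger weights
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / (kernel_width ** 2))
    # Weighted linear surrogate g approximating f near x
    g = Ridge(alpha=1.0).fit(Z, y, sample_weight=w)
    return g.coef_

# Toy black box in which feature 0 dominates the output
f = lambda Z: 3.0 * Z[:, 0] + 0.1 * Z[:, 1]
coefs = lime_sketch(f, np.array([1.0, 2.0]), rng=0)
```

The surrogate's coefficients recover the locally dominant feature, which is exactly the kind of per-instance attribution LIME reports.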

| SHapley Additive Explanation
The Shapley value, which measures a player's marginal contribution, was developed in game theory (Shapley, 1953). The explainable AI tool SHAP operates simply by using Shapley values, which define the average feature contribution to the prediction (Christoph, 2020). SHAP is defined in Equation (2):

$$g(z') = \phi_0 + \sum_{i=1}^{M} \phi_i z'_i \tag{2}$$

where $g$ is the interpretation model, $z' \in \{0,1\}^M$ shows whether the corresponding feature is observed (1) or not (0), $\phi_i \in \mathbb{R}$ is the attribution value (Shapley value) of each feature, and $M$ is the number of inputs. The basic idea of Shapley values is to calculate a player's contribution for each subset and then simply average over all these contributions (Christoph, 2020). SHAP offers global and local interpretability and also provides a model-agnostic approximation of SHAP values.
The collective SHAP values are used to demonstrate how much each predictor contributes, either positively or negatively, to the target variable. The importance of a feature $j$ is defined by SHAP as shown in Equation (3):

$$\phi_j = \sum_{S \subseteq N \setminus \{j\}} \frac{|S|!\,(|N| - |S| - 1)!}{|N|!} \left[ f_x(S \cup \{j\}) - f_x(S) \right] \tag{3}$$

where $f_x(S)$ is the output of the model to be interpreted using the set of features $S$, and $N$ denotes the complete set of all features. The contribution $\phi_j$ of feature $j$ is determined as the average of its contribution over all possible permutations of the feature set.
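The subset averaging in Equation (3) can be made concrete with a brute-force computation for a toy set function with two features. This is a didactic sketch only; SHAP itself uses far more efficient approximations (e.g., Kernel SHAP), and the toy model here is an assumption.

```python
from itertools import combinations
from math import factorial

def exact_shapley(f, n_features):
    """Brute-force Shapley values for a set function f(S),
    where S is a frozenset of feature indices (Equation 3)."""
    phi = []
    for j in range(n_features):
        contrib = 0.0
        others = [i for i in range(n_features) if i != j]
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                S = frozenset(subset)
                # Weighting |S|! (|N| - |S| - 1)! / |N|! from Equation (3)
                weight = (factorial(len(S)) * factorial(n_features - len(S) - 1)
                          / factorial(n_features))
                # Marginal contribution of feature j to coalition S
                contrib += weight * (f(S | {j}) - f(S))
        phi.append(contrib)
    return phi

# Toy additive model: feature 0 contributes 2, feature 1 contributes 1
f = lambda S: 2.0 * (0 in S) + 1.0 * (1 in S)
phi = exact_shapley(f, 2)
```

For the additive toy model the Shapley values recover each feature's own contribution, and they sum to the difference between the full-coalition and empty-coalition outputs (the efficiency property).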

| What if Tool
The What-If Tool (WIT) is a visual interface that makes it easier to comprehend any dataset and the results of black-box ML models (Wexler et al., 2019). WIT was released by the PAIR (People + AI Research) initiative. With this tool, trained ML models may be tested effectively and simply, without writing any code. The WIT interface consists of three tabs: the data point editor, performance and features overview. WIT offers a number of benefits, including the ability to compare multiple models within the same workflow, edit a data point to see how the ML model performs, contrast data points with counterfactuals, visualize inference outputs, arrange data points by similarity, and test algorithmic fairness constraints.

| MATERIALS AND METHODS
The dataset and procedures utilized in this study are described in this section. This dataset was created for the authors' prior paper (Bouzembrak & Marvin, 2015), which used Bayesian networks to address the issue of food fraud.

| Data description
The dataset consists of two numerical features and 27 categorical features. Cases of food fraud were gathered from the RASFF and EMA databases.
The occurrences of food fraud were chosen from the time frame of 1 January 2000 to 31 December 2015. Seven different categories were used to examine the cases of food fraud reported to the EMA and RASFF databases. Consequently, the dataset contains seven different fraud types, as illustrated in Table 1.
The features were identified with the help of four food fraud specialists, the EMA and RASFF databases, and brainstorming techniques (Bouzembrak & Marvin, 2015). The dataset includes features such as the year, month and data source of the food fraud case, as well as the name, category, price in year and month, and country of origin of the product. There are also some missing values in the dataset. These values have to be either imputed or eliminated before applying the DL approach. A Python library called DataWig (Biessmann et al., 2019) aids in the imputation of missing values for categorical features in datasets. DataWig combines automatic hyperparameter tuning and DL feature extractors. It is a reliable and scalable method for imputing missing values that may be used with tables containing a variety of data types. The StandardScaler-based imputation approach (Scikit-Learn Documentation, n.d.) was used to impute the numeric features year and month; this technique substitutes a value computed by the method for each missing value.
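As a minimal, library-agnostic illustration of the numeric imputation step described above (the paper uses DataWig for categorical columns and a scikit-learn step for numeric ones), scikit-learn's `SimpleImputer` fills each gap with a statistic computed from the column. The column names and values here are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Hypothetical slice of the dataset with missing numeric values
df = pd.DataFrame({"year": [2001, 2005, np.nan, 2012],
                   "month": [3, np.nan, 7, 11]})

# Replace each missing value with the column mean
imputer = SimpleImputer(strategy="mean")
df[["year", "month"]] = imputer.fit_transform(df[["year", "month"]])
```

After this step the table contains no missing values, so downstream encoding and scaling can proceed.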

| Methods
Following generic pre-processing, one-hot encoding is used to encode categorical variables so that ML techniques can be applied. The majority of ML models only function with numerical variables; all inputs and outputs are expected to be numerical. As a result, all categorical feature variables in the dataset must be encoded to convert them into numeric features. Integer encoding and binary encoding are two examples of encoding methods, but the one-hot encoding approach is the most widely used strategy for encoding categorical information.
According to this method, each category is converted into a vector of 1s and 0s signifying whether the feature is present or not. One-hot encoding was used during dataset pre-processing; Table 2 provides an illustration.
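The transformation illustrated in Table 2 can be reproduced with pandas, which creates one 0/1 indicator column per category. The sample values mirror Table 2's `Product_category` example; the rows themselves are hypothetical.

```python
import pandas as pd

# Hypothetical Product_category column, as in Table 2
df = pd.DataFrame({"Product_category": ["Fish", "Meat", "Nuts", "Fish"]})

# One-hot encode: one 0/1 indicator column per category
encoded = pd.get_dummies(df, columns=["Product_category"], dtype=int)
# Resulting columns: Product_category_Fish, Product_category_Meat,
# Product_category_Nuts
```

Each row now contains exactly one 1 across the indicator columns, marking which category was present.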
Many ML methods work better with features that are roughly equal in scale and close to normally distributed. When features are approximately the same size across the dataset, they become equally important and simpler for most ML models to process. The StandardScaler method of the ML toolkit scikit-learn is used to scale the two numerical features of the dataset. This approach standardizes a feature by removing the mean and scaling to unit variance.
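The standardization step above is a one-liner with scikit-learn; the feature values below are hypothetical stand-ins for the dataset's two numeric columns.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Two numeric features (e.g., year and month), standardized to
# zero mean and unit variance as described above
X = np.array([[2001.0, 3.0],
              [2005.0, 7.0],
              [2012.0, 11.0]])
X_scaled = StandardScaler().fit_transform(X)
```

After scaling, each column has mean 0 and standard deviation 1, so no single feature dominates by virtue of its units.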
In this paper, we developed a DNN model made up of multiple perceptron layers, also known as a multi-layer perceptron (MLP), a class of feed-forward supervised neural networks. An MLP contains three kinds of layers: input, hidden and output. It has a single input layer, a single output layer and an arbitrary number of hidden layers. The network's neurons activate nonlinearly using functions such as the sigmoid and the rectified linear unit (ReLU). The DNN algorithm was selected in this research because it is capable of learning multiple levels of input representations, it can handle complex problems, and it is flexible enough to be applied to a wide range of tasks. Also, its learning curve is not steep.
To improve the model's accuracy, DNNs include a number of hyperparameters that can be adjusted. Grid search is one hyperparameter-tuning technique that identifies a model's optimal hyperparameters. There are also several other techniques, such as random search, genetic algorithms, gradient-based methods and simulated annealing.
Since our multi-class dataset contains seven different types of food fraud, the output layer of the DNN model is constructed with seven neurons. Softmax is chosen as this output layer's activation function to give each output neuron a value between 0 and 1; this number is a likelihood score for the network's output, and the outputs of all the neurons in the output layer sum to 1. Given that the challenge is a multi-class classification task, categorical cross-entropy is chosen as the loss function. Keras, a high-level API built on top of the open-source library TensorFlow, is used to create the MLP models. The factors behind selecting the Keras framework are as follows:
1. It is easy to use and supports fast experimentation without the need for low-level code.
2. It is powerful and scalable because it is built on top of TensorFlow.
3. Commonly used algorithms are available within the framework, which shortens the implementation time.
Table 3 displays the model's final configuration as well as the evaluated hyperparameters.
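The softmax property described above (seven outputs in [0, 1] that sum to 1) can be illustrated with a minimal NumPy sketch, independent of the Keras model itself; the logit values below are hypothetical.

```python
import numpy as np

def softmax(z):
    """Softmax activation: exponentiate and normalize the logits.
    Subtracting the max is a standard numerical-stability trick."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Raw scores from the seven output neurons (hypothetical values,
# one per food fraud type)
logits = np.array([1.2, 0.3, -0.5, 2.1, 0.0, -1.0, 0.7])
probs = softmax(logits)
# probs sums to 1; the largest logit receives the largest probability
```

The predicted fraud type is then simply the class with the highest probability, which is why a seven-neuron softmax layer pairs naturally with the categorical cross-entropy loss.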
TABLE 1 Food fraud types.

TABLE 2 One-hot encoding example for Product_category variables (original data: Product_category; transformed data: Product_category_Fish, Product_category_Meat, Product_category_Nuts).

In this section, the implementation of the XAI tools is examined.

| LIME
First, the LIME package was installed. A function is created to return the target variable's predicted probability given the set of attributes. The next step is to list the feature names. The parameters used to generate a LIME explanation are as follows:
• Training data values
• All feature names
• Class names: the target values
• Categorical features: the list of categorical columns
• Categorical names: the list of categorical column names
• Mode: the prediction mode (classification)
• Kernel width: a parameter that controls the linearity of the induced model; the wider the width, the more linear the model
Figure 1 shows how to create the LIME Explainer.
The explanations for the selected values in the test dataset are then obtained from LIME as the last step. We select particular observations from the test dataset in order to extract the probability values for each class. LIME provides the justification for the probability assignment, and we can compare the predicted probabilities with the target variable's actual class. To create an explanation figure, the following criteria may be used:
1. An instance picked from the test dataset.
2. The function that returns the predicted probability for the target variable.
3. The number of features desired to be displayed in the figure.
4. The number of top predictions to be shown, which specifies the top class(es) with the highest probability from the prediction.

| SHAP
We began by installing the SHAP package. A function is created to return the target variable's predicted probability given the set of attributes. To estimate SHAP values for any model using a specially weighted local linear regression, we used Kernel SHAP. The training data and the function are used to build the Kernel Explainer. By providing the test dataset and the number of samples as parameters, as shown in Figure 3, we calculated the Shapley values.
The Explainer uses a number of techniques to generate explanation figures. For instance, the summary plot can be made in the way depicted in Figure 4. The SHAP charts shown in Figure 4 are as follows:
1. Summary plot: based on Shapley values computed from the test dataset, we created a summary plot with a bar plot type. The bar chart shows the average impact of each attribute on the outcome of the prediction, and the relevance of the features is also highlighted by the Shapley values. To create a summary plot, variables such as the Shapley values and the number of target features desired to display prediction probability values can be used.
FIGURE 3 SHAP explainer creation and Shapley values calculation.
FIGURE 4 Creation of SHAP explanation.

The left-hand part (A) displays the Product Fraud Type prediction probabilities, the middle part (B) displays the LIME explanations of the selected features, and the right-hand part (C) shows the original feature values. The findings show that tampering, expiration date and origin labelling were correctly predicted as product fraud types. Tampering is the predicted product fraud type for instance 12, as depicted in Figure 7. Data source: EMA; press index: 20.0-29.9; human development index: 0.6000-0.6999; product category: Meat; and price year: 60-8568.4 have favourable effects; origin country: South Africa; risk index: 60-69; and technology index: 30.0-39.9 have adverse effects.

| Applying LIME XAI tool to food fraud prediction
In Figure 8, the product fraud type is predicted as expiration date. Origin country = Belgium; data source = RASFF; press index = 0.0-9.9; transparency index = 50-59; and product name = Cereals, in orange, have positive impacts on the prediction.
FIGURE 5 WIT visualization creation.
FIGURE 6 Confusion matrix.
FIGURE 7 LIME explanation of instance 12.
Figure 9 shows that the product fraud type is predicted as origin labelling. Origin country = Mexico; data source = EMA; press index = 70.0-79.9; and product name = Fish, in purple, have positive impacts, while GDP = 4146-12,745; demand increase = Yes; and trade volume = 3-298,932, in grey, have negative impacts on the prediction.
In the vicinity of the data point of interest, the LIME tool builds a basic interpretable model that approximates the complicated model locally.
The incorrectly predicted instance is shown in Figure 10. Although CED is the actual type of product fraud, illegal importation has a higher prediction probability according to the model. Additionally, LIME enables us to view the prediction probabilities for all labels; Figure 10 shows two product fraud types.
FIGURE 8 LIME explanation of instance 15.
FIGURE 9 LIME explanation of instance 16.
FIGURE 10 LIME explanation of instance 98.

Additionally, SHAP offers force plots for local interpretability of the models for each data point. The force plots in Figures 12, 13 and 14 were produced with the SHAP tool using the same instances previously utilized with LIME. The outcomes show how easily a single model prediction may be explained. Features coloured red push the model score higher, and features coloured blue push it lower; the wider a feature's band, the greater its impact on the prediction. Figure 12 shows that data source = EMA increases the prediction value while risk index = 60-69 decreases it.
In Figure 13, data source = RASFF, origin country = Belgium and fraud profitability = Low increase the prediction value.
In Figure 14, data source = EMA and origin country = Mexico increase the prediction value, while supply chain index = 3.50-3.99 reduces it.
FIGURE 11 SHAP summary plot.
FIGURE 12 SHAP explanation for instance 12.

| Applying WIT XAI tool to food fraud prediction
In the features tab of WIT, we can view the distribution of values for each feature in the dataset. Figure 15 shows the features with imbalanced distributions, sorted by non-uniformity. The figure shows that fraud complexity has the unique value Easy, indicating that it is not distributed equally. Demand increase is also not uniform. Additionally, the distributions of trade volume, price month, price year and RASFF ratio are not uniform.
The data point editor tab can be used to better understand how the model behaves. We can see how the model modifies its decision by introducing a series of modifications, or "counterfactuals". The data point with the most similar feature values and the opposite prediction is shown here. Figure 16 shows the chosen case (in turquoise) and its counterfactual data point (in blue). The closest counterfactual example is otherwise the same, but the values for product name, fraud profitability and data source differ.
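WIT's nearest-counterfactual view can be approximated by hand: among the points whose predicted label differs from the chosen case, pick the one at the smallest feature distance. This is a toy sketch of the idea with hypothetical data; WIT itself offers configurable L1/L2 distances and works on the real feature space.

```python
import numpy as np

def nearest_counterfactual(X, preds, i):
    """Index of the point closest to X[i] (L2 distance) whose
    predicted label differs from preds[i]."""
    d = np.linalg.norm(X - X[i], axis=1)
    d[preds == preds[i]] = np.inf   # discard same-prediction points (incl. i)
    return int(np.argmin(d))

# Toy data: four points with binary predictions
X = np.array([[0.0, 0.0],
              [0.1, 0.0],
              [2.0, 2.0],
              [0.2, 0.1]])
preds = np.array([0, 0, 1, 1])
j = nearest_counterfactual(X, preds, 0)   # closest point predicted 1
```

Comparing the chosen case with `X[j]` then shows which small feature changes would flip the model's decision, which is exactly what the data point editor visualizes.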
The 2D histogram in Figure 17 shows that product name = Fish-Seafood is mostly labelled with tampering and origin labelling as the food fraud type in the EMA data source, while it is mostly labelled HC in the RASFF data source.
FIGURE 13 SHAP explanation for instance 15.
FIGURE 15 WIT features tab.
FIGURE 14 SHAP explanation for instance 16.
FIGURE 17 WIT data point editor (Table 2).
Each of these tools has its own restrictions, benefits and drawbacks. All of them support binary/multi-class classification models. WIT has a very user-friendly interface; testing the model and visualizing the dataset are among its advantages. We also report the execution times of each XAI tool in Table 4. WIT and LIME were quite quick, whereas SHAP required the most time to execute; their execution times are proportionate to the tools' functionality. In Table 5, we also compare the XAI tools along several dimensions.

| DISCUSSION
According to a number of publications, food fraud and safety have become more challenging, and there are a number of problems that need to be resolved. For instance, the intricate supply chain must adapt to changing conditions, and AI technology should be used to the greatest extent possible to stop food fraud. To gain further insights, the current study created a DNN model and applied XAI technologies. One conclusion from the analysis is that the data source feature is significant among the others, in both a globally and a locally explicative sense.
We can see that only the RASFF data source returns data instances labelled as CED, expiration date, HC or illegal importation. Additionally, data instances classified as theft and resale are extracted only from the EMA data source. This observation suggests that the data source feature has the most influence on the model's forecast. Product category and origin country are also regarded as features that primarily influence the model's forecast.
Observations were also made while the XAI tools were being implemented. Some of them are in line with earlier findings about LIME and SHAP reported by Kuzlu et al. (2020). A total of 100 data instances were used in the Shapley value calculation; the computation of Shapley values takes longer the more data there is, so not all of the data instances could be used. Given the same data instances and models, SHAP always produces the same results. In contrast, LIME's results, generated at different times, varied from one another; this is caused by the algorithm's perturbation of randomly chosen data instances. Both SHAP and LIME generate graphs that offer local or global explanations. WIT, on the other hand, lacks this capability; instead, it offers an interface that enables users to assess the behaviour of models, view counterfactuals, and view the data distribution of the dataset. WIT produced some results for us; however, it calls for manual work.
It is fairly simple to use, and this tool does not require any specific model. Users can quickly explore a dataset using WIT.
TABLE 4 Execution time of the XAI tools.

Figure 2 shows how to create the LIME explanation figure.
Instead of a global surrogate model, LIME concentrates on a local interpretable model. Figures 7, 8, 9 and 10 show the local explanations for different test data examples. LIME creates a notebook-format output with three parts that contain all of the results for a given instance.
Figure 11 groups the features according to how important they are to the model. The graph demonstrates that the feature data source has a greater impact on predicting the food fraud type tampering than HC. The most influential features overall are the data source, origin country and product name.
Downloaded from https://onlinelibrary.wiley.com/doi/10.1111/exsy.13387 by University Of Twente Finance Department, Wiley Online Library on [28/06/2023]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License.

| Observations

Three different XAI tool types, LIME, SHAP and WIT, were examined for the food fraud domain in this work, and several observations were made; computation time, code complexity, usability and explainability type aspects were considered.

FIGURE 16 WIT data point editor tab.
Figures A1-A7 represent the ROC and PR curves at different categorization levels.
FIGURE A4 Receiver operating characteristic (ROC) and precision-recall (PR) curves for illegal importation.
FIGURE A5 Receiver operating characteristic (ROC) and precision-recall (PR) curves for tampering.
FIGURE A6 Receiver operating characteristic (ROC) and precision-recall (PR) curves for origin labelling.
TABLE 5 Comparison of the XAI tools.