This article considers how evaluation pertains to taxonomies. Taxonomies and evaluation are both rich concepts, so it is best to start with some definitions that frame our discussion. What do we mean by taxonomy? And what do we mean by evaluation?
Direction on the construction and application of classification schemes such as taxonomies is readily available, but relatively little has been offered on evaluating the schemes themselves and their use to categorize content. A classification scheme can be judged for how well it meets its purpose and complies with standards, and a strong evaluative framework is reflected in S.R. Ranganathan's principles of classification. The degree of certainty of classification decisions depends on objective understanding of the object to be classified, the scope and details of the class, and the coverage and organization of the overall classification scheme. The more complete the information about each class, the more reliable the goodness-of-fit for an object to a class is likely to be, whether chosen by human or machine classifiers. This information comes through definitions, examples, prior use and semantic relationships. The risk of misclassification can be reduced by analyzing the goodness-of-fit of objects to classes and the patterns of missed or erroneous selections.
For seasoned information professionals, the traditional characterization of a taxonomy is as a hierarchical classification scheme. This characterization has expanded in the last 20 years as the taxonomy community and the information environment have expanded. Today the taxonomy community includes people who design taxonomies, those who build systems that support them and those who use them. Our complex information environment may call for a variety of taxonomic structures, including
- flat taxonomies such as lists of languages or lists of countries;
- hierarchical taxonomies such as topical or subject classifications, business classifications or service classifications;
- faceted taxonomies such as metadata or parametric search structures;
- ring taxonomies such as synonyms or authority control data; and
- network taxonomies such as fully relational thesauri or knowledge networks.
Each of these structures has its own set of principles and behaviors. And each requires an evaluation method that aligns with those principles and behaviors. This article focuses on the second type of taxonomy – the traditional classification scheme or hierarchical taxonomy. Classification schemes govern the organization of objects into groups according to explicit properties or values. Classification schemes are in widespread use in everyday life – from grocery stores to websites to personal information spaces.
To evaluate something is to determine or fix a value through careful appraisal. There seem to be two important evaluation points related to classification schemes. The first is an evaluation of the classification scheme itself. The second is how well the scheme supports classification decisions. Each requires its own framework and context.
Evaluating a Classification Scheme
We can evaluate a classification scheme based on its intended goal and purpose and by how well it aligns with professional standards and principles. Goals and purpose will be institution-specific and are best addressed internally by those who design and work with the scheme. Evaluating a classification scheme against professional standards and principles, though, is a process that can be generalized. ISO 11179-2 Information Technology – Metadata Registries (MDR). Part 2: Classification (2005) provides advice for constructing the data structures and relationships used to represent a scheme. ISO 25964 Thesauri and Interoperability with Other Vocabularies and ANSI/NISO Z39.19 (R2010) Guidelines for the Construction, Format, and Management of Monolingual Controlled Vocabularies provide some guidance on the distinction between a thesaurus (a network taxonomy structure) and a classification scheme (a hierarchical taxonomy). Important advice on how to construct classes in a classification scheme, though, derives from other sources, such as S.R. Ranganathan's Prolegomena to Library Classification and discussions of set theory in the mathematical sciences literature. Table 1 provides a sample set of principles for constructing classes derived, but reinterpreted, from Ranganathan for an information technology team. The challenge with using these sources for evaluation is that they generally require substantial translation to be understandable by the teams that are building and appraising the scheme. A full set of interpreted principles is available from the author upon request.
It has generally been my experience that Ranganathan's principles align with, but are more exhaustive than, the popular guidelines found in the usability engineering literature. The challenge, though, is that they are difficult for anyone outside the information science profession to interpret. Adapting and interpreting Ranganathan's principles will provide you with a very strong framework for evaluating the strength of your classification scheme. In fact, the principles convert very nicely into a working checklist for periodic evaluations.
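To make the checklist idea concrete, here is a minimal sketch of how a team might operationalize a set of interpreted principles. The principle wording below is paraphrased and illustrative, not Ranganathan's exact text, and the function names are hypothetical.

```python
# Hypothetical working checklist built from interpreted classification
# principles; each item is answered True/False during a periodic review.

CHECKLIST = [
    "Each class is defined by explicit, stated properties.",
    "Classes at the same level are mutually exclusive.",
    "Sibling classes are divided by one characteristic at a time.",
    "The scheme's classes together cover the intended domain.",
]

def run_checklist(answers):
    """answers: list of booleans, one per checklist item, in order."""
    failed = [item for item, ok in zip(CHECKLIST, answers) if not ok]
    return {"passed": len(CHECKLIST) - len(failed), "failed": failed}

# A review in which one principle is not met:
result = run_checklist([True, True, False, True])
```

Recording the failed items over successive reviews gives an institution a simple, auditable record of where the scheme needs improvement.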
Evaluating Classification Decisions
Evaluating classification decisions is less straightforward. In order to evaluate a classification decision, we need a good description of the classification process. Within the process description we can pinpoint what to evaluate. Classification is a decision-making process that involves making choices. Typically, choices are made in the context of an existing classification scheme by a human or machine classifier and for a given object (Figure 1). In theory, this choice seems like a straightforward decision process. The classifier, who knows the classification scheme, considers what is known about the object and what is known about the possible classes, and decides which class in the scheme is the best fit for the object.
The research on classification evaluation is extensive. The citations in this article are illustrative and not comprehensive. The research tends to focus on several contexts for evaluation.
One fundamental perspective appears to receive less treatment – the simple question of how well the object fits a class in a classification scheme. We suggest there are two simple reasons why this perspective has not received more attention. First, to date most classification is done by people, and we have always assumed that humans make optimal decisions. Second, until recently we have not had the capacity to evaluate decisions in a direct and controlled way. Rather, we have had to evaluate them from an information-retrieval and end-user perspective. What would a direct evaluation of the fit between an object and a chosen class look like?
In an expanding universe of information, classification decisions may be made by people who have neither professional information science training nor subject expertise. Classification decisions may also be made by machine classifiers. Regardless of who makes the decisions, the goal is to ensure that those decisions are optimal. An optimal classification decision reflects the best choice that can be made given the information available at the time. An optimal choice may be defined as a good fit between the object and possible classes. An optimal decision also reduces the risk of misclassification. Misclassification may take two forms. The first form occurs when we assign the object to a class for which it is not a good fit. In this case, the object will be presented to the user in error. The second form occurs when we fail to assign the object to a class for which it is a good fit. In this case, the object will be overlooked because it is not in the class. So our evaluation point for classification decisions is determining how well the classified object aligns with the chosen class(es).
Reducing Uncertainty in the Classification Decision
Information economists tell us that optimal decisions result from reducing uncertainty. One way to improve a classification decision, then, is to reduce the uncertainty in the process. Classification is characterized by several kinds of uncertainty. The classifier may have an incomplete understanding of the object. Uncertainty may be high where the classifier has access only to an abstract or summary of the object. The classifier may be uncertain as to what properties or attributes define the class. Uncertainty about the class may result from an incomplete understanding of the scheme or an incomplete specification of the domain – perhaps not all relevant classes have been defined in the scheme. Perhaps the classifier has imperfect knowledge of all the topics covered in the scheme. The scope and coverage of the classes may not be explicitly available, requiring subjective interpretation by the classifier.
Any of these uncertainties may lead to a suboptimal classification decision. In some cases, making a less than perfect classification decision may be acceptable – perhaps the risk resulting from a classification decision is not so great. If a young reader overlooks a book about the role of snakes in a desert ecosystem for a school project because it was misclassified, the risk is low. In other cases, though, the risk of misclassification may be significant. Where an energetic-materials scientist overlooks an important report of a chemical experiment, national security risks may arise.
These uncertainties are important when we are making a classification decision and when we are evaluating a classification decision. How can we reduce the uncertainty we find in the classification decision? How can we improve the information we need to evaluate classification decisions? The answer is simple: expand the information we have about the object, the individual classes and the overall makeup and purpose of the classification scheme. Table 2 identifies some of the conditions that might produce low, moderate and high levels of risk in classification decisions.
Notice that the more explicit and objective information we have about each of the factors, the lower the uncertainty. Uncertainty is highest when we rely on subjective interpretation of objects, where there is no direct access to objects and where there is no formal and extensive representation of a class. High levels of uncertainty may result in higher probabilities of misclassification.
Today we are not likely to encounter uncertainty about an object because the classifier – human or machine – will have the object in hand or will be able to access it in its entirety in digital form. It is more probable that we will encounter uncertainty about a class. While humans have constructed hierarchical classification schemes for centuries, often they have not provided rigorous characterizations of those classes sufficient to reduce uncertainty in the decision process. For example, classification schemes are often represented
- through narrative scope notes (Figure 2);
- through dictionary definitions (Figure 3);
- by default through subclasses (Figure 4);
- by de facto practice as defined in collections (Figure 5); and
- through associated subject headings and descriptors (Figure 6).
In each of these cases, the classifier has little explicit knowledge to work with. As a result, the choice is made based on a subjective interpretation of the class. A human classifier relies on personal subject knowledge and experience. The machine classifier's choice will rest on simple word matching and relevancy ranking.
Optimizing the Classification Decision Using Extensive Class Descriptions
Our first evaluation criterion for a classification decision was the alignment or goodness of fit of an object to the chosen class. The challenge is that we likely don't have enough information about the class to conduct a good evaluation. Uncertainty rules in this situation. The easiest way to reduce uncertainty is to provide a full and explicit representation of the class, its properties and values. Such a representation is not a trivial task, though. Today subject experts and human classifiers rely on a deep understanding of a field that they have built up over time. What we need is a way to efficiently and reliably create a full and explicit class definition that can be used to evaluate the choice of class for any object. And, to evaluate that choice, we need an objective, quantifiable and verifiable approach.
One approach that appears to work leverages a combination of machine and human methods. The first step in this process is to assemble a rich but representative sample of objects for the class. It must represent a variety of perspectives (expert and novice, popular and academic, brand-specific and generic). And it must represent all aspects or facets of the class. To this collection we apply natural language processing and concept extraction methods to construct a draft representation. Domain experts and classifiers review and revise the draft representation, perhaps several times. When the representation has passed their review, it may serve as an explicit representation of a class. Figure 7 provides an example of a fully elaborated class representation for Livestock that was generated using this approach. This is one of 750 classes in a scheme. The representation comprises 3,341 concepts that were reviewed and approved by domain experts and professional indexers.
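The machine-driven first step of this process can be sketched very simply. The fragment below stands in for real concept extraction with plain term counting over a tiny hypothetical sample; a production pipeline would use proper natural language processing, and the sample texts and threshold are illustrative only.

```python
# Simplified sketch of drafting a class representation: extract candidate
# concepts (here, just frequent terms) from a representative sample of
# objects. Domain experts then review and revise the draft.
import re
from collections import Counter

STOPWORDS = {"the", "of", "and", "in", "for", "a", "to", "is"}

def draft_representation(sample_texts, min_count=2):
    counts = Counter()
    for text in sample_texts:
        terms = re.findall(r"[a-z]+", text.lower())
        counts.update(t for t in terms if t not in STOPWORDS)
    # keep terms frequent enough across the sample to be plausible
    # class concepts for expert review
    return {term: n for term, n in counts.items() if n >= min_count}

sample = [
    "Cattle and dairy herds dominate livestock production.",
    "Livestock feed costs rose for cattle producers.",
]
draft = draft_representation(sample)
# draft keeps "cattle" and "livestock", each seen in both documents
```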
Evaluating the Classification Decision
Given an extensive representation of a class, we can make a strong classification decision. Such a representation also supports an objective and verifiable evaluation. This approach provides the information we need to evaluate our first criterion – the goodness of fit of an object to a class. Generally, a good fit will result from a high number of matching properties and a high occurrence of those properties. Figure 8 illustrates the way in which a machine categorization engine might report on the goodness of fit to one or more classes. The classification engine can swiftly conduct a property-by-property, value-by-value comparison of the object and class.
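One simple way to quantify "matching properties with high occurrence" is the fraction of an object's concept occurrences that are explained by the class representation. This is a hedged sketch, not the engine's actual scoring method, and the concept lists are hypothetical.

```python
# Sketch of a goodness-of-fit score: overlap between the concepts observed
# in an object and a full class representation, weighted by how often the
# matching concepts occur in the object.

def goodness_of_fit(object_concepts, class_concepts):
    """object_concepts: {concept: occurrence count in the object};
    class_concepts: set of concepts in the class representation."""
    if not object_concepts:
        return 0.0
    matched = sum(n for c, n in object_concepts.items() if c in class_concepts)
    # fraction of the object's concept occurrences explained by the class
    return matched / sum(object_concepts.values())

obj = {"cattle": 4, "feed": 2, "tractor": 1}
livestock = {"cattle", "feed", "dairy", "poultry"}
score = goodness_of_fit(obj, livestock)  # (4 + 2) / 7
```

A richer engine would also weight concepts by their importance within the class, but even this crude ratio makes the decision inspectable and repeatable.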
Our second evaluation criterion for the classification process pertained to minimizing the risk of misclassification. We can better manage misclassification when we have a full picture of goodness of fit of an object to all classes in the classification scheme. A goodness of fit indicator can be calculated for any class where a full representation is available. Figure 9 illustrates the way in which a machine categorization engine might report on the goodness of fit to all classes. Using this approach, institutions may establish thresholds for classification decisions that can be monitored and evaluated. Misclassification is minimized where we can explicitly see which classes may have been overlooked or which may have been selected in error.
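Scoring the object against every class in the scheme, and applying an institutional threshold, can be sketched in the same spirit as Figure 9. The class names, representations and threshold value below are all illustrative assumptions.

```python
# Sketch of reporting goodness of fit against all classes in a scheme and
# applying a threshold, so overlooked or erroneously selected classes
# become explicitly visible.

def fit_report(object_concepts, scheme, threshold=0.5):
    """scheme: {class_name: set of class concepts}. Returns the score for
    every class and the classes that clear the threshold, best first."""
    total = sum(object_concepts.values())

    def score(class_concepts):
        hit = sum(n for c, n in object_concepts.items() if c in class_concepts)
        return hit / total if total else 0.0

    scores = {name: score(concepts) for name, concepts in scheme.items()}
    chosen = sorted((n for n, s in scores.items() if s >= threshold),
                    key=scores.get, reverse=True)
    return scores, chosen

obj = {"cattle": 4, "feed": 2, "wheat": 1}
scheme = {
    "Livestock": {"cattle", "feed", "dairy"},
    "Crops": {"wheat", "corn"},
    "Machinery": {"tractor"},
}
scores, chosen = fit_report(obj, scheme)
# Livestock clears the threshold; Crops and Machinery do not
```

Monitoring the full score table over time is what lets an institution tune its thresholds and spot systematic patterns of missed or erroneous selections.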
Conclusions and Observations
We considered evaluation of a hierarchical taxonomy or classification scheme and the classification decisions made when working with a hierarchical taxonomy. We offer three observations for evaluating hierarchical taxonomies.
- 1. The principles we need to evaluate and improve classification schemes are readily available. While they are understandable to information science students, some interpretation is needed for designers, engineers and the general public.
- 2. We can convert these principles into institutional checklists to support periodic evaluation and improvement of classification schemes.
- 3. Information science education should include assessment of general hierarchical taxonomies in the curriculum, in addition to introducing students to the commonly used classification schemes.
In regard to evaluation of classification decisions, we offer four observations.
- 1. Classifications are often evaluated indirectly. We have suggested an approach that targets the classification decision directly.
- 2. This approach requires more information about the classes in a classification scheme. It makes explicit the implicit knowledge classifiers use to make decisions. Providing more information about a class is not a trivial task for existing schemes. However, it is manageable for new schemes.
- 3. The approach allows an institution to objectively judge the goodness-of-fit of any decision and to assess the risk of misclassification.
- 4. This approach supports rather than substitutes for other evaluation perspectives. Understanding the nature of the classification decision helps us to better understand end-user responses to that decision.
While we considered evaluation of the scheme and the decision separately, we should not overlook the dependencies between a well-formed classification scheme and a well-executed classification decision. Making an optimal classification choice is dependent upon a good representation of the class, a well-formed class and a well-formed classification scheme. While the context in which taxonomies are used has expanded significantly in the past 20 years, the criteria for evaluation have not changed. The expansion in affordable computing power and the availability of semantic technologies provides the capacity to make and evaluate classification decisions in low-risk and objective ways.