This paper reports the results of an evaluation study of MAP (Multi-faceted Access to PubMed), a metadata-induced query suggestion interface for PubMed bibliographic search.
A novel evaluation methodology was used to address the challenges involved in evaluating an IIR (Interactive Information Retrieval) system such as the MAP interface. The most significant aspect of this methodology is that, instead of relying on the assigned tasks common in traditional IR evaluation, it asks real users with real search requests to search with real systems in an experimental setting. Several performance measures were developed, on the basis of which MAP was compared with the PubMed baseline. MAP performed better on several of these measures, especially when the search requests had not been attempted before.
These findings pointed to search characteristics as an important intervening variable in IIR evaluation. The advantages of, and potential threats to, our methodology are also discussed.