We present a comparative study of abstracts and machine-generated summaries. This study bridges two hitherto independent lines of research: the descriptive analysis of abstracts as a genre and the testing of summaries produced by automatic text summarization (ATS). A pilot sample of eight articles was gathered from the Library and Information Science Abstracts (LISA) database, with each article including an author-written abstract and one of four types of indexed abstracts. Three ATS systems (Copernic Summarizer, Microsoft AutoSummarize, SweSum) were used to produce three additional summaries per article. The structure, content and style of abstracts and summaries were analyzed by building on genre analysis methods, creating ten functional categories. Summaries and abstracts demonstrate variability in the analyzed features and captured concepts, with some consistencies and overlap. Incorporating ATS output can be useful to information seekers: summaries complement abstracts by expanding the representativeness of source articles. Yet certain cognitive processes performed by abstractors remain irreplaceable.
Abstracts are compressed representations of larger documents. From the perspective of information behavior, abstracts are important resources for activities such as information retrieval, browsing, and classification (Moens, 2000). Yet there is little consistency in the writing of abstracts, whether between writers or even by the same writer (Rath et al., 1999). Pioneers of automatic text summarization (ATS) saw it as an ‘objective’ and cost-effective alternative to manually produced abstracts (Luhn, 1958/1999). Yet approaches created to evaluate ATS summaries, and thereby develop better systems, struggle with the lack of a ‘gold standard’ against which to measure ATS summary quality (Hirschman & Mani, 2004).
Linguistic studies, by contrast, approach the variability of abstracts descriptively. Abstracts are treated as a genre of communication; differences in structure, content and style are attributed to the intended audience, intended effect or background of the writer (Swales, 1990). Abstracts are written to fulfill any number of functions, such as attracting a specific reader or promoting one's work. Differences between the goals of authors and those of abstracting and indexing services are recognized (Cross & Oppenheim, 2006). Montesi and Owen (2007), for example, compared author abstracts in Library and Information Science Abstracts (LISA) with indexed abstracts (author abstracts amended by staff editors) and noted content and stylistic modifications, suggesting that the value-added contribution of LISA editors makes documents accessible to a wider audience than the article author initially conceived. Yet LISA increasingly relies on unedited abstracts in its database.
So far, studies have neither explored the use of ATS summaries to supplement ‘deficiencies’ in author abstracts, nor has ATS been incorporated into genre studies of abstracts. Though summaries are the product of computational algorithms rather than volitional communicators, analyzing ATS summaries alongside human abstracts can help highlight the linguistic and cognitive mechanisms behind author and professional abstracting. The goal of this study is to develop a descriptive vocabulary and classification system that allows comparative assessments in the context of real-world information practices. We aim to demonstrate the strengths, weaknesses, and convergences among abstractors and ATS systems, toward synergistically incorporating cognitive and computational processes to improve information seeking services.
The pilot data consist of 40 abstracts and summaries for a stratified convenience sample of 8 machine-readable articles, selected from the LISA database, published in English-language journals within the Library Technology section between 2000 and 2003. Each selected article included the author abstract (Fig. 1A) and 1 of the 4 types of indexed LISA abstracts – LISA-written abstracts, amended author abstracts (Fig. 1B), abstracts composed of quotes from the article, or unedited author abstracts. Each article was processed through 3 ATS systems (the commercial stand-alone Copernic Summarizer (Fig. 1C), AutoSummarize within Microsoft Word 2007 (Fig. 1D), and the free online SweSum) to create outputs of length comparable to the human abstracts.
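The extractive principle behind systems of this kind – pioneered by Luhn (1958/1999) – can be sketched as frequency-based sentence scoring: sentences containing the document's most frequent significant words are selected verbatim. The following Python sketch is illustrative only; it is not the algorithm used by Copernic Summarizer, AutoSummarize, or SweSum, and the stopword list and function names are our own:

```python
import re
from collections import Counter

# A small illustrative stopword list; real systems use larger ones.
STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "are",
             "for", "that", "this", "it", "on", "as", "with", "by"}

def luhn_summary(text, n_sentences=3):
    """Score sentences by the summed frequency of their significant
    words and return the top-ranked ones in document order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(w for w in words if w not in STOPWORDS)

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens if t not in STOPWORDS)

    # Rank sentence indices by score, then restore document order.
    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(sentences[i]),
                    reverse=True)[:n_sentences]
    return " ".join(sentences[i] for i in sorted(ranked))
```

Because such systems extract sentences verbatim, their outputs tend to be informative but may lift text out of context – the source of the incidental moves discussed below.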
Following genre analysis methodology, all texts were divided into moves – syntactic units serving a communicative function. Each move (e.g., Fig. 1, 1–3) was analyzed for its global features (i.e., the type of content, such as ‘argument/conclusion’, ‘background’, or ‘method/activity’) and its local features (i.e., how the content is expressed: whether it describes details of the ‘method’ (i.e., is informative) or talks about the ‘method’ indirectly (i.e., is indicative), and what style is used (e.g., mood, voice)). ATS summaries often captured superfluous extracts from texts; hence incidental moves (describing peripheral, incomplete or incoherent text segments) were distinguished from significant moves.
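The annotation scheme above can be represented as one record per move. The sketch below is a hypothetical formalization for illustration – the field and function names are ours, not part of the study's coding instrument – with global-feature labels drawn from the categories named in the text:

```python
from dataclasses import dataclass

@dataclass
class Move:
    """One syntactic unit annotated with its global and local features."""
    text: str
    global_feature: str  # e.g. "background", "method/activity", "argument/conclusion"
    mode: str            # local feature: "informative" vs. "indicative"
    style: str           # e.g. mood/voice of the move
    significant: bool    # False for incidental (peripheral/incoherent) extracts

def tally(moves):
    """Count significant moves per global-feature category,
    ignoring incidental ones."""
    counts = {}
    for m in moves:
        if m.significant:
            counts[m.global_feature] = counts.get(m.global_feature, 0) + 1
    return counts
```

Tallies of this kind support the cross-text comparisons of structure reported in the results.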
RESULTS AND DISCUSSION
Global and local features vary across summaries and abstracts, nonetheless demonstrating some consistencies and overlap. Indexed abstracts are the most conservatively structured, varying least across articles. All summaries generated many incidental moves, but also significant informative units regarding methods, findings, and background. Ten categories of global features were identified (e.g., incidental moves (Fig. 1D1: heading), external moves (Fig. 1C1: authors), or explication moves (Fig. 1B3: examples; Fig. 1D3: definition)). This goes beyond the standard five moves of a scientific abstract (following the IMRaD structure of introduction, methods/materials, results, and discussion), and is well suited to the LIS literature. Interpreting incidental moves in summaries is challenging since extracted sentences lack the context necessary for proper attribution (e.g., is this claim made by the author or a cited source?).
Moves in abstracts are largely indicative, talking indirectly about the article. This is in keeping with LISA's guidelines, which state that the “abstract is not intended to be a replacement for the original article” (LISA, 2006). Summaries are mostly informative, presenting content explicitly and functioning more like document surrogates. Furthermore, summaries occasionally include segments of text left out of abstracts – such as negative findings or definitions (Fig. 1D3) – which are potentially useful to information seekers without full article access. ATS is an imperfect process but can legitimately supplement abstracts. This study confirms and builds upon the findings of Montesi & Owen (2007) by expanding the examination of LISA to all types of indexed abstracts. A larger sample size is necessary to test and refine the observed linguistic patterns that constitute or indicate global and local features. The findings suggest that ATS summaries could be a useful resource for improved information retrieval systems.
In all, we envision combining human cognitive labor with automation to improve the representativeness of documents – what we refer to as a cyborg-solution to the plethora of abstracting approaches. This differs from suggestions of semi-automated summarization (Hovy, 2004), which prescribe an editorial role for people to simply improve the readability and coherence of ATS outputs.
Automatic summaries and human abstracts each bring a different perspective to document representation. This study developed a framework of analysis that spans summaries and abstracts. It reveals considerable overlaps, differences, and disjunctions among ATS systems, but also between abstractors with different motivations. A merger of abstracting practices and automated summarization offers new horizons for representation that can ultimately benefit information seekers, article writers, and abstracting services.
Thanks to Tyrone Nagai, Senior Supervising Editor of Social Sciences, for his invaluable support and assistance.