We present a comparative study of abstracts and machine-generated summaries. The study bridges two hitherto independent lines of research: descriptive analysis of abstracts as a genre and evaluation of summaries produced by automatic text summarization (ATS). A pilot sample of eight articles was gathered from the Library and Information Science Abstracts (LISA) database, each article including an author-written abstract and one of four types of indexed abstracts. Three ATS systems (Copernic Summarizer, Microsoft AutoSummarize, SweSum) were used to produce three additional summaries per article. The structure, content, and style of the abstracts and summaries were analyzed with a scheme built on genre analysis methods, comprising ten functional categories. Summaries and abstracts vary in the analyzed features and in the concepts they capture, with some consistencies and overlap. Incorporating ATS output can benefit information seekers: summaries complement abstracts by expanding the representativeness of source articles. Yet certain cognitive processes performed by human abstractors remain irreplaceable.