Latent semantic analysis has been used for several years to improve the performance of document library searches. We show that latent semantic analysis, augmented with a part-of-speech tagger, may also be an effective algorithm for classifying textual documents. Using Brill's part-of-speech tagger, we truncate the singular value decomposition used in latent semantic analysis, reducing the size of the word-frequency matrix. This method is then tested on a toy problem and is shown to increase search accuracy. We then relate these results to natural language processing and show that latent semantic analysis can be combined with context-free grammars to infer semantic meaning from natural language. English is the natural language considered here.
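
The truncated singular value decomposition at the core of latent semantic analysis can be sketched as follows. This is a minimal illustration, not the paper's implementation: the term-document matrix and rank `k` below are hypothetical, standing in for the word-frequency matrix described above.

```python
import numpy as np

# Hypothetical toy word-frequency matrix
# (rows = terms, columns = documents).
A = np.array([
    [2.0, 0.0, 1.0],
    [0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0],
    [0.0, 0.0, 2.0],
])

# Full SVD, then keep only the k largest singular values.
# This rank-k truncation is the dimensionality-reduction
# step of latent semantic analysis.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# By the Eckart-Young theorem, A_k is the best rank-k
# approximation of A in the Frobenius norm.
err = np.linalg.norm(A - A_k, "fro")
```

Queries and documents are then compared in the reduced k-dimensional space (e.g. by cosine similarity of their projected vectors) rather than in the raw term space, which is what lets latent semantic analysis match documents that share meaning but not exact vocabulary.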