This paper presents a comparative user study investigating the relevance judgments made by assessors with a law background and assessors without one. Four law students and four library and information studies (LIS) students were recruited to independently judge the relevance of 100 documents for each of four requests for production of documents responsive to litigation: two requests for the tobacco Master Settlement Agreement collection and two for the Enron corporate email collection. Both quantitative and qualitative methods were used to analyze data collected through the relevance judgment task, an entry questionnaire, and an exit interview. Given the same task guidelines, the LIS student assessors judged relevant documents as accurately as the law student assessors, but judged nonrelevant documents slightly less accurately. Participants achieved moderate to substantial agreement on their relevance judgments. Relevance judgment speed varied significantly among participants, although on average it was about the same for the two groups. Factors influencing the accuracy and speed of participants' relevance judgments are discussed based on a preliminary analysis of qualitative data from the exit interviews.