This investigation assesses interobserver agreement on conversation analytic (CA) transcription. Four professional CA transcribers spent a maximum of 3 hours transcribing 2.5 minutes of a previously unknown, naturally occurring, mundane telephone call. Researchers unitized the transcripts into words, sounds, silences, inbreaths, outbreaths, and laugh tokens, and then coded each of 1,827 units on as many as 15 transcription dimensions. Agreement was assessed using Cohen's kappa for nominal-level data: Speaker designation, unit sequencing, semantics, orthography, cutoff, and plosiveness reached the level of "substantial" agreement (90% or greater accuracy). Pitch, overlap, doubt, and smile voice reached the level of "moderate" agreement (80–89% accuracy), while pace, sound stretch, underline/amplitude, and intonation fell below acceptability except when examined post hoc as presence versus absence of the feature. Silence lengths, examined as ratio-level data, were reliable at the "acceptable" level (alpha > .70) among those using a counting method (as opposed to a stopwatch or other mechanical means). We conclude with recommendations for transcription training.
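The nominal-level analyses rest on Cohen's kappa, which corrects raw percent agreement for the agreement two coders would reach by chance given their marginal coding frequencies. A minimal pure-Python sketch of the computation (the label sequences below are hypothetical illustrations, not data from this study):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two equal-length sequences of nominal codes."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: proportion of units both coders labeled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: expected overlap under independence,
    # computed from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical speaker-designation codes for four transcript units:
# coders agree on 3 of 4 units (75%), but chance agreement is 50%,
# so kappa = (0.75 - 0.50) / (1 - 0.50) = 0.5.
print(cohens_kappa(["A", "A", "B", "B"], ["A", "A", "B", "A"]))  # 0.5
```

This is why kappa is preferred over raw accuracy for nominal dimensions such as speaker designation: a dimension with a skewed code distribution can show high percent agreement purely by chance.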