This study examined the neurophysiological mechanisms of speech segmentation, the process of parsing the continuous speech signal into isolated words. Individuals listened to sequences of two monosyllabic words (e.g., gas source) and non-words (e.g., nas sorf). When these phrases are spoken, talkers usually produce one continuous s-sound, not two distinct s-sounds, making it unclear where one word ends and the next one begins. This ambiguity in the signal can also result in perceptual ambiguity, causing the sequence to be heard as one word (failed to segment) or as two words (segmented). We compared listeners' electroencephalogram activity when they reported hearing one word versus two words, and found that bursts of fronto-central alpha activity (9–14 Hz), following the onset of the physical /s/ and the end of the phrase, indexed speech segmentation. Left-lateralized beta activity (14–18 Hz) following the end of the phrase distinguished word from non-word segmentation. Enhanced alpha activity is a hallmark of the inhibition of task-irrelevant neural populations. Thus, the current results suggest that disengagement of neural processes that become irrelevant as the words unfold marks word boundaries in continuous speech, leading to segmentation. Beta activity is likely associated with unifying word representations into coherent phrases.