This graduate seminar is concerned with how human listeners perceive speech and recognize spoken words. The course is divided into two parts. In the first part, we will review and critically analyze what is currently known about speech perception. Our primary emphasis will be on several "core" theoretical problems in the field, including linearity, acoustic-phonetic invariance and segmentation, representational specificity of speech, perceptual normalization, units of perceptual analysis, phonological recoding, categorical perception, and multimodal speech perception. We will also consider several theoretical approaches to speech perception, such as motor theory, analysis-by-synthesis, feature detection, event perception, and connectionist proposals.

In the second part of the seminar, we will take up a number of problems in spoken word recognition: how words are organized and stored in the mental lexicon, and how they are recognized from the acoustic-phonetic information in the speech signal. We will review and critically analyze Logogen Theory, Autonomous Search Theory, LAFS, Cohort Theory, TRACE, and the Neighborhood Activation Model (NAM). Central to our approach is a concern for word frequency, neighborhood density, and context effects in spoken word recognition, and for the interaction of the various knowledge sources employed in spoken language processing. We will also consider the role of "indexical" properties of speech and discuss the results of recent investigations suggesting that these attributes are just as important for speech perception and spoken word recognition as the context-free symbolic representations traditionally assumed in theoretical linguistics.