The ability to recognize sounds allows humans and animals to efficiently detect behaviorally relevant events, even in the absence of visual information. Sound recognition in the human brain has been assumed to proceed through several functionally specialized areas, culminating in cortical modules where category-specific processing is carried out. In the present high-resolution fMRI experiment, we challenged this model by using well-controlled natural auditory stimuli and by employing an advanced analysis strategy based on an iterative machine-learning algorithm that allows the modeling of spatially distributed, as well as localized, response patterns. Sounds of cats, female singers, acoustic guitars, and tones were controlled for their time-varying spectral characteristics and presented to subjects at three different pitch levels. Sound category information, which was not detectable with conventional contrast-based analysis methods, could be detected with multivoxel pattern analyses and attributed to spatially distributed regions of the supratemporal cortices. A more localized pattern, lateral to the primary auditory areas, was observed for the processing of pitch. Our findings indicate that distributed neuronal populations within the human auditory cortices, including areas conventionally associated with lower-level auditory processing, carry categorical representations of sounds beyond their physical properties.
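
The following is a minimal sketch of the multivoxel pattern analysis idea referenced above: decoding stimulus category from distributed voxel response patterns with a cross-validated linear classifier. The data are synthetic, and the use of scikit-learn, the feature layout, and all parameter choices are illustrative assumptions, not the authors' actual iterative algorithm or pipeline.

```python
# Illustrative MVPA sketch (synthetic data; not the authors' pipeline).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_trials_per_class, n_voxels = 40, 500  # hypothetical dimensions
categories = ["cat", "singer", "guitar", "tone"]

# Simulate trial-by-voxel response patterns: each category gets a weak,
# spatially distributed signature embedded in trial-to-trial noise.
X, y = [], []
for label, cat in enumerate(categories):
    signature = 0.3 * rng.standard_normal(n_voxels)  # distributed category code
    X.append(signature + rng.standard_normal((n_trials_per_class, n_voxels)))
    y.extend([label] * n_trials_per_class)
X, y = np.vstack(X), np.array(y)

# Cross-validated decoding: accuracy above chance (1/4 with four categories)
# indicates that the joint voxel pattern carries category information even
# when no single voxel is individually diagnostic, which is why such effects
# can be missed by conventional voxel-wise contrast analyses.
clf = make_pipeline(StandardScaler(), LinearSVC())
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```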