Hearing in noise is a challenge for all listeners, especially for those with hearing loss. This study compared the cues used for detection of a low-frequency tone in noise by older listeners with and without hearing loss to those used by younger listeners with normal hearing. Performance varies significantly across different reproducible, or "frozen," masker waveforms, and analyses relating features of these waveforms to listeners' responses allow identification of the cues used for detection. The study included diotic (N0S0) and dichotic (N0Sπ) detection of a 500-Hz tone in either narrowband or wideband masker waveforms. Both diotic and dichotic detection patterns (hit and false-alarm rates) across the ensembles of noise maskers were predicted by envelope-slope cues, and diotic results were also predicted by energy cues. The relative importance of energy and envelope cues for diotic detection was explored with a roving-level paradigm that made energy cues unreliable. Most older listeners with normal hearing or mild hearing loss depended on envelope-related temporal cues, even for this low-frequency target. As hearing threshold at 500 Hz increased, diotic detection cues transitioned from envelope-based to energy-based. Diotic detection patterns for young listeners with normal hearing were best predicted by a model that combines temporal- and energy-related cues; in contrast, combining cues did not improve predictions for older listeners with or without hearing loss. Dichotic detection results for all groups of listeners were best predicted by interaural envelope cues, which significantly outperformed the classic cues based on interaural time and level differences or their optimal combination.
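
For readers unfamiliar with these decision statistics, the sketch below illustrates one plausible way to compute an energy cue and a normalized envelope-slope cue for a single masker waveform. This is a minimal sketch under stated assumptions, not the authors' implementation: the sample rate, masker bandwidth, stimulus duration, tone level, and the function names (`critical_band_filter`, `energy_cue`, `envelope_slope_cue`) are all hypothetical choices made for illustration, and the normalization of the slope statistic is one common convention rather than the specific definition used in the study.

```python
# Illustrative sketch only (assumptions noted in comments): energy and
# envelope-slope statistics for a "frozen" noise masker, with and without
# a 500-Hz target tone. Not the authors' exact implementation.
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

FS = 48000  # sample rate in Hz (assumed)

def critical_band_filter(x, fs=FS, center=500.0, bw=75.0):
    """Band-pass the stimulus around the 500-Hz target (bandwidth assumed)."""
    sos = butter(4, [center - bw / 2, center + bw / 2],
                 btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def energy_cue(x):
    """Stimulus energy: sum of squared samples. This statistic scales with
    overall level, which is why a roving-level paradigm makes it unreliable."""
    return np.sum(x ** 2)

def envelope_slope_cue(x):
    """Mean absolute slope of the Hilbert envelope, normalized by the mean
    envelope so the statistic is approximately level-invariant."""
    env = np.abs(hilbert(x))
    return np.mean(np.abs(np.diff(env))) / np.mean(env)

# Example: compare both cues for one frozen masker, noise alone vs. tone+noise.
rng = np.random.default_rng(0)
t = np.arange(int(0.3 * FS)) / FS            # 300-ms stimulus (assumed)
noise = critical_band_filter(rng.standard_normal(t.size))
tone = 0.05 * np.sin(2 * np.pi * 500.0 * t)  # 500-Hz target, arbitrary level
for label, stim in [("noise alone", noise), ("tone + noise", noise + tone)]:
    print(f"{label}: energy={energy_cue(stim):.4f}, "
          f"env-slope={envelope_slope_cue(stim):.4f}")
```

The contrast between the two statistics mirrors the roving-level logic in the abstract: the energy cue grows with the square of stimulus level, so randomizing presentation level from trial to trial degrades its reliability, whereas the normalized envelope-slope statistic is largely unaffected by a level rove and remains usable as a detection cue.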