Binaural signal detection in an N0Sπ task relies on interaural disparities introduced by adding an antiphasic signal to diotic noise. Which metric of interaural disparity best predicts performance? Some models use interaural correlation; others differentiate between dynamic interaural time differences (ITDs) and interaural level differences (ILDs) of the effective stimulus. To examine the relative contributions of ITDs and ILDs in binaural detection, we developed a novel signal processing technique that selectively degrades different aspects (potential cues) of binaural stimuli (e.g., only the ITDs are scrambled). Degrading a particular cue will affect performance only if that cue is relevant to the binaural processing underlying detection. This selective scrambling technique was applied to the stimuli of a classic N0Sπ task in which the listener had to detect an antiphasic 500-Hz signal in the presence of a diotic wideband noise masker. Data obtained from five listeners showed that (1) selective scrambling of ILDs had little effect on binaural detection, (2) selective scrambling of ITDs significantly degraded detection, and (3) combined scrambling of ILDs and ITDs had the same effect as exclusive scrambling of ITDs. Regarding the question of which stimulus properties determine detection, we conclude that for this binaural task (1) dynamic ITDs dominate detection performance, (2) ILDs are largely irrelevant, and (3) the interaural correlation of the stimulus is a poor predictor of detection. Two simple stimulus-based models, each of which reproduces all binaural aspects of the data quite well, are described: (1) a single-parameter detection model using ITD variance as the detection criterion and (2) a compressive transformation followed by a cross-correlation analysis. The success of both of these contrasting models shows that our data alone cannot reveal the mechanisms underlying the dominance of ITD cues. The physiological implications of our findings are discussed.
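To make the ITD-variance idea concrete, the following is a minimal, illustrative sketch, not the authors' implementation. It assumes NumPy/SciPy, estimates instantaneous interaural phase from analytic signals (one plausible way to extract dynamic ITDs near the 500-Hz signal frequency; the amplitudes, filter, and seed are arbitrary choices for illustration), and shows that adding an antiphasic tone to diotic noise raises the ITD variance, which a single-parameter detector could use as its decision variable.

```python
import numpy as np
from scipy.signal import hilbert, butter, lfilter

fs = 48000           # sampling rate (Hz); arbitrary for this sketch
dur = 0.3            # interval duration (s)
fc = 500.0           # signal frequency (Hz), as in the N0Spi task
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)

def band_noise(rng):
    # Gaussian noise, band-limited around 500 Hz so the analytic-signal
    # phase estimate below is meaningful (illustrative choice of band).
    n = rng.standard_normal(len(t))
    b, a = butter(2, [400 / (fs / 2), 600 / (fs / 2)], btype="band")
    return lfilter(b, a, n)

def itd_variance(left, right):
    # Instantaneous interaural phase difference from the analytic signals,
    # converted to an ITD at the signal frequency; return its variance.
    phi = np.angle(hilbert(left) * np.conj(hilbert(right)))
    itd = phi / (2 * np.pi * fc)
    return np.var(itd)

noise = band_noise(rng)
tone = 0.1 * np.sin(2 * np.pi * fc * t)

# N0 interval: identical noise at both ears -> no interaural disparity.
var_n0 = itd_variance(noise, noise)
# N0Spi interval: antiphasic signal creates dynamic ITD fluctuations.
var_n0spi = itd_variance(noise + tone, noise - tone)
# Single-parameter decision rule: pick the interval with larger ITD variance.
```

The sketch only demonstrates the direction of the effect (nonzero ITD variance in the signal interval); a full model would of course relate the variance to detection thresholds across signal levels.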