Tuesday, January 09, 2007

Interference with Bottom-Up Feature Detection by Higher-Level Object Recognition

Carrying on very nicely from yesterday's post on enactive perception comes this paper. It has found its way onto the BBC news pages, where the authors discuss the utility of making 'snap' decisions. I haven't read it fully yet, but the abstract is below:

"Drawing portraits upside down is a trick that allows novice artists to reproduce lower-level image features, e.g., contours, while reducing interference from higher-level face cognition. Limiting the available processing time to suffice for lower- but not higher-level operations is a more general way of reducing interference. We elucidate this interference in a novel visual-search task to find a target among distractors. The target had a unique lower-level orientation feature but was identical to distractors in its higher-level object shape. Through bottom-up processes, the unique feature attracted gaze to the target. Subsequently, recognizing the attended object as identically shaped as the distractors, viewpoint invariant object recognition interfered. Consequently, gaze often abandoned the target to search elsewhere. If the search stimulus was extinguished at time T after the gaze arrived at the target, reports of target location were more accurate for shorter presentations. This object-to-feature interference, though perhaps unexpected, could underlie common phenomena such as the visual-search asymmetry that finding a familiar letter N among its mirror images is more difficult than the converse. Our results should enable additional examination of known phenomena and interactions between different levels of visual processes. "

Link to abstract, here.
