Publications

2002

Stefanucci, J., & Proffitt, D. (2002). Providing distinctive cues to augment human memory. Proceedings of the Annual Meeting of the Cognitive Science Society, 24(24).
Previous research in our lab (Tan, Stefanucci, Proffitt & Pausch, 2001) demonstrated that a multimodal prototype computer system, the InfoCockpit, could increase users’ memory of information compared to a standard desktop computer. Displaying information on multiple monitors with ambient visual and auditory displays engages context-dependent memory and memory for location, thus facilitating recall. We replicate this finding and isolate the memory cues to determine whether the combination of contextual information and spatial location is necessary to obtain this memory advantage. Our findings show that contextual information alone provides users with the best strategy for later recall.
Tan, D., Pausch, R., Stefanucci, J., & Proffitt, D. (2002). Kinesthetic cues aid spatial memory. CHI’02 Extended Abstracts on Human Factors in Computing Systems, 806–807.
We are interested in building and evaluating human computer interfaces that make information more memorable. Psychology research informs us that humans access memories through cues, or "memory hooks," acquired at the time we learn the information. In this paper, we show that kinesthetic cues, or the awareness of parts of our body's position with respect to itself or to the environment, are useful for recalling the positions of objects in space. We report a user study demonstrating a 19% increase in spatial memory for information controlled with a touchscreen, which provides direct kinesthetic cues, as compared to a standard mouse interface. We also report results indicating that females may benefit more than males from using the touchscreen device.

2001

Creem, S., & Proffitt, D. (2001). Grasping objects by their handles: a necessary interaction between cognition and action. Journal of Experimental Psychology: Human Perception and Performance, 27(1), 218.
Research has illustrated dissociations between “cognitive” and “action” systems, suggesting that different representations may underlie phenomenal experience and visuomotor behavior. However, these systems also interact. The present studies show a necessary interaction when semantic processing of an object is required for an appropriate action. Experiment 1 demonstrated that a semantic task interfered with grasping objects appropriately by their handles, but a visuospatial task did not. Experiment 2 assessed performance on a visuomotor task that had no semantic component and showed a reversal of the effects of the concurrent tasks. In Experiment 3, variations on concurrent word tasks suggested that retrieval of semantic information was necessary for appropriate grasping. In all, without semantic processing, the visuomotor system can direct the effective grasp of an object, but not in a manner that is appropriate for its use.
Creem, S., & Proffitt, D. (2001). Defining the cortical visual systems: “what”, “where”, and “how”. Acta Psychologica, 107(1-3), 43–68.
The visual system historically has been defined as consisting of at least two broad subsystems subserving object and spatial vision. These visual processing streams have been organized both structurally as two distinct pathways in the brain, and functionally for the types of tasks that they mediate. The classic definition by Ungerleider and Mishkin labeled a ventral “what” stream to process object information and a dorsal “where” stream to process spatial information. More recently, Goodale and Milner redefined the two visual systems with a focus on the different ways in which visual information is transformed for different goals. They relabeled the dorsal stream as a “how” system for transforming visual information using an egocentric frame of reference in preparation for direct action. This paper reviews recent research from psychophysics, neurophysiology, neuropsychology and neuroimaging to define the roles of the ventral and dorsal visual processing streams. We discuss a possible solution that allows for both “where” and “how” systems that are functionally and structurally organized within the posterior parietal lobe.
Carpenter, M., & Proffitt, D. (2001). Comparing viewer and array mental rotations in different planes. Memory & Cognition, 29(3), 441–448.
Participants imagined rotating either themselves or an array of objects that surrounded them. Their task was to report on the egocentric position of an item in the array following the imagined rotation. The dependent measures were response latency and number of errors committed. Past research has shown that self-rotation is easier than array rotation. However, we found that self-rotations were as difficult to imagine as rotations of the environment when the imagined rotations were performed in the midsagittal or coronal plane. The advantages of imagined self-rotations are specific to mental rotations performed in the transverse plane.
Creem, S., Wraga, M., & Proffitt, D. (2001). Imagining physically impossible self-rotations: Geometry is more important than gravity. Cognition, 81(1), 41–64.
Previous studies found that it is easier for observers to spatially update displays during imagined self-rotation versus array rotation. The present study examined whether either the physics of gravity or the geometric relationship between the viewer and array guided this self-rotation advantage. Experiments 1–3 preserved a real or imagined orthogonal relationship between the viewer and the array, requiring a rotation in the observer's transverse plane. Despite imagined self-rotations that defied gravity, a viewer advantage remained. Without this orthogonal relationship (Experiment 4), the viewer advantage was lost. We suggest that efficient transformation of the egocentric reference frame relies on the representation of body–environment relations that allow rotation around the observer's principal axis. This efficiency persists across different and conflicting physical and imagined postures.
Proffitt, D., Creem, S., & Zosh, W. (2001). Seeing mountains in mole hills: Geographical-slant perception. Psychological Science, 12(5), 418–423.
When observers face directly toward the incline of a hill, they greatly overestimate its slant, although motoric estimates are much more accurate. The present study examined whether similar results would be found when observers were allowed to view the side of a hill. Observers viewed the cross-sections of hills in real (Experiment 1) and virtual (Experiment 2) environments and estimated the inclines with verbal estimates, by adjusting the cross-section of a disk, and by adjusting a board with their unseen hand to match the inclines. We found that the results for cross-section viewing replicated those found when observers directly face the incline. Even though the angles of hills are directly evident when viewed from the side, slant perceptions are still grossly overestimated.
Creem, S., Downs, T. H., Wraga, M., Harrington, G., Proffitt, D., & Downs, H. (2001). An fMRI study of imagined self-rotation. Cognitive, Affective, & Behavioral Neuroscience, 1(3), 239–249.
In the present study, functional magnetic resonance imaging was used to examine the neural mechanisms involved in the imagined spatial transformation of one’s body. The task required subjects to update the position of one of four external objects from memory after they had performed an imagined self-rotation to a new position. Activation in the rotation condition was compared with that in a control condition in which subjects located the positions of objects without imagining a change in self-position. The results indicated similar networks of activation to other egocentric transformation tasks involving decisions about body parts. The most significant area of activation was in the left posterior parietal cortex. Other regions of activation common among several of the subjects were secondary visual, premotor, and frontal lobe regions. These results are discussed relative to motor and visual imagery processes as well as to the distinctions between the present task and other imagined egocentric transformation tasks.
Tan, D., Stefanucci, J., Proffitt, D., & Pausch, R. (2001). The Infocockpit: Providing location and place to aid human memory. Proceedings of the 2001 Workshop on Perceptive User Interfaces, 1–4.
Our work focuses on building and evaluating computer system interfaces that make information memorable. Psychology research tells us people remember spatially distributed information based on its location relative to their body, as well as the environment in which the information was learned. We apply these principles in the implementation of a multimodal prototype system, the Infocockpit (for "Information Cockpit"). The Infocockpit not only uses multiple monitors surrounding the user to engage human memory for location, but also provides ambient visual and auditory displays to engage human memory for place. We report a user study demonstrating a 56% increase in memory for information presented with our Infocockpit system as compared to a standard desktop system.

2000

Wraga, M., Creem, S., & Proffitt, D. (2000). Updating displays after imagined object and viewer rotations. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26(1), 151.
Six experiments compared spatial updating of an array after imagined rotations of the array versus viewer. Participants responded faster and made fewer errors in viewer tasks than in array tasks while positioned outside (Experiment 1) or inside (Experiment 2) the array. An apparent array advantage for updating objects rather than locations was attributable to participants imagining translations of single objects rather than rotations of the array (Experiment 3). Superior viewer performance persisted when the array was reduced to 1 object (Experiment 4); however, an object with a familiar configuration improved object performance somewhat (Experiment 5). Object performance reached near-viewer levels when rotations included haptic information for the turning object (Experiment 6). The researchers discuss these findings in terms of the different ways in which the human cognitive system transforms the spatial reference frames corresponding to each imagined rotation.