As we converse, we produce facial expressions, head movements, and vocal prosody that are important sources of information for communication. The semantic content of conversation is accompanied by vocal prosody, non-word vocalizations, head movements, gestures, postural adjustments, eye movements, smiles, eyebrow movements, and other facial muscle changes. Conversational partners tend to coordinate their head movements, facial expressions, and vocal prosody as they speak with one another.
Conversational coordination can be defined as the condition in which an action generated by one individual predicts a symmetric action by another. The Human Dynamics Laboratory treats conversational coordination as a form of spatiotemporal symmetry between individuals that has behaviorally useful outcomes.
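The passage does not specify how such predictive symmetry is measured; one common way to quantify it in movement-coordination research is windowed, lagged cross-correlation between paired behavioral time series. The sketch below is a minimal illustration under that assumption, using synthetic head-movement-like signals; the function name, window length, and lag range are illustrative choices, not the laboratory's actual analysis.

```python
# Minimal sketch (not the lab's method): quantify how well person A's behavior
# predicts person B's by computing windowed, lagged correlations between
# two paired time series (e.g., head-pitch velocity at a fixed sample rate).
import numpy as np

def windowed_lagged_correlation(a, b, window=90, max_lag=30, step=15):
    """Return an (n_windows, n_lags) array of Pearson correlations between
    series `a` and lagged copies of series `b`.

    Positive lags mean A's behavior precedes (predicts) B's.
    """
    lags = range(-max_lag, max_lag + 1)
    rows = []
    for start in range(0, len(a) - window - 2 * max_lag + 1, step):
        seg_a = a[start + max_lag : start + max_lag + window]
        row = []
        for lag in lags:
            seg_b = b[start + max_lag + lag : start + max_lag + lag + window]
            # Pearson correlation between the two equal-length windows
            row.append(np.corrcoef(seg_a, seg_b)[0, 1])
        rows.append(row)
    return np.array(rows)

# Illustrative use with synthetic data: B loosely follows A after a
# 10-sample delay, so the mean correlation peaks at a positive lag (~+10).
rng = np.random.default_rng(0)
a = rng.standard_normal(3000)
b = np.roll(a, 10) + 0.5 * rng.standard_normal(3000)
corr = windowed_lagged_correlation(a, b)
print("peak lag (samples):", np.argmax(corr.mean(axis=0)) - 30)
```

In this framing, high correlation at small lags corresponds to the spatiotemporal symmetry described above, and the lag at which correlation peaks indicates who is leading whom.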
In a dyadic conversation, each conversant's perception of the other produces an ever-evolving behavioral context that in turn shapes his or her ensuing actions. The result is a nonstationary feedback system in which conversants form patterns of movements, expressions, and vocal inflections, sometimes with high symmetry between partners and sometimes with little or no similarity. This skill is so automatic that we give it little thought unless it begins to break down.
Experiments underway in the Human Dynamics Laboratory test hypotheses about how conversational symmetry forms and how it breaks down. Novel videoconferencing technology in use at the lab allows facial expression dynamics and apparent identity to be manipulated in real time.