ABSTRACT

This chapter introduces a corpus of speech-gesture data derived from naturalistic video recordings of face-to-face conversations in Mandarin Chinese, including the methodologies for selecting the data and for identifying the semantic and temporal relationships between speech and gesture in discourse interaction. A corpus of 771 cross-modal cases is established, covering entity, motion, action, spatial orientation, and time; 52.7% of the cases refer to concrete ideas and 47.3% to abstract ideas. Different portions of the data are employed to investigate and discuss the issues under research.