Integrated Understanding of Speech and Gestures

Children develop language, social, and communicative competence in a multimodal environment where adults frequently accompany speech with gestures, facial expressions, and gaze cues. This project investigates how children perceive and interpret such multimodal information during early communication. Specifically, we explore when and how children begin to integrate visual gestures with auditory speech, and how this ability develops across infancy and early childhood.

Using behavioural experiments, eye-tracking, and neurophysiological measures such as EEG, we are examining the developmental trajectory of multimodal comprehension. Our findings indicate that children’s sensitivity to gesture–speech congruency emerges gradually and is closely tied to both linguistic and social development. Ultimately, this research aims to clarify how multimodal communication scaffolds language acquisition and how early perceptual mechanisms develop into adult-like comprehension processes.


Neural Mechanisms of Gesture Production

This line of research examines how the human brain plans and executes co-speech gestures, focusing on the temporal and functional coordination between linguistic and motor systems. We employ neuroimaging techniques such as MEG, fMRI, and EEG to identify the neural signatures of gesture production. Our findings show that gesture planning engages the left fronto-parietal network as well as auditory–motor integration regions, suggesting that the gesture and speech systems are activated in parallel.

Building on these results, we are now investigating how gesture timing aligns with speech production and how individual differences in gesture use relate to neural efficiency. Through this work, we aim to establish a neurocognitive framework of gesture–speech integration that connects behavioural evidence with neural dynamics, ultimately contributing to models of embodied language processing.


Gestures in Comics and Visual Art

This project explores how gestures are visually represented in static media such as comics, picture books, and paintings across different cultures. We are analysing how artists depict motion, emotion, and interaction in still images, and how readers interpret these embodied cues. Our research combines multimodal annotation with cross-cultural coding frameworks to compare visual strategies for representing communicative actions.

By linking methods from cognitive science and visual semiotics, we aim to understand how artists across cultures imagine, abstract, and convey human movement and interpersonal dynamics. This line of research also contributes to educational and design applications, such as how children learn about body language through illustrated materials and how visual storytelling conventions reflect cultural models of communication.


Functional Roles of Self-Adaptors

This project investigates self-adaptors—spontaneous bodily actions such as touching one’s face, hair, or arms—as embodied markers of cognitive and emotional regulation. Traditionally regarded as minor or unconscious behaviours, self-adaptors are now understood as meaningful indicators of mental state. Through behavioural observation and physiological measurement, we are examining when and why people engage in self-touch, and how these movements relate to stress, attention, and lexical retrieval.

Our results suggest that self-touch becomes more frequent during cognitively demanding and stressful tasks, supporting its role in maintaining homeostasis and cognitive control. Ongoing work extends this approach to developmental and cross-cultural contexts, exploring how children and adults in different societies use self-adaptors as coping strategies. These studies aim to illuminate how subtle bodily movements support mental regulation and social adaptation.


Multimodal Communication in People with Aphasia

In collaboration with clinical researchers and speech–language pathologists, this project investigates how individuals with aphasia use gestures to compensate for verbal impairments and to support communication. Many people with aphasia experience disruptions in speech production and comprehension, yet retain strong nonverbal and cognitive abilities. Gestures often serve as a vital medium for expressing meaning when linguistic output is limited.

We are systematically analysing the types and frequencies of gestures used by people with different forms of aphasia and how these gestures relate to communicative success. In addition, we are developing multimodal rehabilitation and assessment methods that integrate speech, gesture, and visual aids. Our goal is to contribute to both theoretical understanding of language–gesture interplay and the practical improvement of therapy approaches for individuals with language disorders.


Cultural Influences on the Developmental Pathways of Gesture and Language

Children acquire communication skills not only by learning what to express, but also by discovering how to express it in culturally appropriate ways. This project investigates how gestures and bodily movements are learned, adapted, and conventionalised within specific cultural settings. We are currently observing Japanese children’s use of communicative body actions—such as nodding, head shaking, posture shifts, and gestures—in natural and experimental contexts.

These observations help reveal how cultural norms shape developmental trajectories in multimodal communication. In future cross-cultural comparisons, we will examine how children from different cultural backgrounds internalise gesture conventions and integrate them into their linguistic and social repertoires. Through this research, we aim to clarify how culture guides the developmental pathways linking gesture, cognition, and language.


In addition to the themes above, we conduct applied research on the role of physical movement in sports and musical activities.