<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20190208//EN" "http://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1.dtd">
<article article-type="research-article" dtd-version="1.2" xml:lang="ru" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><front><journal-meta><journal-id journal-id-type="issn">2313-8912</journal-id><journal-title-group><journal-title>Research Result. Theoretical and Applied Linguistics</journal-title></journal-title-group><issn pub-type="epub">2313-8912</issn></journal-meta><article-meta><article-id pub-id-type="doi">10.18413/2313-8912-2024-10-2-0-5</article-id><article-id pub-id-type="publisher-id">3500</article-id><article-categories><subj-group subj-group-type="heading"><subject>APPLIED LINGUISTICS</subject></subj-group></article-categories><title-group><article-title>Can gestures communicate the time of described events? The perception of temporal gestures executed by a companion robot</article-title><trans-title-group xml:lang="en"><trans-title>Can gestures communicate the time of described events? The perception of temporal gestures executed by a companion robot</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Kotov</surname><given-names>Artemy Aleksandrovich</given-names></name><name xml:lang="en"><surname>Kotov</surname><given-names>Artemy Aleksandrovich</given-names></name></name-alternatives><xref ref-type="aff" rid="aff1" /></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Zinina</surname><given-names>Anna Aleksandrovna</given-names></name><name xml:lang="en"><surname>Zinina</surname><given-names>Anna Aleksandrovna</given-names></name></name-alternatives><email>anna.zinina.22@gmail.com</email><xref ref-type="aff" rid="aff2" /></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Arinkin</surname><given-names>Nikita Alekseevich</given-names></name><name xml:lang="en"><surname>Arinkin</surname><given-names>Nikita Alekseevich</given-names></name></name-alternatives><email>nikita.arinkin@gmail.com</email><xref ref-type="aff" rid="aff3" /></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Berkuta</surname><given-names>Daria Igorevna</given-names></name><name xml:lang="en"><surname>Berkuta</surname><given-names>Daria Igorevna</given-names></name></name-alternatives><email>dariberkut@icloud.com</email><xref ref-type="aff" rid="aff4" /></contrib></contrib-group><aff id="aff4"><institution>Moscow State Linguistic University</institution></aff><aff id="aff2"><institution>Kurchatov Institute National Research Center, Russian State University for the Humanities, Moscow State Linguistic University</institution></aff><aff id="aff1"><institution>Kurchatov Institute National Research Center, Russian State University for the Humanities, Moscow State Linguistic University</institution></aff><aff id="aff3"><institution>Kurchatov Institute National Research
Center, Russian State University for the Humanities</institution></aff><pub-date pub-type="epub"><year>2024</year></pub-date><volume>10</volume><issue>2</issue><fpage>0</fpage><lpage>0</lpage><self-uri content-type="pdf" xlink:href="/media/linguistics/2024/2/2024-02_июнь_Том_10_2-100-116.pdf" /><abstract xml:lang="ru"><p>In this paper, we investigate whether temporal gestures indicating different events in time can convey information about the relative timing of these events. We start from the assertion that in many cultures time is metaphorically represented as an axis oriented from left to right, i.e. recent or future events occupy positions further to the right than events of the past. We test the hypothesis that temporal gestures referring to the events on this axis may communicate to the addressee the temporal reference of the described events. The hypothesis was tested in an experiment (N&amp;nbsp;=&amp;nbsp;22&amp;nbsp;people, average age 21.6&amp;nbsp;years, 17&amp;nbsp;female) in which two companion robots told stories accompanied by left-oriented, symmetric or right-oriented gestures. Each story included events from two different periods &amp;ndash; recent and past &amp;ndash; and each statement was accompanied by (a)&amp;nbsp;a left-oriented or a symmetric gesture (first condition), or (b) a right-oriented or a symmetric gesture (second condition). The subjects were asked to place the described events in time, marking each statement as referring to a recent or a past event. The hypothesis that the orientation of temporal gestures can communicate the time of the designated events was partially confirmed. The results are consistent with a conception of time oriented from left to right from the narrator&amp;rsquo;s perspective, in which the events of the past are in the center and later events (recent events, or &amp;lsquo;today&amp;rsquo;) are located on the right. The human listeners in the experiment correctly interpreted this conceptual orientation, even though from their point of view &amp;ldquo;recent events&amp;rdquo; are located on the left side of the metaphoric axis.</p></abstract><trans-abstract xml:lang="en"><p>In this paper, we investigate whether temporal gestures indicating different events in time can convey information about the relative timing of these events. We start from the assertion that in many cultures time is metaphorically represented as an axis oriented from left to right, i.e. recent or future events occupy positions further to the right than events of the past. We test the hypothesis that temporal gestures referring to the events on this axis may communicate to the addressee the temporal reference of the described events. The hypothesis was tested in an experiment (N&amp;nbsp;=&amp;nbsp;22&amp;nbsp;people, average age 21.6&amp;nbsp;years, 17&amp;nbsp;female) in which two companion robots told stories accompanied by left-oriented, symmetric or right-oriented gestures. Each story included events from two different periods &amp;ndash; recent and past &amp;ndash; and each statement was accompanied by (a)&amp;nbsp;a left-oriented or a symmetric gesture (first condition), or (b) a right-oriented or a symmetric gesture (second condition). The subjects were asked to place the described events in time, marking each statement as referring to a recent or a past event. The hypothesis that the orientation of temporal gestures can communicate the time of the designated events was partially confirmed. The results are consistent with a conception of time oriented from left to right from the narrator&amp;rsquo;s perspective, in which the events of the past are in the center and later events (recent events, or &amp;lsquo;today&amp;rsquo;) are located on the right.
The human listeners in the experiment correctly interpreted this conceptual orientation, even though from their point of view &amp;ldquo;recent events&amp;rdquo; are located on the left side of the metaphoric axis.</p></trans-abstract><kwd-group xml:lang="ru"><kwd>Multimodal communication</kwd><kwd>Temporal gestures</kwd><kwd>Human-machine interaction</kwd><kwd>Robot companion</kwd><kwd>Distribution of events in time</kwd></kwd-group><kwd-group xml:lang="en"><kwd>Multimodal communication</kwd><kwd>Temporal gestures</kwd><kwd>Human-machine interaction</kwd><kwd>Robot companion</kwd><kwd>Distribution of events in time</kwd></kwd-group></article-meta></front><back><ref-list><title>References</title><ref id="B1"><mixed-citation>Barsalou,&amp;nbsp;L.&amp;nbsp;W. (1999). Perceptual Symbol Systems, Behavioral and Brain Sciences, 22, 577&amp;ndash;660. https://doi.org/10.1017/S0140525X99002149 (In&amp;nbsp;English)</mixed-citation></ref><ref id="B2"><mixed-citation>Barsalou,&amp;nbsp;L.&amp;nbsp;W. (2005). Abstraction as dynamic interpretation in perceptual symbol systems, in Gershkoff-Stowe,&amp;nbsp;L. and Rakison,&amp;nbsp;D. (eds.), Building object categories, Carnegie Symposium Series, Erlbaum, 389&amp;ndash;431. (In&amp;nbsp;English)</mixed-citation></ref><ref id="B3"><mixed-citation>Boroditsky,&amp;nbsp;L. (2000). Metaphoric structuring: Understanding time through spatial metaphors, Cognition, 75&amp;nbsp;(1), 1&amp;ndash;28. https://doi.org/10.1016/S0010-0277(99)00073-6 (In&amp;nbsp;English)</mixed-citation></ref><ref id="B4"><mixed-citation>Boroditsky,&amp;nbsp;L., Fuhrman,&amp;nbsp;O. and McCormick,&amp;nbsp;K. (2011). Do English and Mandarin speakers think about time differently?, Cognition, 118&amp;nbsp;(1), 123&amp;ndash;129. https://doi.org/10.1016/j.cognition.2010.09.010 (In&amp;nbsp;English)</mixed-citation></ref><ref id="B5"><mixed-citation>Casasanto,&amp;nbsp;D. and Bottini,&amp;nbsp;R. (2010).
Can mirror-reading reverse the flow of time?, in H&amp;ouml;lscher,&amp;nbsp;C., Shipley,&amp;nbsp;T.&amp;nbsp;F., Olivetti Belardinelli,&amp;nbsp;M., Bateman,&amp;nbsp;J.&amp;nbsp;A. and Newcombe,&amp;nbsp;N.&amp;nbsp;S. (eds.), Spatial Cognition VII. Spatial Cognition 2010. Lecture Notes in Computer Science, Springer, Berlin, Heidelberg, 6222, 335&amp;ndash;345. https://doi.org/10.1007/978-3-642-14749-4_28 (In&amp;nbsp;English)</mixed-citation></ref><ref id="B6"><mixed-citation>Clark,&amp;nbsp;H.&amp;nbsp;H. (1973). Space, time, semantics, and the child, Cognitive development and acquisition of language, 27&amp;ndash;63. https://doi.org/10.1016/B978-0-12-505850-6.50008-6 (In&amp;nbsp;English)</mixed-citation></ref><ref id="B7"><mixed-citation>Cooperrider,&amp;nbsp;K. and N&amp;uacute;&amp;ntilde;ez,&amp;nbsp;R. (2007). Doing time: speech, gesture, and the conceptualization of time, Center for Research in Language Technical Reports, 19, 3&amp;ndash;19. (In&amp;nbsp;English)</mixed-citation></ref><ref id="B8"><mixed-citation>Fuhrman,&amp;nbsp;O. and Boroditsky,&amp;nbsp;L. (2007). Mental time-lines follow writing direction: Comparing English and Hebrew speakers, Proceedings of the Annual Meeting of the Cognitive Science Society, 29&amp;nbsp;(29), 1007&amp;ndash;1011. (In&amp;nbsp;English)</mixed-citation></ref><ref id="B9"><mixed-citation>Fuhrman,&amp;nbsp;O. and Boroditsky,&amp;nbsp;L. (2010). Cross-cultural differences in mental representations of time: Evidence from an implicit nonlinguistic task, Cognitive Science, 34&amp;nbsp;(8), 1430&amp;ndash;1451. https://doi.org/10.1111/j.1551-6709.2010.01105.x (In&amp;nbsp;English)</mixed-citation></ref><ref id="B10"><mixed-citation>Grishina,&amp;nbsp;E.&amp;nbsp;A. (2012). Ukazaniya rukoj kak sistema (po dannym Mul&amp;#39;timedijnogo russkogo korpusa) [Hand indications as a system (on the data of the Multimedia Russian Corpus)], Voprosy Yazykoznaniya, 3, 3&amp;ndash;50.
(In&amp;nbsp;Russian)</mixed-citation></ref><ref id="B11"><mixed-citation>Grishina,&amp;nbsp;E.&amp;nbsp;A. (2018). Russkaya zhestikulyaciya s lingvisticheskoj tochki zreniya. Korpusnye issledovaniya [Russian gesticulation from the linguistic point of view. Corpus studies], Yazyki slavyanskoj kul&amp;#39;tury, Moscow, Russia. (In&amp;nbsp;Russian)</mixed-citation></ref><ref id="B12"><mixed-citation>Grishina,&amp;nbsp;E., Savchuk,&amp;nbsp;S. and Sichinava,&amp;nbsp;D. (2012). Multimodal Parallel Russian Corpus (MultiPARC): Main Tasks and General Structure, LREC 2012 Workshop on Best Practices for Speech Corpora in Linguistic Research, Istanbul, Turkey, 13&amp;ndash;16. (In&amp;nbsp;English)</mixed-citation></ref><ref id="B13"><mixed-citation>Gu,&amp;nbsp;Y., Zheng,&amp;nbsp;Y. and Swerts,&amp;nbsp;M. (2019). Which is in front of Chinese people, past or future? The effect of language and culture on temporal gestures and spatial conceptions of time, Cognitive Science, 43&amp;nbsp;(12), e12804. https://doi.org/10.1111/cogs.12804 (In&amp;nbsp;English)</mixed-citation></ref><ref id="B14"><mixed-citation>Haviland,&amp;nbsp;J.&amp;nbsp;B. (1993). Anchoring, iconicity, and orientation in Guugu Yimithirr pointing gestures, Journal of Linguistic Anthropology, 3&amp;nbsp;(1), 3&amp;ndash;45. https://doi.org/10.1525/jlin.1993.3.1.3 (In&amp;nbsp;English)</mixed-citation></ref><ref id="B15"><mixed-citation>Kendon,&amp;nbsp;A. (1980). Gesticulation and speech: Two aspects of the process of utterance, The Relationship of Verbal and Nonverbal Communication, 25&amp;nbsp;(1980), 207&amp;ndash;227. (In&amp;nbsp;English)</mixed-citation></ref><ref id="B16"><mixed-citation>Kibrik,&amp;nbsp;A.&amp;nbsp;A. (2011). Reference in Discourse, Oxford University Press, UK.
(In&amp;nbsp;English)</mixed-citation></ref><ref id="B17"><mixed-citation>Kopp,&amp;nbsp;S., Krenn,&amp;nbsp;B., Marsella,&amp;nbsp;S., Marshall,&amp;nbsp;A., Pelachaud,&amp;nbsp;C., Pirker,&amp;nbsp;H., Th&amp;oacute;risson,&amp;nbsp;K. and Vilhj&amp;aacute;lmsson,&amp;nbsp;H. (2006). Towards a Common Framework for Multimodal Generation: The Behavior Markup Language, in Gratch,&amp;nbsp;J., Young,&amp;nbsp;M., Aylett,&amp;nbsp;R., Ballin,&amp;nbsp;D. and Olivier,&amp;nbsp;P. (eds.), Intelligent Virtual Agents, 4133, Springer, Berlin, Heidelberg, 205&amp;ndash;217. http://dx.doi.org/10.1007/11821830_17 (In&amp;nbsp;English)</mixed-citation></ref><ref id="B18"><mixed-citation>Kotov,&amp;nbsp;A.&amp;nbsp;A. and Zinina,&amp;nbsp;A.&amp;nbsp;A. (2015). Funkcional&amp;#39;nyj analiz neverbal&amp;#39;nogo kommunikativnogo povedeniya [Functional Analysis of Non-Verbal Communicative Behavior], Proceedings of the International Conference Computer linguistics and intellectual technologies, Moscow, Russia, 287&amp;ndash;295. (In&amp;nbsp;Russian)</mixed-citation></ref><ref id="B19"><mixed-citation>Lakoff,&amp;nbsp;G. and Johnson,&amp;nbsp;M. (2003). Metaphors we live by, University of Chicago Press, Chicago, IL. (In&amp;nbsp;English)</mixed-citation></ref><ref id="B20"><mixed-citation>McNeill,&amp;nbsp;D. (1992). Hand and mind: What gestures reveal about thought, University of Chicago Press, Chicago, IL. (In&amp;nbsp;English)</mixed-citation></ref><ref id="B21"><mixed-citation>Miles,&amp;nbsp;L., Nind,&amp;nbsp;L. and Macrae,&amp;nbsp;C. (2010). Moving through time, Psychological Science, 21&amp;nbsp;(2), 222. (In&amp;nbsp;English)</mixed-citation></ref><ref id="B22"><mixed-citation>Nagai,&amp;nbsp;Y., Hosoda,&amp;nbsp;K., Morita,&amp;nbsp;A. and Asada,&amp;nbsp;M. (2003). A constructive model for the development of joint attention, Connection Science, 15&amp;nbsp;(4), 211&amp;ndash;229.
https://doi.org/10.1080/09540090310001655101 (In&amp;nbsp;English)</mixed-citation></ref><ref id="B23"><mixed-citation>N&amp;uacute;&amp;ntilde;ez,&amp;nbsp;R.&amp;nbsp;E. and Sweetser,&amp;nbsp;E. (2006). With the future behind them: Convergent evidence from Aymara language and gesture in the crosslinguistic comparison of spatial construals of time, Cognitive Science, 30&amp;nbsp;(3), 401&amp;ndash;450. https://doi.org/10.1207/s15516709cog0000_62 (In&amp;nbsp;English)</mixed-citation></ref><ref id="B24"><mixed-citation>Ouellet,&amp;nbsp;M., Santiago,&amp;nbsp;J., Israeli,&amp;nbsp;Z. and Gabay,&amp;nbsp;S. (2010). Is the future the right time?, Experimental Psychology, 57&amp;nbsp;(4). https://doi.org/10.1027/1618-3169/a000036 (In&amp;nbsp;English)</mixed-citation></ref><ref id="B25"><mixed-citation>Sidner,&amp;nbsp;C.&amp;nbsp;L., Kidd,&amp;nbsp;C.&amp;nbsp;D., Lee,&amp;nbsp;C. and Lesh,&amp;nbsp;N. (2004). Where to look: a study of human-robot engagement, Proceedings of the 9th International Conference on Intelligent User Interfaces, New York, NY, United States, 78&amp;ndash;84. https://doi.org/10.1145/964442.964458 (In&amp;nbsp;English)</mixed-citation></ref><ref id="B26"><mixed-citation>Ulrich,&amp;nbsp;R., Eikmeier,&amp;nbsp;V., de&amp;nbsp;la&amp;nbsp;Vega,&amp;nbsp;I., Ruiz Fern&amp;aacute;ndez,&amp;nbsp;S., Alex-Ruf,&amp;nbsp;S. and Maienborn,&amp;nbsp;C. (2012). With the past behind and the future ahead: Back-to-front representation of past and future sentences, Memory &amp;amp; Cognition, 40, 483&amp;ndash;495. https://doi.org/10.3758/s13421-011-0162-4 (In&amp;nbsp;English)</mixed-citation></ref><ref id="B27"><mixed-citation>Vilhj&amp;aacute;lmsson,&amp;nbsp;H., Cantelmo,&amp;nbsp;N., Cassell,&amp;nbsp;J.&amp;nbsp;E.,
Chafai,&amp;nbsp;N., Kipp,&amp;nbsp;M., Kopp,&amp;nbsp;S., Mancini,&amp;nbsp;M., Marsella,&amp;nbsp;S., Marshall,&amp;nbsp;A., Pelachaud,&amp;nbsp;C., Ruttkay,&amp;nbsp;Z., Th&amp;oacute;risson,&amp;nbsp;K., van&amp;nbsp;Welbergen,&amp;nbsp;H. and van&amp;nbsp;der&amp;nbsp;Werf,&amp;nbsp;R. (2007). The Behavior Markup Language: Recent Developments and Challenges, Intelligent Virtual Agents, 99&amp;ndash;111. http://dx.doi.org/10.1007/978-3-540-74997-4_10 (In&amp;nbsp;English)</mixed-citation></ref><ref id="B28"><mixed-citation>Xiao,&amp;nbsp;C., Zhao,&amp;nbsp;M. and Chen,&amp;nbsp;L. (2018). Both Earlier Times and the Future Are &amp;ldquo;Front&amp;rdquo;: The Distinction Between Time- and Ego-Reference-Points in Mandarin Speakers&amp;rsquo; Temporal Representation, Cognitive Science, 42&amp;nbsp;(3), 1026&amp;ndash;1040. https://doi.org/10.1111/cogs.12552 (In&amp;nbsp;English)</mixed-citation></ref><ref id="B29"><mixed-citation>Zinina,&amp;nbsp;A., Arinkin,&amp;nbsp;N., Zaydelman,&amp;nbsp;L. and Kotov,&amp;nbsp;A. (2019). The role of oriented gestures during robot&amp;rsquo;s communication to a human, Proceedings of the International Conference Computer linguistics and intellectual technologies, Moscow, Russia, 800&amp;ndash;808. (In&amp;nbsp;English)</mixed-citation></ref><ref id="B30"><mixed-citation>Zinina,&amp;nbsp;A., Kotov,&amp;nbsp;A., Arinkin,&amp;nbsp;N. and Zaidelman,&amp;nbsp;L. (2022). Learning a foreign language vocabulary with a companion robot, Cognitive Systems Research, 77, 110&amp;ndash;114. https://doi.org/10.1016/j.cogsys.2022.10.007 (In&amp;nbsp;English)</mixed-citation></ref></ref-list></back></article>