<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20190208//EN" "http://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1.dtd">
<article article-type="research-article" dtd-version="1.2" xml:lang="ru" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><front><journal-meta><journal-id journal-id-type="issn">2313-8912</journal-id><journal-title-group><journal-title>Research Result. Theoretical and Applied Linguistics</journal-title></journal-title-group><issn pub-type="epub">2313-8912</issn></journal-meta><article-meta><article-id pub-id-type="doi">10.18413/2313-8912-2024-10-4-0-7</article-id><article-id pub-id-type="publisher-id">3678</article-id><article-categories><subj-group subj-group-type="heading"><subject>Human Language Behaviour in Machine-Generated Environments</subject></subj-group></article-categories><title-group><article-title>Technosemantics of gesture: on the possibilities of using Perm sign notation in software-generated environments</article-title><trans-title-group xml:lang="en"><trans-title>Technosemantics of gesture: on the possibilities of using Perm sign notation in software-generated environments</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Belousov</surname><given-names>Konstantin I.</given-names></name><name xml:lang="en"><surname>Belousov</surname><given-names>Konstantin I.</given-names></name></name-alternatives><email>belousovki@gmail.com</email><xref ref-type="aff" rid="aff1" /></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Taleski</surname><given-names>Aleksandar</given-names></name><name xml:lang="en"><surname>Taleski</surname><given-names>Aleksandar</given-names></name></name-alternatives><email>taleski87@yahoo.com</email><xref ref-type="aff" rid="aff1" /></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Agaev</surname><given-names>Artem R.</given-names></name><name 
xml:lang="en"><surname>Agaev</surname><given-names>Artem R.</given-names></name></name-alternatives><email>agaev-artem1@yandex.ru</email><xref ref-type="aff" rid="aff2" /></contrib></contrib-group><aff id="aff1"><institution>Perm State University</institution></aff><aff id="aff2"><institution>Human Semantics LLC</institution></aff><pub-date pub-type="epub"><year>2024</year></pub-date><volume>10</volume><issue>4</issue><fpage>0</fpage><lpage>0</lpage><self-uri content-type="pdf" xlink:href="/media/linguistics/2024/4/Research_Result_4-42-140-181.pdf" /><abstract xml:lang="ru"><p>This paper is dedicated to the development of a concept and software solution for generating human movements based on a semantically-oriented language notation created by the authors. The language notation is presented as a formula with a flexible structure of concepts and rules for their implementation, allowing movement parameters to be easily adjusted to match an ideal or a real sample.

To model movements represented in the language notation, a cross-platform application was developed using Blender 4.2 for the visualization and generation of gestures for anthropomorphic models. The movement control system consists of the following stages: translating the movement into a gesture notation record; parsing this record; and constructing an internal representation of the movement as a sequence of frames. Each frame specifies which bone (body part) of the performer it refers to, how it affects that bone&amp;rsquo;s position, and at what time from the start of the movement the frame applies. In the final stage, the internal representation is transformed into the performer&amp;rsquo;s movement; the performer can be either a virtual anthropomorphic 3D model or a physical software-hardware system in the form of an anthropomorphic robot.
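To make the staged control flow concrete, the pipeline above (notation record, parsed record, time-ordered frame sequence) can be sketched in a few lines of Python. The "bone:rotation@time" record syntax and all names here (Frame, parse_record, build_frames) are hypothetical illustrations invented for this sketch, not the actual Perm notation or the authors' API.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    bone: str        # which bone (body part) of the performer the frame addresses
    rotation: float  # simplified single-axis effect on the bone's position, degrees
    time: float      # seconds from the start of the movement

def parse_record(record: str) -> list[tuple[str, float, float]]:
    """Parse a toy 'bone:rotation@time' notation record, tokens separated by ';'."""
    tokens = []
    for part in record.split(";"):
        bone, rest = part.split(":")
        rotation, time = rest.split("@")
        tokens.append((bone.strip(), float(rotation), float(time)))
    return tokens

def build_frames(tokens: list[tuple[str, float, float]]) -> list[Frame]:
    """Internal representation of the movement: frames ordered by time."""
    return sorted((Frame(b, r, t) for b, r, t in tokens), key=lambda f: f.time)

# Stages 1-3: notation record -> parsed record -> frame sequence.
frames = build_frames(parse_record("hand.R:-30@0.4; forearm.R:45@0.0"))
# Stage 4 (driving a 3D rig or robot) would iterate over `frames`
# and apply each rotation to the named bone at the given time.
```

A real driver for the final stage would map each Frame onto a pose bone of a Blender armature (or a robot joint) and keyframe it at the frame's time; the sketch stops at the internal representation.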

To better correspond to anthropomorphic behavior, in addition to &amp;ldquo;ideal samples&amp;rdquo; of human movement, the team used models of human gesture behavior drawn from a multimodal corpus created specially for this purpose. The material consists of audiovisual recordings of spontaneous oral texts in which respondents describe a wide range of their own emotional states. The data obtained during the experimental research confirmed the flexibility, enhanced controllability, and modularity of the language notation, as well as its ability to model the continuous space of human motor activity.
</p></abstract><trans-abstract xml:lang="en"><p>This paper is dedicated to the development of a concept and software solution for generating human movements based on a semantically-oriented language notation created by the authors. The language notation is presented as a formula with a flexible structure of concepts and rules for their implementation, allowing movement parameters to be easily adjusted to match an ideal or a real sample.

To model movements represented in the language notation, a cross-platform application was developed using Blender 4.2 for the visualization and generation of gestures for anthropomorphic models. The movement control system consists of the following stages: translating the movement into a gesture notation record; parsing this record; and constructing an internal representation of the movement as a sequence of frames. Each frame specifies which bone (body part) of the performer it refers to, how it affects that bone&amp;rsquo;s position, and at what time from the start of the movement the frame applies. In the final stage, the internal representation is transformed into the performer&amp;rsquo;s movement; the performer can be either a virtual anthropomorphic 3D model or a physical software-hardware system in the form of an anthropomorphic robot.

To better correspond to anthropomorphic behavior, in addition to &amp;ldquo;ideal samples&amp;rdquo; of human movement, the team used models of human gesture behavior drawn from a multimodal corpus created specially for this purpose. The material consists of audiovisual recordings of spontaneous oral texts in which respondents describe a wide range of their own emotional states. The data obtained during the experimental research confirmed the flexibility, enhanced controllability, and modularity of the language notation, as well as its ability to model the continuous space of human motor activity.
</p></trans-abstract><kwd-group xml:lang="ru"><kwd>Technosemantics</kwd><kwd>Movement generation</kwd><kwd>Language notation of gestures</kwd><kwd>Gesture</kwd><kwd>Visualization</kwd><kwd>Multimodal corpus</kwd><kwd>3D graphics</kwd><kwd>Interpretability</kwd></kwd-group><kwd-group xml:lang="en"><kwd>Technosemantics</kwd><kwd>Movement generation</kwd><kwd>Language notation of gestures</kwd><kwd>Gesture</kwd><kwd>Visualization</kwd><kwd>Multimodal corpus</kwd><kwd>3D graphics</kwd><kwd>Interpretability</kwd></kwd-group></article-meta></front><back><ref-list><title>References</title><ref id="B1"><mixed-citation>Abbie,&amp;nbsp;M. (1974). Movement notation, The Australian journal of physiotherapy, 20&amp;nbsp;(2), 61&amp;ndash;69. https://doi.org/10.1016/S0004-9514(14)61177-6 (In English)</mixed-citation></ref><ref id="B2"><mixed-citation>Bashan,&amp;nbsp;M., Einbinder,&amp;nbsp;H., Harries,&amp;nbsp;J., Shosani,&amp;nbsp;M. and Shoval,&amp;nbsp;D. (2024). Movement Notation: Eshkol and Abraham Wachmann, Verlag der Buchhandlung Walther K&amp;ouml;nig, K&amp;ouml;ln, Germany. (In English)</mixed-citation></ref><ref id="B3"><mixed-citation>Belousov,&amp;nbsp;K.&amp;nbsp;I., Sazina,&amp;nbsp;D.&amp;nbsp;A., Ryabinin,&amp;nbsp;K.&amp;nbsp;V. and Brokhin,&amp;nbsp;L.&amp;nbsp;Yu. (2024). Sensory Technolinguistics: On Mechanisms of Transmitting Multimodal Messages in Perceptual-Cognitive Interfaces, Automatic Documentation and Mathematical Linguistics, 58&amp;nbsp;(2), 108&amp;ndash;116. https://doi.org/10.3103/s0005105524700079 (In English)</mixed-citation></ref><ref id="B4"><mixed-citation>Benesh,&amp;nbsp;R. and Benesh,&amp;nbsp;J. (1956). An Introduction to Benesh Dance Notation, A &amp;amp; C Black, London, UK. (In English)</mixed-citation></ref><ref id="B5"><mixed-citation>Bernardet,&amp;nbsp;U., Fdili Alaoui,&amp;nbsp;S., Studd,&amp;nbsp;K., Bradley,&amp;nbsp;K., Pasquier,&amp;nbsp;P. and Schiphorst,&amp;nbsp;T. 
(2019). Assessing the reliability of the Laban Movement Analysis system, PLoS ONE, 14&amp;nbsp;(6): e0218179. https://doi.org/10.1371/journal.pone.0218179 (In English)</mixed-citation></ref><ref id="B6"><mixed-citation>Birdwhistell,&amp;nbsp;R.&amp;nbsp;L. (1952). Introduction to Kinesics: An Annotation System for Analysis of Body Motion and Gesture, Foreign Service Institute, Washington, DC, USA. (In English)</mixed-citation></ref><ref id="B7"><mixed-citation>Bull,&amp;nbsp;P. and Doody,&amp;nbsp;J.&amp;nbsp;P. (2013). Gesture and body movement, De Gruyter eBooks, 205&amp;ndash;228. https://doi.org/10.1515/9783110238150.205 (In English)</mixed-citation></ref><ref id="B8"><mixed-citation>Calvert,&amp;nbsp;T. (2015). Approaches to the Representation of Human Movement: Notation, Animation and Motion Capture, Dance Notations and Robot Motion, Springer Tracts in Advanced Robotics, 111, 49&amp;ndash;68. https://doi.org/10.1007/978-3-319-25739-6_3 (In English)</mixed-citation></ref><ref id="B9"><mixed-citation>Dael,&amp;nbsp;N., Mortillaro,&amp;nbsp;M. and Scherer,&amp;nbsp;K.&amp;nbsp;R. (2012). The Body Action and Posture Coding System (BAP): Development and Reliability, Journal of Nonverbal Behavior, 36&amp;nbsp;(2), 97&amp;ndash;121. https://doi.org/10.1007/s10919-012-0130-0 (In English)</mixed-citation></ref><ref id="B10"><mixed-citation>Dell,&amp;nbsp;C. (1977). A Primer for Movement Description: Using Effort-shape and Supplementary Concepts, Dance Notation Bureau Press, New York, USA. (In English)</mixed-citation></ref><ref id="B11"><mixed-citation>Duprey,&amp;nbsp;S., Naaim,&amp;nbsp;A., Moissenet,&amp;nbsp;F., Begon,&amp;nbsp;M. and Ch&amp;egrave;ze,&amp;nbsp;L. (2017). Kinematic models of the upper limb joints for multibody kinematics optimisation: An overview, Journal of Biomechanics, 62, 87&amp;ndash;94. https://doi.org/10.1016/j.jbiomech.2016.12.005 (In English)</mixed-citation></ref><ref id="B12"><mixed-citation>Ekman,&amp;nbsp;P. and Friesen,&amp;nbsp;W.&amp;nbsp;V. 
(1978). Facial Action Coding System, Consulting Psychologists, Palo Alto, CA, USA. (In English)</mixed-citation></ref><ref id="B13"><mixed-citation>El Raheb,&amp;nbsp;K. and Ioannidis,&amp;nbsp;Y. (2014). From dance notation to conceptual models: a multilayer approach, Proceedings of the 2014 International Workshop on Movement and Computing, MOCO, ACM, New York, 25&amp;ndash;30. (In English)</mixed-citation></ref><ref id="B14"><mixed-citation>El Raheb,&amp;nbsp;K., Buccoli,&amp;nbsp;M., Zanoni,&amp;nbsp;M., Katifori,&amp;nbsp;A., Kasomoulis,&amp;nbsp;A., Sarti,&amp;nbsp;A. and Ioannidis,&amp;nbsp;Y. (2023). Towards a general framework for the annotation of dance motion sequences, Multimed Tools Appl, 82, 3363&amp;ndash;3395. https://doi.org/10.1007/s11042-022-12602-y (In English)</mixed-citation></ref><ref id="B15"><mixed-citation>Eshkol,&amp;nbsp;N. and Wachmann,&amp;nbsp;A. (1958). Movement Notation, Weidenfeld and Nicolson, London, UK. (In English)</mixed-citation></ref><ref id="B16"><mixed-citation>Farnell,&amp;nbsp;B.&amp;nbsp;M. (1996). Movement Notation Systems, The World&amp;rsquo;s Writing Systems, in Daniels P. T. (ed.), 855&amp;ndash;879. (In English)</mixed-citation></ref><ref id="B17"><mixed-citation>Frishberg,&amp;nbsp;N. (1983). Writing systems and problems for sign language notation, Journal for the Anthropological Study of Human Movement, 2&amp;nbsp;(4), 169&amp;ndash;195. (In English)</mixed-citation></ref><ref id="B18"><mixed-citation>Frey,&amp;nbsp;S., Hirsbrunner,&amp;nbsp;H.-P. and Jorns,&amp;nbsp;U. (1982). Time-Series Notation: A Coding Principle for the Unified Assessment of Speech and Movement in Communication Research, Gunter Narr Verlag, T&amp;uuml;bingen, Germany. (In English)</mixed-citation></ref><ref id="B19"><mixed-citation>Grushkin,&amp;nbsp;D.&amp;nbsp;A. (2017). Writing Signed Languages: What For? What Form?, American Annals of the Deaf, 161&amp;nbsp;(5), 509&amp;ndash;527. 
https://doi.org/10.1353/aad.2017.0001 (In English)</mixed-citation></ref><ref id="B20"><mixed-citation>Guest,&amp;nbsp;A.&amp;nbsp;H. (1984). Dance Notation: The Process of Recording Movement on Paper, Dance Horizons, New York, USA. (In English)</mixed-citation></ref><ref id="B21"><mixed-citation>Guest,&amp;nbsp;A.&amp;nbsp;H. (2005). Labanotation: The System of Analyzing and Recording Movement (4th ed.), Routledge, New York, USA. https://doi.org/10.4324/9780203823866 (In English)</mixed-citation></ref><ref id="B22"><mixed-citation>Harrigan,&amp;nbsp;J.&amp;nbsp;A. (2008). Proxemics, Kinesics, and Gaze, The New Handbook of Methods in Nonverbal Behavior Research, 136&amp;ndash;198. https://doi.org/10.1093/acprof:oso/9780198529620.003.0004 (In English)</mixed-citation></ref><ref id="B23"><mixed-citation>Izquierdo,&amp;nbsp;C. and Anguera,&amp;nbsp;M.&amp;nbsp;T. (2018). Movement notation revisited: syntax of the common morphokinetic alphabet (CMA) system, Front. Psychol, 9:1416. https://doi.org/10.3389/fpsyg.2018.01416 (In English)</mixed-citation></ref><ref id="B24"><mixed-citation>Karg,&amp;nbsp;M., Samadani,&amp;nbsp;A.-A., Gorbet,&amp;nbsp;R., Kuhnlenz,&amp;nbsp;K., Hoey,&amp;nbsp;J. and Kulic,&amp;nbsp;D. (2013). Body Movements for Affective Expression: A Survey of Automatic Recognition and Generation, IEEE Transactions on Affective Computing, 4&amp;nbsp;(4), 341&amp;ndash;359. https://doi.org/10.1109/t-affc.2013.29 (In English)</mixed-citation></ref><ref id="B25"><mixed-citation>Kendon,&amp;nbsp;A. (1997). Gesture, Annual Review of Anthropology, 26&amp;nbsp;(1), 109&amp;ndash;128. https://doi.org/10.1146/annurev.anthro.26.1.109 (In English)</mixed-citation></ref><ref id="B26"><mixed-citation>Key,&amp;nbsp;M.&amp;nbsp;R. (1977). Nonverbal communication: a research guide and bibliography, The Scarecrow Press, Metuchen, N.J., USA. (In English)</mixed-citation></ref><ref id="B27"><mixed-citation>Kilpatrick,&amp;nbsp;C.&amp;nbsp;E. (2020). 
Movement, Gesture, and Singing: A Review of Literature, Update: Applications of Research in Music Education, 38&amp;nbsp;(3), 29&amp;ndash;37. https://doi.org/10.1177/8755123320908612 (In English)</mixed-citation></ref><ref id="B28"><mixed-citation>Laban,&amp;nbsp;R.&amp;nbsp;von and Lawrence,&amp;nbsp;F.&amp;nbsp;C. (1974). Effort: Economy of Human Movement, 2nd ed., Macdonald &amp;amp; Evans, London, UK. (In English)</mixed-citation></ref><ref id="B29"><mixed-citation>Laumond,&amp;nbsp;J. and Abe,&amp;nbsp;N. (2016). Dance Notations and Robot Motion, Springer International Publishing AG, Cham (ZG), Switzerland. https://doi.org/10.1007/978-3-319-25739-6 (In English)</mixed-citation></ref><ref id="B30"><mixed-citation>Liu,&amp;nbsp;H., Zhu,&amp;nbsp;Z., Iwamoto,&amp;nbsp;N., Peng,&amp;nbsp;Y., Li,&amp;nbsp;Zh., Zhou,&amp;nbsp;Y., Bozkurt,&amp;nbsp;E. and Zheng,&amp;nbsp;B. (2022). BEAT: A Large-Scale Semantic and Emotional Multi-modal Dataset for Conversational Gestures Synthesis, Computer Vision &amp;ndash; ECCV 2022, 612&amp;ndash;630. https://doi.org/10.48550/arXiv.2203.05297 (In English)</mixed-citation></ref><ref id="B31"><mixed-citation>Murillo,&amp;nbsp;E., Montero,&amp;nbsp;I. and Casla,&amp;nbsp;M. (2021). On the multimodal path to language: The relationship between rhythmic movements and deictic gestures at the end of the first year, Frontiers in Psychology, 12, 1&amp;ndash;8. https://doi.org/10.3389/fpsyg.2021.616812 (In English)</mixed-citation></ref><ref id="B32"><mixed-citation>Novack,&amp;nbsp;M.&amp;nbsp;A., Wakefield,&amp;nbsp;E.&amp;nbsp;M. and Goldin-Meadow,&amp;nbsp;S. (2016). What makes a movement a gesture?, Cognition, 146, 339&amp;ndash;348. https://doi.org/10.1016/j.cognition.2015.10.014 (In English)</mixed-citation></ref><ref id="B33"><mixed-citation>Qi,&amp;nbsp;X., Liu,&amp;nbsp;C., Li,&amp;nbsp;L., Hou,&amp;nbsp;J., Xin,&amp;nbsp;H. and Yu,&amp;nbsp;X. (2024). 
Emotion Gesture: Audio-Driven Diverse Emotional Co-Speech 3D Gesture Generation, IEEE Transactions on Multimedia, 1&amp;ndash;11. https://doi.org/10.1109/TMM.2024.3407692 (In English)</mixed-citation></ref><ref id="B34"><mixed-citation>Streeck,&amp;nbsp;J. (2010). The Significance of Gesture: How it is Established, Papers in Pragmatics, 2&amp;nbsp;(1&amp;ndash;2). https://doi.org/10.1075/iprapip.2.1-2.03str (In English)</mixed-citation></ref><ref id="B35"><mixed-citation>Shafir,&amp;nbsp;T., Tsachor,&amp;nbsp;R. and Welch,&amp;nbsp;K.&amp;nbsp;B. (2016). Emotion Regulation through Movement: Unique Sets of Movement Characteristics are Associated with and Enhance Basic Emotions, Frontiers in Psychology, 6. https://doi.org/10.3389/fpsyg.2015.02030 (In English)</mixed-citation></ref><ref id="B36"><mixed-citation>Stults-Kolehmainen,&amp;nbsp;M.&amp;nbsp;A. (2023). Humans have a basic physical and psychological need to move the body: Physical activity as a primary drive, Frontiers in Psychology, 14. https://doi.org/10.3389/fpsyg.2023.1134049 (In English)</mixed-citation></ref><ref id="B37"><mixed-citation>Tonoli,&amp;nbsp;R.&amp;nbsp;L., Costa,&amp;nbsp;P.&amp;nbsp;D.&amp;nbsp;P., Marques,&amp;nbsp;L.&amp;nbsp;B.&amp;nbsp;d.&amp;nbsp;M.&amp;nbsp;M. and Ueda,&amp;nbsp;L.&amp;nbsp;H. (2024). Gesture Area Coverage to Assess Gesture Expressiveness and Human-Likeness, International Conference on Multimodal Interaction (ICMI Companion &amp;rsquo;24), 4&amp;ndash;8 November 2024, San Jose, Costa Rica. ACM, New York, NY, USA. https://doi.org/10.1145/3686215.3688822 (In English)</mixed-citation></ref><ref id="B38"><mixed-citation>Trujillo,&amp;nbsp;J.&amp;nbsp;P., Vaitonyte,&amp;nbsp;J., Simanova,&amp;nbsp;I. and &amp;Ouml;zy&amp;uuml;rek,&amp;nbsp;A. (2018). Toward the markerless and automatic analysis of kinematic features: A toolkit for gesture and movement research, Behavior Research Methods, 51&amp;nbsp;(2), 769&amp;ndash;777. 
https://doi.org/10.3758/s13428-018-1086-8 (In English)</mixed-citation></ref><ref id="B39"><mixed-citation>Van&amp;nbsp;Elk,&amp;nbsp;M., van&amp;nbsp;Schie,&amp;nbsp;H.&amp;nbsp;T. and Bekkering,&amp;nbsp;H. (2009). Short-term action intentions overrule long-term semantic knowledge, Cognition, 111&amp;nbsp;(1), 72&amp;ndash;83. https://doi.org/10.1016/j.cognition.2008.12.002 (In English)</mixed-citation></ref><ref id="B40"><mixed-citation>Yang,&amp;nbsp;S., Wu,&amp;nbsp;Z., Li,&amp;nbsp;M., Zhang,&amp;nbsp;Z., Hao,&amp;nbsp;L., Bao,&amp;nbsp;W. and Zhuang,&amp;nbsp;H. (2023). QPGesture: Quantization-Based and Phase-Guided Motion Matching for Natural Speech-Driven Gesture Generation, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2321&amp;ndash;2330. https://doi.org/10.48550/arXiv.2305.11094 (In English)</mixed-citation></ref><ref id="B41"><mixed-citation>Yoon,&amp;nbsp;Y., Cha,&amp;nbsp;B., Lee,&amp;nbsp;J.-H., Jang,&amp;nbsp;M., Lee,&amp;nbsp;J., Kim,&amp;nbsp;J. and Lee,&amp;nbsp;G. (2020). Speech Gesture Generation from the Trimodal Context of Text, Audio, and Speaker Identity, ACM Transactions on Graphics, 39&amp;nbsp;(6). https://doi.org/10.1145/3414685.3417838 (In English)</mixed-citation></ref><ref id="B42"><mixed-citation>Zhi,&amp;nbsp;Y., Cun,&amp;nbsp;X., Chen,&amp;nbsp;X., Shen,&amp;nbsp;X., Guo,&amp;nbsp;W., Huang,&amp;nbsp;S. and Gao,&amp;nbsp;S. (2023). LivelySpeaker: Towards Semantic-Aware Co-Speech Gesture Generation, Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 20807&amp;ndash;20817. https://doi.org/10.1109/ICCV51070.2023.01902 (In English)</mixed-citation></ref></ref-list></back></article>