<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20190208//EN" "http://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1.dtd">
<article article-type="research-article" dtd-version="1.2" xml:lang="ru" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><front><journal-meta><journal-id journal-id-type="issn">2313-8912</journal-id><journal-title-group><journal-title>Научный результат. Вопросы теоретической и прикладной лингвистики</journal-title></journal-title-group><issn pub-type="epub">2313-8912</issn></journal-meta><article-meta><article-id pub-id-type="doi">10.18413/2313-8912-2024-10-4-0-7</article-id><article-id pub-id-type="publisher-id">3678</article-id><article-categories><subj-group subj-group-type="heading"><subject>Языковое поведение в машинно-генерируемых средах</subject></subj-group></article-categories><title-group><article-title>Техносемантика жеста: о возможностях использования пермской жестовой нотации в программно-генерируемых средах</article-title><trans-title-group xml:lang="en"><trans-title>Technosemantics of gesture: on the possibilities of using Perm sign notation in software-generated environments</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Белоусов</surname><given-names>Константин Игоревич</given-names></name><name xml:lang="en"><surname>Belousov</surname><given-names>Konstantin I.</given-names></name></name-alternatives><email>belousovki@gmail.com</email><xref ref-type="aff" rid="aff1" /></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Талески</surname><given-names>Александар</given-names></name><name xml:lang="en"><surname>Taleski</surname><given-names>Aleksandar</given-names></name></name-alternatives><email>taleski87@yahoo.com</email><xref ref-type="aff" rid="aff1" /></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Агаев</surname><given-names>Артем Русланович</given-names></name><name xml:lang="en"><surname>Agaev</surname><given-names>Artem R.</given-names></name></name-alternatives><email>agaev-artem1@yandex.ru</email><xref ref-type="aff" rid="aff2" /></contrib></contrib-group><aff id="aff1"><institution>Пермский государственный национальный исследовательский университет</institution></aff><aff id="aff2"><institution>ООО «Хьюмен Семантикс»</institution></aff><pub-date pub-type="epub"><year>2024</year></pub-date><volume>10</volume><issue>4</issue><fpage>0</fpage><lpage>0</lpage><self-uri content-type="pdf" xlink:href="/media/linguistics/2024/4/Research_Result_4-42-140-181.pdf" /><abstract xml:lang="ru"><p>Статья посвящена разработке концепции и программного решения генерации движений человека на основе созданной коллективом семантико-ориентированной жестовой нотации. Нотация представляется в виде формулы, отличающейся гибкой структурой понятий и правил их реализации, что позволяет легко изменять параметры движения для соответствия идеальному или реальному образцу.

Для моделирования движений, представленных в языковой нотации, разработано кроссплатформенное приложение на базе 3D-редактора Blender 4.2 для визуализации и генерации анимации жестов антропоморфных моделей. Система управления движениями состоит из следующих стадий: запись жестовой нотации; разбор этой записи; построение внутреннего представления движения в виде последовательности кадров. Каждый кадр содержит информацию о том, к какой кости (части тела) исполнителя он относится, как влияет на её положение и в какой момент времени от начала движения он актуален. На последнем этапе внутреннее представление движения преобразуется в движение исполнителя, в качестве которого могут выступать как виртуальные антропоморфные 3D-модели, так и материальные программно-аппаратные комплексы в виде антропоморфных роботов.

Для большего соответствия антропоморфному поведению помимо «идеальных образцов» движения человека были использованы модели жестового поведения людей, представленные в мультимодальном корпусе, специально создаваемом коллективом для этой цели. Материал представляет собой аудиовизуальные записи устных спонтанных текстов, в которых реципиенты описывают широкий спектр собственных эмоциональных состояний. Полученные в ходе экспериментального исследования данные подтвердили гибкость, повышенную контролируемость и модульность языковой нотации, а также возможность моделирования непрерывного пространства человеческой двигательной активности.



</p></abstract><trans-abstract xml:lang="en"><p>This paper is dedicated to the development of a concept and software solution for generating human movements based on a semantically oriented language notation created by the authors. The language notation is presented as a formula with a flexible structure of concepts and rules for their implementation, allowing movement parameters to be easily adjusted to match an ideal or real sample.

To model movements represented in the language notation, a cross-platform application was developed using Blender 4.2 for the visualization and generation of gesture animations for anthropomorphic models. The movement control system consists of the following stages: recording the movement in gesture notation; parsing this record; and constructing an internal representation of the movement as a sequence of frames. Frames contain information about which bone (body part) of the performer they refer to, how they affect its position, and at what time from the start of the movement each frame applies. In the final stage, the internal representation is transformed into the movement of a performer, which can be either a virtual anthropomorphic 3D model or a physical software-hardware system in the form of an anthropomorphic robot.
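
As a minimal illustration of these stages, the sketch below parses a toy notation of the form bone:angle@time into such a frame sequence; the notation string, bone names, and parser are illustrative assumptions only, not the actual Perm notation or the authors' implementation.

from dataclasses import dataclass

@dataclass
class Frame:
    bone: str         # which bone (body part) of the performer the frame targets
    angle_deg: float  # how the frame affects the bone's position (one rotation here)
    t: float          # time offset from the start of the movement, in seconds

def parse_notation(record: str) -> list[Frame]:
    """Parse a toy record like 'R_arm:45@0.0; R_arm:90@0.5' into frames."""
    frames = []
    for token in record.split(";"):
        token = token.strip()
        if not token:
            continue
        bone, rest = token.split(":")   # bone name before the colon
        angle, t = rest.split("@")      # rotation before '@', time offset after
        frames.append(Frame(bone=bone, angle_deg=float(angle), t=float(t)))
    return sorted(frames, key=lambda f: f.t)  # order frames along the timeline

print(parse_notation("R_arm:45@0.0; R_arm:90@0.5"))

In a Blender-based backend, each resulting frame could then be keyed onto the corresponding pose bone (for example, by inserting keyframes on a bone's rotation channels), while a robot driver could consume the same frame sequence unchanged.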

To better correspond to anthropomorphic behavior, in addition to “ideal samples” of human movement, models of human gesture behavior, presented in a multimodal corpus specially created for this purpose by the team, were used. The material consists of audiovisual recordings of spontaneous oral texts in which recipients describe a wide range of their own emotional states. The data obtained during the experimental research confirmed the flexibility, enhanced controllability, and modularity of the language notation, as well as the ability to model the continuous space of human motor activity.



</p></trans-abstract><kwd-group xml:lang="ru"><kwd>Техносемантика</kwd><kwd>Генерация движений</kwd><kwd>Языковая нотация жестов</kwd><kwd>Жест</kwd><kwd>Визуализация</kwd><kwd>Мультимодальный корпус</kwd><kwd>3D-графика</kwd><kwd>Интерпретируемость</kwd></kwd-group><kwd-group xml:lang="en"><kwd>Technosemantics</kwd><kwd>Movement generation</kwd><kwd>Language notation of gestures</kwd><kwd>Gesture</kwd><kwd>Visualization</kwd><kwd>Multimodal corpus</kwd><kwd>3D graphics</kwd><kwd>Interpretability</kwd></kwd-group></article-meta></front><back><ref-list><title>Список литературы</title><ref id="B1"><mixed-citation>Abbie M. Movement notation // The Australian Journal of Physiotherapy. 1974. Vol. 20 (2). Pp. 61–69. https://doi.org/10.1016/S0004-9514(14)61177-6</mixed-citation></ref><ref id="B2"><mixed-citation>Bashan M., Einbinder H., Harries J., Shosani M., Shoval D. Movement Notation: Eshkol and Abraham Wachmann. Köln: Verlag der Buchhandlung Walther König, 2024.</mixed-citation></ref><ref id="B3"><mixed-citation>Belousov K. I., Sazina D. A., Ryabinin K. V., Brokhin L. Yu. Sensory Technolinguistics: On Mechanisms of Transmitting Multimodal Messages in Perceptual-Cognitive Interfaces // Automatic Documentation and Mathematical Linguistics. 2024. Vol. 58. Issue 2. Pp. 108–116. https://doi.org/10.3103/s0005105524700079</mixed-citation></ref><ref id="B4"><mixed-citation>Benesh R., Benesh J. An Introduction to Benesh Dance Notation. London: A &amp; C Black, 1956.</mixed-citation></ref><ref id="B5"><mixed-citation>Bernardet U., Fdili Alaoui S., Studd K., Bradley K., Pasquier P., Schiphorst T. Assessing the reliability of the Laban Movement Analysis system // PLoS ONE. 2019. Vol. 14 (6): e0218179. https://doi.org/10.1371/journal.pone.0218179</mixed-citation></ref><ref id="B6"><mixed-citation>Birdwhistell R. L. Introduction to Kinesics: An Annotation System for Analysis of Body Motion and Gesture. Washington, DC: Foreign Service Institute, 1952.</mixed-citation></ref><ref id="B7"><mixed-citation>Bull P., Doody J. P. 8 Gesture and body movement. De Gruyter eBooks, 2013. Pp. 205–228. https://doi.org/10.1515/9783110238150.205</mixed-citation></ref><ref id="B8"><mixed-citation>Calvert T. Approaches to the Representation of Human Movement: Notation, Animation and Motion Capture // Dance Notations and Robot Motion, Springer Tracts in Advanced Robotics. 2015. Pp. 49–68. https://doi.org/10.1007/978-3-319-25739-6_3</mixed-citation></ref><ref id="B9"><mixed-citation>Dael N., Mortillaro M., Scherer K. R. The Body Action and Posture Coding System (BAP): Development and Reliability // Journal of Nonverbal Behavior. 2012. Vol. 36 (2). Pp. 97–121. https://doi.org/10.1007/s10919-012-0130-0</mixed-citation></ref><ref id="B10"><mixed-citation>Dell C. A Primer for Movement Description: Using Effort-shape and Supplementary Concepts. New York: Dance Notation Bureau Press, 1977.</mixed-citation></ref><ref id="B11"><mixed-citation>Duprey S., Naaim A., Moissenet F., Begon M., Chèze L.
Kinematic models of the upper limb joints for multibody kinematics optimisation: An overview // Journal of Biomechanics. 2017. Vol. 62. Pp. 87–94. https://doi.org/10.1016/j.jbiomech.2016.12.005</mixed-citation></ref><ref id="B12"><mixed-citation>Ekman P., Friesen W. V. Facial Action Coding System. Palo Alto, CA: Consulting Psychologists, 1978.</mixed-citation></ref><ref id="B13"><mixed-citation>El Raheb K., Ioannidis Y. From dance notation to conceptual models: a multilayer approach // Proceedings of the 2014 International Workshop on Movement and Computing, MOCO, ACM. New York, 2014. Pp. 25–30.</mixed-citation></ref><ref id="B14"><mixed-citation>El Raheb K., Buccoli M., Zanoni M., Katifori A., Kasomoulis A., Sarti A., Ioannidis Y. Towards a general framework for the annotation of dance motion sequences // Multimedia Tools and Applications. 2023. Vol. 82. Issue 3. Pp. 3363–3395. https://doi.org/10.1007/s11042-022-12602-y</mixed-citation></ref><ref id="B15"><mixed-citation>Eshkol N., Wachmann A. Movement Notation. London: Weidenfeld and Nicolson, 1958.</mixed-citation></ref><ref id="B16"><mixed-citation>Farnell B. M. Movement Notation Systems // The World’s Writing Systems / ed. Peter T. Daniels. Oxford University Press, 1996. Pp. 855–879.</mixed-citation></ref><ref id="B17"><mixed-citation>Frishberg N. Writing systems and problems for sign language notation // Journal for the Anthropological Study of Human Movement. 1983. Vol. 2 (4). Pp. 169–195.</mixed-citation></ref><ref id="B18"><mixed-citation>Frey S., Hirsbrunner H-P., Jorns U. Time-Series Notation: A Coding Principle for the Unified Assessment of Speech and Movement in Communication Research. Tübingen: Gunter Narr Verlag, 1982.</mixed-citation></ref><ref id="B19"><mixed-citation>Grushkin D. A. Writing Signed Languages: What For? What Form? // American Annals of the Deaf. 2017. Vol. 161 (5). Pp. 509–527. https://doi.org/10.1353/aad.2017.0001</mixed-citation></ref><ref id="B20"><mixed-citation>Guest A. H. Dance Notation: The Process of Recording Movement on Paper. New York: Dance Horizons, 1984.</mixed-citation></ref><ref id="B21"><mixed-citation>Guest A. H. Labanotation: The System of Analyzing and Recording Movement (4th ed.). New York: Routledge, 2005. https://doi.org/10.4324/9780203823866</mixed-citation></ref><ref id="B22"><mixed-citation>Harrigan J. A. Proxemics, Kinesics, and Gaze // The New Handbook of Methods in Nonverbal Behavior Research. 2008. Pp. 136–198. https://doi.org/10.1093/acprof:oso/9780198529620.003.0004</mixed-citation></ref><ref id="B23"><mixed-citation>Izquierdo C., Anguera M. T. Movement notation revisited: syntax of the common morphokinetic alphabet (CMA) system // Frontiers in Psychology. 2018. Vol. 9:1416. https://doi.org/10.3389/fpsyg.2018.01416</mixed-citation></ref><ref id="B24"><mixed-citation>Karg M., Samadani A.-A., Gorbet R., Kuhnlenz K., Hoey J., Kulic D.
Body Movements for Affective Expression: A Survey of Automatic Recognition and Generation // IEEE Transactions on Affective Computing. 2013. Vol. 4. Issue 4. Pp. 341–359. https://doi.org/10.1109/t-affc.2013.29</mixed-citation></ref><ref id="B25"><mixed-citation>Kendon A. Gesture // Annual Review of Anthropology. 1997. Vol. 26 (1). Pp. 109–128. https://doi.org/10.1146/annurev.anthro.26.1.109</mixed-citation></ref><ref id="B26"><mixed-citation>Key M. R. Nonverbal communication: a research guide and bibliography. Metuchen, N.J.: The Scarecrow Press, 1977.</mixed-citation></ref><ref id="B27"><mixed-citation>Kilpatrick C. E. Movement, Gesture, and Singing: A Review of Literature // Update: Applications of Research in Music Education. 2020. Vol. 38. Issue 3. Pp. 29–37. https://doi.org/10.1177/8755123320908612</mixed-citation></ref><ref id="B28"><mixed-citation>Laban R. von, Lawrence F. C. Effort: Economy of Human Movement / 2nd ed. London: Macdonald &amp; Evans, 1974.</mixed-citation></ref><ref id="B29"><mixed-citation>Laumond J., Abe N. Dance Notations and Robot Motion. Cham (ZG): Springer International Publishing AG, 2016. https://doi.org/10.1007/978-3-319-25739-6</mixed-citation></ref><ref id="B30"><mixed-citation>Liu H., Zhu Z., Iwamoto N., Peng Y., Li Zh., Zhou Y., Bozkurt E., Zheng B. BEAT: A Large-Scale Semantic and Emotional Multi-modal Dataset for Conversational Gestures Synthesis // Computer Vision – ECCV 2022. 2022. Pp. 612–630. https://doi.org/10.48550/arXiv.2203.05297</mixed-citation></ref><ref id="B31"><mixed-citation>Murillo E., Montero I., Casla M. On the multimodal path to language: The relationship between rhythmic movements and deictic gestures at the end of the first year // Frontiers in Psychology. 2021. Vol. 12. Pp. 1–8. https://doi.org/10.3389/fpsyg.2021.616812</mixed-citation></ref><ref id="B32"><mixed-citation>Novack A. M., Wakefield E. M., Goldin-Meadow S. What makes a movement a gesture? // Cognition. 2016. Vol. 146. Pp. 339–348. https://doi.org/10.1016/j.cognition.2015.10.014</mixed-citation></ref><ref id="B33"><mixed-citation>Qi X., Liu C., Li L., Hou J., Xin H., Yu X. Emotion Gesture: Audio-Driven Diverse Emotional Co-Speech 3D Gesture Generation // IEEE Transactions on Multimedia. 2024. Pp. 1–11. https://doi.org/10.1109/TMM.2024.3407692</mixed-citation></ref><ref id="B34"><mixed-citation>Streeck J. The Significance of Gesture: How it is Established // Papers in Pragmatics. 2010. Vol. 2. Issue 1-2. https://doi.org/10.1075/iprapip.2.1-2.03str</mixed-citation></ref><ref id="B35"><mixed-citation>Shafir T., Tsachor R., Welch K. B. Emotion Regulation through Movement: Unique Sets of Movement Characteristics are Associated with and Enhance Basic Emotions // Frontiers in Psychology. 2016. Vol. 6. https://doi.org/10.3389/fpsyg.2015.02030</mixed-citation></ref><ref id="B36"><mixed-citation>Stults-Kolehmainen M. A.
Humans have a basic physical and psychological need to move the body: Physical activity as a primary drive // Frontiers in Psychology. 2023. Vol. 14. https://doi.org/10.3389/fpsyg.2023.1134049</mixed-citation></ref><ref id="B37"><mixed-citation>Tonoli R. L., Costa P. D. P., Marques L. B. d. M. M., Ueda L. H. Gesture Area Coverage to Assess Gesture Expressiveness and Human-Likeness // International Conference on Multimodal Interaction (ICMI Companion ’24), 4–8 November 2024, San Jose, Costa Rica. ACM, New York, NY, USA. https://doi.org/10.1145/3686215.3688822</mixed-citation></ref><ref id="B38"><mixed-citation>Trujillo J. P., Vaitonyte J., Simanova I., Özyürek A. Toward the markerless and automatic analysis of kinematic features: A toolkit for gesture and movement research // Behavior Research Methods. 2018. Vol. 51 (2). Pp. 769–777. https://doi.org/10.3758/s13428-018-1086-8</mixed-citation></ref><ref id="B39"><mixed-citation>Van Elk M., van Schie H. T., Bekkering H. Short-term action intentions overrule long-term semantic knowledge // Cognition. 2009. Vol. 111. Issue 1. Pp. 72–83. https://doi.org/10.1016/j.cognition.2008.12.002</mixed-citation></ref><ref id="B40"><mixed-citation>Yang S., Wu Z., Li M., Zhang Z., Hao L., Bao W., Zhuang H. QPGesture: Quantization-Based and Phase-Guided Motion Matching for Natural Speech-Driven Gesture Generation // Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2023. Pp. 2321–2330. https://doi.org/10.48550/arXiv.2305.11094</mixed-citation></ref><ref id="B41"><mixed-citation>Yoon Y., Cha B., Lee J.-H., Jang M., Lee J., Kim J., Lee G. Speech Gesture Generation from the Trimodal Context of Text, Audio, and Speaker Identity // ACM Transactions on Graphics. 2020. Vol. 39. Issue 6. https://doi.org/10.1145/3414685.3417838</mixed-citation></ref><ref id="B42"><mixed-citation>Zhi Y., Cun X., Chen X., Shen X., Guo W., Huang S., Gao S. LivelySpeaker: Towards Semantic-Aware Co-Speech Gesture Generation // Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 2023. Pp. 20807–20817. https://doi.org/10.1109/ICCV51070.2023.01902</mixed-citation></ref></ref-list></back></article>