<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20190208//EN" "http://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1.dtd">
<article article-type="research-article" dtd-version="1.2" xml:lang="ru" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><front><journal-meta><journal-id journal-id-type="issn">2313-8912</journal-id><journal-title-group><journal-title>Research Result. Theoretical and Applied Linguistics</journal-title></journal-title-group><issn pub-type="epub">2313-8912</issn></journal-meta><article-meta><article-id pub-id-type="doi">10.18413/2313-8912-2024-10-3-0-7</article-id><article-id pub-id-type="publisher-id">3546</article-id><article-categories><subj-group subj-group-type="heading"><subject>APPLIED LINGUISTICS</subject></subj-group></article-categories><title-group><article-title>A graph-based approach to closed-domain natural language generation</article-title><trans-title-group xml:lang="en"><trans-title>A graph-based approach to closed-domain natural language generation</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Firsanova</surname><given-names>Victoria I.</given-names></name><name xml:lang="en"><surname>Firsanova</surname><given-names>Victoria I.</given-names></name></name-alternatives><email>st085687@student.spbu.ru</email><xref ref-type="aff" rid="aff1" /></contrib></contrib-group><aff id="aff1"><institution>St. Petersburg State University, St. Petersburg, Russia</institution></aff><pub-date pub-type="epub"><year>2024</year></pub-date><volume>10</volume><issue>3</issue><fpage>0</fpage><lpage>0</lpage><self-uri content-type="pdf" xlink:href="/media/linguistics/2024/3/ВТиПЛ_2024_3_135-167.pdf" /><abstract xml:lang="ru"><p>Graph-based Natural Language Processing (NLP) methods have seen significant advancements in recent years with the development of Large Language Models (LLMs) and Retrieval Augmented Generation (RAG). 
LLMs are sophisticated models that perform numerous NLP tasks by interpreting users' natural language instructions, called prompts. However, their industrial use is questionable due to ethical concerns such as the generation of false information (hallucinations), high risks of data breaches, and plagiarism. The paper introduces a novel NLP architecture, Graph-Based Block-to-Block Generation (G3BG), which leverages state-of-the-art deep learning techniques, the power of attention mechanisms, distributional semantics, graph-based information retrieval, and decentralized networks. The model encodes user prompts to mitigate data breach risks, retrieves relevant information from a graph knowledge base, and forms a block for a conditional language model, using LLMs to perform a new, secure type of RAG. The model is oriented toward closed-domain, small-scale applications. It exhibits superior performance on low-resource NLP tasks, which makes it promising for industrial use. The research presents a novel graph-based dataset comprising private data features to encode and closed-domain textual information for retrieval; the dataset is used to train and evaluate the G3BG model. The model allows a 100x reduction in training dataset volume while achieving a perplexity of ~6.51 on the language generation task and an F1-score of ~90.3 on the information retrieval task, comparable to most state-of-the-art language models. The experimental results demonstrate the effectiveness of the proposed method and contribute to algorithmic approaches to LLM risk mitigation.</p></abstract><trans-abstract xml:lang="en"><p>Graph-based Natural Language Processing (NLP) methods have seen significant advancements in recent years with the development of Large Language Models (LLMs) and Retrieval Augmented Generation (RAG). LLMs are sophisticated models that perform numerous NLP tasks by interpreting users' natural language instructions, called prompts. 
However, their industrial use is questionable due to ethical concerns such as the generation of false information (hallucinations), high risks of data breaches, and plagiarism. The paper introduces a novel NLP architecture, Graph-Based Block-to-Block Generation (G3BG), which leverages state-of-the-art deep learning techniques, the power of attention mechanisms, distributional semantics, graph-based information retrieval, and decentralized networks. The model encodes user prompts to mitigate data breach risks, retrieves relevant information from a graph knowledge base, and forms a block for a conditional language model, using LLMs to perform a new, secure type of RAG. The model is oriented toward closed-domain, small-scale applications. It exhibits superior performance on low-resource NLP tasks, which makes it promising for industrial use. The research presents a novel graph-based dataset comprising private data features to encode and closed-domain textual information for retrieval; the dataset is used to train and evaluate the G3BG model. The model allows a 100x reduction in training dataset volume while achieving a perplexity of ~6.51 on the language generation task and an F1-score of ~90.3 on the information retrieval task, comparable to most state-of-the-art language models. 
The experimental results demonstrate the effectiveness of the proposed method and contribute to algorithmic approaches to LLM risk mitigation.</p></trans-abstract><kwd-group xml:lang="ru"><kwd>Language Generation</kwd><kwd>Language Understanding</kwd><kwd>Generative Artificial Intelligence</kwd><kwd>Large Language Models</kwd><kwd>Decentralized Networks</kwd><kwd>Data Encoding</kwd><kwd>Distributional Semantics</kwd><kwd>Closed-Domain Systems</kwd></kwd-group><kwd-group xml:lang="en"><kwd>Language Generation</kwd><kwd>Language Understanding</kwd><kwd>Generative Artificial Intelligence</kwd><kwd>Large Language Models</kwd><kwd>Decentralized Networks</kwd><kwd>Data Encoding</kwd><kwd>Distributional Semantics</kwd><kwd>Closed-Domain Systems</kwd></kwd-group></article-meta></front><back><ref-list><title>References</title><ref id="B1"><mixed-citation>Andriushchenko, M. and Flammarion, N. (2024). Does Refusal Training in LLMs Generalize to the Past Tense? arXiv preprint arXiv:2407.11969. DOI: 10.48550/arXiv.2407.11969</mixed-citation></ref><ref id="B2"><mixed-citation>Anthropic. (2024). Claude 3.5 Sonnet Model Card Addendum. [Online], available at: https://www-cdn.anthropic.com/fed9cc193a14b84131812372d8d5857f8f304c52/Model_Card_Claude_3_Addendum.pdf (Accessed 06 September 2024)</mixed-citation></ref><ref id="B3"><mixed-citation>Ayyamperumal, S. G. and Ge, L. (2024). Current state of LLM Risks and AI Guardrails, arXiv preprint arXiv:2406.12934. DOI: 10.48550/arXiv.2406.12934</mixed-citation></ref><ref id="B4"><mixed-citation>Choi, E., Jo, Y., Jang, J. and Seo, M. (2022). Prompt injection: Parameterization of fixed inputs, arXiv preprint arXiv:2206.11349. DOI: 10.48550/arXiv.2206.11349</mixed-citation></ref><ref id="B5"><mixed-citation>Christiano, P. F., Leike, J., Brown, T., Martic, M., Legg, S. and Amodei, D. (2017). 
Deep reinforcement learning from human preferences, Advances in neural information processing systems, 30, 1–9. DOI: 10.5555/3294996.3295184</mixed-citation></ref><ref id="B6"><mixed-citation>Dettmers, T., Pagnoni, A., Holtzman, A. and Zettlemoyer, L. (2024). QLoRA: Efficient finetuning of quantized LLMs, Advances in Neural Information Processing Systems, 36, 1–28. DOI: 10.48550/arXiv.2305.14314</mixed-citation></ref><ref id="B7"><mixed-citation>Devlin, J., Chang, M. W., Lee, K. and Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding, arXiv preprint arXiv:1810.04805. DOI: 10.48550/arXiv.1810.04805</mixed-citation></ref><ref id="B8"><mixed-citation>Dong, Y., Mu, R., Jin, G., Qi, Y., Hu, J., Zhao, X., Meng, J., Ruan, W. and Huang, X. (2024). Building Guardrails for Large Language Models, arXiv preprint arXiv:2402.01822. DOI: 10.48550/arXiv.2402.01822</mixed-citation></ref><ref id="B9"><mixed-citation>Firsanova, V. (2023). Towards building a mobile app for people on the spectrum, Companion Proceedings of the ACM Web Conference 2023, 555–559. DOI: 10.1145/3543873.3587533</mixed-citation></ref><ref id="B10"><mixed-citation>Firsanova, V. (2021). The advantages of human evaluation of sociomedical question answering systems, International Journal of Open Information Technologies, 12, 53–59. DOI: 10.25559/INJOIT.2307-8162.09.202112.53-59</mixed-citation></ref><ref id="B11"><mixed-citation>Gage, P. (1994). A new algorithm for data compression, The C Users Journal, 12 (2), 23–38.</mixed-citation></ref><ref id="B12"><mixed-citation>Gao, J., Galley, M. and Li, L. (2018). 
Neural approaches to conversational AI, The 41st international ACM SIGIR conference on research &amp; development in information retrieval, 1371–1374. DOI: 10.1145/3209978.3210183</mixed-citation></ref><ref id="B13"><mixed-citation>Goodfellow, I., Bengio, Y. and Courville, A. (2016). Deep learning, MIT Press.</mixed-citation></ref><ref id="B14"><mixed-citation>Google Cloud. (2024). Cloud Computing Services. [Online], available at: https://cloud.google.com/ (Accessed 06 September 2024)</mixed-citation></ref><ref id="B15"><mixed-citation>Guu, K., Lee, K., Tung, Z., Pasupat, P. and Chang, M. (2020). Retrieval augmented language model pre-training, International conference on machine learning, 3929–3938.</mixed-citation></ref><ref id="B16"><mixed-citation>Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika, M., Song, D. and Steinhardt, J. (2020). Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300. DOI: 10.48550/arXiv.2009.03300</mixed-citation></ref><ref id="B17"><mixed-citation>Hewitt, J. and Manning, C. D. (2019). A structural probe for finding syntax in word representations, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 4129–4138. DOI: 10.18653/v1/N19-1419</mixed-citation></ref><ref id="B18"><mixed-citation>Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L. and Chen, W. (2021). LoRA: Low-rank adaptation of large language models, arXiv preprint arXiv:2106.09685. 
DOI: 10.48550/arXiv.2106.09685</mixed-citation></ref><ref id="B19"><mixed-citation>Jacob, B., Kligys, S., Chen, B., Zhu, M., Tang, M., Howard, A., Adam, H. and Kalenichenko, D. (2018). Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference, arXiv preprint arXiv:1712.05877. DOI: 10.48550/arXiv.1712.05877</mixed-citation></ref><ref id="B20"><mixed-citation>Jelinek, F., Mercer, R. L., Bahl, L. R. and Baker, J. K. (1977). Perplexity – a measure of the difficulty of speech recognition tasks, The Journal of the Acoustical Society of America, 62 (S1), S63–S63.</mixed-citation></ref><ref id="B21"><mixed-citation>Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., Ishii, E., Bang, Y., Chen, D., Dai, W., Chan, H. S., Madotto, A. and Fung, P. (2023). Survey of hallucination in natural language generation, ACM Computing Surveys, 55 (12), 1–38.</mixed-citation></ref><ref id="B22"><mixed-citation>Jiang, A. Q., Sablayrolles, A., Mensch, A., Bamford, C., Chaplot, D. S., Casas, D. D. L., Bressand, F., Lengyel, G., Lample, G., Saulnier, L. and Lavaud, L. R. (2023). Mistral 7B, arXiv preprint arXiv:2310.06825. DOI: 10.48550/arXiv.2310.06825</mixed-citation></ref><ref id="B23"><mixed-citation>Jurafsky, D. and Martin, J. H. (2023). Speech and language processing: An introduction to natural language processing, computational linguistics, and speech recognition, Stanford University, University of Colorado at Boulder.</mixed-citation></ref><ref id="B24"><mixed-citation>LM Studio. (2024). 
LM Studio Documentation. [Online], available at: https://lmstudio.ai/docs/welcome (Accessed 06 September 2024).</mixed-citation></ref><ref id="B25"><mixed-citation>Luo, H., Luo, J. and Vasilakos, A. V. (2023). BC4LLM: Trusted artificial intelligence when blockchain meets large language models, arXiv preprint arXiv:2310.06278. DOI: 10.48550/arXiv.2310.06278</mixed-citation></ref><ref id="B26"><mixed-citation>McCarthy, J. (1987). Generality in artificial intelligence, Communications of the ACM, 30 (12), 1030–1035.</mixed-citation></ref><ref id="B27"><mixed-citation>Meister, C. and Cotterell, R. (2021). Language Model Evaluation Beyond Perplexity, Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 5328–5339.</mixed-citation></ref><ref id="B28"><mixed-citation>Mikolov, T., Chen, K., Corrado, G. and Dean, J. (2013). Efficient estimation of word representations in vector space, arXiv preprint arXiv:1301.3781. DOI: 10.48550/arXiv.1301.3781</mixed-citation></ref><ref id="B29"><mixed-citation>Mistral. (2024). Mistral Large 2. [Online], available at: https://mistral.ai/news/mistral-large-2407/ (Accessed 06 September 2024)</mixed-citation></ref><ref id="B30"><mixed-citation>Morris, J. and Hirst, G. (1991). Lexical cohesion computed by thesaural relations as an indicator of the structure of text, Computational Linguistics, 17 (1), 21–48.</mixed-citation></ref><ref id="B31"><mixed-citation>OpenAI. (2024). GPT-4o mini: advancing cost-efficient intelligence. [Online], available at: https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/ (Accessed 06 September 2024)</mixed-citation></ref><ref id="B32"><mixed-citation>OpenAI API. (2024). OpenAI API. 
[Online], available at: https://openai.com/index/openai-api (Accessed 06 September 2024)</mixed-citation></ref><ref id="B33"><mixed-citation>Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A. and Schulman, J. (2022). Training language models to follow instructions with human feedback, Advances in neural information processing systems, 6 (35), 27730–27744. DOI: 10.48550/arXiv.2203.02155</mixed-citation></ref><ref id="B34"><mixed-citation>Polyzotis, N. and Zaharia, M. (2021). What can data-centric AI learn from data and ML engineering? arXiv preprint arXiv:2112.06439. DOI: 10.48550/arXiv.2112.06439</mixed-citation></ref><ref id="B35"><mixed-citation>Priest, G. (2000). Logic: A Very Short Introduction, Oxford University Press, Oxford, UK.</mixed-citation></ref><ref id="B36"><mixed-citation>Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W. and Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer, Journal of machine learning research, 21 (140), 1–67.</mixed-citation></ref><ref id="B37"><mixed-citation>Rajpurkar, P., Zhang, J., Lopyrev, K. and Liang, P. (2016). SQuAD: 100,000+ questions for machine comprehension of text, arXiv preprint arXiv:1606.05250. DOI: 10.48550/arXiv.1606.05250</mixed-citation></ref><ref id="B38"><mixed-citation>Rajpurkar, P., Jia, R. and Liang, P. (2018). Know what you don't know: Unanswerable questions for SQuAD, arXiv preprint arXiv:1806.03822. DOI: 10.48550/arXiv.1806.03822</mixed-citation></ref><ref id="B39"><mixed-citation>Ruder, S. (2019). 
Neural transfer learning for natural language processing, NUI Galway.</mixed-citation></ref><ref id="B40"><mixed-citation>Schmidhuber, J. (1987). Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta-... hook, Technische Universität München.</mixed-citation></ref><ref id="B41"><mixed-citation>Talmor, A., Herzig, J., Lourie, N. and Berant, J. (2018). CommonsenseQA: A question answering challenge targeting commonsense knowledge, arXiv preprint arXiv:1811.00937. DOI: 10.48550/arXiv.1811.00937</mixed-citation></ref><ref id="B42"><mixed-citation>Thakur, N., Reimers, N., Rücklé, A., Srivastava, A. and Gurevych, I. (2021). BEIR: A heterogeneous benchmark for zero-shot evaluation of information retrieval models. arXiv preprint arXiv:2104.08663. DOI: 10.48550/arXiv.2104.08663</mixed-citation></ref><ref id="B43"><mixed-citation>Van Rijsbergen, C. J. (1979). Information Retrieval, London: Butterworths.</mixed-citation></ref><ref id="B44"><mixed-citation>Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł. and Polosukhin, I. (2017). Attention is all you need, Advances in neural information processing systems, 30, 261–272. DOI: 10.48550/arXiv.1706.03762</mixed-citation></ref><ref id="B45"><mixed-citation>Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M. and Davison, J. (2019). HuggingFace's Transformers: State-of-the-art natural language processing, arXiv preprint arXiv:1910.03771. 
DOI: 10.48550/arXiv.1910.03771</mixed-citation></ref><ref id="B46"><mixed-citation>Zhang, P., Xiao, S., Liu, Z., Dou, Z. and Nie, J. Y. (2023). Retrieve anything to augment large language models, arXiv preprint arXiv:2310.07554. DOI: 10.48550/arXiv.2310.07554</mixed-citation></ref><ref id="B47"><mixed-citation>Zhong, W., Cui, R., Guo, Y., Liang, Y., Lu, S., Wang, Y., Saied, A., Chen, W. and Duan, N. (2023). AGIEval: A human-centric benchmark for evaluating foundation models, arXiv preprint arXiv:2304.06364. DOI: 10.48550/arXiv.2304.06364</mixed-citation></ref></ref-list></back></article>