Software System Testing Assisted by Large Language Models: An Exploratory Study

Jan 25, 2025
Cristian Augusto
Jesús Morán
Antonia Bertolino
Claudio de la Riva
Javier Tuya
Abstract
Large language models (LLMs) based on the transformer architecture have revolutionized natural language processing (NLP), demonstrating excellent capabilities in understanding and generating human-like text. In Software Engineering, LLMs have been applied to code generation, documentation, and report-writing tasks to support developers and reduce manual work. In Software Testing, one of the cornerstones of Software Engineering, LLMs have been explored for generating test code, test inputs, and test scenarios, and for automating the oracle process. However, their application to high-level testing stages such as system testing, in which deep knowledge of the business domain and the technology stack is needed, remains largely unexplored. This paper presents an exploratory study of how LLMs can support system test development. Given that LLM performance depends on the quality of the input data, the study focuses on how to query general-purpose LLMs to first obtain test scenarios and then derive test cases from them. The study evaluates two popular LLMs (GPT-4o and GPT-4o-mini), using a European project demonstrator as a benchmark. It compares two different prompt strategies and employs well-established prompt patterns, showing promising results as well as room for improvement in the application of LLMs to support system testing.
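As context for the approach the abstract describes, below is a minimal sketch of a two-step query chain against a general-purpose LLM: one prompt elicits system-level test scenarios from a system description, and a second derives concrete test cases from those scenarios. The system description, prompt wording, and helper function here are illustrative assumptions, not the prompts or prompt patterns evaluated in the paper.

```python
# Illustrative sketch only: the prompts, system description, and helper below
# are assumptions for demonstration, not the paper's actual prompt strategies.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical system under test; a real study would use domain documentation.
SYSTEM_DESCRIPTION = (
    "An online booking platform where users search for resources, "
    "reserve time slots, and receive confirmation emails."
)

def ask(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Send a single user prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: obtain high-level test scenarios from the system description.
scenarios = ask(
    "You are a software test engineer. Given this system description, "
    f"list the main system-level test scenarios:\n{SYSTEM_DESCRIPTION}"
)

# Step 2: derive concrete test cases from the scenarios produced in step 1.
test_cases = ask(
    "For each of the following test scenarios, derive concrete test cases "
    "with preconditions, steps, and expected results:\n" + scenarios
)

print(test_cases)
```

Chaining the two prompts, rather than asking for test cases in one shot, mirrors the staged scenario-then-case derivation the study investigates and makes the intermediate scenarios available for inspection.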
Type
Publication
In *36th International Conference on Testing Software and Systems, London, United Kingdom*