International Scientific Journal «ВЕСТНИК НАУКИ» (Vestnik Nauki) No. 1 (70), Vol. 4. JANUARY 2024. UDC 004
Shauchuk V.I.
Software quality assurance engineer, international software company «Yucca Digital» (Vitebsk, Belarus)
ADAPTIVE RANDOM TESTING FOR WEB APPLICATIONS
Abstract: the purpose of this article is to review adaptive random testing, an improved version of random testing that adapts the test case generation process based on feedback from previous test cases. Adaptive random testing aims to increase the fault detection capability of random testing by generating test cases that are more evenly distributed and more diverse in the test space. It can use different strategies or algorithms to adapt the test case generation process, such as exclusion, partitioning, distance, or density.
Keywords: testing, software, adaptive random testing, web application, test case, strategies, algorithms.
While random testing generates test cases randomly without any specific criteria, adaptive random testing adapts the test case generation process based on feedback from previous tests. This improved technique aims to increase fault detection by generating more diverse, evenly distributed test cases across the test space. The adaptive random testing methodology follows six key steps: modeling the application, representing test cases, generating test cases, executing tests, evaluating results, and adapting the process based on evaluations. This adaptive approach makes testing more efficient and effective compared to basic random testing [1].
Web Application Modeling.
The first step of the adaptive random testing methodology is to model the web application under test by capturing its structure, behavior, and feedback.
The web application structure is the static aspect of the web application: it defines the components, technologies, and platforms involved, such as HTML, CSS, JavaScript, PHP, SQL, Apache, or Linux. The structure can be represented as a set of web pages, as a graph of web pages and web links, or as a tree of web pages and web elements.
The web application behavior is the dynamic aspect of the web application: it defines the functionalities, services, and interactions of the application, such as clicking, scrolling, or searching. The behavior can be represented as a set of web events, as a graph of web events and web transitions, or as a finite state machine of web states and web actions.
The web application feedback is the output or response of the web application: it reflects the outcome or effect of the application's processing, such as text, images, or sound. The feedback can be represented as a set of web outputs, as a matrix of web inputs and web outputs, or as a function from web inputs to web outputs [2].
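The three models above can be sketched in code. The sketch below is a minimal illustration, assuming hypothetical page names, events, and transitions; a real model would be extracted from the application under test:

```python
# Structure: a set of web pages and a graph of web links (names are illustrative).
pages = {"home", "search", "results"}
links = {("home", "search"), ("search", "results")}

# Behavior: a finite state machine of web states and web actions.
transitions = {
    ("home", "click_search"): "search",
    ("search", "submit_query"): "results",
}

def step(state, action):
    """Return the next web state for a given action, or None if undefined."""
    return transitions.get((state, action))
```

Feedback would then be recorded as the observable output of each executed action, e.g. a mapping from (state, action) pairs to responses.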
Test Case Representation.
The second step is to represent the test cases for the web application by defining their format, content, and scope.
The test case format is the way of expressing and storing the test cases, such as text, XML, JSON, or binary. The format affects the readability, usability, and portability of the test cases, and it can be chosen according to the preferences, requirements, or constraints of the testers, the web application, or the testing tools and frameworks.
The test case content is the information or data that the test cases contain, such as web inputs, web outputs, or web expectations. The content affects the completeness, consistency, and accuracy of the test cases, and it can be chosen according to the objectives, criteria, or metrics of the testing, such as fault detection capability, mutation score, or the test oracle [1].
The test case scope is the range or extent of the test cases, such as web pages, web events, or web outputs. The scope affects the coverage, diversity, and relevance of the test cases, and it can be chosen according to the characteristics, features, or functions of the web application, such as complexity, diversity, or dynamism.
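A test case combining these three concerns might look as follows; the field names and the choice of JSON as the format are illustrative, not part of the methodology itself:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class WebTestCase:
    page: str                                     # scope: which web page is targeted
    inputs: dict                                  # content: web inputs (e.g. form fields)
    expected: dict = field(default_factory=dict)  # content: web expectations

tc = WebTestCase(page="search",
                 inputs={"q": "adaptive random testing"},
                 expected={"status": 200})

serialized = json.dumps(asdict(tc))  # format: JSON, for portability across tools
```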
Test Case Generation.
The third step of the adaptive random testing methodology is to generate the test cases for the web application. It can be described as follows:
Initialize an empty test suite, and a set of candidate test cases that contains all the test cases in the test space.
Repeat until the test suite satisfies the test adequacy criterion, or the set of candidate test cases is empty, or some other termination condition is met:
Select a test case from the set of candidate test cases randomly, according to some probability distribution or selection criterion.
Execute the test case on the web application, and observe the web output or feedback.
Evaluate the test case based on the web output or feedback, and the test adequacy criterion, and the objective function.
If the test case is valid and useful, add it to the test suite, and remove it from the set of candidate test cases.
Adapt the test case generation process based on the web output or feedback, the test adequacy criterion, and the objective function, by using some adaptation strategy or algorithm [3].
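The loop above can be sketched as follows. All of the function parameters (execute, evaluate, adapt, adequacy_met) are placeholders for application-specific logic, and the iteration cap is an extra termination condition:

```python
import random

def adaptive_random_generation(test_space, execute, evaluate, adapt,
                               adequacy_met, max_iterations=100):
    """Sketch of the adaptive random test generation loop (names are illustrative)."""
    suite = []                      # initialize an empty test suite
    candidates = list(test_space)   # set of candidate test cases
    while candidates and not adequacy_met(suite) and max_iterations > 0:
        max_iterations -= 1
        case = random.choice(candidates)   # select per some probability distribution
        feedback = execute(case)           # run against the web application
        if evaluate(case, feedback):       # valid and useful?
            suite.append(case)
            candidates.remove(case)
        # adjust the generation process based on feedback
        candidates = adapt(suite, candidates, feedback)
    return suite
```

Here adapt returns a (possibly modified) candidate set, which is where strategies such as exclusion or partitioning would plug in.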
Test Case Execution.
The fourth step is to execute the test cases, by using various tools and techniques. The test case execution mode is the way of executing the test cases, which can be:
Sequential - test cases are executed one by one, in a fixed or predetermined order. Sequential execution can ensure the consistency and accuracy of the test run, but it may be slow and inefficient, as it does not utilize parallel or distributed computing resources.
Parallel - test cases are executed simultaneously or concurrently, in a random or independent order. Parallel execution can improve the speed and efficiency of the test run, but it may introduce inconsistency or inaccuracy, as it can cause interference or contention among the test cases or test agents.
Distributed - test cases are executed on multiple test agents in a coordinated or cooperative way. Distributed execution can improve the scalability and reliability of the test run, but it may introduce complexity or overhead, as it requires communication and synchronization among the test agents.
Adaptive - test cases are executed in a dynamic or interactive way, based on feedback from previous or current test cases. Adaptive execution can improve the effectiveness and intelligence of the test run, but it may introduce uncertainty or variability, as it can change the execution order, mode, or method at run time [3].
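The contrast between sequential and parallel modes can be illustrated with Python's standard thread pool; execute_case is a stand-in for a real request to the web application under test:

```python
from concurrent.futures import ThreadPoolExecutor

def execute_case(case):
    """Stand-in for sending one test case to the web application."""
    return {"case": case, "status": "pass" if case % 2 == 0 else "fail"}

cases = [0, 1, 2, 3]

# Sequential mode: one at a time, in a fixed order.
sequential_results = [execute_case(c) for c in cases]

# Parallel mode: cases run concurrently on worker threads; completion order
# may vary, but Executor.map returns results in submission order.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel_results = list(pool.map(execute_case, cases))
```

With independent, side-effect-free cases the two modes agree; interference between cases (shared server state, session data) is what makes parallel results potentially inconsistent.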
Test Case Evaluation.
The fifth step is to evaluate the test cases based on the web output or feedback, the test adequacy criterion, and the objective function. Evaluation produces different types of results or outcomes, such as pass, fail, error, or inconclusive, and different types of values or scores, such as fault detection capability or mutation score, typically determined against a test oracle.
The test case evaluation affects the validity, usefulness, and quality of the test cases, and it is used to decide whether to add or remove test cases to or from the test suite, and how to adapt the test case generation process [4].
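A minimal evaluation function classifying outcomes into the four categories above might look like this; it is a simplified oracle, and the dictionary-based feedback format is an assumption:

```python
def evaluate(expected, feedback):
    """Classify one test outcome as pass, fail, error, or inconclusive."""
    if feedback is None:
        return "inconclusive"   # no observable web output was produced
    if "error" in feedback:
        return "error"          # the application itself raised an error
    return "pass" if feedback == expected else "fail"
```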
Test Case Adaptation.
The sixth and final step is to adapt the test case generation process based on the web output or feedback, the test adequacy criterion, and the objective function, by using some adaptation strategy or algorithm. Adaptation modifies the probability distribution or selection criterion of the generation process, the set of candidate test cases, or other parameters of the generation process, based on the test outcome or feedback and on the same adequacy criterion and objective function.
Some of the common and representative adaptation strategies or algorithms are:
Exclusion - excludes or eliminates some test cases from the set of candidate test cases, based on some exclusion criterion, such as similarity, redundancy, or irrelevance. It reduces the size and increases the diversity of the test suite, by avoiding or removing duplicate or unnecessary test cases.
Partitioning - divides or splits the test space into smaller or finer subspaces or partitions, based on some partitioning criterion, such as distance, density, or entropy. It increases the coverage and diversity of the test suite, by exploring or sampling different or diverse regions or areas of the test space.
Distance - measures or calculates the distance or dissimilarity between test cases, based on some distance metric, such as Euclidean, Hamming, or Jaccard. It increases the diversity and fault detection capability of the test suite, by selecting or generating test cases that are far or different from the previous or existing test cases.
Density - measures or calculates the density or concentration of test cases, based on some density metric, such as frequency, probability, or weight. It increases the relevance and fault detection capability of the test suite, by selecting or generating test cases that are in or near the high-density or high-probability regions or areas of the test space [4].
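As one concrete example, the distance strategy can be sketched as a fixed-size-candidate-set style selection: among a random sample of candidates, pick the one whose minimum distance to all previously executed test cases is largest. The numeric one-dimensional inputs and the absolute-difference (Euclidean) distance below are illustrative assumptions:

```python
import random

def min_distance(candidate, suite, dist):
    """Distance from a candidate to its nearest previously executed test case."""
    return min(dist(candidate, t) for t in suite)

def select_next(suite, sample_candidates, dist):
    """Distance strategy: choose the sampled candidate farthest (by minimum
    distance) from every test case already in the suite."""
    if not suite:
        return random.choice(sample_candidates)
    return max(sample_candidates, key=lambda c: min_distance(c, suite, dist))

dist = lambda a, b: abs(a - b)   # 1-D Euclidean distance (illustrative)
```

For example, with an executed suite of {10, 90} and sampled candidates {12, 50, 88}, the strategy picks 50, since 12 and 88 lie close to already-covered points.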
REFERENCES:
1. Chen, T. Y., Kuo, F. C., & Merkel, R. G. (2005). Adaptive random testing. In Advances in Computer Science-ASIAN 2004. Higher-Level Decision Making (pp. 320-329). Springer, Berlin, Heidelberg;
2. Chen, T. Y., Kuo, F. C., & Zhang, Z. Q. (2010). Adaptive random testing: The ART of test case diversity. Journal of Systems and Software, 83(1), 60-66;
3. Chen, Z., Chen, J., & Chen, T. Y. (2019). A survey on adaptive random testing. IEEE Transactions on Software Engineering, 47(10), 2052-2083;
4. Zhang, Z. Q., Chen, T. Y., & Kuo, F. C. (2007). Adaptive random testing for web applications. In Proceedings of the 2007 Australian Software Engineering Conference (pp. 66-75). IEEE.