For more than 50 years, researchers have studied adaptive systems that continuously analyze their own performance and modify their control procedures to improve it. When the plants (or processes) to be regulated are linear and time-invariant, the theory of such systems is now well understood. When the system's uncertainty is small, a variety of approaches exist for achieving a satisfactory and robust response. Full robustness is impossible to achieve, but coming reasonably close to it is feasible. Throughout our research, we observed that test oracle generation for adaptive systems has received little exploration. We took this opportunity to investigate which approach might best address the difficulty of testing adaptive systems. This thesis explores and implements an automated way of generating test oracles for adaptive systems that detect the inputs causing errors in the system. The test oracle generation is based on so-called derived oracles, in which the oracles are automatically derived from executions of the system. To generate values from the System Under Test, we collaborate with another thesis, Automated Random Testing for Adaptive Systems based on R+ Testing Strategy, which extends the R strategy, an earlier random testing approach by Chen et al. [2010]. After the test data are prepared, the test oracles are generated from them; in our experiments, this model showed great potential, achieving high efficiency and correctness. The thesis presents the whole process, from the extraction of system invariants and the generation of test cases to the oracle generation and validation steps.
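Since the abstract describes the derived-oracle idea only at a high level, the following is a minimal sketch of one plausible reading, not the thesis's actual implementation: invariants are mined from recorded executions of the System Under Test and then used as an oracle that flags inputs whose outputs violate them. The invariant form (simple value ranges) and all names (`RangeInvariant`, `derive_oracle`, `check`, the trace variables) are hypothetical.

```python
# Hypothetical sketch of a derived test oracle: range invariants are mined
# from recorded executions of the System Under Test and later used to flag
# outputs that fall outside the observed behavior. Names and invariant
# forms are illustrative assumptions, not the thesis's implementation.

from dataclasses import dataclass


@dataclass
class RangeInvariant:
    """Observed min/max of one output variable across passing executions."""
    low: float
    high: float

    def holds(self, value: float) -> bool:
        return self.low <= value <= self.high


def derive_oracle(traces: list[dict[str, float]]) -> dict[str, RangeInvariant]:
    """Derive one range invariant per output variable from execution traces."""
    oracle: dict[str, RangeInvariant] = {}
    for trace in traces:
        for name, value in trace.items():
            inv = oracle.get(name)
            if inv is None:
                oracle[name] = RangeInvariant(value, value)
            else:
                inv.low = min(inv.low, value)
                inv.high = max(inv.high, value)
    return oracle


def check(oracle: dict[str, RangeInvariant],
          output: dict[str, float]) -> list[str]:
    """Return the names of output variables that violate the derived oracle."""
    return [name for name, value in output.items()
            if name in oracle and not oracle[name].holds(value)]


# Mine invariants from (assumed) passing runs, then test a new output.
passing_runs = [{"overshoot": 0.02, "settling_time": 1.8},
                {"overshoot": 0.05, "settling_time": 2.1}]
oracle = derive_oracle(passing_runs)
print(check(oracle, {"overshoot": 0.40, "settling_time": 1.9}))  # ['overshoot']
```

In this sketch, any output outside the ranges observed in passing runs is reported, mirroring the abstract's idea that oracles derived from system execution can classify new inputs as error-inducing.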
Subject: Computer Science and Software Engineering