How Machine Learning and AI Are Revolutionizing Testing
Artificial intelligence and machine learning can help with testing, testing, and testing again. Most companies today rely on complex IT architectures to manage a multitude of different tasks. The core is usually a cloud-based SAP S/4HANA system, supplemented by more specialized software solutions: depending on requirements, these can include manufacturing execution systems (MES), customer relationship management solutions (CRM), or product lifecycle management software (PLM).
This gives companies high-performance IT, but it also creates numerous interfaces and media discontinuities, and with them many potential sources of error, for example through unclean data transfer. The Walldorf-based IT group SAP is aware of these challenges and has therefore been offering the SAP Business Technology Platform (BTP) for some time: a toolbox designed, among other things, to significantly simplify the interplay of the various individual components. BTP provides a range of tools for this purpose, such as API management within the SAP Integration Suite.
The BTP tools are extremely helpful and often make the work of internal IT departments much easier. Nevertheless, an old principle still applies: companies that want to guarantee smooth, cross-system processes cannot do without regularly testing their system environments. Given increasingly frequent updates and upgrades, this is a laborious undertaking. Modern approaches that rely on artificial intelligence (AI) and machine learning can help here.
Recognizing error causes with ML
"For some time now, companies have been relying increasingly on test automation," reports Thomas Steirer, testing and AI specialist at digital engineering company Nagarro, who is currently involved in several research projects in this area. "But they often don't use the most modern methods, partly because they simply lack an overview of what is already technically possible, and partly because numerous conceptual questions remain open."
"It is essential for consultants to gain an overview of a company's current processes in the shortest possible time frame."
Thomas Steirer,
Testing and AI specialist,
Nagarro
These include, for example, the question of where modern testing structures are best applied. Thomas Steirer points out that automatic analysis of failed test cases in particular offers helpful insights: "Our practical experience shows that most failed system tests can be traced back to just a few causes. Once these are known, IT departments can optimize their testing infrastructure in a far more targeted way. The prerequisite is that they evaluate and classify the log files of the failed tests using machine learning models. Unfortunately, many companies still have catching up to do here."
One reason is that this procedure is not trivial. To enable a suitable classification, testing experts must first train the necessary ML model: they define the common error categories, such as database, network, or UI errors, and then teach the model the correct assignment using manually labeled training data. In this way, the algorithm gradually learns to recognize patterns and cluster errors on its own, and in most cases it then assigns failures correctly.
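A minimal sketch of what such a classifier could look like, assuming a simple bag-of-words model over log text; the category names and log snippets below are invented for illustration and do not come from Nagarro's actual tooling:

```python
# Hypothetical sketch: classifying failed-test log lines into error
# categories (database, network, UI) with a bag-of-words model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Manually labeled training examples; in practice these would be
# extracted from the log files of previously analyzed failed runs.
logs = [
    "SQLException: deadlock detected while updating table BSEG",
    "database connection pool exhausted, rollback performed",
    "connection refused: host sap-gateway.example.com unreachable",
    "read timed out after 30000 ms waiting for HTTP response",
    "ElementNotFoundError: button 'Save' not present in DOM",
    "screenshot diff exceeds threshold on login dialog",
]
labels = ["database", "database", "network", "network", "ui", "ui"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(logs, labels)

# Classify a new failure automatically instead of reading it by hand.
new_log = "SQLException: unique constraint violated on insert"
print(model.predict([new_log])[0])  # -> "database"
```

With only six training examples this is a toy; the point is that once enough labeled log files exist, each new failure can be routed to a known cause category automatically.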
The particular advantage of this approach is that, thanks to the predefined classification, the IT department no longer has to inspect and process the identified errors in isolation, but can instead eliminate their root causes directly. The procedure is also compatible with most automation and testing frameworks and tools, which keeps the investment costs for companies manageable. Thomas Steirer comments: "Essentially, companies use it for a kind of meta-testing that simplifies root cause analysis. The findings ultimately help them eliminate common sources of error as efficiently as possible."
AI-based clustering is one way of systematically optimizing existing test landscapes. But clever visualizations can also help, especially when companies want to evaluate large test portfolios. Thomas Steirer reports: "Computers are excellent at evaluating large amounts of data according to predefined criteria. However, current AI solutions are not (yet) capable of independently searching those mountains of data for inefficient structures, for example. Humans have a clear advantage here, as long as the data is prepared in a form they can easily grasp."
Thomas Steirer suggests a simple trick that Nagarro uses successfully with its customers time and again: instead of the IT department evaluating each test individually, the various scenarios can be prepared visually as graphs. These can easily be derived from the existing log files and visualize the logic "behind" the individual test cases, in particular the sequence of the steps carried out, as a user-friendly model.
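One way such a graph could be derived is sketched below, assuming the logs yield an ordered list of step names per test case; the log format, test IDs, and step names are invented for illustration. The sketch emits Graphviz DOT so the result can be rendered as a diagram:

```python
# Hypothetical sketch: deriving a step-sequence graph from test logs
# and emitting Graphviz DOT for visual inspection of the test logic.
from collections import defaultdict

# Assumed shape of the parsed logs: test id -> ordered step names.
test_runs = {
    "TC_001": ["login", "open_order", "edit_item", "save", "logout"],
    "TC_002": ["login", "open_order", "cancel", "logout"],
    "TC_003": ["login", "search", "open_order", "edit_item", "save", "logout"],
}

# Count how many tests share each step-to-step transition; heavily
# shared edges hint at overlap, orphan edges at structural oddities.
edges = defaultdict(int)
for steps in test_runs.values():
    for a, b in zip(steps, steps[1:]):
        edges[(a, b)] += 1

lines = ["digraph tests {"]
for (a, b), n in sorted(edges.items()):
    lines.append(f'  "{a}" -> "{b}" [label="{n}"];')
lines.append("}")
dot = "\n".join(lines)
print(dot)  # render with: dot -Tpng tests.dot -o tests.png
```

The edge labels make redundancy visible at a glance: a transition shared by many tests is a candidate for consolidation, which is exactly the kind of pattern the article says humans spot faster than machines.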
These models are ultimately a tool for humans, because humans, unlike today's AI, are extremely good at recognizing patterns, overlaps, and even structural planning errors, often at first glance. Many potential planning errors can therefore be identified and eliminated at an early stage. Thomas Steirer: "Of course, errors still occur, so the testing team must validate every adjustment to the logic in practice. Fundamentally, however, the procedure provides a simple means of gradually adapting the testing and automation tools in use to a company's own needs, especially as such visualizations are becoming ever easier to create thanks to the relevant AI tools."
Both during individual software tests and when testing entire system landscapes, communication difficulties repeatedly arise between users, IT specialists, and the software solutions in use.
Autonomous error correction and self-healing
This is one more reason why companies currently favor DevOps concepts: they are intended to simplify and shorten the communication channels between all parties involved, and thus contribute to easier error correction.
A simple analogy shows why this is necessary: if a user discovers inconsistent language usage on a website, such as a mixture of German and English, they must report it so that the IT department can then make corrections in the content management system (CMS) or directly in the HTML code. With more complex errors, this quickly becomes tedious and cumbersome.
It would therefore be better if authorized (and possibly specially trained) users could report such errors directly to the software and trigger corrections themselves, or at least submit suggestions to the development team. The prerequisites are intelligent, reliable, and virtually error-free semantic recognition, as well as very powerful algorithms that can map descriptions or paraphrases of an error in human language directly to probable causes. Such solutions are not yet market-ready. For now, AI applications are therefore limited to actively supporting programmers, for example with coding.
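To make the idea concrete: the very simplest form of such a mapping would score a user's free-text report against a catalog of known cause descriptions. The sketch below uses plain word overlap, far short of the robust semantic recognition the article calls for; the cause catalog and report text are invented for illustration:

```python
# Hypothetical sketch: ranking known error causes against a free-text
# user report by Jaccard word overlap. Real semantic matching would
# need much stronger language models than this.
def tokens(text: str) -> set[str]:
    return set(text.lower().split())

# Invented catalog mapping cause names to short descriptions.
known_causes = {
    "mixed-language UI texts": "page shows german and english labels mixed",
    "stale CMS cache": "old content still visible after publishing update",
    "broken interface mapping": "order data missing after transfer between systems",
}

def rank_causes(report: str) -> list[tuple[str, float]]:
    r = tokens(report)
    scored = []
    for cause, desc in known_causes.items():
        d = tokens(desc)
        overlap = len(r & d) / len(r | d)  # Jaccard similarity
        scored.append((cause, round(overlap, 2)))
    return sorted(scored, key=lambda x: x[1], reverse=True)

report = "the login page mixes english and german labels"
print(rank_causes(report)[0][0])  # -> "mixed-language UI texts"
```

Even this naive triage shows where the difficulty lies: paraphrases with no shared vocabulary score zero, which is exactly the gap that true semantic recognition would have to close before such self-reporting becomes reliable.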
However, the rapid progress of recent years is bringing even these advanced AI concepts within reach. Thomas Steirer is therefore confident: "AI and machine learning are already massively changing the way companies test their system environments and software solutions. And we are only at the beginning of this development. With further progress in AI research, it will probably become possible to automate operational testing in ways we can hardly imagine today. For conceptual and creative work, however, humans will remain indispensable."