How can the quality of AI be ensured?
If the AI within a system performs only certain subtasks, if an AI-based system also contains conventional (i.e., non-AI) components, or if there are interfaces to conventional systems, then the conventional parts of these systems, and likewise the interfaces between AI and non-AI, can be tested using conventional methods. Established test methods and quality metrics therefore remain applicable here, and a mix of different methods and approaches will become established.
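As a minimal sketch of what such conventional testing of an AI/non-AI interface can look like, the following unit test checks that the glue code around a model enforces an agreed output contract. All names here (StubModel, score_transaction, the [0, 1] score range) are hypothetical placeholders, not parts of any specific product:

```python
# Conventional contract test for the interface between an AI component
# and non-AI code. Names and the output contract are invented examples.
import unittest


class StubModel:
    """Stand-in for the real AI model: returns a fixed prediction."""
    def predict(self, features):
        return 0.87  # a confidence score in [0, 1]


def score_transaction(model, features):
    """Conventional glue code: calls the model and enforces the contract."""
    score = model.predict(features)
    if not isinstance(score, float) or not 0.0 <= score <= 1.0:
        raise ValueError(f"model violated output contract: {score!r}")
    return "review" if score >= 0.8 else "approve"


class InterfaceContractTest(unittest.TestCase):
    def test_valid_score_is_mapped_to_decision(self):
        self.assertEqual(score_transaction(StubModel(), {}), "review")

    def test_out_of_range_score_is_rejected(self):
        class BrokenModel:
            def predict(self, features):
                return 1.7  # outside the agreed [0, 1] range
        with self.assertRaises(ValueError):
            score_transaction(BrokenModel(), {})


if __name__ == "__main__":
    unittest.main()
```

Because the test stubs out the model itself, it exercises only the conventional side of the interface and runs deterministically.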
For AI, too, the risks associated with its use must be analyzed as part of the testing process, and testing must be designed to avoid these risks or at least make them transparent. The testing effort should therefore be concentrated where the risks are greatest (risk-based testing); a small illustration follows below. These are largely the same risks as in any other system: What happens if the system fails, or does not recognize input information correctly? What happens if the system delivers wrong data, makes wrong decisions, or reacts incorrectly?
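To illustrate the idea behind risk-based testing, this sketch ranks test areas by the classic risk score of likelihood times impact; the areas and ratings are invented for demonstration:

```python
# Illustrative risk-based test prioritization: effort is focused where
# likelihood and impact of failure are greatest. Ratings are invented.
test_areas = [
    {"area": "input recognition", "likelihood": 4, "impact": 5},
    {"area": "decision logic",    "likelihood": 3, "impact": 5},
    {"area": "report formatting", "likelihood": 2, "impact": 1},
]

# Classic risk score: likelihood x impact, each rated on a 1-5 scale.
for t in test_areas:
    t["risk"] = t["likelihood"] * t["impact"]

# Test the highest-risk areas first (and most thoroughly).
for t in sorted(test_areas, key=lambda t: t["risk"], reverse=True):
    print(f'{t["area"]:20s} risk={t["risk"]}')
```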
On top of that, AI introduces additional, AI-specific risks. For example, an AI-based system may make incorrect decisions because its training data were incorrect or insufficiently representative. Here, too, the corresponding risks must be analyzed and mitigated with suitable quality assurance measures.
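One simple quality assurance measure against unrepresentative training data is to compare the class distribution of the training set with the distribution expected in production. The following sketch uses invented class names, counts, and tolerance; a real check would use project-specific data and appropriate statistics:

```python
# Minimal representativeness check on training data: compare the class
# distribution in the training set against the distribution expected in
# production. All numbers and class names are invented examples.
from collections import Counter

training_labels = ["ok"] * 9200 + ["defect"] * 800   # 8 % defects in training
expected_share = {"ok": 0.80, "defect": 0.20}        # estimated production mix
tolerance = 0.05                                     # allowed deviation

counts = Counter(training_labels)
total = sum(counts.values())

for label, expected in expected_share.items():
    actual = counts[label] / total
    if abs(actual - expected) > tolerance:
        print(f"WARNING: class '{label}' is {actual:.1%} of training data, "
              f"expected about {expected:.0%} -- data may not be representative")
```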
Furthermore, the nature of AI-based systems can give rise to risks of a quite different kind: a system may objectively fulfill all requirements and yet not be purchased or used, because some or all of its users do not trust decisions made by an artificial intelligence.
On the question "What risks can arise when using AI, and how can these be managed through an appropriate approach to quality assurance and testing?", imbus can advise you and support you in optimizing your development processes.
Your contact at imbus
Mr. Tilo Linz
mail: tilo.linz@imbus.de
phone: +49 9131 7518-210
fax: +49 9131 7518-50