Should We Trust AI for Testing?

Artificial Intelligence (AI) gets a lot of bad press, and this has been even more true in recent times. Articles and webpages seem rife with examples of AI systems showcasing sexism, racial bias, and hilariously flawed decision making. To top it all off, the debate over whether AI will replace humans at their jobs remains a concern for many. All of this and more has pushed companies offering Security Testing Services to develop ethical AI algorithms.

At Kualitatem, we take this matter seriously. Our goal is to help you trust AI, especially when it comes to testing software on automated testing platforms. Read on to find out how you can keep control as AI looks to transform your testing processes.

Trust, Ethics and AI

Last year, the European Union published a stimulating report examining the requirements for ethical AI. The focus of the report was on systems that make decisions that affect our lives: things like AIs that decide whether you deserve a loan, or that set the limit on your credit card. Accordingly, the report's authors looked at what steps were needed to build an AI that is both ethical and trustworthy.

Despite that focus, much of what's in the report is very relevant to AI-powered testing. The authors divide the matter into three areas: lawfulness, ethics, and robustness. Clearly, for a system like Kualitatem, lawfulness isn't really relevant. However, ethics and robustness still are. Within ethics, there are several key requirements. These are:

Human agency and oversight. Put simply, this means there should be a human involved somewhere in the system. This might be direct involvement, where a person has to approve every action, or it might simply be a matter of oversight.

Transparency. By its very nature, an AI model can be a complete black box. This is one of the things that make it hard for people to trust AI. So, you should try to make the actions of the AI as transparent as possible.

Accountability. This aspect is often overlooked in AI-powered systems. At the end of the day, someone has to take responsibility for the actions of the system.

The other key aspect is robustness. AI systems quickly become indispensable, so it's vital that you ensure they're robust and reliable. Moreover, if they do fail, they should do so safely.

Trust Issues with AI

It’s not at all surprising that some people inherently find it difficult to trust AI. There have been some truly terrible stories over the years of AI getting it wrong. In 2015 came reports of Black people being tagged as “gorillas” by Google’s image recognition, because the system had primarily been trained on pictures of caucasian people. In 2018, reports emerged of a man who was wrongly dismissed by an AI system; he only found out because his building and system access was suddenly revoked. And in 2019, Apple hit the headlines for the wrong reasons after their automated system awarded David Heinemeier Hansson’s wife a credit limit 20x lower than his, despite the fact that they share a bank account and all of their wealth.

Added to that is the very natural and real fear that AI will end up taking all our jobs. Why trust AI if it’s about to make you unemployed? Research has suggested that certain occupations may vanish altogether. As an example, truck drivers face a risk of obsolescence of 80–100%; certainly, long-distance truck driving will likely be replaced by autonomous trucks within the next 20 years.

This is far from being a new phenomenon: we already saw this exact pattern in traditional industry and manufacturing. Indeed, it has been the story since the earliest days of the Industrial Revolution, when saboteurs smashed factory machinery. Automation reduces the need for human workers while (often) increasing the quality of the output and the efficiency of the process.

This is also a worry for engineers at security testing companies, where AI is already being leveraged to help with testing and development.

How Kualitatem Builds Trust in AI

Here at Kualitatem, we understand and relate to the trust issues people and businesses have with AI. We also understand that state-of-the-art testing systems for security testing services can put the jobs of testers at risk, but we recognise that human input remains essential.

While AI can deliver real gains in productivity and quality, there are certain human elements it may always struggle with. This is why we emphasise systems that work in conjunction with humans rather than without them.

Kualitatem is an independent software testing and information security company in New York.