
TechWatch Live Q&A - Advai

Q&A: David Sully, Co-Founder & CEO, Advai

Q: What is it that Advai does?

A: We’re a UK-based start-up focused on adversarial AI, AI robustness, security and fairness. We do stress-testing and identify holes in AI systems.

Q: What is different about your technology compared to others out there in the market?

A: Typically, we get badged with things like explainability and transparency, which in our view have a couple of limitations, in that they’re all dependent on the data a company has collected. Essentially, what we do is ‘chaos-test’ AI systems, so we’re not dependent on hypothesising a problem and then running it through the system to see whether the outcome is good or bad. Instead, our system proactively identifies where the weaknesses in the AI are. It tells the data scientist exactly which classes have an issue, so these can be addressed directly rather than leaving unknown unknowns.
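
To make the idea of per-class stress-testing concrete, here is a minimal sketch in Python (PyTorch). It assumes a trained classifier and a data loader, and uses a simple FGSM-style perturbation to show which classes degrade most under attack; this is an illustrative example, not Advai's actual method.

```python
# Hypothetical per-class adversarial stress-test (illustrative only, not Advai's product).
import torch
import torch.nn as nn

def fgsm(model, x, y, eps):
    """Generate FGSM adversarial examples for inputs x with labels y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def per_class_robustness(model, loader, num_classes, eps=0.03):
    """Return clean vs. adversarial accuracy per class, to flag weak classes."""
    clean_correct = torch.zeros(num_classes)
    adv_correct = torch.zeros(num_classes)
    totals = torch.zeros(num_classes)
    model.eval()
    for x, y in loader:
        x_adv = fgsm(model, x, y, eps)
        with torch.no_grad():
            clean_pred = model(x).argmax(dim=1)
            adv_pred = model(x_adv).argmax(dim=1)
        for c in range(num_classes):
            mask = y == c
            totals[c] += mask.sum()
            clean_correct[c] += (clean_pred[mask] == c).sum()
            adv_correct[c] += (adv_pred[mask] == c).sum()
    # Classes with the largest gap between the two accuracies are the weak spots.
    return clean_correct / totals.clamp(min=1), adv_correct / totals.clamp(min=1)
```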

Q: What sort of organisation is having trouble with adversarial AI?

A: Adversarial AI is a technology that’s been heavily researched, and the attacks are starting to come through right now. For example, we’ve had direct interactions with companies in facial recognition and algorithmic trading who are experiencing attacks. Where AI is being implemented in truly business-critical situations, using genuine AI systems, organisations are starting to experience these attacks now. However, adversarial AI is only a small part of the wider problem of AI robustness. When you deploy an AI system and suddenly see a dip in performance, that’s because it’s taking in inputs and data it has never seen before, and our approach is a very good way to start addressing that problem.
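
As a rough illustration of the "dip in performance on unseen inputs" problem, a deployed model's prediction-confidence distribution can be monitored for drift against a reference set. The sketch below is a generic, hypothetical check (not Advai's product), using a two-sample KS test from SciPy.

```python
# Hypothetical post-deployment drift check (illustrative only).
import numpy as np
from scipy.stats import ks_2samp

def confidence_drift(reference_conf, live_conf, alpha=0.01):
    """Flag drift when live prediction confidences diverge from the reference set."""
    stat, p_value = ks_2samp(reference_conf, live_conf)
    return {"ks_statistic": stat, "p_value": p_value, "drift": p_value < alpha}

# Example with synthetic confidences: validation-time vs. a less confident live window.
rng = np.random.default_rng(0)
reference = rng.beta(8, 2, size=5000)   # mostly confident predictions
live = rng.beta(5, 3, size=1000)        # noticeably less confident in production
print(confidence_drift(reference, live))
```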

Q: Where do you see this technology going over the next 5-10 years?

A: Our belief is that AI systems, as we’re seeing them today, are going to move towards a point where they all need to be assured and validated. That means being clear about where the AI has been deployed, how it will react to adverse inputs and how it will behave - we see our technology being fundamental to that.

Q: What do organisations need to do in order to best adopt these technologies?

A: We are building our technology so it can simply slot into the end product pipeline and become part of the standard process, both pre-deployment and post-deployment. That’s the simple way of adopting it. I also think there’s going to be a much wider element around how AI is adopted, in which functions it is effective and so on - the wider assurance principles around that - and it gets really complicated. But again, our product is part of that discourse.

Q: What are the real benefits in your technology, for both commercial and defence?

A: When it comes to AI adoption, if you look at a lot of the statistics out there, there’s this big fear about putting AI into the real world and not knowing how it’s going to behave. That fear centres on what happens when an input arrives that we didn’t prepare for. That’s where the real benefit of our system comes from: we can surface those failure modes before the AI is released. Consequently, it gives you the reassurance and confidence that the system that’s been built is actually going to do what it’s meant to do. What’s more, it’s also going to start identifying how adversarial AI is going to attack these systems, because that is going to happen - there’s no two ways about it.

Q: So lots of research and development in this area at the moment. When is it really going to mature and come out of the lab?

A: We’re at the stage now where we’re already working with partners, and our technology is coming out of the lab. As for the adversarial attacks themselves, they are already out there - there are published attacks. The National Security Commission on AI in the US has just said that they are seeing it happen themselves. It’s just a case of whether or not you’re aware of it.

Q: So adversarial AI is one to watch out for…!

A: Yes. It’s one of the key threats that has been highlighted several times over the past year, by Microsoft, the DoD, the Alan Turing Institute, the MOD and the NCSC. It’s one of the top threats to AI at the moment.