Jayesh Bapu Ahire for AI Guardrails

Adopt AI, But Responsibly!

In this riveting episode of the AI Guardrails Podcast, we dive deep into the transformative impact of artificial intelligence on the field of cybersecurity. Join us as we sit down with Andy Martin, founder and CEO of Control Plane, who shares his expert insights on the integration of AI technologies in protecting digital infrastructures. From the origins of AI in gaming to its pivotal role in modern cybersecurity strategies, Andy unpacks the complexities and potential of AI to reshape security paradigms.

We explore cutting-edge topics such as AI's ability to automate vulnerability assessments, the ethical considerations of AI deployment, and the future of AI-enhanced security measures. Whether you're a cybersecurity professional, AI enthusiast, or just keen to understand the next big thing in tech, this episode will arm you with the knowledge you need to navigate the AI-powered security landscape. Tune in to discover if AI is truly the cybersecurity hero we've been waiting for or if it's a Pandora's box that might unleash new challenges.

Guest Bio:

Andrew Martin is founder and CEO of Control Plane, specializing in securely unlocking next-generation cloud technologies for highly regulated clients. He holds pro bono positions as CISO at OpenUK and co-chair of the CNCF's Technical Advisory Group for Security. His clients include Google, the UK Government, Citibank, JPMC, PwC, BP, Visa, British Gas, The Economist, and News UK.
He is co-author of Hacking Kubernetes (O’Reilly Media, 2022), has published training material and numerous whitepapers for clients including the Linux Foundation and SANS, and regularly speaks and delivers training at international security conferences and events.

Transcript

Jayesh Ahire: Hello, everyone. Welcome to the AI Guardrails Podcast, where we discuss AI adoption challenges and the security opportunities around building new AI and ML applications. Today, we have with us Andy from Control Plane.

Andy Martin: Hello, thank you very much for having me on. I'm Andy, founder and CEO at Control Plane. I have a background in classic computer science, did a lot of development and infrastructure engineering, DevOps, and all that good stuff, and worked in a number of different regulated organizations before starting Control Plane to bring consulting and assurance around cloud-native and next-generation technologies to regulated industries. We're based out of London with a lot of colleagues across Europe, Asia Pacific, and North America.

Jayesh Ahire: Interesting. Andy and I met back in 2019 on the streets of Sydney, where we shared some good kebabs and good talks, and that's where we actually got to know each other. Control Plane has been doing fantastic work across different domains of security, and has strategic relationships with various organizations in the UK and across the world.

Today, we'll be discussing different use cases Andy has come across, specifically in the cybersecurity domain, when it comes to AI and the new generative AI, the LLM wave that has been building since last year. So to begin with: how did you get interested in this specific domain? And what are some of the interesting use cases you have seen while working on this?

Andy Martin: Well, one of the first things that sparked the interest was watching some of the things that OpenAI did many years ago, when they released a playground where you could build your own agents to play classic games: you could teach an agent to play Pong and some side-scrolling games and that kind of thing. A very dear friend of mine, who actually trained as a lawyer, turned to me and said there's so much going on here. DeepMind were just starting the AlphaGo project as well, and one of the DeepMind founders was, or still is, I think, a lecturer at University College London, and his reinforcement learning course was live on YouTube. So my friend and I sat down and watched a lecture module on reinforcement learning, and discovered that it was reasonably complicated. But this very simple process of having an agent with a feedback loop, an observation of the environment, and a reward function is a very simple way of modeling something that's actually exponentially complex across multiple dimensions. We did this course, and it was fantastically interesting. DeepMind then got acquired by Google and integrated internally there; we've now got Demis Hassabis, the CEO, deeply integrated into Google's plans and all of their latest models. And over the last two years, since the release of ChatGPT, and in the OpenAI playground before the big launch, it was clear that there was something there: the token-prediction capabilities of LLMs got to a point where they were really useful.
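To make the loop Andy describes concrete, here is a minimal reinforcement learning sketch in Python: an agent observes a state, picks an action, receives a reward, and updates its value estimates. The toy environment and the tabular update rule are illustrative stand-ins, not anything from the course he mentions.

```python
import random

# A toy environment: the agent must learn to match one of two actions
# to the observed state. observe() returns a state, step() returns a
# reward: the observation-plus-reward feedback loop Andy describes.
class ToyEnvironment:
    def observe(self) -> int:
        return random.randint(0, 1)                 # state: 0 or 1

    def step(self, state: int, action: int) -> float:
        return 1.0 if action == state else -1.0     # reward for matching

# A minimal tabular agent: estimate the value of each (state, action)
# pair and shift towards whatever earned reward in the past.
values = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
env = ToyEnvironment()
epsilon, alpha = 0.1, 0.5                           # exploration, learning rate

for episode in range(1000):
    state = env.observe()
    if random.random() < epsilon:                   # explore occasionally
        action = random.randint(0, 1)
    else:                                           # otherwise exploit estimates
        action = max((0, 1), key=lambda a: values[(state, a)])
    reward = env.step(state, action)
    # Nudge the estimate towards the observed reward (the feedback loop).
    values[(state, action)] += alpha * (reward - values[(state, action)])

print(values)  # the matching action ends up with the higher value per state
```

After enough episodes the values table converges so that the matching action scores highest in each state, which is that simple agent-observation-reward cycle doing its work.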

Jayesh Ahire: As you rightly mentioned, there are use cases specifically around things like vulnerability assessment, and at least reducing the repetitive work that has been happening in these workflows.

Andy Martin: Absolutely, and it's becoming one of the interesting use cases. We have been seeing this across things like SAST, and even DAST tooling, even SCA for that matter. Traditionally, SAST produces a lot of false positives; the question is how we can minimize them using all of this context we are feeding into these models.
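As a rough illustration of the idea, here is a hypothetical sketch of triaging a SAST finding with an LLM. The finding format and the `ask_llm` placeholder are assumptions for illustration; real scanners and model APIs each have their own interfaces. The point is the one Andy makes: hand the model the surrounding context the scanner lacks.

```python
import json

# Hypothetical shape of a SAST finding; real scanners each emit their
# own formats, so treat this as a stand-in.
finding = {
    "rule": "sql-injection",
    "file": "app/views.py",
    "line": 42,
    "snippet": 'query = f"SELECT * FROM users WHERE id = {user_id}"',
}

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to whatever model you use (a hosted API,
    a local model, etc.). Expected to return 'true positive' or
    'false positive' plus a short justification."""
    raise NotImplementedError

def triage(finding: dict, surrounding_code: str) -> str:
    # Give the model the context the scanner lacks: the code around the
    # flagged line, so it can judge whether the pattern is reachable.
    prompt = (
        "You are triaging a static-analysis finding.\n"
        f"Finding: {json.dumps(finding, indent=2)}\n"
        f"Surrounding code:\n{surrounding_code}\n"
        "Is this a true positive or a false positive? Answer with one of "
        "those labels and a one-sentence justification."
    )
    return ask_llm(prompt)
```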

Jayesh Ahire: And I know you have been running some initiatives, including the FINOS AI Readiness working group, among others. I want to touch on that and go through it: what are the different things you have been doing and exploring in this specific domain?

Andy Martin: Yes. So FINOS, the financial services open source organization which sits under the Linux Foundation, has a soon-to-be-announced AI Readiness working group. I'm on the steering committee, and the goal of the group is to bring together stakeholders from all the major banks and some other financial services organizations and figure out the simplest way to get AI into these organizations. Now, the roadblocks there include: who owns this data, and is a leak of this data an existential event for the bank, or for a regulator? There are also existing security controls and mechanisms in place in the banks that we want to continue to respect, because we spent many years building them and they're compliant with regulation. So the AI Readiness working group is a collection of individuals operating under the Chatham House Rule to share experiences and to build out a set of common guidelines and frameworks for secure adoption of AI. That's very much based on the legal implications, privacy, and how we deal with the governance, risk, and compliance of the entire end-to-end AI/ML lifecycle.

Jayesh Ahire: You made an interesting point at the end there: as you're adopting these technologies, it's very important to be responsible, and at the same time to make sure we are aware of all the caveats that come with them, to protect ourselves from any damage.

Andy Martin: It's extremely difficult. Again, if we model what we've done with AI as an extension of human patterns of behavior, capturing our biases from an ethical perspective, and our performance at keeping things private as a species, then we can see that we have just made a far more flamboyant rod for our own backs. For data protection, it's a huge open goal. It's back to this question of where we delineate those security boundaries. Do we put financial information in the same place as legal information, in the same place as proprietary information? The answer, of course, is no. But the initial dream, in this space of uncertainty as we begin to adopt this as a commercial enterprise across the world, has been that yes, we can do this with a single model. And really, this is where I see the greatest issue: the inherent risks when the model is inherently opaque as to its training data composition. Add on top of that the mixing of different data classifications and sensitivities, and we're in quite a risky space.

One of the things I'm also involved with is OpenUK, a nonprofit focused on open technology: hardware, data, and software. We try to stop governments from making legislative decisions that aren't technically grounded, and it is wonderful to work with the UK Government, the European Union, and the White House on those things. One thing across the three: the European Union has had an AI Act in motion for a number of years. That's near completion and will be enforced within, I think, the next couple of years. From a UK perspective, we got a consortium of countries from around the world together and agreed on safe and trusted AI. But the UK Government is really rushing into this. The UK has one of the biggest service industries in the world, and it's clear that AI is very well placed to eat some of that breakfast, if you like, and to make services more optimal, so suddenly there are a lot of challenges. The government in the UK has really pushed; it courted OpenAI, which will open an office in London, and there's a real push from that perspective. The US is kind of just observing; it doesn't have legislation moving in the same way for these things, even though it is, of course, the birthplace. It's now looking more at the embargoes around the hardware needed for AI, the chip war with China, for example, and TSMC being protected in Taiwan. In terms of where privacy and data protection sit, the European Union is doing the best job by a long way of actually codifying and gathering those ethereal threats.
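One hedged sketch of the boundary question Andy raises, keeping differently classified data from all flowing into a single shared model, might look like the routing guard below. The classification labels, model names, and clearance table are purely illustrative assumptions, not any standard or Control Plane practice.

```python
from enum import IntEnum

class Classification(IntEnum):
    PUBLIC = 0
    PROPRIETARY = 1
    LEGAL = 2
    FINANCIAL = 3

# Hypothetical routing table: each model endpoint is cleared up to a
# maximum classification. A shared external model only ever sees PUBLIC.
MODEL_CLEARANCE = {
    "shared-external-model": Classification.PUBLIC,
    "in-tenant-model": Classification.PROPRIETARY,
    "air-gapped-model": Classification.FINANCIAL,
}

def route(document: str, label: Classification) -> str:
    """Return the least-privileged model allowed to see this document,
    enforcing the 'don't mix classifications in one model' boundary."""
    allowed = [m for m, max_c in MODEL_CLEARANCE.items() if label <= max_c]
    if not allowed:
        raise PermissionError(f"No model cleared for {label.name} data")
    # Pick the model with the lowest clearance that still qualifies.
    return min(allowed, key=lambda m: MODEL_CLEARANCE[m])

print(route("quarterly figures", Classification.FINANCIAL))  # air-gapped-model
print(route("press release", Classification.PUBLIC))         # shared-external-model
```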

Jayesh Ahire: Yeah, and it will definitely be very interesting to see where this evolves, to the extent where everything is in context and we can actually just go and ask a question and get the answer; it will be just one more teammate in the whole process, I guess. One thing that comes along with that: as these models are getting perfected, there are also people trying to perfect AI-driven cybersecurity, or to learn more about the new advancements in the LLM space. What would be your advice for them?

Andy Martin: My favourite piece of advice in this space is: consider the model a frenemy. It's not your enemy, but it's not your friend. I'd say that it is an untrusted, non-deterministic, potentially hostile threat actor behind your firewall. We have to consider that we've dropped in something that is completely new in the history of computing: we haven't given an agent this level of autonomy before. To some extent we have, but those agents were testable and deterministic. In this case, we don't understand how models make their decisions. We understand how they're trained, but actually getting explainability back out of the model, which is also part of the EU AI Act, is not uniform across everything, and it's not necessarily accurate all the time. So we have to consider that we've installed something new that we haven't dealt with before at this scale, and it's evolving extremely quickly. Just consider what that model actually is: as I say, half friend, half enemy, with the capacity to do incredible good, to reduce toil for humanity, and to reduce the burden of repetitive tasks. But we must use this to embellish ourselves, not to replace ourselves, because we can't trust it. Society has evolved to give us constraints, legal requirements, and ethical concerns that are baked into our souls, our personas as people, and those are not replicated by the machines. So: firewall everything, and trust but verify.
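In the spirit of "firewall everything, and trust but verify", a minimal sketch of treating model output as untrusted input might look like this: parse it defensively and allowlist what it is permitted to trigger before any side effects happen. The action names and JSON shape here are hypothetical.

```python
import json

# Treat the model like any other untrusted input source: never act on
# its output directly; validate structure and allowlist values first.
ALLOWED_ACTIONS = {"open_ticket", "label_finding", "request_human_review"}

def verify_model_output(raw: str) -> dict:
    """Parse and validate a model response before any side effects."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"Model output is not valid JSON: {exc}")
    if not isinstance(data, dict):
        raise ValueError("Model output must be a JSON object")
    action = data.get("action")
    if action not in ALLOWED_ACTIONS:
        # Non-deterministic and potentially hostile: reject anything
        # outside the allowlist rather than trying to interpret it.
        raise ValueError(f"Disallowed action: {action!r}")
    return data

# Usage: anything that fails verification falls back to a human.
try:
    command = verify_model_output('{"action": "label_finding", "id": 7}')
except ValueError:
    command = {"action": "request_human_review"}
```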

Jayesh Ahire: Thank you, and absolutely. So I guess it's more about drawing this boundary somewhere and making sure that everything you do remains within that specific constrained environment, making sure all the things you want protected stay protected, and all the things you want insights on are well served by the models you are using. Cool. So I guess those are all the questions I had. On the sidelines: as far as I can see, you read a lot and go through a bunch of these things. Any interesting book recommendations you have come across recently?

Andy Martin: Yes. My friend and colleague Vicente Herrera, who has been doing a lot of work at Control Plane on red-teaming models, and on how we consider them as these insecure things with security potential, just recommended a book to me called Generative AI Security. It's published by Springer, it's quite new, and it's very comprehensive; I'll share a link to it. I've also been reading things like the OWASP Top 10 for LLM Applications and the NCSC's secure AI guidelines. A lot of people have put out a lot of different literature, and it tends to come in at different levels. At one end, you have the security of data and privacy kept very high-level; at the other end, with something like OWASP, you have depth on what these attacks actually look like and how they might happen. So I find those very interesting as well.

Jayesh Ahire: Yeah, thank you. That must be interesting to go through as well. Thanks, Andy, for joining in. This was a pretty long, detailed, and very interesting conversation, one of the longest episodes for sure, but I really liked talking to you and hearing all the insights you have from the different things you do day to day. Again, thanks for joining; I hope you enjoyed it as much as I did, and see you again.

Andy Martin: It was tremendous. I enjoyed it thoroughly, and thank you very much for having me.

Jayesh Ahire: Thank you. Have a good time.

Conclusion

Thanks for listening to, or reading, the second episode of the AI Guardrails podcast. You can find the latest episode here: https://podcasters.spotify.com/pod/show/ai-guardrails. We are available on Spotify, Apple Podcasts, or any of your favorite podcast apps. What topics do you want to hear about next? Let us know in the comments!
