Summary
Synopsis & Commentary
Red teaming of AI models highlights real-world vulnerabilities and the importance of proactive security measures. It is vital to educate users about how to navigate these challenges and keep AI systems secure. Today's guest is Dr. Aditya Sood. Dr. Sood is the VP of Security Engineering and AI Strategy at Aryaka and is a security practitioner, researcher, and consultant with more than 16 years of experience. He earned his PhD in computer science from Michigan State University and has authored several papers for various magazines and journals. In this conversation, he sheds light on AI-driven threats, supply chain risks, and practical ways organizations can stay protected in an ever-changing environment. Get ready to learn how the latest innovations and evolving attack surfaces affect everyone from large companies to everyday users, and why a proactive mindset is key to staying ahead.

Show Notes:

[01:02] Dr. Sood has worked in the security industry for the last 17 years. He holds a PhD from Michigan State University. Prior to Aryaka, he was Senior Director of Threat Research and Security Strategy for the Office of the CTO at F5.
[02:57] We discuss how security issues with AI are on the rise because of AI's recent popularity and increased use.
[04:18] Large amounts of data are complicating how things are understood, complexity is rising, and the threat model is changing.
[05:14] We talk about the different AI attacks being encountered and how AI can be used to defend against them.
[06:00] Pre-trained models can contain vulnerabilities.
[07:01] AI drift, also called model or concept drift, occurs when the data in training sets is not updated and ends up being used in a different way. AI hallucinations can also create false output.
[08:46] Dr. Sood explains several types of attacks that malicious actors are using.
[10:07] Prompt injections are also a risk.
[12:13] We learn about the injection mapping strategy.
[13:54] We discuss the possibility of using AI as a tool to bypass its own guardrails.
[15:18] It's an arms race: using AI to attack AI and using AI to secure AI.
[16:01] We discuss AI workload analysis, which helps us understand how AI processes data and reveals the authorization boundary and the security controls that need to be enforced.
[17:48] Being aware of the shadow AI running in the background.
[19:38] Challenges around corporations having the right security people in place to understand and fight vulnerabilities.
[20:55] There is risk in data going to the cloud through the LLM interface.
[21:47] Dr. Sood breaks down the concept of shadow AI.
[23:50] There are also risks for consumers using AI.
[29:39] The concept of black-box AI models and bias being built into a particular AI.
[33:45] The issue of the ground set of truth and how the models are trained.
[37:09] It's a balancing act when thinking about the ground set of truth for data.
[39:08] Dr. Sood shares an example from when he was researching his book.
[39:51] Using the push-and-pretend technique to trick AI into bypassing guardrails.
[42:51] We talk about the dangers of using APIs that aren't secure.
[43:58] The importance of understanding the entire AI ecosystem.

Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review.
Links and Resources:

Podcast Web Page
Facebook Page
whatismyipaddress.com
Easy Prey on Instagram
Easy Prey on Twitter
Easy Prey on LinkedIn
Easy Prey on YouTube
Easy Prey on Pinterest
Aditya K Sood
Aditya K Sood - LinkedIn
Aditya K Sood - X
Aryaka
COMBATING CYBERATTACKS TARGETING THE AI ECOSYSTEM: Assessing Threats, Risks, and Vulnerabilities
Empirical Cloud Security: Practical Intelligence to Evaluate Risks and Attacks