Sreeram Sandrapati recently attended BSidesSF, where he explored how agentic AI and evolving cloud architectures redefine security strategies across the industry. Key concerns covered at the event included securing large language models (LLMs) handling sensitive data, addressing prompt injection and model poisoning, and adapting frameworks for AI-specific threats. The following is a summary of a longer article Sreeram wrote, which is posted on Medium.
Implementing consistent security controls across environments with different native security models offers several benefits. TrustLogix's platform supports this with no-code policies that are written once and deployed everywhere, allowing near-instantaneous updates and real-time risk monitoring.
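Although the platform itself is no-code, the write-once, deploy-everywhere idea is easy to sketch: a single declarative policy rendered into each target platform's native grant syntax. The sketch below is purely illustrative; the `Policy` class, the renderer functions, and the choice of Snowflake and PostgreSQL as targets are all assumptions, not TrustLogix's actual design.

```python
from dataclasses import dataclass

# Hypothetical illustration: one declarative policy "compiled" into the
# native grant syntax of two different platforms. All names here are
# invented for this sketch and do not reflect TrustLogix's implementation.

@dataclass(frozen=True)
class Policy:
    role: str        # business role, e.g. "analyst"
    table: str       # fully qualified table name
    actions: tuple   # allowed actions, e.g. ("SELECT",)

def to_snowflake(p: Policy) -> list[str]:
    """Render the policy as Snowflake GRANT statements."""
    return [f"GRANT {a} ON TABLE {p.table} TO ROLE {p.role};" for a in p.actions]

def to_postgres(p: Policy) -> list[str]:
    """Render the same policy in PostgreSQL's grant syntax."""
    return [f"GRANT {a} ON {p.table} TO {p.role};" for a in p.actions]

policy = Policy(role="analyst", table="sales.orders", actions=("SELECT",))
for stmt in to_snowflake(policy) + to_postgres(policy):
    print(stmt)
```

Because the policy object is the single source of truth, updating it once and re-rendering for every platform is what makes near-instantaneous, consistent updates possible.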
Cloud environments have expanded attack surfaces beyond traditional perimeters, so traditional indicators of compromise are no longer sufficient; behavioral indicators are needed instead. TrustLogix mitigates these risks through automated inventory of data stores, behavioral baselining that flags deviations, and fine-grained enforcement at the data layer.
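As a rough illustration of what behavioral baselining means in practice, here is a minimal sketch assuming a single signal (rows read per session) and a simple standard-deviation threshold. A production system would baseline many more features (time of day, tables touched, client location, query shapes), and nothing here reflects TrustLogix's actual detection logic.

```python
import statistics

# Illustrative sketch of behavioral baselining: flag a data access as
# anomalous when it deviates sharply from a user's historical volume.
# The threshold and the single feature are invented for this example.

def is_anomalous(history: list[int], observed: int, threshold: float = 3.0) -> bool:
    """Return True if `observed` is more than `threshold` standard
    deviations above the user's historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against zero variance
    return (observed - mean) / stdev > threshold

# A user who normally reads a few hundred rows suddenly pulls 50,000.
baseline = [220, 310, 180, 260, 240, 290, 205]
print(is_anomalous(baseline, 50_000))  # True  -> raise an alert
print(is_anomalous(baseline, 275))     # False -> within normal behavior
```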
AI agents face unique vulnerabilities, particularly Authorization and Control Hijacking, in which attackers gain an agent's privileges through direct exploitation, permission escalation, or role-inheritance manipulation.
TrustLogix addresses these with agent-specific credentials, just-in-time entitlements, and immutable audit trails.
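Two of those mitigations, just-in-time entitlements and immutable audit trails, can be sketched in a few lines. The code below pairs a short-lived, narrowly scoped agent credential with a hash-chained, append-only log whose entries are tamper-evident; every class and field name is invented for this example and is not TrustLogix's API.

```python
import hashlib, json, time, uuid

# Hypothetical sketch: a just-in-time, narrowly scoped agent credential
# plus a hash-chained audit log. All names are invented for illustration.

class AgentCredential:
    def __init__(self, agent_id: str, scope: set, ttl_seconds: int = 300):
        self.agent_id = agent_id
        self.scope = scope                      # e.g. {"read:sales.orders"}
        self.expires_at = time.time() + ttl_seconds
        self.token = uuid.uuid4().hex           # unique per grant, never shared

    def allows(self, action: str) -> bool:
        return time.time() < self.expires_at and action in self.scope

class AuditLog:
    """Append-only log in which each entry hashes its predecessor,
    so any after-the-fact edit breaks the chain."""
    def __init__(self):
        self.entries: list[dict] = []
        self._prev_hash = "0" * 64

    def record(self, event: dict) -> None:
        payload = json.dumps(event, sort_keys=True) + self._prev_hash
        entry_hash = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({**event, "hash": entry_hash})
        self._prev_hash = entry_hash

log = AuditLog()
cred = AgentCredential("report-agent-7", {"read:sales.orders"}, ttl_seconds=60)

for action in ("read:sales.orders", "write:hr.salaries"):
    allowed = cred.allows(action)
    log.record({"agent": cred.agent_id, "action": action, "allowed": allowed})
    print(action, "->", "permitted" if allowed else "denied")
```

Hash-chaining each entry to its predecessor means an attacker who edits or deletes a past record invalidates every hash after it, which is what makes the trail effectively immutable without special storage.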
BSidesSF 2025 revealed an industry at an inflection point where AI, cloud architectures, and human factors create challenges and opportunities. Key principles include specialized security for AI systems, centralized visibility across environments, and the continued importance of human psychology in security.
For a more in-depth look at the topics and how TrustLogix can help with these issues, please read the full blog: 🐉 BSidesSF 2025: Focusing on the Evolution of Cloud Security in The Age of AI