Future of AI

How security leaders are handling the growing pressure to safeguard AI while still deploying it quickly

Credit: Outlever
Key Points
  • The future of cybersecurity hinges on a dual mandate: securing AI systems while simultaneously using AI to enhance security measures.

  • Anshuman Bhartiya of Lyft emphasizes the need for proactive security leadership as AI adoption exposes new vulnerabilities.

  • Emerging agentic systems promise autonomous problem-solving, unlocking new creative opportunities in cybersecurity.


As some of the RFCs of these protocols are continuously improving, it’s very important for us to keep up to speed and make sure we understand how to secure these technologies. It’s almost a black hole in some respects. People are just figuring things out on the go, and I don’t think there’s a robust solution about how to do things securely just yet.
Anshuman Bhartiya
Staff Security Engineer | Lyft

The future of cybersecurity runs on a dual mandate: security for AI, and AI for security. One is about protecting emerging systems. The other is about empowering security teams to move faster, think clearer, and communicate better.

Anshuman Bhartiya, Staff Security Engineer at Lyft and Co-Host of The Boring AppSec Podcast, has spent over a decade tackling complex security challenges across enterprise products, startups, and critical infrastructure. Now, he's exploring how agentic AI is reshaping both the systems we secure and the people doing the securing.

Beware the black hole: "Using AI to help solve problems is very important, but doing it in a way that doesn’t compromise security and the risk is even more important," Bhartiya says. As companies rush to adopt AI IDEs, MCP servers, and autonomous agents, they're also exposing new layers of vulnerability, many of which remain poorly understood. "People still don’t really understand the security aspects of it—things like authentication, authorization, data leakage."

He sees this as a call for stronger, more proactive security leadership. "As some of the RFCs of these protocols are continuously improving, it’s very important for us to keep up to speed and make sure we understand how to secure these technologies." Right now, there are more questions than answers. "It’s almost a black hole in some respects," Bhartiya says. "People are just figuring things out on the go, and I don’t think there’s a robust solution about how to do things securely just yet."
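To make those risks concrete, here is a minimal sketch, not drawn from the interview, of the kind of guardrail this implies: a deny-by-default authorization check and a crude data-leakage filter wrapped around an agent's tool calls. The role names, tool names, and redaction pattern are all invented for illustration.

    import re

    # Hypothetical allowlist mapping agent roles to the tools they may call.
    TOOL_PERMISSIONS = {
        "triage-agent": {"read_ticket", "read_code"},
        "deploy-agent": {"read_ticket", "run_pipeline"},
    }

    # Crude pattern for credential-shaped strings that should never leave the boundary.
    SECRET_PATTERN = re.compile(r"(?:api[_-]?key|password|token)\s*[:=]\s*\S+", re.I)

    def handle_tool_call(agent_role: str, tool_name: str, payload: str) -> str:
        """Authorize a tool invocation, then scrub the payload it returns."""
        # Authorization: deny by default if the role/tool pair is not allowlisted.
        if tool_name not in TOOL_PERMISSIONS.get(agent_role, set()):
            raise PermissionError(f"{agent_role} may not call {tool_name}")
        # Data-leakage check: redact credential-shaped strings before the
        # payload is logged or handed to a model.
        return SECRET_PATTERN.sub("[REDACTED]", payload)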

Skepticism ingrained: "The biggest roadblock I’ve experienced when it comes to security folks using AI is trust," says Bhartiya. "We’re generally very skeptical." That skepticism isn’t a fault—it’s the job. Which is why trust needs to be earned, not assumed. Bhartiya recommends starting small: "Try to compare how the AI is solving a problem versus how you would solve it. If you see a gap, ask: how can you close that gap?"

Bhartiya relies on three tactics to bridge that gap: prompt engineering, supplying the right context, and giving clear examples of what good and bad outputs look like. "If I give the right prompt with the right context and explain what 'good' looks like and what 'bad' looks like, AI reasoning models are actually very good in taking that data and giving you the output you expect," he explains. Still, it’s not as simple as plugging into an API. "You have to know what data to provide to get the right output. Knowing that is the secret sauce."
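As a rough illustration of those three tactics working together (the finding format, the examples, and the function are invented for this sketch, not Bhartiya's actual tooling):

    GOOD_EXAMPLE = (
        "Finding: SQL injection in /api/search (user input concatenated into the "
        "query). Severity: High. Fix: use parameterized queries."
    )
    BAD_EXAMPLE = "There might be some security issues in this code."  # vague, unactionable

    def build_review_prompt(code_snippet: str, ticket_summary: str) -> str:
        """Combine a precise prompt, the right context, and labeled exemplars."""
        return "\n\n".join([
            # Tactic 1: prompt engineering -- a narrow, explicit instruction.
            "You are reviewing the code below for security vulnerabilities.",
            # Tactic 2: the right context -- what a human reviewer would read first.
            f"Ticket summary: {ticket_summary}",
            f"Code under review:\n{code_snippet}",
            # Tactic 3: show what good and bad output look like.
            f"A GOOD finding looks like:\n{GOOD_EXAMPLE}",
            f"A BAD finding looks like:\n{BAD_EXAMPLE}",
            "Report findings in the GOOD format only, or reply 'no findings'.",
        ])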

Try to compare how the AI is solving a problem versus how you would solve it. If you see a gap, ask: how can you close that gap?
Anshuman Bhartiya
Staff Security Engineer | Lyft

Context is everything: Once trust is earned, the next hurdle is efficiency. Security teams aren’t short on judgment; they’re short on time. Bhartiya points to the daily drag of manual triage: checking Jira tickets, digging through Confluence pages, scanning code—just to understand what’s going on. "I find myself spending more time gathering the context than actually making the decision," he says.

That’s where AI steps in. "If AI can bring all the context and show me everything I need to know in order to make a decision, I can do my job a lot faster." It’s not just about automation; it’s about letting security professionals spend less time digging and more time defending. But speed only matters if the system can be trusted.
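One way to picture that context-gathering step if automated: the three sources below mirror the ones he names, and the stub functions stand in for real Jira, Confluence, and repository integrations.

    from dataclasses import dataclass

    @dataclass
    class TriageContext:
        ticket: str        # the Jira ticket description
        docs: list[str]    # related Confluence pages
        diff: str          # the code change under review

    # Stubs standing in for real API clients; each would be a live integration.
    def fetch_ticket(issue_id: str) -> str:
        return f"[stub] ticket body for {issue_id}"

    def search_docs(issue_id: str) -> list[str]:
        return [f"[stub] design doc linked from {issue_id}"]

    def fetch_diff(issue_id: str) -> str:
        return f"[stub] diff attached to {issue_id}"

    def gather_context(issue_id: str) -> TriageContext:
        """Collect everything a reviewer would otherwise hunt down by hand,
        so the human (or a model) can go straight to the decision."""
        return TriageContext(
            ticket=fetch_ticket(issue_id),
            docs=search_docs(issue_id),
            diff=fetch_diff(issue_id),
        )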

Set and forget: What excites Bhartiya most isn’t just what agentic systems can do now; it's what they’ll make possible next. He points to tools like Google’s Jules and OpenAI’s Codex as early examples of agents that can tackle complex problems on their own. "You can ask these agents to do something and then go about your day, and the agents will run in the background remotely. That’s just wild to me," he says.

With that kind of hands-off execution, he sees endless creative potential. "Every human being has so many ideas on a daily basis. If we can start solving some of these problems with agents, I think that’s awesome," says Bhartiya. "I’m really looking forward to building with this kind of tech."
