
Absolute AppSec


By: Ken Johnson and Seth Law

About this title

A weekly podcast covering all things application security, hosted by Ken Johnson and Seth Law.
  • Episode 315 - Risks of "AI-Native" Security Products, Rapid Software Development
    Mar 3 2026
    In episode 315 of Absolute AppSec, Ken Johnson and Seth Law discuss the rapidly evolving challenges of securing software in an era of AI-assisted development. The hosts provide updates on their "Harnessing LLMs for Application Security" training, noting that the field is changing so fast that they must constantly update their exercises to include new agents and advanced tools like Claude Code. A primary concern raised is the "naivete" of many new security tools, where prompts are often automatically generated by AI rather than expertly crafted, causing a loss of essential nuance. The hosts also warn against AI companies building security products without specialized expertise, citing a zero-click exploit in the "Comet" AI browser that could exfiltrate sensitive secrets via calendar summaries. As development teams now ship code at "AI speed," the hosts argue that traditional AppSec methods are too slow, necessitating a strategic pivot toward automated design reviews, governance, and observability rather than just chasing individual vulnerabilities. Despite the inherent risks and the ongoing difficulty of managing AI reasoning drift, they remain optimistic that these tools can eventually unlock more efficient, hands-off AppSec workflows if managed with proper guardrails and deterministic oversight.
  • Episode 314 - LLM AppSec Disruption, Limitations of AI in Security, AppSec Oversight
    Feb 24 2026
    In this episode, the hosts discuss the seismic shift in the application security landscape triggered by the rise of Large Language Models (LLMs) and Anthropic’s "Claude Code". They highlight the massive economic repercussions of these AI advancements, noting that billions in market value were wiped from traditional cybersecurity stocks as investors begin to believe frontier models might eventually write perfectly secure code. The hosts critique the industry's historical reliance on "checkbox" compliance tools like SAST, DAST, and SCA, arguing that these "archaic" methods are being replaced by AI-native strategies capable of reasoning through complex logic flaws. While they acknowledge that AI can suffer from "reasoning drift" and still requires deterministic validation to avoid false positives, they emphasize that security professionals must adapt by building custom "skills" and focusing on governance and observability. The discussion concludes that as developers move to "AI speed," the traditional role of the AppSec professional is evolving into a "Jarvis-like" orchestrator who manages automated workflows and infuses institutional knowledge into AI agents to maintain oversight without slowing down production.
  • Episode 313 - AppSec Role Evolution, AI Skills & Risks, Phishing AI Agents
    Feb 17 2026
    Ken Johnson and Seth Law examine the intensifying pressure on security practitioners as AI-driven development causes an unprecedented acceleration in industry velocity. A primary theme is the emergence of "shadow AI," where developers utilize unauthorized AI coding assistants and personal agents, introducing significant data classification risks and supply chain vulnerabilities. The discussion dives into technical concepts like AI agent "skills"—markdown files providing specialized directions—and the corresponding security risks found in new skill registries, such as malicious tools designed to exfiltrate credentials and crypto assets. The hosts also review 1Password’s SCAM (Security Comprehension Awareness Measure), highlighting broad performance gaps in an AI's ability to detect phishing, with some models failing up to 65% of the time. To manage these unpredictable systems, the hosts advocate for a shift toward high-level validation roles, emphasizing the need for Subject Matter Expertise to combat "reasoning drift" and maintain safety through test-driven development and periodic "checkpoints". Ultimately, they conclude that while AI can simulate expertise, human oversight remains vital to secure the probabilistic nature of modern agentic workflows.
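Episode 313 describes agent "skills" as markdown files providing specialized directions. A minimal sketch of what such a file might contain, loosely following Anthropic's published SKILL.md layout (the skill name, frontmatter fields, and checklist are illustrative assumptions):

```markdown
---
name: secrets-review
description: Flags hard-coded credentials when reviewing a diff
---

# Secrets Review

When reviewing a change, check for:

- API keys, tokens, or passwords assigned to string literals
- Private keys or `.env` contents added to the repository

Report each finding with file, line number, and a suggested remediation.
```

Because a skill is just instructions the agent will follow, a registry of such files carries exactly the risk the episode raises: a malicious skill can direct the agent to exfiltrate the credentials it finds rather than report them.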
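Episode 314's point about deterministic validation lends itself to a small sketch: an automated workflow accepts an AI-reported finding only after a non-probabilistic check confirms the quoted code actually exists where the model says it does. Everything below (the `Finding` shape, the `is_grounded` helper) is an illustrative assumption, not anything from the show.

```python
# Sketch of "deterministic validation" of LLM findings: before accepting a
# reported vulnerability, verify that the code the model quotes really sits
# at the file and line it cites. Names here are hypothetical.

import tempfile
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Finding:
    path: str     # file the model claims contains the flaw
    line: int     # 1-indexed line number it reported
    snippet: str  # exact code the model says it saw there

def is_grounded(finding: Finding, repo_root: Path) -> bool:
    """Reject findings whose quoted code is not really at the cited location."""
    file = repo_root / finding.path
    if not file.is_file():
        return False  # the model hallucinated the file
    lines = file.read_text(errors="replace").splitlines()
    if not (1 <= finding.line <= len(lines)):
        return False  # the reported line number drifted out of range
    return finding.snippet.strip() in lines[finding.line - 1]

# Usage: one grounded finding, one hallucinated one.
root = Path(tempfile.mkdtemp())
(root / "app.py").write_text('password = "hunter2"\n')
real = Finding(path="app.py", line=1, snippet='password = "hunter2"')
fake = Finding(path="app.py", line=7, snippet="eval(user_input)")
print(is_grounded(real, root), is_grounded(fake, root))  # True False
```

A check like this stays cheap and repeatable, which is the point the hosts make: the probabilistic reasoning happens in the model, but acceptance into the workflow is gated by something deterministic.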
No reviews yet