Along The Edge Podcast: Breaking, Defending, and Understanding Agentic AI

By: Andrius Useckas

About this title

Along The Edge is a podcast about life on the frontier of AI security—where large language models turn into agents, tools get wired into everything, and the old web-app threat models stop being enough. Hosted by Andrius Useckas (Co-founder & CTO of ZioSec), Along The Edge dives deep into agentic AI security: jailbreaks, prompt injection, data leaks, MCP/tooling risks, least privilege for agents, and what “don’t trust, verify” really means in an AI-native stack.

Each episode features hands-on practitioners—security architects, red teamers, researchers, and builders—who are actively breaking and defending real systems in production. If you’re building, deploying, or testing AI agents (SDR agents, SOC assistants, coding copilots, internal HR or payroll agents, etc.), this show gives you concrete attack paths, defensive patterns, and hard-earned lessons you won’t get from marketing decks and “AI safety” platitudes.

Along The Edge is for:
  • Security engineers and architects responsible for AI/agentic systems
  • Red teams, pentesters, and researchers exploring AI-native attack surfaces
  • Engineering leaders who don’t want to bolt security on after the breach
  • Anyone who suspects “the model will handle it” is not a real security strategy

© 2026 Andrius Useckas
  • Along The Edge e2: OpenClaw Is Incredible... and Completely Unhinged
    Jan 30 2026

    OpenClaw (formerly Clawdbot / Moltbot / whatever it’s called today) is the first agent that feels like “Siri, but real” — and it’s moving so fast it’s breaking everyone’s threat models in real time.

    In this episode of Along The Edge, we unpack why OpenClaw is blowing up, what it can do when you hook it into your email, calendar, code, and tools… and why the security tradeoff is brutal: the more capable it is, the more dangerous it becomes.

    We cover:

    • Why “credentials in cleartext” is just the beginning
    • How Discord / chat integrations can leak gateway + session details
    • Tool invocation endpoints and bypass paths
    • MCP prompt injection turning “normal workflow” into command execution
    • What attackers will fingerprint and scan for in the wild
    • What CISOs should do on day 1
    • The big question: can defense keep up, or do we go “offense-driven defense”?

    Buckle up.

    45 min.
  • Along The Edge – Episode 1: Agentic AI Security, Jailbreaks, and Why You Shouldn’t Trust Your Agents
    Jan 13 2026

    Welcome to Along The Edge, a podcast about AI security and agentic AI.

    In Episode 1, Andrius Useckas (Co-founder & CTO, ZioSec) sits down with Alex Gatz (Staff Security Architect, ZioSec) to break down the emerging world of agentic AI security: jailbreaks, prompt injection, SDR and SOC agents, data leaks, least privilege, and why “don’t worry, the model will filter it” is a dangerous assumption.

    They also walk through V-HACK, an intentionally vulnerable agentic lab project that lets security researchers and pentesters safely experiment with agent exploits, tool calling, jailbreaks, and attack paths—helping define what “pen tester 2.0” looks like.

    Chapters / In this episode:

    00:00 – Intro: who we are & why a new AI security podcast
    02:00 – What is agentic AI vs a plain LLM?
    03:10 – SDR agents, SOC workflows & new “Layer 8 / Layer 9” problems
    09:00 – Prompt injection 101: direct vs indirect attacks & context windows
    12:00 – Chatbots vs agents and why agent risk is higher
    15:00 – Foundation model trust & the Anthropic horror-story jailbreak demo
    19:30 – Why jailbreaks are (currently) an unsolved problem
    22:30 – Social engineering parallels & detecting AI / agentic attacks
    27:00 – V-HACK: intentionally vulnerable agent lab for pentesters
    32:00 – Securing agents: WAFs, runtime protection, identity & MCP proxies
    36:00 – Scanners, evals vs real pentesting & terrifying token bills
    39:00 – Least privilege, DLP & identity for SDR and payroll-style agents
    44:00 – “Don’t trust, verify”: threat modeling & testing agents early
    46:00 – Future of AI security: consolidation, CNAPs & SOC-as-an-agent
    49:00 – Magic wand: fixing context & memory in agents
    50:30 – Closing thoughts & what’s next

    Links mentioned:

    ZioSec – www.ziosec.com
    V-HACK (GitHub) – https://github.com/ZioSec/VHACK

    About the guests:

    Andrius Useckas has 25+ years in security and now focuses on agentic AI security, offensive testing, and red teaming for enterprise AI deployments.

    Alex Gatz is a Staff Security Architect at ZioSec. With a background in emergency medicine and construction, he transitioned into AI in 2014, working on NLP, deep learning, and anomaly detection before moving into AI security.

    If you’re building or testing agents in 2026, this episode gives you a practical look at how real attack paths work, what breaks in production, and how to defend before attackers get there first.

    51 min.