• Along The Edge e2: OpenClaw Is Incredible... and Completely Unhinged
    Jan 30 2026

    OpenClaw (formerly Clawdbot / Moltbot / whatever it’s called today) is the first agent that feels like “Siri, but real” — and it’s moving so fast it’s breaking everyone’s threat models in real time.

    In this episode of Along The Edge, we unpack why OpenClaw is blowing up, what it can do when you hook it into your email, calendar, code, and tools… and why the security tradeoff is brutal: the more capable it is, the more dangerous it becomes.

    We cover:

    • Why “credentials in cleartext” is just the beginning
    • How Discord / chat integrations can leak gateway + session details
    • Tool invocation endpoints and bypass paths
    • MCP prompt injection turning “normal workflow” into command execution
    • What attackers will fingerprint and scan for in the wild
    • What CISOs should do on day 1
    • The big question: can defense keep up, or do we go “offense-driven defense”?

    Buckle up.

    Mehr anzeigen Weniger anzeigen
    45 Min.
  • Along The Edge – Episode 1: Agentic AI Security, Jailbreaks, and Why You Shouldn’t Trust Your Agents
    Jan 13 2026

    Welcome to Along The Edge, a podcast about AI security and agentic AI.

    In Episode 1, Andrius Useckas (Co-founder & CTO, ZioSec) sits down with Alex Gatz (Staff Security Architect, ZioSec) to break down the emerging world of agentic AI security: jailbreaks, prompt injection, SDR and SOC agents, data leaks, least privilege, and why “don’t worry, the model will filter it” is a dangerous assumption.

    They also walk through V-HACK, an intentionally vulnerable agentic lab project that lets security researchers and pentesters safely experiment with agent exploits, tool calling, jailbreaks, and attack paths—helping define what “pen tester 2.0” looks like.

    Chapters / In this episode:

    00:00 – Intro: who we are & why a new AI security podcast
    02:00 – What is agentic AI vs a plain LLM?
    03:10 – SDR agents, SOC workflows & new “Layer 8 / Layer 9” problems
    09:00 – Prompt injection 101: direct vs indirect attacks & context windows
    12:00 – Chatbots vs agents and why agent risk is higher
    15:00 – Foundation model trust & the Anthropic horror-story jailbreak demo
    19:30 – Why jailbreaks are (currently) an unsolved problem
    22:30 – Social engineering parallels & detecting AI / agentic attacks
    27:00 – V-HACK: intentionally vulnerable agent lab for pentesters
    32:00 – Securing agents: WAFs, runtime protection, identity & MCP proxies
    36:00 – Scanners, evals vs real pentesting & terrifying token bills
    39:00 – Least privilege, DLP & identity for SDR and payroll-style agents
    44:00 – “Don’t trust, verify”: threat modeling & testing agents early
    46:00 – Future of AI security: consolidation, CNAPs & SOC-as-an-agent
    49:00 – Magic wand: fixing context & memory in agents
    50:30 – Closing thoughts & what’s next

    Links mentioned:

    ZioSec – www.ziosec.com
    V-HACK (GitHub) – https://github.com/ZioSec/VHACK

    About the guests:

    Andrius Useckas has 25+ years in security and now focuses on agentic AI security, offensive testing, and red teaming for enterprise AI deployments.

    Alex Gatz is a Staff Security Architect at ZioSec. He has a background in emergency medicine and construction, then transitioned into AI in 2014 working on NLP, deep learning, anomaly detection, and now AI security.

    If you’re building or testing agents in 2026, this episode gives you a practical look at how real attack paths work, what breaks in production, and how to defend before attackers get there first.

    Mehr anzeigen Weniger anzeigen
    51 Min.