Exploring Information Security

By: Timothy De Block

About this title

The Exploring Information Security podcast interviews a different professional each week exploring topics, ideas, and disciplines within information security. Prepare to learn, explore, and grow your security mindset.
  • What are the AI Vulnerabilities We Need to Worry About
    Feb 17 2026
    Episode Summary
    Timothy De Block sits down with Keith Hoodlet, security researcher and founder of Securing.dev, to navigate the chaotic and rapidly evolving landscape of AI security. They discuss why "learning" is the only vital skill left in security, how large language models (LLMs) actually work (and how to break them), and the terrifying rise of AI agents that can access your email and bank accounts. Keith explains the difference between inherent AI vulnerabilities, such as model inversion, and the reckless implementation of AI agents that leads to "free DoorDash" exploits. They also dive into the existential risks of disinformation, where bots manipulate human outrage and poison the very data future models will train on.
    Key Topics
    • Learning in the AI Era: The "zero to hero" approach: how Keith uses tools like Claude to generate comprehensive learning plans and documentation for his team, and why accessible tools like YouTube and AI make learning technical concepts easier than ever.
    • Understanding the "Black Box": How LLMs Work: Keith breaks down LLMs as a "four-dimensional array of numbers" (weights), where words are converted into tokens and calculated against training data. Open weights let users manipulate these weights to reinforce specific data (e.g., European history vs. Asian Pacific history).
    • AI Vulnerabilities vs. Attacks:
      • Prompt injection: "social engineering" the chatbot into performing unintended actions.
      • Membership inference: determining whether specific data (like yours) is in a training set, which has massive implications for GDPR and the "right to be forgotten."
      • Model inversion: stealing weights and training data. Keith cites speculation that Chinese espionage used this technique to "shortcut" their own model training using US labs' data.
      • Evasion attacks: a technique rather than a vulnerability. Example: Jason Haddix bypassing filters to generate an image of Donald Duck smoking a cigar by describing the character's attributes rather than naming him.
    • The "Agent" Threat: Giving AI agents access to browsers, file systems (~/.ssh), and payment methods is a massive security risk ("running with katanas"). The DoorDash exploit: a real-world example where a user tricked a friend's email-connected AI bot into ordering him free lunch for a week.
    • Supply Chain & Disinformation:
      • Hallucination squatting: AI-generated code pulling in non-existent packages, which attackers can then register to inject malware.
      • The Cracker Barrel outrage: a bot-driven disinformation campaign that manufactured fake outrage over a logo change, fooling a major company and the news media.
      • Data poisoning: the Russian "Pravda network" seeding false information to shape the training data of future US models.
    Memorable Quotes
    • "It’s like we’re running with... not just scissors, we’re running with katanas. And the ground that we're on is constantly changing underneath our feet." — Keith Hoodlet
    • "We never should have taught runes to sand and allowed it to think." — Keith Hoodlet
    • "The biggest bombshell here is that we are the vulnerability. Because we're going to get manipulated by AI in some form or fashion." — Timothy De Block
    Resources Mentioned
    • Books: Active Measures: The Secret History of Disinformation and Political Warfare by Thomas Rid; The Intelligent Investor by Benjamin Graham; Thinking, Fast and Slow by Daniel Kahneman; Churchill: A Life by Martin Gilbert.
    • Videos & Articles: 3Blue1Brown (YouTube): "But what is a neural network?" (Deep Learning series); Keith’s blog: "Life After the AI Apocalypse".
    About the Guest
    Keith Hoodlet is a security researcher at Trail of Bits and the creator of Securing.dev. A self-described "technologist who wants to move to the woods," Keith specializes in application security, threat modeling, and deciphering the complex intersection of code and human behavior.
    Website: securing.dev
    Mastodon: Keith on Infosec.Exchange
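    As a toy illustration of the tokenization step described in the episode (an LLM converts words into integer token IDs before any arithmetic against its weights happens), here is a minimal sketch. Real models use learned subword vocabularies such as BPE; the whole-word vocabulary below is purely an illustrative assumption, not how any production model tokenizes.

    ```python
    # Toy tokenizer: map each word to an integer token ID.
    # The vocabulary here is invented for illustration only; real LLMs
    # use learned subword vocabularies with tens of thousands of entries.
    VOCAB = {"the": 0, "cat": 1, "sat": 2, "<unk>": 3}

    def tokenize(text):
        """Map each whitespace-separated word to its token ID; unknown words map to <unk>."""
        return [VOCAB.get(word, VOCAB["<unk>"]) for word in text.lower().split()]

    print(tokenize("The cat sat"))  # [0, 1, 2]
    ```

    The model's weights then operate only on these integer sequences, which is why attacks like evasion work at the level of descriptions rather than exact banned words.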
    52 min
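  A minimal sketch of one defense against the hallucination-squatting risk discussed in this episode: before installing dependencies an AI assistant suggests, verify each name against a trusted index. `KNOWN_PACKAGES` stands in for a real index lookup (e.g. querying PyPI), and the package names are illustrative assumptions.

  ```python
  # Stand-in for a trusted package index; in practice this would be a
  # lookup against PyPI or an internal mirror. Names are illustrative.
  KNOWN_PACKAGES = {"requests", "numpy", "flask"}

  def vet_dependencies(suggested):
      """Split AI-suggested package names into (known, suspicious) lists."""
      known = [p for p in suggested if p in KNOWN_PACKAGES]
      suspicious = [p for p in suggested if p not in KNOWN_PACKAGES]
      return known, suspicious

  known, suspicious = vet_dependencies(["requests", "fastjsonx"])
  # "fastjsonx" (a hypothetical hallucinated name) is flagged: it is not in
  # the index, so an attacker could register it and serve malware to anyone
  # who installs it blindly.
  print(known, suspicious)
  ```

  The point of the check is that a hallucinated package name is also an unregistered one, which is exactly what makes it squattable.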
  • [RERELEASE] How to make time for a home lab (Copy)
    Feb 10 2026
    Chris (@cmaddalena) and I were asked on Twitter, "How do you make time for a home lab?" We answered there, but decided the question made a good topic for an EIS episode. Home labs are great for advancing a career or breaking into information security. Finding time for one requires making it a priority. It also helps to have a purpose: the time I spend with a home lab is often sporadic and coincides with research in a given area.
    23 min
  • [RERELEASE] How to build a home lab
    Feb 3 2026
    Chris (@cmaddy) and I have submitted to a couple of calls for training on the topic of building a home lab, at CircleCityCon, Converge, and BSides Detroit this summer. I will also be speaking on this subject at ShowMeCon. Home labs are great for advancing a career or breaking into information security. The bar for getting started with one is really low: a gaming laptop with decent specifications works great, and for those lacking hardware or funds there are plenty of online resources to take advantage of.
    30 min
No reviews yet