Episodes

  • Multimodal Models: Combining Vision, Language, and More
    Feb 17 2026

    This episode explores multimodal AI: models that can see, read, and even hear. We explain how models like OpenAI’s CLIP learn joint representations of images and text (by matching pictures with their captions), enabling capabilities like image captioning and visual search. You’ll learn why multimodal systems represent the next leap toward more human-like AI, processing text, images, and audio together for richer understanding. We also discuss recent multimodal breakthroughs (from GPT-4’s vision features to Google’s Gemini) and how they allow AI to analyze content the way we do with multiple senses.
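    The contrastive matching idea behind CLIP can be sketched in a few lines. This is an illustrative toy, not CLIP itself: the embeddings below are made-up vectors standing in for the outputs of real image and text encoders.

```python
import numpy as np

# Toy sketch of CLIP-style contrastive matching (illustrative only):
# each image and each caption is embedded into the same vector space,
# and training pushes matching pairs toward high cosine similarity.

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Pretend embeddings for 3 images and their 3 captions (made-up values).
image_emb = normalize(np.array([[1.0, 0.1, 0.0],
                                [0.0, 1.0, 0.1],
                                [0.1, 0.0, 1.0]]))
text_emb = normalize(np.array([[0.9, 0.2, 0.0],
                               [0.1, 1.0, 0.0],
                               [0.0, 0.1, 0.9]]))

# Pairwise cosine similarities; row i should peak at column i.
logits = image_emb @ text_emb.T

# Retrieval: for each image, the best-matching caption index.
best_caption = logits.argmax(axis=1)
print(best_caption)  # -> [0 1 2]
```

    With real encoders the same similarity matrix drives both image captioning retrieval (row-wise argmax) and visual search (column-wise argmax).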

    29 min.
  • Efficient Fine-Tuning: Adapting Large Models on a Budget
    Feb 3 2026

    This episode dives into strategies for fine-tuning gigantic AI models without needing massive compute. We explain parameter-efficient fine-tuning methods like LoRA (Low-Rank Adaptation), which freezes the original model and trains only small adapter weights, and QLoRA, which goes a step further by quantizing model parameters to 4-bit precision. You’ll learn why techniques like these have become essential for customizing large language models on modest hardware, how they can match the performance of full fine-tuning, and what recent results (like fine-tuning a 65B model on a single GPU) mean for practitioners.
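    The core LoRA idea from the summary above can be sketched directly. This is a minimal illustration with made-up dimensions, not a training recipe: the pretrained weight stays frozen while only a small low-rank update would be trained.

```python
import numpy as np

# Minimal LoRA sketch (illustrative, no training loop): the frozen
# weight W is augmented with a low-rank update B @ A, and only A and B
# (a tiny fraction of the parameters) would receive gradients.

d_out, d_in, r = 64, 64, 4          # r is the low-rank bottleneck
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))                 # B starts at zero, so the
                                         # adapter is a no-op initially

def forward(x, alpha=8):
    # Effective weight: frozen W plus scaled low-rank update.
    return (W + (alpha / r) * B @ A) @ x

x = rng.standard_normal(d_in)
# With B = 0 the adapted model matches the frozen base model exactly.
assert np.allclose(forward(x), W @ x)

print(A.size + B.size, "trainable vs", W.size, "frozen")  # -> 512 trainable vs 4096 frozen
```

    The parameter count is the point: here the adapter is 512 values against 4,096 frozen ones, and the gap widens dramatically at real model sizes.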

    29 min.
  • Diffusion Models: AI Image Generation Through Noise
    Jan 20 2026

    In this episode, we break down what diffusion models are and why they’ve become the go-to method for AI image generation. You’ll learn how these models are trained by gradually adding noise to images and learning to remove it, so that at generation time they can transform pure noise into coherent images, enabling use cases from art creation to image restoration. We also explore recent advances like latent diffusion, which compresses the generation process for efficiency, and discuss how diffusion techniques have achieved state-of-the-art results in text-to-image tasks while remaining flexible for control and guidance.
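    The forward (noising) half of the process described above has a simple closed form. The sketch below is illustrative, using a common linear beta schedule; the learned reverse (denoising) model is not shown.

```python
import numpy as np

# Sketch of the diffusion forward (noising) process: data is blended
# with Gaussian noise according to a schedule until only noise remains.
# Generation runs this in reverse with a learned denoiser (not shown).

T = 1000
betas = np.linspace(1e-4, 0.02, T)        # common linear schedule
alphas_bar = np.cumprod(1.0 - betas)      # cumulative signal fraction

def noisy_sample(x0, t, rng):
    # Closed form for x_t given x_0:
    #   x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

rng = np.random.default_rng(0)
x0 = np.ones(8)                           # stand-in for an image
early = noisy_sample(x0, 10, rng)         # still close to the data
late = noisy_sample(x0, 999, rng)         # nearly pure noise
print(alphas_bar[10], alphas_bar[999])
```

    The schedule makes the trade-off visible: at step 10 almost all of the signal survives, while by step 999 essentially none does, which is exactly the regime the denoiser must learn to invert.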

    25 min.
  • Graph Neural Networks: Learning from Connections, Not Just Data
    Sep 30 2025

    This episode breaks down what graph neural networks (GNNs) are and why they matter. You’ll learn how GNNs use nodes and edges to represent relationships and how message passing lets models make sense of social, biological, and networked data. We’ll also cover recent advances like PGNN for multi-modal graphs and common challenges such as scalability and over-smoothing.
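    A single message-passing step can be sketched concretely. This toy example is illustrative only: the graph, features, and weights are made up, and mean aggregation stands in for whatever scheme a real GNN layer uses.

```python
import numpy as np

# One step of message passing on a toy graph (illustrative): each node
# averages its neighbours' features, then applies a shared linear map.

# 4-node path graph 0-1-2-3, adjacency with self-loops included.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
deg = A.sum(axis=1, keepdims=True)
A_norm = A / deg                    # row-normalize -> mean aggregation

H = np.eye(4)                       # initial node features (one-hot)
W = np.full((4, 2), 0.5)            # shared weight, made-up values

# Aggregate neighbour features, transform, apply ReLU.
H_next = np.maximum(A_norm @ H @ W, 0.0)
print(H_next.shape)  # -> (4, 2)
```

    Stacking such layers lets information flow further across the graph, which is also where over-smoothing appears: after many rounds of averaging, node representations become hard to tell apart.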

    31 min.
  • Neuro-Symbolic AI: Combining Learning With Logic
    Sep 16 2025

    In this episode, we explain what neuro-symbolic AI is and why it matters. You’ll learn how neural networks handle patterns, how symbolic systems handle rules, and how combining the two can help models reason more reliably. We also cover real examples where this approach is already being applied in assistants and robotics, showing how it could make AI systems more trustworthy and useful.
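    The division of labour described above can be caricatured in a few lines. Everything here is a hypothetical stand-in: a toy "neural" scorer proposes answers, and a symbolic rule filters out proposals that contradict a small knowledge base.

```python
# Toy neuro-symbolic sketch (illustrative only): a stand-in "neural"
# scorer proposes pattern-based answers with confidence scores, and a
# symbolic rule check rejects any answer the knowledge base contradicts.

def neural_scorer(question):
    # Hypothetical pattern-based guesses (a real system would use a
    # trained network); note the plausible-sounding but false claim.
    return [("penguin can fly", 0.6), ("penguin is a bird", 0.9)]

KNOWN_FACTS = {"penguin is a bird", "penguins cannot fly"}

def symbolic_check(claim):
    # Rule: reject a claim directly contradicted by the knowledge base.
    if claim == "penguin can fly" and "penguins cannot fly" in KNOWN_FACTS:
        return False
    return True

def answer(question):
    # Highest-confidence proposal that survives the logic check wins.
    proposals = sorted(neural_scorer(question), key=lambda p: -p[1])
    for claim, score in proposals:
        if symbolic_check(claim):
            return claim
    return "no consistent answer"

print(answer("what do we know about penguins?"))  # -> penguin is a bird
```

    The pattern is the point: the learned component stays free to guess, while the symbolic layer guarantees that certain classes of wrong answers never reach the user.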

    25 min.
  • LLMs in Chip Design: How AI Is Entering the Hardware Workflow
    Sep 2 2025

    In this episode, we look at how large language models are being used in chip and hardware design. We break down what LLM-aided design actually means, how models can generate HDL code, assist with testbench creation, and even support formal verification. You’ll also hear about real-world tools like ChatEDA and how companies are starting to use AI in their silicon development workflows.

    20 min.
  • How Embeddings and Vector Databases Power Generative AI
    Aug 19 2025

    This episode explains how embedding models turn language into numerical vectors, and how vector databases like Pinecone and Weaviate, or libraries like FAISS, store and search those vectors efficiently. You’ll learn how this system enables GenAI models to retrieve relevant information in real time, power RAG pipelines, and scale up tool-augmented LLM workflows.
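    Stripped of indexing tricks, vector retrieval reduces to nearest-neighbour search over embeddings. The sketch below is the brute-force equivalent of a flat index, with made-up toy vectors standing in for real model embeddings.

```python
import numpy as np

# Minimal sketch of vector retrieval (illustrative): documents live as
# embedding vectors, and a query is answered by nearest-neighbour
# search under cosine similarity. The vectors are toy values, not the
# output of a real embedding model.

docs = ["intro to LoRA", "diffusion models", "graph neural networks"]
doc_vecs = np.array([[0.9, 0.1, 0.0],
                     [0.1, 0.9, 0.1],
                     [0.0, 0.1, 0.9]])
doc_vecs = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)

def search(query_vec, k=1):
    q = query_vec / np.linalg.norm(query_vec)
    sims = doc_vecs @ q                 # cosine similarity per document
    top = np.argsort(-sims)[:k]         # indices of the best matches
    return [(docs[i], float(sims[i])) for i in top]

# A query vector lying "near" the second document:
print(search(np.array([0.2, 1.0, 0.0])))  # best match: "diffusion models"
```

    A RAG pipeline is this loop at scale: embed the query, retrieve the top-k documents, and paste them into the LLM's context before generating.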

    19 min.
  • Agentic AI: What Happens When Models Start Acting
    Aug 5 2025

    In this episode, we explore agentic AI: systems built not just to predict or classify, but to plan, reason, and act autonomously. We break down what makes these models different, how they use tools, memory, and feedback to complete tasks, and why they represent the next step beyond traditional LLMs. You’ll hear how concepts like action loops, world modeling, and autonomous decision-making are shaping everything from research tools to enterprise automation.
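    The action loop described above can be sketched as simple control flow. Every component here is a hypothetical stand-in: a real agent would call a model API for the policy and real tools for execution.

```python
# Toy agent loop (illustrative only): a stand-in "policy" picks an
# action, a tool executes it, and the observation is fed back into the
# history until the policy decides the task is finished.

def fake_policy(goal, history):
    # Hypothetical policy: use the calculator once, then finish with
    # the last observation. A real agent would query an LLM here.
    if not history:
        return ("calculator", "2 + 3")
    return ("finish", history[-1])

def calculator(expr):
    # Minimal tool: evaluates "a + b" expressions.
    a, _, b = expr.split()
    return str(int(a) + int(b))

def run_agent(goal):
    history = []
    for _ in range(5):                  # cap the loop to avoid runaway
        action, arg = fake_policy(goal, history)
        if action == "finish":
            return arg
        history.append(calculator(arg)) # observe the tool's output
    return None

print(run_agent("add 2 and 3"))  # -> 5
```

    The cap on iterations is a standard safeguard: without it, an agent whose policy never emits a finish action would loop forever.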

    19 min.