Episodes

  • How Block Deployed AI Agents to 12,000 Employees in 8 Weeks w/ MCP | Angie Jones
    Jan 21 2026

    How do you deploy AI agents to 12,000 employees in just 8 weeks? How do you do it safely? Angie Jones, VP of Engineering for AI Tools and Enablement at Block, joins the show to share exactly how her team pulled it off.


    Block (the company behind Square and Cash App) became an early adopter of Model Context Protocol (MCP) and built Goose, their open-source AI agent that's now a reference implementation for the Agentic AI Foundation. Angie shares the challenges they faced, the security guardrails they built, and why letting employees choose their own models was critical to adoption.


    We also dive into vibe coding (including Angie's experience watching Jack Dorsey vibe code a feature in 2 hours), how non-engineers are building their own tools, and what MCP unlocks when you connect multiple systems together.
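
    For listeners who want to see how small an MCP integration can be, here is a minimal sketch of a custom MCP server using the FastMCP interface from the official MCP Python SDK. The tool below is a hypothetical stand-in, not code from Goose or Block:

    ```python
    # Hypothetical example of a small internal MCP server; the tool is a
    # stand-in, not anything from Goose or Block. Uses the FastMCP interface
    # from the official MCP Python SDK (pip install mcp).
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("team-tools")  # server name is arbitrary

    @mcp.tool()
    def lookup_incident(incident_id: str) -> str:
        """Summarize an incident by ID (stubbed for illustration)."""
        # A real server would query an incident-management API here.
        return f"Incident {incident_id}: status unknown (stub)"

    if __name__ == "__main__":
        mcp.run()  # serves over stdio so an MCP client like Goose can launch it
    ```

    A client like Goose can launch this server, discover lookup_incident, and call it alongside tools from other MCP servers, which is the "connecting multiple systems" effect the episode describes.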


    Chapters:

    00:00 Introduction

    02:02 How Block deployed AI agents to 12,000 employees

    05:04 Challenges with MCP adoption and security at scale

    07:10 Why Block supports multiple AI models (Claude, GPT, Gemini)

    08:40 Open source models and local LLM usage

    09:58 Measuring velocity gains across the organization

    10:49 Vibe coding: Benefits, risks & Jack Dorsey's 2-hour feature build

    13:46 Block's contributions to the MCP protocol

    14:38 MCP in action: Incident management + GitHub workflow demo

    15:52 Addressing MCP criticism and security concerns

    18:41 The Agentic AI Foundation announcement (Block, Anthropic, OpenAI, Google, Microsoft)

    21:46 AI democratization: Non-engineers building MCP servers

    24:11 How to get started with MCP and prompting tips

    25:42 Security guardrails for enterprise AI deployment

    29:25 Tool annotations and human-in-the-loop controls

    30:22 OAuth and authentication in Goose

    32:11 Use cases: Engineering, data analysis, fraud detection

    35:22 Goose in Slack: Bug detection and PR creation in 5 minutes

    38:05 Goose vs Claude Code: Open source, model-agnostic philosophy

    38:17 Live Demo: Council of Minds MCP server (9-persona debate)

    45:52 What's next for Goose: IDE support, ACP, and the $100K contributor grant

    47:57 Where to get started with Goose


    Connect with Angie on LinkedIn: https://www.linkedin.com/in/angiejones/

    Angie's Website: https://angiejones.tech/

    Follow Angie on X: https://x.com/techgirl1908

    Goose GitHub: https://github.com/block/goose


    Connect with Conor on LinkedIn: https://www.linkedin.com/in/conorbronsdon/

    Follow Conor on X: https://x.com/conorbronsdon

    Modular: https://www.modular.com/


    Presented By: Galileo AI

    Download Galileo's Mastering Multi-Agent Systems for free here: https://galileo.ai/mastering-multi-agent-systems


    Topics Covered:

    - How Block deployed Goose to all 12,000 employees

    - Building enterprise security guardrails for AI agents

    - Model Context Protocol (MCP) deep dive

    - Vibe coding benefits and risks

    - The Agentic AI Foundation (Block, Anthropic, OpenAI, Google, Microsoft, AWS)

    - MCP sampling and the Council of Minds demo

    - OAuth authentication for MCP servers

    - Goose vs Claude Code and other AI coding tools

    - Non-engineers building AI tools

    - Fraud detection with AI agents

    - Goose in Slack for real-time bug fixing

    50 min
  • Gemini 3 & Robot Dogs: Inside Google DeepMind's AI Experiments | Paige Bailey
    Jan 14 2026

    Google DeepMind is reshaping the AI landscape with an unprecedented wave of releases—from Gemini 3 to robotics and even data centers in space.

    Paige Bailey, AI Developer Relations Lead at Google DeepMind, joins us to break down the full Google AI ecosystem. From her unique journey as a geophysicist-turned-AI-leader who helped ship GitHub Copilot, to now running developer experience for DeepMind's entire platform, Paige offers an insider's view of how Google is thinking about the future of AI.

    The conversation covers the practical differences between Gemini 3 Pro and Flash, when to use the open-source Gemma models, and how tools like Anti-Gravity IDE, Jules, and Gemini CLI fit into developer workflows. Paige also demonstrates Space Math Academy—a gamified NASA curriculum she built using AI Studio, Colab, and Anti-Gravity—showing how modern AI tools enable rapid prototyping.
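
    To make the Flash-vs-Pro tradeoff concrete, here is a hedged sketch using the google-genai Python SDK; the Gemini 3 model identifier strings below are assumptions, so check AI Studio for the current names:

    ```python
    # Sketch of choosing between Gemini Flash and Pro with the google-genai SDK
    # (pip install google-genai). The Gemini 3 model ID strings are assumptions;
    # check AI Studio for the current names.
    from google import genai

    client = genai.Client()  # reads the API key from the environment

    def ask(model: str, prompt: str) -> str:
        response = client.models.generate_content(model=model, contents=prompt)
        return response.text

    # Flash: cheaper and faster; a good default for high-volume, latency-sensitive calls.
    print(ask("gemini-3-flash", "Summarize this changelog in one sentence: ..."))

    # Pro: stronger reasoning for complex, multi-step tasks.
    print(ask("gemini-3-pro", "Outline a migration plan from TensorFlow to JAX."))
    ```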


    The discussion then ventures into AI's physical frontier: robotics powered by Gemini on Raspberry Pi, Google's robotics trusted tester program, and the ambitious Project Suncatcher exploring data centers in space.

    00:00 Introduction

    01:30 Paige's Background & Connection to Modular

    02:29 Gemini Integration Across Google Products

    03:04 Jules, Gemini CLI & Anti-Gravity IDE Overview

    03:48 Gemini 3 Flash vs Pro: Live Demo & Pricing

    06:10 Choosing the Right Gemini Model

    09:42 Google's Hardware Advantage: TPUs & JAX

    10:16 TensorFlow History & Evolution to JAX

    11:45 NeurIPS 2025 & Google's Research Culture

    14:40 Google Brain to DeepMind: The Merger Story

    15:24 PaLM 2 to Gemini: Scaling from 40 People

    18:42 Gemma Open Source Models

    20:46 Anti-Gravity IDE Deep Dive

    23:53 MCP Protocol & Chrome DevTools Integration

    26:57 Gemini CLI in Google Colab

    28:00 Image Generation & AI Studio Traffic Spikes

    28:46 Space Math Academy: Gamified NASA Curriculum

    31:31 Vibe Coding: Building with AI Studio & Anti-Gravity

    36:02 AI From Bits to Atoms: The Robotics Frontier

    36:40 Stanford Puppers: Gemini on Raspberry Pi Robots

    38:35 Google's Robotics Trusted Tester Program

    40:59 AI in Scientific Research & Automation

    42:25 Project Suncatcher: Data Centers in Space

    45:00 Sustainable AI Infrastructure

    47:14 Non-Dystopian Sci-Fi Futures

    47:48 Closing Thoughts & Resources


    - Connect with Paige on LinkedIn: https://www.linkedin.com/in/dynamicwebpaige/

    - Follow Paige on X: https://x.com/DynamicWebPaige

    - Paige's Website: https://webpaige.dev/

    - Google DeepMind: https://deepmind.google/

    - AI Studio: https://ai.google.dev


    Connect with our host Conor Bronsdon:

    - Substack – https://conorbronsdon.substack.com/

    - LinkedIn https://www.linkedin.com/in/conorbronsdon/


    Presented By: Galileo.ai

    Download Galileo's Mastering Multi-Agent Systems for free here: https://galileo.ai/mastering-multi-agent-systems


    Topics Covered:

    - Gemini 3 Pro vs Flash comparison (pricing, speed, capabilities)

    - When to use Gemma open-source models

    - Anti-Gravity IDE, Jules, and Gemini CLI workflows

    - Google's TPU hardware advantage

    - History of TensorFlow, JAX, and Google Brain

    - Space Math Academy demo (gamified education)

    - AI-powered robotics (Stanford Puppers on Raspberry Pi)

    - Project Suncatcher (orbital data centers)

    51 min
  • Explaining Eval Engineering | Galileo's Vikram Chatterji
    Dec 19 2025

    You've heard of evaluations—but eval engineering is the difference between AI that ships and AI that's stuck in prototype.

    Most teams still treat evals like unit tests: write them once, check a box, move on. But when you're deploying agents that make real decisions, touch real customers, and cost real money, those one-time tests don't cut it. The companies actually shipping production AI at scale have figured out something different—they've turned evaluations into infrastructure, into IP, into the layer where domain expertise becomes executable governance.

    Vikram Chatterji, CEO and Co-founder of Galileo, returns to Chain of Thought to break down eval engineering: what it is, why it's becoming a dedicated discipline, and what it takes to actually make it work. Vikram shares why generic evals are plateauing, how continuous learning loops drive accuracy, and why he predicts "eval engineer" will become as common a role as "prompt engineer" once was.

    In this conversation, Conor and Vikram explore:

    • Why treating evals as infrastructure—not checkboxes—separates production AI from prototypes
    • The plateau problem: why generic LLM-as-a-judge metrics can't break 90% accuracy (a minimal sketch of this judge pattern follows the list below)
    • How continuous human feedback loops improve eval precision over time
    • The emerging "eval engineer" role and what the job actually looks like
    • Why 60-70% of AI engineers' time is already spent on evals
    • What multi-agent systems mean for the future of evaluation
    • Vikram's framework for baking trust AND control into agentic applications
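
    As a point of reference for the plateau problem above, here is a minimal sketch of the generic LLM-as-a-judge pattern: one static rubric, no calibration loop. The rubric, model choice, and threshold are illustrative assumptions, not Galileo's eval method:

    ```python
    # Minimal "generic LLM-as-a-judge": one static rubric, no calibration loop.
    # Rubric, model, and threshold are illustrative assumptions, not Galileo's method.
    from openai import OpenAI

    client = OpenAI()

    RUBRIC = (
        "Rate the RESPONSE for factual accuracy against the CONTEXT on a 1-5 scale. "
        "Reply with only the number.\n\nCONTEXT: {context}\n\nRESPONSE: {response}"
    )

    def judge(context: str, response: str) -> int:
        out = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user",
                       "content": RUBRIC.format(context=context, response=response)}],
        )
        return int(out.choices[0].message.content.strip())

    # Used as a one-time checkbox, a judge like this drifts from human judgment;
    # eval engineering recalibrates the rubric and scores against human feedback.
    passed = judge("The 2024 report lists revenue of $3.1B.",
                   "Revenue was $3.1B in 2024.") >= 4
    ```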

    Plus: Conor shares news about his move to Modular and what it means for Chain of Thought going forward.

    Chapters:

    00:00 – Introduction: Why Evals Are Becoming IP

    01:37 – What Is Eval Engineering?

    04:24 – The Eval Engineering Course for Developers

    05:24 – Generic Evals Are Plateauing

    08:21 – Continuous Learning and Human Feedback

    11:01 – Human Feedback Loops and Eval Calibration

    13:37 – The Emerging Eval Engineer Role

    16:15 – What Production AI Teams Actually Spend Time On

    18:52 – Customer Impact and Lessons Learned

    24:28 – Multi-Agent Systems and the Future of Evals

    30:27 – MCP, A2A Protocols, and Agent Authentication

    33:23 – The Eval Engineer Role: Product-Minded + Technical

    34:53 – Final Thoughts: Trust, Control, and What's Next

    Connect with Conor Bronsdon:

    Substack – https://conorbronsdon.substack.com/

    LinkedIn – https://www.linkedin.com/in/conorbronsdon/

    X (Twitter) – https://x.com/ConorBronsdon

    Learn more about Eval Engineering: https://galileo.ai/evalengineering

    Connect with Vikram Chatterji:

    LinkedIn – https://www.linkedin.com/in/vikram-chatterji/

    37 min
  • Debunking AI's Environmental Panic | Andy Masley
    Nov 26 2025

    AI is destroying the planet—or so we've been told. This week on Chain of Thought, we tackle one of the most persistent and misleading narratives in the AI conversation.

    Andy Masley, Director of Effective Altruism DC, joins host Conor Bronsdon to fact-check the absurd AI environmental claims you've heard at parties, in articles, and even in bestselling books. Andy recently went viral for discovering what he calls "the single most egregious math mistake" he's ever seen in a book—a data center water usage calculation in Karen Hao's NYT Bestseller, Empire of AI, that was off by a factor of 4,500.

    In this conversation, Andy and Conor break down the myths around AI’s water and energy usage and explore:

    • The viral Empire of AI error and what it reveals about the broader debate

    • Why most AI water usage statistics are misleading or flat-out wrong

    • How one ChatGPT prompt represents just 1/150,000th of your daily emissions (a back-of-the-envelope version follows this list)

    • Trade-offs around data center cooling + decision making

    • Why "tribal thinking" about AI is distorting environmental activism

    • Where AI might actually help the climate through deep learning optimization
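
    As promised above, here is a back-of-the-envelope version of the per-prompt comparison. Every input is an assumption (published per-prompt energy estimates span roughly 0.3 to 3 Wh), so this reproduces the order of magnitude rather than the episode's exact 1/150,000 figure:

    ```python
    # Back-of-the-envelope per-prompt emissions. All inputs are assumptions:
    # published per-prompt energy estimates span roughly 0.3-3 Wh, so this
    # reproduces the order of magnitude, not the episode's exact figure.
    wh_per_prompt = 1.0            # assumed energy per ChatGPT prompt (Wh)
    grid_kg_per_kwh = 0.4          # assumed grid carbon intensity (kg CO2e/kWh)
    annual_tonnes = 15.0           # assumed US per-capita emissions (t CO2e/yr)

    prompt_kg = (wh_per_prompt / 1000) * grid_kg_per_kwh   # ~0.0004 kg CO2e
    daily_kg = annual_tonnes * 1000 / 365                  # ~41 kg CO2e per day

    print(f"One prompt is ~1/{daily_kg / prompt_kg:,.0f} of daily emissions")
    # -> roughly 1/100,000: the same order of magnitude as the episode's claim
    ```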

    If you've ever felt guilty about using AI tools, been cornered at a party about AI's environmental impact, or simply want to understand what the data actually says, this episode, and Andy's deep-dive articles, arm you with the facts.

    Chapters:

    00:00 – Introduction: The Party Guilt Problem

    01:54 – Andy's Background and What Sparked This Work

    03:50 – The 4,500x Error in Empire of AI

    06:39 – Breaking Down the Math: Liters vs. Cubic Meters

    10:39 – The Unintended Consequence: Air Cooling vs. Water Cooling

    12:51 – Karen Hao's Response and What's Still Missing

    19:08 – Why Environmentalists Should Focus Elsewhere

    21:41 – The Danger of Tribal Thinking About AI

    25:49 – What Is Effective Altruism (And Why People Attack It)

    29:15 – EA, AI Risk, and P(doom)

    34:31 – Why Misinformation Hurts Your Own Side

    37:39 – Using ChatGPT Is Not Bad for the Environment

    42:14 – The Party Rebuttal: Practical Comparisons

    45:23 – Water Use Reality: 1/800,000th of Your Daily Footprint

    48:27 – The Personal Carbon Footprint Distraction

    53:38 – Data Centers: Efficiency vs. Whether to Build

    55:13 – AI's Net Climate Impact: The Positive Case

    59:34 – Deep Learning, Smart Grids, and Climate Optimization

    1:03:45 – Final Thoughts


    Key references

    IEA Study: AI and climate change - https://www.iea.org/reports/energy-and-ai/ai-and-climate-change#abstract

    Nature: https://www.nature.com/articles/s44168-025-00252-3

    The Empire of AI Error: https://andymasley.substack.com/p/empire-of-ai-is-wildly-misleading

    Using ChatGPT isn’t bad for the environment: https://andymasley.substack.com/p/a-short-summary-of-my-argument-that

    https://andymasley.substack.com/p/a-cheat-sheet-for-conversations-about


    Connect with Andy Masley:

    Substack – https://andymasley.substack.com/

    X (Twitter) – https://x.com/AndyMasley

    Connect with Conor Bronsdon:

    Substack – https://conorbronsdon.substack.com/

    LinkedIn – https://www.linkedin.com/in/conorbronsdon/

    X (Twitter) – https://x.com/ConorBronsdon

    59 min
  • The Critical Infrastructure Behind the AI Boom | Cisco CPO Jeetu Patel
    Nov 19 2025

    AI is accelerating at a breakneck pace, but model quality isn't the only constraint we face. Running AI at scale also demands major infrastructure, energy, security, and data-pipeline investments. This week on Chain of Thought, Cisco's President and Chief Product Officer Jeetu Patel joins host Conor Bronsdon to reveal what it actually takes to build the critical foundation for the AI era.

    Jeetu breaks down the three bottlenecks he sees holding AI back today:

    • Infrastructure limits: not enough power, compute, or data center capacity

    • A trust deficit: non-deterministic models powering systems that must be predictable

    • A widening data gap: human-generated data plateauing while machine data explodes

    Jeetu then shares how Cisco is tackling these challenges through secure AI factories, edge inference, open multi-model architectures, and global partnerships with Nvidia, G42, and sovereign cloud providers. Jeetu also explains why he thinks enterprises will soon rely on thousands of specialized models — not just one — and how routing, latency, cost, and security shape this new landscape.

    Conor and Jeetu also explore high-performance leadership and team culture, discussing building high-trust teams, embracing constructive tension, staying vigilant in moments of success, and the personal experiences that shaped Jeetu’s approach to innovation and resilience.

    If you want a clearer picture of the global AI infrastructure race, how high-level leaders are thinking about the future, and what it all means for enterprises, developers, and the future of work, this conversation is essential.

    Chapters:

    00:00 – Welcome to Chain of Thought

    00:48 – AI and Jobs: Beyond the Hype

    06:15 – The Real AI Opportunity: Original Insights

    10:00 – Three Critical AI Constraints: Infrastructure, Trust, and Data

    16:27 – Cisco's AI Strategy and Platform Approach

    19:18 – Edge Computing and Model Innovation

    22:06 – Strategic Partnerships: Nvidia, G42, and the Middle East

    29:18 – Acquisition Strategy: Platform Over Products

    32:03 – Power and Infrastructure Challenges

    36:06 – Building Trust Across Global Partnerships

    38:03 – US vs. China: The AI Infrastructure Race

    40:33 – America's Venture Capital Advantage

    42:06 – Acquisition Philosophy: Strategy First

    45:45 – Defining Cisco's True North

    48:06 – Mission-Driven Innovation Culture

    50:15 – Hiring for Hunger, Curiosity, and Clarity

    56:27 – The Power of Constructive Conflict

    1:00:00 – Career Lessons: Continuous Learning

    1:02:24 – The Email Question

    1:04:12 – Joe Tucci's Four-Column Exercise

    1:08:15 – Building High-Trust Teams

    1:10:12 – The Five Dysfunctions Framework

    1:12:09 – Leading with Vulnerability

    1:16:18 – Closing Thoughts and Where to Connect


    Connect with Jeetu Patel:

    LinkedIn – https://www.linkedin.com/in/jeetupatel/

    X (Twitter) – https://x.com/jpatel41

    Cisco - https://www.cisco.com/


    Connect with Conor Bronsdon:

    Substack – https://conorbronsdon.substack.com/

    LinkedIn – https://www.linkedin.com/in/conorbronsdon/

    X (Twitter) – https://x.com/ConorBronsdon


    1 hr 18 min
  • Beyond Transformers: Maxime Labonne on Post-Training, Edge AI, and the Liquid Foundation Model Breakthrough
    Nov 12 2025

    The transformer architecture has dominated AI since 2017, but it's not the only approach to building LLMs, and new architectures are now bringing LLMs to edge devices.


    Maxime Labonne, Head of Post-Training at Liquid AI and creator of the 67,000+ star LLM Course, joins Conor Bronsdon to challenge the AI architecture status quo. Liquid AI’s hybrid architecture, combining transformers with convolutional layers, delivers faster inference, lower latency, and dramatically smaller footprints without sacrificing capability.

    This alternative architectural philosophy creates models that run effectively on phones and laptops without compromise.
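
    To illustrate the general idea (and only the general idea; this is not Liquid AI's actual LFM architecture), here is a toy PyTorch block that interleaves a linear-time convolutional mixer with full attention:

    ```python
    # Toy PyTorch sketch of the general hybrid idea: a linear-time depthwise
    # convolution mixes local context, full attention handles global context.
    # This is NOT Liquid AI's LFM architecture; all dimensions are arbitrary.
    import torch
    import torch.nn as nn

    class HybridBlock(nn.Module):
        def __init__(self, dim: int = 256, kernel: int = 3, heads: int = 4):
            super().__init__()
            self.conv = nn.Conv1d(dim, dim, kernel, padding=kernel // 2, groups=dim)
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm1 = nn.LayerNorm(dim)
            self.norm2 = nn.LayerNorm(dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, dim)
            # Conv1d wants (batch, dim, seq), so transpose around the conv.
            x = x + self.conv(self.norm1(x).transpose(1, 2)).transpose(1, 2)
            h = self.norm2(x)
            attn_out, _ = self.attn(h, h, h)
            return x + attn_out

    out = HybridBlock()(torch.randn(2, 128, 256))  # -> shape (2, 128, 256)
    ```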


    But reimagined architecture is only half the story. Maxime unpacks the post-training reality most teams struggle with: the challenges and opportunities of synthetic data, how to balance helpfulness against safety, Liquid AI's approach to evals, RAG architectural approaches, how he sees AI on edge devices evolving, hard-won lessons from shipping LFM 1 through LFM 2, and much more.

    If you're tired of surface-level AI takes and want to understand the architectural and engineering decisions behind production LLMs from someone building them in the trenches, this is your episode.


    Connect with Maxime Labonne:

    LinkedIn – https://www.linkedin.com/in/maxime-labonne/

    X (Twitter) – @maximelabonne

    About Maxime – https://mlabonne.github.io/blog/about.html

    HuggingFace – https://huggingface.co/mlabonne

    The LLM Course – https://github.com/mlabonne/llm-course

    Liquid AI – https://liquid.ai


    Connect with Conor Bronsdon:

    X (Twitter) – @conorbronsdon

    Substack – https://conorbronsdon.substack.com/

    LinkedIn – https://www.linkedin.com/in/conorbronsdon/


    00:00 Intro — Welcome to Chain of Thought

    00:27 Guest Intro — Maxime Labonne of Liquid AI

    02:21 The Hybrid LLM Architecture Explained

    06:30 Why Bigger Models Aren’t Always Better

    11:10 Convolution + Transformers: A New Approach to Efficiency

    18:00 Running LLMs on Laptops and Wearables

    22:20 Post-Training as the Real Moat

    25:45 Synthetic Data and Reliability in Model Refinement

    32:30 Evaluating AI in the Real World

    38:11 Benchmarks vs Functional Evals

    43:05 The Future of Edge-Native Intelligence

    48:10 Closing Thoughts & Where to Find Maxime Online


    53 min
  • Architecting AI Agents: The Shift from Models to Systems | Aishwarya Srinivasan, Fireworks AI Head of AI Developer Relations
    Oct 8 2025

    Most AI agents are built backwards, starting with models instead of system architecture.

    Aishwarya Srinivasan, Head of AI Developer Relations at Fireworks AI, joins host Conor Bronsdon to explain the shift required to build reliable agents: stop treating them as model problems and start architecting them as complete software systems. Benchmarks alone won't save you.

    Aish breaks down the evolution from prompt engineering to context engineering, revealing how production agents demand careful orchestration of multiple models, memory systems, and tool calls. She shares battle-tested insights on evaluation-driven development, the rise of open source models like DeepSeek v3, and practical strategies for managing autonomy with human-in-the-loop systems. The conversation addresses critical production challenges, ranging from LLM-as-judge techniques to navigating compliance in regulated environments.
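
    Here is a minimal sketch of the human-in-the-loop pattern described here, where risky tool calls pause for explicit approval before executing; the tool names and risk policy are hypothetical:

    ```python
    # Minimal human-in-the-loop gate: the agent proposes tool calls, but anything
    # on a risk list needs explicit approval first. Tool names and the policy are
    # hypothetical, not from Fireworks AI.
    RISKY_TOOLS = {"issue_refund", "delete_record"}  # assumed risk policy

    def execute_tool_call(name: str, args: dict, approve=input) -> str:
        if name in RISKY_TOOLS:
            answer = approve(f"Agent wants to run {name}({args}). Allow? [y/N] ")
            if answer.strip().lower() != "y":
                return f"{name} blocked by human reviewer"
        # Dispatch to the real tool implementation here (stubbed).
        return f"{name} executed with {args}"

    print(execute_tool_call("lookup_order", {"id": 42}))   # runs autonomously
    print(execute_tool_call("issue_refund", {"id": 42}))   # pauses for approval
    ```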

    Connect with Aishwarya Srinivasan:

    LinkedIn: https://www.linkedin.com/in/aishwarya-srinivasan/

    Instagram: https://www.instagram.com/the.datascience.gal/

    Connect with Conor: https://www.linkedin.com/in/conorbronsdon/

    00:00 Intro — Welcome to Chain of Thought

    00:22 Guest Intro — Aish Srinivasan of Fireworks AI

    02:37 The Challenge of Responsible AI

    05:44 The Hidden Risks of Reward Hacking

    07:22 From Prompt to Context Engineering

    10:14 Data Quality and Human Feedback

    14:43 Quantifying Trust and Observability

    20:27 Evaluation-Driven Development

    30:10 Open Source Models vs. Proprietary Systems

    34:56 Gaps in the Open-Source AI Stack

    38:45 When to Use Different Models

    45:36 Governance and Compliance in AI Systems

    50:11 The Future of AI Builders

    56:00 Closing Thoughts & Follow Aish Online

    Follow the hosts

    Follow Atin

    Follow Conor

    Follow Vikram

    Follow Yash

    53 min
  • The accidental algorithm: Melisa Russak, AI research scientist at WRITER
    Oct 1 2025

    This week, we're doing something special and sharing an episode from another podcast we love: The Humans of AI by our friends at Writer. We're huge fans of their work, and you might remember Writer's CEO, May Habib, from the inaugural episode of our own show.

    From The Humans of AI:

    Learn how Melisa Russak, lead research scientist at WRITER, stumbled upon fundamental machine learning algorithms, completely unaware of existing research — twice. Her story reveals the power of approaching problems with fresh eyes and the innovative breakthroughs that can occur when constraints become catalysts for creativity.

    Melisa explores the intersection of curiosity-driven research, accidental discovery, and systematic innovation, offering valuable insights into how WRITER is pushing the boundaries of enterprise AI. Tune in to learn how her journey from a math teacher in China to a pioneer in AI research illuminates the future of technological advancement.


    Follow the hosts

    Follow Atin

    Follow Conor

    Follow Vikram

    Follow Yash


    Follow Today's Guest(s)

    Check out Writer’s YouTube channel to watch the full interviews.

    Learn more about WRITER at writer.com.

    Follow Melisa on LinkedIn

    Follow May on LinkedIn


    Check out Galileo

    Try Galileo

    Agent Leaderboard

    21 min