
The Ruby AI Podcast


By: Valentino Stoll, Joe Leo

About this title

The Ruby AI Podcast explores the intersection of Ruby programming and artificial intelligence, featuring expert discussions, innovative projects, and practical insights. Join us as we interview industry leaders and developers to uncover how Ruby is shaping the future of AI.

© 2026 The Ruby AI Podcast
  • Real vs. Fake AI with Evan Phoenix
    Jan 6 2026

    In this episode of the Ruby AI Podcast, hosts Valentino Stoll and Joe Leo talk with Evan Phoenix, a seasoned Ruby programmer and CEO of Mirren. The conversation explores the origin of Evan's name, his career trajectory, and the integration of AI into development workflows. They discuss the distinction between real and fake AI in products, the impact of AI on engineering practices, and the future of AI in development tools. Evan shares insights on performance optimization, human-centric AI interactions, and the role of AI in deployment and architecture detection. Later in the conversation, the three turn to the evolving landscape of software development, focusing on AI, automation, and the Ruby programming language: how AI can assist in analyzing code bases, the future of development with ambient agents, and the potential resurgence of monolithic architectures. The discussion also touches on the importance of human-centric design in software, the significance of experimentation, and the unique strengths of Ruby in the current tech environment. The conversation concludes with predictions about the future of small teams in software development and the impact of AI on coding practices.

    1 hr and 2 min.
  • Running Self-Hosted Models with Ruby and Chris Hasinski
    Dec 2 2025

    In this episode of the Ruby AI Podcast, hosts Valentino Stoll and Joe Leo
    welcome AI and Ruby expert Chris Hasinski. They delve into the benefits and
    challenges of self-hosting AI models, including control over model updates, cost
    considerations, and the ability to fine-tune models. Chris shares his journey
    from machine learning at UC Davis to his extensive work in AI and Ruby, touching
    upon his contributions to open source projects and the Ruby AI community. The
    discussion also covers the limitations of current LLMs (Large Language Models)
    in generating Ruby code, the importance of high-quality data for effective AI,
    and the potential for Ruby to become a strong contender in AI development.
    Whether you're a Ruby enthusiast or interested in the intersection of AI and
    software development, this episode offers valuable insights and practical
    advice.
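    For a concrete flavor of the self-hosting discussion, see the short illustrative Ruby sketch after this episode's entry.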

    00:00 Introduction and Guest Welcome
    00:31 Why Self-Host Models?
    01:28 Challenges and Benefits of Self-Hosting
    03:14 Chris's Background in Machine Learning
    04:13 Applications Beyond Text
    06:39 Fine-Tuning Models
    12:27 Ruby in Machine Learning
    16:06 Distributed Training and Model Porting
    18:22 Choosing and Deploying Models
    25:19 Testing and Data Engineering in Ruby
    27:56 Database Naming Conventions in Different Languages
    28:19 Importance of Data Quality for AI
    18:03 Monitoring Locally Hosted AI Models
    29:37 Challenges with LLMs and Performance Tracking
    31:09 Improving Developer Experience in Ruby
    31:45 Ruby's Ecosystem for Machine Learning
    32:43 The Need for Investment in Ruby's AI Tools
    38:25 Challenges with AI Code Generation in Ruby
    43:35 Future Prospects for Ruby in AI
    51:26 Conclusion and Final Thoughts

    54 min.
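
    The self-hosting topics above invite a small illustration. The sketch below is not code from the episode; it assumes a locally hosted model served through an Ollama-style HTTP endpoint on localhost:11434 and a model named "llama3" (both assumptions, adjust to your own setup), and it uses only Ruby's standard library.

    ```ruby
    # Minimal sketch: querying a self-hosted model over a local HTTP API.
    # Assumes an Ollama-style server on localhost:11434 and a model named
    # "llama3"; both are assumptions, not details from the episode.
    require "json"
    require "net/http"
    require "uri"

    uri = URI("http://localhost:11434/api/generate")

    request = Net::HTTP::Post.new(uri, "Content-Type" => "application/json")
    request.body = {
      model: "llama3",                                  # hypothetical local model name
      prompt: "Explain Ruby blocks in one sentence.",
      stream: false                                     # ask for a single JSON reply
    }.to_json

    # Send the request and print the model's generated text.
    response = Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
    puts JSON.parse(response.body).fetch("response")
    ```
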
  • The Latent Spark: Carmine Paolino on Ruby’s AI Reboot
    Nov 18 2025

    In this episode of the Ruby AI Podcast, hosts Joe Leo and Valentino Stoll interview Carmine Paolino, the developer behind Ruby LLM. The discussion covers the significant strides and rapid adoption of Ruby LLM since its release, rooted in Paolino's philosophy of building simple, effective, and adaptable tools. The podcast delves into the nuances of upgrading Ruby LLM, its ever-expanding functionality, and the core principles driving its design. Paolino reflects on the personal motivations and community-driven contributions that have propelled the project to over 3.6 million downloads. Key topics include the philosophy of progressive disclosure, the challenges of multi-agent systems in AI, and innovative ways to manage contexts in LLMs. The episode also touches on improving Ruby's concurrency handling using Async and Ractors, the future of AI app development in Ruby, and practical advice for developers leveraging AI in their applications.
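    A short illustrative Ruby LLM usage sketch, under stated assumptions, appears after this episode's entry.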

    00:00 Introduction and Guest Welcome
    00:39 Dependabot Upgrade Concerns
    01:22 Ruby LLM's Success and Philosophy
    05:03 Progressive Disclosure and Model Registry
    08:32 Challenges with Provider Mechanisms
    16:55 Multi-Agent AI Assisted Development
    27:09 Understanding Context Limitations in LLMs
    28:20 Exploring Context Engineering in Ruby LLM
    29:27 Benchmarking and Evaluation in Ruby LLM
    30:34 The Role of Agents in Ruby LLM
    39:09 The Future of AI Apps with Ruby
    39:58 Async and Ruby: Enhancing Performance
    45:12 Practical Applications and Challenges
    49:01 Conclusion and Final Thoughts

    52 min.
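
    Since this conversation centers on the Ruby LLM gem, a tiny usage sketch may help situate it. It follows the gem's configure-then-chat pattern, but the provider, environment variable, and model name here are assumptions chosen for illustration, not details from the episode.

    ```ruby
    # Minimal sketch of a Ruby LLM chat flow (illustrative setup, not code from
    # the episode). Assumes `gem install ruby_llm` and an OpenAI API key.
    require "ruby_llm"

    RubyLLM.configure do |config|
      config.openai_api_key = ENV["OPENAI_API_KEY"]  # assumed provider and env var
    end

    chat = RubyLLM.chat(model: "gpt-4o-mini")        # model name is an assumption
    response = chat.ask("What makes Ruby a good fit for building AI apps?")
    puts response.content
    ```
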
No reviews yet