Tech Threads: Weaving the Intelligent Future

By: Baya Systems

About this title

This podcast, hosted by Baya Systems, explores the cutting edge of technology, from AI acceleration to data movement and chiplet innovation. Each episode dives into groundbreaking advancements shaping the future of computing, featuring insights from industry experts on the trends and challenges defining the tech landscape. Tune in to stay ahead in the rapidly evolving world of technology. ©2025 Baya Systems
  • The Architecture of "Open" Intelligence
    Oct 14 2025
    In this episode of Tech Threads: Weaving the Intelligent Future, legendary chip architect Jim Keller joins Nandan Nayampally, Baya Systems’ Chief Commercial Officer, to explore how openness, modularity, and simplicity are redefining the architecture of intelligence.

    From his early work on Apple’s A4 through A7 processors to today’s AI-driven computing revolution, Jim shares how every leap in performance has come from breaking complexity down into composable, modular layers. Referencing The Systems Bible, he explains why “you can’t fix broken complicated systems”, and why the only path forward is to design simpler components that can scale and evolve together.

    The conversation spans:
    - The AI paradigm shift. Why traditional compute models no longer scale.
    - How data movement, not just compute, has become the new frontier.
    - The rise of chiplets and software-driven fabrics for scalable design.
    - The power of open ecosystems like RISC-V and OCA to democratize AI innovation.
    - Building a path toward sovereign and collaborative compute platforms worldwide.

    Listen as Jim Keller unpacks the engineering philosophy behind building open, intelligent systems and what it means for the future of AI and computing at scale.
    44 min
  • AI from Edge to Cloud: Hype vs Reality
    Aug 14 2025
    In this episode of Tech Threads, Nandan Nayampally sits down with Sally Ward-Foxton (EE Times) and Dr. Ian Cutress (More Than Moore) for an unfiltered look at the state of AI, from the far edge to hyperscale data centers.

    Ahead of the recording, we asked our LinkedIn followers to weigh in on some of the biggest questions in AI today, from bottlenecks in system design to the future of GPUs. Those poll results are revealed and discussed in the episode, bringing your insights directly into the conversation.

    The discussion covers where the real bottlenecks lie in AI system design, whether “AI at the edge” is living up to the hype, and if GPUs will continue to dominate or give way to new architectures. With insights on hardware-software co-design, open vs proprietary ecosystems, and the realities of scaling AI infrastructure, this episode blends deep technical perspective with candid industry observations.

    If you care about AI performance, power efficiency, and what’s next in compute architecture, this is a discussion you won’t want to miss.
    48 min
  • Edge AI Revolution: Scaling Intelligence from the Network Edge to the Data Center
    Jul 15 2025
    In this episode of Tech Threads: Weaving the Intelligent Future, Baya Systems' CCO Nandan Nayampally welcomes Fabrizio Del Maffeo, founder and CEO of Axelera AI, one of Europe’s most promising AI semiconductor startups. The conversation opens with a sharp look at the growing shift from cloud to edge AI, exploring the power, cost, and latency constraints and, more importantly, the regional and use-case considerations that are reshaping how and where intelligence is deployed.
    The discussion covers strategies for deploying AI at the network edge, adapting to rapidly evolving workloads, and leveraging digital in-memory computing to enable low-power, high-throughput inference acceleration. It also delves into the future of chiplet-based design, the role of open and programmable hardware, and broader efforts to democratize compute. With shared perspectives on “scale within” and scalable system architectures, this episode offers a compelling view into the future of distributed AI.
    40 min
No reviews yet