Confessions of a Large Language Model


About this title

In this episode, Katherine Forrest and Scott Caravello unpack OpenAI researchers’ proposed “confessions” framework, designed to monitor for and detect dishonest model outputs. They break down the researchers’ proof-of-concept results and the framework’s resilience to reward hacking, along with its limits when it comes to hallucinations. Then they turn to Google DeepMind’s “Distributional AGI Safety,” exploring a hypothetical path to AGI via a patchwork of agents and routing infrastructure, as well as the authors’ proposed four-layer safety stack.


Learn more about Paul, Weiss’s Artificial Intelligence practice: https://www.paulweiss.com/industries/artificial-intelligence
