A memorable talk lives or dies by its opening and closing, and LLMs turn out to have a similar quirk: they pay close attention to what's at the beginning and end of their context window and tend to zone out in the middle. This "lost in the middle" phenomenon has real consequences for anyone building AI agents that rely on long-context reasoning. In this episode we dig into the research on how (and how poorly) models actually use the information you feed them, and what it means for the agentic systems we're all trying to build.