Why I Turned Off ChatGPT’s Memory — Mike Taylor
Taylor argues against ChatGPT’s memory feature, introducing the concept of “context rot” — the slow buildup of stale preferences, misremembered facts, and contradictory signals in an LLM’s memory that quietly degrades output quality over time. His background in internet marketing taught him to use incognito mode for unbiased search results; he applies the same logic to AI.
His examples are both funny and instructive: a Kanye West quote in his custom instructions caused ChatGPT to try making every website feature “as dope as possible,” and the system served him BBQ rib recommendations suspiciously tailored to his Hoboken zip code. The broader argument is that memory creates unpredictable, hard-to-diagnose quality degradation because any past interaction could influence current outputs in opaque ways.
Taylor prefers carefully curated context in the prompt itself, where he controls exactly what information shapes the response. This trades convenience for predictability and reproducibility.
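Taylor's approach can be sketched as explicit, per-request context assembly. This is a minimal illustration, not code from the article; the function and field names are hypothetical:

```python
def build_prompt(question: str, context_items: list[str]) -> str:
    """Assemble a prompt from only the context explicitly chosen for it.

    Unlike persistent memory, nothing from past sessions leaks in:
    every fact that shapes the response is listed here, in plain sight,
    so results are predictable and reproducible.
    """
    lines = ["Context (curated for this request only):"]
    lines += [f"- {item}" for item in context_items]
    lines += ["", f"Question: {question}"]
    return "\n".join(lines)


# Each call starts from a clean slate; changing context_items is the
# only channel through which past information can influence the output.
prompt = build_prompt(
    "Suggest a landing-page headline.",
    ["Audience: B2B marketers", "Tone: plain, no hype"],
)
```

The trade-off Taylor accepts is visible here: the caller does the bookkeeping that memory would otherwise do silently.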
RDCO Mapping
Critical insight for our own agent memory architecture. Our channels agent runs with persistent context across sessions, so Taylor's "context rot" is a real risk for us: stale instructions compounding with fresh ones until output quality quietly degrades. Our compaction and working-context.md approach partially addresses this, but we should also consider periodic context audits. Connects to our context management thinking.
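A periodic context audit could start as simply as flagging entries past a review window. This is a hypothetical sketch; working-context.md has no timestamped entry format today, so the (date, text) pair structure is an assumption:

```python
from datetime import date, timedelta


def audit_context(
    entries: list[tuple[date, str]], today: date, max_age_days: int = 30
) -> list[str]:
    """Return entry texts that have outlived their review window.

    `entries` is a list of (noted_on, text) pairs -- a hypothetical
    format, since our working context does not yet record dates.
    Flagged entries are candidates for deletion or re-confirmation,
    not automatic removal.
    """
    cutoff = today - timedelta(days=max_age_days)
    return [text for noted_on, text in entries if noted_on < cutoff]


stale = audit_context(
    [
        (date(2024, 1, 5), "prefers verbose logs"),
        (date(2024, 3, 1), "migrating to uv"),
    ],
    today=date(2024, 3, 10),
)
# stale == ["prefers verbose logs"]
```

Age is only a proxy for rot; contradiction between entries is the harder signal, and a human pass over the flagged list is still needed.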