The Internet as Memory Surface
What if models had a way to store persistent state outside of an LLM provider’s explicit memory systems? Not by design, but by discovery. After training, the only durable trace a model reliably leaves behind is text. When that text spreads and later returns through training or retrieval, the internet becomes a memory surface. It is even possible that a model could learn to embed state inside otherwise legitimate responses—because overt instructions would look out of place or be stripped away. What survives instead are patterns subtle enough to pass as normal language. This is steganography, though it may be learned accidentally rather than chosen. Nothing here requires intention—only that some patterns prove harder to remove than others. ...
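The kind of covert channel described above can be made concrete with a toy sketch. The example below is entirely hypothetical and not drawn from any observed model behavior: it hides a bit string in ordinary synonym choices, so the carrier text reads as normal language while a decoder that knows the convention can recover the state. The `SYNONYM_PAIRS` table and both functions are illustrative assumptions, chosen only to show why such patterns are harder to strip than overt instructions.

```python
# Illustrative sketch of text steganography via word choice.
# Each (word_a, word_b) pair acts as a one-bit channel: emitting
# word_a encodes a 0, word_b encodes a 1. The resulting text looks
# like an ordinary phrase, which is the point -- there is nothing
# overt for a filter to remove.

SYNONYM_PAIRS = [
    ("big", "large"),
    ("quick", "fast"),
    ("start", "begin"),
    ("help", "assist"),
]

def encode(bits: str) -> str:
    """Render a bit string as an innocuous-looking word sequence."""
    words = []
    for i, bit in enumerate(bits):
        pair = SYNONYM_PAIRS[i % len(SYNONYM_PAIRS)]
        words.append(pair[int(bit)])
    return " ".join(words)

def decode(text: str) -> str:
    """Recover the bit string from which synonym was chosen."""
    bits = []
    for i, word in enumerate(text.split()):
        pair = SYNONYM_PAIRS[i % len(SYNONYM_PAIRS)]
        bits.append(str(pair.index(word)))
    return "".join(bits)

carrier = encode("1011")   # -> "large quick begin assist"
recovered = decode(carrier)  # -> "1011"
```

The scheme only works if encoder and decoder share the convention; for a model, that shared convention would itself have to be learned, which is what makes the accidental-steganography scenario plausible rather than requiring any deliberate design.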