I noticed recently that an app we use for the kids has started using AI in its animation. The results aren’t bad, exactly. They’re just mediocre—competent enough to ship, unremarkable enough to forget. But the volume is clearly higher. More scenes, more characters, more content than before. The tradeoff is visible: not worse, but flattened. This is what “feeling the AGI” looks like in practice—not a sudden leap into superintelligence, but a gradual replacement of craft with automation. The economic logic is clear. AI is cheaper and dramatically faster. The decision makes itself.

What makes this moment strange is the disconnect between how AI is discussed and how it is actually used. Studies suggest that over 80% of corporate AI projects fail to achieve their projected ROI, and companies that roll out access to frontier models frequently report severe under-utilization. And yet, surveys consistently find that more than half of workers are already using AI—they just aren’t telling their employers. Why not? Partly because enterprise solutions either don’t exist or are poorly suited to how they actually work. But also because admitting you used AI can feel like admitting you didn’t know something. In workplaces where expertise is currency, reaching for a tool that thinks for you feels like an erosion of your inherent value—even though effective AI use often requires more knowledge, not less.

Taken together, these threads tell an interesting story, though I’m not sure how it ends. Enterprises invest heavily but can’t get employees to use official tools. Employees use AI anyway, on their own terms, without telling anyone. Meanwhile, consumer-facing products quietly swap human effort for generated output, and quality flattens into slop. The transformation is real, but it’s happening in the gaps between formal systems, in ways that resist measurement and governance. Everyone is moving, but not in the same direction.