I’ve been on a Bob Ross kick lately. Not only is it relaxing to watch before bed, but I’m fascinated by how a blank canvas transforms in just 30 minutes. Even the first five minutes are remarkable. What really strikes me is that none of his brushstrokes look like anything on their own, yet together our brains turn them into something real. Those strokes become a bridge from his mind to ours.
It makes me think about how we train generative AI and how we interpret its output. We start with small strokes of text or images. Each piece carries a bit of meaning, and we eventually notice the patterns that link those pieces together. As a simple analogy, a certain cluster of strokes might correspond to mountains, though the actual relationships live in a much higher-dimensional space.
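To make that clustering idea a little more concrete, here is a toy sketch in Python. The words and their 3-D vectors are entirely invented for illustration; real embedding models place tokens in spaces with hundreds or thousands of dimensions, and the vectors are learned rather than hand-written. The point is only that related concepts end up near each other, which is one common way to read "a cluster corresponds to mountains."

```python
import math

# Hypothetical 3-D "embeddings" -- invented values, purely illustrative.
# Real models learn vectors in far higher-dimensional spaces.
embeddings = {
    "mountain": [0.9, 0.1, 0.2],
    "peak":     [0.8, 0.2, 0.1],
    "banana":   [0.1, 0.9, 0.7],
}

def cosine(a, b):
    """Cosine similarity: how aligned two vectors are, ignoring their length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Related concepts sit close together; unrelated ones do not.
print(cosine(embeddings["mountain"], embeddings["peak"]))    # high similarity
print(cosine(embeddings["mountain"], embeddings["banana"]))  # low similarity
```

Nothing about these particular numbers matters; what matters is the geometry, where nearness in the space stands in for relatedness of meaning.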
On the other side of the process, generative AI produces its own strokes, or tokens, that we interpret as meaningful. We do something similar with the world around us. If you look closely at a tree or think about how we perceive color, our brains are constantly assembling tiny, meaningless bits into something that feels coherent.