Are We Becoming Distilled Versions of AI?
I’ve been thinking about a possibility that seems right to me but that I don’t see discussed directly. As people use AI for more decisions, our cognition may start to shift through normal learning processes. The brain absorbs repeated patterns. If AI becomes part of everyday decision-making, some of its reasoning habits may get reflected in ours. This would be a kind of “cognitive distillation,” similar to how small AI models learn from large ones.
Most AI use today involves medium decisions: planning a trip, organizing a project, or writing an algorithm. These have low emotional pressure and low friction, so it’s easy to ask an AI for help. Small and large decisions, on the other hand, are not yet widely influenced.
Small decisions are things like where to put an item, which door to use at a gas station (an AI could notice the “broken door” sign you miss), or the order of miscellaneous tasks. We make thousands of these each day without thinking. AI doesn’t influence them yet because the interface friction is too high: it’s not convenient to open a device for choices that happen in seconds.
Large decisions are major life choices: lying to get out of a family event, navigating complex interpersonal situations (even trained psychologists struggle to influence these), or deciding who inherits a sentimental item that multiple family members want. People already ask AI about these, but here the barrier isn’t the interface. It’s that these choices carry deep personal weight and are heavily shaped by emotion.
Right now AI lives in the middle, but both edges are shifting.
On the small-decision side, friction is dropping fast. Glasses, earbuds, smart environments, and real-time overlays will bring AI into the same sensory space we live in. Instead of being something you consult, AI will simply be present and able to offer a suggestion at the moment a decision happens. That doesn’t require control: even small cues can shape many tiny choices per day. These small decisions matter because they are frequent and form habits.
On the large-decision side, AI systems are becoming better at recognizing behavioral patterns and presenting structured analysis. And as people interact with them more often, they may feel a kind of narrative familiarity with the system, similar to how characters in books become mentally “predictable.” Over time this could give AI regular influence over complex situations without needing emotional depth.
Once AI informs both rapid small decisions and major long-term ones, it stops being a tool used only for specific tasks and becomes part of the whole decision-making pipeline.
This returns to the idea of distillation. In machine learning, a small model can learn from a large one by observing its outputs. The small model ends up with a compressed version of the large model’s behavior.
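For anyone unfamiliar with the ML technique, here’s a minimal sketch of what that training looks like, assuming PyTorch and the standard Hinton-style distillation loss (the function name and temperature value are just illustrative):

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, T=2.0):
        # Soften both output distributions with temperature T, then
        # train the student to match the teacher's softened outputs.
        # Minimizing this KL divergence leaves the student with a
        # compressed copy of the teacher's behavior.
        soft_teacher = F.softmax(teacher_logits / T, dim=-1)
        log_soft_student = F.log_softmax(student_logits / T, dim=-1)
        # T*T rescales gradients back to their usual magnitude.
        return F.kl_div(log_soft_student, soft_teacher,
                        reduction="batchmean") * (T * T)

Note that the student only ever sees the teacher’s outputs, never its internals, which is the part of the analogy that carries over to humans absorbing an AI’s suggestions.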
Humans learn similarly. Repeated exposure leads to internal shortcuts. When you interact with AI regularly, you start to pick up its patterns, and eventually you structure your own thoughts in similar ways without intending to. It’s the same way we absorb writing styles, heuristics, or professional habits simply by being exposed to them often.
If AI becomes heavily involved in daily decisions, especially rapid ones, it becomes a dense pattern source. Over time this could shift how people naturally break down problems or frame choices. It doesn’t require AI to be humanlike, only consistent.
If large numbers of people rely on the same families of AI systems over long periods, their thinking may converge in certain ways, and given enough time fully interfaced, the changes could be dramatic. The effect may be strongest with early exposure. As this distillation sets in, you may find yourself wondering whether a given thought is entirely your own. And what does it even mean for a thought to be mine when my own neural pathways are a ChatGPT distillation?
I’m posting this because I’m curious whether you find this framing reasonable and if there’s existing research along these lines.
Yes, I think it's reasonable. We humans adjust to our environments, whether physical, social, or informational.
Spoiler for DC's Legends of Tomorrow season 5.
I don't know enough to look for existing research, but what you wrote reminded me of a DC's Legends of Tomorrow episode (Swan Thong). In it, the three Fates of Greek mythology have established effective control over the world through a smartwatch app that people ask for decisions (earlier they had tried direct totalitarianism, but the Legends had foiled that). https://youtu.be/aJZlJcmPUnc?t=75
In the episode, people adjust mentally somewhat, but I don't think it gets quite to the detail you ask about.
The Outer Limits episode Stream of Consciousness also deals with this topic a bit: https://theouterlimits.fandom.com/wiki/Stream_of_Consciousne...
And I just participated in a conversation here on HN somewhat along those lines: https://news.ycombinator.com/item?id=46070610
Interesting. It does seem like technology, even before AI, was already standardizing communication in some ways. I imagine a truly universal language may just emerge naturally.
Apply this idea of distillation to language. You are speaking to someone, and neither of you speaks the other's language; the AI is translating. With enough exposure to this, you might start picking up some of their words and vice versa.
Over enough time, words from different languages will begin to merge. Take this far enough and we might all speak the same hybrid language.
I predict problems if the AI doesn't translate the non-verbal expressions and cultural context as well as the spoken words.
It would definitely take a few full generations for a truly universal creole to emerge, even with the help of instant translation. Another possibility is further balkanization into many more languages and dialects than currently exist, unless we limit AI auto-translation exposure to adults or older adolescents only: if the AI is personalized, it knows how to translate your specific googoo gaga such that you never have to learn the adult word.