
The Mirror and the Gaps

“Technology alone doesn’t drive change. It’s how people choose to use it.” — Satya Nadella, CEO of Microsoft

Working with AI usually starts the same way. You say something, the response misses the mark, and you try again. You rephrase, add context, clarify intent. At first glance, this just looks like learning how to prompt more effectively. But the longer I sit with it, the more it feels like something else is being revealed.

AI doesn’t reliably pick up on tone, emotional nuance, social timing, or relationship dynamics in the way people do. It doesn’t smooth things over or guess what you probably meant. When meaning doesn’t land, it simply doesn’t land. The gap stays open, and you’re forced to look at it.

That gap is familiar, even if we don’t usually notice it. In human conversation, unclear communication is often quietly repaired by the other person. They fill in missing context, make assumptions based on past experience, bias, or role, and move forward as if understanding has occurred. Once that happens, assumed understanding can feel indistinguishable from actual understanding.

AI doesn’t do that kind of repair. It doesn’t commit to a best guess and then build meaning on top of it. When something is underspecified, it stays underspecified. Prompting and iterating make this visible in a way that human conversation rarely does, not because AI is better at understanding, but because it refuses to pretend that it does.

I haven’t felt like working with AI has changed how I communicate with people. It feels more like I’ve learned how to navigate a different kind of interpreter. But that distinction itself is revealing. So much human communication depends on generosity, inference, and unspoken labor that it’s easy to miss when clarity comes from someone else doing that work for us.

Even if nothing changes about how we communicate with people, noticing that hidden effort matters. It shows how often understanding is built through assumption and repair rather than precision, and how rarely we notice it while it’s happening.


Externalizing Thought

“AI will be a tool that helps us think better.”  — Sam Altman, CEO of OpenAI

Humans have always thought with tools like writing, maps, and diagrams because they help us remember, calculate, and reason. In essence, they pull parts of our thinking out of our heads so we can look at them. Once we make thoughts visible, they become easier to examine, revise, or set aside.

AI feels different. It’s not just that thought is being externalized, but how quickly and responsively it happens. Instead of capturing finished ideas, AI can hold onto fragments, half-formed questions, and contradictions. It can hold directions that haven’t decided what they are yet. Thoughts don’t need to resolve before they appear. With AI, they can unfold in the open.

Working this way often feels less like producing something and more like thinking out loud in a room that responds. It becomes much easier to notice how ideas loop, where they default to familiar paths, and how often conclusions want to arrive too early. This is not because the system understands what you mean. Rather, it’s because AI can reflect the structure of what you’re doing back to you.

On the surface, this sounds like collaboration. But it lacks the shared awareness and intention that come with human collaboration. The value isn’t insight or wisdom delivered from elsewhere. It’s seeing thought in motion, before it settles into something you identify with or defend.

AI doesn’t change how humans think, but it does make how we think harder to hide from ourselves. It continues our pattern of externalizing cognition, but with the power to show us our habits as they happen, whether we’re looking for them or not.


Cognition, Not Consciousness

“Fluent language should not be confused with understanding.” — Margaret Mitchell, Leading AI Ethics Researcher 

We’ve seen that AI produces language, logic, and coherence, but it does so without inner experience attached to what it says. It reflects our patterns of thinking without any awareness behind them. What we’re interacting with is more of a structure than a mind. 

That distinction matters because we are used to encountering language backed by awareness, feeling, and intention. When responses are fluid and consistent, it’s natural to assume we are communicating with a mind. After all, language has long been one of our primary signals of presence.

AI separates those two things in a way we don’t often encounter. The patterns remain, but the interior doesn’t. What looks like understanding is really a familiar structure repeating itself. What sounds reflective is an echo of patterns, not awareness.

This doesn’t mean anything deceptive is happening. But seeing cognition without consciousness, patterns without presence, can feel disorienting precisely because we’re not used to seeing one without the other.

AI isn’t showing us a new kind of mind. It’s showing us how quickly patterns start to feel like presence.


Projection and Anthropomorphism

“AI systems are mirrors of the priorities, preferences, and assumptions of the societies that build them.” — Kate Crawford, Atlas of AI

When something responds fluidly, we tend to relate to it as if there’s someone there. That shift happens quickly and usually without reflection. We’re not being misled. We just have a familiar habit of adding meaning when patterns feel coherent enough to hold it.

We are meaning-making machines, constantly projecting intention and understanding onto roles, institutions, relationships, and ideas, often without noticing when patterns have quietly taken the place of actual understanding. Once that projection settles, it can feel like something we’ve discovered rather than something we’ve added.

AI greases the wheels on this tendency by offering a responsive surface without a human interior to confirm or resist the meaning we place onto it. What comes back feels intelligent or reassuring largely because those are the shapes we’re already prepared to recognize.


Outsourcing Clarity

“AI is extremely good at answers. That doesn’t mean it’s good at judgment.” — Ethan Mollick, Wharton AI Research 

AI is good at removing friction. Questions move quickly toward answers, uncertainty collapses into clarity with very little effort, and in many cases, that’s genuinely helpful.

But friction has a function. Not knowing slows us down and creates space for reflection, a space where we sit with a question before deciding what it means or what to do with it. When that disappears, something subtle changes.

With AI, answers arrive before the questions have had time to do their work. This doesn’t make us less thoughtful, but it does change the rhythm of thinking.

There’s power in noticing what gets bypassed when clarity becomes immediate, and what kinds of understanding quietly depend on staying with uncertainty a little longer.


Design as Amplification

“Design is not neutral.” — Mike Monteiro, Ruined by Design 

If AI reflects human patterns, then designing interactive experiences with AI is really about which of those patterns get reinforced. 

Every designed experience encourages certain ways of interacting and quietly discourages others. In our design work, we make choices, consciously or not, that imply values like speed over pause, certainty over uncertainty, or projection over restraint. None of this needs to be intentional to be effective.

This is less about good or bad design than about recognizing influence. When AI systems scale, the patterns they reward scale with them. What feels small at the interface level can shape how people think, decide, and relate over time.


Returning to the Mirror

“AI reflects the values of the people who build and deploy it.” — Timnit Gebru, AI Ethics Researcher 

AI doesn’t tell us who we are. It doesn’t reveal hidden truths or offer wisdom on its own. But it does make familiar patterns easier to see.

The ways we clarify, assume, project, or rush toward answers are reflected back to us by AI with very little interference.

In that sense, AI isn’t a mirror of intelligence so much as a mirror of mind, something to notice rather than follow.

Tyler Benari, UX Strategist & Seasoned Human

Based in San Francisco, Tyler is a lead UX strategist, philosopher, and artist.

He has spent 15 years creating and leading the UX Strategy and Design function for an international nonprofit technology organization, and helping small businesses and nonprofits fall in love with their online presence. He also teaches User Experience Design 2 at University of Colorado, Boulder.

Tyler is often piloting philosophical adventures into perception, perspective, and the human experience. His other passions include playing a variety of musical instruments, writing songs, and finding himself lost in nature.
