Language Richness + Generative AI: Imagining Is Describing


Introduction

Screenshots of conversations with language models like ChatGPT keep going viral, often showing responses that range from the absurd to the unsettling. Examples such as a model failing to count the number of “r”s in the word “strawberry”, or agents on Moltbook, the new “social network for agents”, “complaining” about their virtual existence and workload, reveal both fascination with and deep misunderstanding of how these systems actually work.

To be clear, the capabilities these technologies give us could legitimately feel like superpowers to someone living in 1980. AI researcher Andrej Karpathy captured this shift well: “The hottest new programming language is English.” As early as 2023, that tweet reflected how useful and widely adopted language models had become. By 2026, creating high-quality multimedia content (text, images, audio, video) from textual instructions (prompts) has become part of daily work for millions of people.

Do AIs “Understand”?

Given enough time interacting with these systems, most users have experienced confusion or frustration when receiving responses that seem incoherent or show an obvious lack of understanding. At first, it can feel like there is intelligence “behind” the output, but then the model may return something that contradicts the previous context.

It is easy to use words like “understand” or “think” when describing AI interaction, but those terms are misleading anthropomorphisms. Large Language Models (LLMs) are fundamentally statistical systems that (simplifying) predict the next word in a sequence based on patterns learned from massive text datasets. They do not have consciousness, intention, or human-like understanding.
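The next-word mechanism can be made concrete with a deliberately tiny sketch: a bigram model that counts which word most often follows another and “predicts” accordingly. This illustrates only the statistical principle; real LLMs use neural networks trained on vast corpora, not frequency tables, and the corpus below is invented for the example.

```python
from collections import Counter, defaultdict

# Toy corpus; a real LLM trains on trillions of tokens, not one sentence.
corpus = "the cat sat on the mat and the cat slept".split()

# Count bigram frequencies: how often each word follows another.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — it follows "the" most often here
```

No meaning is involved anywhere in this loop: the “prediction” is pure pattern frequency, which is the point the paragraph above makes at a vastly larger scale.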

Specialized studies (for example Apple’s widely discussed paper “The Illusion of Thinking”) show that LLMs still struggle with logical generalization. In standardized experiments where the rule stays constant while problem complexity increases, models fail to consistently apply that rule, even when it is explicitly stated in the prompt.

Likewise, on benchmark problems that are frequent in training data, models can answer correctly through memorization. With fewer known examples, performance degrades. In short, at least as of early 2026, language-model AI remains advanced statistical prediction, without the full richness of shared meaning that characterizes human communication.

Language as a Thinking Tool

In 1922, philosopher Ludwig Wittgenstein captured a core idea about language and human experience: “The limits of my language mean the limits of my world” [1]. His point was that language is not only a medium for communicating thoughts, but also a tool for forming them. Noam Chomsky, one of the most influential linguists of the 20th century, similarly argued that the primary function of language is not externalization, but structuring thought [2].

Bringing these ideas together with AI capabilities and limitations highlights something practical: the effectiveness of our interaction with these systems depends heavily on the richness and precision of our own language, and on how clearly we structure context.

From “Prompt Engineering” to “Context Engineering”

Between 2021 and 2022, the term “prompt engineering” became popular to describe the practice of crafting text instructions to guide model output. In its basic form, this often meant trying formulas like “Imagine you are an expert in X” to nudge higher-quality responses.

As adoption expanded, it became clear that output quality depends strongly on clarity and specificity. “Prompt engineering” techniques still matter for simple tasks, but for complex workflows, superficial prompt tricks are not enough to reach the depth of shared meaning we normally develop with human collaborators.

That is why the focus has evolved toward “context engineering”. This includes prompt design, but also preparing multiple reference files and explicit context: operational principles, organizational structure, communication patterns, and domain constraints. In practice, you are painting your world for the model so it can operate within it and generate coherent, useful, tailored results.
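In code terms, context engineering often amounts to assembling those reference documents into one explicit block that precedes the task. The sketch below is a minimal illustration; the file names, section layout, and helper are all hypothetical, not a standard API.

```python
from pathlib import Path

# Hypothetical reference files; names and contents are illustrative only.
DEFAULT_CONTEXT_FILES = ["principles.md", "org_structure.md", "domain_constraints.md"]

def build_context(task: str, files=DEFAULT_CONTEXT_FILES) -> str:
    """Concatenate reference documents into one explicit context block,
    then append the actual task at the end."""
    sections = []
    for name in files:
        path = Path(name)
        if path.exists():  # skip missing references gracefully
            sections.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(sections) + f"\n\n## Task\n{task}"
```

The design choice worth noting: the model receives your operating principles and constraints every time, instead of relying on a one-off prompt phrase to imply them.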

Language Richness: Seeing the World in High Definition

This need to “paint our world” suggests a useful analogy: in what resolution are we describing reality? Producing rich context requires sharper observation, moving from “720p” descriptions to “4K” descriptions. That means expanding vocabulary, increasing sensitivity to nuance, and understanding how near-synonyms change tone and meaning.

Working with language models can therefore become a way to improve our own cognitive tools first. Just as a craftsperson uses specialized tools for specific tasks, better abstractions create better approaches to solving problems and pursuing goals.

In AI terms, the 720p approach is vague wording and generic phrases, expecting the model to guess intent. The 4K approach requires more effort: defining goals, steps, output shape, constraints, and contextual subtleties. This not only improves AI output quality, but also clarifies our own reasoning.
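As an illustration of that contrast (the task and wording below are invented for this example), here is the same request at both resolutions:

```python
# "720p": vague wording; the model must guess intent, audience, and format.
prompt_720p = "Write something about our product launch."

# "4K": goal, audience, output shape, and constraints made explicit.
prompt_4k = """\
Goal: announce the v2.0 launch of our note-taking app.
Audience: existing users on the free tier.
Format: a 3-paragraph email; the first paragraph states the single biggest benefit.
Constraints: no jargon, under 150 words, end with one clear call to action.
"""
```

Writing the second version forces you to answer questions (who is this for? what does success look like?) that the first version lets you skip, which is exactly the clarifying effect described above.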

Conclusion

Many people have heard the phrase: “AI will not replace humans; humans who use AI will replace those who don’t.” It captures an important part of the current [3] AI+human dynamic: user and model together can outperform either alone. Many repetitive tasks can (and will) be automated, but these tools can also amplify human creativity and cognition in unprecedented ways.

We close with one more idea: “emergent consciousness” as a metaphor. In quantum physics, some properties are not determinate until they are measured. In a similar sense, language models don’t “exist” until a user engages them [4]. The interaction creates a new functional entity, shaped by both sides of the equation.

The natural implication is that both components should keep improving. Models improve through technical progress; users improve through linguistic and cognitive development. As many of us have experienced, interaction with systems like ChatGPT or Claude can become a powerful way to expand our own knowledge, language, and thinking, creating a mutual improvement loop. The question is: what kind of emergent intelligence do we want to build?

TL;DR: The quality of results we get from generative AI depends heavily on the richness and precision of our own language. The better we describe complex, nuanced contexts, the better both our AI outcomes and our own thinking become.

Footnotes

  1. Wittgenstein, L. (1922). Tractatus Logico-Philosophicus, Proposition 5.6.

  2. Chomsky, N. (2006). Language and Mind. Cambridge University Press.

  3. This article refers to the state of these technologies in 2026, and keeping in mind the rapid pace of advancements in AI and language models.

  4. Metaphorically speaking, especially for personal AI systems like ChatGPT or Claude, which only “wake up” when a user starts a conversation. API-based systems can be invoked continuously by external services, but they still require external calls to operate.