Nominal Intent
When someone says their taste is showing through in an AI-generated image, or that their aesthetic sensibility guided the output, they are engaged in nominal intent. The claim of creative ownership is not exactly a lie — it functions more like a useful fiction, a comforting story the author tells themselves — but it is not true in the way they mean it. Reaching for the word “intent” here is a self-delusion, one that flatters the prompter while obscuring what actually happened.
Intent, by its nature, is vague. A letter of intent is not a contract. It is something nebulous, indistinct — a basis for further discussion, not a specification. This vagueness is not a flaw in the concept; it is the feature that makes it useful as a claim. Because intent is inherently invisible and unmeasurable, it cannot be falsified. That is not a bug; it is the point. Unfalsifiability allows a user to retroactively claim authorship over whatever the machine happens to output, as long as they can gesture toward some prior mental state. People lean on it precisely when they want credit for an outcome they did not reliably produce.
The machine adds substantially to the output. This is the key distinction from any traditional creative tool. A paintbrush does not paint itself. It does not generate large areas of the canvas unprompted. A hand tool may resist or redirect at the margins — wet paint spreads, a chisel slips — but these are perturbations of texture, not contributions of content. An LLM injecting an unpredicted compositional choice, an unasked-for tonal register, a structural decision the prompter would not have made — that is of a different order. The human’s guidance does not adequately control the output.
Prompts are constraints, not intentions. Writing a long, detailed prompt that narrows the output space is not the same as intending the result. A person placed in a small box has their movement constrained, but the walls of the box are not an expression of their intent — they are a limit on it. Most human intent does operate stochastically — “I intend to walk home” leaves every footfall underdetermined — but the intent and the constraint remain categorically distinct things. The prompt defines the box. It does not intend what emerges from inside it.
Curation of random outputs is not intent. There is a temptation to argue that selecting a good result from a hundred bad ones constitutes authorship — that intent shifts from making to selecting. A photographer who clicks the shutter a thousand times with eyes closed and then picks one beautiful frame has expressed a conceptual orientation, perhaps, but the only intent in play was the intent to click a shutter. Curation of noise is bullshit. The selection is real. The claim of authorship is not.
Look at the word itself. Intent only appears when there is a gap between what was claimed and what arrived. “I intended X, but Y happened” is the template. Without the gap, the word is unnecessary — you would simply say “I did X.” So when an LLM user says “my intent was to create a minimalist logo,” what they are doing is pre-excusing every deviation from minimalism that the machine introduced. The word signals: judge my invisible mental picture, not the output in front of you. It is a shield against accountability, raised precisely because the output cannot be defended on its own terms.
This is what intent is actually doing in these contexts: providing justification. It is a mental crutch — not a crutch for reaching something, but a crutch of justification, leaned on to maintain some sense of worth in a medium that a machine has substantially prepared. The person claims credit not because they steered the output but because they need the output to mean something about them. The crutch is invoked to paper over the gap between claimed mastery and actual control.
It is worth separating prospective intent from retrospective intent. Prospective intent is legitimate: I intend to write a novel, I intend to move in a certain direction — that is open, that is a goal, something to aspire to. It is honest precisely because it makes no claim over the specifics of what arrives. When I get into a car I do not say “I intend not to hit a child on the street” — that is not a meaningful statement of intent, it is a baseline condition of responsible action. Intent names a direction, not a guarantee of outcome.
Retrospective intent is the problem. When the machine returns something unexpected and the user retreats to “but my intent was minimalist,” they are using intent as an excuse for a deviation they did not produce and could not have prevented. The word has shifted from pointing forward to pointing backward — from a goal to an alibi. That shift is where nominal intent lives.
Think of Clever Hans — the horse that appeared to do arithmetic by tapping its hoof. Observers interpreted the taps as answers. The horse was responding to subtle cues from its owner, who also believed, for a time, that the horse was calculating. In some ways LLMs are like Clever Hans: with enough familiarity, some of their behaviour can be predicted, so you know roughly what input will cue the answer you want. But cueing an answer is not a demonstration of what it means to intend a result. The test is not whether you can predict the output — it is whether the output would exist, in any meaningful form, without the machine’s autonomous contribution.
If someone knows your stated intent and sees the output, they can recognise from the outside that the intent is false. Not dishonest — false in the way a belief can be sincerely held and still wrong. “Intent” is the wrong word for that relationship. “Steering” is closer. “Endorsement” is closer still. Neither carries the ownership that intent implies.
Nominal intent is not specific to LLMs. It operates wherever a tool’s autonomous contribution overwhelms the human’s guidance. But this points toward a more important distinction: the difference between a tool and a system. See also: Tools vs Systems.
A tool has a predictable, reproducible relationship between input and output. Intent is possible with a tool because control is possible. A system is larger: multi-dimensional, with its own internal dynamics. A military can be thought of as a system — wide-ranging, multi-dimensional. But a system, no matter how complex, can be wielded as a tool when the operator possesses sufficient mastery: deep knowledge of its behaviour, its tendencies, its failure modes under pressure. In the hands of an expert crew, a carrier battle group becomes a tool of strategy. There, intent is meaningful because control is real.
An LLM is a system. Same prompt, different outputs. Same output, achievable through different prompts. The relationship is stochastic, non-linear, and opaque. Most people who use LLMs do not have mastery of this system — they have familiarity with its surface, learned approximately how to elicit approximately what they want. That is not mastery. And claiming intent without mastery is what makes it nominal.
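The tool/system contrast can be sketched as a toy simulation — the `tool` and `system` functions below are hypothetical stand-ins, not any real API. A tool maps the same input to the same output every time; the system resolves the same prompt stochastically into one of several candidates, so identical prompts diverge.

```python
import random

def tool(pressure: float) -> float:
    """A tool: the same input always produces the same output."""
    return pressure * 2.0  # fixed, reproducible mapping

def system(prompt: str, rng: random.Random) -> str:
    """A toy stand-in for an LLM: the same prompt is resolved
    stochastically into one of several possible outputs."""
    candidates = [
        f"{prompt} (minimalist)",
        f"{prompt} (ornate)",
        f"{prompt} (abstract)",
    ]
    return rng.choice(candidates)

# A tool is reproducible: identical calls give identical results.
assert tool(3.0) == tool(3.0)

# A system is not: the same prompt, repeated, diverges.
rng = random.Random(0)  # seeded only so the demo is repeatable
outputs = {system("a logo", rng) for _ in range(50)}
print(f"distinct outputs from one prompt: {len(outputs)}")
```

Fifty calls with an identical prompt yield more than one distinct output, which is the point of the analogy: the prompt constrains the candidate space but does not determine which member of it arrives.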
Nominal intent is not a categorical accusation. It is a diagnosis of a specific condition: the gap between claimed control and actual reproducible control. An expert who understands a system’s stochastic nature, accounts for it, and can reliably reproduce a class of output can claim intent in a meaningful sense. The system is their tool. They have earned the word.
The problem is not the technology. It is the language used to describe a relationship with it that most people have not actually achieved. When someone says “my intent produced this” and they could not reproduce it, could not have predicted it, and could not have controlled the deviation — the word intent is doing work it has not earned. It is a crutch of justification, raised to maintain a sense of worth in a medium that the machine has substantially prepared for them.
That is nominal intent. Not a lie. A self-delusion — and one with consequences for how we build tools, evaluate work, and understand what it means to make something.
