UI input paradigms for parametric CAD — slider deficiencies and a precision physical-input alternative developed through HST helmet design
Thesis
The slider paradigm's fundamental failure is not cosmetic but cognitive: it imposes a universal translation layer between designer intent and geometric outcome. Precision physical input, spatially mirrored on-screen and decoupled from compute latency, can eliminate that layer.
The slider was designed for the mouse. Click the selector, drag it across the bar — it's a fast, flexible way to enter a number that doesn't need to be exact. For that purpose, it works. The problem is that parametric CAD needs numbers that do need to be exact, and often over a very large range.
The classic presentation — left-to-right, low-to-high — is a convention, not a law. Some parameters have a logic that runs downward from a center origin. A VU meter goes bottom-to-top; the bottom depth of a helmet envelope makes more sense as a slider moving down. The directional assumption in standard sliders fails those parameters before any question of precision even comes up.
The bigger problem is range versus resolution. A parameter might need to cover minus 500 to plus 500 — a thousand integer units — but somewhere in the middle you need to land in the hundredths. The screen doesn't have the resolution for that. Neither does the mouse. You end up using modifier keys or number field overrides — workarounds that bring their own cognitive load.
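The arithmetic can be made concrete. The 800-pixel track width below is an assumed, illustrative figure, not a number from the text:

```python
# Range-versus-resolution arithmetic for a parameter spanning -500..+500.
# The 800-pixel track width is an assumption for illustration only.

RANGE_UNITS = 1000        # the -500..+500 span: a thousand integer units
TRACK_PIXELS = 800        # hypothetical width of an on-screen slider track

units_per_pixel = RANGE_UNITS / TRACK_PIXELS   # coarsest move the mouse can make
positions_needed = RANGE_UNITS * 100           # distinct values at 0.01 resolution
```

One pixel of mouse travel jumps the value by 1.25 units, while landing in the hundredths requires 100,000 distinguishable positions, 125 times more than the track offers. No modifier key changes that underlying mismatch; it only shifts where the workaround lives.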
The discovery, working in Grasshopper, was that a physical dial solves both sides of the problem at once. Tie the rotational velocity of the dial to the magnitude of the parameter change: turn slowly, get thousandths of a unit per revolution; turn fast, get tens of units. The velocity sensitivity lives in the monitoring software, not in the dial. To the person using it, the mechanism is transparent — you just turn slower or faster. That was the key.
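The velocity-to-magnitude mapping can be sketched as a small function in the monitoring software. The thresholds, step sizes, and function names here are illustrative assumptions, not the actual implementation described above:

```python
# Velocity-sensitive dial mapping: slow turns give fine steps, fast turns
# give coarse steps. All thresholds and step sizes are illustrative
# assumptions, not values from the system described in the text.

def step_size(ticks_per_second: float) -> float:
    """Map dial rotational velocity to the parameter increment per tick."""
    if ticks_per_second < 10:       # slow, deliberate turning
        return 0.001                # thousandths of a unit
    elif ticks_per_second < 100:    # moderate turning
        return 0.1
    else:                           # fast spin
        return 10.0                 # tens of units

def apply_dial(value: float, delta_ticks: int, ticks_per_second: float) -> float:
    """Update a parameter from a burst of dial ticks at a measured velocity."""
    return value + delta_ticks * step_size(ticks_per_second)
```

The dial hardware only reports ticks; the velocity sensitivity lives entirely in this software layer, which is what makes the mechanism transparent to the person turning the dial.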
The slider still has one thing the dial doesn't. Tapping a position on a range bar to jump coarsely to a value is fast, and no other method matches that immediacy. The dial approach addresses the resolution problem without replacing that. They cover different parts of the same problem and work together.
The slider doesn't disappear. It stays for linear representations — forward, backward, width, height, depth. Rotations go to dials. One section of the interface is a direct one-to-one spatial map of the physical 16-dial input device. The logic isn't "replace sliders with dials." It's "assign input paradigm to parameter type." That distinction came out of practice over six or seven years — what seemed workable, what was easily remembered, what felt like a more logical configuration.
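The "assign input paradigm to parameter type" rule can be sketched as a simple registry. The parameter names and categories below are hypothetical stand-ins, not the real system's configuration:

```python
# "Assign input paradigm to parameter type": a minimal sketch of the rule
# described above. Parameter names and the split shown are illustrative
# assumptions, not the actual 16-dial device's configuration.

from enum import Enum

class Paradigm(Enum):
    SLIDER = "slider"   # linear dimensions: forward, backward, width, height, depth
    DIAL = "dial"       # rotations

PARAMETERS = {
    "envelope_width":  Paradigm.SLIDER,
    "envelope_height": Paradigm.SLIDER,
    "envelope_depth":  Paradigm.SLIDER,
    "brim_rotation":   Paradigm.DIAL,
    "visor_tilt":      Paradigm.DIAL,
}

def input_for(parameter: str) -> Paradigm:
    """Look up which input paradigm a parameter is assigned to."""
    return PARAMETERS[parameter]
```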
Take the side view of a helmet envelope with the center as origin. Adjusting the top height gets an upward-moving slider — that's the natural direction. The bottom depth, below center, needs a downward-moving slider to mirror where that parameter actually sits in the geometry. Three directional paradigms ended up being implemented: bottom-up (standard), top-down (reversed), and center-deviation, where the median is the default and movement can go either direction.
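The three directional paradigms can be expressed as one mapping from track position to parameter value. The normalization below is an illustrative sketch under assumed conventions, not the actual UI code:

```python
# Three directional slider paradigms from the text: bottom-up (standard),
# top-down (reversed), and center-deviation (median default, movement either
# way). The 0.0-1.0 normalization convention is an assumption for this sketch.

def slider_value(position: float, lo: float, hi: float, mode: str) -> float:
    """Map a normalized track position (0.0 = track start, 1.0 = track end)
    to a parameter value under one of the three directional paradigms."""
    if mode == "bottom_up":           # grows upward from the low end
        return lo + position * (hi - lo)
    elif mode == "top_down":          # reversed: grows downward from the high end
        return hi - position * (hi - lo)
    elif mode == "center_deviation":  # median is the default; moves either way
        mid = (lo + hi) / 2
        return mid + (position - 0.5) * (hi - lo)
    raise ValueError(f"unknown mode: {mode}")
```

The helmet envelope's top height would use `bottom_up`, its bottom depth `top_down`, and a parameter whose natural default is the median would use `center_deviation`.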
The principle underneath all of it is the elimination of the mental jump between input and what that input is affecting. A mixing board with a hundred vertical sliders is the antithesis — the remapping required to find and adjust any one parameter is immense. The goal is to reduce that remapping effort to near zero. Speed is a byproduct of that. It's not the goal.
On-screen drag controls — handles, gizmos, direct manipulation in a CAD viewport — have the same resolution problem as sliders. But there's a structural issue that goes beyond resolution. In complex parametric models, the geometry takes time to recompute. When input and visual feedback are bound together, as they are with direct manipulation, the interaction loop freezes while the model is computing. You're waiting and don't know what value you've landed on.
The middleware layer keeps those two things separate. A layer dedicated to input confirmation — updating numeric values, reflecting dial and slider positions on screen — operates completely independently of how long the CAD program takes to regenerate the geometry. Those are two distinct processes with different performance characteristics, kept decoupled. The fluidity and immediacy of the feedback is governed entirely by the middleware engine. The designer knows the value. The model catches up in its own time.
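The decoupling can be sketched as two threads with a queue between them: input confirmation updates state immediately, while a slow worker stands in for geometry regeneration. The structure and names are assumptions for illustration, not the actual middleware:

```python
# Decoupling input confirmation from geometry regeneration: a minimal
# threading sketch of the middleware idea. The CAD recompute is simulated
# with a sleep; all names and structure are illustrative assumptions.

import queue
import threading
import time

latest_value = {"depth": 0.0}           # input-confirmation state: always current
regen_queue: queue.Queue = queue.Queue()

def on_input(param: str, value: float) -> None:
    """Runs on the input thread: confirm the value immediately, then hand
    regeneration to the slow worker without waiting for it."""
    latest_value[param] = value         # on-screen readout updates at once
    regen_queue.put((param, value))     # geometry catches up in its own time

def regen_worker(results: list) -> None:
    """Simulated CAD regeneration: slow, and never blocks input."""
    while True:
        item = regen_queue.get()
        if item is None:                # sentinel: shut down
            break
        time.sleep(0.01)                # stand-in for model recompute lag
        results.append(item)

results: list = []
worker = threading.Thread(target=regen_worker, args=(results,), daemon=True)
worker.start()

for v in (1.0, 2.0, 3.0):
    on_input("depth", v)                # returns instantly every time

regen_queue.put(None)
worker.join()
```

Each `on_input` call returns before the corresponding recompute finishes, so the designer always knows the value they have landed on; the geometry drains the queue at its own pace.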
This matters most in complex models with significant regeneration lag. In computationally lighter or faster environments, simultaneous input and geometry feedback may make the separation unnecessary.
Provenance
This document is a distillation of a structured interview conducted on 24 April 2026 using the Cognitive Interview–derived Conversation Specification v0.1. The interview proceeded in seven phases: topic framing, free report, probing, challenge, synthesis, transcript cleaning, and evaluation. The interviewee's claims were probed and challenged by an AI interviewer operating under explicit role constraints — no conclusions before Phase 5; steelman before challenge; all AI contributions labelled by type. The text above is derived from the interviewee's words, converted from spoken to written register under voice preservation rules that prohibit synonym substitution, hedge elevation, and meaning change.
An AI system (Claude, Anthropic) conducted the interview and produced this initial distillation. The interviewee reviewed and accepted, revised, or rejected each element of the thesis and supporting claims. Four revisions were made during the session. The AI's contributions — questions, steelmans, challenges, and synthesis proposals — are logged in the interview record and do not appear in the body of this document except where explicitly marked in margin annotations. The title is offered by the AI and is pending interviewee acceptance or replacement.
Contribution summary