Context and Constraints

You open an app and something is off. Not broken, exactly, but inconsistent: the settings page works differently from the onboarding flow, copy shifts tone between screens, and tables look and behave like they were built by different people in different time zones. The product feels assembled rather than designed.

This fragmentation has always been a risk when products are built across multiple teams, over time, without shared constraints. What held things together, just about, was friction. Design reviews caught inconsistencies, standups surfaced conflicts, and the seasoned tech lead remembered why things were done a certain way.

AI-powered development compresses the opportunity for friction. When features are built in hours rather than weeks, informal coordination doesn’t have time to operate. Agents don’t necessarily build badly, but products are more likely to become unmoored. It’s like the kids’ game where each person draws a piece of a body without seeing what came before: the head doesn’t match the torso, which doesn’t fit the legs. Every contributor might produce competent work, but nobody sees the whole picture.

I recently argued that AI-powered development compresses execution risk while leaving direction risk intact. This article explores a companion problem: how products maintain coherence when the friction that used to enforce consistency disappears. Ambient alignment – the tacit coordination that compensated for underspecified product decisions – breaks down when speed removes the human interactions it depends on. In response, context and constraints must move from tacit to explicit, from narrative to structured form. Get this right, and speed becomes an advantage. Get it wrong, and speed accelerates drift.

Context engineering has a product dimension

Conversation around spec-driven development is growing fast. Tools like Cursor and Claude Code are making specification quality a practical concern for every development team. But the conversation is overwhelmingly framed as an engineering discipline: how to structure prompts, manage context windows, decompose technical tasks. This is incomplete. The choices that specifications encode are frequently product choices, like:

  • What problem are we solving and for whom?
  • What are our boundaries?
  • What does success look like?

Product Managers have always defined intent, constraints, and trade-offs. Historically, these were communicated through narrative: PRDs, user stories, decks, conversations. These were created for humans who could fill gaps through judgement. But now we’re writing for agents too. And agents don’t fill gaps with judgement: they fill gaps with plausible inference. For them, ambiguous requirements can lead to plausible and wholly incorrect interpretations, applied consistently and at great pace. This changing audience requires PMs to extend beyond persuasive narratives and into structured plans and requirements that an agent can act on without guessing.

None of this means PMs should commandeer the technical specification process. But it’s imperative that we express product thinking clearly enough, and in forms structured enough, for agents to act on. In this light, a PM’s ability to wield language to deliver outcomes becomes even more valuable. Providing context and constraints is part of the job.

Context: what agents need to aim for

Context tells agents what to aim for and why: product strategy, customer understanding, brand positioning, and the choices the organisation has already made. It is the difference between ‘build a settings page’ and ‘build a settings page for time-poor mid-market ops managers, using a task-focused interaction model, that prioritises clarity over configurability.’

This goes beyond what is typically meant by ‘context engineering,’ which centres on technical information: architecture decisions, API schemas, coding conventions. In fact, an entire layer of context is not technical at all, yet it provides a crucial frame of reference for decision making.

Think of it like this: a new (human!) team member might intuit things like product positioning and tone from their interactions with the company and brand. Even without formal onboarding, they could use this implicit understanding to make reasonable choices. On the other hand, agents lack this ambient understanding. They need explicit guidance to tell them where they are and what they are trying to achieve. Without clear specifications, they’ll produce work that is competent but unmoored. So context should be precise, concise, and stored where agents can reference it automatically, while success criteria should be testable.

Constraints: what holds the product together

Think of constraints as what agents should not do and instructions for how things should be done. They are decisions already taken, applied consistently across features.

Some constraints are global: design system conventions, accessibility standards, brand guidelines, data handling patterns, API design conventions. These form a foundation layer. Others are feature-specific: scope boundaries, interaction requirements, performance targets. The foundations maintain coherence. The feature layer makes things easy to use, practical and, hopefully, even delightful.

In practice, many teams maintain foundation constraints as structured markdown files in the codebase. Claude Code uses CLAUDE.md, Cursor uses its own Rules format, and AGENTS.md has emerged as a cross-platform standard backed by the Agentic AI Foundation under the Linux Foundation. The naming and mechanics differ, but the principle has clearly stabilised: persistent, structured files that agents reference automatically.
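As an illustration, a minimal foundation file might look like the sketch below. The file name follows the AGENTS.md convention, but the sections, paths, and contents are hypothetical, adapted from the examples elsewhere in this article:

```markdown
# AGENTS.md — foundation constraints (hypothetical example)

## Product context
- Audience: time-poor mid-market ops managers.
- Trade-off: prioritise task completion over discoverability.

## Global constraints
- Follow the design system documented in `docs/design-system.md`; never hardcode colours or spacing.
- All interfaces meet WCAG 2.1 AA accessibility standards.
- Copy follows the brand voice guide in `docs/voice.md`: plain, direct, no jargon.

## Feature specs
- Each feature spec lives in `specs/<feature>.md` and links back to this file rather than duplicating it.
```

The point is not the specific rules but their form: short, declarative statements an agent can apply without interpretation, kept in a file it reads on every task.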

I learned the value of defining constraints the hard way. On a recent project, tables across the same application had disparate styles and behaviours: different sorting, different pagination, different empty states. Not because the agents produced bad work, but because each table was generated as a fresh interpretation from different prompts before design rules had been established. Individually, the outputs were fine. But together, the product felt disjointed.

After a period of consolidation and review, we extracted the common patterns into a shared table specification covering layout, interactions, responsive behaviour, and data display conventions, similar to this:

```markdown
## Tables

### Appearance
- All data tables use the shared `<DataTable>` component from `src/components/ui/DataTable.tsx`. Never implement ad hoc table markup.
- Tables have a 1px border in `border-default` (CSS variable), rounded corners (`rounded-lg`), and a white background in light mode / `surface-secondary` in dark mode.
- Column headers use `text-xs font-semibold uppercase tracking-wide text-secondary`. They have a bottom border and a light `bg-surface-subtle` background.
- Rows alternate background colours using the `striped` prop (odd rows: transparent; even rows: `bg-surface-subtle`).
- All cell padding is `px-4 py-3`. Do not override this per-cell.
- The last column is always right-aligned if it contains a numeric value or an action button.

### Behaviour
- All tables are sortable by default. Clicking a column header sorts ascending; clicking again sorts descending. Indicate sort direction with a chevron icon from the icon library.
- If a table has more than 10 rows, pagination is required. The default page size is 10; the user may select 10, 25, or 50 rows per page via the `<PageSizeSelector>` component.
- Tables with more than 5 columns must support horizontal scrolling on viewports below 1024px. Use `overflow-x-auto` on the wrapper div.
- Empty states: if the data array is empty, render the `<EmptyState>` component inside the table body, centred, with a contextually appropriate message passed via the `message` prop.
- Loading states: use the `<TableSkeleton>` component while data is fetching. Never show a spinner inside a cell.
- Row selection: if the `selectable` prop is passed, render a checkbox in the first column. A "select all" checkbox appears in the header. Selected rows highlight with `bg-primary-subtle`.

### Do Not
- Do not use raw `<table>` elements outside of the `DataTable` component.
- Do not hardcode colours, padding, or font sizes on table elements.
- Do not implement custom sorting or pagination logic inline; use the hooks provided in `src/hooks/useTableState.ts`.
```

With neat, clear specifications, the agent was able to one-shot a strong approach to tables and could apply this standard consistently throughout the application. The product started to feel like one product. But time was wasted along the way.

And this principle applies to code conventions too. Without shared constraints, data models fragment across features and metric definitions contradict each other, creating poor performance and hard-to-manage backends.

To be clear, not everything should be tightly specified. Early exploration and prototyping still benefit from looseness. But once work moves towards production, precision pays for itself many times over.

How solid are your specs?

There is an active debate in the spec-driven development community about whether specifications are disposable intermediates or persistent sources of truth. Some treat them as scaffolding: use them to prompt the agent, then discard them once the code exists. Others argue that the spec is the product, and code is merely its current expression.

This all gets a bit philosophical, but I broadly side with persistence. If specifications are disposable, the product has no stable anchor. Every feature becomes a fresh interpretation of intent, and interpretations drift. But while specifications should evolve as the product evolves, this process must be deliberate.

There’s evidence to support this view too. Teams that invest in specification quality see rework rates around 10–20%, while teams that skip it see 50–60%. OpenAI’s engineering team treats its repository knowledge base as a structured system of record because documentation alone doesn’t keep a fully agent-generated codebase coherent. And data from LinearB shows AI-generated pull requests face rejection rates of 67%, compared to 16% for manually written code, much of it from coherence failures.

The objection is real: specs go stale and create overhead. But specifications only go stale when nobody owns them. The answer is ownership and cadence, not abandonment.

Putting it into practice

The architecture of how we manage agents echoes best practice in other areas of software development: the core discipline is decomposition. Breaking ambiguous product goals into precise units guards against context rot and drives clean execution.

For example: ‘Improve the onboarding experience’ is a goal, not a specification.

Decomposed, it might become: ‘reduce the signup flow from seven screens to four, use progressive disclosure to defer non-essential profile fields, maintain the existing brand voice and visual style, and meet WCAG 2.1 AA accessibility standards.’

Each statement is something an agent can act on. The original goal is not.

This applies to both context and constraints:

  • On the context side, define who a feature serves in terms specific enough to inform design decisions. Make trade-offs explicit. An agent that knows the product prioritises task completion over discoverability will make different choices than one working from a blank slate.
  • On the constraints side, foundation-level specifications should persist across features as structured files that agents reference automatically. Feature specifications point to the foundation layer, not duplicate it.
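To make the second point concrete, here is a sketch of what a feature spec that points to the foundation layer might look like. The file names, feature, and criteria are hypothetical, drawn from the examples used earlier in this article:

```markdown
# specs/settings-page.md (hypothetical feature spec)

Inherits: `AGENTS.md` — all foundation constraints apply in full.

## Intent
A settings page for time-poor mid-market ops managers: task-focused
interaction model, prioritising clarity over configurability.

## Scope
- In: profile details, notification preferences, billing contact.
- Out: role management (separate feature), theming.

## Success criteria
- A user can update notification preferences in under 30 seconds.
- All tables follow the shared table specification; no new table patterns.
- Meets WCAG 2.1 AA.
```

The feature file stays short because it inherits rather than restates the foundation; when a global rule changes, it changes in one place.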

This does require a shift in how product teams work: structured artefact conventions, explicit ownership, a review cadence. Yet the increase is in rigour, not bureaucracy.

Familiar discipline, new mechanism

Product coherence was never free. It was always the result of deliberate choices, consistently applied. AI-powered development hasn’t changed that. The audience now includes agents and the format demands more structure than narrative alone. But the underlying discipline is the same: understand the problem, define what ‘right’ looks like, and give the people and systems building it enough clarity to get there without guessing.



Thanks for reading!

Want to chat about product? I’d love to hear from you.
