Tim Scudder, Thoughts on Product
February 2026

Encoding organisational decisions for the agentic age

Part 1: How agentic AI makes strategy a runtime input

Most organisations have spent decades building systems that tolerate ambiguity. Unclear strategy? People negotiate. Contradictory priorities? Managers mediate. Undocumented trade-offs? Experienced staff just know. Sometimes, this works — expensively, slowly, and with a lot of meetings. But it only works because humans are remarkably good at filling gaps that the organisation has never made explicit.

Agentic AI offers something genuinely new: systems that can plan, retrieve context and act within workflows at a speed, consistency and scale that human coordination cannot match. But that promise has a prerequisite. Every gap in your organisational clarity becomes a gap in those systems' performance (1).

In this series of posts, I explore how we might close these gaps.

From narrative to runtime input

Agentic systems are not the AI assistants most organisations have started with: the tools that summarise documents, draft copy or speed up analysis. Those are useful, but they operate under human micromanagement. Someone still decides what to do. Agents represent a different kind of shift. Instead of being given instructions at the level of 'write me an email', they are expected to 'triage this queue, apply policy, take the next step and escalate only when you should'.

Adoption will not be uniform: some organisations will move slowly for valid reasons — risk, complexity, strategic fit. Others might just be slow to respond. But for those that do embed agents into day-to-day work, the practical constraint becomes hard to avoid:

Once software can act, the quality of the organisational context it operates with becomes a primary driver of value, safety and consistency.

Think of it this way. Strategy and organisation design have traditionally been background narrative: the deck, the offsite workshop, the inspirational memo. Agentic AI turns that narrative into a runtime input. If that input is unclear, contradictory, or out of step with how decisions actually get made, agents will either be constrained to superficial assistance or behave in ways that are locally plausible and globally misaligned.

Strategy as a usable system of choices

For agents to be strategically aligned, we need to understand what strategy is.

A useful starting point is Richard Rumelt's critique of 'bad strategy': organisations often present a mixture of goals, slogans, and initiatives as strategy, without a central logic that links the challenge to a coherent approach and set of coordinated actions. Rumelt's 'kernel' framing — diagnosis, guiding policy, coherent action — is valuable precisely because it makes strategy operational. It asks what problem you are solving, what overall approach you will take, and what mutually reinforcing actions will follow (2).

When strategy is weak, a familiar pattern emerges: trade-offs remain implicit, operational decisions get pushed into meetings and escalation, tactical work fragments into locally rational activity. The organisation stays busy, but coherence becomes expensive.

This matters for agentic systems because agents need to interpret situations and choose actions repeatedly. If strategy is not expressed as a set of choices and trade-offs, then every decision becomes interpretive. Humans can often manage that through judgement and informal alignment. Agents cannot infer intent from status, history, or politics unless those dynamics are made explicit in the context you provide.

Documented vs enacted decision-making

But writing something down doesn't make it true.

Just as many organisational strategies are simply to-do or wish lists, most organisations already have artefacts that claim to describe how decisions are made: governance diagrams, approval routes, escalation paths, decision-rights matrices. The difficulty is that these artefacts frequently describe the intended organisation rather than the enacted organisation. Under pressure, decisions often flow through informal influence, relationship networks, and situational overrides.

This is not a moral failing — it is an adaptive response to ambiguity. When the formal system is unclear or slow, people use workarounds: trusted individuals mediate conflict, senior stakeholders intervene, and compromises keep things moving. Over time, those workarounds become the real system.

Decision-rights models illustrate the issue well. A RACI chart can look tidy while the day-to-day reality is shaped by informal vetoes, escalation habits, and 'who has to be kept comfortable' rather than who is nominally accountable. McKinsey & Company makes a similar point in its critique of RACI: the framework can obscure who actually decides and encourage bureaucracy rather than improve decision quality (3).

Humans can navigate this gap because they constantly update their internal model of how things really work. Agents won't succeed here unless given a maintained and machine-usable representation of enacted decision logic.
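
To make the documented-versus-enacted gap concrete, here is a toy sketch in Python. The roles, the informal veto and the override condition are all invented for illustration; the point is simply that the enacted version is the one an agent would need to be briefed on.

```python
# Documented view: what the RACI chart says about refund approvals (illustrative).
documented = {
    "decision": "approve_refund",
    "accountable": "team_lead",
}

# Enacted view: how the same decision actually flows under pressure (also illustrative).
enacted = {
    "decision": "approve_refund",
    "accountable": "team_lead",
    "informal_veto": "finance_business_partner",                      # has to be kept comfortable
    "override": "ops_director decides directly when backlog > 1 day",
    "typical_path": ["advisor", "team_lead", "finance_business_partner"],
}

# An agent briefed only on the documented view will route every approval to the
# team lead, and be surprised when finance quietly reverses half of them later.
```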

Making this concrete: contact-centre triage

A contact centre is a useful place to ground this. On paper, triage is an operational question: who goes first and where does the work go. In practice, it is a daily enactment of strategy.

Take two organisations with different strategic positions: one competes on premium service and long-term customer relationships; the other competes on cost, and accepts higher churn to keep handling costs down.

Now imagine the same inbound contact: 'I want to cancel.'

A human team will often treat that request differently depending on where the organisation is trying to win (4). The premium organisation might invest time in retaining the customer and recovering the relationship; the cost-led organisation accepts more churn and minimises handling time. Both responses can be strategically correct, but only if they are deliberate.

An agent cannot infer that orientation from a slogan. If the only goal it is given is 'reduce backlog' or 'improve handle time', it will optimise for speed and closure. If it is given vague instructions like 'be customer first', it will likely hesitate, escalate too often, or behave inconsistently. Experienced contact-centre staff do this intuitively: they absorb the culture, read the signals, know when to bend a rule — even find ways to massage tensions between high-level objectives and on-the-ground policies. This is harder for agents to do.
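
A minimal sketch of what this could look like in code, assuming a hypothetical triage function and invented posture labels, offer limits and action names. The intent is only to show that the branch taken depends on the strategic context supplied at runtime, not on the model.

```python
from dataclasses import dataclass

@dataclass
class StrategyContext:
    """A machine-usable slice of strategy handed to the agent at decision time (illustrative)."""
    posture: str                  # e.g. "premium_retention" or "cost_led" (assumed labels)
    retention_offer_limit: float  # largest goodwill credit allowed without approval

def triage_cancellation(customer_value: float, ctx: StrategyContext) -> str:
    """Choose the next step for an 'I want to cancel' contact."""
    if ctx.posture == "premium_retention" and customer_value > 0:
        if ctx.retention_offer_limit > 0:
            return "route_to_retention_specialist"   # invest time to recover the relationship
        return "escalate_for_approval"               # retention intent, but no authority to offer anything
    return "process_cancellation_immediately"        # cost-led: minimise handling time, accept the churn

premium = StrategyContext(posture="premium_retention", retention_offer_limit=50.0)
cost_led = StrategyContext(posture="cost_led", retention_offer_limit=0.0)

# Same inbound contact, two deliberate and different outcomes.
print(triage_cancellation(customer_value=1200.0, ctx=premium))   # route_to_retention_specialist
print(triage_cancellation(customer_value=1200.0, ctx=cost_led))  # process_cancellation_immediately
```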

Why context quality becomes a first-order capability

When people talk about agents, they tend to focus on model capability: which model is smartest, fastest, cheapest. In practice, the harder work is almost always in the surrounding system: what the agent can see, what it's permitted to do, the outcomes it's optimised for — and who is held accountable for the results.

This is where 'context engineering' becomes a useful framing. It is the discipline of structuring and maintaining the information an AI system uses at the point of decision. The difference is not just 'more documentation'. It is the difference between giving a system a filing cabinet and giving it a brief it can act on.

Our contact-centre agent needs more than the category of the ticket. It needs to know how the organisation makes trade-offs. That typically includes: what outcomes matter, what constraints are binding (refund limits, regulatory rules, brand posture), who can decide what (including override conditions), what actions are permitted without approval, and what gets logged and reviewed. Without those inputs, the agent defaults to what it can reliably optimise: measurable proxies like closure rate or handle time.
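
As a sketch of what that brief might look like when encoded as data rather than prose (the field names, limits and roles are assumptions, not a recommended schema):

```python
# Illustrative decision context for a contact-centre agent; all values are assumed.
DECISION_CONTEXT = {
    "outcomes": ["customer_lifetime_value", "regulatory_compliance", "queue_health"],
    "constraints": {
        "max_refund_without_approval": 50.00,                        # binding financial limit
        "regulated_topics": ["complaints", "vulnerable_customers"],  # scripted process required
        "brand_posture": "premium_retention",
    },
    "decision_rights": {
        "agent_may": ["apologise", "offer_goodwill_credit", "schedule_callback"],
        "team_lead_override": "any refund above the approval limit",
    },
    "audit": {
        "log_every_action": True,
        "review_sample_rate": 0.05,   # 5% of closed tickets get human review
    },
}

def permitted_without_approval(action: str, amount: float = 0.0) -> bool:
    """Check a proposed action against the explicit constraints, not a measurable proxy."""
    if action not in DECISION_CONTEXT["decision_rights"]["agent_may"]:
        return False
    if action == "offer_goodwill_credit":
        return amount <= DECISION_CONTEXT["constraints"]["max_refund_without_approval"]
    return True

print(permitted_without_approval("offer_goodwill_credit", 30.0))    # True
print(permitted_without_approval("offer_goodwill_credit", 120.0))   # False: escalate instead
```

However it is represented, the useful property is that the agent's permissions and escalation triggers live in reviewable data rather than in guesswork at the point of decision.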

If you want an agent to operationalise strategy, the strategy needs to be usable at the point of decision. This rewards organisations that express their strategy in meaningful terms, generate policies that track it, and reward behaviours that 'live' it.

The advantage thesis

None of this implies that organisations without strong strategy cannot benefit from agents. Narrow deployments can deliver value in many environments.

But there is a ceiling. As autonomy increases, organisations with clearer strategy and legible decision loops can safely widen the envelope of what agents do. They can grant bounded autonomy with confidence because intent, constraints and escalation logic are explicit enough to be executed and audited.

Organisations that rely heavily on tacit coordination — the quiet craft, the read-the-room culture — can still adopt agents, but they will end up with conservative usage: agents that suggest, summarise and route, because anything beyond that requires a level of clarity the organisation has not yet built.

A closing test

Pick one bounded decision domain — something repeatable and consequential — and ask whether you can describe, in a way that matches enacted reality: What outcomes is the decision meant to serve? Which constraints are binding? Who can decide what, and under what conditions can they be overridden? Which actions are permitted without approval? What gets logged and reviewed afterwards?

If those questions are hard to answer, the constraint is unlikely to be the model. It is whether your organisation can express intent and decision-making in a form that can be executed consistently, first by people — and then by agents.

So, what does 'machine-usable strategy' actually look like? That is the subject of my next post.


Sources

1. Anthropic — Effective context engineering for AI agents

2. Lenny's Podcast — Richard Rumelt on good strategy

3. McKinsey — The limits of RACI, and a better way to make decisions

4. Roger Martin — Strategy choice cascade · Playing to Win (HBR)
