Context beats prompts
There’s a lot of attention right now on prompt engineering. The right wording. The right structure. The idea that somewhere there is a perfect formulation that unlocks better answers from an LLM.
Prompts matter. But they’re rarely the main lever.
In practice, the quality of what you get from an AI system depends far more on the context you provide than on how clever your phrasing is. For an engineering manager, that difference is significant.
Context is the real leverage
An engineering manager lives inside a stream of fragmented information: pull requests across months, code reviews, Slack threads and DMs, one-to-one notes, shifting objectives, half-forgotten role expectations.
No one can reliably hold all of this in their head. When we try, we default to shortcuts: what happened recently, what was loud, what felt important.
LLMs aren’t magical. But given enough material to work with, they are good at absorbing large volumes of mixed information and surfacing patterns that are difficult to see piece by piece.
Prompts guide structure. Context enables insight.
A good prompt still matters. It defines the structure of the output: summary, strengths and development areas, neutral framing, evidence-backed statements.
But no prompt compensates for missing context. A well-crafted instruction applied to shallow input produces shallow results. A simple instruction applied to rich context often produces something surprisingly useful.
As a manager, the leverage isn’t rhetorical. It’s curatorial.
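To make the division of labour concrete, here is a minimal sketch of what “the prompt defines structure” can look like in code. The section names, wording, and function name are illustrative assumptions, not a prescribed template:

```python
# A structural prompt: it fixes the shape of the output (summary,
# strengths, development areas, evidence-backed statements) while the
# substance comes entirely from whatever context is supplied.
REVIEW_PROMPT = """You are helping draft a performance review.
Using ONLY the material provided below, produce:
1. A short summary of the period.
2. Strengths, each backed by a concrete example from the material.
3. Development areas, framed neutrally and backed by evidence.
Do not speculate beyond the material.

Material:
{context}
"""

def build_prompt(context: str) -> str:
    """Fill the fixed structural template with the gathered context."""
    return REVIEW_PROMPT.format(context=context)
```

The template never changes; only the `context` argument does. Applied to a thin context it produces a thin review, which is exactly the point.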
A practical example: performance reviews
End-of-year reviews make this obvious.
Most are written from memory plus a few recent highlights. Even with good intentions, that skews the result. Quiet contributors get overlooked. Early work fades. Behaviour gets reduced to a couple of anecdotes.
The improvement doesn’t come from crafting a better prompt.
It comes from better input.
Load the system with context:
- Pull requests authored or significantly contributed to
- Code review comments
- Relevant Slack discussions
- Decisions or mentoring that happened in DMs
- Documented objectives
- Career path expectations
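The list above amounts to a curation step: gather material from several systems, label it, and hand it over in one chronological payload. A minimal sketch, assuming hypothetical data already fetched from those sources (real use would pull from the GitHub, Slack, and HR-system APIs):

```python
from dataclasses import dataclass

@dataclass
class ContextItem:
    source: str   # e.g. "pull_request", "slack", "objective"
    date: str     # ISO date, used to keep a rough timeline
    text: str     # the raw material itself

def assemble_context(items: list[ContextItem]) -> str:
    """Concatenate gathered material into one labelled, chronological blob
    suitable for pasting into an LLM prompt."""
    ordered = sorted(items, key=lambda item: item.date)
    return "\n\n".join(
        f"[{item.source} {item.date}]\n{item.text}" for item in ordered
    )
```

The labels and chronological order matter more than the format: they let the model attribute each claim to a source and see change over the whole period, not just the recent past.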
Once that material is present, the AI can help surface themes: ownership patterns, collaboration style, consistency, growth areas that only appear over time.
At that point, the prompt shapes the format, but the insight comes from the data.
Context reduces bias, not responsibility
This isn’t about outsourcing judgement. The manager still decides and owns the feedback.
What context does is widen the lens. It reduces blind spots created by memory and organisational noise. It gives you a broader base to reason from before you speak or write.
The real limitation isn’t the model. It’s how little context we usually provide. We’re used to tools that operate on small inputs. LLMs reward completeness more than cleverness.
For engineering managers, that’s an opportunity: not to automate people management, but to approach it with more consistency and less reliance on memory alone.
Context beats prompts.