
A lot of talent reviews still run on memory, confidence, and whoever speaks first in the room.
You know the pattern. One manager says someone is “obviously leadership material”. Another pushes back because the person struggled in a recent project. A reliable specialist gets ignored because they don't self-promote. Someone else gets a high-potential label mostly because they work in a visible team. By the end, the grid is full, but the logic behind it is shaky.
That's why the talent management 9 box grid still matters. It gives teams a shared frame for discussing performance and potential. Used well, it cuts through loose opinions. Used badly, it just gives bias a nicer layout.
The difference comes down to inputs. If you build the grid from manager instinct alone, it becomes a political exercise. If you combine role-based performance evidence with privacy-preserving digital work signals such as tool adoption, focus time, and context switching, the discussion gets sharper. You can see who is delivering now, who is learning fast, and who may be overloaded even if their output still looks strong.
Why You Need a Framework for Talent Decisions
Every manager is making talent calls all the time. Who gets the stretch project. Who covers for the team lead. Who should move into a critical role next. Who needs support before performance slips further.
Without a framework, those calls drift. Recent events get too much weight. Quiet people disappear from the conversation. Teams apply different standards to similar roles. That's how promotion decisions feel inconsistent even when everyone involved means well.
The 9-box grid solves a basic problem. It creates a common language for two separate questions: how well is this person performing now and how much headroom do they have for broader or more complex work.
Why this model stuck
The 9-box grid was developed by McKinsey & Company in the 1970s and has been used across industries for over fifty years as a succession planning and talent management tool, which is a good sign that the model is simple enough to survive real organisational use, not just HR theory (Leapsome's overview of the 9-box grid).
It also helps when workforce planning gets messy. If you're trying to tie succession, team structure, and role coverage together, this kind of map is more useful than a pile of separate review notes. That's also where a broader workforce planning strategy matters, because the grid is strongest when it feeds actual headcount and capability decisions.
A talent review without a framework usually rewards visibility, not value.
What it fixes in practice
A good grid won't make talent decisions perfect. It will make them more consistent.
It gives you a way to:
- Separate performance from promotability so your best specialist doesn't get pushed towards management by default
- Spot bench strength for succession before a vacancy becomes urgent
- Direct development spending towards people and roles where it will matter most
- Force evidence into the room instead of vague statements like “she seems ready”
It also works well beside retention work. If your business is already dealing with churn, the review process shouldn't sit apart from that. Work on addressing the causes of revolving door turnover belongs in the same conversation, because losing strong people often starts with poor development visibility and weak career signals.
The 9 Boxes and What They Mean
The grid itself is simple. It's a 3x3 matrix. One axis measures current performance as low, moderate, or high. The other measures future potential as low, moderate, or high.
That gives you nine boxes. The names vary by company. The decisions behind them shouldn't.
Read the grid as a decision tool
The mistake I see most often is treating the boxes as labels. They're not. Each box should trigger a different management response.
Here's a practical version.
| Box Name | Performance/Potential | Recommended Action |
|---|---|---|
| Future Leader | High / High | Put into succession plans, assign visible stretch work, protect retention |
| Core Contributor | High / Moderate | Reward, deepen scope, use as mentor or anchor on key work |
| Valuable Specialist | High / Low | Retain through specialist growth, don't force management track |
| High Potential | Moderate / High | Give stretch assignments, test judgement, remove blockers |
| Consistent Performer | Moderate / Moderate | Build skills steadily, maintain clear goals, watch for hidden upside |
| Solid Contributor | Moderate / Low | Keep role fit stable, support lateral growth, maintain engagement |
| Developing Talent | Low / High | Diagnose role fit, manager fit, or ramp issues quickly |
| Improvement Needed | Low / Moderate | Set short review window, define performance gaps clearly |
| Underperformer | Low / Low | Decide whether coaching, reassignment, or exit is the right step |
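For teams that track reviews in a spreadsheet or small internal tool, the grid above is just a lookup from two ratings to a box and a response. Here is a minimal illustrative sketch in Python; the box names and action strings simply mirror the table, and none of this is a standard implementation.

```python
# Illustrative sketch: the 9-box grid as a lookup from two ratings
# (performance, potential) to a box name and a management response.
# Box names and actions mirror the table above; they are examples, not a standard.

GRID = {
    ("high", "high"):         ("Future Leader", "Succession plans, stretch work, retention focus"),
    ("high", "moderate"):     ("Core Contributor", "Reward, deepen scope, use as mentor"),
    ("high", "low"):          ("Valuable Specialist", "Specialist growth, no forced management track"),
    ("moderate", "high"):     ("High Potential", "Stretch assignments, test judgement, remove blockers"),
    ("moderate", "moderate"): ("Consistent Performer", "Steady skill-building, clear goals"),
    ("moderate", "low"):      ("Solid Contributor", "Stable role fit, lateral growth"),
    ("low", "high"):          ("Developing Talent", "Diagnose role, manager, or ramp issues"),
    ("low", "moderate"):      ("Improvement Needed", "Short review window, defined gaps"),
    ("low", "low"):           ("Underperformer", "Decide: coaching, reassignment, or exit"),
}

def place(performance: str, potential: str) -> tuple[str, str]:
    """Return (box name, recommended action) for a pair of ratings."""
    return GRID[(performance.lower(), potential.lower())]

print(place("High", "Low"))
```

The point of writing it down like this is the same as the table: every box maps to a response, so a placement without an action is a gap you can see.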
The top row needs investment
The top-right box gets the attention for a reason. These are your strongest current performers with headroom for broader responsibility. They usually belong in succession discussions for critical roles.
The top-middle box matters almost as much. These people may not be operating at peak level yet, but they show signs that they can. They're often earlier in role, still building range, or underused.
Budget choices start to show up at this stage. Organisations often direct around 60% of development resources to high-potential employees, which tells you how heavily the grid influences investment choices when used seriously (AIHR on 9-box resource allocation).
Practical rule: don't spread your development budget evenly. Even distribution feels fair, but it rarely solves succession risk.
If someone is in a high-potential box, the next step should be specific. A broader project. Exposure to decision-making. A test of learning speed in a new system. A chance to influence outside their immediate lane.
The middle row keeps the business running
Most organisations have a lot of people in the centre of the grid. That's healthy.
These employees are the operators, analysts, engineers, coordinators, and team members who deliver steady work, hold processes together, and keep standards from slipping. In many companies, they're more important to day-to-day output than the people at the top-right corner.
Three practical responses work well here:
- Keep expectations sharp. Moderate performance often slips when goals are vague.
- Use targeted development. Don't hand out generic training. Tie it to role expansion or a real gap.
- Check aspiration. Some people want broader scope. Others want depth and stability.
A skill set matrix helps here because it gives more detail than the grid alone. The 9-box tells you where someone sits. A skills matrix shows what they can do and where the next training move should be.
The bottom row needs diagnosis, not theatre
Low performance doesn't always mean low value.
Someone in the low-performance, high-potential box may be in the wrong role, reporting to the wrong manager, or still ramping. Someone in the low-performance, low-potential box may need a hard decision. The point is to diagnose the cause before you assign a path.
That's why I prefer short review windows and concrete evidence here. If output is weak, ask:
- Is the work understood?
- Is the role a fit?
- Is the person engaged?
- Is there evidence they can apply feedback?
If the answer stays no, the grid should not become an excuse for delay. A clean staffing decision is often kinder to the team than months of vague “development” with no real movement.
Measuring Performance and Potential Objectively
Most organisations say they use the 9-box grid objectively. Many don't.
Potential, especially, becomes a container for personal preference. Managers give higher marks to people who speak like them, lead meetings in a familiar style, or work on visible tasks. That's one reason the grid gets criticised even when the idea behind it is sound.
The criticism is fair. Research cited by 365Talents says 86% of HR leaders acknowledge bias in succession planning processes (365Talents on bias in succession planning).

Stop treating potential as a vibe
Performance is easier. Teams can often find output measures if they try. Revenue, delivery quality, incident closure, project completion, customer outcomes, error rates, support handling, code review quality. The exact metric changes by role.
Potential is where reviews usually drift into soft language.
I'd define potential through observable behaviour under changing conditions. Not charisma. Not confidence. Behaviour.
Useful indicators include:
- Adoption of new tools or workflows
- Ability to maintain focus without constant supervision
- Handling of broader scope without output collapse
- Response to feedback across multiple cycles
- Evidence of judgement in ambiguous work
- Collaboration patterns that improve team throughput
That's where privacy-preserving digital usage data becomes useful. Not because it replaces judgement, but because it tests the story managers tell.
What digital signals can show
If a person learns new systems quickly, uses the right tools consistently, and can work with fewer disruptive switches across tasks, that tells you something. If another person looks polished in meetings but struggles to adopt core systems or fractures their day into constant toggling, that tells you something too.
You don't need invasive monitoring for this. You need aggregated behavioural signals.
For a talent review, I'd look at patterns like these:
| Signal | What it may indicate | Where it helps |
|---|---|---|
| Tool adoption | Learning speed and adaptability | Potential |
| Focus time | Ability to sustain deep work | Performance and readiness for complexity |
| Context switching | Task fragmentation and workload strain | Performance quality and burnout risk |
| Application churn | Process friction or weak workflow discipline | Role fit and coaching needs |
| Usage consistency | Reliable execution habits | Current performance |
A good human resource analytics setup makes these patterns easier to compare across teams without relying on anecdote.
Don't ask whether someone “feels senior”. Ask what changed when the work got harder.
Use digital evidence carefully
Behavioural data is useful when it stays in proportion.
It should never become a scoreboard detached from context. High activity doesn't equal high value. Low app switching isn't always good if the person is stuck. Long hours can signal commitment, but they can just as easily signal poor systems or overload.
That's why the best use is triangulation. Put three things side by side:
- Role outcomes
- Manager evidence
- Behavioural patterns from digital work data
When all three point the same way, placement gets easier. When they conflict, you've found the people who need discussion.
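The triangulation check itself is simple enough to automate as a first pass before calibration. The sketch below is a hypothetical illustration, assuming each source has already been summarised into a rough rating; the names and ratings are invented examples, not output from any real system.

```python
# Illustrative triangulation sketch: compare three evidence sources per person
# and flag anyone whose signals disagree, so calibration time goes to them.
# Names, ratings, and the rating scale are hypothetical examples.

def needs_discussion(role_outcomes: str, manager_view: str, digital_signals: str) -> bool:
    """True when the three sources do not all point the same way."""
    return len({role_outcomes, manager_view, digital_signals}) > 1

people = {
    "Senior engineer": ("strong", "strong", "strong"),    # aligned: placement is easy
    "Project lead":    ("strong", "strong", "strained"),  # conflict: sustainability question
}

for name, evidence in people.items():
    if needs_discussion(*evidence):
        print(f"{name}: evidence conflicts, discuss before placing")
```

A flag here doesn't mean the placement is wrong; it means the story needs a conversation rather than a default box.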
A senior engineer with strong delivery, fast adoption of new tooling, and stable focus patterns probably belongs higher on both axes than a louder peer with weaker operating evidence. A project lead with decent output but chaotic after-hours usage and fragmented work patterns may still perform well today, but the sustainability question should affect how you assess readiness for more.
That's what makes the talent management 9 box grid more credible. Not more complexity. Better inputs.
Your Guide to a Practical Talent Review
A workable talent review doesn't need a giant annual ritual. It needs clean criteria, decent evidence, and one disciplined conversation.

Step one defines the grid before names go on it
Start with role-based definitions.
“High performance” in sales won't look like “high performance” in engineering or support. Same with potential. If you don't define both axes in writing before managers discuss people, the strongest personality in the room will set the standard on the fly.
I'd keep it tight:
- Performance should point to output, quality, reliability, and scope handled today.
- Potential should point to learning speed, judgement, role stretch, and capacity for more complex work.
This matters even more in high-growth businesses. In those periods, an effective distribution often places 20% to 25% of talent in the top-right boxes and 65% to 70% in middle-tier positions so the company has enough bench strength without pretending everyone is a future executive (PeopleTree Group on 9-box distribution in growth periods).
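If you want to sanity-check your own grid against that heuristic, the arithmetic is a one-liner per tier. The sketch below uses hypothetical headcounts and the 20% to 25% top-right range cited above; it is a rough health check, not a target to manage people into.

```python
# Illustrative check of grid distribution against the growth-period heuristic
# cited above (roughly 20-25% in the top-right boxes). Headcounts are hypothetical.

def distribution_shares(counts: dict[str, int]) -> dict[str, float]:
    """Return each tier's share of total plotted headcount."""
    total = sum(counts.values())
    return {tier: n / total for tier, n in counts.items()}

counts = {"top_right": 22, "middle": 68, "bottom": 10}  # example headcounts
shares = distribution_shares(counts)

if not 0.20 <= shares["top_right"] <= 0.25:
    print("Top-right share outside the 20-25% heuristic: check for rating inflation")
```

A top-right share well above that band usually signals rating inflation, not an unusually deep bench.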
Step two gathers evidence before debate starts
A bad calibration session is a room full of opinions. A useful one starts with prepared input for each person under review.
Bring a short evidence pack for every employee:
- Recent results tied to the role
- Manager notes with specific examples, not adjectives
- Development history such as stretch work, role changes, or coaching
- Behavioural work signals where available, especially around tool use and workflow habits
Keep the pack short enough that managers will read it.
Step three is calibration, not defence
Most of the review's value sits in this step. Managers need to challenge placements across teams and functions, not just explain their own.
A few rules help:
- Start with obvious cases to establish the scale.
- Ask for evidence when someone uses vague phrases.
- Separate “excellent in role” from “ready for broader complexity”.
- Watch for inflation in high-visibility teams.
- Revisit outliers after the room has calibrated on the middle.
Good calibration meetings are slightly uncomfortable. If nobody's assumptions get tested, you probably just collected approvals.
Step four turns boxes into actions
Every plotted person should leave the review with a next move.
Not a label. A move.
Examples:
- High performance, high potential gets stretch work, succession visibility, and retention attention.
- High performance, lower potential gets specialist recognition and deeper ownership.
- Moderate performance, high potential gets focused coaching and a proving assignment.
- Low performance gets a defined support path with a short review date.
If the meeting ends with a filled grid and no named actions, you've done admin, not talent management.
Common Pitfalls and How to Avoid Them
The 9-box grid fails in predictable ways. Most aren't problems with the model itself. They come from how people use it.

Permanent labels
The grid is a snapshot. Too many companies treat it like a verdict.
A person in the centre today may move quickly with the right manager, clearer scope, or a system change that removes friction. A person in the top-right box may stall next year. If you freeze people in place, the grid becomes a quiet way of rationing opportunity.
Weak communication
Employees don't need a dramatic reveal of “your box”. They do need clearer feedback, better development conversations, and some logic behind what comes next.
When leaders keep the whole process opaque, people fill the gap with rumours. Then even good talent decisions look political.
Confusing output with sustainability
This matters more than many teams realise.
A high performer can still be close to leaving. The traditional grid won't show that. It maps performance and potential, not attrition risk. Yet high-potential employees often leave because of burnout or unclear career paths, and signals such as meeting fatigue or after-hours activity spikes can warn you before the resignation lands (TMI on 9-box blind spots and attrition signals).
That means your “star” may be your biggest retention risk.
How to make the grid less fragile
A few habits improve the quality of the whole process:
- Review more than once a year. A lighter quarterly check catches movement earlier.
- Track behavioural shifts. Fragmented focus, rising evening work, or sudden changes in tool use often tell a story before performance ratings do.
- Check manager patterns. Some managers overrate everyone. Others underrate their team members.
- Keep role context visible. A specialist and a people manager shouldn't be judged by the same growth script.
The best talent review question isn't “who's our top talent?” It's “who are we misreading?”
When you combine the grid with work pattern data, you get something closer to reality. The grid tells you who appears strong, stable, or struggling. Behaviour tells you whether that picture is holding up under the actual conditions of work.
From Snapshot to Continuous Development
The best use of the talent management 9 box grid starts after the meeting.
A single review gives you a map of where people stand now. That's useful, but limited. Real value comes when the grid feeds regular development choices, workload decisions, and succession planning.
Keep the system moving
Use placements to trigger action over the next review cycle.
That can mean pairing a high-potential employee with a mentor, giving a core contributor ownership of a tougher process, or redesigning work for someone whose performance is being dragged down by role mismatch. For specialists, it may mean deepening technical scope instead of pushing them into management. For someone at risk of burnout, it may mean rebalancing work before output drops.
The grid should change as the work changes. That's why ongoing behavioural evidence matters. If learning speed improves, if tool adoption picks up, if focus time stabilises, if overload signs start showing, the review should move with those facts.
A static chart is fine for an annual HR exercise. A living talent system needs fresher signals and better follow-through.
If you want a clearer view of how work happens before your next talent review, WhatPulse gives teams privacy-first visibility into application usage, focus time, tool adoption, and workflow friction. That makes it easier to support succession planning, spot burnout risk early, and bring evidence into performance and potential discussions without relying on guesswork.
Start a free trial