How Should Employees Be Evaluated for Advancement?

Employees should be evaluated for advancement using a combination of sustained performance data, demonstrated potential for higher-level work, and verified skills that match the demands of the target role. Relying on any single factor, whether tenure, a manager’s gut feeling, or one performance review, leads to poor promotion decisions and disengaged teams. A structured approach that separates what someone has done from what they’re capable of doing next gives you the clearest picture of who’s ready to move up.

Measure Performance Over Time, Not Just Recently

A single strong quarter or one high-profile project doesn’t prove someone is ready for a bigger role. The most reliable performance signal is a sustained track record, ideally averaged over three years. That timeframe smooths out lucky breaks and bad stretches, showing you who consistently delivers above their peers.
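The averaging idea can be sketched in a few lines. This is a minimal illustration, not a prescribed formula: the 1.0–5.0 scale, the function name, and the sample scores are all hypothetical.

```python
# Sketch: averaging annual performance scores over a three-year window
# to smooth out lucky breaks and bad stretches. The 1.0-5.0 scale and
# the sample scores are illustrative assumptions.

def sustained_score(annual_scores: list[float], window: int = 3) -> float:
    """Average the most recent `window` years of scores."""
    recent = annual_scores[-window:]
    return round(sum(recent) / len(recent), 2)

# One weak early year falls outside the window and no longer dominates.
print(sustained_score([3.2, 4.6, 4.4, 4.5]))  # → 4.5
```

The same pattern works with quarterly data; the point is that the window is long enough that no single period decides the result.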

Equally important: evaluate both what someone delivers and how they deliver it. An employee who hits every target but alienates colleagues or cuts ethical corners isn’t demonstrating the behaviors you want at the next level. When you blend results and behaviors into one performance picture, weak conduct pulls the overall rating down, which prevents the “brilliant jerk” from advancing on numbers alone. Your top category should genuinely differentiate people who perform at roughly the 75th percentile or above relative to peers in similar roles.
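One way to make “weak conduct pulls the rating down” concrete is a blend with a behavior cap. The weights, the 1.0–5.0 scale, and the cap threshold below are assumptions for illustration, not a standard formula.

```python
# Minimal sketch of a blended performance rating. The 60/40 weights,
# the 1.0-5.0 scale, and the 3.0 cap are hypothetical choices.

def blended_rating(results: float, behaviors: float) -> float:
    """Blend a results score and a behaviors score (both 1.0-5.0)."""
    blend = 0.6 * results + 0.4 * behaviors
    if behaviors < 3.0:
        # Weak conduct caps the overall rating, so strong numbers
        # alone cannot reach the top category.
        blend = min(blend, 3.0)
    return round(blend, 2)

print(blended_rating(5.0, 4.5))  # strong on both dimensions → 4.8
print(blended_rating(5.0, 2.0))  # high results, poor conduct → 3.0
```

The exact mechanics matter less than the property: no score on one dimension can buy back a failing score on the other.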

Separate Performance From Potential

One of the most common mistakes in advancement decisions is treating high performance and high potential as the same thing. They aren’t. A high-performing employee excels in their current role. A high-potential employee shows the capacity to succeed at a meaningfully higher level within a defined timeframe. Those are two different conversations, and conflating them leads to promoting great individual contributors into leadership roles they’re not equipped for.

To assess potential, ask how far and how fast someone could realistically move up. This framing forces you to think beyond today’s output and consider learning agility, adaptability, and appetite for bigger challenges. One useful starting question: does this person consistently perform at a level higher than their peers? Consistently low performers rarely have high potential to advance, regardless of other signals. But someone performing well who also shows curiosity about the broader business, volunteers for stretch assignments, and adapts quickly to unfamiliar problems is showing potential that goes beyond their current job description.

The 9-box grid is a common tool for mapping these two dimensions. It places performance on one axis and potential on the other, creating categories that help leadership teams discuss talent with a shared vocabulary. The grid itself isn’t magic. Its value is that it forces managers to articulate why someone belongs in a particular box using observable evidence rather than vague impressions.
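The grid’s mechanics are simple enough to express directly. The box labels below follow one common naming convention; organizations label the cells differently, so treat these names as illustrative.

```python
# A minimal 9-box placement, assuming performance and potential have
# each already been rated low/medium/high. Labels are illustrative;
# naming conventions vary by organization.

NINE_BOX = {
    ("high", "high"): "star",
    ("high", "medium"): "high performer",
    ("high", "low"): "trusted professional",
    ("medium", "high"): "high potential",
    ("medium", "medium"): "core player",
    ("medium", "low"): "effective",
    ("low", "high"): "inconsistent",
    ("low", "medium"): "inconsistent",
    ("low", "low"): "underperformer",
}

def place(performance: str, potential: str) -> str:
    """Map a (performance, potential) pair to its 9-box category."""
    return NINE_BOX[(performance, potential)]

print(place("high", "high"))    # → star
print(place("medium", "high"))  # → high potential
```

The lookup is trivial by design: the hard work is the evidence-based argument for which row and column a person belongs in, not the placement itself.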

Define the Competencies the Next Role Requires

Before evaluating anyone, you need a clear picture of what the target role actually demands. A competency framework breaks this down into specific knowledge, skills, abilities, and behaviors that can be observed and measured. For individual contributor roles moving into management, core competencies typically include coaching and mentoring, planning and organizing, relationship building, staff management, and teamwork. For senior leadership, the list shifts toward strategic vision, business alignment, ethics and integrity, and decision-making under ambiguity.

Each competency should come with performance statements: concrete descriptions of what the behavior looks like in practice. Instead of rating someone on “communication” in the abstract, you’d assess whether they tailor messages to different audiences, surface disagreements constructively, or keep stakeholders informed without being prompted. These observable indicators make the evaluation defensible and give candidates clear signals about what’s expected at the next level.
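A competency framework of this kind is, structurally, just a mapping from competencies to observable performance statements. The sketch below uses the communication indicators from the text; the second competency’s statements are invented for illustration.

```python
# Sketch of a competency framework entry: each competency carries
# observable performance statements rather than an abstract label.
# The "communication" statements come from the text; the "coaching
# and mentoring" statements are hypothetical examples.

COMPETENCIES = {
    "communication": [
        "Tailors messages to different audiences",
        "Surfaces disagreements constructively",
        "Keeps stakeholders informed without being prompted",
    ],
    "coaching and mentoring": [
        "Gives specific, actionable feedback",
        "Helps junior colleagues set development goals",
    ],
}

def evidence_checklist(competency: str) -> list[str]:
    """Return the observable indicators an evaluator should look for."""
    return COMPETENCIES[competency]

for statement in evidence_checklist("communication"):
    print("-", statement)
```

Storing the framework as data rather than prose makes it easy to share openly and to reuse the same indicators across evaluations.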

The U.S. Office of Personnel Management uses a similar structure in its leadership assessments, evaluating candidates through exercises that test problem solving, conflict management, strategic thinking, decisiveness, flexibility, and interpersonal skills. Whether you’re a federal agency or a 50-person company, the principle is the same: name the capabilities the role requires, then look for evidence that the candidate has demonstrated them.

Assess Leadership Readiness Separately

Technical excellence doesn’t automatically translate into leadership effectiveness. When evaluating someone for a management or senior role, you need to probe a distinct set of qualities: accountability, resilience, decisiveness, and the ability to develop others. These aren’t soft extras. They’re the core of what makes someone effective when they’re no longer just responsible for their own output.

Structured exercises can help surface these qualities. Individual simulations test how a candidate approaches unfamiliar problems and communicates their reasoning. Group exercises reveal how they navigate conflict, build consensus, and influence without authority. Strategic analysis scenarios show whether someone can zoom out from daily operations and think about longer-term direction. If formal assessment centers feel too elaborate for your organization, you can approximate these signals by evaluating how candidates have handled cross-functional projects, mentored junior colleagues, or navigated ambiguous situations where no one gave them a playbook.

Personality factors also play a role, though they work best as supplementary data rather than decision drivers. Traits like emotional stability, conscientiousness, and openness to new experiences correlate with leadership effectiveness across a wide range of industries. The key is using personality insights to support development planning, not as a pass/fail gate.

Prioritize Skills Over Titles and Tenure

Many organizations still base advancement decisions on job titles, degree requirements, and time in role. These proxies made more sense when jobs changed slowly, but they increasingly miss the mark. Employees often possess skills well beyond what their current role requires, and traditional systems fail to capture those capabilities, limiting internal mobility.

A skills-based approach evaluates employees on what they can actually do rather than the credentials on their resume. This starts with mapping skills across teams, departments, and projects so you have visibility into the full range of capabilities your workforce holds. When a role opens up, you match it against that skills inventory rather than scanning for whoever has the right title or the most seniority.
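Matching a role against a skills inventory can be sketched as simple set overlap. The employees, skills, and scoring rule below are hypothetical; real systems typically also weight skill criticality and proficiency level.

```python
# Sketch of skills-based matching: compare a role's required skills
# against an employee skills inventory instead of titles or tenure.
# Names, skills, and the scoring rule are hypothetical.

def match_score(required: set[str], employee_skills: set[str]) -> float:
    """Fraction of required skills the employee already holds."""
    if not required:
        return 1.0
    return len(required & employee_skills) / len(required)

inventory = {
    "Ana": {"sql", "data modeling", "stakeholder management"},
    "Ben": {"sql", "python"},
}
role_needs = {"sql", "data modeling", "mentoring"}

ranked = sorted(inventory,
                key=lambda name: match_score(role_needs, inventory[name]),
                reverse=True)
print(ranked)  # Ana covers more of the role's needs than Ben
```

The gaps are as useful as the matches: the required skills an employee lacks become their targeted development plan.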

This model also supports continuous development. When employees can see which skills they need for roles they’re interested in, reskilling and upskilling become targeted rather than generic. And when you staff projects based on skills rather than rigid job descriptions, you give people opportunities to demonstrate readiness for advancement in real work, not just in annual review conversations.

Use Calibration to Ensure Fairness

Even with clear competencies and good data, individual managers apply different standards. Some grade tough, others grade easy, and the result is that equally strong employees get rated differently depending on who their boss is. Calibration sessions fix this by bringing managers together to review and compare evaluations before final decisions are made.

The process works like this: managers prepare preliminary performance appraisals with proposed ratings. Then a group of managers, typically at the same organizational level, meets to discuss and compare those ratings across teams. The goal is to ensure the same yardstick is applied to everyone, neutralizing the effect of inconsistent grading. As an added safeguard, all appraisals and ratings should be reviewed and approved by the evaluating manager’s own boss before any action is taken.

Distribution guidelines can anchor these conversations. A common framework recommends that roughly 50% to 60% of employees fall into the solid performer category, 20% to 30% rate as superior, 5% to 10% earn the highest distinction, and the remaining percentages cover employees who need improvement. These aren’t rigid quotas. They’re reference points that prevent rating inflation, where everyone gets top marks and the ratings lose all meaning. When advancement decisions rest partly on performance ratings, those ratings need to actually differentiate people.
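A calibration session can use a quick sanity check against those bands. The sketch below flags categories whose share falls outside the guideline ranges from the text; the category names and sample ratings are illustrative, and the bands are reference points, not quotas.

```python
# Sketch of a calibration sanity check: compare a team's rating
# distribution against the guideline bands described in the text.
# Category names and the sample ratings are hypothetical.

from collections import Counter

GUIDELINES = {           # (min share, max share)
    "highest": (0.05, 0.10),
    "superior": (0.20, 0.30),
    "solid": (0.50, 0.60),
}

def out_of_band(ratings: list[str]) -> dict[str, float]:
    """Return categories whose share falls outside the guideline band."""
    total = len(ratings)
    shares = {cat: n / total for cat, n in Counter(ratings).items()}
    flagged = {}
    for cat, (lo, hi) in GUIDELINES.items():
        share = shares.get(cat, 0.0)
        if not lo <= share <= hi:
            flagged[cat] = share
    return flagged

ratings = ["solid"] * 5 + ["superior"] * 4 + ["highest"] * 1
print(out_of_band(ratings))  # "superior" at 40% exceeds its 20-30% band
```

A flagged category isn’t automatically wrong; it’s a prompt for the group to ask whether the ratings reflect real differentiation or one manager’s grading curve.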

Build Transparency Into the Process

The best evaluation framework in the world fails if employees don’t understand how it works. People who know what’s being measured, how decisions get made, and what they need to develop are far more likely to engage with the process and trust the outcome, even when they aren’t the ones selected.

Share the competency model openly. Let employees see the performance statements that define each level. Give them access to the skills map so they can identify gaps and pursue development on their own. When promotion decisions are made, explain the criteria that drove the choice. You don’t need to share every detail of the calibration discussion, but the general framework should never be a mystery. Advancement systems that feel opaque breed cynicism, and cynicism drives your best people out the door.