How to Measure Competency in the Workplace

Measuring competency means defining what good performance looks like in a specific role, then using structured methods to evaluate how closely someone matches that standard. The process combines a clear framework of what to measure, a scale for rating proficiency, and one or more assessment methods that generate reliable evidence. Here’s how to put all of that together.

Build a Competency Framework First

Before you can measure anything, you need to know what you’re measuring. A competency framework breaks a role down into the knowledge, skills, and abilities (often called KSAs) required to perform it well. Knowledge is the information someone has learned, like understanding a programming language or knowing employment law. Skills are the practiced proficiencies someone can demonstrate, like writing code or conducting interviews. Abilities are the demonstrated capacity to apply knowledge and skills when the situation calls for it, like debugging a production outage under time pressure or mediating a workplace conflict.

Start by identifying the five to ten competencies most critical to a given role. Pull from job descriptions, performance data, and input from people currently doing the work. Each competency should be specific enough to observe. “Communication” is a start, but it becomes measurable only when you define what communication looks like in practice: writing clear reports, listening without interrupting, sharing information with the right people at the right time.

Define Behavioral Indicators for Each Level

A competency name alone doesn’t tell you whether someone is performing well or poorly. You need behavioral indicators: concrete, observable actions that distinguish one proficiency level from another. These turn a vague label into something two different managers could rate the same way.

For a competency like “communication,” low-level performance might look like reports that are unclear, grammatically poor, or inappropriate for the audience. Someone who interrupts frequently, appears distracted during conversations, or withholds information from colleagues is showing weak behavioral indicators. High-level performance, by contrast, shows up as written and verbal messages that are consistently clear, persuasive, and tailored to the audience. A top performer actively listens, synthesizes others’ ideas, and shares accurate information with the right people in a timely format.

Write these descriptions for every competency in your framework, across at least three or four levels. The key test: could a reviewer read the description and match it to what they actually see an employee doing? If yes, the indicator is specific enough. If it could mean almost anything, rewrite it.
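A framework with behavioral indicators is, at its core, structured data: a set of competencies, each with an observable description per level. The sketch below shows one minimal way to hold that data; the competency names, the three-level scale, and the indicator wording are illustrative assumptions, not a standard taxonomy.

```python
# Minimal sketch of a competency framework as plain data.
# Competencies, levels, and indicator text are illustrative examples.

FRAMEWORK = {
    "communication": {
        1: "Reports are unclear or wrong for the audience; interrupts; withholds information.",
        2: "Messages are usually clear; listens but rarely synthesizes others' ideas.",
        3: "Messages are consistently clear, persuasive, and tailored; actively listens "
           "and shares accurate information with the right people in a timely format.",
    },
    "project management": {
        1: "Needs step-by-step direction; misses dependencies between tasks.",
        2: "Plans straightforward projects; escalates when scope shifts.",
        3: "Plans and reprioritizes complex projects; owns outcomes end to end.",
    },
}

def indicator(competency: str, level: int) -> str:
    """Return the observable behavior that anchors a given rating."""
    return FRAMEWORK[competency][level]
```

Keeping indicators as data rather than prose scattered across documents makes it easy to show raters the exact anchor text for the level they are about to assign.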

Choose a Proficiency Scale

You need a consistent scale so that ratings mean the same thing across the organization. One well-known model, developed by researchers Stuart and Hubert Dreyfus, describes five stages of skill acquisition that work well as a proficiency ladder.

  • Novice: Follows rules and checklists. Needs explicit instructions for each step and doesn’t yet recognize how context changes the approach.
  • Advanced beginner: Starts to recognize patterns from real experience. Can handle straightforward situations but still relies heavily on guidelines.
  • Competent: Can prioritize what matters in a complex situation. Makes deliberate plans, filters out irrelevant information, and takes ownership of outcomes.
  • Proficient: Intuitively reads situations and sees what needs to happen, though still thinks analytically about how to get there. Adjusts approach fluidly based on experience.
  • Expert: Sees both the goal and the path to it immediately. Draws on a deep repertoire of past situations to respond intuitively, making refined distinctions that less experienced performers miss.

You don’t have to use all five levels. Many organizations simplify to three or four tiers (for example, developing, proficient, advanced, and expert) to make the scale easier to apply. What matters is that each level has a clear description tied to the behavioral indicators you’ve already written, so raters aren’t guessing.
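For scoring and aggregation, the scale needs a numeric form. A hedged sketch: the five Dreyfus stages as an ordered list, plus one illustrative collapse to four tiers. The tier names and groupings are assumptions for this example, not part of the Dreyfus model itself.

```python
# The five Dreyfus stages as an ordered scale.
DREYFUS = ["novice", "advanced beginner", "competent", "proficient", "expert"]

# One possible simplification to four tiers (an assumption; adjust to taste).
FOUR_TIER = {
    "novice": "developing",
    "advanced beginner": "developing",
    "competent": "proficient",
    "proficient": "advanced",
    "expert": "expert",
}

def score(stage: str) -> int:
    """Numeric rating (1-5) so stages can be averaged and compared."""
    return DREYFUS.index(stage) + 1
```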

Select Your Assessment Methods

No single method captures everything. The best competency measurements combine two or more approaches to reduce blind spots.

360-Degree Feedback

In a 360-degree review, multiple people who work with the employee, including peers, direct reports, supervisors, and sometimes clients, rate the person against the same competency framework. This gives a more rounded picture than a single manager’s opinion. It’s especially useful for competencies like collaboration, leadership, and communication, where the employee’s behavior varies depending on who they’re interacting with. The trade-off is that it takes time to collect and synthesize, and raters need clear behavioral anchors so the feedback is specific rather than just “they’re great” or “needs improvement.”

Work Simulations and Skills Tests

Practical simulations put someone in a realistic scenario and let you observe their performance directly. A software developer might debug a broken codebase. A manager might handle a simulated employee conflict. A customer service representative might work through a difficult client call. These tests reveal how someone applies their knowledge under realistic conditions, including how they solve problems under pressure and where their skill gaps are. Assessment centers take this further by combining multiple exercises (interviews, group tasks, simulations, and sometimes psychometric tests) into a structured program designed to evaluate several competencies at once.

Self-Assessment

Asking employees to rate themselves against the same framework creates a useful comparison point. When self-ratings diverge sharply from manager or peer ratings, it highlights a development conversation worth having. Self-assessments are easy to administer but unreliable on their own, since people tend to overestimate strengths and underestimate gaps.

Performance Data and Work Output Review

Sometimes the best evidence is the work itself. Reviewing actual deliverables, project outcomes, client satisfaction scores, or error rates gives you an objective measure of competencies that show up in tangible results. Pair this with qualitative methods to understand not just what was achieved but how.

Run the Assessment

With your framework, scale, and methods chosen, the process itself follows a predictable sequence. First, communicate to everyone involved which competencies are being measured, what the proficiency levels mean, and how the results will be used. People engage more openly and respond more honestly when they understand that the purpose is development, not punishment.

Train your raters. Managers and peers giving feedback need to understand the behavioral indicators and how to apply the proficiency scale consistently. Without calibration, one manager’s “proficient” is another’s “competent,” and the data becomes unreliable. A short calibration session where raters score the same sample scenario and discuss their reasoning is one of the simplest ways to improve consistency.

Collect assessments across your chosen methods within a defined window, typically one to four weeks. For 360-degree feedback, give respondents enough time to provide thoughtful input but set a clear deadline. For simulations, schedule them so the employee isn’t rushed or caught off guard.

Score and Interpret the Results

Once data is collected, map each employee’s results against the proficiency scale for every competency. Look for convergence: if three different sources rate someone as competent in project management, that’s a reliable signal. If a manager rates someone as expert but peers rate them as advanced beginner, dig into that gap before drawing conclusions.

Aggregate results across a team or department to spot patterns. If most of your team scores low in data analysis, that’s an organizational skill gap worth addressing with training. If one person scores significantly higher than peers across multiple competencies, that’s a potential candidate for leadership development or mentorship roles.
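Team-level aggregation can be sketched the same way: average each competency across the team and flag anything below a threshold as an organizational gap. The names, ratings, and the threshold of 3 ("competent" on a five-point scale) are illustrative assumptions.

```python
from statistics import mean

# Illustrative team ratings on a 1-5 scale.
team = {
    "ana":  {"data analysis": 2, "communication": 4},
    "ben":  {"data analysis": 2, "communication": 3},
    "chen": {"data analysis": 3, "communication": 4},
}

def team_gaps(ratings: dict, threshold: float = 3.0) -> list[str]:
    """Competencies whose team average falls below the threshold."""
    competencies = next(iter(ratings.values())).keys()
    return [c for c in competencies
            if mean(person[c] for person in ratings.values()) < threshold]

team_gaps(team)  # ["data analysis"]: a team-wide training need
```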

Present individual results in a development-focused conversation. Show the employee where they fall on the proficiency scale, what specific behaviors earned that rating, and what moving to the next level would look like in practice. Tie the results to concrete next steps: training, stretch assignments, coaching, or project opportunities that build the competencies they need.

Use Technology to Track Progress Over Time

Competency measurement isn’t a one-time event. The real value comes from tracking how people grow over months and years. Many organizations now use AI-driven platforms that map skills in real time, matching employee capabilities to internal opportunities and generating personalized development recommendations based on current skill levels, career goals, and business needs. These tools can also surface patterns that manual tracking misses, like which development programs actually move people up the proficiency scale and which don’t.

Even without sophisticated software, a simple spreadsheet that tracks competency ratings over time, linked to the development actions taken between assessments, gives you a clear picture of whether your measurement process is driving real growth. Reassess at regular intervals, typically every six to twelve months, so the data stays current and employees can see their progress.
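The spreadsheet approach reduces to a small amount of structure: ratings per assessment date, plus a delta showing whether the actions taken between assessments moved the needle. The dates, competencies, and values below are illustrative.

```python
# Minimal progress tracker: ratings (1-5) keyed by assessment date.
history = {
    "2024-01": {"communication": 2, "data analysis": 2},
    "2024-07": {"communication": 3, "data analysis": 2},
    "2025-01": {"communication": 3, "data analysis": 4},
}

def progress(history: dict) -> dict[str, int]:
    """Change from the first to the latest assessment, per competency."""
    dates = sorted(history)
    first, latest = history[dates[0]], history[dates[-1]]
    return {c: latest[c] - first[c] for c in first}

progress(history)  # {"communication": 1, "data analysis": 2}
```

A flat delta of zero across two or three reassessment cycles, despite development actions in between, is the signal that the measurement-to-development loop isn't working.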